SD ID: LOFAR data
Organisations & Contacts: ASTRON, NLeSC, SURFsara, CWL Project Pythonic.nl, INAF
OVERVIEW: Existing LOFAR data will be made readily available to a much larger and broader audience, enabling novel scientific breakthroughs. Important discoveries are regularly made by re-analysing existing astronomy data. Data integration and data interoperability allow users to exploit the sensitivity of multiple instruments, and are the driving force behind new discoveries. The open science enabled by this project, in combination with the EOSC ecosystem, will be a catalyst to make this happen with LOFAR (Low-Frequency Array) data as well.
SCIENTIFIC OBJECTIVES OF THE DEMONSTRATOR:
- Ease the process of locating, accessing, and extracting science from the LOFAR archive without requiring expertise in data retrieval and data analysis tools.
- Enable the creation of new scientific results based on archived data products.
- Provide large-scale compute resources.
MAIN ACHIEVEMENTS:
- Migration of processing workflows to EOSC infrastructure;
- Registration of LOFAR data in a FAIR-principle based data repository;
- Development of a pilot processing portal, allowing users to initiate workflows to analyse data from the LOFAR archive.
IMPACT: The LOFAR archive is by its very nature a distributed archive, with connected storage and computing resources at three sites in three countries. This data topology makes it a natural model for next-generation observatories such as the SKA, and a natural test-bed for enabling science on such infrastructures.
These challenges are common to many other data-intensive domains as well. NLeSC would like to disseminate and valorize the results of this project in other scientific domains and research infrastructures, in close cooperation with SURFsara.
RECOMMENDATIONS FOR THE IMPLEMENTATION:
- Support high throughput applications through integration of distributed large-scale data storage facilities with high throughput processing clusters.
- Support portable workflows through containerized deployment and standardized definitions such as the Common Workflow Language.
- Provide standardized solutions for services; where overlapping solutions exist, provide guidance for communities to decide on the most appropriate solution to integrate with.
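To illustrate the portable-workflow recommendation, the sketch below shows a minimal containerized CWL tool description. The tool name, container image, and file patterns are hypothetical placeholders, not taken from the actual LOFAR pipelines:

```yaml
#!/usr/bin/env cwl-runner
cwlVersion: v1.2
class: CommandLineTool
label: Hypothetical LOFAR calibration step (illustrative only)
# "calibrate" is an assumed command name, not a real LOFAR pipeline binary
baseCommand: [calibrate]
requirements:
  DockerRequirement:
    # placeholder container image; a real deployment would pin a versioned image
    dockerPull: example.org/lofar-tools:latest
inputs:
  measurement_set:
    type: Directory
    inputBinding:
      position: 1
outputs:
  calibrated_ms:
    type: Directory
    outputBinding:
      glob: "*.calibrated"
```

Because the container image is declared in the tool description itself, the same workflow definition can run unchanged on a laptop, an EOSC cluster, or a high-throughput processing site, which is the portability the recommendation aims for.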