The search box supports the following query syntax (example queries follow the list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
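A few illustrative queries, using hypothetical search terms chosen only to demonstrate the operators above (they are not drawn from this result list):
  • ocean* : matches terms starting with ocean, e.g. oceanography
  • "data repository" : matches the exact phrase
  • +climate +ocean : both terms must occur (same as the default AND)
  • climate | ocean : either term may occur
  • climate -model : climate without model
  • (climate | ocean) +data : the parentheses group the OR before the AND
  • oceanografy~1 : tolerates one character edit (fuzziness)
  • "data repository"~2 : allows up to two positions of separation between the phrase terms (slop)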
Found 112 result(s)
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
The program "Humanist Virtual Libraries" distributes heritage documents and pursues research associating skills in human sciences and computer science. It aggregates several types of digital documents: A selection of facsimiles of Renaissance works digitized in the Central Region and in partner institutions, the Epistemon Textual Database, which offers digital editions in XML-TEI, and Transcripts or analyzes of notarial minutes and manuscripts
The UTM Data Centre is responsible for managing spatial data acquired during oceanographic cruises on board CSIC research vessels (RV Sarmiento de Gamboa, RV García del Cid) and RV Hespérides. The aim is, on the one hand, to make known which data exist and where, how and when they were acquired, and on the other hand, to provide access to as much of the interoperable data as possible, following the FAIR principles, so that they can be used and reused. For this purpose, the UTM maintains a national-level Spatial Data Infrastructure that consists of several services:
  • Oceanographic Cruise and Data Catalogue: metadata from more than 600 cruises carried out since 1991, with links to the documentation associated with each cruise, navigation maps and datasets
  • Geoportal: a geospatial data mapping interface
  • Underway Plot & QC: visualization, quality control and conversion to standard formats of meteorological data and of surface-water temperature and salinity
At the international level, the UTM is a National Oceanographic Data Centre (NODC) of the Distributed European Marine Data Infrastructure SeaDataNet, to which it provides metadata published in the Cruise Summary Report catalogue and in the Common Data Index catalogue, as well as public data to be shared.
PDBe is the European resource for the collection, organisation and dissemination of data on biological macromolecular structures. In collaboration with the other worldwide Protein Data Bank (wwPDB) partners - the Research Collaboratory for Structural Bioinformatics (RCSB) and BioMagResBank (BMRB) in the USA and the Protein Data Bank of Japan (PDBj) - we work to collate, maintain and provide access to the global repository of macromolecular structure data. We develop tools, services and resources to make structure-related data more accessible to the biomedical community.
LINDAT/CLARIN is designed as the Czech “node” of CLARIN ERIC (Common Language Resources and Technology Infrastructure). It also supports the goals of the META-NET language technology network. Both networks aim at the collection, annotation, development and free sharing of language data and basic technologies among institutions and individuals, in science and in all other types of research. The CLARIN ERIC infrastructure project is more focused on the humanities, while META-NET aims at the development of language technologies and applications. The data stored in the repository are already being used in scientific publications in the Czech Republic. In 2019, LINDAT/CLARIAH-CZ was established as a unification of two research infrastructures, LINDAT/CLARIN and DARIAH-CZ.
The Tromsø Repository of Language and Linguistics (TROLLing) is a FAIR-aligned repository of linguistic data and statistical code. The archive is open access, which means that all information is available to everyone. All data are accompanied by searchable metadata that identify the researchers, the languages and linguistic phenomena involved, the statistical methods applied, and scholarly publications based on the data (where relevant). Linguists worldwide are invited to deposit data and statistical code used in their linguistic research. TROLLing is a special collection within DataverseNO (http://doi.org/10.17616/R3TV17) and a C-centre within CLARIN (Common Language Resources and Technology Infrastructure, a networked federation of European data repositories; http://www.clarin.eu/), and it is harvested by CLARIN's Virtual Language Observatory (VLO; https://vlo.clarin.eu/).
Europeana is the trusted source of cultural heritage brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It is a real piece of teamwork. Ideas and inspiration can be found within the millions of items on Europeana. These objects include:
  • Images: paintings, drawings, maps, photos and pictures of museum objects
  • Texts: books, newspapers, letters, diaries and archival papers
  • Sounds: music and spoken word from cylinders, tapes, discs and radio broadcasts
  • Videos: films, newsreels and TV broadcasts
All texts are CC BY-SA; images and other media are licensed individually.
Welcome to the York Research Database, where you can find all our research staff, projects, publications and organisational units, and explore the connections between them all. The University of York is a member of the Russell Group of research-intensive universities.
The aim of the Freshwater Biodiversity Data Portal is to integrate and provide open and free access to freshwater biodiversity data from all possible sources. To this end, we offer tools and support for scientists interested in documenting and advertising their dataset in the metadatabase, in submitting or publishing their primary biodiversity data (i.e. species occurrence records), or in having their dataset linked to the Freshwater Biodiversity Data Portal. This information portal serves as a data discovery tool and allows scientists and managers to complement, integrate, and analyse distribution data to elucidate patterns in freshwater biodiversity. The Freshwater Biodiversity Data Portal was initiated under the EU FP7 BioFresh project and continued through the Freshwater Information Platform (http://www.freshwaterplatform.eu). To ensure the broad availability of biodiversity data and integration into the global GBIF index, we strongly encourage scientists to submit any primary biodiversity data published in a scientific paper to national nodes of GBIF or to thematic initiatives such as the Freshwater Biodiversity Data Portal.
FRED is an online database consisting of hundreds of thousands of economic data time series from scores of national, international, public, and private sources. FRED, created and maintained by the Research Department at the Federal Reserve Bank of St. Louis, goes far beyond simply providing data: It combines data with a powerful mix of tools that help the user understand, interact with, display, and disseminate the data. In essence, FRED helps users tell their data stories.
Modern signal processing and machine learning methods have exciting potential to generate new knowledge that will impact both physiological understanding and clinical care. Access to data, particularly detailed clinical data, is often a bottleneck to progress. The overarching goal of PhysioNet is to accelerate research progress by freely providing rich archives of clinical and physiological data for analysis. The PhysioNet resource has three closely interdependent components: an extensive archive ("PhysioBank"), a large and growing library of software ("PhysioToolkit"), and a collection of popular tutorials and educational materials.
The World Wide Molecular Matrix (WWMM) is an electronic repository for unpublished chemical data. WWMM is an open collection of information on small molecules. The "Matrix" in WWMM is influenced by William Gibson's vision of a cyberinfrastructure where all knowledge is accessible. The WWMM is an experiment to see how far this can be taken for chemical compounds. Although much of the information for a given compound has been openly published, very little is available in open electronic collections. The WWMM is aimed at catalysing this approach for chemistry, and the current collection is made available under the Budapest Open Access Initiative (http://www.budapestopenaccessinitiative.org/read).
The eAtlas is a website, mapping system and set of data visualisation tools for presenting research data in an accessible form that promotes greater use of this information. The eAtlas will serve as the primary data and knowledge repository for all NERP Tropical Ecosystems Hub projects, which focus on the Great Barrier Reef, the Wet Tropics rainforest and the Torres Strait. The eAtlas will capture and record research outcomes and make them available to research users in a timely, readily accessible manner. It will host metadata records and provide an enduring repository for raw data. It will also develop and host web visualisations to view information using a simple and intuitive interface. This will assist scientists with data discovery and allow environmental managers to access and investigate research data.
Research data management is a general term covering how you organize, structure, store, and care for the information used or generated during a research project. University of Oxford policy mandates the preservation of research data and records for a minimum of three years after publication. ORA-Data is the University of Oxford's research data archive (https://www.re3data.org/repository/r3d100011230): a place to securely hold digital research materials (data) of any sort, along with documentation that helps explain what they are and how to use them (metadata). The application of consistent archiving policies, preservation techniques and discovery tools further increases the long-term availability and usefulness of the data; this is the main difference between storage and archiving of data.
Web Soil Survey (WSS) provides soil data and information produced by the National Cooperative Soil Survey. It is operated by the USDA Natural Resources Conservation Service (NRCS) and provides access to the largest natural resource information system in the world. NRCS has soil maps and data available online for more than 95 percent of the nation’s counties and anticipates having 100 percent in the near future. The site is updated and maintained online as the single authoritative source of soil survey information.
data.eaufrance.fr is an information system on water (EIS); it is registered in the National Plan for Water Data (NELS) and will be integrated into the web-based data dissemination scheme. It also complies with European Directive 2003/98/EC of November 2003 on the re-use of public sector information. data.eaufrance.fr is a public data distribution platform open for reuse under the terms and conditions of the Licence Ouverte / Open Licence. Opendata SIE is coordinated by the National Office for Water and Aquatic Environments (Onema); it was developed by BRGM and the International Office for Water (IOW).
The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics is one of the National Aeronautics and Space Administration (NASA) Earth Observing System Data and Information System (EOSDIS) data centers managed by the Earth Science Data and Information System (ESDIS) Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological, geological, and chemical components of Earth's environment.
IRIS offers free and open access to a comprehensive store of raw geophysical time-series data collected from a large variety of sensors, including permanent and temporary seismometers, tilt and strain meters, infrasound sensors, temperature and atmospheric pressure sensors, and gravimeters. These data are provided courtesy of a vast array of US and international scientific networks and support basic research aimed at imaging the Earth's interior.
Data@Lincoln is the research data repository for Lincoln University (New Zealand). Datasets may stand alone, or may consist of appendices to theses, or figures or other supplementary material to journal articles and other publications.
ReDATA is the research data repository for the University of Arizona and a sister repository to the UA Campus Repository (which is intended for document-based materials). The UA Research Data Repository (ReDATA) serves as the institutional repository for non-traditional scholarly outputs resulting from research activities by University of Arizona researchers. Depositing research materials (datasets, code, images, videos, etc.) associated with published articles and/or completed grants and research projects into ReDATA helps UA researchers ensure compliance with funder and journal data sharing policies as well as University data retention policies. ReDATA is designed for materials intended for public availability.
The demand for high-value environmental data and information has dramatically increased in recent years. To improve its ability to meet that demand, NOAA's three former data centers (the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which included the National Coastal Data Development Center) have merged into the National Centers for Environmental Information (NCEI). NCEI, which hosts the World Data Service for Oceanography, is a national environmental data center operated by the National Oceanic and Atmospheric Administration (NOAA) of the U.S. Department of Commerce. NCEI is responsible for hosting and providing access to one of the most significant archives on Earth, with comprehensive oceanic, atmospheric, and geophysical data. From the depths of the ocean to the surface of the sun and from million-year-old sediment records to near real-time satellite images, NCEI is the nation's leading authority for environmental information. The primary mission of the World Data Service for Oceanography is to ensure that global oceanographic datasets collected at great cost are maintained in a permanent archive that is easily and openly accessible to the world science community and to other users.
As noted above, the National Oceanographic Data Center has merged into the National Centers for Environmental Information (NCEI). The National Oceanographic Data Center includes the National Coastal Data Development Center (NCDDC) and the NOAA Central Library, which are integrated to provide access to the world's most comprehensive sources of marine environmental data and information. NODC maintains and updates a national ocean archive with environmental data acquired from domestic and foreign activities and produces products and research from these data that help monitor global environmental changes. These data include physical, biological and chemical measurements derived from in situ oceanographic observations, satellite remote sensing of the oceans, and ocean model simulations.