Filter

Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
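For illustration, the operators above combine into query strings as sketched below; the terms and the small Python wrapper are hypothetical examples, not queries taken from this result set or from any registry API.

```python
# Minimal sketch: example query strings using the search operators listed above.
# All terms are illustrative; only the operator syntax follows the rules above.

example_queries = [
    'ocean*',                      # * wildcard: ocean, oceans, oceanography, ...
    '"gravity field"',             # "..." matches the exact phrase
    'marine + data',               # + is AND (the default between terms)
    'cotton | genomics',           # | is OR
    'trauma - registry',           # - excludes results containing "registry"
    '(marine | ocean) + archive',  # ( ) group terms and set priority
    'hydrografy~2',                # ~N after a word: fuzzy match, edit distance 2
    '"core repository"~3',         # ~N after a phrase: slop of up to 3 words
]

for query in example_queries:
    print(query)
```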
Found 71 result(s)
The CCHDO provides access to standard, well-described datasets from reference-quality repeat hydrography expeditions. It curates high-quality, full-water-column Conductivity-Temperature-Depth (CTD), hydrographic, carbon, and tracer data from over 2,500 cruises from ~30 countries. It is the official data center for CTD and water sample profile data from the Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP), as well as for WOCE, US Hydro, and other high-quality repeat hydrography lines (e.g. SOCCOM, HOT, BATS, WOCE, CARINA).
The Marine Data Archive (MDA) is an online repository specifically developed to independently archive data files in a fully documented manner. The MDA can serve individuals, consortia, working groups and institutes to manage data files and file versions for a specific context (project, report, analysis, monitoring campaign), as a personal or institutional archive or back-up system and as an open repository for data publication.
More than a quarter of a million people — one in 10 NSW men and women aged over 45 — have been recruited to our 45 and Up Study, the largest ongoing study of healthy ageing in the Southern Hemisphere. The baseline information collected from all of our participants is available in the Study’s Data Book. This information, which researchers use as the basis for their analyses, contains information on key variables such as height, weight, smoking status, family history of disease and levels of physical activity. By following such a large group of people over the long term, we are developing a world-class research resource that can be used to boost our understanding of how Australians are ageing. This will answer important health and quality-of-life questions and help manage and prevent illness through improved knowledge of conditions such as cancer, heart disease, depression, obesity and diabetes.
The GHO data repository is WHO's gateway to health-related statistics for its 194 Member States. It provides access to over 1000 indicators on priority health topics including mortality and burden of disease, the Millennium Development Goals (child nutrition, child health, maternal and reproductive health, immunization, HIV/AIDS, tuberculosis, malaria, neglected diseases, water and sanitation), noncommunicable diseases and risk factors, epidemic-prone diseases, health systems, environmental health, violence and injuries, and equity, among others. In addition, the GHO provides online access to WHO's annual summary of health-related data for its Member States: the World Health Statistics.
The Lamont-Doherty Core Repository (LDCR) contains one of the world’s most unique and important collections of scientific samples from the deep sea. Sediment cores from every major ocean and sea are archived at the Core Repository. The collection contains approximately 72,000 meters of core composed of 9,700 piston cores; 7,000 trigger weight cores; and 2,000 other cores such as box, kasten, and large-diameter gravity cores. We also hold 4,000 dredge and grab samples, including a large collection of manganese nodules, many of which were recovered by submersibles. Over 100,000 residues are stored and are available for sampling where core material is expended. In addition to physical samples, a database of the Lamont core collection has been maintained for nearly 50 years and contains information on the geographic location of each collection site, core length, mineralogy and paleontology, lithology, and structure, and, more recently, the full text of megascopic descriptions.
GIGA (German Institute of Global and Area Studies) researchers generate a large amount of qualitative and quantitative research data. On this page you will find descriptions of these research data ("metadata") as well as information about the available access options. To facilitate reuse and to enhance research transparency, a large part of the GIGA research data is published in datorium, a repository hosted by the GESIS Leibniz Institute for the Social Sciences: https://www.re3data.org/repository/r3d100011062. Our objective is to offer free access to as much of our data as possible, to guarantee the possibility of its citation, and to secure its safe storage. Metadata of research data that cannot be published open access due to its sensitivity are also shown on this page.
CottonGen is a new cotton community genomics, genetics and breeding database being developed to enable basic, translational and applied research in cotton. It is being built using the open-source Tripal database infrastructure. CottonGen consolidates and expands the data from CottonDB and the Cotton Marker Database, providing enhanced tools for easy querying, visualizing and downloading research data.
CSDB version 1 merges the Bacterial (BCSDB) and Plant&Fungal (PFCSDB) Carbohydrate Structure Databases. It aims to provide structural, bibliographic, taxonomic, NMR-spectroscopic and other information on glycan and glycoconjugate structures of prokaryotic, plant and fungal origin. The key points of this service are:
  • High coverage. Coverage for bacteria and archaea (both up to 2016) is above 80%; similar coverage for plants and fungi is expected in the future. The database is close to complete up to 1998 for plants and up to 2006 for fungi.
  • Data quality. High data quality is achieved by manual curation from original publications, assisted by multiple automatic procedures for error control. Errors present in publications are reported and corrected when possible, and data from other databases are verified on import.
  • Detailed annotations. Structural data are supplied with extended bibliography, assigned NMR spectra, taxon identification including strains and serogroups, and other information if available in the original publication.
  • Services. CSDB serves as a platform for a number of computational services tuned for glycobiology, such as NMR simulation, automated structure elucidation, taxon clustering, 3D molecular modeling, and statistical processing of data.
  • Integration. CSDB is cross-linked to other glycoinformatics projects and NCBI databases. The data are exportable in various formats, including the most widespread encoding schemes and records using the GlycoRDF ontology.
  • Free web access. Users can access the database for free via its web interface (see Help).
The main source of data is retrospective literature analysis. About 20% of the data were imported from CCSD (Carbbank, University of Georgia, Athens; structures published before 1996) with subsequent manual curation and approval. The current coverage is displayed in red at the top of the left menu. The time lag between the publication of new data and their deposition into CSDB is about one year. In the scope of bacterial carbohydrates, CSDB covers nearly all structures of this origin published up to 2016. "Prokaryotic, plant and fungal" means that a glycan was found in organisms belonging to these taxonomic domains or was obtained by modification of those found in them. "Carbohydrate" means a structure composed of any residues linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds in which at least one residue is a sugar or its derivative.
DIAMM (the Digital Image Archive of Medieval Music) is a leading resource for the study of medieval manuscripts. We present images and metadata for thousands of manuscripts on this website. We also provide a home for scholarly resources and editions, undertake digital restoration of damaged manuscripts and documents, publish high-quality facsimiles, and offer our expertise as consultants.
The Rolling Deck to Repository (R2R) Program provides a comprehensive shore-side data management program for a suite of routine underway geophysical, water column, and atmospheric sensor data collected on vessels of the academic research fleet. R2R also ensures data are submitted to the NOAA National Centers for Environmental Information for long-term preservation.
"Seanoe (SEA scieNtific Open data Edition) is a publisher of scientific data in the field of marine sciences. It is operated by Ifremer (http://wwz.ifremer.fr/). Data published by SEANOE are available free. They can be used in accordance with the terms of the Creative Commons license selected by the author of data. Seance contributes to Open Access / Open Science movement for a free access for everyone to all scientific data financed by public funds for the benefit of research. An embargo limited to 2 years on a set of data is possible; for example to restrict access to data of a publication under scientific review. Each data set published by SEANOE has a DOI which enables it to be cited in a publication in a reliable and sustainable way. The long-term preservation of data filed in SEANOE is ensured by Ifremer infrastructure. "
The National Trauma Data Bank® (NTDB) is the largest aggregation of trauma registry data ever assembled. The goal of the NTDB is to inform the medical community, the public, and decision makers about a wide variety of issues that characterize the current state of care for injured persons. Registry data collected in the NTDB are compiled annually and disseminated in the form of hospital benchmark reports, data quality reports, and research data sets that can be used by researchers. To gain access to NTDB data, researchers must submit requests through our online application process.
AmphibiaWeb is an online system enabling any user to search and retrieve information relating to amphibian biology and conservation. This site was motivated by the global declines of amphibians, the study of which has been hindered by the lack of multidisciplinary studies and a lack of coordination in monitoring, in field studies, and in lab studies. We hope AmphibiaWeb will encourage a shared vision to collaboratively face the challenge of global amphibian declines and the conservation of remaining amphibians and their habitats.
The Antimicrobial Peptide Database (APD) was originally created by a graduate student, Zhe Wang, as his master's thesis in the laboratory of Dr. Guangshun Wang. The project was initiated in 2002, and the first version of the database was opened to the public in August 2003. It contained 525 peptide entries, which could be searched in multiple ways, including APD ID, peptide name, amino acid sequence, original location, PDB ID, structure, methods for structural determination, peptide length, charge, hydrophobic content, and antibacterial, antifungal, antiviral, anticancer, and hemolytic activity. Some results of this bioinformatics tool were reported in the 2004 database paper. The peptide data stored in the APD were gleaned manually from the literature (PubMed, PDB, Google, and Swiss-Prot) over more than a decade.
The NSIDC Distributed Active Archive Center (DAAC) processes, archives, documents, and distributes data from NASA's past and current Earth Observing System (EOS) satellites and field measurement programs. The NSIDC DAAC focuses on the study of the cryosphere. The NSIDC DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) Data Centers.
The International Center for Global Earth Models (ICGEM) collects and distributes historical and current global gravity field models of the Earth and offers a calculation service for derived quantities. In particular, its tasks include:
  • collecting and archiving all existing global gravity field models,
  • a web interface for accessing the global gravity field models,
  • web-based visualization of the gravity field models, their differences, and their time variation,
  • a web-based service for calculating different functionals of the gravity field models,
  • a website with tutorials on spherical harmonics and the theory behind the calculation service.
As a new service since 2016, ICGEM provides a Digital Object Identifier (DOI) for a model's data set (the coefficients).
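As background (this is the standard convention for such models, not text quoted from the entry above), global gravity field models of this kind are distributed as sets of fully normalized spherical-harmonic coefficients, from which the gravitational potential and derived functionals are evaluated roughly as follows:

```latex
% Background sketch: conventional spherical-harmonic expansion of the geopotential.
% GM: geocentric gravitational constant;  R: reference radius of the model;
% r, \varphi, \lambda: geocentric radius, latitude and longitude of the evaluation point;
% \bar{P}_{lm}: fully normalized associated Legendre functions;
% \bar{C}_{lm}, \bar{S}_{lm}: the model coefficients (the data set to which ICGEM assigns a DOI).
V(r,\varphi,\lambda) = \frac{GM}{r} \sum_{l=0}^{l_{\max}} \left(\frac{R}{r}\right)^{l}
  \sum_{m=0}^{l} \bar{P}_{lm}(\sin\varphi)\,
  \left[ \bar{C}_{lm}\cos(m\lambda) + \bar{S}_{lm}\sin(m\lambda) \right]
```

Functionals such as geoid undulations or gravity anomalies follow by applying the corresponding operators to this series, which is the kind of evaluation the calculation service described above performs.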
GNPS is a web-based mass spectrometry ecosystem that aims to be an open-access knowledge base for community-wide organization and sharing of raw, processed, or identified tandem mass spectrometry (MS/MS) data. GNPS aids in identification and discovery throughout the entire life cycle of data, from initial data acquisition/analysis to post-publication.
KEGG is a database resource for understanding high-level functions and utilities of the biological system, such as the cell, the organism and the ecosystem, from molecular-level information, especially large-scale molecular datasets generated by genome sequencing and other high-throughput experimental technologies.