Filter categories: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
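For example, an illustrative query combining these operators (the search terms here are hypothetical, not drawn from the registry) might look like:
  "climate change"~2 + (genom* | ocean*) - model
This would match records containing the phrase "climate change" with a slop of up to two words, together with a term beginning with "genom" or "ocean", while excluding records that contain "model".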
Found 121 result(s)
The Agricultural and Environmental Data Archive (AEDA) is the direct result of a project managed by the Freshwater Biological Association in partnership with the Centre for e-Research at King's College London, and funded by the Department for Environment, Food & Rural Affairs (Defra). This project ran from January 2011 until December 2014 and was called the DTC Archive Project, because it was initially related to the Demonstration Test Catchments Platform developed by Defra. The archive was also designed to hold data from the GHG R&D Platform (www.ghgplatform.org.uk). After the DTC Archive Project was completed, the finished archive was renamed AEDA to reflect its broader remit to archive data from any and all agricultural and environmental research activities.
AmoebaDB belongs to the EuPathDB family of databases and is an integrated genomic and functional genomic database for Entamoeba and Acanthamoeba parasites. In its first iteration (released in early 2010), AmoebaDB contains the genomes of three Entamoeba species. AmoebaDB integrates whole-genome sequence and annotation and will rapidly expand to include experimental data and environmental isolate sequences provided by community researchers. The database includes supplemental bioinformatics analyses and a web interface for data mining.
Apollo (previously DSpace@Cambridge) is the University of Cambridge’s Institutional Repository (IR), preserving and providing access to content created by members of the University. The repository stores a range of content and provides different levels of access, but its primary focus is on providing open access to the University’s research publications.
The ADS is an accredited digital repository for heritage data that supports research, learning and teaching with freely available, high quality and dependable digital resources by preserving and disseminating digital data in the long term. The ADS also promotes good practice in the use of digital data, provides technical advice to the heritage community, and supports the deployment of digital technologies.
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. In the past (until 2018) datasets from the NCBI Gene Expression Omnibus database were imported on a weekly basis. Data is collected to MIAME and MINSEQE standards.
The Arizona State University (ASU) Research Data Repository provides a platform for ASU-affiliated researchers to share, preserve, cite, and make research data accessible and discoverable. The ASU Research Data Repository provides a permanent digital identifier for research data, which complies with data sharing policies. The repository is powered by the Dataverse open-source application, developed and used by Harvard University. Both the ASU Research Data Repository and the KEEP Institutional Repository are managed by the ASU Library to ensure research produced at Arizona State University is discoverable and accessible to the global community.
The Australian National Corpus collates and provides access to assorted examples of Australian English text, transcriptions, audio and audio-visual materials. Text analysis tools are embedded in the interface allowing analysis and downloads in *.CSV format.
The British Ocean Sediment Core Research Facility (BOSCORF) is based at the Southampton site of the National Oceanography Centre and is Britain’s national deep-sea core repository. BOSCORF is responsible for long-term storage and curation of sediment cores collected through UKRI-NERC research programmes. The facility promotes secondary usage of sediment core samples and analytical data relating to the sample collection.
Brain Image Library (BIL) is an NIH-funded public resource serving the neuroscience community by providing a persistent centralized repository for brain microscopy data. Data scope of the BIL archive includes whole brain microscopy image datasets and their accompanying secondary data such as neuron morphologies, targeted microscope-enabled experiments including connectivity between cells and spatial transcriptomics, and other historical collections of value to the community. The BIL Analysis Ecosystem provides an integrated computational and visualization system to explore, visualize, and access BIL data without having to download it.
The platform hosts the critical edition of the letters written to Jacob Burckhardt, reconstructing in open access one of the most important European correspondences of the 19th century. Save for a few exceptions, these letters are all unpublished. At a later stage, the project also aims to publish Jacob Burckhardt’s letters. The editing process has been carried out using the Muruca semantic digital library framework, which was modified over the course of the project as the requirements of the philological researchers emerged more clearly. The results are stored in, and accessible from, the front-end of the platform.
The National Forest Inventory (NFI) is a collaborative effort involving federal, provincial and territorial government agencies. They monitor a network of twenty thousand sampling points across Canada on an ongoing basis to provide information on the state of Canada's forests and a continuous record of forest change. They provide data and products to forest science researchers, forest policy decision-makers and interested stakeholders.
Canadian Urban Environmental Health Research Consortium (CANUE) collates and generates standard measures of environmental factors and provides these data to a wide range of health data organizations who pre-link and distribute them to the Canadian research community. Exposure metrics currently distributed by CANUE include air quality (nitrogen dioxide, sulfur dioxide, ozone, and fine particulate matter concentrations), green and blue spaces (Landsat, MODIS, and AVHRR normalized difference vegetation indices), neighborhood factors (access to employment, material and social deprivation indices, marginalization indices, nighttime light, and active living environments), and weather and climate (weather indicators, local climate zones, and water balance).
This is CSDB version 1, merged from the Bacterial (BCSDB) and Plant & Fungal (PFCSDB) Carbohydrate Structure Databases. The database aims to provide structural, bibliographic, taxonomic, NMR spectroscopic and other information on glycan and glycoconjugate structures of prokaryotic, plant and fungal origin. The key points of this service are:
  • High coverage. The coverage for bacteria (up to 2016) and archaea (up to 2016) is above 80%. Similar coverage for plants and fungi is expected in the future; the database is close to complete up to 1998 for plants and up to 2006 for fungi.
  • Data quality. High data quality is achieved by manual curation using original publications, assisted by multiple automatic procedures for error control. Errors present in publications are reported and corrected where possible, and data from other databases are verified on import.
  • Detailed annotations. Structural data are supplied with extended bibliography, assigned NMR spectra, taxon identification including strains and serogroups, and other information where available in the original publication.
  • Services. CSDB serves as a platform for a number of computational services tuned for glycobiology, such as NMR simulation, automated structure elucidation, taxon clustering, 3D molecular modeling and statistical processing of data.
  • Integration. CSDB is cross-linked to other glycoinformatics projects and to NCBI databases. The data are exportable in various formats, including the most widespread encoding schemes and records using the GlycoRDF ontology.
  • Free web access. Users can access the database for free via its web interface (see Help).
The main source of data is retrospective literature analysis. About 20% of the data were imported from CCSD (Carbbank, University of Georgia, Athens; structures published before 1996) with subsequent manual curation and approval. The current coverage is displayed in red at the top of the left menu. The time lag between the publication of new data and their deposition into CSDB is about one year; within the scope of bacterial carbohydrates, CSDB covers nearly all structures of this origin published up to 2016. "Prokaryotic, plant and fungal" means that a glycan was found in organisms belonging to these taxonomic domains or was obtained by modification of those found in them. "Carbohydrate" means a structure composed of any residues linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds in which at least one residue is a sugar or its derivative.
The team have established the CardiacAI Data Repository, which brings large amounts of Australian healthcare data together in a secure environment, with strict conditions for the use of these data and an appropriate level of oversight of research activities. The CardiacAI Data Repository collects de-identified EMR data about cardiovascular patients who are admitted to a group of urban and regional hospitals in NSW and links this with state-wide hospital and emergency department visit and mortality data and mobile-health remote monitoring data.
The Climate Change Centre Austria - Data Centre provides the central national archive for climate data and information. The data made accessible includes observation and measurement data, scenario data, quantitative and qualitative data, as well as the measurement data and findings of research projects.
The CDAWeb data system enables improved display and coordinated analysis of multi-instrument, multi-mission databases of the kind whose analysis is critical to meeting the science objectives of the ISTP program and the InterAgency Consultative Group (IACG) Solar-Terrestrial Science Initiative. The system combines the client-server user interface technology of the World Wide Web with a powerful set of customized IDL routines to leverage the data format standards (CDF) and implementation guidelines adopted by ISTP and the IACG. The system can be used with any collection of data granules following the extended set of ISTP/IACG standards. CDAWeb is used both to support coordinated analysis of public and proprietary data and to provide better functional access to specific public data, such as the ISTP-precursor CDAW 9 database, which is formatted to the ISTP/IACG standards. Many data sets are available through the Coordinated Data Analysis Web (CDAWeb) service, and the data coverage continues to grow. These are largely, but not exclusively, magnetospheric data and nearby solar wind data of the ISTP era (1992-present) at time resolutions of approximately a minute. The CDAWeb service provides graphical browsing, data subsetting, screen listings, file creation and downloads (ASCII or CDF). Holdings include public data from current (1992-present) space physics missions (including Cluster, IMAGE, ISTP, FAST, IMP-8, SAMPEX and others) and from missions before 1992 (including IMP-8, ISIS 1/2, Alouette 2, Hawkeye and others); in effect, public data from all current and past space physics missions. CDAWeb is part of the "Space Physics Data Facility" (https://www.re3data.org/repository/r3d100010168).
The Indian Census is the largest single source of a variety of statistical information on different characteristics of the people of India. With a history of more than 130 years, this reliable, time-tested exercise has been bringing out a veritable wealth of statistics every 10 years, beginning in 1872 when the first census was conducted in India non-synchronously in different parts of the country. To scholars and researchers in demography, economics, anthropology, sociology, statistics and many other disciplines, the Indian Census has been a fascinating source of data. The rich diversity of the people of India is truly brought out by the decennial census, which has become one of the tools to understand and study India. The responsibility of conducting the decennial Census rests with the Office of the Registrar General and Census Commissioner, India, under the Ministry of Home Affairs, Government of India. It may be of historical interest that, although the population census of India is a major administrative function, the Census Organisation was set up on an ad-hoc basis for each Census until the 1951 Census. The Census Act was enacted in 1948 to provide for the scheme of conducting the population census along with the duties and responsibilities of census officers. The Government of India decided in May 1949 to initiate steps for developing systematic collection of statistics on the size of the population, its growth, etc., and established an organisation in the Ministry of Home Affairs under the Registrar General and ex-officio Census Commissioner, India. This organisation was made responsible for generating data on population statistics, including Vital Statistics and the Census. Later, this office was also entrusted with the responsibility of implementing the Registration of Births and Deaths Act, 1969 in the country.
The Central Neuroimaging Data Archive (CNDA) allows complex imaging data to be shared with investigators around the world through a simple web portal. The CNDA is an imaging informatics platform that provides secure data management services for Washington University investigators, including sharing of source DICOM imaging data with external investigators through a web portal, cnda.wustl.edu. The CNDA’s services include automated archiving of imaging studies from all of the University’s research scanners, automated quality control and image processing routines, and secure web-based access to acquired and post-processed data for data sharing, in compliance with NIH data sharing guidelines. The CNDA is currently accepting datasets only from Washington University-affiliated investigators. Through this platform, the data is available for broad sharing with researchers both internal and external to Washington University. The CNDA overlaps with data in oasis-brains.org (https://www.re3data.org/repository/r3d100012182), but the CNDA is a larger data set.
Chemical Entities of Biological Interest (ChEBI) is a freely available dictionary of 'small molecular entities'. The term 'molecular entity' encompasses any constitutionally or isotopically distinct atom, molecule, ion, ion pair, radical, radical ion, complex, conformer, etc., identifiable as a separately distinguishable entity. The molecular entities in question are either products of nature or synthetic products used to intervene in the processes of living organisms (either deliberately, as for drugs, or unintentionally, as for chemicals in the environment). The qualifier 'small' implies the exclusion of entities directly encoded by the genome; thus, as a rule, nucleic acids, proteins and peptides derived from proteins by cleavage are not included.
The CEBS database houses data of interest to environmental health scientists. CEBS is a public resource and has received depositions of data from academic, industrial and governmental laboratories. CEBS is designed to display data in the context of biology and study design, and to permit data integration across studies for novel meta-analysis.
The CLARIN Centre at the University of Copenhagen, Denmark, hosts and manages a data repository (CLARIN-DK-UCPH Repository), which is part of a research infrastructure for humanities and social sciences financed by the University of Copenhagen. The CLARIN-DK-UCPH Repository provides easy and sustainable access for scholars in the humanities and social sciences to digital language data (in written, spoken, video or multimodal form) and provides advanced tools for discovering, exploring, exploiting, annotating, and analyzing data. CLARIN-DK also shares knowledge on Danish language technology and resources and is the Danish node in the European Research Infrastructure Consortium, CLARIN ERIC.
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
The Comprehensive Epidemiologic Data Resource (CEDR) is the U.S. Department of Energy (DOE) electronic database comprising health studies of DOE contract workers and environmental studies of areas surrounding DOE facilities. DOE recognizes the benefits of data sharing and supports the public's right to know about worker and community health risks. CEDR provides independent researchers and educators with access to de-identified data collected since the Department's early production years. Current CEDR holdings include more than 76 studies of over 1 million workers at 31 DOE sites. Access to these data is at no cost to the user.