  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
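For example, the operators above can be combined in a single query. The following hypothetical query matches entries containing the phrase "precipitation chemistry" with a slop of 2, requires the term climate, excludes commercial, and wildcard-matches any term beginning with geo:

    "precipitation chemistry"~2 +climate -commercial geo*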
Found 230 result(s)
NIST Data Gateway - provides easy access to many of the NIST scientific and technical databases. These databases cover a broad range of substances and properties from many different scientific disciplines. The Gateway includes links to free online NIST data systems as well as to information on NIST PC databases available for purchase.
The World Data Centre for Precipitation Chemistry (WDCPC) receives and archives precipitation chemistry data and complementary information from stations around the world. Data archived by the centre are accessible via the WDCPC database. Freely available data from regional and national programmes with their own Web sites are accessible via links to these sites. The WDCPC is one of six World Data Centres in the World Meteorological Organization Global Atmosphere Watch (GAW). The focus on precipitation chemistry is described in the GAW Precipitation Chemistry Programme. Guidance on all aspects of collecting precipitation for chemical analysis is provided in the Manual for the GAW Precipitation Chemistry Programme (WMO-GAW Report No. 160).
Research Data Unipd is a data archive that supports research produced by members of the University of Padova. The service aims to facilitate data discovery, data sharing, and reuse, as required by funding institutions (e.g., the European Commission). Datasets published in the archive carry a set of metadata that ensures proper description and discoverability.
Geochron is a global database that hosts geochronologic and thermochronologic information from detrital minerals. Information included with each sample consists of a table with the essential isotopic information and ages, a table with basic geologic metadata (e.g., location, collector, publication, etc.), a Pb/U Concordia diagram, and a relative age probability diagram. This information can be accessed and viewed with any web browser, and depending on the level of access desired, can be designated as either private or public. Loading information into Geochron requires the use of U-Pb_Redux, a Java-based program that also provides enhanced capabilities for data reduction, plotting, and analysis. Instructions are provided for three different levels of interaction with Geochron: 1. Accessing samples that are already in the Geochron database. 2. Preparation of information for new samples, and then transfer to Arizona LaserChron Center personnel for uploading to Geochron. 3. Preparation of information and uploading to Geochron using U-Pb_Redux.
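As an illustration only, a sample record of the kind described above might map onto a structure like the following Python sketch; the field names are hypothetical and do not reflect the actual Geochron or U-Pb_Redux schema:

    from dataclasses import dataclass, field

    @dataclass
    class GeochronSample:
        # Basic geologic metadata (location, collector, publication, ...)
        sample_id: str
        latitude: float
        longitude: float
        collector: str
        publication: str
        # Essential isotopic information and ages, one entry per analysis
        isotopic_table: list = field(default_factory=list)
        # Records may be designated either private or public
        is_public: bool = False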
STOREDB is a platform for the archiving and sharing of primary data and outputs of all kinds, including epidemiological and experimental data, from research on the effects of radiation. It also provides a directory of bioresources and databases containing information and materials that investigators are willing to share. STORE supports the creation of a radiation research commons.
The UK Polar Data Centre (UK PDC) is the focal point for Arctic and Antarctic environmental data management in the UK. Part of the Natural Environmental Research Council’s (NERC) network of environmental data centres and based at the British Antarctic Survey, it coordinates the management of polar data from UK-funded research and supports researchers in complying with national and international data legislation and policy.
Jülich DATA is a registry service that indexes all research data created at, or in the context of, Forschungszentrum Jülich. As an institutional repository, it may also be used for data and software publications.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
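HDX is built on the open-source CKAN platform, so its datasets can also be searched programmatically. A minimal sketch, assuming the standard CKAN action API exposed at data.humdata.org; the search term is an arbitrary example:

    import requests

    # CKAN's package_search action returns matching datasets as JSON
    resp = requests.get(
        "https://data.humdata.org/api/3/action/package_search",
        params={"q": "refugees", "rows": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for dataset in resp.json()["result"]["results"]:
        print(dataset["title"])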
Academic Torrents is a distributed data repository. The Academic Torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has the data becomes a mirror for those data, so the system is fault-tolerant.
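A minimal sketch of fetching (and then mirroring) a dataset with the libtorrent Python bindings, assuming libtorrent >= 1.2; the magnet link is a placeholder, not a real dataset:

    import time
    import libtorrent as lt

    # Placeholder magnet URI; real links are listed on academictorrents.com
    MAGNET = "magnet:?xt=urn:btih:<infohash>"

    ses = lt.session()
    params = lt.parse_magnet_uri(MAGNET)
    params.save_path = "./data"
    handle = ses.add_torrent(params)

    # Once the download completes, the client seeds, i.e. acts as a mirror
    while not handle.status().is_seeding:
        time.sleep(5)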
Tropicos® was originally created for internal research but has since been made available to the world's scientific community. All of the nomenclatural, bibliographic, and specimen data accumulated in the Missouri Botanical Garden's (MBG's) electronic databases during the past 30 years are publicly available here.
The mission of the World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is a member of the ISC's World Data System. Emphasis is on the development and implementation of best-practice methods for Earth system data management. Data for and from climate research are collected, stored, and disseminated. The WDCC is restricted to data products. Cooperation exists with thematically corresponding data centres in, e.g., earth observation, meteorology, oceanography, paleoclimate, and environmental sciences. The services of the WDCC are also available to external users at cost price. A special service for the direct integration of research data into scientific publications has been developed. The editorial process at the WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital identifier (DOI) are provided and registered together with citation information at the DOI registration agency DataCite.
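Because WDCC registers its datasets with DataCite, their citation metadata can be retrieved through DataCite's public REST API. A minimal sketch; the DOI shown is a placeholder, not a real dataset identifier:

    import requests

    doi = "10.1594/WDCC/<dataset-id>"  # placeholder; substitute a real WDCC DOI
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    print(attrs["titles"][0]["title"], attrs["publicationYear"])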
OrtholugeDB contains Ortholuge-based orthology predictions for completely sequenced bacterial and archaeal genomes. It is also a resource for reciprocal best BLAST-based ortholog predictions, in-paralog predictions (recently duplicated genes) and ortholog groups in Bacteria and Archaea. The Ortholuge method improves the specificity of high-throughput orthology prediction.
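The reciprocal best BLAST idea mentioned above is straightforward to express: gene a in genome A and gene b in genome B are called putative orthologs when each is the other's top-scoring hit. A minimal sketch over BLAST tabular output (-outfmt 6), which is an assumption about the input format:

    def best_hits(blast_tab_lines):
        """Map each query to its best subject by bit score (column 12)."""
        best = {}
        for line in blast_tab_lines:
            cols = line.rstrip("\n").split("\t")
            query, subject, bitscore = cols[0], cols[1], float(cols[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
        return {q: s for q, (s, _) in best.items()}

    def reciprocal_best_hits(a_vs_b_lines, b_vs_a_lines):
        """Pairs (a, b) where a's best hit is b and b's best hit is a."""
        ab = best_hits(a_vs_b_lines)
        ba = best_hits(b_vs_a_lines)
        return [(a, b) for a, b in ab.items() if ba.get(b) == a]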
The University of Cape Town (UCT) uses Figshare for Institutions as its data repository, launched in 2017 and called ZivaHub: Open Data UCT. ZivaHub serves principal investigators at the University of Cape Town who need a repository to store and openly disseminate the data that support their published research findings. The repository service is provided in terms of the UCT Research Data Management Policy. It provides open access to supplementary research data files and links to their respective scholarly publications (e.g., theses, dissertations, and papers) hosted on other platforms, such as OpenUCT.
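Since ZivaHub runs on Figshare, its public records are also reachable through the Figshare v2 API. A minimal sketch using the search endpoint; the search term is an arbitrary example:

    import requests

    # Full-text search across public Figshare articles
    resp = requests.post(
        "https://api.figshare.com/v2/articles/search",
        json={"search_for": "University of Cape Town"},
        timeout=30,
    )
    resp.raise_for_status()
    for article in resp.json():
        print(article["title"], article["doi"])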
The central data management system of the USGS for water data, providing access to water-resources data collected at approximately 1.5 million sites in all 50 states, the District of Columbia, Puerto Rico, the Virgin Islands, Guam, American Samoa, and the Commonwealth of the Northern Mariana Islands. It includes data on water use and quality, groundwater, and surface water.
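These data are also served programmatically through the public USGS water services endpoint. A minimal sketch requesting current streamflow for one example site, treating the response shape as an assumption:

    import requests

    resp = requests.get(
        "https://waterservices.usgs.gov/nwis/iv/",
        params={
            "format": "json",
            "sites": "01646500",    # example site: Potomac River near Washington, D.C.
            "parameterCd": "00060", # discharge, cubic feet per second
        },
        timeout=30,
    )
    resp.raise_for_status()
    for ts in resp.json()["value"]["timeSeries"]:
        print(ts["variable"]["variableName"], ts["values"][0]["value"][0]["value"])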
CalSurv provides comprehensive information on West Nile virus, plague, malaria, Lyme disease, trench fever, and other vector-borne diseases in California: where they are, where they have been, where they may be headed, and what new diseases may be emerging. The CalSurv Web site serves as a portal, a single interface to all surveillance-related Web sites in California.
The central mission of the NACJD is to facilitate and encourage research in the criminal justice field by sharing data resources. Specific goals include providing computer-readable data for the quantitative study of crime and the criminal justice system through the development of a central data archive, supplying technical assistance in the selection of data collections and of computer hardware and software for data analysis, and providing training in quantitative methods of social science research to facilitate secondary analysis of criminal justice data.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit revealed security flaws in the legacy HapMap site that required NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website over the past five years, and the resource has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data were gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations addressed potential ethical issues and provided valuable experience in conducting research with identified populations. Public and private organizations in six countries participated in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and was expected to take about three years.
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL was a virtual e-laboratory that supported research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supported the development and use of workflows to process data, offering the possibility either to use ready-made workflows or to create your own. BioVeL workflows are stored in the myExperiment BioVeL Group (http://www.myexperiment.org/groups/643/content). They are underpinned by a range of analytical and data-processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. The Web Services are catalogued in the BiodiversityCatalogue.
The Australian National University (ANU) undertakes work to collect and publish metadata about research data held by ANU and, in the case of four discipline areas (Earth Sciences, Astronomy, Phenomics, and Digital Humanities), to develop pipelines and tools that enable the publication of research data using a common and repeatable approach. Aims and outcomes: to identify and describe research data held at ANU; to develop a consistent approach to the publication of metadata on the University's data holdings; to identify and curate significant orphan data sets that might otherwise be lost or inadvertently destroyed; and to develop a culture of data sharing and data re-use.