Filters: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

Search syntax:

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (see the example queries below)
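A minimal sketch (in Python, for illustration only) of how these operators combine into keyword queries; the search terms used here are made-up examples, not terms taken from the registry:

    # Example query strings for the keyword search syntax described above.
    # The search terms themselves are hypothetical; only the operators follow the rules listed.
    example_queries = [
        ("climat*",                 "wildcard: matches climate, climatology, ..."),
        ('"research data"',         "exact phrase search"),
        ("genome + cancer",         "AND search (the default)"),
        ("genome | proteome",       "OR search"),
        ("climate - model",         "NOT: climate but not model"),
        ("(ocean | marine) + data", "parentheses set grouping"),
        ("tempratur~2",             "fuzzy term match within edit distance 2"),
        ('"data archive"~3',        "phrase match allowing a slop of 3"),
    ]

    for query, meaning in example_queries:
        print(f"{query:26} {meaning}")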
Found 56 result(s)
A domain-specific repository for the Life Sciences, covering the health and medical sciences as well as the green life sciences. The repository services are aimed primarily, but not exclusively, at the Netherlands.
InTOR is the institutional digital repository of the Institute of Virology, Vaccines and Sera “Torlak”. It provides open access to publications and other research outputs resulting from projects implemented by the Institute. The repository's software platform conforms to modern standards for the dissemination of scientific publications and is compatible with the international infrastructure in this field.
OLOS is a Swiss-based data management portal tailored for researchers and institutions. Powerful yet easy to use, OLOS works with most tools and formats across all scientific disciplines to help researchers safely manage, publish and preserve their data. The solution was developed as part of a larger project focusing on Data Life Cycle Management (dlcm.ch) that aims to develop various services for research data management. Thanks to its highly modular architecture, OLOS can be adapted both to small institutions that need a "turnkey" solution and to larger ones that can rely on OLOS to complement what they have already implemented. OLOS is compatible with all formats in use in the different scientific disciplines and is based on modern technology that interconnects with researchers' environments (such as Electronic Laboratory Notebooks or Laboratory Information Management Systems).
The purpose of this repository is to share Ontario's government data sets online to increase transparency and accountability. We're adding to the hundreds of records we’ve released so far to create an inventory of known government data. Data will either be open, restricted, under review, or in the process of being made open, depending on the sensitivity of the information.
Rodare is the institutional research data repository at HZDR (Helmholtz-Zentrum Dresden-Rossendorf). Rodare allows HZDR researchers to upload their research software and data and enrich them with metadata to make them findable, accessible, interoperable and reusable (FAIR). By publishing all associated research software and data via Rodare, research reproducibility can be improved. Uploads receive a Digital Object Identifier (DOI) and can be harvested via an OAI-PMH interface.
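As a hedged illustration of harvesting such records, the Python sketch below uses the standard OAI-PMH ListRecords verb with the oai_dc metadata prefix; the base URL is an assumed Invenio-style /oai2d path and may not match Rodare's actual endpoint:

    # Minimal OAI-PMH harvesting sketch using only the Python standard library.
    # The ListRecords verb and oai_dc prefix are part of the OAI-PMH standard;
    # the BASE_URL below is an assumption and should be checked against the repository docs.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://rodare.hzdr.de/oai2d"  # assumed endpoint path

    def list_records(base_url: str, metadata_prefix: str = "oai_dc") -> ET.Element:
        """Fetch one page of ListRecords and return the parsed XML root."""
        url = f"{base_url}?verb=ListRecords&metadataPrefix={metadata_prefix}"
        with urllib.request.urlopen(url) as response:
            return ET.fromstring(response.read())

    ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
    root = list_records(BASE_URL)
    for header in root.findall(".//oai:record/oai:header", ns):
        print(header.findtext("oai:identifier", namespaces=ns),
              header.findtext("oai:datestamp", namespaces=ns))

A full harvest would additionally follow the resumptionToken element returned by OAI-PMH to page through all records.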
DBT is the institutional repository of the FSU Jena, the TU Ilmenau and the University of Erfurt; members of the other Thuringian universities and colleges can also publish scientific documents in the DBT. In individual cases, users from the state of Thuringia can also archive documents in the DBT (via the ThULB Jena).
Provided by the University Libraries, KiltHub is the comprehensive institutional repository and research collaboration platform for research data and scholarly outputs produced by members of Carnegie Mellon University and their collaborators. KiltHub collects, preserves, and provides stable, long-term global open access to a wide range of research data and scholarly outputs created by faculty, staff, and student members of Carnegie Mellon University in the course of their research and teaching.
The LJMU Research Data Repository is the University's institutional repository where researchers can safely deposit and store research data on an Open Access basis. Data stored in the LJMU Research Data Repository can be made freely available to anyone online and located by users of web search engines.
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissues and experimental platforms. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies. There are numerous applications of this approach. Sets can be stored, shared and compared privately, among user-defined groups of investigators, and across all users.
Online materials database (known as the PAULING FILE project) with nearly 2 million entries: physical properties, crystal structures, phase diagrams, available via an API and ready for modern data-intensive applications. These entries are drawn from about 0.5 million peer-reviewed publications in materials science, processed over the last 30 years by an international team of PhD editors. The results are presented online with a quick search interface. Basic access is provided for free.
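A rough sketch of what programmatic access might look like is given below; the endpoint URL, query parameters, and API-key header are hypothetical placeholders rather than documented values, so the real API will differ:

    # Hypothetical sketch of querying a materials-database REST API over HTTP.
    # Endpoint, parameter names, and header names are illustrative assumptions only.
    import json
    import urllib.parse
    import urllib.request

    API_ENDPOINT = "https://example.org/api/v0/search"  # hypothetical endpoint
    API_KEY = "YOUR_API_KEY"                            # hypothetical credential

    def search_entries(query: dict) -> dict:
        """Send a JSON-encoded query as a GET parameter and return parsed JSON."""
        params = urllib.parse.urlencode({"q": json.dumps(query)})
        request = urllib.request.Request(f"{API_ENDPOINT}?{params}",
                                         headers={"Key": API_KEY})
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())

    # Example: ask for entries on a binary system and one physical property.
    results = search_entries({"elements": "Ti-O", "props": "band gap"})
    print(results.get("count", 0))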
GABI, an acronym for "Genomanalyse im biologischen System Pflanze" (genome analysis in the biological system plant), is the name of a large collaborative network of different plant genomics research projects. Plant data from different 'omics' fields, representing more than 10 different model or crop species, are integrated in GabiPD.
The Cancer Immunome Atlas (TCIA) provides results of comprehensive immunogenomic analyses of next-generation sequencing (NGS) data for 20 solid cancers from The Cancer Genome Atlas (TCGA) and other data sources. TCIA was developed and is maintained at the Division of Bioinformatics (ICBI). The database can be queried for the gene expression of specific immune-related gene sets, cellular composition of immune infiltrates (characterized using gene set enrichment analyses and deconvolution), neoantigens and cancer-germline antigens, HLA types, and tumor heterogeneity (estimated from cancer cell fractions). Moreover, it provides survival analyses for different types of immunological parameters. TCIA will be constantly updated with new data and results.
>>>!!!<<< 2019-01: Global Land Cover Facility goes offline, see https://spatialreserves.wordpress.com/2019/01/07/global-land-cover-facility-goes-offline/ ; no more access to http://www.landcover.org >>>!!!<<< The Global Land Cover Facility (GLCF) provides earth science data and products to help everyone better understand global environmental systems. In particular, the GLCF develops and distributes remotely sensed satellite data and products that explain land cover from the local to global scales.
The main focus of tambora.org is Historical Climatology. Years of meticulous work in this field in research groups around the world have resulted in large data collections on climatic parameters such as temperature, precipitation, storms, floods, etc. with different regional, temporal and thematic foci. tambora.org enables researchers to collaboratively interpret the information derived from historical sources. It provides a database for original text quotations together with bibliographic references and the extracted places, dates and coded climate and environmental information.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
The Magnetics Information Consortium (MagIC) improves research capacity in the Earth and Ocean sciences by maintaining an open community digital data archive for rock magnetic, geomagnetic, archeomagnetic (archaeomagnetic) and paleomagnetic (palaeomagnetic) data. Different parts of the website allow users to archive, search, visualize, and download these data. MagIC supports the international rock magnetism, geomagnetism, archeomagnetism (archaeomagnetism), and paleomagnetism (palaeomagnetism) research community and endeavors to bring data out of private archives, making them accessible to all and (re-)usable for new, creative, collaborative scientific and educational activities. The data in MagIC are used for many types of studies, including tectonic plate reconstructions, geomagnetic field models, paleomagnetic field reversal studies, magnetohydrodynamical studies of the Earth's core, magnetostratigraphy, and archeology. MagIC is a domain-specific data repository and is directed by PIs who are both producers and consumers of rock, geo-, and paleomagnetic data. Funded by NSF since 2003, MagIC forms a major part of https://earthref.org, which integrates four independent cyber-initiatives rooted in various parts of the Earth, Ocean and Life sciences and education.
The goal of NGEE–Arctic is to reduce uncertainty in projections of future climate by developing and validating a model representation of permafrost ecosystems and incorporating that representation into Earth system models. The new modeling capabilities will improve our confidence in model projections and will enable scientists to better respond to questions about processes and interactions now and in the future. It will also allow them to better communicate important results concerning climate change to decision makers and the general public.
The objective of this database is to stimulate the exchange of information and collaboration between researchers within the ChArMEx community. However, this community is not exclusive: researchers not directly involved in ChArMEx who wish to contribute to ChArMEx scientific and/or educational goals are welcome to join in. The database is a repository for all the data collected during the various projects that contribute to the ChArMEx coordinated program. It aims at documenting, storing and distributing the data produced or used by the project community. However, it is also intended to host datasets that were produced outside the ChArMEx program but which are meaningful to ChArMEx scientific and/or educational goals. Any data owner who wishes to add or link their dataset to the ChArMEx database is welcome to contact the database manager to get help and support. The ChArMEx database includes past and recent geophysical in situ observations, satellite products and model outputs. The database organizes the data management and provides data services to end users of ChArMEx data. The database system provides a detailed description of the products and uses standardized formats whenever possible. It defines the access rules to the data and details the mutual rights and obligations of data providers and users (see the ChArMEx data and publication policy). The database is being developed jointly by SEDOO (OMP Toulouse), ICARE (Lille) and ESPRI (IPSL Paris).
Within WASCAL a large number of heterogeneous data are collected. These data come mainly from research activities initiated within WASCAL (Core Research Program, Graduate School Program), from the hydrological-meteorological, remote sensing, biodiversity and socio-economic observation networks within WASCAL, and from the activities of the WASCAL Competence Center in Ouagadougou, Burkina Faso.
Geochron is a global database that hosts geochronologic and thermochronologic information from detrital minerals. Information included with each sample consists of a table with the essential isotopic information and ages, a table with basic geologic metadata (e.g., location, collector, publication, etc.), a Pb/U Concordia diagram, and a relative age probability diagram. This information can be accessed and viewed with any web browser and, depending on the level of access desired, can be designated as either private or public. Loading information into Geochron requires the use of U-Pb_Redux, a Java-based program that also provides enhanced capabilities for data reduction, plotting, and analysis. Instructions are provided for three different levels of interaction with Geochron: (1) accessing samples that are already in the Geochron database; (2) preparing information for new samples and transferring it to Arizona LaserChron Center personnel for uploading to Geochron; and (3) preparing information and uploading it to Geochron using U-Pb_Redux.