
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
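Combining these operators, a hypothetical query for ocean-related repositories might look like this (the terms themselves are illustrative, not taken from the directory):

```
ocean* +("data archive" | repository~1) -restricted
```

This matches records containing a word starting with "ocean", requires either the exact phrase "data archive" or a term within edit distance 1 of "repository", and excludes records containing "restricted".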
Found 340 result(s)
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
Note: This repository is no longer available. The main objective of our work is to understand the pathomechanisms of late-onset neurodegenerative disorders such as Huntington's, Parkinson's, Alzheimer's and Machado-Joseph disease, and to develop causal therapies for them. The disease-causing proteins of these illnesses have been identified, but their functions in the unaffected organism are mostly unknown. Here, we have developed a strategy combining library and matrix yeast two-hybrid screens to generate a highly connected PPI network for Huntington's disease (HD).
By stimulating inspiring research and producing innovative tools, Huygens ING intends to open up old and inaccessible sources, and to understand them better. Huygens ING’s focus is on Digital Humanities, History, History of Science, and Textual Scholarship. Huygens ING pursues research in the fields of History, Literary Studies, the History of Science and Digital Humanities. Huygens ING aims to publish digital sources and data responsibly and with care. Innovative tools are made as widely available as possible. We strive to share the available knowledge at the institute with both academic peers and the wider public.
Note: Content is being moved from the Hydra repository (hydra.hull.ac.uk) to new repositories. Hydra is a repository for digital materials at the University of Hull. It can hold and manage any type of digital material, and is being developed in response to the growth in the amount of digital material generated through the research, education and administrative activities within the University. Hydra contains different collections of datasets from University of Hull research projects, such as ARCdoc, the Domesday dataset, the History of Marine Animal Populations (HMAP) and others.
HYdrological cycle in the Mediterranean EXperiment (HyMeX). Considering the science and societal issues motivating HyMeX, the programme aims to: improve our understanding of the water cycle, with emphasis on extreme events, by monitoring and modelling the Mediterranean atmosphere-land-ocean coupled system, its variability from the event scale to the seasonal and interannual scales, and its characteristics over one decade (2010-2020) in the context of global change; and assess the social and economic vulnerability to extreme events and the capacity for adaptation. The multidisciplinary research and the database developed within HyMeX should contribute to: improving observational and modelling systems, especially for coupled systems; better predicting extreme events; simulating the long-term water cycle more accurately; and providing guidelines for adaptation measures, especially in the context of global change.
HyperLeda is an information system for astronomy: it consists of a database and tools to process those data according to the user's requirements. The scientific goal motivating the development of HyperLeda is the study of the physics and evolution of galaxies. LEDA was created more than 20 years ago, in 1983, and became HyperLeda after merging with Hypercat in 2000.
The Water Survey has flourished for more than a century by anticipating and responding to new challenges and opportunities to serve the citizens of Illinois. Today, the ISWS continues to demonstrate flexibility and adaptability by developing new programs, while continuing to provide long-standing services upon which Illinoisans have come to rely. The Scientific Surveys of the University of Illinois at Urbana-Champaign are the primary agencies in Illinois responsible for producing and disseminating scientific and technological information, services, and products related to the environment, economic development, and quality of life. To achieve this mission, the Scientific Surveys conduct state-of-the-art research and collect, analyze, archive, and disseminate high-quality, objective data and technical information. The information, services, and products provide a sound technical basis for the citizens and policymakers of Illinois and the nation to make wise social, economic, and environmental decisions.
The Institute for Marine and Antarctic Studies (IMAS) pursues multidisciplinary and interdisciplinary work to advance understanding of temperate marine, Southern Ocean, and Antarctic environments. IMAS research is characterised as innovative, relevant, and globally distinctive. Education at IMAS delivers world-class programs, resulting in highly trained graduates who serve the needs of academic institutions, industry, government, and the community. IMAS is naturally advantaged by its Southern Ocean location proximal to Antarctica, and hosts one of the world's largest critical masses of marine and Antarctic researchers. IMAS also operates facilities and hosts data sets of national and global interest, to the benefit of the community. The guiding framework of IMAS is that all data that are not commercial-in-confidence or restricted by legislation or agreement are owned by the University on behalf of the community or Commonwealth, are hosted by an organisation, and are shared with researchers for analysis and interpretation. IMAS is committed to the concept of Open Data. The IMAS Data Portal is an online interface showcasing the IMAS metadata catalogue and all available IMAS data. The portal aims to make IMAS data freely and openly available for the benefit of Australian marine and environmental science as a whole.
IMGT/GENE-DB is the IMGT genome database for IG and TR genes from human, mouse and other vertebrates. IMGT/GENE-DB provides a full characterization of the genes and of their alleles: IMGT gene name and definition, chromosomal localization, number of alleles, and for each allele, the IMGT allele functionality, and the IMGT reference sequences and other sequences from the literature. IMGT/GENE-DB allele reference sequences are available in FASTA format (nucleotide and amino acid sequences with IMGT gaps according to the IMGT unique numbering, or without gaps).
iNaturalist is a citizen science project and online social network of naturalists, citizen scientists, and biologists built on the concept of mapping and sharing observations of biodiversity across the globe. iNat is a platform for biodiversity research, where anyone can start up their own science project with a specific purpose and collaborate with other observers.
The Ocean Data and Information System provides information on physical, chemical, biological and geological parameters of the ocean and coasts on spatial and temporal domains that is vital for both research and operational oceanography. In-situ and remote sensing data are included. The Ocean Information Bank is supported by the data received from Ocean Observing Systems in the Indian Ocean (both in-situ platforms and satellites) as well as by a chain of Marine Data Centres. Ocean and coastal measurements are available. Data products are accessible through various portals on the site and are largely organised by data type (in situ or remote sensing) and then by parameter.
The CliSAP-Integrated Climate Data Center (ICDC) allows easy access to climate-relevant data from satellite remote sensing, in-situ and other measurements in Earth system sciences. These data are important for determining the status of, and changes in, the climate system. Additionally, some relevant reanalysis data are included, which are modelled on the basis of observational data. ICDC cooperates with the Zentrum für Nachhaltiges Forschungsdatenmanagement (https://www.fdr.uni-hamburg.de/) to publish observational data with a DOI.
Note: This repository is no longer available. Visit IRIS at EPA: https://www.epa.gov/iris
Argo is an international programme using autonomous floats to collect temperature, salinity and current data in the ice-free oceans. It is teamed with the Jason ocean satellite series. Argo will soon reach its target of 3000 floats delivering data within 24 hours to researchers and operational centres worldwide. 23 countries contribute floats to Argo and many others help with float deployments. Argo has revolutionized the collection of information from inside the oceans. The Argo project is organized in regional and national centres with a Project Office, an Information Centre (AIC) and two Global Data Centers (GDACs), one in the United States and one in France. Each DAC regularly submits all its new files to both the USGODAE and Coriolis GDACs. The whole Argo data set is available in real time and delayed mode from the global data centres (GDACs). The internet addresses are: https://nrlgodae1.nrlmry.navy.mil/ and http://www.argodatamgt.org
Note: Offline. A recent computer security audit revealed security flaws in the legacy HapMap site that required NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years, and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints.
The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
INDI was formed as a next generation FCP effort. INDI aims to provide a model for the broader imaging community while simultaneously creating a public dataset capable of dwarfing those that most groups could obtain individually.
Note: intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflow, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data, which are generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The Bremen Core Repository (BCR), for International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP), and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, Mediterranean and Black Seas, and Arctic Ocean, is operated at the University of Bremen within the framework of the German participation in IODP. It is one of three IODP repositories (besides the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC) in Japan). One of the scientific goals of IODP is to research the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
ITER is an Internet database of human health risk values and cancer classifications for over 680 chemicals of environmental concern from multiple organizations worldwide. ITER is the only database that presents risk data in a tabular format for easy comparison, along with a synopsis explaining differences in data and a link to each organization for more information.
MycoCosm is the DOE JGI's web-based fungal genomics resource, which integrates fungal genomics data and analytical tools for fungal biologists. It provides navigation through sequenced genomes and genome analysis in the context of comparative genomics, with a genome-centric view. MycoCosm promotes user community participation in data submission, annotation and analysis.
The Johns Hopkins Research Data Repository is an open access repository for Johns Hopkins University researchers to share their research data. The Repository is administered by professional curators at JHU Data Services, who work with depositors to enable future discovery and reuse of their data, and to ensure the data are Findable, Accessible, Interoperable and Reusable (FAIR). More information about the benefits of archiving data can be found here: https://dataservices.library.jhu.edu/