Filter

Subjects

Content Types

Countries

AID systems

API

Certificates

Data access

Data access restrictions

Database access

Database access restrictions

Database licenses

Data licenses

Data upload

Data upload restrictions

Enhanced publication

Institution responsibility type

Institution type

Keywords

Metadata standards

PID systems

Provider types

Quality management

Repository languages

Software

Syndications

Repository types

Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
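The operators above can be combined in a single query. The following sketches illustrate each operator; the keywords are hypothetical examples, not actual search terms from this registry:

```
climat*                          wildcard: matches climate, climatology, ...
"earthquake data"                exact phrase search
geology + seismology             AND search (also the default between terms)
genomics | proteomics            OR search
biodiversity -marine             exclude results containing "marine"
(genomics | proteomics) -human   parentheses group the OR before the NOT
protien~2                        fuzzy match within edit distance 2
"ocean data"~3                   phrase match allowing a slop of 3 words
```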
Found 46 result(s)
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
<<<!!!<<< The repository is no longer available. >>>!!!>>> Here you will find a collection of atomic microstructures that have been built by the atomistic modeling community. Feel free to download any of these and use them in your own scientific explorations. The focus of this cyberinfrastructure is to advance the field of atomic-scale modeling of materials by acting as a forum for disseminating new atomistic-scale methodologies, educating non-experts and the next generation of computational materials scientists, and serving as a bridge between the atomistic and complementary (electronic structure, mesoscale) modeling communities.
NONCODE is an integrated knowledge database dedicated to non-coding RNAs (excluding tRNAs and rRNAs). Currently, NONCODE covers 16 species (human, mouse, cow, rat, chicken, fruit fly, zebrafish, C. elegans, yeast, Arabidopsis, chimpanzee, gorilla, orangutan, rhesus macaque, opossum and platypus). The sources of NONCODE include the literature and other public databases. PubMed was searched using the keywords 'ncrna', 'noncoding', 'non-coding', 'no code', 'non-code', 'lncrna' and 'lincrna'. Newly identified lncRNAs and their annotations were retrieved from the supplementary material or websites of these articles. Together with the newest data from Ensembl, RefSeq, lncRNAdb and GENCODE, they were processed through a standard pipeline for each species.
SeaBASS is the publicly shared archive of in situ oceanographic and atmospheric data maintained by the NASA Ocean Biology Processing Group (OBPG). High-quality in situ measurements are a prerequisite for satellite data product validation, algorithm development, and many climate-related inquiries. As such, the OBPG maintains a local repository of in situ oceanographic and atmospheric data to support its regular scientific analyses. The SeaWiFS Project originally developed this system, SeaBASS, to catalog the radiometric and phytoplankton pigment data used in its calibration and validation activities. To facilitate the assembly of a global data set, SeaBASS was expanded with oceanographic and atmospheric data collected by participants in the SIMBIOS Program, under NASA Research Announcements NRA-96 and NRA-99, which has aided considerably in minimizing spatial bias and maximizing data acquisition rates. Archived data include measurements of apparent and inherent optical properties, phytoplankton pigment concentrations, and other related oceanographic and atmospheric data, such as water temperature, salinity, stimulated fluorescence, and aerosol optical thickness. Data are collected using a number of different instrument packages, such as profilers, buoys, and hand-held instruments, from a variety of manufacturers, on platforms including ships and moorings.
Fordatis is the institutional research data repository of the Fraunhofer-Gesellschaft. The Fraunhofer-Gesellschaft, based in Germany, is Europe's largest research and technology organization. Fordatis contains research data created by researchers at Fraunhofer, covering the engineering, natural and social sciences.
The Archaeological Map of the Czech Republic (AMCR) is a repository designed for information on archaeological investigations, sites and finds, operated by the Archaeological Institutes of the CAS in Prague and Brno. The archives of these institutions contain documentation of archaeological fieldwork on the territory of the Czech Republic from 1919 to the present day, and they continue to enrich their collections. The AMCR database and related documents form the largest collection of archaeological data concerning the Czech Republic and are therefore an important part of our cultural heritage. The AMCR digital archive contains various types of records - individual archaeological documents (texts, field photographs, aerial photographs, maps and plans, digital data), projects, fieldwork events, archaeological sites, records of individual finds and a library of 3D models. Data and descriptive information are continuously taken from the AMCR and presented in the AMCR Digital Archive interface.
<<<!!!<<< This repository is no longer available. >>>!!!>>> NetPath is one of the largest open-source repositories of human signaling pathways, intended to become a community standard for meeting the challenges of functional genomics and systems biology. Signaling networks are the key to deciphering many of the complex networks that govern the machinery inside the cell. Several signaling molecules play an important role in disease processes as a direct result of their altered functioning and are now recognized as potential therapeutic targets. Understanding how to restore the proper functioning of pathways that have become deregulated in disease is needed to accelerate biomedical research. This resource is aimed at demystifying biological pathways and highlights the key relationships and connections between them. Apart from this, pathways provide a way of reducing the dimensionality of high-throughput data by grouping thousands of genes, proteins and metabolites at the functional level into just several hundred pathways per experiment. Identifying the active pathways that differ between two conditions can have more explanatory power than a simple list of differentially expressed genes and proteins.
The UniProt Knowledgebase (UniProtKB) is the central hub for the collection of functional information on proteins, with accurate, consistent and rich annotation. In addition to capturing the core data mandatory for each UniProtKB entry (mainly, the amino acid sequence, protein name or description, taxonomic data and citation information), as much annotation information as possible is added. This includes widely accepted biological ontologies, classifications and cross-references, and clear indications of the quality of annotation in the form of evidence attribution of experimental and computational data. The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc). The UniProt Metagenomic and Environmental Sequences (UniMES) database is a repository specifically developed for metagenomic and environmental data. The UniProt Knowledgebase is an expertly and richly curated protein database, consisting of two sections called UniProtKB/Swiss-Prot and UniProtKB/TrEMBL.
<<<!!!<<< This site is going away on April 1, 2021. General access to the site has been disabled and community users will see an error upon login. >>>!!!>>> Socrata's cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
BioPortal is an open repository of biomedical ontologies, a service that provides access to those ontologies, and a set of tools for working with them. BioPortal provides a wide range of such tools, either directly via the BioPortal web site, or using the BioPortal web service REST API. BioPortal also includes community features for adding notes, reviews, and even mappings to specific ontologies. BioPortal has four major product components: the web application; the API services; widgets, or applets, that can be installed on your own site; and a Virtual Appliance version that is available for download or through Amazon Web Services machine instance (AMI). There is also a beta release SPARQL endpoint.
The "Flora of Bavaria" initiative, with its data portal (14 million occurrence records) and Wiki representation, is primarily a citizen science project. Efforts to describe and monitor the flora of Bavaria have been ongoing for 100 years. The goal of these efforts is to record all vascular plants, including newcomers, and to document threatened or former local occurrences. As the geographically largest state of Germany, with a broad range of habitats, Bavaria has a special responsibility for documenting and maintaining its plant diversity. About 85% of all German vascular plant species occur in Bavaria, which in addition hosts about 50 endemic taxa known only from Bavaria (most of them occurring in the Alps). The Wiki is a collaboration of volunteers and local and regional Bavarian botanical societies. Everybody is welcome to contribute, especially with photos or reports of local changes in the flora. The Flora of Bavaria project provides access to a research data repository for occurrence data powered by the Diversity Workbench database framework.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
Neuroimaging Tools and Resources Collaboratory (NITRC) is currently a free one-stop-shop environment for science researchers who need resources such as neuroimaging analysis software, publicly available data sets, and computing power. Since its debut in 2007, NITRC has helped the neuroscience community make further discoveries using software and data produced from research that, before NITRC, was routinely lost or disregarded. NITRC provides free access to data and enables pay-per-use cloud-based access to unlimited computing power, enabling worldwide scientific collaboration with minimal startup and cost. With NITRC and its components—the Resources Registry (NITRC-R), Image Repository (NITRC-IR), and Computational Environment (NITRC-CE)—a researcher can obtain pilot or proof-of-concept data to validate a hypothesis for a few dollars.
Originally named the Radiation Belt Storm Probes (RBSP), the mission was renamed the Van Allen Probes following successful launch and commissioning. For simplicity and continuity, the RBSP short form has been retained for existing documentation, file naming, and data product identification purposes. The RBSPICE investigation, including the RBSPICE Instrument SOC, maintains compliance with requirements levied in all applicable mission control documents.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website over the past five years, and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility to either use ready-made workflows or create your own. BioVeL workflows are stored in myExperiment - BioVeL Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
TAED is a database of phylogenetically indexed gene families. It contains multiple sequence alignments from MAFFT, maximum likelihood phylogenetic trees from PhyML, bootstrap values for each node, dN/dS ratios for each lineage from the free-ratios model in PAML, and labels for each node as speciation or duplication from gene tree/species tree reconciliation using SoftParsMap. The phylogenetic indexing enables simultaneous viewing of lineages with high dN/dS that occurred along the same species tree branches. Resources from the Protein Data Bank (PDB) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) have been incorporated into the TAED analysis to detect substitutions along each branch within the phylogenetic tree and to assess selection within pathways.
The Canadian Disaster Database (CDD) contains detailed disaster information on more than 1000 natural, technological and conflict events (excluding war) that have happened since 1900 at home or abroad and that have directly affected Canadians. Message since 2022-01: The Canadian Disaster Database geospatial view is temporarily out of service. We apologize for the inconvenience. The standard view of the database is still available.
CCCma has developed a number of climate models. These are used to study climate change and variability, and to understand the various processes which govern the climate system. They are also used to make quantitative projections of future long-term climate change (given various greenhouse gas and aerosol forcing scenarios), and increasingly to make initialized climate predictions on time scales ranging from seasons to decades. A brief description of these models and their corresponding references can be found at: https://www.canada.ca/en/environment-climate-change/services/climate-change/science-research-data/modeling-projections-analysis/centre-modelling-analysis/models.html
Vast networks of meteorological sensors ring the globe measuring atmospheric state variables, like temperature, humidity, wind speed, rainfall, and atmospheric carbon dioxide, on a continuous basis. These measurements serve earth system science by providing inputs into models that predict weather, climate and the cycling of carbon and water. And, they provide information that allows researchers to detect the trends in climate, greenhouse gases, and air pollution. The eddy covariance method is currently the standard method used by biometeorologists to measure fluxes of trace gases between ecosystems and atmosphere.
Tethys is an Open Access research data repository of GeoSphere Austria, which publishes and distributes georeferenced geoscientific research data generated at and in cooperation with GeoSphere Austria. The research data publications and the associated metadata are predominantly provided in German or in English; the abstracts are provided in both languages. Tethys aims to provide published data sets as open data and in accordance with the FAIR Data Principles: findable, accessible, interoperable and reusable.
TCIA is a service which de-identifies and hosts a large archive of medical images of cancer accessible for public download. The data are organized as "collections", typically patients' imaging related by a common disease (e.g. lung cancer), image modality or type (MRI, CT, digital histopathology, etc.), or research focus. Supporting data related to the images, such as patient outcomes, treatment details, genomics and expert analyses, are also provided when available.