Filter
Subjects
Content Types
Countries
AID systems
API
Certificates
Data access
Data access restrictions
Database access
Database access restrictions
Database licenses
Data licenses
Data upload
Data upload restrictions
Enhanced publication
Institution responsibility type
Institution type
Keywords
Metadata standards
PID systems
Provider types
Quality management
Repository languages
Software
Syndications
Repository types
Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
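To make the syntax concrete, the short sketch below lists a few illustrative query strings; the search terms themselves are hypothetical examples, not queries taken from this result list.

```python
# Illustrative query strings for the search syntax described above.
# The search terms are hypothetical examples, not queries from this page.
example_queries = [
    'genom*',                            # * wildcard: genome, genomics, ...
    '"protein sequence"',                # quoted phrase search
    'earthquake +hazard',                # + AND (the default)
    'ontology | taxonomy',               # | OR
    'plants -fossil',                    # - NOT
    '(seismology | geodesy) +archive',   # parentheses set priority
    'protien~1',                         # ~N after a word: edit distance 1
    '"gene expression data"~2',          # ~N after a phrase: slop of 2
]

for query in example_queries:
    print(query)
```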
Found 149 result(s)
The UniProt Knowledgebase (UniProtKB) is the central hub for the collection of functional information on proteins, with accurate, consistent and rich annotation. In addition to capturing the core data mandatory for each UniProtKB entry (mainly the amino acid sequence, protein name or description, taxonomic data and citation information), as much annotation information as possible is added. This includes widely accepted biological ontologies, classifications and cross-references, and clear indications of the quality of annotation in the form of evidence attribution of experimental and computational data. The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc). The UniProt Metagenomic and Environmental Sequences (UniMES) database is a repository specifically developed for metagenomic and environmental data. The UniProt Knowledgebase is an expertly and richly curated protein database, consisting of two sections called UniProtKB/Swiss-Prot and UniProtKB/TrEMBL.
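UniProtKB can also be queried programmatically. The sketch below uses the public REST interface at rest.uniprot.org; the query string and the response fields picked out here are illustrative assumptions rather than a prescribed workflow.

```python
import requests

# Minimal sketch: query UniProtKB via its public REST interface.
# The query string and the fields read from the response are illustrative.
url = "https://rest.uniprot.org/uniprotkb/search"
params = {
    "query": "insulin AND organism_id:9606",  # hypothetical example query
    "format": "json",
    "size": 5,
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for entry in response.json().get("results", []):
    accession = entry.get("primaryAccession")
    protein_name = (
        entry.get("proteinDescription", {})
        .get("recommendedName", {})
        .get("fullName", {})
        .get("value")
    )
    print(accession, protein_name)
```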
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts – which were formerly sent based only on event magnitude and location, or population exposure to shaking – now will also be generated based on the estimated range of fatalities and economic losses. PAGER uses these earthquake parameters to calculate estimates of ground shaking by using the methodology and software developed for ShakeMaps. ShakeMap sites provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. These maps are used by federal, state, and local organizations, both public and private, for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning.
The Museum is committed to open access and open science, and has launched the Data Portal to make its research and collections datasets available online. It allows anyone to explore, download and reuse the data for their own research. Our natural history collection is one of the most important in the world, documenting 4.5 billion years of life, the Earth and the solar system. Almost all animal, plant, mineral and fossil groups are represented. These datasets will continue to grow: under the Museum's ambitious digital collections programme we aim to have 20 million specimens digitised in the next five years.
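For programmatic access, the sketch below assumes the Data Portal exposes the standard CKAN action API; the base URL, search term, and response fields are assumptions made for illustration.

```python
import requests

# Sketch of a programmatic dataset search, assuming the Data Portal exposes
# the standard CKAN action API. The base URL and query term are assumptions.
base_url = "https://data.nhm.ac.uk/api/3/action/package_search"
params = {"q": "botany", "rows": 5}  # hypothetical search term

response = requests.get(base_url, params=params, timeout=30)
response.raise_for_status()

result = response.json()["result"]
print("datasets found:", result["count"])
for dataset in result["results"]:
    print(dataset["name"], "-", dataset.get("title", ""))
```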
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
Public Opinion in the European Union. The Standard Eurobarometer was established in 1973; since then, the European Commission has been monitoring the evolution of public opinion in the Member States, thus helping the preparation of texts, decision-making and the evaluation of its work. Our surveys and studies address major topics concerning European citizenship: enlargement, social situation, health, culture, information technology, environment, the Euro, defence, etc. Each survey consists of approximately 1000 face-to-face interviews per country, and reports are published twice yearly. Special Eurobarometer reports are based on in-depth thematic studies carried out for various services of the European Commission or other EU Institutions and integrated in the Standard Eurobarometer's polling waves. Flash Eurobarometers are ad hoc thematic telephone interviews conducted at the request of any service of the European Commission; they enable the Commission to obtain results relatively quickly and to focus on specific target groups, as and when required. The qualitative studies investigate in depth the motivations, feelings and reactions of selected social groups towards a given subject or concept, by listening to and analysing their way of expressing themselves in discussion groups or with non-directive interviews. For all of these surveys, reproduction is authorised, except for commercial purposes, provided the source is acknowledged.
Project Achilles is a systematic effort aimed at identifying and cataloging genetic vulnerabilities across hundreds of genomically characterized cancer cell lines. The project uses genome-wide genetic perturbation reagents (shRNAs or Cas9/sgRNAs) to silence or knock-out individual genes and identify those genes that affect cell survival. Large-scale functional screening of cancer cell lines provides a complementary approach to those studies that aim to characterize the molecular alterations (e.g. mutations, copy number alterations) of primary tumors, such as The Cancer Genome Atlas (TCGA). The overall goal of the project is to identify cancer genetic dependencies and link them to molecular characteristics in order to prioritize targets for therapeutic development and identify the patient population that might benefit from such targets. Project Achilles data is hosted on the Cancer Dependency Map Portal (DepMap) where it has been harmonized with our genomics and cellular models data. You can access the latest and all past datasets here: https://depmap.org/portal/download/all/
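Once a release file has been downloaded from the DepMap portal, it can be loaded for local analysis. The sketch below assumes a gene-effect matrix saved as a CSV with cell lines as rows and genes as columns; the file name and layout are assumptions about a typical release, not a documented interface.

```python
import pandas as pd

# Sketch: load a Project Achilles gene-effect matrix downloaded from the
# DepMap portal (https://depmap.org/portal/download/all/).
# The file name and the cell-line-by-gene layout are assumptions.
gene_effect = pd.read_csv("CRISPRGeneEffect.csv", index_col=0)

print(gene_effect.shape)          # (cell lines, genes)
print(gene_effect.iloc[:5, :5])   # peek at the first few rows and columns

# Example: rank genes by mean dependency score across cell lines
# (more negative scores indicate stronger dependencies).
print(gene_effect.mean(axis=0).sort_values().head(10))
```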
In addition to the common documentation methods of cylinder seals by rolled impression and photography, this collection also offers 3D models and digital impressions. The 3D scans can be performed without impacting the objects, thus reducing the risk of damage. This method allows even the most fragile of seals to be documented, including those too delicate to be used for a rolled impression. These scans offer a true-to-scale reproduction of the seals.
CODEX is a database of NGS mouse and human experiments. Although the main focus of CODEX is haematopoietic and embryonic systems, the database includes a large variety of cell types. In addition to the publicly available data, CODEX also includes a private site hosting non-published data. CODEX provides access to processed and curated NGS experiments. To use CODEX: (i) select a specialized repository (HAEMCODE or ESCODE) or choose the whole compendium (CODEX), then (ii) filter by organism and (iii) choose how to explore the database.
BioPortal is an open repository of biomedical ontologies, a service that provides access to those ontologies, and a set of tools for working with them. BioPortal provides a wide range of such tools, either directly via the BioPortal web site or through the BioPortal web service REST API. BioPortal also includes community features for adding notes, reviews, and even mappings to specific ontologies. BioPortal has four major product components: the web application; the API services; widgets, or applets, that can be installed on your own site; and a Virtual Appliance version that is available for download or as an Amazon Web Services machine image (AMI). There is also a beta-release SPARQL endpoint.
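As a minimal illustration of the REST API, the sketch below performs a term search against data.bioontology.org. An API key from a BioPortal account is required, and the query term and response fields used here are assumptions for illustration.

```python
import requests

# Minimal sketch of a term search against the BioPortal REST API.
# An API key from a BioPortal account is required; the query term and the
# response fields read below are assumptions for illustration.
API_KEY = "YOUR_BIOPORTAL_API_KEY"  # placeholder
url = "https://data.bioontology.org/search"
params = {"q": "melanoma", "apikey": API_KEY, "pagesize": 5}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for match in response.json().get("collection", []):
    print(match.get("prefLabel"), "-", match.get("@id"))
```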
The Northern Ontario Plant Database (NOPD) is a website that provides free public access to records of herbarium specimens housed in northern Ontario educational and government institutions. A herbarium is an archival collection of plants that have been pressed, dried, mounted, and labelled. It also provides up-to-date and accurate information on the flora of northern Ontario.
SilkDB is a database of integrated genome resources for the silkworm, Bombyx mori. This database provides access not only to genomic data, including functional annotation of genes, gene products and chromosomal mapping, but also to extensive biological information such as microarray expression data, ESTs and corresponding references. SilkDB will be useful for the silkworm research community as well as for comparative genomics studies.
The "Flora of Bavaria" initiative with its data portal (14 million occurrence data) and Wiki representation is primarily a citizen science project. Efforts to describe and monitor the flora of Bavaria have been ongoing for 100 years. The goal of these efforts is to record all vascular plants, including newcomers, and to document threatened or former local occurrences. Being geographically largest state of Germany with a broad range of habitats, Bavaria has a special responsibility for documenting and maintaining its plant diversity . About 85% of all German vascular plant species occur in Bavaria, and in addition it has about 50 endemic taxa, only known from Bavaria (most of them occur in the Alps). The Wiki is collaboration of volunteers and local and regional Bavarian botanical societies. Everybody is welcome to contribute, especially with photos or reports of local changes in the flora. The Flora of Bavaria project is providing access to a research data repository for occurrence data powered by the Diversity Workbench database framework.
Explore, search, and download data and metadata from your experiments and from public Open Data. The ESRF data repository is intended to store and archive data from photon science experiments done at the ESRF, and to store digital material such as documents and scientific results which need a DOI and long-term preservation. Data are made public after an embargo period of at most 3 years.
A US National Science Foundation (NSF) facility supporting drilling and coring at continental locations worldwide. Holdings include drill core metadata and data, borehole survey data, geophysical site survey data, drilling metadata, and software code. The CSD Facility offers repositories with samples, data, publications and reference collections from scientific drilling and coring.
This database contains references to publications that include numerical data, general information, comments, and reviews on atomic line broadening and shifts, and is part of the collection of the NIST Atomic Spectroscopy Data Center https://www.nist.gov/pml/quantum-measurement/atomic-spectroscopy/atomic-spectroscopy-data-center-contacts.
CDAAC is responsible for processing the science data received from COSMIC. Data are processed shortly after receipt: approximately eighty percent of radio occultation profiles are delivered to operational weather centers within 3 hours of observation, and a more accurate post-processed product follows within 8 weeks of observation.
The Southern California Earthquake Data Center (SCEDC) operates at the Seismological Laboratory at Caltech and is the primary archive of seismological data for southern California. The 1932-to-present Caltech/USGS catalog maintained by the SCEDC is the most complete archive of seismic data for any region in the United States. Our mission is to maintain an easily accessible, well-organized, high-quality, searchable archive for research in seismology and earthquake engineering.
The Environmental Change Network is the UK’s long-term environmental monitoring and research (LTER) programme. We make regular measurements of plant and animal communities and their physical and chemical environment. Our long-term datasets are used to increase understanding of the effects of climate change, air pollution and other environmental pressures on UK ecosystems.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
Neuroimaging Tools and Resources Collaboratory (NITRC) is currently a free one-stop-shop environment for science researchers that need resources such as neuroimaging analysis software, publicly available data sets, and computing power. Since its debut in 2007, NITRC has helped the neuroscience community to use software and data produced from research that, before NITRC, was routinely lost or disregarded, to make further discoveries. NITRC provides free access to data and enables pay-per-use cloud-based access to unlimited computing power, enabling worldwide scientific collaboration with minimal startup and cost. With NITRC and its components—the Resources Registry (NITRC-R), Image Repository (NITRC-IR), and Computational Environment (NITRC-CE)—a researcher can obtain pilot or proof-of-concept data to validate a hypothesis for a few dollars.
PharmGKB is a comprehensive resource that curates knowledge about the impact of genetic variation on drug response for clinicians and researchers. PharmGKB brings together the relevant data in a single place and adds value by combining disparate data on the same relationship, making it easier to search and to view the key aspects, and by interpreting the data. PharmGKB provides clinical interpretations of these data, curated pathways and VIP summaries which are not found elsewhere.
Kinsources is an open and interactive platform to archive, share, analyze and compare kinship data used in scientific research. Kinsources is not just another genealogy website, but a peer-reviewed repository designed for comparative and collaborative research. The aim of Kinsources is to provide kinship studies with a large and solid empirical base. Kinsources combines the functionality of a communal data repository with a toolbox providing researchers with advanced software for analyzing kinship data. The software Puck (Program for the Use and Computation of Kinship data) is integrated into the statistical package and the search engine of the Kinsources website. Kinsources is part of a research perspective that seeks to understand the interaction between genealogy, terminology and space in the emergence of kinship structures. Hosted by the TGIR HumaNum, the platform ensures both the security of, and free access to, scientific data validated by the research community.