
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
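These operators can be combined in a single query. As a sketch, here are some illustrative query strings for each operator; the search terms themselves are made up for illustration, and the syntax is assumed to follow the Elasticsearch-style simple query conventions the list above describes:

```python
# Hypothetical example queries for the search syntax described above.
queries = {
    "wildcard": "climat*",                     # matches climate, climatology, ...
    "phrase": '"sea ice"',                     # exact phrase match
    "and": "arctic + ocean",                   # both terms required (AND is the default)
    "or": "pollen | diatoms",                  # either term matches
    "not": "protein -structure",               # exclude documents containing "structure"
    "grouped": "(arctic | antarctic) + data",  # parentheses control precedence
    "fuzzy": "genomcs~1",                      # edit distance 1: also matches "genomics"
    "proximity": '"data repository"~2',        # phrase slop: up to 2 intervening words
}

for name, q in queries.items():
    print(f"{name:>9}: {q}")
```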
Found 10 result(s)
The UniProt Knowledgebase (UniProtKB) is the central hub for the collection of functional information on proteins, with accurate, consistent and rich annotation. In addition to capturing the core data mandatory for each UniProtKB entry (mainly, the amino acid sequence, protein name or description, taxonomic data and citation information), as much annotation information as possible is added. This includes widely accepted biological ontologies, classifications and cross-references, and clear indications of the quality of annotation in the form of evidence attribution of experimental and computational data. The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc). The UniProt Metagenomic and Environmental Sequences (UniMES) database is a repository specifically developed for metagenomic and environmental data. The UniProt Knowledgebase is an expertly and richly curated protein database, consisting of two sections called UniProtKB/Swiss-Prot and UniProtKB/TrEMBL.
Note: As of March 28, 2016, the 'NSF Arctic Data Center' serves as the current repository for NSF-funded Arctic data. The ACADIS Gateway (http://www.aoncadis.org) is no longer accepting data submissions. All data and metadata in the ACADIS system have been transferred to the NSF Arctic Data Center system; there is no need to resubmit existing data. ACADIS is a repository for Arctic research data that provides data archival, preservation and access for all projects funded by NSF's Arctic Science Program (ARC). Data include long-term observational time series and local, regional, and system-scale research from many diverse domains. The Advanced Cooperative Arctic Data and Information Service (ACADIS) program includes data management services.
Open access repository for digital research created at the University of Minnesota. U of M researchers may deposit data to the Libraries’ Data Repository for U of M (DRUM), subject to our collection policies. All data is publicly accessible. Data sets submitted to the Data Repository are reviewed by data curation staff to ensure that data is in a format and structure that best facilitates long-term access, discovery, and reuse.
PharmGKB is a comprehensive resource that curates knowledge about the impact of genetic variation on drug response for clinicians and researchers. PharmGKB brings together the relevant data in a single place and adds value by combining disparate data on the same relationship, making it easier to search, view the key aspects, and interpret the data. PharmGKB provides clinical interpretations of these data, curated pathways and VIP summaries which are not found elsewhere.
VertNet is an NSF-funded collaborative project that makes biodiversity data free and available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Yet, VertNet is still the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
ScholarsArchive@OSU is Oregon State University's digital service for gathering, indexing, making available and storing the scholarly work of the Oregon State University community. It also includes materials from outside the institution in support of the university's land, sun, sea and space grant missions and other research interests.
The OpenNeuro project (formerly known as the OpenfMRI project) was established in 2010 to provide a resource for researchers interested in making their neuroimaging data openly available to the research community. It is managed by Russ Poldrack and Chris Gorgolewski of the Center for Reproducible Neuroscience at Stanford University. The project has been developed with funding from the National Science Foundation, the National Institute on Drug Abuse, and the Laura and John Arnold Foundation.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
Neotoma is a multiproxy paleoecological database that covers the Pliocene-Quaternary, including modern microfossil samples. The database is an international collaborative effort among individuals from 19 institutions, representing multiple constituent databases. There are over 20 data-types within the Neotoma Paleoecological Database, including pollen microfossils, plant macrofossils, vertebrate fauna, diatoms, charcoal, biomarkers, ostracodes, physical sedimentology and water chemistry. Neotoma provides an underlying cyberinfrastructure that enables the development of common software tools for data ingest, discovery, display, analysis, and distribution, while giving domain scientists control over critical taxonomic and other data quality issues.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.