
Search syntax (example queries follow this list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND operation (the default)
  • | represents an OR operation
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
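For example, queries like the following combine these operators (the search terms are illustrative only):

    ocean* +salinity                          keywords starting with "ocean" AND "salinity"
    "gene ontology" | "gene nomenclature"     either exact phrase
    genom* -virus                             wildcard match, excluding "virus"
    salinty~1                                 fuzzy match within edit distance 1 (finds "salinity")
    "ocean temperature"~2                     phrase match allowing a slop of 2 words
    (marine | ocean*) +temperature            parentheses set the precedence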
Found 1187 results.
Museum explorers travel to ocean depths, the peaks of the Andes, Africa's Rift Valley, the rainforests of South America, and the deserts of Central Asia. Perhaps even to a field site or research institution in your own state, territory or country. In each area, researchers collect specimens: fossils, minerals, and rocks, plants and animals, tools and artworks. Collections care professionals have meticulously preserved, labeled, cataloged, and organized items of this kind for more than 150 years. Taken together, the NMNH collections form the largest, most comprehensive natural history collection in the world. By comparing items gathered in different eras and regions, scientists learn how our world has varied across time and space.
A database and knowledgebase of authenticated microbial genomics data with full provenance to the physical materials held within the American Type Culture Collection's (ATCC) biorepository and culture collections. The data include whole-genome sequencing data for bacterial, viral, and fungal strains at ATCC, their genome assemblies, metadata, drug-susceptibility data, and more. All data are freely available for non-commercial, research-use-only (RUO) applications via the web portal interface or via a REST API. The goal is to provide the research community with provenance information and authentication linking the biological source materials to the reference genome assemblies derived from them.
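A minimal sketch of fetching records over the REST API in Python, assuming an issued credential; the base URL, route, and parameter names below are hypothetical placeholders, so check the ATCC Genome Portal documentation for the real interface:

    import requests

    BASE_URL = "https://genomes.atcc.org/api"   # hypothetical base URL
    API_KEY = "YOUR-API-KEY"                    # assumed to be issued on registration

    # Hypothetical route and query parameter; the real API may differ.
    resp = requests.get(
        f"{BASE_URL}/genomes",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"search": "Escherichia coli"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())   # assumed JSON payload of matching genome records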
Academic Commons provides open, persistent access to the scholarship produced by researchers at Columbia University, Barnard College, Jewish Theological Seminary, Teachers College, and Union Theological Seminary. Academic Commons is a program of the Columbia University Libraries. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
SeaBASS is the publicly shared archive of in situ oceanographic and atmospheric data maintained by the NASA Ocean Biology Processing Group (OBPG). High-quality in situ measurements are a prerequisite for satellite data product validation, algorithm development, and many climate-related inquiries. As such, the OBPG maintains a local repository of in situ oceanographic and atmospheric data to support its regular scientific analyses. The SeaWiFS Project originally developed this system, SeaBASS, to catalog the radiometric and phytoplankton pigment data used in its calibration and validation activities. To facilitate the assembly of a global data set, SeaBASS was expanded with oceanographic and atmospheric data collected by participants in the SIMBIOS Program, under NASA Research Announcements NRA-96 and NRA-99, which has aided considerably in minimizing spatial bias and maximizing data acquisition rates. Archived data include measurements of apparent and inherent optical properties, phytoplankton pigment concentrations, and other related oceanographic and atmospheric data, such as water temperature, salinity, stimulated fluorescence, and aerosol optical thickness. Data are collected using a number of different instrument packages from a variety of manufacturers, such as profilers, buoys, and hand-held instruments, deployed on a variety of platforms, including ships and moorings.
The mission of the GO Consortium is to develop a comprehensive, computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world’s largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
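Because the knowledgebase is machine-readable, the ontology can be loaded and traversed programmatically. A sketch using the third-party goatools package (not a GO Consortium product) and the standard go-basic.obo release file:

    from goatools.obo_parser import GODag  # pip install goatools

    # go-basic.obo is the standard release file, available from
    # http://purl.obolibrary.org/obo/go/go-basic.obo
    godag = GODag("go-basic.obo")

    term = godag["GO:0008150"]   # the "biological_process" root term
    print(term.name, term.namespace)

    # Walk one level up the DAG (the root itself has no parents).
    for parent in term.parents:
        print("parent:", parent.id, parent.name)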
OMIM is a comprehensive, authoritative compendium of human genes and genetic phenotypes that is freely available and updated daily. OMIM is authored and edited at the McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, under the direction of Dr. Ada Hamosh. Its official home is omim.org.
Jason is a remote-controlled deep-diving vessel that gives shipboard scientists immediate, real-time access to the sea floor. Instead of making short, expensive dives in a submarine, scientists can stay on deck and guide Jason as deep as 6,500 meters (4 miles) to explore for days on end. Jason is a type of remotely operated vehicle (ROV), a free-swimming vessel connected by a long fiberoptic tether to its research ship. The 10-km (6 mile) tether delivers power and instructions to Jason and fetches data from it.
Seafloor Sediments Data Collection is a collection of more than 14,000 archived marine geological samples recovered from the seafloor. The inventory includes long, stratified sediment cores, as well as rock dredges, surface grabs, and samples collected by the submersible Alvin.
WorldData.AI comes with a built-in workspace – the next-generation hyper-computing platform powered by a library of 3.3 billion curated external trends. WorldData.AI allows you to save your models in its “My Models Trained” section. You can make your models public and share them on social media with interesting images, model features, summary statistics, and feature comparisons. Empower others to leverage your models. For example, if you have discovered a previously unknown impact of interest rates on new-housing demand, you may want to share it through “My Models Trained.” Upload your data and combine it with external trends to build, train, and deploy predictive models with one click! WorldData.AI inspects your raw data, applies feature processors, chooses the best set of algorithms, trains and tunes multiple models, and then ranks model performance.
The World Ocean Database (WOD) is a collection of scientifically quality-controlled ocean profile and plankton data that includes measurements of temperature, salinity, oxygen, phosphate, nitrate, silicate, chlorophyll, alkalinity, pH, pCO2, TCO2, Tritium, Δ13Carbon, Δ14Carbon, Δ18Oxygen, Freon, Helium, Δ3Helium, Neon, and plankton. WOD contains all data of "World Data Service Oceanography" (WDS-Oceanography).
<<<!!!<<< This repository is no longer available. >>>!!!>>> The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development, by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The World Ocean Atlas (WOA) contains objectively analyzed climatological fields of in situ temperature, salinity, oxygen, and other measured variables at standard depth levels for various compositing periods for the world ocean. Regional climatologies were created from the Atlas, providing a set of high resolution mean fields for temperature and salinity. A new version of the WOA is released in conjunction with each major update to the WOD, the largest collection of publicly available, uniformly formatted, quality controlled subsurface ocean profile data in the world.
<<<!!!<<< The demand for high-value environmental data and information has dramatically increased in recent years. To improve our ability to meet that demand, NOAA’s former three data centers—the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which includes the National Coastal Data Development Center—have merged into the National Centers for Environmental Information (NCEI). >>>!!!>>> NOAA's National Climatic Data Center (NCDC) is responsible for preserving, monitoring, assessing, and providing public access to the Nation's treasure of climate and historical weather data and information.
The Global Terrorism Database (GTD) is an open-source database including information on terrorist events around the world from 1970 through 2020 (with annual updates planned for the future). Unlike many other event databases, the GTD includes systematic data on domestic as well as international terrorist incidents that have occurred during this time period and now includes more than 200,000 cases.
The Biological General Repository for Interaction Datasets (BioGRID) is a public database that archives and disseminates genetic and protein interaction data from model organisms and humans. BioGRID is an online interaction repository with data compiled through comprehensive curation efforts. All interaction data are freely provided through our search index and available via download in a wide variety of standardized formats.
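As a sketch of programmatic access, the snippet below queries BioGRID's REST service for interactions involving a pair of gene symbols; the endpoint and parameter names follow the commonly documented interactions API, and the access key (issued free on registration) is a placeholder, so verify the details against the current BioGRID docs:

    import requests

    resp = requests.get(
        "https://webservice.thebiogrid.org/interactions/",
        params={
            "accesskey": "YOUR-ACCESS-KEY",   # issued on registration
            "geneList": "MDM2|TP53",          # pipe-separated gene symbols
            "searchNames": "true",
            "format": "json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    interactions = resp.json()   # dict keyed by interaction ID
    print(len(interactions), "interactions returned")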
The ACE Science Center (ASC) serves to facilitate collaborative work on data from the Advanced Composition Explorer (ACE) spacecraft and to ensure that those data are properly archived and publicly available. The collaborators served are not limited to ACE project-funded investigators.
The Materials Project produces one of the world's foremost databases of computed information about inorganic, crystalline materials, along with powerful web-based apps for analyzing this information to aid the design of novel materials. Access is provided free of charge, under a permissive license, with an API available.
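For instance, the MPRester client shipped with pymatgen can query the database once you have a free API key; the exact client has changed across API generations (a newer mp-api package also exists), so treat this as a sketch:

    from pymatgen.ext.matproj import MPRester  # pip install pymatgen

    # A free API key is issued with a Materials Project account.
    with MPRester("YOUR-API-KEY") as mpr:
        # Fetch the computed crystal structure for silicon (mp-149).
        structure = mpr.get_structure_by_material_id("mp-149")
        print(structure.composition.reduced_formula, structure.lattice.abc)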
NED is a comprehensive database of multiwavelength data for extragalactic objects, providing a systematic, ongoing fusion of information integrated from hundreds of large sky surveys and tens of thousands of research publications. The contents and services span the entire observed spectrum from gamma rays through radio frequencies. As new observations are published, they are cross-identified or statistically associated with previous data and integrated into a unified database to simplify queries and retrieval. Seamless connectivity is also provided to data in NASA astrophysics mission archives (IRSA, HEASARC, MAST), to the astrophysics literature via ADS, and to other data centers around the world.
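One common programmatic route into NED is the community-maintained astroquery package (not a NED product); a minimal sketch, assuming a recent astroquery version:

    from astroquery.ipac.ned import Ned  # pip install astroquery

    # Query NED for a single extragalactic object by name.
    result = Ned.query_object("M 31")   # returns an astropy Table
    print(result)                       # positions, redshift, and more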
<<<!!!<<< This repository is no longer available. >>>!!!>>> TeachingWithData.org is a portal where faculty can find resources and ideas to reduce the challenges of bringing real data into post-secondary classes. It allows faculty to introduce and build students' quantitative reasoning abilities with readily available, user-friendly, data-driven teaching materials.
Stanford Network Analysis Platform (SNAP) is a general-purpose network analysis and graph mining library. It is written in C++ and easily scales to massive networks with hundreds of millions of nodes and billions of edges. It efficiently manipulates large graphs, calculates structural properties, generates regular and random graphs, and supports attributes on nodes and edges. SNAP is also available through NodeXL, a graphical front-end that integrates network analysis into Microsoft Office and Excel. The SNAP library has been actively developed since 2004 and is growing organically as a result of our research pursuits in the analysis of large social and information networks. The largest network we have analyzed so far using the library was the Microsoft Instant Messenger network from 2006, with 240 million nodes and 1.3 billion edges. The datasets available on the website were mostly collected (scraped) for the purposes of our research. The website was launched in July 2009.
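A small sketch using snap.py, the Python interface to the (C++) SNAP library; function names follow the published snap.py API:

    import snap  # pip install snap-stanford

    # Generate a random directed Erdos-Renyi graph: 1000 nodes, 5000 edges.
    graph = snap.GenRndGnm(snap.PNGraph, 1000, 5000)

    # Structural properties: average clustering coefficient and an
    # approximate diameter estimated from 100 BFS start nodes.
    clust = snap.GetClustCf(graph, -1)
    diam = snap.GetBfsFullDiam(graph, 100)
    print(f"clustering coefficient={clust:.4f}, diameter={diam}")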
The HUGO Gene Nomenclature Committee (HGNC) has assigned unique gene symbols and names to over 35,000 human loci, of which around 19,000 are protein-coding. This curated online repository of HGNC-approved gene nomenclature and associated resources includes links to genomic, proteomic, and phenotypic information, as well as dedicated gene family pages.
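HGNC records can also be retrieved programmatically; the sketch below assumes the public REST service at rest.genenames.org, so verify the route and field names against the current HGNC documentation:

    import requests

    # Fetch the HGNC record for an approved gene symbol.
    resp = requests.get(
        "https://rest.genenames.org/fetch/symbol/BRCA1",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    docs = resp.json()["response"]["docs"]   # Solr-style response envelope
    for doc in docs:
        print(doc["hgnc_id"], doc["symbol"], doc["name"])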
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter, and library information. To improve the efficiency of submission for this type of data, we have designed a special streamlined procedure and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. What these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that little information can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads that make up an assembly should be submitted to dbEST, the Trace Archive, or the Short Read Archive (SRA) prior to the submission of the assemblies.
The Gene database provides detailed information for known and predicted genes defined by nucleotide sequence or map position. Gene supplies gene-specific connections in the nexus of map, sequence, expression, structure, function, citation, and homology data. Unique identifiers are assigned to genes with defining sequences, genes with known map positions, and genes inferred from phenotypic information. These gene identifiers are used throughout NCBI's databases and tracked through updates of annotation. Gene includes genomes represented by NCBI Reference Sequences (RefSeqs) and is integrated for indexing, query, and retrieval through NCBI's Entrez and E-utilities systems.
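Because Gene is exposed through Entrez and the E-utilities, records can be fetched over plain HTTP; a minimal sketch (the gene symbol is illustrative):

    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    # 1. esearch: find the Gene UID for a symbol in a given organism.
    search = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "gene", "term": "BRCA1[gene] AND human[orgn]",
                "retmode": "json"},
        timeout=30,
    ).json()
    uid = search["esearchresult"]["idlist"][0]

    # 2. esummary: retrieve the summary record for that UID.
    summary = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "gene", "id": uid, "retmode": "json"},
        timeout=30,
    ).json()
    print(summary["result"][uid]["description"])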
We are a leading international centre for genomics and bioinformatics research. Our mandate is to advance knowledge about cancer and other diseases, to improve human health through disease prevention, diagnosis and therapeutic approaches, and to realize the social and economic benefits of genomics research.