Filter
Reset all

Subjects

Content Types

Countries

AID systems

API

Certificates

Data access

Data access restrictions

Database access

Database access restrictions

Database licenses

Data licenses

Data upload

Data upload restrictions

Enhanced publication

Institution responsibility type

Institution type

Keywords

Metadata standards

PID systems

Provider types

Quality management

Repository languages

Software

Syndications

Repository types

Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
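The operators above follow a Lucene-style query-string syntax. As a rough sketch of the boolean and wildcard semantics only (not the site's actual search implementation; the function names and the simplified word-level matching are illustrative assumptions):

```python
import re


def matches(text, all_of=(), any_of=(), none_of=()):
    # Toy illustration of the +, | and - operators:
    # every term in all_of must appear, at least one term in
    # any_of must appear, and no term in none_of may appear.
    words = set(re.findall(r"\w+", text.lower()))
    return (all(w in words for w in all_of)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in none_of))


def wildcard(term, text):
    # Toy illustration of trailing-* wildcard matching:
    # "genom*" matches any token beginning with "genom".
    pattern = re.escape(term).replace(r"\*", r"\w*")
    return re.search(rf"\b{pattern}\b", text, re.IGNORECASE) is not None


doc = "Chempound is a repository for chemical data"
assert matches(doc, all_of=("repository", "chemical"))   # + (AND)
assert matches(doc, any_of=("genome", "chemical"))       # | (OR)
assert not matches(doc, none_of=("chemical",))           # - (NOT)
assert wildcard("chem*", doc)                            # wildcard
```

Real query-string engines also handle phrase quoting, fuzziness (~N on a word), and slop (~N on a phrase), which this word-set sketch does not attempt.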
Found 27 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries, and linked data. It has been developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples the repository holds more than 50,000 entries, which can be searched through SPARQL endpoints and pre-indexed key fields. The Chempound architecture is general and adaptable to other fields of data-rich science.
M-CSA (Mechanism and Catalytic Site Atlas) is a database of enzyme reaction mechanisms. It provides annotation on the protein, catalytic residues, cofactors, and the reaction mechanisms of hundreds of enzymes. There are two kinds of entries in M-CSA. 'Detailed mechanism' entries are more complete and show the individual chemical steps of the mechanism as schemes with electron-flow arrows. 'Catalytic Site' entries annotate the catalytic residues necessary for the reaction but do not show the mechanism. M-CSA represents a unified resource that combines the data from both MACiE and the CSA.
MozAtlas provides gene expression data of adult male and female mosquitoes as tables, expressions, trees and models. MozAtlas also provides sequence orthology relationships with data provided by FlyBase, Vectorbase, Beetlebase, BeeBase, and WormBase.
The Office for National Statistics (ONS) is the UK’s largest independent producer of official statistics and is the recognised national statistical institute for the UK. It is responsible for collecting and publishing statistics related to the economy, population and society at national, regional and local levels. It also conducts the census in England and Wales every ten years. The ONS plays a leading role in national and international good practice in the production of official statistics. It is the executive office of the UK Statistics Authority and although they are separate, they are still closely related.
GOBASE is a taxonomically broad organelle genome database that organizes and integrates diverse data related to mitochondria and chloroplasts. GOBASE is currently expanding to include information on representative bacteria that are thought to be specifically related to the bacterial ancestors of mitochondria and chloroplasts.
Protectedplanet.net combines crowd sourcing and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.
This classic collection of test cases for the validation of turbulence models started as an EU/ERCOFTAC project led by Prof. W. Rodi in 1995 and has been maintained by Dr. T. Craft at Manchester since 1999. Initially limited to experimental data, it now also includes computational results and conclusions drawn from the ERCOFTAC Workshops on Refined Turbulence Modelling (SIG15). At the moment, each case contains at least a brief description, some data to download, and references to published work; some cases contain significantly more information than this.
The British Geological Survey (BGS), the world’s oldest national geological survey, has over 400 datasets including environmental monitoring data, digital databases, physical collections (borehole core, rocks, minerals and fossils), records and archives.
>>>!!!<<< CrystalEye has been integrated into the Crystallography Open Database at http://www.crystallography.net (see http://service.re3data.org/repository/r3d100010213). >>>!!!<<< The Crystallography Open Database now includes data and software from CrystalEye, developed by Nick Day at the Department of Chemistry, University of Cambridge, under the supervision of Peter Murray-Rust. The aim of the CrystalEye project is to aggregate crystallography from web resources and to provide methods to easily browse, search, and keep up to date with the latest published information. At present the project aggregates crystallography from the supplementary data to articles on publisher websites.
-----<<<<< The repository is no longer available. This record is outdated. >>>>>----- The Clean Energy Project Database (CEPDB) is a massive reference database for organic semiconductors with a particular emphasis on photovoltaic applications. It was created to store and provide access to data from computational as well as experimental studies, on both known and virtual compounds. It is a free and open resource designed to support researchers in the field of organic electronics in their scientific pursuits. The CEPDB was established as part of the Harvard Clean Energy Project (CEP), a virtual high-throughput screening initiative to identify promising new candidates for the next generation of carbon-based solar cell materials.
The modENCODE Project, Model Organism ENCyclopedia Of DNA Elements, was initiated by the funding of applications received in response to Requests for Applications (RFAs) HG-06-006, entitled Identification of All Functional Elements in Selected Model Organism Genomes and HG-06-007, entitled A Data Coordination Center for the Model Organism ENCODE Project (modENCODE). The modENCODE Project is being run as an open consortium and welcomes any investigator willing to abide by the criteria for participation that have been established for the project. Both computational and experimental approaches are being applied by modENCODE investigators to study the genomes of D. melanogaster and C. elegans. An added benefit of studying functional elements in model organisms is the ability to biologically validate the elements discovered using methods that cannot be applied in humans. The comprehensive dataset that is expected to result from the modENCODE Project will provide important insights into the biology of D. melanogaster and C. elegans as well as other organisms, including humans.
The Virtual Liver Network (VLN) represents a major research investment by the German Government focusing on work at the “bleeding edge” of Systems Biology and Systems Medicine. This Flagship Programme is tackling one of the major challenges in the life sciences: that is, how to integrate the wealth of data we have acquired post-genome, not just in a mathematical model, but more importantly in a series of models that are linked across scales to represent organ function. As the project is prototyping how to achieve true multi-scale modelling within a single organ and linking this to human physiology, it will be developing tools and protocols that can be applied to other systems, helping to drive forward the application of modelling and simulation to modern medical practice. It is the only programme of its type to our knowledge that bridges investigations from the sub-cellular through to ethically cleared patient and volunteer studies in an integrated workflow. As such, this programme is contributing significantly to the development of a new paradigm in biology and medicine.
!! OFFLINE !! A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. 
The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
mzCloud is an extensively curated database of high-resolution tandem mass spectra that are arranged into spectral trees. MS/MS and multi-stage MSn spectra were acquired at various collision energies, precursor m/z values, and isolation widths using collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD). Each raw mass spectrum was filtered and recalibrated, giving rise to additional filtered and recalibrated spectral trees that are fully searchable. Besides the experimental and processed data, each database record contains the compound name with synonyms, the chemical structure, computationally and manually annotated fragments (peaks), identified adducts and multiply charged ions, molecular formulas, predicted precursor structures, detailed experimental information, peak accuracies, mass resolution, InChI, InChIKey, and other identifiers. mzCloud is a fully searchable library that allows spectrum searches, tree searches, structure and substructure searches, monoisotopic mass searches, peak (m/z) searches, precursor searches, and name searches. mzCloud is free and available for public use online.
The Pfam database is a large collection of protein families, each represented by multiple sequence alignments and hidden Markov models (HMMs).
The UCD Digital Library is a platform for exploring cultural heritage, engaging with digital scholarship, and accessing research data. The UCD Digital Library allows you to search, browse, and explore a growing collection of historical materials, photographs, art, interviews, letters, and other content that has been digitised and made freely available.
The Human Ageing Genomic Resources (HAGR) is a collection of databases and tools designed to help researchers study the genetics of human ageing using modern approaches such as functional genomics, network analyses, systems biology and evolutionary analyses.
>>>!!!<<< The repository is offline >>>!!!<<< On June 1, 1990 the German X-ray observatory ROSAT started its mission to open a new era in X-ray astronomy. Doubtless, this is the most ambitious project realized up to now in the short history of this young astronomical discipline. Equipped with the largest imaging X-ray telescope ever inserted into an earth orbit ROSAT has provided a tremendous amount of new scientific data and insights.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10¹⁰ particles to follow the dark matter distribution in a cubic region 500 h⁻¹ Mpc on a side and has a spatial resolution of 5 h⁻¹ kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10⁷ galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures, for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with a volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
The WDC Geomagnetism, Edinburgh has a comprehensive set of digital geomagnetic data as well as indices of geomagnetic activity supplied from a worldwide network of magnetic observatories. The data and services at the WDC are available for scientific use without restrictions.
ChEMBL is a database of bioactive drug-like small molecules. It contains 2-D structures, calculated properties (e.g. logP, molecular weight, Lipinski parameters) and abstracted bioactivities (e.g. binding constants, pharmacology, and ADMET data). The data are abstracted and curated from the primary scientific literature and cover a significant fraction of the SAR and discovery of modern drugs. We attempt to normalise the bioactivities into a uniform set of end-points and units where possible, and also to tag the links between a molecular target and a published assay with a set of varying confidence levels. Additional data on the clinical progress of compounds is currently being integrated into ChEMBL.