Search syntax (illustrated in the sketch below):
  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
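To show how these operators combine, here is a minimal sketch of building query strings; the endpoint URL and parameter name are hypothetical placeholders, not this site's documented API.

import requests

# Example query strings using the operators listed above.
queries = [
    'earthquake*',                  # wildcard: earthquake, earthquakes, ...
    '"climate data"',               # exact phrase
    'genome + human',               # AND (the default)
    'seismic | volcanic',           # OR
    'cancer - imaging',             # NOT
    '(protein | gene) + database',  # parentheses control precedence
    'genomcs~2',                    # fuzzy word match within edit distance 2
    '"data archive"~3',             # phrase match allowing a slop of 3
]

for q in queries:
    # Hypothetical endpoint and parameter; replace with the real search URL.
    r = requests.get("https://example.org/api/search", params={"q": q})
    print(q, "->", r.status_code)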
Found 19 result(s)
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center for multiple types of digital data relating primarily to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
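The NCEDC distributes waveform data through standard FDSN web services (an assumption worth verifying against its documentation), so access can be sketched with ObsPy, which registers an "NCEDC" node; the network, station, and channel codes below are illustrative examples only.

from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# ObsPy ships a URL mapping for the NCEDC FDSN services.
client = Client("NCEDC")

# Illustrative request: one minute of broadband data from a Berkeley
# Digital Seismic Network station (codes are examples, not prescriptions).
t = UTCDateTime("2020-01-01T00:00:00")
stream = client.get_waveforms(network="BK", station="CMB", location="00",
                              channel="BHZ", starttime=t, endtime=t + 60)
print(stream)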
The Data Catalogue is a service that allows University of Liverpool researchers to create records of information about their finalised research data and save those data in a secure online environment. The Data Catalogue provides a good means of making that data available in a structured way, in a form that can be discovered by both general search engines and academic search tools. There are two types of record that can be created in the Data Catalogue:
  • A discovery-only record – the research data may be held somewhere else, but a record is created that alerts users to the existence of the data and provides a link to where those data are held.
  • A discovery and data record – a record is created to help people discover that the data exist, and the data themselves are deposited into the Data Catalogue. This process creates a unique Digital Object Identifier (DOI) which can be used in citations to the data.
Galaxies, made up of billions of stars like our Sun, are the beacons that light up the structure of even the most distant regions in space. Not all galaxies are alike, however. They come in very different shapes and have very different properties; they may be large or small, old or young, red or blue, regular or confused, luminous or faint, dusty or gas-poor, rotating or static, round or disky, and they live either in splendid isolation or in clusters. In other words, the universe contains a very colourful and diverse zoo of galaxies. For almost a century, astronomers have been discussing how galaxies should be classified and how they relate to each other in an attempt to attack the big question of how galaxies form. Galaxy Zoo (Lintott et al. 2008, 2011) pioneered a novel method for performing large-scale visual classifications of survey datasets. This webpage allows anyone to download the resulting GZ classifications of galaxies in the project.
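Once a classification table has been downloaded from the project page, it can be explored with ordinary tabular tools; in this sketch the filename and column name are hypothetical placeholders for whatever the chosen Galaxy Zoo data release actually uses.

import pandas as pd

# Load a downloaded Galaxy Zoo classification table (hypothetical filename).
df = pd.read_csv("gz_classifications.csv")

# Hypothetical column: vote fraction for the "spiral" classification.
spirals = df[df["p_spiral"] > 0.8]
print(f"{len(spirals)} galaxies classified as spirals with high confidence")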
The Archaeological Map of the Czech Republic (AMCR) is a repository designed for information on archaeological investigations, sites and finds, operated by the Archaeological Institutes of the CAS in Prague and Brno. The archives of these institutions contain documentation of archaeological fieldwork on the territory of the Czech Republic from 1919 to the present day, and they continue to enrich their collections. The AMCR database and related documents form the largest collection of archaeological data concerning the Czech Republic and are therefore an important part of our cultural heritage. The AMCR digital archive contains various types of records - individual archaeological documents (texts, field photographs, aerial photographs, maps and plans, digital data), projects, fieldwork events, archaeological sites, records of individual finds and a library of 3D models. Data and descriptive information are continuously taken from the AMCR and presented in the AMCR Digital Archive interface.
The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc); the UniProt Metagenomic and Environmental Sequences (UniMES) database is a repository specifically developed for metagenomic and environmental data. The UniProt Knowledgebase (UniProtKB) is the central hub for the collection of functional information on proteins, with accurate, consistent and rich annotation. In addition to capturing the core data mandatory for each UniProtKB entry (mainly the amino acid sequence, protein name or description, taxonomic data and citation information), as much annotation information as possible is added. This includes widely accepted biological ontologies, classifications and cross-references, and clear indications of the quality of annotation in the form of evidence attribution of experimental and computational data. UniProtKB is an expertly and richly curated protein database, consisting of two sections called UniProtKB/Swiss-Prot and UniProtKB/TrEMBL.
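UniProtKB entries can also be retrieved programmatically. The sketch below uses the REST search endpoint at rest.uniprot.org with an illustrative query; treat the exact query and field names as assumptions to check against the current API documentation.

import requests

# Search UniProtKB for human insulin-related entries (illustrative query).
params = {
    "query": "insulin AND organism_id:9606",
    "format": "json",
    "size": 5,
    "fields": "accession,protein_name,length",
}
r = requests.get("https://rest.uniprot.org/uniprotkb/search", params=params)
r.raise_for_status()
for entry in r.json()["results"]:
    print(entry["primaryAccession"])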
BioPortal is an open repository of biomedical ontologies, a service that provides access to those ontologies, and a set of tools for working with them. BioPortal provides a wide range of such tools, either directly via the BioPortal web site or using the BioPortal REST API. BioPortal also includes community features for adding notes, reviews, and even mappings to specific ontologies. BioPortal has four major product components: the web application; the API services; widgets, or applets, that can be installed on your own site; and a Virtual Appliance version that is available for download or through an Amazon Web Services machine image (AMI). There is also a beta release SPARQL endpoint.
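As an example of the REST API mentioned above, this sketch lists ontologies from data.bioontology.org; a free API key is required, and the placeholder below must be replaced with a real one.

import requests

API_KEY = "YOUR_BIOPORTAL_API_KEY"  # placeholder; obtained via registration

# List ontologies through the BioPortal REST API.
r = requests.get(
    "https://data.bioontology.org/ontologies",
    headers={"Authorization": f"apikey token={API_KEY}"},
)
r.raise_for_status()
for ontology in r.json()[:5]:
    print(ontology["acronym"], "-", ontology["name"])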
SilkDB is a database of the integrated genome resource for the silkworm, Bombyx mori. This database provides access not only to genomic data, including functional annotation of genes, gene products and chromosomal mapping, but also to extensive biological information such as microarray expression data, ESTs and corresponding references. SilkDB will be useful for the silkworm research community as well as for comparative genomics.
The "Flora of Bavaria" initiative with its data portal (14 million occurrence data) and Wiki representation is primarily a citizen science project. Efforts to describe and monitor the flora of Bavaria have been ongoing for 100 years. The goal of these efforts is to record all vascular plants, including newcomers, and to document threatened or former local occurrences. Being geographically largest state of Germany with a broad range of habitats, Bavaria has a special responsibility for documenting and maintaining its plant diversity . About 85% of all German vascular plant species occur in Bavaria, and in addition it has about 50 endemic taxa, only known from Bavaria (most of them occur in the Alps). The Wiki is collaboration of volunteers and local and regional Bavarian botanical societies. Everybody is welcome to contribute, especially with photos or reports of local changes in the flora. The Flora of Bavaria project is providing access to a research data repository for occurrence data powered by the Diversity Workbench database framework.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
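HDX is built on the CKAN platform, so its catalogue can be searched through the standard CKAN action API; the query below is illustrative, and field availability should be checked against HDX's own documentation.

import requests

# Search HDX datasets via the CKAN package_search action (illustrative query).
r = requests.get(
    "https://data.humdata.org/api/3/action/package_search",
    params={"q": "earthquake", "rows": 5},
)
r.raise_for_status()
for dataset in r.json()["result"]["results"]:
    print(dataset["title"])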
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years, and it has come to the end of its useful life.

The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs.

In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility either to use ready-made workflows or to create your own. BioVeL workflows are stored in MyExperiment - BioVeL Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
TCIA is a service which de-identifies and hosts a large archive of medical images of cancer accessible for public download. The data are organized as "collections": typically, patients' imaging related by a common disease (e.g. lung cancer), image modality or type (MRI, CT, digital histopathology, etc.), or research focus. Supporting data related to the images, such as patient outcomes, treatment details, genomics and expert analyses, are also provided when available.
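TCIA also offers a programmatic query interface alongside public download. The sketch below lists the available collections, assuming the v4 REST endpoint path; verify the path and response fields against TCIA's current API documentation.

import requests

# List TCIA collections; the endpoint version and path are assumptions.
BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"
r = requests.get(f"{BASE}/getCollectionValues", params={"format": "json"})
r.raise_for_status()
for collection in r.json()[:5]:
    print(collection["Collection"])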
The North American Volcanic and Intrusive Rock Database (NAVDAT) is intended as a web-accessible repository for age, chemical and isotopic data from Mesozoic and younger igneous rocks in western North America. Please note: Although this site is fully functional, the content of NAVDAT has been static since 2014 and will not be updated until further notice. All data are still available via the EarthChem Portal: http://portal.earthchem.org/
Supported by the DFG, the "MO|RE data" project is building a public e-research infrastructure for motor research data (funded until 2016). It focuses on selected standardized motor tests of wide acceptance. Furthermore, MO|RE data provides quality control and publishes accompanying material for motor tests.
The CliSAP-Integrated Climate Data Center (ICDC) allows easy access to climate-relevant data from satellite remote sensing, in situ and other measurements in Earth System Sciences. These data are important to determine the status of and the changes in the climate system. Additionally, some relevant re-analysis data are included, which are modeled on the basis of observational data. ICDC cooperates with the "Zentrum für Nachhaltiges Forschungsdatenmanagement" (https://www.fdr.uni-hamburg.de/) to publish observational data with a DOI.
MalaCards is an integrated database of human maladies and their annotations, modeled on the architecture and richness of the popular GeneCards database of human genes. MalaCards mines and merges varied web data sources to generate a computerized web card for each human disease. Each MalaCard contains disease-specific, prioritized annotative information, as well as links between associated diseases, leveraging the GeneCards relational database, search engine, and GeneDecks set-distillation tool. As proofs of concept of the search/distill/infer pipeline, the developers report expected elucidations as well as potentially novel ones.