Filter options: Subjects, Content Types, Countries, AID systems, API, Data access, Data access restrictions, Database access, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning.

Search syntax:

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
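
For example, a few illustrative queries (the search terms below are hypothetical placeholders, chosen only to demonstrate the operators listed above):
  • earthquake* finds "earthquake", "earthquakes", and other words beginning with "earthquake"
  • "core repository" finds that exact phrase
  • biodiversity + marine requires both terms (the default behaviour)
  • seismic | geodetic matches records containing either term
  • drilling -ocean excludes records containing "ocean"
  • (core | drilling) + ocean uses parentheses to control precedence
  • genomcs~1 tolerates one character edit, so it can match "genomics"
  • "ocean program"~2 matches the phrase with up to two words in between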
Found 12 result(s)
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
The Data Catalogue is a service that allows University of Liverpool researchers to create records of information about their finalised research data and to save those data in a secure online environment. It makes the data available in a structured way, in a form that can be discovered by both general search engines and academic search tools. Two types of record can be created in the Data Catalogue. A discovery-only record: the research data may be held somewhere else, and the record alerts users to the existence of the data and provides a link to where those data are held. A discovery and data record: the record helps people discover that the data exist, and the data themselves are deposited into the Data Catalogue; this process creates a unique Digital Object Identifier (DOI) which can be used in citations to the data.
Galaxies, made up of billions of stars like our Sun, are the beacons that light up the structure of even the most distant regions in space. Not all galaxies are alike, however. They come in very different shapes and have very different properties; they may be large or small, old or young, red or blue, regular or confused, luminous or faint, dusty or gas-poor, rotating or static, round or disky, and they live either in splendid isolation or in clusters. In other words, the universe contains a very colourful and diverse zoo of galaxies. For almost a century, astronomers have been discussing how galaxies should be classified and how they relate to each other in an attempt to attack the big question of how galaxies form. Galaxy Zoo (Lintott et al. 2008, 2011) pioneered a novel method for performing large-scale visual classifications of survey datasets. This webpage allows anyone to download the resulting GZ classifications of galaxies in the project.
The Archaeological Map of the Czech Republic (AMCR) is a repository designed for information on archaeological investigations, sites and finds, operated by the Archaeological Institutes of the CAS in Prague and Brno. The archives of these institutions contain documentation of archaeological fieldwork on the territory of the Czech Republic from 1919 to the present day, and they continue to enrich their collections. The AMCR database and related documents form the largest collection of archaeological data concerning the Czech Republic and are therefore an important part of our cultural heritage. The AMCR digital archive contains various types of records: individual archaeological documents (texts, field photographs, aerial photographs, maps and plans, digital data), projects, fieldwork events, archaeological sites, records of individual finds and a library of 3D models. Data and descriptive information are continuously taken from the AMCR and presented in the AMCR Digital Archive interface.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
<<<!!!<<< This record is merged into Continental Scientific Drilling Facility https://www.re3data.org/repository/r3d100012874 >>>!!!>>> LacCore curates cores and samples from continental coring and drilling expeditions around the world, and also archives metadata and contact information for cores stored at other institutions.
Geochron is a global database that hosts geochronologic and thermochronologic information from detrital minerals. Information included with each sample consists of a table with the essential isotopic information and ages, a table with basic geologic metadata (e.g., location, collector, publication, etc.), a Pb/U Concordia diagram, and a relative age probability diagram. This information can be accessed and viewed with any web browser, and depending on the level of access desired, can be designated as either private or public. Loading information into Geochron requires the use of U-Pb_Redux, a Java-based program that also provides enhanced capabilities for data reduction, plotting, and analysis. Instructions are provided for three different levels of interaction with Geochron:
  • accessing samples that are already in the Geochron database;
  • preparing information for new samples, then transferring it to Arizona LaserChron Center personnel for uploading to Geochron;
  • preparing information and uploading it to Geochron using U-Pb_Redux.
iNaturalist is a citizen science project and online social network of naturalists, citizen scientists, and biologists built on the concept of mapping and sharing observations of biodiversity across the globe. iNat is a platform for biodiversity research, where anyone can start up their own science project with a specific purpose and collaborate with other observers.
The Bremen Core Repository (BCR) stores International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP), and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, the Mediterranean and Black Seas, and the Arctic Ocean. It is operated at the University of Bremen within the framework of the German participation in IODP, and it is one of three IODP repositories, alongside the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC) in Japan. One of the scientific goals of IODP is to research the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit revealed security flaws in the legacy HapMap site that required NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website over the past five years, and the resource has come to the end of its useful life.
The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers can find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project is released into the public domain, and data generated by the Project can be downloaded with minimal constraints.
The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project helps biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data were gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations addressed potential ethical issues and provided valuable experience in conducting research with identified populations. Public and private organizations in six countries participated in the Project, which officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and was expected to take about three years.
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data, offering the possibility either to use ready-made workflows or to create your own. BioVeL workflows are stored in the myExperiment BioVeL Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. The Web Services are catalogued in the BiodiversityCatalogue.