Search syntax (examples follow the list):
  • * at the end of a keyword allows wildcard searches
  • Quotation marks (") can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (how far apart its terms may be)
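As a quick illustration of how these operators combine, the following Python snippet simply prints a few example query strings; the terms and combinations are illustrative assumptions, not queries documented by the site itself.

    # Illustrative query strings for the search syntax above.
    # The terms themselves are made up for demonstration purposes.
    examples = [
        'climat*',                     # wildcard: climate, climatology, ...
        '"research data"',             # phrase search
        'ocean + salinity',            # AND search (the default)
        'genome | proteome',           # OR search
        'archive -software',           # NOT operation
        '(ocean | marine) + biology',  # parentheses set priority
        'color~1',                     # edit distance 1 also matches "colour"
        '"data repository"~2',         # slop: terms up to 2 positions apart
    ]
    for query in examples:
        print(query)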
Found 489 results
Silkworm Pathogen Database (SilkPathDB) is a comprehensive resource for studying the pathogens of the silkworm, including microsporidia, fungi, bacteria, and viruses. SilkPathDB provides access not only to genomic data, including functional annotation of genes and gene products, but also to extensive biological information such as gene expression data and related research. SilkPathDB supports research on the pathogens of the silkworm as well as of other lepidopteran insects.
The OpenMadrigal project seeks to develop and support an online database for geospace data. The project has been led by MIT Haystack Observatory since 1980, but now has active support from Jicamarca Observatory and other community members. Madrigal is a robust, web-based system capable of managing and serving archival and real-time data, in a variety of formats, from a wide range of ground-based instruments. Madrigal is installed at a number of sites around the world. Data at each Madrigal site is locally controlled and can be updated at any time, but the metadata shared between Madrigal sites allows all sites to be searched at once from any Madrigal site. Data is local; metadata is shared.
As the national oceanographic data centre for Canada, MEDS maintains centralized repositories of some of the oceanographic data types collected in Canada, coordinates data exchanges between DFO and recognized intergovernmental organizations, and acts as a central point for oceanographic data requests. Real-time, near-real-time (for operational oceanography), or historical data are made available as appropriate.
The KRISHI Portal is an initiative of the Indian Council of Agricultural Research (ICAR) to bring its knowledge resources to all stakeholders in one place. The portal is being developed as a centralized data repository system for ICAR, covering technology, data generated through experiments, surveys, and observational studies, geospatial data, publications, learning resources, and more. For the electronic implementation of research data management in ICAR institutes and the digitization of agricultural research, the KRISHI (Knowledge based Resources Information Systems Hub for Innovations in Agriculture) Portal has been developed as the ICAR Research Data Repository for knowledge management. The Data Inventory Repository aims to create a metadata inventory from information on data availability at the institute level. The portal consists of six repositories, viz. technology, publications, experimental data, observational data, survey data, and a geo-portal, and can be accessed at http://krishi.icar.gov.in. During 2016-17, data on the latitude and longitude of all KVKs under this zone were submitted to the concerned authority for inclusion in the geo-portal. A brainstorming session was organized at this institute for all scientists on using the portal and uploading information to it, and, as per the council's guidelines, various publications pertaining to this institute were also uploaded.
Our system consists of a portal (portal.geomar.de) providing password-protected access to several projects. The portal offers document exchange, shared or individual blogs and forums, and the integration of external web pages and services. Moreover, you can access the expedition database, which organizes the description and exchange of data from cruises and expeditions for each project. Expeditions are linked to KML files (Google Earth compatible), allowing all stations of a cruise or expedition to be visualized. We have established links to the publication database/repository OceanRep (EPrints), as well as descriptions of model output and links to paper publications.
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia, and web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
IsoArcH is an open-access isotope web database for bioarchaeological samples from prehistoric and historical periods all over the world. With 40,000+ isotope-related data points obtained from 13,000+ specimens (i.e., humans, animals, plants, and organic residues) from 500+ archaeological sites, IsoArcH is now one of the world's largest repositories of isotopic data and metadata deriving from archaeological contexts. IsoArcH makes big-data initiatives possible and also highlights research gaps in certain regions or time periods. Among other things, it supports the creation of sound baselines, multi-scale analyses, and extensive studies and syntheses on research issues such as paleodiet, food production, resource management, migration, and paleoclimatic and paleoenvironmental change.
Academic Commons provides open, persistent access to the scholarship produced by researchers at Columbia University, Barnard College, Jewish Theological Seminary, Teachers College, and Union Theological Seminary. Academic Commons is a program of the Columbia University Libraries. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
SeaBASS is the publicly shared archive of in situ oceanographic and atmospheric data maintained by the NASA Ocean Biology Processing Group (OBPG). High-quality in situ measurements are a prerequisite for satellite data product validation, algorithm development, and many climate-related inquiries. As such, the OBPG maintains a local repository of in situ oceanographic and atmospheric data to support its regular scientific analyses. The SeaWiFS Project originally developed this system, SeaBASS, to catalog the radiometric and phytoplankton pigment data used in its calibration and validation activities. To facilitate the assembly of a global data set, SeaBASS was expanded with oceanographic and atmospheric data collected by participants in the SIMBIOS Program, under NASA Research Announcements NRA-96 and NRA-99, which has aided considerably in minimizing spatial bias and maximizing data acquisition rates. Archived data include measurements of apparent and inherent optical properties, phytoplankton pigment concentrations, and other related oceanographic and atmospheric data, such as water temperature, salinity, stimulated fluorescence, and aerosol optical thickness. Data are collected using a number of different instrument packages, such as profilers, buoys, and hand-held instruments, from a variety of manufacturers, on a variety of platforms, including ships and moorings.
The mission of the GO Consortium is to develop a comprehensive computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world's largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for the computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
The edoc-Server, launched in 1998, is the institutional repository of Humboldt-Universität zu Berlin and offers the possibility of publishing texts and data. Every item is published open access, with an optional embargo period of up to five years. Data publications have been accepted since 1 January 2018.
OMIM is a comprehensive, authoritative compendium of human genes and genetic phenotypes that is freely available and updated daily. OMIM is authored and edited at the McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, under the direction of Dr. Ada Hamosh. Its official home is omim.org.
Jason is a remote-controlled deep-diving vessel that gives shipboard scientists immediate, real-time access to the sea floor. Instead of making short, expensive dives in a submarine, scientists can stay on deck and guide Jason as deep as 6,500 meters (4 miles) to explore for days on end. Jason is a type of remotely operated vehicle (ROV), a free-swimming vessel connected by a long fiberoptic tether to its research ship. The 10-km (6 mile) tether delivers power and instructions to Jason and fetches data from it.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR's use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
DataON is Korea's national research data platform. It provides an integrated metadata search across KISTI's research data and other domestic and international research data, with links to the raw data. DataON allows users (researchers, policy makers, etc.) to easily search for various types of research data in all scientific fields; to register research results so that research data can be posted and cited; to build communities among researchers and enable collaborative research; and to analyze discovered research data in a one-stop data analysis environment.
With the program EnviDat, we are developing a unified and managed access portal for WSL's rich reservoir of environmental monitoring and research data. EnviDat is designed as a portal to publish, connect, and search across existing data, but is not intended to become a large data centre hosting original data. While the sharing of data is facilitated centrally, data management remains decentralized, and the know-how and responsibility to curate research data remain with the original data providers.
The CDC Data Catalogue describes the climate data of the DWD and provides access to data, descriptions, and access methods. Climate data here refers to observations, statistical indices, and spatial analyses. The CDC comprises climate data for Germany, but also global climate data collected and processed in the framework of international cooperation. The CDC Data Catalogue is under construction and not yet complete. The purposes of the CDC Data Catalogue are:
  • to provide uniform access to the climate data centres and climate datasets of the DWD
  • to describe the climate data according to international metadata standards
  • to make the catalogue information available on the Internet
  • to support the search for climate data
  • to facilitate access to climate data and climate data descriptions
The HUGO Gene Nomenclature Committee (HGNC) has assigned unique gene symbols and names to over 35,000 human loci, of which around 19,000 are protein-coding. This curated online repository of HGNC-approved gene nomenclature and associated resources includes links to genomic, proteomic, and phenotypic information, as well as dedicated gene family pages.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for zip- and tar-containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter and library information. To improve the efficiency of the submission process for this type of data, we have designed a special streamlined submission process and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. The thing that these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that there is little information that can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads which make up the assembly should be submitted to dbEST, the Trace archive or the Short Read Archive (SRA) prior to the submission of the assemblies.
The Gene database provides detailed information for known and predicted genes defined by nucleotide sequence or map position. Gene supplies gene-specific connections in the nexus of map, sequence, expression, structure, function, citation, and homology data. Unique identifiers are assigned to genes with defining sequences, genes with known map positions, and genes inferred from phenotypic information. These gene identifiers are used throughout NCBI's databases and tracked through updates of annotation. Gene includes genomes represented by NCBI Reference Sequences (or RefSeqs) and is integrated for indexing and query and retrieval from NCBI's Entrez and E-Utilities systems.
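The Gene entry above notes that records are queried and retrieved through NCBI's Entrez and E-Utilities systems. A minimal Python sketch of such a retrieval via the public E-utilities esummary endpoint follows; the Gene ID used (672, human BRCA1) and the printed fields are illustrative choices, and the call assumes network access.

    # Minimal sketch: fetch a Gene record summary via NCBI E-utilities.
    # Gene ID 672 (human BRCA1) is an illustrative example.
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
    params = urlencode({"db": "gene", "id": "672", "retmode": "json"})

    with urlopen(f"{BASE}?{params}") as response:
        summary = json.load(response)

    record = summary["result"]["672"]
    print(record["name"], "-", record["description"])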