
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) signify precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
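For illustration, the operators above can be combined in a single query. These are hypothetical example queries, assuming the search follows Elasticsearch-style query-string syntax (which the operators listed above match):

```text
genom*                       wildcard: genome, genomes, genomics, …
"marine biodiversity"        exact phrase
corpus +german -historical   must contain "corpus" and "german", must not contain "historical"
(plankton | zooplankton) +images   grouping with OR, combined with AND
nmr~1                        fuzzy match within edit distance 1
"data archive"~2             phrase match allowing a slop of 2
```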
Found 30 result(s)
EBRAINS offers one of the most comprehensive platforms for sharing brain research data, ranging in type as well as in spatial and temporal scale. We provide the guidance and tools needed to overcome the hurdles associated with sharing data. The EBRAINS data curation service ensures that your dataset will be shared with maximum impact, visibility, reusability, and longevity; see https://ebrains.eu/services/data-knowledge/share-data. Find data, the user interface of the EBRAINS Knowledge Graph, allows you to easily find data of interest. EBRAINS hosts a wide range of data types and models from different species. All data are well described and can be accessed immediately for further analysis.
nmrshiftdb is an NMR web database for organic structures and their nuclear magnetic resonance (NMR) spectra. It allows for spectrum prediction (13C, 1H and other nuclei) as well as for searching spectra, structures and other properties. Last but not least, it features peer-reviewed submission of datasets by its users. The nmrshiftdb2 software is open source, and the data are published under an open-content license. Please consult the documentation for more detailed information. nmrshiftdb2 is the continuation of the NMRShiftDB project, with additional data, bug fixes and changes to the software.
Currently, the IMS repository focuses on resources provided by the Institute for Natural Language Processing in Stuttgart (IMS) and other CLARIN-D related institutions such as the local Collaborative Research Centre 732 (SFB 732) as well as institutions and/or organizations that belong to the CLARIN-D extended scientific community. Comprehensive guidelines and workflows for submission by external contributors are being compiled based on the experiences in archiving such in-house resources.
The PLANKTON*NET data provider at the Alfred Wegener Institute for Polar and Marine Research is an open access repository for plankton-related information. It covers all types of phytoplankton and zooplankton from marine and freshwater areas. PLANKTON*NET's greatest strength is its comprehensiveness: for the different taxa, both image information and taxonomic descriptions can be archived. PLANKTON*NET also contains a glossary with accompanying images to illustrate the term definitions. PLANKTON*NET therefore presents a vital tool for the preservation of historic data sets as well as for the archival of current research results. Because interoperability with international biodiversity data providers (e.g. GBIF) is one of our aims, the architecture behind the new planktonnet@awi repository is observation-centric and allows for multiple assignment of assets (images, references, animations, etc.) to any given observation. In addition, images can be grouped in sets and/or assigned tags to satisfy user-specific needs. Sets (and respective images) of relevance to the scientific community and/or general public have been assigned a persistent digital object identifier (DOI) for the purpose of long-term preservation (e.g. the set "Plankton*Net celebrates 50 years of Roman Treaties", handle: 10013/de.awi.planktonnet.set.495).
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia and Web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
This is a database for vegetation data from West Africa, i.e. phytosociological and dendrometric relevés as well as floristic inventories. The West African Vegetation Database has been developed in the framework of the projects “SUN - Sustainable Use of Natural Vegetation in West Africa” and “Biodiversity Transect Analysis in Africa” (BIOTA, https://www.biota-africa.org/).
The TextGrid Repository is a digital preservation archive for research data in the humanities. It offers an extensive searchable and adaptable corpus of XML/TEI encoded texts, pictures and databases. Amongst the continuously growing corpus is the Digital Library of TextGrid, which consists of works of more than 600 authors of fiction (prose, verse and drama) as well as nonfiction from the beginning of the printing press to the early 20th century, written in or translated into German. The files are saved in different output formats (XML, ePub, PDF), published and made searchable. Different tools, e.g. viewing or quantitative text-analysis tools, can be used for visualization or to further research the text. The TextGrid Repository is part of the virtual research environment TextGrid, which besides digital preservation also offers open-source software for the collaborative creation and publication of, e.g., digital editions based on XML/TEI.
MEMENTO aims to become a valuable tool for identifying regions of the world ocean that should be targeted in future work to improve the quality of air-sea flux estimates.
The human pluripotent stem cell registry (hPSCreg) is a public registry and data portal for human embryonic and induced pluripotent stem cell lines (hESC and hiPSC). The Registry provides comprehensive and standardized biological and legal information as well as tools to search and compare information from multiple hPSC sources, and hence addresses a translational research need. To facilitate unambiguous identification across different resources, hPSCreg automatically creates a unique standardized name (identifier) for each cell line registered. In addition to biological information, hPSCreg stores extensive data about ethical standards regarding cell sourcing and conditions for application and privacy protection. hPSCreg is the first global registry that holds both manually validated scientific and ethical information on hPSC lines, and it provides access by means of a user-friendly, mobile-ready web application.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full texts are indexed linguistically, and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitisation was based on the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice ('double keying'). To represent the structure of the text, the electronic full text was encoded in conformity with the XML standard TEI P5. In the next stages, the linguistic analysis is completed: the text is tokenised and lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
As part of the Copernicus Space Component programme, ESA manages the coordinated access to the data procured from the various Contributing Missions and the Sentinels, in response to the Copernicus users' requirements. The Data Access Portfolio documents the data offer and the access rights per user category. The CSCDA portal is the access point to all data, including Sentinel missions, for Copernicus Core Users as defined in the EU Copernicus Programme Regulation (e.g. Copernicus Services). The Copernicus Space Component (CSC) Data Access system is the interface for accessing the Earth Observation products from the Copernicus Space Component. The system's overall space capacity relies on several EO missions contributing to Copernicus, and it is continuously evolving, with new missions becoming available over time and others ending and/or being replaced.
The information accumulated in the SPECTR-W3 ADB contains over 450,000 records and includes factual experimental and theoretical data on ionization potentials, energy levels, wavelengths, radiation transition probabilities, oscillator strengths, and (optionally) the parameters of analytical approximations of electron-collisional cross-sections and rates for atoms and ions. Those data were extracted from publications in physical journals, proceedings of related conferences, special-purpose publications on atomic data, or provided directly by authors. The information is supplied with references to the original sources and with comments elucidating the details of experimental measurements or calculations, where necessary and available. To date, the SPECTR-W3 ADB is the largest factual database in the world containing information on spectral properties of multicharged ions.
The International Ocean Discovery Program (IODP) is an international marine research collaboration that explores Earth's history and dynamics using ocean-going research platforms to recover data recorded in seafloor sediments and rocks and to monitor subseafloor environments. IODP depends on facilities funded by three platform providers with financial contributions from five additional partner agencies. Together, these entities represent 26 nations whose scientists are selected to staff IODP research expeditions conducted throughout the world's oceans. IODP expeditions are developed from hypothesis-driven science proposals aligned with the program's science plan Illuminating Earth's Past, Present, and Future. The science plan identifies 14 challenge questions in the four areas of climate change, deep life, planetary dynamics, and geohazards. Until 2013, the program operated under the name Integrated Ocean Drilling Program.
The Language Archive at the Max Planck Institute in Nijmegen provides a unique record of how people around the world use language in everyday life. It focuses on collecting spoken and signed language materials in audio and video form along with transcriptions, analyses, annotations and other types of relevant material (e.g. photos, accompanying notes).
<<<!!!<<< This repository is no longer available. This record is outdated. >>>!!!>>> Science3D is an Open Access project to archive and curate scientific data and make them available to everyone interested in scientific endeavours. Science3D focusses mainly on 3D tomography data from biological samples, simply because these objects make it comparatively easy to understand the concepts and techniques. The data come primarily from the imaging beamlines of the Helmholtz Center Geesthacht (HZG), which make use of the uniquely bright and coherent X-rays of the Petra3 synchrotron. Petra3, like many other photon and neutron sources in Europe and worldwide, is a fantastic instrument to investigate the microscopic detail of matter and organisms. The experiments at photon science beamlines hence provide unique insights into all kinds of scientific fields, ranging from medical applications to plasma physics. The success of these experiments demands enormous efforts from the scientists and considerable investment.
The Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) is a CLARIN partner institution and has been an officially certified CLARIN service center since June 20th, 2013. The CLARIN center at the BBAW focuses on historical text corpora (predominantly provided by the 'Deutsches Textarchiv'/German Text Archive, DTA) as well as on lexical resources (e.g. dictionaries provided by the 'Digitales Wörterbuch der Deutschen Sprache'/Digital Dictionary of the German Language, DWDS).
The aim of the Freshwater Biodiversity Data Portal is to integrate and provide open and free access to freshwater biodiversity data from all possible sources. To this end, we offer tools and support for scientists interested in documenting/advertising their dataset in the metadatabase, in submitting or publishing their primary biodiversity data (i.e. species occurrence records), or in having their dataset linked to the Freshwater Biodiversity Data Portal. This information portal serves as a data discovery tool and allows scientists and managers to complement, integrate, and analyse distribution data to elucidate patterns in freshwater biodiversity. The Freshwater Biodiversity Data Portal was initiated under the EU FP7 BioFresh project and continued through the Freshwater Information Platform (http://www.freshwaterplatform.eu). To ensure the broad availability of biodiversity data and integration in the global GBIF index, we strongly encourage scientists to submit any primary biodiversity data published in a scientific paper to national nodes of GBIF or to thematic initiatives such as the Freshwater Biodiversity Data Portal.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm. Frequently requested lines are also kept alive, as well as a selection of wildtype strains. The collection includes several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, and lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, in addition to transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. By association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The FAIRDOMHub is built upon the SEEK software suite, an open source web platform for sharing scientific research assets, processes and outcomes. FAIRDOM will establish a support and service network for European Systems Biology. It will serve projects in standardizing, managing and disseminating data and models in a FAIR manner: Findable, Accessible, Interoperable and Reusable. FAIRDOM is an initiative to develop a community and establish an internationally sustained Data and Model Management service for the European Systems Biology community. FAIRDOM is a joint action of the ERA-Net EraSysAPP and the European Research Infrastructure ISBE.
The focus of PolMine is on texts published by public institutions in Germany. Corpora of parliamentary protocols are at the heart of the project: Parliamentary proceedings are available for long stretches of time, cover a broad set of public policies and are in the public domain, making them a valuable text resource for political science. The project develops repositories of textual data in a sustainable fashion to suit the research needs of political science. Concerning data, the focus is on converting text issued by public institutions into a sustainable digital format (TEI/XML).
The DARIAH-DE repository is a digital long-term archive for research data in the humanities and cultural sciences. Each object described and stored in the DARIAH-DE Repository has a unique and lasting Persistent Identifier (DOI), with which it is permanently referenced, cited, and kept available for the long term. In addition, the DARIAH-DE Repository enables the sustainable and secure archiving of data collections. The DARIAH-DE Repository is open not only to research projects associated with DARIAH-DE, but also to individual researchers as well as research projects that want to store their research data persistently, citably, and in a long-term archive, and to make it available to third parties. The main focus is simple and user-oriented access to long-term storage of research data. To ensure its long-term sustainability, the DARIAH-DE Repository is operated by the Humanities Data Centre.
The repository is part of the National Research Data Infrastructure initiative Text+, in which the University of Tübingen is a partner. It is housed at the Department of General and Computational Linguistics. The infrastructure is maintained in close cooperation with the Digital Humanities Centre, a core facility of the university, collaborating with the library and computing center of the university. Integration of the repository into the national CLARIN-D and international CLARIN infrastructures gives it wide exposure, increasing the likelihood that the resources will be used and further developed beyond the lifetime of the projects in which they were developed. Among the resources currently available in the Tübingen Center Repository, researchers can find widely used treebanks of German (e.g. TüBa-D/Z), the German wordnet (GermaNet), the first manually annotated digital treebank (Index Thomisticus), as well as descriptions of the tools used by the WebLicht ecosystem for natural language processing.
The DNA Bank Network was established in spring 2007 and was funded until 2011 by the German Research Foundation (DFG). The network was initiated by GBIF Germany (Global Biodiversity Information Facility). It offers a worldwide unique concept. DNA bank databases of all partners are linked and are accessible via a central web portal, providing DNA samples of complementary collections (microorganisms, protists, plants, algae, fungi and animals). The DNA Bank Network was one of the founders of the Global Genome Biodiversity Network (GGBN) and is fully merged with GGBN today. GGBN agreed on using the data model proposed by the DNA Bank Network. The Botanic Garden and Botanical Museum Berlin-Dahlem (BGBM) hosts the technical secretariat of GGBN and its virtual infrastructure. The main focus of the DNA Bank Network is to enhance taxonomic, systematic, genetic, conservation and evolutionary studies by providing:
  • high quality, long-term storage of DNA material on which molecular studies have been performed, so that results can be verified, extended, and complemented;
  • complete on-line documentation of each sample, including the provenance of the original material, the place of voucher deposit, information about DNA quality and extraction methodology, digital images of vouchers, and links to published molecular data if available.