  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (word-order tolerance)
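These operators can be combined. A few sketch queries, using illustrative search terms (the terms themselves are examples, not guaranteed to match anything in the registry):

```text
genom*                        wildcard: matches genome, genomics, ...
"climate change"              exact phrase search
biodiversity + genomics       AND search (also the default between terms)
biodiversity | ecology        OR search
biodiversity - marine         exclude results containing "marine"
(ecology | biology) + data    parentheses set precedence
genomic~2                     fuzzy match within edit distance 2
"open data repository"~3      phrase match allowing a slop of 3
```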
Found 29 result(s)
GBIF is an international organisation working to make the world's biodiversity data accessible everywhere in the world. GBIF and its many partners work to mobilize the data and to improve search mechanisms, data and metadata standards, web services, and the other components of an Internet-based information infrastructure for biodiversity. GBIF makes available data shared by hundreds of data publishers from around the world. These data are shared according to the GBIF Data Use Agreement, which includes the provision that users of any data accessed through or retrieved via the GBIF Portal will always give credit to the original data publishers.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for ZIP and TAR containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
Data deposit is supported for University of Ottawa faculty, students, and affiliated researchers. The repository is multidisciplinary and hosted on Canadian servers. It provides permanent links (DOIs), which encourage citation of your dataset, and lets you set terms for access and reuse of your data. uOttawa Dataverse is currently optimal for small to medium datasets.
The ENCODE Encyclopedia organizes the most salient analysis products into annotations, and provides tools to search and visualize them. The Encyclopedia has two levels of annotations: integrative-level annotations, which combine multiple types of experimental data with ground-level annotations, and ground-level annotations, which are derived directly from the experimental data, typically by uniform processing pipelines.
The Museum is committed to open access and open science, and has launched the Data Portal to make its research and collections datasets available online. It allows anyone to explore, download and reuse the data for their own research. Our natural history collection is one of the most important in the world, documenting 4.5 billion years of life, the Earth and the solar system. Almost all animal, plant, mineral and fossil groups are represented. These datasets will increase exponentially. Under the Museum's ambitious digital collections programme we aim to have 20 million specimens digitised in the next five years.
The Genomic Observatories Meta-Database (GEOME) is a web-based database that captures the who, what, where, and when of biological samples and associated genetic sequences. GEOME helps users with the following goals: ensure the metadata from your biological samples is findable, accessible, interoperable, and reusable; improve the quality of your data and comply with global data standards; and integrate with R, ease publication to NCBI's Sequence Read Archive, and work with an associated LIMS. The initial use case for GEOME came from the Diversity of the Indo-Pacific Network (DIPnet) resource.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participating in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
The NIH 3D Print Exchange (the “Exchange”) is an open, comprehensive, and interactive website for searching, browsing, downloading, and sharing biomedical 3D print files, modeling tutorials, and educational material. "Biomedical" includes models of cells, bacteria, or viruses, molecules like proteins or DNA, and anatomical models of organs, tissue, and body parts. The NIH 3D Print Exchange provides models in formats that are readily compatible with 3D printers, and offers a unique set of tools to create and share 3D-printable models related to biomedical science.
The Repository of the Faculty of Science is an institutional repository that gathers, permanently stores, and provides access to the scientific and intellectual output of the Faculty of Science, University of Zagreb. The objects that can be stored in the repository include research data, scientific articles, conference papers, theses, dissertations, books, teaching materials, images, video and audio files, and presentations. To improve searchability, all materials are described with a predetermined set of metadata.
BioModels is a repository of mathematical models of biological and biomedical systems. It hosts a vast selection of existing literature-based physiologically and pharmaceutically relevant mechanistic models in standard formats. Our mission is to provide the systems modelling community with reproducible, high-quality, freely-accessible models published in the scientific literature.
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
The OpenNeuro project (formerly known as the OpenfMRI project) was established in 2010 to provide a resource for researchers interested in making their neuroimaging data openly available to the research community. It is managed by Russ Poldrack and Chris Gorgolewski of the Center for Reproducible Neuroscience at Stanford University. The project has been developed with funding from the National Science Foundation, the National Institute on Drug Abuse, and the Laura and John Arnold Foundation.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Thanks to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
Note: Pfam data and new releases are available through InterPro (https://www.re3data.org/repository/r3d100010798). The Pfam website now serves as a static page with no data updates. All links below redirect to the closest alternative page on the InterPro website.
The NBN Atlas is a collaborative project that aggregates biodiversity data from multiple sources and makes it available and usable online. It is the UK’s largest collection of freely available biodiversity data.
SISSA Open Data is the SISSA repository for research data management. It is an institutional repository that captures, stores, preserves, and redistributes the data of the SISSA scientific community in digital form. SISSA Open Data is managed by the SISSA Library as a service to the SISSA scientific community.
The University of Waterloo Dataverse is a data repository for research outputs of our faculty, students, and staff. Files are held in a secure environment on Canadian servers. Researchers can choose to make content available to the public, to specific individuals, or to keep it private.
The "Open Science Resource Atlas 2.0" repository aims to increase the accessibility, improve the quality, and extend the reusability of science resources. The repository focuses on the digital sharing of resources of great importance to science and the economy. These include publications, scripts, lectures, 3D models, audio and video recordings, photos, input and output files of various computer programs, databases collecting data from various fields, machines, systems, language corpora, and many others. The target group, apart from academics, students and doctoral students, is everyone interested, including entrepreneurs and, importantly and uniquely, blind, visually impaired, and deaf people and people with other disabilities.
FinBIF is an integral part of the global biodiversity informatics framework, dedicated to managing species information. Its mission encompasses a wide array of services, including the generation of digital data through various processes, as well as the sourcing, collation, integration, and distribution of existing digital data. Key initiatives under FinBIF include the digitization of collections, the development of data systems for collections Kotka (https://biss.pensoft.net/article/37179/) and observations (https://biss.pensoft.net/article/39150/), and the establishment of a national DNA barcode reference library. FinBIF manages data types such as verbal species descriptions (which include drawings, pictures, and other media types), biological taxonomy, scientific collection specimens, opportunistic systematic and event-based observations, and DNA barcodes. It employs a unified IT architecture to manage data flows, delivers services through a single online portal, fosters collaboration under a cohesive umbrella concept, and articulates development visions under a unified brand. The portal Laji.fi serves as the entry point to this harmonized open data ecosystem. FinBIF's portal is accessible in Finnish, Swedish, and English. Data intended for restricted use are made available to authorities through a separate portal, while open data are also shared with international systems, such as GBIF.
The Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) is a community-driven climate impact modeling initiative that aims to contribute to a quantitative and cross-sectoral synthesis of the various impacts of climate change, including associated uncertainties. It is designed as a continuous model intercomparison and improvement process for climate impact models and is supported by the international climate impact research community. ISIMIP is organized into simulation rounds, for which a simulation protocol specifies a set of common experiments. The protocol further describes a set of climate and direct human forcing data to be used as input data for all ISIMIP simulations. Based on this information, modelling groups from different sectors (e.g. agriculture, biomes, water) perform simulations using various climate impact models. After the simulations are performed, the data is collected by the ISIMIP data team, quality controlled and eventually published on the ISIMIP Repository. From there, it can be freely accessed for further research and analyses. The data is widely used within academia, but also by companies and civil society. ISIMIP was initiated by the Potsdam Institute for Climate Impact Research (PIK) and the International Institute for Applied Systems Analysis (IIASA).