Search tips:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
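The operators above can be composed programmatically. The sketch below is illustrative only: the helper names are invented for this example and are not part of any real search API; only the operator syntax comes from the list above.

```python
# Minimal helpers for composing queries in the syntax listed above.
# All function names here are hypothetical conveniences, not a real API.

def phrase(text: str) -> str:
    """Quote a multi-word phrase for exact matching."""
    return f'"{text}"'

def fuzzy(word: str, distance: int = 1) -> str:
    """Append ~N to a word to request an edit distance (fuzziness)."""
    return f"{word}~{distance}"

def any_of(*terms: str) -> str:
    """OR-combine terms; parentheses give the group priority."""
    return "(" + " | ".join(terms) + ")"

def all_of(*terms: str) -> str:
    """AND-combine terms (+ is the default operator)."""
    return " + ".join(terms)

def exclude(term: str) -> str:
    """Prefix a term with - for a NOT operation."""
    return f"-{term}"

# e.g. repositories matching "archive", "archiving", ... that mention
# Dataverse or Fedora, but not certificates:
query = all_of("archiv*", any_of("Dataverse", "Fedora"), exclude("certificate"))
# → 'archiv* + (Dataverse | Fedora) + -certificate'
```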
Phaidra (Permanent Hosting, Archiving and Indexing of Digital Resources and Assets) is the University of Padua Library System’s platform for long-term archiving of digital collections. Phaidra hosts various types of digital objects (antiquarian books, manuscripts, photographs, wallcharts, maps, learning objects, films, archive material and museum objects). Phaidra offers a search facility to identify specific objects, and each object can be viewed, downloaded, used and reused to the extent permitted by law and by its associated licences. The objects in the digital collections on the Phaidra platform are sourced from libraries (in large part due to the digitisation projects promoted by the Library System itself), museums and archives at the University of Padua and other institutions, including the Ca’ Foscari University and the Università Iuav in Venice.
Scripps Institution of Oceanography (SIO) Explorer includes five federated collections: SIO Cruises, SIO Historic Photographs, the Seamounts, Marine Geological Samples, and the Educator’s Collection, all part of the US National Science Digital Library (NSDL). Each collection represents a unique resource of irreplaceable scientific research. The effort is a collaboration among researchers at Scripps, computer scientists from the San Diego Supercomputer Center (SDSC), and archivists and librarians from the UCSD Libraries. In 2005 SIOExplorer was extended to the Woods Hole Oceanographic Institution with the Multi-Institution Scalable Digital Archiving project, funded through the joint NSF/Library of Congress digital archiving and preservation program, creating a harvesting methodology and a prototype collection of cruises, Alvin submersible dives and Jason ROV lowerings.
Archaeological data are no longer deposited in EASY; please see https://archaeology.datastations.nl/ instead. EASY is the online archiving system of Data Archiving and Networked Services (DANS). EASY offers access to thousands of datasets in the humanities, the social sciences and other disciplines, and can also be used for the online depositing of research data.
DataSpace is a digital repository meant for both archiving and publicly disseminating digital data which are the result of research, academic, or administrative work performed by members of the Princeton University community. DataSpace will promote awareness of the data and address concerns for ensuring the long-term availability of data in the repository.
DOOR is the open institutional repository of the University for Continuing Education Krems, formerly known as Danube University Krems. DOOR runs on the Fedora software and is a partner repository of Phaidra Vienna and other Fedora users in Austria. It provides access to OA publications, scientific data and much more for interested users, and supports the university's scientists and co-workers in their publication process.
The mission of the GDC is to curate and provide access to oceanographic data, especially from Scripps expeditions, making them accessible for scientific and educational use worldwide. Originally launched by Bill Menard, the GDC has been in operation for more than 40 years. While many historic physical artifacts are carefully preserved, the current emphasis is on digital archiving, in coordination with other national and international programs.
mediaTUM – the media and publications repository of the Technical University of Munich: mediaTUM supports the publication of digital documents and research data as well as the use of multimedia content in research and teaching.
UnisaIR is an open digital archive of the scholarly, intellectual and research outputs of the University of South Africa. It contains and preserves theses and dissertations, research articles, conference papers, rare and special materials and many other digital assets, including special collections from the Documentation Center for African Studies: manuscripts, photos, political posters and other archival materials about the history of South Africa.
The Texas Data Repository is a platform for publishing and archiving datasets (and other data products) created by faculty, staff, and students at Texas higher education institutions. The repository is built in an open-source application called Dataverse, developed and used by Harvard University. The repository is hosted by the Texas Digital Library, a consortium of academic libraries in Texas with a proven history of providing shared technology services that support secure, reliable access to digital collections of research and scholarship. For a list of TDL participating institutions, please visit: https://www.tdl.org/members/.
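Dataverse installations such as this one expose a Search API (GET /api/search). The sketch below shows how such an endpoint could be queried; the host name is an assumption for illustration and should be replaced with the actual installation's address.

```python
# Sketch: querying a Dataverse installation's Search API.
# The BASE host is an assumption, not confirmed by the text above.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://dataverse.tdl.org"  # assumed host for the Texas Data Repository

def search_url(query: str, per_page: int = 10) -> str:
    """Build a Dataverse Search API URL (GET /api/search)."""
    return f"{BASE}/api/search?" + urlencode({"q": query, "per_page": per_page})

def search(query: str):
    """Run a search and return the matching items from the JSON payload."""
    with urlopen(search_url(query)) as resp:
        payload = json.load(resp)
    # Successful responses carry {"status": "OK", "data": {..., "items": [...]}}
    return payload["data"]["items"]
```

A call like `search("soil")` would return dataset, file, and dataverse records matching the keyword, subject to the installation being reachable.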
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
Research data management is a general term covering how you organize, structure, store, and care for the information used or generated during a research project. The University of Oxford policy mandates the preservation of research data and records for a minimum of 3 years after publication. ORA-Data is the University of Oxford’s research data archive: a place to securely hold digital research materials (data) of any sort, along with documentation that helps explain what they are and how to use them (metadata). The application of consistent archiving policies, preservation techniques and discovery tools further increases the long-term availability and usefulness of the data; this is the main difference between storage and archiving of data. https://www.re3data.org/repository/r3d100011230
Note: the digital archive of the Historical Data Center Saxony-Anhalt was transferred to the share-it repository (https://www.re3data.org/repository/r3d100013014). The Historical Data Centre Saxony-Anhalt was founded in 2008. Its main tasks are the computer-aided provision, processing and evaluation of historical research data, the development of theoretically consolidated normative data and vocabularies, and the further development of methods in the context of digital humanities, research data management and quality assurance. The Historical Data Centre Saxony-Anhalt sees itself as a central institution for the data service of historical data in the federal state of Saxony-Anhalt and is thus part of a nationally and internationally linked infrastructure for long-term data storage and use. The Centre primarily acquires individual-specific microdata for the analysis of life courses, employment histories and biographies (primarily quantitative, but also qualitative data), which offer a broad interdisciplinary and international analytical framework and meet clearly defined methodological and technical requirements. The studies are processed, archived and, in compliance with data protection and copyright conditions, made available to the scientifically interested public in accordance with internationally recognized standards. The degree of preparation depends on the type and quality of the study and on demand. Reference studies and studies in high demand are comprehensively documented, often in cooperation with primary researchers or experts, and summarized in data collections. The Historical Data Centre supports researchers in meeting the high demands of research data management. This includes advisory support across the entire life cycle of data, starting with data production and continuing through documentation, analysis, evaluation, publication, long-term archiving and, finally, the subsequent use of data.
In cooperation with other infrastructure facilities of the state of Saxony-Anhalt as well as national and international, interdisciplinary data repositories, the Data Centre provides tools and infrastructures for the publication and long-term archiving of research data. Together with the University and State Library of Saxony-Anhalt, the Data Centre operates its own data repository as well as special workstations for the digitisation and analysis of data. The Historical Data Centre aims to be a contact point for very different users of historical sources. We collect data relating to historical persons, events and historical territorial units.
Note: GeoConnections – Discovery Portal is now closed. Information identified as archived on the Web is for reference, research or recordkeeping purposes; it has not been altered or updated after the date of archiving. Web pages that are archived on the Web are not subject to the Government of Canada Web Standards. As per the Communications Policy of the Government of Canada, you can request alternate formats; please contact us to request a format other than those available.
The Research Data Repository of FID move is a digital long-term repository for open data from the field of transport and mobility research. All datasets are provided with an open licence and are assigned a persistent DataCite DOI (Digital Object Identifier). Both data search and archiving are free. The Specialised Information Service for Mobility and Transport Research (FID move) has been set up by the Saxon State and University Library Dresden (SLUB) and the German TIB – Leibniz Information Centre for Science and Technology as part of the DFG funding programme "Specialised Information Services".
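DataCite DOIs like those assigned here resolve through doi.org, which supports content negotiation for machine-readable metadata. The sketch below shows how that metadata could be fetched; the DOI in the usage note is a placeholder, not a real dataset identifier.

```python
# Sketch: retrieving DataCite metadata for a dataset DOI via
# doi.org content negotiation. The DOI used in tests is a placeholder.
import json
from urllib.request import Request, urlopen

def metadata_request(doi: str) -> Request:
    """Build a content-negotiation request for DataCite JSON metadata."""
    return Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.datacite.datacite+json"},
    )

def datacite_metadata(doi: str) -> dict:
    """Resolve the DOI and return its DataCite metadata as a dict."""
    with urlopen(metadata_request(doi)) as resp:
        return json.load(resp)
```

For a published dataset, `datacite_metadata("10.xxxx/...")` would return fields such as the title, creators, and publication year registered with DataCite.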
The Common Cold Project began in 2011 with the aim of creating, documenting, and archiving a database that combines final research data from 5 prospective viral-challenge studies that were conducted over the preceding 25 years: the British Cold Study (BCS); the three Pittsburgh Cold Studies (PCS1, PCS2, and PCS3); and the Pittsburgh Mind-Body Center Cold Study (PMBC). These unique studies assessed predictor (and hypothesized mediating) variables in healthy adults aged 18 to 55 years, experimentally exposed them to a virus that causes the common cold, and then monitored them for development of infection and signs and symptoms of illness.
The DARIAH-DE repository is a digital long-term archive for humanities and cultural studies research data. Each object described and stored in the DARIAH-DE Repository has a unique and lasting persistent identifier (DOI), with which it is permanently referenced, cited, and kept available for the long term. In addition, the DARIAH-DE Repository enables the sustainable and secure archiving of data collections. The DARIAH-DE Repository is open not only to research projects associated with DARIAH-DE, but also to individual researchers and research projects that want to store their research data persistently, citably, and with long-term availability, and to make it accessible to third parties. The main focus is simple and user-oriented access to long-term storage of research data. To ensure its long-term sustainability, the DARIAH-DE Repository is operated by the Humanities Data Centre.
FDAT is a research data repository hosted by the University of Tübingen, designed to facilitate long-term archiving and publication of research data. Managed by the Information, Communication and Media Center (IKM), it primarily caters to the humanities and social sciences, while welcoming researchers from all scientific disciplines at the university. Committed to high-quality data management, FDAT emphasizes the importance of adhering to the FAIR Data Principles, promoting findability, accessibility, interoperability, and reusability of the research data it contains.
UCLA Library is adopting Dataverse, the open source web application designed for sharing, preserving and using research data. UCLA Dataverse will allow data, text, software, scripts, data visualizations, etc., created from research projects at UCLA to be made publicly available, widely discoverable, linkable, and ultimately, reusable.
IBICT provides a research data repository that takes care of long-term preservation and archiving according to good practices, so that researchers can share their data, retain control over it, and receive recognition for it. The repository supports research data sharing with persistent data citation, allowing datasets to be reused. Dataverse is a large open data repository for all disciplines, created by the Institute for Quantitative Social Science at Harvard University. The IBICT Dataverse repository provides a free means to deposit and find datasets stored by staff of the institutions participating in the Cariniana network.
GFZ Data Services is a repository for research data and scientific software across the Earth System Sciences, hosted at GFZ. The curated data are archived, persistently accessible and published with digital object identifiers (DOI). They range from large dynamic datasets from global monitoring networks with real-time acquisition, to international services in geodesy and geophysics, to the full suite of small and highly heterogeneous datasets collected by individual researchers or small teams ("long-tail data"). In addition to the DOI registration and data archiving itself, the GFZ Data Services team offers comprehensive consultation by domain scientists and IT specialists. Among others, GFZ Data Services is data publisher for the IAG Services ICGEM, IGETS and ISG (IAG = Int. Association for Geodesy; ICGEM = Int. Center for Global Earth Models; IGETS = Int. Geodynamics and Earth Tide Service; ISG = Int. Service for the Geoid), the World Stress Map, INTERMAGNET, GEOFON, the Geophysical Instrument Pool Potsdam GIPP, TERENO, EnMAP Flight Campaigns, the Potsdam Institute for Climate Impact Research PIK, the Specialised Information Service for Solid Earth Geosciences (FID GEO), and hosts the GFZ Catalogue for the International Generic Sample Number IGSN.
Yareta is a repository service built on digital solutions for archiving, preserving and sharing research data that enable researchers and institutions of any disciplines to share and showcase their research results. The solution was developed as part of a larger project focusing on Data Life Cycle Management (dlcm.ch) that aims to develop various services for research data management. Thanks to its highly modular architecture, Yareta can be adapted both to small institutions that need a "turnkey" solution and to larger ones that can rely on Yareta to complement what they have already implemented. Yareta is compatible with all formats in use in the different scientific disciplines and is based on modern technology that interconnects with researchers' environments (such as Electronic Laboratory Notebooks or Laboratory Information Management Systems).
Arca Data is Fiocruz's official repository for archiving, publishing, disseminating, preserving and sharing digital research data produced by the Fiocruz community or in partnership with other research institutes or bodies, with the aim of promoting new research, ensuring the reproducibility or replicability of existing research, and promoting an open and citizen science. Its objective is to stimulate the wide circulation of scientific knowledge, strengthening the institutional commitment to Open Science and free access to health information, in addition to providing transparency and fostering collaboration between researchers, educators, academics, managers and graduate students, for the advancement of knowledge and the creation of solutions that meet the demands of society.