
Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
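For illustration, here are a few hypothetical queries combining the operators above; the search terms themselves are made up, but the syntax follows the rules as stated.

```python
# Illustrative queries for the search syntax described above.
# The search terms are invented examples, not suggested queries.
example_queries = [
    "climat*",                         # wildcard: matches climate, climatic, ...
    '"research data"',                 # quoted phrase search
    "data + repository",               # AND search (the default)
    "biology | ecology",               # OR search
    "data - software",                 # NOT: exclude results mentioning software
    "(biology | ecology) + workflow",  # parentheses set priority
    "genomcs~1",                       # fuzzy term: within edit distance 1 of "genomics"
    '"open science data"~2',           # phrase search with a slop of 2
]
```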
Found 8 result(s)
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. Its purpose is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials to computer malfunction, changing personnel, or simply forgetting where they were put.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them; find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contributions. Assign citable contributor credit for any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, either as it is developed or after publication of reports.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of a project at important points of its lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow. A structured yet flexible system provides efficiency gains in workflow and clarity about project objectives.
A service of the Inter-university Consortium for Political and Social Research (ICPSR), openICPSR is a self-publishing repository for social, behavioral, and health sciences research data. openICPSR is particularly well-suited for the deposit of replication data sets for researchers who need to publish their raw data associated with a journal article so that other researchers can replicate their findings.
Bioconductor provides tools for the analysis and comprehension of high-throughput genomic data. Bioconductor uses the R statistical programming language, and is open source and open development. It has two releases each year, and an active user community. Bioconductor is also available as an AMI (Amazon Machine Image) and a series of Docker images.
Note: this repository is no longer available. BioVeL was a virtual e-laboratory that supported research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supported the development and use of workflows to process data: users could either run ready-made workflows or create their own. BioVeL workflows are stored in the myExperiment BioVeL group (http://www.myexperiment.org/groups/643/content). They are underpinned by a range of analytical and data-processing functions (generally provided as web services or R scripts) that support common biodiversity analysis tasks. The web services are catalogued in the BiodiversityCatalogue.
Arquivo.pt is a research infrastructure that preserves millions of files collected from the web since 1996 and provides a public search service over this information. It contains information in several languages. Periodically it collects and stores information published on the web, then processes the collected data to make it searchable, providing a “Google-like” service for searching the past web (an English user interface is available at https://arquivo.pt/?l=en). This preservation workflow is performed through a large-scale distributed information system, which can also be accessed through an API (https://arquivo.pt/api).
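As a minimal sketch of how the past web can be queried programmatically, the snippet below calls Arquivo.pt's full-text search endpoint. The endpoint path, parameter names, and response field names follow the public documentation at https://arquivo.pt/api, but should be verified there before use.

```python
# Minimal sketch: querying the Arquivo.pt TextSearch API using only the
# Python standard library. Endpoint path, parameters, and response fields
# are taken from the public docs at https://arquivo.pt/api; verify before
# relying on this.
import json
import urllib.parse
import urllib.request

def search_past_web(term: str, max_items: int = 5) -> list:
    """Return archived-page hits for a term from Arquivo.pt's index."""
    params = urllib.parse.urlencode({"q": term, "maxItems": max_items})
    url = "https://arquivo.pt/textsearch?" + params
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # "response_items" holds one entry per archived page matching the query.
    return payload.get("response_items", [])

if __name__ == "__main__":
    for item in search_past_web("biodiversity"):
        # Each item carries the original URL and a crawl timestamp,
        # among other fields.
        print(item.get("tstamp"), item.get("originalURL"))
```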
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for ZIP and TAR containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
Note: the digital archive of the Historical Data Centre Saxony-Anhalt was transferred to the share-it repository (https://www.re3data.org/repository/r3d100013014).

The Historical Data Centre Saxony-Anhalt was founded in 2008. Its main tasks are the computer-aided provision, processing, and evaluation of historical research data, the development of theoretically consolidated normative data and vocabularies, and the further development of methods in the context of digital humanities, research data management, and quality assurance. The Centre sees itself as a central institution for the data service of historical data in the federal state of Saxony-Anhalt and is thus part of a nationally and internationally linked infrastructure for long-term data storage and use.

The Centre primarily acquires individual-level microdata for the analysis of life courses and employment and personal biographies (primarily quantitative, but also qualitative data) that offer a broad interdisciplinary and international analytical framework and meet clearly defined methodological and technical requirements. The studies are processed, archived, and, in compliance with data protection and copyright conditions, made available to the scientifically interested public in accordance with internationally recognized standards. The degree of preparation depends on the type and quality of the study and on demand. Reference studies and studies in high demand are comprehensively documented, often in cooperation with primary researchers or experts, and summarized in data collections.

The Historical Data Centre supports researchers in meeting the high demands of research data management. This includes advisory support across the entire life cycle of data: production, documentation, analysis, evaluation, publication, long-term archiving, and subsequent reuse. In cooperation with other infrastructure facilities of the state of Saxony-Anhalt as well as national and international, interdisciplinary data repositories, the Centre provides tools and infrastructures for the publication and long-term archiving of research data. Together with the University and State Library of Saxony-Anhalt, the Centre operates its own data repository as well as special workstations for the digitisation and analysis of data. The Historical Data Centre aims to be a contact point for very different users of historical sources, collecting data relating to historical persons, events, and historical territorial units.