
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
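The operators above follow an Elasticsearch-style simple query syntax. As an illustration only (the helper functions below are hypothetical, not part of the site), such expressions can be composed programmatically:

```python
# Sketch: compose a search expression from the operators listed above
# (wildcard *, quoted phrases, + AND, | OR, - NOT, ~N fuzziness/slop).
# These helpers are illustrative; the service itself only parses the
# resulting query string.

def phrase(text, slop=None):
    """Quote a phrase; an optional ~N suffix sets the allowed slop."""
    q = f'"{text}"'
    return q + (f"~{slop}" if slop is not None else "")

def fuzzy(word, distance):
    """word~N matches terms within N character edits."""
    return f"{word}~{distance}"

# "climate data" with slop 2, AND a wildcard term, NOT proprietary:
query = f'{phrase("climate data", slop=2)} + genom* + -proprietary'
print(query)  # "climate data"~2 + genom* + -proprietary
```

Since + is the default operator, the explicit + signs could be omitted; they are kept here for readability.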
Found 15 result(s)
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where they were put.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution. Assign citable contributor credit for any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, either as it is developed or after publication of reports. Find public projects here.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection. Discover public registrations here.
  • Manage scientific workflow. A structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
The database aims to bridge the gap between agent repositories and studies documenting the effects of antimicrobial combination therapies. Most notably, our primary aim is to compile data on combinations of antimicrobial agents, namely natural products such as antimicrobial peptides (AMPs). To meet this purpose, we have developed a data curation workflow that combines text mining, manual expert curation and graph analysis, and supports the reconstruction of AMP-drug combinations.
BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility to either use ready-made workflows or create one's own. BioVeL workflows are stored in the myExperiment BioVeL Group: http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
The NASA Earth Exchange (NEX) represents a platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing. NEX combines supercomputing, Earth system modeling, workflow management, NASA remote sensing data feeds, and a knowledge sharing platform to deliver a complete work environment in which users can explore and analyze large datasets, run modeling codes, collaborate on new or existing projects, and quickly share results among the Earth Science communities. Includes some local data collections as well as links to data on other sites. On January 31st, 2019, the NEX portal will be down-scoped; member logins will be suspended and the portal contents transitioned to a static set of archives. New projects and resources will no longer be possible after this occurs.
Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but can't spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data, which are generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
Arquivo.pt is a research infrastructure that preserves millions of files collected from the web since 1996 and provides a public search service over this information. It contains information in several languages. Periodically it collects and stores information published on the web. It then processes the collected data to make it searchable, providing a "Google-like" service that enables searching the past web (English user interface available at www.archive.pt). This preservation workflow is performed through a large-scale distributed information system and can also be accessed through an API.
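As a hedged sketch of how Arquivo.pt's past-web search might be queried programmatically: the endpoint path and parameter names below (`textsearch`, `q`, `from`, `to`, `maxItems`) are assumptions based on its public API documentation and should be verified before use; the function only builds a URL and makes no request.

```python
from urllib.parse import urlencode

# Assumed endpoint of the Arquivo.pt full-text search API (verify against
# the current documentation before relying on it).
BASE = "https://arquivo.pt/textsearch"

def build_search_url(query, from_ts=None, to_ts=None, max_items=10):
    """Build a past-web search URL; timestamps use YYYYMMDDHHMMSS.

    No network request is made here; this only composes the query string.
    """
    params = {"q": query, "maxItems": max_items}
    if from_ts:
        params["from"] = from_ts
    if to_ts:
        params["to"] = to_ts
    return f"{BASE}?{urlencode(params)}"

url = build_search_url("open science", from_ts="19960101000000")
print(url)
```

The response (per the documented API) is JSON listing archived versions with their capture timestamps and archive URLs.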
The platform hosts the critical edition of the letters written to Jacob Burckhardt, reconstructing in open access one of the most important European correspondences of the 19th century. Save for a few exceptions, these letters are all unpublished. At a later stage, the project also aims to publish Jacob Burckhardt's letters. The editing process has been carried out using the Muruca semantic digital library framework. The Muruca framework has been modified over the course of the project, as the requirements of the philological researchers emerged more clearly. The results are stored in and accessible from the front-end of the platform.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
Open Government Data Portal of Sikkim – sikkim.data.gov.in – is a platform for supporting the Open Data initiative of the Government of Sikkim. The portal is intended to be used by Departments/Organizations of the Government of Sikkim to publish datasets, documents, services, tools and applications collected by them for public use. It intends to increase transparency in the functioning of the state Government and also to open avenues for many more innovative uses of Government Data from different perspectives. Open Government Data Portal of Sikkim is designed and developed by the Open Government Data Division of the National Informatics Centre (NIC), Department of Electronics and Information Technology (DeitY), Government of India. The portal has been created under the Software as a Service (SaaS) model of the Open Government Data (OGD) Platform India of NIC. The data available in the portal are owned by various Departments/Organizations of the Government of Sikkim. Open Government Data Portal of Sikkim has the following modules: Data Management System (DMS) – module for contributing data catalogs by various state government agencies, making them available on the front-end website after a due approval process through a defined workflow. Content Management System (CMS) – module for managing and updating the various functionalities and content types of the portal. Visitor Relationship Management (VRM) – module for collating and disseminating viewer feedback on various data catalogs. Communities – module for community users to interact and share their views with others who share common interests.
The Virtual Liver Network (VLN) represents a major research investment by the German Government focusing on work at the “bleeding edge” of Systems Biology and Systems Medicine. This Flagship Programme is tackling one of the major challenges in the life sciences: that is, how to integrate the wealth of data we have acquired post-genome, not just in a mathematical model, but more importantly in a series of models that are linked across scales to represent organ function. As the project is prototyping how to achieve true multi-scale modelling within a single organ and linking this to human physiology, it will be developing tools and protocols that can be applied to other systems, helping to drive forward the application of modelling and simulation to modern medical practice. It is the only programme of its type to our knowledge that bridges investigations from the sub-cellular through to ethically cleared patient and volunteer studies in an integrated workflow. As such, this programme is contributing significantly to the development of a new paradigm in biology and medicine.
It is a platform (designed and developed by the National Informatics Centre (NIC), Government of India) for supporting the Open Data initiative of Surat Municipal Corporation, intended to publish government datasets for public use. The portal has been created under the Software as a Service (SaaS) model of the Open Government Data (OGD) Platform, which opens avenues for reusing the City's datasets from different perspectives. This portal has numerous modules: (a) Data Management System (DMS) for contributing data catalogs by various departments, making them available on the front-end website after a due approval process through a defined workflow; (b) Content Management System (CMS) for managing and updating the various functionalities and content types of the Open Government Data Portal of Surat City; (c) Visitor Relationship Management (VRM) for collating and disseminating viewer feedback on various data catalogs; and (d) a Communities module for community users to interact and share their views with others who share common interests.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for zip- and tar-containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
Open Government Data Portal of Tamil Nadu is a platform (designed by the National Informatics Centre) for the Open Data initiative of the Government of Tamil Nadu. The portal is intended to publish datasets collected by the Tamil Nadu Government for public use. It has been created under the Software as a Service (SaaS) model of Open Government Data (OGD) and publishes datasets in open formats such as CSV, XLS, ODS/OTS, XML, RDF, KML and GML. The portal has the following modules: (a) Data Management System (DMS) for contributing data catalogs by various state government agencies, making them available on the front-end website after a due approval process through a defined workflow; (b) Content Management System (CMS) for managing and updating various functionalities and content types; (c) Visitor Relationship Management (VRM) for collating and disseminating viewer feedback on various data catalogs; and (d) a Communities module for community users to interact and share their views and common interests with others. It includes different types of datasets, both geospatial and non-spatial, classified as shareable and non-shareable data. Geospatial data consist primarily of satellite data, maps, etc.; non-spatial data derive from national accounts statistics, price indices, censuses and surveys produced by a statistical mechanism. It follows the principles of data sharing and accessibility: openness, flexibility, transparency, quality, security and machine-readability.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
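ENA records described above (samples, runs, assemblies) are retrievable by accession over a REST interface. As a minimal sketch, assuming the ENA Browser API path layout (`/api/{format}/{accession}`) as documented by EMBL-EBI; the function only composes the URL and performs no download:

```python
# Assumed base path of the ENA Browser API; verify against current
# EMBL-EBI documentation before relying on it.
ENA_BROWSER_API = "https://www.ebi.ac.uk/ena/browser/api"

def record_url(accession, fmt="xml"):
    """URL for one ENA record in the given format (e.g. 'xml' or 'fasta').

    Accessions follow INSDC conventions, e.g. run accessions like ERR000001.
    """
    return f"{ENA_BROWSER_API}/{fmt}/{accession}"

# Metadata for a (real-format, example) sequencing run accession:
print(record_url("ERR000001"))
# https://www.ebi.ac.uk/ena/browser/api/xml/ERR000001
```

The XML returned for a run accession carries the experiment, sample and study cross-references from the data model described above, which is what makes programmatic traversal of a sequencing workflow's records possible.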
The Historical Data Centre Saxony-Anhalt was founded in 2008. Its main tasks are the computer-aided provision, processing and evaluation of historical research data, the development of theoretically consolidated normative data and vocabularies, as well as the further development of methods in the context of digital humanities, research data management and quality assurance. The "Historical Data Centre Saxony-Anhalt" sees itself as a central institution for the provision of historical data in the federal state of Saxony-Anhalt and is thus part of a nationally and internationally linked infrastructure for long-term data storage and use. The Centre primarily acquires individual-specific microdata for the analysis of life courses, employment histories and biographies (primarily quantitative, but also qualitative data), which offer a broad interdisciplinary and international analytical framework and meet clearly defined methodological and technical requirements. The studies are processed, archived and, in compliance with data protection and copyright conditions, made available to the scientifically interested public in accordance with internationally recognized standards. The degree of preparation depends on the type and quality of the study and on demand. Reference studies and studies in high demand are comprehensively documented, often in cooperation with primary researchers or experts, and summarized in data collections. The Historical Data Centre supports researchers in meeting the high demands of research data management. This includes advisory support across the entire life cycle of data, starting with data production, documentation, analysis, evaluation, publication and long-term archiving, and finally the subsequent use of data.
In cooperation with other infrastructure facilities of the state of Saxony-Anhalt as well as national and international, interdisciplinary data repositories, the Data Centre provides tools and infrastructures for the publication and long-term archiving of research data. Together with the University and State Library of Saxony-Anhalt, the Data Centre operates its own data repository as well as special workstations for the digitisation and analysis of data. The Historical Data Centre aims to be a contact point for very different users of historical sources. We collect data relating to historical persons, events and historical territorial units.