Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
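For example, the operators above can be combined in a single query. The following are hypothetical illustrations (the search terms are placeholders, not fields of this registry):

    genom* +("sequence data" | metadata) -proteomics
    "nucleotide archive"~2
    genomics~1

The first query matches terms beginning with "genom" in records that also contain either the exact phrase "sequence data" or the term "metadata", while excluding records mentioning "proteomics"; the second tolerates up to two intervening words inside the phrase; the third matches terms within one character edit of "genomics" (e.g. "genomic").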
Found 10 result(s)
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where something was put.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contributions. Assign citable contributor credit for any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, as it is developed or after publication of reports. Public projects can be found on the site.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points in the lifecycle, such as manuscript submission or the onset of data collection. Public registrations can be discovered on the site.
  • Manage scientific workflow. A structured, flexible system can provide efficiency gains to the workflow and clarity to project objectives.
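As an illustration of the "share and find materials" point, public OSF projects can also be queried programmatically. The sketch below uses the public OSF REST API at https://api.osf.io/v2/; the /nodes/ endpoint and the filter[title] parameter follow the API's published conventions but are not described in the entry above, so treat them as assumptions to verify against the API documentation.

    # Minimal sketch: list public OSF projects whose title matches a term.
    # Assumes the public OSF API v2 and the third-party `requests` package;
    # the filter parameter reflects the API's documented filtering convention.
    import requests

    resp = requests.get(
        "https://api.osf.io/v2/nodes/",
        params={"filter[title]": "reproducibility"},  # hypothetical search term
        timeout=30,
    )
    resp.raise_for_status()
    for node in resp.json()["data"]:
        attrs = node["attributes"]
        print(attrs["title"], "-", attrs.get("category", ""))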
A central source for NEI biomedical digital objects, including data sets, software, analytical workflows, metadata, standards, publications, and more.
The database aims to bridge the gap between agent repositories and studies documenting the effects of antimicrobial combination therapies. Its primary aim is to compile data on combinations of antimicrobial agents, notably natural products such as antimicrobial peptides (AMPs). To meet this purpose, we have developed a data curation workflow that combines text mining, manual expert curation, and graph analysis, and that supports the reconstruction of AMP-drug combinations.
>>>!!!<<< intrepidbio.com has expired. >>>!!!<<< Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
Arquivo.pt is a research infrastructure that preserves millions of files collected from the web since 1996 and provides a public search service over this information. It contains information in several languages. Periodically, it collects and stores information published on the web, then processes the collected data to make it searchable, providing a “Google-like” service that enables searching the past web (English user interface available at https://arquivo.pt/?l=en). This preservation workflow is performed through a large-scale distributed information system, which can also be accessed through an API (https://arquivo.pt/api).
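A minimal sketch of calling that API follows, assuming the TextSearch endpoint and the response fields (response_items, tstamp, originalURL, linkToArchive) documented at https://arquivo.pt/api; these names are assumptions to verify there.

    # Minimal sketch: full-text search over the archived web via Arquivo.pt.
    # Endpoint and response field names are assumptions based on the public
    # API documentation linked in the entry above.
    import requests

    resp = requests.get(
        "https://arquivo.pt/textsearch",
        params={"q": "open science", "maxItems": 5},  # hypothetical query
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("response_items", []):
        print(item.get("tstamp"), item.get("originalURL"))
        print("  archived copy:", item.get("linkToArchive"))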
The platform hosts the critical edition of the letters written to Jacob Burckhardt, reconstructing in open access one of the most important European correspondences of the 19th century. Save for a few exceptions, these letters are all unpublished. At a later stage, the project also aims to publish Jacob Burckhardt's own letters. The editing process has been carried out using the Muruca semantic digital library framework, which was modified over the course of the project as the requirements of the philological researchers emerged more clearly. The results are stored in, and accessible from, the front end of the platform.
The Odum Institute Archive Dataverse contains social science data curated and archived by the Odum Institute Data Archive at the University of North Carolina at Chapel Hill. Key collections include the primary holdings of the Louis Harris Data Center, the National Network of State Polls, and other Southern-focused public opinion data. Please note that some datasets in this collection are restricted to University of North Carolina at Chapel Hill affiliates; access to these datasets requires a UNC ONYEN institutional login to the Dataverse system.
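Dataverse installations expose a standard Search API, so the public portions of this collection can in principle be queried programmatically. In the sketch below, the host dataverse.unc.edu and the collection alias "odum" are guesses, not details given in the entry above; restricted datasets still require the institutional login described there.

    # Minimal sketch: search public datasets via the standard Dataverse
    # Search API. The host and the "odum" subtree alias are assumptions.
    import requests

    resp = requests.get(
        "https://dataverse.unc.edu/api/search",
        params={"q": "state polls", "type": "dataset", "subtree": "odum"},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["data"]["items"]:
        print(item["name"], item.get("global_id", ""))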
Open Government Data Portal of Sikkim (sikkim.data.gov.in) is a platform supporting the Open Data initiative of the Government of Sikkim. The portal is intended to be used by departments and organizations of the Government of Sikkim to publish datasets, documents, services, tools, and applications collected by them for public use. It intends to increase transparency in the functioning of the state government and to open avenues for many more innovative uses of government data from different perspectives. The portal was designed and developed by the Open Government Data Division of the National Informatics Centre (NIC), Department of Electronics and Information Technology (DeitY), Government of India, under the Software as a Service (SaaS) model of the Open Government Data (OGD) Platform India. The data available in the portal are owned by the various departments and organizations of the Government of Sikkim. The portal has the following modules:
  • Data Management System (DMS): module for contributing data catalogs by various state government agencies, which are made available on the front-end website after a due approval process through a defined workflow.
  • Content Management System (CMS): module for managing and updating the portal's functionality and content types.
  • Visitor Relationship Management (VRM): module for collating and disseminating viewer feedback on the data catalogs.
  • Communities: module for community users to interact and share their views with others who have common interests.
>>>!!!<<< The repository is no longer available. >>>!!!<<< C3-Grid was a project, now finished, within D-Grid, the initiative to promote a grid-based e-Science framework in Germany. The goal of C3-Grid was to support the workflow of Earth system researchers: a grid infrastructure was implemented that allows efficient distributed data processing and inter-institutional data exchange. The aim of the effort was to develop an infrastructure for uniform access to heterogeneous data and for distributed data processing. The work was structured in two projects funded by the Federal Ministry of Education and Research. The first project was part of the D-Grid initiative; it explored the potential of grid technology for climate research and developed a prototype infrastructure. Details of the C3Grid architecture are described in “Earth System Modelling – Volume 6”. In the second phase, "C3Grid - INAD: Towards an Infrastructure for General Access to Climate Data", this infrastructure was improved, especially with respect to interoperability with the Earth System Grid Federation (ESGF), and the portfolio of available diagnostic workflows was expanded. These workflows can now be re-used in the adjacent MiKlip Evaluation Tool infrastructure (http://www.fona-miklip.de/en/index.php) and as Web Processes within the Birdhouse Framework (http://bird-house.github.io/). The Birdhouse Framework is now funded as part of the European Copernicus Climate Change Service (https://climate.copernicus.eu/) managed by ECMWF and will be extended to provide scalable processing services for ESGF-hosted data at DKRZ as well as IPSL and BADC.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
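ENA's data access tools include REST interfaces; the sketch below uses the ENA Portal API search endpoint, with a result type, query expression, and field list that are illustrative assumptions drawn from the Portal API documentation rather than details stated in the entry above.

    # Minimal sketch: retrieve sequencing-run metadata from the ENA Portal API.
    # The query expression and field names are assumptions to verify against
    # https://www.ebi.ac.uk/ena/portal/api/ before relying on them.
    import requests

    resp = requests.get(
        "https://www.ebi.ac.uk/ena/portal/api/search",
        params={
            "result": "read_run",        # search over sequencing runs
            "query": "tax_eq(9606)",     # hypothetical filter: human samples
            "fields": "run_accession,instrument_platform,first_public",
            "format": "json",
            "limit": 5,
        },
        timeout=60,
    )
    resp.raise_for_status()
    for run in resp.json():
        print(run["run_accession"], run["instrument_platform"], run["first_public"])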