Filter

  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
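Taken together, these operators form a Lucene-style query syntax. The sketch below is purely illustrative: the search terms in it are made-up examples rather than actual registry keywords, and it only prints a few query strings to show how the operators combine.

    # Illustrative query strings for the Lucene-style search syntax described above.
    # All terms below are hypothetical examples, not actual registry keywords.
    example_queries = [
        'geo*',                               # wildcard: geology, geoscience, ...
        '"climate data"',                     # quoted phrase search
        'genome + sequencing',                # AND (also the default)
        'physics | astronomy',                # OR
        'biology - marine',                   # NOT: biology results excluding "marine"
        '(soil | water) + "sample archive"',  # parentheses set priority
        'genomics~2',                         # fuzzy word match within edit distance 2
        '"data repository"~3',                # phrase match with a slop of 3
    ]

    for query in example_queries:
        print(query)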
Found 89 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries and linked data. It has been developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples, the repository can hold more than 50,000 entries, which can be searched via SPARQL endpoints and through pre-indexing of key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
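Since the entry above describes an RDF-backed repository searchable via SPARQL endpoints, here is a minimal, hedged sketch of what such a query could look like, using the Python SPARQLWrapper library. The endpoint URL and the use of the Dublin Core title predicate are assumptions for illustration only, not documented Chempound parameters.

    # Minimal sketch of querying an RDF repository via SPARQL (SPARQLWrapper).
    # The endpoint URL is a HYPOTHETICAL placeholder, not a documented Chempound
    # service; the dcterms:title predicate is a common choice, assumed here.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://example.org/chempound/sparql")  # assumed URL
    endpoint.setQuery("""
        SELECT ?entry ?title
        WHERE {
            ?entry <http://purl.org/dc/terms/title> ?title .
        }
        LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["entry"]["value"], "-", binding["title"]["value"])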
<<<!!!<<< As stated 2017-06-27, the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub at https://github.com/researchcompendia >>>!!!>>> The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability (and thus the reliability) of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: to expand and enhance the platform, including adding executability for a greater variety of coding languages and frameworks and enhancing output presentation; to expand usership and to test the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics; and to pilot integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
<<<!!!<<< intrepidbio.com expired >>>!!!>>> Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data – but can’t spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data, which is generated for a wide range of purposes from disease tracking and animal breeding to medical diagnosis and treatment.
Bitbucket is a web-based version control repository hosting service owned by Atlassian, for source code and development projects that use either Mercurial or Git revision control systems.
The repository is no longer available. <<<!!!<<< 2021-01-25: no more access to California Water CyberInfrastructure >>>!!!>>>
Project Data Sphere, LLC, operates a free digital library-laboratory where the research community can broadly share, integrate and analyze historical, de-identified, patient-level data from academic and industry cancer Phase II-III clinical trials. These patient-level datasets are available through the Project Data Sphere platform to researchers affiliated with life science companies, hospitals and institutions, as well as independent researchers, at no cost and without requiring a research proposal.
BCCM/ITM is a collection of well documented mycobacteria, characterized by phenotypic and/or genotypic tests. While having an emphasis on (drug-resistant) M. tuberculosis complex, BCCM/ITM comprises more than 90 mycobacterial species from human, animal and environmental origin from all continents.
Risklayer Explorer is a collaboration between Risklayer GmbH and the Karlsruhe Institute of Technology's Center for Disaster Management and Risk Reduction Technology (CEDIM). This website is still under development, but we are going live with it already because we want to present data on the novel coronavirus (COVID-19) to help inform the public of the current situation. You will be able to track disaster events and read about our analysis here. Our work is a continuation of a new style of disaster research started by CEDIM in 2011 to analyze disasters immediately after their occurrence, assess the impacts, and retrace the temporal development of disaster events. We are already analyzing damaging earthquakes globally, providing you with event characteristics, earthquake intensity footprints, and the population affected by each earthquake. In addition to earthquake events, we expect to be tracking and analyzing tropical cyclone, volcano and extreme weather events in 2020.
WorldData.AI comes with a built-in workspace – the next-generation hyper-computing platform powered by a library of 3.3 billion curated external trends. WorldData.AI allows you to save your models in its “My Models Trained” section. You can make your models public and share them on social media with interesting images, model features, summary statistics, and feature comparisons. Empower others to leverage your models. For example, if you have discovered a previously unknown impact of interest rates on new-housing demand, you may want to share it through “My Models Trained.” Upload your data and combine it with external trends to build, train, and deploy predictive models with one click! WorldData.AI inspects your raw data, applies feature processors, chooses the best set of algorithms, trains and tunes multiple models, and then ranks model performance.
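The train-tune-rank workflow described above is essentially automated model selection. The sketch below is not the WorldData.AI API (which is not documented here); it only illustrates the general pattern with scikit-learn on synthetic data.

    # Generic illustration of "train several models, tune them, rank by performance".
    # This is NOT the WorldData.AI API; it is a plain scikit-learn sketch on toy data.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # Synthetic regression data standing in for "your data combined with external trends".
    X, y = make_regression(n_samples=500, n_features=10, noise=0.3, random_state=0)

    candidates = {
        "ridge": Ridge(alpha=1.0),
        "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "gradient_boosting": GradientBoostingRegressor(random_state=0),
    }

    # Score each candidate with cross-validation and rank by mean R^2.
    scores = {
        name: cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        for name, model in candidates.items()
    }
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: mean R^2 = {score:.3f}")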
<<<!!!<<< CRAWDAD has moved to IEEE DataPort (https://www.re3data.org/repository/r3d100012569). The datasets in the Community Resource for Archiving Wireless Data at Dartmouth (CRAWDAD) repository are now hosted as the CRAWDAD Collection on IEEE DataPort. After nearly two decades as a stand-alone archive at crawdad.org, the migration of the collection to IEEE DataPort provides permanence and new visibility. >>>!!!>>>
The German Socio-Economic Panel Study (SOEP) is a wide-ranging representative longitudinal study of private households, located at the German Institute for Economic Research, DIW Berlin. Every year, nearly 11,000 households and more than 20,000 persons were sampled by the fieldwork organization TNS Infratest Sozialforschung. The data provide information on all household members, consisting of Germans living in the old and new German states, foreigners, and recent immigrants to Germany. The panel was started in 1984. Some of the many topics include household composition, occupational biographies, employment, earnings, and health and satisfaction indicators.
GTS AI is an artificial intelligence company that offers services to its clients. We use high-definition images and high-quality data to support analysis and machine learning. We are a dataset provider and collect data relating to artificial intelligence.
<<<!!!<<< The database is no longer available from 1st July 2018 >>>!!!>>> CRYSTMET was previously included in the NCDS as part of CrystalWorks. Unfortunately we are no longer able to license the CRYSTMET database for access through the NCDS. Therefore the database will no longer be accessible from 1st July 2018. CRYSTMET contains chemical, crystallographic and bibliographic data together with associated comments regarding experimental details for each study. It is a database of critically evaluated crystallographic data for metals, including alloys, intermetallics and minerals. Using these data, a number of associated files are derived, a major one being a parallel file of calculated powder patterns. These derived data are included within the CRYSTMET product.
The National Science Foundation (NSF) Ultraviolet (UV) Monitoring Network provides data on ozone depletion and the associated effects on terrestrial and marine systems. Data are collected from 7 sites in Antarctica, Argentina, United States, and Greenland. The network is providing data to researchers studying the effects of ozone depletion on terrestrial and marine biological systems. Network data is also used for the validation of satellite observations and for the verification of models describing the transfer of radiation through the atmosphere.
The Wolfram Data Repository is a public resource that hosts an expanding collection of computable datasets, curated and structured to be suitable for immediate use in computation, visualization, analysis and more. Building on the Wolfram Data Framework and the Wolfram Language, the Wolfram Data Repository provides a uniform system for storing data and making it immediately computable and useful. With datasets of many types and from many sources, the Wolfram Data Repository is built to be a global resource for public data and data-backed publication.
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group with a particular focus on: collection, curation, analysis, visualization and dissemination of the best available Internet data sets; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science; improving the integrity of operational Internet measurement and management; and informing science, technology, and communications public policies.
The repository is no longer available. <<<!!!<<< 2018-09-14: no more access to GIS Data Depot >>>!!!>>>
Survey of India, the national survey and mapping organization of the country under the Department of Science & Technology, is the oldest scientific department of the Government of India. It was set up in 1767 and has evolved rich traditions over the years. In its assigned role as the nation's principal mapping agency, Survey of India bears a special responsibility to ensure that the country's domain is explored and mapped suitably, to provide base maps for expeditious and integrated development, and to ensure that all resources contribute their full measure to the progress, prosperity and security of our country now and for generations to come. The history of the Survey of India dates back to the 18th century. The forerunners of the army of the East India Company and its surveyors had the onerous task of exploring the unknown. Bit by bit, the tapestry of Indian terrain was completed by the painstaking efforts of a distinguished line of surveyors such as Mr. Lambton and Sir George Everest. It is a tribute to the foresight of such surveyors that at the time of independence the country inherited a survey network built on scientific principles. The Great Trigonometric series spanning the country from north to south and east to west are some of the best geodetic control series available in the world. The scientific principles of surveying have since been augmented by the latest technology to meet the multidisciplinary requirements for data from planners and scientists. Organized into only 5 Directorates in 1950, mainly to look after the mapping needs of the defense forces in the north-west and north-east, the Department has now grown into 22 Directorates spread across almost all parts (states) of the country to provide the basic map coverage required for the development of the country. Its technology, among the latest in the world, has been oriented to meet the needs of defense forces, planners and scientists in the fields of geosciences and land and resource management. Its expert advice is utilized by various ministries and undertakings of the Government of India in many sensitive areas, including the settlement of international borders and state boundaries, and in assisting the planned development of hitherto underdeveloped areas. Faced with the requirement for digital topographical data, the department created three Digital Centres during the late eighties to generate a digital topographical database for the entire country, for use in various planning processes and in the creation of geographic information systems. Its specialized Directorates, such as the Geodetic and Research Branch and the Indian Institute of Surveying & Mapping (erstwhile Survey Training Institute), have been further strengthened to meet the growing requirements of the user community. The department is also assisting in many scientific programs of the nation related to the fields of geophysics, remote sensing and digital data transfer.
The institutional repository of the Universidad Santo Tomás manages, preserves, stores, disseminates and provides access to digital objects resulting from all of the university's academic and administrative production.
Bioinformatics.org serves the scientific and educational needs of bioinformatic practitioners and the general public. We develop and maintain computational resources to facilitate world-wide communications and collaborations between people of all educational and professional levels. We provide and promote open access to the materials and methods required for, and derived from, research, development and education.
Government of Yukon open data provides an easy way to find, access and reuse the government's public datasets. This service brings all of the government's data together in one searchable website. Our datasets are created and managed by different government departments. We cannot guarantee the quality or timeliness of all data. If you have any feedback you can get in touch with the department that produced the dataset. This is a pilot project. We are in the process of adding a quality framework to make it easier for you to access high quality, reliable data.
The Organic Chemistry Portal offers an overview of recent topics, interesting reactions, and information on important chemicals for organic chemists. It provides a searchable index of citations, chemical syntheses and chemical products. We publish 1,000 additional citations per year. A German version is available at https://www.organische-chemie.ch/
The Virtual Research Environment (VRE) is an open-source data management platform that enables medical researchers to store, process and share data in compliance with the European Union (EU) General Data Protection Regulation (GDPR). The VRE addresses the present lack of digital research data infrastructures fulfilling the need for (a) data protection for sensitive data, (b) the capability to process complex data such as radiologic imaging, (c) the flexibility to create one's own processing workflows, and (d) access to high-performance computing. The platform promotes FAIR data principles and reduces barriers to biomedical research and innovation. The VRE offers a web portal with graphical and command-line interfaces, segregated data zones and organizational measures for lawful data onboarding, isolated computing environments where large teams can collaboratively process sensitive data privately, analytics workbench tools for processing, analyzing, and visualizing large datasets, automated ingestion of hospital data sources, project-specific data warehouses for structured storage and retrieval, graph databases to capture and query ontology-based metadata, provenance tracking, version control, and support for automated data extraction and indexing. The VRE is based on a modular and extendable state-of-the-art cloud computing framework, a RESTful API, open developer meetings, hackathons, and comprehensive documentation for users, developers, and administrators. The VRE, with its concerted technical and organizational measures, can be adopted by other research communities and thus facilitates the development of a co-evolving, interoperable platform ecosystem with an active research community.
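As a loose illustration of the "graph databases to capture and query ontology-based metadata" mentioned above, the sketch below runs a Cypher query through the official neo4j Python driver. The connection details, node labels, and relationship type are hypothetical placeholders, not the VRE's actual schema or API.

    # Generic illustration of querying ontology-based metadata in a graph database.
    # This is NOT the VRE's actual schema or API; the connection details, labels,
    # and relationship name below are hypothetical and for demonstration only.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed

    query = """
    MATCH (d:Dataset)-[:ANNOTATED_WITH]->(t:OntologyTerm)
    WHERE t.label = $term
    RETURN d.name AS dataset, t.label AS term
    LIMIT 10
    """

    with driver.session() as session:
        for record in session.run(query, term="magnetic resonance imaging"):
            print(record["dataset"], "-", record["term"])

    driver.close()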
CiteSeerx is an evolving scientific literature digital library and search engine that focuses primarily on the literature in computer and information science. CiteSeerx aims to improve the dissemination of scientific literature and to provide improvements in functionality, usability, availability, cost, comprehensiveness, efficiency, and timeliness in the access of scientific and scholarly knowledge. Rather than creating just another digital library, CiteSeerx attempts to provide resources such as algorithms, data, metadata, services, techniques, and software that can be used to promote other digital libraries. CiteSeerx has developed new methods and algorithms to index PostScript and PDF research articles on the Web.