  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
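The operators above follow a common Lucene-style query syntax. As a rough illustration, the snippet below pairs each operator with a hypothetical example query; these queries are illustrative only and were not run against the actual search index.

```python
# Hypothetical example queries for the search syntax listed above.
# None of these terms are drawn from the real index; they only illustrate form.
queries = [
    ('learn*',                     "wildcard: matches 'learn', 'learning', 'learner', ..."),
    ('"machine learning"',         "quotes: exact phrase search"),
    ('data + repository',          "+: AND search (the default)"),
    ('genomics | proteomics',      "|: OR search"),
    ('repository - software',      "-: NOT, excludes 'software'"),
    ('(data | code) + repository', "parentheses: grouping priority"),
    ('repositry~1',                "~N after a word: fuzzy match within edit distance 1"),
    ('"open data repository"~2',   "~N after a phrase: allows a slop of 2 words"),
]
for query, meaning in queries:
    print(f'{query:30} {meaning}')
```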
Found 106 result(s)
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. It is used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of the impact of the archive, it has been cited over 1000 times.
CiteSeerx is an evolving scientific literature digital library and search engine that focuses primarily on the literature in computer and information science. CiteSeerx aims to improve the dissemination of scientific literature and to provide improvements in functionality, usability, availability, cost, comprehensiveness, efficiency, and timeliness in the access of scientific and scholarly knowledge. Rather than creating just another digital library, CiteSeerx attempts to provide resources such as algorithms, data, metadata, services, techniques, and software that can be used to promote other digital libraries. CiteSeerx has developed new methods and algorithms to index PostScript and PDF research articles on the Web.
Note: The repository is offline. A collection of open content name datasets for Information Centric Networking. The "Content Name Collection" (CNC) lists and hosts open datasets of content names. These datasets are either derived from URL link databases or web traces. The names are typically used for research on Information Centric Networking (ICN), for example to measure cache hit/miss ratios in simulations.
Sinmin contains texts of different genres and styles of the modern and old Sinhala language. The main sources of electronic copies of texts for the corpus are online Sinhala newspapers, online Sinhala news sites, Sinhala school textbooks available online, online Sinhala magazines, Sinhala Wikipedia, Sinhala fiction available online, the Mahawansa, Sinhala blogs, Sinhala subtitles, and Sri Lankan gazettes.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated in today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
Note: The repository is no longer available; this record is outdated. GEON is an open collaborative project that is developing cyberinfrastructure for the integration of 3- and 4-dimensional earth science data. GEON will develop services for data integration and model integration, and associated model execution and visualization. The Mid-Atlantic test bed will focus on tectonothermal, paleogeographic, and biotic history from the late Proterozoic to mid-Paleozoic. The Rockies test bed will focus on integration of data with dynamic models, to better understand deformation history. GEON will develop the most comprehensive regional datasets in the test bed areas.
CLARIN-LV is a national node of CLARIN ERIC (Common Language Resources and Technology Infrastructure). The mission of the repository is to ensure the availability and long-term preservation of language resources. The data stored in the repository are being actively used and cited in scientific publications.
The National Science Digital Library provides high-quality online educational resources for teaching and learning, with current emphasis on the science, technology, engineering, and mathematics (STEM) disciplines—both formal and informal, institutional and individual, in local, state, national, and international educational settings. The NSDL collection contains structured descriptive information (metadata) about web-based educational resources held on other sites by their providers. These providers have contributed this metadata to NSDL for organized search and open access to educational resources via this website and its services.
CLARIN-UK is a consortium of centres of expertise involved in research and resource creation involving digital language data and tools. The consortium includes the national library, and academic departments and university centres in linguistics, languages, literature and computer science.
The Informatics Research Data Repository is a Japanese data repository that collects data on disciplines within informatics. Sub-categories include topics such as consumerism and information diffusion. The primary data within these datasets comes from experiments run by IDR on how one group is linked to another.
IIASA DARE is the institutional repository for publishing open research data produced by all researchers affiliated with IIASA, the International Institute for Applied Systems Analysis. IIASA DARE has been implemented to help scientists fulfill the requirements of funding bodies and to meet the growing impact of publishing research data. Deposited data receive a persistent, citable link and are openly accessible and stored for the long term.
Version 1.0 of the open database contains 1,151,268 brain signals of 2 seconds each, captured with the stimulus of seeing a digit (from 0 to 9) and thinking about it, over the course of almost 2 years between 2014 and 2015, from a single test subject, David Vivancos. All the signals were captured using commercial (not medical-grade) EEGs—NeuroSky MindWave, Emotiv EPOC, Interaxon Muse, and Emotiv Insight—covering a total of 19 brain locations (10/20 system). Vivancos started capturing brain signals in 2014 and released the first versions of the "MNIST" of brain digits; in 2018 he released another open dataset with a subset of the "IMAGENET" of the Brain. Version 0.05 (last updated 09/28/2021) of that open database contains 24,000 brain signals of 2 seconds each, captured with the stimulus of seeing a real MNIST digit (from 0 to 9)—6,000 so far—and thinking about it, plus the same amount of signals from another 2 seconds of seeing a black screen shown between the digits, from the same test subject in a controlled, still experiment to reduce EMG noise and avoid blinks.
Note (2018-01-18): no data or programs can be found. These archives contain public domain programs for calculations in physics, along with other programs expected to be useful for computer-based work. Physical constants and experimental or theoretical data—such as cross sections, rate constants, and swarm parameters—that are necessary for physical calculations are stored here, too. The programs are mainly written for IBM PC-compatible computers; programs that do not use graphics units can also run on other computers, while in other cases the graphics portions of the programs must be reprogrammed.
Bitbucket is a web-based version control repository hosting service owned by Atlassian, for source code and development projects that use either Mercurial or Git revision control systems.
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data covering the period from 2004 to the present, on more than 500,000 different open source projects.
Discovery is the digital repository of research, and related activities, undertaken at the University of Dundee. The content held in Discovery is varied and ranges from traditional research outputs such as peer-reviewed articles and conference papers, books, chapters and post-graduate research theses and data to records for artefacts, exhibitions, multimedia and software. Where possible Discovery provides full-text access to a version of the research. Discovery is the data catalogue for datasets resulting from research undertaken at the University of Dundee and in some instances the publisher of research data.
The Prototype Data Portal allows users to retrieve data from World Data System (WDS) members. WDS ensures the long-term stewardship and provision of quality-assessed data and data services to the international science community and other stakeholders.
Data products developed and distributed by the National Institute of Standards and Technology span multiple disciplines of research and are widely used in research and development programs by industry and academia. NIST's publicly available data sets showcase its commitment to providing accurate, well-curated measurements of physical properties, exemplified by the Standard Reference Data program, as well as its commitment to advancing basic research. In accordance with the U.S. Government Open Data Policy and the NIST plan for providing public access to the results of federally funded research data, NIST maintains a publicly accessible listing of available data, the NIST Public Dataset List (JSON). Additionally, these data are assigned Digital Object Identifiers (DOIs) to improve discovery of and access to research output; these DOIs are registered with DataCite and provide globally unique persistent identifiers. The NIST Science Data Portal provides a user-friendly discovery and exploration tool for publicly available datasets at NIST. This portal is designed and developed with data.gov Project Open Data standards and principles. The portal software is hosted in the usnistgov GitHub repository.
The UA Campus Repository is an institutional repository that facilitates access to the research, creative works, publications and teaching materials of the University by collecting, sharing and archiving content selected and deposited by faculty, researchers, staff and affiliated contributors.
The University of Cape Town (UCT) uses Figshare for Institutions for its data repository, which was launched in 2017 and is called ZivaHub: Open Data UCT. ZivaHub serves principal investigators at the University of Cape Town who need a repository to store and openly disseminate the data that support their published research findings. The repository service is provided in terms of the UCT Research Data Management Policy. It provides open access to supplementary research data files and links to their respective scholarly publications (e.g., theses, dissertations, and papers) hosted on other platforms, such as OpenUCT.
The Information Marketplace for Policy and Analysis of Cyber-risk & Trust (IMPACT) program supports global cyber risk research & development by coordinating, enhancing and developing real world data, analytics and information sharing capabilities, tools, models, and methodologies. In order to accelerate solutions around cyber risk issues and infrastructure security, IMPACT makes these data sharing components broadly available as national and international resources to support the three-way partnership among cyber security researchers, technology developers and policymakers in academia, industry and the government.