Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
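For illustration, a few hypothetical queries combining these operators (the search terms below are invented and not taken from this results page):
  climat* + "sea level"       finds records matching any term starting with "climat" AND the exact phrase "sea level"
  (soil | water) + quality    finds records containing "quality" together with either "soil" or "water"
  geochem* - laboratory       finds records matching the wildcard term but excluding those containing "laboratory"
  repositry~1                 tolerates one character edit, so this misspelling still matches "repository"
  "open data"~2               matches the phrase even if up to two other words appear between "open" and "data"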
Found 22 result(s)
AusGeochem is an easy-to-use platform for uploading, visualising, analysing and discovering georeferenced sample information and data produced by various geoscience research institutions such as universities, geological survey agencies and museums. With respect to analytical research laboratories, AusGeochem provides a centralised repository allowing laboratories to upload, archive, disseminate and publish their datasets. The intuitive user interface (UI) allows users to access national publicly funded data quickly through the ability to view an area of interest, synthesise a variety of geochemical data in real-time, and extract the required data, gaining novel scientific insights through multi-method data collation. Lithodat Pty Ltd has integrated built-in data synthesis functions into the platform, such as cumulative age histograms, age vs elevation plots, and step-heating diagrams, allowing for rapid inter-study comparisons. Data can be extracted in multiple formats for re-use in a variety of software systems, allowing for the integration of regional datasets into machine learning and AI systems.
AURIN is a collaborative national network of leading researchers and data providers across the academic, government, and private sectors. We provide a one-stop online workbench with access to thousands of multidisciplinary datasets, from over 100 different data sources.
B2FIND is a discovery service based on metadata that is continuously harvested from research data collections at EUDAT data centres and other repositories. The service offers faceted browsing and, in particular, allows users to discover data stored through the B2SAFE and B2SHARE services. The B2FIND service includes metadata harvested from many different community repositories.
BrainMaps.org, launched in May 2005, is an interactive multiresolution next-generation brain atlas that is based on over 20 million megapixels of sub-micron resolution, annotated, scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Currently featured are complete brain atlas datasets for various species, including Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba.
The Canadian Astronomy Data Centre (CADC) was established in 1986 by the National Research Council of Canada (NRC), through a grant provided by the Canadian Space Agency (CSA). Over the past 30 years the CADC has evolved from an archiving centre (hosting data from the Hubble Space Telescope, the Canada-France-Hawaii Telescope, the Gemini observatories, and the James Clerk Maxwell Telescope) into a Science Platform for data-intensive astronomy. The CADC, in partnership with Shared Services Canada, Compute Canada, CANARIE and the university community (funded through the Canada Foundation for Innovation), offers cloud computing, user-managed storage, group management, and data publication services, in addition to its ongoing mission to provide permanent storage for major data collections. Located at the NRC Herzberg Astronomy and Astrophysics Research Centre in Victoria, BC, the CADC staff consists of professional astronomers, software developers, and operations staff who work with the community to develop and deliver leading-edge services to advance Canadian research. The CADC plays a leading role in international efforts to improve the scientific and technical landscape that supports data-intensive science. This includes leadership roles in the International Virtual Observatory Alliance and participation in organizations like the Research Data Alliance, CODATA, and the World Data System. The CADC also contributes significantly to future Canadian projects like the Square Kilometre Array and the TMT. In 2019, the CADC delivered over 2 petabytes of data (over 200 million individual files) to thousands of astronomers in Canada and in over 80 other countries. The cloud processing system completed over 6 million jobs (over 1,100 core years) in 2019.
The purpose of the Canadian Urban Data Repository (CUDR) is to provide a “home” for urban datasets. While primarily focused on datasets created by academe, it will also contain datasets created by NGOs, governments, citizens, and industry. Datasets stored in the repository will be open access and will not contain personally identifiable information. The purpose of the Canadian Urban Data Catalogue (CUDC) is to raise awareness of urban datasets that exist across Canada by providing a catalogue of Canadian and Canadian-created urban datasets. It will catalogue datasets available in CUDR as well as external datasets available on other platforms and as web services. These external datasets may be open or closed. CUDC uses a rich metadata model that supports the documentation of, and search for, datasets relevant to a user’s needs. Catalogue entry metadata may be exported from and imported into CUDC.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
The NCBI Short Genetic Variations database, commonly known as dbSNP, catalogs short variations in nucleotide sequences from a wide range of organisms. These variations include single nucleotide variations, short nucleotide insertions and deletions, short tandem repeats and microsatellites. Short genetic variations may be common, thus representing true polymorphisms, or they may be rare. Some rare human entries have additional information associated with them, including disease associations, genotype information and allele origin, as some variations are somatic rather than germline events. ***NCBI will phase out support for non-human organism data in dbSNP and dbVar beginning on September 1, 2017***
The Ensembl project produces genome databases for vertebrates and other eukaryotic species. Ensembl is a joint project between the European Bioinformatics Institute (EBI), an outstation of the European Molecular Biology Laboratory (EMBL), and the Wellcome Trust Sanger Institute (WTSI) to develop a software system that produces and maintains automatic annotation on selected genomes. The project was started in 1999, some years before the draft human genome was completed. Even at that early stage it was clear that manual annotation of 3 billion base pairs of sequence would not be able to offer researchers timely access to the latest data. The goal of Ensembl was therefore to automatically annotate the genome, integrate this annotation with other available biological data and make all this publicly available via the web. Since the website's launch in July 2000, many more genomes have been added to Ensembl and the range of available data has also expanded to include comparative genomics, variation and regulatory data. Both institutes are located on the Wellcome Trust Genome Campus in Hinxton, south of the city of Cambridge, United Kingdom.
The European Union Open Data Portal is the single point of access to a growing range of data from the institutions and other bodies of the European Union (EU). Data are free for you to use and reuse for commercial or non-commercial purposes. By providing easy and free access to data, the portal aims to promote their innovative use and unleash their economic potential. It also aims to help foster the transparency and the accountability of the institutions and other bodies of the EU. The EU Open Data Portal is managed by the Publications Office of the European Union. Implementation of the EU's open data policy is the responsibility of the Directorate-General for Communications Networks, Content and Technology of the European Commission.
A harmonized, indexed, and searchable large-scale collection of human functional genomics (FG) data with extensive metadata. It provides a scalable, unified way to easily access massive functional genomics and annotation data collections curated from large-scale genomic studies, and offers direct integration via an API with custom and high-throughput genetic and genomic analysis workflows.
Launched in December 2013, Gaia is destined to create the most accurate map yet of the Milky Way. By making accurate measurements of the positions and motions of stars in the Milky Way, it will answer questions about the origin and evolution of our home galaxy. The first data release (2016) contains three-dimensional positions and two-dimensional motions of a subset of two million stars. The second data release (2018) increases that number to over 1.6 billion. Gaia’s measurements are as precise as planned, paving the way to a better understanding of our galaxy and its neighborhood. The AIP hosts the Gaia data as one of the external data centers along with the main Gaia archive maintained by ESAC, and provides access to the Gaia data releases as part of the Gaia Data Processing and Analysis Consortium (DPAC).
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissue and experimental platform. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies. There are numerous applications of this approach. Sets can be stored, shared and compared privately, among user-defined groups of investigators, and across all users.
InnateDB is a publicly available database of the genes, proteins, experimentally-verified interactions and signaling pathways involved in the innate immune response of humans, mice and bovines to microbial infection. The database captures an improved coverage of the innate immunity interactome by integrating known interactions and pathways from major public databases together with manually-curated data into a centralised resource. The database can be mined as a knowledgebase or used with our integrated bioinformatics and visualization tools for the systems level analysis of the innate immune response.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
The LOVD portal provides the LOVD software and access to a list of worldwide LOVD applications through the Locus Specific Database list and the List of Public LOVD installations. LOVD installations that have opted to be included in the global LOVD listing are covered by the overall LOVD querying service, which is based on an API.
The NASA Exoplanet Archive collects and serves public data to support the search for and characterization of extra-solar planets (exoplanets) and their host stars. The data include published light curves, images, spectra and parameters, and time-series data from surveys that aim to discover transiting exoplanets. Tools are provided to work with the data, particularly the display and analysis of transit data sets from Kepler and CoRoT. All data are validated by the Exoplanet Archive science staff and traced to their sources. The Exoplanet Archive is the U.S. data portal for the CoRoT mission.
Pandora is an open data platform devoted to the study of the human story. Data may be deposited from various disciplines and research topics that investigate humans from their early beginnings to the present, as well as their environmental context (e.g. archeology, anthropology, history, ancient DNA, isotopes, zooarchaeology, archaeobotany, and paleoenvironmental and paleoclimatic studies). Pandora allows autonomous data communities to self-manage their webspace and community membership. Data communities self-curate their data and other supporting resources. Datasets may be assigned a new DOI, and schema markup is employed to improve data findability. Pandora also allows links to datasets stored externally that already have assigned DOIs. Through this, it becomes possible to establish data networks devoted to specific topics that combine datasets stored either within Pandora or externally.
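The "schema markup" mentioned above generally refers to embedding a structured description of a dataset in its landing page so that search engines and aggregators can index it. As an illustration only (Pandora's actual markup and field values are not shown here), the following Python sketch builds a minimal schema.org "Dataset" record with hypothetical values and prints it as JSON-LD:

    import json

    # Hypothetical schema.org "Dataset" record; every value below is made up
    # for illustration and is not taken from Pandora.
    dataset_markup = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": "Example zooarchaeology dataset",
        "description": "Faunal measurements from a hypothetical excavation.",
        "identifier": "https://doi.org/10.xxxx/example",
        "license": "https://creativecommons.org/licenses/by/4.0/",
        "creator": {"@type": "Organization", "name": "Example Data Community"},
    }

    # The resulting JSON-LD string would typically be embedded in a
    # <script type="application/ld+json"> tag on the dataset's landing page.
    print(json.dumps(dataset_markup, indent=2))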
<<<!!!<<< All user content from this site has been deleted. Visit the SeedMeLab (https://seedmelab.org/) project as a new option for data hosting. >>>!!!>>> SeedMe is the result of a decade of onerous experience in preparing and sharing visualization results from supercomputing simulations with many researchers at different geographic locations using different operating systems. It has been a labor-intensive process, unsupported by useful tools and procedures for sharing information. SeedMe provides secure and easy-to-use functionality for efficiently and conveniently sharing results, aiming to create transformative impact across many scientific domains.
TERN provides open data, research and management tools, data infrastructure and site-based research equipment. The open-access ecosystem data is provided by the TERN Data Discovery Portal, see https://www.re3data.org/repository/r3d100012013
The ZBW Journal Data Archive is a service for editors of journals in economics and management. The Journal Data Archive offers authors of papers that contain empirical work, simulations or experiments the possibility to store the data, programs, and other details of their computations, to make these files publicly available, and to support the confirmability and replicability of their published research papers.