Filter
Reset all

Subjects

Content Types

Countries

AID systems

API

Certificates

Data access

Data access restrictions

Database access

Database access restrictions

Database licenses

Data licenses

Data upload

Data upload restrictions

Enhanced publication

Institution responsibility type

Institution type

Keywords

Metadata standards

PID systems

Provider types

Quality management

Repository languages

Software

Syndications

Repository types

Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
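The operators above can be combined in a single query. As a minimal sketch, the query strings below are illustrative examples of the syntax (the terms themselves are made up, not taken from this page):

```python
# Hypothetical example queries demonstrating each search operator.
# Parsing happens server-side; these strings only illustrate the syntax.
queries = {
    "wildcard": "climat*",                    # matches climate, climatology, ...
    "phrase": '"land cover"',                 # exact phrase
    "and": "ocean + temperature",             # both terms required (default)
    "or": "GPS | GNSS",                       # either term
    "not": "genome - human",                  # exclude a term
    "grouping": "(ocean | marine) + climate", # parentheses set precedence
    "fuzzy": "ozone~1",                       # edit distance (fuzziness) of 1
    "slop": '"sea level"~2',                  # phrase with slop of 2
}

for name, query in queries.items():
    print(f"{name}: {query}")
```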
Found 48 result(s)
[Note, 2020-10-06: the data is being migrated to another system; the repository is no longer available and this record is outdated.] Due to the changes at the individual IGS analysis centers during these years, the resulting time series of global geodetic parameters are inhomogeneous and inconsistent. A geophysical interpretation of these long series and the realization of a high-accuracy global reference frame are therefore difficult and questionable. The GPS reprocessing project GPS-PDR (Potsdam Dresden Reprocessing), initiated by TU München and TU Dresden and continued by GFZ Potsdam and TU Dresden, provides selected products of a homogeneously reprocessed global GPS network such as GPS satellite orbits and Earth rotation parameters.
WorldClim is a set of global climate layers (climate grids) with a spatial resolution of about 1 square kilometer. The data can be used for mapping and spatial modeling in a GIS or with other computer programs.
[Note, 2019-01: the Global Land Cover Facility has gone offline (see https://spatialreserves.wordpress.com/2019/01/07/global-land-cover-facility-goes-offline/); http://www.landcover.org is no longer accessible.] The Global Land Cover Facility (GLCF) provides earth science data and products to help everyone better understand global environmental systems. In particular, the GLCF develops and distributes remotely sensed satellite data and products that explain land cover from the local to global scales.
coastDat is a model-based databank developed mainly for the assessment of long-term changes in data-sparse regions. A sequence of numerical models is employed to reconstruct all aspects of marine climate (such as storms, waves, surges, etc.) over many decades, relying only on large-scale information such as large-scale atmospheric conditions or bathymetry.
The ENCODE Encyclopedia organizes the most salient analysis products into annotations, and provides tools to search and visualize them. The Encyclopedia has two levels of annotations: Integrative-level annotations integrate multiple types of experimental data and ground level annotations. Ground-level annotations are derived directly from the experimental data, typically produced by uniform processing pipelines.
The World Data Center for Remote Sensing of the Atmosphere, WDC-RSAT, offers scientists and the general public free access (in the sense of a “one-stop shop”) to a continuously growing collection of atmosphere-related satellite-based data sets (ranging from raw to value added data), information products and services. Focus is on atmospheric trace gases, aerosols, dynamics, radiation, and cloud physical parameters. Complementary information and data on surface parameters (e.g. vegetation index, surface temperatures) is also provided. This is achieved either by giving access to data stored at the data center or by acting as a portal containing links to other providers.
The Brain Transcriptome Database (BrainTx) project aims to create an integrated platform to visualize and analyze our original transcriptome data and publicly accessible transcriptome data related to the genetics that underlie the development, function, and dysfunction stages and states of the brain.
The Portal aims to serve as a unique access point to timely, comprehensive migration statistics and reliable information about migration data globally. The site is designed to help policy makers, national statistics officers, journalists and the general public interested in the field of migration to navigate the increasingly complex landscape of international migration data, currently scattered across different organisations and agencies. Especially in critical times, such as those faced today, it is essential to ensure that responses to migration are based on sound facts and accurate analysis. By making the evidence about migration issues accessible and easy to understand, the Portal aims to contribute to a more informed public debate. The Portal was launched in December 2017 and is managed and developed by IOM’s Global Migration Data Analysis Centre (GMDAC), with the guidance of its Advisory Board, and was supported in its conception by the Economist Intelligence Unit (EIU). The Portal is supported financially by the Governments of Germany, the United States of America and the UK Department for International Development (DFID).
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
SCISAT, also known as the Atmospheric Chemistry Experiment (ACE), is a Canadian Space Agency small satellite mission for remote sensing of the Earth's atmosphere using solar occultation. The satellite was launched on 12 August 2003 and continues to function perfectly. The primary mission goal is to improve our understanding of the chemical and dynamical processes that control the distribution of ozone in the stratosphere and upper troposphere, particularly in the Arctic. The high precision and accuracy of solar occultation makes SCISAT useful for monitoring changes in atmospheric composition and the validation of other satellite instruments. The satellite carries two instruments. A high resolution (0.02 cm⁻¹) infrared Fourier transform spectrometer (FTS) operating from 2 to 13 microns (750-4400 cm⁻¹) measures the vertical distribution of trace gases, particles and temperature. This provides vertical profiles of atmospheric constituents including essentially all of the major species associated with ozone chemistry. Aerosols and clouds are monitored using the extinction of solar radiation at 1.02 and 0.525 microns as measured by two filtered imagers. The vertical resolution of the FTS is about 3-4 km from the cloud tops up to about 150 km. Peter Bernath of the University of Waterloo is the principal investigator. A dual optical spectrograph called MAESTRO (Measurement of Aerosol Extinction in the Stratosphere and Troposphere Retrieved by Occultation) covers the 400-1030 nm spectral region and measures primarily ozone, nitrogen dioxide and aerosol/cloud extinction. It has a vertical resolution of about 1-2 km. Tom McElroy of Environment and Climate Change Canada is the principal investigator. ACE data are freely available from the University of Waterloo website. SCISAT was designated an ESA Third Party Mission in 2005. ACE data are also freely available through an ESA portal.
The Sloan Digital Sky Survey (SDSS) is one of the most ambitious and influential surveys in the history of astronomy. Over its years of operations (SDSS-I, 2000-2005; SDSS-II, 2005-2008; SDSS-III, 2008-2014; SDSS-IV, 2013-ongoing), it obtained deep, multi-color images covering more than a quarter of the sky and created 3-dimensional maps containing more than 930,000 galaxies and more than 120,000 quasars. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), Max-Planck-Institut für Astronomie (MPIA Heidelberg), National Astronomical Observatory of China, New Mexico State University, New York University, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Portsmouth, University of Utah, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
mentha archives evidence collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. The aggregated data form an interactome that includes many organisms. mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no database alone has sufficient literature coverage to offer a complete resource to investigate "the interactome". mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence. mentha offers eight interactomes (Homo sapiens, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Escherichia coli K12, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae) plus a global network that comprises every organism, including those not mentioned. The website and the graphical application are designed to make the data stored in mentha accessible and analysable by all users. Source databases are: MINT, IntAct, DIP, MatrixDB and BioGRID.
RADAM portal is an interface to the network of RADAM (RADiation DAMage) Databases collecting data on interactions of ions, electrons, positrons and photons with biomolecular systems, on radiobiological effects and relevant phenomena occurring at different time, spatial and energy scales in irradiated targets during and after the irradiation. This networking system has been created by the Consortium of COST Action MP1002 (Nano-IBCT: Nano-scale insights into Ion Beam Cancer Therapy) during 2011-2014 using the Virtual Atomic and Molecular Data Center (VAMDC) standards.
The Polinsky Language Sciences Lab at Harvard University is a linguistics lab that examines questions of language structure and its effect on the ways in which people use and process language in real time. We engage in linguistic and interdisciplinary research projects ourselves; offer linguistic research capabilities for undergraduate and graduate students, faculty, and visitors; and build relationships with the linguistic communities in which we do our research. We are interested in a broad range of issues pertaining to syntax, interfaces, and cross-linguistic variation. We place a particular emphasis on novel experimental evidence that facilitates the construction of linguistic theory. We have a strong cross-linguistic focus, drawing upon English, Russian, Chinese, Korean, Mayan languages, Basque, Austronesian languages, languages of the Caucasus, and others. We believe that challenging existing theories with data from as broad a range of languages as possible is a crucial component of the successful development of linguistic theory. We investigate both fluent speakers and heritage speakers—those who grew up hearing or speaking a particular language but who are now more fluent in a different, societally dominant language. Heritage languages, a novel field of linguistic inquiry, are important because they provide new insights into processes of linguistic development and attrition in general, thus increasing our understanding of the human capacity to maintain and acquire language. Understanding language use and processing in real time and how children acquire language helps us improve language study and pedagogy, which in turn improves communication across the globe. Although our lab does not specialize in language acquisition, we have conducted some studies of acquisition of lesser-studied languages and heritage languages, with the purpose of comparing heritage speakers to adults.
The National Archives and Records Administration (NARA) is the nation's record keeper. Of all documents and materials created in the course of business conducted by the United States Federal government, only 1%-3% are so important for legal or historical reasons that they are kept by us forever. Those valuable records are preserved and are available to you, whether you want to see if they contain clues about your family’s history, need to prove a veteran’s military service, or are researching an historical topic that interests you.
Protectedplanet.net combines crowd sourcing and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.
The Square Kilometre Array (SKA) is a radio telescope with around one million square metres of collecting area, designed to study the Universe with unprecedented speed and sensitivity. The SKA is not a single telescope, but a collection of various types of antennas, called an array, to be spread over long distances. The SKA will be used to answer fundamental questions of science and about the laws of nature, such as: how did the Universe, and the stars and galaxies contained in it, form and evolve? Was Einstein’s theory of relativity correct? What is the nature of ‘dark matter’ and ‘dark energy’? What is the origin of cosmic magnetism? Is there life somewhere else in the Universe?
World Data Center for Oceanography serves to store and provide to users data on physical, chemical and dynamical parameters of the global ocean, as well as oceanography-related papers and publications, which either come from other countries through international exchange or are provided to the international exchange by organizations of the Russian Federation.
The Radio Telescope Data Center (RTDC) reduces, archives, and makes available on its web site data from SMA and the CfA Millimeter-wave Telescope. The whole-Galaxy CO survey presented in Dame et al. (2001) is a composite of 37 separate surveys. The data from most of these surveys can be accessed. Larger composites of these surveys are available separately.
The Bremen Core Repository (BCR), for International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP), and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, Mediterranean and Black Seas, and Arctic Ocean, is operated at the University of Bremen within the framework of the German participation in IODP. It is one of three IODP repositories (besides the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC), Japan). One of the scientific goals of IODP is to research the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
The twin GRACE satellites were launched on March 17, 2002. Since that time, the GRACE Science Data System (SDS) has produced and distributed estimates of the Earth gravity field on an ongoing basis. These estimates, in conjunction with other data and models, have provided observations of terrestrial water storage changes, ice-mass variations, ocean bottom pressure changes and sea-level variations. This portal, together with PODAAC, is responsible for the distribution of the data and documentation for the GRACE project.
The Africa Health Research Institute (AHRI) has published its updated analytical datasets for 2016. The datasets cover socio-economic, education and employment information for individuals and households in AHRI’s population research area in rural northern KwaZulu-Natal. The datasets also include details on the migration patterns of the individuals and households who migrated into and out of the surveillance area as well as data on probable causes of death for individuals who passed away. Data collection for the 2016 individual interviews – which involves a dried blood spot sample being taken – is still in progress, and therefore datasets on HIV status and General Health only go up to 2015 for now. Over the past 16 years researchers have developed an extensive longitudinal database of demographic, social, economic, clinical and laboratory information about people over the age of 15 living in the AHRI population research area. During this time researchers have followed more than 160 000 people, of which 92 000 are still in the programme.
[Note: the repository is no longer available; this record is outdated. The Matter Lab provides the archived 2012 and 2013 database versions at https://www.matter.toronto.edu/basic-content-page/data-download. Data are linked from the World Community Grid - The Clean Energy Project (see https://www.worldcommunitygrid.org/research/cep1/overview.do) and on figshare (https://figshare.com/articles/dataset/moldata_csv/9640427).] The Clean Energy Project Database (CEPDB) is a massive reference database for organic semiconductors with a particular emphasis on photovoltaic applications. It was created to store and provide access to data from computational as well as experimental studies, on both known and virtual compounds. It is a free and open resource designed to support researchers in the field of organic electronics in their scientific pursuits. The CEPDB was established as part of the Harvard Clean Energy Project (CEP), a virtual high-throughput screening initiative to identify promising new candidates for the next generation of carbon-based solar cell materials.