Found 16 result(s)
In February 1986 the NIST measurements were communicated to appropriate astronomers for use in ground-based testing and calibration programs for the GHRS, and in 1990 the NIST group published the new wavelengths for about 3000 lines in the Supplement Series of the Astrophysical Journal. The full report on the NIST measurements in the form of a complete and detailed atlas of the platinum/neon spectrum presented in this special issue of the Journal of Research of NIST will be highly useful to a wide range of scientists.
The European Monitoring and Evaluation Programme (EMEP) is a scientifically based and policy-driven programme under the Convention on Long-range Transboundary Air Pollution (CLRTAP) for international co-operation to solve transboundary air pollution problems.
HyperLeda is an information system for astronomy: it consists of a database and tools to process the data according to the user's requirements. The scientific goal that motivates the development of HyperLeda is the study of the physics and evolution of galaxies. LEDA was created more than 20 years ago, in 1983, and became HyperLeda after merging with Hypercat in 2000.
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts – which were formerly sent based only on event magnitude and location, or population exposure to shaking – now will also be generated based on the estimated range of fatalities and economic losses. PAGER uses these earthquake parameters to calculate estimates of ground shaking by using the methodology and software developed for ShakeMaps. ShakeMap sites provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. These maps are used by federal, state, and local organizations, both public and private, for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning.
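As a rough illustration of the exposure-based approach described above, the sketch below multiplies the population exposed at each shaking-intensity level by an assumed fatality rate for that level. This is a deliberate simplification, not PAGER's actual empirical, country-specific loss models; the intensity bins, rates, and exposure counts are hypothetical placeholders.

```python
# Simplified sketch of an exposure-based fatality estimate in the spirit of PAGER.
# The fatality rates and exposure counts below are illustrative assumptions,
# not the calibrated, country-specific models the real system uses.

# Population exposed at each Modified Mercalli Intensity (MMI) level,
# e.g. derived from a ShakeMap grid overlaid on a population raster.
exposure_by_mmi = {6: 2_500_000, 7: 800_000, 8: 120_000, 9: 15_000}

# Hypothetical fatality rate per person exposed at each MMI level.
fatality_rate_by_mmi = {6: 1e-6, 7: 1e-5, 8: 5e-4, 9: 5e-3}

def estimated_fatalities(exposure, rates):
    """Sum expected fatalities over intensity bins: exposure x rate."""
    return sum(pop * rates.get(mmi, 0.0) for mmi, pop in exposure.items())

if __name__ == "__main__":
    total = estimated_fatalities(exposure_by_mmi, fatality_rate_by_mmi)
    print(f"Estimated fatalities: {total:,.0f}")  # feeds the alert decision
```

An estimate of this kind, together with a comparable calculation for economic losses, is what PAGER translates into its colour-coded alert levels.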
>>>!!!<<< This site is going away on April 1, 2021. General access to the site has been disabled and community users will see an error upon login. >>>!!!<<< Socrata’s cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
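Socrata portals typically expose each dataset through the Socrata Open Data API (SODA), which returns rows as JSON from a /resource/&lt;dataset-id&gt;.json endpoint and accepts SoQL query parameters. The sketch below shows the general pattern; the domain, dataset identifier, and column names are hypothetical placeholders, and the optional X-App-Token header simply reduces throttling.

```python
# Minimal sketch of reading a dataset through the Socrata Open Data API (SODA).
# "data.example.gov" and "abcd-1234" are placeholder values for illustration.
import requests

BASE = "https://data.example.gov/resource/abcd-1234.json"  # hypothetical dataset
params = {
    "$select": "name, category, amount",  # SoQL projection (placeholder columns)
    "$where": "amount > 1000",            # SoQL filter
    "$limit": 50,
}
headers = {"X-App-Token": "YOUR_APP_TOKEN"}  # optional, avoids aggressive throttling

resp = requests.get(BASE, params=params, headers=headers, timeout=30)
resp.raise_for_status()
for row in resp.json():   # each row is a dict keyed by column name
    print(row)
```

SoQL also supports $order and $offset, so paging through a larger dataset is a matter of incrementing $offset between requests.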
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset, website, and available resources over the past five years, and the site has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
The Network for the Detection of Atmospheric Composition Change (NDACC), a major contributor to the worldwide atmospheric research effort, consists of a set of globally distributed research stations providing consistent, standardized, long-term measurements of atmospheric trace gases, particles, spectral UV radiation reaching the Earth's surface, and physical parameters, centered around a core set of research priorities.
The Mexican Health and Aging Study (MHAS) started as a prospective panel study of health and aging in Mexico. MHAS is nationally representative of the 13 million Mexicans born prior to 1951. The survey has national and urban/rural representation. The baseline survey, in 2001, included a nationally representative sample of Mexicans aged 50 and over and their spouse/partners regardless of their age. A direct interview was sought with each individual and proxy interviews were obtained when poor health or temporary absence precluded a direct interview. The sample was distributed in all 32 states of the country in urban and rural areas. Households in the six states which account for 40% of all migrants to the U.S. were over-sampled. A sub-sample was selected to obtain anthropometric measures.
The CPTAC Data Portal is the centralized repository for the dissemination of proteomic data collected by the Proteome Characterization Centers (PCCs) for the CPTAC program. The portal also hosts analyses of the mass spectrometry data (mapping of spectra to peptide sequences and protein identification) from the PCCs and from a CPTAC-sponsored common data analysis pipeline (CDAP).
The Leicester Database and Archive Service (LEDAS) is an easy-to-use online astronomical database and archive access service, dealing mainly with data from high-energy astrophysics missions but also providing full database functionality for over 200 astronomical catalogues from ground-based observations and space missions. LEDAS also allows access to images, spectra and light curves in graphics, HDS and FITS formats, as well as access to raw and processed event data. LEDAS provides the primary means of access for the UK astronomical community to the ROSAT Public Data Archive, the ASCA Public Data Archive and the Ginga Products Archive via its Archive Network Interface, ARNIE.
The Ocean Data and Information System provides information on physical, chemical, biological and geological parameters of the ocean and coasts over spatial and temporal domains that is vital for both research and operational oceanography. In-situ and remote sensing data are included. The Ocean Information Bank is supported by the data received from Ocean Observing Systems in the Indian Ocean (both in-situ platforms and satellites) as well as by a chain of Marine Data Centres. Ocean and coastal measurements are available. Data products are accessible through various portals on the site and are largely organized by data type (in situ or remote sensing) and then by parameter.