
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
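For illustration, here are a few example queries built from this syntax (the search terms are hypothetical, not drawn from the results below):
  • genom* matches genome, genomic, genomics, etc.
  • "protein interaction" +human -mouse finds the exact phrase together with human while excluding mouse
  • (mouse | rat) +genome requires genome and at least one of mouse or rat
  • barcode~1 also matches terms within one edit of the word, such as barcodes
  • "sequence archive"~2 matches the phrase with up to two words of slop between its terms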
Found 46 result(s)
MalaCards is an integrated database of human maladies and their annotations, modeled on the architecture and richness of the popular GeneCards database of human genes. MalaCards mines and merges varied web data sources to generate a computerized web card for each human disease. Each MalaCard contains disease-specific prioritized annotative information, as well as links between associated diseases, leveraging the GeneCards relational database, search engine, and GeneDecks set-distillation tool. As proofs of concept of the search/distill/infer pipeline, we find expected elucidations as well as potentially novel ones.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
virus mentha archives evidence about viral interactions collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. virus mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles; however, no single database has sufficient literature coverage to offer a complete resource for investigating "the interactome". virus mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns to each interaction a reliability score that takes into account all of the supporting evidence. virus mentha offers direct access to viral families such as Orthomyxoviridae, Orthoretrovirinae and Herpesviridae, and it offers the unique possibility of searching by host organism. The website and the graphical application are designed to make the data stored in virus mentha accessible and analysable to all users. virus mentha supersedes VirusMINT. The source databases are MINT, DIP, IntAct, MatrixDB and BioGRID.
DDBJ (DNA Data Bank of Japan) is the sole nucleotide sequence data bank in Asia that is officially certified to collect nucleotide sequences from researchers and to issue internationally recognized accession numbers to data submitters. Since the collected data are exchanged with EMBL-Bank/EBI (European Bioinformatics Institute) and GenBank/NCBI (National Center for Biotechnology Information) on a daily basis, the three data banks share virtually the same data at any given time. The virtually unified database is called the INSD (International Nucleotide Sequence Database). DDBJ collects sequence data mainly from Japanese researchers, but accepts data and issues accession numbers to researchers in any other country.
The Brain Transcriptome Database (BrainTx) project aims to create an integrated platform to visualize and analyze its original transcriptome data and publicly accessible transcriptome data related to the genetics underlying the stages and states of brain development, function, and dysfunction.
The European Mouse Mutant Archive – EMMA is a non-profit repository for the collection, archiving (via cryopreservation) and distribution of relevant mutant mouse strains essential for basic biomedical research. The laboratory mouse is the most important mammalian model for studying genetic and multi-factorial diseases in man. The comprehensive physical and data resources of EMMA support basic biomedical and preclinical research, and the available research tools and mouse models of human disease offer the opportunity to develop a better understanding of molecular disease mechanisms and may provide the foundation for the development of diagnostic, prognostic and therapeutic strategies.
The HUGO Gene Nomenclature Committee (HGNC) has assigned unique gene symbols and names to over 35,000 human loci, of which around 19,000 are protein-coding. This curated online repository of HGNC-approved gene nomenclature and associated resources includes links to genomic, proteomic and phenotypic information, as well as dedicated gene family pages.
OFFLINE: A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
The Barcode of Life Data Systems (BOLD) provides DNA barcode data. BOLD's online workbench supports data validation, annotation, and publication for specimen, distributional, and molecular data. The platform consists of four main modules: a data portal, a database of barcode clusters, an educational portal, and a data collection workbench. BOLD is the go-to site for DNA-based identification. As the central informatics platform for DNA barcoding, BOLD plays a crucial role in assimilating and organizing data gathered by the international barcode research community. Two iBOL (International Barcode of Life) Working Groups are supporting the ongoing development of BOLD.
The ENCODE Encyclopedia organizes the most salient analysis products into annotations and provides tools to search and visualize them. The Encyclopedia has two levels of annotations: integrative-level annotations, which integrate multiple types of experimental data together with ground-level annotations; and ground-level annotations, which are derived directly from the experimental data and are typically produced by uniform processing pipelines.
Genomic Expression Archive (GEA) is a public database of functional genomics data such as gene expression, epigenetics and genotyping SNP arrays. Both microarray- and sequence-based data are accepted in the MAGE-TAB format, in compliance with the MIAME and MINSEQE guidelines, respectively. GEA issues accession numbers: E-GEAD-n for experiments and A-GEAD-n for array designs. Data exchange between GEA and EBI ArrayExpress is planned.
The Genomic Observatories Meta-Database (GEOME) is a web-based database that captures the who, what, where, and when of biological samples and associated genetic sequences. GEOME helps users ensure that the metadata from their biological samples are findable, accessible, interoperable, and reusable; improve the quality of their data and comply with global data standards; and integrate with R, ease publication to NCBI's Sequence Read Archive, and work with an associated LIMS. The initial use case for GEOME came from the Diversity of the Indo-Pacific Network (DIPnet) resource.
mentha archives evidence collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. The aggregated data form an interactome that includes many organisms. mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles; however, no single database has sufficient literature coverage to offer a complete resource for investigating "the interactome". mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns to each interaction a reliability score that takes into account all of the supporting evidence. mentha offers eight interactomes (Homo sapiens, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Escherichia coli K12, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae) plus a global network that comprises every organism, including those not mentioned. The website and the graphical application are designed to make the data stored in mentha accessible and analysable to all users. The source databases are MINT, IntAct, DIP, MatrixDB and BioGRID.
The ISSAID website gathers resources related to the systemic autoinflammatory diseases in order to facilitate contacts between interested physicians and researchers. The website provides support to share and rapidly disseminate information, thoughts, feelings and experiences to improve the quality of life of patients and families affected by systemic autoinflammatory diseases, and promote advances in the search for causes and cures.
The IDR makes publicly available datasets that have never previously been accessible, allowing the community to search, view, mine and even process and analyze large, complex, multidimensional life sciences image data. Sharing data promotes the validation of experimental methods and scientific conclusions, enables comparison with new data obtained by the global scientific community, and enables data reuse by developers of new analysis and processing tools.
WFCC-MIRCEN World Data Centre for Microorganisms (WDCM) provides a comprehensive directory of culture collections, databases on microbes and cell lines, and a gateway to biodiversity, molecular biology and genome projects. The WFCC is a Multidisciplinary Commission of the International Union of Biological Sciences (IUBS) and a Federation within the International Union of Microbiological Societies (IUMS). The WFCC is concerned with the collection, authentication, maintenance and distribution of cultures of microorganisms and cultured cells. Its aim is to promote and support the establishment of culture collections and related services, to provide liaison and set up an information network between the collections and their users, to organise workshops and conferences, to produce publications and newsletters, and to work to ensure the long-term perpetuation of important collections.
I2D (Interologous Interaction Database) is an online database of known and predicted mammalian and eukaryotic protein-protein interactions. It has been built by mapping high-throughput (HTP) data between species; thus, until experimentally verified, these interactions should be considered "predictions". It remains one of the most comprehensive sources of known and predicted eukaryotic PPIs. I2D includes data for S. cerevisiae, C. elegans, D. melanogaster, R. norvegicus, M. musculus, and H. sapiens.
InterPro collects information about protein sequence analysis and classification, providing access to a database of predictive protein signatures used for the classification and automatic annotation of proteins and genomes. Sequences in InterPro are classified at the superfamily, family, and subfamily levels. InterPro predicts the occurrence of functional domains, repeats, and important sites, and adds in-depth annotation, such as GO terms, to the protein signatures.
The CGIAR Research Program No. 6 (CRP6): Forests, Trees and Agroforestry: Livelihoods, Landscapes and Governance aims to enhance the management and use of forests, agroforestry and tree genetic resources across the landscape, from farms to forests.
UniProtKB/Swiss-Prot is the manually annotated and reviewed section of the UniProt Knowledgebase (UniProtKB). It is a high-quality, annotated and non-redundant protein sequence database that brings together experimental results, computed features and scientific conclusions. Since 2002, it has been maintained by the UniProt consortium and is accessible via the UniProt website.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
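As a rough sketch of how the three layers of this data model fit together, the following Python fragment uses hypothetical class and field names; it is not ENA's actual schema or API, only an illustration of the grouping described above.

    from dataclasses import dataclass
    from typing import List

    # Hypothetical sketch; names are illustrative, not ENA's real schema.

    @dataclass
    class InputInformation:          # sample, experimental setup, machine configuration
        sample: str
        experimental_setup: str
        machine_configuration: str

    @dataclass
    class MachineOutput:             # sequence traces, reads and quality scores
        reads: List[str]
        quality_scores: List[List[int]]

    @dataclass
    class InterpretedInformation:    # assembly, mapping, functional annotation
        assembly: str
        mapping: str
        functional_annotation: str

    @dataclass
    class SequencingWorkflowRecord:  # one record tying the three layers together
        inputs: InputInformation
        machine_output: MachineOutput
        interpretation: InterpretedInformation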
InnateDB is a publicly available database of the genes, proteins, experimentally-verified interactions and signaling pathways involved in the innate immune response of humans, mice and bovines to microbial infection. The database captures an improved coverage of the innate immunity interactome by integrating known interactions and pathways from major public databases together with manually-curated data into a centralised resource. The database can be mined as a knowledgebase or used with our integrated bioinformatics and visualization tools for the systems level analysis of the innate immune response.