Filter

Subjects

Content Types

Countries

AID systems

API

Certificates

Data access

Data access restrictions

Database access

Database access restrictions

Database licenses

Data licenses

Data upload

Data upload restrictions

Enhanced publication

Institution responsibility type

Institution type

Keywords

Metadata standards

PID systems

Provider types

Quality management

Repository languages

Software

Syndications

Repository types

Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) set grouping priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
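The operators above can be combined. A few illustrative queries (the search terms are hypothetical examples, not drawn from the directory):

```
text*                       wildcard: matches text, texts, textual, …
"research data"             exact phrase
corpus + german             both terms required (AND, the default)
corpus | corpora            either term matches (OR)
data - genome               matches "data" while excluding "genome"
(corpus | text) + german    parentheses set grouping priority
linguistcs~1                fuzzy match within edit distance 1
"text archive"~2            phrase match allowing a slop of 2
```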
Found 1127 result(s)
Språkbanken was established in 1975 as a national center located in the Faculty of Arts, University of Gothenburg. Allén's groundbreaking corpus linguistic research resulted in the creation of one of the first large electronic text corpora in a language other than English, comprising one million words of newspaper text. The task of Språkbanken is to collect, develop, and store (Swedish) text corpora, and to make linguistic data extracted from the corpora available to researchers and to the public.
The Repository of the University of Wroclaw is an institutional repository that archives and makes available scientific as well as research and development materials created by the employees and postgraduate students of the University of Wroclaw (and, selectively, by its students) or issued at the University of Wroclaw. These materials include, inter alia, dissertations, postdoctoral theses, selected undergraduate and postgraduate theses, research articles, conference papers, monographs or their chapters, didactic materials, posters, and research data. The repository is organized by fields of knowledge, in accordance with the areas represented at the University within its organizational units, such as departments, institutes, and other interfaculty units; its structure is hierarchical, based on groups of subjects covering a variety of collections.
The Australian National Corpus collates and provides access to assorted examples of Australian English text, transcriptions, audio and audio-visual materials. Text analysis tools are embedded in the interface allowing analysis and downloads in *.CSV format.
The University of Oxford Text Archive develops, collects, catalogues and preserves electronic literary and linguistic resources for use in Higher Education, in research, teaching and learning. We also give advice on the creation and use of these resources, and are involved in the development of standards and infrastructure for electronic language resources.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full-texts are indexed linguistically and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitisation was made from the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full-text was encoded in conformity with the XML standard TEI P5. The next stages complete the linguistic analysis, i.e. the text is tokenised, lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA Corpus, it also offers valuable source-texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
The Text Laboratory provides assistance with databases, word lists, corpora and tailored solutions for language technology. We also work on research and development projects alone or in cooperation with others - locally, nationally and internationally. Services and tools: Word and frequency lists, Written corpora, Speech corpora, Multilingual corpora, Databases, Glossa Search Tool, The Oslo-Bergen Tagger, GREI grammar games, Audio files: dialects from Norway and America etc., Nordic Atlas of Language Structures (NALS) Journal, Norwegian in America, NEALT, Ethiopian Language Technology, Access to Corpora
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia, and Web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
CLARIN-UK is a consortium of centres of expertise involved in research and resource creation involving digital language data and tools. The consortium includes the national library, and academic departments and university centres in linguistics, languages, literature and computer science.
The Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) is a CLARIN partner institution and has been an officially certified CLARIN service center since June 20th, 2013. The CLARIN center at the BBAW focuses on historical text corpora (predominantly provided by the 'Deutsches Textarchiv'/German Text Archive, DTA) as well as on lexical resources (e.g. dictionaries provided by the 'Digitales Wörterbuch der Deutschen Sprache'/Digital Dictionary of the German Language, DWDS).
In collaboration with other centres in the Text+ consortium and in the CLARIN infrastructure, CLARIND-UdS enables eHumanities by providing a service for hosting and processing language resources (notably corpora) for members of the research community. The CLARIND-UdS centre thus contributes to reducing the fragmentation of language resources by assisting members of the research community in preparing language materials in such a way that easy discovery is ensured, interchange is facilitated, and preservation is enabled: materials are enriched with meta-information, transformed into sustainable formats, and hosted. We have an explicit mission to archive language resources, especially multilingual corpora (parallel, comparable) and corpora of specific registers, collected both by associated researchers and by researchers not affiliated with us.
The Tree of Life Web Project is a collection of information about biodiversity compiled collaboratively by hundreds of expert and amateur contributors. Its goal is to contain a page with pictures, text, and other information for every species and for each group of organisms, living or extinct. Connections between Tree of Life web pages follow phylogenetic branching patterns between groups of organisms, so visitors can browse the hierarchy of life and learn about phylogeny and evolution as well as the characteristics of individual groups.
Jason is a remote-controlled deep-diving vessel that gives shipboard scientists immediate, real-time access to the sea floor. Instead of making short, expensive dives in a submarine, scientists can stay on deck and guide Jason as deep as 6,500 meters (4 miles) to explore for days on end. Jason is a type of remotely operated vehicle (ROV), a free-swimming vessel connected by a long fiberoptic tether to its research ship. The 10-km (6 mile) tether delivers power and instructions to Jason and fetches data from it.
The DOE Data Explorer (DDE) is an information tool to help you locate DOE's collections of data and non-text information and, at the same time, retrieve individual datasets within some of those collections. It includes collection citations prepared by the Office of Scientific and Technical Information, as well as citations for individual datasets submitted from DOE Data Centers and other organizations.
The World Data Centre section provides software and data catalogue information, and data produced by IPS Radio and Space Services over the past few decades. You can download data files, plot graphs from data files, check data availability, and retrieve data sets and station information.
A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z ≤ 100), at energies from 1 keV to 100 GeV.
The TextGrid Repository is a digital preservation archive for human sciences research data. It offers an extensive searchable and adaptable corpus of XML/TEI encoded texts, pictures, and databases. Among the continuously growing corpus is the Digital Library of TextGrid, which consists of works of more than 600 authors of fiction (prose, verse, and drama) as well as nonfiction, from the beginning of the printing press to the early 20th century, written in or translated into German. The files are saved in different output formats (XML, ePub, PDF), published, and made searchable. Different tools, e.g. viewers or quantitative text-analysis tools, can be used for visualization or to further research the texts. The TextGrid Repository is part of the virtual research environment TextGrid, which, besides digital preservation, also offers open-source software for the collaborative creation and publication of, e.g., digital editions based on XML/TEI.
SWE-CLARIN is a national node in CLARIN (Common Language Resources and Technology Infrastructure), an ESFRI initiative to build an infrastructure for e-science in the humanities and social sciences. SWE-CLARIN makes language-based materials available as research data, using advanced processing tools and other resources. One basic idea is that the increasing amount of text and speech, contemporary and historical, available as digital research material enables new forms of e-science and new ways to tackle old research questions.
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
Note: with the implementation of GlyTouCan (https://glytoucan.org/), the mission of GlycomeDB has come to an end. With GlycomeDB, it is possible to get an overview of all carbohydrate structures in the different databases and to crosslink common structures across them. Scientists are able to search for a particular structure in the meta-database and get information about the occurrence of this structure in the five carbohydrate structure databases.
The Alvin Frame-Grabber system provides the NDSF community on-line access to Alvin's video imagery co-registered with vehicle navigation and attitude data for shipboard analysis, planning deep submergence research cruises, and synoptic review of data post-cruise. The system is built upon the methodology and technology developed for the JasonII Virtual Control Van and a prototype system that was deployed on 13 Alvin dives in the East Pacific Rise and the Galapagos (AT7-12, AT7-13). The deployed prototype system was extremely valuable in facilitating real-time dive planning, review, and shipboard analysis.
The Data Hub is a community-run catalogue of useful sets of data on the Internet. You can collect links here to data from around the web for yourself and others to use, or search for data that others have collected. Depending on the type of data (and its conditions of use), the Data Hub may also be able to store a copy of the data or host it in a database, and provide some basic visualisation tools.
4DGenome is a public database that archives and disseminates chromatin interaction data. Currently, 4DGenome contains over 8,038,247 interactions curated from both experimental studies (high throughput and individual studies) and computational predictions. It covers five organisms, Homo sapiens, Mus musculus, Drosophila melanogaster, Plasmodium falciparum, and Saccharomyces cerevisiae.
The Government is releasing public data to help people understand how government works and how policies are made. Some of this data is already available, but data.gov.uk brings it together in one searchable website. Making this data easily available means it will be easier for people to make decisions and suggestions about government policies based on detailed information.
The Astromaterials Data System (AstroMat) is a data infrastructure to store, curate, and provide access to laboratory data acquired on samples curated in the Astromaterials Collection of the Johnson Space Center. AstroMat is developed and operated at the Lamont-Doherty Earth Observatory of Columbia University and funded by NASA.