June 10-13, 2019 | Hamburg, Germany
Overview and details of the sessions of this conference.
Archiving and collecting Arctic datasets: Open Arctic Research Index
UiT - The Arctic University of Norway
The number of digital repositories containing publications and datasets on the Arctic region is growing enormously. Users want information relevant to their query in a minimum of time, yet scholars are compelled to search each repository individually to find the documents they need.
Open Arctic Research Index (Open ARI), a planned service at UiT - The Arctic University of Norway, aims to collect and index all openly available Arctic-related publications and datasets in a single open access metadata index. Through a simple search box on the index, users can search all these repositories and archives in a single operation.
The project investigates how such a service can support researchers by making results from Arctic research more visible and easier to retrieve, based on a standardized, interdisciplinary metadata set. The project started by clarifying the need for a new technical solution to collect all the published material, using algorithms that allow the best possible filtering of relevant records. We have identified 113 potential national and international collaborators who could feed Open ARI with content. The team will analyze the opportunities for success and the challenges in order to plan a full-scale management model.
An Agricultural Research e-Seeker to find, explore and visualize open repository resources
1International Livestock Research Institute, Kenya; 2International Center for Agricultural Research in the Dry Areas, Egypt; 3CodeObia, Jordan; 4International Livestock Research Institute, Ethiopia
Since 2010, several CGIAR centres, programs and partners have joined forces to enhance and open up access to their knowledge, information and data products through shared open repositories – mainly using DSpace, Dataverse and CKAN. These repositories now contain tens of thousands of items from many organizations and on diverse topics important to developing countries. In 2018, driven by an aspiration to offer more value from all this content, several of these partners invested in an aggregation tool – the agricultural research e-seeker – to facilitate integrated insights and intelligence and provide new ways to access the content across these different platforms. The poster describes and presents the AReS tool (which will be released to repository communities in the first half of 2019), illustrating how it enhances content discovery for users, supports institutional insights, visualizes content around different metadata filters, and generates ‘snapshots’ of diverse knowledge products for different users and use cases. The poster will explain the technical, organizational and knowledge management approaches used to build this tool.
Finding Citations on Lume Institutional Repository
Federal University of Rio Grande do Sul, Brazil
This ongoing work aims to provide citation information to users of DSpace repositories. Its scope is Lume’s journal-article community - which has knowledge areas as collections - and the citations within this community. This community can reveal the citation growth of articles published in scientific journals and made available in institutional open access repositories. The Extract-Transform-Load (ETL) approach consists of three steps: (1) data collection; (2) citation discovery; and (3) loading of the processed data. From the results, one can explore, for example, a specific period, area of knowledge, or group of authors. The process can give users relevant information about a particular type of document or author, as well as how, when and where a citation occurred in documents. Disclosing these results is intended to encourage the deposit of works in open access IRs and their use.
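As a rough illustration of the three ETL steps, here is a minimal, self-contained sketch in Python; the record structure and field names are my own assumptions, not Lume’s actual schema:

```python
# Hedged sketch of the three-step ETL flow described in the abstract.

def extract():
    """Step 1 - data collection: gather article metadata from the
    repository's journal-article community (stubbed with sample data)."""
    return [
        {"handle": "10183/1", "title": "Article A",
         "references": ["10.1000/x1", "10.1000/x2"]},
        {"handle": "10183/2", "title": "Article B", "doi": "10.1000/x1",
         "references": []},
    ]

def transform(records):
    """Step 2 - citation discovery: match each article's reference list
    against DOIs of other articles in the same community."""
    doi_to_handle = {r["doi"]: r["handle"] for r in records if "doi" in r}
    citations = []
    for r in records:
        for ref in r["references"]:
            if ref in doi_to_handle:
                citations.append((r["handle"], doi_to_handle[ref]))
    return citations

def load(citations):
    """Step 3 - load: here, into an in-memory citation count per article."""
    counts = {}
    for _, cited in citations:
        counts[cited] = counts.get(cited, 0) + 1
    return counts

counts = load(transform(extract()))
```

In this toy data, Article B is cited once by Article A; a real pipeline would persist the counts instead of returning them.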
PHAIDRA: An open approach for the repository infrastructure from the University of Vienna
University of Vienna, Austria
PHAIDRA is a repository infrastructure developed and run by the University of Vienna. The primary driver for this infrastructure is openness beyond open access and open data. The data management system is open to every member of the University, including students. We invite researchers to work with us on jointly improving PHAIDRA, and we are designing domain-specific interfaces together in publicly funded projects.
DeepGreen – Open Access Transformation in Practice
1Zuse Institute Berlin, Germany; 2Helmholtz Open Science Coordination Office at the GFZ German Research Centre for Geosciences, Potsdam, Germany
Since 2011, so-called Alliance licenses funded by the German Research Foundation (DFG) have included an open access component which allows authors to make their articles publicly available through their institutional or subject-based repositories after a shortened embargo period. If the institution negotiated the license, it acts as a representative of the author and therefore with equal rights. However, very few institutions use this opportunity because of the high effort associated with manually researching the articles in question and adding them to the repositories.
DeepGreen aims to change that. Funded by the DFG, DeepGreen develops an automated workflow to transfer scholarly publications from publishers to open access repositories. During a first funding period (2016-2017), a technical solution for a data router was developed: publishers deposit data files (metadata and full text), and DeepGreen matches them to authorized repositories using the affiliations included in the publishers’ metadata. During a second funding phase, which started in August 2018, other licensing models will be examined, and in summer 2019 DeepGreen will see a beta launch with a selection of publishers and repositories.
DeepGreen increases the percentage of open access publications which makes it an active player in the field of open access transformation and open science.
Bridging the gap between Repositories and Homepages - Providing data from DSpace-CRIS with OData
University of Bamberg, Germany
Universities use a multitude of technical systems to support researchers and students. As a result, there are new challenges concerning administration and system integration. The University of Bamberg has the requirement that research data from our repository (DSpace-CRIS) be accessible through a web service so that the data can be embedded into a homepage utilizing TYPO3. DSpace-CRIS already features a REST API which supports access to DSpace’s core data (publications) but is not conceived to provide data of CRIS entities (projects, research data). Moreover, in a diverse system landscape it is preferable to access data from several systems via the same standard; at the University of Bamberg, that standard is the Open Data Protocol (OData). The goal of OData is to establish a consistent standard for realizing a RESTful API; in 2017 OData was approved as a standard for open data exchange by OASIS. In our approach, the OData API makes direct use of DSpace-CRIS’ underlying search platform (Solr) to access both DSpace’s core data and the data of CRIS entities by implementing a unified query language. Providing data from several systems with the same query language simplifies integration with other systems and reduces the amount of maintenance.
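To illustrate the idea of one query language over both kinds of data, here is a hedged sketch of translating a tiny OData-style `$filter` into a Solr query string; the field names and the supported syntax are illustrative assumptions, not the Bamberg implementation:

```python
# Minimal sketch: a tiny subset of OData $filter ("field eq 'value'",
# joined by and/or) translated into Solr query syntax.

def odata_filter_to_solr(odata_filter):
    """Translate e.g. "a eq 'x' and b eq 'y'" into 'a:"x" AND b:"y"'."""
    tokens = odata_filter.split()
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] in ("and", "or"):
            out.append(tokens[i].upper())        # Solr boolean operators
            i += 1
        else:
            field, op, value = tokens[i], tokens[i + 1], tokens[i + 2]
            if op != "eq":
                raise ValueError("only 'eq' is supported in this sketch")
            clean = value.strip("'")
            out.append(f'{field}:"{clean}"')
            i += 3
    return " ".join(out)

# e.g. a query for CRIS project entities issued in 2018
q = odata_filter_to_solr("resourcetype eq 'project' and dateissued eq '2018'")
```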
Preserving Social Media posts: a case study with “In Her Shoes”
Digital Repository of Ireland
On 25th May 2018, the Irish people voted to remove the controversial “8th Amendment” to the Irish Constitution, opening the way for the introduction of legislation governing the termination of pregnancy in the State. The Digital Repository of Ireland was asked to archive the materials of a grass-roots, social-media-based group campaigning in the run-up to the referendum.
The poster will present some of the difficulties encountered when attempting to archive these social media posts and describe the approach taken by the DRI to overcome them. It will also show the open-source facebook-to-dc tool developed by DRI which may be used by others to generate Dublin Core metadata and textual asset files from a Facebook Group.
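To give a flavour of what such a conversion involves, here is a minimal sketch of mapping a single post to a simple Dublin Core record; the field mapping and XML layout are my own illustrative assumptions, not the actual output of facebook-to-dc:

```python
# Hedged sketch: map a Facebook Group post (as a dict) to an
# oai_dc-style Dublin Core record.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def post_to_dc(post):
    """Map a post dict to Dublin Core XML (illustrative field mapping)."""
    ET.register_namespace("dc", DC_NS)
    record = ET.Element("record")
    mapping = {"message": "description", "created_time": "date",
               "from": "creator", "id": "identifier"}
    for key, dc_term in mapping.items():
        if key in post:
            el = ET.SubElement(record, f"{{{DC_NS}}}{dc_term}")
            el.text = str(post[key])
    return ET.tostring(record, encoding="unicode")

xml = post_to_dc({"id": "123_456", "from": "In Her Shoes",
                  "created_time": "2018-05-01T10:00:00+0000",
                  "message": "A story shared with the group."})
```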
DOISerbia Repository – how transition country raised visibility of scientific journals
National Library of Serbia, Serbia
The DOISerbia repository was implemented in 2005 by the Department of Scientific Information at the National Library of Serbia, making it one of the first OA repositories developed in Serbia. The aim of the service was to improve and promote scientific publishing in Serbia. The system includes 66 scientific journals from Serbia, with archives from 2002 to the present. Every article, besides its main bibliographic data, is equipped with a DOI. For every journal there are data about its web address, coverage, aims and scope, publisher, editorial board, frequency and impact factor (if any), as well as important information about the journal itself: editorial policy, instructions for authors, etc. In this way, a permanent link to the full text of every article was established, and its visibility was raised both nationally and internationally. The connection between DOI metadata and articles is made via CrossRef. Since 2005, more than 40,000 articles have been added to the system. The main advantage of the system is that its metadata are standardized in Dublin Core and exposed via the OAI-PMH protocol, which opened the door to harvesting of our data by large international OA aggregators - TEL, Europeana and DOAJ.
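As an illustration of how an aggregator harvests such a repository over OAI-PMH, here is a minimal sketch of building a ListRecords request; the endpoint URL is a placeholder, not DOISerbia’s actual one:

```python
# Hedged sketch of an OAI-PMH harvesting request for Dublin Core records.
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc", from_date=None):
    """Build an OAI-PMH ListRecords request URL (selective by date)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date   # harvest only records changed since
    return f"{base_url}?{urlencode(params)}"

url = list_records_url("https://example.org/oai", from_date="2005-01-01")
```

An aggregator such as Europeana would issue this request repeatedly, following resumption tokens until the full set is harvested.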
Building a single repository to meet all use cases: a collaboration between institution, researchers and supplier
1University of Westminster, United Kingdom; 2Haplo, United Kingdom
Repositories have historically focused on a single use case, primarily the capture of traditional (text-based) open access publications, requiring separate solutions for different use cases (e.g. research data). This made capturing the wide variety of research outputs challenging at the University of Westminster, which engages in practice-based arts research (encompassing a range of disciplines, from fine and performing arts, to architecture and design and whose outputs tend to be in a non-text format) alongside traditional research.
Building on a history of collaboration, Haplo, the University and its research community have built a single, open source repository meeting multiple use cases, including text-based and non-text-based outputs, portfolios and research data. This was made possible through the flexible technical architecture of the Haplo platform, whose underlying technology is based on semantic web principles and which meets COAR’s vision for next-generation repositories.
Improvements to the repository now enable better capture and display of research outputs across disciplines. Highlights include the development of dynamic portfolios, improved support for non-text based outputs and ongoing engagement with practice-based arts researchers to understand their needs, build workflows, review metadata and build, test and implement a transformed repository.
OASIS: A sustainable digital repository service owned by academics
University of York, United Kingdom
The Technology Team at the University of York Library and Archives aspires to support research-funded digital repository development. This poster will explain how, in close collaboration with academics, the Technology Team developed the Open Accessible Summaries In Language Studies (OASIS) repository service. The OASIS repository service is governed by academics and is open for use by the whole research community in language studies. The OASIS initiative is supported by several peer-reviewed journals in language studies.
The poster will illustrate how we used an open and collaborative approach to develop the OASIS digital repository based on the Hyrax/Samvera community owned repository solution. We will include visual screenshots presenting our customisation work on the Hyrax default user interface in order to:
- Accommodate rich metadata in language studies;
- Create an intuitive deposit workflow with an administrative approval step;
- Provide a faceted search interface allowing users to find relevant summaries.
We will also present our digital library and archives emerging infrastructure and our sustainability plans for hosting the OASIS service alongside our future digital library solution.
ROSI – Open Metrics for Open Repositories
German National Library of Science and Technology (TIB), Germany
Researchers rely more and more on online tools to conduct their research. In theory, the growing need for comprehensive information about online research outputs could easily be satisfied. Yet the majority of scientometric sources are not completely open, which results in opaque data and limits impact assessments. To illustrate, some proprietary databases that generate scientometric indicators do not disclose the underlying raw data to users outside traditional subscription-based models. At the same time, researchers' needs concerning scientometric indicators are not adequately addressed by these existing products.
In contrast, the project ROSI (Reference Implementation for Open Scientific Indicators) focuses only on open data sources. A reference implementation to visualise related metrics from open data sources, such as open access repositories, will be developed. Throughout the project, the needs of researchers concerning scientometric indicators are gathered in an iterative process, and researchers will be invited to evaluate the project outcomes. The reference implementation will be documented in a user handbook and will be reusable in other contexts, such as research information systems, repositories and publishing software.
Making Local Knowledge Visible: An IR in Kosovo
University of the Pacific, United States of America
In 2017, a joint international effort commenced under the direction of the President of the University for Business and Technology (UBT) in Kosovo, with colleagues from Linnaeus University (Sweden) and the University of the Pacific (USA), to define, create and populate a Knowledge Center for UBT which would include an institutional repository (IR). Enlivened by discussion and feedback from the intended recipients, the needs and goals of a UBT IR were identified. Of course, creating and populating an IR is a lengthy process with many potential problems and varied approaches. Discussion of best practices was undertaken early, and currently the UBT Knowledge Center (https://knowledgecenter.ubt-uni.net/) has 1,495 records uploaded.
The point of this presentation is not only to discuss the process by which a Kosovo IR began, but also the impact of making local knowledge visible to current and future UBT students, as well as regionally and internationally. As part of the author’s doctoral research, a study of quantitative and qualitative impact is currently underway on the UBT Knowledge Center. Results from surveying students and faculty at UBT will be shared, as well as usage statistics both in country and internationally, drawing conclusions on the impact and reach of the project.
Virtual Reality Record Metadata
University of the Pacific, United States of America
In 2018, the University of the Pacific Libraries worked with a faculty member in the School of Engineering and Computer Science to upload a class project involving multi-file records to the institutional repository. One of the file types was an .EXE executable Virtual Reality (VR) application. This was a first at the institution and in my experience with institutional repositories; I was stymied on how to describe and provide metadata for the VR piece - for both human and machine audiences. Attempting to read up on best practices and query the community didn’t result in much concrete assistance, and we muddled through as best we could. Since that first project, I have continued to research standards and best practices for hosting and describing virtual reality, augmented reality and similar file types in an open repository. We will also face this problem again, as the faculty member will repeat the course and the project and wants to continue using the repository as a host. Interested? Come learn about what I did and what I’ve discovered along the way.
An assessment of the status of Open Access Policies and Repositories Development in Kenya
University of Nairobi, Kenya
Academic institutions worldwide have embraced institutional repositories as a means to showcase their research globally. In Kenya, the majority of academic institutions with effective repositories are established universities. Little is known of institutional repositories of newly established universities in Kenya. The study assesses the status of open access policy and repository development in Kenyan universities that were established between 2016 and 2017.
The researchers collected data from professional library staff in three newly established universities using questionnaires.
The findings show that all the university libraries investigated had functional institutional repositories. The libraries had developed submission and metadata policies. The staff charged with implementing the institutional repositories had relevant skills, understood the scholarly communication cycle, and were responsible for recruiting institutional repository content. The challenges faced in implementing the repositories included low awareness of their existence among the intended users, reluctance of researchers to submit their work to the repositories, lack of resources, and inadequate staffing and submission policies.
The findings of this study buttress the place of institutional repositories as platforms for sharing research literature and opening access to scholarly materials globally, even for newly established universities in developing countries.
A native iPad app for the DSpace 7 REST API
Virginia Tech, United States of America
This is a native iPad app for DSpace 7 repositories, built using the new version of the REST API. The app also runs on iPhones, but this is not recommended for most use cases because of the difficulty inherent in reading formatted academic articles on small phone screens. The app allows repository users to browse, search, and download content, and repository administrators to submit and edit repository content. Because of the connectivity challenges inherent with mobile devices, the app was designed so that some of the work can be done while users are offline. This app is unofficial and is intended only as a supplement to the spiffy, official and feature-complete Angular UI which will be preferred by most users. This app is bound to be a niche product, because most users find our repositories by way of search hits from Google that link directly to articles, but I believe that some users will enjoy the intimacy and speed of a native mobile app for interacting with repositories, and I am eager to present this poster.
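As a flavour of what the app does under the hood, here is a hedged sketch of building a discovery search request against a DSpace 7 REST API; the server URL is a placeholder, and the app itself is native iOS code, so this Python sketch only shows the shape of the request:

```python
# Hedged sketch of a search call against the DSpace 7 REST API's
# discovery endpoint (path and parameter names per the DSpace 7 REST
# contract); the client would decode the HAL+JSON response.
from urllib.parse import urlencode

def search_request(server, query, page=0, size=20):
    """Return the URL for a paged discovery search over all objects."""
    params = {"query": query, "page": page, "size": size}
    return f"{server}/api/discover/search/objects?{urlencode(params)}"

url = search_request("https://demo.dspace.org/server", "open repositories")
```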
Hamburg Open Science at TUHH: An All-in-one repository service based on DSpace CRIS
TU Hamburg, Germany
The Hamburg Open Science program aims to develop an open science infrastructure for the universities in Hamburg. This includes open access repositories, research data repositories and research information systems. Hamburg University of Technology, a small institution with limited programming capacity, opted for an integrated approach instead of three different systems. The current open access repository (based on DSpace CRIS and fully ORCID-enabled) is going to be extended with a research data repository and is intended to include all institutional researchers, organizational units, projects and publication information, building a research information system and a university bibliography.
This poster illustrates the components of our extended repository and shows how DSpace CRIS, an open source software with a vibrant community, helped us cover the two new components of the repository. It will focus especially on the CRIS entities.
Usage Statistics Do Count
Make Data Count observes that “Sharing data is time-consuming and researchers need incentives for undertaking the extra work. Metrics for data will provide feedback on data usage, views, and impact that will help encourage researchers to share their data”. At Zenodo, we have been working on the same principles, and we have launched Zenodo Usage Statistics, a feature that exposes fine-grained, up-to-date usage statistics in an easily accessible manner. Now, users can find the number of views and downloads on every record page, and they can also sort search results by most viewed. These statistics are “versioning aware”: by default, we roll up usage statistics across all versions of a record, but it is also possible to see detailed usage statistics for a specific version of a record.
We strongly believe in users’ right to privacy; thus we have spent a lot of time designing our system to ensure that all tracking is completely anonymized. Moreover, our usage statistics are tracked according to industry standards such as the COUNTER Code of Practice and the Code of Practice for Research Data Usage Metrics, which allows our users to compare metrics from Zenodo with those from other compliant repositories.
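The version-aware roll-up can be pictured with a tiny sketch; the data shapes are illustrative, not Zenodo’s internal model:

```python
# Hedged sketch of "versioning aware" statistics: per-version counts
# are summed into a concept-level total shown by default, while each
# version's own counts remain available.

def rollup(version_stats):
    """Sum views/downloads over all versions of one record."""
    total = {"views": 0, "downloads": 0}
    for stats in version_stats.values():
        total["views"] += stats["views"]
        total["downloads"] += stats["downloads"]
    return total

versions = {"v1": {"views": 120, "downloads": 30},
            "v2": {"views": 80, "downloads": 25}}
all_versions = rollup(versions)   # the default record-page view
```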
DOI Minter: A Service for Flexible Generation of DataCite DOIs in Connection with a DSpace Repository
KAUST, Saudi Arabia
As institutional repositories seek to better support the release of unique research outputs (as opposed to open access versions of separately published materials), they increasingly turn to DataCite DOIs as the most appropriate form of persistent identifier. While DataCite DOI registration support is built into many repository platforms (reference 2), the native configuration options may be limited. For institutions like ours that opt to use commercial repository hosting services, contracting customizations to features such as the DataCite integration within the existing software also complicates future upgrade or migration paths for the platform as a whole. In addition, locating the DOI minting service of an institution within a single platform or database may be inappropriate when the institution has several different systems that would benefit from the use of DOIs (reference 1). Because of these factors, we developed a local DOI Minter service connected to our hosted DSpace repository via its REST API. This has given us greater immediate flexibility and better positions us for future expansion of our DOI-related services.
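To show the kind of request such a minter sends, here is a hedged sketch of a minimal DataCite REST API payload; the DOI prefix, URL and metadata values are placeholders, and the exact attributes the KAUST service sends are not known to me:

```python
# Hedged sketch of a minimal DataCite DOIs-API request body
# (JSON-serialisable dict; would be POSTed to the DataCite REST API).

def datacite_payload(doi, url, title, creators, year):
    """Build a minimal request body registering a findable DOI."""
    return {
        "data": {
            "type": "dois",
            "attributes": {
                "event": "publish",          # register and make findable
                "doi": doi,
                "url": url,                  # landing page in the repository
                "titles": [{"title": title}],
                "creators": [{"name": c} for c in creators],
                "publicationYear": year,
                "types": {"resourceTypeGeneral": "Dataset"},
            },
        }
    }

payload = datacite_payload("10.5072/example-1",
                           "https://repository.example.org/handle/1/1",
                           "Example dataset", ["Doe, Jane"], 2019)
```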
Introducing the Fedora User Guide
1Indiana University, United States of America; 2University of North Carolina Chapel Hill, United States of America; 3Northwestern University, United States of America; 4Penn State University, United States of America; 5Duraspace, United States of America
Fedora Digital Repository has a strong history of documentation for system administrators hosting and managing Fedora as software and developers using the repository as a backend for collection access and management.  However, the same level of documentation has not been as readily available for users who manage collections and perform metadata work in Fedora. Andrew Woods from Duraspace approached a group of metadata librarians at institutions using Fedora to start a new section of documentation called the Fedora User Guide. The new User Guide currently contains information about metadata recommendations and best practices within Fedora from different user communities. It also provides various examples of content modeling to show how different ways of organizing content are implemented in Fedora using the current capabilities of the digital repository software.
Long-term preservation: Integrating DSpace and Rosetta at ETH Zurich
ETH Zurich, Switzerland
Institutional repositories are more and more intended to be dynamic platforms that not only expose the outcome of scientific research, but also reflect the entire lifecycle of such production, and spin a complex web between the various content types. The challenge of long-term preservation becomes bigger, as it needs to take into consideration the moving target that this dynamic structure and lifecycle represent.
At ETH Zurich, our DSpace based repository Research Collection is one typical example of such modern and complex repositories. It contains content that is unique to our institution, such as research data, doctoral theses or technical reports. Without a preservation system, all that research output wouldn’t be secured for the long term.
We therefore decided to integrate the Research Collection with our existing long-term preservation tool, Rosetta (branded as “ETH Data Archive”). Such an integration was for us a good way to ensure the safeguarding of our unique content, while not imposing on our users the burden of having to worry about long-term preservation themselves (e.g. they don’t have to submit twice, and only have to work with a publication platform that has been designed to focus on their end-user needs). The poster illustrates the integrated workflow.
User stories and Front Ends for a Research and Cultural Heritage Data Repository
Royal Danish Library, Denmark
This poster presents the Royal Danish Library Open Access Repository (LOAR), based on DSpace, along with MeLOAR, a dedicated front end for specific LOAR collections. MeLOAR offers keyword search and location search, shows the search results with facets, maps and highlights, and shows the highlights inside the PDFs as well.
From the idea stage, the repository served two purposes: FAIR research data from Danish universities and cultural heritage data from the Royal Library. From the start it was also an open data repository, advocating the open data agenda.
An important lesson was that different users mean different user stories, both when users are national library curators vs. university researchers and when users are researchers from different disciplines with different types of data. Sometimes the solution is a dedicated front end which gives a better user experience.
Jisc Open Research Hub – Supporting Open Scholarship
Jisc, United Kingdom
Jisc’s Open Research Hub integrates a number of repository, preservation, reporting and storage platforms as a one stop shop for researchers and research managers. The service offers both open source and proprietary systems and allows data and metadata to be shared openly if required. The platform has been developed through years-long consultation with the UK HE research sector and sector bodies, along with contributions from both in-house Jisc and third-party experts.
The need for such a solution has arisen from the sector’s desires to achieve several, shared aims, including: greater collaboration; tackling the reproducibility crisis; enabling better research; and meeting funder requirements.
Jisc’s custom-built repository—the Open Research Repository—is part of the Jisc Open Research Hub. It’s built upon an extensive data model and rich messaging layer, providing users with a clean, simple, and easy-to-learn interface for the deposit, approval, and discovery of a range of outputs.
Jisc’s position in the UK higher education / research sector, as well as the scale of the service provides us with many domain-specific insights to share with OR2019 delegates, ranging from the broad methods mentioned above, down to individual design decisions informed by our research and domain expertise.
Highly Automated Import of Metadata into an Institutional Repository: A PHP Tool
Lib4RI - Library for the Research Institutes within the ETH Domain: Eawag, Empa, PSI & WSL, Dübendorf, Switzerland
Nowadays researchers are often requested to submit their publications to their local institutional repository (IR). This is a rather time-consuming administrative task, which they would readily avoid. Besides improving the publication-submission procedure, the need to import publications directly from other databases (DBs), instead of asking researchers to submit their publications manually, is paramount. Repositories have dealt with this problem in the past (e.g., Roy and Gray 2018; Li 2016). Nevertheless, to our knowledge there is no free tool that allows automatic import from well-known DBs such as Scopus or Web of Science and is based on PHP - a programming language widely used by repository developers. Our IR is based on Drupal/Islandora; however, we aim to provide code that can be used by as many systems as possible. Our approach is therefore based on two levels: (a) a PHP library that abstracts the interaction with the publication DBs, providing a unified interface usable by any PHP application or script; and (b) a Drupal module using this PHP library to implement the automatic import of metadata into Islandora repositories.
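The two-level design can be sketched as follows. The authors’ implementation is in PHP; this Python version only illustrates the abstraction pattern, and the class and field names are my own assumptions:

```python
# Hedged sketch: a unified interface over several publication databases,
# with a thin importer on top (level (a) and level (b) of the abstract).

class PublicationSource:
    """Unified interface that every database driver implements."""
    def fetch_by_doi(self, doi):
        raise NotImplementedError

class StubScopusSource(PublicationSource):
    """Stand-in for a Scopus driver; a real one would call the Scopus API."""
    def fetch_by_doi(self, doi):
        return {"doi": doi, "title": "A Sample Paper", "source": "scopus"}

def import_record(source, doi):
    """Importer: fetch via the unified interface, then hand the record
    to the repository layer (here we simply return it)."""
    record = source.fetch_by_doi(doi)
    record["imported"] = True
    return record

rec = import_record(StubScopusSource(), "10.1000/demo")
```

Because the importer depends only on the `PublicationSource` interface, adding Web of Science or another DB means writing one more driver, not changing the importer.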
Adapting repositories to OpenAire 4 Guidelines: Huelva repository, a case study
1Universidad de Huelva; 2Arvo Consultores y Tecnología
The approval in late November 2018 of the OpenAIRE Guidelines for Literature Repository Managers v4 marks a further step in the evolution of harvesting requisites. The incorporation (and exposure) of ORCID identifiers, together with new sets of typology, rights and versioning metadata, poses a challenge to digital repositories. Combined with the fact that it may be necessary to maintain long-term compatibility with aggregators using other metadata schemas or systems (OAI_DC, OpenAIRE v3, DRIVER, etc.), we conclude that adaptation may be difficult for repositories.
We present the work carried out at the Arias Montano Repository of the University of Huelva, based on DSpace v6, to make the repository comply with, among others, the OpenAIRE harvesting requisites of the new OpenAIRE v4 application profile. Specifically, we have placed particular emphasis on the use of all the ORCID author identifier information available to the repository (through the authority control functions of DSpace), together with the coexistence of the COAR vocabularies for the descriptive metadata of resources with the vocabularies of previous specifications that continue to be required in the OAI interface.
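To illustrate the coexistence of vocabularies, here is a sketch of mapping a local resource type both to its COAR resource-type concept (for OpenAIRE v4) and to its legacy info:eu-repo term (for older profiles); the mapping table is a small illustrative excerpt, not the repository’s actual configuration:

```python
# Hedged sketch: serve both OpenAIRE v4 (COAR) and legacy (DRIVER/v3)
# type values from one local record, so several OAI metadata formats
# can coexist. The URI/term pairs are examples from the COAR vocabulary.

COAR_TYPES = {
    "article": ("http://purl.org/coar/resource_type/c_6501",
                "journal article"),
    "doctoralThesis": ("http://purl.org/coar/resource_type/c_db06",
                       "doctoral thesis"),
}

LEGACY_TYPES = {
    "article": "info:eu-repo/semantics/article",
    "doctoralThesis": "info:eu-repo/semantics/doctoralThesis",
}

def describe_type(local_type):
    """Return both vocabulary renderings for one local resource type."""
    uri, label = COAR_TYPES[local_type]
    return {"coar_uri": uri, "coar_label": label,
            "legacy": LEGACY_TYPES[local_type]}

t = describe_type("article")
```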
Developing an Open Repository into Full Service platform for Open Publishing - The Case of 25 Universities of Applied Sciences in Finland
AMKIT Consortium, Finland
The Theseus poster presents a full publication service platform for 25 Universities of Applied Sciences (UAS), integrating an open repository with a current research information system (CRIS) and national digital long-term preservation services (DP services).
By the year 2020, Finnish universities are required to publish 100% of their output open access. To reach this goal, Theseus will launch a CRIS so that the self-archiving of research papers into the repository happens in the same workflow as the reporting to the Ministry of Education. This will save resources, improve reporting and presumably lead to growing numbers of self-archived papers as well.
As part of the Theseus reform, the UAS theses will be carried forward to the DP services. In addition, a new way to manage the public availability of uploads will be developed. The connection to the DP services will meet the requirements for an operative e-archive described by the National Archives System in Finland and provide a transfer to long-term preservation by the National Library of Finland.
The model saves resources, and it will reorganize publication, publicity issues, archiving and reporting in the UASs of Finland, while effectively promoting open publishing in the Finnish UASs. The extended service will be launched in early 2019.
Research Data Support at the Royal Veterinary College: A Case Study
Royal Veterinary College, United Kingdom
In autumn 2018, I began setting up a research data repository at the Royal Veterinary College, University of London, working primarily with Dr. Dan O’Neill, an epidemiologist. The effort offers instructive insight into the challenges and opportunities that open repositories present for librarians and research support staff, and specifically into the expectation, increasingly held by researchers, that technical and administrative support for open data be available to them via their institution’s library and/or IT services. The poster documents, visually and descriptively, the different parties and systems involved in supporting what remains a relatively nascent expectation among researchers. This expectation is in turn generated by external (expectations of peer reviewers, grant funder policy), institutional (research data policies specific to their institution), and professional (effective techniques for collaboration, intellectual property concerns related to use of commercial repositories) responsibilities. The poster examines the different concepts and stakeholders represented in a local project in an attempt to draw out some widely applicable truths and recommendations.
Automatic data enrichment: merging metadata from several sources
1Universitätsbibliothek Duisburg-Essen; 2Verbundzentrale des GBV (VZG)
Today it is possible to uniquely identify authors and publications through common, well-established ID systems such as ORCID, ISBN or DOI. Databases like CrossRef, DataCite, PubMed, IEEE or Scopus share their data through often freely accessible APIs. This opens completely new ways to automatically retrieve, merge, link and enrich publication data.
We would like to introduce an improved mechanism for importing and enriching bibliographic data, the so-called "Enrichment Resolver". Incompleteness and ambiguity of publication metadata are common. Enriching data from external sources helps us create the best possible version of every single metadata record. For author entries, our goal is to obtain the most complete version, including person identifiers such as ORCID, Scopus ID and others. Additionally, extra services like DOAJ or oaDOI will be used to obtain further information, such as the open access status of the publication, which can also be added to the imported metadata.
We would like to present how a "self-filling" publication repository or institutional bibliography could work: starting from an institutional identifier, we can obtain a list of institution members linked to their ORCID profiles, find their publications there, then import and enrich that publication data from various sources.
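The field-level enrichment described above can be sketched as a simple merge that keeps the first non-empty value for each field. This is an illustrative sketch only; the function name, record fields and sample values are assumptions, not the actual Enrichment Resolver API:

```python
def merge_records(primary, *sources):
    """Merge bibliographic metadata records field by field, keeping
    the first non-empty value seen (the primary record wins)."""
    merged = dict(primary)
    for record in sources:
        for field, value in record.items():
            if value and not merged.get(field):
                merged[field] = value
    return merged

# Illustrative records, as they might come from different APIs
crossref = {"doi": "10.1000/demo", "title": "A Demo Article", "orcid": ""}
pubmed = {"doi": "10.1000/demo", "orcid": "0000-0002-1825-0097"}
oadoi = {"doi": "10.1000/demo", "oa_status": "gold"}

record = merge_records(crossref, pubmed, oadoi)
# record now combines title, ORCID iD and open access status
```

In practice each source record would be fetched by DOI or ORCID from the respective API before merging.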
Search Engine Optimization as a motivational factor for researchers to submit their material to the institutional open repository: a case study of Uganda Christian University
Uganda Christian University, Uganda
Search engines are widely used by researchers to locate research materials on the internet. They offer optimization techniques that allow content developers' material to show up among the top results when a user types in a topic of interest.
Repositories that have not been optimized will rank low in the results, or not appear at all, giving their content low visibility and therefore few citations. Researchers then see no benefit in submitting their content to these platforms.
I propose that all open repositories be search engine optimized, because this will make the content of their researchers more visible to other researchers. Visibility leads to more citations and is therefore a motivating factor for researchers to submit more content to the institutional open repository.
I also demonstrate the impact of this at the Uganda Christian University open repository, which started without optimization and was later optimized. The number of researchers willing to give us their work has increased greatly due to the visibility their research gains online.
I believe that the greatest motivator for researchers is having their work visible to the public, rather than monetary benefits.
MyCoRe - the repository software framework
Universität Hamburg, Germany
MyCoRe is an established repository software framework in the German-speaking region. The open source framework can serve as the basis for digital libraries, multimedia archives, research data repositories or institutional repositories. The name, pronounced "my core", indicates that there is a software core that can be used in custom applications. More than 80 MyCoRe applications are running at over 20 locations, mostly in Germany. They provide publications, digital objects (such as documents, manuscripts, books, catalogs, journals, newspapers, etc.) and research data. All objects are described with metadata; image files, sound documents and videos may also be included in MyCoRe-based information systems. The MyCoRe framework provides the functionality necessary to manage, store, present and share metadata and digital resources. Various interfaces are supported: resource harvesting is provided within the OAI-PMH framework, and content can be deposited via the SWORD protocol (v2). ORCID identifiers are integrated into MyCoRe repositories for author identification, and automated data export to and import from ORCID is possible. Support for schema.org and JSON-LD is the newest software development.
Examples of MyCoRe applications are "DuEPublico" with the university bibliography at the University of Duisburg-Essen, the publication server of the TU Braunschweig or the "Digital Library Thuringia (DBT)".
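The OAI-PMH harvesting interface mentioned above follows the standard protocol, so a harvester only needs to construct protocol-conformant request URLs. As a minimal sketch (the base URL and set name are hypothetical; real MyCoRe repositories expose their own OAI endpoint):

```python
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc",
                     oai_set=None, resumption_token=None):
    """Build an OAI-PMH v2.0 ListRecords request URL. A resumption
    token, if present, replaces all other request arguments."""
    if resumption_token:
        params = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        if oai_set:
            params["set"] = oai_set
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint for illustration
url = list_records_url("https://repo.example.org/oai", oai_set="openaccess")
```

A harvester would fetch this URL, parse the XML response, and follow the returned resumption token until the list is exhausted.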
Hamburg Open Science: linking repositories across universities and fostering digital culture in science
University of Hamburg/ Hamburg Open Science, Germany
Hamburg is a federal state with universities of various orientations, together with associated libraries and scientific collections. All of these institutions operate repositories and, in addition, produce diverse research results and data. Scientists are looking for a long-term, structured backup solution that can be accessed easily, and in some cases openly. In the Hamburg Open Science program, the universities jointly develop services to foster the use of repositories for open science and implement them in open science infrastructures adapted to the specific demands of the respective universities. The focus is therefore on analysing and understanding the needs of scientists and on evaluating the benefits of the implemented infrastructure for scientists from different disciplines. Through joint development, a single web service will make it possible to locate the diverse scientific results from distributed repositories across Hamburg and to conduct a differentiated search for publications, data or research contacts. In addition to metadata search results, open data can be used directly from the source repositories.
Building Campus Support and Adoption of a New Repository
University of North Alabama, United States of America
Our repository is just over a year old and yet campus adoption is still an uphill battle. Even as repositories become more common and more popular, administration and faculty may still not understand the benefits of utilizing repositories and Open Access publication. Others may understand the concept of an IR but might be frustrated by the thought of it being another task added to their ever-increasing to-do list. The need to educate campus constituents on why the repository is important and how it can support their pursuit of tenure and promotion goals remains a challenge for most IR administrators. Education and outreach are crucial at this stage. Do you start with the faculty or the administration? Do you try to talk to faculty one-on-one or do you go to department meetings? Can you hold open sessions in the library or other central locations on campus? How can you leverage metrics and impact factors? This poster will address these questions and will provide a framework that you can take back to your campus and use to build rapport with faculty.
Repositories at Work: General Research Data Repository at Universität Hamburg
University of Hamburg, Germany
A general research data repository should be intuitively usable by a large variety of users with different backgrounds, needs and expectations. These expectations come to the fore in the design of the interface as secondary features. While the primary features, expressed by the core functionality such as scalability, retrieval time or the size of the data to be processed, are perceived as an absolute must-have, secondary features are perceived as a real advantage, a 'nice-to-have'. Yet the one depends crucially on the other. We would like to share our experience of configuring a research data repository, designing the technical infrastructure and adding software functionality. By describing the process of making adjustments and their intended effects on usage behavior, we present a use case of setting up a general research data repository in which the needs of the user play a central role.
OpenAIRE Content Acquisition Policy: expanding the scope
University of Minho, Portugal
This poster outlines the new OpenAIRE Content Acquisition Policy, released in October 2018, which defines the conditions under which metadata of scientific products collected from content providers will be considered for inclusion in the OpenAIRE information space. The policy specifies which typologies of objects are mapped onto which OpenAIRE entities (literature, datasets, software, other research products) and the minimal quality conditions under which metadata can be accepted.
With its new content acquisition policy, OpenAIRE broadens its scope to integrate metadata records of all scientific and research products. This means that OpenAIRE now harvests: publication records of all access levels (open access, closed access/metadata only, etc.), publication records with and without funding references, and records of different research product types within one repository (literature publications, research data, software and other research products). To ensure that records are included in OpenAIRE, it is vital that the access level of a record is made clear (preferably by an access-level statement at record level, alternatively by the use of specific OAI sets) and that each record contains a PID (or URL) that resolves to a splash page.
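The record-level access statements mentioned above use the controlled `info:eu-repo` terms from the OpenAIRE guidelines. A minimal sketch of mapping a repository's internal access state to the corresponding `dc:rights` value (the function name and internal state labels are illustrative assumptions):

```python
# Controlled access-level terms from the OpenAIRE guidelines
ACCESS_LEVELS = {
    "open": "info:eu-repo/semantics/openAccess",
    "embargoed": "info:eu-repo/semantics/embargoedAccess",
    "restricted": "info:eu-repo/semantics/restrictedAccess",
    "closed": "info:eu-repo/semantics/closedAccess",
}

def rights_statement(level):
    """Return the dc:rights value for an internal access state."""
    if level not in ACCESS_LEVELS:
        raise ValueError(f"unknown access level: {level!r}")
    return ACCESS_LEVELS[level]
```

Emitting one of these terms in each OAI record makes the access level machine-readable for the OpenAIRE harvester without relying on separate OAI sets.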
How to publish software artefacts in institutional repositories: Git integration for DSpace repositories with SARA
1Ulm University, Germany; 2University of Konstanz, Germany
Software plays an essential role in today’s scientific research. In addition to research data, the software itself should be made available for the long term to enhance the reproducibility of research results. To address this problem we developed a web service (SARA) that integrates Git with institutional repositories. The web service allows researchers to import software artefacts from participating GitLab instances or GitHub. A web interface displays captured metadata, allows the selection of branches, the optional inclusion of the version history and more. Metadata records are pushed to DSpace while the bitstreams are stored in a separate Git archive where you can browse and view the files. Depending on an institution’s preferred workflow and DSpace configuration, software artefacts are either immediately available or go through the workspace and/or workflow in DSpace. The web service can be self-hosted and used by one or several institutions, integrating their respective institutional repositories. We are interested in connecting additional repository software and welcome initiatives to contribute to the project.
One Repo fits all – The Specialist Repository for Life Sciences as a Data Store for every type of publication
ZB MED Information Centre for Life Sciences, Germany
There are many different types of repositories for scientific purposes. Fortunately, more and more institutions and infrastructure units are discovering the value of digital repositories, which allow the reuse of research results in the context of open science. On this basis, many scientific communities have established a number of commonly used repositories and other publishing platforms for different kinds of scientific output. However, each of these platforms typically provides only one kind of publication, which makes it difficult to find and access all information about one research project, scientific group or institution. ZB MED and its publication platform PUBLISSO aim not only to establish products competitive with existing best-case scenarios, but also to fill these gaps in publication services. PUBLISSO has therefore built a repository that fits all types of publication, providing access to a broad range of publications (i.e. text publications and research data) in one place. It also gives small institutions and communities the opportunity to publish their work in one place. Additionally, PUBLISSO promotes cross-referencing to make relations between publications visible; to this end, Linked Open Data tools are constantly monitored and implemented where possible.
Leveraging a University Institutional Repository to Host a New Open Access, Peer-Reviewed Student Biomedical Science Journal at the Cooper Medical School of Rowan University
Cooper Medical School of Rowan University, United States of America
Background: In the spring of 2018, the library of the Cooper Medical School of Rowan University (CMSRU) was approached by a clinical faculty member and two medical students who expressed interest in creating a legitimate, peer-reviewed journal that could publish student research from around the world.
Problem: The CMSRU library took on the challenge of facilitating this project. Their advice ensured the journal would be Open Access (OA), publish with a CC-BY license, have an ISSN, issue DOIs, and follow ethical publishing guidelines to ensure the journal could eventually be included in indices.
Approach: The CMSRU library already had an existing Digital Commons institutional repository (IR) which could be utilized as a hosting platform, facilitate peer review, and index articles in Google Scholar for discoverability. The journal officially launched in the Fall of 2018 and is actively peer-reviewing submissions.
Conclusions: This poster will highlight the unique challenges of hosting OA journals on IRs. The poster will also provide best practices on starting peer-reviewed journals that are partially student operated. It also will include information on how this project can fill the niche of providing early-career scholars a home to highlight their scholarship while learning the best practices of scholarly OA publishing.
Introducing Orpheus, an Open Source database of journals and publishers
University of Cambridge, United Kingdom
Orpheus is a database of academic journals’ attributes that are frequently required by repository managers, such as revenue model (subscription, hybrid or fully Open Access), self-archiving policies, licences, contacts for queries and article processing charges (APCs). It features web frontends for users and administrators, and a RESTful API for integration with repository platforms and other services. Orpheus also comes with a collection of Python parsers for datasets commonly used by repository staff, such as lists of embargoes and APCs from major publishers (Elsevier, Wiley, Taylor & Francis, Oxford University Press and Cambridge University Press) and the databases DOAJ and SHERPA/RoMEO. Orpheus was recently integrated with the Cambridge DSpace repository (Apollo) and auxiliary systems, which has enabled embargo periods to be automatically applied to deposited articles and streamlined the process of advising researchers on payments, licences and compliance to funders' Open Access policies. Orpheus’ source code, available at https://github.com/osc-cam/orpheus, may be easily expanded or tailored to meet the particular needs of other repositories and Scholarly Communication services.
Donut and Glaze: A Hyrax Dam and Decoupled Frontend in AWS
Northwestern University, United States of America
Northwestern University has worked over the last two years to move its core infrastructure to AWS. At the same time, we developed a Hyrax-based DAM and a decoupled front end. The result is a scalable solution that allows the team to use modern front-end frameworks (React), static S3 hosting, serverless (Lambda-based) tasks, and flexibility in back-end decisions.
Keeping the user in the workflow: IR licensing for mediated deposits
Cornell University, United States of America
The practice of mediated deposit of content to Institutional Repositories (IRs) is widespread (e.g. CNI 2017, Dubinsky 2014, Poynder 2016, Salo 2008). For Cornell University’s DSpace installation, eCommons, approximately 80% of deposits have been mediated. An IR deposit workflow typically involves presenting the uploader, presumed also to be the rights holder, with a license agreement granting the service provider the non-exclusive rights required to provide the service. Mediated deposit complicates the process of obtaining permission from the rights holder and removes them from the licensing process. In spite of this reality, IR platforms have yet to evolve to support this aspect of mediated deposit, even while they support batch upload (presumably mediated). Similarly, standard deposit agreements seldom address mediated deposit (Rinehart and Cunningham 2017). This leaves IR managers to either develop their own workarounds or, perhaps, simply omit the process of obtaining and documenting the rights holder’s acceptance of the repository license. We will share our procedures for obtaining, recording and retaining acceptance of the terms of our IR’s license for a variety of mediated deposit scenarios.
Flexible metadata: the key to a single repository for all types of output
Haplo, United Kingdom
This poster explains why a flexible metadata schema is critical to building a repository for all types of output, such as articles, image-based research, collections of outputs, and datasets. It illustrates the Haplo data model and describes in detail how the flexibility is achieved and the benefits gained.
The Bridge of Data project: Gdansk University of Technology's approach to building an infrastructure for data sharing
Gdansk University of Technology, Poland
In the era of Big Data, research data play an unquestionable role in scientific research as well as everyday life. The Bridge of Data project at Gdansk University of Technology (GUT) will provide a data repository with adjunct services that are unique in Poland. The project is a continuation of, and builds upon, the previous project, the Bridge of Knowledge, which concentrated foremost on Open Access. Current development puts the emphasis on research data and will provide technological innovations such as hosting the project on a private computing cloud and storing the data on Ceph Object Storage. Full-text search of the data will be available through the implementation of the NoSQL search engine Elasticsearch. Moreover, the project will allow researchers to perform Big Data analysis through the Apache Zeppelin GUI on the Tryton supercomputer (40,000 cores, 1.5 PFLOPS).
Additionally, a Competency Center operating at the GUT Library will be launched to provide assistance and tailored on-site training for researchers from all scientific disciplines, covering topics including Data Management Plans, open licensing and metadata standards.
The Bridge of Data is co-financed by the European Regional Development Fund within the framework of an Operational Programme Digital Poland for 2014-2020.
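Full-text search over an Elasticsearch index, as described above, is driven by JSON query bodies. A hedged sketch of building such a body (the field names and the `discipline` filter are illustrative assumptions, not the project's actual index mapping):

```python
def dataset_query(text, discipline=None, size=10):
    """Build an Elasticsearch query body for full-text dataset search.
    Field names are illustrative, not the project's actual mapping."""
    body = {
        "size": size,
        "query": {
            "bool": {
                # Full-text match over weighted fields (title counts double)
                "must": [{"multi_match": {
                    "query": text,
                    "fields": ["title^2", "description", "keywords"],
                }}],
            }
        },
    }
    if discipline:
        # Exact-value filter; does not affect relevance scoring
        body["query"]["bool"]["filter"] = [
            {"term": {"discipline": discipline}}
        ]
    return body

body = dataset_query("ocean temperature", discipline="oceanography")
# body is ready to POST to an Elasticsearch _search endpoint
```

Separating the scored `must` clause from the unscored `filter` clause keeps relevance ranking driven by the free-text query while facets narrow the result set.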
Designing a vocabulary service for a ‘data-driven’ materials data repository
National Institute for Materials Science, Japan
We are developing a new data repository to support data-driven developments in materials science, and it became necessary to build an open vocabulary set to describe metadata such as chemical substances, characterization methods, instruments, units, etc. There have been efforts to build a standard vocabulary or ontology for materials science, but which concepts are essential and how to structure them can be quite domain-specific, varying from one researcher to another, as materials science is an interdisciplinary field encompassing chemistry, physics and biology. To address this, we are developing a wiki-powered vocabulary management service, not only to apply the aforementioned earlier efforts, but also to allow building on top of them through 'crowd-sourcing' among researchers, thereby realizing appropriate metadata description for a highly usable materials data repository.
CDS Videos: The new platform for CERN videos
The CERN Document Server (CDS) is the CERN institutional repository, based on the Invenio open source digital repository framework. CDS aims to be CERN’s document hub. To achieve this, we are transforming CDS into an aggregator over specialised repositories, each with its own software stack and with features enabled based on the repository’s content. The first specialised repository created is CDS Videos, which provides integrated submission, long-term archival and dissemination of CERN video material. It offers a complete solution for the CERN video team, as well as for any department or user at CERN, to upload video productions. Since its release in production, the yearly number of videos uploaded at CERN has doubled, showing that a specialised platform built around user needs, rather than a generic platform capable of dealing with many document types, has been a benefit for the CERN community and for CERN's video heritage.
A short history of ORCID (DE) in Germany
1Helmholtz Association, Germany; 2German National Library of Science and Technology; 3German National Library; 4Bielefeld University
In the past few years the Open Researcher and Contributor ID (ORCID) became the global standard for author identification in science. In Germany ORCID has been established as a standard too. The project ORCID DE contributed to this development by initiating the foundation of the ORCID Germany Consortium led by the German National Library of Science and Technology (TIB). Through workshops, webinars and its website the project provides a forum for academic institutions in Germany to discuss challenges and benefits of ORCID. The implementation of ORCID in essential information infrastructures such as the Bielefeld Academic Search Engine (BASE) as well as the linking with the Integrated Authority File (GND) mark important milestones of the dissemination of ORCID in Germany. The aim of the ORCID DE project is to sustainably foster Open Researcher and Contributor ID (ORCID) at universities and non-university research institutions by taking a comprehensive approach. ORCID DE received funding from the German Research Foundation (DFG) at the beginning of 2016 for a period of three years. The poster illustrates the development of ORCID in Germany in the past three years depicting milestones and crucial factors for the growth of ORCID.
re3data - Open infrastructure for Open Science
1Helmholtz Association, Germany; 2Karlsruhe Institute of Technology; 3Humboldt Universität zu Berlin; 4Purdue University; 5GFZ German Research Centre for Geosciences
re3data (https://www.re3data.org) is the global registry of research data repositories and portals (Pampel et al. 2013). As of December 2018, over 2,240 digital repositories for research data are registered using the comprehensive re3data metadata schema (Rücknagel et al. 2015). To help identify suitable research data repositories, a vast number of funders, publishers and research organizations from all around the world recommend re3data within their research data management guidelines.
re3data is part of DataCite’s service portfolio hosted by the library of the Karlsruhe Institute of Technology (KIT) in collaboration with the Helmholtz Open Science Coordination Office at the Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, the university library of Purdue University, USA and the Berlin School of Library and Information Science of the Humboldt-Universität zu Berlin.
The poster presents the current status of re3data as well as future improvements based on stakeholder requirements.
A model for a sustainable repository network in Serbia
Institute of Nuclear Sciences "Vinča", University of Belgrade, Serbia
A model for developing and maintaining institutional repositories through cooperation of the University of Belgrade Computer Center (UBCC) with (so far, seven) research organizations is presented. It has resulted in improved visibility of local scientific output and has created an interoperable environment for information exchange. The infrastructure is developed and maintained by the UBCC IT team, whereas each institution has its own repository manager responsible for content.
The main focus in developing the institutional repositories is compliance with Open Science principles: standardized metadata, licenses, and reliance on a non-proprietary, open source platform (DSpace). The repositories are now harvested by aggregators and services (OpenAIRE, CORE, BASE, Unpaywall, Google Scholar).
The model is flexible, providing new partners with a tested and optimized structure. It is also expandable: new services may be added, e.g. the Altmetric.com widget or in-house developed tools for bulk metadata editing and import.
The model described could be especially interesting for repository managers and users from developing countries. It demonstrates that proper coordination and standardized procedures can lead to an interoperable and sustainable network of repositories in an environment with no prior repository infrastructure, ensuring optimal information exchange with users, both humans and machines.
Archiving large or restricted datasets with Edinburgh DataVault
University of Edinburgh, United Kingdom
In order to meet the needs of researchers at the University of Edinburgh to archive large and restricted datasets for the long term, the University set out to build a DataVault system from the codebase previously developed in collaboration with the University of Manchester. The new DataVault is a complementary and integral part of the existing suite of Research Data Management (RDM) tools at Edinburgh: open data up to 100 GB is archived in DataShare, and metadata records are brought together with other research outputs in our CRIS (Pure). In this way the Research Data Service is better able to support users taking a holistic approach to RDM as a component of Open Science. Through user engagement and testing, our understanding of user needs has been refined, and challenges relating to usability and resilience have been overcome. The web-based DataVault system allows users to archive multiple terabytes in an affordable, secure archive with a persistent identifier and other metadata discoverable in the CRIS. Future development will implement a review system to allow appropriate management of data over the long term.
EUDAT-B2FIND : A FAIR friendly and interdisciplinary discovery service for open research data
Deutsches Klimarechenzentrum GmbH, Germany
The European Data Infrastructure (EUDAT) project established a pan-European e-infrastructure supporting a variety of research communities and individuals in managing the rising tide of research data in open science. This Collaborative Data Infrastructure (CDI) is based on the FAIR principles and implements community-driven, advanced data management technologies and services to tackle the specific challenges of international and cross-domain research data management.
The EUDAT metadata service B2FIND plays a central role in the European Open Science Cloud (EOSC-hub) project as the central metadata repository and discovery portal for the diverse metadata collected from heterogeneous and interdisciplinary sources within and beyond EOSC-hub. The B2FIND catalogue not only harvests metadata from research communities, but also from generic repositories such as the EUDAT data publication service B2SHARE.
To support Open Science according to the FAIR principles, EUDAT-B2FIND allows research data providers to easily publish their metadata, and scientists to conduct cross-disciplinary and semantic search for, and reuse of, data resources.
Using DSpace@Fraunhofer – Building up the Fraunhofer Open Science Cloud
Fraunhofer-Informationszentrum Raum und Bau IRB, Germany
Since 2016, Fraunhofer, Europe’s largest organization for applied research, has faced the challenge of implementing and migrating three repository systems: a new current research information system (CRIS), a new open data repository, and the complete renovation of the longstanding bibliographic database »Fraunhofer-Publica«, along with its younger sibling, the open access repository »Fraunhofer-ePrints«. The goal is to implement a unified repository landscape as a key enabler for Open Science. DSpace or DSpace-CRIS is being used for all systems. Reasons for selecting DSpace were the availability of a plug-in for individual CRIS functionalities, numerous out-of-the-box features, the large, well-organized community and the high number of successful installations around the globe. The software enables the systems to share entities such as people, projects and organizations. In addition, standard submission workflows for all data types and a consistent user experience will be available. The poster will deliver a visual presentation of the three systems, their interfaces and workflows with key user groups, as well as their interconnection and software architecture. It will present the key points of our feasibility studies of DSpace/DSpace-CRIS and give an outlook on the Fraunhofer vision of building a unified »Fraunhofer Open Science Cloud«.
JOIN² Software Platform for the JINR Open Access Institutional Repository
Joint Institute for Nuclear Research, Russian Federation
Nowadays, practical interest in scientific research results and in educational lectures and materials pushes toward the creation and development of open archives of scientific publications. The JINR Document Server (JDS — jds.jinr.ru) is based on the Invenio software platform developed at CERN. The goals of JDS are to store JINR information resources and to provide effective access to them. JDS contains many materials that reflect and facilitate research activities.
In the framework of the JOIN² project, the partners have improved and adapted the Invenio software platform to the information needs of JOIN² users. Since the needs of JDS users are very similar to those of JOIN² users, JINR decided to join the JOIN² project. JINR's participation in the project will improve the functionality of the JINR Open Access institutional repository through code reuse and further joint development. The presentation shows the process of migrating and adapting JDS to the JOIN² software platform.
Context-adaptive research data repository publishing in Chinese Academy of Sciences
Computer Network Information Center, Chinese Academy of Sciences, China, People's Republic of
ScienceDB is an open, generic data repository that has aimed since 2016 at making scientific data FAIR (Wilkinson et al. 2016; Zhang et al. 2018). ScienceDB mainly serves three scenarios: long-term data services for data journal publishing; large-scale data services for research networks; and tailored data publishing services for individual research scientists.
(1) For data journal publishing, ScienceDB generally supports lifelong data curation and services. For example, China Scientific Data (www.csdata.org), the first multidisciplinary data journal in China, has published over 170 data papers, with 70% of all linked datasets submitted to ScienceDB.
(2) For research teams and networks, ScienceDB supports complex management models for major research projects featuring internal communication and valuable data sharing. In the CASEarth project, ScienceDB has supported over 100 sub-programs in submitting data and has preserved over 400 TB of data covering a broad range of geoscience and related subjects.
(3) For individual research scientists, data repository publishing features user-friendly services, such as tailored tools for subject-based data curation.
So far, we have recorded 259,190 page views and 36,000 downloads. Furthermore, connectivity within the local and international science community, sustainability, and broader social impacts shall also contribute to the long-term development of data repository publishing.
Conference: OR2019