Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Date: Tuesday, 11/Jun/2019
8:00am - 9:00am Morning coffee
 
8:00am - 5:00pm Registration
 
9:00am - 10:30am Opening Plenary and keynote: Jeff Gothelf
Session Chair: Jessica Lindholm, Chalmers University of Technology
Outcomes over output: a user-centric approach to building successful systems
Lecture Hall A 
10:30am - 11:00am Coffee break
 
11:00am - 12:30pm P1A: UX in practice
Session Chair: Rory McNicholl, CoSector, University of London
Lecture Hall A 
 

Jisc Open Research Repository: Delivering a compelling User Experience

Tom Davey, Dom Fripp, John Kaye

Jisc, United Kingdom

Jisc has developed “Jisc Open Research Hub” - a solution to enable the research community to deposit and preserve research data and other digital objects. It includes repository, preservation, and reporting components. This presentation discusses the repository – Jisc Open Research Repository – developed from scratch in alignment with a range of sector needs.

Jisc recognised the importance of delivering this service with a compelling user experience, and invested heavily in achieving this aim.

We explore the many motivations for the care given to the user experience and the different methods applied throughout the development process, which are felt to be highly relevant to those wishing to understand user-centred approaches to product development, especially in the repository / research space.

We discuss the design/development process, including the use of external and in-house expertise, and key activities including:

Requirements-gathering (inc. 16 pilot institutions, domain experts, other bodies)

Testing - including several phases of user acceptance testing, benchmarking, and standards (accessibility) testing

UX Governance

Jisc’s position in the UK HE / research sector, as well as the scale of the project, provides us with many domain-specific insights to share, ranging from broad methods down to individual design decisions informed by research and domain expertise.

Paper-P1A-119Davey_a.pptx
Paper-P1A-119Davey_b.pdf


Uncomplicating the business of repositories

Emily G. Morton-Owens, Katherine Lynch

University of Pennsylvania Libraries, United States of America

In this presentation, we discuss how our library is running our repository in production to meet the needs of our “business” as efficiently as possible. We have an interest in limiting the number of digital platforms we manage for the purposes of sustainability and efficiency, but we must also consider how well a general platform can meet specific user needs.

A governance group of administrators, in conference with stakeholders and developers, seeks to find the best way to accommodate each collection or functional need, with an eye to minimizing technical complexity, offering stakeholders self-serve options when possible, and maintaining a single canonical copy of each object. We will present some case studies of how material has been handled in our developing digital ecosystem, where preservation and access sometimes present conflicting priorities. We are exploring how our repository can best evolve to support our aims of making data and documents freely available.

Paper-P1A-421Morton-Owens,Lynch_a.pptx
Paper-P1A-421Morton-Owens,Lynch_b.pdf


Building Interfaces for All the Users

William Hicks, Mark E. Phillips, Pamela Andrews

University of North Texas Libraries, United States of America

A recent redesign of the digital repository platform at the University of North Texas (UNT) gave us the opportunity to examine how our interfaces can be enhanced for a better user experience. In doing so, we considered both human and non-human users, and how they connect with our digital objects. For non-human users, this also meant integrating a number of existing interfaces and technologies into the digital repository platform. As some of these integrations also benefit our human users, the design meant incorporating these technologies in a way that was unobtrusive, yet accessible. Using our redesign as a case study, this presentation discusses the impact of the different user groups and content types on our overall design direction for the project, as well as the different interfaces and technologies we integrated to make our content more broadly available across multiple access points.

Paper-P1A-502Hicks_a.pdf
 
11:00am - 12:30pm P1B: It's Repository Rodeo time!
Session Chair: David Wilcox, DuraSpace
Lecture Hall B 
 

Revenge of the Repository Rodeo

David Wilcox1, Melissa Anez2, Pascal Becker3, Mark Bussey4, Will Fyson5, Lars Holm Nielsen6, Robin Ruggaber7

1DuraSpace; 2Islandora Foundation; 3The Library Code; 4Data Curation Experts; 5University of Southampton; 6CERN; 7University of Virginia

The Repository Rodeo returns for another round of questions and answers! This popular panel, featured since Open Repositories 2016 in Dublin, offers a broad overview of the main repository platforms at Open Repositories and provides an opportunity for spirited discussion amongst panelists and attendees. Join representatives from the DSpace, EPrints, Fedora, Islandora, Invenio, and Samvera communities as we (briefly) explain what each of our repositories actually does. We'll also talk about the directions of our respective technical and community developments and, related to the conference theme of All the User Needs, discuss how our various systems can support the needs of users in our communities.

This panel will be a great opportunity for newcomers to Open Repositories to get a crash course on the major repository options and meet representatives from each of their communities. After a brief presentation from each representative, we'll open the session up for questions from the audience.

Panels-P1B-366Wilcox_a.pdf
 
11:00am - 12:30pm P1C: 24x7s: Systems and Policy
Session Chair: Leila Sterman, Montana State University
Lecture Hall C 
 

Revising an institutional open access policy to reserve the right to apply a Creative Commons License to dissertations and author accepted manuscript versions of peer-reviewed articles.

Paula Callan, Katya Henry

Queensland University of Technology, Australia

In 2003, Queensland University of Technology (QUT) implemented the world’s first university-wide open access mandate. This policy, which required the deposit of higher degree research theses and author accepted manuscripts of peer reviewed articles, has played a significant role in the success of QUT’s open access repository, QUT ePrints. In 2018, QUT revised its open access policy in what is perhaps another world’s first. This strategic policy aims to increase the proportion of repository content made available under a Creative Commons license, facilitating greater use and impact of QUT’s research outputs. Previously, most author accepted manuscript files downloaded from QUT’s repository carried no license information. This revised policy asserts that author accepted manuscripts will be made available under a Creative Commons Attribution Non-Commercial (CC-BY-NC) license, and higher degree by research theses (dissertations) will be made available under a Creative Commons Attribution Non-Commercial NoDerivatives (CC-BY-NC-ND) license. This action is supported by the University’s Intellectual Property Policy, which reserves some rights with respect to works created by staff in the course of their employment. QUT’s revised policy aligns with trends in funding bodies’ open access requirements and represents a new approach to knowledge discovery and dissemination.

24x7-P1C-302Callan_a.pdf
24x7-P1C-302Callan_b.pptx


Make it easy - Integration of Data Description in the Research Process

Sibylle Hermann, Dorothea Iglezakis, Anett Seeland

Universität Stuttgart, Germany

DaRUS, the upcoming data repository of the University of Stuttgart based on Dataverse, should not only be used to publish data but also to manage hot research data. The idea is to integrate the repository into the daily life of researchers. The structured description of research data is made as easy as possible and will be integrated early into the researchers' processes through additional tools. The RePlay Client is a GUI-based tool that supports researchers in versioning and documenting their research data at their workplace during the research process. The metadata harvester is a command line tool that automates parts of the documentation by parsing input and log files and by enabling a convenient integration with metadata templates. But using Dataverse as a data management system rather than a publication repository also poses additional challenges, such as linking data from different research steps, moving datasets between private and public areas, and enabling alternative views and search options over the datasets of a dataverse. We plan to address these challenges with an RDM client interacting with the REST APIs of Dataverse to make research data easily searchable and sharable over all phases of the research process.
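
As a rough illustration of the kind of integration described above, the sketch below shows how an external tool might push a minimal dataset description into a Dataverse installation through its native REST API. The base URL, API token and collection alias are placeholders, and a real deposit would need the remaining required citation fields.

```python
import requests

# Assumed values -- replace with a real Dataverse installation and API token.
BASE_URL = "https://darus.example.org"      # hypothetical Dataverse base URL
API_TOKEN = "xxxx-xxxx"                     # personal API token
DATAVERSE_ALIAS = "my-project"              # target dataverse (collection) alias

# Abbreviated dataset metadata; a real payload also needs the other required
# citation fields (author, datasetContact, dsDescription, subject).
payload = {
    "datasetVersion": {
        "metadataBlocks": {
            "citation": {
                "fields": [
                    {
                        "typeName": "title",
                        "typeClass": "primitive",
                        "multiple": False,
                        "value": "Simulation run 2019-03-14, parameter set A",
                    }
                ]
            }
        }
    }
}

response = requests.post(
    f"{BASE_URL}/api/dataverses/{DATAVERSE_ALIAS}/datasets",
    headers={"X-Dataverse-key": API_TOKEN},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["persistentId"])  # DOI/handle of the new draft dataset
```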

24x7-P1C-328Hermann_a.pdf


Automating repository workflows with Orpheus, an Open Source database of journals and publishers

André Fernando Sartori

University of Cambridge, United Kingdom

Repository management relies on knowledge of numerous attributes of academic journals, such as revenue model (subscription, hybrid or fully Open Access), self-archiving policies, licences, contacts for queries and article processing charges (APCs). While datasets collating some of this information are helpful to repository administrators, most cover only one or a few of those attributes (e.g., APC price lists from publishers), do not provide APIs or their API responses are not machine readable (self-archiving policies from RoMEO), or are not updated very often (licences and APCs from DOAJ). As a result, most repositories still rely on administrative staff looking up and entering required attributes manually. To solve this problem and increase automation of tasks performed by the Cambridge repository team, I developed Orpheus, a database of academic journals/publishers written in Django. Orpheus was recently integrated with our DSpace repository Apollo and auxiliary systems via its RESTful API, enabling embargo periods to be automatically applied to deposited articles and streamlining the process of advising researchers on payments, licences and compliance with funders' Open Access policies. Orpheus is Open Source (https://github.com/osc-cam/orpheus) and may be easily expanded or tailored to meet the particular needs of other repositories and Scholarly Communication services.
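
A minimal sketch of how a repository workflow might query an Orpheus-style service for a journal's self-archiving attributes. The endpoint path, parameters and field names here are hypothetical; the actual API is documented in the Orpheus repository linked above.

```python
import requests

# Hypothetical Orpheus-style lookup: endpoint path, parameters and field names
# are illustrative only -- consult the Orpheus documentation for the real API.
ORPHEUS_BASE = "https://orpheus.example.ac.uk/api"

def journal_policy(issn: str) -> dict:
    """Return self-archiving attributes for a journal identified by ISSN."""
    resp = requests.get(f"{ORPHEUS_BASE}/journals/", params={"issn": issn}, timeout=10)
    resp.raise_for_status()
    records = resp.json()
    if not records:
        raise LookupError(f"No Orpheus record for ISSN {issn}")
    return records[0]

policy = journal_policy("1234-5678")
embargo_months = policy.get("embargo_months")            # e.g. used to set the item embargo in DSpace
licence = policy.get("accepted_manuscript_licence")      # e.g. shown to the depositing researcher
print(embargo_months, licence)
```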

24x7-P1C-251Sartori_a.pdf


Permissible Closed-Use General licences: filling the gap between Open and Restrictive Data Licences

Graham Alexander Parton, Sam Pepler

Centre for Environmental Data Analysis, Science and Technology Facilities Council, United Kingdom

The virtues of Open Data are increasingly understood within the research domain, yet barriers remain for data producers to release data under an Open Data licence. Though restrictive licences exist, these are unattractive to some data providers who wish to adhere to many of the virtues of open data but have concerns about onward sharing, seeking instead to establish a single, canonical data source. This desire stems from reasons including: data usage reporting requirements; maintaining data veracity; and the fact that data may continue to develop (e.g. new additions or versions). These providers wish, however, to still follow other Open Data principles, especially in permitting as broad a range of uses as possible.

To support such data providers’ wishes, the Centre for Environmental Data Analysis has drawn up two new generic, permissive-use ‘closed’ data licences based on the UK Open Government and UK Non-Commercial Government Licences. The Closed-Use General Licence (CUGL) and Closed-Use Non-Commercial General Licence (CUNCGL) permit broadly the same range of uses as open data licences, but address the onward-sharing concerns. They also signpost potential users to the canonical data source should they wish to exploit the data themselves. The licences can be found at: http://artefacts.ceda.ac.uk/licences/

24x7-P1C-414Parton_b.pdf


Building node-iiif: A performant, standards-compliant IIIF service in < 500 lines of code

David E Schober, Michael Klein

Northwestern University, United States of America

Northwestern has moved its repository systems into the AWS cloud. Part of our goal in doing so was to allow scaling, to use existing services when we could, and to offload as much management as possible for the services we consume. The learning curve has been in identifying which server-hosted systems could easily be replaced by services (existing or Lambda-based). This presentation is about a small part of that journey – the development of an IIIF-compliant Node.js application hosted as a “serverless” Lambda.
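
node-iiif itself is a Node.js application, but the shape of the problem is language-independent: a IIIF Image API 2.x request encodes the requested region, size, rotation, quality and format in the URL path. The sketch below, in Python, only parses such a path; a real service must also validate each segment and render the derivative image.

```python
from dataclasses import dataclass

# IIIF Image API 2.x URI template:
#   {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}

@dataclass
class IIIFRequest:
    identifier: str
    region: str     # "full", "square", "x,y,w,h" or "pct:x,y,w,h"
    size: str       # "full", "max", "w,", ",h", "pct:n", "w,h", "!w,h"
    rotation: str   # "n" or "!n" (mirrored)
    quality: str    # "color", "gray", "bitonal", "default"
    format: str     # "jpg", "png", "webp", ...

def parse_iiif_path(path: str) -> IIIFRequest:
    # Take the last five path segments and split quality from format.
    identifier, region, size, rotation, last = path.strip("/").split("/")[-5:]
    quality, fmt = last.rsplit(".", 1)
    return IIIFRequest(identifier, region, size, rotation, quality, fmt)

req = parse_iiif_path("/iiif/2/abc123/full/!600,600/0/default.jpg")
print(req)
```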

24x7-P1C-438Schober_a.pdf


ARCLib – development of open source solution for long-term preservation

Martin Lhotak

Library of the Czech Academy of Sciences, Czech Republic

This short presentation will report on the Czech ARCLib project. One of the main goals of the project is the development of an open-source solution for bit-level and logical preservation of digital documents, respecting national and international standards as well as the needs of all types of libraries in the Czech Republic. The mission of the ARCLib project lies, among other things, in creating a solution that will allow institutions to implement all of the OAIS functional modules and entities, taking institutions’ information models into account. The architecture is planned as open and modular, and the final product will be able to ingest, validate and store data from a majority of software products used for creating, disseminating and archiving libraries’ digital and digitized data in the Czech Republic. The solution connects to the Fedora Commons repository and Archivematica, in that it supports the creation of submission information packages using these tools.

24x7-P1C-437Lhotak_a.pdf


Answering the call: researcher-driven training in data management

Anne Stevenson, Carmi Cronje

CSIRO, Australia

While CSIRO’s Research Data Service (RDS) has been promoting sound data management practices and the sharing of data, uptake has been inconsistent, falling largely to early adopters, those required by publishers or funders, and those with other motivations.

Across CSIRO, data management awareness is limited to small pockets: large facilities have detailed requirements, but the approach is inconsistent. Meanwhile, the explosion in data science is driving a need for a holistic approach to upskilling researchers.

Researchers from the Agriculture and Food (A&F) research unit called for support for developing better data skills and approached their executive for assistance, and the Ag & Food Data School was born.

In consultation with the A&F Data School organisers, an activities-based RDM component using the Carpentries model was presented in April 2018. This material was collaboratively refined and presented in October 2018. Deep engagement has flowed from these sessions.

CSIRO’s Learning and Development unit are working with RDS team members to take this to an enterprise level initiative.

Peer-to-peer learning has a great chance of success. Partnering with researchers helps support teams connect concepts with researchers’ practice. Co-designing training with engaged researchers is essential, as collaboration is the key to improving data culture and practice.

24x7-P1C-190Stevenson_a.pdf
24x7-P1C-190Stevenson_b.pptx


OA Theses: Demonstrating value and addressing concerns of students and their supervisors

Jenny McKnight

University of Melbourne, Australia

In 2017, the University of Melbourne introduced a revised thesis access policy. Under this revised policy, open access became ‘opt out’ rather than ‘opt in’ for students and the default embargo period was reduced from seven years to two. Whilst these changes were welcomed by many members of the university community, they also served to generate a degree of anxiety in many quarters: what was the value of these changes? Would this new policy facilitate plagiarism of student work? And (by far the most common concern raised) would it prevent students publishing from their thesis? This paper will explore a number of initiatives undertaken at the university to address these concerns at a local level and increase confidence amongst graduate research students, and their supervisors, in making their theses OA.

24x7-P1C-229McKnight_b.pdf


Discovery with Linked Open Data: Leveraging Wikidata for Context and Exploration

Devin Higgins, Lucas Mak

Michigan State University, United States of America

Wikidata has been touted for its potential to be an authority linking hub as it provides the mechanism to connect to a few thousand different types of external identifiers, i.e. concepts from different thesauri and controlled vocabularies (Neubert, 2017). Common vocabularies like VIAF, ISNI, LCSH, LCNAF, FAST, AAT, ULAN, GeoNames, etc. have been, at least, partially mapped to Wikidata. How can cultural heritage institutions leverage this foundation built by Wikidata to enhance the discovery experience of users? The subject knowledge cards available for all items in the Michigan State University Libraries digital repository utilize the URIs of subject terms in metadata records as an entry point to connect to rich linkages provided in Wikidata entries. The card captures contextual information of the subject in focus by pulling information from Wikidata and linked DBpedia. It also provides links to semantically related resources from the library catalog and selected scholarly databases by using Wikidata as an interchange to trace equivalent concepts used in different systems. With linkages to external resources, users are no longer confined to a single silo but able to navigate an integrated network of resources beyond the immediate system.
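
A minimal sketch of the kind of lookup such a knowledge card performs against the public Wikidata SPARQL endpoint; the example item (Q1299) and property (P214, the VIAF identifier) are illustrative choices, not taken from the presentation itself.

```python
import requests

# Fetch a short description and the VIAF identifier for one Wikidata item --
# the sort of contextual data a subject knowledge card can pull in at display time.
SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?itemDescription ?viaf WHERE {
  BIND(wd:Q1299 AS ?item)                      # Q1299 = The Beatles (example item)
  OPTIONAL { ?item wdt:P214 ?viaf . }          # P214 = VIAF identifier
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

resp = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "knowledge-card-demo/0.1"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row.get("itemDescription", {}).get("value"),
          row.get("viaf", {}).get("value"))
```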

24x7-P1C-426Higgins_a.pdf
 
11:00am - 12:30pm P1D: Convergence of repository and CRIS functions
Session Chair: Sara Gould, British Library
Lecture Hall J 
 

Practices and Patterns: the Convergence of Repository and CRIS Functions

Rebecca Bryant1, Pablo de Castro2, Annette Dortmund1, Michele Mennielli3

1OCLC; 2University of Strathclyde; 3Duraspace

The recently released report Practices and Patterns in Research Information Management: Findings from a Global Survey, jointly produced by OCLC Research and euroCRIS, shines a light onto a relevant area of repository evolution, namely the ever increasing coupling of its workflows to those traditionally associated with Research Information Management (RIM, or CRIS) systems. Previous landscape analysis had already identified this trend a few years ago, and this comprehensive OCLC/euroCRIS exercise provides new evidence on repositories and RIM systems acting like a single integrated, interoperable system for the purpose of collecting and exposing the institutional research output.

The proposed presentation examines this trend based on nearly 400 survey responses from 40 nations, and provides insights into the increasing overlap of practice, functionality, and workflows, such as the extent to which RIM systems may be playing the role of institutional (literature) repositories, data repositories, and repositories for electronic theses and dissertations (ETDs). The global responses will be analysed together with their regional breakdown. Also relevant is the fact that the most popular protocol identified in the survey is OAI-PMH. Further iterations planned for this survey should help capture the trends of the landscape evolution in this and other areas.
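
Since OAI-PMH is the protocol most often reported in the survey, a minimal harvesting loop is sketched below against a placeholder endpoint; it pages through Dublin Core records using resumption tokens.

```python
import requests
import xml.etree.ElementTree as ET

# Minimal OAI-PMH harvesting loop (Dublin Core records) against a placeholder endpoint.
OAI_ENDPOINT = "https://repository.example.edu/oai/request"
NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
while True:
    root = ET.fromstring(requests.get(OAI_ENDPOINT, params=params, timeout=60).content)
    for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        identifier = record.find(".//oai:identifier", NS)
        print(identifier.text if identifier is not None else "(deleted record)")
    token = root.find(".//oai:resumptionToken", NS)
    if token is None or not (token.text or "").strip():
        break  # no more pages
    params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}
```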

Paper-P1D-161Bryant,de Castro,Mennielli_a.pptx
Paper-P1D-161Bryant,de Castro,Mennielli_b.pdf


Building an All-in-one Service: Extending an existing Open Access Repository to a complete Research Information System

Oliver Goldschmidt, Beate Rajski, Gunnar Weidt

TU Hamburg, Germany

The Hamburg University of Technology is using DSpace CRIS for its Open Access publications. In the context of the Hamburg Open Science programme, this repository is being extended with a research data repository and is intended to include all researchers, organizational units and publications (including non-OA items, even without any fulltext), building a research information system and a university bibliography.

DSpace CRIS adopts some of the recommendations for next generation repositories by the Confederation of Open Access Repositories (COAR) and provides entity structures which a standard DSpace system does not have yet. We therefore believe that by sticking with DSpace (CRIS) we are well prepared for the future, backed by a good community with active developers.

This presentation shows how DSpace CRIS helped us cover the two new components of the repository and which challenges we had to face while developing the new all-in-one system.

Paper-P1D-237Goldschmidt,Rajski_a.pptx
Paper-P1D-237Goldschmidt,Rajski_b.pdf


RCAAP Repositories Network - Promoting Interoperability

José Carvalho1, Paulo Lopes2, João Moreira2

1University of Minho, Portugal; 2FCT/FCCN

In this presentation we describe a set of initiatives adopted to develop an integrated network of repositories. In particular, we focus on the guidelines, tools, services and open access practices. Furthermore, we demonstrate how such a network can be easily implemented in other contexts and can interact with other systems of the science management ecosystem.

Paper-P1D-374Carvalho_a.pdf
 
11:00am - 12:30pm P1E: Measuring use and reuse
Session Chair: Holly Mercer, University of Tennessee
Lecture Hall M 
 

Reuse and use: new definitions for digital library assessment

Ayla Stein Kenfield1, Santi Thompson2, Elizabeth Kelly3, Genya O'Gara4, Caroline Muglia5, Liz Woolcott6

1University of Illinois at Urbana-Champaign, United States of America; 2University of Houston Libraries; 3Loyola University New Orleans; 4Virtual Library of Virginia; 5University of Southern California; 6Utah State University

This presentation explores the complexities of measuring the impact of use vs. reuse among cultural heritage and knowledge organizations (including but not limited to museums, libraries, archives, data repositories, and historical societies). It will provide an overview of the project team’s evolving definitions for use and reuse, informed by their work with the Measuring Reuse project. These definitions help delineate the differences between reuse and use, thus setting the stage to help information professionals determine which use cases should be considered use or reuse and developing more detailed and relevant assessments of the impact of digital collections. The speakers will apply the new definitions to specific use cases to show their utility and value and suggest how these efforts will lead to more informed assessment methods of digital library materials.

Paper-P1E-178Kenfield,Thompson_a.pptx
Paper-P1E-178Kenfield,Thompson_b.pdf


The BitViews project: a new method for aggregating online usage data, a new route to universal Open Access.

Manfredi La Manna1, Camillo Lamanna2

1University of St Andrews, United Kingdom; 2University of New South Wales and University of Sydney, Australia

Academics, librarians, university administrators, and research funders all agree that having aggregated, worldwide, reliable, and validated data on online usage of scientific, scholarly, and medical peer-reviewed outputs would be highly desirable. Current initiatives to collate views data across publishers and institutional repositories, validating the data using the COUNTER protocol, are unlikely to succeed because they are predicated on the concept of a central clearing house, which is expensive to maintain and not scalable. The BitViews project uses open-source blockchain technology (a distributed publicly-accessible ledger protecting the privacy of viewing data) and thus provides a low-cost solution which bypasses the need for a central clearing house. As soon as online usage data are aggregated on a worldwide basis, they provide the raw material for devising discipline-specific non-citation impact metrics. Authors of scientific, scholarly, and medical peer-reviewed outputs will have a strong personal incentive to maximise the visibility (as opposed to the citability) of their work and therefore will voluntarily want to deposit their postprints in open access repositories. Librarians, university administrators, and academics ought to work together to make online usage data open data and realise that BitViews can turn open data into open access for all (dovetailing with Plan S).
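
The following is a generic illustration of an append-only, hash-chained ledger of view events, not the BitViews protocol itself: each block records the hash of its predecessor, so any retroactive change to earlier usage counts becomes detectable.

```python
import hashlib
import json
import time

# Sketch of an append-only, hash-chained ledger of view events.
# NOT the BitViews protocol -- just an illustration of how chaining each block
# to the hash of its predecessor makes retroactive tampering detectable.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_view(chain: list, item_id: str, repository: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "prev_hash": prev,
        "timestamp": int(time.time()),
        "item": item_id,            # e.g. a DOI or handle -- no reader identity stored
        "repository": repository,
    })

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list = []
append_view(ledger, "hdl:10023/99999", "research-repository.example.ac.uk")
append_view(ledger, "doi:10.1000/xyz123", "repository.example.edu")
print(verify(ledger))  # True unless an earlier block has been altered
```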

Paper-P1E-199La Manna_a.pdf


Analyzing Aggregate IR Use Data from RAMP

Kenning Arlitsch1, Dale Askey2, Jonathan Wheeler3

1Montana State University, United States of America; 2University of Alberta, Canada; 3University of New Mexico, United States of America

Data collected from 50 institutional repositories (IR) on various platforms and from around the world will be analyzed for this presentation to demonstrate aggregate IR performance, use, and the visibility of content. The Repository Analytics & Metrics Portal (RAMP) is a free web service developed in 2017 with funding from the Institute of Museum and Library Services. The dataset collected by RAMP currently exceeds 300 million rows and it is the only open aggregate data available to evaluate the visibility and use of IR content, diagnose deficiencies with performance, align content with user needs, and optimize metadata for maximum click-through ratios, among myriad other potential uses. This presentation will address several potential research questions that could help improve IR performance and demonstrate the IR value proposition. Methods for extending the RAMP dataset’s analytic potential through augmentation with complementary, publicly available datasets will be described. The presentation will encourage audience members to register their own repositories with RAMP and/or to consider additional ways to analyze the dataset.
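
A sketch of the kind of aggregate analysis the RAMP dataset enables; the column names used here are assumptions for illustration and should be checked against the published RAMP data dictionary.

```python
import pandas as pd

# Illustrative aggregate analysis of RAMP-style page-click data.
# Column names (date, repository_id, clicks, impressions) are assumed, not official.
df = pd.read_csv("ramp_page_clicks.csv", parse_dates=["date"])

monthly = (
    df.groupby([pd.Grouper(key="date", freq="M"), "repository_id"])
      .agg(clicks=("clicks", "sum"), impressions=("impressions", "sum"))
      .assign(ctr=lambda d: d["clicks"] / d["impressions"])  # click-through ratio
      .reset_index()
)
print(monthly.sort_values("ctr", ascending=False).head(10))
```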

Paper-P1E-452Arlitsch,Askey,Wheeler_a.pptx
Paper-P1E-452Arlitsch,Askey,Wheeler_b.pdf
 
12:30pm - 1:30pm Lunch Provided
 
1:30pm - 3:30pm P2A: Audible and visual - non-textual objects in repositories
Session Chair: Neil Stephen Jefferies, University of Oxford
Lecture Hall A 
 

All the viewer needs: How user-centered design helps to develop and operate an open repository for audiovisual media

Margret Plank, Bastian Drees, Abiodun Ogunyemi

Leibniz Information Centre for Science and Technology, Germany

The amount of audiovisual material in science is rising sharply. At the same time, adequate repositories for this particular type of research output are rare. Neither text and data repositories nor platforms such as YouTube and Vimeo can meet users' needs for scientific audiovisual materials. Audiovisual repositories have to provide innovative tools as well as a stable and reliable infrastructure.

To meet these user needs, the TIB AV-Portal was developed based on a user-centered design approach. Today it offers automated video analysis with scene, speech, text and image recognition and provides free access to approximately 18,000 videos from science and technology. Our presentation will show use case scenarios of audiovisual material in research and education from recent user interviews. Moreover, the TIB AV-Portal is presented as a reliable repository for scientific videos, and the potential for further development and lessons learned will be discussed.

Paper-P2A-207Plank_a.pptx
Paper-P2A-207Plank_b.pdf


What do users expect from image repositories? – Focus group impressions

Lucia Sohmen, Ina Blümel, Lambert Heller

Leibniz Information Center for Science and Technology, Germany

The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project, a focus group workshop was held to determine whether the development was addressing researchers’ needs. It took place before the second half of the project so that the results could inform further development, since addressing users’ needs is a central part of the project. The focus was to find out what content and functionality researchers expect from image repositories.

In a first step, participants were asked to fill out a survey about their image use. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but it is not a routine task and they often do not know any image repositories. This is another reason for repositories to become more open and reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, while also being unsure how to cite images. In general, there is high demand for reusing scholarly images, but the existing infrastructure has room to improve.

Paper-P2A-329Sohmen_a.pdf


Developing technical infrastructure and services for diverse usage scenarios based on multimedial linguistic data

Hanna Hedeland1, Daniel Jettka2

1Hamburger Zentrum für Sprachkorpora, Universität Hamburg, Germany; 2Institut für Finno-Ugristik/Uralistik, Universität Hamburg, Germany

The Hamburg Centre for Language Corpora (HZSK) is a research data centre at the University of Hamburg focussing on spoken multilingual data and data from research on linguistic diversity and language documentation. The HZSK provides digital infrastructure comprising a research data repository and supports researchers with advice and training in all questions related to data management across the research data lifecycle. The presentation discusses how the complex digital infrastructure of the HZSK has emerged, in compliance with standards and best practices within research data management, and how its components provide functionality for dealing with various user needs and requirements for specific usage scenarios.

Paper-P2A-491Hedeland_a.txt
Paper-P2A-491Hedeland_b.pdf


Searching for Sustainability – Avalon in the Samvera Community

David E Schober1, Jon Dunn2, Ryan Stean1

1Northwestern University, United States of America; 2Indiana University, United States of America

This presentation will focus on Northwestern University and Indiana University’s continued work toward a sustainable model for support, maintenance, and development of the Avalon Media System - an open-source, Samvera-based repository for audio and video jointly developed since 2011. Over the last two years, the team has focused on widening engagement with and commitment to the Samvera and IIIF communities as well as developing wider developer interest by re-basing the product on top of Hyrax and developing a modular architecture.

Paper-P2A-425Schober,Dunn_a.pptx
Paper-P2A-425Schober,Dunn_b.pdf
 
1:30pm - 3:30pm P2B: Designing interfaces with UX
Session Chair: Joseph McArthur, Open Access Button
Lecture Hall B 
 

A new user-centric open repository design: A case study of INED’s institutional repository with Polaris OS

Yann Mahé1, Karin Sohler2, Manuel Guzman1, Sarah Amrani1

1MyScienceWork, France; 2French Institute for Demographic Studies (Ined), France

In March 2018, MyScienceWork released the new open source solution Polaris OS, which seeks to address major challenges of open repositories. It was first implemented together with the French Institute for Demographic Studies (INED), in the framework of an institutional open repository (IOR) project carried out from September 2017 to July 2018.

In our talk we will present the features and services of Polaris OS, by using the case study of the IOR developed with INED. Our presentation will focus on three major topics:

1) Development process: we will show how input and feedback from the INED project team and researchers helped to develop the general architecture, features and services of the technical solution, and allowed for a customized configuration to meet the specific needs of INED users.

2) User-centered design and services of the IOR: we will present and give a short demonstration of the customized tools/services implemented so far for different kinds of users (researchers, data analysts, librarians, repository managers…).

3) Outlook of the new repository solution: We will give an overview of future user involvement and development of the Polaris OS solution (user group/community; new services and features…).

Paper-P2B-217Mahé,Sohler_a.pdf
Paper-P2B-217Mahé,Sohler_b.pdf


Experts and Novices: redesigning user interfaces for the White Rose Repositories

Beccy Shipman1, Kate O'Neill2, Kate Petherbridge3

1University of Leeds, United Kingdom; 2University of Sheffield, United Kingdom; 3White Rose Libraries, United Kingdom

This paper will explore the impact of expert and novice user needs on the development of repository deposit interfaces. White Rose Libraries (The University Libraries of Leeds, Sheffield and York) manage two open access repositories, one for research papers (WRRO) and one for etheses (WREO). Significant development work has been undertaken over the last three years on both systems. During the process of collecting user requirements, it became clear the users of the two systems have very different levels of expertise. Those using WREO are predominantly novices, postgraduates using it once to upload their thesis. Users of WRRO are primarily experts, Library staff with substantial experience of reviewing and depositing papers. These differences in user needs will be explored further, showing the approaches we took to gathering user requirements. This paper will then set out how these requirements were translated to the development of the repositories using a design process that was both iterative and collaborative. It was vital that the redesign work met the needs of all three institutions so collaboration was central to the approach taken. The paper will conclude with lessons learned and advice for anyone embarking on a similar redesign of the submission process.

Paper-P2B-265Shipman,ONeill_a.pptx


R-Shiny as an interface for Data Visualization and Data Analysis on the Brazilian Digital Library of Thesis and Dissertations (BDTD)

Lucca F. Ramalho, Leonard R. Campelo, Washington Luís Ribeiro de Carvalho Segundo

Brazilian Institute of Information in Science and Technology (IBICT), Brazil

This work presents a use case of building a data visualization interface for open access repositories. The case analysed is the Brazilian Digital Library of Theses and Dissertations (BDTD). R is a statistical tool widely used among developers and programmers. One of its packages, Shiny, makes it easy to build interactive web apps straight from R. Through the app, the user can visualize data in a fast and customizable way. It can help them keep track of metadata and usage statistics across institutional repositories, and can also be applied to discovering scientific information, such as bibliographic references and lists of specialists in a given research domain. These data visualization tools can encourage others to create open repositories and to join national, regional or international repository networks.

Paper-P2B-436Ramalho,de Carvalho Segundo_a.pptx


How to build a repository relevant for your institution, allowing the researchers to do research rather than administration

Jessica Lindholm

Chalmers University of Technology, Sweden

This paper presents our experiences of in-house development of a CRIS at Chalmers University of Technology, Sweden. Which course for the future is relevant when building a new repository platform in 2019, and how does it relate to the choices made 15 years ago when many of the current repository platforms were launched?

This paper will present the features that we have in place, as well as the experiences from moving out of the comfort zone and dealing with new, non-publication related data, while sustaining and enhancing existing data and current services. In the development of research.chalmers.se we have had to leave several desired features in a backlog list, whilst focusing on the most relevant features and implementing them as elegantly as possible. For instance, we will demonstrate automatic classification and data-driven workflows between various systems - all made with the motto: "Let the researchers do research, instead of administrative tasks".

Paper-P2B-525Lindholm_a.pptx
 
1:30pm - 3:30pm P2C: Collections and connections for research data
Session Chair: Frances Madden, The British Library
Lecture Hall C 
 

In a third space: Building a horizontally connected digital collections ecosystem

Kevin Clair, Jack Maness, Kim Pham, Fernando Reyes, Jeff Rynhart

University of Denver, United States of America

This presentation describes the development of our open-source digital collections infrastructure, which is comprised of a repository for metadata (ArchivesSpace - Digital Collections Department), a preservation repository (Archivematica and Duracloud - Artefactual/Duraspace), a digital object repository (Node.js + ElasticSearch - Library Technology Services Department), and a streaming server (Kaltura - campus IT).

In line with COAR’s Next Generation Repositories guiding principles, the technology space of our ecosystem isn’t relegated to one vendor or to one IT department on campus - rather, it is placed in the hands of those with the best skills and expertise to provide that support. Each system is an independently managed standalone product, resulting in a true hybrid architecture and a coordinated programme of digital curation activities that still allows each group to focus on the service they have the most vested interest in providing. In monolithic repositories, by contrast, knowledge is spread across different components: skills are concentrated in the parts that require the most attention, while other parts are left as a black box (fingers crossed that they don't break). We will also talk about the different management and development practices for each system, and how we negotiate our partnership to support one another and provide digital-collections-as-a-service.

Paper-P2C-264Maness,Pham_a.pdf


The Protean IR: Developing a versatile and decentralized repository through an API

Brian Luna Lucero, Kathryn Pope

Columbia University Libraries, United States of America

Columbia University Libraries recently released a new version of Columbia’s institutional repository, Academic Commons. As we worked on the update, users asked for the ability to curate collections of related works. In response, the project team considered how we might implement some type of “collections” and we soon found ourselves questioning the nature of the repository. Did our assigned categories reflect the thinking of depositors and researchers? Could we be both a campus-wide repository and a showcase for the curated works of specific groups or projects?

These questions led us to redefine the repository. Rather than a website for distributing scholarly works we now see it as a body of digital documents and records describing them that can be indexed, searched and retrieved in multiple contexts. This conclusion represents a paradigm shift in our thinking about the repository.

This new vision of decentralized access is made possible through an API that serves records from Academic Commons based on user queries. Since the introduction of the API, we have worked with on-campus partners to envision how we can implement it to present curated selections of repository content on department and project websites.

Paper-P2C-431Luna Lucero_a.pdf


Two worlds meet: customising a general purpose repository for the specific needs of Life Sciences to achieve FAIRness for research data

Asztrik Bakos1, Daniela Digles1, Gerhard F. Ecker1, Raman Ganguly1, Isabelle Herbauts1, Tomasz Miksa2, Andreas Rauber2, Michael Viereck1

1University of Vienna, Austria; 2TU Wien

The existing digital ecosystem surrounding scholarly data publication is not yet addressing all requirements of the life sciences. Although for certain types of digital objects there are already well-established repositories, a considerable part of the research data from the life sciences never becomes accessible to the open world due to a lack of appropriate tools for its continuous use and preservation.

Here, we describe how we adapted an existing general purpose repository at the University of Vienna to the domain-specific needs of the life sciences. We complemented the existing functionality of the repository with an extended metadata schema and user interface to support the needs of, and methodologies used in, the life sciences, without affecting the usability of the main repository. We thus avoided setting up a new system, which, in turn, allowed us to reduce the required effort and minimise future maintenance costs. The larger vision is to create a repository that can be used across both the humanities and life sciences, and which will not only serve as a system for digital preservation but equally well as a platform to facilitate research by aiming to meet the FAIR data principles (Findable, Accessible, Interoperable, and Reusable).

Paper-P2C-377Bakos.pdf


The Fast and the FRDR

Alex Garnett1, Lee Wilson2, Clara Turp3, Julienne Pascoe4

1Simon Fraser University; 2Portage, ACENET; 3McGill University; 4Library and Archives Canada

The Federated Research Data Repository (FRDR), developed through a partnership between the Canadian Association of Research Libraries’ Portage initiative and Compute Canada, improves research data discovery in Canada by providing a single search portal for more than 100,000 metadata records indexed from over 40 Canadian governmental, institutional, and domain-specific data repositories. While this national discovery layer helps to de-silo Canadian research data, challenges in data discovery remain due to a lack of standardized metadata practices across repositories. In recognition of this challenge, a Portage working group, drawn from a national network of experts, has engaged in a project to map subject keywords to the Online Computer Library Center’s (OCLC) Faceted Application of Subject Terminology (FAST) using the open source OpenRefine software. This presentation will describe the working group’s project, provide a demonstration of preliminary results and examples of how this work improves data discovery, and discuss how this approach may be adopted by other repositories and metadata aggregators to support metadata standardization.
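
As an illustration of the mapping task, the sketch below queries OCLC's assignFAST autosuggest service for candidate FAST headings for a free-text keyword. The endpoint and parameter names follow its public documentation at the time of writing and should be verified; the working group itself performed this reconciliation interactively in OpenRefine rather than in code.

```python
import requests

# Hedged sketch: look up candidate FAST headings for a free-text keyword via the
# assignFAST autosuggest service. Verify endpoint and response fields before use.
ASSIGNFAST = "https://fast.oclc.org/searchfast/fastsuggest"

def suggest_fast(keyword: str, limit: int = 3) -> list:
    resp = requests.get(
        ASSIGNFAST,
        params={
            "query": keyword,
            "queryIndex": "suggestall",
            "queryReturn": "suggestall,idroot,auth",
            "suggest": "autoSubject",
            "rows": limit,
        },
        timeout=15,
    )
    resp.raise_for_status()
    docs = resp.json().get("response", {}).get("docs", [])
    # Return (authorized heading, FAST identifier) pairs.
    return [(d.get("auth"), d.get("idroot")) for d in docs]

print(suggest_fast("climate change adaptation"))
```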

Paper-P2C-403Turp_a.pptx
 
1:30pm - 3:30pm P2D: Repositories all around the world
Session Chair: Kazutsuna Yamaji, National Institute of Informatics, Japan
Lecture Hall J 
 

Efforts for Promoting Open Access Repositories in Palestine

Rawia Fawzi Awadallah, Iyad Mohammed Alagha

The Islamic University of Gaza, Palestinian Territories

Developing countries have begun to take strong steps in open access publishing through open repositories; however, they face significant challenges that may differ from those in developed countries, most notably the lack of resources at the institutional level, as well as the political and geographic constraints that reduce the incentive for adopting open repositories. Developing countries also have special needs at the user level, specifically with regard to the language used in open repository systems. The existence of solutions at the national or perhaps regional level will contribute to encouraging institutions in these countries to adopt open repositories, especially if these solutions remove most of the burden on institutions to develop, customize, and maintain them. In this presentation, we will introduce ROMOR, a project funded by the Erasmus+ programme, as one of the initiatives for building open institutional repositories in Palestine. We will discuss with the conference participants ROMOR's future vision for a national solution to support open access publishing.

Paper-P2D-225Mohammed Alagha_a.pptx


Enhancing and connecting repositories in Africa: NREN-repository collaboration

Omo Oaiya1, Iryna Kuchma2, Kathleen Shearer3

1WACREN, Nigeria; 2EIFL, International; 3COAR, International

The value of repositories increases significantly when they are connected through value added services. Research and education networks (RENs) provide connectivity for university communities and are also building so-called ‘above-the-net’ services such as secure network access and identity federation services (e.g. eduroam and eduGAIN). In Africa, the research and education networks are already playing a role to support open science and add value to African repositories and they are interested in increasing and expanding this role in collaboration with libraries. The LIBSENSE project is exploring collaborations between repositories and RENs in order to support the aims of both communities. As a first step, a workshop was organized by WACREN (West and Central Africa Research and Education Network) with support from COAR, OpenAIRE, EIFL and UbuntuNet Alliance in November 2018. The workshop brought together representatives from 17 African countries representing national repository networks and NRENs. This was the first in a series of meetings to develop a more cohesive strategy for strengthening and building repository networks in Africa through the adoption of value added services for repositories by NRENs.

Paper-P2D-391Kuchma,Shearer_a.pdf


The implementation of national research data repository in South Africa

Mbuyiselo Mqondisi Ndlovu

Council for Scientific and Industrial Research, South Africa

In South Africa, some research institutions do not have the IT capability to provide digital storage for their research data. As a result, there has been an increasing need to look at how all institutions can be supported by offering them a central repository system that they can use to store their research data so that it is secure and retrievable for future use. The provision of a competent and user-friendly application to help researchers interact with a centralized data repository is essential. An application capable of collection management, metadata extraction, metadata templates and metadata management has been identified. The main systems identified are Dell EMC Metalnx as well as the integrated Rule-Oriented Data System (iRODS). The Data Intensive Research Initiative of South Africa (DIRISA) is on the journey to implement a reliable, persistent and easily accessible resource to safely share data. This integrated solution is intended to be made available to SA's research community.

The objective of this presentation is to present the National Data Repository that has been developed by DIRISA for South African researchers. The presentation will also gather input from researchers on the work done and foster future collaborations.

Paper-P2D-339Ndlovu_a.ppt


Building NED: National edeposit for Australia

Barbara Lemon

National and State Libraries Australia, Australia

Australia is a big country. So big that our rail tracks were built to different measurements in different states. Similarly, our state and territory libraries have operated with separate legislation to collect materials published in their jurisdictions, with the National Library in Canberra responsible for collecting copies of all Australian publications.

That made some sense when publications were in print form. In 2016, however, Australia’s legal deposit provisions were finally extended to cover electronic materials.

Nine state and territory libraries agreed to a world-first collaboration to build one system that could provide for deposit, management, storage, preservation, discovery and delivery of published electronic material nationwide. A system that could cater to commercial, non-profit, academic, and community-based publishers alike – allowing them to deposit once or in bulk, nominate access conditions, have copies automatically transferred to relevant libraries, and track usage statistics. A system capable of capturing and preserving the digital documentary heritage of Australia for the future, while providing an excellent user experience today for publishers (easy deposit mechanism), libraries (more efficient workflows), and the public (broader access to Australian publications).

This presentation shares our approach to the significant challenges of satisfying nine sets of technical requirements and legislation, balancing open access principles and copyright law with content security and protection of commercial viability, in order to launch NED as an open repository for Australia in May 2019.

Paper-P2D-359Lemon_a.pptx
 
1:30pm - 3:30pm P2E: Developer track
Session Chair: Michael Giarlo, Stanford University
Lecture Hall M 
 

Automating OAIS compliant digital preservation using Archivematica and DSpace

Hrafn Malmquist

University of Edinburgh, United Kingdom

Developing a comprehensive system for permanent digital preservation is a daunting task that entails giving due consideration to both preservation and access. Such a workflow can result in a multi-step manual process, including steps for file normalisation and metadata creation/extraction, which can be relatively time consuming. Currently there is open source software available that supports this, but with limited in-built integration. Some content, notably archives of individuals, needs to be manually appraised by a digital archivist. Other content, such as documents created systematically as part of operating a large bureaucratic organisation, follows a set prescription, enabling a high degree of automation for archiving. At The University of Edinburgh we have been developing such an integrated workflow using open source software: Archivematica and DSpace, optionally integrating with ArchivesSpace as well. This has meant developing Archivematica in cooperation with its main developer, Artefactual, culminating in support for DSpace REST integration being included in the Archivematica 1.8 / Storage Service 0.13 release in the autumn of 2018.
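
A read-only sketch against the legacy DSpace REST API (as shipped with DSpace 5/6), listing collections and the items in one of them. The base URL is a placeholder; an ingest integration such as the one described would additionally authenticate and POST new items, and the details differ between DSpace versions.

```python
import requests

# Read-only illustration using the legacy DSpace REST API (DSpace 5/6).
# Base URL is a placeholder; identifier field is "uuid" in DSpace 6, "id" in DSpace 5.
DSPACE_REST = "https://repository.example.ac.uk/rest"

collections = requests.get(f"{DSPACE_REST}/collections", timeout=30).json()
for coll in collections[:5]:
    print(coll.get("uuid") or coll.get("id"), coll["name"])

first = collections[0]
coll_id = first.get("uuid") or first.get("id")
items = requests.get(f"{DSPACE_REST}/collections/{coll_id}/items", timeout=30).json()
print(f"{len(items)} items in collection {coll_id}")
```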

Developer Track-P2E-316Malmquist_a.pptx


Using DSpace as backend service - Workflow-centric repository development in practice

Ari Häyrinen

University of Jyväskylä, Finland

We moved from a DSpace-centric development model to a workflow-centric model in order to speed up our repository development. In the workflow-centric model the starting point is not the workflows that the *system* can offer but the workflows that are needed. The organisation defines efficient workflows for content management and then implements tools that support those workflows independently from the repository software. The result is a network of applications in which the repository software is just one node, accessed via its REST API.

The best part of the model is that it allows quick experimenting without any significant risks (financial or otherwise). The model also helps to separate tasks that are part of maintaining the core infrastructure from tasks that are content specific. This separation is essential so that we can be sure that the library's development resources are used in the best possible ways.

In this session I'll demonstrate workflows that are in use in the Jyväskylä University Digital Repository (JYX) and the tools that make them possible.

Developer Track-P2E-170Häyrinen_a.pdf


Longleaf: a repository-independent utility for applying digital preservation processes to files

Benjamin Pennell, Jason Casden

University of North Carolina at Chapel Hill University Libraries, United States of America

Our institution has developed longleaf, a new portable, command-line, repository-agnostic, rules-based tool for monitoring, replicating, and applying preservation processes to files. As our digital collections infrastructure has grown over the past 20 years, we’ve found it difficult to apply digital preservation plans consistently across system-defined content boundaries. We chose to develop longleaf in order to address several ongoing technological preservation challenges that we feel are also common at other institutions, including the uneven application of preservation practices across systems and rising computational cost as collections grow. We argue that the complexity of digital preservation technologies and the manner in which they are coupled with repository management systems contribute significantly to these problems. Longleaf reduces the interference of repository system constraints in what should be a needs-based digital preservation planning process by applying preservation processes at the file and storage level rather than through a repository system intermediary. Files managed in temporary storage or non-preservation asset management systems can now benefit from the same replication and verification processes as those ingested into preservation repositories.
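
A generic illustration of the file-level registration and fixity verification that a tool like longleaf automates; this is not longleaf's own configuration language or command set.

```python
import hashlib
import json
from pathlib import Path

# Generic sketch of file-level preservation checking (registration + fixity verification)
# of the kind longleaf automates. Not longleaf's actual configuration or CLI.
MANIFEST = Path("preservation_manifest.json")

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register(paths, manifest=MANIFEST):
    """Record a checksum for each file so later verification can detect change or loss."""
    entries = json.loads(manifest.read_text()) if manifest.exists() else {}
    for p in map(Path, paths):
        entries[str(p)] = sha256(p)
    manifest.write_text(json.dumps(entries, indent=2))

def verify(manifest=MANIFEST):
    entries = json.loads(manifest.read_text())
    for path, expected in entries.items():
        ok = Path(path).exists() and sha256(Path(path)) == expected
        print(f"{'OK' if ok else 'FAILED'}  {path}")

register(["masters/audio_0001.wav"])  # placeholder path
verify()
```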

Developer Track-P2E-409Pennell_a.pptx


A Multi-Tenancy Cloud-Native Digital Library Platform

Yinlin Chen, Jim Tuttle, William A. Ingram

Virginia Tech, United States of America

Virginia Tech Libraries presents our next generation digital library platform. Our design and implementation address the maintainability, sustainability, modularity, and scalability of a digital repository using a cloud-native architecture, in which the entire platform is deployed in a cloud environment - Amazon Web Services (AWS). Our next-gen digital library eschews the old model of multiple siloed systems and embraces a common, sustainable infrastructure. This approach facilitates a more maintainable way of managing and providing access to collections, allowing us to focus on content and user experience.

This platform is composed of a suite of microservices and cloud services. Microservices implemented as Lambda functions handle specific tasks and communicate with each other and other cloud services using lightweight asynchronous messaging. Cloud-native application development embodies the future of digital asset management and content delivery. Shared infrastructure throughout the stack and a clear demarcation between front- and back-end makes the platform more generalizable and supports independent replacement of components.

We share our experiences and lessons learned developing this digital library platform, including architecture design, microservice implementation, cloud integration, best practices, and practical strategies and directions for developing a Cloud-native repository.
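
A minimal AWS Lambda handler sketch showing the pattern described above: one microservice consumes a message, performs a single task, and passes the result on via another queue. The queue URL and message fields are placeholders.

```python
import json
import boto3

# Minimal Lambda handler illustrating lightweight asynchronous messaging between
# microservices: consume an SQS-triggered event, do one task, forward the result.
sqs = boto3.client("sqs")
NEXT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/derivative-requests"  # placeholder

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        message = json.loads(record["body"])
        item_id = message["item_id"]  # hypothetical message schema
        # ... perform this service's single task, e.g. extract or validate metadata ...
        sqs.send_message(
            QueueUrl=NEXT_QUEUE_URL,
            MessageBody=json.dumps({"item_id": item_id, "status": "metadata-extracted"}),
        )
    return {"processed": len(records)}
```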

Developer Track-P2E-461Tuttle_a.pptx


DSpace-Clustering via Puppet, HAProxy and CephFS

Bernd Nicklas, Paul Münch

Philipps-Universität Marburg, Germany

Running a DSpace repository in a single all-in-one installation is limited both in hardware resources and in maintainability. Therefore the Philipps-University Marburg implemented a DSpace clustering solution based on an in-house developed Puppet module, a slightly modified DSpace installation process, HAProxy and CephFS, which allows for customization, horizontal scaling, better performance and higher availability. The presentation features the shortcomings of all-in-one installations, the benefits and also the problems of clustering.

Developer Track-P2E-184Nicklas_a.pdf
 
3:30pm - 4:00pm Coffee break
 
4:00pm - 4:30pm Welcome by Katharina Fegebank (Second Mayor and Senator for Science, Research, and Equalities)
Chair: Stefan Thiemann, Head of Center for Sustainable Research Data Management, Universität Hamburg.
Lecture Hall A 
4:30pm - 5:30pm Minute Madness
Session Chair: Elizabeth Krznarich, ORCID
Session Chair: Iryna Kuchma, Stichting eIFL.net
Lecture Hall A 
5:30pm - 7:30pm Poster reception
 
 

Archiving and collecting Arctic datasets: Open Arctic Research Index

Tamer Abu-Alam, Karl Magnus Nilsen, Obiajulu Odu, Stein Høydalsvik

The UiT - The Arctic University of Norway

The number of digital repositories containing publications and datasets on the Arctic region is growing enormously. Users want information relevant to their query with a minimum of delay, yet at present scholars are compelled to search individual repositories one by one to find the documents they need.

Open Arctic Research Index (Open ARI), a planned service at the UiT - The Arctic University of Norway, aims to collect and index all the openly available Arctic-related publications and datasets in a single open access metadata index. By providing a simple search dialog box to the index, users can search all these repositories and archives in a single operation.

The project investigates how such a service can support researchers by making results from Arctic research more visible and better retrievable, based on a standardized, interdisciplinary metadata set. The project started by clarifying the need for a new technical solution to collect all the published material, using algorithms that allow the best way of filtering relevant records. We have identified 113 possible national and international collaborators who can feed Open ARI with content. The team will analyze the opportunities for success and the challenges in order to plan a full-scale management model.

Posters--319Abu-Alam.docx


An Agricultural Research e-Seeker to find, explore and visualize open repository resources

Peter Ballantyne1, Moayad Al-Najdawi3, Enrico Bonaiuti2, Valerio Graziano2, Alan Orth1, Jane Poole1, Mohammed Salem3, Abenet Yabowork4

1International Livestock Research Institute, Kenya; 2International Center for Agricultural Research in the Dry Areas, Egypt; 3CodeObia, Jordan; 4International Livestock Research Institute, Ethiopia

Since 2010, several CGIAR centres, programs and partners have joined forces to enhance and open up access to their knowledge, information and data products through shared open repositories – mainly using DSpace, Dataverse and CKAN. These repositories now contain tens of thousands of items from many organizations and on diverse topics important to developing countries. In 2018, driven by an aspiration to offer more value from all this content, several of these partners invested in an aggregation tool – the agricultural research e-seeker – to facilitate integrated insights and intelligence and provide new ways to access the content across these different platforms. The poster describes and presents the AReS tool (which will be released to repository communities in the first half of 2019), illustrating how it enhances content discovery for users, supports institutional insights, visualizes content around different metadata filters, and generates ‘snapshots’ of diverse knowledge products for different users and use cases. The poster will explain the technical, organizational and knowledge management approaches used to build this tool.

Posters--477Yabowork.pdf


Finding Citations on Lume Institutional Repository

André Rolim Behr, Manuela Klanovicz Ferreira, Zaida Horowitz, Janise Silva Borges da Costa, Carla Metzler Saatkamp, Cleusa Pavan, Caterina Groposo Pavão

Federal University of Rio Grande do Sul, Brazil

This ongoing work aims to provide citation information for users of DSpace repositories. Its scope is Lume's journal article community, which has knowledge areas as collections, and the citations within this community. This community can show the increase in citations of articles published in scientific journals and available in institutional open access repositories. The Extract, Transform, Load (ETL) approach consists of three steps: (1) data collection; (2) citation discovery; and (3) loading of the processed data. From the process one can, for example, explore a specific period, area of knowledge, or group of authors. It can provide relevant information to users about a particular type of document or author, as well as how, when and where the citation occurred in documents. The disclosure of these results aims to encourage the deposit of works in open access IRs and their use.
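
A minimal sketch of such an ETL pipeline is shown below; it is illustrative only and assumes Crossref as the citation source, whereas the actual Lume workflow may use other sources and storage.

# Illustrative ETL sketch in the spirit of the three steps described
# above; not the Lume implementation. Citation counts come from the
# public Crossref API and are written to a CSV file.
import csv
import requests

def extract(dois):
    """Step 1: data collection - the DOIs of items in the community."""
    return list(dois)

def discover_citations(doi):
    """Step 2: citation discovery via the public Crossref API."""
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    r.raise_for_status()
    return r.json()["message"].get("is-referenced-by-count", 0)

def load(rows, path="citations.csv"):
    """Step 3: load the processed data for later analysis and display."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["doi", "citations"])
        writer.writerows(rows)

if __name__ == "__main__":
    sample_dois = ["10.1371/journal.pone.0000308"]  # placeholder DOIs
    load([(doi, discover_citations(doi)) for doi in extract(sample_dois)])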

Posters--375Behr.pdf


PHAIDRA: An open approach for the repository infrastructure from the University of Vienna

Susanne Blumesberger, Raman Ganguly

University of Vienna, Austria

PHAIDRA is a repository infrastructure developed and run by the University of Vienna. The primary driver for this infrastructure is openness beyond open access and open data. The data management system is open to every member of the University, including students. We invite researchers to work with us on jointly improving PHAIDRA, and we are designing domain-specific interfaces together in publicly funded projects.

Posters--515Blumesberger.pdf


DeepGreen – Open Access Transformation in Practice

Julia Boltze1, Eike Wannick2

1Zuse Institute Berlin, Germany; 2Helmholtz Open Science Coordination Office at the GFZ German Research Centre for Geosciences, Potsdam, Germany

Since 2011, so-called Alliance licenses funded by the German Research Foundation (DFG) have included an open access component which allows authors to make their articles publicly available through their institutional or subject-based repositories after a shortened embargo period. If an institution negotiated the license, it acts as a representative of the author and therefore with equal rights. However, very few institutions use this opportunity due to the high effort associated with manually researching the articles in question and adding them to the repositories.

DeepGreen aims to change that. Funded by the DFG, DeepGreen is developing an automated workflow to transfer scholarly publications from publishers to open access repositories. During a first funding period (2016-2017) a technical solution for a data router was developed: publishers deposit data files (metadata and full text) and DeepGreen matches them to authorized repositories using the affiliations included in the publishers' metadata. During a second funding phase, which started in August 2018, other licensing models will be examined, and in summer 2019 DeepGreen will see a beta launch with a selection of publishers and repositories.

DeepGreen increases the percentage of open access publications, which makes it an active player in the field of open access transformation and open science.

Posters--156Boltze.pdf


Bridging the gap between Repositories and Homepages - Providing data from DSpace-CRIS with OData

Jonathan Boß, Cornelius Matějka

University of Bamberg, Germany

Universities use a multitude of technical systems to support researchers and students. As a result, there are new challenges concerning administration and system integration. The University of Bamberg has the requirement that research data from our repository (DSpace-CRIS) should be accessible through a web service in order to embed the data into a homepage running TYPO3. DSpace-CRIS already features a REST API which supports access to DSpace's core data (publications) but is not conceived to provide data of CRIS entities (projects, research data). Moreover, in a landscape with many systems it is preferable to access data through the same standard across systems; at the University of Bamberg this standard is the Open Data Protocol (OData). The goal of OData is to establish a consistent standard for realizing a RESTful API, and in 2017 OData was approved as a standard for open data exchange by OASIS. In our approach the OData API makes direct use of the search platform underlying DSpace-CRIS (Solr) to access both DSpace's core data and the data of CRIS entities by implementing a unified query language. Providing data from several systems with the same query language simplifies integration with other systems and reduces the amount of maintenance.
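
The sketch below illustrates the general idea of answering a simple OData-style filter directly from Solr; the Solr core URL and field names are hypothetical and do not reflect the University of Bamberg's actual schema or API.

# Illustrative sketch only: mapping a tiny subset of OData-style
# filters onto the underlying Solr search platform. Core URL and
# field names are invented placeholders.
import requests

SOLR_SEARCH = "http://localhost:8983/solr/search/select"  # hypothetical core

def odata_to_solr(filter_expr):
    """Translate a minimal OData $filter of the form "field eq 'value'"
    into a Solr query string."""
    field, op, value = filter_expr.split(" ", 2)
    if op != "eq":
        raise ValueError("only 'eq' filters are supported in this sketch")
    value = value.strip("'")
    return f'{field}:"{value}"'

def query(filter_expr, rows=10):
    params = {"q": odata_to_solr(filter_expr), "rows": rows, "wt": "json"}
    r = requests.get(SOLR_SEARCH, params=params, timeout=30)
    r.raise_for_status()
    return r.json()["response"]["docs"]

# e.g. an OData request such as /Projects?$filter=crisproject.title eq 'OData'
# could be served by: query("crisproject.title eq 'OData'")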

Posters--343Boß.pdf


Preserving Social Media posts: a case study with “In Her Shoes”

Kathryn Cassidy, Aileen O'Carroll, Kevin Long

Digital Repository of Ireland

On 25th May 2018, the Irish people voted to remove the controversial "8th Amendment" to the Irish Constitution, opening the way for the introduction of legislation governing the termination of pregnancy in the State. The Digital Repository of Ireland was asked to archive the materials from a grass-roots, social-media-based group campaigning in the run-up to the referendum.

The poster will present some of the difficulties encountered when attempting to archive these social media posts and describe the approach taken by the DRI to overcome them. It will also show the open-source facebook-to-dc tool developed by DRI which may be used by others to generate Dublin Core metadata and textual asset files from a Facebook Group.
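
As a rough illustration of the conversion such a tool performs, the sketch below maps posts from a hypothetical Facebook Group export file to Dublin Core XML; the real facebook-to-dc tool's input format and field mapping may differ.

# Sketch of the general idea behind converting Facebook posts to
# Dublin Core; not the actual facebook-to-dc implementation. The input
# is assumed to be a JSON export containing a list of posts, each with
# a timestamp and message text.
import json
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def post_to_dc(post, creator="In Her Shoes"):
    ET.register_namespace("dc", DC_NS)
    record = ET.Element("record")
    for tag, value in [
        ("title", post.get("message", "")[:60]),
        ("description", post.get("message", "")),
        ("date", post.get("created_time", "")),
        ("creator", creator),
        ("type", "Text"),
    ]:
        element = ET.SubElement(record, f"{{{DC_NS}}}{tag}")
        element.text = value
    return ET.tostring(record, encoding="unicode")

if __name__ == "__main__":
    with open("group_export.json") as f:  # placeholder export file
        posts = json.load(f).get("posts", [])
    for post in posts:
        print(post_to_dc(post))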

Posters--381Cassidy.pdf


DOISerbia Repository – how transition country raised visibility of scientific journals

Boris Đenadić, Tatjana Timotijević, Katarina Perić, Aleksandra Kužet

National Library of Serbia, Serbia

The DOISerbia repository was implemented in 2005 by the Department of Scientific Information at the National Library of Serbia and was one of the first OA repositories developed in Serbia. The aim of the service was to improve and promote scientific publishing in Serbia. The system includes 66 scientific journals from Serbia, with an archive from 2002 until today. Every article, besides its main bibliographic data, is equipped with a DOI. For every journal there are data about the web address, coverage, aims and scope, publisher, editorial board, frequency, impact factor (if any), and further information such as the journal's editorial policy and instructions for authors. In this way a permanent link to the full text of every article was established, and its visibility was raised both nationally and internationally. The connection between DOI metadata and articles is maintained via CrossRef. Since 2005 more than 40,000 articles have been added to the system. The main advantage of the system is that the metadata are standardized in Dublin Core and exposed via the OAI-PMH protocol, which opened the door for harvesting of our data by large international OA services such as TEL, Europeana and DOAJ.
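
The sketch below shows the kind of OAI-PMH harvest (oai_dc metadata) that such aggregators can perform; the base URL is a placeholder, not the actual DOISerbia endpoint.

# Minimal OAI-PMH harvesting sketch (oai_dc metadata), illustrating how
# aggregators can pick up Dublin Core records from a repository. The
# base URL below is a placeholder endpoint.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://example.org/oai"  # placeholder OAI-PMH endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_titles(metadata_prefix="oai_dc"):
    r = requests.get(BASE_URL,
                     params={"verb": "ListRecords",
                             "metadataPrefix": metadata_prefix},
                     timeout=30)
    r.raise_for_status()
    root = ET.fromstring(r.content)
    for record in root.iter(f"{OAI}record"):
        title = record.find(f".//{DC}title")
        if title is not None:
            yield title.text

if __name__ == "__main__":
    for title in list_titles():
        print(title)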

Posters--291Perić,Kužet_a.pdf


Building a single repository to meet all use cases: a collaboration between institution, researchers and supplier

Jenny Evans1, Tom Renner2, Nina Watts1

1University of Westminster, United Kingdom; 2Haplo, United Kingdom

Repositories have historically focused on a single use case, primarily the capture of traditional (text-based) open access publications, requiring separate solutions for different use cases (e.g. research data). This made capturing the wide variety of research outputs challenging at the University of Westminster, which engages in practice-based arts research (encompassing a range of disciplines, from fine and performing arts, to architecture and design and whose outputs tend to be in a non-text format) alongside traditional research.

Building on a history of collaboration, Haplo, the University and its research community have built a single, open source repository meeting multiple use cases, including text-based and non-text-based outputs, portfolios and research data. This was made possible through the flexible technical architecture of the Haplo platform, whose underlying technology is based on semantic web principles and meets COAR's vision for next-generation repositories.

Improvements to the repository now enable better capture and display of research outputs across disciplines. Highlights include the development of dynamic portfolios, improved support for non-text based outputs and ongoing engagement with practice-based arts researchers to understand their needs, build workflows, review metadata and build, test and implement a transformed repository.

Posters--512Evans_a.pdf


OASIS: A sustainable digital repository service owned by academics

Yankui Feng, Sebastian Pałucha

University of York, United Kingdom

The Technology Team at the University of York Library and Archives aspires to support research-funded digital repository development. This poster will explain how, in close collaboration with academics, the Technology Team developed the Open Accessible Summaries In Language Studies (OASIS) repository service. The OASIS repository service is governed by academics and is open for use by the whole research community in language studies. The OASIS initiative is supported by several peer-reviewed journals in language studies.

The poster will illustrate how we used an open and collaborative approach to develop the OASIS digital repository based on the Hyrax/Samvera community owned repository solution. We will include visual screenshots presenting our customisation work on the Hyrax default user interface in order to:

- Accommodate rich metadata in language studies;

- Create an intuitive deposit workflow with an administrative approval step;

- Provide a faceted search interface allowing users to find relevant summaries.

We will also present our digital library and archives emerging infrastructure and our sustainability plans for hosting the OASIS service alongside our future digital library solution.

Posters--311Feng.pdf


ROSI – Open Metrics for Open Repositories

Grischa Fraumann, Svantje Lilienthal, Christian Hauschke, Lucia Sohmen

German National Library of Science and Technology (TIB), Germany

Researchers rely more and more on online tools to conduct their research. In theory, the growing need for comprehensive information about online research outputs could be easily satisfied. Yet the majority of scientometric sources are not completely open, which results in opaque data and limits impact assessments. To illustrate, some proprietary databases that generate scientometric indicators do not disclose the raw data to users beyond traditional subscription-based models. At the same time, researchers' needs concerning scientometric indicators are not addressed adequately by these existing products.

In contrast, the project ROSI (Reference Implementation for Open Scientific Indicators) focuses only on open data sources. A reference implementation to visualise related metrics from open data sources, such as open access repositories, will be developed. Throughout the project, the needs of researchers concerning scientometric indicators are gathered in an iterative process, and researchers will be invited to evaluate the project outcomes. The reference implementation will be documented in a user handbook and will be reusable in other contexts, such as research information systems, repositories and publishing software.

Posters--233Sohmen.pdf


Making Local Knowledge Visible: An IR in Kosovo

Michele Gibney

University of the Pacific, United States of America

In 2017, a joint international effort commenced under the direction of the President of the University for Business and Technology (UBT) in Kosovo, with colleagues from Linnaeus University (Sweden) and the University of the Pacific (USA), to define, create and populate a Knowledge Center for UBT which would include an institutional repository (IR). Enlivened by discussion and feedback from the intended recipients, the needs and goals of a UBT IR were identified. Of course, creating and populating an IR is a lengthy process with many potential problems and varied approaches. Discussion of best practices was undertaken early, and currently the UBT Knowledge Center (https://knowledgecenter.ubt-uni.net/) has 1,495 records uploaded.

The point of this presentation is not only to discuss the process by which a Kosovo IR began but also the impact of making local knowledge visible to current and future UBT students, as well as regionally and internationally. As part of the author’s doctoral research, a study of quantitative and qualitative impact is currently underway on the UBT Knowledge Center. Results from surveying student and faculty at UBT will be shared as well as usage statistics both in country and internationally, drawing conclusions on impact and reach of the project endeavor.

Posters--509Gibney.pdf


Virtual Reality Record Metadata

Michele Gibney

University of the Pacific, United States of America

In 2018, the University of the Pacific Libraries worked with a faculty member in the School of Engineering and Computer Science to upload a class project involving multi-file records to the institutional repository. One of the file types was an .EXE executable Virtual Reality (VR) application. This was a first at the institution and in my experience with institutional repositories; I was stymied on how to describe and provide metadata for the VR piece – to both human and machine audiences. Attempting to read up on best practices and query the community didn’t result in much concrete assistance and we muddled through as best we could. Since the first project occurred, I have continued to research standards and best practices regarding hosting and describing virtual reality, augmented reality, etc., file types in an open repository. We will also be facing this problem again as the faculty member will repeat the course and the project and wants to continue using the repository as a host. Interested? Come learn about what I did and what I’ve discovered along the way.

Posters--492Gibney.pdf


An assessment of the status of Open Access Policies and Repositories Development in Kenya

Milcah Wawira Gikunju, Felix Rop

University of Nairobi, Kenya

Academic institutions worldwide have embraced institutional repositories as a means to showcase their research globally. In Kenya, the majority of academic institutions with effective repositories are established universities. Little is known of institutional repositories of newly established universities in Kenya. The study assesses the status of open access policy and repository development in Kenyan universities that were established between 2016 and 2017.

The researchers collected data from professional library staff in three newly established universities using questionnaires.

In the findings, all the university libraries investigated had functional institutional repositories. The libraries had developed submission and metadata policies. The staff charged with implementing institutional repositories had relevant skills, understood the scholarly communication cycle, and were responsible for recruiting institutional repository content. The challenges faced in implementing institutional repositories included low awareness of the repositories among intended users, reluctance of researchers to submit their research to the repositories, lack of resources, inadequate staffing, and gaps in the submission policy.

The findings of this study buttress the place of institutional repositories as a platform to share research literature and open access to scholarly materials globally, even for newly established universities in developing countries.

Posters--504Gikunju.docx


A native iPad app for the DSpace 7 REST API

Keith Gilbertson

Virginia Tech, United States of America

This is a native iPad app for DSpace 7 repositories, built using the new version of the REST API. The app also runs on iPhones, but this is not recommended for most use cases because of the difficulty inherent in reading formatted academic articles on small phone screens. The app allows repository users to browse, search, and download content, and repository administrators to submit and edit repository content. Because of the connectivity challenges inherent with mobile devices, the app was designed so that some of the work can be done while users are offline. This app is unofficial and is intended only as a supplement to the spiffy, official and feature-complete Angular UI which will be preferred by most users. This app is bound to be a niche product, because most users find our repositories by way of search hits from Google that link directly to articles, but I believe that some users will enjoy the intimacy and speed of a native mobile app for interacting with repositories, and I am eager to present this poster.
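
The sketch below shows the kind of REST call such a client makes: a keyword search against the DSpace 7 discovery endpoint. The base URL is a placeholder, and the exact response nesting may vary between installations, so it is walked defensively.

# Illustrative search against the DSpace 7 REST API; server URL is a
# placeholder and the HAL response structure is handled defensively.
import requests

BASE = "https://demo.dspace.org/server/api"  # placeholder server URL

def search(query, size=5):
    r = requests.get(f"{BASE}/discover/search/objects",
                     params={"query": query, "size": size}, timeout=30)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    result = search("climate")
    objects = (result.get("_embedded", {})
                     .get("searchResult", {})
                     .get("_embedded", {})
                     .get("objects", []))
    for obj in objects:
        item = obj.get("_embedded", {}).get("indexableObject", {})
        print(item.get("name"))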

Posters--470Gilbertson.pdf


Hamburg Open Science at TUHH: An All-in-one repository service based on DSpace CRIS

Oliver Goldschmidt, Andreas Pohnke, Beate Rajski, Gunnar Weidt

TU Hamburg, Germany

The Hamburg Open Science program aims to develop an open science infrastructure for universities in Hamburg. This includes open access repositories, research data repositories and research information systems. Hamburg University of Technology, a small institution with limited programming capacity, opted for an integrated approach instead of three different systems. The current open access repository (based on DSpace CRIS and fully ORCID-enabled) is being extended with a research data repository and is intended to include all institutional researchers, organizational units, projects and publication information, building a research information system and a university bibliography.

This poster illustrates the components of our extended repository and shows how DSpace CRIS, an open source software with a vibrant community, helped us cover the two new components of the repository. It will focus especially on the entities.

Posters--244Goldschmidt,Rajski_a.pdf
Posters--244Goldschmidt,Rajski_b.png


Usage Statistics Do Count

Chiara Bigarella, Jose Benito Gonzalez Lopez, Alexandros Ioannidis, Lars Nielsen

CERN, Switzerland

Make Data Count observes that "Sharing data is time-consuming and researchers need incentives for undertaking the extra work. Metrics for data will provide feedback on data usage, views, and impact that will help encourage researchers to share their data". At Zenodo, we have been working on the same principles and have launched Zenodo Usage Statistics, a feature that exposes fine-grained, up-to-date usage statistics in an easily accessible manner. Users can now find the number of views and downloads on every record page, and they can also sort search results by most viewed. These statistics are "versioning aware": by default, we roll up usage statistics for all versions of a record, but it is also possible to see detailed usage statistics for a specific version.

We strongly believe in users' right to privacy; thus we have spent a lot of time designing our system to ensure all tracking is completely anonymized. Our usage statistics are also tracked according to industry standards, namely the COUNTER Code of Practice and the Code of Practice for Research Data Usage Metrics, which allows our users to compare metrics from Zenodo with those of other compliant repositories.
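
For illustration, the sketch below reads the statistics block that Zenodo exposes on a record through its public REST API; the record ID is a placeholder and the exact field names may evolve over time.

# Reading the usage statistics exposed on a Zenodo record via the
# public REST API. Record ID is a placeholder; field names in the
# "stats" block are as observed and may change.
import requests

def record_stats(record_id):
    r = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
    r.raise_for_status()
    return r.json().get("stats", {})

if __name__ == "__main__":
    stats = record_stats(1234567)  # placeholder record ID
    # Per-version counts vs. roll-ups across all versions of the record.
    print("views (this version):", stats.get("views"))
    print("views (all versions):", stats.get("version_views"))
    print("downloads (all versions):", stats.get("version_downloads"))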

Posters--235Bigarella.pdf


DOI Minter: A Service for Flexible Generation of DataCite DOIs in Connection with a DSpace Repository

Mohyden Habbal, Daryl Grenz

KAUST, Saudi Arabia

As institutional repositories seek to better support the release of unique research outputs (as opposed to open access versions of separately published materials), they increasingly turn to DataCite DOIs as the most appropriate form of persistent identifier. While DataCite DOI registration support is built into many repository platforms (reference 2), the native configuration options may be limited. For institutions like ours, which opt to use commercial repository hosting services, contracting customizations to features such as the DataCite integration within the existing software also complicates future upgrade or migration paths for the platform as a whole. In addition, locating the DOI minting service of an institution within a single platform or database may be inappropriate when the institution has several different systems that would benefit from the use of DOIs (reference 1). Due to these factors we developed a local DOI Minter service connected to our hosted DSpace repository via its REST API. This has given us greater immediate flexibility and also better positions us for future expansion of our DOI-related services.
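
The sketch below illustrates the kind of call a stand-alone minter can make to the DataCite REST API; it is not KAUST's actual implementation, and the credentials, prefix, metadata and URL are placeholders (the DataCite test endpoint is used so nothing real is registered).

# Illustrative draft-DOI registration via the DataCite REST API.
# Endpoint is the DataCite test system; credentials and metadata are
# placeholders only.
import requests

API = "https://api.test.datacite.org/dois"   # DataCite test endpoint
AUTH = ("EXAMPLE.REPOSITORY", "password")    # placeholder credentials

def mint_draft_doi(prefix, title, creator, year):
    payload = {
        "data": {
            "type": "dois",
            "attributes": {
                "prefix": prefix,
                "titles": [{"title": title}],
                "creators": [{"name": creator}],
                "publicationYear": year,
                "types": {"resourceTypeGeneral": "Dataset"},
                "url": "https://repository.example.org/handle/123456789/1",
            },
        }
    }
    r = requests.post(API, json=payload, auth=AUTH,
                      headers={"Content-Type": "application/vnd.api+json"},
                      timeout=30)
    r.raise_for_status()
    return r.json()["data"]["id"]

# Example call (placeholder prefix and metadata):
# mint_draft_doi("10.12345", "Example dataset", "Doe, Jane", 2019)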

Posters--499Habbal.pdf


Introducing the Fedora User Guide

Juliet L. Hardesty1, Anna Goslen2, Jennifer B. Young3, Ruth Kitchin Tillman4, Andrew Woods5

1Indiana University, United States of America; 2University of North Carolina Chapel Hill, United States of America; 3Northwestern University, United States of America; 4Penn State University, United States of America; 5Duraspace, United States of America

Fedora Digital Repository has a strong history of documentation for system administrators hosting and managing Fedora as software and developers using the repository as a backend for collection access and management. [1] However, the same level of documentation has not been as readily available for users who manage collections and perform metadata work in Fedora. Andrew Woods from Duraspace approached a group of metadata librarians at institutions using Fedora to start a new section of documentation called the Fedora User Guide. The new User Guide currently contains information about metadata recommendations and best practices within Fedora from different user communities. It also provides various examples of content modeling to show how different ways of organizing content are implemented in Fedora using the current capabilities of the digital repository software.

Posters--160Hardesty.pdf


Long-term preservation: Integrating DSpace and Rosetta at ETH Zurich

Barbara Hirschmann, Andreas la Roi, Greg Scowen

ETH Zurich, Switzerland

Institutional repositories are more and more intended to be dynamic platforms that not only expose the outcome of scientific research, but also reflect the entire lifecycle of such production, and spin a complex web between the various content types. The challenge of long-term preservation becomes bigger, as it needs to take into consideration the moving target that this dynamic structure and lifecycle represent.

At ETH Zurich, our DSpace based repository Research Collection is one typical example of such modern and complex repositories. It contains content that is unique to our institution, such as research data, doctoral theses or technical reports. Without a preservation system, all that research output wouldn’t be secured for the long term.

We therefore decided to integrate the Research Collection with our existing long-term preservation tool, Rosetta (branded as the "ETH Data Archive"). This integration was a good way to safeguard our unique content without imposing on our users the burden of having to worry about long-term preservation themselves (e.g. they don't have to submit twice, and only have to work with a publication platform that has been designed to focus on their end-user needs). The poster illustrates the integrated workflow.

Posters--344la Roi_a.pdf


User stories and Front Ends for a Research and Cultural Heritage Data Repository

Bolette Ammitzbøll Jurik

Royal Danish Library, Denmark

This poster presents the Royal Danish Library Open Access Repository (LOAR) [1] based on DSpace [4] along with the MeLOAR dedicated front end for specific LOAR collections. MeLOAR offers keyword search and location search, shows the search results with facets, maps and highlights, and shows the highlights inside the pdfs as well.

From the idea stage, the repository was a dual-purpose repository for both FAIR [3] research data from Danish universities and cultural heritage data from the Royal Library. From the start it was also an open data repository, advocating the open data agenda.

An important lesson was that different users mean different user stories, both when users are national library curators vs. university researchers and when users are researchers from different disciplines with different types of data. Sometimes the solution is a dedicated front end which gives a better user experience.

Posters--309Jurik_a.pdf


Jisc Open Research Hub – Supporting Open Scholarship

John Kaye, Dom Fripp, Tom Davey

Jisc, United Kingdom

Jisc’s Open Research Hub integrates a number of repository, preservation, reporting and storage platforms as a one stop shop for researchers and research managers. The service offers both open source and proprietary systems and allows data and metadata to be shared openly if required. The platform has been developed through years-long consultation with the UK HE research sector and sector bodies, along with contributions from both in-house Jisc and third-party experts.

The need for such a solution has arisen from the sector’s desires to achieve several, shared aims, including: greater collaboration; tackling the reproducibility crisis; enabling better research; and meeting funder requirements.

Jisc’s custom-built repository—the Open Research Repository—is part of the Jisc Open Research Hub. It’s built upon an extensive data model and rich messaging layer, providing users with a clean, simple, and easy-to-learn interface for the deposit, approval, and discovery of a range of outputs.

Jisc’s position in the UK higher education / research sector, as well as the scale of the service provides us with many domain-specific insights to share with OR2019 delegates, ranging from the broad methods mentioned above, down to individual design decisions informed by our research and domain expertise.

Posters--488Kaye.docx


Highly Automated Import of Metadata into an Institutional Repository: A PHP Tool

Laura Amalia Konstantaki, Federico Cantini, Lothar Nunnenmacher

Lib4RI - Library for the Research Institutes within the ETH Domain: Eawag, Empa, PSI & WSL, Dübendorf, Switzerland

Nowadays researchers are often requested to submit their publications to their local institutional repository (IR). This is a rather time-consuming administrative task, which they would readily avoid. Besides improving the publication-submission procedure, the need to import publications directly from other databases (DBs), instead of asking researchers to submit their publications manually, is paramount. Repositories have dealt with this problem in the past (e.g., Roy and Gray 2018; Li 2016). Nevertheless, to our knowledge there is no free tool available that allows automatic import from well-known DBs such as Scopus or Web of Science and is based on PHP, a programming language widely used by repository developers. Our IR is based on Drupal/Islandora; however, we aim to provide code that can be used by as many systems as possible. Therefore, our approach is based on two levels: (a) a PHP library to abstract the interaction with the publication DBs, providing a unified interface that can be used by any PHP application or script; (b) a Drupal module using the PHP library to implement the automatic import of metadata into Islandora repositories.

Posters--214Konstantaki.docx


Adapting repositories to OpenAire 4 Guidelines: Huelva repository, a case study

Jose Carlos Morillo Moreno1, Emilio Lorenzo Gil2, Eva Braña Ferreiro2

1Universidad de Huelva; 2Arvo Consultores y Tecnología

The approval in late November 2018 of the OpenAIRE Guidelines for Literature Repository Managers v4 marks a further step in the evolution of harvesting requirements. The incorporation (and exposure) of ORCID identifiers, together with new sets of typology, rights and versioning metadata, poses a challenge to digital repositories. Combined with the fact that it may be necessary to maintain compatibility with aggregators using other metadata schemas or systems (OAI_DC, OpenAIRE v3, DRIVER, etc.) in the long term, we conclude that it may be difficult for repositories to adapt.

We present the work carried out at the Arias Montano Repository of the University of Huelva, based on DSpace v6. The repository complies with, among others, the OpenAIRE harvesting requirements for adaptation to the new OpenAIRE 4 application profile. Specifically, we have placed particular emphasis on the use of all the ORCID author identifier information available to the repository (through the authority control functions of DSpace), together with the coexistence of COAR vocabularies for the descriptive metadata of resources with vocabularies from previous specifications that continue to be required in the OAI interface.

Posters--401Morillo Moreno.pdf


Developing an Open Repository into a Full-Service Platform for Open Publishing - The Case of 25 Universities of Applied Sciences in Finland

Minna Marjamaa, Tiina Tolonen

AMKIT Consortium, Finland

The Theseus poster presents a full publication service platform for 25 Universities of Applied Sciences (UAS), integrating an open repository with a current research information system (CRIS) and national digital long-term preservation (DP) services.

By the year 2020 Finnish universities are required to publish 100% open access. To reach this goal, Theseus will launch a CRIS to accompany the self-archiving of research papers into the Repository in the same workflow with the reporting to the Ministry of Education. This will save resources, increase reporting and presumably lead to growing numbers of self-archiving as well.

As a part of Theseus reform, the UAS theses will be carried forward to the DP services. In addition, a new way to manage the publicity of uploads will be developed. The connection to the DP services will meet the requirements of an operative e-archive described by the National Archives System in Finland and provide a transfer to long-term preservation by the National Library of Finland.

The model saves resources and will reorganize publication, publicity issues, archiving and reporting in the UASs of Finland, as well as effectively promoting open publishing in Finnish UASs. The extended service will be launched early in 2019.

Posters--268Marjamaa.pdf


Research Data Support at the Royal Veterinary College: A Case Study

Michael Murphy

Royal Veterinary College, United Kingdom

In Autumn, 2018, I began working to set up a research data repository at the Royal Veterinary College, University of London, working primarily with Dr. Dan O’Neill, an epidemiologist. The resulting efforts offer some instructive insight into the challenges and opportunities presented for librarians and research support staff by open repositories, and specifically the expectation, increasingly held by researchers, that technical and administrative support in the matter of open data be available to them via their institution’s library and/or IT services. The poster documents, visually and descriptively, the different parties and systems involved in the effort to support what remains a relatively nascent expectation among researchers. This expectation, in turn, has been generated by external (expectations of peer reviewers, grant funder policy), institutional (research data policies specific to their institution), and professional (effective techniques for collaboration, intellectual property concerns related to use of commercial repositories) responsibilities. This poster looks at the different concepts and stakeholders represented in a local project in an attempt to draw out some widely applicable truths and recommendations.

Posters--260Murphy.docx


Automatic data enrichment: merging metadata from several sources

Frank Lützenkirchen1, Kathleen Neumann2

1Universitätsbibliothek Duisburg-Essen; 2Verbundzentrale des GBV (VZG)

Today it is possible to uniquely identify authors and publications through common and well-established ID systems such as ORCID, ISBN or DOI. Databases like CrossRef, DataCite, PubMed, IEEE or Scopus share their data using APIs that are often freely accessible. This opens completely new ways to automatically retrieve, merge, link, and enrich publication data.

We would like to introduce an improved mechanism for importing and enriching bibliographic data, the so-called "Enrichment Resolver". Incompleteness and ambiguity of publication metadata are common. Enriching data from external sources helps us to create the best possible version of every single metadata record. Looking at the author entries, our goal is to get the most complete version, including person identifiers like ORCID, Scopus ID and others. Additionally, extra services like DOAJ or OADOI are used to get further information, such as the open access status of the publication, which can also be added to the imported metadata.

We would like to present how a "self-filling" publication repository or institutional bibliography could work: starting from an institutional identifier, we can get a list of institution members linked to their ORCID profiles, find publications there, and import and enrich that publication data from various sources.
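
A minimal sketch of this enrichment idea is shown below (not the actual Enrichment Resolver): starting from a DOI, bibliographic data from Crossref is merged with the open access status reported by Unpaywall (OADOI); the contact e-mail is a placeholder required by the Unpaywall API.

# Sketch of metadata enrichment from two open sources, merged into one
# record. Contact e-mail is a placeholder; error handling is omitted.
import requests

EMAIL = "repository@example.org"  # placeholder contact address

def enrich(doi):
    record = {"doi": doi}
    crossref = requests.get(f"https://api.crossref.org/works/{doi}",
                            timeout=30).json()["message"]
    record["title"] = (crossref.get("title") or [""])[0]
    record["authors"] = [
        {"family": a.get("family"), "given": a.get("given"),
         "orcid": a.get("ORCID")}            # keep identifiers where present
        for a in crossref.get("author", [])
    ]
    unpaywall = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                             params={"email": EMAIL}, timeout=30).json()
    record["is_open_access"] = unpaywall.get("is_oa")
    return record

# Example call: enrich("10.1038/nature12373")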

Posters--511Lützenkirchen.pdf


Search Engine Optimization as a motivational factor for researchers to submit their material to the institutional open repository: a case study of Uganda Christian University

Fredrick Odongo

Uganda Christian University, Uganda

Search engines are widely used by researchers to locate research materials on the internet. They provide optimization techniques to content developers to help their content show up among the top results when a user types in a topic of interest.

Repositories that have not been optimized will rank low in these results, or not appear at all, making their content less visible and therefore less cited. Researchers then see no benefit in submitting their content to these platforms.

I propose that all open repositories be search engine optimized, because this will make their researchers' content more visible to other researchers. Visibility will lead to more citations and is hence a motivating factor for researchers to submit more content to the institutional open repository.

I also demonstrate the impact of this on the Uganda Christian University open repository, which started off without optimization and was later optimized. The number of researchers willing to give us their work has increased greatly due to the increased online visibility of their research.

I believe that the greatest motivator for researchers is having their work more visible to the public, rather than monetary benefits.

Posters--173Odongo.docx


MyCoRe - the repository software framework

Wiebke Oeltjen

Universität Hamburg, Germany

MyCoRe is an established repository software framework in the German-speaking region. The open source framework is able to serve as basis for digital libraries, multimedia archives, research data repositories or institutional repositories. The name, pronounced "my core", indicates that there is a software core that can be used in custom applications. More than 80 MyCoRe applications are running at over 20, mostly German locations. They provide publications, digital objects (such as documents, manuscripts, books, catalogs, journals, newspapers, etc.) and research data. All objects are described with metadata. Also image files, sound documents, and videos may be included in MyCoRe-based information systems. The MyCoRe framework provides the functionality necessary to manage, store, present, and share metadata and digital resources. Various interfaces are supported. Resource harvesting within the OAI-PMH framework is provided. The deposit of content is possible via SWORD protocol (v2). ORCID identifiers are integrated in MyCoRe repositories for author identification and automated data export to or import from ORCID is possible. The support of schema.org and JSON-LD is the newest software development.
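
For illustration, the snippet below shows what schema.org JSON-LD markup for a repository item might look like; the values are invented and the exact mapping used by MyCoRe may differ.

# Hypothetical schema.org description of a repository item, serialized
# as JSON-LD; not MyCoRe's actual output.
import json

item = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "Example article title",
    "author": [{"@type": "Person", "name": "Jane Doe",
                "identifier": "https://orcid.org/0000-0000-0000-0000"}],
    "datePublished": "2019-06-11",
    "identifier": "https://doi.org/10.12345/example",
    "isAccessibleForFree": True,
}

print(json.dumps(item, indent=2))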

Examples of MyCoRe applications are "DuEPublico" with the university bibliography at the University of Duisburg-Essen, the publication server of the TU Braunschweig or the "Digital Library Thuringia (DBT)".

Posters--378Oeltjen.pdf


Hamburg Open Science: linking repositories across universities and fostering digital culture in science

Konstantin Olschofsky

University of Hamburg/ Hamburg Open Science, Germany

Hamburg is a federal state with universities of various orientations, along with associated libraries and scientific collections. Repositories are operated in all these institutions, and diverse research results and data are produced. Scientists are looking for a long-term, structured backup solution that can be accessed easily and, where appropriate, openly. In the Hamburg Open Science program, the universities jointly develop services to foster the use of repositories for open science and implement them in open science infrastructures adapted to the specific demands of the respective universities. Hence, the focus here is on analysing and understanding the needs of scientists and evaluating the benefits of the implemented infrastructure for scientists from different disciplines. Through joint development, it will be possible to use a single web service to locate the diverse scientific results from distributed repositories across Hamburg and to conduct a differentiated search for publications, data or research contacts. In addition to metadata search results, open data can be used directly from the source repositories.

Posters--382Olschofsky.docx


Building Campus Support and Adoption of a New Repository

Jennifer Lea Pate

University of North Alabama, United States of America

Our repository is just over a year old and yet campus adoption is still an uphill battle. Even as repositories become more common and more popular, administration and faculty may still not understand the benefits of utilizing repositories and Open Access publication. Others may understand the concept of an IR but might be frustrated by the thought of it being another task added to their ever-increasing to-do list. The need to educate campus constituents on why the repository is important and how it can support their pursuit of tenure and promotion goals remains a challenge for most IR administrators. Education and outreach are crucial at this stage. Do you start with the faculty or the administration? Do you try to talk to faculty one-on-one or do you go to department meetings? Can you hold open sessions in the library or other central locations on campus? How can you leverage metrics and impact factors? This poster will address these questions and will provide a framework that you can take back to your campus and use to build rapport with faculty.

Posters--180Pate.pdf


Repositories at Work: General Research Data Repository at Universität Hamburg

Hagen Peukert, Juliane Jacob, Iris Vogel, Kai Wörner

University of Hamburg, Germany

A general research data repository is supposed to be intuitively usable by a large variety of users with different backgrounds, needs, and expectations. These come to the fore in the design of the interface as secondary features. While the primary features, expressed by core functionality such as scalability, retrieval time, or the size of the data to be processed, are perceived as absolute must-haves, secondary features are perceived as a real advantage, a 'nice-to-have'. Yet the one depends crucially on the other. We would like to share our experience of configuring a research data repository, designing the technical infrastructure, and adding software functionality. By describing the process of making adjustments and their intended effects on usage behavior, we propose a use case for setting up a general research data repository in which the needs of the user play a central role.

Posters--365Peukert_a.pdf
Posters--365Peukert_b.png


OpenAIRE Content Acquisition Policy: expanding the scope

Pedro Príncipe

University of Minho, Portugal

This poster outlines the new OpenAIRE Content Acquisition Policy released in October 2018, which defines the conditions under which metadata of scientific products collected from content providers in OpenAIRE will be considered for inclusion in the OpenAIRE information space. Policies specify which typologies of objects are mapped into which OpenAIRE entities (literature, dataset, software, other research products) and which are the minimal quality conditions under which metadata can be accepted.

With its new content acquisition policy, OpenAIRE broadens its scope to integrate metadata records of all scientific and research products. This means that OpenAIRE now harvests: publication records of all access levels (open access, closed access/metadata only, etc.), publication records with and without funding references, and records of different research product types in one repository (literature publications, research data, software and other research products). In order to ensure that records are included in OpenAIRE, it is vital that the access level of a record is made clear (preferably by an access level statement at record level, alternatively by the use of specific OAI sets) and that each record contains a PID (or URL) that resolves to a splash page.

Posters--468Príncipe.pdf


How to publish software artefacts in institutional repositories: Git integration for DSpace repositories with SARA

Franziska Rapp1, Daniel Scharon2

1Ulm University, Germany; 2University of Konstanz, Germany

Software plays an essential role in today’s scientific research. In addition to research data, the software itself should be made available for the long term to enhance the reproducibility of research results. To address this problem we developed a web service (SARA) that integrates Git with institutional repositories. The web service allows researchers to import software artefacts from participating GitLab instances or GitHub. A web interface displays captured metadata, allows the selection of branches, the optional inclusion of the version history and more. Metadata records are pushed to DSpace while the bitstreams are stored in a separate Git archive where you can browse and view the files. Depending on an institution’s preferred workflow and DSpace configuration, software artefacts are either immediately available or go through the workspace and/or workflow in DSpace. The web service can be self-hosted and used by one or several institutions, integrating their respective institutional repositories. We are interested in connecting additional repository software and welcome initiatives to contribute to the project.
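
The sketch below is not the SARA implementation, only an illustration of the workflow it describes: basic metadata for a software artefact is read from the GitHub API and shaped into Dublin-Core-like fields ready to be deposited; the repository name is a placeholder.

# Illustrative mapping of GitHub repository metadata to Dublin-Core-like
# fields for deposit in an institutional repository.
import requests

def github_metadata(owner, repo):
    r = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=30)
    r.raise_for_status()
    data = r.json()
    return {
        "dc.title": data["name"],
        "dc.description.abstract": data.get("description") or "",
        "dc.date.issued": data.get("created_at", "")[:10],
        "dc.identifier.uri": data["html_url"],
        "dc.rights": (data.get("license") or {}).get("name", ""),
        "dc.type": "Software",
    }

# metadata = github_metadata("octocat", "Hello-World")  # placeholder repo
# ...this record would then be deposited via the repository's own API.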

Posters--242Rapp_a.pdf


One Repo fits all – The Specialist Repository for Life Sciences as a Data Store for every type of publication

Dr. Ursula Arning, Robin Rothe

ZB MED Information Centre for Life Sciences, Germany

There are many different types of repositories for scientific purposes. Fortunately, more and more institutions and infrastructural units are discovering the value of digital storage that allows the reuse of research results in the context of open science. Based on this, many scientific communities have established a number of commonly used repositories and other publishing platforms for different kinds of scientific output. However, these platforms and repositories each support only one kind of publication, which can make it difficult to find and access all information about one research project, scientific group or institution. ZB MED and its publication platform PUBLISSO aim not only to establish competitive products for existing best-case scenarios, but also to fill these gaps in publication services. Therefore, PUBLISSO has built a repository which fits all types of publication, providing access to a broad range of publications (i.e. text publications and research data) in one place. It also gives small institutions and communities the opportunity to publish their work in one place. Additionally, PUBLISSO promotes the idea of cross-referencing to make relations between publications visible; to this end, developments in Linked Open Data tools are constantly monitored and implemented where possible.

Posters--240Rothe_a.pdf


Leveraging a University Institutional Repository to Host a New Open Access, Peer-Reviewed Student Biomedical Science Journal at the Cooper Medical School of Rowan University

Benjamin Saracco, Amanda Adams

Cooper Medical School of Rowan University, United States of America

Background: In the spring of 2018, the library of the Cooper Medical School of Rowan University (CMSRU) was approached by a clinical faculty member and two medical students who expressed interest in creating a legitimate, peer-reviewed journal that could publish student research from around the world.

Problem: The CMSRU library took on the challenge of facilitating this project. Their advice ensured the journal would be Open Access (OA), publish with a CC-BY license, have an ISSN, issue DOIs, and follow ethical publishing guidelines to ensure the journal could eventually be included in indices.

Approach: The CMSRU library already had an existing Digital Commons institutional repository (IR) which could be utilized as a hosting platform, facilitate peer review, and index articles in Google Scholar for discoverability. The journal officially launched in the Fall of 2018 and is actively peer-reviewing submissions.

Conclusions: This poster will highlight the unique challenges of hosting OA journals on IRs. The poster will also provide best practices on starting peer-reviewed journals that are partially student operated. It also will include information on how this project can fill the niche of providing early-career scholars a home to highlight their scholarship while learning the best practices of scholarly OA publishing.

Posters--165Saracco_a.pdf
Posters--165Saracco_b.png


Introducing Orpheus, an Open Source database of journals and publishers

André Fernando Sartori

University of Cambridge, United Kingdom

Orpheus is a database of academic journals’ attributes that are frequently required by repository managers, such as revenue model (subscription, hybrid or fully Open Access), self-archiving policies, licences, contacts for queries and article processing charges (APCs). It features web frontends for users and administrators, and a RESTful API for integration with repository platforms and other services. Orpheus also comes with a collection of Python parsers for datasets commonly used by repository staff, such as lists of embargoes and APCs from major publishers (Elsevier, Wiley, Taylor & Francis, Oxford University Press and Cambridge University Press) and the databases DOAJ and SHERPA/RoMEO. Orpheus was recently integrated with the Cambridge DSpace repository (Apollo) and auxiliary systems, which has enabled embargo periods to be automatically applied to deposited articles and streamlined the process of advising researchers on payments, licences and compliance to funders' Open Access policies. Orpheus’ source code, available at https://github.com/osc-cam/orpheus, may be easily expanded or tailored to meet the particular needs of other repositories and Scholarly Communication services.
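
The sketch below shows the kind of lookup a deposit workflow might make against an Orpheus-style API; the base URL, endpoint path and field names are hypothetical, so the real API documented in the GitHub repository should be consulted.

# Hedged sketch of querying an Orpheus-style journal database for an
# embargo period; endpoint and field names are hypothetical.
import requests

ORPHEUS = "https://orpheus.example.org/api"  # placeholder base URL

def embargo_months(journal_title):
    r = requests.get(f"{ORPHEUS}/journals",            # hypothetical endpoint
                     params={"name": journal_title}, timeout=30)
    r.raise_for_status()
    results = r.json()
    if not results:
        return None
    return results[0].get("embargo_months")            # hypothetical field

# months = embargo_months("Journal of Example Studies")
# if months is not None: apply the embargo to the deposited article's files.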

Posters--253Sartori.pdf


Donut and Glaze: A Hyrax Dam and Decoupled Frontend in AWS

David E Schober

Northwestern University, United States of America

Northwestern University has worked over the last two years to move its core infrastructure to AWS. At the same time we worked to develop a Hyrax-based DAM and decoupled front end. The end result is a scalable solution that allows the team to use modern front-end frameworks (React), S3 (static) hosting, serverless (Lambda-based) tasks, and flexibility in backend decisions.

Posters--440Schober.pdf


Keeping the user in the workflow: IR licensing for mediated deposits

Gail Steinhart

Cornell University, United States of America

The practice of mediated deposit of content to Institutional Repositories (IRs) is widespread (e.g. CNI 2017, Dubinsky 2014, Poynder 2016, Salo 2008). For Cornell University's DSpace installation, eCommons, approximately 80% of deposits have been mediated. An IR deposit workflow typically involves presenting the uploader, presumed also to be the rights holder, with a license agreement granting the service provider the non-exclusive rights required to provide the service. Mediated deposit complicates the process of obtaining permission from the rights holder and removes them from the licensing process. In spite of this reality, IR platforms have yet to evolve to support this aspect of mediated deposit, even while they support batch upload (presumably mediated). Similarly, standard deposit agreements seldom address mediated deposit (Rinehart and Cunningham 2017). This leaves IR managers either to develop their own workarounds or, perhaps, simply to omit the process of obtaining and documenting the rights holder's acceptance of the repository license. We will share our procedures for obtaining, recording and retaining acceptance of the terms of our IR's license for a variety of mediated deposit scenarios.

Posters--120Steinhart.pdf


Flexible metadata: the key to a single repository for all types of output

Ben Summers

Haplo, United Kingdom

This poster explains why a flexible metadata schema is critical to building a repository for all types of output, such as articles, image-based research, collections of outputs, and datasets. It illustrates the Haplo data model and describes in detail how the flexibility is achieved and the benefits gained.

Posters--257Summers_a.pdf


The Bridge of Data project: Gdansk University of Technology's approach to building infrastructure for data sharing

Magdalena Szuflita-Żurawska, Michał Nowacki, Anna Wałek, Paweł Pszczoliński

Gdansk University of Technology, Poland

In the era of Big Data, research data play an unquestionable role in scientific research as well as everyday life. The Bridge of Data project at Gdansk University of Technology (GUT) will deliver a data repository with adjunct services that are unique in Poland. The project is a continuation of, and builds upon, the previous project, the Bridge of Knowledge, which concentrated foremost on Open Access. The current developments put the emphasis on research data and will provide technological innovations such as hosting the project on a private computing cloud and storing the data on Ceph Object Storage. Full-text search of the data will be available through the NoSQL search engine Elasticsearch. Moreover, the project will allow researchers to perform Big Data analysis via the Apache Zeppelin GUI on the supercomputer Tryton (40,000 cores, 1.5 PFLOPS).
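
As an illustration of such full-text search, the sketch below queries an Elasticsearch index over HTTP; the host, index name and field names are placeholders rather than the GUT configuration.

# Illustrative full-text search against an Elasticsearch index; host,
# index and field names are placeholders.
import requests

ES = "http://localhost:9200"           # placeholder Elasticsearch host

def search_datasets(text, size=10):
    query = {"query": {"match": {"description": text}}, "size": size}
    r = requests.post(f"{ES}/datasets/_search", json=query, timeout=30)
    r.raise_for_status()
    return [hit["_source"] for hit in r.json()["hits"]["hits"]]

# for dataset in search_datasets("ship hydrodynamics"):
#     print(dataset.get("title"))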

Additionally, the Competency Center operating at the GUT Library will be launched to provide assistance and tailored on-site training for researchers from all scientific disciplines, covering topics such as Data Management Plans, open licensing and metadata standards.

The Bridge of Data is co-financed by the European Regional Development Fund within the framework of an Operational Programme Digital Poland for 2014-2020.

Posters--307Szuflita-Żurawska.docx


Designing a vocabulary service for a ‘data-driven’ materials data repository

Kosuke Tanabe, Asahiko Matsuda, Mikiko Tanifuji

National Institute for Materials Science, Japan

We are developing a new data repository to support data-driven developments in materials science, and it became necessary to build an open vocabulary set to describe metadata such as chemical substances, characterization methods, instruments, units, etc. There have been efforts to build a standard vocabulary or an ontology for materials science, but what should be considered the essential concepts, and how to structure them, can be quite domain-specific and vary from one researcher to another, as materials science is an interdisciplinary field that encompasses chemistry, physics, and biology. To address this, we are developing a wiki-powered vocabulary management service, not only to apply the aforementioned earlier efforts, but also to allow building on top of them by 'crowd-sourcing' among researchers, thereby realizing appropriate metadata description for a highly usable materials data repository.

Posters--310Tanabe.docx


CDS Videos: The new platform for CERN videos

Flavio Costa, Jose Benito Gonzalez Lopez, Ludmila Marian, Karolina Przerwa, Nicola Tarocco, Zacharias Zacharodimos

CERN, Switzerland

CERN Document Server (CDS) is the CERN institutional repository based on the Invenio open source digital repository framework. CDS aims to be CERN's document hub. To achieve this we are transforming CDS into an aggregator over specialised repositories, each having its own software stack, with features enabled based on the repository's content. The first specialised repository created is CDS Videos. CDS Videos provides integrated submission, long-term archival and dissemination of CERN video material. It offers a complete solution for the CERN video team, as well as for any department or user at CERN, to upload video productions. Since it was released in production, the yearly number of videos uploaded at CERN has doubled, showing that the creation of a specialised platform based on user needs, rather than a generic platform capable of dealing with many document types, has been a benefit for the CERN community and for the video heritage of CERN.

Posters--383Costa.pdf


A short history of ORCID (DE) in Germany

Paul Vierkant1, Heinz Pampel1, Britta Dreyer2, Stephanie Glagla-Dietz3, Sarah Hartmann3, Christian Pietsch4, Jochen Schirrwagen4, Friedrich Summann4

1Helmholtz Association, Germany; 2German National Library of Science and Technology; 3German National Library; 4Bielefeld University

In the past few years the Open Researcher and Contributor ID (ORCID) became the global standard for author identification in science. In Germany ORCID has been established as a standard too. The project ORCID DE contributed to this development by initiating the foundation of the ORCID Germany Consortium led by the German National Library of Science and Technology (TIB). Through workshops, webinars and its website the project provides a forum for academic institutions in Germany to discuss challenges and benefits of ORCID. The implementation of ORCID in essential information infrastructures such as the Bielefeld Academic Search Engine (BASE) as well as the linking with the Integrated Authority File (GND) mark important milestones of the dissemination of ORCID in Germany. The aim of the ORCID DE project is to sustainably foster Open Researcher and Contributor ID (ORCID) at universities and non-university research institutions by taking a comprehensive approach. ORCID DE received funding from the German Research Foundation (DFG) at the beginning of 2016 for a period of three years. The poster illustrates the development of ORCID in Germany in the past three years depicting milestones and crucial factors for the growth of ORCID.

Posters--218Vierkant_a.pdf


re3data - Open infrastructure for Open Science

Paul Vierkant1, Heinz Pampel1, Robert Ulrich2, Frank Scholze2, Maxi Kindling3, Michael Witt4, Kirsten Elger5

1Helmholtz Association, Germany; 2Karlsruhe Institute of Technology; 3Humboldt Universität zu Berlin; 4Purdue University; 5GFZ German Research Centre for Geosciences

re3data (https://www.re3data.org) is the global registry of research data repositories and portals (Pampel et al., 2013). As of December 2018, over 2,240 digital repositories for research data were registered using the comprehensive re3data metadata schema (Rücknagel et al., 2015). To help identify suitable research data repositories, a vast number of funders, publishers and research organizations from around the world recommend re3data within their research data management guidelines.

re3data is part of DataCite’s service portfolio and is hosted by the library of the Karlsruhe Institute of Technology (KIT), in collaboration with the Helmholtz Open Science Coordination Office at the Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, the university library of Purdue University, USA, and the Berlin School of Library and Information Science of the Humboldt-Universität zu Berlin.

The poster presents the current status of re3data as well as future improvements based on stakeholder requirements.

Posters--222Vierkant_a.pdf


A model for a sustainable repository network in Serbia

Obrad Vučkovac

Institute of nuclear sciences "Vinca", University of Belgrade, Serbia

We present a model for developing and maintaining institutional repositories through the cooperation of the University of Belgrade Computer Center (UBCC) with, so far, seven research organizations. It has resulted in improved visibility of local scientific output and has created an interoperable environment for information exchange. The infrastructure is developed and maintained by the UBCC IT team, whereas each institution has its own repository manager responsible for content.

The main focus in developing the institutional repositories is compliance with Open Science principles, standardized metadata, licenses, and reliance on a non-proprietary, open source platform (DSpace). The repositories are now harvested by aggregators and services (OpenAIRE, CORE, BASE, Unpaywall, Google Scholar).
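
Harvesting by such aggregators is typically done over the OAI-PMH interface that DSpace exposes. The snippet below is a minimal illustrative sketch (not part of the poster) of how a ListRecords request for Dublin Core metadata might be issued and parsed in Python; the repository base URL is hypothetical.

# Minimal sketch: harvesting Dublin Core records from a DSpace repository's
# OAI-PMH endpoint, the interface used by aggregators such as BASE or CORE.
# The base URL below is hypothetical; substitute a real repository endpoint.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.org/oai/request"  # hypothetical DSpace endpoint
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

resp = requests.get(
    BASE_URL,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print the OAI identifier and title of each record in the first result page.
for record in root.findall(".//oai:record", NS):
    identifier = record.findtext(".//oai:identifier", default="", namespaces=NS)
    title = record.findtext(".//dc:title", default="(no title)", namespaces=NS)
    print(identifier, "-", title)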

This model is flexible, providing new partners with a tested and optimized structure. It is also expandable: new services may be added, e.g. the Altmetric.com widget or in-house developed tools for bulk metadata editing and import.

The model described could be of particular interest to repository managers and users from developing countries. It demonstrates that proper coordination and standardized procedures can lead to an interoperable and sustainable network of repositories in an environment without prior repository infrastructure, ensuring optimal information exchange with users, whether human or machine.

Posters--280Vučkovac.docx


Archiving large or restricted datasets with Edinburgh DataVault

Pauline Ward

University of Edinburgh, United Kingdom

In order to meet the need of researchers at the University of Edinburgh to archive large and restricted datasets for the long term, the University set out to build a DataVault system from the codebase previously developed in collaboration with the University of Manchester. This new DataVault is a complementary and integral part of the existing suite of Research Data Management (RDM) tools at Edinburgh: open data up to 100 GB is archived in DataShare, and metadata records are brought together with other research outputs in our CRIS (Pure). In this way the Research Data Service is better able to support users taking a holistic approach to RDM as a component of Open Science. Through user engagement and testing, our understanding of user needs has been refined, and challenges relating to usability and resilience have been overcome. The web-based DataVault system allows users to archive multiple terabytes in an affordable, secure archive, with a persistent identifier and other metadata discoverable in the CRIS. Future development will implement a review system to allow appropriate management of data over the long term.

Posters--507Ward.pdf


EUDAT-B2FIND: A FAIR-friendly and interdisciplinary discovery service for open research data

Heinrich Widmann, Claudia Martens, Hannes Thiemann

Deutsches Klimarechenzentrum GmbH, Germany

The European Data Infrastructure (EUDAT) project established a pan-European e-infrastructure supporting a variety of research communities and individuals in managing the rising tide of research data in open science. This Collaborative Data Infrastructure (CDI) is based on the FAIR principles and implements community-driven, advanced data management technologies and services to tackle the specific challenges of international and cross-domain research data management.

The EUDAT metadata service B2FIND plays a central role in the European Open Science Cloud (EOSC-hub) project as the central metadata repository and discovery portal for the diverse metadata collected from heterogeneous and interdisciplinary sources within and beyond EOSC-hub. The B2FIND catalogue harvests metadata not only from research communities, but also from generic repositories such as the data publication service EUDAT-B2SHARE.

To support Open Science according to the FAIR principles, EUDAT-B2FIND allows both research data providers to easily publish their metadata and scientists to conduct cross-disciplinary and semantic search for, and re-use of, data resources.

Posters--376Widmann.pdf


Using DSpace@Fraunhofer – Building up the Fraunhofer Open Science Cloud

Andrea Wuchner, Dirk Eisengräber-Pabst, Michael Erndt

Fraunhofer-Informationszentrum Raum und Bau IRB, Germany

Since 2016, Fraunhofer, Europe’s largest organization for applied research, has been facing the challenge of implementing and migrating three repository systems: a new current research information system (CRIS), a new open data repository, and the complete renovation of the longstanding bibliographic database »Fraunhofer-Publica«, along with its younger sibling, the open access repository »Fraunhofer-ePrints«. The goal is to implement a unique repository landscape as a key enabler for Open Science. For all systems, DSpace or DSpace-CRIS is being used. Reasons for selecting DSpace were the availability of a plug-in for individual CRIS functionalities, numerous out-of-the-box functionalities, the large, well-organized community and the large number of successful installations around the globe. The software enables the systems to share entities such as people, projects and organizations. In addition, standard submission workflows for all data types and a consistent user experience will be available. The poster will deliver a visual presentation of the three systems, their interfaces and workflows with key user groups, as well as their interconnection and software architecture. The poster will present the key points of our feasibility studies of DSpace/DSpace-CRIS, and it will also give an outlook on the Fraunhofer vision of building a unique »Fraunhofer-Open Science Cloud«.

Posters--113Wuchner.docx


JOIN² Software Platform for the JINR Open Access Institutional Repository

Irina Filozova, Roman Semenov, Galina Shestakova, Tatiana Zaikina

Joint Institute for Nuclear Research, Russian Federation

Nowadays, the practical interest in scientific research results, educational lectures and materials drives the creation and development of open archives of scientific publications. The JINR Document Server (JDS — jds.jinr.ru) is based on the Invenio software platform (developed at CERN). The goals of JDS are to store JINR information resources and to provide effective access to them. JDS contains many materials that reflect and facilitate research activities.

In the framework of the JOIN² project, the partners have improved and adapted the Invenio software platform to the information needs of JOIN² users. The needs of JDS users are very similar to those of JOIN² users, so JINR decided to join the JOIN² project. JINR’s participation in the project will improve the functionality of the JINR Open Access institutional repository through code reuse and further joint development. The poster shows the process of migrating and adapting JDS to the JOIN² software platform.

Posters--332Filozova.docx


Context-adaptive research data repository publishing in the Chinese Academy of Sciences

Lili Zhang, Chengzan Li, Lulu Jiang, Yuanchun Zhou

Computer Network Information Center, Chinese Academy of Sciences, People's Republic of China

ScienceDB is an open and generic data repository that has aimed at making scientific data FAIR (Wilkinson et al., 2016) since 2016 (Zhang et al., 2018). ScienceDB mainly serves three scenarios: long-term data services for data journal publishing; large-scale data services for research networks; and tailored data publishing services for individual research scientists.

(1) For data journal publishing, ScienceDB generally supports lifelong data curation and services. For example, China Scientific Data (www.csdata.org), the first multidisciplinary data journal in China, has published over 170 data papers, with 70% of the linked datasets submitted to ScienceDB.

(2) For research teams and networks, ScienceDB supports complex management models for major research projects, featuring internal communication and sharing of valuable data. In the CASEarth project, ScienceDB has supported over 100 sub-programs in submitting and preserving over 400 TB of data covering geoscience and related subjects.

(3) For data publishing by individual research scientists, the repository offers user-friendly services such as tailored, subject-based tools for data curation.

So far, the repository has recorded 259,190 page views and 36,000 downloads. Furthermore, connectivity within the local and international science community, sustainability and broader social impact should also contribute to the long-term development of data repository publishing.

Posters--456Zhang.pdf
 

 