Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
MM01: Minute Madness
Time:
Tuesday, 05/Jun/2018:
5:10pm - 6:10pm

Session Chair: Michele Mennielli, DuraSpace
Session Chair: John Adewale Ajao, University Of California, Santa Barbara
Location: Ballroom A
Ballroom A is the largest single room meeting space in the SUB. Will fit the whole conference. Live streaming.

Presentations

DOI versioning done right

Krzysztof Nowak, Alexandros Ioannidis, Lars Holm Nielsen

CERN, Switzerland

In spring 2017, Zenodo launched a new DOI versioning scheme. Unlike DOI versioning schemes in other generic repositories, no semantic versioning information is included in the identifier string. Instead, DOIs for versions of the same resource are semantically linked in the DataCite metadata registered along with each DOI. Furthermore, an additional DOI representing all versions is registered to provide an aggregated view for discovery systems. The poster presents use cases, user interface design considerations and implementation details of the new DOI versioning scheme on Zenodo.
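The linking described here can be illustrated with a small sketch of DataCite-style relatedIdentifiers entries. It assumes the standard DataCite relation types (IsVersionOf, HasVersion, IsNewVersionOf, IsPreviousVersionOf); the DOIs and the helper function are illustrative, not Zenodo's actual code.

```python
def link_versions(concept_doi, version_dois):
    """Build DataCite-style relatedIdentifiers linking each version DOI
    to the 'concept' DOI and to its neighbouring versions.
    `version_dois` is ordered oldest to newest."""
    records = {}
    for i, doi in enumerate(version_dois):
        # Every version points back to the concept DOI.
        related = [{"relatedIdentifier": concept_doi,
                    "relatedIdentifierType": "DOI",
                    "relationType": "IsVersionOf"}]
        if i > 0:
            related.append({"relatedIdentifier": version_dois[i - 1],
                            "relatedIdentifierType": "DOI",
                            "relationType": "IsNewVersionOf"})
        if i < len(version_dois) - 1:
            related.append({"relatedIdentifier": version_dois[i + 1],
                            "relatedIdentifierType": "DOI",
                            "relationType": "IsPreviousVersionOf"})
        records[doi] = related
    # The concept DOI aggregates all versions for discovery systems.
    records[concept_doi] = [{"relatedIdentifier": d,
                             "relatedIdentifierType": "DOI",
                             "relationType": "HasVersion"}
                            for d in version_dois]
    return records
```

A harvester following these links can then resolve any version DOI to the whole version chain without parsing the identifier string itself.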


Introducing The New Sherpa Data Model and API

Adam Field

Jisc, United Kingdom

As part of our v2.Sherpa project, we have been rebuilding all of our services from the ground up. This includes a more rational data model and standardised APIs across all services. The first of these APIs to be completed is the Object Retrieval API, currently active on v2.Juliet and v2.OpenDOAR, which allows full records to be downloaded from our services in JSON.

This poster is intended to provide an introduction to:

• The data model, as represented by the JSON structure that the new API exports.

• The Object Retrieval API, and how it can be used to locate and retrieve items.

Additionally, the poster will provide an updated roadmap for the releases of our future services and when the current APIs will be retired for each service.


Building a home for digital content

Julie Allinson, Stephanie Taylor

University of London, United Kingdom

This poster draws on the experience of the authors in working with a range of open source technologies, tools, communities and content to show how to build not just a system, but a home for digital content. A system alone doesn’t ensure digital content will become a useful, sustainable part of a collection. This poster will show the steps to consider beyond installing a system - the steps needed to build a ‘home’. A ‘home’ environment supports the core values and activities of an organisation, the work of information professionals in managing and preserving content and the requirements of the users, who need to discover, access and use the content for their own purposes.

The poster illustrates how the Samvera technologies can be used to build repository solutions for archives, libraries, galleries and museums. The flexibility and adaptability of these technologies make it possible to build a ‘home’ that can be changed to suit different requirements. It explores building systems that can not only support organisations now, but grow and adapt for the future. We believe a ’home’ puts content at the heart of an organisation, and that use, re-use and integration leads to sustainable systems.


Scholarly passion on the web: helping a professor create a sustainable solution for his fables collections.

Richard Jizba, Rose Fredrick, Karl Wirth

Creighton University Libraries, United States of America

In the late 1990s, faculty in various disciplines discovered that they could share their scholarly interests and passions with a worldwide audience using relatively simple and inexpensive web publishing tools. As the web became increasingly complex and web publishing increasingly sophisticated, those scholars who continued to work on their own websites began to fall behind many trends in web design. As they added content, these project websites tended to become unwieldy and difficult to migrate -- an inevitable challenge for any long-term web project. One such collection at Creighton University was the website for the Fr. Carlson Fables Collection, which includes over ten thousand books and hundreds of fable-related artifacts: figurines, cigarette cards, calendars, clothing, cookie tins, toys and more. This poster describes how DSpace is being used to simplify many of the website maintenance issues, create structured metadata, and provide a powerful, illustrated and annotated index to the collection in a way that provides long-term sustainability and portability. Finally, it notes other sustainability issues for projects such as the Fables Collection, which is the project and passion of a single scholar.


Design for Diversity: Towards a More Inclusive Systems Design

Amanda Rust, Julia Flanders, Patrick Yott, Cara Messina, Sarah Sweeney

Northeastern University Libraries, United States of America

Design for Diversity is an IMLS-funded project focusing on the ways in which information systems embody and reinforce cultural norms, asking how we might design systems that account for diverse cultural materials and ways of knowing. By centering information systems, we focus on issues such as the harm caused by cataloging standards that classify living, breathing people as “illegal aliens”, or data models that enforce strict gender binaries such as “woman or man” when human experience is much broader. To empower cultural heritage practitioners in their advocacy around these issues, and also better educate the next generation of systems designers, we created a teaching and learning toolkit with case studies, readings, and other educational materials. This poster will present a draft version of the Toolkit, and Open Repositories attendees will be encouraged to add their experiences to the toolkit through feedback such as suggestions for new topics and ways to use the Toolkit for high impact.


LAKEsuperior: a new old Fedora implementation

Stefano Cossu

The Art Institute of Chicago, United States of America

While Fedora keeps evolving toward a formalization of its specifications, leading up to Fedora versions 5 and beyond, there is at the same time the need to address some limitations of the current Fedora 4 implementation, mostly tied to its underlying technology and unrelated to the evolution of the API specs.

LAKEsuperior aims to be a drop-in replacement for the official Fedora 4 implementation, addressing some of these limitations.

At the moment, community feedback is important to move the development efforts forward beyond its alpha stage. This poster session intends to be a place for encouraging such feedback.


OpenAIRE Advancing Open Scholarship

Pedro Principe1, Jochen Schirrwagen2

1University of Minho, Portugal; 2Bielefeld University, Germany

From 1st January 2018, the OpenAIRE infrastructure enters a new phase with the start of the OpenAIRE-Advance project. OpenAIRE-Advance continues the mission of OpenAIRE to support the Open Access and Open Data mandates in Europe, relying on a decentralized network of content providers. By sustaining the current infrastructure, comprised of a human network and technical services, it consolidates its achievements while working to shift the momentum among its communities towards Open Science, aiming to be a trusted e-Infrastructure within the realms of the European Open Science Cloud.

In this next phase, in addition to strengthening several working areas of the infrastructure, OpenAIRE will focus on the outcomes of the COAR Next Generation Repositories Working Group. OpenAIRE aims to promote emerging changes in the scientific communication landscape by building on repositories as the foundation of a globally networked and distributed Open Science infrastructure. This poster outlines OpenAIRE-Advance's practical plans and tasks to support the development of next generation repositories, with new functionalities and technologies that will help with the adoption of modern web technologies and protocols, allowing repository platforms to better interact with more innovative and sophisticated networked scholarly tools and services.


Archiving Creighton University Campus Ministries’ Daily Reflections

Karl Leo Wirth, Richard Jizba, Rose Frederick

Creighton University Health Sciences Library, United States of America

Since 1998, Creighton University Campus Ministries, with the support of various staff and faculty volunteer authors, has produced and maintained an online series of Daily Reflections on the day's liturgical readings, which receives visits from all over the world. Originally intended as a resource for internal use by members of the campus community, the series attracted a surprising level of outside interest. In response to requests from users, Campus Ministries developed a web-based archive of each day's reflection. In early 2014, the Creighton Digital Repository (CDR) partnered with Campus Ministries to create a new, more sustainable and user-friendly archive for this collection. The previous archive system had grown dated, with limited functionality in areas such as statistics and search. The CDR team took Campus Ministries' existing Daily Reflections materials and, using a combination of Perl scripts to massage new and existing metadata and custom theming, produced a new DSpace-based archive with greatly expanded functionality over the previous system. The success of this project served as a stepping stone that the CDR team was able to use as an example to obtain community buy-in on further projects.


Beyond PubMed Central: ClinicalTrials.gov and Other Data Sharing Repositories from The National Library of Medicine

Ann Glusker

National Network of Libraries of Medicine, United States of America

Many researchers use PubMed Central, the free archive of full-text biomedical journal articles from the National Library of Medicine (NLM). In addition, some are aware that the National Institutes of Health (NIH) sponsors data sharing repositories in which they can deposit, and from which they can retrieve, data. However, they are often not aware that some of the higher-profile databases of the NLM, an NIH institute, also double as data sharing repositories. These include PubChem, GenBank, and ClinicalTrials.gov. Awareness of the repositories available through the NLM expands the potential options for data sharing and extraction. This poster will highlight the eight repositories sponsored by the NLM, with examples of data deposited in and available for retrieval from the three most used: PubChem, GenBank, and ClinicalTrials.gov. Understanding the options for depositing and retrieving data in these repositories can enhance researchers' possibilities for disseminating data and results and for selecting topics for future research.


Jisc Research Data Shared Service – Integrating Platforms for Open Science

John Paul Kaye, Dom Fripp, Daniela Duca, Paul Stokes, Tamsin Burland

Jisc, United Kingdom

Jisc is developing a solution for researchers and support staff within universities to enable them to deposit and preserve research data and digital objects for the long term. While general management of research data is not a new problem area, it has grown in importance due to several drivers, such as reproducibility, better research, more collaboration, and funder requirements.

Many higher education institutions in the UK have thought about research data management and have implemented data policies; some have tendered for or created solutions and, as a result, are running data repositories or combined data and publications repositories. Preservation of research data, however, is still rare. We are working with 16 of these institutions to build the Research Data Shared Service (RDSS).

Jisc is developing three major building blocks for the RDSS end-to-end offer: repository, preservation and analytics. Currently we are in the alpha stage of development and will go into beta in 2018, before launching the service in summer 2018. We are working with more than a dozen vendors that are either supplying their systems or supporting the development of the service and the integration of open source and licensed platforms within the service.


Scaling Humanities to Science: the Formation of Research Data Management Services at the Universität Hamburg

Juliane Jacob, Hagen Peukert, Iris Vogel, Kai Wörner

Universität Hamburg, Germany

The Center of Research Data Management at the University of Hamburg is currently scaling up their services from the humanities department only to all departments. The upcoming challenge is twofold: Firstly, data intensive sciences need high quality data curation services and technical infrastructures. Secondly, strict budget constraints inhibit the implementation of established strategies known from and applied to rapidly growing organizations. Our approach is to balance main impacting factors (e.g. heterogeneity and consistency of research data, technical and human resources, and requirements and functionality of custom designed applications) by standardizing service requests. Yet, standardization does not refer to one-size-fits-all, but rather to a well-tuned collection of modules that can be flexibly aligned to new services.


Toward accessible PDF documents in Open Access Repositories

Alexa Ramírez-Vega

Instituto Tecnológico de Costa Rica, Costa Rica

Currently, disability affects 15% of the world's population (approximately one billion people), according to data from the World Health Organization's (WHO) first report on disability. Of those, 645 million suffer visual or hearing impairment, which affects the correct use of academic documents. Most documents in open access repositories are deposited in PDF format, and it is important to guarantee the accessibility of those documents. Consequently, it is necessary to evaluate the accessibility of PDF documents in the open access repositories listed in OpenDOAR (Directory of Open Access Repositories) in the Communications and Information Technology area. For developers and site managers, it is important to know the state of accessibility of the PDF documents that will be deposited in their repositories.


Towards a Flexible and Robust Digital Preservation Infrastructure at Oregon State University Libraries and Press

Michael Boock, Hui Zhang

Oregon State University Libraries & Press, United States of America

Oregon State University Libraries and Press (OSULP) committed in its 2012-2017 strategic plan to the long-term maintenance and preservation of digital content. As a sustaining member of the MetaArchive Cooperative since 2010, the library replicates the university's corpus of theses, dissertations and extension publications using that Private LOCKSS Network. In 2017, the library developed a set of recommendations to achieve a more robust, comprehensive, and interoperable digital preservation program that includes increased use of the MetaArchive distributed digital preservation platform and cloud storage for geographically distributed content replication. This work is in process now and is expected to be completed by May 2018. Our solution for content housed in Fedora includes a script that traverses the hierarchy of repository objects to locate and export binary files together with the RDF metadata of the 'parent' object. The generated BagIt bags are then moved to temporary Amazon Web Services storage for LOCKSS harvesting by seven geographically dispersed servers.
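The packaging step described above can be sketched as follows. The actual OSULP script is not public, so this is a minimal, stdlib-only illustration of BagIt-style packaging (a data/ payload directory plus manifest-sha256.txt and bagit.txt, per the BagIt conventions); the file names are hypothetical.

```python
import hashlib
from pathlib import Path

def make_bag(src: Path, bag: Path) -> Path:
    """Package the exported files under `src` (binaries plus RDF metadata)
    as a minimal BagIt bag at `bag`."""
    data = bag / "data"
    data.mkdir(parents=True)
    manifest_lines = []
    for f in sorted(src.rglob("*")):
        if f.is_file():
            rel = f.relative_to(src)
            dest = data / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            payload = f.read_bytes()
            dest.write_bytes(payload)
            digest = hashlib.sha256(payload).hexdigest()
            # BagIt manifest format: "<checksum>  <path relative to bag root>"
            manifest_lines.append(f"{digest}  data/{rel.as_posix()}")
    (bag / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")
    (bag / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    return bag
```

In a production workflow the resulting bag directory would then be uploaded to the staging storage for LOCKSS harvesting.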


Ensuring Reusability in Institutional Data Repositories: A Case Study

Lisa Johnston, Erik Moore, Valerie Collins

University of Minnesota, United States of America

The Data Repository for the University of Minnesota (DRUM) launched in 2015 as a fully-curated, open access institutional data repository running on the DSpace platform. In 2017, DRUM acquired the community-driven CoreTrustSeal certification, one step on the path toward establishing itself as a trustworthy data repository. This poster will present a case study showcasing the various ways in which DRUM aims to ensure high-quality, reusable data sets. We will detail our data-level quality assurance procedures and our ingest and processing workflows (e.g., minimum response times from our six domain-expert data curators), and present usage statistics to date on the types, frequency, formats, and author satisfaction for open access data repository services at a large multidisciplinary US university.


Generic machine readability and digital repositories

Jozef Misutka, Ondřej Košarko

LINDAT/CLARIN, Charles University, Czech Republic

Many digital repositories claim to be machine readable and accessible. In practice, accessing several digital repositories based on different software (or even different versions of the same software) in an automated way often means applying specific processing to each of them, or restricting harvesters to the common subset of metadata, which can be very small in the real world even for repositories from the same domain.

We describe a slightly modified approach used by the CLARIN project (https://www.clarin.eu/) that can be used for real automated harvesting and processing. This includes the use of a flexible PID system (e.g., the Handle System) that can be directly resolved to a metadata format using a specific syntax, a flexible metadata schema with semantic annotation, the OAI-PMH protocol and, optionally, PID metadata. Most of the techniques used simply exploit existing technologies. A proof-of-concept implementation by LINDAT/CLARIN, based on DSpace, can be found at http://lindat.cz/.
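The OAI-PMH part of such a harvesting pipeline can be sketched with the standard library alone: parse one ListIdentifiers response page, collect the record identifiers, and pick up the resumptionToken for the next page. The sample response in the usage note is illustrative, not an actual LINDAT/CLARIN payload.

```python
import xml.etree.ElementTree as ET

# OAI-PMH 2.0 XML namespace, in ElementTree's {uri}tag notation.
OAI = "{http://www.openarchives.org/OAI/2.0/}"

def harvest_identifiers(xml_text):
    """Extract record identifiers and the resumption token from one
    OAI-PMH ListIdentifiers response page."""
    root = ET.fromstring(xml_text)
    ids = [h.findtext(OAI + "identifier") for h in root.iter(OAI + "header")]
    # resumptionToken is absent (None) on the last page of a result set.
    token = root.findtext(f".//{OAI}resumptionToken")
    return ids, token
```

A full harvester would loop, re-requesting with resumptionToken until it comes back empty, then resolve each identifier to the metadata format it needs.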

Many of the above mentioned techniques are conforming to recommendations and outputs from RDA (Research Data Alliance). Despite the fact that a working solution can be achieved we also mention the price-performance ratio and the maturity of some of the techniques.


Knowledge and Utilisation of Repositories as Predictors of Research Productivity of Academic Staff in Private Universities in South-West, Nigeria

Basiru Adetomiwa

Redeemer's University, Nigeria

Universities and research organisations all over the world have begun to pay more attention to the production and use of documents in digital form.

The study was carried out to ascertain academics' awareness and utilization of institutional repositories (IRs) in Nigerian universities. The study took the form of a descriptive survey, gathering data from 21 of the 27 private universities established and approved between 1999 and 2012 in South-west, Nigeria.

The study concludes that knowledge and utilization of repositories, individually and collectively, influenced the research productivity of academic staff in private universities in South-west, Nigeria. Knowledge made significant contributions towards the utilization of repositories. A moderate level of utilization of repositories was found in the study. It was specifically found that there is a significant correlation between research productivity and knowledge of repositories by academic staff in the surveyed private universities.

The observed correlation between research productivity and knowledge of electronic databases readily affirms the general perception by the academic staff that knowledge of repositories will have a positive effect on research productivity. Therefore, knowledge and utilisation of repositories are predictors of research productivity of academic staff in private universities in South-west, Nigeria.


LOCKSS Software Redesign Project

Art Pasquinelli, Thib Guicherd-Callin

Stanford U., United States of America

The LOCKSS Java software is going through a major update and revision that will affect existing LOCKSS users as well as future community innovators. Originally designed and launched in the late 1990s, LOCKSS (Lots Of Copies Keep Stuff Safe) is used by the library community to help maintain and preserve eBooks and eJournals. The present software rewrite will enhance not only the focus and capabilities of the LOCKSS technology itself, but also its collaborative partnerships, support structure, and 'market' focus toward new users and new types of content. The goal of revamping the LOCKSS code into web services components will allow entirely new types of relationships and use cases. Many of the new directions for LOCKSS are already under discussion with experienced, long-term LOCKSS network members and will evolve as the software is distributed, tested, and gains maturity. The poster will focus on:

1. The capabilities of the new componentized web services

2. Software development methodology

3. New opportunities for technology exchange and collaboration


Lume’s Custom Search Filters for Communities and Collections: Automatic Insertion of New Filter Values and Detection of Untranslated Values

Manuela Klanovicz Ferreira, Guilherme Antonio Borges, André Rolim Behr, Zaida Horowitz, Caterina Groposo Pavão, Janise Silva Borges da Costa, Carla Metzler Saatkamp

Universidade Federal do Rio Grande do Sul, Brazil

Nowadays, the Digital Repository of UFRGS – Lume – provides custom filters on the main page of each community and collection, and both the filter options and the community/collection names are translated according to the selected locale. However, the dropdown options of these filters have to be added manually. Lume's upgrade from DSpace version 1.8 to 5.8 demanded that these filters migrate to use Solr as the index and search tool. This migration was an opportunity to improve the filters to support automatic dropdown generation based on the metadata values of items belonging to each community/collection. A software module was developed to add this feature; beyond that, it also detects dropdown options that have not yet been translated. This custom module provides end users with a more up-to-date and complete dropdown list that reflects the content added to the repository, without the need to manually verify metadata values from new items and add them as options.


Ignatian Spiritual Exercises as an Online Retreat: Preservation of a Global Resource

Rose Fredrick, Richard Jizba, Karl Wirth

Creighton University, United States of America

When the Creighton University Online Ministries began in 1998, it was as an internal resource for faculty and staff. However, after hearing from many people in the third world, they decided to create the Online Retreat so that anyone can do the Spiritual Exercises of Ignatius of Loyola from home regardless of their location. It has been translated into six languages and is popular worldwide.

It was a groundbreaking website for its time, but it is based on early web design and has little technical support. After retirements and departmental downsizing, it also became clear that there was no plan for preservation and that the material would be lost if no action was taken. We decided to archive the website in our DSpace repository, the Creighton Digital Repository (CDR).

We supplied structure by creating fielded data and item records with each week's content, including the HTML files, PDFs, audio files, and pictures. We also created navigability by providing internal links to both the previous and the next week's records.

This popular international collection is now safely archived in our repository. This eliminates any sustainability issues by preserving it on a maintained, open interface with organized and accessible item records and files.


Enhanced interoperability for multiple platforms funded by the OpenAIRE project

Andrea Bollini, Claudio Cortese, Susanna Mornati

4Science, Italy

In December 2017, OpenAIRE announced an open call for tenders seeking innovative ideas to improve the OpenAIRE infrastructure services and/or their overall uptake. 4Science responded to Lot 1 – Repository tools and services – to address the lower level of the infrastructure by improving interoperability.

We proposed a set of initiatives among the most used open source platforms in the open science ecosystem, such as OJS, Dataverse, DSpace and DSpace-CRIS, to adopt emerging standards and protocols and to increase the interoperability of these platforms with the OpenAIRE infrastructure.

[Approval is pending and due on 15th January 2018. As soon as approved or rejected, we will update the present abstract with details about the proposed improvements]

The project runs from 1st February to 15th May; the poster aims to showcase the results produced, giving the relevant information to all institutions that want to benefit from the code, which is released as open source on GitHub and proposed to the respective communities for inclusion in the next official releases.


Exploring Solr Sharded Cores for Dynamic Reports

André Rolim Behr, Manuela Klanovicz Ferreira

Federal University of Rio Grande do Sul, Brazil

This work is an experience report of a DSpace migration from version 1.8 to 5.8, especially as related to usage statistics. The main issues were to migrate statistics stored in a relational database to a Solr core and then to adapt all the reports to retrieve data accordingly. The import approach consists of two scripts: (i) one to split the CSV containing the result of the SQL query into files of 10,000 tuples each, and (ii) one to load and monitor those files. After that, the statistics core was sharded by year. We had to update some queries so that they could be handled by Solr to generate the desired dynamic reports. Finally, some migration edge cases had to be handled, such as withdrawn or moved items, collections, and communities. As a result, Solr was able to deliver faster answers to queries and dynamic reports in the DSpace environment.
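The first import step described above (splitting the SQL export into 10,000-row files) can be sketched like this; the chunk file naming and the helper itself are illustrative rather than the authors' actual script:

```python
import csv
from pathlib import Path

def split_csv(src: Path, out_dir: Path, rows_per_file: int = 10_000):
    """Split `src` into numbered chunk files of at most `rows_per_file`
    data rows each, repeating the header row in every chunk so each
    file can be loaded into Solr independently."""
    out_dir.mkdir(parents=True, exist_ok=True)
    chunks = []
    with src.open(newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        buf, n = [], 0
        for row in reader:
            buf.append(row)
            if len(buf) == rows_per_file:
                chunks.append(_write_chunk(out_dir, n, header, buf))
                buf, n = [], n + 1
        if buf:  # final, possibly short, chunk
            chunks.append(_write_chunk(out_dir, n, header, buf))
    return chunks

def _write_chunk(out_dir, n, header, rows):
    path = out_dir / f"statistics-{n:04d}.csv"
    with path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return path
```

The second script would then feed each chunk file to Solr's update handler and verify the document count before moving on.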


A Bountiful Harvest: What we learned from building bepress's first harvesting tool

Ann Connolly

bepress, United States of America

Automatically harvesting content into the university's faculty profiles and repositories is a dream of many institutions struggling to create sustainable solutions while working with shrinking budgets and limited staff time. Having witnessed firsthand the challenges faced by libraries, last year bepress began exploring what it would take to make this dream a reality by developing a tool to harvest content. In this presentation, Ann Connolly, Director of Product at bepress, will discuss bepress's research, development, and pilot phases. She will share details about the new harvesting tool that provides access to more than 160 million objects from sources including PubMed, Elsevier's ScienceDirect, Springer, IEEE, arXiv, SSRN, RePEc, and JSTOR. She will discuss the lessons learned from an invaluable group of pilot testers who challenged assumptions and pushed us to improve workflows for a variety of campus needs. Attendees will come away with a framework for approaching this type of technical evaluation and development, as well as information about bepress's current and upcoming plans for harvesting.


Developing a Common Workflow for Ingest into our Digital Preservation System

Terrence W Brady

Georgetown University Library, United States of America

The Georgetown University Library joined the Academic Preservation Trust (APTrust) consortium. In early 2016, our library developed an in-house workflow system to ingest content into APTrust.

During the same timeframe, our library applied to have a National Digital Stewardship Resident (NDSR) placed in our library during the 2016-2017 academic year. Using open source tools and this custom workflow system, our NDSR resident was able to process the backlog of assets ready for digital preservation.

We identified four priority use cases to implement: (1) items with preservation media described in DSpace; (2) items described in DSpace with preservation media stored outside of DSpace; (3) items described in DSpace with updated metadata; and (4) items described in ArchivesSpace. We identified two potential use cases to consider in the future: (1) art items described in EmbARK and (2) unique items described only in the library catalog.

This presentation will describe the common workflow that emerged while examining these use cases. The presentation will also highlight the packaging decisions we made to support this workflow.


Sustaining Workflows and Budget: Using Zotero, SHERPA/RoMEO, and Unpaywall to Input Faculty Works

Ashley Sergiadis, Ethan Reynolds

East Tennessee State University, United States of America

Charles C. Sherrod Library was tasked with inputting faculty works into the open access institutional repository, Digital Commons@East Tennessee State University (https://dc.etsu.edu). In order for this project to remain sustainable with limited staffing and funding, the library created a workflow around the integration of Zotero and SHERPA/RoMEO to input data and check copyright, in addition to Unpaywall to locate open access documents. This presentation will detail the technical aspects and workflow of using these freely available products so that attendees can replicate all or relevant parts of this project. After a year of using the products, Sherrod Library completed a quantitative study of the quality of records available in Zotero by discipline and document type. The study discovered that the education and arts/humanities fields were poorly represented in contrast to the social/behavioral sciences and medicine/health sciences fields. Furthermore, journal articles, books, and book contributions were better represented in Zotero than newsletters and magazine articles, conference proceedings, and music albums. Consequently, Sherrod Library continues to use the products primarily for journal articles, books, and book contributions by STEM faculty. The outcomes of this study can inform content providers on how to best sustain open data through their websites' structures and metadata practices.
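The copyright-then-full-text decision at the heart of such a workflow could be sketched as follows. This is a hypothetical helper, not the library's actual script: the SHERPA/RoMEO colour categories are simplified, and the open-access URL is assumed to have already been looked up via Unpaywall.

```python
def deposit_action(romeo_colour, oa_url):
    """Decide what to deposit in the IR for one faculty work, given a
    simplified SHERPA/RoMEO archiving colour and an open-access URL
    (or None) previously located via Unpaywall."""
    # Simplified reading of the RoMEO colours: green/blue/yellow permit
    # some form of self-archiving; white does not.
    allows_archiving = romeo_colour in {"green", "blue", "yellow"}
    if allows_archiving and oa_url:
        return ("deposit_full_text", oa_url)
    if allows_archiving:
        # Archiving is permitted but no OA copy was found online.
        return ("request_manuscript_from_author", None)
    return ("metadata_only_record", None)
```

A real workflow would also need to check which version (preprint, postprint, publisher PDF) each colour permits before depositing.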


Data Repository at University of Washington (DRUW): A Case Study in Open Source Software Implementation

Elizabeth Bedford

University of Washington Libraries, United States of America

The University of Washington Libraries will be launching a new data repository this spring: the Data Repository at the University of Washington (DRUW). This is not the Libraries' first digital materials platform; it joins ResearchWorks, our institutional repository built with DSpace, and our Digital Collections, which are hosted in CONTENTdm. But we had experienced enough problems with both of these strategies that we decided to work towards an instance of Samvera Hyrax. We will be launching DRUW in the spring, but the combination of an extremely small development team, the nature of open source software development, and the policy difficulties involved with sharing research data made this project substantially more challenging than initially anticipated. Aimed at repository librarians and administrators rather than direct IT implementers, this presentation gives a frank overview of the policy, administrative, and logistical challenges we faced. During our initial project planning, our estimates of the hours of effort, the skillsets, and the complexity of the policy questions involved were overly optimistic. We suspect that other institutions with small development teams contemplating open source software could benefit from a comprehensive breakdown of the types and amounts of resources required to get DRUW off the ground.


Sustaining Institutional Repositories: Breaking the Mold to Add Value

Karen Bjork1, Ryan Otto2, Rebel Cummings-Sauls2

1Portland State University, United States of America; 2Kansas State University Libraries, United States of America

Librarians at Kansas State University and Portland State University recognized a need to document and showcase a more complete view of the digital scholarship from their institutions' faculty, staff, and students, giving each library the ability to elevate the academic research and creative output being produced by its community. The proposed expansion of representation would be accomplished through the addition of metadata-only (non full-text) records in their institutional repositories (IRs), the inclusion of which may run counter to the archetype of the open access (OA) IR. The need to provide a more comprehensive view of scholarly activity has been recognized, but not yet addressed by many universities. When it has, various solutions have been used, including use of an existing IR infrastructure. This presentation details two pilot projects that evaluated how their repositories tracked faculty research output through the inclusion of metadata-only records in their IRs, and why redefining our IRs' scope adds value for our institutions while aiding platform, service, and open sustainability. The purpose of each pilot project was to determine the feasibility of including metadata records and to provide an assessment of the long-term impact on the repository's mission statement, staffing, and collection development policies.


Investing in open-source - community-level strategies for developing sustainable software

Kim Pham, Marcus Barnes, Nat Ledchumykanthan, Kirsta Stapelfeldt, Irfan Rahman

University of Toronto Scarborough, Canada

This proposal discusses approaches to maintaining and developing sustainable open-source software. Our team developed custom code to extend support for audio and video oral histories in an Islandora repository, while implementing a number of strategies to ensure the longevity and future sustainability of the software. These strategies will be discussed on three levels: the local technology department level, the institutional level, and finally the global level, applying to the broader open-source software community.


Moran CORE: Open Repository to Train Ophthalmologists Locally and Globally

Nancy T. Lombardo1, Christy Jarvis1, Bryan Hull1, Kathleen Digre2

1University of Utah, Eccles Health Sciences Library, United States of America; 2University of Utah, Moran Eye Center, United States of America

The Eccles Health Sciences Library (EHSL) was approached by the Moran Eye Center (MEC), both at the University of Utah, to collaborate in developing a comprehensive online tool for training clinical ophthalmologists. Valuable resources exist but are unavailable because they have not been collected, organized, or made accessible. An organizational structure was developed and refined in partnership with MEC faculty. Flexibility is built in to accommodate variations in local and global training programs. Content sources were identified and systematic collection procedures were established to capture them. The EHSL established procedures for, and manages, the process of submission, review, and inclusion as materials are produced. This process provides opportunities for students, residents, and fellows to build their publication portfolios by contributing to a peer-reviewed online educational repository. Usage and collection statistics reflect the growth, widespread use, and adoption of the resources collected. Libraries are uniquely positioned to collaborate with academic departments to collect, organize, and publish intellectual output as meaningful digital educational resources. These resources enhance the local teaching mission and have global impact as they support the institution's international outreach initiatives.


Research Data Management: diverse approaches and tools for different needs

Andrea Bollini, Claudio Cortese, Susanna Mornati

4Science, Italy

Today, across all disciplines, there is broad awareness that research data must be managed to guarantee its long-term use, sharing, and preservation.

However, different open source solutions can be used to achieve these objectives, depending on the characteristics of the data to be managed, the type of institution, and the needs of the research group. There are platforms built exclusively for data management, such as Dataverse and CKAN, which also provide users with analysis tools, and solutions such as DSpace-CRIS and DSpace-GLAM, focused on contextualizing datasets by linking them to other entities. This presentation describes the different contexts and case studies in which these tools can be used to best effect, demonstrating how each can respond to the needs of different situations and institutions, but also how, in some cases, it is necessary to integrate them with one another, and with dedicated preservation platforms such as DuraCloud, to implement a complete ecosystem for Research Data Management.


Qvain – a Generic Research Dataset Metadata Editor

Esa-Pekka Keskitalo, Wouter van Hemel

National Library of Finland, Finland

Qvain is a generic research dataset metadata editor. It takes a JSON data model and turns it into a form in which metadata can be added or edited. It provides tools for the administrator to manage the look and feel: the order and grouping of fields, help texts not included in the data model, and so on. It also makes it possible to add widgets that enhance the way data is entered.
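As a rough illustration of the idea (the field names and the configuration shape here are hypothetical, not Qvain's actual format), a data model can be turned into ordered form-field descriptors, with administrator overrides for field order and help text, along these lines:

```python
# Sketch: build form-field descriptors from a JSON data model, letting an
# administrator config control field order and supply help texts that are
# not part of the data model itself. All names here are illustrative.

def build_form(model_fields, admin_config):
    """model_fields: {name: {"type": ..., "title": ...}}
    admin_config: {"order": [names...], "help": {name: text}}"""
    order = admin_config.get("order") or list(model_fields)
    form = []
    for name in order:
        spec = model_fields[name]
        form.append({
            "name": name,
            "type": spec.get("type", "string"),
            "label": spec.get("title", name),
            # help text lives in the admin config, not in the data model
            "help": admin_config.get("help", {}).get(name, ""),
        })
    return form
```

The same model can thus be rendered differently per installation simply by swapping the administrator configuration.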

Qvain can support more than one data model, provided that the administrator makes the necessary adjustments to how the model is displayed.

Qvain works as a standalone tool. But, first and foremost, we are developing it as a component in a cluster of IT services for researchers. The metadata records will be transferred to a shared Metadata Repository, where other systems can access them.

Being generic always comes with drawbacks, but there is more need for describing datasets than there are discipline-specific tools. We hope to better integrate metadata creation into the daily work of research groups.


Libraries and Research Data - The Need for a New Approach

Eddie Neuwirth

Ex Libris, a ProQuest Company

Research is the lifeblood of academic institutions. Yet, many in academia recognize the need for an integrated approach for managing research data throughout the research cycle – a systematic data management approach that would eliminate duplication of effort, break down information silos, reduce the burden on individual stakeholders, and support the institutional goal of increasing the impact of research output.

Ex Libris has introduced Esploro, an open, collaborative, cloud-based Research Service Platform that aims to maximize the impact of academic research by increasing the visibility, efficiency, and compliance of research activities. The introduction of Esploro signifies a move beyond the traditional institutional repository, creating a unified system of record for all research objects and leveraging open APIs and out-of-the-box integration with multiple campus systems. Esploro is being developed in partnership with five leading universities in the United States and the United Kingdom.

In this session we will inspire a conversation around the potential of a new paradigm for managing research data, and the role libraries can play in driving this transition by leveraging their expertise in data curation, resource management, and content dissemination, along with the infrastructure needed to support these processes.


Using the IR as a Research Data Registry

Daryl Grenz, Eirini Mastoraki, Han Wang, Mohamed Ba-essa

University Library, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia

As data and software become increasingly common research outputs, universities have an opportunity to expand their existing efforts to record affiliated publications so that they also capture information about research data releases. At KAUST we have taken several steps to put our repository on a path towards becoming a reliable registry of information about the existence and location of research data released by affiliated researchers. These included developing a process to retrospectively retrieve and register information about datasets with machine-readable relationships to publications already in the repository, and updating our active publications tracking procedures so that data availability statements are retrieved at harvest time and checked for references to research data. The presentation will conclude by discussing how these efforts put the repository in a position to provide expanded services in support of improved research data management, including access to and preservation of research data not explicitly linked to a formal publication.
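One step in such a workflow, scanning a publication's data availability statement for references to research data, could be sketched as follows; the DOI pattern and the list of repository hints are illustrative assumptions, not KAUST's actual rules:

```python
import re

# Hypothetical sketch: detect dataset references in a data availability
# statement, by DOI pattern and by known data repository names.

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")
REPO_HINTS = ("zenodo", "dryad", "figshare", "doi.org")

def extract_data_references(statement):
    """Return DOIs found in the statement, with trailing punctuation trimmed."""
    return [d.rstrip(".,;") for d in DOI_PATTERN.findall(statement)]

def mentions_repository(statement):
    """Flag statements that name a known data repository."""
    text = statement.lower()
    return any(hint in text for hint in REPO_HINTS)
```

Records flagged this way could then be queued for manual review and registration alongside the related publication.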


RCAAP: An Integrated Repository Infrastructure

José Carvalho1, Paulo Lopes2, João Mendes Moreira2

1University of Minho, Portugal; 2FCT/FCCN

Established as a national network of scientific resources, the RCAAP network has, after 10 years of experience, reached a new milestone: it now focuses on advanced services for the repository community and for funders, but above all on researchers and the ways repositories can facilitate their work. In this presentation we cover the requirements for developing an integrated network, from the technical aspects and functionalities of repositories, to added-value services such as the monitoring of funding, to international alignment on the vision and technical guidelines that promote and facilitate interoperability between the different participants of the network.


Building Collaborations for Student Research Through Open Access

Dylan Burns, Becky Thoms

Utah State University, United States of America

Utah State University's decades-long tradition of undergraduate and graduate student research is a point of pride, a source of recognition of institutional and student success, and a recruitment tool. Yet, because of the ephemeral nature of student populations, it is difficult to build relationships that encourage participation. This presentation will explore collaborations between the library and the Research and Graduate Studies office at Utah State University as an opportunity to sustain the IR through the promotion and preservation of student research. Important here is the library's role in ongoing research events on campus, such as Research on Capitol Hill (at the Capitol in Salt Lake City) and the annual Student Research Symposium held during the spring research week. By presenting the library's digital services as a tool students can use to promote themselves for graduate school or future employment, the IR is being positioned as an integral part of the Research office's promotional and academic missions. Collaborations such as these also provide libraries with the opportunity to advance campus discussions about scholarly communication among faculty, staff, and students while highlighting their expertise in information and digital literacy.


Lessons Learned From 100 Releases

Carolyn Cole

Penn State University, United States of America

This past September I realized we had released ScholarSphere, Penn State's Digital Institutional Repository (IR), 100 times. This presentation looks back over those 100 releases, and the 85 releases of Sufia, to gather lessons learned from all that hard work. Looking back, I realized I had learned lessons in nearly every part of the development and deployment process, including code development, deployment, and community involvement.

Sufia, extracted from the original ScholarSphere source code, is an open source gem that has been heavily used in the Samvera (formerly Hydra) community for self-deposit IRs.


Set It and Forget It: A Fully Automated Deployment Pipeline

Alexander James Kessinger

bepress, United States of America

We recently built a TLS-enabled reverse proxy for images, sometimes referred to as a camo server. This service enabled us to seamlessly transition roughly 500 sites to SSL.

We chose to set up a fully automated pipeline for this service. The pipeline runs tests, builds an artifact, deploys to a staging server, runs end-to-end (e2e) tests, and finally deploys to production.
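A minimal sketch of those stages in order, with illustrative stage names and commands rather than bepress's actual configuration:

```python
# Hypothetical pipeline definition: stages run in order, and a failure at
# any stage halts the run so nothing past it is deployed.

PIPELINE = [
    ("unit-tests",     ["make", "test"]),
    ("build-artifact", ["make", "build"]),
    ("deploy-staging", ["make", "deploy-staging"]),
    ("e2e-tests",      ["make", "e2e"]),
    ("deploy-prod",    ["make", "deploy-production"]),
]

def run_pipeline(runner):
    """Run stages in order; stop at the first failure.

    `runner` executes a command and returns its exit code (0 = success),
    e.g. `lambda cmd: subprocess.run(cmd).returncode` in real use.
    """
    for name, cmd in PIPELINE:
        if runner(cmd) != 0:
            return ("failed", name)  # halt: never deploy past a failing stage
    return ("ok", None)
```

The key property is that a failing e2e run on staging blocks the production deploy without any human in the loop.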

We feel that this pipeline allows us to efficiently deliver value to our customers by reducing the barriers between developers and production.


Institutional Repositories (IRs) at Universities in Pakistan: Status, Issues, and Sustainability

Muhammad Rafiq

University of the Punjab Lahore Pakistan, University at Buffalo NY USA

The purpose of this study was to assess the current state of institutional repositories (IRs), identify issues affecting the development of IRs, and determine the sustainability factors related to IRs at universities in Pakistan. The study adopted a sequential mixed-methods research design. In the QUAN strand a questionnaire survey was conducted, followed in the QUAL strand by a focus group of purposively selected library practitioners. Focus group participants presented real-life examples from their experience, along with strategies that may help sustain repositories in the universities of Pakistan.


Make Data Count

Daniella Lowenberg

University of California Curation Center

Make Data Count (https://www.makedatacount.org) is a Sloan-funded project of DataCite, the California Digital Library, and DataONE that sets out to elevate research data as a first-class scholarly output. To do so, we have developed a standard and an approach for repositories to process and display comparable, standardized data-level metrics (DLMs). Our poster and participant engagement will center on how repositories can make their data count: repositories can display data usage metrics (views, downloads, volume) and citation metrics (from Crossref and DataCite Event Data). By processing and displaying these metrics, repositories can show researchers the value of their data as well as see the repository's own impact. Our intention is to drive adoption of data-level metrics and to equip those at Open Repositories to both adopt and promote DLMs.


Persistent Identifiers for Research and the Data Life Cycle

Gavin Kennedy1, Ian Duncan2, Andrew Janke3, Siobhann McCafferty4

1QCIF, Australia; 2Research Data Services, Brisbane, Australia; 3University of Queensland, Brisbane, Australia; 4Australian Access Federation, Brisbane, Australia

Persistent Identifiers (PID’s) are an essential tool for digital research processes, infrastructure and the evolving data management ecosystem. Standardised, low cost and platform agnostic PID services can streamline and connect doing, finding and reporting on research.

The Australian Data Life Cycle Framework (DLCF) Project has developed the RAiD, a Research Project Activity PID, as an enabling technology designed to join up existing eResearch infrastructure and project activities nationally and internationally. Where ORCID is a persistent ID for people and a DOI for digital objects, RAiD is a persistent ID for projects.

For institution and infrastructure providers, RAiD is a mechanism that helps draw a clear line of sight along data management processes and workflows. It also facilitates collection of precise data for measuring cooperation, impact, value and outputs.

For researchers, using PID-integrated services provides seamless, semi-automated data management processes and access to research infrastructure, storage, and discovery. RAiD is being integrated at a selection of institutions across Australia and New Zealand.

RAiDs are implemented in the open source ReDBox research data management platform and in the ReDBox-based Research Activity Portal (RAP).



 
Conference: OR2018