June 10-13, 2019 | Hamburg, Germany
Overview and details of the sessions of this conference.
P1C: 24x7s: Systems and Policy
Revising an institutional open access policy to reserve the right to apply a Creative Commons License to dissertations and author accepted manuscript versions of peer-reviewed articles.
Queensland University of Technology, Australia
In 2003, Queensland University of Technology (QUT) implemented the world’s first university-wide open access mandate. This policy, which required the deposit of higher degree research theses and author accepted manuscripts of peer-reviewed articles, has played a significant role in the success of QUT’s open access repository, QUT ePrints. In 2018, QUT revised its open access policy in what is perhaps another world first. The revised policy aims to increase the proportion of repository content made available under a Creative Commons license, facilitating greater use and impact of QUT’s research outputs. Previously, most author accepted manuscript files downloaded from QUT’s repository carried no license information. The revised policy asserts that author accepted manuscripts will be made available under a Creative Commons Attribution-NonCommercial (CC BY-NC) license, and higher degree by research theses (dissertations) under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license. This action is supported by the University’s Intellectual Property Policy, which reserves some rights in works created by staff in the course of their employment. QUT’s revised policy aligns with trends in funding bodies' open access requirements and represents a new approach to knowledge discovery and dissemination.
Make it easy - Integration of Data Description in the Research Process
Universität Stuttgart, Germany
DaRUS, the upcoming data repository of the University of Stuttgart based on Dataverse, is intended not only for publishing data but also for managing active (“hot”) research data. The idea is to integrate the repository into the daily life of researchers. The structured description of research data is made as easy as possible and is integrated early into researchers’ workflows through additional tools. The RePlay Client is a GUI-based tool that supports researchers in versioning and documenting their research data at their workplace during the research process. The metadata harvester is a command-line tool that automates parts of the documentation by parsing input and log files and by enabling convenient integration with metadata templates. Using Dataverse as a data management system rather than a publication repository also poses additional challenges, such as linking data from different research steps, moving datasets between private and public areas, and enabling alternative views and search options across the datasets of a dataverse. We plan to address these challenges with an RDM client interacting with the REST APIs of Dataverse to make research data easily searchable and shareable across all phases of the research process.
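The harvester idea described above can be sketched in a few lines: pull simple key/value pairs out of a run's log file and wrap them as Dataverse-style metadata fields. This is a minimal illustration only — the field names and the mapping are assumptions for the sketch, not the actual RePlay or harvester implementation.

```python
import json

def parse_log(text):
    """Extract simple 'key = value' pairs from a run log, skipping comments."""
    meta = {}
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            meta[key.strip()] = value.strip()
    return meta

def to_dataverse_fields(meta):
    """Wrap harvested pairs as Dataverse-like metadata fields (hypothetical mapping)."""
    return [
        {"typeName": key, "multiple": False, "typeClass": "primitive", "value": value}
        for key, value in sorted(meta.items())
    ]

log = """# solver run 2019-03-14
solver = OpenFOAM 6
mesh_cells = 1200000
timestep = 0.001
"""
fields = to_dataverse_fields(parse_log(log))
print(json.dumps(fields, indent=2))
```

In a real deployment these fields would be posted to the repository through Dataverse's native API rather than printed; the point here is only that structured description can be generated from files researchers already produce.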
Automating repository workflows with Orpheus, an Open Source database of journals and publishers
University of Cambridge, United Kingdom
Repository management relies on knowledge of numerous attributes of academic journals, such as revenue model (subscription, hybrid or fully Open Access), self-archiving policies, licences, contacts for queries and article processing charges (APCs). While datasets collating some of this information are helpful to repository administrators, most cover only one or a few of those attributes (e.g. APC price lists from publishers), do not provide APIs or machine-readable API responses (self-archiving policies from RoMEO), or are not updated very often (licences and APCs from DOAJ). As a result, most repositories still rely on administrative staff looking up and entering the required attributes manually. To solve this problem and increase automation of tasks performed by the Cambridge repository team, I developed Orpheus, a database of academic journals and publishers written in Django. Orpheus was recently integrated with our DSpace repository Apollo and auxiliary systems via its RESTful API, enabling embargo periods to be applied automatically to deposited articles and streamlining the process of advising researchers on payments, licences and compliance with funders' Open Access policies. Orpheus is Open Source (https://github.com/osc-cam/orpheus) and can easily be expanded or tailored to meet the particular needs of other repositories and Scholarly Communication services.
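The embargo automation described above amounts to a simple rule: look the journal up, read its green open access embargo, and compute when a deposited manuscript may be released. The sketch below illustrates that logic; the JSON field names ("oa_status", "green_embargo_months") are hypothetical stand-ins, not the actual Orpheus API schema — see the Orpheus repository for the real interface.

```python
from datetime import date, timedelta

def embargo_end(journal_record, deposit_date):
    """Return the date an accepted manuscript may be released.

    Fully Open Access journals need no embargo; otherwise add the journal's
    green-OA embargo period to the deposit date (a month is approximated as
    30 days in this sketch).
    """
    if journal_record.get("oa_status") == "fully_oa":
        return deposit_date
    months = journal_record.get("green_embargo_months", 0)
    return deposit_date + timedelta(days=30 * months)

# Example record in the hypothetical shape assumed above:
record = {"title": "Journal of Examples", "oa_status": "hybrid",
          "green_embargo_months": 12}
print(embargo_end(record, date(2019, 6, 10)))
```

In the integrated workflow, the record would come from a REST call to Orpheus and the computed date would be written into the DSpace item's access metadata instead of printed.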
Permissible Closed-Use General licences: filling the gap between Open and Restrictive Data Licences
Centre for Environmental Data Analysis, Science and Technology Facilities Council, United Kingdom
The virtues of Open Data are increasingly understood within the research domain, yet barriers remain for data producers wishing to release data under an Open Data licence. Restrictive licences exist, but they are unattractive to data providers who want to uphold most of the virtues of open data yet have concerns about onward sharing, seeking instead to establish a single, canonical data source. This desire stems from reasons including data usage reporting requirements, maintaining data veracity, and the fact that data may continue to develop (e.g. new additions or versions). These providers still wish to follow other Open Data principles, especially in permitting as broad a range of uses as possible.
To support such data providers’ wishes, the Centre for Environmental Data Analysis has drawn up two new generic ‘closed’ data licences based on the UK Open Government and UK Non-Commercial Government Licences. The Closed-Use General Licence (CUGL) and the Closed-Use Non-Commercial General Licence (CUNCGL) permit use as broadly as open data licences do, but address the onward-sharing concerns by signposting potential users to the canonical data source should they wish to exploit the data themselves. The licences can be found at: http://artefacts.ceda.ac.uk/licences/
Building node-iiif: A performant, standards-compliant IIIF service in < 500 lines of code
Northwestern University, United States of America
Northwestern has moved its repository systems into the AWS cloud. Our goals in doing so were to allow scaling, to use existing services where we could, and to minimise the management overhead of the services we consume. The learning curve has been in identifying which server-hosted systems could easily be replaced by services (existing or Lambda-based). This presentation is about a small part of that journey: the development of an IIIF-compliant Node.js application hosted as a “serverless” Lambda.
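At the core of any IIIF Image API service, node-iiif included, is parsing the spec's fixed request path, {identifier}/{region}/{size}/{rotation}/{quality}.{format}, before any pixels are touched. The sketch below illustrates that parsing step in Python against the Image API 2.1 URL syntax; it is a spec illustration, not Northwestern's implementation (which is Node.js).

```python
# Formats permitted by the IIIF Image API 2.1 specification.
IIIF_FORMATS = {"jpg", "tif", "png", "gif", "jp2", "pdf", "webp"}

def parse_iiif_path(path):
    """Split an Image API request path into its five spec-defined components."""
    parts = path.strip("/").split("/")
    if len(parts) != 5:
        raise ValueError("expected identifier/region/size/rotation/quality.format")
    identifier, region, size, rotation, quality_format = parts
    quality, _, fmt = quality_format.rpartition(".")
    if fmt not in IIIF_FORMATS:
        raise ValueError("unsupported format: " + fmt)
    return {"identifier": identifier, "region": region, "size": size,
            "rotation": rotation, "quality": quality, "format": fmt}

# e.g. a best-fit 200x200 thumbnail of the full image, unrotated:
print(parse_iiif_path("abc123/full/!200,200/0/default.jpg"))
```

A real service would go on to validate region/size/rotation values against the spec grammar and dispatch to an image-processing backend; in a serverless deployment each parsed request maps naturally onto a single Lambda invocation.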
ARCLib – development of open source solution for long-term preservation
Library of the Czech Academy of Sciences, Czech Republic
This short presentation introduces the Czech ARCLib project. One of the project's main goals is the development of an open-source solution for bit-level and logical preservation of digital documents, respecting national and international standards as well as the needs of all types of libraries in the Czech Republic. The mission of the ARCLib project lies, among others, in creating a solution that will allow institutions to implement all of the OAIS functional modules and entities while taking each institution's information model into account. The architecture is planned as open and modular, and the final product will be able to ingest, validate and store data from the majority of software products used for creating, disseminating and archiving libraries’ digital and digitized data in the Czech Republic. The solution connects to the Fedora Commons repository and Archivematica, in that it supports the creation of submission information packages using these tools.
Answering the call: researcher-driven training in data management
While CSIRO’s Research Data Service (RDS) has been promoting sound data management practices and the sharing of data, uptake has been inconsistent, falling largely to early adopters, those required to comply by publishers or funders, and those with other motivations.
Across CSIRO, data management awareness is limited to small pockets: large facilities have detailed requirements, but the approach is inconsistent. Meanwhile, the explosion in data science is driving a need for a holistic approach to upskilling researchers.
Researchers from the Agriculture and Food (A&F) research unit called for support in developing better data skills and approached their executive for assistance, and the Ag & Food Data School was born.
In consultation with the A&F Data School organisers, an activities-based RDM component using the Carpentries model was presented in April 2018. This material was collaboratively refined and presented in October 2018. Deep engagement has flowed from these sessions.
CSIRO’s Learning and Development unit is working with RDS team members to take this to an enterprise-level initiative.
Peer-to-peer learning has a great chance of success: partnering with researchers helps support teams connect concepts with researchers’ practice. Training should be co-designed with engaged researchers, as collaboration is the key to improving data culture and practice.
OA Theses: Demonstrating value and addressing concerns of students and their supervisors
University of Melbourne, Australia
In 2017, the University of Melbourne introduced a revised thesis access policy. Under this revised policy, open access became ‘opt out’ rather than ‘opt in’ for students, and the default embargo period was reduced from seven years to two. Whilst these changes were welcomed by many members of the university community, they also generated a degree of anxiety in many quarters: what was the value of these changes? Would the new policy facilitate plagiarism of student work? And (by far the most common concern raised) would it prevent students from publishing from their theses? This paper explores a number of initiatives undertaken at the university to address these concerns at a local level and to increase confidence amongst graduate research students, and their supervisors, in making their theses OA.
Discovery with Linked Open Data: Leveraging Wikidata for Context and Exploration
Michigan State University, United States of America
Wikidata has been touted for its potential to be an authority linking hub as it provides the mechanism to connect to a few thousand different types of external identifiers, i.e. concepts from different thesauri and controlled vocabularies (Neubert, 2017). Common vocabularies like VIAF, ISNI, LCSH, LCNAF, FAST, AAT, ULAN, GeoNames, etc. have been, at least, partially mapped to Wikidata. How can cultural heritage institutions leverage this foundation built by Wikidata to enhance the discovery experience of users? The subject knowledge cards available for all items in the Michigan State University Libraries digital repository utilize the URIs of subject terms in metadata records as an entry point to connect to rich linkages provided in Wikidata entries. The card captures contextual information of the subject in focus by pulling information from Wikidata and linked DBpedia. It also provides links to semantically related resources from the library catalog and selected scholarly databases by using Wikidata as an interchange to trace equivalent concepts used in different systems. With linkages to external resources, users are no longer confined to a single silo but able to navigate an integrated network of resources beyond the immediate system.
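The "interchange" role described above can be made concrete with a small query builder: given a subject term's Library of Congress authority identifier, ask Wikidata for the matching item and its equivalents in other vocabularies. The Wikidata properties used are real (P244 = Library of Congress authority ID, P214 = VIAF ID), but the surrounding plumbing is an illustrative assumption; this sketch only builds the SPARQL text and does not contact the query endpoint.

```python
def equivalence_query(lc_authority_id):
    """Build a SPARQL query mapping an LC authority ID to its Wikidata item
    and, where mapped, its VIAF equivalent."""
    return f"""
SELECT ?item ?itemLabel ?viaf WHERE {{
  ?item wdt:P244 "{lc_authority_id}" .      # Library of Congress authority ID
  OPTIONAL {{ ?item wdt:P214 ?viaf . }}     # VIAF ID, if mapped
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

# Example with an LCSH-style identifier (illustrative):
print(equivalence_query("sh85101653"))
```

A knowledge card would send such a query to the Wikidata SPARQL endpoint, then use the returned identifiers to link out to the library catalog and subscribed databases that index the same concept.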
Conference: OR2019