Conference Agenda

General track 18: Research data lifecycles
Friday, 30/Jun/2017:
9:00am - 10:30am

Session Chair: Louise Mary Howard, Griffith University
Location: Ballroom A & B
Hilton Brisbane

9:00am - 9:30am

OSF and Fedora: Removing the Barriers between Preservation and Active Research

Rick Johnson1, David Wilcox2, Sayeed Choudhury3, Jeff Spies4

1University of Notre Dame, United States of America; 2Duraspace, United States of America; 3Johns Hopkins University, United States of America; 4The Center for Open Science, United States of America

The Center for Open Science, DuraSpace, Johns Hopkins University (through the Data Conservancy), and the University of Notre Dame are partnering to address several of the data management, workflow, and external-system integration interests expressed for Open Repositories 2017. Specifically, Fedora Repository and the Open Science Framework are being integrated to connect two traditionally disjointed activities: preservation and active research. By removing the gap between the two, archiving and preservation can move from being distinct activities that follow the active research phase to activities that take place continuously as part of researchers’ existing workflows throughout the research lifecycle. The benefits of these services can then be realized continuously, providing an ongoing opportunity to communicate value to the researcher. This work will also serve the true mission of preservation by facilitating the retrieval and reuse of archived data and materials in subsequent research projects.

9:30am - 10:00am

Scholix Framework: Building a Bridge Between Research Data and Publications

Amir Aryani1, Adrian Burton1, Paolo Manghi2, Sandro La Bruzzo2, Markus Stocker3, Uwe Schindler3, Michael Diepenbroek3, Martin Fenner4, Hylke Koers5

1Australian National Data Service, Melbourne, Australia; 2Institute of Information Science and Technology - CNR, Pisa, Italy; 3PANGAEA, Bremen, Germany; 4DataCite, Hannover, Germany; 5Elsevier, Amsterdam, The Netherlands

Identifying the connections between datasets and publications has been a long-standing challenge for scholarly communications and research repositories. In the last three years, there have been significant developments in promoting these connections among data centres and publishers. The major force behind identifying data–literature connections is the emergence of new funder policies that encourage (and in some cases enforce) reproducible science. In this talk, we will present the Scholix (Scholarly Link Exchange) framework as a high-level interoperability approach to exchanging information about the links between scholarly literature and data. This work was initiated by the Research Data Alliance Working Group on Publishing Data Services.

Over the past decade, publishers and data centres have agreed on and implemented numerous bilateral agreements to establish bidirectional links between research data and the scholarly literature. However, because of the considerable differences among these many agreements, there is very limited interoperability between the various solutions. This talk will present the vision of a universal interlinking service and propose the technical guidelines of a multi-hub interoperability framework.
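The link information exchanged under such a framework can be illustrated with a minimal sketch. The field names below follow the general shape of the Scholix metadata schema (source, target, relationship, link provider), but the values are invented and this is an illustrative example, not the framework's normative serialization:

```python
# Illustrative sketch of a Scholix-style link information package.
# Field names follow the general Scholix schema shape; all values are
# invented examples, not real identifiers.

scholix_link = {
    "LinkPublicationDate": "2017-06-30",
    "LinkProvider": [{"Name": "ExampleHub"}],          # hypothetical hub name
    "RelationshipType": {"Name": "References"},
    "Source": {
        "Identifier": {"ID": "10.1000/example.article", "IDScheme": "DOI"},
        "Type": {"Name": "literature"},
    },
    "Target": {
        "Identifier": {"ID": "10.5000/example.dataset", "IDScheme": "DOI"},
        "Type": {"Name": "dataset"},
    },
}

def is_data_literature_link(link):
    """Check that a link connects a literature object and a dataset,
    in either direction."""
    types = {link["Source"]["Type"]["Name"], link["Target"]["Type"]["Name"]}
    return types == {"literature", "dataset"}

print(is_data_literature_link(scholix_link))  # → True
```

Because every hub exchanges the same package shape, a consumer can aggregate data–literature links from many providers without per-provider parsing logic.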

10:00am - 10:30am

Moving data around: Integrating repositories with research workflows for curating and publishing data

Maude Frances1, Daniel Bangert1, Luc Betbeder-Matibet1, Carolien van Ham1, Steven McEachern2, Janet McDougall2

1UNSW Australia, Australia; 2ANU, Australia

The presentation draws on a use case from political science to demonstrate integrated scholarly processes for curating and publishing data. Curation activities are distributed across institutional and national websites, repositories, archives and registries. In workflows which prioritise existing research practice and disciplinary standards, the primary role of the repository is to move the data around – to apply standards and protocols that enable the data to be widely and openly accessible. Researchers provide structured metadata using a template based on the Data Documentation Initiative (DDI), an international standard for describing data that result from observational methods in the social, behavioural, economic and health sciences. DDI metadata are mapped to other standards (RDF, RIF-CS) which enable the institutional repository to disseminate and publish the datasets.
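A metadata crosswalk of this kind can be sketched as a simple field-to-predicate mapping. The DDI element paths, the Dublin Core predicates chosen, and the example record below are hypothetical simplifications for illustration, not the project's actual crosswalk:

```python
# Hedged sketch of a DDI-to-RDF crosswalk: a lookup table from DDI-style
# element paths to Dublin Core predicates. The paths, predicates, and the
# example record are illustrative, not the actual UNSW mapping.

DDI_TO_DC = {
    "stdyDscr.citation.titlStmt.titl": "dc:title",
    "stdyDscr.citation.rspStmt.AuthEnty": "dc:creator",
    "stdyDscr.stdyInfo.abstract": "dc:description",
}

def ddi_to_triples(subject_uri, ddi_record):
    """Emit (subject, predicate, object) triples for each DDI field
    that has a known Dublin Core equivalent; skip unmapped fields."""
    triples = []
    for field, value in ddi_record.items():
        predicate = DDI_TO_DC.get(field)
        if predicate is not None:
            triples.append((subject_uri, predicate, value))
    return triples

record = {
    "stdyDscr.citation.titlStmt.titl": "Example Survey Dataset",
    "stdyDscr.citation.rspStmt.AuthEnty": "Example, A.",
    "stdyDscr.stdyInfo.nation": "Australia",  # unmapped: dropped from output
}
for triple in ddi_to_triples("https://example.org/dataset/1", record):
    print(triple)
```

Keeping the mapping in a declarative table means the same record can be re-targeted at another output standard (such as RIF-CS) by swapping in a different lookup, without touching the transformation logic.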

The implementation supports researchers in comprehensively describing their research methods and data according to a widely adopted disciplinary standard, and in depositing the data in a trusted national archive. The integrity of the institutional data repository is increased by its direct integration with the rich descriptions of data in the disciplinary archive. Added value for the institution is derived from the reporting capabilities of the repository, which links to enterprise systems to generate statistics about the University’s research assets.