Conference Agenda


Session Overview
4.2: Oral Presentations - Information Retrieval (2)
Thursday, 02/June/2022:
1:45pm - 3:30pm

Session Chair: Louise Farragher
Location: Willem Burgerzaal

1:45pm - 2:00pm
ID: 136 / 4.2: 1
Oral Presentation
Topics: Information retrieval and evidence syntheses

Evaluating search strategies used to identify systematic reviews and RCTs for an evidence gap map

Naomi Shaw, Alison Bethel

University of Exeter, United Kingdom


Search methods for systematic reviews and other evidence syntheses should be transparent, reproducible and comprehensive. The PRISMA-S checklist requires that full search strategies be reported for each bibliographic database; however, reported strategies alone give no indication of the efficiency of the search strategy, or of the usefulness of individual search lines for identifying studies for inclusion.

It is becoming increasingly important to evaluate search strategies for evidence syntheses, particularly for those that require regular updates or for ‘living’ reviews, to ensure search strategies are effective and efficient, and to minimise future screening load.

To identify a simple method for search strategy evaluation and consider how Information Specialists (ISs) can report and share search strategy evaluations.


Searches were conducted to identify systematic reviews (SRs), randomised controlled trials (RCTs) and economic evaluations for an evidence gap map on peer support interventions. Search strategies included a combination of free-text and controlled vocabulary terms.

A search summary table was created to highlight where included studies were found. This indicated that 27 of the 32 included SRs and 50 of the 61 included RCTs were retrieved by the original Ovid MEDLINE searches.

Test sets were created for included references using PubMed identifiers in Ovid MEDLINE. These were used to evaluate each line of the SR and RCT topic search strategies, in order to identify the lines of the search strategy that retrieved included or unique references, and the simplest combination of search terms that would retrieve all included references.
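
The line-by-line evaluation described above can be sketched in code. This is an illustrative outline only, not the authors' actual procedure: given the set of PubMed identifiers retrieved by each search line and the set of included references, it reports which lines retrieve included records and greedily approximates the simplest combination of lines covering them all (all line labels and identifiers below are invented).

```python
def evaluate_search_lines(line_results, included):
    """line_results: {line label: set of PMIDs retrieved}; included: set of included PMIDs."""
    # Lines that retrieve at least one included reference
    useful = {label: hits & included
              for label, hits in line_results.items() if hits & included}
    # Greedy set cover: repeatedly pick the line adding the most uncovered references
    uncovered, chosen = set(included), []
    while uncovered:
        label, hits = max(useful.items(), key=lambda kv: len(kv[1] & uncovered))
        if not hits & uncovered:
            break  # remaining references are not retrieved by any line
        chosen.append(label)
        uncovered -= hits
    return useful, chosen

# Invented example data
line_results = {
    "1 peer support": {"101", "102", "103"},
    "2 mentor*":      {"103", "104"},
    "3 befriend*":    {"105"},
    "4 placebo":      {"999"},
}
included = {"101", "102", "104", "105"}
useful, chosen = evaluate_search_lines(line_results, included)
```

Greedy set cover is not guaranteed optimal, but for strategies of this size it gives a reasonable approximation of the "simplest combination of search terms".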

Initial findings indicate that a simple strategy (using only two search lines) would identify all included SRs, whereas a broader range of terminology (12 search lines) is needed to capture all included RCTs. Eighteen search lines in the peer support strategy for SRs retrieved at least one included SR, and 26 search lines in the RCT search strategy retrieved at least one included RCT.

We will present further findings from our evaluation of search strategies conducted for an evidence map.

The conduct and reporting of a search strategy evaluation, in addition to a search summary table, may improve search efficiency and minimise screening load for reviews that require frequent updates. These can be time-consuming tasks; however, search strategy evaluation provides opportunities for ISs to reflect on current practice and gather evidence about the value of different search approaches. Reporting details of search strategy evaluation ensures transparency and reproducibility of search methods, and may also guide ISs working on similar topics to make informed decisions about the selection of search terms. The IS community could work together to develop simple and effective methods to evaluate search strategies and consider how this knowledge can best be shared.

The authors intend to conduct further research comparing the performance of ‘evaluated’ search strategies with our original search strategies. We will assess the efficiency and number needed to screen for both strategies to inform updates of the living evidence map of peer support interventions.

2:00pm - 2:15pm
ID: 120 / 4.2: 2
Oral Presentation
Topics: Information retrieval and evidence syntheses

Reducing systematic review burden using Deduklick: a novel, automated, reliable, and explainable deduplication algorithm

Nikolay Borissov1,2, Quentin Haas1,2, Beatrice Minder3, Doris Kopp-Heim3, Marc von Gernler4, Heidrun Janka4, Douglas Teodoro5,6, Poorya Amini1,2

1Risklick AG, Spin-off University of Bern, Bern, Switzerland; 2CTU Bern, University of Bern, Bern, Switzerland; 3Public Health & Primary Care Library, University Library of Bern, University of Bern, Switzerland; 4Medical Library, University Library of Bern, University of Bern, Bern, Switzerland; 5University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland; 6Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland



Identifying and removing reference duplicates when conducting systematic reviews (SRs) remains a major, time-consuming issue for authors who manually check for duplicates using built-in features in citation managers. To address issues related to manual deduplication, we developed an automated, efficient, and rapid artificial intelligence (AI)-based algorithm named Deduklick. Deduklick combines natural language processing (NLP) algorithms with a set of rules created by expert information specialists.


Deduklick’s deduplication uses a multistep algorithm that normalises the data, calculates a similarity score, and identifies unique and duplicate references based on metadata fields such as title, authors, journal, DOI, year, issue, volume, and pages. We measured and compared Deduklick’s capacity to accurately detect duplicates with the information specialists’ standard, manual duplicate removal process using EndNote on eight heterogeneous datasets. Using a sensitivity analysis, the efficiency and noise of both methods were manually cross-compared.
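
As a rough illustration of the general idea only (not the actual Deduklick implementation, whose NLP components and expert rules are not reproduced here), metadata-based deduplication can be sketched as: normalise fields, treat an exact DOI match as decisive, and otherwise flag pairs whose normalised titles are sufficiently similar within the same publication year. The field names and the 0.9 threshold are assumptions.

```python
import re
from difflib import SequenceMatcher

def normalise(text):
    """Lowercase and strip punctuation so trivial variants compare equal."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def is_duplicate(ref_a, ref_b, threshold=0.9):
    # An exact DOI match is decisive when both records carry a DOI
    if ref_a.get("doi") and ref_a.get("doi") == ref_b.get("doi"):
        return True
    title_sim = SequenceMatcher(None, normalise(ref_a["title"]),
                                normalise(ref_b["title"])).ratio()
    return title_sim >= threshold and ref_a.get("year") == ref_b.get("year")

# Invented example records
a = {"title": "Deduplication of Bibliographic Records.", "year": 2022, "doi": ""}
b = {"title": "Deduplication of bibliographic records", "year": 2022, "doi": ""}
c = {"title": "A completely different study", "year": 2021, "doi": ""}
```

A production algorithm would compare more fields (journal, volume, pages) and weight them, as the abstract describes; this sketch shows only the normalise-then-score pattern.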


Following deduplication and comparison of performance measurements, Deduklick achieved an average recall of 99.51%, an average precision of 100.00%, and an average F1 score of 99.75%. In contrast, the manual deduplication process achieved an average recall of 88.65%, an average precision of 99.95%, and an average F1 score of 91.98%. Deduklick thus achieved performance equal to or higher than expert level on duplicate removal. It also preserved high metadata quality and drastically reduced the time spent on analysis.


Deduklick represents an efficient, transparent, ergonomic, and time-saving solution for finding and removing duplicates in SRs. Deduklick could therefore simplify SR production and offer important advantages for scientists, including saving time, increasing accuracy, reducing costs, and contributing to high-quality SRs.

Human Touch (Recommended)

Automated, reliable and explainable deduplication of trial and publication metadata, as part of the systematic review process.

2:15pm - 2:30pm
ID: 133 / 4.2: 3
Oral Presentation
Topics: Information retrieval and evidence syntheses

Using Quality Improvement methodology to investigate the impact of reference management software and search interfaces on literature searches

Lindsay Snell, Lisa Lawrence, Suzanne Toft

University Hospitals of Derby and Burton NHS Foundation Trust, United Kingdom


Literature searches to answer clinical and management questions can be time consuming. Our Library and Knowledge Service (LKS) provides 300-400 searches annually: a substantial time investment. Quality Improvement (QI) methodology is a recognised way to address inefficiencies and improve processes. Our organisation recently introduced revised QI practices. The LKS is one team chosen to pilot the new approach. The LKS literature searchers decided to use QI methodology to investigate improvements to the search process.


To investigate whether the time taken to provide a literature search could be reduced without any impact on quality.


At baseline, we timed the different stages of searches (e.g. correspondence with colleagues/users; searching; peer review; sifting; summarising). As recommended by a QI specialist, we aimed to record data for about 20 searches per searcher per Plan/Do/Study/Act cycle: a large amount of data for a QI project. However, we were advised this was needed given the large variation in the complexity of search questions and the time taken to complete them. We have completed three cycles: (1) introducing reference management software (RMS) and increasing the number of interfaces searched; (2) returning to a single interface while using RMS; (3) sharing experiences of RMS to streamline processes (single interface). All other activities which contribute to search quality (e.g. number of databases searched) remained unchanged.


Cycle 1 (presented elsewhere) demonstrated that using RMS offset the increased time taken to search multiple interfaces. Time spent searching databases increased, whilst time was saved on formatting and sifting. We anticipated time savings in Cycle 2 (return to single interface; continued use of RMS). We were surprised this was not the case. In Cycle 2, across all searchers, there was an average time saving of only 2 minutes per search compared to baseline. On investigating, we found considerably differing results for each searcher. This varied from an average increase per search of 139 minutes to an average time saving per search of 62 minutes. Reasons for variation, including human factors, search complexity, and the way individuals’ search processes changed through the cycles, will be discussed. Note: Data collection for cycle 3 will be completed in March 2022, and presented at EAHIL.


This QI project enabled us to demonstrate and understand time savings created by changes to our workflows. It gave us a deeper understanding of search processes and personal variations in technique. Recommendations: (1) Use of QI methodology to evaluate and improve search processes, and understand the impact of changes between database platforms; (2) Use of RMS to streamline literature search delivery.

Human Touch

Our results were unexpected, although we are experienced searchers and had anticipated certain outcomes. This highlights the benefit of QI methodology for improving processes, and the need to question assumptions, even about familiar practices. We would recommend this questioning approach to other Library services to re-evaluate established procedures. It has been professionally rewarding to compare and evaluate our approaches to literature searching. This comparison has allowed us to improve our own practice.

Biography and Bibliography
Cycle 1 of this project was presented as: Snell, L, Lawrence, L, and Toft, S. Providing literature searches more efficiently: using quality improvement methods to save time without losing quality. Paper presented at: International Clinical Librarian Conference ICLCLite; 2021 Nov 11.

Lindsay Snell, Lisa Lawrence, and Suzanne Toft are Clinical Librarians with substantial experience in embedded roles in the English NHS. They have presented previously at the International Clinical Librarian Conference and the UK Health Libraries Group Conference. Lindsay Snell and Lisa Lawrence have presented jointly on creating and developing clinical librarian services, and on the impact of these services. Lindsay has presented on clinical librarianship to support Quality Improvement and integrated care. Lisa has presented on clinical librarianship in Dermatology and Tuberculosis services. Suzanne Toft has presented on providing critical appraisal training to patients.

2:30pm - 2:45pm
ID: 184 / 4.2: 4
Oral Presentation
Topics: Information retrieval and evidence syntheses

Users' expectations and preferences regarding machine learning tools for title and abstract selections in practice

Miriam Maria van der Maten1,2

1Knowledge Institute of the Dutch Association of Medical Specialists, The Netherlands; 2Cochrane Netherlands, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, The Netherlands

Title and abstract selection forms the basis of systematic evidence syntheses, which are vital for evidence-based medicine. It is time consuming to perform, and the ever-growing body of scientific literature, together with the demand for faster, up-to-date information, makes the current selection process unsustainable. Machine learning-based screening tools have been suggested as a way to accelerate the process, but large-scale application in practice has yet to materialise. A gap in knowledge about users' expectations and preferences may partially explain this low uptake.
To gain insight into users' expectations and preferences regarding machine learning tools for title and abstract selection, in order to better align tool development with usage in practice.

Method/ Program Description
We use a clinical guideline development setting to derive the expectations and preferences of potential title and abstract tool users. First insights were collected through a survey among guideline developers and clinical specialists. More in-depth expectations were collected through focus groups with guideline developers and clinical specialists.
Results/ Evaluation
The survey was distributed among 335 guideline developers and clinical specialists; 88 responses were obtained, of which 79 could be used for the analysis. Seven guideline developers and five clinical specialists were recruited for two separate focus groups. The data are currently being analysed and results will be available at the conference.
As a result of this study, we will provide an overview of users' expectations and preferences regarding machine learning tools for title and abstract selections.

2:45pm - 3:00pm
ID: 175 / 4.2: 5
Oral Presentation
Topics: Information retrieval and evidence syntheses

Development and validation of a database filter for study size

Sabrina Gunput, Wichor Bramer

Erasmus MC, The Netherlands

Researchers performing systematic reviews often express the desire to limit the search results to a certain study size: "I want to include only studies of more than 100 patients". While the validity of such a request can of course be debated, limiting the search results to match the inclusion criteria can reduce the burden of screening for reviewers.

The aim of our study was to develop a filter in and Medline Ovid to retrieve references above a certain threshold of sample size. We compared the effectiveness of the filter under development using existing systematic reviews that report using sample size as an inclusion criterion.

Method/ Program Description
Together with researchers who expressed the desire to limit search results to a certain number of patients, we constructed preliminary filters, which were tested on the spot by evaluating the patient numbers of relevant references that had not been retrieved. If those patient numbers matched the inclusion criteria, the filter was adapted to retrieve the missed articles. After several rounds of improvement, the filter was tested against existing systematic reviews that used sample size as an inclusion criterion but did not limit their searches by sample size.

Results/ Evaluation

The filter that was developed consists mainly of truncated numbers in proximity to words such as patients, cases, adults and females, and phrases like "n=". The filter can, and should, be adapted to the research topic by combining these truncated numbers with specific terms for the diseases, interventions or body parts of interest, such as melanomas, surgeries, eyes or knees. The sensitivity of the filter, as evaluated on existing systematic reviews, was at least 94%. The references that were not retrieved were older articles that did not report the study size in their abstract.
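
The logic of the filter can be illustrated outside database syntax. The sketch below is a loose Python analogue, not the published Ovid filter: it flags a record when a number of three or more digits appears within a few words of a population term such as "patients" or "cases", or after "n="; the exact term list and proximity window are assumptions.

```python
import re

# A number of three or more digits within a few words of a population term,
# or introduced by "n="; term list and three-word window are assumptions
PATTERN = re.compile(
    r"\b(\d{3,})\s+(?:\w+\s+){0,3}(?:patients|cases|adults|females|participants)\b"
    r"|\bn\s*=\s*(\d{3,})",
    re.IGNORECASE,
)

def meets_size_threshold(abstract, minimum=100):
    """True when the abstract mentions a sample size at or above the threshold."""
    for match in PATTERN.finditer(abstract):
        if int(match.group(1) or match.group(2)) >= minimum:
            return True
    return False
```

As the abstract notes, such a filter is inherently imperfect: records that never state a sample size in the title or abstract cannot be caught by any textual pattern.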


The study size filter is a good way to limit search results to a certain number of patients. It is not 100% sensitive, but few filters are. Current guidelines for abstract formats advise authors to report the number of patients in their abstract; we therefore expect the sensitivity of the filter only to improve for newer studies. A limitation is that the filters are only available in the interfaces of and Ovid and cannot be translated into PubMed, as the filter uses proximity operators, which are not available in PubMed.

Human Touch (Recommended)

With the study size filter the burden of screening for systematic reviews can be greatly reduced.

3:00pm - 3:15pm
ID: 131 / 4.2: 6
Oral Presentation
Topics: Information retrieval and evidence syntheses

Going beyond the traditional roles: Importance of partnership working

Mala Mann1, Rhiannon Cordiner1, Annmarie Nelson2, Anthony Byrne2

1Specialist Unit for Review Evidence, Cardiff University, Wales; 2Marie Curie Palliative Care Research Centre (MCPCRC), School of Medicine, Cardiff University, Wales


Evidence-Based Medicine (EBM) has expanded the role of the librarian beyond identification of the literature to involvement in other stages of the evidence review process.

The aim of this paper is to demonstrate the role of the librarian in conducting evidence synthesis in partnership with clinicians, health care workers, researchers, and policy makers. We will examine a series of rapid reviews conducted during the last five years to support professionals and other decision-makers working in palliative care.

The literature searches were conducted across a range of databases and supplementary sources. In addition to designing and running the literature searches, other tasks included carrying out screening and study selection, developing data extraction forms and carrying out quality assessment of the eligible studies. Final tasks included synthesising evidence and writing the review using reporting templates in collaboration with researchers.

To date, twelve reviews have been conducted using a methodology developed in partnership with the research team. Findings will be presented from the start of the process, at the point of partnership working, through development of the review, to subsequent follow-up demonstrating impact. The evidence from these reviews impacts directly on palliative care clinicians and other decision makers, and indirectly on patients/carers in receipt of palliative care.


Broadening horizons provides opportunities for information professionals in health care to play an invaluable role. Librarians can be effective partners in supporting researchers to practice evidence-based medicine.

Human Touch (Recommended)

Being integrated into a research team is an invaluable experience and contributing to other aspects of the review process can be rewarding. It provides opportunities to develop our expertise and remain relevant in an ever-changing world.

Biography and Bibliography
I am an Information Specialist/Systematic Reviewer based at Cardiff University's Specialist Unit for Review Evidence (SURE), with over 20 years' expertise in systematic reviewing. My particular expertise is in advanced literature searching and the development of systematic review methodologies. I have worked on projects for a range of organisations including the National Institute for Health & Care Excellence (NICE), the National Society for the Prevention of Cruelty to Children (NSPCC), the Welsh Government and Public Health Wales. I have co-authored over 100 publications including Cochrane reviews and methodology papers. Current projects include conducting reviews for the Cardiff University Marie Curie Palliative Care Research Centre, the Wales COVID-19 Evidence Centre and the Centre for Homelessness Impact.
In addition, I teach evidence-based methodologies on several internal and external programmes including Cardiff University Doctoral Academy and lead the Cardiff Systematic Review Course. I have jointly supervised intercalated degree and postgraduate students who are involved in carrying out a systematic review as a component within their degree programme.


•Edwards, Deborah, Anstey, Sally, Coffey, Michael, Gill, Paul, Mann, Mala, Meudell, Alan and Hannigan, Ben 2021. End of life care for people with severe mental illness: Mixed methods systematic review and thematic synthesis (the MENLOC study). Palliative Medicine 35(10), pp. 1747-1760. (10.1177/02692163211037480)
•Harrop, Emily, Mann, Mala, Semedo, Lenira, Chao, Davina, Selman, Lucy E. and Byrne, Anthony. 2020. What elements of a systems approach to bereavement are most effective in times of mass bereavement? A narrative systematic review with lessons for COVID-19. Palliative Medicine 34(9), pp. 1165-1181. (10.1177/0269216320946273)
•Oakley, Natalie Jayne, Kneale, Dylan, Mann, Mala, Hilliar, Mariann, Dayan, Colin, Gregory, John W and French, Robert 2020. Type 1 diabetes mellitus and educational attainment in childhood: a systematic review. BMJ Open 10(1), article number: e033215. (10.1136/bmjopen-2019-033215)
•Nurmatov, Ulugbek, Foster, Catherine, Bezeczky, Zoe, Owen, Jennifer, El-Banna, Asmaa, Mann, Mala, Petrou, Stavros, Kemp, Alison, Scourfield, Jonathan, Forrester, Donald and Turley, Ruth 2020. Impact of shared decision-making family meetings on children's out-of-home care, family empowerment and satisfaction: a systematic review. Project Report. [Online]. London: What Works Centre for Children's Social Care. Available at:
•Mann, Mala, Woodward, Amanda, Nelson, Annmarie and Byrne, Anthony 2019. Palliative Care Evidence Review Service (PaCERS): a knowledge transfer partnership. Health Research Policy and Systems 17(1), article number: 100. (10.1186/s12961-019-0504-4)

3:15pm - 3:30pm
ID: 187 / 4.2: 7
Oral Presentation
Topics: Information retrieval and evidence syntheses

An emerging concern in systematic reviews process: identifying articles published in predatory journals

Cécile Jaques, Jérôme Zbinden, Jolanda Elmers, Alexia Trombert

Medical Library, Lausanne University Hospital and University of Lausanne, Switzerland

Introduction: The purpose of systematic reviews is to evaluate and synthesise the best available evidence on a specific question, using rigorous and transparent methods. Generally, most of the included studies are scientific articles published in academic journals. Recently, the number of predatory journals has increased considerably. Predatory journals are often published under the gold open access model; they do not follow good editorial, publishing and transparency practices, and they use aggressive solicitation methods. As systematic reviews call for exhaustive searches in databases and complementary information sources, articles published in predatory journals might be retrieved. Because of the potential for poor quality, fraud, or erroneous and misleading data in studies published in these journals, their inclusion in a systematic review may affect its results and conclusions.

Aim: To create an automated instrument to help researchers identify articles from predatory journals among the articles retained for inclusion after the selection stage and before the quality appraisal and data extraction steps. The tool analyses a collection of articles and highlights suspicious titles.

Method: The criteria for identifying whether an article is published in a predatory journal should be searchable in an automated way for a set of articles.

Examples of the criteria considered in the initial design:

- Is the publisher a member of COPE (Committee on Publication Ethics)?

- Is the journal listed in DOAJ, or does it wrongly claim to be?

- Is it indexed in Medline or in the Web of Science Core Collection?

- Is the ISSN listed on the ISSN International Centre Portal, and does its name match?

- Is it part of the updated "Beall's List of Predatory Journals"?

We are still considering further criteria to add to the instrument. In practical terms, the instrument takes a list of article references exported from EndNote in XML format. A weight is assigned to each criterion in order to calculate a score for each reference. The higher the score, the greater the chance that an article has been published in a predatory journal.
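
A minimal sketch of this weighted-score idea, with invented criterion names and weights (the authors' actual criteria, weights and XML parsing are not reproduced here):

```python
# Invented weights: each criterion that raises suspicion adds its weight
WEIGHTS = {
    "publisher_not_cope_member": 2,
    "not_in_doaj": 1,
    "not_indexed_medline_or_wos": 2,
    "issn_not_registered": 3,
    "on_bealls_list": 5,
}

def suspicion_score(checks):
    """checks: {criterion name: True when the criterion is met}."""
    return sum(WEIGHTS[name] for name, flagged in checks.items() if flagged)

def rank_references(references):
    # Highest score first: the most suspicious titles surface at the top
    return sorted(references,
                  key=lambda ref: suspicion_score(ref["checks"]),
                  reverse=True)

# Invented example references
refs = [
    {"id": "A", "checks": {"not_in_doaj": False, "on_bealls_list": False}},
    {"id": "B", "checks": {"not_in_doaj": True, "on_bealls_list": True}},
]
ranked = rank_references(refs)
```

Separating the weights from the scoring function makes it straightforward to recalibrate the instrument as further criteria are added.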

Results: We are currently analysing different sets of articles to improve the reporting of the results. This report is intended to inform researchers of the outcome of the analysis. Researchers will then have to manually check suspicious references using complementary criteria, according to a list that we will provide. Finally, they will have to decide how to handle the identified articles; their strategy, as well as the method used for identification, should be described in the protocol. No guidance exists yet, but some recommendations have been published.

Conclusion: As a systematic review support service, it is important for us to provide researchers with a tool that facilitates the identification of articles from predatory journals, a task that can be time consuming. With this instrument, information specialists are now able to help authors identify this threat to the quality and validity of their systematic review.

Human Touch: This project emerged from discussions and interrogations within the review teams in which we are involved.

Conference: EAHIL 2022