Large expert-curated database for benchmarking document similarity detection in biomedical literature search

dc.contributor.author: Brown, Peter
dc.contributor.author: Larrosa Pérez, Mar
dc.contributor.author: RELISH Consortium
dc.contributor.author: Zhou, Yaoqi
dc.date.accessioned: 2022-02-15T13:38:28Z
dc.date.available: 2022-02-15T13:38:28Z
dc.date.issued: 2019
dc.description.abstract: Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading annotation data and blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
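For context, the three baselines named in the abstract are classical lexical rankers. The following is a minimal, illustrative Python sketch of Okapi BM25 scoring over tokenised titles/abstracts; it is not the consortium's implementation, and the tokenisation, the toy corpus and the parameter defaults (k1 = 1.5, b = 0.75) are assumptions.

    import math
    from collections import Counter

    def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
        """Rank tokenised candidate documents against a tokenised seed
        article with Okapi BM25 (illustrative sketch, not the RELISH code)."""
        n_docs = len(corpus_tokens)
        avg_len = sum(len(doc) for doc in corpus_tokens) / n_docs
        # Document frequency: number of documents containing each term.
        df = Counter()
        for doc in corpus_tokens:
            df.update(set(doc))
        scores = []
        for doc in corpus_tokens:
            tf = Counter(doc)
            score = 0.0
            for term in set(query_tokens):
                if term not in tf:
                    continue
                # Smoothed IDF (Lucene-style variant, always non-negative).
                idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
                # Term-frequency saturation with document-length normalisation.
                score += idf * tf[term] * (k1 + 1) / (
                    tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
            scores.append(score)
        return scores

    # Usage: order two hypothetical candidate abstracts by relevance to a seed.
    seed = "benchmark for document similarity in biomedical literature search".split()
    corpus = [
        "an expert curated benchmark of biomedical literature search relevance".split(),
        "deep learning methods for protein structure prediction".split(),
    ]
    scores = bm25_scores(seed, corpus)
    print(sorted(range(len(corpus)), key=lambda i: -scores[i]))

TF-IDF follows the same bag-of-words pattern with a simpler weighting, which is consistent with the abstract's finding that the baselines perform similarly overall while retrieving distinct sets of articles.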
dc.description.filiation: UEM
dc.description.impact: 2.593 JCR (2019) Q2, 15/59 Mathematical & Computational Biology
dc.description.impact: 2.248 SJR (2019) Q1, 13/312 Agricultural and Biological Sciences (miscellaneous)
dc.description.impact: No data IDR 2019
dc.description.sponsorship: No funding
dc.identifier.citation: Brown, P., RELISH Consortium, & Zhou, Y. (2019). Large expert-curated database for benchmarking document similarity detection in biomedical literature search. Database: The Journal of Biological Databases and Curation, 2019, 1-66. https://doi.org/10.1093/database/baz085
dc.identifier.doi: 10.1093/database/baz085
dc.identifier.issn: 1758-0463
dc.identifier.uri: http://hdl.handle.net/11268/10757
dc.language.iso: eng
dc.peerreviewed: Yes
dc.rights: Attribution 4.0 International
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject.unesco: Medical sciences
dc.subject.unesco: Research
dc.subject.unesco: Benchmarking
dc.title: Large expert-curated database for benchmarking document similarity detection in biomedical literature search
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: faac3041-87f1-4251-81a8-3d42f0aaa132
relation.isAuthorOfPublication.latestForDiscovery: faac3041-87f1-4251-81a8-3d42f0aaa132

Files

Original bundle

Name: Larrosa_Database_2019.pdf
Size: 1.96 MB
Format: Adobe Portable Document Format
Description: Publisher's version