“What now?”: TrendMD guides readers to the most relevant further reading

By Paul Kudlow, February 8, 2016

TrendMD’s collaborative filtering engine improved clickthrough rates by 272% compared with a standard ‘similar articles’ algorithm in an A/B trial

Whenever we listen to music, watch videos, or make purchases online, websites such as Spotify, Netflix, and Amazon use sophisticated algorithms to suggest additional items we might find interesting. Scholarly publishers can benefit from the same approach.

One simple form of recommendation is to surface the items most similar to the current one, based on shared keywords, tags, or semantic classifications. In scholarly publishing, this works well as a means of clustering articles into highly related groups. PubMed’s similar-articles feature is a popular example of this approach, and works well for readers wishing to explore a particular area in depth.
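For a concrete sense of how this kind of similarity-based recommender works, here is a minimal sketch that ranks articles by keyword overlap using Jaccard similarity. The article IDs and keywords are hypothetical, and PubMed’s actual algorithm is considerably more sophisticated.

```python
# Minimal sketch: rank "similar articles" by keyword overlap (Jaccard similarity).
# The articles and keywords below are hypothetical illustrations.

def jaccard(a: set, b: set) -> float:
    """Similarity = size of the intersection over size of the union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

articles = {
    "A1": {"telemedicine", "diabetes", "mobile-health"},
    "A2": {"telemedicine", "diabetes", "self-management"},
    "A3": {"social-media", "public-health", "mobile-health"},
}

def most_similar(target: str, k: int = 2):
    """Return the k articles whose keyword sets overlap most with the target's."""
    scores = {
        other: jaccard(articles[target], keywords)
        for other, keywords in articles.items() if other != target
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(most_similar("A1"))  # [('A2', 0.5), ('A3', 0.2)]: A2 shares two keywords
```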

[Figure: Recommendations for the same article from the TrendMD widget versus a basic similar-articles algorithm, illustrating the two approaches.]

However, it is clear that the most useful further reading links are not always the most semantically related. Indeed, if articles are too closely related, there may be diminishing returns from discovering more articles in precisely the same niche. By analogy, if I’ve just bought a coffee maker, I probably don’t want to buy another one, but I may be interested in buying coffee beans or a descaler.

How TrendMD’s Collaborative Filtering Improves Article Recommendations

Collaborative filtering is a powerful way to capture this kind of relationship. Rather than comparing article content, it identifies correlations by analyzing anonymized click data: articles that readers tend to click on together are treated as good recommendations for one another. TrendMD makes heavy use of collaborative filtering to optimize its recommendations, ensuring that the articles shown by the TrendMD widget are those predicted to be most useful given the pattern of previous clicks.
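TrendMD has not published its exact method, but the core idea can be illustrated with a minimal item-item sketch: count how often articles are clicked within the same anonymized session, and recommend the most frequent co-clicks. The session data below is hypothetical, reusing the coffee analogy from above.

```python
# Minimal sketch of item-item collaborative filtering over anonymized click data.
# Items that tend to be clicked within the same session are treated as good
# recommendations for each other. The sessions are hypothetical, and TrendMD's
# production system is certainly more sophisticated than this.
from collections import defaultdict
from itertools import combinations

sessions = [  # each inner list: items one anonymous visitor clicked
    ["coffee-maker-review", "best-coffee-beans"],
    ["coffee-maker-review", "descaler-guide", "best-coffee-beans"],
    ["coffee-maker-review", "descaler-guide"],
]

co_clicks = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for a, b in combinations(set(session), 2):
        co_clicks[a][b] += 1
        co_clicks[b][a] += 1

def recommend(item: str, k: int = 2):
    """Rank candidates by how often they were co-clicked with `item`."""
    ranked = sorted(co_clicks[item].items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

print(recommend("coffee-maker-review"))
# Both 'best-coffee-beans' and 'descaler-guide' were co-clicked twice.
```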

To demonstrate the impact of collaborative filtering on recommendation quality, and to show how it progressively improves as click data accumulates, TrendMD recently ran a controlled experiment comparing the performance of its recommendations against a benchmark based purely on semantic similarity, with no collaborative filtering.

The A/B Test: Comparing TrendMD to Standard Algorithms

Journal of Medical Internet Research (JMIR) performed a 6-week A/B test comparing recommendations generated by the TrendMD service, which incorporates collaborative filtering, with recommendations generated using the basic PubMed similar-articles algorithm described on the NCBI website. All articles published in JMIR were included in the study (n=2,740). The primary outcome measure was the aggregate clickthrough rate for the article recommendations displayed by the TrendMD widget.
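The post does not describe how page views were split between the two arms. A common approach, shown in this hypothetical sketch, is to hash an anonymous page-view identifier so that assignment is deterministic and close to 50/50.

```python
# Hypothetical sketch of 50/50 A/B assignment for article page views.
# The actual assignment mechanism used in the JMIR trial is not described
# in the post; hashing an anonymous view ID is one common, stable approach.
import hashlib

def assign_arm(view_id: str) -> str:
    """Deterministically bucket a page view into one of two arms."""
    first_byte = hashlib.sha256(view_id.encode()).digest()[0]
    return "pubmed_similarity" if first_byte % 2 == 0 else "collaborative_filtering"

for vid in ("view-0001", "view-0002", "view-0003"):
    print(vid, "->", assign_arm(vid))
```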

[Figure: Clickthrough rates under PubMed’s similar-articles algorithm versus TrendMD’s collaborative filtering, showing a 272% increase in CTR with TrendMD.]

Existing click data for the JMIR account was temporarily removed from the system to ensure that the collaborative filtering started with a clean slate.

Results

Over the course of the A/B trial, 41,871 article views showed recommendations from the PubMed relatedness algorithm, while 41,884 showed recommendations optimized by collaborative filtering. Readers were blind to the algorithm used, since the user interface of the recommendations was identical in both cases.

[Figure: Growth in clickthrough rate over the 6-week trial for collaborative filtering versus the PubMed algorithm.]

The graph above shows that around 1.16% of article page views with the PubMed algorithm led to a click on a further-reading recommendation. The aggregate clickthrough rate with collaborative filtering started at the same level, as expected, since the PubMed algorithm was the baseline that collaborative filtering set out to optimize. Over the 6 weeks of the trial, as click data accumulated and was used to refine the recommendations, the clickthrough rate with collaborative filtering rose to 4.31%, a 272% improvement in the effectiveness of the recommendations.
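As a sanity check, the headline figure follows directly from the two reported clickthrough rates:

```python
# Reproducing the headline figure from the reported clickthrough rates.
baseline_ctr = 1.16   # % of views clicking a recommendation, PubMed algorithm
cf_ctr = 4.31         # % at the end of the trial, with collaborative filtering

relative_improvement = (cf_ctr - baseline_ctr) / baseline_ctr * 100
print(f"{relative_improvement:.0f}% improvement")  # -> 272% improvement
```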

Conclusions

As this experiment demonstrates, collaborative filtering can dramatically improve the effectiveness of article recommendations for scholarly publishers. However, there are significant hurdles to applying this lesson: collaborative filtering demands a non-trivial technological investment in machine learning, and it requires a large amount of click data to be effective.

Fortunately, TrendMD can help on both fronts. TrendMD’s cloud-based article recommendation widget handles the technical complexity and machine-learning wizardry, allowing publishers to focus on publishing. Meanwhile, thanks to TrendMD’s rapidly growing publisher network, which now delivers over 120 million article recommendations monthly, TrendMD has the data to generate highly optimized article recommendations, and even to optimize those recommendations for individual users.
