RecSys Challenge 2024

About

This year's challenge focuses on online news recommendation, addressing both the technical and normative challenges inherent in designing effective and responsible recommender systems for news publishing. The challenge delves into the unique aspects of news recommendation, including modeling user preferences from implicit behavior, accounting for the influence of the news agenda on user interests, and managing the rapid decay of news items. Furthermore, the challenge embraces the normative complexities of news recommendation, investigating the effects of recommender systems on the news flow and whether they resonate with editorial values.


Call for Contributions - Jul. 18, 23:59:59 AoE

We invite researchers and practitioners to submit their work for the RecSys Challenge workshop. Note that winning teams must submit papers and sign up for the workshop.

Format and Templates

Starting this year, RecSys submissions use a new template. The same rules apply as for all other types of papers; please follow the Call for Papers Submission Guidelines.

Contributions

The topics of interest include, but are not limited to (in alphabetical order):
  • Applications of news recommendation
  • Benchmarking and evaluation of recommender systems
  • Bias in intelligent news systems
  • Clickbait, fake news and misinformation detection
  • Contributions focused on beyond accuracy, such as fairness, diversity, coverage, etc.
  • Cross-domain and multi-modal recommendations
  • Dataset analyses and preprocessing techniques
  • News categorization, summarization and headline generation
  • News content modeling
  • News ranking techniques
  • News trend and lifecycle
  • Novel model architectures for news recommendation
  • Privacy protection in news recommendation
  • Scalability and efficiency of recommendation algorithms
  • User behavior analysis
  • User interest modeling
Furthermore, we ask that solutions using all features, including those that may yield information not available in a live setup, report results both with and without these features (as discussed in the thread: link).



Organizers

The challenge is organized by Ekstra Bladet and JP/Politikens Hus A/S ("Ekstra Bladet"), Johannes Kruse [1,2], Kasper Lindskow [1], Anshuk Uppal [2], Michael Riis Andersen [2], Jes Frellsen [2], Marco Polignano [3], Claudio Pomo [4] and Abhishek Srivastava [5], based on the data provided by Ekstra Bladet.
  1. Ekstra Bladet / JP/Politikens Hus A/S
  2. Technical University of Denmark
  3. University of Bari Aldo Moro, Italy
  4. Politecnico di Bari, Italy
  5. IIM Visakhapatnam, India
If you have any questions, suggestions, or are experiencing any issues, do not hesitate to reach out:
  • Johannes.Kruse@jppol.dk
For all questions regarding EasyChair, please refer to:
  • Claudio.Pomo@poliba.it

Challenge Task

The Ekstra Bladet RecSys Challenge aims to predict which article a user will click on from the list of articles shown during a specific impression. Given the user's click history, session details (such as time and device used), and personal metadata (including gender and age), together with the candidate news articles listed in the impression log, the objective is to rank the candidates according to the user's personal preferences. This involves developing models that represent both articles, through their content, and users, through their interests. The models estimate the likelihood that a user clicks each article by evaluating the compatibility between the article's content and the user's preferences, and the articles are ranked by these likelihood scores. The precision of the rankings is then measured against the actual selections made by users.
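As a minimal sketch of the task described above — assuming article embeddings (e.g., derived from article text) already exist, which the challenge itself does not prescribe — a user can be represented by aggregating the embeddings of previously clicked articles, and candidates can then be ranked by a simple compatibility score:

```python
import numpy as np

def build_user_profile(clicked_article_vecs: np.ndarray) -> np.ndarray:
    """Represent a user as the mean embedding of previously clicked articles.

    `clicked_article_vecs` has shape (n_clicked, dim). The embeddings are a
    hypothetical input here; any article-content encoder could produce them.
    """
    return clicked_article_vecs.mean(axis=0)

def rank_candidates(user_vec: np.ndarray, candidate_vecs: np.ndarray) -> np.ndarray:
    """Score each candidate by dot-product compatibility with the user profile
    and return candidate indices from most to least likely to be clicked."""
    scores = candidate_vecs @ user_vec
    return np.argsort(-scores)
```

Actual submissions typically replace the mean profile and dot product with learned user and article encoders, but the rank-by-estimated-click-likelihood structure is the same.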

Evaluation

To evaluate the models, we use several standard metrics in the recommendation field, including the area under the ROC curve (AUC), mean reciprocal rank (MRR), and normalized discounted cumulative gain (nDCG@K) for K shown recommendations. To address the normative complexities inherent in news recommendations, the test set incorporates samples specifically designed to assess models based on normative properties. This includes evaluating models on Beyond-Accuracy Objectives, such as intra-list diversity, serendipity, novelty, coverage, among others. The final result is the average of these metrics across all impression logs.
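The per-impression accuracy metrics above can be sketched as follows, assuming binary click labels and real-valued model scores for one impression (ties are broken arbitrarily here; a production pipeline would typically use an established library such as scikit-learn):

```python
import numpy as np

def impression_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    # Probability that a clicked article is scored above a non-clicked one
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean())

def impression_mrr(labels: np.ndarray, scores: np.ndarray) -> float:
    # Reciprocal rank of the highest-ranked clicked article
    order = np.argsort(-scores)
    first_click = np.flatnonzero(labels[order] == 1)[0]
    return 1.0 / (first_click + 1)

def impression_ndcg(labels: np.ndarray, scores: np.ndarray, k: int = 5) -> float:
    # Discounted cumulative gain at cutoff k, normalized by the ideal ordering
    def dcg(rels: np.ndarray) -> float:
        return float((rels / np.log2(np.arange(2, rels.size + 2))).sum())
    order = np.argsort(-scores)
    return dcg(labels[order][:k]) / dcg(np.sort(labels)[::-1][:k])
```

The final challenge score averages each metric over all impression logs in the test set.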

  • The primary metric for the challenge is AUC.
RecSys 24 Challenge Sponsor Workshop Recording and Slides Available

We are pleased to announce that the sponsor edition of the RecSys 24 Challenge Workshop, held virtually on Monday, May 27, was a great success. If you missed the live session or would like to revisit the content, we have made the recording and slides available for your convenience.

  • Workshop Recording
  • Slides

    During the workshop, we introduced the organizers, shared the motivation behind this year's competition, provided insights into the objectives and structure of the challenge, and addressed participants' questions in an open Q&A session. We hope you find these materials helpful and look forward to your continued engagement in the RecSys 24 Challenge!


    Dataset

The Ekstra Bladet News Recommendation Dataset (EB-NeRD) is a large-scale Danish dataset created by Ekstra Bladet to support advancement and benchmarking in news recommendation research. EB-NeRD comprises over 2.3 million users and more than 380 million impression logs from Ekstra Bladet. Alongside, we offer a collection of more than 125,000 news articles, enriched with textual content features such as titles, abstracts, and bodies. This enables the use of text features in a low-resource language as context for recommender systems.

    For more details about the dataset and format:

    Dataset Description

    Download

EB-NeRD is free to download for research purposes under the General License Terms ("License Terms"). Before you download the dataset, please read the License Terms and click the button below to confirm that you agree to them.

    I have read and accepted the License Terms

In addition, to help researchers become familiar with our data and run quick experiments, we are releasing a demo and a small version of EB-NeRD, created by randomly sampling 5,000 and 50,000 users, respectively, together with their behavior logs, from the full dataset.
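The user-sampling procedure behind these splits can be sketched as follows. This is an illustrative sketch, not the release script: the `user_ids`/`behaviors` inputs and the "user_id" key are assumed placeholders, whereas the actual EB-NeRD files are parquet tables with their own schema.

```python
import numpy as np

def sample_user_split(user_ids, behaviors, n_users, seed=0):
    """Create a smaller split by sampling `n_users` users uniformly at random
    and keeping only the behavior logs that belong to those users.

    `behaviors` is assumed to be a list of dicts with a "user_id" key.
    """
    rng = np.random.default_rng(seed)
    keep = set(rng.choice(np.asarray(user_ids), size=n_users, replace=False).tolist())
    return keep, [b for b in behaviors if b["user_id"] in keep]
```

Sampling by user (rather than by individual log) keeps each sampled user's behavior history intact, which is what history-based recommenders need.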

    Also, to get started, we have assembled a toolkit featuring a range of established news and general recommendation methods.

    EBRec


    Registration

The Challenge's evaluation system is hosted on the open-source platform Codabench. Please read the Challenge's Terms and Conditions. To register, please follow:

    Register

    The Codabench website is occasionally offline for maintenance to ensure optimal performance. To stay updated on maintenance schedules, please visit the CodaLab Competitions Google Group. Thank you for your understanding and continued support!

    Prizes

    The top three teams will receive exciting cash prizes: $3,500 for first place, $2,500 for second, and $1,500 for third. Additionally, a special $2,500 prize will be awarded to the best academic team.


    Timeline

The table below presents the timeline and key deadlines for the challenge. Note that all listed dates and times are in the Anywhere on Earth (AoE) timezone, at 23:59:59.

When?          What?
Mar. 8, 2024   Start of the RecSys Challenge; dataset release
Mar. 25, 2024  Submission system opens
Apr. 4, 2024   Leaderboard live
Jun. 21, 2024  End of the RecSys Challenge
Jun. 24, 2024  Final leaderboard and winners; EasyChair opens for submissions
Jul. 1, 2024   Code upload (upload the code of the final predictions)
Jul. 18, 2024  Paper submission due
Aug. 3, 2024   Paper acceptance notifications
Aug. 29, 2024  Camera-ready papers
Oct. 14, 2024  RecSys Challenge Workshop @ ACM RecSys 2024

    Leaderboard

    Winners 🏆

    Congratulations to the winners of the RecSys Challenge 2024!

    🥇 1st Place: :D

    🥈 2nd Place: BlackPearl

    🥉 3rd Place: Tom3TK

    🥇 1st Place [Best Academic Team]: FeatureSalad


    Updated: June 21, 2024.
To be added to the Academic Teams' leaderboard, please fill out the Google Form: Academic Leaderboard. Note: in the form, enter the username of each team member, not the organization name.
