About
This year's challenge focuses on online news recommendation, addressing both the technical and normative challenges inherent in designing effective and responsible recommender systems for news publishing. The challenge delves into the unique aspects of news recommendation, including modeling user preferences from implicit behavior, accounting for the influence of the news agenda on user interests, and managing the rapid decay of news items. Furthermore, our challenge embraces normative complexities, investigating the effects of recommender systems on the news flow and whether they resonate with editorial values.
Organizers
The challenge is organized by Ekstra Bladet and JP/Politikens Hus A/S ("Ekstra Bladet"): Johannes Kruse (1,2), Kasper Lindskow (1), Anshuk Uppal (2), Michael Riis Andersen (2), Jes Frellsen (2), Marco Polignano (3), Claudio Pomo (4) and Abhishek Srivastava (5), based on the data provided by Ekstra Bladet.
1. Ekstra Bladet / JP/Politikens Hus A/S
2. Technical University of Denmark
3. University of Bari Aldo Moro, Italy
4. Politecnico di Bari, Italy
5. IIM Visakhapatnam, India

Contact: Johannes.Kruse@jppol.dk, Claudio.Pomo@poliba.it
Challenge Task
The Ekstra Bladet RecSys Challenge aims to predict which article a user will click on from the list of articles shown during a specific impression. Given the user's click history, session details (such as time and device), and personal metadata (including gender and age), along with the candidate news articles in the impression log, the objective is to rank the candidates according to the user's personal preferences. This involves developing models that represent both the articles, through their content, and the users, through their interests. The models estimate the likelihood of a user clicking on each article by evaluating the compatibility between the article's content and the user's preferences; the articles are ranked by these likelihood scores, and the quality of the rankings is measured against the actual selections made by users.
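As an illustration of the ranking step only (not the official baseline), here is a minimal sketch that scores each candidate article by the compatibility, here a simple dot product, between a user representation and the article representations, then ranks candidates by score; the embeddings below are toy values, not learned from the dataset:

```python
import numpy as np

def rank_candidates(user_vec: np.ndarray, article_vecs: np.ndarray) -> np.ndarray:
    """Rank the candidate articles of one impression by a simple
    compatibility score (dot product) between a user representation
    and each article representation; higher score = more likely click."""
    scores = article_vecs @ user_vec   # one score per candidate article
    return np.argsort(-scores)         # candidate indices, best first

# Toy impression with 3 candidates and 4-dimensional embeddings
# (purely illustrative; real models would learn these from EB-NeRD).
user = np.array([1.0, 0.0, 0.5, 0.0])
articles = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.9, 0.1, 0.5, 0.0],
    [0.4, 0.4, 0.1, 0.1],
])
order = rank_candidates(user, articles)
print(order)  # best-ranked candidate first
```

Any model that produces one score per candidate can plug into this ranking step; the challenge is in learning representations that make those scores predictive of real clicks.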
Evaluation
To evaluate the models, we use several standard metrics in the recommendation field, including the
- area under the ROC curve (AUC),
- mean reciprocal rank (MRR), and
- normalized discounted cumulative gain (nDCG@K) for K shown recommendations.

To address the normative complexities inherent in news recommendations, the test set incorporates samples specifically designed to assess models based on normative properties. This includes evaluating models on beyond-accuracy objectives such as intra-list diversity, serendipity, novelty, and coverage, among others. The final result is the average of these metrics across all impression logs.
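For concreteness, here are small reference implementations of MRR and nDCG@K for a single impression with binary click labels. These follow the standard textbook definitions and are a sketch only, not the organizers' exact evaluation script:

```python
import numpy as np

def mrr(ranked_labels) -> float:
    """Reciprocal rank of the first clicked item, given binary labels
    listed in the model's ranked order (1 = clicked)."""
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranked_labels, k: int) -> float:
    """nDCG@K with binary relevance and log2 rank discounting."""
    labels = np.asarray(ranked_labels, dtype=float)
    top = labels[:k]
    discounts = np.log2(np.arange(2, top.size + 2))
    dcg = (top / discounts).sum()
    ideal = np.sort(labels)[::-1][:k]          # best possible ordering
    idcg = (ideal / discounts[: ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0

# One impression: five candidates in ranked order, clicks at ranks 2 and 5.
ranked = [0, 1, 0, 0, 1]
print(mrr(ranked))          # 0.5, since the first click is at rank 2
print(ndcg_at_k(ranked, 5))
```

Per the text above, such per-impression scores are then averaged across all impression logs in the test set.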
RecSys 24 Challenge Sponsor Workshop Recording and Slides Available
We are pleased to announce that the sponsor edition of the RecSys 24 Challenge Workshop, held virtually on Monday, May 27th, was a great success. If you missed the live session or would like to revisit the content, we have made the recording and slides available for your convenience.
During the workshop, we introduced the organizers, shared the motivation behind this year's competition, provided insights into the objectives and structure of the challenge, and addressed participants' questions in an open Q&A session. We hope you find these materials helpful and look forward to your continued engagement in the RecSys 24 Challenge!
Dataset
The Ekstra Bladet News Recommendation Dataset (EB-NeRD) is a large-scale Danish dataset created by Ekstra Bladet to support advancements and benchmarking in news recommendation research. EB-NeRD comprises data from over 1 million unique users, with more than 37 million impression logs and over 251 million interactions from Ekstra Bladet. Alongside, we offer a collection of more than 125,000 news articles, enriched with textual content features such as titles, abstracts, and bodies. This enables the use of textual features in a low-resource language as context for recommender systems.
For more details about the dataset and format:
Download
EB-NeRD is free to download for research purposes under the General License Terms ("License Terms"). Before you download the dataset, please read the License Terms and click the button below to confirm that you agree to them.
In addition, to help researchers become familiar with our data and run quick experiments, we are releasing a demo and a small version of EB-NeRD, created by randomly sampling 5,000 and 50,000 users, respectively, along with their behavior logs from the full dataset.

Also, to get started, we have assembled a toolkit featuring a range of established news and general recommendation methods.
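The demo and small versions are described as random user samples of the full dataset. A subset of that kind can be cut by sampling user IDs and keeping all of their impressions, as in this sketch; the DataFrame and its column names are a synthetic stand-in, not the real EB-NeRD schema:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for the full impression log; the real EB-NeRD
# files and column names may differ (illustrative assumption).
behaviors = pd.DataFrame({
    "user_id": rng.integers(0, 1_000, size=10_000),
    "impression_id": np.arange(10_000),
})

def sample_user_subset(df: pd.DataFrame, n_users: int, seed: int = 0) -> pd.DataFrame:
    """Randomly sample n_users distinct users and keep all of their
    impression logs, mirroring how a demo/small split can be cut
    from the full dataset."""
    users = df["user_id"].drop_duplicates().sample(n=n_users, random_state=seed)
    return df[df["user_id"].isin(users)]

demo = sample_user_subset(behaviors, n_users=100)
```

Sampling whole users rather than individual impressions keeps each user's behavior history intact, which matters for models that condition on click history.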
Registration
The Challenge's evaluation system will be hosted on the open-source platform Codabench. Please read the Challenge's Terms and Conditions. To register, please follow:
The Codabench website is occasionally offline for maintenance to ensure optimal performance. To stay updated on maintenance schedules, please visit the CodaLab Competitions Google Group. Thank you for your understanding and continued support!
Prizes
The top three teams will receive exciting cash prizes: $3,500 for first place, $2,500 for second, and $1,500 for third. Additionally, a special $2,500 prize will be awarded to the best academic team.
Timeline
The table below presents the timeline and key deadlines for the challenge. Note that all listed dates use the Anywhere on Earth (AoE) timezone, with deadlines at 23:59:59.
| When? | What? |
|---|---|
| | Start RecSys Challenge: release dataset |
| | Submission System Open |
| | Leaderboard live |
| | End RecSys Challenge |
| | Final Leaderboard & Winners; EasyChair open for submissions |
| | Code Upload: upload code of the final predictions |
| | Paper Submission Due |
| | Paper Acceptance Notifications |
| | Camera-Ready Papers |
| Oct. 14, 2024 | RecSys Challenge Workshop @ACM RecSys 2024 |
Call for Contributions
We invite researchers and practitioners to submit their work for the RecSys Challenge workshop. Note that winning teams must submit papers and sign up for the workshop.
Format and Templates
Starting this year, RecSys submissions have adopted a new template. We will follow the same rules that apply to all other types of papers; please refer to the Call for Papers Submission Guidelines.
Contributions
The topics of interest include, but are not limited to (in alphabetical order):
- Applications of news recommendation
- Benchmarking and evaluation of recommender systems
- Bias in intelligent news systems
- Clickbait, fake news and misinformation detection
- Contributions focused on beyond accuracy, such as fairness, diversity, coverage, etc.
- Cross-domain and multi-modal recommendations
- Dataset analyses and preprocessing techniques
- News categorization, summarization and headline generation
- News content modeling
- News ranking techniques
- News trend and lifecycle
- Novel model architectures for news recommendation
- Privacy protection in news recommendation
- Scalability and efficiency of recommendation algorithms
- User behavior analysis
- User interest modeling
Leaderboard
Winners 🏆
Congratulations to the winners of the RecSys Challenge 2024!
🥇 1st Place: :D
- Kazuki Fujikawa (kfujikawa), Naoki Murakami (kami634), Yuki Sugawara (sugawarya), Takuya Akiyama (akiyama).
🥈 2nd Place: BlackPearl
- Peng Yan (contentbetter), Linsen Guo (invalidpointer), Haoru Chen (tilbur), Zhimin Lin (chizhu), Jing Yang (jingy), Zijian Zhang (zachzang), Taofeng Xue (xuetf), Mengjiao Bao (spongebob), Binli Luo (overfitking).
🥉 3rd Place: Tom3TK
- Akihiro Tomita (asato), Tomomu Iwai (tomo426), Tomoyuki Arai (tomoyukiarai), Hiroki Ogawa (kurokurob), Takuma Saito (taksai).
🥇 1st Place [Best Academic Team]: FeatureSalad
- Lorenzo Campana (lorecampa99), Saverio Maggese (savemay), Federico Ciliberto (federicocilib), Francesco Zanella (francesczanella), Carlo Sgaravatti (carlosgaravatti), Andrea Alari (andrealari), Andrea Pisani, Maurizio Ferrari Dacrema.
Updated: June 21, 2024.
To be added to the Academic Teams' leaderboard, please fill out the Google Form: Academic Leaderboard. Note: in the form, write the username of each team member, not the organization name.
| Rank | User | AUC | MRR | NDCG@5 | NDCG@10 |
|---|---|---|---|---|---|