ML20M Data Splitting

This page describes the analysis used to select the cutoffs for temporally splitting the ML20M data set.

Split Windows

Following Meng et al. (2020), we prepare a global temporal split of the rating data. We target approximately a 70/15/15 train/tune/test split, but round the timestamps so that the split boundaries fall on clean calendar dates. Searching for the corresponding quantiles of the rating timestamps gives us candidate cutoffs.
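As a minimal sketch of that quantile search (assuming the ratings live in a pandas data frame with a Unix-seconds `timestamp` column; the file path below is an assumption about where the data lives):

```python
import pandas as pd

# Load the ML20M ratings; the path is a placeholder.
ratings = pd.read_csv("ml-20m/ratings.csv")
ratings["timestamp"] = pd.to_datetime(ratings["timestamp"], unit="s")

# A 70/15/15 split puts the train/tune boundary at the 70% quantile
# and the tune/test boundary at the 85% quantile of the timestamps.
cutoffs = ratings["timestamp"].quantile([0.70, 0.85])
print(cutoffs)
```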

t_tune (70% quantile)      t_test (85% quantile)
2007-12-08 00:20:24.800    2010-12-06 22:37:22.200

This suggests that 2008–2010 (tune) and 2011 onward (test) are reasonable split windows, with everything before 2008 used for training.
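A sketch of how the split itself might be materialized from the rounded cutoffs, continuing from the frame above; the January 1 boundaries and the labeling code are illustrative, not necessarily the exact code used here:

```python
import numpy as np
import pandas as pd

# Rounded calendar boundaries implied by the quantile cutoffs.
TUNE_START = pd.Timestamp("2008-01-01")
TEST_START = pd.Timestamp("2011-01-01")

# Label each rating with its partition.
ratings["part"] = np.where(
    ratings["timestamp"] < TUNE_START,
    "train",
    np.where(ratings["timestamp"] < TEST_START, "tune", "test"),
)

# Summarize the partitions.
summary = ratings.groupby("part").agg(
    n_ratings=("rating", "size"),
    n_users=("userId", "nunique"),
    n_items=("movieId", "nunique"),
)
print(summary)
```

The resulting partitions: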

part     n_ratings    n_users   n_items
tune       2992131      22796     15040
train     14063903     101068      9710
test       2943855      25167     25805

How many test users have at least 5 training ratings?
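A sketch of that count, continuing from the labeled frame above; here n_ratings is interpreted as the number of test ratings belonging to those users, which is an assumption about what the table below reports:

```python
# Test ratings and per-user training rating counts.
test_ratings = ratings[ratings["part"] == "test"]
train_counts = ratings[ratings["part"] == "train"].groupby("userId").size()

# Test users with at least 5 training ratings, and their test ratings.
eligible = train_counts[train_counts >= 5].index
eligible_test = test_ratings[test_ratings["userId"].isin(eligible)]
print("n_users:", eligible_test["userId"].nunique())
print("n_ratings:", len(eligible_test))
```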

n_users   n_ratings
   5564      619592

And for tuning?

n_users   n_ratings
   5278      818921

This gives us enough data to work with, even if we might like more test users. To be more thorough, let’s look at how many test users we have by training rating count:
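A sketch of that breakdown, continuing from the counts above: for each minimum-training-ratings threshold, it tabulates how many test users would remain (the grouping used in the original analysis may differ):

```python
import pandas as pd

# Training rating count for every test user (0 if they have no training ratings).
user_train_counts = train_counts.reindex(
    test_ratings["userId"].unique(), fill_value=0
)

# Number of test users retained at each minimum threshold.
retained = pd.Series(
    {k: int((user_train_counts >= k).sum()) for k in range(0, 21)},
    name="n_test_users",
)
print(retained)
```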

(Figure: number of test users by training rating count.)

Since the loss is very small up through a threshold of 10–11 training ratings, we will use all users who appear at least once in the training data as our test users.

References

Meng, Zaiqiao, Richard McCreadie, Craig Macdonald, and Iadh Ounis. 2020. “Exploring Data Splitting Strategies for the Evaluation of Recommendation Models.” In Fourteenth ACM Conference on Recommender Systems, 681–86. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3383313.3418479.