ML10M Data Splitting

This page describes the analysis done to select the cutoffs for temporally-splitting the ML10M data set.

Split Windows

Following Meng et al. (2020), we are going to prepare a global temporal split of the rating data. We will target a 70/15/15 train/tune/test split, but round the timestamps so the split points fall at clean calendar dates. Searching for the corresponding quantiles of the rating timestamps gives us the raw cutoffs.
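The quantile search can be sketched as follows; the frame and column names here are illustrative assumptions, not the notebook's actual variables:

```python
import pandas as pd

# Toy stand-in for the ML10M ratings; only the timestamp column matters here.
ratings = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2003-01-15", "2004-06-01", "2005-03-30", "2005-11-20",
        "2006-08-09", "2007-02-14", "2008-05-05", "2008-12-31",
    ])
})

# The 70th and 85th percentiles of the timestamps mark the train/tune
# and tune/test boundaries of a 70/15/15 split.
cutoffs = ratings["timestamp"].quantile([0.70, 0.85])
print(cutoffs)
```

Pandas supports quantiles directly on datetime columns, so no conversion to epoch seconds is needed.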

t_tune                     t_test
2005-04-02 16:54:12.100    2006-12-27 19:38:13.650

This suggests that Apr. 2005 is a reasonable tuning cutoff, and 2007 a good test cutoff.
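Applying the rounded cutoffs is then a pair of boolean filters; this is a minimal sketch, with `ratings` and its columns standing in for the real data:

```python
import pandas as pd

# Toy ratings frame; columns are illustrative assumptions.
ratings = pd.DataFrame({
    "user": [1, 1, 2, 2, 3],
    "item": [10, 11, 10, 12, 13],
    "timestamp": pd.to_datetime([
        "2004-01-01", "2005-06-01", "2006-02-01", "2007-03-01", "2008-01-01",
    ]),
})

t_tune = pd.Timestamp("2005-04-01")  # rounded tuning cutoff
t_test = pd.Timestamp("2007-01-01")  # rounded test cutoff

train = ratings[ratings["timestamp"] < t_tune]
tune = ratings[ratings["timestamp"].between(t_tune, t_test, inclusive="left")]
test = ratings[ratings["timestamp"] >= t_test]
```

Each rating lands in exactly one partition, so the three frames together cover the full data set.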

part    n_ratings  n_users  n_items
train     6974137    54668     8550
tune      1538314    10479     8913
test      1487543    10904    10414

How many test users have at least 5 training ratings?

n_users  n_ratings
   3128     269921
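The eligibility check behind these counts can be sketched like this; the toy `train` and `test` frames are assumptions standing in for the actual partitions:

```python
import pandas as pd

# Toy stand-ins for the train and test partitions.
train = pd.DataFrame({"user": [1] * 5 + [2] * 3, "item": list(range(8))})
test = pd.DataFrame({"user": [1, 1, 2, 3], "item": [100, 101, 102, 103]})

# Test users with at least 5 training ratings, and the number of test
# ratings those users contribute.
train_counts = train.groupby("user").size()
eligible = train_counts.index[train_counts >= 5]
qualified = test[test["user"].isin(eligible)]
print(qualified["user"].nunique(), len(qualified))
```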

And for tuning / validation?

n_users  n_ratings
   3248     431294

This gives us enough data to work with, even if we might like more test users. To be more thorough, let's look at how many test users we have at each training rating count:
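That tally can be sketched as follows; the frames and variable names are assumptions, not the notebook's actual code:

```python
import pandas as pd

# Toy stand-ins: training ratings and the set of test users.
train = pd.DataFrame({"user": [1] * 12 + [2] * 3 + [3] * 1})
test_users = pd.Index([1, 2, 3, 4], name="user")

# Training rating count for each test user; users absent from
# training get a count of zero.
counts = train.groupby("user").size().reindex(test_users, fill_value=0)

# Number of test users at each training rating count.
profile = counts.value_counts().sort_index()
print(profile)
```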

(Output elided: test user counts by training rating count.)

Since the loss is very small up through 10–11 ratings, we will use all users who appear at least once in training as our test users.

References

Meng, Zaiqiao, Richard McCreadie, Craig Macdonald, and Iadh Ounis. 2020. “Exploring Data Splitting Strategies for the Evaluation of Recommendation Models.” In Fourteenth ACM Conference on Recommender Systems, 681–86. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3383313.3418479.