ML32M Data Splitting

This page describes the analysis done to select the cutoffs for temporally-splitting the ML32M data set.

Split Windows

Following Meng et al. (2020), we prepare a global temporal split of the rating data. We target a 70/15/15 train/tune/test split, but round the timestamps so that the split points fall on clean calendar dates. Computing quantiles of the rating timestamps gives us candidate cutoffs.

   t_tune                    t_test
0  2016-10-13 13:59:02.800   2019-11-09 20:24:13.700

This suggests that Oct. 2016 is a reasonable validation set cutoff, and Nov. 2019 a reasonable test set cutoff.
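The quantile search and the rounded split can be sketched roughly as follows. The `ratings` frame here is a stand-in (the real analysis uses the ML32M rating timestamps), and the rounded cutoff dates are the ones chosen above:

```python
import pandas as pd

# stand-in ratings frame; the real analysis uses the ML32M rating timestamps
ratings = pd.DataFrame({
    "timestamp": pd.date_range("1996-01-01", "2023-10-01", periods=1000),
})

# a 70/15/15 split puts the tune cutoff at the 70% quantile of the
# timestamps and the test cutoff at the 85% quantile
cuts = ratings["timestamp"].quantile([0.70, 0.85])

# round the raw quantiles to clean calendar dates, then split on them
t_tune = pd.Timestamp("2016-10-01")
t_test = pd.Timestamp("2019-11-01")
train = ratings[ratings["timestamp"] < t_tune]
tune = ratings[(ratings["timestamp"] >= t_tune) & (ratings["timestamp"] < t_test)]
test = ratings[ratings["timestamp"] >= t_test]
```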

  part     n_ratings   n_users   n_items
0 test       4830150     30302     70651
1 train     22354937    154353     36291
2 tune       4814880     31199     54661
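A per-partition summary like the one above can be built with a single grouped aggregation; a minimal sketch with a toy frame (the `part`/`user`/`item` column names are assumptions about the schema):

```python
import pandas as pd

# toy stand-in: one row per rating, labeled with its partition
ratings = pd.DataFrame({
    "part": ["train"] * 4 + ["tune"] * 2 + ["test"] * 2,
    "user": [1, 1, 2, 3, 1, 2, 2, 4],
    "item": [10, 11, 10, 12, 11, 13, 10, 12],
})

# per-partition rating count and distinct user/item counts
summary = ratings.groupby("part").agg(
    n_ratings=("user", "size"),
    n_users=("user", "nunique"),
    n_items=("item", "nunique"),
)
```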

How many tuning users have at least 5 training ratings?

   n_users   n_ratings
0     7760      898684

And for testing?

   n_users   n_ratings
0     7863     1229053
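Both counts follow the same pattern: find users with at least 5 training ratings, then restrict the tune or test partition to them. A minimal sketch with toy frames (column names are assumptions about the schema):

```python
import pandas as pd

# toy stand-ins for the training and test partitions
train = pd.DataFrame({"user": [1, 1, 1, 1, 1, 2, 2, 3], "item": range(8)})
test = pd.DataFrame({"user": [1, 2, 3, 4], "item": [0, 1, 2, 3]})

# users with at least 5 training ratings
train_counts = train.groupby("user").size()
eligible = train_counts[train_counts >= 5].index

# restrict the test partition to eligible users and summarize
test_elig = test[test["user"].isin(eligible)]
summary = pd.DataFrame({
    "n_users": [test_elig["user"].nunique()],
    "n_ratings": [len(test_elig)],
})
```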

This gives us enough data to work with, even if we might like more test users. To be thorough, let’s look at how many test users we have by training rating count:

Since we lose very few test users even at a minimum of 10–11 training ratings, we will use all users who appear at least once in training as our test users.
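The retention check behind this decision can be sketched as follows: count each test user’s training ratings (zero if they never appear in training) and see how many survive each minimum-count threshold. The counts here are toy values, not the ML32M numbers:

```python
import pandas as pd

# toy stand-ins: training ratings and the list of test users
train = pd.DataFrame({"user": [1] * 3 + [2] * 12 + [3] * 1})
test_users = pd.Series([1, 2, 3, 4])

# training rating count per test user; users absent from training get 0
train_counts = train.groupby("user").size()
test_counts = train_counts.reindex(test_users, fill_value=0)

# number of test users retained at each minimum-training-count threshold
retained = pd.Series({k: int((test_counts >= k).sum()) for k in range(12)})
```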

References

Meng, Zaiqiao, Richard McCreadie, Craig Macdonald, and Iadh Ounis. 2020. “Exploring Data Splitting Strategies for the Evaluation of Recommendation Models.” In Fourteenth ACM Conference on Recommender Systems, 681–86. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3383313.3418479.