ALS BiasedMF

This page analyzes the hyperparameter tuning results for biased matrix factorization with ALS.

Parameter Search Space

| Parameter           | Type    | Distribution | Values              | Selected  |
|---------------------|---------|--------------|----------------------|-----------|
| embedding_size      | Integer | LogUniform   | 4 ≤ \(x\) ≤ 512      | 450       |
| regularization.user | Float   | LogUniform   | 1e-05 ≤ \(x\) ≤ 1    | 0.323     |
| regularization.item | Float   | LogUniform   | 1e-05 ≤ \(x\) ≤ 1    | 0.00317   |
| damping.user        | Float   | LogUniform   | 1e-12 ≤ \(x\) ≤ 100  | 3.74      |
| damping.item        | Float   | LogUniform   | 1e-12 ≤ \(x\) ≤ 100  | 1.28e-10  |
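
For reference, a search space with these distributions can be declared with Ray Tune roughly as follows. This is a minimal sketch mirroring the table above; the variable names and exact module layout in the tuning code are assumptions.

from ray import tune

# Log-uniform distributions matching the bounds in the table above.
search_space = {
    # lograndint's upper bound is exclusive, so 513 covers 4 <= x <= 512
    'embedding_size': tune.lograndint(4, 513),
    'regularization': {
        'user': tune.loguniform(1e-5, 1.0),
        'item': tune.loguniform(1e-5, 1.0),
    },
    'damping': {
        'user': tune.loguniform(1e-12, 100.0),
        'item': tune.loguniform(1e-12, 100.0),
    },
}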

Final Result

The search selected the following configuration:

{
    'embedding_size': 450,
    'regularization': {'user': 0.32339547493145027, 'item': 0.003169704945518609},
    'damping': {'user': 3.7448453198454534, 'item': 1.27559519714575e-10},
    'epochs': 6
}
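
As a rough illustration, this configuration could be used to build the scorer directly. This is a minimal sketch: it assumes the tuned keys map onto the configuration fields of lenskit.als.BiasedMFScorer, so check the installed LensKit version for the exact API.

from lenskit.als import BiasedMFScorer

best_config = {
    'embedding_size': 450,
    'regularization': {'user': 0.3234, 'item': 0.00317},
    'damping': {'user': 3.745, 'item': 1.28e-10},
    'epochs': 6,
}

# Assumption: the scorer accepts the tuned configuration keys as keyword arguments.
scorer = BiasedMFScorer(**best_config)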

With these metrics:

{
    'RBP': 0.0006888496762251515,
    'DCG': 7.726449284567924,
    'NDCG': 0.265924279382336,
    'RecipRank': 0.004507232900513236,
    'Hit10': 0.005970149253731343,
    'RMSE': 0.8055420517921448,
    'max_epochs': 30,
    'epoch_train_s': 180.29132019299868,
    'epoch_measure_s': 343.33556148700154,
    'done': True,
    'training_iteration': 6,
    'trial_id': '38bad571',
    'date': '2025-07-28_21-14-58',
    'timestamp': 1753751698,
    'time_this_iter_s': 523.6316566467285,
    'time_total_s': 3061.792503118515,
    'pid': 132395,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size': 450,
        'regularization': {'user': 0.32339547493145027, 'item': 0.003169704945518609},
        'damping': {'user': 3.7448453198454534, 'item': 1.27559519714575e-10},
        'epochs': 6
    },
    'time_since_restore': 3061.792503118515,
    'iterations_since_restore': 6
}

Parameter Analysis

Embedding Size

The embedding size is the hyperparameter that most affects the model's fundamental logic, so let's look at performance as a function of it:
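
A plot like this can be drawn from the per-trial results, for example as in the following minimal sketch. It assumes the final result of each trial has been loaded into a pandas DataFrame `trials` with Ray Tune's `config/embedding_size` column and an `NDCG` column; the names are assumptions based on Ray Tune's default result layout.

import pandas as pd

def plot_ndcg_vs_embedding(trials: pd.DataFrame):
    # Scatter final NDCG against embedding size, on a log-scaled x axis.
    ax = trials.plot.scatter(x='config/embedding_size', y='NDCG', logx=True)
    ax.set_xlabel('embedding size (log scale)')
    ax.set_ylabel('NDCG')
    return ax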

Learning Parameters

Iteration Completion

How many iterations, on average, did we complete?
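
One way to answer this from the tuning output is sketched below. It assumes the final row of each trial is collected in a DataFrame `trials` with Ray Tune's `training_iteration` column; `max_epochs` was 30 in this run.

import pandas as pd

def completion_summary(trials: pd.DataFrame, max_epochs: int = 30) -> pd.Series:
    # Mean iterations completed, and the fraction of trials that ran to max_epochs.
    return pd.Series({
        'mean_iterations': trials['training_iteration'].mean(),
        'frac_completed': (trials['training_iteration'] >= max_epochs).mean(),
    })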

How did the metric progress in the best result?

How did the metric progress in the longest-running trials?
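
The per-epoch progression behind either question can be drawn with something like the following minimal sketch. It assumes the per-iteration history of each trial is available as a DataFrame `history` with `trial_id`, `training_iteration`, and `NDCG` columns (column names are assumptions based on Ray Tune's result layout).

import matplotlib.pyplot as plt
import pandas as pd

def plot_progression(history: pd.DataFrame, trial_ids: list[str]):
    # Plot NDCG by epoch for selected trials (e.g. the best or longest-running ones).
    fig, ax = plt.subplots()
    for tid in trial_ids:
        trial = history[history['trial_id'] == tid].sort_values('training_iteration')
        ax.plot(trial['training_iteration'], trial['NDCG'], label=tid)
    ax.set_xlabel('epoch')
    ax.set_ylabel('NDCG')
    ax.legend(title='trial')
    return ax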