This page analyzes the hyperparameter tuning results for the FlexMF scorer in explicit-feedback mode (a biased matrix factorization model trained with PyTorch).
Parameter Search Space
| Parameter | Type | Distribution | Range | Selected |
|----------------|-------------|--------------|---------------------|---------|
| embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 | 24 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 | 0.00195 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 | 0.00335 |
| reg_method | Categorical | Uniform | L2, AdamW | L2 |
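For reference, a space like this could be expressed with Ray Tune's sampling API roughly as follows. This is a minimal sketch assuming a plain Ray Tune search-space dict; the actual tuning harness may construct it differently.

```python
from ray import tune

# Sketch of the search space table above (assumed Ray Tune expression;
# the real harness may differ).
space = {
    "embedding_size": tune.lograndint(4, 512),      # integer, log-uniform
    "regularization": tune.loguniform(1e-4, 10.0),  # float, log-uniform
    "learning_rate": tune.loguniform(1e-3, 0.1),    # float, log-uniform
    "reg_method": tune.choice(["L2", "AdamW"]),     # categorical, uniform
}
```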
Final Result
The search selected the following configuration:
{
'embedding_size': 24,
'regularization': 0.0019458185042904327,
'learning_rate': 0.0033469147972763446,
'reg_method': 'L2',
'epochs': 3
}
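To reuse this configuration outside the tuning run, it can be passed to the scorer's constructor. This is a hedged sketch: the class name `FlexMFExplicitScorer` and its keyword arguments are assumed from the config keys above, and the exact LensKit API may differ.

```python
from lenskit.flexmf import FlexMFExplicitScorer

# Assumed constructor and keyword names (taken from the config keys);
# consult the LensKit documentation for the authoritative API.
scorer = FlexMFExplicitScorer(
    embedding_size=24,
    regularization=0.0019458185042904327,
    learning_rate=0.0033469147972763446,
    reg_method="L2",
    epochs=3,
)
```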
With these metrics:
{
'RBP': 0.0698818903188234,
'DCG': 8.622904402730668,
'NDCG': 0.313513004543718,
'RecipRank': 0.18287091878343642,
'Hit10': 0.34369063772048847,
'RMSE': 0.8205416798591614,
'max_epochs': 50,
'epoch_train_s': 3.9633692409988726,
'epoch_measure_s': 22.135559880000073,
'done': False,
'training_iteration': 3,
'trial_id': '76a8693a',
'date': '2025-07-28_18-34-36',
'timestamp': 1753742076,
'time_this_iter_s': 26.10281252861023,
'time_total_s': 87.65014863014221,
'pid': 101126,
'hostname': 'CCI-ws21',
'node_ip': '10.248.127.152',
'config': {
'embedding_size': 24,
'regularization': 0.0019458185042904327,
'learning_rate': 0.0033469147972763446,
'reg_method': 'L2',
'epochs': 3
},
'time_since_restore': 87.65014863014221,
'iterations_since_restore': 3
}
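Two sanity checks on the timing fields: `epoch_train_s` plus `epoch_measure_s` (3.96 s + 22.14 s) accounts for essentially all of `time_this_iter_s` (26.10 s), so each iteration's wall-clock cost is dominated by measurement rather than training. Also, the trial reports `training_iteration: 3` out of `max_epochs: 50` with `done: False`, which is consistent with the search's early-stopping rule terminating it rather than the trial running to completion.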
Parameter Analysis
Embedding Size
The embedding size is the hyperparameter that most affects the model's fundamental structure, so let's look at performance as a function of it:
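If the per-trial results are available as a data frame (for example via Ray Tune's `ResultGrid.get_dataframe()`, where sampled parameters appear as `config/...` columns), a plot like this can be reproduced roughly as follows; `trials` is an assumed variable name.

```python
import matplotlib.pyplot as plt

# `trials` is assumed: one row per trial with final metrics and
# "config/..." hyperparameter columns (e.g. ResultGrid.get_dataframe()).
fig, ax = plt.subplots()
ax.scatter(trials["config/embedding_size"], trials["RMSE"])
ax.set_xscale("log", base=2)  # embedding size was sampled log-uniformly
ax.set_xlabel("embedding size")
ax.set_ylabel("RMSE")
plt.show()
```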
Iteration Completion
How many iterations, on average, did we complete?
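With the same assumed `trials` frame, the mean completed iteration count is one line:

```python
# Mean number of completed training iterations across all trials
# (assumes the `trials` frame from the previous sketch).
print(trials["training_iteration"].mean())
```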
How did the metric progress in the best result?
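A sketch of extracting the best trial's per-epoch history, assuming `result_grid` is the Ray Tune `ResultGrid` for this search:

```python
# Assumed: `result_grid` is the Ray Tune ResultGrid for this search.
best = result_grid.get_best_result(metric="RMSE", mode="min")
best.metrics_dataframe.plot(x="training_iteration", y="RMSE")
```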
How did the metric progress in the longest results?
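And, under the same assumption, the trials that ran for the most iterations:

```python
import matplotlib.pyplot as plt

# Assumed: `result_grid` as above; plot RMSE trajectories for the five
# trials that completed the most training iterations.
longest = sorted(
    result_grid,
    key=lambda r: r.metrics["training_iteration"],
    reverse=True,
)[:5]
fig, ax = plt.subplots()
for res in longest:
    res.metrics_dataframe.plot(x="training_iteration", y="RMSE", ax=ax, legend=False)
ax.set_xlabel("epoch")
ax.set_ylabel("RMSE")
plt.show()
```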