This page analyzes the hyperparameter tuning results for the FlexMF scorer in explicit-feedback mode (a biased matrix factorization model trained with PyTorch).
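As background, biased matrix factorization predicts a rating as a global mean plus user and item biases plus an embedding dot product. The sketch below illustrates that scoring rule with NumPy; all names and values are illustrative, not FlexMF's actual internals.

```python
import numpy as np

# Minimal sketch of biased matrix factorization scoring -- the model family
# FlexMF implements in explicit-feedback mode. Names and values here are
# illustrative, not FlexMF's actual internals.
def score(mu, b_user, b_item, P, Q, u, i):
    """Predicted rating: global mean + user bias + item bias
    + dot product of the user and item embedding vectors."""
    return mu + b_user[u] + b_item[i] + P[u] @ Q[i]

# Tiny example: 2 users, 2 items, embedding dimension 3.
rng = np.random.default_rng(0)
mu = 3.5
b_user = np.array([0.2, -0.1])
b_item = np.array([0.1, 0.0])
P = rng.normal(size=(2, 3))  # user embedding matrix
Q = rng.normal(size=(2, 3))  # item embedding matrix
pred = score(mu, b_user, b_item, P, Q, 0, 1)
```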
## Parameter Search Space
| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | 3 ≤ \(x\) ≤ 10 | 4 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 | 0.119 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 | 0.00423 |
| reg_method | Categorical | Uniform | L2, AdamW | AdamW |
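The four distributions in the table can be mimicked with a simple sampling function. The sketch below is illustrative only — the actual search is driven by the tuner that produced the trial records in the next section, not by this code.

```python
import numpy as np

# Illustrative sampler for the search space in the table above
# (not the project's actual tuning harness).
rng = np.random.default_rng(42)

def sample_config(rng):
    return {
        # integer, uniform on 3..10 inclusive
        "embedding_size_exp": int(rng.integers(3, 11)),
        # log-uniform: sample uniformly in log space, then exponentiate
        "regularization": float(np.exp(rng.uniform(np.log(1e-4), np.log(10.0)))),
        "learning_rate": float(np.exp(rng.uniform(np.log(1e-3), np.log(0.1)))),
        # categorical, uniform over the two options
        "reg_method": str(rng.choice(["L2", "AdamW"])),
    }

cfg = sample_config(rng)
```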
## Final Result
The search selected the following configuration:
```
{
    'embedding_size_exp': 4,
    'regularization': 0.11929652807453098,
    'learning_rate': 0.00422821861051453,
    'reg_method': 'AdamW',
    'epochs': 5
}
```
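The name `embedding_size_exp` suggests it is a base-2 exponent of the actual embedding dimension — an assumption inferred from the name, not confirmed by the output above. Under that reading, the selected value works out to:

```python
# Assuming embedding_size_exp is a base-2 exponent (inferred from the
# parameter name, not confirmed by the tuning output), the selected value
# of 4 corresponds to this embedding dimension:
selected_exp = 4
embedding_size = 2 ** selected_exp
```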
With these metrics:
```
{
    'RBP': 0.16137180240142998,
    'DCG': 10.092204826620334,
    'NDCG': 0.41872709490730176,
    'RecipRank': 0.33805246283700296,
    'Hit10': 0.5869565217391305,
    'RMSE': 0.7614558935165405,
    'max_epochs': 50,
    'epoch_train_s': 0.7170857610180974,
    'epoch_measure_s': 6.831948532955721,
    'done': True,
    'training_iteration': 5,
    'trial_id': '839f8c8c',
    'date': '2025-09-30_00-05-01',
    'timestamp': 1759205101,
    'time_this_iter_s': 7.5527966022491455,
    'time_total_s': 37.48297643661499,
    'pid': 3954354,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size_exp': 4,
        'regularization': 0.11929652807453098,
        'learning_rate': 0.00422821861051453,
        'reg_method': 'AdamW',
        'epochs': 5
    },
    'time_since_restore': 37.48297643661499,
    'iterations_since_restore': 5
}
```
## Parameter Analysis
### Embedding Size
The embedding size is the hyperparameter that most directly shapes the model's structure, so let's look at performance as a function of it:
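That analysis can be sketched as a group-by over per-trial results. The `trials` frame below is hypothetical — the column names follow the metrics above, but the values are invented; a real analysis would load the tuner's trial log instead.

```python
import pandas as pd

# Hypothetical per-trial results (values invented for illustration).
trials = pd.DataFrame({
    "embedding_size_exp": [3, 3, 4, 4, 5],
    "RBP": [0.12, 0.13, 0.16, 0.15, 0.14],
})

# Mean search metric at each embedding size.
by_size = trials.groupby("embedding_size_exp")["RBP"].mean()
```

Plotting `by_size` (e.g. with `by_size.plot.bar()`) then shows the metric as a function of embedding size.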
## Iteration Completion
How many iterations, on average, did we complete?
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
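These questions can be answered from per-trial records like the one shown under Final Result. The sketch below uses an invented three-trial frame (column names follow the metrics dictionary above; the data is illustrative):

```python
import pandas as pd

# Hypothetical summary of three trials (one row per trial; values invented).
results = pd.DataFrame({
    "trial_id": ["a", "b", "c"],
    "training_iteration": [5, 3, 5],
    "NDCG": [0.38, 0.42, 0.40],
})

# How many iterations, on average, did trials complete?
mean_iters = results["training_iteration"].mean()

# Identify the best and the longest-running trials, whose per-iteration
# metric histories we would then trace.
best_trial = results.loc[results["NDCG"].idxmax(), "trial_id"]
longest_trial = results.loc[results["training_iteration"].idxmax(), "trial_id"]
```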