# FlexMF Explicit

This page analyzes the hyperparameter tuning results for the FlexMF scorer in explicit-feedback mode (a biased matrix factorization model trained with PyTorch).

## Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | 3 ≤ \(x\) ≤ 10 | 7 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 | 2.4 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 | 0.0148 |
| reg_method | Categorical | Uniform | L2, AdamW | AdamW |
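As a minimal sketch of how such a space can be sampled, the table above translates into standard-library draws as follows (the actual tuning code defines the space differently, likely through a tuning framework; this is only illustrative):

```python
import math
import random

def sample_config(rng: random.Random) -> dict:
    """Draw one configuration from the search space described above."""

    def log_uniform(lo: float, hi: float) -> float:
        # sample uniformly in log space, then exponentiate
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))

    return {
        "embedding_size_exp": rng.randint(3, 10),  # inclusive integer range
        "regularization": log_uniform(1e-4, 10.0),
        "learning_rate": log_uniform(1e-3, 0.1),
        "reg_method": rng.choice(["L2", "AdamW"]),
    }
```

The log-uniform draws spread samples evenly across orders of magnitude, which is why they are used for the regularization strength and learning rate.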
## Final Result

The search selected the following configuration:
```python
{
    'embedding_size_exp': 7,
    'regularization': 2.3974767212864188,
    'learning_rate': 0.014772891042984784,
    'reg_method': 'AdamW',
    'epochs': 10,
}
```
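The `embedding_size_exp` parameter appears to encode the embedding dimension as a power of two (an assumption based on its name), so the selected value of 7 would correspond to:

```python
# Assumption: embedding_size_exp is a base-2 exponent of the embedding dimension.
embedding_size_exp = 7
embedding_size = 2 ** embedding_size_exp
print(embedding_size)  # 128
```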
With these metrics:
```python
{
    'RBP': 0.029706024606954082,
    'DCG': 0.7649055729292705,
    'NDCG': 0.21476421198726356,
    'RecipRank': 0.11209593312487026,
    'Hit10': 0.2751322751322751,
    'RMSE': 0.9266439080238342,
    'max_epochs': 50,
    'epoch_train_s': 0.006704902974888682,
    'epoch_measure_s': 0.3568746990058571,
    'done': False,
    'training_iteration': 10,
    'trial_id': 'ead5348b',
    'date': '2025-09-30_19-10-58',
    'timestamp': 1759273858,
    'time_this_iter_s': 0.367112398147583,
    'time_total_s': 3.959475517272949,
    'pid': 694602,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size_exp': 7,
        'regularization': 2.3974767212864188,
        'learning_rate': 0.014772891042984784,
        'reg_method': 'AdamW',
        'epochs': 10,
    },
    'time_since_restore': 1.250558614730835,
    'iterations_since_restore': 3,
}
```
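The raw result mixes evaluation metrics with the tuner's bookkeeping fields (trial ID, timestamps, host information). A small sketch of separating the two, with `METRIC_KEYS` as an assumed list matching the metric names above:

```python
# Separate evaluation metrics from tuner bookkeeping in a raw trial result.
METRIC_KEYS = {"RBP", "DCG", "NDCG", "RecipRank", "Hit10", "RMSE"}

def extract_metrics(result: dict) -> dict:
    """Keep only the recognized evaluation-metric fields."""
    return {k: v for k, v in result.items() if k in METRIC_KEYS}

# abbreviated example in the shape of the result above
raw = {"NDCG": 0.2148, "RMSE": 0.9266, "trial_id": "ead5348b", "pid": 694602}
print(extract_metrics(raw))  # {'NDCG': 0.2148, 'RMSE': 0.9266}
```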
## Parameter Analysis

### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
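One way to summarize this, sketched over hypothetical trial records shaped like the result above (the real analysis presumably reads the tuning logs):

```python
from collections import defaultdict

def best_by_embedding_size(trials: list[dict]) -> dict[int, float]:
    """Best NDCG observed at each embedding size across all trials."""
    best: dict[int, float] = defaultdict(float)
    for t in trials:
        # assumed power-of-two encoding of the embedding dimension
        size = 2 ** t["config"]["embedding_size_exp"]
        best[size] = max(best[size], t["NDCG"])
    return dict(best)

trials = [
    {"config": {"embedding_size_exp": 6}, "NDCG": 0.19},
    {"config": {"embedding_size_exp": 7}, "NDCG": 0.21},
    {"config": {"embedding_size_exp": 7}, "NDCG": 0.18},
]
print(best_by_embedding_size(trials))  # {64: 0.19, 128: 0.21}
```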
### Learning Parameters

## Iteration Completion
How many iterations, on average, did we complete?
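This average can be read off the `training_iteration` field in each trial's final result; a sketch over hypothetical data:

```python
from statistics import mean

def mean_iterations(trials: list[dict]) -> float:
    """Mean number of completed training iterations across trials."""
    return mean(t["training_iteration"] for t in trials)

# hypothetical final results from three trials
print(mean_iterations([{"training_iteration": 10},
                       {"training_iteration": 4},
                       {"training_iteration": 7}]))
```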
How did the metric progress in the best result?
How did the metric progress in the longest results?
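Tracing a metric's progression means collecting that trial's per-iteration result rows in order; a sketch with hypothetical rows shaped like the result above:

```python
def progression(rows: list[dict], trial_id: str, metric: str = "NDCG") -> list[float]:
    """Metric values for one trial, ordered by training iteration."""
    mine = [r for r in rows if r["trial_id"] == trial_id]
    mine.sort(key=lambda r: r["training_iteration"])
    return [r[metric] for r in mine]

rows = [
    {"trial_id": "ead5348b", "training_iteration": 2, "NDCG": 0.18},
    {"trial_id": "ead5348b", "training_iteration": 1, "NDCG": 0.12},
    {"trial_id": "other", "training_iteration": 1, "NDCG": 0.10},
]
print(progression(rows, "ead5348b"))  # [0.12, 0.18]
```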