# FlexMF Explicit

This page analyzes the hyperparameter tuning results for the FlexMF scorer in explicit-feedback mode (a biased matrix factorization model trained with PyTorch).

## Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 | 8 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 | 0.0435 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 | 0.00786 |
| reg_method | Categorical | Uniform | L2, AdamW | L2 |
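As a sketch of what these distributions mean in practice (this is an illustrative stdlib-only sampler, not the tuner's actual implementation; the bounds are taken from the table above):

```python
import math
import random

def sample_config(rng: random.Random) -> dict:
    """Draw one candidate configuration from the search space above."""

    def log_uniform(lo: float, hi: float) -> float:
        # Log-uniform sampling: uniform in log space, then exponentiate,
        # so each order of magnitude is equally likely to be drawn.
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))

    return {
        "embedding_size": round(log_uniform(4, 512)),  # integer, log-uniform
        "regularization": log_uniform(1e-4, 10.0),     # float, log-uniform
        "learning_rate": log_uniform(1e-3, 0.1),       # float, log-uniform
        "reg_method": rng.choice(["L2", "AdamW"]),     # categorical, uniform
    }

cfg = sample_config(random.Random(42))
```

Sampling the sizes and rates in log space is what lets the search cover 0.0001–10 without spending almost all of its draws above 1.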
## Final Result

The search selected the following configuration:

```python
{
    'embedding_size': 8,
    'regularization': 0.04348568763678297,
    'learning_rate': 0.007860577021310463,
    'reg_method': 'L2',
    'epochs': 9,
}
```
With these metrics:

```python
{
    'RBP': 0.014196129440746583,
    'LogRBP': -0.4605459563536778,
    'NDCG': 0.17844322060908277,
    'RecipRank': 0.05514580220640624,
    'RMSE': 0.8223512184170895,
    'TrainTask': '3f3e3bae-5bf9-455d-af25-936807e5f8a4',
    'TrainTime': None,
    'TrainCPU': None,
    'max_epochs': 50,
    'done': True,
    'training_iteration': 9,
    'trial_id': 'ec487_00091',
    'date': '2025-05-05_06-15-09',
    'timestamp': 1746440109,
    'time_this_iter_s': 2.2098076343536377,
    'time_total_s': 22.274284601211548,
    'pid': 3582069,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size': 8,
        'regularization': 0.04348568763678297,
        'learning_rate': 0.007860577021310463,
        'reg_method': 'L2',
        'epochs': 9,
    },
    'time_since_restore': 2.2098076343536377,
    'iterations_since_restore': 1,
}
```
## Parameter Analysis

### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental structure, so let’s look at performance as a function of it:
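One way to compute this kind of summary is to group trial results by embedding size and average a metric within each group. This sketch uses hypothetical trial records with a couple of fields from the result dict shown above; a real analysis would use the tuner's full result table:

```python
from collections import defaultdict

# Hypothetical trial records (field names follow the result dict above).
trials = [
    {"embedding_size": 8, "RMSE": 0.822},
    {"embedding_size": 8, "RMSE": 0.830},
    {"embedding_size": 64, "RMSE": 0.861},
    {"embedding_size": 64, "RMSE": 0.855},
]

# Group RMSE values by embedding size.
by_size: dict[int, list[float]] = defaultdict(list)
for t in trials:
    by_size[t["embedding_size"]].append(t["RMSE"])

# Mean RMSE per embedding size, in size order.
mean_rmse = {size: sum(v) / len(v) for size, v in sorted(by_size.items())}
```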
### Learning Parameters

### Iteration Completion
How many iterations, on average, did we complete?
How did the metric progress in the best result?
How did the metric progress in the longest results?
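The average iteration count can be computed directly from each trial's `training_iteration` field. The counts below are hypothetical stand-ins; in a real run they would come from the tuner's results, capped at `max_epochs = 50`:

```python
# Hypothetical per-trial iteration counts (a real run reads each trial's
# `training_iteration`, which is capped at max_epochs = 50).
iterations = [9, 50, 3, 17, 50, 6]

# Average number of completed iterations across trials.
mean_iters = sum(iterations) / len(iterations)

# Trials that ran all the way to the epoch cap.
completed_all = sum(1 for it in iterations if it == 50)
```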