# FlexMF WARP

This page analyzes the hyperparameter tuning results for the FlexMF scorer in implicit-feedback mode with WARP loss.

## Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | \(3 \le x \le 10\) | 6 |
| regularization | Float | LogUniform | \(0.0001 \le x \le 10\) | 0.211 |
| learning_rate | Float | LogUniform | \(0.001 \le x \le 0.1\) | 0.0781 |
| reg_method | Categorical | Uniform | L2, AdamW | AdamW |
| item_bias | Categorical | Uniform | True, False | True |
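The bookkeeping fields in the final result below (`trial_id`, `training_iteration`, `time_total_s`) suggest the search was driven by Ray Tune. As a minimal sketch under that assumption, the space above could be declared like this:

```python
from ray import tune

# Sketch of the search space table above, assuming Ray Tune as the driver
# (suggested by the trial_id / training_iteration fields in the results).
search_space = {
    "embedding_size_exp": tune.randint(3, 11),      # integer, upper bound exclusive
    "regularization": tune.loguniform(1e-4, 10.0),  # log-uniform float
    "learning_rate": tune.loguniform(1e-3, 0.1),    # log-uniform float
    "reg_method": tune.choice(["L2", "AdamW"]),     # uniform categorical
    "item_bias": tune.choice([True, False]),        # uniform categorical
}
```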
## Final Result
The search selected the following configuration:
```python
{
    'embedding_size_exp': 6,
    'regularization': 0.21080704579971982,
    'learning_rate': 0.0780646991058362,
    'reg_method': 'AdamW',
    'item_bias': True,
    'epochs': 13,
}
```
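The `_exp` suffix suggests (an assumption, not stated above) that this parameter selects a power-of-two embedding size, so the chosen exponent of 6 corresponds to 64 dimensions:

```python
# Assumption: embedding_size_exp is a base-2 exponent for the embedding size.
embedding_size = 2 ** 6
print(embedding_size)  # 64
```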
With these metrics:
```python
{
    'RBP': 0.1317286837759703,
    'DCG': 1.4334711164614036,
    'NDCG': 0.40247882304526417,
    'RecipRank': 0.4149979501124277,
    'Hit10': 0.7566137566137566,
    'max_epochs': 50,
    'epoch_train_s': 0.11083783418871462,
    'epoch_measure_s': 0.23467652103863657,
    'done': False,
    'training_iteration': 13,
    'trial_id': '72878258',
    'date': '2025-09-30_18-03-55',
    'timestamp': 1759269835,
    'time_this_iter_s': 0.34922122955322266,
    'time_total_s': 4.267271280288696,
    'pid': 478906,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size_exp': 6,
        'regularization': 0.21080704579971982,
        'learning_rate': 0.0780646991058362,
        'reg_method': 'AdamW',
        'item_bias': True,
        'epochs': 13,
    },
    'time_since_restore': 4.267271280288696,
    'iterations_since_restore': 13,
}
```
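Most of those fields are scheduler bookkeeping (timing, host, and trial identifiers); only the first five are evaluation metrics. A small sketch that pulls them out for readability:

```python
# Trimmed copy of the evaluation metrics from the result record above;
# the remaining fields are Ray Tune bookkeeping (timing, host, trial id).
metrics = {
    "RBP": 0.1317286837759703,
    "DCG": 1.4334711164614036,
    "NDCG": 0.40247882304526417,
    "RecipRank": 0.4149979501124277,
    "Hit10": 0.7566137566137566,
}
for name, value in metrics.items():
    print(f"{name:>10s}: {value:.4f}")
```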
## Parameter Analysis

### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
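The plot is not reproduced here, but as a sketch of how it could be generated, assuming the trial log is exported to a DataFrame with one row per trial (the file name and column names are assumptions):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical DataFrame of tuning trials: one row per trial, with the
# sampled hyperparameters and final metrics as columns.
trials = pd.read_csv("tuning-results.csv")  # assumed export of the trial log

# Median NDCG at each embedding size exponent, to smooth over the other
# hyperparameters that vary between trials.
by_size = trials.groupby("embedding_size_exp")["NDCG"].median()
by_size.plot(marker="o")
plt.xlabel("embedding_size_exp (size = $2^x$)")
plt.ylabel("median NDCG")
plt.show()
```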
### Learning Parameters
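One hedged way to look at the two log-uniform learning parameters, reusing the hypothetical `trials` frame from the previous sketch, is to scatter them against each other and color by NDCG:

```python
import matplotlib.pyplot as plt

# Scatter of the two log-uniform parameters, colored by NDCG, using the
# hypothetical `trials` DataFrame from the previous sketch.
sc = plt.scatter(trials["learning_rate"], trials["regularization"],
                 c=trials["NDCG"], cmap="viridis")
plt.xscale("log")
plt.yscale("log")
plt.xlabel("learning rate")
plt.ylabel("regularization")
plt.colorbar(sc, label="NDCG")
plt.show()
```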
## Iteration Completion
How many iterations, on average, did we complete?
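With the hypothetical `trials` frame, the answer comes straight from Ray Tune's `training_iteration` field:

```python
# Mean number of completed training iterations per trial.
print(trials["training_iteration"].mean())
```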
How did the metric progress in the best result?
How did the metric progress in the longest results?
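Answering these last two questions takes the epoch-level history rather than final results. A sketch, assuming a hypothetical long-format frame `history` with `trial_id`, `training_iteration`, and `NDCG` columns, that draws curves for the best-scoring trial alongside the longest-running ones:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical epoch-level log: one row per (trial, iteration).
history = pd.read_csv("tuning-history.csv")  # assumed export

# The trials that ran the longest, plus the best-scoring trial.
lengths = history.groupby("trial_id")["training_iteration"].max()
longest = lengths.nlargest(5).index
best = history.loc[history["NDCG"].idxmax(), "trial_id"]

for tid in set(longest) | {best}:
    curve = history[history["trial_id"] == tid]
    plt.plot(curve["training_iteration"], curve["NDCG"], label=tid)
plt.xlabel("training iteration")
plt.ylabel("NDCG")
plt.legend(title="trial")
plt.show()
```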