# FlexMF WARP

This page analyzes the hyperparameter tuning results for the FlexMF scorer in implicit-feedback mode with WARP loss.

## Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | \(3 \le x \le 10\) | 5 |
| regularization | Float | LogUniform | \(0.0001 \le x \le 10\) | 0.0235 |
| learning_rate | Float | LogUniform | \(0.001 \le x \le 0.1\) | 0.00304 |
| reg_method | Categorical | Uniform | L2, AdamW | AdamW |
| item_bias | Categorical | Uniform | True, False | True |
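The `embedding_size_exp` parameter appears to be the base-2 exponent of the embedding dimension, so the selected value of 5 would correspond to 32-dimensional embeddings. The trial metadata in the final result below (`trial_id`, `training_iteration`, `time_total_s`) looks like Ray Tune output; as a minimal sketch under that assumption, the space above could be declared like this (the tool choice is not confirmed by the page itself):

```python
from ray import tune

# Sketch of the search space above, assuming Ray Tune.
search_space = {
    # randint's upper bound is exclusive, so (3, 11) draws integers 3..10
    "embedding_size_exp": tune.randint(3, 11),
    "regularization": tune.loguniform(1e-4, 10.0),  # LogUniform over [0.0001, 10]
    "learning_rate": tune.loguniform(1e-3, 0.1),    # LogUniform over [0.001, 0.1]
    "reg_method": tune.choice(["L2", "AdamW"]),     # uniform over the two categories
    "item_bias": tune.choice([True, False]),
}
```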
## Final Result

The search selected the following configuration:
```python
{
    'embedding_size_exp': 5,
    'regularization': 0.023507830034360473,
    'learning_rate': 0.003042904939520177,
    'reg_method': 'AdamW',
    'item_bias': True,
    'epochs': 18,
}
```
With these metrics:
```python
{
    'RBP': 0.2224918914617804,
    'DCG': 11.970690046578941,
    'NDCG': 0.4435227643596746,
    'RecipRank': 0.40112016702900927,
    'Hit10': 0.6238383838383839,
    'max_epochs': 50,
    'epoch_train_s': 14.259723965078592,
    'epoch_measure_s': 6.25665595009923,
    'done': True,
    'training_iteration': 18,
    'trial_id': 'c4a81999',
    'date': '2025-10-01_20-04-09',
    'timestamp': 1759363449,
    'time_this_iter_s': 20.523654222488403,
    'time_total_s': 553.1135029792786,
    'pid': 1380443,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size_exp': 5,
        'regularization': 0.023507830034360473,
        'learning_rate': 0.003042904939520177,
        'reg_method': 'AdamW',
        'item_bias': True,
        'epochs': 18,
    },
    'time_since_restore': 553.1135029792786,
    'iterations_since_restore': 18,
}
```
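If the run was indeed driven by Ray Tune, the two dictionaries above are what a best-trial query returns. A hedged sketch, reusing the `search_space` from earlier; `train_flexmf` is a hypothetical trainable, the sample count is invented, and RBP as the selection metric is a guess:

```python
from ray import tune

tuner = tune.Tuner(
    train_flexmf,  # hypothetical trainable wrapping FlexMF training and evaluation
    param_space=search_space,
    tune_config=tune.TuneConfig(metric="RBP", mode="max", num_samples=60),  # count assumed
)
results = tuner.fit()
best = results.get_best_result()
print(best.config)   # configuration dictionary like the one above
print(best.metrics)  # metric dictionary like the one above
```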
## Parameter Analysis

### Embedding Size
The embedding size is the hyperparameter with the greatest effect on the model's fundamental structure, so let's look at performance as a function of it:
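As a sketch of how performance versus embedding size might be examined, assuming the per-trial results are available as a CSV export (the file name and metric column are assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt

trials = pd.read_csv("flexmf-warp-trials.csv")  # hypothetical export of the tuning run

# Aggregate the ranking metric over the trials at each embedding size.
by_size = trials.groupby("embedding_size_exp")["RBP"].agg(["mean", "max"])
by_size.plot(marker="o")
plt.xlabel("log2(embedding size)")
plt.ylabel("RBP")
plt.show()
```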
### Learning Parameters
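One way to examine the two log-uniform learning parameters together, continuing with the assumed trials frame from the previous sketch, is a log-log scatter colored by the metric:

```python
# Joint view of learning rate and regularization, colored by the ranking metric.
sc = plt.scatter(trials["learning_rate"], trials["regularization"],
                 c=trials["RBP"], cmap="viridis")
plt.xscale("log")
plt.yscale("log")
plt.xlabel("learning_rate")
plt.ylabel("regularization")
plt.colorbar(sc, label="RBP")
plt.show()
```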
## Iteration Completion
How many iterations, on average, did we complete?
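Continuing with the assumed trials frame, a sketch of that computation:

```python
# Mean number of epochs completed per trial (column name from the Ray Tune output above).
print(trials["training_iteration"].mean())
```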
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
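Both questions can be answered from per-epoch logs, if they were kept. A sketch assuming a hypothetical `flexmf-warp-epochs.csv` with `trial_id`, `training_iteration`, and `RBP` columns:

```python
import pandas as pd
import matplotlib.pyplot as plt

epochs = pd.read_csv("flexmf-warp-epochs.csv")  # hypothetical per-epoch log

# Metric progression of the best trial (ID taken from the final result above).
best_curve = epochs[epochs["trial_id"] == "c4a81999"]
plt.plot(best_curve["training_iteration"], best_curve["RBP"], label="best trial")

# Metric progression of the longest-running trials.
lengths = epochs.groupby("trial_id")["training_iteration"].max()
for tid in lengths.nlargest(3).index:
    cur = epochs[epochs["trial_id"] == tid]
    plt.plot(cur["training_iteration"], cur["RBP"], alpha=0.5, label=tid)

plt.xlabel("epoch")
plt.ylabel("RBP")
plt.legend()
plt.show()
```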