# FlexMF WARP

This page analyzes the hyperparameter tuning results for the FlexMF scorer in implicit-feedback mode with WARP loss.

## Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 | 78 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 | 0.35 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 | 0.0781 |
| reg_method | Categorical | Uniform | L2, AdamW | AdamW |
| item_bias | Categorical | Uniform | True, False | False |
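The trial metadata in the final result below indicates the search was run with Ray Tune. As a minimal sketch (variable names are illustrative, not taken from the actual tuning code), the space above might be expressed in Ray Tune's API like this:

```python
from ray import tune

# Sketch of the search space from the table above.
# lograndint/loguniform sample on a log scale, matching the LogUniform
# distributions; choice samples uniformly over the listed categories.
search_space = {
    "embedding_size": tune.lograndint(4, 513),      # upper bound is exclusive
    "regularization": tune.loguniform(1e-4, 10.0),
    "learning_rate": tune.loguniform(1e-3, 0.1),
    "reg_method": tune.choice(["L2", "AdamW"]),
    "item_bias": tune.choice([True, False]),
}
```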
## Final Result
The search selected the following configuration:
```python
{
    'embedding_size': 78,
    'regularization': 0.3495563013306563,
    'learning_rate': 0.07805734732903105,
    'reg_method': 'AdamW',
    'item_bias': False,
    'epochs': 8,
}
```
With these metrics:
```python
{
    'RBP': 0.12610737218832532,
    'LogRBP': 1.7236183950835242,
    'NDCG': 0.39538753216569494,
    'RecipRank': 0.40121002845877846,
    'TrainTask': '3ddfc519-3ec6-45cb-bf2d-c988f0451bad',
    'TrainTime': None,
    'TrainCPU': None,
    'max_epochs': 50,
    'done': False,
    'training_iteration': 8,
    'trial_id': '58582799',
    'date': '2025-05-07_17-36-23',
    'timestamp': 1746653783,
    'time_this_iter_s': 0.5173976421356201,
    'time_total_s': 4.576089143753052,
    'pid': 1353206,
    'hostname': 'gracehopper1',
    'node_ip': '192.168.225.60',
    'config': {
        'embedding_size': 78,
        'regularization': 0.3495563013306563,
        'learning_rate': 0.07805734732903105,
        'reg_method': 'AdamW',
        'item_bias': False,
        'epochs': 8,
    },
    'time_since_restore': 4.576089143753052,
    'iterations_since_restore': 8,
}
```
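To use the selected configuration outside the tuning harness, the scorer might be instantiated along these lines. This is a hypothetical sketch: the class name `FlexMFImplicitScorer` and its keyword arguments are assumptions mirroring the tuned hyperparameter names, and should be checked against the FlexMF documentation.

```python
from lenskit.flexmf import FlexMFImplicitScorer  # assumed module/class name

# Hypothetical: keyword names mirror the tuned hyperparameters above;
# verify the actual constructor signature before relying on this.
scorer = FlexMFImplicitScorer(
    loss="warp",               # implicit-feedback mode with WARP loss
    embedding_size=78,
    regularization=0.3495563013306563,
    learning_rate=0.07805734732903105,
    reg_method="AdamW",
    item_bias=False,
    epochs=8,
)
```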
## Parameter Analysis
### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
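A sketch of such a plot, assuming the per-trial results are available as a pandas DataFrame `trials` (e.g., from Ray Tune's `ResultGrid.get_dataframe()`, which flattens config values into `config/…` columns):

```python
import matplotlib.pyplot as plt

# Assumes `trials` holds one row per trial, with the metric columns
# seen in the result record above.
fig, ax = plt.subplots()
ax.scatter(trials["config/embedding_size"], trials["NDCG"], alpha=0.5)
ax.set_xscale("log", base=2)  # the size was searched log-uniformly
ax.set_xlabel("Embedding size")
ax.set_ylabel("NDCG")
plt.show()
```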
### Learning Parameters
## Iteration Completion
How many iterations, on average, did we complete?
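One way to answer this from the Ray Tune results (same assumed `trials` DataFrame as above):

```python
# Each trial reports the number of training iterations it completed
# in the `training_iteration` column.
mean_iters = trials["training_iteration"].mean()
print(f"average iterations completed: {mean_iters:.1f}")
```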
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
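Both progression questions can be answered from the per-iteration histories Ray Tune keeps for each trial. A sketch, assuming a `ray.tune.ResultGrid` named `results` and NDCG as the tracked metric:

```python
import matplotlib.pyplot as plt

# Each Result in the grid carries a per-iteration metrics_dataframe.
best = results.get_best_result(metric="NDCG", mode="max")
longest = max(
    (results[i] for i in range(len(results))),
    key=lambda r: r.metrics["training_iteration"],
)

fig, ax = plt.subplots()
for label, res in [("best", best), ("longest", longest)]:
    hist = res.metrics_dataframe
    ax.plot(hist["training_iteration"], hist["NDCG"], label=label)
ax.set_xlabel("Training iteration")
ax.set_ylabel("NDCG")
ax.legend()
plt.show()
```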