# FlexMF Explicit on ML10M

This page analyzes the hyperparameter tuning results for the FlexMF scorer in explicit-feedback mode (a biased matrix factorization model trained with PyTorch).

## Parameter Search Space

| Parameter | Type | Distribution | Values |
|---|---|---|---|
| embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 |
| reg_method | Categorical | Uniform | L2, AdamW |
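For concreteness, a search space like this could be written with Ray Tune's sampling primitives (the trial metadata below suggests Ray Tune was used; this is an illustrative sketch, not the project's actual tuning script):

```python
from ray import tune

# Hypothetical sketch of the search space above using Ray Tune's
# sampling primitives; the actual tuning script may differ.
search_space = {
    # log-uniform integers in [4, 512] (lograndint's upper bound is exclusive)
    "embedding_size": tune.lograndint(4, 513),
    # log-uniform floats
    "regularization": tune.loguniform(1e-4, 10.0),
    "learning_rate": tune.loguniform(1e-3, 0.1),
    # uniform choice over the two regularization methods
    "reg_method": tune.choice(["L2", "AdamW"]),
}
```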
## Final Result
The search selected the following configuration:
    {
        'embedding_size': 14,
        'regularization': 0.04977041557904842,
        'learning_rate': 0.002501599473184507,
        'reg_method': 'L2',
        'epochs': 7
    }
With these metrics:
    {
        'RBP': 0.1201926373103401,
        'NDCG': 0.4037863375695813,
        'RecipRank': 0.2678976522436296,
        'RMSE': 0.7612197947196359,
        'TrainTask': 'd8d9000f-2f57-4253-abfd-bf24a153a4ad',
        'TrainTime': None,
        'TrainCPU': None,
        'max_epochs': 50,
        'epoch': 7,
        'done': True,
        'training_iteration': 7,
        'trial_id': '3b368_00065',
        'date': '2025-04-02_22-14-42',
        'timestamp': 1743646482,
        'time_this_iter_s': 7.709898233413696,
        'time_total_s': 57.655722856521606,
        'pid': 323936,
        'hostname': 'CCI-ws21',
        'node_ip': '10.248.127.152',
        'config': {
            'embedding_size': 14,
            'regularization': 0.04977041557904842,
            'learning_rate': 0.002501599473184507,
            'reg_method': 'L2',
            'epochs': 7
        },
        'time_since_restore': 57.655722856521606,
        'iterations_since_restore': 7,
        'experiment_tag': '65_embedding_size=14,learning_rate=0.0025,reg_method=L2,regularization=0.0498'
    }
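The bookkeeping fields (`trial_id`, `training_iteration`, `experiment_tag`) come from Ray Tune. As a hedged sketch, a record like this could be retrieved from a saved experiment as follows; the storage path, the `train_flexmf` trainable, and the choice of RBP as the selection metric are all assumptions for illustration:

```python
from ray import tune

# Placeholder: the training function originally passed to the Tuner.
from my_tuning_script import train_flexmf  # hypothetical module

# Restore the finished experiment from its storage directory (path is a placeholder).
tuner = tune.Tuner.restore("results/flexmf-explicit-ml10m", trainable=train_flexmf)
results = tuner.get_results()

# Select the trial with the best RBP, assuming RBP was the tuning objective.
best = results.get_best_result(metric="RBP", mode="max")
print(best.config)   # the configuration shown above
print(best.metrics)  # the metrics record shown above
```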
## Parameter Analysis

### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
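A minimal sketch of how such a plot can be produced from the per-trial table, assuming the `ResultGrid` from the earlier sketch and Ray Tune's `config/...` column naming convention:

```python
import matplotlib.pyplot as plt

# One row per trial; `results` is the ResultGrid from the earlier sketch.
df = results.get_dataframe()

fig, ax = plt.subplots()
ax.scatter(df["config/embedding_size"], df["RBP"])
ax.set_xscale("log", base=2)  # match the log-uniform search distribution
ax.set_xlabel("embedding size")
ax.set_ylabel("RBP")
plt.show()
```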
### Learning Parameters
## Iteration Completion
How many iterations, on average, did we complete?
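This can be read directly off the per-trial table from the sketches above, where `training_iteration` records each trial's last completed iteration:

```python
# Average number of training iterations completed across all trials.
print(df["training_iteration"].mean())
```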
How did the metric progress in the best result?
How did the metric progress in the longest results?
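Both progression questions draw on per-iteration histories rather than the final-row summaries. A sketch, assuming each trial's history is available through Ray Tune's `Result.metrics_dataframe` (the choice of RBP and of five longest trials are illustrative assumptions):

```python
import matplotlib.pyplot as plt

# The five trials that trained for the most epochs, from the ResultGrid above.
# The best trial's history is available the same way via best.metrics_dataframe.
longest = sorted(results, key=lambda r: r.metrics["training_iteration"], reverse=True)[:5]

fig, ax = plt.subplots()
for result in longest:
    hist = result.metrics_dataframe  # one row per reported training iteration
    ax.plot(hist["training_iteration"], hist["RBP"], label=result.metrics["trial_id"])
ax.set_xlabel("epoch")
ax.set_ylabel("RBP")
ax.legend(title="trial")
plt.show()
```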