# ALS BiasedMF

This page analyzes the hyperparameter tuning results for biased matrix factorization with ALS.

## Parameter Search Space

Parameter | Type | Distribution | Values | Selected |
---|---|---|---|---|
embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 | 441 |
regularization.user | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.954 |
regularization.item | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.00924 |
damping.user | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 0.00816 |
damping.item | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 0.384 |
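As a rough illustration, a search space like the one in the table can be sketched in plain Python. The parameter names and bounds mirror the table, but the sampler below is an assumption for illustration only; the actual search presumably used the tuning framework's own sampler.

```python
import math
import random

# Search space mirroring the table: name -> (low, high), all log-uniform.
SPACE = {
    "embedding_size": (4, 512),
    "regularization.user": (1e-5, 1.0),
    "regularization.item": (1e-5, 1.0),
    "damping.user": (1e-12, 100.0),
    "damping.item": (1e-12, 100.0),
}

def sample_loguniform(low, high, rng=random):
    """Draw a value whose logarithm is uniform on [log(low), log(high)]."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

def sample_config(rng=random):
    cfg = {name: sample_loguniform(lo, hi, rng) for name, (lo, hi) in SPACE.items()}
    # embedding_size is an integer parameter, so round the continuous draw
    cfg["embedding_size"] = int(round(cfg["embedding_size"]))
    return cfg
```

Log-uniform sampling spends equal effort on each order of magnitude, which is why it suits parameters like regularization strength whose plausible values span several decades.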
## Final Result

The search selected the following configuration:
```python
{
    'embedding_size': 441,
    'regularization': {'user': 0.9542511656015913, 'item': 0.009237124529306156},
    'damping': {'user': 0.008160369796354735, 'item': 0.38372963480137023},
    'epochs': 4,
}
```
With these metrics:
```python
{
    'RBP': 0.003625788332921936,
    'LogRBP': -1.8254435733962806,
    'NDCG': 0.18033139536536427,
    'RecipRank': 0.024423803959309594,
    'RMSE': 0.9294814075584765,
    'TrainTask': '0534a67f-9a16-4fd8-b726-3cc15c230b39',
    'TrainTime': None,
    'TrainCPU': None,
    'max_epochs': 30,
    'done': False,
    'training_iteration': 4,
    'trial_id': 'b3fea0d2',
    'date': '2025-05-06_23-35-15',
    'timestamp': 1746588915,
    'time_this_iter_s': 1.2646234035491943,
    'time_total_s': 4.98090124130249,
    'pid': 1028419,
    'hostname': 'gracehopper1',
    'node_ip': '192.168.225.60',
    'config': {
        'embedding_size': 441,
        'regularization': {'user': 0.9542511656015913, 'item': 0.009237124529306156},
        'damping': {'user': 0.008160369796354735, 'item': 0.38372963480137023},
        'epochs': 4,
    },
    'time_since_restore': 4.98090124130249,
    'iterations_since_restore': 4,
}
```
## Parameter Analysis

### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
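One way to examine performance as a function of embedding size is to bucket trials by size on a log scale and summarize the metric per bucket. The sketch below uses made-up trial data and assumed column names (`embedding_size`, `NDCG`); it is not the actual tuning log.

```python
import numpy as np
import pandas as pd

# Hypothetical per-trial results table; values are illustrative only.
trials = pd.DataFrame({
    "embedding_size": [8, 16, 64, 128, 441, 512],
    "NDCG": [0.11, 0.13, 0.16, 0.17, 0.18, 0.175],
})

# Bucket sizes on a log2 scale so each bin spans one doubling of the
# embedding dimension (matching the log-uniform search distribution).
trials["size_bin"] = 2 ** np.floor(np.log2(trials["embedding_size"]))
summary = trials.groupby("size_bin")["NDCG"].mean()
```

Plotting `summary` (or a scatter of the raw trials on a log-x axis) then shows whether the metric keeps improving with larger embeddings or plateaus.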
### Learning Parameters

#### Iteration Completion
How many iterations, on average, did we complete?
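This can be computed from the last reported `training_iteration` of each trial (the field shown in the result dict above). The trial data below is invented for illustration; `max_epochs=30` comes from the result dict.

```python
import pandas as pd

# Hypothetical last-checkpoint rows, one per trial; values are illustrative.
trials = pd.DataFrame({
    "trial_id": ["a", "b", "c", "d"],
    "training_iteration": [4, 30, 12, 30],
})

# Average number of completed iterations across trials.
mean_iters = trials["training_iteration"].mean()

# Fraction of trials that ran to the full epoch budget (max_epochs = 30),
# i.e. were never stopped early by the scheduler.
frac_full = (trials["training_iteration"] >= 30).mean()
```

A low average relative to `max_epochs` indicates the scheduler stopped most trials early, which is expected when early-stopping schedulers prune unpromising configurations.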
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
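Both questions can be answered from a per-iteration metric history. The sketch below uses invented data and assumed column names; selecting the best trial by its final NDCG and extracting its learning curve is one reasonable approach, not necessarily the report's own.

```python
import pandas as pd

# Hypothetical progress log: one row per (trial, iteration); values are
# illustrative only.
history = pd.DataFrame({
    "trial_id": ["a", "a", "a", "b", "b"],
    "training_iteration": [1, 2, 3, 1, 2],
    "NDCG": [0.10, 0.14, 0.16, 0.09, 0.12],
})

# Final metric per trial = last row after sorting by iteration.
final = history.sort_values("training_iteration").groupby("trial_id").last()
best_trial = final["NDCG"].idxmax()

# Learning curve of the best trial, indexed by iteration.
curve = (history[history["trial_id"] == best_trial]
         .set_index("training_iteration")["NDCG"])
```

The same pattern, filtered to trials with the most iterations instead of the best final metric, yields the curves for the longest-running trials.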