ALS BiasedMF

This page analyzes the hyperparameter tuning results for biased matrix factorization with ALS.

Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | 3 ≤ \(x\) ≤ 10 | 9 |
| regularization.user | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.367 |
| regularization.item | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.00622 |
| damping.user | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 38.9 |
| damping.item | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 1.83e-11 |
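The result metadata below (trial IDs, per-iteration timing fields) suggests the search was run with Ray Tune. As a rough illustration only, the space in the table could be expressed along these lines; the dictionary layout and parameter names come from the table, but the actual tuning script is an assumption:

```python
from ray import tune

# Hypothetical reconstruction of the search space in the table above;
# the real tuning script may declare it differently.
search_space = {
    "embedding_size_exp": tune.randint(3, 11),  # integer-uniform over 3..10 (upper bound exclusive)
    "regularization": {
        "user": tune.loguniform(1e-5, 1.0),
        "item": tune.loguniform(1e-5, 1.0),
    },
    "damping": {
        "user": tune.loguniform(1e-12, 100.0),
        "item": tune.loguniform(1e-12, 100.0),
    },
}
```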
Final Result
Searching selected the following configuration:
    {
        'embedding_size_exp': 9,
        'regularization': {'user': 0.3668378993142183, 'item': 0.006219227061483342},
        'damping': {'user': 38.8642624310209, 'item': 1.8255907394413952e-11},
        'epochs': 6
    }
With these metrics:
    {
        'RBP': 0.05523347798821119,
        'DCG': 9.63330762244665,
        'NDCG': 0.35687021161606486,
        'RecipRank': 0.09677467755210865,
        'Hit10': 0.37797979797979797,
        'RMSE': 0.7656736373901367,
        'max_epochs': 30,
        'epoch_train_s': 102.25948791089468,
        'epoch_measure_s': 102.1872130590491,
        'done': True,
        'training_iteration': 6,
        'trial_id': '47efdb6a',
        'date': '2025-10-01_03-06-58',
        'timestamp': 1759302418,
        'time_this_iter_s': 204.45267152786255,
        'time_total_s': 1151.6100106239319,
        'pid': 819214,
        'hostname': 'CCI-ws21',
        'node_ip': '10.248.127.152',
        'config': {
            'embedding_size_exp': 9,
            'regularization': {'user': 0.3668378993142183, 'item': 0.006219227061483342},
            'damping': {'user': 38.8642624310209, 'item': 1.8255907394413952e-11},
            'epochs': 6
        },
        'time_since_restore': 1151.6100106239319,
        'iterations_since_restore': 6
    }
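To make the selected configuration easier to read, here is a small sketch that unpacks it into concrete model settings. The interpretation that `embedding_size_exp` encodes a power-of-two embedding size is an assumption, inferred from the parameter name and the 3–10 search range:

```python
# Hypothetical interpretation of the selected configuration above.
best_config = {
    "embedding_size_exp": 9,
    "regularization": {"user": 0.3668378993142183, "item": 0.006219227061483342},
    "damping": {"user": 38.8642624310209, "item": 1.8255907394413952e-11},
    "epochs": 6,
}

# Assumption: the search tunes the exponent, so the embedding size is 2**exp.
embedding_size = 2 ** best_config["embedding_size_exp"]
print(embedding_size)  # 512
```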
Parameter Analysis
Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
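As a rough illustration, a plot like the one described here could be produced from the per-trial results; the file name and the Ray Tune-style `config/...` column layout are assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical: one row per trial, with flattened config columns as produced
# by Ray Tune's ResultGrid.get_dataframe() (e.g. "config/embedding_size_exp").
results = pd.read_csv("als-biasedmf-trials.csv")  # path is an assumption

# Assumption: the tuned exponent encodes a power-of-two embedding size.
results["embedding_size"] = 2 ** results["config/embedding_size_exp"]

summary = results.groupby("embedding_size")["NDCG"].agg(["mean", "max"])
summary.plot(marker="o", logx=True)
plt.xlabel("embedding size")
plt.ylabel("NDCG")
plt.show()
```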
Learning Parameters
Iteration Completion
How many iterations, on average, did we complete?
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
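These summaries presumably come from the per-iteration training logs. A hedged sketch of how they could be computed, assuming a long-format table with one row per (trial, training iteration), such as the concatenation of each trial's Ray Tune progress.csv (file name and column names are assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical long-format log: one row per (trial_id, training_iteration).
progress = pd.read_csv("als-biasedmf-progress.csv")  # path is an assumption

# Average number of iterations each trial completed.
completed = progress.groupby("trial_id")["training_iteration"].max()
print("mean iterations completed:", completed.mean())

# Metric trajectory for the best trial and the longest-running trials.
best_trial = progress.loc[progress["NDCG"].idxmax(), "trial_id"]
longest = completed.nlargest(5).index
for tid in {best_trial, *longest}:
    trial = progress[progress["trial_id"] == tid]
    plt.plot(trial["training_iteration"], trial["NDCG"], label=tid)
plt.xlabel("training iteration")
plt.ylabel("NDCG")
plt.legend()
plt.show()
```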