ALS BiasedMF
This page analyzes the hyperparameter tuning results for biased matrix factorization with ALS.
Parameter Search Space

Parameter | Type | Distribution | Values | Selected
---|---|---|---|---
embedding_size_exp | Integer | Uniform | 3 ≤ \(x\) ≤ 10 | 9
regularization.user | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.397
regularization.item | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.0229
damping.user | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 7.05
damping.item | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 18.8
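The result record in the next section carries Ray Tune bookkeeping fields (`trial_id`, `training_iteration`, `time_total_s`), which suggests the search was run through Ray Tune. The snippet below is a minimal sketch of how this search space could be declared with Ray Tune's sampling primitives; the variable name `search_space` and the surrounding script are assumptions, not the actual tuning code.

```python
from ray import tune

# Sketch of the search space from the table above (assumed Ray Tune declaration).
search_space = {
    # integer, uniform on [3, 10]; tune.randint's upper bound is exclusive
    "embedding_size_exp": tune.randint(3, 11),
    "regularization": {
        "user": tune.loguniform(1e-5, 1.0),
        "item": tune.loguniform(1e-5, 1.0),
    },
    "damping": {
        "user": tune.loguniform(1e-12, 100.0),
        "item": tune.loguniform(1e-12, 100.0),
    },
}
```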
Final Result
The search selected the following configuration:

    {
        'embedding_size_exp': 9,
        'regularization': {'user': 0.3968940346316714, 'item': 0.022922031117155444},
        'damping': {'user': 7.046887794620691, 'item': 18.81840526361956},
        'epochs': 3
    }
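Note that `embedding_size_exp` is an exponent rather than a dimension count; assuming it is a base-2 exponent (its 3-to-10 range suggests embedding sizes of 8 to 1024), the selected value of 9 corresponds to 512 latent features. A small sketch of unpacking the selected configuration into concrete model arguments, with that assumption made explicit:

```python
config = {
    "embedding_size_exp": 9,
    "regularization": {"user": 0.3968940346316714, "item": 0.022922031117155444},
    "damping": {"user": 7.046887794620691, "item": 18.81840526361956},
    "epochs": 3,
}

# Assumption: embedding_size_exp is a base-2 exponent, so the actual
# embedding dimension is 2 ** embedding_size_exp.
features = 2 ** config["embedding_size_exp"]          # 512
user_reg = config["regularization"]["user"]           # user regularization strength
item_reg = config["regularization"]["item"]           # item regularization strength
user_damping = config["damping"]["user"]              # user bias damping
item_damping = config["damping"]["item"]              # item bias damping
```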
With these metrics:

    {
        'RBP': 0.023823028647493943,
        'DCG': 0.7540927676766077,
        'NDCG': 0.21172827698869,
        'RecipRank': 0.1051019215640897,
        'Hit10': 0.20634920634920634,
        'RMSE': 0.9165915846824646,
        'max_epochs': 30,
        'epoch_train_s': 1.4852916388772428,
        'epoch_measure_s': 0.38960870588198304,
        'done': False,
        'training_iteration': 3,
        'trial_id': '0e6de578',
        'date': '2025-09-30_17-38-27',
        'timestamp': 1759268307,
        'time_this_iter_s': 1.878904104232788,
        'time_total_s': 5.820883512496948,
        'pid': 402281,
        'hostname': 'CCI-ws21',
        'node_ip': '10.248.127.152',
        'config': {
            'embedding_size_exp': 9,
            'regularization': {'user': 0.3968940346316714, 'item': 0.022922031117155444},
            'damping': {'user': 7.046887794620691, 'item': 18.81840526361956},
            'epochs': 3
        },
        'time_since_restore': 5.820883512496948,
        'iterations_since_restore': 3
    }
Parameter Analysis
Embedding Size
The embedding size is the hyperparameter that most affects the model's fundamental logic, so let's look at performance as a function of it:
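The figure for this question would be drawn from the per-trial tuning results; a rough sketch of reproducing it is below, assuming the final result rows have been collected into a pandas DataFrame named `results` with Ray Tune's flattened `config/...` columns, and using NDCG purely as a placeholder metric.

```python
import pandas as pd

# Hypothetical `results` frame: one row per trial, with the metric columns shown
# above (e.g. "NDCG") and sampled hyperparameters under flattened "config/..." names.
def plot_metric_vs_embedding(results: pd.DataFrame, metric: str = "NDCG"):
    ax = results.plot.scatter(x="config/embedding_size_exp", y=metric)
    ax.set_xlabel("embedding size exponent")
    ax.set_ylabel(metric)
    return ax
```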
Learning Parameters
Iteration Completion
How many iterations, on average, did we complete?
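A minimal sketch of answering this from the same assumed `results` frame (one final row per trial):

```python
# Mean number of completed training iterations across trials; training_iteration
# is the per-trial iteration counter reported in the result record above.
mean_iters = results["training_iteration"].mean()
print(f"Mean completed iterations: {mean_iters:.2f}")
```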
How did the metric progress in the best result?
How did the metric progress in the longest-running results?
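Both progression questions can be sketched against a per-iteration history, assuming a `history` DataFrame that stacks every reported iteration of every trial (columns `trial_id`, `training_iteration`, and the metrics); NDCG again stands in for whichever metric the search actually optimized.

```python
import matplotlib.pyplot as plt

def plot_progress(history, trial_ids, metric="NDCG"):
    """Plot a metric over training iterations for the selected trials."""
    fig, ax = plt.subplots()
    for trial_id in trial_ids:
        rows = history[history["trial_id"] == trial_id].sort_values("training_iteration")
        ax.plot(rows["training_iteration"], rows[metric], label=trial_id)
    ax.set_xlabel("training iteration")
    ax.set_ylabel(metric)
    ax.legend(title="trial")
    return ax

# Final row per trial, used to pick the best and the longest-running trials.
finals = history.sort_values("training_iteration").groupby("trial_id").tail(1)
best_trials = finals.nlargest(1, "NDCG")["trial_id"].tolist()
longest_trials = finals.nlargest(3, "training_iteration")["trial_id"].tolist()

plot_progress(history, best_trials)
plot_progress(history, longest_trials)
```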