# ALS BiasedMF

This page analyzes the hyperparameter tuning results for biased matrix factorization with ALS.

## Parameter Search Space

Parameter | Type | Distribution | Values | Selected |
---|---|---|---|---|
embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 | 81 |
regularization.user | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.382 |
regularization.item | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.0245 |
damping.user | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 1.84e-09 |
damping.item | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 2.3e-12 |
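For reference, a LogUniform draw from the search space above can be reproduced in a few lines of plain Python. This is an illustrative sketch of the distribution itself, not the tuner's actual sampling code:

```python
import math
import random

def sample_loguniform(low: float, high: float) -> float:
    """Sample x in [low, high] with log(x) uniformly distributed."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

def sample_logint(low: int, high: int) -> int:
    """Integer variant, as used for embedding_size (4..512)."""
    return round(math.exp(random.uniform(math.log(low), math.log(high))))

# One draw from the search space in the table above:
config = {
    "embedding_size": sample_logint(4, 512),
    "regularization": {"user": sample_loguniform(1e-5, 1.0),
                       "item": sample_loguniform(1e-5, 1.0)},
    "damping": {"user": sample_loguniform(1e-12, 100.0),
                "item": sample_loguniform(1e-12, 100.0)},
}
```

Log-uniform sampling spends equal probability mass on each order of magnitude, which is why it suits parameters like regularization and damping whose plausible values span many decades.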
## Final Result
Searching selected the following configuration:

    { 'embedding_size': 81,
      'regularization': {'user': 0.38174613585361583, 'item': 0.024491652223358643},
      'damping': {'user': 1.84234069760588e-09, 'item': 2.2971321460271355e-12},
      'epochs': 5 }
With these metrics:

    { 'RBP': 0.0018974824159591586,
      'LogRBP': -2.4729873459173604,
      'NDCG': 0.1767482350054537,
      'RecipRank': 0.018591129608906982,
      'RMSE': 0.9320751876427383,
      'TrainTask': '501ab838-0961-49f3-8648-0c0b3ca2194f',
      'TrainTime': None,
      'TrainCPU': None,
      'max_epochs': 30,
      'done': False,
      'training_iteration': 5,
      'trial_id': 'ded56_00026',
      'date': '2025-05-07_23-10-16',
      'timestamp': 1746673816,
      'time_this_iter_s': 0.2640204429626465,
      'time_total_s': 1.3943123817443848,
      'pid': 619606,
      'hostname': 'CCI-ws21',
      'node_ip': '10.248.127.152',
      'config': { 'embedding_size': 81,
                  'regularization': {'user': 0.38174613585361583, 'item': 0.024491652223358643},
                  'damping': {'user': 1.84234069760588e-09, 'item': 2.2971321460271355e-12},
                  'epochs': 5 },
      'time_since_restore': 1.3943123817443848,
      'iterations_since_restore': 5 }
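The record above is one per-trial report; the "final result" is simply the report that maximizes the tuning metric across all trials. A minimal sketch of that selection step, over hypothetical trial reports (the values below are illustrative, not taken from the run):

```python
# Hypothetical trial reports, shaped like the record above (illustrative values).
trials = [
    {"trial_id": "ded56_00003", "NDCG": 0.1421, "training_iteration": 5},
    {"trial_id": "ded56_00026", "NDCG": 0.1767, "training_iteration": 5},
    {"trial_id": "ded56_00031", "NDCG": 0.1103, "training_iteration": 3},
]

# Pick the trial whose report maximizes the tuning metric (here, NDCG).
best = max(trials, key=lambda t: t["NDCG"])
```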
## Parameter Analysis

### Embedding Size

The embedding size is the hyperparameter that most affects the model's fundamental logic, so let's look at performance as a function of it:
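One way to tabulate that relationship is to group trial outcomes by embedding size and average the metric within each group. A sketch over hypothetical observations (none of these numbers come from the sweep):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (embedding_size, NDCG) observations -- illustrative data only.
trials = [(16, 0.11), (16, 0.12), (81, 0.17), (81, 0.18), (256, 0.15)]

# Group the metric values by embedding size.
by_size = defaultdict(list)
for size, ndcg in trials:
    by_size[size].append(ndcg)

# Mean NDCG per embedding size, ready to plot against size.
summary = {size: mean(vals) for size, vals in sorted(by_size.items())}
```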
## Learning Parameters

### Iteration Completion

How many iterations, on average, did we complete?
How did the metric progress in the best result?
How did the metric progress in the longest-running results?
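Both questions can be answered directly from the per-iteration reports. A sketch over hypothetical trial data (illustrative values, not taken from the run above):

```python
from statistics import mean

# Hypothetical per-trial counts (illustrative): how many of the max_epochs=30
# iterations each trial completed before finishing or being stopped early.
iterations_completed = [5, 3, 5, 1, 5, 2, 5]
avg_iterations = mean(iterations_completed)

# Hypothetical NDCG per iteration for one long-running trial, to trace
# how the metric progressed over training.
progression = [0.08, 0.12, 0.15, 0.17, 0.177]
gains = [later - earlier for earlier, later in zip(progression, progression[1:])]
```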