ALS BiasedMF

This page analyzes the hyperparameter tuning results for biased matrix factorization with ALS.

Parameter Search Space

Parameter | Type | Distribution | Values | Selected |
---|---|---|---|---|
embedding_size | Integer | LogUniform | 4 ≤ \(x\) ≤ 512 | 313 |
regularization.user | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.158 |
regularization.item | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.0151 |
damping.user | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 3.06e-10 |
damping.item | Float | LogUniform | 1e-12 ≤ \(x\) ≤ 100 | 1.46 |
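The trial metadata in the final result below comes from Ray Tune, so the search space above can be expressed roughly as in the following sketch. This is a minimal illustration rather than the actual tuning script; note that `tune.lograndint` uses an exclusive upper bound, so 513 is used to keep 512 in range.

```python
from ray import tune

# Sketch of the search space in the table above (not the actual tuning script).
# lograndint samples integers log-uniformly; its upper bound is exclusive.
search_space = {
    "embedding_size": tune.lograndint(4, 513),
    "regularization": {
        "user": tune.loguniform(1e-5, 1.0),
        "item": tune.loguniform(1e-5, 1.0),
    },
    "damping": {
        "user": tune.loguniform(1e-12, 100.0),
        "item": tune.loguniform(1e-12, 100.0),
    },
}
```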
Final Result
The search selected the following configuration:

    {
        'embedding_size': 313,
        'regularization': {'user': 0.15778630974585842, 'item': 0.015106952175028448},
        'damping': {'user': 3.063168859977759e-10, 'item': 1.4636360311142516},
        'epochs': 6
    }
With these metrics:

    {
        'RBP': 0.08185716969258171,
        'LogRBP': 1.291460586281584,
        'NDCG': 0.3906712754513665,
        'RecipRank': 0.1941867238884197,
        'RMSE': 0.7520846411664338,
        'TrainTask': '4510a89c-61e3-44c3-b442-791f5699922c',
        'TrainTime': None,
        'TrainCPU': None,
        'max_epochs': 30,
        'done': True,
        'training_iteration': 6,
        'trial_id': '3d07f9d4',
        'date': '2025-05-05_10-29-42',
        'timestamp': 1746455382,
        'time_this_iter_s': 21.478567123413086,
        'time_total_s': 142.84648752212524,
        'pid': 4077948,
        'hostname': 'CCI-ws21',
        'node_ip': '10.248.127.152',
        'config': {
            'embedding_size': 313,
            'regularization': {'user': 0.15778630974585842, 'item': 0.015106952175028448},
            'damping': {'user': 3.063168859977759e-10, 'item': 1.4636360311142516},
            'epochs': 6
        },
        'time_since_restore': 142.84648752212524,
        'iterations_since_restore': 6
    }
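The same information can be recovered programmatically from the Tune experiment directory. The sketch below uses a hypothetical path and assumes RBP (maximized) was the tuning objective; adjust both to match the actual run.

```python
from ray.tune import ExperimentAnalysis

# Hypothetical experiment directory; substitute the real path.
analysis = ExperimentAnalysis("~/ray_results/als-biasedmf")

# Assumes RBP, maximized, was the objective used for selection.
best = analysis.get_best_trial(metric="RBP", mode="max")
print(best.config)
print({k: best.last_result[k] for k in ("RBP", "NDCG", "RMSE", "training_iteration")})
```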
Parameter Analysis
Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
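One way to produce that view from the trial results is sketched below. The experiment path is hypothetical, and the `config/embedding_size` column name assumes Tune's usual flattening of config values in the results dataframe.

```python
import matplotlib.pyplot as plt
from ray.tune import ExperimentAnalysis

# Hypothetical experiment path; each row of the dataframe is a trial's
# last reported result, with config values flattened under "config/".
analysis = ExperimentAnalysis("~/ray_results/als-biasedmf")
df = analysis.dataframe()

plt.scatter(df["config/embedding_size"], df["RBP"])
plt.xscale("log")  # the embedding size was searched log-uniformly
plt.xlabel("embedding size")
plt.ylabel("RBP")
plt.show()
```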
Learning Parameters
Iteration Completion
How many iterations, on average, did we complete?
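This can be read off the trial results directly; a sketch using the same (hypothetical) experiment directory as above:

```python
from ray.tune import ExperimentAnalysis

# Hypothetical experiment path, as in the earlier sketches.
analysis = ExperimentAnalysis("~/ray_results/als-biasedmf")
df = analysis.dataframe()

# Each row is a trial's last result, so training_iteration is the number of
# epochs that trial completed before it finished or was stopped early.
print("mean iterations completed:", df["training_iteration"].mean())
print("trials reaching max_epochs (30):", (df["training_iteration"] >= 30).sum(), "of", len(df))
```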
How did the metric progress in the best result?
How did the metric progress in the longest-running results?
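Per-iteration metric histories live in each trial's progress.csv when Tune's default CSV logger is enabled. The sketch below plots RBP over training iterations for the longer-running trials; the directory is hypothetical, and the 20-iteration cutoff is an arbitrary stand-in for "longest". The best trial's history can be plotted the same way by filtering on its trial_id.

```python
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical results directory; Tune writes one progress.csv per trial.
exp_dir = Path("~/ray_results/als-biasedmf").expanduser()

for csv in sorted(exp_dir.glob("*/progress.csv")):
    hist = pd.read_csv(csv)
    if hist["training_iteration"].max() >= 20:  # arbitrary "long-running" cutoff
        plt.plot(hist["training_iteration"], hist["RBP"], label=hist["trial_id"].iloc[0])

plt.xlabel("training iteration")
plt.ylabel("RBP")
plt.legend()
plt.show()
```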