# ALS Implicit

This page analyzes the hyperparameter tuning results for the implicit-feedback ALS matrix factorization model.

## Parameter Search Space

| Parameter | Type | Distribution | Values | Selected |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | 3 ≤ \(x\) ≤ 10 | 5 |
| regularization.user | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.538 |
| regularization.item | Float | LogUniform | 1e-05 ≤ \(x\) ≤ 1 | 0.302 |
| weight | Float | Uniform | 5 ≤ \(x\) ≤ 100 | 5 |
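The bookkeeping fields in the result record below (`trial_id`, `training_iteration`, `time_since_restore`) suggest the trials were run with Ray Tune. A minimal sketch of how this space might be declared there, with the bounds taken from the table and everything else assumed:

```python
from ray import tune

# Sketch of the search space above; the nested dict mirrors the dotted
# parameter names regularization.user / regularization.item.
search_space = {
    # randint's upper bound is exclusive, so 11 yields integers 3..10
    "embedding_size_exp": tune.randint(3, 11),
    "regularization": {
        "user": tune.loguniform(1e-5, 1.0),
        "item": tune.loguniform(1e-5, 1.0),
    },
    "weight": tune.uniform(5.0, 100.0),
}
```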
## Final Result
The search selected the following configuration:

    {
        'embedding_size_exp': 5,
        'regularization': {'user': 0.5377113342545874, 'item': 0.3016285268860072},
        'weight': 5.003931155116192,
        'epochs': 7
    }
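Assuming `embedding_size_exp` is the base-2 logarithm of the embedding dimension (as the name suggests), the selected value maps to a concrete dimension like this:

```python
# Assumed interpretation: the exponent selects a power-of-two embedding size.
embedding_size_exp = 5
embedding_size = 2 ** embedding_size_exp
print(embedding_size)  # 32 latent features
```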
With these metrics:

    {
        'RBP': 0.10368340971386897,
        'DCG': 1.271514337921707,
        'NDCG': 0.3570058638329642,
        'RecipRank': 0.3349724574748492,
        'Hit10': 0.6605960264900662,
        'max_epochs': 30,
        'epoch_train_s': 0.043975158128887415,
        'epoch_measure_s': 1.042900734115392,
        'done': False,
        'training_iteration': 7,
        'trial_id': 'c8f6d195',
        'date': '2025-09-29_11-12-27',
        'timestamp': 1759158747,
        'time_this_iter_s': 1.091226577758789,
        'time_total_s': 7.585057258605957,
        'pid': 3284423,
        'hostname': 'CCI-ws21',
        'node_ip': '10.248.127.152',
        'config': {
            'embedding_size_exp': 5,
            'regularization': {'user': 0.5377113342545874, 'item': 0.3016285268860072},
            'weight': 5.003931155116192,
            'epochs': 7
        },
        'time_since_restore': 7.585057258605957,
        'iterations_since_restore': 7
    }
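The analyses in the rest of this page work from the per-trial results. A minimal sketch of loading them into a DataFrame, assuming the trials were exported to a CSV with one row per trial and columns named after the fields above (the path and exact layout are assumptions):

```python
import pandas as pd

# Hypothetical export of the tuning run: one row per trial, with the searched
# parameters and the final reported metrics as columns.
results = pd.read_csv("als-implicit-trials.csv")

print(len(results), "trials")
print(results[["embedding_size_exp", "NDCG", "RBP", "training_iteration"]].describe())
```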
## Parameter Analysis
### Embedding Size
The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
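A sketch of one way to produce that view from the `results` DataFrame loaded above, assuming NDCG is the metric of interest:

```python
import matplotlib.pyplot as plt

# Aggregate each trial's final NDCG by the embedding-size exponent.
by_size = results.groupby("embedding_size_exp")["NDCG"].agg(["mean", "max"])

by_size.plot(marker="o")
plt.xlabel("log2(embedding size)")
plt.ylabel("NDCG")
plt.show()
```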
### Learning Parameters
## Iteration Completion
How many iterations, on average, did we complete?
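One way to answer this from the per-trial results sketched above, assuming `training_iteration` records the number of completed epochs for each trial:

```python
# Average number of completed epochs, and the distribution across trials.
print(results["training_iteration"].mean())
print(results["training_iteration"].value_counts().sort_index())
```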
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
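The last two questions need per-epoch history rather than just final results. A sketch assuming such a history exists as a CSV with one row per trial per epoch (file and column names are hypothetical) and that RBP is the tuning metric:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-epoch log: trial_id, training_iteration, and RBP per row.
history = pd.read_csv("als-implicit-history.csv")

# Final row of each trial, used to find the best and the longest-running trials.
final = history.sort_values("training_iteration").groupby("trial_id").last()
best_trial = final["RBP"].idxmax()
longest_trials = final["training_iteration"].nlargest(5).index

for trial in {best_trial, *longest_trials}:
    curve = history[history["trial_id"] == trial]
    plt.plot(curve["training_iteration"], curve["RBP"], label=trial)

plt.xlabel("epoch")
plt.ylabel("RBP")
plt.legend()
plt.show()
```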