ALS Implicit

This page analyzes the hyperparameter tuning results for the implicit-feedback ALS matrix factorization model.

Parameter Search Space

Parameter            Type     Distribution  Values           Selected
embedding_size_exp   Integer  Uniform       3 ≤ x ≤ 10       5
regularization.user  Float    LogUniform    1e-05 ≤ x ≤ 1    1.4e-05
regularization.item  Float    LogUniform    1e-05 ≤ x ≤ 1    8.22e-05
weight               Float    Uniform       5 ≤ x ≤ 100      5.17
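Assuming the tuner draws each parameter independently from these distributions, the space can be sketched in plain Python (the function and helper names here are illustrative, not the tuner's actual API):

```python
import math
import random

def sample_config(rng: random.Random) -> dict:
    """Draw one configuration from the search space above."""

    def log_uniform(lo: float, hi: float) -> float:
        # Sample uniformly in log space, then exponentiate, so that
        # e.g. 1e-4 and 1e-1 are equally likely orders of magnitude.
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))

    return {
        "embedding_size_exp": rng.randint(3, 10),   # integer, uniform
        "regularization": {
            "user": log_uniform(1e-5, 1.0),
            "item": log_uniform(1e-5, 1.0),
        },
        "weight": rng.uniform(5.0, 100.0),          # float, uniform
    }

cfg = sample_config(random.Random(42))
```

The log-uniform draw for the regularization terms matters here: both selected values landed near the bottom of the range (1e-5 to 1e-4), a region a plain uniform sampler would almost never explore.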

Final Result

The search selected the following configuration:

{
    'embedding_size_exp': 5,
    'regularization': {'user': 1.4034988497695742e-05, 'item': 8.215297424168993e-05},
    'weight': 5.174824775936449,
    'epochs': 6
}
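The `_exp` suffix suggests this parameter is a base-2 exponent rather than the dimension itself; under that assumption, the selected latent dimension works out to:

```python
embedding_size_exp = 5                    # from the selected configuration
embedding_size = 2 ** embedding_size_exp  # actual latent dimension
print(embedding_size)                     # → 32
```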

With these metrics:

{
    'RBP': 0.19982875976172412,
    'DCG': 11.850325662359984,
    'NDCG': 0.4386068526437416,
    'RecipRank': 0.36296658853907604,
    'Hit10': 0.5997979797979798,
    'max_epochs': 30,
    'epoch_train_s': 0.5908325591590255,
    'epoch_measure_s': 10.247574906097725,
    'done': False,
    'training_iteration': 6,
    'trial_id': '07e8bdef',
    'date': '2025-10-01_05-48-53',
    'timestamp': 1759312133,
    'time_this_iter_s': 10.843009948730469,
    'time_total_s': 58.00034022331238,
    'pid': 909085,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size_exp': 5,
        'regularization': {'user': 1.4034988497695742e-05, 'item': 8.215297424168993e-05},
        'weight': 5.174824775936449,
        'epochs': 6
    },
    'time_since_restore': 58.00034022331238,
    'iterations_since_restore': 6
}
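One thing worth pulling out of these metrics: evaluation, not training, dominates the wall-clock time of each iteration. A quick check using the reported per-epoch figures:

```python
# Timing figures reported above for the selected trial.
epoch_train_s = 0.5908325591590255     # time to train one epoch
epoch_measure_s = 10.247574906097725   # time to evaluate after the epoch

measure_fraction = epoch_measure_s / (epoch_train_s + epoch_measure_s)
print(f"{measure_fraction:.1%} of each iteration went to evaluation")
```

Roughly 95% of each iteration's ~10.8 s went to measurement, which is consistent with the reported `time_this_iter_s` and suggests that cheaper or less frequent evaluation would speed up the search substantially.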

Parameter Analysis

Embedding Size

The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
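A minimal sketch of this breakdown, grouping trial results by `embedding_size_exp` and averaging NDCG per group (the trial records below are hypothetical placeholders; the real analysis would read them from the tuner's result table):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records, one per completed trial.
trials = [
    {"embedding_size_exp": 3, "NDCG": 0.41},
    {"embedding_size_exp": 5, "NDCG": 0.44},
    {"embedding_size_exp": 5, "NDCG": 0.43},
    {"embedding_size_exp": 8, "NDCG": 0.42},
]

# Group NDCG scores by embedding size exponent.
by_exp = defaultdict(list)
for t in trials:
    by_exp[t["embedding_size_exp"]].append(t["NDCG"])

# Mean NDCG per embedding size, in increasing order of size.
mean_ndcg = {exp: mean(vals) for exp, vals in sorted(by_exp.items())}
```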

Learning Parameters

Iteration Completion

On average, how many training iterations did each trial complete?
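With early stopping in play (the selected trial ran 6 of a possible 30 epochs), the average is a simple mean over per-trial completion counts. A sketch with hypothetical counts, since the real values would come from each trial's `training_iteration`:

```python
from statistics import mean

# Hypothetical per-trial iteration counts; early stopping ends many
# trials well before max_epochs = 30.
iterations = [6, 30, 2, 2, 10, 6, 30, 2]
avg = mean(iterations)
print(f"mean iterations completed: {avg}")
```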

How did the target metric progress across epochs in the best trial?

How did the target metric progress in the longest-running trials?