ALS Implicit

This page analyzes the hyperparameter tuning results for the implicit-feedback ALS matrix factorization model.

Parameter Search Space

Parameter             Type     Distribution  Values           Selected
embedding_size_exp    Integer  Uniform       3 ≤ x ≤ 10       4
regularization.user   Float    LogUniform    1e-05 ≤ x ≤ 1    0.924
regularization.item   Float    LogUniform    1e-05 ≤ x ≤ 1    0.00113
weight                Float    Uniform       5 ≤ x ≤ 100      5.66
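To make the ranges concrete, here is a minimal, self-contained sketch of drawing one configuration from this search space. The sampler below is illustrative (the report's actual tuning framework is not shown); it mimics the table's distributions with the standard library.

```python
import math
import random

def sample_config(rng: random.Random) -> dict:
    """Draw one configuration from the search space in the table above."""
    return {
        # integer uniform on 3..10 (inclusive)
        "embedding_size_exp": rng.randint(3, 10),
        "regularization": {
            # log-uniform on [1e-5, 1]: sample uniformly in log space, then exponentiate
            "user": math.exp(rng.uniform(math.log(1e-5), 0.0)),
            "item": math.exp(rng.uniform(math.log(1e-5), 0.0)),
        },
        # continuous uniform on [5, 100]
        "weight": rng.uniform(5.0, 100.0),
    }

cfg = sample_config(random.Random(42))
```

Sampling regularization in log space gives every order of magnitude between 1e-5 and 1 equal probability, which is why the search can land on values as far apart as 0.924 (user) and 0.00113 (item).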

Final Result

The search selected the following configuration:

{
    'embedding_size_exp': 4,
    'regularization': {'user': 0.923757334541105, 'item': 0.0011325977087353607},
    'weight': 5.661157127519835,
    'epochs': 4
}
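The `_exp` suffix on `embedding_size_exp` suggests the parameter is an exponent of two rather than the latent dimension itself (this is an assumption; the report does not state it). Under that reading, the selected value translates to:

```python
# Assuming embedding_size_exp is a power-of-two exponent (suggested by the
# `_exp` suffix), the selected latent dimension would be:
embedding_size = 2 ** 4  # -> 16 latent features
```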

With these metrics:

{
    'RBP': 0.14493980747625881,
    'DCG': 1.5059954224916172,
    'NDCG': 0.42284163119536544,
    'RecipRank': 0.4503075472476785,
    'Hit10': 0.7936507936507936,
    'max_epochs': 30,
    'epoch_train_s': 0.0024435671512037516,
    'epoch_measure_s': 0.1448000418022275,
    'done': False,
    'training_iteration': 4,
    'trial_id': 'bbee7489',
    'date': '2025-09-30_18-16-37',
    'timestamp': 1759270597,
    'time_this_iter_s': 0.15028023719787598,
    'time_total_s': 0.6142270565032959,
    'pid': 510915,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size_exp': 4,
        'regularization': {'user': 0.923757334541105, 'item': 0.0011325977087353607},
        'weight': 5.661157127519835,
        'epochs': 4
    },
    'time_since_restore': 0.6142270565032959,
    'iterations_since_restore': 4
}
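One detail worth pulling out of the timing fields: per-epoch measurement dwarfs per-epoch training. A quick check with the reported values:

```python
# Per-iteration timings copied from the result dict above.
epoch_train_s = 0.0024435671512037516
epoch_measure_s = 0.1448000418022275

# Fraction of each iteration spent on metric evaluation rather than training.
measure_share = epoch_measure_s / (epoch_train_s + epoch_measure_s)
```

Evaluation accounts for roughly 98% of each iteration's wall time, so the cost of the search is dominated by measurement, not by ALS training itself.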

Parameter Analysis

Embedding Size

The embedding size is the hyperparameter that most affects the model's fundamental logic, so let's look at performance as a function of it:
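A hypothetical sketch of that analysis: group trial results by embedding size and average a metric within each group. The field names follow the result dict shown earlier, but the trial records here are illustrative, not from the actual run.

```python
from collections import defaultdict
from statistics import mean

# Illustrative trial records; a real analysis would load these from the tuning logs.
trials = [
    {"config/embedding_size_exp": 3, "RBP": 0.120},
    {"config/embedding_size_exp": 4, "RBP": 0.145},
    {"config/embedding_size_exp": 4, "RBP": 0.139},
    {"config/embedding_size_exp": 5, "RBP": 0.131},
]

# Group RBP scores by the realized embedding size (2 ** exponent).
by_size = defaultdict(list)
for t in trials:
    by_size[2 ** t["config/embedding_size_exp"]].append(t["RBP"])

# Mean RBP per embedding size, in ascending size order.
summary = {size: mean(vals) for size, vals in sorted(by_size.items())}
```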

Learning Parameters

Iteration Completion

How many iterations, on average, did we complete?
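A minimal sketch of answering this, assuming each trial's final `training_iteration` value has been collected from the logs (the numbers below are illustrative):

```python
from statistics import mean

# Final training_iteration reported by each trial (illustrative values;
# the selected trial above stopped at iteration 4 of a 30-epoch budget).
final_iterations = [4, 4, 30, 12, 4]

avg_completed = mean(final_iterations)
```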

How did the metric progress in the best result?

How did the metric progress in the longest-running trials?
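Both progression questions reduce to the same computation: take a trial's per-iteration metric history and track the running best. A sketch with illustrative values (the real history would come from the per-iteration tuning logs):

```python
# Illustrative per-iteration NDCG history for one trial.
ndcg_by_iter = [0.31, 0.38, 0.41, 0.42]

# Running best-so-far value at each iteration.
best_so_far = []
best = float("-inf")
for value in ndcg_by_iter:
    best = max(best, value)
    best_so_far.append(best)
```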