This page analyzes the hyperparameter tuning results for the FlexMF scorer in implicit-feedback mode with logistic loss (Logistic Matrix Factorization).
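As a quick reminder of the objective being optimized: in logistic MF, the model scores a user-item pair with a dot product of latent embeddings plus optional user and item bias terms, and treats the sigmoid of that score as the probability of an interaction. The following is a sketch of the usual logistic-MF loss with negative sampling and positive weighting (a paraphrase of the standard formulation, not necessarily FlexMF's exact implementation):

\[
s_{ui} = \mathbf{p}_u^\top \mathbf{q}_i + b_u + b_i,
\qquad
\mathcal{L} = -\sum_{(u,i) \in \mathcal{D}^+} w_+ \log \sigma(s_{ui})
\;-\; \sum_{(u,j) \in \mathcal{D}^-} \log\bigl(1 - \sigma(s_{uj})\bigr)
\]

Here \(\mathcal{D}^+\) is the set of observed interactions, \(\mathcal{D}^-\) is a set of sampled negatives (negative_count per positive), \(w_+\) is the positive_weight parameter, and regularization is applied either as an explicit L2 penalty or as AdamW weight decay (reg_method).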
Parameter Search Space
| Parameter | Type | Distribution | Range / Values | Selected Value |
|---|---|---|---|---|
| embedding_size_exp | Integer | Uniform | 3 ≤ \(x\) ≤ 10 | 8 |
| regularization | Float | LogUniform | 0.0001 ≤ \(x\) ≤ 10 | 0.346 |
| learning_rate | Float | LogUniform | 0.001 ≤ \(x\) ≤ 0.1 | 0.00313 |
| reg_method | Categorical | Uniform | L2, AdamW | AdamW |
| negative_count | Integer | Uniform | 1 ≤ \(x\) ≤ 5 | 4 |
| positive_weight | Float | Uniform | 1 ≤ \(x\) ≤ 10 | 1.98 |
| user_bias | Categorical | Uniform | True, False | True |
| item_bias | Categorical | Uniform | True, False | True |
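The result dictionaries below carry Ray Tune bookkeeping fields (trial_id, training_iteration, time_since_restore), so the search space above was presumably expressed with a Ray Tune-style sampler. As an illustration only (a sketch under that assumption, not the actual tuning code behind this page), the same space could be written as:

```python
from ray import tune

# Hypothetical reconstruction of the search space described above;
# the real tuning script may define it differently.
search_space = {
    # exponent of 2 for the embedding dimension, 3..10 inclusive
    # (tune.randint's upper bound is exclusive)
    "embedding_size_exp": tune.randint(3, 11),
    "regularization": tune.loguniform(1e-4, 10.0),
    "learning_rate": tune.loguniform(1e-3, 0.1),
    "reg_method": tune.choice(["L2", "AdamW"]),
    "negative_count": tune.randint(1, 6),
    "positive_weight": tune.uniform(1.0, 10.0),
    "user_bias": tune.choice([True, False]),
    "item_bias": tune.choice([True, False]),
}
```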
Final Result
The search selected the following configuration:
{
'embedding_size_exp': 8,
'regularization': 0.34557498661015595,
'learning_rate': 0.0031270400753845807,
'reg_method': 'AdamW',
'negative_count': 4,
'positive_weight': 1.9823806801886432,
'user_bias': True,
'item_bias': True,
'epochs': 13
}
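Assuming embedding_size_exp is the base-2 logarithm of the embedding dimension (as the name suggests), the selected value of 8 corresponds to \(2^8 = 256\) latent features.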
With these metrics:
{
'RBP': 0.2512356555189226,
'DCG': 11.872558202768435,
'NDCG': 0.484293399165171,
'RecipRank': 0.4444466963704219,
'Hit10': 0.6814888010540184,
'max_epochs': 50,
'epoch_train_s': 2.6057158031035215,
'epoch_measure_s': 3.3271000599488616,
'done': False,
'training_iteration': 13,
'trial_id': '02d842d1',
'date': '2025-09-29_14-47-57',
'timestamp': 1759171677,
'time_this_iter_s': 5.936446189880371,
'time_total_s': 125.33728742599487,
'pid': 3502639,
'hostname': 'CCI-ws21',
'node_ip': '10.248.127.152',
'config': {
'embedding_size_exp': 8,
'regularization': 0.34557498661015595,
'learning_rate': 0.0031270400753845807,
'reg_method': 'AdamW',
'negative_count': 4,
'positive_weight': 1.9823806801886432,
'user_bias': True,
'item_bias': True,
'epochs': 13
},
'time_since_restore': 125.33728742599487,
'iterations_since_restore': 13
}
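For context on what the list metrics above measure, here is a small generic illustration in plain Python (not the evaluation code that produced these numbers, and possibly differing in details such as the rank discount or the RBP patience value) of how these top-N metrics are commonly computed for a single ranked list:

```python
import math

def list_metrics(recs, relevant, k=10, patience=0.85):
    """Toy per-list versions of the metrics reported above.

    recs: recommended item IDs, best first; relevant: held-out items.
    """
    hits = [r for r, item in enumerate(recs, start=1) if item in relevant]

    # DCG: each hit at rank r contributes 1 / log2(r + 1)
    dcg = sum(1.0 / math.log2(r + 1) for r in hits)
    # Ideal DCG: all relevant items packed into the top ranks
    ideal = sum(1.0 / math.log2(r + 1)
                for r in range(1, min(len(relevant), len(recs)) + 1))
    ndcg = dcg / ideal if ideal > 0 else 0.0

    # RBP (rank-biased precision): geometric discount with the given patience
    rbp = (1.0 - patience) * sum(patience ** (r - 1) for r in hits)
    # Reciprocal rank of the first hit (0 if the list has no hit)
    recip_rank = 1.0 / hits[0] if hits else 0.0
    # Hit@k: 1 if any relevant item appears in the first k positions
    hit_k = float(any(r <= k for r in hits))

    return {"RBP": rbp, "DCG": dcg, "NDCG": ndcg,
            "RecipRank": recip_rank, f"Hit{k}": hit_k}
```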
Parameter Analysis
Embedding Size
The embedding size is the hyperparameter that most affects the model's fundamental logic, so let's look at performance as a function of it:
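The figure is generated from the collected trial results; as a rough sketch of how such a plot could be produced (assuming a hypothetical per-trial results file with the searched parameters and final metric values as columns, with names of my choosing, not from this page), one could do:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical: one row per trial, with the searched parameters and the
# final metric values as columns (file name and columns are assumptions).
trials = pd.read_csv("tuning-results.csv")

sns.boxplot(data=trials, x="embedding_size_exp", y="NDCG")
plt.xlabel("embedding_size_exp (log2 of embedding dimension)")
plt.ylabel("final NDCG")
plt.show()
```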
Iteration Completion
How many iterations, on average, did we complete?
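A minimal sketch of that summary, reusing the hypothetical trials table from above (Ray Tune records the number of completed iterations as training_iteration):

```python
# Distribution of completed training iterations across trials
# (training_iteration counts epochs completed before the trial stopped).
print(trials["training_iteration"].describe())
```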
How did the metric progress in the best result?
How did the metric progress in the longest-running trials?
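Both progression plots can be drawn from the per-iteration history. A sketch, assuming a hypothetical history table with one row per (trial_id, training_iteration) and the metric recorded at each iteration:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-iteration metric history (file name and columns assumed).
history = pd.read_csv("tuning-history.csv")

# Final row of each trial, used to pick the best and the longest-running ones.
final = history.sort_values("training_iteration").groupby("trial_id").tail(1)
best_id = final.loc[final["NDCG"].idxmax(), "trial_id"]
longest_ids = set(final.nlargest(5, "training_iteration")["trial_id"])

selected = history[history["trial_id"].isin(longest_ids | {best_id})]
for tid, grp in selected.groupby("trial_id"):
    grp = grp.sort_values("training_iteration")
    label = f"{tid} (best)" if tid == best_id else str(tid)
    plt.plot(grp["training_iteration"], grp["NDCG"], label=label)
plt.xlabel("training iteration")
plt.ylabel("NDCG")
plt.legend()
plt.show()
```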