# FlexMF Explicit on ML100K

This page analyzes the hyperparameter tuning results for the FlexMF scorer in explicit-feedback mode (a biased matrix factorization model trained with PyTorch).
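For reference, a biased matrix factorization model of this kind typically predicts a rating as a sum of bias terms and an embedding dot product (this is the textbook formulation; FlexMF's exact parameterization may differ in its details):

\[
\hat{r}_{ui} = \mu + b_u + b_i + \mathbf{p}_u^\top \mathbf{q}_i
\]

where \(\mu\) is the global mean rating, \(b_u\) and \(b_i\) are user and item biases, and \(\mathbf{p}_u\) and \(\mathbf{q}_i\) are the learned user and item embeddings, whose dimension is the `embedding_size` hyperparameter searched below.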

## Parameter Search Space

| Parameter        | Type        | Distribution | Values                |
|------------------|-------------|--------------|-----------------------|
| `embedding_size` | Integer     | LogUniform   | 4 ≤ \(x\) ≤ 512       |
| `regularization` | Float       | LogUniform   | 1e-05 ≤ \(x\) ≤ 1     |
| `learning_rate`  | Float       | LogUniform   | 1e-05 ≤ \(x\) ≤ 0.1   |
| `reg_method`     | Categorical | Uniform      | L2, AdamW             |
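
The trial metadata in the results below (`trial_id`, `experiment_tag`, per-iteration timings) suggests the search was run with Ray Tune. Under that assumption, here is a hypothetical sketch of how this space could be declared; the actual tuning harness is not shown on this page:

```python
from ray import tune

# Hypothetical reconstruction of the search space above.
search_space = {
    # log-uniform integers; Ray's upper bound is exclusive, so 513 keeps 512 reachable
    "embedding_size": tune.lograndint(4, 513),
    # log-uniform floats over [1e-5, 1] and [1e-5, 0.1]
    "regularization": tune.loguniform(1e-5, 1.0),
    "learning_rate": tune.loguniform(1e-5, 0.1),
    # uniform choice between an explicit L2 penalty and AdamW weight decay
    "reg_method": tune.choice(["L2", "AdamW"]),
}
```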

## Final Result

The search selected the following configuration:

```python
{
    'embedding_size': 4,
    'regularization': 3.549318228105173e-05,
    'learning_rate': 0.06375691734903581,
    'reg_method': 'AdamW',
    'epochs': 14
}
```

With these metrics:

```python
{
    'RBP': 0.0004829290990085603,
    'NDCG': 0.17035274814398288,
    'RecipRank': 0.012526291560828278,
    'RMSE': 0.8919253028416759,
    'TrainTask': '820912a1-e9f3-4418-9ec7-678515c10bdb',
    'TrainTime': 5.033381743000064,
    'TrainCPU': 5.011,
    'timestamp': 1743029573,
    'checkpoint_dir_name': None,
    'done': True,
    'training_iteration': 14,
    'trial_id': 'd2af5_00056',
    'date': '2025-03-26_18-52-53',
    'time_this_iter_s': 0.3304908275604248,
    'time_total_s': 5.205181121826172,
    'pid': 283187,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size': 4,
        'regularization': 3.549318228105173e-05,
        'learning_rate': 0.06375691734903581,
        'reg_method': 'AdamW',
        'epochs': 14
    },
    'time_since_restore': 5.205181121826172,
    'iterations_since_restore': 14,
    'experiment_tag': '56_embedding_size=4,learning_rate=0.0638,reg_method=AdamW,regularization=0.0000'
}
```
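
For readers reproducing this analysis: assuming a Ray Tune experiment directory (the path below is a placeholder), the per-trial results used in the sections that follow can be loaded into a data frame like this:

```python
from ray.tune import ExperimentAnalysis

# Placeholder path; point this at the actual Ray Tune experiment directory.
analysis = ExperimentAnalysis("~/ray_results/flexmf-explicit")

# One row per trial: final metric columns (RMSE, NDCG, ...) plus the sampled
# hyperparameters in "config/..." columns.
trials = analysis.dataframe()

# Best trial by final RMSE (assuming RMSE was the selection metric).
best = analysis.get_best_trial(metric="RMSE", mode="min")
print(best.config)
```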

## Parameter Analysis

### Embedding Size

The embedding size is the hyperparameter that most affects the model’s fundamental logic, so let’s look at performance as a function of it:
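
A hypothetical sketch of that plot, using the `trials` frame loaded above (log-scaled x-axis, since the parameter was sampled log-uniformly):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(trials["config/embedding_size"], trials["RMSE"], alpha=0.5)
ax.set_xscale("log", base=2)
ax.set_xlabel("embedding_size")
ax.set_ylabel("RMSE")
plt.show()
```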

### Learning Parameters

### Iteration Completion

How many iterations, on average, did we complete?
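
With the `trials` frame from above, this is a one-liner; `training_iteration` is Ray Tune's per-trial iteration counter, which here corresponds to training epochs (cf. `epochs: 14` in the best result):

```python
# Trials stopped early (e.g., by a scheduler) pull this average down.
print(trials["training_iteration"].mean())
```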

How did the metric progress in the best result?

How did the metric progress in the longest results?
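
A sketch of such a progression plot, assuming each trial reported RMSE every iteration (as the per-iteration timings above suggest) and using `ExperimentAnalysis.trial_dataframes`, which maps trial directories to per-iteration data frames:

```python
import matplotlib.pyplot as plt

# Find the longest trials and plot their per-iteration RMSE trajectories.
longest = max(df["training_iteration"].max() for df in analysis.trial_dataframes.values())

fig, ax = plt.subplots()
for path, df in analysis.trial_dataframes.items():
    if df["training_iteration"].max() == longest:
        ax.plot(df["training_iteration"], df["RMSE"], alpha=0.6)
ax.set_xlabel("training iteration")
ax.set_ylabel("RMSE")
plt.show()
```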