FlexMF WARP

This page analyzes the hyperparameter tuning results for the FlexMF scorer in implicit-feedback mode with WARP (Weighted Approximate-Rank Pairwise) loss.

Parameter Search Space

Parameter        Type         Distribution  Values                 Selected
embedding_size   Integer      LogUniform    4 ≤ \(x\) ≤ 512        9
regularization   Float        LogUniform    0.0001 ≤ \(x\) ≤ 10    0.0392
learning_rate    Float        LogUniform    0.001 ≤ \(x\) ≤ 0.1    0.0029
reg_method       Categorical  Uniform       L2, AdamW              AdamW
item_bias        Categorical  Uniform       True, False            True
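
The tuning harness is not shown on this page, but the trial metadata in the results below (trial IDs, per-iteration timing) suggests Ray Tune. Under that assumption, the search space in the table could be declared roughly as follows; the FlexMF training function itself is omitted:

from ray import tune

# Hypothetical declaration of the search space from the table above.
# Distributions mirror the "Distribution" column.
search_space = {
    'embedding_size': tune.lograndint(4, 512),      # log-uniform integer
    'regularization': tune.loguniform(1e-4, 10.0),  # log-uniform float
    'learning_rate': tune.loguniform(1e-3, 0.1),    # log-uniform float
    'reg_method': tune.choice(['L2', 'AdamW']),     # uniform categorical
    'item_bias': tune.choice([True, False]),        # uniform categorical
}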

Final Result

The search selected the following configuration:

{
    'embedding_size': 9,
    'regularization': 0.039164122203056186,
    'learning_rate': 0.002902394647945465,
    'reg_method': 'AdamW',
    'item_bias': True,
    'epochs': 22
}

With these metrics:

{
    'RBP': 0.2198006871963069,
    'NDCG': 0.4408849768264969,
    'RecipRank': 0.3959630952900485,
    'TrainTask': '1c7f3ca7-0666-4b47-a492-c924844ee197',
    'TrainTime': None,
    'TrainCPU': None,
    'max_epochs': 50,
    'done': True,
    'training_iteration': 22,
    'trial_id': 'cea44_00094',
    'date': '2025-04-23_00-11-43',
    'timestamp': 1745381503,
    'time_this_iter_s': 22.314486265182495,
    'time_total_s': 863.8184952735901,
    'pid': 1207768,
    'hostname': 'CCI-ws21',
    'node_ip': '10.248.127.152',
    'config': {
        'embedding_size': 9,
        'regularization': 0.039164122203056186,
        'learning_rate': 0.002902394647945465,
        'reg_method': 'AdamW',
        'item_bias': True,
        'epochs': 22
    },
    'time_since_restore': 432.60910964012146,
    'iterations_since_restore': 14
}
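
For reference, a record like the one above can be retrieved from a finished Ray Tune run roughly as sketched below. Both the harness (Ray Tune) and the optimization target (RBP) are inferences from the metadata on this page, not confirmed facts, and `tuner` is a hypothetical ray.tune.Tuner configured for this search:

# Hypothetical retrieval of the winning trial's config and metrics.
results = tuner.fit()
best = results.get_best_result(metric='RBP', mode='max')
print(best.config)   # the selected configuration shown above
print(best.metrics)  # the metrics record shown above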

Parameter Analysis

Embedding Size

The embedding size is the hyperparameter that most affects the model's fundamental logic, so let's look at performance as a function of it:
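
The plot itself is rendered elsewhere; a minimal sketch of how it could be reproduced, assuming the per-trial final results are collected in a pandas DataFrame `trials` with one row per trial and columns named after the hyperparameters and metrics above:

import matplotlib.pyplot as plt

# Hypothetical: scatter NDCG against embedding size on a log-scaled x axis,
# since embedding_size was sampled log-uniformly.
ax = trials.plot.scatter(x='embedding_size', y='NDCG', logx=True)
ax.set_xlabel('embedding size')
ax.set_ylabel('NDCG')
plt.show()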

Learning Parameters

Iteration Completion

How many iterations, on average, did we complete?
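
With the same assumed `trials` DataFrame, this is a one-liner on the Ray Tune `training_iteration` field:

# Hypothetical: mean number of completed training iterations per trial.
print(trials['training_iteration'].mean())
# Fraction of trials that exhausted the full 50-epoch budget:
print((trials['training_iteration'] == trials['max_epochs']).mean())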

How did the metric progress in the best result?

How did the metric progress in the longest-running trials?
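
A sketch of both progression plots, assuming a per-iteration history DataFrame `history` with columns trial_id, training_iteration, and RBP (RBP as the tracked metric is an assumption):

import matplotlib.pyplot as plt

# Hypothetical: metric trajectory for the winning trial shown above.
best_hist = history[history['trial_id'] == 'cea44_00094']
plt.plot(best_hist['training_iteration'], best_hist['RBP'], label='best trial')

# Hypothetical: trajectories for the five longest-running trials.
longest = history.groupby('trial_id')['training_iteration'].max().nlargest(5).index
for tid in longest:
    h = history[history['trial_id'] == tid]
    plt.plot(h['training_iteration'], h['RBP'], alpha=0.5)

plt.xlabel('training iteration')
plt.ylabel('RBP')
plt.legend()
plt.show()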