
[Machine Learning Practice] Datawhale Summer Camp: Baseline Intensive Notes 2

2024-07-08


# AI Summer Camp # Datawhale # Summer Camp

In addition to cross-validation, there is another key way to optimize the original baseline: feature engineering.

How features are constructed has a direct bearing on prediction accuracy. Feature engineering is usually done by people with a deep understanding of the problem domain, because it requires thinking carefully about how to transform the raw data into informative inputs.

In addition to SMILES features, there are many other representations from which valuable information can be extracted. For example, an InChI string is composed of a series of layers that describe the molecular structure in detail: the initial identifier, the molecular formula, the connection table, hydrogen atom counts, rotatable-bond information, stereochemical information, isomer information, mixture or tautomer information, charge and spin multiplicity, and so on.
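These layers are easy to see by splitting an InChI string on '/'. Here is a minimal sketch using ethanol's InChI (a simple illustrative molecule, not one from the competition data):

# Ethanol's standard InChI
inchi = "InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3"

# Split into layers: version header, molecular formula, connectivity (c layer), hydrogen layer (h layer)
layers = inchi.split('/')
print(layers)  # ['InChI=1S', 'C2H6O', 'c1-2-3', 'h3H,2H2,1H3']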

In addition, if you want to improve the accuracy further, switching or combining models is also a good idea.

Feature Optimization

Extract molecular formula

From the InChI string, we can see that the molecular formula is given directly in the /C47H61N7O6S segment. This means the molecule is made up of 47 carbon atoms, 61 hydrogen atoms, 7 nitrogen atoms, 6 oxygen atoms, and 1 sulfur atom.

Calculate molecular weight

The molecular weight can be calculated by multiplying each element's atomic mass by its atom count and summing the results.

For example:

  • The atomic mass of carbon (C) is about 12.01 g/mol

  • The atomic mass of hydrogen (H) is approximately 1.008 g/mol

  • The atomic mass of nitrogen (N) is approximately 14.01 g/mol

  • The atomic mass of oxygen (O) is approximately 16.00 g/mol

  • The atomic mass of sulfur (S) is approximately 32.07 g/mol

Multiplying each atomic mass by its count and adding everything up gives the molecular weight, as in the sketch below.
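As a quick sanity check, here is that arithmetic for C47H61N7O6S (the atomic masses are rounded, so the result is approximate):

# Rounded atomic masses (g/mol)
atomic_masses = {'C': 12.01, 'H': 1.008, 'N': 14.01, 'O': 16.00, 'S': 32.07}

# Atom counts taken from the formula C47H61N7O6S
counts = {'C': 47, 'H': 61, 'N': 7, 'O': 6, 'S': 1}

# Multiply each atomic mass by its count and sum
molecular_weight = sum(atomic_masses[el] * n for el, n in counts.items())
print(round(molecular_weight, 2))  # ≈ 852.1 g/mol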

Atom Counting

Count how many atoms of each element the molecule contains and expand the counts into separate columns.

import pandas as pd
import re

atomic_masses = {
    'H': 1.008, 'He': 4.002602, 'Li': 6.94, 'Be': 9.0122, 'B': 10.81, 'C': 12.01,
    'N': 14.01, 'O': 16.00, 'F': 19.00, 'Ne': 20.180, 'Na': 22.990, 'Mg': 24.305,
    'Al': 26.982, 'Si': 28.085, 'P': 30.97, 'S': 32.07, 'Cl': 35.45, 'Ar': 39.95,
    'K': 39.10, 'Ca': 40.08, 'Sc': 44.956, 'Ti': 47.867, 'V': 50.942, 'Cr': 52.00,
    'Mn': 54.938, 'Fe': 55.845, 'Co': 58.933, 'Ni': 58.69, 'Cu': 63.55, 'Zn': 65.38
}

# Parse a single InChI string into formula, molecular weight, and atom counts
def parse_inchi(row):
    inchi_str = row['InChI']
    formula = ''
    molecular_weight = 0
    element_counts = {}

    # Extract the molecular formula layer (between "InChI=1S/" and "/c")
    formula_match = re.search(r"InChI=1S/([^/]+)/c", inchi_str)
    if formula_match:
        formula = formula_match.group(1)

    # Compute the molecular weight and count each element
    for element, count in re.findall(r"([A-Z][a-z]*)([0-9]*)", formula):
        count = int(count) if count else 1
        element_mass = atomic_masses.get(element, 0)
        molecular_weight += element_mass * count
        element_counts[element] = count

    return pd.Series({
        'Formula': formula,
        'MolecularWeight': molecular_weight,
        'ElementCounts': element_counts
    })

# Apply the function to every row of the DataFrame
train[['Formula', 'MolecularWeight', 'ElementCounts']] = train.apply(parse_inchi, axis=1)

# Element symbols to expand into columns
keys = ['H', 'He', 'Li', 'Be', 'B', 'C', 'N', 'O', 'F', 'Ne', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'Ar', 'K', 'Ca', 'Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn']

# Create an empty DataFrame whose columns are the element symbols
df_expanded = pd.DataFrame({key: pd.Series(dtype=float) for key in keys})

# Iterate over the atom-count dictionaries and fill in the DataFrame
for index, item in enumerate(train['ElementCounts'].values):
    for key in keys:
        # Fill each element's count into the corresponding column (0 if the element is absent)
        df_expanded.at[index, key] = item.get(key, 0)
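To actually feed these counts to a model, one option is to concatenate the expanded columns back onto the training table. A minimal sketch (assuming `train` keeps its original row order):

# Attach the atom-count columns to the original training features
train = pd.concat([train.reset_index(drop=True), df_expanded.reset_index(drop=True)], axis=1)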

Model Fusion

Last time we only used the CatBoost model and had not tried LightGBM or XGBoost. We can run these three models in turn and then average their predictions as a simple fusion (the fusion step itself is another area that can be improved). A sketch of running all three models and averaging their outputs follows the cross-validation function below.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

def cv_model(clf, train_x, train_y, test_x, clf_name, seed=2023):
    folds = 5
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []
    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        print('************************************ {} ************************************'.format(str(i + 1)))
        trn_x, trn_y, val_x, val_y = train_x.iloc[train_index], train_y[train_index], train_x.iloc[valid_index], train_y[valid_index]

        if clf_name == "lgb":
            train_matrix = clf.Dataset(trn_x, label=trn_y)
            valid_matrix = clf.Dataset(val_x, label=val_y)
            params = {
                'boosting_type': 'gbdt',
                'objective': 'binary',
                'min_child_weight': 6,
                'num_leaves': 2 ** 6,
                'lambda_l2': 10,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 4,
                'learning_rate': 0.35,
                'seed': 2024,
                'nthread' : 16,
                'verbose' : -1,
            }
            model = clf.train(params, train_matrix, 2000, valid_sets=[train_matrix, valid_matrix],
                              categorical_feature=[], verbose_eval=1000, early_stopping_rounds=100)
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration)

        if clf_name == "xgb":
            xgb_params = {
              'booster': 'gbtree', 
              'objective': 'binary:logistic',
              'max_depth': 5,
              'lambda': 10,
              'subsample': 0.7,
              'colsample_bytree': 0.7,
              'colsample_bylevel': 0.7,
              'eta': 0.35,
              'tree_method': 'hist',
              'seed': 520,
              'nthread': 16
              }
            train_matrix = clf.DMatrix(trn_x , label=trn_y)
            valid_matrix = clf.DMatrix(val_x , label=val_y)
            test_matrix = clf.DMatrix(test_x)

            watchlist = [(train_matrix, 'train'),(valid_matrix, 'eval')]

            model = clf.train(xgb_params, train_matrix, num_boost_round=2000, evals=watchlist, verbose_eval=1000, early_stopping_rounds=100)
            val_pred  = model.predict(valid_matrix)
            test_pred = model.predict(test_matrix)

        if clf_name == "cat":
            params = {'learning_rate': 0.35, 'depth': 5, 'bootstrap_type': 'Bernoulli', 'random_seed': 2024,
                      'od_type': 'Iter', 'od_wait': 100, 'allow_writing_files': False}

            model = clf(iterations=2000, **params)
            model.fit(trn_x, trn_y, eval_set=(val_x, val_y),
                      metric_period=1000,
                      use_best_model=True, 
                      cat_features=[],
                      verbose=1)

            val_pred  = model.predict_proba(val_x)[:, 1]
            test_pred = model.predict_proba(test_x)[:, 1]

        oof[valid_index] = val_pred
        test_predict += test_pred / kf.n_splits

        F1_score = f1_score(val_y, np.where(val_pred > 0.5, 1, 0))
        cv_scores.append(F1_score)
        print(cv_scores)

    return oof, test_predict
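With the function in place, the fusion itself is just running the three models and averaging their test predictions. A minimal sketch, assuming x_train, y_train, and x_test are the feature and label tables prepared from the baseline (their construction is not shown here):

import lightgbm as lgb
import xgboost as xgb
from catboost import CatBoostClassifier

# Run the three models with the same cross-validation scheme
lgb_oof, lgb_test = cv_model(lgb, x_train, y_train, x_test, 'lgb')
xgb_oof, xgb_test = cv_model(xgb, x_train, y_train, x_test, 'xgb')
cat_oof, cat_test = cv_model(CatBoostClassifier, x_train, y_train, x_test, 'cat')

# Simple mean fusion of the three probability outputs
final_test = (lgb_test + xgb_test + cat_test) / 3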