「日拱一码」033 Machine Learning: Strict Data Splitting

Contents

Simple Random Split (train_test_split)

Group Splitting

Simple Group Splitting

Stratified Group Splitting

Cross-Validation

Grouped K-Fold Cross-Validation (GroupKFold)

Leave-One-Group-Out (LeaveOneGroupOut)

Simple Random Split (train_test_split)

Simple random group splitting assigns whole groups, rather than individual rows, at random to the training and test sets, which guarantees that data from the same group never appears in both. The approach is easy to implement, but it cannot guarantee a balanced distribution of samples or labels between the two sets.

## Simple random split

import pandas as pd
from sklearn.model_selection import train_test_split

data = {
    'A': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'B': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'C': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'D': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'E': [0.1, 0.2, 0.1, 0.2, 0.3, 0.4],
    'y': [0, 1, 0, 1, 0, 1]
}
df = pd.DataFrame(data)

# Define the grouping key from the parameter columns
df['group'] = df[['A', 'B', 'C', 'D']].apply(lambda row: '_'.join(map(str, row)), axis=1)

# Collect all unique groups
unique_groups = df['group'].unique()

# Split the groups (not the rows), stratified by each group's target label
group_labels = df.groupby('group')['y'].first().reindex(unique_groups).values  # align with the order of unique_groups
train_groups, test_groups = train_test_split(
    unique_groups,
    test_size=0.2,      # proportion of groups assigned to the test set
    random_state=42,    # random seed for reproducibility
    stratify=group_labels
)

# Select rows according to their group assignment
train_data = df[df['group'].isin(train_groups)]
test_data = df[df['group'].isin(test_groups)]

print(f"Training set: {len(train_data)} rows (from {len(train_groups)} groups)")
print(f"Test set: {len(test_data)} rows (from {len(test_groups)} groups)")

# Training set: 2 rows (from 1 group)
# Test set: 4 rows (from 1 group)
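
A quick sanity check, reusing the train_groups, test_groups, train_data and test_data variables from the snippet above, confirms that the split is indeed strict:

# The two sets of groups must be disjoint
overlap = set(train_groups) & set(test_groups)
assert not overlap, f"Groups leaked into both sets: {overlap}"

# Every row ends up in exactly one of the two subsets
assert len(train_data) + len(test_data) == len(df)
print("No group appears in both the training and test sets")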

Group Splitting

Simple Group Splitting

The core idea of simple group splitting is to place all rows that share the same parameter combination into one group and then split at the group level, so that the groups in the training set and the test set are mutually exclusive.

## Simple group split

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

data = {
    'A': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'B': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'C': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'D': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'E': [0.1, 0.2, 0.1, 0.2, 0.3, 0.4],
    'y': [0, 1, 0, 1, 0, 1]
}
df = pd.DataFrame(data)

# Define the grouping key from the parameter columns
df['group'] = df[['A', 'B', 'C', 'D']].apply(lambda row: '_'.join(map(str, row)), axis=1)

# Group-level random split with GroupShuffleSplit
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)

for train_index, test_index in gss.split(df, groups=df['group']):
    train = df.iloc[train_index]
    test = df.iloc[test_index]

# Separate features and the target variable
X_train, y_train = train.drop(columns=['y', 'group']), train['y']
X_test, y_test = test.drop(columns=['y', 'group']), test['y']

print("Training set:")
print(X_train)
#      A    B    C    D    E
# 0  1.0  0.5  1.0  0.5  0.1
# 1  1.0  0.5  1.0  0.5  0.2
# 4  1.0  0.5  1.0  0.5  0.3
# 5  1.0  0.5  1.0  0.5  0.4
print("Test set:")
print(X_test)
#      A    B    C    D    E
# 2  2.0  1.5  2.0  1.5  0.1
# 3  2.0  1.5  2.0  1.5  0.2
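
GroupShuffleSplit can also produce several independent random group-level splits in one call by increasing n_splits; the short sketch below continues from the df defined above and simply lists which groups land on each side (the n_splits value and seed are arbitrary choices for illustration):

# Five different random group-level splits of the same data
gss_multi = GroupShuffleSplit(n_splits=5, test_size=0.2, random_state=0)

for i, (train_index, test_index) in enumerate(gss_multi.split(df, groups=df['group'])):
    train_grp = sorted(set(df.iloc[train_index]['group']))
    test_grp = sorted(set(df.iloc[test_index]['group']))
    print(f"Split {i}: train groups = {train_grp} | test groups = {test_grp}")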

Stratified Group Splitting

Stratified group splitting builds on random group splitting and additionally keeps the distribution of a key variable (such as the target y) consistent between the training and test sets, which preserves the overall data distribution better.

## Stratified group split

import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit

data = {
    'A': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 1.5, 2, 1.5],
    'B': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5, 1.5, 2, 1.5],
    'C': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 1.5, 2, 1.5],
    'D': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5, 1.5, 2, 1.5],
    'E': [0.1, 0.2, 0.1, 0.2, 0.3, 0.4, 1.5, 2, 1.5],
    'y': [0, 1, 0, 1, 0, 1, 2, 2, 1]
}
df = pd.DataFrame(data)

# Define the grouping key from the parameter columns
df['group_id'] = df[['A', 'B', 'C', 'D']].astype(str).apply('_'.join, axis=1)

# One target label per group (the first y in each group), used for stratification
group_targets = df.groupby('group_id')['y'].first()
unique_groups = group_targets.index.values


# Stratified split at the group level with StratifiedShuffleSplit
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=42)

for train_groups_idx, test_groups_idx in sss.split(np.zeros(len(unique_groups)), group_targets):
    train_groups = unique_groups[train_groups_idx]
    test_groups = unique_groups[test_groups_idx]

    train_df = df[df['group_id'].isin(train_groups)]
    test_df = df[df['group_id'].isin(test_groups)]

# Separate features and the target variable
X_train, y_train = train_df.drop(columns=['y', 'group_id']), train_df['y']
X_test, y_test = test_df.drop(columns=['y', 'group_id']), test_df['y']

print("训练集:")
print(X_train)
#      A    B    C    D    E
# 0  1.0  0.5  1.0  0.5  0.1
# 1  1.0  0.5  1.0  0.5  0.2
# 4  1.0  0.5  1.0  0.5  0.3
# 5  1.0  0.5  1.0  0.5  0.4
print("测试集:")
print(X_test)
#      A    B    C    D    E
# 2  2.0  1.5  2.0  1.5  0.1
# 3  2.0  1.5  2.0  1.5  0.2
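
Recent scikit-learn releases also ship a built-in StratifiedGroupKFold splitter that keeps groups intact while approximately stratifying on y, so the manual group-level stratification above can often be replaced by it. A minimal sketch on the same df, assuming a scikit-learn version that includes StratifiedGroupKFold (1.0 or later):

from sklearn.model_selection import StratifiedGroupKFold

# Each group stays entirely within one fold; class proportions are balanced as far as the group sizes allow
sgkf = StratifiedGroupKFold(n_splits=2, shuffle=True, random_state=42)

for fold, (train_index, test_index) in enumerate(sgkf.split(df, df['y'], groups=df['group_id'])):
    print(f"Fold {fold}:",
          "train groups =", sorted(df.iloc[train_index]['group_id'].unique()),
          "| test groups =", sorted(df.iloc[test_index]['group_id'].unique()))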

Cross-Validation

Cross-validation splits the data into several subsets (folds); in each round one fold serves as the test set while the remaining folds form the training set, and the procedure is repeated so that every fold is tested once. This makes fuller use of the data and gives more stable performance estimates than a single split.
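
As a quick illustration of plain (ungrouped) cross-validation, the minimal sketch below scores a model with 5-fold cross_val_score; the LogisticRegression model and the make_classification toy data are stand-ins chosen just for this example:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Toy data: 100 samples, 5 features, binary target
X, y = make_classification(n_samples=100, n_features=5, random_state=42)

# 5 folds: every sample is used for testing exactly once
kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kf)

print("Accuracy per fold:", np.round(scores, 3))
print("Mean accuracy:", round(float(scores.mean()), 3))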

Grouped K-Fold Cross-Validation (GroupKFold)

GroupKFold is a grouped K-fold cross-validation scheme that keeps the data of each group entirely within a single fold, so no group ever appears in both the training and the test portion of a split. It is well suited to data with an explicit grouping structure.

## Grouped K-fold cross-validation
import pandas as pd
from sklearn.model_selection import GroupKFold

data = {
    'A': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'B': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'C': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'D': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'E': [0.1, 0.2, 0.1, 0.2, 0.3, 0.4],
    'y': [0, 1, 0, 1, 0, 1]
}
df = pd.DataFrame(data)

# Define the grouping key from the parameter columns
df['group'] = df[['A', 'B', 'C', 'D']].apply(lambda row: '_'.join(map(str, row)), axis=1)
# Group labels for the splitter
groups = df['group'].values

# Grouped K-fold cross-validation with GroupKFold
gkf = GroupKFold(n_splits=2)  # 2 folds, matching the 2 groups in this toy data

for train_index, test_index in gkf.split(df, groups=groups):
    train = df.iloc[train_index]
    test = df.iloc[test_index]

    X_train, y_train = train.drop(columns=['y', 'group']), train['y']
    X_test, y_test = test.drop(columns=['y', 'group']), test['y']

    print("训练集:")
    print(X_train)     
#     A    B    C    D    E
# 0  1.0  0.5  1.0  0.5  0.1
# 1  1.0  0.5  1.0  0.5  0.2
# 4  1.0  0.5  1.0  0.5  0.3
# 5  1.0  0.5  1.0  0.5  0.4
    print("测试集:")
    print(X_test)      
#     A    B    C    D    E
# 2  2.0  1.5  2.0  1.5  0.1
# 3  2.0  1.5  2.0  1.5  0.2
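
In practice the grouped splitter is usually handed to cross_val_score instead of being iterated by hand; the sketch below reuses df and groups from the code above and assumes a RandomForestClassifier purely as a stand-in model:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = df.drop(columns=['y', 'group'])
y = df['y']

# cv= takes the grouped splitter, groups= supplies the group labels
scores = cross_val_score(RandomForestClassifier(random_state=42),
                         X, y,
                         cv=GroupKFold(n_splits=2),
                         groups=groups)
print("Score per fold:", scores)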

Leave-One-Group-Out (LeaveOneGroupOut)

LeaveOneGroupOut holds out exactly one group as the test set in each split and trains on all remaining groups, so the number of splits equals the number of unique groups. It is the grouped counterpart of leave-one-out and is most practical when the number of groups is small.

## Leave-one-group-out

import pandas as pd
from sklearn.model_selection import LeaveOneGroupOut

data = {
    'A': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'B': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'C': [1.0, 1.0, 2.0, 2.0, 1.0, 1.0],
    'D': [0.5, 0.5, 1.5, 1.5, 0.5, 0.5],
    'E': [0.1, 0.2, 0.1, 0.2, 0.3, 0.4],
    'y': [0, 1, 0, 1, 0, 1]
}
df = pd.DataFrame(data)

# Define the grouping key from the parameter columns
df['group'] = df[['A', 'B', 'C', 'D']].apply(lambda row: '_'.join(map(str, row)), axis=1)
# Group labels for the splitter
groups = df['group'].values

# Leave-one-group-out splitting
logo = LeaveOneGroupOut()

for train_index, test_index in logo.split(df, groups=groups):
    train = df.iloc[train_index]
    test = df.iloc[test_index]

    X_train, y_train = train.drop(columns=['y', 'group']), train['y']
    X_test, y_test = test.drop(columns=['y', 'group']), test['y']

    print("训练集:")
    print(X_train)      
#     A    B    C    D    E
# 0  1.0  0.5  1.0  0.5  0.1
# 1  1.0  0.5  1.0  0.5  0.2
# 4  1.0  0.5  1.0  0.5  0.3
# 5  1.0  0.5  1.0  0.5  0.4
    print("测试集:")
    print(X_test) 
#     A    B    C    D    E
# 2  2.0  1.5  2.0  1.5  0.1
# 3  2.0  1.5  2.0  1.5  0.2
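
Because this toy frame contains only two distinct groups, LeaveOneGroupOut produces exactly two splits, which get_n_splits confirms; the splitter can also be passed straight to cross_val_score (RandomForestClassifier is again only an assumed stand-in model):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One split per unique group
print("Number of splits:", logo.get_n_splits(groups=groups))  # 2, since there are 2 unique groups

scores = cross_val_score(RandomForestClassifier(random_state=42),
                         df.drop(columns=['y', 'group']), df['y'],
                         cv=logo, groups=groups)
print("Score per held-out group:", scores)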