PyTorch Learning Notes: Week 5
ChaptSand

This week I worked through the end-to-end process of building a complete machine learning project from scratch with PyTorch. Through two classic case studies, PIMA diabetes classification and Titanic survival prediction, I systematically practiced the core skills of data processing, model building, training, evaluation, and optimization.

1. Core Concepts and Workflow

1.1 The Training Triplet: Epoch, Batch, Iteration

These terms are the foundation for understanding the training process.

  • Epoch: one complete, systematic pass over the entire training dataset.
  • Batch-Size: because the dataset is usually too large to load at once, it is split into small batches; the batch size is the number of samples in each batch.
  • Iteration: one weight update. Processing one batch of data completes one iteration.
  • Relationship: Iterations_per_Epoch = Total_Samples / Batch_Size, rounded up when the division is not exact and the last, smaller batch is kept (see the sketch below).
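
For example, a quick sketch with illustrative numbers (712 happens to match the Titanic training split used later):

import math

total_samples = 712   # illustrative
batch_size = 32

# A DataLoader keeps the final smaller batch by default (drop_last=False),
# so iterations per epoch round up:
iterations_per_epoch = math.ceil(total_samples / batch_size)
print(iterations_per_epoch)  # 23 (22 full batches of 32 plus one batch of 8)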

1.2 The Standard Machine Learning Workflow

A standard project follows this sequence:

  1. Data loading and exploration (EDA)
  2. Data preprocessing and cleaning
  3. Splitting into training/validation sets
  4. Building the data pipeline (DataLoader)
  5. Defining the model architecture
  6. Training the model
  7. Evaluating model performance
  8. (Optional) Hyperparameter tuning
  9. (Optional) Predicting on new data

2. Data Processing and Loading (Pandas & PyTorch)

2.1 Exploratory Data Analysis (EDA)

Before doing anything with a dataset, get to know it first.

import pandas as pd

# Load the data
df = pd.read_csv('./datasets/titanic/train.csv')

# Look at the first 5 rows
print(df.head())

# Inspect the structure, non-null counts, and data types
# This is the key step for spotting missing values and wrong dtypes
df.info()

# Statistical summary of the numeric columns
print(df.describe())

2.2 Data Preprocessing

This is the key step that turns raw data into a format the model can consume.

a. Handling Missing Values

# For numeric features, the median is a more robust fill value
age_median = df['Age'].median()
df['Age'] = df['Age'].fillna(age_median)

# For categorical features, fill with the mode
embarked_mode = df['Embarked'].mode()[0]
df['Embarked'] = df['Embarked'].fillna(embarked_mode)

# Columns with too many missing values are simply dropped
df = df.drop('Cabin', axis=1)

b. Handling Categorical Features (One-Hot Encoding)

Neural networks can only work with numbers, so text features must be converted.

# drop_first=True avoids multicollinearity; a good habit
df = pd.get_dummies(df, columns=['Sex', 'Embarked'], drop_first=True)
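
To make the effect concrete (assuming the usual Titanic category values, 'female'/'male' and 'C'/'Q'/'S'), drop_first=True removes the first category of each column, so the two text columns become three 0/1 columns:

# Before: Sex in {'female', 'male'}, Embarked in {'C', 'Q', 'S'}
# After:  Sex_male, Embarked_Q, Embarked_S (the first category of each is dropped)
# A row with all three equal to 0 therefore encodes female, embarked at C.
print([c for c in df.columns if c.startswith(('Sex_', 'Embarked_'))])
# ['Sex_male', 'Embarked_Q', 'Embarked_S']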

c. Feature Selection

Drop columns that are irrelevant to the prediction target.

df = df.drop(['PassengerId', 'Name', 'Ticket'], axis=1)

d. Feature Scaling (Standardization)

Scaling all features to a similar range helps the model converge faster and more stably.

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Split the data first
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

# Create the scaler object
scaler = StandardScaler()

# fit_transform on the training set only
X_train = scaler.fit_transform(X_train)

# transform (no fitting) on the validation/test set
X_val = scaler.transform(X_val)

The Correct Order and Approach

The correct workflow strictly mimics the real world: we can only learn patterns from the training data we already have, and then apply those patterns to unseen test data.

The correct order is: 1. split first, 2. process afterwards.

  1. Split the data first: divide the dataset into training and test sets.
  2. Fit and transform the training set: call fit_transform() on the training set (X_train) only. This computes a mean and standard deviation that belong to the training set alone, and uses them to transform it.
  3. Transform only the test set: with the same scaler (already fitted on the training set), call only transform() on the test set (X_test). This applies the training set's mean and standard deviation to the test set, so both sides are processed by the same standard.

One more important note: do not standardize the labels. Only the features X are scaled; the 0/1 classification targets must stay as they are, since the loss function expects them in their original form.

2.3 The PyTorch Data Pipeline

a. A Custom Dataset

import torch
from torch.utils.data import Dataset

class TitanicDataset(Dataset):
    def __init__(self, features, labels):
        # Convert NumPy arrays to PyTorch tensors
        self.features = torch.tensor(features, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.float32)

    def __getitem__(self, index):
        # Define how to fetch a single sample
        return self.features[index], self.labels[index]

    def __len__(self):
        # Return the total number of samples
        return len(self.features)
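
A quick sanity check (a minimal sketch, assuming X_train and y_train are the NumPy arrays produced in section 2.2): indexing the Dataset exercises __getitem__, and len() exercises __len__.

dataset = TitanicDataset(X_train, y_train)
print(len(dataset))            # total number of samples, via __len__
features, label = dataset[0]   # one (features, label) pair, via __getitem__
print(features.shape, label)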

b. DataLoader

The DataLoader wraps a Dataset into an iterable object that handles batching and shuffling.

from torch.utils.data import DataLoader

train_dataset = TitanicDataset(X_train, y_train)
train_loader = DataLoader(dataset=train_dataset,
                          batch_size=32,
                          shuffle=True)  # shuffling the data matters for training
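
To confirm the pipeline works, pull a single batch and inspect its shape (a minimal sketch reusing the train_loader above; the exact feature count depends on your preprocessing):

features, labels = next(iter(train_loader))
print(features.shape)  # torch.Size([32, num_features])
print(labels.shape)    # torch.Size([32, 1]) if labels were reshaped to (-1, 1)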

3. Model Building and Training

3.1 Model Definition

Define the model structure with nn.Module; nn.Sequential keeps the definition concise.

import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 16),
            nn.ReLU(),
            nn.Linear(16, 8),
            nn.ReLU(),
            nn.Linear(8, 1)  # output layer: raw logits for BCEWithLogitsLoss
        )

    def forward(self, x):
        return self.net(x)
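
Instantiating and printing the model is a cheap way to verify the architecture (a sketch; num_features=8 is illustrative and should match your preprocessed feature count):

model = Classifier(num_features=8)
print(model)  # shows the Sequential stack layer by layer

# Count trainable parameters as a sanity check
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'Trainable parameters: {num_params}')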

3.2 The Training Loop

This is the heart of how the model learns.

# Initialize the model, loss function, and optimizer
model = Classifier(input_features)
criterion = nn.BCEWithLogitsLoss()  # suited to binary classification
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

model.train()  # put the model in training mode
for epoch in range(EPOCHS):
    for features, labels in train_loader:
        # 1. Forward pass
        outputs = model(features)

        # 2. Compute the loss
        loss = criterion(outputs, labels)

        # 3. Backward pass and optimization
        optimizer.zero_grad()  # clear the previous gradients
        loss.backward()        # compute the current gradients
        optimizer.step()       # update the weights

4. Model Evaluation and Practical Tips

4.1 The Evaluation Loop

Check the model's performance on the validation set, with gradient computation disabled throughout.

from sklearn.metrics import accuracy_score

model.eval()  # put the model in evaluation mode
all_predictions = []
with torch.no_grad():  # no gradients are computed inside this block
    for features, _ in val_loader:
        outputs = model(features)
        predicted = (torch.sigmoid(outputs) > 0.5).float()
        all_predictions.extend(predicted.numpy())

accuracy = accuracy_score(y_val, all_predictions)
print(f"Validation accuracy: {accuracy*100:.2f}%")

4.2 The Kaggle Submission Workflow

The key is to apply exactly the same preprocessing pipeline to test.csv. The sketch below fills in steps 4 and 5 under the assumptions of the full example in section 5.2: passenger_ids was saved before PassengerId was dropped, and scaler and model are the fitted objects from training.

# 1. Load the test data
test_df = pd.read_csv('test.csv')

# 2. Apply every preprocessing step (fill missing values, one-hot encode, ...)
# Key point: reuse the median/mode/scaler computed from the training set

# 3. Align the columns (important!)
# The test set's columns must exactly match the training set's
test_df = test_df.reindex(columns=train_columns, fill_value=0)

# 4. Predict with the trained model (sketch: scale, then threshold the sigmoid)
X_test = torch.tensor(scaler.transform(test_df), dtype=torch.float32)
with torch.no_grad():
    predictions = (torch.sigmoid(model(X_test)) > 0.5).int().numpy()

# 5. Generate the submission file
submission_df = pd.DataFrame({'PassengerId': passenger_ids,
                              'Survived': predictions.flatten()})
submission_df.to_csv('submission.csv', index=False)

4.3 Hyperparameter Tuning

Find the best model configuration through experimentation.

  • Key hyperparameters: learning rate, network structure (number of layers and neurons), number of training epochs, and so on.
  • Strategy: change one parameter at a time and observe its effect on validation performance.
  • Automation: wrap the train/evaluate procedure in a function and loop over hyperparameter combinations automatically, as in the sketch below.
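
A minimal sketch of that automation, assuming the train_loader, val_loader, input_features, and y_val from the Titanic example are already in scope (the helper below is illustrative, not part of the original notes):

import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score

def train_and_evaluate(lr, hidden_size, epochs=50):
    # Build a fresh model for each configuration
    model = nn.Sequential(
        nn.Linear(input_features, hidden_size),
        nn.ReLU(),
        nn.Linear(hidden_size, 1),
    )
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for features, labels in train_loader:
            loss = criterion(model(features), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    model.eval()
    predictions = []
    with torch.no_grad():
        for features, _ in val_loader:
            predictions.extend((torch.sigmoid(model(features)) > 0.5).float().numpy())
    return accuracy_score(y_val, predictions)

# Try combinations of learning rate and hidden size, recording the best
best_acc, best_cfg = 0.0, None
for lr in [0.01, 0.001]:
    for hidden in [8, 16, 32]:
        acc = train_and_evaluate(lr, hidden)
        print(f"lr={lr}, hidden={hidden}: val acc {acc*100:.2f}%")
        if acc > best_acc:
            best_acc, best_cfg = acc, (lr, hidden)
print(f"Best config: {best_cfg} with {best_acc*100:.2f}%")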

5. Hands-On Code

5.1 PIMA Diabetes Classification

import torch
import torch.nn as nn
import numpy as np
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
import matplotlib.pyplot as plt

class DiabetesClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(8, 6)
        self.layer2 = nn.Linear(6, 4)
        self.layer3 = nn.Linear(4, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.layer1(x))
        x = self.relu(self.layer2(x))
        x = self.layer3(x)
        return x

EPOCHS = 100
LR = 0.01
BATCH_SIZE = 32
RANDOM_STATE = 42

raw_data = np.loadtxt('./datasets/diabetes.csv', delimiter=',', dtype=np.float32)
features = raw_data[:, :-1]
labels = raw_data[:, [-1]]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=RANDOM_STATE
)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

train_dataset = TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
test_dataset = TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test))

train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=BATCH_SIZE, shuffle=False)  # no need to shuffle for evaluation

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = DiabetesClassifier().to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

loss_history = []
model.train()
for epoch in range(EPOCHS):
    epoch_loss = 0.0
    for features, labels in train_loader:
        features, labels = features.to(device), labels.to(device)
        outputs = model(features)
        loss = criterion(outputs, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        epoch_loss += loss.item()

    avg_loss = epoch_loss / len(train_loader)
    loss_history.append(avg_loss)

all_labels = []
all_predictions = []
model.eval()
with torch.no_grad():
    for features, labels in test_loader:
        features, labels = features.to(device), labels.to(device)
        outputs = model(features)
        predicted = (torch.sigmoid(outputs) > 0.5).float()

        all_labels.extend(labels.cpu().numpy())
        all_predictions.extend(predicted.cpu().numpy())

accuracy = accuracy_score(all_labels, all_predictions)
precision = precision_score(all_labels, all_predictions)
recall = recall_score(all_labels, all_predictions)
f1 = f1_score(all_labels, all_predictions)

print(f'Accuracy: {accuracy*100:.2f}%')
print(f'Precision: {precision*100:.2f}%')
print(f'Recall: {recall*100:.2f}%')
print(f'F1 Score: {f1*100:.2f}%')

plt.figure(figsize=(10, 5))
plt.plot(range(1, EPOCHS+1), loss_history)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Training Loss Curve")
plt.grid()
plt.show()

5.2 Titanic Survival Prediction

import torch
import torch.nn as nn
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

df = pd.read_csv('./datasets/titanic/train.csv')

age_median = df['Age'].median()
df['Age'] = df['Age'].fillna(age_median)
embarked_mode = df['Embarked'].mode()[0]
df['Embarked'] = df['Embarked'].fillna(embarked_mode)

df = pd.get_dummies(df, columns=['Sex', 'Embarked'], drop_first=True)
df = df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1)

X = df.drop(['Survived'], axis=1).to_numpy()
y = df['Survived'].to_numpy().reshape(-1, 1)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)

X_train = torch.tensor(X_train, dtype=torch.float32)
X_val = torch.tensor(X_val, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)
y_val = torch.tensor(y_val, dtype=torch.float32)

train_dataset = TensorDataset(X_train, y_train)
val_dataset = TensorDataset(X_val, y_val)

train_loader = DataLoader(dataset=train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(dataset=val_dataset, batch_size=32, shuffle=False)

input_features = X_train.shape[1]

class Classifier(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 16),
            nn.ReLU(),
            nn.Linear(16, 8),
            nn.ReLU(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.net(x)

EPOCHS = 200
LR = 0.001
model = Classifier(input_features)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

model.train()
for epoch in range(EPOCHS):
    for features, labels in train_loader:
        outputs = model(features)
        loss = criterion(outputs, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

model.eval()
all_predictions = []
with torch.no_grad():
    for features, labels in val_loader:
        outputs = model(features)
        predicted = (torch.sigmoid(outputs) > 0.5).float()
        all_predictions.extend(predicted.numpy())

accuracy = accuracy_score(y_val, all_predictions)
print(f'Accuracy in val set: {accuracy*100:.2f}%')

test_df = pd.read_csv('./datasets/titanic/test.csv')
passenger_ids = test_df['PassengerId']

fare_median = df['Fare'].median()
test_df['Age'] = test_df['Age'].fillna(age_median)
test_df['Fare'] = test_df['Fare'].fillna(fare_median)

test_df = pd.get_dummies(test_df, columns=['Sex', 'Embarked'], drop_first=True)
test_features_df = test_df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1)

train_columns = df.drop(['Survived'], axis=1).columns
test_features_df = test_features_df.reindex(columns=train_columns, fill_value=0).to_numpy()

X_test = scaler.transform(test_features_df)
X_test = torch.tensor(X_test, dtype=torch.float32)

with torch.no_grad():
    test_outputs = model(X_test)
    predictions = (torch.sigmoid(test_outputs) > 0.5).int().numpy()

submission_df = pd.DataFrame({'PassengerId': passenger_ids, 'Survived': predictions.flatten()})
submission_df.to_csv('submission.csv', index=False)