Convolutional Neural Network Performance

Building a Convolutional Neural Network

  • The input and layers of a convolutional network differ somewhat from a traditional neural network and need to be redesigned; the training module stays essentially the same
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import datasets,transforms
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

First, load the data

  • Build the training set and the test set (validation set) separately
  • Use DataLoader to iterate over the data in batches
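Before loading MNIST itself, the batching behavior of DataLoader can be illustrated with a synthetic stand-in dataset (random tensors with the same shape as MNIST images); the dataset, shapes, and sizes below are made up for demonstration:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Synthetic stand-in for MNIST: 256 fake grayscale 28x28 images with labels
images = torch.randn(256, 1, 28, 28)
labels = torch.randint(0, 10, (256,))
dataset = TensorDataset(images, labels)

loader = DataLoader(dataset, batch_size=64, shuffle=True)

# Each iteration yields one batch: data of shape (64, 1, 28, 28), targets of shape (64,)
data, target = next(iter(loader))
print(data.shape, target.shape)
```

This is exactly the shape of each `(data, target)` pair the training loop later receives from `train_loader`.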
# Define hyperparameters
input_size = 28   # total image size is 28x28
num_classes = 10  # number of label classes
num_epochs = 3    # total number of training epochs
batch_size = 64   # size of one batch: 64 images

# Training set
train_dataset = datasets.MNIST(root='./data',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

# Test set
test_dataset = datasets.MNIST(root='./data',
                              train=False,
                              transform=transforms.ToTensor())

# Build batched loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=True)

Building the convolutional network module

  • A convolution layer, ReLU layer, and pooling layer are typically written together as one block
  • Note that the convolution output is still a feature map; it must be flattened into a vector before it can be used for classification or regression
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(       # input size (1, 28, 28)
            nn.Conv2d(
                in_channels=1,            # grayscale input
                out_channels=16,          # number of feature maps to produce
                kernel_size=5,            # convolution kernel size
                stride=1,                 # stride
                padding=2,                # to keep the output the same size as the input, set padding=(kernel_size-1)/2 when stride=1
            ),                            # output feature maps: (16, 28, 28)
            nn.ReLU(),                    # ReLU layer
            nn.MaxPool2d(kernel_size=2),  # 2x2 pooling, output: (16, 14, 14)
        )
        self.conv2 = nn.Sequential(       # input to the next block: (16, 14, 14)
            nn.Conv2d(16, 32, 5, 1, 2),   # output (32, 14, 14)
            nn.ReLU(),                    # ReLU layer
            nn.MaxPool2d(2),              # output (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # fully connected layer producing the class scores

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten, result: (batch_size, 32 * 7 * 7)
        output = self.out(x)
        return output
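The shape annotations in the comments above can be checked by pushing a dummy batch through the same two conv blocks; this is a standalone sketch that rebuilds the layers inline (the random input batch is made up for illustration):

```python
import torch
import torch.nn as nn

# Same two blocks as in the CNN class above
conv1 = nn.Sequential(nn.Conv2d(1, 16, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2))
conv2 = nn.Sequential(nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2))
out = nn.Linear(32 * 7 * 7, 10)

x = torch.randn(64, 1, 28, 28)  # a dummy batch of 64 grayscale 28x28 images
x = conv1(x)                    # -> (64, 16, 14, 14)
x = conv2(x)                    # -> (64, 32, 7, 7)
x = x.view(x.size(0), -1)       # flatten -> (64, 1568)
logits = out(x)                 # -> (64, 10)
print(logits.shape)
```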

Accuracy as the evaluation metric

def accuracy(predictions, labels):
    pred = torch.max(predictions.data, 1)[1]       # index of the max score = predicted class
    rights = pred.eq(labels.data.view_as(pred)).sum()  # count of correct predictions
    return rights, len(labels)
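To see what this helper returns, here is a small hand-made example (the logits and labels below are invented for illustration): `torch.max(..., 1)[1]` takes the argmax over the class dimension, and the function returns the count of correct predictions together with the batch size.

```python
import torch

def accuracy(predictions, labels):
    pred = torch.max(predictions.data, 1)[1]
    rights = pred.eq(labels.data.view_as(pred)).sum()
    return rights, len(labels)

# Hand-made logits for 4 samples over 3 classes; argmax picks the predicted class
logits = torch.tensor([[0.1, 2.0, 0.0],   # predicts class 1
                       [3.0, 0.2, 0.1],   # predicts class 0
                       [0.0, 0.1, 5.0],   # predicts class 2
                       [1.0, 0.9, 0.0]])  # predicts class 0
labels = torch.tensor([1, 0, 2, 1])       # last sample is wrong
rights, total = accuracy(logits, labels)
print(rights.item(), total)               # 3 correct out of 4
```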

Training the network

# Instantiate the model
net = CNN()
# Loss function
criterion = nn.CrossEntropyLoss()
# Optimizer
optimizer = optim.Adam(net.parameters(), lr=0.001)  # Adam optimizer

# Start the training loop
for epoch in range(num_epochs):
    # Keep the results of the current epoch
    train_rights = []

    for batch_idx, (data, target) in enumerate(train_loader):  # loop over every batch in the loader
        net.train()
        output = net(data)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        right = accuracy(output, target)
        train_rights.append(right)

        if batch_idx % 100 == 0:

            net.eval()
            val_rights = []

            for (data, target) in test_loader:
                output = net(data)
                right = accuracy(output, target)
                val_rights.append(right)

            # Compute accuracy
            train_r = (sum([tup[0] for tup in train_rights]), sum([tup[1] for tup in train_rights]))
            val_r = (sum([tup[0] for tup in val_rights]), sum([tup[1] for tup in val_rights]))

            print('Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\tTrain accuracy: {:.2f}%\tTest accuracy: {:.2f}%'.format(
                epoch, batch_idx * batch_size, len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.data,
                100. * train_r[0].numpy() / train_r[1],
                100. * val_r[0].numpy() / val_r[1]))
Epoch: 0 [0/60000 (0%)]    Loss: 2.298275    Train accuracy: 18.75%    Test accuracy: 16.69%
Epoch: 0 [6400/60000 (11%)]    Loss: 0.366936    Train accuracy: 77.09%    Test accuracy: 91.76%
Epoch: 0 [12800/60000 (21%)]    Loss: 0.197412    Train accuracy: 85.04%    Test accuracy: 95.32%
Epoch: 0 [19200/60000 (32%)]    Loss: 0.065437    Train accuracy: 88.57%    Test accuracy: 95.96%
Epoch: 0 [25600/60000 (43%)]    Loss: 0.245751    Train accuracy: 90.43%    Test accuracy: 97.05%
Epoch: 0 [32000/60000 (53%)]    Loss: 0.116508    Train accuracy: 91.65%    Test accuracy: 97.33%
Epoch: 0 [38400/60000 (64%)]    Loss: 0.106026    Train accuracy: 92.51%    Test accuracy: 97.47%
Epoch: 0 [44800/60000 (75%)]    Loss: 0.024781    Train accuracy: 93.20%    Test accuracy: 97.79%
Epoch: 0 [51200/60000 (85%)]    Loss: 0.040254    Train accuracy: 93.77%    Test accuracy: 97.44%
Epoch: 0 [57600/60000 (96%)]    Loss: 0.013604    Train accuracy: 94.19%    Test accuracy: 97.57%
Epoch: 1 [0/60000 (0%)]    Loss: 0.038379    Train accuracy: 100.00%    Test accuracy: 97.90%
Epoch: 1 [6400/60000 (11%)]    Loss: 0.091921    Train accuracy: 97.94%    Test accuracy: 98.26%
Epoch: 1 [12800/60000 (21%)]    Loss: 0.082685    Train accuracy: 97.88%    Test accuracy: 98.12%
Epoch: 1 [19200/60000 (32%)]    Loss: 0.030613    Train accuracy: 97.95%    Test accuracy: 98.53%
Epoch: 1 [25600/60000 (43%)]    Loss: 0.098491    Train accuracy: 97.96%    Test accuracy: 98.30%
Epoch: 1 [32000/60000 (53%)]    Loss: 0.078065    Train accuracy: 97.97%    Test accuracy: 98.50%
Epoch: 1 [38400/60000 (64%)]    Loss: 0.013370    Train accuracy: 98.02%    Test accuracy: 98.55%
Epoch: 1 [44800/60000 (75%)]    Loss: 0.065581    Train accuracy: 98.09%    Test accuracy: 98.65%
Epoch: 1 [51200/60000 (85%)]    Loss: 0.077535    Train accuracy: 98.12%    Test accuracy: 98.23%
Epoch: 1 [57600/60000 (96%)]    Loss: 0.007826    Train accuracy: 98.16%    Test accuracy: 98.65%
Epoch: 2 [0/60000 (0%)]    Loss: 0.170131    Train accuracy: 98.44%    Test accuracy: 98.57%
Epoch: 2 [6400/60000 (11%)]    Loss: 0.046841    Train accuracy: 98.64%    Test accuracy: 98.40%
Epoch: 2 [12800/60000 (21%)]    Loss: 0.095354    Train accuracy: 98.50%    Test accuracy: 98.58%
Epoch: 2 [19200/60000 (32%)]    Loss: 0.009594    Train accuracy: 98.58%    Test accuracy: 98.68%
Epoch: 2 [25600/60000 (43%)]    Loss: 0.017973    Train accuracy: 98.62%    Test accuracy: 98.82%
Epoch: 2 [32000/60000 (53%)]    Loss: 0.045781    Train accuracy: 98.66%    Test accuracy: 98.63%
Epoch: 2 [38400/60000 (64%)]    Loss: 0.056535    Train accuracy: 98.65%    Test accuracy: 98.94%
Epoch: 2 [44800/60000 (75%)]    Loss: 0.014779    Train accuracy: 98.66%    Test accuracy: 98.97%
Epoch: 2 [51200/60000 (85%)]    Loss: 0.010532    Train accuracy: 98.71%    Test accuracy: 98.50%
Epoch: 2 [57600/60000 (96%)]    Loss: 0.076463    Train accuracy: 98.68%    Test accuracy: 98.79%

Convolutional Neural Network Performance
https://zhangfuli.github.io/2020/09/03/卷积神经网络效果/
Author: 张富利
Published on September 3, 2020
License