6.1 Custom Loss Functions
The torch.nn module ships with many commonly used loss functions such as MSELoss, L1Loss, and BCELoss, but knowing how to define your own loss function is an especially important skill.
6.1.1 Defining a Loss as a Function
def my_loss(output, target):
    loss = torch.mean((output - target)**2)
    return loss
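Such a function-style loss is called like any Python function; a quick sanity check (a minimal sketch, with hypothetical output and target tensors):

import torch

output = torch.randn(8, 4, requires_grad=True)  # hypothetical model outputs
target = torch.randn(8, 4)                      # hypothetical targets
loss = my_loss(output, target)                  # equivalent to nn.MSELoss()(output, target)
loss.backward()                                 # autograd provides the gradient automatically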
6.1.2 Defining a Loss as a Class
PyTorch's built-in loss classes partly inherit from _loss and partly from _WeightedLoss; _WeightedLoss inherits from _loss, which in turn inherits from nn.Module. A loss function can therefore be regarded as a layer of the network, and a custom loss class should likewise inherit from nn.Module.
Dice Loss is a common loss for segmentation tasks, based on the Dice coefficient, which measures the overlap between prediction and target. Implementation:
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)  # F.sigmoid is deprecated
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        dice = (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)
        return 1 - dice

# Usage
criterion = DiceLoss()
loss = criterion(input, targets)
Other segmentation losses can be implemented in the same style, e.g. BCE-Dice Loss, Jaccard / Intersection over Union (IoU) Loss, and Focal Loss:
class DiceBCELoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceBCELoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)  # F.sigmoid is deprecated
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        dice_loss = 1 - (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        Dice_BCE = BCE + dice_loss
        return Dice_BCE
--------------------------------------------------------------------
class IoULoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(IoULoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)  # F.sigmoid is deprecated
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        total = (inputs + targets).sum()
        union = total - intersection
        IoU = (intersection + smooth)/(union + smooth)
        return 1 - IoU
--------------------------------------------------------------------
ALPHA = 0.8
GAMMA = 2

class FocalLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(FocalLoss, self).__init__()

    def forward(self, inputs, targets, alpha=ALPHA, gamma=GAMMA, smooth=1):
        inputs = torch.sigmoid(inputs)  # F.sigmoid is deprecated
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        BCE_EXP = torch.exp(-BCE)
        focal_loss = alpha * (1-BCE_EXP)**gamma * BCE
        return focal_loss
When writing a custom loss, express all the math through PyTorch's tensor operations. Autograd then provides the backward pass automatically, and the loss can run directly on CUDA without any extra work.
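As a quick sanity check of that claim (a minimal sketch reusing the DiceLoss class above, with hypothetical logits and masks), we can confirm that gradients flow through the custom loss:

import torch

logits = torch.randn(4, 1, 8, 8, requires_grad=True)  # hypothetical raw model outputs
masks = (torch.rand(4, 1, 8, 8) > 0.5).float()        # hypothetical binary masks

criterion = DiceLoss()
loss = criterion(logits, masks)
loss.backward()                                       # no hand-written backward needed
print(loss.item(), logits.grad.shape)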
6.2 Dynamically Adjusting the Learning Rate
If the learning rate is set too small, convergence slows dramatically and training time grows; if it is too large, the parameters may oscillate back and forth around the optimum. Even with a well-chosen initial learning rate, after many epochs the accuracy may start to oscillate or the loss may stop decreasing, a sign that the current learning rate no longer suits further tuning. A scheduler counters this with an appropriate learning-rate decay strategy and improves accuracy.
6.2.1 Using the Official Schedulers
Code template:

# Choose an optimizer
optimizer = torch.optim.Adam(...)
# Choose one or more schedulers from torch.optim.lr_scheduler
# (e.g. StepLR, MultiStepLR, ExponentialLR, CosineAnnealingLR, ReduceLROnPlateau)
scheduler1 = torch.optim.lr_scheduler....
scheduler2 = torch.optim.lr_scheduler....
...
schedulern = torch.optim.lr_scheduler....
# Training
for epoch in range(100):
    train(...)
    validate(...)
    optimizer.step()
    # adjust the learning rate only after the optimizer has updated the parameters
    scheduler1.step()
    ...
    schedulern.step()
When using the official torch.optim.lr_scheduler classes, always call scheduler.step() after optimizer.step().
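For instance, a minimal sketch with StepLR, which multiplies the learning rate by gamma every step_size epochs (the placeholder model and the omitted train/validate calls are assumptions):

import torch
from torch import nn, optim

model = nn.Linear(10, 2)                      # placeholder model
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    # train(...); validate(...)               # training code omitted
    optimizer.step()                          # update the parameters first
    scheduler.step()                          # then update the learning rate
    if epoch % 30 == 0:
        print(epoch, scheduler.get_last_lr())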
6.2.2 Writing a Custom Scheduler
To customize the schedule, define a function adjust_learning_rate that overwrites the lr value in each param_group of the optimizer. For example, to drop the learning rate to 1/10 of its value every 30 epochs:
def adjust_learning_rate(optimizer, epoch):
    lr = args.lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
With adjust_learning_rate defined, the training loop adjusts the learning rate each epoch:
def adjust_learning_rate(optimizer, ...):
    ...

optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=0.9)
for epoch in range(10):
    train(...)
    validate(...)
    adjust_learning_rate(optimizer, epoch)
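A quick check of the decay rule (a sketch, with a hypothetical base learning rate of 0.1 standing in for args.lr):

base_lr = 0.1
for epoch in (0, 29, 30, 59, 60, 90):
    print(epoch, base_lr * (0.1 ** (epoch // 30)))
# 0.1 for epochs 0-29, 0.01 for 30-59, 0.001 for 60-89, ...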
6.3 Model Fine-tuning with torchvision
Open-source models are usually trained on fairly large datasets such as ImageNet-1k, ImageNet-11k, or ImageNet-21k. The larger the model, the more data it demands; training a large model from scratch on a small dataset makes overfitting hard to avoid. Transfer learning addresses this: knowledge learned on a source dataset is transferred to a target dataset. For example, most images in ImageNet have nothing to do with chairs, yet a model trained on ImageNet extracts fairly general image features that help recognize edges, textures, shapes, and object composition, and these features may be just as useful for recognizing chairs. Fine-tuning puts transfer learning into practice: take a model someone else has already trained on a similar task, swap in your own data, and adjust the parameters with some training. Commonly used pretrained backbones include VGG, the ResNet family, and the MobileNet family.
6.3.1 The Fine-tuning Workflow
1. Pretrain a neural network (the source model) on a source dataset such as ImageNet.
2. Create a new neural network (the target model) that copies every design element and parameter of the source model except its output layer. We assume these parameters encode knowledge learned on the source dataset that also applies to the target dataset, while the source output layer is tightly tied to the source labels and is therefore discarded.
3. Add to the target model an output layer whose size equals the number of classes in the target dataset, and randomly initialize its parameters.
4. Train the target model on the target dataset: the output layer is trained from scratch, while the parameters of the remaining layers are fine-tuned from the source model. A minimal sketch of steps 2-4 follows.
6.3.2 Using Existing Model Architectures
Instantiating a network:
import torchvision.models as models
resnet18 = models.resnet18()
# resnet18 = models.resnet18(pretrained=False) is equivalent to the line above
alexnet = models.alexnet()
vgg16 = models.vgg16()
squeezenet = models.squeezenet1_0()
densenet = models.densenet161()
inception = models.inception_v3()
googlenet = models.googlenet()
shufflenet = models.shufflenet_v2_x1_0()
mobilenet_v2 = models.mobilenet_v2()
mobilenet_v3_large = models.mobilenet_v3_large()
mobilenet_v3_small = models.mobilenet_v3_small()
resnext50_32x4d = models.resnext50_32x4d()
wide_resnet50_2 = models.wide_resnet50_2()
mnasnet = models.mnasnet1_0()
Passing the pretrained parameter: its True or False value decides whether pretrained weights are used. With pretrained=False the model weights are randomly initialized; with pretrained=True the weights pretrained on a reference dataset are loaded.
import torchvision.models as models
resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
squeezenet = models.squeezenet1_0(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
densenet = models.densenet161(pretrained=True)
inception = models.inception_v3(pretrained=True)
googlenet = models.googlenet(pretrained=True)
shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
mobilenet_v2 = models.mobilenet_v2(pretrained=True)
mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
mnasnet = models.mnasnet1_0(pretrained=True)
- PyTorch model weights usually use the extension .pt or .pth. At load time, the default cache path is checked first for already-downloaded weights; once a file has been downloaded, later loads reuse it without downloading again.
- Pretrained weights can be slow to download. You can look up a model's model_urls in the torchvision source and download the file manually (e.g. with a download manager). On Linux and Mac the default download path is the .cache folder under the user's home directory; on Windows it is C:\Users\<username>\.cache\torch\hub\checkpoints. The download location can also be set via torch.utils.model_zoo.load_url().
- Alternatively, download the weights yourself, place the file next to your code, and load the parameters into the network:

self.model = models.resnet50(pretrained=False)
self.model.load_state_dict(torch.load('./model/resnet50-19c8e357.pth'))

- If a download is interrupted midway, delete the partial weight file from the cache path, otherwise loading may raise an error.
6.3.3 Training Specific Layers
By default every parameter has requires_grad = True; when training from scratch or fine-tuning all layers, nothing special is needed here. When you are extracting features and want gradients only for the newly initialized layers, leaving everything else unchanged, freeze the other layers by setting requires_grad = False:
def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False
Taking resnet18 as an example, change the 1000-class head to 4 classes while updating only the final layer and leaving the feature extractor untouched: first freeze the gradients of all parameters, then replace the fully connected output layer; the new layer's parameters are created with gradients enabled.
import torch.nn as nn
import torchvision.models as models

# freeze parameter gradients
feature_extract = True
model = models.resnet18(pretrained=True)
set_parameter_requires_grad(model, feature_extract)

# modify the model head
num_ftrs = model.fc.in_features
model.fc = nn.Linear(in_features=num_ftrs, out_features=4, bias=True)
During training, gradients still flow back through the whole model, but parameter updates happen only in the fc layer. Controlling the requires_grad attribute like this is how you train only specific layers, which is essential for model fine-tuning.
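A common companion step (a minimal sketch, assuming the frozen model above) is to hand only the trainable parameters to the optimizer and verify which layers will actually be updated:

import torch.optim as optim

# only parameters with requires_grad=True (here, the new fc layer) are optimized
params_to_update = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(params_to_update, lr=1e-3, momentum=0.9)

for name, param in model.named_parameters():
    if param.requires_grad:
        print("will train:", name)  # expect only fc.weight and fc.bias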
6.4 Model Fine-tuning with timm
timm, created by Ross Wightman, provides many state-of-the-art computer vision models. It can be seen as an extended version of torchvision, and its models tend to reach higher accuracy.
- GitHub: https://github.com/rwightman/pytorch-image-models
- Docs: https://fastai.github.io/timmdocs/ and https://rwightman.github.io/pytorch-image-models/
Installing timm:

# via pip
pip install timm
# or from source
git clone https://github.com/rwightman/pytorch-image-models
cd pytorch-image-models && pip install -e .
Listing the available pretrained models: timm.list_models(pretrained=True) returns every model timm provides with pretrained weights.
import timm
avail_pretrained_models = timm.list_models(pretrained=True)
len(avail_pretrained_models)
A family such as ResNet includes ResNet18, 50, 101, and more; pass a wildcard pattern to timm.list_models() for a fuzzy query of model names:
all_densnet_models = timm.list_models("*densenet*")
all_densnet_models
Output:
['densenet121',
'densenet121d',
'densenet161',
'densenet169',
'densenet201',
'densenet264',
'densenet264d_iabn',
'densenetblur121d',
'tv_densenet121']
To see a model's configuration details, access its default_cfg attribute:
model = timm.create_model('resnet34',num_classes=10,pretrained=True)
model.default_cfg
{'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34-43635321.pth',
'num_classes': 1000,
'input_size': (3, 224, 224),
'pool_size': (7, 7),
'crop_pct': 0.875,
'interpolation': 'bilinear',
'mean': (0.485, 0.456, 0.406),
'std': (0.229, 0.224, 0.225),
'first_conv': 'conv1',
'classifier': 'fc',
'architecture': 'resnet34'}
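One practical use of default_cfg (a minimal sketch, assuming the resnet34 model above): build a matching preprocessing pipeline from the recorded input size and normalization statistics:

from torchvision import transforms

cfg = model.default_cfg
preprocess = transforms.Compose([
    transforms.Resize(cfg['input_size'][1:]),       # (224, 224)
    transforms.ToTensor(),
    transforms.Normalize(cfg['mean'], cfg['std']),  # ImageNet statistics
])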
Using and modifying pretrained models
Create a model with timm.create_model() and pass pretrained=True to use pretrained weights. Model parameters can be inspected the same way as for torchvision models:
import timm
import torch
model = timm.create_model('resnet34',pretrained=True)
x = torch.randn(1,3,224,224)
output = model(x)
output.shape
torch.Size([1, 1000])
- Inspecting the parameters of a specific layer (the first convolution as an example):
model = timm.create_model('resnet34',pretrained=True)
list(dict(model.named_children())['conv1'].parameters())
[Parameter containing:
tensor([[[[-2.9398e-02, -3.6421e-02, -2.8832e-02, ..., -1.8349e-02,
-6.9210e-03, 1.2127e-02],
[-3.6199e-02, -6.0810e-02, -5.3891e-02, ..., -4.2744e-02,
-7.3169e-03, -1.1834e-02],
...
[ 8.4563e-03, -1.7099e-02, -1.2176e-03, ..., 7.0081e-02,
2.9756e-02, -4.1400e-03]]]], requires_grad=True)]
- Modifying the model (changing the 1000-class output to 10 classes):
model = timm.create_model('resnet34',num_classes=10,pretrained=True)
x = torch.randn(1,3,224,224)
output = model(x)
output.shape
torch.Size([1, 10])
- Changing the number of input channels (e.g. our images are single-channel while the model expects three channels): pass in_chans=1.
model = timm.create_model('resnet34',num_classes=10,pretrained=True,in_chans=1)
x = torch.randn(1,1,224,224)
output = model(x)
Saving the model: models created by timm are subclasses of torch.nn.Module, so PyTorch's built-in methods for saving and loading parameters apply directly:
torch.save(model.state_dict(),'./checkpoint/timm_model.pth')
model.load_state_dict(torch.load('./checkpoint/timm_model.pth'))
6.5 Half-Precision Training
Training PyTorch models typically relies on GPU hardware. GPU performance has two main aspects: compute and memory. The former determines how fast the card computes; the latter determines how much data it can hold for computation at once. With a fixed amount of GPU memory, loading more data per step (i.e. a larger batch size) improves training efficiency. Moreover, some data is intrinsically large (e.g. 3D images or video), and on a small-memory card even a batch size of 1 may not fit, so using memory wisely matters.
PyTorch stores floating-point numbers as torch.float32 by default. More digits do guarantee precision, but most scenarios do not need it; keeping half the information, with torch.float16, does not change the result. Since the bit width is halved, this is called "half precision". Half precision reduces memory usage, letting the GPU load more data for computation at the same time.
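A quick illustration of the saving (a sketch comparing the raw storage of two tensors):

import torch

t32 = torch.zeros(1024, 1024, dtype=torch.float32)
t16 = torch.zeros(1024, 1024, dtype=torch.float16)
print(t32.element_size() * t32.nelement())  # 4194304 bytes
print(t16.element_size() * t16.nelement())  # 2097152 bytes: half the memory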
6.5.1 Setting Up Half-Precision Training
In PyTorch, half-precision training is configured with autocast.

- Import autocast:

from torch.cuda.amp import autocast

- Model setup: in the model definition, use Python's decorator syntax to wrap the model's forward function with autocast:

@autocast()
def forward(self, x):
    ...
    return x
- Training loop: wrap the forward pass (the data entering the model and everything after it) in a with autocast(): block:

for x in train_loader:
    x = x.cuda()
    with autocast():
        output = model(x)
        ...
Half-precision training mainly pays off when the data itself is large (3D images, video). For small inputs (e.g. MNIST handwritten digits at only 28*28 pixels), it brings no significant gain.
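Note that the full AMP recipe in the official docs also uses torch.cuda.amp.GradScaler to keep small fp16 gradients from underflowing; a minimal sketch (model, optimizer, criterion, and train_loader are assumed to exist):

from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
for x, y in train_loader:
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    with autocast():
        output = model(x)
        loss = criterion(output, y)
    scaler.scale(loss).backward()  # scale the loss so fp16 gradients stay representable
    scaler.step(optimizer)         # unscale gradients, then run the optimizer step
    scaler.update()                # adjust the scale factor for the next iteration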
6.6 Data Augmentation with imgaug
Data is the lifeblood of deep learning: large amounts of it are needed to keep models from overfitting, yet in many settings, such as medical image analysis, large datasets are simply unavailable. Data augmentation addresses this limited-data problem: it is a set of techniques that increase the size and quality of a training set for building deep learning models. In computer vision, generating augmented images is relatively easy; even after adding noise or cropping away part of an image, a model can still classify it. Several machine learning libraries implement computer vision augmentation, and imgaug packages a large collection of augmentation algorithms.
6.6.1 Introduction and Installation
6.6.1.1 About imgaug
6.6.1.2 Installing imgaug
# install via conda-forge
conda config --add channels conda-forge
conda install imgaug
# install imgaug either via pypi
pip install imgaug
# install the latest version directly from github
pip install git+https://github.com/aleju/imgaug.git
6.6.2 Using imgaug
Reading images with imageio is recommended. With opencv you must manually convert the channel order from BGR to RGB; with PIL.Image the loaded object has no shape attribute, so convert it to np.array() before processing.

Processing a single image:
import imageio
import imgaug as ia
%matplotlib inline

# read the image
img = imageio.imread("./Lenna.jpg")

# reading with PIL.Image instead:
# img = Image.open("./Lenna.jpg")
# image = np.array(img)
# ia.imshow(image)

# visualize the image
ia.imshow(img)
Affine
from imgaug import augmenters as iaa

# set the random seed
ia.seed(4)

# instantiate the augmenter
rotate = iaa.Affine(rotate=(-4,45))
img_aug = rotate(image=img)
ia.imshow(img_aug)
This applies a single operation to one image; in practice you often want several augmentations at once. In that case, build an augmentation pipeline with imgaug.augmenters.Sequential(), analogous to torchvision.transforms.Compose():
iaa.Sequential(children=None,      # the collection of Augmenters
               random_order=False, # whether each batch applies the Augmenter list in a different order
               name=None,
               deterministic=False,
               random_state=None)
# build the processing sequence
aug_seq = iaa.Sequential([
    iaa.Affine(rotate=(-25,25)),
    iaa.AdditiveGaussianNoise(scale=(10,60)),
    iaa.Crop(percent=(0,0.2))
])

# process the image; the image keyword is required here and must not be images
image_aug = aug_seq(image=img)
ia.imshow(image_aug)
Processing a batch of images
To apply the same transform to every image in a batch:
images = [img,img,img,img,]
images_aug = rotate(images=images)
ia.imshow(np.hstack(images_aug))
To apply several augmentations to a batch, build the pipeline with Sequential just as for a single image:
aug_seq = iaa.Sequential([
    iaa.Affine(rotate=(-25, 25)),
    iaa.AdditiveGaussianNoise(scale=(10, 60)),
    iaa.Crop(percent=(0, 0.2))
])

# pass the batch via the images keyword
images_aug = aug_seq.augment_images(images=images)
# images_aug = aug_seq(images=images)
ia.imshow(np.hstack(images_aug))
Applying different augmenters to different parts of a batch: imgaug.augmenters.Sometimes() applies one set of Augmenters to a fraction p of the images in a batch and another set to the rest:
iaa.Sometimes(p=0.5,          # the split ratio
              then_list=None, # Augmenters applied to the fraction p of images
              else_list=None, # Augmenters applied to the remaining 1-p; each image goes through
                              # either then_list or else_list, never both
              name=None,
              deterministic=False,
              random_state=None)
Processing images of different sizes:
# build the pipeline
seq = iaa.Sequential([
    iaa.CropAndPad(percent=(-0.2, 0.2), pad_mode="edge"),  # crop and pad images
    iaa.AddToHueAndSaturation((-60, 60)),  # change their color
    iaa.ElasticTransformation(alpha=90, sigma=9),  # water-like effect
    iaa.Cutout()  # replace one squared area within the image by a constant intensity value
], random_order=True)
# load images of different sizes
images_different_sizes = [
    imageio.imread("https://upload.wikimedia.org/wikipedia/commons/e/ed/BRACHYLAGUS_IDAHOENSIS.jpg"),
    imageio.imread("https://upload.wikimedia.org/wikipedia/commons/c/c9/Southern_swamp_rabbit_baby.jpg"),
    imageio.imread("https://upload.wikimedia.org/wikipedia/commons/9/9f/Lower_Keys_marsh_rabbit.jpg")
]

# augment the images
images_aug = seq(images=images_different_sizes)

# visualize the results
print("Image 0 (input shape: %s, output shape: %s)" % (images_different_sizes[0].shape, images_aug[0].shape))
ia.imshow(np.hstack([images_different_sizes[0], images_aug[0]]))
print("Image 1 (input shape: %s, output shape: %s)" % (images_different_sizes[1].shape, images_aug[1].shape))
ia.imshow(np.hstack([images_different_sizes[1], images_aug[1]]))
print("Image 2 (input shape: %s, output shape: %s)" % (images_different_sizes[2].shape, images_aug[2].shape))
ia.imshow(np.hstack([images_different_sizes[2], images_aug[2]]))
6.6.3 Using imgaug with PyTorch
import numpy as np
import imgaug
from imgaug import augmenters as iaa
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms

# build the pipeline
tfs = transforms.Compose([
    iaa.Sequential([
        iaa.flip.Fliplr(p=0.5),
        iaa.flip.Flipud(p=0.5),
        iaa.GaussianBlur(sigma=(0.0, 0.1)),
        iaa.MultiplyBrightness(mul=(0.65, 1.35)),
    ]).augment_image,
    # don't forget ToTensor()
    transforms.ToTensor()
])
# custom dataset
class CustomDataset(Dataset):
    def __init__(self, n_images, n_classes, transform=None):
        # for reading real images, imageio is recommended
        self.images = np.random.randint(0, 255,
                                        (n_images, 224, 224, 3),
                                        dtype=np.uint8)
        self.targets = np.random.randn(n_images, n_classes)
        self.transform = transform

    def __getitem__(self, item):
        image = self.images[item]
        target = self.targets[item]
        if self.transform:
            image = self.transform(image)
        return image, target

    def __len__(self):
        return len(self.images)
def worker_init_fn(worker_id):
    imgaug.seed(np.random.get_state()[1][0] + worker_id)

custom_ds = CustomDataset(n_images=50, n_classes=10, transform=tfs)
custom_dl = DataLoader(custom_ds, batch_size=64,
                       num_workers=4, pin_memory=True,
                       worker_init_fn=worker_init_fn)
On Windows, num_workers can only be set to 0. On a remote Linux server you may use larger num_workers values; note the role of worker_init_fn(): it keeps the augmentation random across workers when num_workers > 0.
6.7 Tuning Hyperparameters with argparse
6.7.1 Introduction to argparse
argparse is Python's standard command-line parsing module; it is built in and requires no installation. It lets us pass arguments into a program straight from the command line. We run Python files with python file.py; argparse parses, stores, and exposes the additional command-line arguments, so a command like python file.py --lr 1e-4 --batch_size 32 can set common hyperparameters.
6.7.2 Using argparse
Three steps:
1. Create an ArgumentParser() object.
2. Call add_argument() to add arguments.
3. Call parse_args() to parse them.
The following hands-on examples show argparse in action.
# demo.py
import argparse

# create the ArgumentParser() object
parser = argparse.ArgumentParser()

# add arguments
parser.add_argument('-o', '--output', action='store_true',
                    help="shows output")
# action='store_true' records output as True when the flag is given
# type constrains the argument's type
# default sets the default value
parser.add_argument('--lr', type=float, default=3e-5, help='select the learning rate, default=3e-5')
parser.add_argument('--batch_size', type=int, required=True, help='input batch size')

# parse the arguments with parse_args()
args = parser.parse_args()

if args.output:
    print("This is some output")
    print(f"learning rate:{args.lr} ")
Running python demo.py --lr 3e-4 --batch_size 32 on the command line prints:
This is some output
learning rate: 3e-4
argparse arguments come in optional and required flavors. Optional arguments behave like lr above: when not supplied, the default value is used. Required arguments behave like our batch_size argument: once required=True is set, the argument must be passed, or the parser reports an error, as sketched below.
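For example, omitting the required argument aborts with an error roughly like this (a sketch of the shell session):

$ python demo.py --lr 3e-4
usage: demo.py [-h] [-o] [--lr LR] --batch_size BATCH_SIZE
demo.py: error: the following arguments are required: --batch_size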
Can we skip the -- prefix when passing arguments? Yes, with a small change to the setup, using positional arguments:
# positional.py
import argparse

# positional arguments
parser = argparse.ArgumentParser()
parser.add_argument('name')
parser.add_argument('age')
args = parser.parse_args()

print(f'{args.name} is {args.age} years old')
Without --, arguments are parsed strictly by position:

$ python positional.py Peter 23
Peter is 23 years old
6.7.3 Managing Hyperparameters More Efficiently with argparse
A hyperparameter management pattern from a Datawhale contributor: put all argparse handling in config.py, then import it from train.py or any other file.
import argparse

def get_options(parser=argparse.ArgumentParser()):
    parser.add_argument('--workers', type=int, default=0,
                        help='number of data loading workers (4x the number of GPUs is a common choice)')
    parser.add_argument('--batch_size', type=int, default=4, help='input batch size, default=4')
    parser.add_argument('--niter', type=int, default=10, help='number of epochs to train for, default=10')
    parser.add_argument('--lr', type=float, default=3e-5, help='select the learning rate, default=3e-5')
    parser.add_argument('--seed', type=int, default=118, help="random seed")
    parser.add_argument('--cuda', action='store_true', default=True, help='enables cuda')
    parser.add_argument('--checkpoint_path', type=str, default='',
                        help='Path to load a previous trained model if not empty (default empty)')
    parser.add_argument('--output', action='store_true', default=True, help="shows output")

    opt = parser.parse_args()

    if opt.output:
        print(f'num_workers: {opt.workers}')
        print(f'batch_size: {opt.batch_size}')
        print(f'epochs (niters) : {opt.niter}')
        print(f'learning rate : {opt.lr}')
        print(f'manual_seed: {opt.seed}')
        print(f'cuda enable: {opt.cuda}')
        print(f'checkpoint_path: {opt.checkpoint_path}')

    return opt

if __name__ == '__main__':
    opt = get_options()
$ python config.py
num_workers: 0
batch_size: 4
epochs (niters) : 10
learning rate : 3e-05
manual_seed: 118
cuda enable: True
checkpoint_path:
In train.py, the options can then be consumed with the following structure:
# import the required libraries
...
import config

opt = config.get_options()

manual_seed = opt.seed
num_workers = opt.workers
batch_size = opt.batch_size
lr = opt.lr
niters = opt.niter  # note: the option is named --niter
checkpoint_path = opt.checkpoint_path

# seed everything for reproducible results
def set_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

...

if __name__ == '__main__':
    set_seed(manual_seed)
    for epoch in range(niters):
        train(model, lr, batch_size, num_workers, checkpoint_path)
        val(model, lr, batch_size, num_workers, checkpoint_path)
PyTorch Model Definition and Advanced Training Techniques
import os
import numpy as np
import collections
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
Point 1: Ways to define a model
- Sequential
## Sequential: Direct list
import torch.nn as nn
net1 = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
print(net1)
## Sequential: Ordered Dict
import collections
import torch.nn as nn
net2 = nn.Sequential(collections.OrderedDict([
    ('fc1', nn.Linear(784, 256)),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(256, 10))
]))
print(net2)
# quick check
a = torch.rand(4,784)
out1 = net1(a)
out2 = net2(a)
print(out1.shape==out2.shape, out1.shape)
- ModuleList
## ModuleList
net3 = nn.ModuleList([nn.Linear(784, 256), nn.ReLU()])
net3.append(nn.Linear(256, 10)) # append, like a Python list
print(net3[-1]) # indexing, like a Python list
print(net3)
class Net3(nn.Module):
    def __init__(self):
        super().__init__()
        self.modulelist = nn.ModuleList([nn.Linear(784, 256), nn.ReLU()])
        self.modulelist.append(nn.Linear(256, 10))

    def forward(self, x):
        for layer in self.modulelist:
            x = layer(x)
        return x
net3_ = Net3()
out3_ = net3_(a)
print(out3_.shape)
- ModuleDict
## ModuleDict
net4 = nn.ModuleDict({
    'linear': nn.Linear(784, 256),
    'act': nn.ReLU(),
})
net4['output'] = nn.Linear(256, 10) # add a layer
print(net4['linear']) # access by key
print(net4.output)
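Note that, like ModuleList, a ModuleDict defines no forward computation by itself; a minimal sketch of wiring it inside an nn.Module (the class name Net4 is illustrative, reusing the imports above):

class Net4(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleDict({
            'linear': nn.Linear(784, 256),
            'act': nn.ReLU(),
        })
        self.layers['output'] = nn.Linear(256, 10)

    def forward(self, x):
        # apply the layers in an explicit order; the dict itself imposes none
        for key in ['linear', 'act', 'output']:
            x = self.layers[key](x)
        return x

net4_ = Net4()
print(net4_(torch.rand(4, 784)).shape)  # torch.Size([4, 10])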
Point 2: Building complex networks quickly from model blocks (Carvana car segmentation)
Building U-Net quickly from four kinds of blocks:
1) the two convolutions inside each block (DoubleConv)
2) the downsampling connections between blocks on the left side, i.e. max pooling (Down)
3) the upsampling connections between blocks on the right side (Up)
4) the output layer (OutConv)
import os
import numpy as np
import collections
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
class DoubleConv(nn.Module):
    """(convolution => [BN] => ReLU) * 2"""

    def __init__(self, in_channels, out_channels, mid_channels=None):
        super().__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.double_conv(x)

class Down(nn.Module):
    """Downscaling with maxpool then double conv"""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.maxpool_conv = nn.Sequential(
            nn.MaxPool2d(2),
            DoubleConv(in_channels, out_channels)
        )

    def forward(self, x):
        return self.maxpool_conv(x)

class Up(nn.Module):
    """Upscaling then double conv"""

    def __init__(self, in_channels, out_channels, bilinear=True):
        super().__init__()
        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        # input is CHW
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]
        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        # if you have padding issues, see
        # https://github.com/HaiyongJiang/U-Net-Pytorch-Unstructured-Buggy/commit/0e854509c2cea854e247a9c615f175f76fbb2e3a
        # https://github.com/xiaopeng-liao/Pytorch-UNet/commit/8ebac70e633bac59fc22bb5195e513d5832fb3bd
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)

class OutConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(OutConv, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)
## Assemble the U-Net
class UNet(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=True):
        super(UNet, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = DoubleConv(n_channels, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if bilinear else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)

    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        logits = self.outc(x)
        return logits

unet = UNet(3,1)
unet

Output:
UNet(
(inc): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
(down1): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down2): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down3): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down4): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(up1): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(up2): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(up3): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(up4): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(outc): OutConv(
(conv): Conv2d(64, 1, kernel_size=(1, 1), stride=(1, 1))
)
)
Point 3: Modifying the model
Suppose the segmentation is multi-class (i.e. the mask contains not just 0 and 1 but also 2, 3, 4, ... for other targets); then specific layers of the model must be changed. We will also look at:
- adding an extra input
- adding an extra output
## Modify a specific layer
import copy
unet1 = copy.deepcopy(unet)
unet1.outc

b = torch.rand(1,3,224,224)
out_unet1 = unet1(b)
print(out_unet1.shape)

unet1.outc = OutConv(64, 5)  # replace the head with a 5-class output
unet1.outc

out_unet1 = unet1(b)
print(out_unet1.shape)
## Add an extra input
class UNet2(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=True):
        super(UNet2, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = DoubleConv(n_channels, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if bilinear else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)

    def forward(self, x, add_variable):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        x = x + add_variable  # modification: fuse the extra input
        logits = self.outc(x)
        return logits

unet2 = UNet2(3,1)

c = torch.rand(1,1,224,224)
out_unet2 = unet2(b, c)
print(out_unet2.shape)
## Add an extra output
class UNet3(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=True):
        super(UNet3, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = DoubleConv(n_channels, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if bilinear else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)

    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        logits = self.outc(x)
        return logits, x5  # modification: also return the bottleneck features

unet3 = UNet3(3,1)

c = torch.rand(1,1,224,224)
out_unet3, mid_out = unet3(b)
print(out_unet3.shape, mid_out.shape)
Point 4: Training a basic U-Net on the Carvana dataset
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import torch.optim as optim
import matplotlib.pyplot as plt
import PIL
from sklearn.model_selection import train_test_split
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'
class CarvanaDataset(Dataset):
    def __init__(self, base_dir, idx_list, mode="train", transform=None):
        self.base_dir = base_dir
        self.idx_list = idx_list
        self.images = os.listdir(base_dir+"train")
        self.masks = os.listdir(base_dir+"train_masks")
        self.mode = mode
        self.transform = transform

    def __len__(self):
        return len(self.idx_list)

    def __getitem__(self, index):
        image_file = self.images[self.idx_list[index]]
        mask_file = image_file[:-4]+"_mask.gif"
        image = PIL.Image.open(os.path.join(self.base_dir, "train", image_file))
        if self.mode=="train":
            mask = PIL.Image.open(os.path.join(self.base_dir, "train_masks", mask_file))
            if self.transform is not None:
                image = self.transform(image)
                mask = self.transform(mask)
                mask[mask!=0] = 1.0
            return image, mask.float()
        else:
            if self.transform is not None:
                image = self.transform(image)
            return image
base_dir = "./"
transform = transforms.Compose([transforms.Resize((256,256)), transforms.ToTensor()])
train_idxs, val_idxs = train_test_split(range(len(os.listdir(base_dir+"train_masks"))), test_size=0.3)
train_data = CarvanaDataset(base_dir, train_idxs, transform=transform)
val_data = CarvanaDataset(base_dir, val_idxs, transform=transform)
train_loader = DataLoader(train_data, batch_size=32, num_workers=4, shuffle=True)
val_loader = DataLoader(train_data, batch_size=32, num_workers=4, shuffle=False)
image, mask = next(iter(train_loader))
plt.subplot(121)
plt.imshow(image[0,0])
plt.subplot(122)
plt.imshow(mask[0,0], cmap="gray")
<matplotlib.image.AxesImage at 0x7f4b61982d00>
# use Binary Cross Entropy Loss for now; we will swap in a custom loss later
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(unet.parameters(), lr=1e-3, weight_decay=1e-8)
unet = nn.DataParallel(unet).cuda()
def dice_coeff(pred, target):
    eps = 0.0001
    num = pred.size(0)
    m1 = pred.view(num, -1)  # Flatten
    m2 = target.view(num, -1)  # Flatten
    intersection = (m1 * m2).sum()
    return (2. * intersection + eps) / (m1.sum() + m2.sum() + eps)
def train(epoch):
    unet.train()
    train_loss = 0
    for data, mask in train_loader:
        data, mask = data.cuda(), mask.cuda()
        optimizer.zero_grad()
        output = unet(data)
        loss = criterion(output, mask)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*data.size(0)
    train_loss = train_loss/len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))

def val(epoch):
    print("current learning rate: ", optimizer.state_dict()["param_groups"][0]["lr"])
    unet.eval()
    val_loss = 0
    dice_score = 0
    with torch.no_grad():
        for data, mask in val_loader:
            data, mask = data.cuda(), mask.cuda()
            output = unet(data)
            loss = criterion(output, mask)
            val_loss += loss.item()*data.size(0)
            dice_score += dice_coeff(torch.sigmoid(output).cpu(), mask.cpu())*data.size(0)
    val_loss = val_loss/len(val_loader.dataset)
    dice_score = dice_score/len(val_loader.dataset)
    print('Epoch: {} \tValidation Loss: {:.6f}, Dice score: {:.6f}'.format(epoch, val_loss, dice_score))

epochs = 100
for epoch in range(1, epochs+1):
    train(epoch)
    val(epoch)
Point 5: Custom loss functions
Instead of cross entropy, suppose we want a loss tailored to the Dice coefficient commonly used for segmentation, i.e. DiceLoss; this means defining a custom PyTorch loss:
class DiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        dice = (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)
        return 1 - dice
newcriterion = DiceLoss()
unet.eval()
image, mask = next(iter(val_loader))
out_unet = unet(image.cuda())
loss = newcriterion(out_unet, mask.cuda())
print(loss)
tensor(0.1071, device='cuda:0', grad_fn=<RsubBackward1>)
Point 6: Dynamically adjusting the learning rate
A fixed learning rate may eventually stall the optimization; at that point we lower it to refine the solution. Here we demonstrate PyTorch's built-in StepLR scheduler (a custom scheduler could be plugged into the same loop):
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.8)
epochs = 100
for epoch in range(1, epochs+1):
    train(epoch)
    val(epoch)
    scheduler.step()
Point 7: Model fine-tuning
unet
DataParallel(
(module): UNet(
(inc): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
(down1): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down2): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down3): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down4): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(up1): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(up2): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(up3): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(up4): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
(outc): OutConv(
(conv): Conv2d(64, 1, kernel_size=(1, 1), stride=(1, 1))
)
)
)
unet.module.outc.conv.weight.requires_grad = False
unet.module.outc.conv.bias.requires_grad = False

for layer, param in unet.named_parameters():
    print(layer, '\t', param.requires_grad)
module.inc.double_conv.0.weight True
module.inc.double_conv.1.weight True
module.inc.double_conv.1.bias True
module.inc.double_conv.3.weight True
module.inc.double_conv.4.weight True
module.inc.double_conv.4.bias True
module.down1.maxpool_conv.1.double_conv.0.weight True
module.down1.maxpool_conv.1.double_conv.1.weight True
module.down1.maxpool_conv.1.double_conv.1.bias True
module.down1.maxpool_conv.1.double_conv.3.weight True
module.down1.maxpool_conv.1.double_conv.4.weight True
module.down1.maxpool_conv.1.double_conv.4.bias True
module.down2.maxpool_conv.1.double_conv.0.weight True
module.down2.maxpool_conv.1.double_conv.1.weight True
module.down2.maxpool_conv.1.double_conv.1.bias True
module.down2.maxpool_conv.1.double_conv.3.weight True
module.down2.maxpool_conv.1.double_conv.4.weight True
module.down2.maxpool_conv.1.double_conv.4.bias True
module.down3.maxpool_conv.1.double_conv.0.weight True
module.down3.maxpool_conv.1.double_conv.1.weight True
module.down3.maxpool_conv.1.double_conv.1.bias True
module.down3.maxpool_conv.1.double_conv.3.weight True
module.down3.maxpool_conv.1.double_conv.4.weight True
module.down3.maxpool_conv.1.double_conv.4.bias True
module.down4.maxpool_conv.1.double_conv.0.weight True
module.down4.maxpool_conv.1.double_conv.1.weight True
module.down4.maxpool_conv.1.double_conv.1.bias True
module.down4.maxpool_conv.1.double_conv.3.weight True
module.down4.maxpool_conv.1.double_conv.4.weight True
module.down4.maxpool_conv.1.double_conv.4.bias True
module.up1.conv.double_conv.0.weight True
module.up1.conv.double_conv.1.weight True
module.up1.conv.double_conv.1.bias True
module.up1.conv.double_conv.3.weight True
module.up1.conv.double_conv.4.weight True
module.up1.conv.double_conv.4.bias True
module.up2.conv.double_conv.0.weight True
module.up2.conv.double_conv.1.weight True
module.up2.conv.double_conv.1.bias True
module.up2.conv.double_conv.3.weight True
module.up2.conv.double_conv.4.weight True
module.up2.conv.double_conv.4.bias True
module.up3.conv.double_conv.0.weight True
module.up3.conv.double_conv.1.weight True
module.up3.conv.double_conv.1.bias True
module.up3.conv.double_conv.3.weight True
module.up3.conv.double_conv.4.weight True
module.up3.conv.double_conv.4.bias True
module.up4.conv.double_conv.0.weight True
module.up4.conv.double_conv.1.weight True
module.up4.conv.double_conv.1.bias True
module.up4.conv.double_conv.3.weight True
module.up4.conv.double_conv.4.weight True
module.up4.conv.double_conv.4.bias True
module.outc.conv.weight False
module.outc.conv.bias False
param
Parameter containing:
tensor([-0.1994], device='cuda:0')
Point 8: Half-precision training
## for this demo, restart the kernel and re-run the U-Net block definitions
from torch.cuda.amp import autocast
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'
class CarvanaDataset(Dataset):
    def __init__(self, base_dir, idx_list, mode="train", transform=None):
        self.base_dir = base_dir
        self.idx_list = idx_list
        self.images = os.listdir(base_dir+"train")
        self.masks = os.listdir(base_dir+"train_masks")
        self.mode = mode
        self.transform = transform

    def __len__(self):
        return len(self.idx_list)

    def __getitem__(self, index):
        image_file = self.images[self.idx_list[index]]
        mask_file = image_file[:-4]+"_mask.gif"
        image = PIL.Image.open(os.path.join(self.base_dir, "train", image_file))
        if self.mode=="train":
            mask = PIL.Image.open(os.path.join(self.base_dir, "train_masks", mask_file))
            if self.transform is not None:
                image = self.transform(image)
                mask = self.transform(mask)
                mask[mask!=0] = 1.0
            return image, mask.float()
        else:
            if self.transform is not None:
                image = self.transform(image)
            return image
base_dir = "./"
transform = transforms.Compose([transforms.Resize((256,256)), transforms.ToTensor()])
train_idxs, val_idxs = train_test_split(range(len(os.listdir(base_dir+"train_masks"))), test_size=0.3)
train_data = CarvanaDataset(base_dir, train_idxs, transform=transform)
val_data = CarvanaDataset(base_dir, val_idxs, transform=transform)
train_loader = DataLoader(train_data, batch_size=32, num_workers=4, shuffle=True)
val_loader = DataLoader(train_data, batch_size=32, num_workers=4, shuffle=False)
class UNet_half(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=True):
        super(UNet_half, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = DoubleConv(n_channels, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if bilinear else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)

    @autocast()
    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        logits = self.outc(x)
        return logits
unet_half = UNet_half(3,1)
unet_half = nn.DataParallel(unet_half).cuda()
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(unet_half.parameters(), lr=1e-3, weight_decay=1e-8)
def dice_coeff(pred, target):
    eps = 0.0001
    num = pred.size(0)
    m1 = pred.view(num, -1)  # Flatten
    m2 = target.view(num, -1)  # Flatten
    intersection = (m1 * m2).sum()
    return (2. * intersection + eps) / (m1.sum() + m2.sum() + eps)
def train_half(epoch):
    unet_half.train()
    train_loss = 0
    for data, mask in train_loader:
        data, mask = data.cuda(), mask.cuda()
        optimizer.zero_grad()
        with autocast():  # only the forward pass and loss run under autocast
            output = unet_half(data)
            loss = criterion(output, mask)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*data.size(0)
    train_loss = train_loss/len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))
def val_half(epoch):
    print("current learning rate: ", optimizer.state_dict()["param_groups"][0]["lr"])
    unet_half.eval()
    val_loss = 0
    dice_score = 0
    with torch.no_grad():
        for data, mask in val_loader:
            data, mask = data.cuda(), mask.cuda()
            with autocast():
                output = unet_half(data)
                loss = criterion(output, mask)
            val_loss += loss.item()*data.size(0)
            dice_score += dice_coeff(torch.sigmoid(output).cpu(), mask.cpu())*data.size(0)
    val_loss = val_loss/len(val_loader.dataset)
    dice_score = dice_score/len(val_loader.dataset)
    print('Epoch: {} \tValidation Loss: {:.6f}, Dice score: {:.6f}'.format(epoch, val_loss, dice_score))
epochs = 100
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.8)
for epoch in range(1, epochs+1):
    train_half(epoch)
    val_half(epoch)
    scheduler.step()