Datawhale Beginner CV Competition - Task05: A Beginner's Road to Parameter Tuning


This is the final learning-task check-in. As a deep learning beginner, here is an exploratory summary of this introductory CV competition: the pitfalls I ran into along the way, plus an improved version of the baseline that scores 0.7687 online.

I. GPU Environment Setup

The most important step when installing the GPU version of PyTorch is matching the CUDA driver version. The version I installed earlier never matched my machine and I wasted a lot of time on it; once I picked the right version it installed on the first try.
How to check which CUDA driver version your graphics card supports:
1. Open the Control Panel and check the graphics card type.
My desktop has an NVIDIA card, so it can run the GPU build, but my MateBook X laptop only has Intel graphics, so the GPU version cannot be installed there.

2. Open the NVIDIA Control Panel.

3. Go to System Information and check the NVCUDA.DLL entry under Components.
Here you can see that my CUDA version is 10.1.

4. Get the install command from the PyTorch website and pick the one for CUDA 10.1.
Installing from the default channels can be slow; you can use the Tsinghua mirror instead, or download an offline package and install from that.

Add the Tsinghua mirror channels:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge 
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --set show_channel_urls yes

The Tsinghua mirror also hosts the Anaconda repositories and third-party channels (conda-forge, msys2, pytorch, etc.). Since we need PyTorch, add the pytorch channel as well:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/

In the PyTorch install command above, drop the "-c pytorch" part; otherwise conda will still pull from the default channel.

conda install pytorch torchvision cudatoolkit=10.1 

Once installation finishes, test whether the GPU is actually usable; with that, the environment setup is complete.
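
A minimal sanity check (a sketch, assuming the install above; device index 0 assumes a single GPU):

import torch
print(torch.__version__)                  # installed PyTorch version
if torch.cuda.is_available():             # True means the GPU build and CUDA driver match
    print(torch.cuda.get_device_name(0))  # name of the detected GPU
else:
    print('CUDA is not available')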

II. Getting the Baseline Running

Here are the problems I hit while running the baseline; all of them were eventually resolved with the help of the study group's Q&A document.

  • On Windows, num_workers needs to be changed to 0.
  • Add a line target = target.long() right after c0, c1, c2, c3, c4 = model(input).

When training on the GPU:

  • c0.data.numpy() needs to be changed to c0.data.cpu().numpy() (a short sketch of these fixes in context follows below).
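
For context, a minimal sketch of what the fixed lines look like inside the loops (the same pattern appears in the full code further down):

# in train() / validate(), right after the forward pass:
c0, c1, c2, c3, c4 = model(input)
target = target.long()               # CrossEntropyLoss expects int64 class indices

# in predict(), when the model runs on the GPU:
output = c0.data.cpu().numpy()       # copy the tensor back to the CPU before converting to numpy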

III. Tuning Tips

So far I have tried a few simple improvements:

  • Data augmentation on the training set
    The baseline already includes augmentation, but the exact parameters are worth tuning yourself. The spatial layout of the characters in each image really matters, so position-based augmentation deserves extra attention. Other augmentations can be added as well, for example a custom transform that injects Gaussian noise, which I plan to try next (a small sketch of such a transform follows this list).

  • Data augmentation on the test set
    Test-time augmentation (TTA) is a common ensembling trick: augmentation can be applied at prediction time just as in training, e.g. predicting the same sample three times and averaging the three results. In my experiments, however, applying rotation or color transforms to the test set actually lowered the online score.

  • Learning rate decay
    The initial learning rate matters a lot. I set it to 0.001; with 0.01 the validation accuracy in the first few epochs is extremely low (around 0.02) and barely improves afterwards, because the step size is too large. A decay schedule helped my online score: 0.001 for the first dozen or so epochs and 0.0001 after that (a scheduler sketch follows this list).

  • Change only one parameter at a time
    This matters a lot for beginners; otherwise you tune until you are dizzy and, when the model finally improves, you cannot tell which change deserves the credit. At the same time, fix the random seed with torch.manual_seed(0) so that single-factor experiments are comparable and random variation is ruled out.

  • Learn to read the training log
    Each log entry contains four pieces of information, for example: Epoch: 18, Train loss: 0.397826306382815, Val loss: 2.669229788303375, Val Acc 0.5954
    Don't chase an ever-lower training loss, since that leads to overfitting; what matters is the validation loss, and there is no need to train for too many epochs.
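
A minimal sketch of the Gaussian-noise idea mentioned above; the class name AddGaussianNoise and its parameters are my own choice, not part of the baseline. It would be appended to the training Compose list after transforms.ToTensor():

import torch

class AddGaussianNoise(object):
    """Add zero-mean Gaussian noise to a tensor image (use after transforms.ToTensor())."""
    def __init__(self, mean=0.0, std=0.05):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        # same-shaped random noise, scaled by std and shifted by mean
        return tensor + torch.randn_like(tensor) * self.std + self.mean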
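
For the learning-rate decay, the full code below simply rebuilds the Adam optimizer with lr=0.0001 once epoch > 12. An equivalent, slightly tidier alternative (my suggestion, not what the submitted code uses, and assuming model is the SVHN_Model1 instance defined below) is torch.optim.lr_scheduler.MultiStepLR:

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# drop the learning rate by a factor of 10 after 13 epochs (0.001 -> 0.0001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[13], gamma=0.1)

for epoch in range(20):
    # ... train and validate ...
    scheduler.step()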

Things to improve next
For a beginner there is still plenty of room for improvement...

  • Finer-grained data augmentation
  • Dropout
  • Train/validation splitting and k-fold cross-validation
  • Larger pretrained models such as resnet50 (a sketch of the backbone swap follows this list)
  • Object detection models such as YOLO
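
As a rough, untested sketch of the resnet50 swap (only the changed lines inside SVHN_Model1.__init__ are shown; the head dimension grows from 512 to 2048):

model_conv = models.resnet50(pretrained=True)
model_conv.avgpool = nn.AdaptiveAvgPool2d(1)
self.cnn = nn.Sequential(*list(model_conv.children())[:-1])
# resnet50's last convolutional stage outputs 2048 channels instead of resnet18's 512
self.fc1 = nn.Linear(2048, 11)
# ... and likewise for fc2 to fc5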

My complete code, with a few small improvements over the baseline; it scores 0.7687 online.

import os, sys, glob, shutil, json
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
import cv2

from PIL import Image
import numpy as np

from tqdm import tqdm, tqdm_notebook

import torch
torch.manual_seed(0)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True

import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data.dataset import Dataset

class SVHNDataset(Dataset):
    def __init__(self, img_path, img_label, transform=None):
        self.img_path = img_path
        self.img_label = img_label
        if transform is not None:
            self.transform = transform
        else:
            self.transform = None

    def __getitem__(self, index):
        img = Image.open(self.img_path[index]).convert('RGB')

        if self.transform is not None:
            img = self.transform(img)

        lbl = np.array(self.img_label[index], dtype=int)
        lbl = list(lbl) + (5 - len(lbl)) * [10]
        return img, torch.from_numpy(np.array(lbl[:5]))

    def __len__(self):
        return len(self.img_path)

train_path = glob.glob('input/train/*.png')
train_path.sort()
train_json = json.load(open('input/train.json'))
train_label = [train_json[x]['label'] for x in train_json]
print(len(train_path), len(train_label))

train_loader = torch.utils.data.DataLoader(
    SVHNDataset(train_path, train_label,
                transforms.Compose([
                    transforms.Resize((64, 128)),
                    transforms.RandomCrop((55, 115)),
                    transforms.ColorJitter(0.3, 0.3, 0.2),
                    transforms.RandomRotation(10),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])),
    batch_size=40,
    shuffle=True,
   # num_workers=0,
)

val_path = glob.glob('input/val/*.png')
val_path.sort()
val_json = json.load(open('input/val.json'))
val_label = [val_json[x]['label'] for x in val_json]
print(len(val_path), len(val_label))

val_loader = torch.utils.data.DataLoader(
    SVHNDataset(val_path, val_label,
                transforms.Compose([
                    transforms.Resize((64, 128)),
                    transforms.RandomCrop((55, 115)),
                    transforms.ColorJitter(0.3, 0.3, 0.2),
                    transforms.RandomRotation(10),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])),
    batch_size=40,
    shuffle=False,
    #num_workers=0,
)

class SVHN_Model1(nn.Module):
    def __init__(self):
        super(SVHN_Model1, self).__init__()

        model_conv = models.resnet18(pretrained=True)
        model_conv.avgpool = nn.AdaptiveAvgPool2d(1)
        model_conv = nn.Sequential(*list(model_conv.children())[:-1])
        self.cnn = model_conv

        self.fc1 = nn.Linear(512, 11)
        self.fc2 = nn.Linear(512, 11)
        self.fc3 = nn.Linear(512, 11)
        self.fc4 = nn.Linear(512, 11)
        self.fc5 = nn.Linear(512, 11)

    def forward(self, img):
        feat = self.cnn(img)
        # print(feat.shape)
        feat = feat.view(feat.shape[0], -1)
        c1 = self.fc1(feat)
        c2 = self.fc2(feat)
        c3 = self.fc3(feat)
        c4 = self.fc4(feat)
        c5 = self.fc5(feat)
        return c1, c2, c3, c4, c5

def train(train_loader, model, criterion, optimizer, epoch):
    # switch the model to training mode
    model.train()
    train_loss = []

    for i, (input, target) in enumerate(train_loader):
        if use_cuda:
            input = input.cuda()
            target = target.cuda()

        c0, c1, c2, c3, c4 = model(input)
        target = target.long()
        loss = criterion(c0, target[:, 0]) + \
               criterion(c1, target[:, 1]) + \
               criterion(c2, target[:, 2]) + \
               criterion(c3, target[:, 3]) + \
               criterion(c4, target[:, 4])

        # loss /= 6
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        train_loss.append(loss.item())
    return np.mean(train_loss)


def validate(val_loader, model, criterion):
    # switch the model to evaluation mode
    model.eval()
    val_loss = []

    # do not track gradients during validation
    with torch.no_grad():
        for i, (input, target) in enumerate(val_loader):
            if use_cuda:
                input = input.cuda()
                target = target.cuda()

            c0, c1, c2, c3, c4 = model(input)
            target = target.long()
            loss = criterion(c0, target[:, 0]) + \
                   criterion(c1, target[:, 1]) + \
                   criterion(c2, target[:, 2]) + \
                   criterion(c3, target[:, 3]) + \
                   criterion(c4, target[:, 4])
            # loss /= 6
            val_loss.append(loss.item())
    return np.mean(val_loss)


def predict(test_loader, model, tta=10):
    model.eval()
    test_pred_tta = None

    # number of TTA rounds
    for _ in range(tta):
        test_pred = []

        with torch.no_grad():
            for i, (input, target) in enumerate(test_loader):
                if use_cuda:
                    input = input.cuda()

                c0, c1, c2, c3, c4 = model(input)
                if use_cuda:
                    output = np.concatenate([
                        c0.data.cpu().numpy(),
                        c1.data.cpu().numpy(),
                        c2.data.cpu().numpy(),
                        c3.data.cpu().numpy(),
                        c4.data.cpu().numpy()], axis=1)
                else:
                    output = np.concatenate([
                        c0.data.numpy(),
                        c1.data.numpy(),
                        c2.data.numpy(),
                        c3.data.numpy(),
                        c4.data.numpy()], axis=1)

                test_pred.append(output)

        test_pred = np.vstack(test_pred)
        if test_pred_tta is None:
            test_pred_tta = test_pred
        else:
            test_pred_tta += test_pred

    return test_pred_tta


model = SVHN_Model1()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), 0.001)
best_loss = 1000.0

use_cuda = True
if use_cuda:
    model = model.cuda()

for epoch in range(20):
    if epoch > 12:
        # two-stage schedule: drop the learning rate to 0.0001 for the later epochs
        optimizer = torch.optim.Adam(model.parameters(), 0.0001)
    train_loss = train(train_loader, model, criterion, optimizer, epoch)
    val_loss = validate(val_loader, model, criterion)

    val_label = [''.join(map(str, x)) for x in val_loader.dataset.img_label]
    val_predict_label = predict(val_loader, model, 1)
    val_predict_label = np.vstack([
        val_predict_label[:, :11].argmax(1),
        val_predict_label[:, 11:22].argmax(1),
        val_predict_label[:, 22:33].argmax(1),
        val_predict_label[:, 33:44].argmax(1),
        val_predict_label[:, 44:55].argmax(1),
    ]).T
    val_label_pred = []
    for x in val_predict_label:
        val_label_pred.append(''.join(map(str, x[x != 10])))

    val_char_acc = np.mean(np.array(val_label_pred) == np.array(val_label))

    print('Epoch: {0}, Train loss: {1} \t Val loss: {2}'.format(epoch, train_loss, val_loss))
    print('Val Acc', val_char_acc)
    # save the model whenever the validation loss improves
    if val_loss < best_loss:
        best_loss = val_loss
        # print('Find better model in Epoch {0}, saving model.'.format(epoch))
        torch.save(model.state_dict(), './model.pt')

test_path = glob.glob('input/test_a/*.png')
test_path.sort()
#test_json = json.load(open('input/test_a.json'))
test_label = [[1]] * len(test_path)
print(len(test_path), len(test_label))

test_loader = torch.utils.data.DataLoader(
    SVHNDataset(test_path, test_label,
                transforms.Compose([
                    transforms.Resize((64, 128)),
                    #transforms.RandomCrop((55, 115)),
                    #transforms.ColorJitter(0.3, 0.3, 0.2),
                    #transforms.RandomRotation(10),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])),
    batch_size=40,
    shuffle=False,
    num_workers=0,
)

# load the best saved model
model.load_state_dict(torch.load('model.pt'))

test_predict_label = predict(test_loader, model, 1)
print(test_predict_label.shape)

test_label = [''.join(map(str, x)) for x in test_loader.dataset.img_label]
test_predict_label = np.vstack([
    test_predict_label[:, :11].argmax(1),
    test_predict_label[:, 11:22].argmax(1),
    test_predict_label[:, 22:33].argmax(1),
    test_predict_label[:, 33:44].argmax(1),
    test_predict_label[:, 44:55].argmax(1),
]).T

test_label_pred = []
for x in test_predict_label:
    test_label_pred.append(''.join(map(str, x[x != 10])))

import pandas as pd

df_submit = pd.read_csv('input/sample_submit_A.csv')
df_submit['file_code'] = test_label_pred
df_submit.to_csv('submit.csv', index=None)

Training log:

30000 30000
10000 10000
Epoch: 0, Train loss: 3.6878881301879884   Val loss: 3.8314202270507813
Val Acc 0.3009
Epoch: 1, Train loss: 2.353892293771108   Val loss: 3.3798744926452637
Val Acc 0.3678
Epoch: 2, Train loss: 1.9556938782533009   Val loss: 2.81031014251709
Val Acc 0.4612
Epoch: 3, Train loss: 1.755560370206833   Val loss: 2.6982784156799315
Val Acc 0.4744
Epoch: 4, Train loss: 1.6203363784948985   Val loss: 2.6456471338272096
Val Acc 0.5076
Epoch: 5, Train loss: 1.4930931321779888   Val loss: 2.491475617170334
Val Acc 0.5248
Epoch: 6, Train loss: 1.4172131468454996   Val loss: 2.500324214935303
Val Acc 0.5324
Epoch: 7, Train loss: 1.3275229590336481   Val loss: 2.4962129735946657
Val Acc 0.5321
Epoch: 8, Train loss: 1.269516961534818   Val loss: 2.504521679401398
Val Acc 0.5199
Epoch: 9, Train loss: 1.219906511068344   Val loss: 2.5031806914806367
Val Acc 0.5397
Epoch: 10, Train loss: 1.175540789326032   Val loss: 2.3800894572734834
Val Acc 0.5593
Epoch: 11, Train loss: 1.1057798286676408   Val loss: 2.3298161714076997
Val Acc 0.5592
Epoch: 12, Train loss: 1.0604492790699005   Val loss: 2.4316101694107055
Val Acc 0.5537
Epoch: 13, Train loss: 0.8014968274434408   Val loss: 2.161671969652176
Val Acc 0.6084
Epoch: 14, Train loss: 0.7160509318908056   Val loss: 2.1714828568696976
Val Acc 0.6105
Epoch: 15, Train loss: 0.6728036096990109   Val loss: 2.192563164949417
Val Acc 0.6107
Epoch: 16, Train loss: 0.6450036255816619   Val loss: 2.1687607110738756
Val Acc 0.6152
Epoch: 17, Train loss: 0.613456147958835   Val loss: 2.168002246141434
Val Acc 0.6192
Epoch: 18, Train loss: 0.5890960101981958   Val loss: 2.2155155210494994
Val Acc 0.6148
Epoch: 19, Train loss: 0.5747307665348053   Val loss: 2.2544857943058014
Val Acc 0.6158
40000 40000
(40000, 55)

References

An Sheng - Tianchi livestream: Model Training & Validation + Model Ensembling
Computer Vision in Practice (Street View Character Recognition)
datawhalechina
Dive into Deep Learning

Datawhale is an open-source organization focused on data science and AI. It brings together outstanding learners from universities and well-known companies across many fields, forming a team with an open-source, exploratory spirit. With the vision of "for the learner, growing together with learners", Datawhale encourages authenticity, openness and inclusiveness, mutual trust and support, daring to try and fail, and taking responsibility. Datawhale also applies open-source ideas to open-source content, open-source learning, and open-source solutions, empowering talent development and building connections between people, knowledge, enterprises, and the future.

 