YOLOv5 on Windows 10: building a custom dataset and the pitfalls along the way


1. Download yolov5 from GitHub.
2. Environment: Win10, Visual Studio 2015, Anaconda 2.7, Python 3.7, CUDA 10.2, PyTorch 1.5 (GPU).
3. Switch the package mirrors to the Tsinghua mirror.
4. Install PyTorch.
5. Install requirements.txt: this fails. Fix:

git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
Replace the original pycocotools entry in requirements.txt with the line above.
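Alternatively (not part of the original post, just a common equivalent), the same Windows-friendly build of pycocotools can be installed directly from the command line without editing requirements.txt; this assumes Git and the Visual C++ build tools are available in the activated conda environment:

pip install "git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI"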

6. Download a pretrained model, YOLOv5l.pt.
7. Run the detect script directly: it errors out. Fix:

1. python detect.py --source ./inference/images/bus.jpg --weights ./weights/yolov5s.pt --conf 0.4 --device cuda:0
2. This fails with an error: CUDA not found.
3. Open torch_utils.py:

def select_device(device='', apex=False, batch_size=None):
    # device = 'cpu' or '0' or '0,1,2,3'
    cpu_request = device.lower() == 'cpu'
    if device and not cpu_request:  # if device requested other than 'cpu'
        os.environ['CUDA_VISIBLE_DEVICES'] = device  # set environment variable
        assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device  # check availablity
    ...

Add one statement:

def select_device(device='', apex=False, batch_size=None):
    # device = 'cpu' or '0' or '0,1,2,3'
    
    
    # print whether PyTorch can see the GPU; after adding this line the device is found
    print(torch.cuda.is_available())


    cpu_request = device.lower() == 'cpu'
    if device and not cpu_request:  # if device requested other than 'cpu'
        os.environ['CUDA_VISIBLE_DEVICES'] = device  # set environment variable
        assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device  # check availablity

With this change, CUDA is found.
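Independently of that edit, it is worth confirming that PyTorch itself sees the GPU in this environment. A minimal check script (a sketch, not part of the original post):

import torch

# Report the PyTorch version, whether CUDA is visible, and which GPU it sees
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))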

The detection then runs and produces results.

Next: build a small custom dataset, then train and test on it.

0. Download COCO 2017 and manually pick 40 images together with their label files.

1. Directory layout:

## The dataset directory sits parallel to the code directory (see the layout sketch below).
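A plausible layout, inferred from the relative paths in the yaml file and the training log below (the lee directory and the train/valid/test split come from the yaml; everything else is an assumption):

D:\yolo
├── yolov5-master          # the cloned code
└── lee                    # the hand-picked COCO subset
    ├── train
    │   ├── images         # xxx.jpg
    │   └── labels         # xxx.txt
    ├── valid
    │   ├── images
    │   └── labels
    └── test
        └── images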

The YOLO format pairs each xxx.jpg with a matching xxx.txt, two files per sample.
Each txt file stores the annotated boxes for the objects in the image (person, dog, motorcycle, and so on) together with the index of each object's class; an example follows below.
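For reference, each line of a YOLO label file is "class_index x_center y_center width height", with all four box values normalized to the image size. A minimal sketch of converting a COCO-style pixel box into such a line (the sample box and image size are made up for illustration):

# Convert a COCO-style box [x_min, y_min, width, height] in pixels
# to a YOLO label line: "class x_center y_center width height", all normalized.
def coco_box_to_yolo_line(class_index, box, img_w, img_h):
    x_min, y_min, w, h = box
    x_center = (x_min + w / 2) / img_w
    y_center = (y_min + h / 2) / img_h
    return f"{class_index} {x_center:.6f} {y_center:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Example: a person (class 0) occupying a 200x300 pixel box in a 640x480 image
print(coco_box_to_yolo_line(0, [100, 50, 200, 300], 640, 480))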

2. Write a custom yaml file:

train: ../lee/train/images
val: ../lee/valid/images
test: ../lee/test/images

# number of classes
nc: 80

# class names
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
        'hair drier', 'toothbrush']

Training command:
python train.py --img 640 --batch 16 --epochs 10 --data ./data/lee.yaml --cfg ./models/yolov5s.yaml --device cuda:0
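Note that the Namespace printout below shows weights='', i.e. this command trains from scratch. To fine-tune from the pretrained checkpoint downloaded earlier instead, the same command can presumably be given a weights flag, for example:

python train.py --img 640 --batch 16 --epochs 10 --data ./data/lee.yaml --cfg ./models/yolov5s.yaml --weights ./weights/yolov5s.pt --device cuda:0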

The output looks like this:

(yolov5_py37_torch1.5) D:\yolo\yolov5-master>python train.py --img 640 --batch 16 --epochs 10 --data ./data/lee.yaml --cfg ./models/yolov5s.yaml  --device cuda:0
Namespace(batch_size=16, bucket='', cache_images=False, cfg='./models/yolov5s.yaml', data='./data/lee.yaml', device='cuda:0', epochs=10, evolve=False, hyp='', img_size=[640], multi_scale=False, name='', noautoanchor=False, nosave=False, notest=False, rect=False, resume=False, single_cls=False, weights='')
True
Using CUDA Apex device0 _CudaDeviceProperties(name='GeForce RTX 2060 SUPER', total_memory=8192MB)
Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/
Hyperparameters {'optimizer': 'SGD', 'lr0': 0.01, 'momentum': 0.937, 'weight_decay': 0.0005, 'giou': 0.05, 'cls': 0.58, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.014, 'hsv_s': 0.68, 'hsv_v': 0.36, 'degrees': 0.0, 'translate': 0.0, 'scale': 0.5, 'shear': 0.0}

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     19904  models.common.BottleneckCSP             [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  1    161152  models.common.BottleneckCSP             [128, 128, 3]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  1    641792  models.common.BottleneckCSP             [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]
  9                -1  1   1248768  models.common.BottleneckCSP             [512, 512, 1, False]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    378624  models.common.BottleneckCSP             [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     95104  models.common.BottleneckCSP             [256, 128, 1, False]
 18                -1  1     32895  torch.nn.modules.conv.Conv2d            [128, 255, 1, 1]
 19                -2  1    147712  models.common.Conv                      [128, 128, 3, 2]
 20          [-1, 14]  1         0  models.common.Concat                    [1]
 21                -1  1    313088  models.common.BottleneckCSP             [256, 256, 1, False]
 22                -1  1     65535  torch.nn.modules.conv.Conv2d            [256, 255, 1, 1]
 23                -2  1    590336  models.common.Conv                      [256, 256, 3, 2]
 24          [-1, 10]  1         0  models.common.Concat                    [1]
 25                -1  1   1248768  models.common.BottleneckCSP             [512, 512, 1, False]
 26                -1  1    130815  torch.nn.modules.conv.Conv2d            [512, 255, 1, 1]
 27      [-1, 22, 18]  1         0  models.yolo.Detect                      [80, [[116, 90, 156, 198, 373, 326], [30, 61, 62, 45, 59, 119], [10, 13, 16, 30, 33, 23]]]
Model Summary: 191 layers, 7.46816e+06 parameters, 7.46816e+06 gradients

Optimizer groups: 62 .bias, 70 conv.weight, 59 other
Scanning labels ..\lee\train\labels.cache (39 found, 0 missing, 0 empty, 0 duplicate, for 39 images): 100%| 39/39 [00:00<00:00, 19317.18it/s]
Scanning labels ..\lee\valid\labels.cache (29 found, 0 missing, 0 empty, 0 duplicate, for 29 images): 100%| 29/29 [00:00<00:00, 29281.37it/s]

Analyzing anchors... Best Possible Recall (BPR) = 0.9935
Image sizes 640 train, 640 test
Using 8 dataloader workers
Starting training for 10 epochs...

     Epoch   gpu_mem      GIoU       obj       cls     total   targets  img_size
       0/9     2.93G    0.1186   0.08896    0.1206    0.3281       242       640:  33%| 1/3
       0/9     2.95G    0.1167   0.08447    0.1214    0.3225       161       640:  67%| 2/3
       0/9     2.95G    0.1169   0.08754     0.121    0.3255       105       640: 100%| 3/3 [00:04<00:00,  1.61s/it]

Note: in the downloaded dataset, a few images have broken txt label files, which makes training fail,
for example: KeyError: '…\lee\train\images\000000000009.jpg'
Simply delete the offending image and its txt file (a small cleanup sketch follows below).
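A helper for doing that cleanup in bulk (a sketch, not from the original post; the paths match the layout above, and "broken" here just means a label file that is missing or has a line that does not parse as five numbers):

import os

image_dir = os.path.join("..", "lee", "train", "images")
label_dir = os.path.join("..", "lee", "train", "labels")

def label_is_valid(path):
    # A label file is considered valid if every non-empty line parses as
    # "class x_center y_center width height" (five numbers).
    if not os.path.isfile(path):
        return False
    try:
        with open(path) as f:
            return all(len([float(v) for v in line.split()]) == 5
                       for line in f if line.strip())
    except ValueError:
        return False

for name in os.listdir(image_dir):
    stem, _ = os.path.splitext(name)
    label_path = os.path.join(label_dir, stem + ".txt")
    if not label_is_valid(label_path):
        print("removing", name)
        os.remove(os.path.join(image_dir, name))
        if os.path.isfile(label_path):
            os.remove(label_path)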

Afterwards, a new xxx.pt weight file is generated in the weights folder.

On Win10 this step runs into trouble, because the code was written with Linux directory paths in mind.

Open train.py:

'runs/exp'  # this is a Linux-style directory path and needs to be changed for Win10
    # Train
    if not opt.evolve:
        tb_writer = SummaryWriter(log_dir=increment_dir('runs/exp', opt.name))
        print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
        train(hyp)

Change it to:

    # Train
    if not opt.evolve:
        tb_writer = SummaryWriter(log_dir=increment_dir('..\\yolo\\yolov5-master\\runs\\exp', opt.name))
        print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
        train(hyp)
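As an aside (not from the original post), hard-coding the absolute D:\ path ties train.py to this one machine; a sketch of a more portable tweak is to build the log directory relative to train.py with os.path.join, reusing the SummaryWriter and increment_dir shown above (this assumes os is imported at the top of train.py):

    # Train
    if not opt.evolve:
        # resolve runs/exp next to train.py so the path separator is correct on any OS
        log_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'runs', 'exp')
        tb_writer = SummaryWriter(log_dir=increment_dir(log_dir, opt.name))
        print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
        train(hyp)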

Finally, once training completes, run detection on the test images with your own weights.
Command:
python detect.py --source ../lee/test/images --weights ./weights/myweights.pt

(yolov5_py37_torch1.5) D:\yolo\yolov5-master>python detect.py --source ../lee/test/images   --device cuda:0 --weights ./weights/yolov5s.pt
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.4, device='cuda:0', img_size=640, iou_thres=0.5, output='inference/output', save_txt=False, source='../lee/test/images', update=False, view_img=False, weights=['./weights/myweights.pt'])
True
Using CUDA device0 _CudaDeviceProperties(name='GeForce RTX 2060 SUPER', total_memory=8192MB)

Fusing layers... Model Summary: 140 layers, 7.45958e+06 parameters, 7.45958e+06 gradients
image 1/29 D:\yolo\lee\test\images\000000000389.jpg: 512x640 12 persons, 2 ties, Done. (0.034s)
image 2/29 D:\yolo\lee\test\images\000000000394.jpg: 640x640 1 frisbees, Done. (0.017s)
image 3/29 D:\yolo\lee\test\images\000000000395.jpg: 640x640 7 persons, 1 cell phones, Done. (0.015s)
image 4/29 D:\yolo\lee\test\images\000000000397.jpg: 512x640 1 persons, 1 pizzas, 1 chairs, Done. (0.017s)
image 5/29 D:\yolo\lee\test\images\000000000400.jpg: 640x640 1 dogs, Done. (0.015s)
image 6/29 D:\yolo\lee\test\images\000000000404.jpg: 640x448 1 boats, Done. (0.017s)
image 7/29 D:\yolo\lee\test\images\000000000415.jpg: 640x384 1 persons, 1 tennis rackets, Done. (0.016s)
image 8/29 D:\yolo\lee\test\images\000000000419.jpg: 512x640 2 persons, 2 cars, 1 tennis rackets, Done. (0.017s)
image 9/29 D:\yolo\lee\test\images\000000000428.jpg: 384x640 1 persons, 1 cakes, Done. (0.017s)
image 10/29 D:\yolo\lee\test\images\000000000431.jpg: 448x640 1 persons, 1 sports balls, 1 tennis rackets, Done. (0.016s)
image 11/29 D:\yolo\lee\test\images\000000000436.jpg: 640x448 1 persons, 1 bottles, 1 donuts, Done. (0.016s)
image 12/29 D:\yolo\lee\test\images\000000000438.jpg: 512x640 17 donuts, Done. (0.016s)
image 13/29 D:\yolo\lee\test\images\000000000443.jpg: 512x640 1 remotes, Done. (0.016s)
image 14/29 D:\yolo\lee\test\images\000000000446.jpg: 640x512 1 persons, 1 couchs, 1 potted plants, Done. (0.016s)
image 15/29 D:\yolo\lee\test\images\000000000450.jpg: 512x640 1 wine glasss, 3 cups, 1 pizzas, Done. (0.017s)
image 16/29 D:\yolo\lee\test\images\000000000459.jpg: 640x576 1 persons, 1 ties, Done. (0.018s)
image 17/29 D:\yolo\lee\test\images\000000000471.jpg: 448x640 1 buss, Done. (0.016s)
image 18/29 D:\yolo\lee\test\images\000000000472.jpg: 256x640 1 airplanes, Done. (0.016s)
image 19/29 D:\yolo\lee\test\images\000000000474.jpg: 640x448 1 persons, 1 baseball gloves, Done. (0.016s)
image 20/29 D:\yolo\lee\test\images\000000000486.jpg: 448x640 1 bowls, 1 refrigerators, Done. (0.016s)
image 21/29 D:\yolo\lee\test\images\000000000488.jpg: 448x640 4 persons, 1 sports balls, 3 baseball gloves, Done. (0.015s)
image 22/29 D:\yolo\lee\test\images\000000000490.jpg: 640x640 1 dogs, 1 skateboards, 1 chairs, Done. (0.015s)
image 23/29 D:\yolo\lee\test\images\000000000491.jpg: 448x640 1 birds, 1 teddy bears, Done. (0.016s)
image 24/29 D:\yolo\lee\test\images\000000000502.jpg: 448x640 1 bears, Done. (0.015s)
image 25/29 D:\yolo\lee\test\images\000000000510.jpg: 512x640 1 persons, 1 benchs, Done. (0.016s)
image 26/29 D:\yolo\lee\test\images\000000000514.jpg: 640x384 1 beds, Done. (0.016s)
image 27/29 D:\yolo\lee\test\images\000000000520.jpg: 512x640 2 persons, 7 birds, Done. (0.017s)
image 28/29 D:\yolo\lee\test\images\000000000529.jpg: 640x448 1 persons, 1 motorcycles, Done. (0.015s)
image 29/29 D:\yolo\lee\test\images\000000000531.jpg: 512x640 9 persons, 1 tennis rackets, Done. (0.016s)
Results saved to D:\yolo\yolov5-master\inference/output
Done. (1.946s)

The annotated images can be viewed in …\yolov5-master\inference\output.

 