
A Brief Discussion: Can Network Layers Defined in PyTorch Be Reused?


Preface: While building a network recently, I noticed that several layers had identical parameters, so instead of defining new layers I simply reused an existing one. Neither the results nor the model size changed noticeably, which raised a question: can a network layer defined in PyTorch be reused? The following experiments with a small model explore this.

I. Network structure 1 (using the same layer consecutively)

1. The network structure is as follows:

class Cnn(nn.Module):
    def __init__(self):
        super(Cnn, self).__init__()
        self.conv1 = nn.Sequential(
          nn.Conv2d(
            in_channels = 3,      # input: (N, 3, 64, 64)
            out_channels = 16,
            kernel_size = 3,
            stride = 1,
            padding = 1
          ),                      # -> (N, 16, 64, 64)
          nn.ReLU(),
          nn.MaxPool2d(kernel_size = 2)
        )                         # -> (N, 16, 32, 32)
        self.conv2 = nn.Sequential(
          nn.Conv2d(16,32,3,1,1),
          nn.ReLU(),
          nn.MaxPool2d(2)
        )                         # -> (N, 32, 16, 16)
        self.conv3 = nn.Sequential(
          nn.Conv2d(32,64,3,1,1),
          nn.ReLU(),
          nn.MaxPool2d(2)
        )                         # -> (N, 64, 8, 8)
        self.conv4 = nn.Sequential(
          nn.Conv2d(64,64,3,1,1),
          nn.BatchNorm2d(64),
          nn.ReLU(),
        )                         # -> (N, 64, 8, 8), spatial size unchanged
        self.out = nn.Linear(64*8*8, 6)   # flattened 64*8*8 features -> 6 classes
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = x.view(x.size(0),-1)
        out = self.out(x)
        return out

This defines a convolutional block conv4; the experiments below all revolve around it. Print the network structure:
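The original printout is not reproduced here. As a minimal sketch of how such a listing can be produced (the variable name net is illustrative; the "module." prefix mentioned below would only appear if the model were wrapped in nn.DataParallel):

net = Cnn()
print(net)  # each registered submodule (conv1 ... conv4, out) is listed exactly once
for name, p in net.named_parameters():
    print(name, tuple(p.shape))  # e.g. conv4.0.weight (64, 64, 3, 3), conv4.1.weight (64,)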

Just as expected, where

nn.BatchNorm2d # corresponds to module.conv4.1.* above

The activation layer (nn.ReLU) has no parameters, so it is simply skipped in the listing.

2. Modify forward():

Use the conv4 layer twice in a row:

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv4(x)   # the same conv4 module is called a second time
        x = x.view(x.size(0),-1)
        out = self.out(x)
        return out

Print the network structure:

The printed structure is the same as in 1.1; the repeated conv4 does not take effect (no extra conv4 layer appears).
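As an additional cross-check (not from the original article), the parameter count confirms that the extra call registers nothing new; Cnn refers to the model defined above:

net = Cnn()
print(sum(p.numel() for p in net.parameters()))
# the total is identical whether forward() calls conv4 once or twice,
# because conv4 is registered (and therefore counted) only once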

II. Network structure 2 (using the same layer non-consecutively)

The network additionally defines a layer conv5 identical to conv4, and conv4 is reused non-consecutively (with conv5 in between):

        self.conv4 = nn.Sequential(
          nn.Conv2d(64,64,3,1,1),
          nn.BatchNorm2d(64),
          nn.ReLU(),
        )
        self.conv5 = nn.Sequential(
          nn.Conv2d(64,64,3,1,1),
          nn.BatchNorm2d(64),
          nn.ReLU(),
        )
        self.out = nn.Linear(64*8*8, 6)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.conv4(x)   # conv4 is reused again after conv5
        x = x.view(x.size(0),-1)
        out = self.out(x)
        return out

Print the network structure:
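Again the original printout is not shown; a small added check (net2 is an illustrative name for the variant of Cnn that defines conv5):

net2 = Cnn()   # the version defined above, with conv5
print([name for name, _ in net2.named_parameters() if name.startswith('conv5')])
# ['conv5.0.weight', 'conv5.0.bias', 'conv5.1.weight', 'conv5.1.bias']
# no comparable entries are added for the second conv4 call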

Sure enough, the newly defined conv5 takes effect, while the repeated conv4 still does not.

I had originally expected a reused layer to show up as conv4.0, conv4.1, and so on, but it seems a defined layer cannot be reused in that way: the module is registered only once, so repeated calls share the same set of parameters instead of creating new layers.
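If the goal is two independent layers with identical settings (each with its own parameters and its own entry in the printed structure), the usual pattern is to construct a fresh module for each use. A sketch, not from the original article; the helper name is illustrative:

import copy
import torch.nn as nn

def make_block():
    # builds a brand-new block on every call, so each copy owns its own parameters
    return nn.Sequential(
        nn.Conv2d(64, 64, 3, 1, 1),
        nn.BatchNorm2d(64),
        nn.ReLU(),
    )

conv4 = make_block()
conv4_b = copy.deepcopy(conv4)   # same architecture, independent parameters

The VGG example below follows the same idea: every block is built from newly constructed layers.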

Pytorch_5.7 Networks Using Repeating Elements -- VGG

5.7.1 VGG blocks

VGG introduces the concept of a block as the basic building unit of the model.

import time
import torch
from torch import nn, optim
import pytorch_deep as pyd
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def vgg_block(num_convs, in_channels, out_channels):
    blk = []
    for i in range(num_convs):
        if i == 0:
            blk.append(nn.Conv2d(in_channels, out_channels,kernel_size=3, padding=1))
        else:
            blk.append(nn.Conv2d(out_channels, out_channels,kernel_size=3, padding=1))
        blk.append(nn.ReLU())
    blk.append(nn.MaxPool2d(kernel_size=2, stride=2)) # this halves the height and width
    return nn.Sequential(*blk)
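A quick sanity check of a single block (this check is an addition to the original text; the shapes are illustrative):

blk = vgg_block(2, 64, 128)
X = torch.rand(1, 64, 32, 32)
print(blk(X).shape)   # torch.Size([1, 128, 16, 16]): channels changed, height and width halved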

Implementing the VGG-11 network

8 convolutional layers and 3 fully connected layers.

def vgg_11(conv_arch, fc_features, fc_hidden_units=4096):
    net = nn.Sequential()
    # convolutional part
    for i, (num_convs, in_channels, out_channels) in enumerate(conv_arch):
        # each vgg_block halves the height and width
        net.add_module("vgg_block_" + str(i+1),vgg_block(num_convs, in_channels, out_channels))
    # fully connected part
    net.add_module("fc", nn.Sequential(
                    pyd.FlattenLayer(),
                    nn.Linear(fc_features,fc_hidden_units),
                    nn.ReLU(),
                    nn.Dropout(0.5),
                    nn.Linear(fc_hidden_units,fc_hidden_units),
                    nn.ReLU(),
                    nn.Dropout(0.5),
                    nn.Linear(fc_hidden_units, 10)
                    ))
    return net
ratio = 8
small_conv_arch = [(1, 1, 64//ratio), (1, 64//ratio, 128//ratio),(2, 128//ratio, 256//ratio),(2, 256//ratio, 512//ratio), (2, 512//ratio,512//ratio)]
fc_features = 512 * 7 * 7 # channels * height * width of the final feature map (224 / 2**5 = 7)
fc_hidden_units = 4096 # arbitrary
net = vgg_11(small_conv_arch, fc_features // ratio, fc_hidden_units //ratio)
print(net)
Sequential(
  (vgg_block_1): Sequential(
    (0): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_2): Sequential(
    (0): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_3): Sequential(
    (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_4): Sequential(
    (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_5): Sequential(
    (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (fc): Sequential(
    (0): FlattenLayer()
    (1): Linear(in_features=3136, out_features=512, bias=True)
    (2): ReLU()
    (3): Dropout(p=0.5)
    (4): Linear(in_features=512, out_features=512, bias=True)
    (5): ReLU()
    (6): Dropout(p=0.5)
    (7): Linear(in_features=512, out_features=10, bias=True)
  )
)
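Note that the fully connected part relies on pyd.FlattenLayer, a helper from the accompanying pytorch_deep utility module rather than from torch.nn. A minimal equivalent, assumed from how it is used here, would be:

class FlattenLayer(nn.Module):
    # flattens (N, C, H, W) feature maps into (N, C*H*W) vectors for the linear layers
    def forward(self, x):
        return x.view(x.shape[0], -1)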

Training

batch_size = 32
# if an "out of memory" error appears, reduce batch_size or the resize value
train_iter, test_iter = pyd.load_data_fashion_mnist(batch_size,resize=224)
lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
pyd.train_ch5(net, train_iter, test_iter, batch_size, optimizer,device, num_epochs)
training on  cuda
epoch 1, loss 0.5166, train acc 0.810, test acc 0.872,time 57.6 sec
epoch 2, loss 0.1557, train acc 0.887, test acc 0.902,time 57.9 sec
epoch 3, loss 0.0916, train acc 0.900, test acc 0.907,time 57.7 sec
epoch 4, loss 0.0609, train acc 0.912, test acc 0.915,time 57.6 sec
epoch 5, loss 0.0449, train acc 0.919, test acc 0.914,time 57.4 sec

The above is based on my personal experience; I hope it serves as a useful reference, and I hope everyone will continue to support 腳本之家.
