Image Recognition: The ResNet-18 Network Structure, Illustrated and Explained

Articles and video resources are updated across multiple platforms.

WeChat Official Account | Zhihu | Bilibili | Toutiao: AI研习图书馆

Sharing knowledge and resources on deep learning, big data, and IT programming. Follow along, and let's improve together!

I. Introduction

The ResNet family is one of the best-known lines of image-classification networks, and it has aged remarkably well: it still carries broad research and application value today, has been improved in countless ways across the industry, and is routinely used for image-recognition tasks.

This post focuses on the ResNet-18 structure; the deeper variants follow the same pattern.

In "ResNet-18", the number denotes the network's depth. Does that mean the network has exactly 18 layers of every kind? Not quite: the 18 counts only the layers that carry weights, namely the convolutional and fully connected layers; pooling and BN layers are excluded. Concretely, that is 17 convolutional layers on the main path plus 1 fully connected layer (the 1x1 projection convolutions on the shortcut branches are conventionally not counted). A quick way to verify this is sketched below.
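As a sanity check (a minimal sketch assuming torchvision is available; this is not from the original post), you can count the weight-carrying modules of torchvision's ResNet-18. The count comes out to 20 Conv2d plus 1 Linear, because torchvision also registers the three 1x1 projection convolutions on the downsampling shortcuts; the nominal "18" is the 17 main-path convolutions plus the final fully connected layer:

```python
import torch.nn as nn
import torchvision.models as models

# Randomly initialized weights are fine; we only count modules.
model = models.resnet18()

convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
fcs = [m for m in model.modules() if isinstance(m, nn.Linear)]

# 20 Conv2d = 17 main-path convs + 3 shortcut projections; 1 Linear (fc).
print(len(convs), len(fcs))  # -> 20 1
```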

II. Network Structure

This section walks through the ResNet-18 structure as defined for the Caffe framework.

1. Network Parameters

[Figure: table of ResNet-18 layer parameters]

2. Network Diagram

The structure diagram below was rendered from the contents of the Caffe train.prototxt file:

[Figure: ResNet-18 structure diagram rendered from train.prototxt, shown in four parts]
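If you want to render such a diagram yourself, Caffe ships a drawing script; a minimal invocation (assuming a Caffe source checkout with pydot and graphviz installed) looks like this:

```bash
# Run from the Caffe source tree; --rankdir TB lays the graph out top-to-bottom.
python python/draw_net.py train.prototxt resnet18.png --rankdir TB
```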
Contents of the Caffe train.prototxt file:

```protobuf
name: "ResNet-18"

layer {
  name: "data"
    type: "Data"
    top: "data"
    top: "label"
    include {
    phase: TRAIN
      }
  transform_param {
    mirror: true
      crop_size: 224
      mean_file: "/home/vgenty/git/caffe/build/tools/ub_seven_class_train_mean.binary"
      }
  data_param {
    source: "/home/vgenty/git/caffe/build/tools/ub_seven_class_train.db"
      batch_size: 8
      backend: LMDB
      }

}

layer {
  name: "data"
    type: "Data"
    top: "data"
    top: "label"
    include {
    phase: TEST
      }
  transform_param {
    mirror: false
      crop_size: 224
      mean_file: "/home/vgenty/git/caffe/build/tools/ub_seven_class_valid_mean.binary"
      }
  data_param {
    source: "/home/vgenty/git/caffe/build/tools/ub_seven_class_valid.db"
      batch_size: 8
      backend: LMDB
      }
}

layer {
  bottom: "data"
    top: "conv1"
    name: "conv1"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 7
      pad: 3
      stride: 2
      }
}

layer {
  bottom: "conv1"
    top: "conv1"
    name: "bn_conv1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "conv1"
    top: "conv1"
    name: "scale_conv1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "conv1"
    top: "conv1"
    name: "conv1_relu"
    type: "ReLU"
    }

layer {
  bottom: "conv1"
    top: "pool1"
    name: "pool1"
    type: "Pooling"
    pooling_param {
    kernel_size: 3
      stride: 2
      pool: MAX
      }
}
##########################
######first shortcut######
##########################
layer {
  bottom: "pool1"
    top: "res2a_branch1"
    name: "res2a_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 1
      pad: 0
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res2a_branch1"
    top: "res2a_branch1"
    name: "bn2a_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res2a_branch1"
    top: "res2a_branch1"
    name: "scale2a_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "pool1"
    top: "res2a_branch2a"
    name: "res2a_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res2a_branch2a"
    top: "res2a_branch2a"
    name: "bn2a_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res2a_branch2a"
    top: "res2a_branch2a"
    name: "scale2a_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res2a_branch2a"
    top: "res2a_branch2a"
    name: "res2a_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res2a_branch2a"
    top: "res2a_branch2b"
    name: "res2a_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res2a_branch2b"
    top: "res2a_branch2b"
    name: "bn2a_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res2a_branch2b"
    top: "res2a_branch2b"
    name: "scale2a_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
    bottom: "res2a_branch1"
    bottom: "res2a_branch2b"
    top: "res2a"
    name: "res2a"
    type: "Eltwise"
    }

layer {
  bottom: "res2a"
    top: "res2a"
    name: "res2a_relu"
    type: "ReLU"
    }

##########################
######first-2 shortcut####
##########################

layer {
  bottom: "res2a"
    top: "res2b_branch1"
    name: "res2b_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 1
      pad: 0
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res2b_branch1"
    top: "res2b_branch1"
    name: "bn2b_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res2b_branch1"
    top: "res2b_branch1"
    name: "scale2b_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}


layer {
    bottom: "res2a"
    top: "res2b_branch2a"
    name: "res2b_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res2b_branch2a"
    top: "res2b_branch2a"
    name: "bn2b_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res2b_branch2a"
    top: "res2b_branch2a"
    name: "scale2b_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res2b_branch2a"
    top: "res2b_branch2a"
    name: "res2b_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res2b_branch2a"
    top: "res2b_branch2b"
    name: "res2b_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 64
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res2b_branch2b"
    top: "res2b_branch2b"
    name: "bn2b_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res2b_branch2b"
    top: "res2b_branch2b"
    name: "scale2b_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
    bottom: "res2b_branch1"
    bottom: "res2b_branch2b"
    top: "res2b"
    name: "res2b"
    type: "Eltwise"
    }

layer {
  bottom: "res2b"
    top: "res2b"
    name: "res2b_relu"
    type: "ReLU"
    }


##########################
######second shortcut#####
##########################

layer {
  bottom: "res2b"
    top: "res3a_branch1"
    name: "res3a_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 128
      kernel_size: 1
      pad: 0
      stride: 2
      bias_term: false
      }
}

layer {
  bottom: "res3a_branch1"
    top: "res3a_branch1"
    name: "bn3a_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res3a_branch1"
    top: "res3a_branch1"
    name: "scale3a_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res2b"
    top: "res3a_branch2a"
    name: "res3a_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 128
      kernel_size: 3
      pad: 1
      stride: 2
      bias_term: false
      }
}

layer {
  bottom: "res3a_branch2a"
    top: "res3a_branch2a"
    name: "bn3a_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res3a_branch2a"
    top: "res3a_branch2a"
    name: "scale3a_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res3a_branch2a"
    top: "res3a_branch2a"
    name: "res3a_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res3a_branch2a"
    top: "res3a_branch2b"
    name: "res3a_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 128
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res3a_branch2b"
    top: "res3a_branch2b"
    name: "bn3a_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res3a_branch2b"
    top: "res3a_branch2b"
    name: "scale3a_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res3a_branch1"
    bottom: "res3a_branch2b"
    top: "res3a"
    name: "res3a"
    type: "Eltwise"
    }

layer {
  bottom: "res3a"
    top: "res3a"
    name: "res3a_relu"
    type: "ReLU"
    }


##########################
######second-2 shortcut#####
##########################

layer {
  bottom: "res3a"
    top: "res3b_branch1"
    name: "res3b_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 128
      kernel_size: 1
      pad: 0
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res3b_branch1"
    top: "res3b_branch1"
    name: "bn3b_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res3b_branch1"
    top: "res3b_branch1"
    name: "scale3b_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}


layer {
  bottom: "res3a"
    top: "res3b_branch2a"
    name: "res3b_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 128
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res3b_branch2a"
    top: "res3b_branch2a"
    name: "bn3b_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res3b_branch2a"
    top: "res3b_branch2a"
    name: "scale3b_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res3b_branch2a"
    top: "res3b_branch2a"
    name: "res3b_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res3b_branch2a"
    top: "res3b_branch2b"
    name: "res3b_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 128
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res3b_branch2b"
    top: "res3b_branch2b"
    name: "bn3b_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res3b_branch2b"
    top: "res3b_branch2b"
    name: "scale3b_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res3b_branch1"
    bottom: "res3b_branch2b"
    top: "res3b"
    name: "res3b"
    type: "Eltwise"
    }

layer {
  bottom: "res3b"
    top: "res3b"
    name: "res3b_relu"
    type: "ReLU"
    }

##########################
######third shortcut#####
##########################

layer {
  bottom: "res3b"
    top: "res4a_branch1"
    name: "res4a_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 256
      kernel_size: 1
      pad: 0
      stride: 2
      bias_term: false
      }
}

layer {
  bottom: "res4a_branch1"
    top: "res4a_branch1"
    name: "bn4a_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res4a_branch1"
    top: "res4a_branch1"
    name: "scale4a_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res3b"
    top: "res4a_branch2a"
    name: "res4a_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 256
      kernel_size: 3
      pad: 1
      stride: 2
      bias_term: false
      }
}

layer {
  bottom: "res4a_branch2a"
    top: "res4a_branch2a"
    name: "bn4a_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res4a_branch2a"
    top: "res4a_branch2a"
    name: "scale4a_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res4a_branch2a"
    top: "res4a_branch2a"
    name: "res4a_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res4a_branch2a"
    top: "res4a_branch2b"
    name: "res4a_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 256
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res4a_branch2b"
    top: "res4a_branch2b"
    name: "bn4a_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res4a_branch2b"
    top: "res4a_branch2b"
    name: "scale4a_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res4a_branch1"
    bottom: "res4a_branch2b"
    top: "res4a"
    name: "res4a"
    type: "Eltwise"
    }

layer {
  bottom: "res4a"
    top: "res4a"
    name: "res4a_relu"
    type: "ReLU"
    }



###########################
######third-2 shortcut#####
##########################

layer {
  bottom: "res4a"
    top: "res4b_branch1"
    name: "res4b_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 256
      kernel_size: 1
      pad: 0
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res4b_branch1"
    top: "res4b_branch1"
    name: "bn4b_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res4b_branch1"
    top: "res4b_branch1"
    name: "scale4b_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}


layer {
  bottom: "res4a"
    top: "res4b_branch2a"
    name: "res4b_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 256
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res4b_branch2a"
    top: "res4b_branch2a"
    name: "bn4b_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res4b_branch2a"
    top: "res4b_branch2a"
    name: "scale4b_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res4b_branch2a"
    top: "res4b_branch2a"
    name: "res4b_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res4b_branch2a"
    top: "res4b_branch2b"
    name: "res4b_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 256
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res4b_branch2b"
    top: "res4b_branch2b"
    name: "bn4b_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res4b_branch2b"
    top: "res4b_branch2b"
    name: "scale4b_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res4b_branch1"
    bottom: "res4b_branch2b"
    top: "res4b"
    name: "res4b"
    type: "Eltwise"
    }

layer {
  bottom: "res4b"
    top: "res4b"
    name: "res4b_relu"
    type: "ReLU"
    }
##########################
######forth shortcut#####
##########################

layer {
  bottom: "res4b"
    top: "res5a_branch1"
    name: "res5a_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 512
      kernel_size: 1
      pad: 0
      stride: 2
      bias_term: false
      }
}

layer {
  bottom: "res5a_branch1"
    top: "res5a_branch1"
    name: "bn5a_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res5a_branch1"
    top: "res5a_branch1"
    name: "scale5a_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res4b"
    top: "res5a_branch2a"
    name: "res5a_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 512
      kernel_size: 3
      pad: 1
      stride: 2
      bias_term: false
      }
}

layer {
  bottom: "res5a_branch2a"
    top: "res5a_branch2a"
    name: "bn5a_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res5a_branch2a"
    top: "res5a_branch2a"
    name: "scale5a_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res5a_branch2a"
    top: "res5a_branch2a"
    name: "res5a_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res5a_branch2a"
    top: "res5a_branch2b"
    name: "res5a_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 512
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res5a_branch2b"
    top: "res5a_branch2b"
    name: "bn5a_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res5a_branch2b"
    top: "res5a_branch2b"
    name: "scale5a_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}


layer {
  bottom: "res5a_branch1"
    bottom: "res5a_branch2b"
    top: "res5a"
    name: "res5a"
    type: "Eltwise"
    }

layer {
  bottom: "res5a"
    top: "res5a"
    name: "res5a_relu"
    type: "ReLU"
    }


##########################
######forth-2 shortcut#####
##########################

layer {
  bottom: "res5a"
    top: "res5b_branch1"
    name: "res5b_branch1"
    type: "Convolution"
    convolution_param {
    num_output: 512
      kernel_size: 1
      pad: 0
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res5b_branch1"
    top: "res5b_branch1"
    name: "bn5b_branch1"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res5b_branch1"
    top: "res5b_branch1"
    name: "scale5b_branch1"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}


layer {
  bottom: "res5a"
    top: "res5b_branch2a"
    name: "res5b_branch2a"
    type: "Convolution"
    convolution_param {
    num_output: 512
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res5b_branch2a"
    top: "res5b_branch2a"
    name: "bn5b_branch2a"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res5b_branch2a"
    top: "res5b_branch2a"
    name: "scale5b_branch2a"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res5b_branch2a"
    top: "res5b_branch2a"
    name: "res5b_branch2a_relu"
    type: "ReLU"
    }

layer {
  bottom: "res5b_branch2a"
    top: "res5b_branch2b"
    name: "res5b_branch2b"
    type: "Convolution"
    convolution_param {
    num_output: 512
      kernel_size: 3
      pad: 1
      stride: 1
      bias_term: false
      }
}

layer {
  bottom: "res5b_branch2b"
    top: "res5b_branch2b"
    name: "bn5b_branch2b"
    type: "BatchNorm"
    batch_norm_param {
    use_global_stats: true
      }
}

layer {
  bottom: "res5b_branch2b"
    top: "res5b_branch2b"
    name: "scale5b_branch2b"
    type: "Scale"
    scale_param {
    bias_term: true
      }
}

layer {
  bottom: "res5b_branch1"
    bottom: "res5b_branch2b"
    top: "res5b"
    name: "res5b"
    type: "Eltwise"
    }

layer {
  bottom: "res5b"
    top: "res5b"
    name: "res5b_relu"
    type: "ReLU"
    }

layer {
  bottom: "res5b"
    top: "pool5"
    name: "pool5"
    type: "Pooling"
    pooling_param {
    kernel_size: 7
      stride: 1
      pool: AVE
      }
}

layer {
  bottom: "pool5"
    top: "fc7"
    name: "fc7"
    type: "InnerProduct"
    inner_product_param {
    num_output: 7
      }
}

#layer {
#  bottom: "fc7"
#    top: "prob"
#    name: "prob"
#    type: "Softmax"
#    }

  layer {
    name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "fc7"
      bottom: "label"
      top: "loss"
      }

layer {
  name: "accuracy"
    type: "Accuracy"
    bottom: "fc7"
    bottom: "label"
    top: "accuracy"
    include {
    phase: TEST
      }
}
```

For detailed parameter settings, refer to the layer definitions in the file above. Two details of this particular prototxt are worth flagging. First, every residual block here has a 1x1 convolution on its shortcut (the branch1 layers), whereas the canonical ResNet-18 uses identity shortcuts except where the spatial size or channel count changes; the extra projections add parameters but do not change the overall topology. Second, all BatchNorm layers set use_global_stats: true, which freezes the BN statistics to their stored values; that is the right setting for finetuning from pretrained weights, but it should be false (or left to Caffe's per-phase default) when training batch norm from scratch.
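To check the output shape of every layer, the prototxt can be loaded with pycaffe and its blobs inspected. This is a minimal sketch under two assumptions: Caffe's Python bindings are installed, and the LMDB paths referenced by the Data layers exist (if they do not, swap the Data layers for an Input layer of shape 1x3x224x224 first):

```python
import caffe

caffe.set_mode_cpu()

# Loading in TEST phase instantiates the TEST Data layer, so the
# validation LMDB named in the prototxt must be readable.
net = caffe.Net('train.prototxt', caffe.TEST)

# Print each blob's name and shape, e.g. conv1 -> (8, 64, 112, 112).
for name, blob in net.blobs.items():
    print(name, blob.data.shape)
```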

III. Summary

ResNet and its variants perform very well on general image-recognition tasks. For a specific application, it is worth adapting the architecture to the scenario, for example by pruning the network, deepening it, or applying other strategies, and then validating the changes in practice. The most common first step is simply swapping the classification head, as sketched below.
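A minimal sketch of that first step in PyTorch (the 7-class head mirrors num_output: 7 in the prototxt's fc7 layer; the class count is just this file's example):

```python
import torch.nn as nn
import torchvision.models as models

# Start from ImageNet-pretrained weights.
model = models.resnet18(pretrained=True)

# Replace the 1000-class ImageNet head with a 7-class one,
# matching num_output: 7 in fc7 above.
model.fc = nn.Linear(model.fc.in_features, 7)
```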

Writing all this up is no small effort; if you spot a mistake, feel free to leave a comment.

Your support is my biggest motivation to keep creating.

Likes, follows, and comments are all welcome!

Deep learning: never a dull moment.

You can also follow my personal WeChat official account: AI研习图书馆.

### ResNet18: Deep Learning Model Architecture and Implementation

ResNet18 is a deep learning model built on the residual network (ResNet) design. By introducing skip connections, it alleviates the vanishing-gradient problem in deep neural network training, which makes very deep networks trainable and significantly improves model performance.

#### 1. Overall architecture

ResNet18 is composed of convolutional layers, batch-normalization layers, ReLU activations, and a fully connected layer. Its defining feature is the residual block: each block contains two 3x3 convolutional layers plus an identity mapping across them, and when the input and output dimensions do not match, a linear projection adjusts the shortcut to the required shape.

The main components of ResNet18:

- **Input layer**: accepts 224x224 images.
- **Convolutional layer**: the first convolution uses a 7x7 kernel, stride 2, and 64 channels.
- **Max-pooling layer**: 3x3 window, stride 2.
- **Residual blocks**: four stages, each containing multiple residual blocks:
  - Stage 1: 2 residual blocks, 64 channels.
  - Stage 2: 2 residual blocks, 128 channels.
  - Stage 3: 2 residual blocks, 256 channels.
  - Stage 4: 2 residual blocks, 512 channels.
- **Global average pooling**: reduces the feature maps to a fixed size.
- **Fully connected layer**: produces the classification output; the number of outputs depends on the task.

#### 2. Implementation

A PyTorch example of loading ResNet18:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load a pretrained ResNet18 model
model = models.resnet18(pretrained=True)

# Run the model on the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Example: build an input tensor and move it to the same device
input_tensor = torch.randn(1, 3, 224, 224).to(device)  # a single RGB image
output = model(input_tensor)
print(output)
```

The code above loads a pretrained ResNet18 and runs it on the GPU when available.

#### 3. Pretrained models

Pretrained ResNet18 weights can be obtained from official and third-party sources; in PyTorch they load directly via `torchvision.models`. Pretrained variants are also published by other open-source projects; for example, a ResNet50 IBN-A checkpoint can be downloaded from:

```plaintext
https://github.com/XingangPan/IBN-Net/releases/download/v1.0/resnet50_ibn_a-d9d0bb7b.pth
```

Note that different ResNet versions ship different pretrained weights, so choose one that matches the target task.
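To make the residual-block structure above concrete, here is a minimal PyTorch sketch of one basic block, written to mirror the branch2a/branch2b main path and branch1 shortcut in the prototxt (the class and the `downsample` argument are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convs (branch2a/branch2b) summed elementwise with a shortcut."""

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # Optional 1x1 projection (branch1 in the prototxt) for shape changes.
        self.downsample = downsample

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # Eltwise SUM followed by ReLU

# Example: the downsampling block at the start of stage 2 (res3a),
# going from 64 to 128 channels with stride 2.
block = BasicBlock(64, 128, stride=2,
                   downsample=nn.Sequential(
                       nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
                       nn.BatchNorm2d(128)))
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])
```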