【MindSpore】【ops.ReduceMean】Training script fails when ops.ReduceMean is used inside a network

This post explains how to use the ops.ReduceMean operator in MindSpore correctly, using a CNN model example to show a common mistake and its fix.


Problem description:

Operator in question

The ops.ReduceMean operator

Experiment environment

  • OS: Ubuntu 20.04
  • Python version: Python 3.7.5
  • MindSpore version: mindspore-gpu 1.5.0
  • CUDA version: 11.1

Installation command

conda install mindspore-gpu=1.5.0 cudatoolkit=11.1 -c mindspore -c conda-forge
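To confirm the installation, a quick import check can be run (this check is an addition, not part of the original post):

import mindspore
print(mindspore.__version__)  # expected to print 1.5.0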

Code

1. Use the ops.ReduceMean() operator in place of a GlobalAvgPool operator

import mindspore.nn as nn
from mindspore import ops


class GlobalAvgPool(nn.Cell):
    def __init__(self):
        super(GlobalAvgPool, self).__init__()
        self.global_avg_pool = ops.ReduceMean(keep_dims=False)

    def construct(self, x):
        # Note: the primitive is called with keyword arguments here, which triggers the error below.
        return self.global_avg_pool(x=x, axis=(0, 1))

2. Put the cell implemented above into a CNN

class CNN(nn.Cell):
    def __init__(self, num_class, img_channel, img_size, actions):
        super(CNN, self).__init__()
        self.layers = list()
        last_out_channel = img_channel
        for i in range(len(actions)):
            conv = nn.Conv2d(in_channels=last_out_channel,
                             out_channels=actions[i]['filter_num'],
                             kernel_size=actions[i]['kernel_size'],
                             pad_mode='same')
            last_out_channel = actions[i]['filter_num']
            self.layers.append(conv)
            self.layers.append(nn.ReLU())
        self.sequential_cell = nn.SequentialCell(self.layers)
        self.global_avg_pool = GlobalAvgPool()
        self.flatten = nn.Flatten()
        self.dense = nn.Dense(in_channels=last_out_channel, out_channels=num_class)

    def construct(self, x):
        x = self.sequential_cell(x)
        x = self.global_avg_pool(x)
        x = self.flatten(x)
        x = self.dense(x)
        return x
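For reference, a minimal sketch of how this CNN might be instantiated and run (the actions entries mirror the dict printed in the log below; the numpy input and context setup are assumptions added for illustration):

import numpy as np
from mindspore import Tensor, context

context.set_context(mode=context.GRAPH_MODE)

# Two entries in the same layout as the actions dict printed in the log below.
actions = {
    0: {'layer_id': 0, 'filter_num': 9, 'kernel_size': 7},
    1: {'layer_id': 1, 'filter_num': 18, 'kernel_size': 5},
}
net = CNN(num_class=10, img_channel=3, img_size=28, actions=actions)
x = Tensor(np.random.randn(64, 3, 28, 28).astype(np.float32))
out = net(x)  # In graph mode this fails to compile with the TypeError shown below.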

Error when running training:

{0: {'layer_id': 0, 'filter_num': 9, 'kernel_size': 7}, 1: {'layer_id': 1, 'filter_num': 18, 'kernel_size': 5}, 2: {'layer_id': 2, 'filter_num': 18, 'kernel_size': 7}, 3: {'layer_id': 3, 'filter_num': 36, 'kernel_size': 14}, 4: {'layer_id': 4, 'filter_num': 36, 'kernel_size': 14}, 5: {'layer_id': 5, 'filter_num': 36, 'kernel_size': 5}, 6: {'layer_id': 6, 'filter_num': 36, 'kernel_size': 14}, 7: {'layer_id': 7, 'filter_num': 36, 'kernel_size': 5}, 8: {'layer_id': 8, 'filter_num': 36, 'kernel_size': 14}, 9: {'layer_id': 9, 'filter_num': 18, 'kernel_size': 7}}

CNN<

  (sequential_cell): SequentialCell<

    (0): Conv2d<input_channels=3, output_channels=9, kernel_size=(7, 7), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (1): ReLU<>

    (2): Conv2d<input_channels=9, output_channels=18, kernel_size=(5, 5), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (3): ReLU<>

    (4): Conv2d<input_channels=18, output_channels=18, kernel_size=(7, 7), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (5): ReLU<>

    (6): Conv2d<input_channels=18, output_channels=36, kernel_size=(14, 14), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (7): ReLU<>

    (8): Conv2d<input_channels=36, output_channels=36, kernel_size=(14, 14), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (9): ReLU<>

    (10): Conv2d<input_channels=36, output_channels=36, kernel_size=(5, 5), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (11): ReLU<>

    (12): Conv2d<input_channels=36, output_channels=36, kernel_size=(14, 14), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (13): ReLU<>

    (14): Conv2d<input_channels=36, output_channels=36, kernel_size=(5, 5), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (15): ReLU<>

    (16): Conv2d<input_channels=36, output_channels=36, kernel_size=(14, 14), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (17): ReLU<>

    (18): Conv2d<input_channels=36, output_channels=18, kernel_size=(7, 7), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>

    (19): ReLU<>

    >

  (global_avg_pool): GlobalAvgPool<>

  (flatten): Flatten<>

  (dense): Dense<input_channels=18, output_channels=10, has_bias=True>

  >
[EXCEPTION] ANALYZER(117475,7f5958cab740,python):2021-12-19-15:21:05.166.863 [mindspore/ccsrc/pipeline/jit/static_analysis/prim.cc:504] ConvertAbstractToPython] Unsupported parameter type for python primitive.AbstractKeywordArg(key : xvalue : AbstractTensor(shape: (64, 18, 28, 28), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x556598062db0, value: AnyValue))

Traceback (most recent call last):

  File "/home/yons/Desktop/fnas/QHW-NAS/NAS/train.py", line 49, in <module>

    train()

  File "/home/yons/Desktop/fnas/QHW-NAS/NAS/train.py", line 34, in train

    callbacks=[summary_collector], dataset_sink_mode=False)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/train/model.py", line 726, in train

    sink_size=sink_size)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/train/model.py", line 498, in _train

    self._train_process(epoch, train_dataset, list_callback, cb_params)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/train/model.py", line 626, in _train_process

    outputs = self._train_network(*next_element)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/nn/cell.py", line 404, in __call__

    out = self.compile_and_run(*inputs)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/nn/cell.py", line 682, in compile_and_run

    self.compile(*inputs)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/nn/cell.py", line 669, in compile

    _cell_graph_executor.compile(self, *inputs, phase=self.phase, auto_parallel_mode=self._auto_parallel_mode)

  File "/home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/common/api.py", line 548, in compile

    result = self._graph_executor.compile(obj, args_list, phase, use_vm, self.queue_name)

TypeError: mindspore/ccsrc/pipeline/jit/static_analysis/prim.cc:504 ConvertAbstractToPython] Unsupported parameter type for python primitive.AbstractKeywordArg(key : xvalue : AbstractTensor(shape: (64, 18, 28, 28), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x556598062db0, value: AnyValue))

The function call stack (See file '/home/yons/Desktop/fnas/QHW-NAS/NAS/rank_0/om/analyze_fail.dat' for more details):
# 0 In file /home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/nn/wrap/cell_wrapper.py(353)
        loss = self.network(*inputs)
               ^
# 1 In file /home/yons/anaconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/nn/wrap/cell_wrapper.py(110)
        out = self._backbone(data)
              ^
# 2 In file /home/yons/Desktop/fnas/QHW-NAS/NAS/neural_network/model.py(34)
        x = self.global_avg_pool(x)
            ^
# 3 In file /home/yons/Desktop/fnas/QHW-NAS/NAS/neural_network/model.py(11)
        return self.global_avg_pool(x=x, axis=(0, 1))
               ^





Process finished with exit code 1

Answer:

ops.ReduceMean is being called incorrectly: in graph mode a Primitive cannot be called with keyword arguments, so

self.global_avg_pool(x=x, axis=(0, 1))

should be changed to:

self.global_avg_pool(x, (0, 1))
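For completeness, here is a minimal sketch of the corrected GlobalAvgPool cell under that fix (the class name and axis=(0, 1) follow the post; as a side note, a conventional global average pooling over NCHW feature maps would usually reduce over the spatial axes (2, 3) instead, so that Flatten + Dense(in_channels=last_out_channel) receives a (batch, channel) tensor):

import mindspore.nn as nn
from mindspore import ops


class GlobalAvgPool(nn.Cell):
    def __init__(self):
        super(GlobalAvgPool, self).__init__()
        self.global_avg_pool = ops.ReduceMean(keep_dims=False)

    def construct(self, x):
        # Call the primitive with positional arguments only: (input, axis).
        # axis=(0, 1) follows the original fix; (2, 3) would be the usual choice
        # for averaging the spatial dims of an NCHW tensor.
        return self.global_avg_pool(x, (0, 1))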