pytorch lstm RNN "Input and hidden tensors are not at the same device, found input tensor at cuda:0"

Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu

Prerequisites

x and y have both been moved to CUDA,

and the model has been moved to CUDA as well:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = x.to(device)  # Tensor.to is NOT in-place: assign the result back
y = y.to(device)
model.to(device)  # Module.to moves the parameters in place
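A common pitfall here: for tensors, `.to(device)` returns a new tensor rather than modifying the original, so writing `x.to(device)` without reassignment silently leaves `x` on the CPU. A minimal sketch (the sizes and the `Linear` model are illustrative, not from the original post):

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.randn(5, 20, 8)   # illustrative input: batch 5, seq 20, features 8
y = torch.randn(5, 1)

# Tensor.to returns a new tensor on the target device; the original
# tensor is untouched, so the result must be assigned back.
x = x.to(device)
y = y.to(device)

# Module.to, by contrast, moves the parameters in place, so calling
# model.to(device) without reassignment is fine.
model = torch.nn.Linear(8, 1).to(device)

print(x.device == next(model.parameters()).device)  # True
```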

The key to the problem: the error message says the hidden state is on the CPU.

For an RNN, it is enough to fix h0 (for an LSTM, c0 as well),

in the model definition, by modifying the forward method:

    def forward(self, x):
        # Initialize the hidden state and cell state on the same device as the
        # input; x.device avoids having to store a device attribute on the model
        h0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size).to(x.device)

        # Forward pass through the LSTM
        out, _ = self.lstm(x, (h0, c0))

        # x = torch.randn(5, 20, input_size)  # batch size 5, sequence length 20
        # y = torch.randn(5, output_size)     # batch size 5
        # Take the output of the last time step (drops the sequence dimension):
        # out = self.fc(out[:, -1, :])

        out = self.fc(out)
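Putting the pieces together, here is a minimal self-contained sketch of such a bidirectional LSTM model (the class name `LSTMModel` and all sizes are illustrative assumptions, not from the original post). Creating h0/c0 with `device=x.device` keeps them on whatever device the input is on, so the same code runs on both CPU and GPU:

```python
import torch
import torch.nn as nn

class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # bidirectional=True matches the num_layers * 2 factor used for h0/c0
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size * 2, output_size)

    def forward(self, x):
        # Create h0/c0 directly on the input's device, so no device
        # attribute needs to be stored on the model.
        h0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size,
                         device=x.device)
        c0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size,
                         device=x.device)
        out, _ = self.lstm(x, (h0, c0))  # out: (batch, seq, 2 * hidden_size)
        return self.fc(out[:, -1, :])    # use only the last time step

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = LSTMModel(input_size=8, hidden_size=16, num_layers=2,
                  output_size=1).to(device)
x = torch.randn(5, 20, 8).to(device)  # batch size 5, sequence length 20
print(model(x).shape)  # torch.Size([5, 1])
```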
