Building a Transformer

Basic Steps for Building a Transformer

The Transformer is a deep-learning model built on the self-attention mechanism and widely used in natural language processing. The sections below walk through the key building blocks with PyTorch code examples.

Self-Attention

Self-attention is the core of the Transformer: it measures how strongly each element of the input sequence relates to every other element. The formula is:
$$ \text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V $$
where $Q$ is the query matrix, $K$ the key matrix, $V$ the value matrix, and $d_k$ the key dimension.
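As a quick sanity check, the formula can be evaluated directly with plain tensor operations and, on PyTorch 2.x, compared against the built-in torch.nn.functional.scaled_dot_product_attention; the shapes below are illustrative, not part of the model above.

import torch
import torch.nn.functional as F

Q = torch.randn(1, 4, 8)   # (batch, seq_len, d_k)
K = torch.randn(1, 4, 8)
V = torch.randn(1, 4, 8)

d_k = Q.shape[-1]
manual = torch.softmax(Q @ K.transpose(-2, -1) / d_k ** 0.5, dim=-1) @ V
builtin = F.scaled_dot_product_attention(Q, K, V)  # available in PyTorch >= 2.0
print(torch.allclose(manual, builtin, atol=1e-6))  # True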

import math

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, embed_size, heads):
        super(SelfAttention, self).__init__()
        self.embed_size = embed_size
        self.heads = heads
        self.head_dim = embed_size // heads

        self.values = nn.Linear(self.head_dim, self.head_dim, bias=False)
        self.keys = nn.Linear(self.head_dim, self.head_dim, bias=False)
        self.queries = nn.Linear(self.head_dim, self.head_dim, bias=False)
        self.fc_out = nn.Linear(embed_size, embed_size)

    def forward(self, values, keys, queries, mask):
        N = queries.shape[0]
        value_len, key_len, query_len = values.shape[1], keys.shape[1], queries.shape[1]

        # Split the embedding into `heads` pieces, then apply the per-head projections
        # (the original code defined these Linear layers but never applied them)
        values = self.values(values.reshape(N, value_len, self.heads, self.head_dim))
        keys = self.keys(keys.reshape(N, key_len, self.heads, self.head_dim))
        queries = self.queries(queries.reshape(N, query_len, self.heads, self.head_dim))

        # energy: (N, heads, query_len, key_len) -- raw attention scores
        energy = torch.einsum("nqhd,nkhd->nhqk", [queries, keys])
        if mask is not None:
            energy = energy.masked_fill(mask == 0, float("-1e20"))

        # Scale by sqrt(d_k), the per-head key dimension, then softmax over the key axis
        attention = torch.softmax(energy / (self.head_dim ** 0.5), dim=3)
        out = torch.einsum("nhql,nlhd->nqhd", [attention, values]).reshape(
            N, query_len, self.embed_size
        )
        return self.fc_out(out)
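A minimal shape check for the module above; the batch size, sequence length, and model width are arbitrary choices for illustration.

attn = SelfAttention(embed_size=512, heads=8)
x = torch.randn(2, 10, 512)      # (batch, seq_len, embed_size)
out = attn(x, x, x, mask=None)   # self-attention: values = keys = queries = x
print(out.shape)                 # torch.Size([2, 10, 512])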

Multi-Head Attention

Multi-head attention runs several attention heads in parallel, which increases the model's expressive power. The head splitting itself already happens inside SelfAttention above, so the block below wraps it with a residual connection, layer normalization, and dropout; taking query, key, and value as separate arguments also lets the same block serve as the decoder's cross-attention (see the usage sketch after the code).

class MultiHeadAttention(nn.Module):
    def __init__(self, embed_size, heads):
        super(MultiHeadAttention, self).__init__()
        self.attention = SelfAttention(embed_size, heads)
        self.norm = nn.LayerNorm(embed_size)
        self.dropout = nn.Dropout(0.1)

    def forward(self, query, key, value, mask):
        # Attention followed by a residual connection, layer norm, and dropout (post-norm)
        attention = self.attention(value, key, query, mask)
        x = self.dropout(self.norm(attention + query))
        return x
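With this signature, the same block covers both self-attention (query, key, and value are the same tensor) and cross-attention (queries from the decoder, keys and values from the encoder); the tensors below are placeholders for illustration.

block = MultiHeadAttention(embed_size=512, heads=8)
dec = torch.randn(2, 7, 512)    # decoder states
enc = torch.randn(2, 10, 512)   # encoder outputs
self_out = block(dec, dec, dec, mask=None)    # (2, 7, 512)
cross_out = block(dec, enc, enc, mask=None)   # (2, 7, 512)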

Feed-Forward Network

The position-wise feed-forward network further transforms the output of the attention block, again with a residual connection and layer normalization.

class FeedForward(nn.Module):
    def __init__(self, embed_size, ff_dim):
        super(FeedForward, self).__init__()
        self.ff = nn.Sequential(
            nn.Linear(embed_size, ff_dim),
            nn.ReLU(),
            nn.Linear(ff_dim, embed_size),
        )
        self.norm = nn.LayerNorm(embed_size)
        self.dropout = nn.Dropout(0.1)

    def forward(self, x):
        out = self.ff(x)
        x = self.dropout(self.norm(out + x))
        return x
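Because the block acts on each position independently, permuting the sequence dimension commutes with it; a quick check (with dropout disabled via eval()):

ff = FeedForward(embed_size=512, ff_dim=2048).eval()
x = torch.randn(2, 10, 512)
perm = torch.randperm(10)
with torch.no_grad():
    print(torch.allclose(ff(x)[:, perm], ff(x[:, perm]), atol=1e-5))  # True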

Encoder Layer

An encoder layer stacks multi-head self-attention and the feed-forward network.

class EncoderLayer(nn.Module):
    def __init__(self, embed_size, heads, ff_dim):
        super(EncoderLayer, self).__init__()
        self.attention = MultiHeadAttention(embed_size, heads)
        self.ff = FeedForward(embed_size, ff_dim)

    def forward(self, x, mask):
        x = self.attention(x, x, x, mask)   # self-attention: query = key = value = x
        x = self.ff(x)
        return x
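The mask is expected to broadcast to the attention-score shape (N, heads, query_len, key_len), with 0 marking positions to ignore. A padding-mask example with illustrative shapes:

layer = EncoderLayer(embed_size=512, heads=8, ff_dim=2048)
x = torch.randn(2, 10, 512)
src_mask = torch.ones(2, 1, 1, 10)
src_mask[0, :, :, 7:] = 0          # last three positions of sample 0 are padding
out = layer(x, src_mask)
print(out.shape)                   # torch.Size([2, 10, 512])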

Decoder Layer

A decoder layer contains masked multi-head self-attention, encoder-decoder (cross) attention, and the feed-forward network.

class DecoderLayer(nn.Module):
    def __init__(self, embed_size, heads, ff_dim):
        super(DecoderLayer, self).__init__()
        self.masked_attention = MultiHeadAttention(embed_size, heads)
        self.attention = MultiHeadAttention(embed_size, heads)
        self.ff = FeedForward(embed_size, ff_dim)

    def forward(self, x, enc_out, src_mask, trg_mask):
        # Masked self-attention over the target sequence
        x = self.masked_attention(x, x, x, trg_mask)
        # Cross-attention: queries from the decoder, keys and values from the encoder
        x = self.attention(x, enc_out, enc_out, src_mask)
        x = self.ff(x)
        return x
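A causal (lower-triangular) mask for the decoder self-attention, again broadcast to (N, heads, query_len, key_len); the sizes are illustrative.

dec_layer = DecoderLayer(embed_size=512, heads=8, ff_dim=2048)
trg = torch.randn(2, 7, 512)
enc_out = torch.randn(2, 10, 512)
trg_mask = torch.tril(torch.ones(7, 7)).expand(2, 1, 7, 7)
out = dec_layer(trg, enc_out, src_mask=None, trg_mask=trg_mask)
print(out.shape)                   # torch.Size([2, 7, 512])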

The Complete Transformer

Finally, the encoder and decoder stacks are combined into the full Transformer model: it embeds source and target tokens, adds positional encodings, runs the two stacks, and projects the decoder output onto the target vocabulary.

class Transformer(nn.Module):
    def __init__(
        self,
        src_vocab_size,
        trg_vocab_size,
        embed_size=512,
        num_layers=6,
        heads=8,
        ff_dim=2048,
        max_len=100,
    ):
        super(Transformer, self).__init__()
        self.encoder_embed = nn.Embedding(src_vocab_size, embed_size)
        self.decoder_embed = nn.Embedding(trg_vocab_size, embed_size)
        self.pos_embed = PositionalEncoding(embed_size, max_len)
        self.encoder_layers = nn.ModuleList(
            [EncoderLayer(embed_size, heads, ff_dim) for _ in range(num_layers)]
        )
        self.decoder_layers = nn.ModuleList(
            [DecoderLayer(embed_size, heads, ff_dim) for _ in range(num_layers)]
        )
        self.fc_out = nn.Linear(embed_size, trg_vocab_size)

    def forward(self, src, trg, src_mask, trg_mask):
        # Token embeddings plus positional encodings
        src_embed = self.pos_embed(self.encoder_embed(src))
        trg_embed = self.pos_embed(self.decoder_embed(trg))

        for layer in self.encoder_layers:
            src_embed = layer(src_embed, src_mask)

        for layer in self.decoder_layers:
            trg_embed = layer(trg_embed, src_embed, src_mask, trg_mask)

        return self.fc_out(trg_embed)

Positional Encoding

Positional encoding injects position information into the sequence using fixed sinusoids. Note that the Transformer class above references PositionalEncoding, so this class must be defined before the model is instantiated. An end-to-end usage sketch follows the code.

class PositionalEncoding(nn.Module):
    def __init__(self, embed_size, max_len):
        super(PositionalEncoding, self).__init__()
        pe = torch.zeros(max_len, embed_size)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, embed_size, 2).float() * (-math.log(10000.0) / embed_size))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))

    def forward(self, x):
        return x + self.pe[:, :x.shape[1], :]
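Putting everything together, the sketch below builds a padding mask for the source and a causal mask for the target and runs one forward pass. The helper functions, vocabulary sizes, sequence lengths, and pad index 0 are illustrative assumptions, not fixed by the model.

def make_src_mask(src, pad_idx=0):
    # (N, 1, 1, src_len): False where the source token is padding (assumed pad_idx = 0)
    return (src != pad_idx).unsqueeze(1).unsqueeze(2)

def make_trg_mask(trg):
    # (N, 1, trg_len, trg_len): lower-triangular causal mask
    N, trg_len = trg.shape
    return torch.tril(torch.ones(trg_len, trg_len, device=trg.device)).expand(N, 1, trg_len, trg_len)

model = Transformer(src_vocab_size=1000, trg_vocab_size=1000)
src = torch.randint(1, 1000, (2, 12))   # (batch, src_len)
trg = torch.randint(1, 1000, (2, 9))    # (batch, trg_len)
logits = model(src, trg, make_src_mask(src), make_trg_mask(trg))
print(logits.shape)                     # torch.Size([2, 9, 1000])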
