PyTorch Distributed Data Parallel (DistributedDataParallel)

1 Initialize the process group

import os
from torch import distributed


try:
    world_size = int(os.environ["WORLD_SIZE"])  # total number of processes (global)
    rank = int(os.environ["RANK"])  # global index of the current process
    local_rank = int(os.environ["LOCAL_RANK"])  # process index on this machine (local)
    distributed.init_process_group("nccl")  # initialize the process group with the NCCL backend
except KeyError:
    # Launcher env vars are absent: fall back to a single-process group
    world_size = 1
    rank = 0
    local_rank = 0
    distributed.init_process_group(
        backend="nccl",
        init_method="tcp://127.0.0.1:12584",
        rank=rank,
        world_size=world_size,
    )
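These environment variables are set by the launcher (see section 7); the except branch falls back to a single-process group for local debugging. After initialization, each process typically binds itself to its own GPU so that later .cuda() calls land on the right device; a minimal sketch:

import torch

torch.cuda.set_device(local_rank)  # subsequent .cuda() calls target this process's GPU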

2 Partition the dataset with DistributedSampler

Unlike nn.DataParallel, in distributed training batch_size is the number of samples fed to a single GPU, because it corresponds to the partition handled by the current rank; the total batch size is this batch_size multiplied by the number of parallel processes. For example, when training on 8 GPUs, a batch_size of 3200 under nn.DataParallel corresponds to a batch_size of 400 under nn.DistributedDataParallel.

from torch.utils.data import DataLoader

from dataloader.distributed_sampler import DistributedSampler  # project-local sampler


train_sampler = DistributedSampler(
        train_set, num_replicas=world_size, rank=rank, shuffle=True, seed=seed)
trainloader = DataLoader(
    dataset=train_set,
    pin_memory=True,  # allocate batches in page-locked (pinned) host memory, which speeds up host-to-GPU copies
    batch_size=batch_size,
    num_workers=num_workers,
    sampler=train_sampler,
)
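For reference, an equivalent sampler ships with PyTorch itself. A minimal sketch using torch.utils.data.distributed.DistributedSampler and deriving the per-GPU batch size from a desired global batch size (global_batch_size is an illustrative name, not from the original code):

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

global_batch_size = 3200                      # samples per optimizer step across all GPUs
batch_size = global_batch_size // world_size  # per-GPU batch size, e.g. 400 with 8 GPUs

train_sampler = DistributedSampler(
    train_set, num_replicas=world_size, rank=rank, shuffle=True, seed=seed)
trainloader = DataLoader(
    train_set, batch_size=batch_size, sampler=train_sampler, pin_memory=True)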

3 Wrap the model with DistributedDataParallel

DistributedDataParallel all-reduces the gradients computed on the different GPUs (i.e., it aggregates the per-GPU gradients and synchronizes the result). After the all-reduce, the gradient held by the model replica on each GPU is the mean of the per-GPU gradients from before the all-reduce.

backbone = get_model(
        cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).cuda()

backbone = torch.nn.parallel.DistributedDataParallel(
        module=backbone,
        broadcast_buffers=False,      # do not re-broadcast buffers (e.g. BN running stats) on every forward
        device_ids=[local_rank],      # bind this replica to the GPU of the current process
        bucket_cap_mb=16,             # bucket size (MB) used when all-reducing gradients
        find_unused_parameters=True)  # tolerate parameters that receive no gradient in a pass
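Because the all-reduce averages gradients, every replica ends each backward pass with identical gradients. A quick sanity check (a sketch only; assumes backbone, img, world_size, and rank from the surrounding sections):

import torch
from torch import distributed

# Each rank sees a different batch, yet after backward() the gradients match,
# because DDP has already all-reduced (averaged) them during the backward pass.
backbone(img).sum().backward()
grad = next(backbone.parameters()).grad
gathered = [torch.zeros_like(grad) for _ in range(world_size)]
distributed.all_gather(gathered, grad)
if rank == 0:
    print(all(torch.allclose(g, gathered[0]) for g in gathered))  # expected: True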

4 Train the model

Load the input images, labels, and the model onto the GPU used by the current process:

for epoch in range(start_epoch, cfg.num_epoch):
    if isinstance(train_loader, DataLoader):
        # Tell the sampler which epoch this is; DistributedSampler uses it to
        # derive a per-epoch shuffling seed that is identical across processes.
        train_loader.sampler.set_epoch(epoch)
    for _, (img, local_labels) in enumerate(train_loader):
        global_step += 1
        # img and local_labels are assumed to already reside on the current GPU
        # (insightface's dataloader takes care of this); otherwise move them,
        # e.g. img = img.cuda(local_rank).
        local_embeddings = backbone(img)
        loss: torch.Tensor = module_partial_fc(local_embeddings, local_labels, opt)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
        opt.step()
        opt.zero_grad()
        lr_scheduler.step()
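One related DDP feature, not used in the insightface loop above: when gradients are accumulated over several micro-batches, the no_sync() context manager defers the gradient all-reduce until the final backward pass of the accumulation window. A sketch (accum_steps is a hypothetical name):

accum_steps = 4  # hypothetical accumulation factor
for i, (img, local_labels) in enumerate(train_loader):
    loss = module_partial_fc(backbone(img), local_labels, opt) / accum_steps
    if (i + 1) % accum_steps != 0:
        with backbone.no_sync():  # skip the all-reduce for this backward pass
            loss.backward()
    else:
        loss.backward()           # gradients are all-reduced here
        opt.step()
        opt.zero_grad()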

5 Compute the loss

distributed.all_gather(tensor_list, input_tensor): gathers input_tensor from every process and places the results in tensor_list on every process. Note that tensor_list must be pre-allocated, and plain all_gather does not backpropagate gradients to the gathered tensors:

from torch import distributed

# tensor_list must hold one pre-allocated tensor per process.
_gather_embeddings = [torch.zeros_like(local_embeddings) for _ in range(world_size)]
_gather_labels = [torch.zeros_like(local_labels) for _ in range(world_size)]
distributed.all_gather(_gather_embeddings, local_embeddings)
distributed.all_gather(_gather_labels, local_labels)

distributed.all_reduce(loss, distributed.ReduceOp.SUM)
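After the all-reduce with ReduceOp.SUM, every rank holds the sum of the per-rank losses; dividing by world_size gives the global mean, which is usually what you want to log. A sketch, assuming loss is a scalar tensor on the current GPU:

# Average the loss across ranks for logging, without touching the autograd graph.
loss_log = loss.detach().clone()
distributed.all_reduce(loss_log, distributed.ReduceOp.SUM)
loss_log /= world_size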

6 Save the model

if rank == 0:
    # Only rank 0 writes the file, since all replicas hold identical parameters;
    # backbone.module unwraps the DDP wrapper to reach the original model.
    path_module = os.path.join(cfg.output, "model_final.pt")
    torch.save(backbone.module.state_dict(), path_module)
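When loading such a checkpoint later, map the state dict onto the current process's GPU and load it into the unwrapped module; a minimal sketch:

# map_location keeps every process from first deserializing onto GPU 0.
state_dict = torch.load(path_module, map_location=f"cuda:{local_rank}")
backbone.module.load_state_dict(state_dict)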

7 Launch the parallel program

(1) Using torch.distributed.launch

This command runs the script in parallel n times (n being the number of GPUs used):

python -m torch.distributed.launch --nproc_per_node=8 train.py configs/ms1mv3_r50
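Note that torch.distributed.launch has been deprecated in recent PyTorch releases in favor of torchrun, which sets the same RANK/LOCAL_RANK/WORLD_SIZE environment variables read in section 1:

torchrun --nproc_per_node=8 train.py configs/ms1mv3_r50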

(2) Using torch.multiprocessing

torch.multiprocessing creates the processes itself, sidestepping some of the quirks torch.distributed.launch has around starting and shutting down processes:

def main(rank, world_size):
    # spawn() passes the process index (rank) as the first argument automatically
    pass

torch.multiprocessing.spawn(main, args=(world_size,), nprocs=world_size)
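A minimal end-to-end sketch of the spawn-based launch (single machine; the address and port are placeholders, and world_size is taken from the number of visible GPUs):

import os

import torch
import torch.multiprocessing as mp
from torch import distributed


def main(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "12584"
    distributed.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # ... build the model, wrap it in DistributedDataParallel, train ...
    distributed.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(main, args=(world_size,), nprocs=world_size)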

8 Full example

The code below is adapted from insightface:

import argparse
import logging
import os
from datetime import datetime

import numpy as np
import torch
from backbones import get_model
from dataset import get_dataloader
from losses import CombinedMarginLoss
from lr_scheduler import PolyScheduler
from partial_fc import PartialFC, PartialFCAdamW
from torch import distributed
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from utils.utils_callbacks import CallBackLogging, CallBackVerification
from utils.utils_config import get_config
from utils.utils_distributed_sampler import setup_seed
from utils.utils_logging import AverageMeter, init_logging

# NOTE: this is a lexicographic string comparison, kept as in the upstream code;
# packaging.version would be more robust.
assert torch.__version__ >= "1.12.0", "In order to enjoy the features of the new torch, \
we have upgraded the torch to 1.12.0. Versions before 1.12.0 may not work in the future."

try:
    rank = int(os.environ["RANK"])
    local_rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    distributed.init_process_group("nccl")
except KeyError:
    rank = 0
    local_rank = 0
    world_size = 1
    distributed.init_process_group(
        backend="nccl",
        init_method="tcp://127.0.0.1:12584",
        rank=rank,
        world_size=world_size,
    )


def main(args):

    # get config
    cfg = get_config(args.config)
    # global control random seed
    setup_seed(seed=cfg.seed, cuda_deterministic=False)

    torch.cuda.set_device(local_rank)

    os.makedirs(cfg.output, exist_ok=True)
    init_logging(rank, cfg.output)

    summary_writer = (
        SummaryWriter(log_dir=os.path.join(cfg.output, "tensorboard"))
        if rank == 0
        else None
    )
    
    wandb_logger = None
    if cfg.using_wandb:
        import wandb
        # Sign in to wandb
        try:
            wandb.login(key=cfg.wandb_key)
        except Exception as e:
            print("WandB Key must be provided in config file (base.py).")
            print(f"Config Error: {e}")
        # Initialize wandb
        run_name = datetime.now().strftime("%y%m%d_%H%M") + f"_GPU{rank}"
        run_name = run_name if cfg.suffix_run_name is None else run_name + f"_{cfg.suffix_run_name}"
        try:
            wandb_logger = wandb.init(
                entity=cfg.wandb_entity,
                project=cfg.wandb_project,
                sync_tensorboard=True,
                resume=cfg.wandb_resume,
                name=run_name,
                notes=cfg.notes) if rank == 0 or cfg.wandb_log_all else None
            if wandb_logger:
                wandb_logger.config.update(cfg)
        except Exception as e:
            print("WandB Data (Entity and Project name) must be provided in config file (base.py).")
            print(f"Config Error: {e}")
        
    train_loader = get_dataloader(
        cfg.rec,
        local_rank,
        cfg.batch_size,
        cfg.dali,
        cfg.seed,
        cfg.num_workers
    )

    backbone = get_model(
        cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).cuda()

    backbone = torch.nn.parallel.DistributedDataParallel(
        module=backbone, broadcast_buffers=False, device_ids=[local_rank], bucket_cap_mb=16,
        find_unused_parameters=True)

    backbone.train()
    # FIXME using gradient checkpoint if there are some unused parameters will cause error
    backbone._set_static_graph()

    margin_loss = CombinedMarginLoss(
        64,
        cfg.margin_list[0],
        cfg.margin_list[1],
        cfg.margin_list[2],
        cfg.interclass_filtering_threshold
    )

    if cfg.optimizer == "sgd":
        module_partial_fc = PartialFC(
            margin_loss, cfg.embedding_size, cfg.num_classes,
            cfg.sample_rate, cfg.fp16)
        module_partial_fc.train().cuda()
        # TODO the params of partial fc must be last in the params list
        opt = torch.optim.SGD(
            params=[{"params": backbone.parameters()}, {"params": module_partial_fc.parameters()}],
            lr=cfg.lr, momentum=0.9, weight_decay=cfg.weight_decay)

    elif cfg.optimizer == "adamw":
        module_partial_fc = PartialFCAdamW(
            margin_loss, cfg.embedding_size, cfg.num_classes,
            cfg.sample_rate, cfg.fp16)
        module_partial_fc.train().cuda()
        opt = torch.optim.AdamW(
            params=[{"params": backbone.parameters()}, {"params": module_partial_fc.parameters()}],
            lr=cfg.lr, weight_decay=cfg.weight_decay)
    else:
        raise ValueError(f"Unsupported optimizer: {cfg.optimizer}")

    cfg.total_batch_size = cfg.batch_size * world_size
    cfg.warmup_step = cfg.num_image // cfg.total_batch_size * cfg.warmup_epoch
    cfg.total_step = cfg.num_image // cfg.total_batch_size * cfg.num_epoch

    lr_scheduler = PolyScheduler(
        optimizer=opt,
        base_lr=cfg.lr,
        max_steps=cfg.total_step,
        warmup_steps=cfg.warmup_step,
        last_epoch=-1
    )

    start_epoch = 0
    global_step = 0
    if cfg.resume:
        dict_checkpoint = torch.load(os.path.join(cfg.output, f"checkpoint_gpu_{rank}.pt"))
        start_epoch = dict_checkpoint["epoch"]
        global_step = dict_checkpoint["global_step"]
        backbone.module.load_state_dict(dict_checkpoint["state_dict_backbone"])
        module_partial_fc.load_state_dict(dict_checkpoint["state_dict_softmax_fc"])
        opt.load_state_dict(dict_checkpoint["state_optimizer"])
        lr_scheduler.load_state_dict(dict_checkpoint["state_lr_scheduler"])
        del dict_checkpoint

    for key, value in cfg.items():
        num_space = 25 - len(key)
        logging.info(": " + key + " " * num_space + str(value))

    callback_verification = CallBackVerification(
        val_targets=cfg.val_targets, rec_prefix=cfg.rec, 
        summary_writer=summary_writer, wandb_logger=wandb_logger
    )
    callback_logging = CallBackLogging(
        frequent=cfg.frequent,
        total_step=cfg.total_step,
        batch_size=cfg.batch_size,
        start_step=global_step,
        writer=summary_writer
    )

    loss_am = AverageMeter()
    amp = torch.cuda.amp.grad_scaler.GradScaler(growth_interval=100)

    for epoch in range(start_epoch, cfg.num_epoch):

        if isinstance(train_loader, DataLoader):
            train_loader.sampler.set_epoch(epoch)
        for _, (img, local_labels) in enumerate(train_loader):
            global_step += 1
            local_embeddings = backbone(img)
            loss: torch.Tensor = module_partial_fc(local_embeddings, local_labels, opt)

            if cfg.fp16:
                amp.scale(loss).backward()
                amp.unscale_(opt)
                torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
                amp.step(opt)
                amp.update()
            else:
                loss.backward()
                torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
                opt.step()

            opt.zero_grad()
            lr_scheduler.step()

            with torch.no_grad():
                if wandb_logger:
                    wandb_logger.log({
                        'Loss/Step Loss': loss.item(),
                        'Loss/Train Loss': loss_am.avg,
                        'Process/Step': global_step,
                        'Process/Epoch': epoch
                    })
                
                loss_am.update(loss.item(), 1)
                callback_logging(global_step, loss_am, epoch, cfg.fp16, lr_scheduler.get_last_lr()[0], amp)

                if global_step % cfg.verbose == 0 and global_step > 0:
                    callback_verification(global_step, backbone)

        if cfg.save_all_states:
            checkpoint = {
                "epoch": epoch + 1,
                "global_step": global_step,
                "state_dict_backbone": backbone.module.state_dict(),
                "state_dict_softmax_fc": module_partial_fc.state_dict(),
                "state_optimizer": opt.state_dict(),
                "state_lr_scheduler": lr_scheduler.state_dict()
            }
            torch.save(checkpoint, os.path.join(cfg.output, f"checkpoint_gpu_{rank}.pt"))

        if rank == 0:
            path_module = os.path.join(cfg.output, "model.pt")
            torch.save(backbone.module.state_dict(), path_module)
            
            if wandb_logger and cfg.save_artifacts:
                artifact_name = f"{run_name}_E{epoch}"
                model = wandb.Artifact(artifact_name, type='model')
                model.add_file(path_module)
                wandb_logger.log_artifact(model)

        if cfg.dali:
            train_loader.reset()

    if rank == 0:
        path_module = os.path.join(cfg.output, "model.pt")
        torch.save(backbone.module.state_dict(), path_module)

        from torch2onnx import convert_onnx
        convert_onnx(backbone.module.cpu().eval(), path_module, os.path.join(cfg.output, "model.onnx"))
        
        if wandb_logger and cfg.save_artifacts:
            artifact_name = f"{run_name}_Final"
            model = wandb.Artifact(artifact_name, type='model')
            model.add_file(path_module)
            wandb_logger.log_artifact(model)

    distributed.destroy_process_group()


if __name__ == "__main__":
    torch.backends.cudnn.benchmark = True
    parser = argparse.ArgumentParser(
        description="Distributed Arcface Training in Pytorch")
    parser.add_argument("config", type=str, help="py config file")
    main(parser.parse_args())
