pytorch_v1.13.1_gpu Installation

Posted: 2025-09-04 18:19:26 · AIGC
To install PyTorch 1.13.1 with GPU support, make sure the system environment satisfies the necessary dependencies and pick the appropriate install command. The detailed steps are below.

### Confirm CUDA version and GPU compatibility

First confirm that your NVIDIA GPU model (e.g., GTX 1050) supports the target CUDA version. The GTX 1050 is compatible with CUDA 11.6 and 11.7, and PyTorch 1.13.1 provides prebuilt wheels for both of these CUDA versions[^1].

You can check the CUDA toolkit version currently installed on the system with:

```bash
nvcc --version
```

If the existing CUDA version is older (for example 11.3), upgrading to 11.6 or 11.7 is recommended for better compatibility and performance[^1].

### Install PyTorch 1.13.1 and its companion packages

PyTorch can be installed in several ways, including pip and conda (a conda example is sketched at the end of this article). If you want to install directly with pip and pin an explicit CUDA version, use the command below.

#### Installing with pip

Assuming you plan to use CUDA 11.6, run the following to install PyTorch 1.13.1 together with the matching torchvision and torchaudio:

```bash
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
```

This command downloads and installs the corresponding wheels from the specified index[^2].

#### Verifying the installation

Once the install finishes, a short Python script can verify that PyTorch detects the GPU:

```python
import torch
import torchvision
import torchaudio

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")
print(f"Torchvision version: {torchvision.__version__}")
print(f"Torchaudio version: {torchaudio.__version__}")
```

The script prints the version of each component along with the CUDA information, helping confirm that the installation went smoothly[^3].

### Notes

- If you are working on Ubuntu, some older guides suggest installing the NVIDIA driver and the CUDA toolkit by hand. With recent PyTorch wheels the required CUDA runtime libraries are bundled automatically (the GPU driver itself still needs to be present), so unless you have a special need there is no reason to install these components separately.
- When installing a specific CUDA toolkit version, be aware that it can affect the state of the existing driver. If a newer NVIDIA driver is already installed, avoid reinstalling a CUDA toolkit that could conflict with it[^4].
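As a supplement to the compatibility check above, the driver-side ceiling can be inspected with `nvidia-smi`: the "CUDA Version" field in its header is the highest CUDA runtime the installed driver supports, and it should be at least as high as the toolkit version you target.

```bash
# Prints the driver version and the highest CUDA runtime version that driver supports
nvidia-smi
```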
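For completeness, here is a sketch of the conda route mentioned earlier. The channel layout and the `pytorch-cuda` metapackage follow the pattern used on the official previous-versions page; treat the exact pins and the environment name as assumptions and double-check them against pytorch.org before running:

```bash
# Hypothetical environment name; package pins mirror the pip command above
conda create -n torch1131 python=3.10 -y
conda activate torch1131
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```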
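Beyond printing version strings, a small computation on the GPU confirms that CUDA kernels actually run on your card. A minimal sketch, assuming the verification script above already reported CUDA as available:

```python
import torch

# Run a small matrix multiplication on the GPU (falls back to CPU if CUDA is unavailable)
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)
z = x @ y
print(f"matmul ran on {z.device}, result shape: {tuple(z.shape)}")
```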

Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' --- Submodule initialization took 555.36 sec Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\pytorch\setup.py", line 1510, in <module> main() File "E:\PyTorch_Build\pytorch\pytorch\setup.py", line 1176, in main build_deps() File "E:\PyTorch_Build\pytorch\pytorch\setup.py", line 471, in build_deps build_caffe2( File "E:\PyTorch_Build\pytorch\pytorch\tools\build_pytorch_libs.py", line 82, in build_caffe2 my_env = _create_build_env() File "E:\PyTorch_Build\pytorch\pytorch\tools\build_pytorch_libs.py", line 68, in _create_build_env my_env = _overlay_windows_vcvars(my_env) File "E:\PyTorch_Build\pytorch\pytorch\tools\build_pytorch_libs.py", line 37, in _overlay_windows_vcvars vc_env: Dict[str, str] = distutils._msvccompiler._get_vc_env(vc_arch) AttributeError: module 'distutils' has no attribute '_msvccompiler'. Did you mean: 'ccompiler'? (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> # 降级 NumPy 到兼容版本 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> pip uninstall -y numpy Found existing installation: numpy 2.1.2 Uninstalling numpy-2.1.2: Successfully uninstalled numpy-2.1.2 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> pip install numpy==1.26.4 Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting numpy==1.26.4 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/19/77/538f202862b9183f54108557bfda67e17603fc560c384559e769321c9d92/numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB) Installing collected packages: numpy Successfully installed numpy-1.26.4 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> # 安装兼容的 PyTorch 版本 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> pip install torch==2.3.0+cu121 torchvision==0.18.0+cu121 --no-deps Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple ERROR: Could not find a version that satisfies the requirement torch==2.3.0+cu121 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.7.0, 2.7.1, 2.8.0) ERROR: No matching distribution found for torch==2.3.0+cu121 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> import torch import: The term 'import' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> import numpy as np import: The term 'import' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print('='*50) 无法初始化设备 PRN (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'PyTorch 版本: {torch.__version__}') fPyTorch 版本: {torch.__version__}: The term 'fPyTorch 版本: {torch.__version__}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
(pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'CUDA 可用: {torch.cuda.is_available()}') fCUDA 可用: {torch.cuda.is_available()}: The term 'fCUDA 可用: {torch.cuda.is_available()}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'CUDA 版本: {torch.version.cuda}') fCUDA 版本: {torch.version.cuda}: The term 'fCUDA 版本: {torch.version.cuda}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'cuDNN 版本: {torch.backends.cudnn.version()}') fcuDNN 版本: {torch.backends.cudnn.version()}: The term 'fcuDNN 版本: {torch.backends.cudnn.version()}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'GPU 名称: {torch.cuda.get_device_name(0)}') fGPU 名称: {torch.cuda.get_device_name(0)}: The term 'fGPU 名称: {torch.cuda.get_device_name(0)}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'计算能力: {torch.cuda.get_device_capability(0)}') f计算能力: {torch.cuda.get_device_capability(0)}: The term 'f计算能力: {torch.cuda.get_device_capability(0)}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> # 简单 CUDA 测试 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> x = torch.randn(100, 100, device='cuda') ParserError: Line | 1 | x = torch.randn(100, 100, device='cuda') | ~ | Missing expression after ','. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> y = torch.randn(100, 100, device='cuda') ParserError: Line | 1 | y = torch.randn(100, 100, device='cuda') | ~ | Missing expression after ','. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> z = x @ y ParserError: Line | 1 | z = x @ y | ~ | Unrecognized token in source text. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'矩阵乘法完成: {z.size()}') f矩阵乘法完成: {z.size()}: The term 'f矩阵乘法完成: {z.size()}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> # NumPy 互操作测试 (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> a = np.random.rand(100, 100) a: The term 'a' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> b = torch.from_numpy(a).cuda() a: The term 'a' is not recognized as a name of a cmdlet, function, script file, or executable program. 
Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> c = b.cpu().numpy() ParserError: Line | 1 | c = b.cpu().numpy() | ~ | An expression was expected after '('. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print(f'NumPy 互操作测试成功: {c.shape == a.shape}') fNumPy 互操作测试成功: {c.shape == a.shape}: The term 'fNumPy 互操作测试成功: {c.shape == a.shape}' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch> print('='*50) 无法初始化设备 PRN (pytorch_env) PS E:\PyTorch_Build\pytorch\pytorch>
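上面这一段暴露了两个互相独立的问题:一是 python setup.py 在调用 distutils._msvccompiler._get_vc_env 时报 AttributeError,这通常与较新的 setuptools 接管 distutils 的方式有关;二是验证用的 Python 代码被直接粘贴进了 PowerShell,自然会被当成 PowerShell 命令而解析失败(print 被映射到打印机设备,于是出现"无法初始化设备 PRN")。下面是一个仅供参考的处理思路:setuptools 的版本号和 verify_torch.py 这个文件名都只是示例性的假设,并非官方指定的修复方案;降级前请先确认已通过 vcvarsall.bat x64(或 x64 Native Tools Command Prompt)加载了 MSVC 环境。

```powershell
# 1) 针对 distutils._msvccompiler 报错的一个常见绕法(假设性 workaround,并非官方修复):
#    在已加载 MSVC 环境的前提下,把 setuptools 固定到较旧版本后重试构建
pip install "setuptools<75"

# 2) Python 验证代码必须交给 python 解释器执行,不能直接粘贴进 PowerShell。
#    这里把脚本写入 verify_torch.py(文件名仅为示例),再运行它:
@'
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
'@ | Set-Content -Encoding utf8 verify_torch.py

python verify_torch.py
```

这样即便后续构建仍失败,至少验证脚本本身不会再被 PowerShell 误解析。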

PowerShell 7 环境已加载 (版本: 7.5.2) PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> # 退出虚拟环境 (pytorch_env) PS E:\PyTorch_Build\pytorch> deactivate PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 删除旧环境 PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force .\pytorch_env PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force .\cuda_env PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 创建新虚拟环境 PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装基础编译工具 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install -U pip setuptools wheel ninja cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Collecting wheel Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl (72 kB) Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) ERROR: To modify pip, please run the following command: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -m pip install -U pip setuptools wheel ninja cmake [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证 CUDA 安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch> nvcc --version # 应显示 CUDA 12.x nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 正确更新 pip 和工具链 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python -m pip install -U pip setuptools wheel ninja cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached 
https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Collecting wheel Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl (72 kB) Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) Installing collected packages: wheel, setuptools, pip, ninja, cmake Attempting uninstall: setuptools Found existing installation: setuptools 65.5.0 Uninstalling setuptools-65.5.0: Successfully uninstalled setuptools-65.5.0 Attempting uninstall: pip Found existing installation: pip 22.3.1 Uninstalling pip-22.3.1: Successfully uninstalled pip-22.3.1 Successfully installed cmake-4.1.0 ninja-1.13.0 pip-25.2 setuptools-80.9.0 wheel-0.45.1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip --version # 应显示 25.2+ pip 25.2 from E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\pip (python 3.10) (rtx5070_env) PS E:\PyTorch_Build\pytorch> cmake --version # 应显示 4.1.0+ cmake version 4.1.0 CMake suite maintained and supported by Kitware (kitware.com/cmake). 
(rtx5070_env) PS E:\PyTorch_Build\pytorch> ninja --version # 应显示 1.13.0+ 1.13.0.git.kitware.jobserver-pipe-1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 CUDA 12.1 环境变量 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:PATH = "$env:CUDA_PATH\bin;" + $env:PATH (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证 CUDA 版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> nvcc --version # 应显示 release 12.1 nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 cuDNN 路径(根据实际安装位置) (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$env:CUDA_PATH\include" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$env:CUDA_PATH\lib\x64\cudnn.lib" (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting pyyaml Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b5/84/0fa4b06f6d6c958d207620fc60005e241ecedceee58931bb20138e1e5776/PyYAML-6.0.2-cp310-cp310-win_amd64.whl (161 kB) Collecting numpy Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dd/4b822569d6b96c39d1215dbae0582fd99954dcbcf0c1a13c61783feaca3f/numpy-2.2.6-cp310-cp310-win_amd64.whl (12.9 MB) Collecting typing_extensions Using cached https://pypi.tuna.tsinghua.edu.cn/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Installing collected packages: typing_extensions, pyyaml, numpy Successfully installed numpy-2.2.6 pyyaml-6.0.2 typing_extensions-4.15.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装 GPU 相关依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install mkl mkl-include intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting mkl Using cached https://pypi.tuna.tsinghua.edu.cn/packages/91/ae/025174ee141432b974f97ecd2aea529a3bdb547392bde3dd55ce48fe7827/mkl-2025.2.0-py2.py3-none-win_amd64.whl (153.6 MB) Collecting mkl-include Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/87/3eee37bf95c6b820b6394ad98e50132798514ecda1b2584c71c2c96b973c/mkl_include-2025.2.0-py2.py3-none-win_amd64.whl (1.3 MB) Collecting intel-openmp Using cached https://pypi.tuna.tsinghua.edu.cn/packages/89/ed/13fed53fcc7ea17ff84095e89e63418df91d4eeefdc74454243d529bf5a3/intel_openmp-2025.2.1-py2.py3-none-win_amd64.whl (34.0 MB) Collecting tbb==2022.* (from mkl) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/4e/d2/01e2a93f9c644585088188840bf453f23ed1a2838ec51d5ba1ada1ebca71/tbb-2022.2.0-py3-none-win_amd64.whl (420 kB) Collecting intel-cmplr-lib-ur==2025.2.1 (from intel-openmp) Using 
cached https://pypi.tuna.tsinghua.edu.cn/packages/a8/70/938e81f58886fd4e114d5a5480d98c1396e73e40b7650f566ad0c4395311/intel_cmplr_lib_ur-2025.2.1-py2.py3-none-win_amd64.whl (1.2 MB) Collecting umf==0.11.* (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/33/a0/c8d755f08f50ddd99cb4a29a7e950ced7a0903cb72253e57059063609103/umf-0.11.0-py2.py3-none-win_amd64.whl (231 kB) Collecting tcmlib==1.* (from tbb==2022.*->mkl) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/91/7b/e30c461a27b97e0090e4db822eeb1d37b310863241f8c3ee56f68df3e76e/tcmlib-1.4.0-py2.py3-none-win_amd64.whl (370 kB) Installing collected packages: tcmlib, mkl-include, umf, tbb, intel-cmplr-lib-ur, intel-openmp, mkl Successfully installed intel-cmplr-lib-ur-2025.2.1 intel-openmp-2025.2.1 mkl-2025.2.0 mkl-include-2025.2.0 tbb-2022.2.0 tcmlib-1.4.0 umf-0.11.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (6.0.2) Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.2.6) Requirement already satisfied: typing_extensions in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.15.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装 GPU 相关依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install mkl mkl-include intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: mkl in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: mkl-include in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: intel-openmp in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.1) Requirement already satisfied: tbb==2022.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2022.2.0) Requirement already satisfied: intel-cmplr-lib-ur==2025.2.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-openmp) (2025.2.1) Requirement already satisfied: umf==0.11.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) (0.11.0) Requirement already satisfied: tcmlib==1.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from tbb==2022.*->mkl) (1.4.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译参数 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:USE_CUDA=1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:USE_CUDNN=1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CMAKE_GENERATOR="Ninja" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:MAX_JOBS=8 # 根据 CPU 核心数设置 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 运行编译 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install >> --cmake >> --cmake-only >> --cmake-generator="Ninja" >> --verbose >> -DCMAKE_CUDA_COMPILER="${env:CUDA_PATH}\bin\nvcc.exe" >> -DCUDNN_INCLUDE_DIR="${env:CUDNN_INCLUDE_DIR}" >> -DCUDNN_LIBRARY="${env:CUDNN_LIBRARY}" >> -DTORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" Building wheel 
torch-2.9.0a0+git2d31c3d option --cmake-generator not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> python rtx5070_test.py ============================================================ Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\rtx5070_test.py", line 39, in <module> verify_gpu_support() File "E:\PyTorch_Build\pytorch\rtx5070_test.py", line 6, in verify_gpu_support if not torch.cuda.is_available(): AttributeError: module 'torch' has no attribute 'cuda' (rtx5070_env) PS E:\PyTorch_Build\pytorch>
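这里有两点值得注意:setup.py 并不接受 --cmake-generator、-DXXX 这类 CMake 参数(所以报 "option --cmake-generator not recognized"),PyTorch 在 Windows 上的源码构建习惯上是通过环境变量把这些选项传给 CMake;另外 "module 'torch' has no attribute 'cuda'" 很可能是因为构建并未完成,而在源码根目录下运行脚本时 import 到的是源码里尚未编译的 torch 包。下面是一个基于环境变量的构建示例(各项取值仅为示例;TORCH_CUDA_ARCH_LIST 中的 "12.0" 针对 RTX 5070/sm_120,前提是所用 CUDA 工具链确实支持该架构,这一点属于假设):

```powershell
# 通过环境变量向 CMake/构建系统传递选项(示例值,按实际环境调整)
$env:USE_CUDA = "1"
$env:USE_CUDNN = "1"
$env:CMAKE_GENERATOR = "Ninja"
$env:MAX_JOBS = "8"
$env:TORCH_CUDA_ARCH_LIST = "12.0"            # RTX 5070 对应 sm_120
$env:CUDACXX = "$env:CUDA_PATH\bin\nvcc.exe"   # 指向实际要用的 nvcc

# 重新驱动构建(develop 便于反复调试,也可以换成 install)
python setup.py develop

# 验证时把测试脚本复制到源码树之外再运行,避免 import 到源码里未编译的 torch 包
Copy-Item .\rtx5070_test.py ..\rtx5070_test.py
cd ..
python .\rtx5070_test.py
```

另外,尽管前面把 CUDA_PATH 指向了 v12.1,nvcc 仍然显示 13.0,说明 PATH 里优先解析到的是 13.0 工具链;而要编译 sm_120 内核,一般认为需要 CUDA 12.8 及以上,所以这里反而不必强行退回 12.1。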

PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> # 1. 回到 PyTorch 源码根目录 PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 2. 删除之前可能创建的所有构建目录 PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build -ErrorAction SilentlyContinue PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build2 -ErrorAction SilentlyContinue PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 3. 创建一个新的构建目录 PS E:\PyTorch_Build\pytorch> mkdir build Directory: E:\PyTorch_Build\pytorch Mode LastWriteTime Length Name ---- ------------- ------ ---- d---- 2025/8/31 5:15 build PS E:\PyTorch_Build\pytorch> cd build PS E:\PyTorch_Build\pytorch\build> PS E:\PyTorch_Build\pytorch\build> # 4. 运行 CMake 配置命令(注意最后的 .. 表示上级目录) PS E:\PyTorch_Build\pytorch\build> cmake -G "Visual Studio 17 2022" -Ax64 -Thost=x64 >> -DBUILD_PYTHON=ON >> -DUSE_CUDA=ON >> -DUSE_CUDNN=ON >> -DCUDA_TOOLKIT_ROOT_DIR="E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0" >> -DTORCH_CUDA_ARCH_LIST="8.0;8.6;8.9;9.0" >> -DCMAKE_BUILD_TYPE=Release >> -DPython_EXECUTABLE="E:\Python310\python.exe" >> .. CMake Deprecation Warning at CMakeLists.txt:9 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. CMake Error at CMakeLists.txt:25 (message): In-source build are not supported -- Configuring incomplete, errors occurred! PS E:\PyTorch_Build\pytorch\build> pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 Looking in indexes: https://download.pytorch.org/whl/cu121 Requirement already satisfied: torch in e:\python310\lib\site-packages (2.5.1+cu121) Requirement already satisfied: torchvision in e:\python310\lib\site-packages (0.20.1+cu121) Requirement already satisfied: torchaudio in e:\python310\lib\site-packages (2.5.1+cu121) Requirement already satisfied: filelock in e:\python310\lib\site-packages (from torch) (3.19.1) Requirement already satisfied: typing-extensions>=4.8.0 in e:\python310\lib\site-packages (from torch) (4.14.1) Requirement already satisfied: networkx in e:\python310\lib\site-packages (from torch) (3.4.2) Requirement already satisfied: jinja2 in e:\python310\lib\site-packages (from torch) (3.1.6) Requirement already satisfied: fsspec in e:\python310\lib\site-packages (from torch) (2025.7.0) Requirement already satisfied: sympy==1.13.1 in e:\python310\lib\site-packages (from torch) (1.13.1) Requirement already satisfied: mpmath<1.4,>=1.1.0 in e:\python310\lib\site-packages (from sympy==1.13.1->torch) (1.3.0) Requirement already satisfied: numpy in e:\python310\lib\site-packages (from torchvision) (1.26.3) Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in e:\python310\lib\site-packages (from torchvision) (10.4.0) Requirement already satisfied: MarkupSafe>=2.0 in e:\python310\lib\site-packages (from jinja2->torch) (2.1.5) PS E:\PyTorch_Build\pytorch\build> python -c " >> import torch >> print('PyTorch版本:', torch.__version__) >> print('CUDA是否可用:', torch.cuda.is_available()) >> if torch.cuda.is_available(): >> print('GPU设备名称:', torch.cuda.get_device_name(0)) >> print('CUDA计算能力:', 
torch.cuda.get_device_capability(0)) >> print('CUDA版本:', torch.version.cuda) >> " PyTorch版本: 2.5.1+cu121 CUDA是否可用: True E:\Python310\lib\site-packages\torch\cuda\__init__.py:235: UserWarning: NVIDIA GeForce RTX 5070 with CUDA capability sm_120 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. If you want to use the NVIDIA GeForce RTX 5070 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ warnings.warn( GPU设备名称: NVIDIA GeForce RTX 5070 CUDA计算能力: (12, 0) CUDA版本: 12.1 PS E:\PyTorch_Build\pytorch\build> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> python setup.py install Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d -- Checkout nccl release tag: v2.27.5-1 cmake -GVisual Studio 16 2019 -Ax64 -Thost=x64 -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\Python310\Lib\site-packages -DPython_EXECUTABLE=E:\Python310\python.exe -DPython_NumPy_INCLUDE_DIR=E:\Python310\lib\site-packages\numpy\core\include -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_NUMPY=True E:\PyTorch_Build\pytorch CMake Error: Error: generator : Visual Studio 16 2019 Does not match the generator used previously: Visual Studio 17 2022 Either remove the CMakeCache.txt file and CMakeFiles directory or choose a different binary directory. PS E:\PyTorch_Build\pytorch>
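这一段里的两个 CMake 报错很可能同源:PyTorch 不允许在源码目录里直接配置("In-source build are not supported"),而此前的尝试大概率已经在源码树或 build 目录里留下了 CMakeCache.txt/CMakeFiles;随后 setup.py 默认用 Visual Studio 16 2019 生成器再去配置,就与缓存里记录的 Visual Studio 17 2022 冲突。一个可行的清理与重试方式如下(生成器名称按本机实际安装的 VS 版本填写,此处以 VS 2022 为例,属于假设):

```powershell
# 清掉源码树和 build 目录里残留的 CMake 缓存
cd E:\PyTorch_Build\pytorch
Remove-Item -Recurse -Force build -ErrorAction SilentlyContinue
Remove-Item -Force CMakeCache.txt -ErrorAction SilentlyContinue
Remove-Item -Recurse -Force CMakeFiles -ErrorAction SilentlyContinue

# 显式指定与本机一致的生成器,避免 setup.py 默认回退到 Visual Studio 16 2019
$env:CMAKE_GENERATOR = "Visual Studio 17 2022"

# 统一交给 setup.py 驱动 CMake,不要再手动在源码树里跑 cmake
python setup.py develop
```

还要注意:这一段里 pip 装到的 2.5.1+cu121 预编译包只支持到 sm_90,对 RTX 5070(sm_120)的警告是预期行为,与上面的 CMake 问题无关。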

PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> # 移除可能导致冲突的镜像源 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --remove-key channels (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --remove-key default_channels CondaKeyError: 'default_channels': undefined in config (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 设置官方通道优先级 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --add channels pytorch-nightly C:\Miniconda3\Lib\site-packages\conda\base\context.py:211: FutureWarning: Adding 'defaults' to channel list implicitly is deprecated and will be removed in 25.9. To remove this warning, please choose a default channel explicitly with conda's regular configuration system, e.g. by adding 'defaults' to the list of channels: conda config --add channels defaults For more information see https://docs.conda.io/projects/conda/en/stable/user-guide/configuration/use-condarc.html deprecated.topic( (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --add channels nvidia (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --add channels conda-forge (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --add channels defaults Warning: 'defaults' already in 'channels' list, moving to the top (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 设置通道优先级为 strict(避免混合来源包) (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --set channel_priority strict (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证配置 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --show channels channels: - defaults - conda-forge - nvidia - pytorch-nightly (pytorch_env) PS E:\PyTorch_Build\pytorch> conda config --show channel_priority channel_priority: strict (pytorch_env) PS E:\PyTorch_Build\pytorch> # 1. 
安装基础依赖 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -y python=3.10 cudatoolkit=12.1 cudnn numpy ninja 3 channel Terms of Service accepted Channels: - defaults - conda-forge - nvidia - pytorch-nightly Platform: win-64 Collecting package metadata (repodata.json): done Solving environment: failed LibMambaUnsatisfiableError: Encountered problems while solving: - unsupported request - package mkl-service-2.5.2-py313haca3b5c_0 requires python_abi 3.13.* *_cp313, but none of the providers can be instd Could not solve for environment specs The following packages are incompatible ├─ cudatoolkit =12.1 * does not exist (perhaps a typo or a missing channel); ├─ mkl-service =* * is installable with the potential options │ ├─ mkl-service 2.5.2 would require │ │ └─ python_abi =3.13 *_cp313 with the potential options │ │ ├─ python_abi 3.13 would require │ │ │ └─ python =3.13 *_cp313, which can be installed; │ │ └─ python_abi 3.13 conflicts with any installable versions previously reported; │ ├─ mkl-service 1.1.2 would require │ │ └─ mkl >=2019.1,<2021.0a0 *, which can be installed; │ ├─ mkl-service 1.1.2 would require │ │ └─ mkl >=2018.0.0,<2019.0a0 *, which can be installed; │ ├─ mkl-service 1.1.2 would require │ │ └─ mkl >=2018.0.3,<2019.0a0 *, which can be installed; │ ├─ mkl-service 2.0.2 would require │ │ └─ mkl >=2019.3,<2021.0a0 *, which can be installed; │ ├─ mkl-service 2.3.0 would require │ │ └─ mkl >=2019.4,<2021.0a0 *, which can be installed; │ ├─ mkl-service [2.3.0|2.4.0] would require │ │ └─ mkl >=2021.2.0,<2022.0a0 *, which can be installed; │ ├─ mkl-service 2.4.0 would require │ │ └─ mkl >=2021.4.0,<2022.0a0 *, which can be installed; │ ├─ mkl-service 2.4.0 would require │ │ └─ mkl >=2023.1.0,<2024.0a0 *, which can be installed; │ ├─ mkl-service 2.4.0 would require │ │ └─ mkl >=2025.0.0,<2026.0a0 *, which can be installed; │ └─ mkl-service [2.0.1|2.0.2|...|2.5.2] conflicts with any installable versions previously reported; ├─ mkl ==2024.2.2 * is not installable because it conflicts with any installable versions previously reported; └─ python =3.10 * is not installable because it conflicts with any installable versions previously reported. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. 
单独安装 PyTorch (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia 3 channel Terms of Service accepted Channels: - pytorch-nightly - nvidia - defaults - conda-forge Platform: win-64 Collecting package metadata (repodata.json): done Solving environment: failed LibMambaUnsatisfiableError: Encountered problems while solving: - package torchvision-0.20.0.dev20241112-py310_cu124 requires python >=3.10,<3.11.0a0, but none of the providers can d - package pytorch-2.5.0.dev20240618-py3.11_cuda12.4_cudnn8_0 requires mkl 2021.4.*, but none of the providers can be d - nothing provides pytorch 2.1.0.dev20230523 needed by torchaudio-2.1.0.dev20230523-py311_cu117 Could not solve for environment specs The following packages are incompatible ├─ libuv =1.44 * is requested and can be installed; ├─ mkl ==2024.2.2 * is requested and can be installed; ├─ pin on python 3.13.* =* * is installable and it requires │ └─ python =3.13 *, which can be installed; ├─ pytorch =* * is not installable because there are no viable options │ ├─ pytorch [2.5.0.dev20240618|2.5.0.dev20240619] would require │ │ └─ mkl =2021.4 *, which conflicts with any installable versions previously reported; │ ├─ pytorch [2.5.0.dev20240618|2.5.0.dev20240619|2.5.0.dev20240730|2.5.0.dev20240731|2.6.0.dev20241111] would require │ │ └─ mkl =2023.1 *, which conflicts with any installable versions previously reported; │ ├─ pytorch 2.6.0.dev20241112 would require │ │ ├─ libuv >=1.48.0,<2.0a0 *, which conflicts with any installable versions previously reported; │ │ └─ mkl =2023.1 *, which conflicts with any installable versions previously reported; │ └─ pytorch [1.0.1|1.10.2|...|2.7.1] conflicts with any installable versions previously reported; ├─ torchaudio =* * is not installable because there are no viable options │ ├─ torchaudio 2.1.0.dev20230523 would require │ │ └─ pytorch ==2.1.0.0dev20230523 *, which does not exist (perhaps a missing channel); │ ├─ torchaudio 2.4.0.dev20240729 would require │ │ └─ pytorch ==2.5.0.0dev20240726 *, which does not exist (perhaps a missing channel); │ ├─ torchaudio 2.4.0.dev20240729 would require │ │ └─ pytorch ==2.5.0.0dev20240729 *, which does not exist (perhaps a missing channel); │ ├─ torchaudio 2.4.0.dev20240729 would require │ │ └─ pytorch ==2.5.0.0dev20240728 *, which does not exist (perhaps a missing channel); │ ├─ torchaudio [2.5.0.dev20241112|2.5.0.dev20241113|...|2.5.0.dev20241118] would require │ │ └─ pytorch ==2.6.0.0dev20241112 *, which cannot be installed (as previously explained); │ └─ torchaudio 2.5.1 conflicts with any installable versions previously reported; └─ torchvision =* * is not installable because there are no viable options ├─ torchvision [0.20.0.dev20241112|0.20.0.dev20241113|...|0.20.0.dev20241118] would require │ └─ python >=3.9,<3.10.0a0 *, which conflicts with any installable versions previously reported; ├─ torchvision [0.20.0.dev20241112|0.20.0.dev20241113|...|0.20.0.dev20241118] would require │ └─ python >=3.10,<3.11.0a0 *, which conflicts with any installable versions previously reported; ├─ torchvision [0.20.0.dev20241112|0.20.0.dev20241113|...|0.20.0.dev20241118] would require │ └─ python >=3.11,<3.12.0a0 *, which conflicts with any installable versions previously reported; ├─ torchvision [0.20.0.dev20241112|0.20.0.dev20241113|...|0.20.0.dev20241118] would require │ └─ python >=3.12,<3.13.0a0 *, which conflicts with any installable versions previously reported; └─ torchvision 
[0.11.3|0.13.1|...|0.22.0] conflicts with any installable versions previously reported. Pins seem to be involved in the conflict. Currently pinned specs: - python=3.13 (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. 安装补充依赖 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -y pyyaml mkl mkl-include setuptools cmake cffi typing_extensions 3 channel Terms of Service accepted Channels: - defaults - conda-forge - nvidia - pytorch-nightly Platform: win-64 Collecting package metadata (repodata.json): done Solving environment: done ## Package Plan ## environment location: C:\Miniconda3 added / updated specs: - cffi - cmake - mkl - mkl-include - pyyaml - setuptools - typing_extensions The following packages will be downloaded: package | build ---------------------------|----------------- cmake-3.26.4 | h693b641_0 12.0 MB defaults pyyaml-6.0.2 | py313h827c3e9_0 198 KB defaults yaml-0.2.5 | he774522_0 62 KB defaults ------------------------------------------------------------ Total: 12.2 MB The following NEW packages will be INSTALLED: cmake pkgs/main/win-64::cmake-3.26.4-h693b641_0 pyyaml pkgs/main/win-64::pyyaml-6.0.2-py313h827c3e9_0 yaml pkgs/main/win-64::yaml-0.2.5-he774522_0 Downloading and Extracting Packages: Preparing transaction: done Verifying transaction: done Executing transaction: done (pytorch_env) PS E:\PyTorch_Build\pytorch> python cuda_test.py ================================================== PyTorch 版本: 2.6.0.dev20241112+cu121 CUDA 可用: True CUDA 版本: 12.1 cuDNN 版本: 90100 E:\PyTorch_Build\pytorch\pytorch_env\lib\site-packages\torch\cuda\__init__.py:235: UserWarning: NVIDIA GeForce RTX 5070 with CUDA capability sm_120 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. If you want to use the NVIDIA GeForce RTX 5070 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ warnings.warn( GPU 型号: NVIDIA GeForce RTX 5070 计算能力: (12, 0) Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\cuda_test.py", line 25, in <module> check_cuda() File "E:\PyTorch_Build\pytorch\cuda_test.py", line 16, in check_cuda a = torch.randn(1000, 1000, device='cuda') RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with TORCH_USE_CUDA_DSA to enable device-side assertions. 
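这一段 conda 报错的根源可以从 "environment location: C:\Miniconda3" 和 "pin on python 3.13" 看出来:虽然提示符显示 (pytorch_env),但它是用 venv 激活的,conda install 实际落在了 base 环境(Python 3.13)里,于是 python=3.10、mkl-service 等约束互相冲突;另外 cudatoolkit=12.1 这个包名在这些通道里并不存在,官方 conda 安装一般用 pytorch-cuda=12.1 搭配 nvidia 通道。一个思路是先建一个干净的 conda 环境再装(环境名仅为示例,通道与版本请以官方说明为准):

```powershell
# 新建独立的 conda 环境,避免把包装进 base(Python 3.13)里
conda create -n torch_cu121 python=3.10 -y
conda activate torch_cu121

# 用 pytorch-cuda 元包替代并不存在的 cudatoolkit=12.1(通道与版本以官方说明为准)
conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
```

不过即便装成功,cu121 的预编译包仍然不含 sm_120 内核,对 RTX 5070 依旧会报 "no kernel image is available",见下一段的说明。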
(pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 创建新的虚拟环境 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -m venv cuda_env (pytorch_env) PS E:\PyTorch_Build\pytorch> .\cuda_env\Scripts\activate (cuda_env) PS E:\PyTorch_Build\pytorch> (cuda_env) PS E:\PyTorch_Build\pytorch> # 安装基础依赖 (cuda_env) PS E:\PyTorch_Build\pytorch> pip install numpy==1.26.4 ninja pyyaml mkl mkl-include setuptools cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting numpy==1.26.4 Downloading https://pypi.tuna.tsinghua.edu.cn/packages/19/77/538f202862b9183f54108557bfda67e17603fc560c384559e769321c9d92/numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.8/15.8 MB 34.6 MB/s eta 0:00:00 Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting pyyaml Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b5/84/0fa4b06f6d6c958d207620fc60005e241ecedceee58931bb20138e1e5776/PyYAML-6.0.2-cp310-cp310-win_amd64.whl (161 kB) Collecting mkl Downloading https://pypi.tuna.tsinghua.edu.cn/packages/91/ae/025174ee141432b974f97ecd2aea529a3bdb547392bde3dd55ce48fe7827/mkl-2025.2.0-py2.py3-none-win_amd64.whl (153.6 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 153.6/153.6 MB 24.2 MB/s eta 0:00:00 Collecting mkl-include Downloading https://pypi.tuna.tsinghua.edu.cn/packages/06/87/3eee37bf95c6b820b6394ad98e50132798514ecda1b2584c71c2c96b973c/mkl_include-2025.2.0-py2.py3-none-win_amd64.whl (1.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 87.9 MB/s eta 0:00:00 Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\cuda_env\lib\site-packages (65.5.0) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) Collecting tbb==2022.* Downloading https://pypi.tuna.tsinghua.edu.cn/packages/4e/d2/01e2a93f9c644585088188840bf453f23ed1a2838ec51d5ba1ada1ebca71/tbb-2022.2.0-py3-none-win_amd64.whl (420 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 420.6/420.6 kB ? eta 0:00:00 Collecting intel-openmp<2026,>=2024 Downloading https://pypi.tuna.tsinghua.edu.cn/packages/89/ed/13fed53fcc7ea17ff84095e89e63418df91d4eeefdc74454243d529bf5a3/intel_openmp-2025.2.1-py2.py3-none-win_amd64.whl (34.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 34.0/34.0 MB 43.5 MB/s eta 0:00:00 Collecting tcmlib==1.* Downloading https://pypi.tuna.tsinghua.edu.cn/packages/91/7b/e30c461a27b97e0090e4db822eeb1d37b310863241f8c3ee56f68df3e76e/tcmlib-1.4.0-py2.py3-none-win_amd64.whl (370 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 370.3/370.3 kB ? 
eta 0:00:00 Collecting intel-cmplr-lib-ur==2025.2.1 Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a8/70/938e81f58886fd4e114d5a5480d98c1396e73e40b7650f566ad0c4395311/intel_cmplr_lib_ur-2025.2.1-py2.py3-none-win_amd64.whl (1.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 72.4 MB/s eta 0:00:00 Collecting umf==0.11.* Downloading https://pypi.tuna.tsinghua.edu.cn/packages/33/a0/c8d755f08f50ddd99cb4a29a7e950ced7a0903cb72253e57059063609103/umf-0.11.0-py2.py3-none-win_amd64.whl (231 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 231.7/231.7 kB ? eta 0:00:00 Installing collected packages: tcmlib, mkl-include, umf, tbb, pyyaml, numpy, ninja, cmake, intel-cmplr-lib-ur, intel-openmp, mkl Successfully installed cmake-4.1.0 intel-cmplr-lib-ur-2025.2.1 intel-openmp-2025.2.1 mkl-2025.2.0 mkl-include-2025.2.0 ninja-1.13.0 numpy-1.26.4 pyyaml-6.0.2 tbb-2022.2.0 tcmlib-1.4.0 umf-0.11.0 [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (cuda_env) PS E:\PyTorch_Build\pytorch> (cuda_env) PS E:\PyTorch_Build\pytorch> # 安装 PyTorch Nightly (cuda_env) PS E:\PyTorch_Build\pytorch> pip install --pre torch torchvision torchaudio >> --index-url https://download.pytorch.org/whl/nightly/cu121 >> --no-deps Looking in indexes: https://download.pytorch.org/whl/nightly/cu121 Collecting torch Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.6.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (2456.2 MB) Collecting torchvision Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.20.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (6.2 MB) Collecting torchaudio Using cached https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.5.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (4.2 MB) Installing collected packages: torchaudio, torchvision, torch Successfully installed torch-2.6.0.dev20241112+cu121 torchaudio-2.5.0.dev20241112+cu121 torchvision-0.20.0.dev20241112+cu121 [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (cuda_env) PS E:\PyTorch_Build\pytorch> (cuda_env) PS E:\PyTorch_Build\pytorch> # 安装补充依赖 (cuda_env) PS E:\PyTorch_Build\pytorch> pip install typing_extensions future six requests dataclasses Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting typing_extensions Using cached https://pypi.tuna.tsinghua.edu.cn/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Collecting future Downloading https://pypi.tuna.tsinghua.edu.cn/packages/da/71/ae30dadffc90b9006d77af76b393cb9dfbfc9629f339fc1574a1c52e6806/future-1.0.0-py3-none-any.whl (491 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 491.3/491.3 kB 1.5 MB/s eta 0:00:00 Collecting six Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl (11 kB) Collecting 
requests Using cached https://pypi.tuna.tsinghua.edu.cn/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl (64 kB) Collecting dataclasses Downloading https://pypi.tuna.tsinghua.edu.cn/packages/26/2f/1095cdc2868052dd1e64520f7c0d5c8c550ad297e944e641dbf1ffbb9a5d/dataclasses-0.6-py3-none-any.whl (14 kB) Collecting charset_normalizer<4,>=2 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/e2/c6/f05db471f81af1fa01839d44ae2a8bfeec8d2a8b4590f16c4e7393afd323/charset_normalizer-3.4.3-cp310-cp310-win_amd64.whl (107 kB) Collecting idna<4,>=2.5 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl (70 kB) Collecting urllib3<3,>=1.21.1 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl (129 kB) Collecting certifi>=2017.4.17 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/e5/48/1549795ba7742c948d2ad169c1c8cdbae65bc450d6cd753d124b17c8cd32/certifi-2025.8.3-py3-none-any.whl (161 kB) Installing collected packages: dataclasses, urllib3, typing_extensions, six, idna, future, charset_normalizer, certifi, requests ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. torch 2.6.0.dev20241112+cu121 requires filelock, which is not installed. torch 2.6.0.dev20241112+cu121 requires fsspec, which is not installed. torch 2.6.0.dev20241112+cu121 requires jinja2, which is not installed. torch 2.6.0.dev20241112+cu121 requires networkx, which is not installed. torch 2.6.0.dev20241112+cu121 requires sympy==1.13.1; python_version >= "3.9", which is not installed. Successfully installed certifi-2025.8.3 charset_normalizer-3.4.3 dataclasses-0.6 future-1.0.0 idna-3.10 requests-2.32.5 six-1.17.0 typing_extensions-4.15.0 urllib3-2.5.0 [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (cuda_env) PS E:\PyTorch_Build\pytorch> (cuda_env) PS E:\PyTorch_Build\pytorch> # 运行验证脚本 (cuda_env) PS E:\PyTorch_Build\pytorch> python cuda_test.py ================================================== PyTorch 版本: 2.6.0.dev20241112+cu121 CUDA 可用: True CUDA 版本: 12.1 cuDNN 版本: 90100 E:\PyTorch_Build\pytorch\cuda_env\lib\site-packages\torch\cuda\__init__.py:235: UserWarning: NVIDIA GeForce RTX 5070 with CUDA capability sm_120 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. 
If you want to use the NVIDIA GeForce RTX 5070 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ warnings.warn( GPU 型号: NVIDIA GeForce RTX 5070 计算能力: (12, 0) Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\cuda_test.py", line 25, in <module> check_cuda() File "E:\PyTorch_Build\pytorch\cuda_test.py", line 16, in check_cuda a = torch.randn(1000, 1000, device='cuda') RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with TORCH_USE_CUDA_DSA to enable device-side assertions. (cuda_env) PS E:\PyTorch_Build\pytorch>
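到这里问题已经收敛成一个:无论正式版还是 2024 年 11 月的 nightly,cu121 轮子都只编译到 sm_90,而 RTX 5070 的计算能力是 sm_120(Blackwell),所以 torch.cuda.is_available() 为 True,但任何核函数都会报 "no kernel image is available for execution on the device"。解决方向只有两个:安装针对更新 CUDA 的官方轮子(假设官方已提供 cu128 索引,具体以 PyTorch 官网 Get Started 页面为准),或者用支持 sm_120 的 CUDA(一般认为需 12.8 及以上)从源码编译并把 12.0 加进 TORCH_CUDA_ARCH_LIST。下面是前一种思路的示例命令:

```powershell
# 先卸载 cu121 轮子,避免新旧混装
pip uninstall -y torch torchvision torchaudio

# 安装面向 CUDA 12.8 的 nightly 轮子(该索引是否提供、具体版本号,请以官网说明为准)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

# 重新运行验证脚本
python cuda_test.py
```

如果坚持源码编译路线,则前文设置 TORCH_CUDA_ARCH_LIST="12.0" 并使用 CUDA 12.8/13.0 工具链的做法才是对症的;退回 CUDA 12.1 反而无法生成 sm_120 内核。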

recommend-type

AI模型评估与应用拓展

# AI模型评估与应用拓展 ## 1. 模型性能评估 ### 1.1 实验假设 我们有三个模型,假设第一个基于周一和周二点击数据训练的模型,在预测周四后期应用下载量方面,不如第二和第三个模型。第二个基于周一到周三点击数据训练的模型,性能也不如第三个基于周一到周四大部分时间点击数据训练的模型。 ### 1.2 特征重要性分析 以下代码用于分析每个模型中重要的特征: ```python exp_lst = ['exp1', 'exp2', 'exp3'] for exp in exp_lst: model_file = os.path.join(sm_output_loc, exp,
recommend-type

elasticsearch serverless

<think> 首先,用户要求了解 Elasticsearch Serverless 的相关信息,包括介绍、使用和优势等。参考的引用内容提供了背景信息,我需要基于这些引用进行回答。引用 [1] 到 [5] 提供了关键点。 关键引用内容: - [^1]: Elasticsearch Serverless 基于 Elasticsearch 的优势,满足现代工作负载的需求,如大数据集、AI 搜索和不可预测的流量。 - [^2]: Elasticsearch Serverless 是云端全托管的服务,基于云原生 Serverless 技术,提供自动弹性和免运维能力,解决资源成本问题,兼容 ELK 生