
```
/home/ts/workspace/sharp/.repo/repo/main.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Traceback (most recent call last):
  File "/home/ts/workspace/sharp/.repo/repo/main.py", line 37, in <module>
    from git_config import init_ssh, close_ssh
  File "/home/ts/workspace/sharp/.repo/repo/git_config.py", line 17, in <module>
    import cPickle
ModuleNotFoundError: No module named 'cPickle'
```

Posted: 2023-08-23 21:26:05 · Views: 346
The error says that the `cPickle` module cannot be found when `git_config.py` imports it. In Python 3 the `cPickle` module was merged into `pickle`, so `cPickle` no longer exists as a separate module. To fix it, try the following:

1. Check which Python version is running the script. If it is Python 3, change `cPickle` to `pickle`.
2. If the code has to stay compatible with both Python 2 and Python 3, import pickle like this:

   ```python
   try:
       import cPickle as pickle
   except ImportError:
       import pickle
   ```

   This uses `cPickle` on Python 2 and falls back to `pickle` on Python 3.
3. If the code only ever needs to run on Python 3, simply replace `import cPickle` with `import pickle` (Python 3 already uses the C implementation under the hood).

Hopefully these steps resolve the problem.
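As a quick sanity check that the replacement behaves the same, here is a minimal sketch (independent of the repo tool itself) showing the compatibility import and a pickle round-trip that works under both interpreters:

```python
try:
    import cPickle as pickle  # Python 2: the faster C implementation
except ImportError:
    import pickle  # Python 3: cPickle was merged into pickle; the C accelerator is used automatically

# Round-tripping data works the same way on both versions.
blob = pickle.dumps({"ssh_ready": True})
print(pickle.loads(blob))
```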
Related questions

```
src/task2_test.py:22: FutureWarning: The pandas.datetime class is deprecated and will be removed from pandas in a future version. Import from datetime instead.
  data['weekdays'] = pd.to_datetime(data['time']).apply(pd.datetime.weekday) + 1
/data/workspace/myshixun/src/task2.py:12: FutureWarning: The pandas.datetime class is deprecated and will be removed from pandas in a future version. Import from datetime instead.
  new_data['weekdays'] = pd.to_datetime(new_data['time']).apply(pd.datetime.weekday) + 1 #时间转化
/usr/local/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:1402: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
  x[:, None]
/usr/local/lib/python3.6/site-packages/matplotlib/axes/_base.py:276: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
  x = x[:, np.newaxis]
/usr/local/lib/python3.6/site-packages/matplotlib/axes/_base.py:278: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
  y = y[:, np.newaxis]
购买意愿与星期之间的关系图完成!
购买意愿与日期之间的关系图完成!
```

This is not really an error, just the output of a script. It contains several FutureWarnings, which are raised because the code uses APIs that are scheduled for removal (here, `pandas.datetime` and multi-dimensional indexing of pandas objects inside matplotlib). The warnings do not stop the script from running, but it is worth updating the code before the deprecated behaviour is actually removed. The output also reports that two plots were generated, purchase intention vs. day of the week and purchase intention vs. date, presumably drawn with matplotlib or a similar visualization library.
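To silence the `pandas.datetime` deprecation specifically, the weekday can be taken from the `.dt` accessor instead. A minimal sketch, reusing the column names from the warnings above with a toy `data` frame standing in for the real one:

```python
import pandas as pd

# Toy frame; only the 'time' column matters for this example.
data = pd.DataFrame({"time": ["2023-08-21", "2023-08-22", "2023-08-23"]})

# Deprecated: pd.to_datetime(data['time']).apply(pd.datetime.weekday) + 1
# Preferred:  the .dt accessor; .dt.weekday is 0 for Monday, so +1 keeps the 1-7 convention
data["weekdays"] = pd.to_datetime(data["time"]).dt.weekday + 1
print(data)
```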

```
root@5e85c311d50f:/workspace/my-workspace/NVFlare# pip install tenseal==0.3.15 -i https://pypi.org/simple
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/dill-0.3.9-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/opt_einsum-3.4.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/lightning_utilities-0.14.3-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/lightning_thunder-0.2.2.dev0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/nvfuser-0.2.27a0+5111d3b-py3.12-linux-aarch64.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/looseversion-1.3.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
ERROR: Could not find a version that satisfies the requirement tenseal==0.3.15 (from versions: none)
ERROR: No matching distribution found for tenseal==0.3.15
```

### Complete solution for `pip install tenseal==0.3.15` failing to find a matching version

#### 1. Root-cause analysis
1. **Version removed**: TenSEAL 0.3.15 has been pulled from the main PyPI index, which is common for older releases[^2]
2. **Platform compatibility**: this release only ships wheels for specific Python versions (typically Python 3.7-3.9)[^1]
3. **Dependency conflicts**: it requires a matching PyTorch version (typically 1.10.x)[^3]
4. **Mirror limitations**: domestic mirrors may not have synced historical versions[^4]

#### 2. Step-by-step solutions

**Method 1: use the official wheel archive (recommended)**

TenSEAL (https://github.com/OpenMined/TenSEAL) maintains its own wheel archive, which still hosts historical releases. Because TenSEAL has to be compiled, this is the first thing to try:

```bash
# Pull the wheel from TenSEAL's archive of historical versions
pip install tenseal==0.3.15 \
    --extra-index-url https://download.openmined.org/tenseal-wheels/ \
    --trusted-host download.openmined.org
```
> Verify with `python -c "import tenseal; print(tenseal.__version__)"`, which should print `0.3.15`

**Method 2: pin a compatible Python environment**
```bash
# Create a Python 3.8 virtual environment (best compatibility)
python -m venv tenseal_env
source tenseal_env/bin/activate   # Linux/macOS
# tenseal_env\Scripts\activate    # Windows

# Install the matching PyTorch first
pip install torch==1.10.0 -f https://download.pytorch.org/whl/cpu/torch_stable.html

# Then install tenseal
pip install tenseal==0.3.15 --extra-index-url https://download.openmined.org/tenseal-wheels/
```

**Method 3: download the wheel manually**
1. Open the [TenSEAL wheel archive](https://download.openmined.org/tenseal-wheels/)
2. Pick the file that matches your system:
   - Linux: `tenseal-0.3.15-cp38-cp38-manylinux2014_x86_64.whl`
   - Windows: `tenseal-0.3.15-cp38-cp38-win_amd64.whl`
3. Install it locally:
```bash
pip install tenseal-0.3.15-cp38-cp38-*.whl
```

**Method 4: build from source (when no prebuilt wheel exists)**
```bash
# Install the build dependencies
sudo apt install libomp-dev cmake   # Ubuntu
brew install libomp cmake           # macOS

# Compile and install
pip install tenseal==0.3.15 \
    --no-binary=:all: \
    --extra-index-url https://download.openmined.org/tenseal-wheels/
```

#### 3. Verify the installation
```python
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, 8192, coeff_mod_bit_sizes=[60, 40, 40, 60])
print("TenSEAL version:", ts.__version__)  # should print 0.3.15
```

#### 4. Common errors and fixes
| Error | Fix |
|-------|-----|
| `ERROR: Failed building wheel` | Install the C++ build tools: `sudo apt install build-essential` or VS Build Tools |
| `ImportError: libomp.so` | Install the OpenMP runtime: `sudo apt install libomp5` |
| Certificate errors | Add the trust flag: `--trusted-host download.openmined.org` |
| Network timeouts | Use a proxy: `--proxy http://user:pass@proxy_ip:port` |

> **System requirements**:
> - **Linux**: gcc ≥ 9.0, OpenMP 5.0+
> - **Windows**: Visual Studio 2019+
> - **macOS**: Xcode ≥ 12.0

---

### Permanent configuration
Add the archive to your pip configuration (`~/.pip/pip.conf`):
```ini
[global]
extra-index-url = https://download.openmined.org/tenseal-wheels/
trusted-host = download.openmined.org
timeout = 60
retries = 5
```

Related recommendations

bash: docker: command not found [root@ac6b15bb1f77 mas]# python -m vllm.entrypoints.openai.api_server \ > --model /models/z50051264/summary/Qwen2.5-7B-awq/ \ > --max-num-seqs=256 \ > --max-model-len=4096 \ > --max-num-batched-tokens=4096 \ > --tensor-parallel-size=1 \ > --block-size=128 \ > --host=0.0.0.0 \ > --port=8080 \ > --gpu-memory-utilization=0.9 \ > --trust-remote-code \ > --served-model-name=zzz \ > --quantization awq INFO 07-25 03:09:14 [__init__.py:39] Available plugins for group vllm.platform_plugins: INFO 07-25 03:09:14 [__init__.py:41] - ascend -> vllm_ascend:register INFO 07-25 03:09:14 [__init__.py:44] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. INFO 07-25 03:09:14 [__init__.py:235] Platform plugin ascend is activated WARNING 07-25 03:09:15 [_custom_ops.py:20] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'") INFO 07-25 03:09:18 [importing.py:63] Triton not installed or not compatible; certain GPU-related functions will not be available. WARNING 07-25 03:09:19 [registry.py:413] Model architecture DeepSeekMTPModel is already registered, and will be overwritten by the new model class vllm_ascend.models.deepseek_mtp:CustomDeepSeekMTP. WARNING 07-25 03:09:19 [registry.py:413] Model architecture Qwen2VLForConditionalGeneration is already registered, and will be overwritten by the new model class vllm_ascend.models.qwen2_vl:AscendQwen2VLForConditionalGeneration. WARNING 07-25 03:09:19 [registry.py:413] Model architecture Qwen2_5_VLForConditionalGeneration is already registered, and will be overwritten by the new model class vllm_ascend.models.qwen2_5_vl:AscendQwen2_5_VLForConditionalGeneration. WARNING 07-25 03:09:19 [registry.py:413] Model architecture DeepseekV2ForCausalLM is already registered, and will be overwritten by the new model class vllm_ascend.models.deepseek_v2:CustomDeepseekV2ForCausalLM. WARNING 07-25 03:09:19 [registry.py:413] Model architecture DeepseekV3ForCausalLM is already registered, and will be overwritten by the new model class vllm_ascend.models.deepseek_v2:CustomDeepseekV3ForCausalLM. WARNING 07-25 03:09:19 [registry.py:413] Model architecture Qwen3MoeForCausalLM is already registered, and will be overwritten by the new model class vllm_ascend.models.qwen3_moe:CustomQwen3MoeForCausalLM. INFO 07-25 03:09:20 [api_server.py:1395] vLLM API server version 0.9.2 INFO 07-25 03:09:20 [cli_args.py:325] non-default args: {'host': '0.0.0.0', 'port': 8080, 'model': '/models/z50051264/summary/Qwen2.5-7B-awq/', 'trust_remote_code': True, 'max_model_len': 4096, 'quantization': 'awq', 'served_model_name': ['zzz'], 'block_size': 128, 'max_num_batched_tokens': 4096, 'max_num_seqs': 256} INFO 07-25 03:09:34 [config.py:841] This model supports multiple tasks: {'generate', 'classify', 'embed', 'reward'}. Defaulting to 'generate'. INFO 07-25 03:09:34 [config.py:1472] Using max model len 4096 WARNING 07-25 03:09:35 [config.py:960] ascend quantization is not fully optimized yet. The speed can be slower than non-quantized models. INFO 07-25 03:09:35 [config.py:2285] Chunked prefill is enabled with max_num_batched_tokens=4096. INFO 07-25 03:09:35 [platform.py:174] PIECEWISE compilation enabled on NPU. 
use_inductor not supported - using only ACL Graph mode INFO 07-25 03:09:35 [utils.py:321] Calculated maximum supported batch sizes for ACL graph: 66 INFO 07-25 03:09:35 [utils.py:336] Adjusted ACL graph batch sizes for Qwen2ForCausalLM model (layers: 28): 67 → 66 sizes INFO 07-25 03:09:45 [__init__.py:39] Available plugins for group vllm.platform_plugins: INFO 07-25 03:09:45 [__init__.py:41] - ascend -> vllm_ascend:register INFO 07-25 03:09:45 [__init__.py:44] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. INFO 07-25 03:09:45 [__init__.py:235] Platform plugin ascend is activated WARNING 07-25 03:09:46 [_custom_ops.py:20] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'") INFO 07-25 03:09:50 [importing.py:63] Triton not installed or not compatible; certain GPU-related functions will not be available. INFO 07-25 03:09:50 [core.py:526] Waiting for init message from front-end. WARNING 07-25 03:09:50 [registry.py:413] Model architecture DeepSeekMTPModel is already registered, and will be overwritten by the new model class vllm_ascend.models.deepseek_mtp:CustomDeepSeekMTP. WARNING 07-25 03:09:50 [registry.py:413] Model architecture Qwen2VLForConditionalGeneration is already registered, and will be overwritten by the new model class vllm_ascend.models.qwen2_vl:AscendQwen2VLForConditionalGeneration. WARNING 07-25 03:09:50 [registry.py:413] Model architecture Qwen2_5_VLForConditionalGeneration is already registered, and will be overwritten by the new model class vllm_ascend.models.qwen2_5_vl:AscendQwen2_5_VLForConditionalGeneration. WARNING 07-25 03:09:50 [registry.py:413] Model architecture DeepseekV2ForCausalLM is already registered, and will be overwritten by the new model class vllm_ascend.models.deepseek_v2:CustomDeepseekV2ForCausalLM. WARNING 07-25 03:09:50 [registry.py:413] Model architecture DeepseekV3ForCausalLM is already registered, and will be overwritten by the new model class vllm_ascend.models.deepseek_v2:CustomDeepseekV3ForCausalLM. WARNING 07-25 03:09:50 [registry.py:413] Model architecture Qwen3MoeForCausalLM is already registered, and will be overwritten by the new model class vllm_ascend.models.qwen3_moe:CustomQwen3MoeForCausalLM. 
INFO 07-25 03:09:50 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='/models/z50051264/summary/Qwen2.5-7B-awq/', speculative_config=None, tokenizer='/models/z50051264/summary/Qwen2.5-7B-awq/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=True, quantization=ascend, enforce_eager=False, kv_cache_dtype=auto, device_config=npu, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=zzz, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":["all"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.unified_ascend_attention_with_output"],"use_inductor":false,"compile_sizes":[],"inductor_compile_config":{},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null} ERROR 07-25 03:09:53 [core.py:586] EngineCore failed to start. 
ERROR 07-25 03:09:53 [core.py:586] Traceback (most recent call last): ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 577, in run_engine_core ERROR 07-25 03:09:53 [core.py:586] engine_core = EngineCoreProc(*args, **kwargs) ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 404, in __init__ ERROR 07-25 03:09:53 [core.py:586] super().__init__(vllm_config, executor_class, log_stats, ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 75, in __init__ ERROR 07-25 03:09:53 [core.py:586] self.model_executor = executor_class(vllm_config) ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/executor/executor_base.py", line 53, in __init__ ERROR 07-25 03:09:53 [core.py:586] self._init_executor() ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 47, in _init_executor ERROR 07-25 03:09:53 [core.py:586] self.collective_rpc("init_device") ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 57, in collective_rpc ERROR 07-25 03:09:53 [core.py:586] answer = run_method(self.driver_worker, method, args, kwargs) ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/utils/__init__.py", line 2736, in run_method ERROR 07-25 03:09:53 [core.py:586] return func(*args, **kwargs) ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm/vllm/worker/worker_base.py", line 606, in init_device ERROR 07-25 03:09:53 [core.py:586] self.worker.init_device() # type: ignore ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 132, in init_device ERROR 07-25 03:09:53 [core.py:586] NPUPlatform.set_device(device) ERROR 07-25 03:09:53 [core.py:586] File "/vllm-workspace/vllm-ascend/vllm_ascend/platform.py", line 98, in set_device ERROR 07-25 03:09:53 [core.py:586] torch.npu.set_device(device) ERROR 07-25 03:09:53 [core.py:586] File "/usr/local/python3.10.17/lib/python3.10/site-packages/torch_npu/npu/utils.py", line 80, in set_device ERROR 07-25 03:09:53 [core.py:586] torch_npu._C._npu_setDevice(device_id) ERROR 07-25 03:09:53 [core.py:586] RuntimeError: SetPrecisionMode:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:156 NPU function error: at_npu::native::AclSetCompileopt(aclCompileOpt::ACL_PRECISION_MODE, precision_mode), error code is 500001 ERROR 07-25 03:09:53 [core.py:586] [ERROR] 2025-07-25-03:09:53 (PID:977, Device:0, RankID:-1) ERR00100 PTA call acl api failed ERROR 07-25 03:09:53 [core.py:586] [Error]: The internal ACL of the system is incorrect. ERROR 07-25 03:09:53 [core.py:586] Rectify the fault based on the error information in the ascend log. ERROR 07-25 03:09:53 [core.py:586] EC0010: [PID: 977] 2025-07-25-03:09:53.177.260 Failed to import Python module [AttributeError: np.float_ was removed in the NumPy 2.0 release. Use np.float64 instead..]. ERROR 07-25 03:09:53 [core.py:586] Solution: Check that all required components are properly installed and the specified Python path matches the Python installation directory. (If the path does not match the directory, run set_env.sh in the installation package.) ERROR 07-25 03:09:53 [core.py:586] TraceBack (most recent call last): ERROR 07-25 03:09:53 [core.py:586] AOE Failed to call InitCannKB[FUNC:Initialize][FILE:python_adapter_manager.cc][LINE:47] ERROR 07-25 03:09:53 [core.py:586] Failed to initialize TeConfigInfo. 
ERROR 07-25 03:09:53 [core.py:586] [GraphOpt][InitializeInner][InitTbeFunc] Failed to init tbe.[FUNC:InitializeTeFusion][FILE:tbe_op_store_adapter.cc][LINE:1889] ERROR 07-25 03:09:53 [core.py:586] [GraphOpt][InitializeInner][InitTeFusion]: Failed to initialize TeFusion.[FUNC:InitializeInner][FILE:tbe_op_store_adapter.cc][LINE:1856] ERROR 07-25 03:09:53 [core.py:586] [SubGraphOpt][PreCompileOp][InitAdapter] InitializeAdapter adapter [tbe_op_adapter] failed! Ret [4294967295][FUNC:InitializeAdapter][FILE:op_store_adapter_manager.cc][LINE:79] ERROR 07-25 03:09:53 [core.py:586] [SubGraphOpt][PreCompileOp][Init] Initialize op store adapter failed, OpsStoreName[tbe-custom].[FUNC:Initialize][FILE:op_store_adapter_manager.cc][LINE:120] ERROR 07-25 03:09:53 [core.py:586] [FusionMngr][Init] Op store adapter manager init failed.[FUNC:Initialize][FILE:fusion_manager.cc][LINE:115] ERROR 07-25 03:09:53 [core.py:586] PluginManager InvokeAll failed.[FUNC:Initialize][FILE:ops_kernel_manager.cc][LINE:83] ERROR 07-25 03:09:53 [core.py:586] OpsManager initialize failed.[FUNC:InnerInitialize][FILE:gelib.cc][LINE:259] ERROR 07-25 03:09:53 [core.py:586] GELib::InnerInitialize failed.[FUNC:Initialize][FILE:gelib.cc][LINE:184] ERROR 07-25 03:09:53 [core.py:586] GEInitialize failed.[FUNC:GEInitialize][FILE:ge_api.cc][LINE:371] ERROR 07-25 03:09:53 [core.py:586] [Initialize][Ge]GEInitialize failed. ge result = 4294967295[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] ERROR 07-25 03:09:53 [core.py:586] [Init][Compiler]Init compiler failed[FUNC:ReportInnerError][FILE:log_inner.cpp][LINE:145] ERROR 07-25 03:09:53 [core.py:586] [Set][Options]OpCompileProcessor init failed![FUNC:ReportInnerError][FILE:log_inner.cpp][LINE:145] ERROR 07-25 03:09:53 [core.py:586] Process EngineCore_0: Traceback (most recent call last): File "/usr/local/python3.10.17/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/local/python3.10.17/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 590, in run_engine_core raise e File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 577, in run_engine_core engine_core = EngineCoreProc(*args, **kwargs) File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 404, in __init__ super().__init__(vllm_config, executor_class, log_stats, File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 75, in __init__ self.model_executor = executor_class(vllm_config) File "/vllm-workspace/vllm/vllm/executor/executor_base.py", line 53, in __init__ self._init_executor() File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 47, in _init_executor self.collective_rpc("init_device") File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 57, in collective_rpc answer = run_method(self.driver_worker, method, args, kwargs) File "/vllm-workspace/vllm/vllm/utils/__init__.py", line 2736, in run_method return func(*args, **kwargs) File "/vllm-workspace/vllm/vllm/worker/worker_base.py", line 606, in init_device self.worker.init_device() # type: ignore File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 132, in init_device NPUPlatform.set_device(device) File "/vllm-workspace/vllm-ascend/vllm_ascend/platform.py", line 98, in set_device torch.npu.set_device(device) File "/usr/local/python3.10.17/lib/python3.10/site-packages/torch_npu/npu/utils.py", line 80, in set_device torch_npu._C._npu_setDevice(device_id) RuntimeError: 
SetPrecisionMode:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:156 NPU function error: at_npu::native::AclSetCompileopt(aclCompileOpt::ACL_PRECISION_MODE, precision_mode), error code is 500001 [ERROR] 2025-07-25-03:09:53 (PID:977, Device:0, RankID:-1) ERR00100 PTA call acl api failed [Error]: The internal ACL of the system is incorrect. Rectify the fault based on the error information in the ascend log. EC0010: [PID: 977] 2025-07-25-03:09:53.177.260 Failed to import Python module [AttributeError: np.float_ was removed in the NumPy 2.0 release. Use np.float64 instead..]. Solution: Check that all required components are properly installed and the specified Python path matches the Python installation directory. (If the path does not match the directory, run set_env.sh in the installation package.) TraceBack (most recent call last): AOE Failed to call InitCannKB[FUNC:Initialize][FILE:python_adapter_manager.cc][LINE:47] Failed to initialize TeConfigInfo. [GraphOpt][InitializeInner][InitTbeFunc] Failed to init tbe.[FUNC:InitializeTeFusion][FILE:tbe_op_store_adapter.cc][LINE:1889] [GraphOpt][InitializeInner][InitTeFusion]: Failed to initialize TeFusion.[FUNC:InitializeInner][FILE:tbe_op_store_adapter.cc][LINE:1856] [SubGraphOpt][PreCompileOp][InitAdapter] InitializeAdapter adapter [tbe_op_adapter] failed! Ret [4294967295][FUNC:InitializeAdapter][FILE:op_store_adapter_manager.cc][LINE:79] [SubGraphOpt][PreCompileOp][Init] Initialize op store adapter failed, OpsStoreName[tbe-custom].[FUNC:Initialize][FILE:op_store_adapter_manager.cc][LINE:120] [FusionMngr][Init] Op store adapter manager init failed.[FUNC:Initialize][FILE:fusion_manager.cc][LINE:115] PluginManager InvokeAll failed.[FUNC:Initialize][FILE:ops_kernel_manager.cc][LINE:83] OpsManager initialize failed.[FUNC:InnerInitialize][FILE:gelib.cc][LINE:259] GELib::InnerInitialize failed.[FUNC:Initialize][FILE:gelib.cc][LINE:184] GEInitialize failed.[FUNC:GEInitialize][FILE:ge_api.cc][LINE:371] [Initialize][Ge]GEInitialize failed. 
ge result = 4294967295[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] [Init][Compiler]Init compiler failed[FUNC:ReportInnerError][FILE:log_inner.cpp][LINE:145] [Set][Options]OpCompileProcessor init failed![FUNC:ReportInnerError][FILE:log_inner.cpp][LINE:145] Traceback (most recent call last): File "/usr/local/python3.10.17/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/local/python3.10.17/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 1495, in <module> uvloop.run(run_server(args)) File "/usr/local/python3.10.17/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run return loop.run_until_complete(wrapper()) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete File "/usr/local/python3.10.17/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper return await main File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 1431, in run_server await run_server_worker(listen_address, sock, args, **uvicorn_kwargs) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 1451, in run_server_worker async with build_async_engine_client(args, client_config) as engine_client: File "/usr/local/python3.10.17/lib/python3.10/contextlib.py", line 199, in __aenter__ return await anext(self.gen) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 158, in build_async_engine_client async with build_async_engine_client_from_engine_args( File "/usr/local/python3.10.17/lib/python3.10/contextlib.py", line 199, in __aenter__ return await anext(self.gen) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 194, in build_async_engine_client_from_engine_args async_llm = AsyncLLM.from_vllm_config( File "/vllm-workspace/vllm/vllm/v1/engine/async_llm.py", line 162, in from_vllm_config return cls( File "/vllm-workspace/vllm/vllm/v1/engine/async_llm.py", line 124, in __init__ self.engine_core = EngineCoreClient.make_async_mp_client( File "/vllm-workspace/vllm/vllm/v1/engine/core_client.py", line 96, in make_async_mp_client return AsyncMPClient(*client_args) File "/vllm-workspace/vllm/vllm/v1/engine/core_client.py", line 666, in __init__ super().__init__( File "/vllm-workspace/vllm/vllm/v1/engine/core_client.py", line 403, in __init__ with launch_core_engines(vllm_config, executor_class, File "/usr/local/python3.10.17/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/vllm-workspace/vllm/vllm/v1/engine/utils.py", line 434, in launch_core_engines wait_for_engine_startup( File "/vllm-workspace/vllm/vllm/v1/engine/utils.py", line 484, in wait_for_engine_startup raise RuntimeError("Engine core initialization failed. " RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {} [ERROR] 2025-07-25-03:09:59 (PID:707, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception 分析报错
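No answer is attached to this excerpt, but the log itself names the root cause: the Ascend CANN/TBE initialization fails to import a Python helper because it still references `np.float_`, which NumPy 2.0 removed. A minimal check under that assumption (the `pip` pin in the comment is the usual workaround, not something taken from the log):

```python
# "Failed to import Python module [AttributeError: np.float_ was removed in the
#  NumPy 2.0 release...]" means the NumPy in this image is too new for CANN's TBE helpers.
import numpy as np

print(np.__version__)
# If this prints 2.x, pin NumPy below 2.0 in the same environment, e.g.
#   pip install "numpy<2"
# (a common workaround, assuming the CANN/torch_npu stack itself cannot be upgraded),
# then restart the vLLM server.
```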

(VScode) zhaohy@workspace2:~/VSCode-main$ python train_test_eval.py --Training True --Testing True --Evaluation True /home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning) /home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning) /home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning) W0311 15:54:21.788752 136232691705664 torch/multiprocessing/spawn.py:145] Terminating process 2342884 via signal SIGTERM Traceback (most recent call last): File "train_test_eval.py", line 67, in <module> Training.train_net(num_gpus=num_gpus, args=args) File "/home/zhaohy/VSCode-main/Training.py", line 64, in train_net mp.spawn(main, nprocs=num_gpus, args=(num_gpus, args)) File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 281, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method="spawn") File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 237, in start_processes while not context.join(): File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in join raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException: -- Process 1 terminated with the following error: Traceback (most recent call last): File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 75, in _wrap fn(i, *args) File "/home/zhaohy/VSCode-main/Training.py", line 71, in main dist.init_process_group(backend='nccl', init_method='env://') # 自动从环境变量读取 RANK/WORLD_SIZE File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper return func(*args, **kwargs) File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/distributed/c10d_logger.py", line 89, in wrapper func_return = func(*args, **kwargs) File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1305, in init_process_group store, rank, world_size = next(rendezvous_iterator) File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 234, in _env_rendezvous_handler rank = int(_get_env_or_raise("RANK")) File "/home/zhaohy/anaconda3/envs/VScode/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 219, in _get_env_or_raise raise _env_error(env_var) ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set
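The `ValueError` is raised because `dist.init_process_group(..., init_method='env://')` expects `RANK`, `WORLD_SIZE`, `MASTER_ADDR` and `MASTER_PORT` in the environment, which `mp.spawn` does not set by itself (unlike `torchrun`). A minimal sketch of one way to provide them inside the spawned worker, with hypothetical values for the master address, port, and GPU count:

```python
import os

import torch.distributed as dist
import torch.multiprocessing as mp


def main(local_rank: int, num_gpus: int, args=None):
    # torchrun would export these for us; with mp.spawn we set them ourselves.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # hypothetical single-node address
    os.environ.setdefault("MASTER_PORT", "29500")      # hypothetical free port
    os.environ["RANK"] = str(local_rank)               # single node: global rank == local rank
    os.environ["WORLD_SIZE"] = str(num_gpus)
    os.environ["LOCAL_RANK"] = str(local_rank)

    dist.init_process_group(backend="nccl", init_method="env://")
    # ... training code ...
    dist.destroy_process_group()


if __name__ == "__main__":
    num_gpus = 2  # hypothetical GPU count
    mp.spawn(main, nprocs=num_gpus, args=(num_gpus, None))
```

Alternatively, pass `rank=local_rank, world_size=num_gpus` directly to `init_process_group`, or launch the script with `torchrun --nproc_per_node=<N>` so the variables are exported automatically.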

root@b82c95bfc43f:/workspace/data-zj/test_code_v1# . train.sh /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( W0529 02:52:35.084000 938 torch/distributed/run.py:792] W0529 02:52:35.084000 938 torch/distributed/run.py:792] ***************************************** W0529 02:52:35.084000 938 torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. W0529 02:52:35.084000 938 torch/distributed/run.py:792] ***************************************** [W529 02:52:39.311003207 socket.cpp:204] [c10d] The hostname of the client socket cannot be retrieved. err=-3 [W529 02:52:43.313869930 socket.cpp:204] [c10d] The hostname of the client socket cannot be retrieved. err=-3 /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. 
Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. warnings.warn( /usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using python3 -m pip install --upgrade 'optree>=0.13.0'. 
warnings.warn( Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as 
flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File 
"/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask 
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/train.py", line 17, in <module> from axolotl.cli.config import load_cfg File "/usr/local/lib/python3.10/dist-packages/axolotl/cli/config.py", line 19, in <module> from axolotl.utils.config import ( File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/config/__init__.py", line 16, in <module> from axolotl.utils.models import MULTIMODAL_AUTO_MODEL_MAPPING, load_model_config File "/usr/local/lib/python3.10/dist-packages/axolotl/utils/models.py", line 17, in <module> import transformers.modeling_utils File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 62, in <module> from .integrations.flash_attention import flash_attention_forward File 
"/usr/local/lib/python3.10/dist-packages/transformers/integrations/flash_attention.py", line 5, in <module> from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_flash_attention_utils.py", line 36, in <module> from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa File "/usr/local/lib/python3.10/dist-packages/flash_attn/__init__.py", line 3, in <module> from flash_attn.flash_attn_interface import ( File "/usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py", line 10, in <module> import flash_attn_2_cuda as flash_attn_cuda ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE W0529 02:52:48.206000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1003 closing signal SIGTERM W0529 02:52:48.207000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1004 closing signal SIGTERM W0529 02:52:48.207000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1005 closing signal SIGTERM W0529 02:52:48.207000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1006 closing signal SIGTERM W0529 02:52:48.207000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1007 closing signal SIGTERM W0529 02:52:48.207000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1009 closing signal SIGTERM W0529 02:52:48.208000 938 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1010 closing signal SIGTERM E0529 02:52:48.385000 938 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 5 (pid: 1008) of binary: /usr/bin/python Traceback (most recent call last): File "/usr/local/bin/torchrun", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 918, in main run(args) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 909, in run elastic_launch( File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ axolotl.cli.train FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2025-05-29_02:52:48 host : b82c95bfc43f rank : 5 (local_rank: 5) exitcode : 1 (pid: 1008) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================

INFO 07-25 07:11:43 [model_runner_v1.py:1745] Starting to load model /models/z50051264/summary/Qwen2.5-7B-nf4/... ERROR 07-25 07:11:44 [core.py:586] EngineCore failed to start. ERROR 07-25 07:11:44 [core.py:586] Traceback (most recent call last): ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 577, in run_engine_core ERROR 07-25 07:11:44 [core.py:586] engine_core = EngineCoreProc(*args, **kwargs) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 404, in __init__ ERROR 07-25 07:11:44 [core.py:586] super().__init__(vllm_config, executor_class, log_stats, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 75, in __init__ ERROR 07-25 07:11:44 [core.py:586] self.model_executor = executor_class(vllm_config) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/executor/executor_base.py", line 53, in __init__ ERROR 07-25 07:11:44 [core.py:586] self._init_executor() ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 48, in _init_executor ERROR 07-25 07:11:44 [core.py:586] self.collective_rpc("load_model") ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 57, in collective_rpc ERROR 07-25 07:11:44 [core.py:586] answer = run_method(self.driver_worker, method, args, kwargs) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/utils/__init__.py", line 2736, in run_method ERROR 07-25 07:11:44 [core.py:586] return func(*args, **kwargs) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 240, in load_model ERROR 07-25 07:11:44 [core.py:586] self.model_runner.load_model() ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/model_runner_v1.py", line 1748, in load_model ERROR 07-25 07:11:44 [core.py:586] self.model = get_model(vllm_config=self.vllm_config) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/model_loader/__init__.py", line 59, in get_model ERROR 07-25 07:11:44 [core.py:586] return loader.load_model(vllm_config=vllm_config, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/model_loader/base_loader.py", line 38, in load_model ERROR 07-25 07:11:44 [core.py:586] model = initialize_model(vllm_config=vllm_config, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/model_loader/utils.py", line 64, in initialize_model ERROR 07-25 07:11:44 [core.py:586] return model_class(vllm_config=vllm_config, prefix=prefix) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 448, in __init__ ERROR 07-25 07:11:44 [core.py:586] self.model = Qwen2Model(vllm_config=vllm_config, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/compilation/decorators.py", line 152, in __init__ ERROR 07-25 07:11:44 [core.py:586] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 317, in __init__ ERROR 07-25 07:11:44 [core.py:586] self.start_layer, self.end_layer, self.layers = make_layers( ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/utils.py", line 639, in make_layers ERROR 07-25 07:11:44 [core.py:586] [PPMissingLayer() for _ in range(start_layer)] + [ ERROR 07-25 
07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/utils.py", line 640, in ERROR 07-25 07:11:44 [core.py:586] maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}")) ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 319, in <lambda> ERROR 07-25 07:11:44 [core.py:586] lambda prefix: decoder_layer_type(config=config, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 216, in __init__ ERROR 07-25 07:11:44 [core.py:586] self.self_attn = Qwen2Attention( ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 137, in __init__ ERROR 07-25 07:11:44 [core.py:586] self.qkv_proj = QKVParallelLinear( ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/layers/linear.py", line 874, in __init__ ERROR 07-25 07:11:44 [core.py:586] super().__init__(input_size=input_size, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/layers/linear.py", line 420, in __init__ ERROR 07-25 07:11:44 [core.py:586] super().__init__(input_size, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm/vllm/model_executor/layers/linear.py", line 266, in __init__ ERROR 07-25 07:11:44 [core.py:586] self.quant_method = quant_config.get_quant_method(self, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm-ascend/vllm_ascend/quantization/quant_config.py", line 92, in get_quant_method ERROR 07-25 07:11:44 [core.py:586] if self.is_layer_skipped_ascend(prefix, ERROR 07-25 07:11:44 [core.py:586] File "/vllm-workspace/vllm-ascend/vllm_ascend/quantization/quant_config.py", line 126, in is_layer_skipped_ascend ERROR 07-25 07:11:44 [core.py:586] is_shard_skipped = self.quant_description[shard_prefix + ERROR 07-25 07:11:44 [core.py:586] KeyError: 'model.layers.0.self_attn.q_proj.weight' Process EngineCore_0: Traceback (most recent call last): File "/usr/local/python3.10.17/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/local/python3.10.17/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 590, in run_engine_core raise e File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 577, in run_engine_core engine_core = EngineCoreProc(*args, **kwargs) File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 404, in __init__ super().__init__(vllm_config, executor_class, log_stats, File "/vllm-workspace/vllm/vllm/v1/engine/core.py", line 75, in __init__ self.model_executor = executor_class(vllm_config) File "/vllm-workspace/vllm/vllm/executor/executor_base.py", line 53, in __init__ self._init_executor() File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 48, in _init_executor self.collective_rpc("load_model") File "/vllm-workspace/vllm/vllm/executor/uniproc_executor.py", line 57, in collective_rpc answer = run_method(self.driver_worker, method, args, kwargs) File "/vllm-workspace/vllm/vllm/utils/__init__.py", line 2736, in run_method return func(*args, **kwargs) File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 240, in load_model self.model_runner.load_model() File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/model_runner_v1.py", line 1748, in load_model self.model = get_model(vllm_config=self.vllm_config) File "/vllm-workspace/vllm/vllm/model_executor/model_loader/__init__.py", line 59, in get_model 
return loader.load_model(vllm_config=vllm_config, File "/vllm-workspace/vllm/vllm/model_executor/model_loader/base_loader.py", line 38, in load_model model = initialize_model(vllm_config=vllm_config, File "/vllm-workspace/vllm/vllm/model_executor/model_loader/utils.py", line 64, in initialize_model return model_class(vllm_config=vllm_config, prefix=prefix) File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 448, in __init__ self.model = Qwen2Model(vllm_config=vllm_config, File "/vllm-workspace/vllm/vllm/compilation/decorators.py", line 152, in __init__ old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 317, in __init__ self.start_layer, self.end_layer, self.layers = make_layers( File "/vllm-workspace/vllm/vllm/model_executor/models/utils.py", line 639, in make_layers [PPMissingLayer() for _ in range(start_layer)] + [ File "/vllm-workspace/vllm/vllm/model_executor/models/utils.py", line 640, in maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}")) File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 319, in <lambda> lambda prefix: decoder_layer_type(config=config, File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 216, in __init__ self.self_attn = Qwen2Attention( File "/vllm-workspace/vllm/vllm/model_executor/models/qwen2.py", line 137, in __init__ self.qkv_proj = QKVParallelLinear( File "/vllm-workspace/vllm/vllm/model_executor/layers/linear.py", line 874, in __init__ super().__init__(input_size=input_size, File "/vllm-workspace/vllm/vllm/model_executor/layers/linear.py", line 420, in __init__ super().__init__(input_size, File "/vllm-workspace/vllm/vllm/model_executor/layers/linear.py", line 266, in __init__ self.quant_method = quant_config.get_quant_method(self, File "/vllm-workspace/vllm-ascend/vllm_ascend/quantization/quant_config.py", line 92, in get_quant_method if self.is_layer_skipped_ascend(prefix, File "/vllm-workspace/vllm-ascend/vllm_ascend/quantization/quant_config.py", line 126, in is_layer_skipped_ascend is_shard_skipped = self.quant_description[shard_prefix + KeyError: 'model.layers.0.self_attn.q_proj.weight' Traceback (most recent call last): File "/usr/local/python3.10.17/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/local/python3.10.17/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 1495, in <module> uvloop.run(run_server(args)) File "/usr/local/python3.10.17/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run return loop.run_until_complete(wrapper()) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete File "/usr/local/python3.10.17/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper return await main File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 1431, in run_server await run_server_worker(listen_address, sock, args, **uvicorn_kwargs) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 1451, in run_server_worker async with build_async_engine_client(args, client_config) as engine_client: File "/usr/local/python3.10.17/lib/python3.10/contextlib.py", line 199, in __aenter__ return await anext(self.gen) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 158, in build_async_engine_client async with build_async_engine_client_from_engine_args( File 
"/usr/local/python3.10.17/lib/python3.10/contextlib.py", line 199, in __aenter__ return await anext(self.gen) File "/vllm-workspace/vllm/vllm/entrypoints/openai/api_server.py", line 194, in build_async_engine_client_from_engine_args async_llm = AsyncLLM.from_vllm_config( File "/vllm-workspace/vllm/vllm/v1/engine/async_llm.py", line 162, in from_vllm_config return cls( File "/vllm-workspace/vllm/vllm/v1/engine/async_llm.py", line 124, in __init__ self.engine_core = EngineCoreClient.make_async_mp_client( File "/vllm-workspace/vllm/vllm/v1/engine/core_client.py", line 96, in make_async_mp_client return AsyncMPClient(*client_args) File "/vllm-workspace/vllm/vllm/v1/engine/core_client.py", line 666, in __init__ super().__init__( File "/vllm-workspace/vllm/vllm/v1/engine/core_client.py", line 403, in __init__ with launch_core_engines(vllm_config, executor_class, File "/usr/local/python3.10.17/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/vllm-workspace/vllm/vllm/v1/engine/utils.py", line 434, in launch_core_engines wait_for_engine_startup( File "/vllm-workspace/vllm/vllm/v1/engine/utils.py", line 484, in wait_for_engine_startup raise RuntimeError("Engine core initialization failed. " RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {} [ERROR] 2025-07-25-07:11:52 (PID:1889, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception 这是怎么回事儿???我启动没量化的版本是正常的,但是启动量化模型出现上述错误

Traceback (most recent call last): File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/train/train.py", line 1317, in <module> main(args) File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/train/train.py", line 1169, in main model_pred = transformer( ^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/train/src/transformer_flux.py", line 643, in forward encoder_hidden_states, hidden_states, cond_hidden_states = block( ^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/train/src/transformer_flux.py", line 179, in forward attention_outputs = self.attn( ^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 588, in forward return self.processor( ^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/train/src/layers.py", line 243, in __call__ query = apply_rotary_emb(query, image_rotary_emb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/workspace/volume3/ps/practice/zx/ESIG/EasyControl-main/.SuccessEasy/lib/python3.11/site-packages/diffusers/models/embeddings.py", line 1208, in apply_rotary_emb out = (x.float() * cos + x_rotated.float() * sin).to(x.dtype) ~~~~~~~~~~^~~~~ RuntimeError: The size of tensor a (876) must match the size of tensor b (768) at non-singleton dimension 2

(foundationpose) root@localhost:/mnt/e/wsl/foundationpose-main# python run_demo.py Traceback (most recent call last): File "/mnt/e/wsl/foundationpose-main/run_demo.py", line 10, in <module> from estimater import * File "/mnt/e/wsl/foundationpose-main/estimater.py", line 10, in <module> from Utils import * File "/mnt/e/wsl/foundationpose-main/Utils.py", line 11, in <module> from pytorch3d.transforms import so3_log_map,so3_exp_map,se3_exp_map,se3_log_map,matrix_to_axis_angle,matrix_to_euler_angles,euler_angles_to_matrix, rotation_6d_to_matrix ModuleNotFoundError: No module named 'pytorch3d' (foundationpose) root@localhost:/mnt/e/wsl/foundationpose-main# python run_demo.py Traceback (most recent call last): File "/mnt/e/wsl/foundationpose-main/run_demo.py", line 10, in <module> from estimater import * File "/mnt/e/wsl/foundationpose-main/estimater.py", line 10, in <module> from Utils import * File "/mnt/e/wsl/foundationpose-main/Utils.py", line 11, in <module> from pytorch3d.transforms import so3_log_map,so3_exp_map,se3_exp_map,se3_log_map,matrix_to_axis_angle,matrix_to_euler_angles,euler_angles_to_matrix, rotation_6d_to_matrix ModuleNotFoundError: No module named 'pytorch3d' (foundationpose) root@localhost:/mnt/e/wsl/foundationpose-main# python run_demo.py /root/anaconda3/envs/foundationpose/lib/python3.9/site-packages/torch/utils/cpp_extension.py:25: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. from pkg_resources import packaging # type: ignore[attr-defined] Warp 1.0.2 initialized: CUDA Toolkit 11.5, Driver 12.6 Devices: "cpu" : "x86_64" "cuda:0" : "NVIDIA GeForce RTX 2070 SUPER" (8 GiB, sm_75, mempool enabled) Kernel cache: /root/.cache/warp/1.0.2 Traceback (most recent call last): File "/mnt/e/wsl/foundationpose-main/run_demo.py", line 29, in <module> mesh = trimesh.load(args.mesh_file) File "/root/anaconda3/envs/foundationpose/lib/python3.9/site-packages/trimesh/exchange/load.py", line 114, in load ) = _parse_file_args(file_obj=file_obj, file_type=file_type, resolver=resolver) File "/root/anaconda3/envs/foundationpose/lib/python3.9/site-packages/trimesh/exchange/load.py", line 608, in _parse_file_args raise ValueError(f"string is not a file: {file_obj}") ValueError: string is not a file: /mnt/e/wsl/foundationpose-main/demo_data/mustard0/mesh/textured_simple.obj
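Two separate problems appear in this log. The first two runs fail because `pytorch3d` is not installed in the `foundationpose` environment; it generally has to be built for the exact torch/CUDA combination in use, so installing a matching wheel or building it from source is required. The third run gets past the imports, but `trimesh.load` then reports `string is not a file`, which simply means the demo mesh does not exist at that path, usually because the demo data archive has not been downloaded and extracted yet. A quick check (path copied from the log):

```
import os

mesh_file = "/mnt/e/wsl/foundationpose-main/demo_data/mustard0/mesh/textured_simple.obj"
# False here means the demo data is missing or was extracted to a different location.
print(os.path.isfile(mesh_file))
```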

D:\software\develop\java\jdk-1.8\jdk-1.8\bin\java.exe "-javaagent:D:\software\develop\ideal\IntelliJ IDEA 2022.1.3\lib\idea_rt.jar=58407:D:\software\develop\ideal\IntelliJ IDEA 2022.1.3\bin" -Dfile.encoding=UTF-8 -classpath D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\charsets.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\deploy.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\access-bridge-64.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\cldrdata.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\dnsns.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\jaccess.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\jfxrt.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\localedata.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\nashorn.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\sunec.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\sunjce_provider.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\sunmscapi.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\sunpkcs11.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\ext\zipfs.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\javaws.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\jce.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\jfr.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\jfxswt.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\jsse.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\management-agent.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\plugin.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\resources.jar;D:\software\develop\java\jdk-1.8\jdk-1.8\jre\lib\rt.jar;D:\workspace\Maven\titanic_demo\target\scala-2.12\classes;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\clearspring\analytics\stream\2.9.6\stream-2.9.6.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\esotericsoftware\kryo-shaded\4.0.2\kryo-shaded-4.0.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\esotericsoftware\minlog\1.3.0\minlog-1.3.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\core\jackson-annotations\2.12.3\jackson-annotations-2.12.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\core\jackson-core\2.12.3\jackson-core-2.12.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\core\jackson-databind\2.12.3\jackson-databind-2.12.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\module\jackson-module-scala_2.12\2.12.3\jackson-module-scala_2.12-2.12.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\github\luben\zstd-jni\1.5.0-4\zstd-jni-1.5.0-4.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\google\code\findbugs\jsr305\3.0.2\jsr305-3.0.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\google\code\gson\gson\2.8.6\gson-2.8.6.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\google\crypto\tink\tink\1.6.0\tink-1.6.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\google\flatbuffers\flatbuffers-java\1.9.0\flatbuffers-java-1.9.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\google\guava\guava\16.0.1\guava-16.0.1.j
ar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\google\protobuf\protobuf-java\3.14.0\protobuf-java-3.14.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\ning\compress-lzf\1.0.3\compress-lzf-1.0.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\thoughtworks\paranamer\paranamer\2.8\paranamer-2.8.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twitter\chill-java\0.10.0\chill-java-0.10.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twitter\chill_2.12\0.10.0\chill_2.12-0.10.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\univocity\univocity-parsers\2.9.1\univocity-parsers-2.9.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-codec\commons-codec\1.15\commons-codec-1.15.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-collections\commons-collections\3.2.2\commons-collections-3.2.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-io\commons-io\2.8.0\commons-io-2.8.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-net\commons-net\3.1\commons-net-3.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\airlift\aircompressor\0.21\aircompressor-0.21.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\dropwizard\metrics\metrics-core\4.2.0\metrics-core-4.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\dropwizard\metrics\metrics-graphite\4.2.0\metrics-graphite-4.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\dropwizard\metrics\metrics-jmx\4.2.0\metrics-jmx-4.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\dropwizard\metrics\metrics-json\4.2.0\metrics-json-4.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\dropwizard\metrics\metrics-jvm\4.2.0\metrics-jvm-4.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-all\4.1.68.Final\netty-all-4.1.68.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-buffer\4.1.50.Final\netty-buffer-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-codec\4.1.50.Final\netty-codec-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-common\4.1.50.Final\netty-common-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-handler\4.1.50.Final\netty-handler-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-resolver\4.1.50.Final\netty-resolver-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-transport-native-epoll\4.1.50.Final\netty-transport-native-epoll-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-transport
-native-unix-common\4.1.50.Final\netty-transport-native-unix-common-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-transport\4.1.50.Final\netty-transport-4.1.50.Final.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\jakarta\annotation\jakarta.annotation-api\1.3.5\jakarta.annotation-api-1.3.5.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\jakarta\servlet\jakarta.servlet-api\4.0.3\jakarta.servlet-api-4.0.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\jakarta\validation\jakarta.validation-api\2.0.2\jakarta.validation-api-2.0.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\jakarta\ws\rs\jakarta.ws.rs-api\2.1.6\jakarta.ws.rs-api-2.1.6.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\javax\activation\activation\1.1.1\activation-1.1.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\javax\annotation\javax.annotation-api\1.3.2\javax.annotation-api-1.3.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\javax\xml\bind\jaxb-api\2.2.11\jaxb-api-2.2.11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\log4j\log4j\1.2.17\log4j-1.2.17.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\net\razorvine\pyrolite\4.30\pyrolite-4.30.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\net\sf\py4j\py4j\0.10.9.2\py4j-0.10.9.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\antlr\antlr4-runtime\4.8\antlr4-runtime-4.8.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\arrow\arrow-format\2.0.0\arrow-format-2.0.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\arrow\arrow-memory-core\2.0.0\arrow-memory-core-2.0.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\arrow\arrow-memory-netty\2.0.0\arrow-memory-netty-2.0.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\arrow\arrow-vector\2.0.0\arrow-vector-2.0.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\avro\avro-ipc\1.10.2\avro-ipc-1.10.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\avro\avro-mapred\1.10.2\avro-mapred-1.10.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\avro\avro\1.10.2\avro-1.10.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\commons\commons-compress\1.20\commons-compress-1.20.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\commons\commons-crypto\1.1.0\commons-crypto-1.1.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\commons\commons-lang3\3.12.0\commons-lang3-3.12.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\commons\commons-math3\3.4.1\commons-math3-3.4.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\commons\commons-text\1.6\commons-text-1.6.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\curator\curator-client\2.13.0\curator-client-2.13.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\
v1\https\repo1.maven.org\maven2\org\apache\curator\curator-framework\2.13.0\curator-framework-2.13.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\curator\curator-recipes\2.13.0\curator-recipes-2.13.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\hadoop\hadoop-client-api\3.3.1\hadoop-client-api-3.3.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\hadoop\hadoop-client-runtime\3.3.1\hadoop-client-runtime-3.3.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\hive\hive-storage-api\2.7.2\hive-storage-api-2.7.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\htrace\htrace-core4\4.1.0-incubating\htrace-core4-4.1.0-incubating.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\ivy\ivy\2.5.0\ivy-2.5.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\orc\orc-core\1.6.11\orc-core-1.6.11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\orc\orc-mapreduce\1.6.11\orc-mapreduce-1.6.11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\orc\orc-shims\1.6.11\orc-shims-1.6.11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\parquet\parquet-column\1.12.1\parquet-column-1.12.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\parquet\parquet-common\1.12.1\parquet-common-1.12.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\parquet\parquet-encoding\1.12.1\parquet-encoding-1.12.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\parquet\parquet-format-structures\1.12.1\parquet-format-structures-1.12.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\parquet\parquet-hadoop\1.12.1\parquet-hadoop-1.12.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\parquet\parquet-jackson\1.12.1\parquet-jackson-1.12.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-catalyst_2.12\3.2.0\spark-catalyst_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-core_2.12\3.2.0\spark-core_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-kvstore_2.12\3.2.0\spark-kvstore_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-launcher_2.12\3.2.0\spark-launcher_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-network-common_2.12\3.2.0\spark-network-common_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-network-shuffle_2.12\3.2.0\spark-network-shuffle_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-sketch_2.12\3.2.0\spark-sketch_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-sql_2.12\3.2.0\spark-sql_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-tags_2.12\3.2.0\sp
ark-tags_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\spark\spark-unsafe_2.12\3.2.0\spark-unsafe_2.12-3.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\xbean\xbean-asm9-shaded\4.20\xbean-asm9-shaded-4.20.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\yetus\audience-annotations\0.12.0\audience-annotations-0.12.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\zookeeper\zookeeper-jute\3.6.2\zookeeper-jute-3.6.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\zookeeper\zookeeper\3.6.2\zookeeper-3.6.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\codehaus\janino\commons-compiler\3.0.16\commons-compiler-3.0.16.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\codehaus\janino\janino\3.0.16\janino-3.0.16.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\fusesource\leveldbjni\leveldbjni-all\1.8\leveldbjni-all-1.8.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\hk2\external\aopalliance-repackaged\2.6.1\aopalliance-repackaged-2.6.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\hk2\external\jakarta.inject\2.6.1\jakarta.inject-2.6.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\hk2\hk2-api\2.6.1\hk2-api-2.6.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\hk2\hk2-locator\2.6.1\hk2-locator-2.6.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\hk2\hk2-utils\2.6.1\hk2-utils-2.6.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\hk2\osgi-resource-locator\1.0.3\osgi-resource-locator-1.0.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\jersey\containers\jersey-container-servlet-core\2.34\jersey-container-servlet-core-2.34.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\jersey\containers\jersey-container-servlet\2.34\jersey-container-servlet-2.34.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\jersey\core\jersey-client\2.34\jersey-client-2.34.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\jersey\core\jersey-common\2.34\jersey-common-2.34.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\jersey\core\jersey-server\2.34\jersey-server-2.34.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\glassfish\jersey\inject\jersey-hk2\2.34\jersey-hk2-2.34.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\javassist\javassist\3.25.0-GA\javassist-3.25.0-GA.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\jetbrains\annotations\17.0.0\annotations-17.0.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\json4s\json4s-ast_2.12\3.7.0-M11\json4s-ast_2.12-3.7.0-M11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\json4s\json4s-core_2.12\3.7.0-M11\json4s-core_2.12-3.7.0-M11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\rep
o1.maven.org\maven2\org\json4s\json4s-jackson_2.12\3.7.0-M11\json4s-jackson_2.12-3.7.0-M11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\json4s\json4s-scalap_2.12\3.7.0-M11\json4s-scalap_2.12-3.7.0-M11.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\lz4\lz4-java\1.7.1\lz4-java-1.7.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\objenesis\objenesis\2.5.1\objenesis-2.5.1.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\roaringbitmap\RoaringBitmap\0.9.0\RoaringBitmap-0.9.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\roaringbitmap\shims\0.9.0\shims-0.9.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\rocksdb\rocksdbjni\6.20.3\rocksdbjni-6.20.3.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\scala-lang\modules\scala-parser-combinators_2.12\1.1.2\scala-parser-combinators_2.12-1.1.2.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\scala-lang\modules\scala-xml_2.12\1.2.0\scala-xml_2.12-1.2.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\scala-lang\scala-library\2.12.15\scala-library-2.12.15.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\scala-lang\scala-reflect\2.12.15\scala-reflect-2.12.15.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\slf4j\jcl-over-slf4j\1.7.30\jcl-over-slf4j-1.7.30.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\slf4j\jul-to-slf4j\1.7.30\jul-to-slf4j-1.7.30.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\slf4j\slf4j-api\1.7.30\slf4j-api-1.7.30.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\slf4j\slf4j-log4j12\1.7.30\slf4j-log4j12-1.7.30.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\spark-project\spark\unused\1.0.0\unused-1.0.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\threeten\threeten-extra\1.5.0\threeten-extra-1.5.0.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\tukaani\xz\1.8\xz-1.8.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\xerial\snappy\snappy-java\1.1.8.4\snappy-java-1.1.8.4.jar;C:\Users\21279\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\oro\oro\2.0.8\oro-2.0.8.jar Titanic Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 25/06/19 11:13:36 INFO SparkContext: Running Spark version 3.2.0 25/06/19 11:13:37 INFO ResourceUtils: ============================================================== 25/06/19 11:13:37 INFO ResourceUtils: No custom resources configured for spark.driver. 
25/06/19 11:13:37 INFO ResourceUtils: ============================================================== 25/06/19 11:13:37 INFO SparkContext: Submitted application: Titanic 25/06/19 11:13:37 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0) 25/06/19 11:13:37 INFO ResourceProfile: Limiting resource is cpu 25/06/19 11:13:37 INFO ResourceProfileManager: Added ResourceProfile id: 0 25/06/19 11:13:37 INFO SecurityManager: Changing view acls to: 21279 25/06/19 11:13:37 INFO SecurityManager: Changing modify acls to: 21279 25/06/19 11:13:37 INFO SecurityManager: Changing view acls groups to: 25/06/19 11:13:37 INFO SecurityManager: Changing modify acls groups to: 25/06/19 11:13:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(21279); groups with view permissions: Set(); users with modify permissions: Set(21279); groups with modify permissions: Set() 25/06/19 11:13:38 INFO Utils: Successfully started service 'sparkDriver' on port 58426. 25/06/19 11:13:38 INFO SparkEnv: Registering MapOutputTracker 25/06/19 11:13:38 INFO SparkEnv: Registering BlockManagerMaster 25/06/19 11:13:38 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 25/06/19 11:13:38 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 25/06/19 11:13:38 INFO SparkEnv: Registering BlockManagerMasterHeartbeat 25/06/19 11:13:39 INFO DiskBlockManager: Created local directory at C:\Users\21279\AppData\Local\Temp\blockmgr-d9717778-5c19-4b54-ba8c-bdc23a2bec0f 25/06/19 11:13:39 INFO MemoryStore: MemoryStore started with capacity 1985.4 MiB 25/06/19 11:13:39 INFO SparkEnv: Registering OutputCommitCoordinator 25/06/19 11:13:39 INFO Utils: Successfully started service 'SparkUI' on port 4040. 25/06/19 11:13:39 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://lxf:4040 25/06/19 11:13:39 INFO Executor: Starting executor ID driver on host lxf 25/06/19 11:13:39 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58453. 25/06/19 11:13:39 INFO NettyBlockTransferService: Server created on lxf:58453 25/06/19 11:13:39 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 25/06/19 11:13:39 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, lxf, 58453, None) 25/06/19 11:13:39 INFO BlockManagerMasterEndpoint: Registering block manager lxf:58453 with 1985.4 MiB RAM, BlockManagerId(driver, lxf, 58453, None) 25/06/19 11:13:39 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, lxf, 58453, None) 25/06/19 11:13:39 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, lxf, 58453, None) 25/06/19 11:13:39 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect. 25/06/19 11:13:40 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir. 25/06/19 11:13:40 INFO SharedState: Warehouse path is 'file:/D:/workspace/Maven/titanic_demo/spark-warehouse'. 
25/06/19 11:13:41 WARN FileSystem: Failed to initialize fileystem hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv: java.io.IOException: Incomplete HDFS URI, no host: hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv 25/06/19 11:13:41 WARN FileStreamSink: Assume no metadata directory. Error while looking for metadata directory in the path: hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv. java.io.IOException: Incomplete HDFS URI, no host: hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:183) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469) at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:53) at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370) at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274) at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:188) at Titanic$.main(Titanic.scala:27) at Titanic.main(Titanic.scala) 25/06/19 11:13:41 WARN FileSystem: Failed to initialize fileystem hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv: java.io.IOException: Incomplete HDFS URI, no host: hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv Exception in thread "main" java.io.IOException: Incomplete HDFS URI, no host: hdfs://192.168.42:9000/user/hadoop/titanic/titanic.csv at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:183) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469) at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$1(DataSource.scala:747) at scala.collection.immutable.List.map(List.scala:293) at org.apache.spark.sql.execution.datasources.DataSource$.checkAndGlobPathIfNecessary(DataSource.scala:745) at org.apache.spark.sql.execution.datasources.DataSource.checkAndGlobPathIfNecessary(DataSource.scala:577) at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:408) at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274) at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:188) at Titanic$.main(Titanic.scala:27) at Titanic.main(Titanic.scala) 25/06/19 11:13:41 INFO SparkContext: Invoking stop() from shutdown hook 25/06/19 11:13:41 INFO SparkUI: Stopped Spark web UI at 
http://lxf:4040 25/06/19 11:13:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 25/06/19 11:13:41 INFO MemoryStore: MemoryStore cleared 25/06/19 11:13:41 INFO BlockManager: BlockManager stopped 25/06/19 11:13:41 INFO BlockManagerMaster: BlockManagerMaster stopped 25/06/19 11:13:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 25/06/19 11:13:41 INFO SparkContext: Successfully stopped SparkContext 25/06/19 11:13:41 INFO ShutdownHookManager: Shutdown hook called 25/06/19 11:13:41 INFO ShutdownHookManager: Deleting directory C:\Users\21279\AppData\Local\Temp\spark-f92de141-507f-4f46-a675-bdf38dd1487f Process finished with exit code 1
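The warnings and the final exception all point at the same thing: `Incomplete HDFS URI, no host: hdfs://192.168.42:9000/...`. The address `192.168.42` has only three octets, so Hadoop cannot parse a host out of the URI; the NameNode address used at `Titanic.scala:27` is missing part of the IP (or should be a hostname). Fixing the URI resolves the error. An illustrative PySpark sketch (the original project is Scala; `192.168.42.100` is a placeholder for the real NameNode address):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Titanic").getOrCreate()

# The fix is the URI itself: supply a complete host such as 192.168.42.100 (placeholder).
df = spark.read.option("header", "true").csv(
    "hdfs://192.168.42.100:9000/user/hadoop/titanic/titanic.csv"
)
df.show(5)
```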

+ echo '基于本地mirror最新全代码下载命令: (mkdir vendor;cd vendor;repo init -u ssh://gerrit.realme.odm.scm.adc.com:29418/platform/manifest -b realme/ZR/UMS9230/V -m release_20250310/realme_sprd_v_vendor_release_20250310_dhaka.xml --repo-branch=update --reference=/work/odm_mirror --no-repo-verify;repo sync -fcq -j4 --no-tags --prune --no-repo-verify);(mkdir system;cd system;repo init -u ssh://gerrit.realme.odm.scm.adc.com:29418/platform/manifest -b realme/ZR/UMS9230/V -m release_20250310/realme_sprd_v_system_release_20250310_dhaka.xml --repo-branch=update --reference=/work/odm_mirror --no-repo-verify;repo sync -fcq -j4 --no-tags --prune --no-repo-verify)' + echo '>>>end download code<<<' >>>end download code<<< + fn_make_xml_tag + for XMLTagType in '${AndroidVersion_path}' + '[' system == vendor ']' + '[' system == system ']' + XMLTag_Path=system + echo '<<<---------------------------fn_system make_xml_tag--------------------------------->>>' <<<---------------------------fn_system make_xml_tag--------------------------------->>> + cd /work/jenkins_dailybuild_slave/workspace/UMS9230_15.0_dhaka_New_Release_PatchBuild/system/.repo/manifests + echo 'now we are in:' now we are in: + pwd /work/jenkins_dailybuild_slave/workspace/UMS9230_15.0_dhaka_New_Release_PatchBuild/system/.repo/manifests + repo manifest -r -o userdebug_system.xml ... A new repo command ( 1.27) is available. ... You should upgrade soon: cp /work/jenkins_dailybuild_slave/workspace/UMS9230_15.0_dhaka_New_Release_PatchBuild/system/.repo/repo/repo /bin/repo Saved manifest to userdebug_system.xml + echo 0 0 + [[ 0 != \0 ]] + echo 'The tag xml is .xml' The tag xml is .xml + echo '>>>---------------------------fn_make_xml_tag---------------------------------<<<' >>>---------------------------fn_make_xml_tag---------------------------------<<< + for XMLTagType in '${AndroidVersion_path}' + '[' vendor == vendor ']' + XMLTag_Path=vendor + echo '<<<---------------------------fn_vendor make_xml_tag--------------------

现在报这个错误[INFO|2025-03-14 11:38:36] llamafactory.data.template:143 >> Add <|im_end|> to stop words. Traceback (most recent call last): File "/usr/local/bin/llamafactory-cli", line 8, in <module> sys.exit(main()) File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/cli.py", line 118, in main run_exp() File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/train/tuner.py", line 103, in run_exp _training_function(config={"args": args, "callbacks": callbacks}) File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/train/tuner.py", line 68, in _training_function run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks) File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 51, in run_sft dataset_module = get_dataset(template, model_args, data_args, training_args, stage="sft", **tokenizer_module) File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/data/loader.py", line 297, in get_dataset dataset = _get_merged_dataset(data_args.dataset, model_args, data_args, training_args, stage) File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/data/loader.py", line 171, in _get_merged_dataset for dataset_name, dataset_attr in zip(dataset_names, get_dataset_list(dataset_names, data_args.dataset_dir)): File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/data/parser.py", line 129, in get_dataset_list raise ValueError(f"Undefined dataset {name} in {DATA_CONFIG}.") ValueError: Undefined dataset /mnt/workspace/.cache/modelscope/datasets/liucong/Chinese-DeepSeek-R1-Distill-data-110k/distill_r1_110k in dataset_info.json.
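The message at the top of this log ("now it reports this error") ends in `ValueError: Undefined dataset ... in dataset_info.json`. LLaMA-Factory's `dataset` argument takes a name that is registered as a key in `dataset_info.json`, not a filesystem path, so passing the downloaded ModelScope directory directly fails in `get_dataset_list`. Registering the local file under a name and then passing that name is the usual fix. A sketch of such a registration; the file name and column mapping below are assumptions that must be adjusted to the actual fields in the downloaded data:

```
import json

info_path = "/mnt/workspace/.cache/modelscope/LLaMA-Factory/data/dataset_info.json"
with open(info_path, encoding="utf-8") as f:
    info = json.load(f)

# Hypothetical entry: point "file_name" at the downloaded JSON/JSONL file and map
# its columns; the exact field names depend on the distill_r1_110k file itself.
info["distill_r1_110k"] = {
    "file_name": "/mnt/workspace/.cache/modelscope/datasets/liucong/"
                 "Chinese-DeepSeek-R1-Distill-data-110k/distill_r1_110k.json",
    "columns": {"prompt": "instruction", "query": "input", "response": "output"},
}

with open(info_path, "w", encoding="utf-8") as f:
    json.dump(info, f, ensure_ascii=False, indent=2)
```

After that, pass `--dataset distill_r1_110k` (and set `--dataset_dir` if the file lives outside the default data directory).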
