```python
torch.ops.load_library("/home/pliu3/AI-preprocess/Picasso-onnx-debug/onnx/build/libmesh_decimation.so")
vertexOut, faceOut, isDegenerate, repOut, mapOut, nvOut, mfOut = torch.ops.mesh_decimate.simplify(
    vertexIn, faceIn, geometryIn, nvIn_cumsum, mfIn_cumsum, nv2Remove)
```

My PyTorch model contains this custom operator. I want to convert the model to ONNX, so I registered a symbolic function for the operator:

```python
def simplify_symbolic(g, vertexIn, faceIn, geometryIn, nvIn_cumsum, mfIn_cumsum, nv2Remove):
    # Symbolic function: g.op adds a node to the graph given the op name,
    # input tensors, and attributes.
    # return g.op("ai.onnx.contrib::Simplify", vertexIn, faceIn, geometryIn, nvIn_cumsum, mfIn_cumsum, nv2Remove, outputs=7)
    vertexOut_type = g.op("Constant", value_t=torch.tensor([], dtype=torch.float32)).type().with_sizes([None, 3])
    faceOut_type = g.op("Constant", value_t=torch.tensor([], dtype=torch.int32)).type().with_sizes([None, 3])
    int_vector_type = g.op("Constant", value_t=torch.tensor([], dtype=torch.int32)).type().with_sizes([None])
    scalar_type = g.op("Constant", value_t=torch.tensor([], dtype=torch.int32)).type().with_sizes([1])

    node = g.op("ai.onnx.contrib::Simplify", vertexIn, faceIn, geometryIn,
                nvIn_cumsum, mfIn_cumsum, nv2Remove, outputs=7)
    node[0].setType(vertexOut_type)   # vertexOut: [dynamic vertex count, 3]
    node[1].setType(faceOut_type)     # faceOut: [dynamic face count, 3]
    node[2].setType(int_vector_type)  # isDegenerate: [dynamic length]
    node[3].setType(int_vector_type)  # repOut: [dynamic length]
    node[4].setType(int_vector_type)  # mapOut: [dynamic length]
    node[5].setType(scalar_type)      # nvOut: [1]
    node[6].setType(scalar_type)      # mfOut: [1]
    # return node
    return node[0], node[1], node[2], node[3], node[4], node[5], node[6]

torch.onnx.register_custom_op_symbolic("mesh_decimate::simplify", simplify_symbolic, 13)
```

Is this wrong? Why does the export fail with this error?

```
Export failed: ONNX export failed on an operator with unrecognized namespace 'mesh_decimate::simplify'. If you are trying to export a custom operator, make sure you registered it with the right domain and version.
```

Date: 2025-08-02 12:18:39 · AIGC · Views: 20
To register an ONNX symbolic function for the custom PyTorch operator `mesh_decimate::simplify` and resolve the export failure, follow these steps:

### Solution steps

1. **Define the symbolic function**

   Create a `symbolic` static method that declares the operator's domain, name, and outputs:

   ```python
   import torch

   class SimplifyFunction(torch.autograd.Function):
       @staticmethod
       def forward(ctx, vertices, faces, target_face_count):
           # actual mesh-simplification logic
           simplified_vertices = ...
           simplified_faces = ...
           return simplified_vertices, simplified_faces

       @staticmethod
       def symbolic(g, vertices, faces, target_face_count):
           # key point: declare the custom domain and operator name
           return g.op(
               "mesh_decimate::simplify",  # full domain::op_name
               vertices,
               faces,
               target_face_count,
               outputs=2  # declare the two outputs
           )

   # operator entry point
   simplify = SimplifyFunction.apply
   ```

2. **Integrate the operator into the model**

   ```python
   class MeshDecimationModel(torch.nn.Module):
       def forward(self, vertices, faces, target_count):
           return simplify(vertices, faces, target_count)
   ```

3. **ONNX export configuration**

   The export call must explicitly declare the custom operator domain:

   ```python
   model = MeshDecimationModel()
   dummy_vertices = torch.rand(100, 3)            # 100 vertices
   dummy_faces = torch.randint(0, 100, (200, 3))  # 200 triangular faces
   target_count = torch.tensor([50])              # target face count

   torch.onnx.export(
       model,
       (dummy_vertices, dummy_faces, target_count),
       "mesh_simplify.onnx",
       input_names=["vertices", "faces", "target_face_count"],
       output_names=["simplified_vertices", "simplified_faces"],
       opset_version=15,
       custom_opsets={"mesh_decimate": 1}  # declare the custom domain's version
   )
   ```

### Key points

1. **Domain consistency**: the domain used in `symbolic` (`mesh_decimate::simplify`) must match the key in `custom_opsets` **exactly**; matching is case-sensitive.

2. **Input/output types**: ONNX requires all inputs and outputs to be tensors, so wrap scalars in `torch.tensor`; use `outputs=2` to declare the two outputs (vertices and faces) explicitly.

3. **Operator implementation**: avoid untraceable operations such as `.item()` in `forward`, and keep the operator logic entirely in PyTorch tensor operations.

### Verification

1. Check that the model loads in ONNX Runtime. Note that the session must be given an implementation of `mesh_decimate::simplify` (e.g. a custom-ops library registered through `SessionOptions`) before the model can load:

   ```python
   import onnxruntime as ort

   so = ort.SessionOptions()
   # so.register_custom_ops_library(<path to the library implementing the op>)
   sess = ort.InferenceSession("mesh_simplify.onnx", so)
   outputs = sess.run(None, {
       "vertices": dummy_vertices.numpy(),
       "faces": dummy_faces.numpy(),
       "target_face_count": target_count.numpy()
   })
   ```

2. Compare the ONNX Runtime outputs element-wise against the PyTorch outputs (golden comparison).

> If a `Couldn't find operator` error persists, check that:
> 1. the class defining `symbolic` is imported (and thus registered) before export;
> 2. every input passed to the operator is a tensor;
> 3. the operator domain contains no typos.


-I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include/TH -I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.4/include -I/home/autodriver/miniconda3/envs/openpcdet/include/python3.8 -c -c /home/autodriver/wzk2/OpenPCDet/pcdet/ops/pointnet2/pointnet2_stack/src/sampling.cpp -o /home/autodriver/wzk2/OpenPCDet/build/temp.linux-aarch64-cpython-38/pcdet/ops/pointnet2/pointnet2_stack/src/sampling.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=pointnet2_stack_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14 /home/autodriver/wzk2/OpenPCDet/pcdet/ops/pointnet2/pointnet2_stack/src/sampling.cpp:4:10: fatal error: THC/THC.h: 没有那个文件或目录 4 | #include <THC/THC.h> | ^~~~~~~~~~~ compilation terminated. 
[6/6] /usr/local/cuda-11.4/bin/nvcc -I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include -I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include/TH -I/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.4/include -I/home/autodriver/miniconda3/envs/openpcdet/include/python3.8 -c -c /home/autodriver/wzk2/OpenPCDet/pcdet/ops/pointnet2/pointnet2_stack/src/voxel_query_gpu.cu -o /home/autodriver/wzk2/OpenPCDet/build/temp.linux-aarch64-cpython-38/pcdet/ops/pointnet2/pointnet2_stack/src/voxel_query_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=pointnet2_stack_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_87,code=compute_87 -gencode=arch=compute_87,code=sm_87 -std=c++14 ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1740, in _run_ninja_build subprocess.run( File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "setup.py", line 34, in <module> setup( File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/__init__.py", line 117, in setup return distutils.core.setup(**attrs) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 183, in setup return run_commands(dist) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 199, in run_commands dist.run_commands() File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands self.run_command(cmd) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/dist.py", line 999, in run_command super().run_command(command) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 973, in run_command cmd_obj.run() File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/command/develop.py", line 35, in run self.install_for_development() File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/command/develop.py", line 112, in install_for_development self.run_command('build_ext') File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command self.distribution.run_command(command) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/dist.py", line 999, in run_command super().run_command(command) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 973, in run_command cmd_obj.run() File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 98, in 
run _build_ext.run(self) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run self.build_extensions() File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 741, in build_extensions build_ext.build_extensions(self) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 476, in build_extensions self._build_extensions_serial() File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 502, in _build_extensions_serial self.build_extension(ext) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 263, in build_extension _build_ext.build_extension(self, ext) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/Cython/Distutils/build_ext.py", line 136, in build_extension super().build_extension(ext) File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 557, in build_extension objects = self.compiler.compile( File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 562, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1419, in _write_ninja_file_and_compile_objects _run_ninja_build( File "/home/autodriver/miniconda3/envs/openpcdet/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1756, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension
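This `THC/THC.h` failure is typical when building OpenPCDet extensions against PyTorch 1.11 or newer, where the legacy THC headers were removed. A common workaround is to swap the THC include for its ATen equivalent and port any remaining THC calls. A minimal sketch of that rewrite is below; the demo file name is a stand-in for the real sources, so point it at the actual `pcdet/ops` tree in practice:

```python
# Demo only (paths illustrative): PyTorch >= 1.11 removed THC/THC.h; the usual
# fix is to swap the include for ATen's CUDA context header.
from pathlib import Path

def port_thc_include(path: Path) -> bool:
    """Rewrite '#include <THC/THC.h>' to the ATen equivalent in one file.
    Returns True if the file was changed."""
    src = path.read_text()
    fixed = src.replace("#include <THC/THC.h>",
                        "#include <ATen/cuda/CUDAContext.h>")
    if fixed != src:
        path.write_text(fixed)
        return True
    return False

# Throwaway demo file standing in for pcdet/ops/.../sampling.cpp:
demo = Path("sampling_demo.cpp")
demo.write_text("#include <THC/THC.h>\nint main() { return 0; }\n")
port_thc_include(demo)  # rewrites the include in place
```

On the real tree, apply the same rewrite to every file ninja flagged (`voxel_query.cpp`, `group_points.cpp`, `sampling.cpp`), then port any remaining `THCudaCheck` / `THCState` usages to their ATen counterparts (e.g. `at::cuda::getCurrentCUDAStream()`); alternatively, stay on a PyTorch release older than 1.11 that still ships THC.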

/usr/bin/python3.10 /home/a/rknn-toolkit2/rknn-toolkit2/examples/onnx/yolov5/minist_onnx_to_rknn.py
--> Config model
Config done
--> Loading model
W __init__: rknn-toolkit2 version: 1.6.0+81f21f4d
W load_onnx: It is recommended onnx opset 19, but your onnx model opset is 13!
W load_onnx: Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
Loading : 100%|██████████| 11/11 [00:00<00:00, 30076.50it/s]
E load_onnx: The input shape ['batch_size', 1, 28, 28] of 'input' is not support! Please set the 'inputs' / 'input_size_list' parameters of 'rknn.load_onnx', or set the 'dyanmic_input' parameter of 'rknn.config' to fix the input shape!
E load_onnx: Catch exception when loading onnx model: /home/a/rknn-toolkit2/rknn-toolkit2/examples/onnx/yolov5/model_Mnist.onnx!
E load_onnx: Traceback (most recent call last):
E load_onnx:   File "rknn/api/rknn_base.py", line 1540, in rknn.api.rknn_base.RKNNBase.load_onnx
E load_onnx:   File "rknn/api/rknn_base.py", line 769, in rknn.api.rknn_base.RKNNBase._create_ir_and_inputs_meta
E load_onnx:   File "rknn/api/rknn_log.py", line 92, in rknn.api.rknn_log.RKNNLog.e
E load_onnx: ValueError: The input shape ['batch_size', 1, 28, 28] of 'input' is not support!
Traceback (most recent call last):
  File "/home/a/rknn-toolkit2/rknn-toolkit2/examples/onnx/yolov5/minist_onnx_to_rknn.py", line 49, in <module>
    convert_to_rknn()
  File "/home/a/rknn-toolkit2/rknn-toolkit2/examples/onnx/yolov5/minist_onnx_to_rknn.py", line 26, in convert_to_rknn
    raise RuntimeError(f"Load ONNX failed! Error code: {ret}")
RuntimeError: Load ONNX failed! Error code: -1

No sympy found
Traceback (most recent call last):
  File "/home/wuwei/anaconda3/envs/yolov8/bin/yolo", line 5, in <module>
    from ultralytics.cfg import entrypoint
  ... (ultralytics and torch import frames elided) ...
  File "/home/wuwei/anaconda3/envs/yolov8/lib/python3.8/site-packages/torch/_guards.py", line 78, in <module>
    class ShapeGuard(NamedTuple):
  File "/home/wuwei/anaconda3/envs/yolov8/lib/python3.8/site-packages/torch/_guards.py", line 79, in ShapeGuard
    expr: sympy.Expr
NameError: name 'sympy' is not defined
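This traceback is just a missing dependency: torch 2.x annotates `ShapeGuard` with `sympy.Expr` at import time, so `import torch` fails when `sympy` is absent from the environment (hence the earlier "No sympy found" line). Installing it into the same environment with `pip install sympy` resolves the import chain; a small check like the following (illustrative only) confirms whether the module is resolvable before importing torch:

```python
# Check whether `sympy` is importable without triggering the torch import;
# if it is missing, `import torch` under torch 2.x raises the NameError above.
import importlib.util

def sympy_available() -> bool:
    """True if `import sympy` would succeed in this environment."""
    return importlib.util.find_spec("sympy") is not None

print("sympy available:", sympy_available())
```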

My error is a shape mismatch: some operation receives inputs of incompatible shapes. I am already using ONNX as the intermediate format, but the ONNX-to-RKNN conversion still fails:

Traceback (most recent call last):
  File "/home/liuxin/PycharmProjects/PythonProject/onnx2rknn.py", line 33, in <module>
    ret = rknn.build(do_quantization=False)
  File "/home/liuxin/miniconda3/envs/py38/lib/python3.8/site-packages/rknn/api/rknn.py", line 198, in build
    return self.rknn_base.build(do_quantization=do_quantization, dataset=dataset, expand_batch_size=rknn_batch_size, auto_hybrid=auto_hybrid)
  ... (rknn graph_optimizer and onnxruntime session frames elided) ...
  File "/home/liuxin/miniconda3/envs/py38/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 482, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (sequential/transformer_encoder/positional_embedding/add) Op (Add) [ShapeInferenceError] Incompatible dimensions
