HarmonyOS Advanced — MindSpore Lite AI Framework Source Code Walkthrough: Model Loading in Depth (Part 2)

Introduction

This article continues from HarmonyOS Advanced — MindSpore Lite AI Framework Source Code Walkthrough: Model Loading in Depth (Part 1).

I. Model::Import: Converting the Model File Buffer into a Model

Converting the model file buffer into a Model is the first step of inference, whether the model runs on the CPU or through NNRT. Model::Import imports a model from a given model data buffer: it delegates to the ImportFromBuffer function, letting the caller construct a Model object from the supplied model data and its size.

Model *Model::Import(const char *model_buf, size_t size) { return ImportFromBuffer(model_buf, size, false); }
Model *Model::Import(const char *filename) { return ImportFromPath(filename); }
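Both overloads ultimately need the raw model bytes in memory. As a minimal sketch of the caller's side (`ReadModelFile` is a hypothetical helper of my own, not part of the MindSpore Lite API), loading a `.ms` file into a buffer before handing it to `Model::Import` might look like:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: read an entire model file into memory.
// Returns an empty vector on failure.
std::vector<char> ReadModelFile(const std::string &path) {
  std::ifstream in(path, std::ios::binary | std::ios::ate);
  if (!in) {
    return {};
  }
  std::streamsize size = in.tellg();
  in.seekg(0, std::ios::beg);
  std::vector<char> buf(static_cast<std::size_t>(size));
  if (size > 0 && !in.read(buf.data(), size)) {
    return {};
  }
  return buf;
}
```

The resulting buffer and size would then be passed on as `Model::Import(buf.data(), buf.size())`.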

1. ImportFromBuffer(const char *model_buf, size_t size, bool take_buf, mindspore::ModelType model_type, const std::string &path)

ImportFromBuffer imports and constructs a MindSpore model from the given model buffer. It first tries to import via a registered model loader; if no suitable loader is found, it falls back to creating a LiteModel directly. In lite::ImportFromBuffer, each model type maps to its own ModelLoader; the loader's ImportModel wraps the incoming buffer into the corresponding model object (here a MindirModel), copies the buffer data over, and sets the matching model type.

#ifdef ENABLE_LITE_HELPER
Model *ImportFromBuffer(const char *model_buf, size_t size, bool take_buf, mindspore::ModelType model_type,
                        const std::string &path, mindspore::infer::helper::InferHelpers *infer_helpers) {
#else
Model *ImportFromBuffer(const char *model_buf, size_t size, bool take_buf, mindspore::ModelType model_type,
                        const std::string &path) {
#endif
  auto model_loader = mindspore::infer::ModelLoaderRegistry::GetInstance()->GetModelLoader(model_type);
  if (model_loader != nullptr) {
    MS_LOG(INFO) << "import model from model loader";
    auto model = model_loader->ImportModel(model_buf, size, true);
    if (model != nullptr) {
      return model;
    }
  }
  MS_LOG(INFO) << "import model from lite model";
  auto *model = new (std::nothrow) LiteModel(path);
#ifdef ENABLE_LITE_HELPER
  auto status = model->ConstructModel(model_buf, size, take_buf, infer_helpers);
#else
  auto status = model->ConstructModel(model_buf, size, take_buf);
#endif
  return model;
}
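The dispatch pattern above — look up a loader registered for the model type, otherwise fall back to the default LiteModel path — can be reduced to a self-contained toy sketch. ToyRegistry, ModelType, and the string return values are illustrative stand-ins, not the real ModelLoaderRegistry API:

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>

enum class ModelType { kMindIR, kMSLite };

// Toy stand-in for ModelLoaderRegistry: maps a model type to a loader.
// A loader returns a description string here instead of a real model.
class ToyRegistry {
 public:
  using Loader = std::function<std::string(const char *, std::size_t)>;

  void Register(ModelType type, Loader loader) { loaders_[type] = std::move(loader); }

  // Mirrors ImportFromBuffer: try a registered loader first, otherwise
  // fall back to the default path (like `new LiteModel(path)`).
  std::string Import(ModelType type, const char *buf, std::size_t size) {
    auto it = loaders_.find(type);
    if (it != loaders_.end()) {
      return it->second(buf, size);
    }
    return "lite-model";  // fallback path
  }

 private:
  std::map<ModelType, Loader> loaders_;
};
```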

1.1、MindirModelLoader::ImportModel

ImportModel imports a MindIR-format model from the given model buffer. It creates the model instance, initializes its buffer, parses the model proto, and converts the model into the internal representation.

AbstractBaseModel *MindirModelLoader::ImportModel(const char *model_buf, size_t size, bool take_buf) {
  this->model_ = new MindirModel();
  this->model_->model_type_ = mindspore::lite::ModelType_MindIR;
  auto ret = this->InitModelBuffer(this->model_, model_buf, size, take_buf);
  ...
  if (!this->ConvertModel(this->model_->mindir_model_proto_)) {
    MS_LOG(ERROR)
      << "MindirModelLoader: Import model failed, convert model error, please check the correctness of the file.";
    delete this->model_;
    this->model_ = nullptr;
    return nullptr;
  }
  return this->model_;
}
1.1.1. ModelLoader::InitModelBuffer simply copies the buffer into the freshly new-ed model object (the `this->model_` created above)
int ModelLoader::InitModelBuffer(AbstractBaseModel *model, const char *model_buf, size_t size, bool take_buf) {
  if (take_buf) {
    model->buf = const_cast<char *>(model_buf);
  } else {
    if (size > kMaxModelBufferSize) {
      return mindspore::lite::RET_ERROR;
    }
    model->buf = new char[size];
    memcpy(model->buf, model_buf, size);
  }
  model->buf_size_ = size;
  return mindspore::lite::RET_OK;
}
1.1.2. MindirModelLoader::ConvertModel converts the MindIR model description (a mind_ir::ModelProto) into the concrete internal model structure used for subsequent inference and execution
bool MindirModelLoader::ConvertModel(const mind_ir::ModelProto &model_proto) {
  this->model_->graph_.name_ = "";
  if (model_proto.has_model_version()) {
    this->model_->graph_.version_ = model_proto.model_version();
  }
  // Convert the model primitives, ensuring all basic components (such as operators) parse correctly; return an error if conversion fails.
  MS_CHECK_TRUE_MSG(
    ConvertPrimitives(model_proto), false,
    "MindirModelLoader: Import model failed, convert primitives error, please check the correctness of the file.");
  this->tensor_count_ = 0;
  this->node_count_ = 0;
  if (model_proto.has_graph()) {
    this->model_->graph_.name_ = model_proto.graph().name();
    // root graph, do not pass sub graph
    // Check whether separate functions (sub-graphs) exist. If so, convert the root graph directly into the internal format; otherwise create a new sub-graph and add it to the model's graph.
    if (model_proto.functions_size() > 0) {
      MS_CHECK_TRUE_MSG(
        ConvertGraph(model_proto.graph(), nullptr, true), false,
        "MindirModelLoader: Import model failed, convert root graph error, please check the correctness of the file.");
    } else {
      // no subgraph, add graph to subgraph
      auto *sub_graph = new LiteGraph::SubGraph();
      sub_graph->name_ = model_proto.graph().name();
      MS_CHECK_TRUE_MSG(
        ConvertGraph(model_proto.graph(), sub_graph, true), false,
        "MindirModelLoader: Import model failed, convert root graph error, please check the correctness of the file.");
      this->model_->graph_.sub_graphs_.push_back(sub_graph);
    }
  }
  // Iterate over all functions (sub-graphs), create and convert each one, and append it to the model's sub-graph list; log an error and return on failure.
  for (int i = 0; i < model_proto.functions_size(); i++) {
    auto sub_graph_proto = model_proto.functions(i);
    auto *sub_graph = new LiteGraph::SubGraph();
    ...
    sub_graph->name_ = sub_graph_proto.name();
    MS_CHECK_TRUE_MSG(
      ConvertGraph(sub_graph_proto, sub_graph), false,
      "MindirModelLoader: Import model failed, convert sub graph error, please check the correctness of the file.");
    this->model_->graph_.sub_graphs_.push_back(sub_graph);
  }
  MS_LOG(INFO) << "MindirModelLoader: Import model successful.";
  return true;
}
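The root-graph handling in ConvertModel follows one rule: if the model carries explicit functions (sub-graphs), convert the root graph standalone and then each function; otherwise wrap the single graph as the only sub-graph. A compressed sketch of that control flow, using toy structs instead of the protobuf types:

```cpp
#include <string>
#include <vector>

struct ToyGraph {
  std::string name;
};

struct ToyModel {
  std::string root_name;
  std::vector<std::string> sub_graph_names;
};

// Mirrors ConvertModel's branching: with explicit functions, the root
// graph is converted standalone and each function becomes a sub-graph;
// without functions, the root graph itself is wrapped as the only sub-graph.
ToyModel BuildGraphs(const ToyGraph &root, const std::vector<ToyGraph> &functions) {
  ToyModel model;
  model.root_name = root.name;
  if (!functions.empty()) {
    for (const auto &fn : functions) {
      model.sub_graph_names.push_back(fn.name);
    }
  } else {
    model.sub_graph_names.push_back(root.name);  // single-graph model
  }
  return model;
}
```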
1.1.2.1. ConvertGraph within ConvertModel: ConvertTensors and ConvertNodes
bool MindirModelLoader::ConvertGraph(const mind_ir::GraphProto &graph_proto, LiteGraph::SubGraph *sub_graph,
                                     bool is_main_graph) {
  MS_CHECK_TRUE_MSG(
    ConvertTensors(graph_proto, sub_graph, is_main_graph), false,
    "MindirModelLoader: Convert Graph failed, convert tensors error, please check the correctness of the file.");
  MS_CHECK_TRUE_MSG(
    ConvertNodes(graph_proto, sub_graph, is_main_graph), false,
    "MindirModelLoader: Convert Graph failed, convert nodes error, please check the correctness of the file.");
  return true;
}
ConvertTensors

std::vector<TensorProtoWrap> all_mindir_tensors_;

ConvertTensors walks the graph in three passes:
1. Convert all input tensors and store them in all_mindir_tensors_.
2. Convert all output tensors and store them in all_mindir_tensors_.
3. Convert all parameter tensors and store them in all_mindir_tensors_.

bool MindirModelLoader::ConvertTensors(const mind_ir::GraphProto &graph_proto, LiteGraph::SubGraph *sub_graph,
                                       bool is_main_graph) {
  // 1. Iterate over all input tensors
  for (int i = 0; i < graph_proto.input_size(); i++) {
    // Get the input tensor's protobuf message.
    const mind_ir::TensorProto &tensor_proto = graph_proto.input(i).tensor(0);
    // Wrap the input tensor in a TensorProtoWrap and append it to all_mindir_tensors_.
    TensorProtoWrap tensor_wrap(graph_proto.input(i).name(), tensor_proto);
    this->model_->all_mindir_tensors_.push_back(tensor_wrap);
    // Record the tensor name -> index mapping in tensor_index_map_.
    this->tensor_index_map_[graph_proto.input(i).name()] = this->tensor_count_;
    // If a sub-graph exists, add the input tensor index to its input list
    if (sub_graph != nullptr) {
      sub_graph->input_indices_.push_back(this->tensor_count_);
      sub_graph->tensor_indices_.push_back(this->tensor_count_);
    }
    // For the main graph, also add the index to the main graph's input list
    if (is_main_graph) {
      this->model_->graph_.input_indices_.push_back(this->tensor_count_);
    }
    // Increment tensor_count_
    this->tensor_count_++;
  }
  // 2. Iterate over output tensors
  for (int i = 0; i < graph_proto.output_size(); i++) {
    const mind_ir::TensorProto &tensor_proto = graph_proto.output(i).tensor(0);
    TensorProtoWrap tensor_wrap(graph_proto.output(i).name(), tensor_proto);
    this->model_->all_mindir_tensors_.push_back(tensor_wrap);
    this->tensor_index_map_[graph_proto.output(i).name()] = this->tensor_count_;
    if (sub_graph != nullptr) {
      sub_graph->output_indices_.push_back(this->tensor_count_);
      sub_graph->tensor_indices_.push_back(this->tensor_count_);
    }
    if (is_main_graph) {
      this->model_->graph_.output_indices_.push_back(this->tensor_count_);
    }
    this->tensor_count_++;
  }
  // 3. Iterate over parameter tensors, adding each one to all_mindir_tensors_ in the same way.
  for (int i = 0; i < graph_proto.parameter_size(); i++) {
    const mind_ir::TensorProto &tensor_proto = graph_proto.parameter(i);
    TensorProtoWrap tensor_wrap(tensor_proto.name(), tensor_proto);
    this->model_->all_mindir_tensors_.push_back(tensor_wrap);
    this->tensor_index_map_[tensor_proto.name()] = this->tensor_count_;
    if (sub_graph != nullptr) {
      sub_graph->tensor_indices_.push_back(this->tensor_count_);
    }
    this->tensor_count_++;
  }
  return true;
}
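The bookkeeping in ConvertTensors boils down to appending each tensor and recording its name-to-index mapping, so that ConvertNodes can later resolve node inputs and outputs by name. A toy version of that indexing (simplified, no protobuf; TensorTable is my own name):

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Toy tensor table mirroring all_mindir_tensors_ plus tensor_index_map_.
class TensorTable {
 public:
  // Append a tensor name and record name -> index (like tensor_count_++).
  std::size_t Add(const std::string &name) {
    std::size_t index = names_.size();
    names_.push_back(name);
    index_map_[name] = index;
    return index;
  }

  // Resolve a name to its index; returns -1 if unknown
  // (ConvertNodes logs a warning in that case).
  int Find(const std::string &name) const {
    auto it = index_map_.find(name);
    return it == index_map_.end() ? -1 : static_cast<int>(it->second);
  }

 private:
  std::vector<std::string> names_;
  std::map<std::string, std::size_t> index_map_;
};
```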
ConvertNodes

ConvertNodes takes a GraphProto (containing all node information), a pointer to the sub-graph, and a flag indicating whether this is the main graph; it returns whether the conversion succeeded.
For each node it constructs a corresponding LiteGraph::Node, initializes it, and appends it to the relevant lists.

bool MindirModelLoader::ConvertNodes(const mind_ir::GraphProto &graph_proto, LiteGraph::SubGraph *sub_graph,
                                     bool is_main_graph) {
  // Iterate over every node; node_proto is the node currently being processed
  for (int i = 0; i < graph_proto.node_size(); i++) {
    auto node_proto = graph_proto.node(i);
    // If the current node is a constant node, convert the tensors held in its attributes and store them.
    if (node_proto.op_type() == kNodeTypeConstant) {
      // Constant node, convert to tensor
      for (int j = 0; j < node_proto.attribute_size(); j++) {
        auto attribute_proto = node_proto.attribute(j);
        // For each tensor-typed attribute, create a TensorProtoWrap, append it to the model's MindIR tensor list, and record the tensor's index.
        if (attribute_proto.type() == mind_ir::AttributeProto_AttributeType_TENSORS) {
          const mind_ir::TensorProto &tensor_proto = attribute_proto.tensors(0);
          TensorProtoWrap tensor_wrap(node_proto.name(), tensor_proto);
          this->model_->all_mindir_tensors_.push_back(tensor_wrap);
          this->tensor_index_map_[node_proto.name()] = this->tensor_count_;
          if (sub_graph != nullptr) {
            sub_graph->tensor_indices_.push_back(this->tensor_count_);
          }
          this->tensor_count_++;
        }
      }
      continue;
    }
    // For non-constant nodes, create a new LiteGraph::Node instance
    auto *node = new LiteGraph::Node();
    ...
    node->name_ = node_proto.name();
    // Obtain the node's primitive via MakePrimitiveC, then set the node's name and op type
    node->base_operator_ = this->MakePrimitiveC(node_proto.op_type());
    auto base_operator = std::reinterpret_pointer_cast<ops::BaseOperator>(node->base_operator_);
    node->op_type_ = base_operator->GetPrim()->instance_name();

    // Resolve the node's inputs, looking up and recording the corresponding tensor indices
    for (int j = 0; j < node_proto.input_size(); j++) {
      std::string input_name = node_proto.input(j);
      auto it = this->tensor_index_map_.find(input_name);
      if (it == this->tensor_index_map_.end()) {
        MS_LOG(WARNING) << "MindirModelLoader: Convert nodes failed, cannot find input index with " << input_name;
        continue;
      }
      node->input_indices_.push_back(it->second);
    }
    // Resolve the node's outputs, looking up and recording the corresponding tensor indices
    for (int j = 0; j < node_proto.output_size(); j++) {
      std::string output_name = node_proto.output(j);
      auto it = this->tensor_index_map_.find(output_name);
      if (it == this->tensor_index_map_.end()) {
        MS_LOG(WARNING) << "MindirModelLoader: Convert nodes failed, cannot find output index with " << output_name;
        continue;
      }
      node->output_indices_.push_back(it->second);
    }
    this->model_->graph_.all_nodes_.push_back(node);
    if (sub_graph != nullptr) {
      sub_graph->node_indices_.push_back(this->node_count_);
    }
    this->node_count_++;
  }
  return true;
}

1.2. LiteModel::ConstructModel

ConstructModel builds and initializes the lightweight model (LiteModel). It verifies the integrity and validity of the model source data, generates the appropriate data structures, and checks model parameters so that the model works correctly during later inference. After Convert, ConstructModel receives the converted buffer and its size — note that InitModelBuffer is invoked here a second time.

#ifdef ENABLE_LITE_HELPER
int LiteModel::ConstructModel(const char *model_buf, size_t size, bool take_buf,
                              mindspore::infer::helper::InferHelpers *infer_helpers) {
#else
int LiteModel::ConstructModel(const char *model_buf, size_t size, bool take_buf) {
#endif
  auto ret = InitModelBuffer(this, model_buf, size, take_buf);
  ...
  int status = GenerateModelByVersion();
#ifdef ENABLE_LITE_HELPER
  if (!PrepareInnerTensors(infer_helpers)) {
#else
  if (!PrepareInnerTensors()) {
#endif
    MS_LOG(ERROR) << "PrepareInnerTensors failed.";
    return RET_ERROR;
  }
  return RET_OK;
}

1.2.1. PrepareInnerTensors

PrepareInnerTensors prepares the internal tensors of the LiteModel, wrapping each tensor in a SchemaTensorWrapper object and performing the necessary initialization. This lets the model manage, store, and operate on its own tensor data for subsequent inference and computation. The return value indicates whether preparation succeeded; if the tensors have already been prepared, the function returns failure immediately.

std::vector<SchemaTensorWrapper *> inner_all_tensors_;

#ifdef ENABLE_LITE_HELPER
bool LiteModel::PrepareInnerTensors(mindspore::infer::helper::InferHelpers *infer_helpers) {
#else
bool LiteModel::PrepareInnerTensors() {
#endif
  if (!this->inner_all_tensors_.empty()) {
    MS_LOG(ERROR) << "Already prepared tensors";
    return false;
  }
  // Use GetDirectory to obtain the directory of the model path, needed for file access when initializing tensors later.
  auto dir = GetDirectory(this->model_path_);
  // Resize inner_all_tensors_ to match graph_.all_tensors_, so there is one inner tensor per tensor defined in the graph.
  this->inner_all_tensors_.resize(graph_.all_tensors_.size());
  // Create a SchemaTensorWrapper for each entry and initialize it from graph_.all_tensors_
  for (size_t i = 0; i < graph_.all_tensors_.size(); i++) {
    auto tensor_wrapper = new (std::nothrow) SchemaTensorWrapper();
    ...
#ifdef ENABLE_LITE_HELPER
    if (!tensor_wrapper->Init(*(graph_.all_tensors_.at(i)), static_cast<SCHEMA_VERSION>(schema_version_), dir,
                              infer_helpers)) {
#else
    if (!tensor_wrapper->Init(*(graph_.all_tensors_.at(i)), static_cast<SCHEMA_VERSION>(schema_version_), dir)) {
#endif
      delete tensor_wrapper;
      return false;
    }
    this->inner_all_tensors_[i] = tensor_wrapper;
  }
  return true;
}

SchemaTensorWrapper::Init(const schema::Tensor &tensor, const SCHEMA_VERSION schema_version, const std::string &base_path) initializes the tensor held by a SchemaTensorWrapper, handling data reading and determining the data source according to the tensor's state and configuration. Depending on the conditions, it either takes the tensor data directly from memory or reads external data from the given file path, ensuring the data is correct and available before the tensor is used.

#ifdef ENABLE_LITE_HELPER
bool SchemaTensorWrapper::Init(const schema::Tensor &tensor, const SCHEMA_VERSION schema_version,
                               const std::string &base_path, mindspore::infer::helper::InferHelpers *infer_helpers) {
#else
bool SchemaTensorWrapper::Init(const schema::Tensor &tensor, const SCHEMA_VERSION schema_version,
                               const std::string &base_path) {
#endif
  // add magic-num-check and checksum-check here
  this->handler_ = &tensor;
  if (tensor.data() != nullptr && tensor.data()->data() != nullptr) {
    auto data = tensor.data()->data();
    auto data_size = tensor.data()->size();
    this->length_ = data_size;
    this->data_ = const_cast<unsigned char *>(data);
    this->if_own_data_ = false;
    return true;
  }
  if (schema_version == SCHEMA_V0) {
    return true;
  }
  if (tensor.externalData() == nullptr) {
    return true;
  }
  if (tensor.externalData()->size() != 1) {
    MS_LOG(ERROR) << "Only support tensor saved in one file now";
    return false;
  }
  auto external_data = tensor.externalData()->Get(0);
  this->length_ = static_cast<size_t>(external_data->length());
#ifdef ENABLE_LITE_HELPER
  if (infer_helpers != nullptr && infer_helpers->GetExternalTensorHelper() != nullptr) {
    this->data_ = infer_helpers->GetExternalTensorHelper()->GetExternalTensorData(external_data);
    this->if_own_data_ = false;
  } else {
    this->data_ =
      ReadFileSegment(base_path + external_data->location()->str(), external_data->offset(), external_data->length());
    this->if_own_data_ = true;
  }
#else
  this->data_ =
    ReadFileSegment(base_path + external_data->location()->str(), external_data->offset(), external_data->length());
  this->if_own_data_ = true;
#endif
  if (this->length_ > 0 && this->data_ == nullptr) {
    MS_LOG(ERROR) << "Read tensor data from msw file failed";
    return false;
  }
  return true;
}
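For external data, the loader reads only a segment of the side file described by (location, offset, length). A stdlib sketch of such a segment read (`ReadSegment` is my own name for illustration; the framework's helper is `ReadFileSegment`):

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Read `length` bytes starting at `offset` from the file at `path`.
// Returns an empty vector on any failure, mirroring the
// "length_ > 0 but data_ == nullptr" error check above.
std::vector<unsigned char> ReadSegment(const std::string &path, std::streamoff offset, std::size_t length) {
  std::ifstream in(path, std::ios::binary);
  if (!in) {
    return {};
  }
  in.seekg(offset);
  std::vector<unsigned char> data(length);
  if (!in.read(reinterpret_cast<char *>(data.data()), static_cast<std::streamsize>(length))) {
    return {};
  }
  return data;
}
```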

2. Model *ImportFromPath(const char *model_path)

Importing through the model's physical path differs little from the from-buffer path: ImportFromPath simply delegates to LiteImportFromPath.

Model *ImportFromPath(const char *model_path) { return LiteImportFromPath(model_path); }

To be continued…
