Decision Forests
A **decision forest** is a generic term for models made of multiple decision trees. The prediction of a decision forest is the aggregation of the predictions of its decision trees. How this aggregation is implemented depends on the algorithm used to train the decision forest. For example, in a multi-class classification random forest (a type of decision forest), each tree votes for a single class, and the random forest's prediction is the class with the most votes. In a binary classification gradient boosted trees (GBT) model (another type of decision forest), each tree outputs a logit (a floating-point value), and the GBT prediction is the sum of those values passed through an activation function (for example, sigmoid).
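As a minimal sketch of the two aggregation schemes described above, the snippet below uses hypothetical per-tree outputs (the votes and logits are made up for illustration; real models would produce them by running each tree on an example):

```python
import math
from collections import Counter

# Random forest (multi-class classification): each tree votes for one
# class, and the forest predicts the most-voted class.
tree_votes = ["cat", "dog", "cat", "cat", "bird"]  # hypothetical votes
forest_prediction = Counter(tree_votes).most_common(1)[0][0]
print(forest_prediction)  # -> cat

# Gradient boosted trees (binary classification): each tree outputs a
# logit, and the prediction is the sigmoid of the sum of those logits.
tree_logits = [0.8, -0.2, 0.5, 0.1]  # hypothetical logits
probability = 1.0 / (1.0 + math.exp(-sum(tree_logits)))
print(round(probability, 3))  # -> 0.769
```

Note that the random forest aggregates independent votes, while the GBT sums contributions that were trained sequentially to correct each other's errors; only the final sum is converted to a probability.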
The next two chapters describe these two decision forest algorithms in detail.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated (UTC): 2025-07-27.