
The xlsx-style utility library

xlsx-style is a JavaScript library for working with Excel files. It lets you create, read, and modify workbooks while controlling their presentation: its API covers fonts, colors, borders, and alignment, so cell styles can be set directly from code.

With it you can apply customized styling to an Excel file, for example setting different font styles and background colors per cell, adjusting row heights and column widths, and drawing borders around cells or ranges to make the output more readable.

The library also supports cell number formats: you can declare a cell's data type (number, date, currency, and so on) together with numeric precision and date formats, which helps when the exported file has to meet specific formatting requirements.

Beyond styling, xlsx-style can read and modify workbook content. You can read data out of an Excel file for processing and calculation, and you can edit existing files: add new worksheets, delete or rename existing ones, and insert, delete, or merge cells.

In short, xlsx-style is a practical utility for handling both the styling and the content of Excel files in a JavaScript environment. Whether the goal is data analysis, report generation, or data export, it can save a significant amount of manual work and simplify the workflow.
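To make the styling workflow concrete, here is a minimal sketch of building a one-cell worksheet and styling it with xlsx-style in Node.js. It assumes `xlsx-style` is installed from npm; the sheet name, file name, and style values are illustrative, and the style keys follow the commonly documented xlsx-style style object, so verify them against the version you actually install.

```javascript
// Minimal sketch: one styled cell written out with xlsx-style (Node.js).
// Assumes `npm install xlsx-style`; "Report" and "report.xlsx" are placeholders.
const XLSX = require('xlsx-style');
const fs = require('fs');

// A worksheet is a plain object keyed by A1-style addresses, plus a '!ref' range.
const ws = {
  '!ref': 'A1:A1',
  A1: {
    v: 'Quarterly Report',   // cell value
    t: 's',                  // cell type: string
    s: {                     // cell style (the xlsx-style extension point)
      font: { bold: true, sz: 14, color: { rgb: 'FFFFFF' } },
      fill: { patternType: 'solid', fgColor: { rgb: '4F81BD' } },
      alignment: { horizontal: 'center', vertical: 'center' },
      border: { bottom: { style: 'thin', color: { rgb: '000000' } } }
    }
  }
};

const wb = { SheetNames: ['Report'], Sheets: { Report: ws } };

// write() with type 'binary' returns a binary string; convert it to a Buffer to save.
const out = XLSX.write(wb, { bookType: 'xlsx', type: 'binary' });
fs.writeFileSync('report.xlsx', Buffer.from(out, 'binary'));
```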
Related questions

```
INFO  Starting development server...
98% after emitting CopyPlugin

ERROR  Failed to compile with 1 error    10:20:49

This relative module was not found:
* ./cptable in ./node_modules/xlsx-style/dist/cpexcel.js
```

### **Problem analysis: `ERROR Failed to compile with 1 error`**

The error means the build tool (e.g. webpack) could not resolve the internal module `./cptable` referenced by `xlsx-style` during compilation. The root cause is that **the `xlsx-style` package hard-codes a file path that is not shipped correctly**, typically for one of the following reasons:

| **Cause** | **Description** |
|-----------------------|-------------------------------------------------------------------------|
| Incomplete package install | The `xlsx-style` install did not generate or download the `cptable` file |
| Version compatibility | The installed `xlsx-style` version conflicts with the project environment (e.g. the webpack version) |
| Missing build configuration | webpack has no rule or alias for resolving the referenced file |

---

### **Step-by-step solutions**

#### **1. Fix the `xlsx-style` dependency (recommended)**

`xlsx-style` is a community-maintained library, and its `cptable` dependency is known to require manual handling. Proceed as follows:

1. **Install the `cptable` dependency**

   Run the following command to install the missing module:

   ```bash
   npm install cptable --save
   ```

2. **Create a symbolic link manually**

   In the `dist` directory of `xlsx-style`, create a soft link pointing at `cptable`:

   ```bash
   # go into the project's node_modules
   cd node_modules/xlsx-style/dist

   # create the symlink (Linux/macOS)
   ln -s ../../../cptable ./cptable

   # Windows users: use mklink (requires administrator rights)
   mklink cptable ..\..\..\cptable
   ```

---

#### **2. Adjust the webpack configuration (for the build tool)**

If you use webpack, declare how `./cptable` should be resolved:

1. **Add an alias in `vue.config.js` or `webpack.config.js`**

   ```javascript
   module.exports = {
     configureWebpack: {
       resolve: {
         alias: {
           './cptable': 'cptable'
         }
       }
     }
   };
   ```

2. **Restart the development server**

   ```bash
   npm run serve
   ```

---

#### **3. Downgrade `xlsx-style`**

Some newer `xlsx-style` releases have compatibility problems; try a known-stable older version:

```bash
# remove the current version
npm uninstall xlsx-style

# install a known-stable release (for example 0.8.13)
npm install [email protected] --save
```

---

#### **4. Replace `xlsx-style` with `xlsx` (last resort)**

If none of the above works, switch to the more actively maintained `xlsx` library:

```bash
npm uninstall xlsx-style
npm install xlsx --save
```

Then update the import in your code:

```javascript
// before
import XLSX from 'xlsx-style';

// after
import XLSX from 'xlsx';
```

---

### **Why this happens**

- **Module resolution**: webpack resolves every `import` / `require` at build time. When `xlsx-style/dist/cpexcel.js` requires `./cptable`, webpack looks for `node_modules/xlsx-style/dist/cptable`, but that file is not shipped in the published package.
- **What the symlink does**: the soft link maps the real `cptable` package into the location `xlsx-style` expects, so the build tool finds a file at the hard-coded path.

---

### **Typical scenario: a Vue CLI project**

If the error appears in a Vue project, fix it as follows:

1. Edit `vue.config.js`:

   ```javascript
   module.exports = {
     chainWebpack: config => {
       config.resolve.alias.set('./cptable', 'cptable');
     }
   };
   ```

2. Restart the development server.

---

### **Decision flow for the resolution failure**

```
start
│
├─ willing to patch manually?     → install cptable and create the symlink
│
├─ willing to change the config?  → add the webpack alias
│
├─ willing to downgrade?          → install an older xlsx-style
│
└─ willing to switch libraries?   → migrate to xlsx
```

---

### **Additional notes**

- **Check the file path**: confirm that `node_modules/cptable` exists; reinstall it if it is missing.
- **Clear caches**: delete `node_modules/.cache` and `dist`, then retry.
- **Version compatibility**:

  | `xlsx-style` version | Applies to |
  |------------------|-----------------------|
  | 0.8.x | webpack 4 and below |
  | 0.9.x | needs additional webpack 5 configuration |

---

To narrow the problem down further, provide:

1. The project's webpack version (`npm list webpack`)
2. The versions of `xlsx-style` and `cptable` (`npm list xlsx-style cptable`)
3. A screenshot of the full error log
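Returning to option 4 above: once the project is on plain `xlsx`, a basic export goes through the standard SheetJS helpers. The following is a minimal sketch (the sheet name, file name, and data are placeholders); note that plain `xlsx` ignores the `s` style property, so styled output would still need a fork such as `xlsx-js-style`.

```javascript
// Minimal export sketch with the standard SheetJS `xlsx` package.
// Names ("Report", "report.xlsx") and the row data are illustrative only.
import * as XLSX from 'xlsx';

const rows = [
  { name: 'Alice', score: 91 },
  { name: 'Bob', score: 78 }
];

const ws = XLSX.utils.json_to_sheet(rows);        // array of objects -> worksheet
const wb = XLSX.utils.book_new();                 // empty workbook
XLSX.utils.book_append_sheet(wb, ws, 'Report');   // attach the sheet
XLSX.writeFile(wb, 'report.xlsx');                // write to disk (Node) or trigger a download (browser)
```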

What cell-level utilities does XLSX provide?

Below is a detailed technical breakdown of the cell-handling utilities around XLSX:

---

### 1. Core cell utilities

#### 1.1 Basic cell operations

**Address conversion**

- `XLSX.utils.encode_cell({c:C, r:R})`
  Converts numeric column/row indices into an A1-style address.
  Example: `encode_cell({c:2, r:3}) => "C4"`
  Underlying rule: $\text{column number} = C + 1$ (0-based indices map to 1-based labels)

- `XLSX.utils.decode_cell("A1")`
  Parses an address into a column/row object.
  Example: `decode_cell("B3") => {c:1, r:2}`

**Range handling**

- `XLSX.utils.decode_range("A1:C5")` returns a range object:

  ```json
  {
    "s": { "c": 0, "r": 0 },
    "e": { "c": 2, "r": 4 }
  }
  ```

- `XLSX.utils.encode_range({s:{c:0,r:0}, e:{c:2,r:4}})` produces the range string `"A1:C5"`

#### 1.2 Data format utilities

**Type handling**

```javascript
// Get a cell's data type
const cellType = XLSX.utils.get_cell_type(cell);

// Force a formatted rendering
XLSX.utils.format_cell(cell, {
  cellNF: true,          // keep the number format
  dateNF: "yyyy-mm-dd"   // custom date format
});
```

**Formula handling**

```javascript
// Resolve formula dependencies
const deps = XLSX.utils.get_formula_deps("=SUM(A1:B2)");
// Output: ["A1","A2","B1","B2"]

// Translate a formula to a new anchor
const translated = XLSX.utils.translate_formula(
  "=A1+B2",
  { c:1, r:1 },   // original anchor
  { c:3, r:4 }    // new anchor
);
// Output: "=C5+D6"
```

---

### 2. Advanced cell operations

#### 2.1 Style handling *(requires an extension library such as `xlsx-js-style`)*

```javascript
// Set a cell style
cell.s = {
  fill: { patternType: "solid", fgColor: { rgb: "FF0000" } },
  font: { bold: true, color: { rgb: "FFFFFF" } }
};

// Read a style back
const bgColor = cell.s?.fill?.fgColor?.rgb || "default";
```

#### 2.2 Merged cells

```javascript
// Add merge ranges
worksheet["!merges"] = [
  { s: {r:0,c:0}, e: {r:0,c:3} },  // merge A1:D1
  { s: {r:1,c:0}, e: {r:5,c:0} }   // merge A2:A6
];

// Check whether an address falls inside a merge
function isMergedCell(ws, address) {
  return ws["!merges"]?.some(range =>
    XLSX.utils.is_in_range(address, range)
  );
}
```

---

### 3. Utility function library

#### 3.1 Batch operations

**Iteration**

```javascript
XLSX.utils.sheet_to_row_object_array(worksheet)
  .forEach((row, rowIndex) => {
    Object.keys(row).forEach(col => {
      const cellAddress = XLSX.utils.encode_cell({
        c: XLSX.utils.decode_col(col),
        r: rowIndex
      });
      // process each cell...
    });
  });
```

**Matrix handling**

```javascript
// Produce a 2D array
const matrix = XLSX.utils.sheet_to_json(worksheet, {
  header: 1,     // force array-of-arrays output
  defval: null   // fill value for empty cells
});

// Locate the coordinates of the maximum value
let maxCell = { address: "A1", value: -Infinity };
XLSX.utils.sheet_to_row_object_array(worksheet)
  .forEach((row, r) => {
    Object.entries(row).forEach(([k, v]) => {
      if (typeof v === 'number' && v > maxCell.value) {
        maxCell = {
          address: XLSX.utils.encode_cell({
            c: XLSX.utils.decode_col(k),
            r: r
          }),
          value: v
        };
      }
    });
  });
```

---

### 4. Special cell content

#### 4.1 Hyperlinks

```javascript
// Read a hyperlink
const link = cell.l?.Target || "";

// Write a hyperlink
cell.l = {
  Target: "https://example.com",
  Tooltip: "Visit the example site"
};
```

#### 4.2 Comments

```javascript
// Add a comment
cell.c = [{
  t: "Comment text",
  a: "Author"
}];

// Read a comment
const comment = cell.c?.[0]?.t || "";
```

---

### 5. Performance tips

1. **Lazy loading**:

   ```javascript
   // Load only the cells in the visible area
   const viewportCells = XLSX.utils.get_viewport_range(worksheet, {
     width: 1000,   // pixel width
     height: 800,   // pixel height
     zoom: 1.0      // zoom factor
   });
   ```

2. **Incremental updates**:

   ```javascript
   // Update only the cells that changed
   const dirtyCells = new Set();
   function markDirty(cellAddr) {
     dirtyCells.add(cellAddr);
   }
   ```

---

### 6. Tooling comparison

| Feature | Official `xlsx` | Extension (`xlsx-js-style`) | Third-party (`exceljs`) |
|--------------------|-----------------|-----------------------------|--------------------|
| Basic cell operations | ✅ | ✅ | ✅ |
| Style handling | ❌ | ✅ | ✅ |
| Streaming | ✅ | ❌ | ✅ |
| Formula calculation engine | ❌ | ❌ | ✅ |
| Chart support | ❌ | ❌ | ✅ |

---

### 7. Best-practice recommendations

1. **Use one address format**: store addresses in A1 form and convert to numeric column/row indices only for calculations.
2. **Type-safe access**:

   ```javascript
   function safeGetValue(cell) {
     return cell?.v ?? cell?.w ?? "";
   }
   ```

3. **Memory management**: periodically drop cell references you no longer need.

   ```javascript
   delete worksheet["Z1000"];  // explicitly delete a cell
   ```

---

Used sensibly, these utilities cover everything from simple data extraction to complex report generation. Pick the combination that fits the task: the official utilities are usually enough for data cleaning, while report generation with formatting is easier with a style extension library.
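As a small, self-contained illustration of the address helpers and the `!merges` structure described above, here is a sketch against the public SheetJS `xlsx` API; the sheet contents are made up for the example.

```javascript
// Sketch: build a tiny sheet from an array of arrays, merge the title row,
// and convert between A1 addresses and {c, r} coordinates.
import * as XLSX from 'xlsx';

const ws = XLSX.utils.aoa_to_sheet([
  ['Monthly totals', ''],   // title row, merged below as A1:B1
  ['Month', 'Total'],
  ['Jan', 120],
  ['Feb', 95]
]);

// Merge A1:B1 for the title.
ws['!merges'] = [{ s: { c: 0, r: 0 }, e: { c: 1, r: 0 } }];

// Address helpers: {c, r} <-> "A1" notation.
const addr = XLSX.utils.encode_cell({ c: 1, r: 3 });   // "B4"
const coords = XLSX.utils.decode_cell(addr);           // { c: 1, r: 3 }
const range = XLSX.utils.decode_range(ws['!ref']);     // { s: {...}, e: {...} }

console.log(addr, coords, range);
```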

… 655 linear super 19146752 Name: my_bigball_b Group: qti_dynamic_partitions_b Attributes: readonly Extents: Name: my_manifest_a Group: qti_dynamic_partitions_a Attributes: readonly Extents: 0 … 911 linear super 19148800 Name: my_manifest_b Group: qti_dynamic_partitions_b Attributes: readonly Extents: Super partition layout: super: 2048 … 1529624: system_a (1527576 sectors) super: 1529856 … 4012664: system_ext_a (2482808 sectors) super: 4014080 … 4729624: vendor_a (715544 sectors) super: 4730880 … 4747040: product_a (16160 sectors) super: 4747264 … 8912816: my_product_a (4165552 sectors) super: 8912896 … 10988264: odm_a (2075368 sectors) super: 10989568 … 10990224: my_engineering_a (656 sectors) super: 10991616 … 11156352: vendor_dlkm_a (164736 sectors) super: 11157504 … 11158344: system_dlkm_a (840 sectors) super: 11159552 … 15776832: my_stock_a (4617280 sectors) super: 15777792 … 15778448: my_heytap_a (656 sectors) super: 15779840 … 15780496: my_carrier_a (656 sectors) super: 15781888 … 15950464: my_region_a (168576 sectors) super: 15951872 … 15952528: my_company_a (656 sectors) super: 15953920 … 19145584: my_preload_a (3191664 sectors) super: 19146752 … 19147408: my_bigball_a (656 sectors) super: 19148800 … 19149712: my_manifest_a (912 sectors) Block device table: Partition name: super First sector: 2048 Size: 12348030976 bytes Flags: none Group table: Name: default Maximum size: 0 bytes Flags: none Name: qti_dynamic_partitions_a Maximum size: 12343836672 bytes Flags: none Name: qti_dynamic_partitions_b Maximum size: 12343836672 bytes Flags: none Snapshot state: Update state: none Using snapuserd: 0 Using userspace snapshots: 0 Using io_uring: 0 Using o_direct: 0 Using XOR compression: 1 Current slot: _a Boot indicator: booting from unknown slot Rollback indicator: No such file or directory Forward merge indicator: No such file or directory Source build fingerprint:
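下面给出一份示意性草稿(并非上文脚本的既有实现),演示这两处改动的一种可能思路:一是从 18_lpdump.txt 的 "Super partition layout" 段解析各分区的扇区数,并按每扇区 512 字节换算成 MB;二是把备注列的筛选从"取增大 TOP3"改为"差异大于 5MB 的全部列出"。其中 `parse_lpdump_file`、`build_notes` 等函数名以及 512 字节/扇区的换算都是示例性假设,实际集成时需按原脚本的数据结构调整。

```python
# -*- coding: utf-8 -*-
# 示意草稿:解析 lpdump 输出并调整备注列筛选阈值(函数名均为示例假设)
import re


def parse_lpdump_file(file_path):
    """从 "Super partition layout" 段提取各分区扇区数,返回 {分区名(去掉_a/_b后缀): 大小MB}"""
    sizes = {}
    # 匹配形如: super: 2048 … 1529624: system_a (1527576 sectors) 的行
    pattern = re.compile(r':\s*(\w+?)_[ab]\s*\((\d+)\s+sectors\)')
    with open(file_path, 'r') as f:
        for line in f:
            m = pattern.search(line)
            if m:
                name, sectors = m.group(1), int(m.group(2))
                size_mb = sectors * 512 / 1024.0 / 1024.0  # 假设每扇区 512 字节
                sizes[name] = round(sizes.get(name, 0.0) + size_mb, 2)
    return sizes


def build_notes(path_diffs, threshold_mb=5.0):
    """把原来的“取TOP3”改为“差异大于 threshold_mb 的全部列出”,path_diffs 为 [(路径, 增大值MB), ...]"""
    big = [(p, d) for p, d in path_diffs if d > threshold_mb]
    big.sort(key=lambda x: x[1], reverse=True)
    return [u"{}. {}: +{:.2f}MB".format(i + 1, p, d) for i, (p, d) in enumerate(big)]
```

用法上可以在生成总览页之前分别对两台机器调用 `parse_lpdump_file(os.path.join(folder, '18_lpdump.txt'))` 得到各分区总大小,再据此填写总大小、差值和标记列;备注列则用 `build_notes` 的结果替换原来的 `path_diffs[:3]` 循环。以上仅为思路示意,不代表最终实现。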

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import re
import sys
import argparse
import xlwt
import locale
from collections import defaultdict

# 设置系统默认编码
reload(sys)
sys.setdefaultencoding('utf-8')

# 分区名称映射表(文件名 → 友好名称)
PARTITION_NAME_MAP = {
    'system.txt': 'system',
    'system_ext.txt': 'system_ext',
    'vendor.txt': 'vendor',
    'product.txt': 'product',
    'odm.txt': 'odm',
    'my_product.txt': 'my_product',
    'my_engineering.txt': 'my_engineering',
    'my_stock.txt': 'my_stock',
    'my_heytap.txt': 'my_heytap',
    'my_company.txt': 'my_company',
    'my_carrier.txt': 'my_carrier',
    'my_region.txt': 'my_region',
    'my_preload.txt': 'my_preload',
    'my_manifest.txt': 'my_manifest',
    'data.txt': 'data',
    'my_bigball.txt': 'my_bigball',
    'vendor_dlkm.txt': 'vendor_dlkm',
    'system_dlkm.txt': 'system_dlkm'
}

# 分区显示顺序(按重要性排序)
PARTITION_ORDER = [
    'system', 'system_ext', 'vendor', 'product', 'odm',
    'my_product', 'my_engineering', 'my_stock', 'my_heytap',
    'my_company', 'my_carrier', 'my_region', 'my_preload',
    'my_manifest', 'my_bigball', 'vendor_dlkm', 'system_dlkm'
]


def to_unicode(obj):
    """将对象转换为Unicode字符串"""
    if isinstance(obj, unicode):
        return obj
    if isinstance(obj, str):
        try:
            return obj.decode('utf-8')
        except UnicodeDecodeError:
            try:
                return obj.decode(locale.getpreferredencoding())
            except:
                return obj.decode('utf-8', 'ignore')
    return unicode(obj)


def parse_du_file(file_path):
    """
    解析du命令输出文件并转换为MB
    已知输入数据单位为KB,所以无单位或带K的都视为KB
    """
    data = {}
    try:
        with open(file_path, 'r') as f:
            for line in f:
                # 跳过错误行和空行
                if 'Permission denied' in line or 'No such file' in line or not line.strip():
                    continue

                # 使用更灵活的匹配模式处理不同格式
                # 匹配格式: <数字> <可选单位> <路径>
                match = re.match(r'(\d+\.?\d*)\s*([kKmMgG]?)\s*(.*)', line.strip())
                if match:
                    size, unit, path = match.groups()
                    size = float(size)

                    # 单位转换到MB(已知输入单位为KB)
                    unit = unit.upper()  # 统一转为大写

                    # 处理单位转换
                    if unit == 'K' or unit == '':
                        # KB → MB: 除以1024
                        size = size / 1024.0
                    elif unit == 'M':
                        # 已经是MB,保持不变
                        pass
                    elif unit == 'G':
                        # GB → MB: 乘以1024
                        size = size * 1024.0
                    else:
                        # 未知单位,默认按KB处理
                        size = size / 1024.0

                    data[to_unicode(path)] = round(size, 4)
    except IOError as e:
        print(u"警告: 无法读取文件 {}: {}".format(file_path, str(e)))
    return data


def get_partition_name(filename):
    """从文件名获取分区名称"""
    return PARTITION_NAME_MAP.get(filename, filename.replace('.txt', ''))


def is_main_partition_file(filename):
    """检查是否为主分区文件"""
    return filename in PARTITION_NAME_MAP


def generate_dual_report(folder1, folder2, output_xlsx):
    """生成双机对比报告"""
    folder1_name = to_unicode(os.path.basename(os.path.normpath(folder1)))
    folder2_name = to_unicode(os.path.basename(os.path.normpath(folder2)))

    # 修复:将ffolder改为folder
    for folder in [folder1, folder2]:
        if not os.path.exists(folder):
            print(u"错误: 目录不存在 - {}".format(folder))
            return u"目录 {} 不存在,请检查路径".format(folder)
        if not os.path.isdir(folder):  # 修复这里的变量名
            print(u"错误: 路径不是目录 - {}".format(folder))
            return u"{} 不是有效目录".format(folder)

    # 初始化数据结构
    machine1_main_data = {}
    machine2_main_data = {}
    machine1_all_files = defaultdict(dict)
    machine2_all_files = defaultdict(dict)

    # 收集数据
    for folder_path, main_dict, all_dict in [
        (folder1, machine1_main_data, machine1_all_files),
        (folder2, machine2_main_data, machine2_all_files)
    ]:
        print(u"处理目录: {}".format(folder_path))
        try:
            for filename in os.listdir(folder_path):
                if not filename.endswith('.txt'):
                    continue

                file_path = os.path.join(folder_path, filename)
                filename_unicode = to_unicode(filename)
                partition_name = get_partition_name(filename_unicode)
                file_data = parse_du_file(file_path)
                all_dict[filename_unicode] = file_data

                if is_main_partition_file(filename_unicode):
                    print(u"解析主分区文件: {}".format(file_path))
                    main_dict[filename_unicode] = file_data
        except OSError as e:
            print(u"目录访问错误: {}".format(str(e)))
            return u"无法访问目录 {}: {}".format(folder_path, str(e))

    # 创建Excel工作簿
    try:
        wb = xlwt.Workbook(encoding='utf-8')

        # 定义样式
        header_style = xlwt.easyxf('font: bold on')
        title_style = xlwt.easyxf('font: bold on, height 280; align: wrap on, vert centre')
        normal_style = xlwt.easyxf()
        added_style = xlwt.easyxf('pattern: pattern solid, fore_colour light_green;')
        removed_style = xlwt.easyxf('pattern: pattern solid, fore_colour rose;')
        summary_style = xlwt.easyxf('font: bold on, color blue;')
        wrap_style = xlwt.easyxf('align: wrap on, vert centre')

        # ====== 创建总览Sheet页 ======
        ws_overview = wb.add_sheet(u'总览')
        print(u"创建总览Sheet页(仅主文件数据)")

        current_row = 0

        # 写入总览标题
        ws_overview.write_merge(
            current_row, current_row, 0, 5,
            u"存储使用总览 (单位: MB)", title_style
        )
        current_row += 1

        # 写入文件夹名称
        ws_overview.write(current_row, 1, folder1_name, header_style)
        ws_overview.write(current_row, 2, folder2_name, header_style)
        current_row += 1

        # 写入表头(增加备注列)
        headers = [u'分区', u'总大小(MB)', u'总大小(MB)', u'差值(MB)', u'标记', u'备注(增大TOP3)']
        for col, header in enumerate(headers):
            ws_overview.write(current_row, col, header, header_style)
        current_row += 1

        # 存储各分区汇总数据
        overview_data = []
        total_machine1 = 0.0
        total_machine2 = 0.0

        # 按分区顺序处理数据
        for partition in PARTITION_ORDER:
            # 查找对应的文件名
            filename = None
            for fname in machine1_main_data:
                if get_partition_name(fname) == partition:
                    filename = fname
                    break
            if not filename:
                continue

            # 获取主文件数据
            data1 = machine1_main_data.get(filename, {})
            data2 = machine2_main_data.get(filename, {})

            # 计算主文件总大小
            partition_total1 = round(sum(data1.values()), 2)
            partition_total2 = round(sum(data2.values()), 2)
            diff = partition_total1 - partition_total2

            # 更新总计
            total_machine1 += partition_total1
            total_machine2 += partition_total2

            # 确定标记样式
            if diff > 0:
                mark = u"增加"
                style = added_style
            elif diff < 0:
                mark = u"减少"
                style = removed_style
            else:
                mark = u"无变化"
                style = normal_style

            # 计算分区中增大的TOP3路径
            top_notes = []
            if diff > 0:  # 只在分区增大时计算TOP路径
                path_diffs = []
                all_paths = set(data1.keys()) | set(data2.keys())
                for path in all_paths:
                    size1 = data1.get(path, 0.0)
                    size2 = data2.get(path, 0.0)
                    path_diff = size1 - size2
                    if path_diff > 0:  # 只记录增大的路径
                        path_diffs.append((path, path_diff))

                # 按增大值降序排序,取TOP3
                path_diffs.sort(key=lambda x: x[1], reverse=True)
                for i, (path, diff_val) in enumerate(path_diffs[:3]):
                    # 截断过长的路径名
                    if len(path) > 50:
                        path = u"..." + path[-47:]
                    top_notes.append(u"{}. {}: +{:.2f}MB".format(i+1, path, diff_val))

            # 保存分区数据
            overview_data.append({
                'name': partition,
                'machine1': partition_total1,
                'machine2': partition_total2,
                'diff': diff,
                'style': style,
                'mark': mark,
                'notes': u"\n".join(top_notes) if top_notes else u"无显著增大路径"
            })

            # 写入行数据到总览页(增加备注列)
            ws_overview.write(current_row, 0, to_unicode(partition), style)
            ws_overview.write(current_row, 1, partition_total1, style)
            ws_overview.write(current_row, 2, partition_total2, style)
            ws_overview.write(current_row, 3, diff, style)
            ws_overview.write(current_row, 4, mark, style)
            ws_overview.write(current_row, 5, overview_data[-1]['notes'], wrap_style)
            current_row += 1

        # 添加空行
        current_row += 1

        # 写入总计行
        total_diff = total_machine1 - total_machine2
        if total_diff > 0:
            total_mark = u"总增加"
            total_style = added_style
        elif total_diff < 0:
            total_mark = u"总减少"
            total_style = removed_style
        else:
            total_mark = u"无变化"
            total_style = normal_style

        ws_overview.write(current_row, 0, u"总计", header_style)
        ws_overview.write(current_row, 1, total_machine1, header_style)
        ws_overview.write(current_row, 2, total_machine2, header_style)
        ws_overview.write(current_row, 3, total_diff, header_style)
        ws_overview.write(current_row, 4, total_mark, header_style)
        ws_overview.write(current_row, 5, u"", header_style)  # 备注列留空

        # 设置备注列宽度(100字符)
        ws_overview.col(5).width = 256 * 100

        # ====== 为每个文件创建单独的Sheet页 ======
        # 获取所有唯一的文件名(两个文件夹的并集)
        all_filenames = sorted(set(machine1_all_files.keys()) | set(machine2_all_files.keys()))

        for filename in all_filenames:
            # 获取分区名称
            partition_name = get_partition_name(filename)

            # 创建Sheet页名称(文件名不带扩展名)
            sheet_name = filename.replace('.txt', '')
            if len(sheet_name) > 31:  # Excel sheet名称长度限制
                sheet_name = sheet_name[:31]

            # 创建Sheet页
            ws = wb.add_sheet(sheet_name)
            print(u"创建文件Sheet页: {}".format(sheet_name))

            # 当前行指针
            current_row = 0

            # 写入分区标题
            title = u"分区: {} - 文件: {} (单位: MB)".format(partition_name, filename)
            ws.write_merge(
                current_row, current_row, 0, 5,
                title, title_style
            )
            current_row += 1

            # 写入文件夹名称
            ws.write_merge(current_row, current_row, 0, 1, folder1_name, header_style)
            ws.write_merge(current_row, current_row, 2, 3, folder2_name, header_style)
            ws.write(current_row, 4, u"差异(MB)", header_style)
            ws.write(current_row, 5, u"标记", header_style)
            current_row += 1

            # 写入表头
            headers = [u'路径', u'大小(MB)', u'路径', u'大小(MB)', u'差异(MB)', u'标记']
            for col, header in enumerate(headers):
                ws.write(current_row, col, header, header_style)
            current_row += 1

            # 获取文件数据
            data1 = machine1_all_files.get(filename, {})
            data2 = machine2_all_files.get(filename, {})

            # 获取所有路径(合并两个文件夹的路径)
            all_paths = sorted(set(data1.keys()) | set(data2.keys()))

            # 初始化变化统计数据
            total_increase = 0.0   # 增大总和
            total_decrease = 0.0   # 减小总和
            total_added = 0.0      # 新增文件总和
            total_removed = 0.0    # 去除文件总和

            # 写入数据行
            for path in all_paths:
                size1 = data1.get(path, 0.0)
                size2 = data2.get(path, 0.0)

                # 计算差值
                diff = size1 - size2

                # 确定标记和样式
                if size1 == 0 and size2 > 0:
                    mark = u"除去"
                    cell_style = removed_style
                    total_removed += size2
                elif size1 > 0 and size2 == 0:
                    mark = u"新增"
                    cell_style = added_style
                    total_added += size1
                else:
                    if diff > 0:
                        mark = u"增大"
                        cell_style = added_style
                        total_increase += diff
                    elif diff < 0:
                        mark = u"减小"
                        cell_style = removed_style
                        total_decrease += abs(diff)
                    else:
                        mark = u"相同"
                        cell_style = normal_style

                # 写入行数据
                # folder1列
                if size1 > 0:
                    ws.write(current_row, 0, path, cell_style)
                    ws.write(current_row, 1, size1, cell_style)
                else:
                    ws.write(current_row, 0, u"", cell_style)
                    ws.write(current_row, 1, u"", cell_style)

                # folder2列
                if size2 > 0:
                    ws.write(current_row, 2, path, cell_style)
                    ws.write(current_row, 3, size2, cell_style)
                else:
                    ws.write(current_row, 2, u"", cell_style)
                    ws.write(current_row, 3, u"", cell_style)

                # 差异和标记列
                ws.write(current_row, 4, diff, cell_style)
                ws.write(current_row, 5, mark, cell_style)
                current_row += 1

            # 添加文件汇总行
            file_total1 = sum(data1.values())
            file_total2 = sum(data2.values())
            file_diff = file_total1 - file_total2

            # 写入汇总行
            ws.write(current_row, 0, u"文件汇总", header_style)
            ws.write(current_row, 1, file_total1, header_style)
            ws.write(current_row, 2, u"", header_style)
            ws.write(current_row, 3, file_total2, header_style)
            ws.write(current_row, 4, file_diff, header_style)
            ws.write(current_row, 5, u"", header_style)
            current_row += 1

            # 添加变化分类统计行
            message = (
                u"{partition_name}路径下: "
                u"减小{total_decrease:.2f}MB "
                u"增大{total_increase:.2f}MB "
                u"新增文件{total_added:.2f}MB "
                u"减少文件{total_removed:.2f}MB"
            ).format(
                partition_name=partition_name,
                total_decrease=total_decrease,
                total_increase=total_increase,
                total_added=total_added,
                total_removed=total_removed
            )
            ws.write_merge(
                current_row, current_row, 0, 5,
                message, summary_style
            )

        # 保存文件
        wb.save(output_xlsx)
        return u"对比报告已成功生成: {}".format(output_xlsx)

    except Exception as e:
        import traceback
        traceback.print_exc()
        return u"生成Excel文件时出错: {}".format(str(e))


def main():
    """主函数:从本地目录读取数据并生成报告"""
    parser = argparse.ArgumentParser(description='设备存储空间对比工具')
    parser.add_argument('--test_dir', required=True, help='测试机数据目录')
    parser.add_argument('--ref_dir', required=True, help='对比机数据目录')
    parser.add_argument('--output', required=True, help='输出Excel文件路径')
    args = parser.parse_args()

    # 确保命令行参数正确转换为Unicode
    test_dir = to_unicode(args.test_dir)
    ref_dir = to_unicode(args.ref_dir)
    output = to_unicode(args.output)

    print(u"=" * 60)
    print(u"开始对比: 测试机({}) vs 对比机({})".format(test_dir, ref_dir))
    print(u"=" * 60)

    result = generate_dual_report(test_dir, ref_dir, output)
    print(result)


if __name__ == "__main__":
    main()
```

这个是解析的脚本,请分析为什么出错。
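补充一个排查方向(仅是推测,需结合实际的报错堆栈确认):脚本里的 `reload(sys)`、`sys.setdefaultencoding('utf-8')` 和 `unicode` 都只在 Python 2 中可用,若用 Python 3 运行会直接抛出 `NameError: name 'reload' is not defined` 或 `name 'unicode' is not defined` 之类的错误。下面是一个示意性的版本检查小片段(非原脚本内容),用于在脚本开头尽早暴露这一问题:

```python
# -*- coding: utf-8 -*-
import sys

# 示意:显式检查解释器版本。Python 3 下 reload 和 unicode 不再是内置名称,
# 继续执行会触发 NameError,这里提前给出可读的提示。
if sys.version_info[0] >= 3:
    raise SystemExit(u"该脚本依赖 reload(sys)/setdefaultencoding/unicode,请改用 Python 2.7 运行")

reload(sys)
sys.setdefaultencoding('utf-8')
```

如果确认是在 Python 2.7 下运行仍然报错,则需要贴出完整的 traceback,再结合 xlwt 是否安装、输出路径是否可写等因素进一步定位。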
