
set-up-core-repo: a project setup action that simplifies installing @teamhive/core dependencies

From the file information provided, we can extract the following IT-related knowledge points:

### Title knowledge points

**set-up-core-repo**:

- **Core repository**: typically the most important code repository in a project, containing the project's central codebase and its dependencies. The concept is widely used in software development; for large projects and software products in particular, maintaining and managing the core repository is critical.
- **Setting up a project**: before development begins, a project usually needs its environment configured, its required dependency libraries installed, and its initial code structure laid out. Here, "setting up" most likely refers to the initial configuration of the core repository, ensuring that the project's basic structure and dependencies are ready.

### Description knowledge points

**What the set-up-core-repo operation does**:

- **Dependency installation**: the action installs dependencies, a critical step in software development. Dependencies are the libraries, frameworks, and other packages a project needs in order to run. The action specifically installs dependencies from a cache, which implies a caching mechanism: dependencies are fetched quickly from a local or remote store instead of being downloaded from scratch every time.
- **Installing @teamhive/core-cli**: this identifier indicates that a core command-line tool (CLI) published by the TeamHive organization is installed. CLIs are widely used in software development to automate tasks such as compiling, testing, and packaging code.
- **The @teamhive/core-actions-common framework**: likely a preset collection of actions or scripts that lets developers deploy common development tasks quickly. Installing private actions on top of this framework implies a private codebase for handling specific tasks, which in turn involves permission management and security policy.
- **Installing non-public package dependencies**: the description states that the action's main benefit is simplifying the installation of non-public @teamhive/core dependencies. The project therefore relies on private or semi-public packages that are not openly available and may require special permissions and authentication to install and use.

### Input knowledge points

**install-dependencies**:

- **Input description**: the action exposes one input, named install-dependencies, which users can set to true or false.
- **Yarn dependency installation**: when the input is true, the action automatically installs Yarn dependencies. Yarn is a popular JavaScript package manager, similar to npm, that reads a project's dependencies from package.json and installs them. A yarn.lock file is likely involved: it records the exact version of every dependency used, keeping installs consistent and reliable. A usage sketch follows below.

### Other knowledge points

- **Composite GitHub action**: the description mentions that this is a composite GitHub action, which points to GitHub Actions, GitHub's workflow-automation feature. A composite action reuses or combines multiple steps (or other actions) into one reusable unit to accomplish more complex automation.
- **TeamHive organization permissions**: the action is noted as being of little use to projects outside the TeamHive organization, which suggests that the action, or the tools and packages it relies on, has specific access requirements. Only authorized members of the TeamHive organization can take full advantage of these capabilities.

### Summary

Based on the given file information, this appears to be a project-setup script targeted at a specific developer organization (TeamHive), with particular emphasis on dependency management and automation tooling. Its purpose is to simplify the setup process for projects that use the organization's internal dependencies and toolchain. It is evidently designed to run on GitHub as part of automated configuration. The private actions and dependencies it references also reflect the organization's attention to code management and access control.
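To make the description concrete, here is a minimal sketch of how a workflow might consume such a composite action. Only the `install-dependencies` input is documented above; the action reference `TeamHive/set-up-core-repo@v1`, the secret name `GH_PACKAGES_TOKEN`, and the registry URL are illustrative assumptions, not confirmed details of the actual repository.

```yaml
# Hypothetical workflow: .github/workflows/ci.yml
name: CI

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate against a private registry so scoped @teamhive/*
      # packages can be resolved (registry URL and secret name are assumed).
      - uses: actions/setup-node@v4
        with:
          node-version: 18
          registry-url: https://npm.pkg.github.com
          scope: '@teamhive'

      # The composite action described above; the version ref is illustrative.
      - uses: TeamHive/set-up-core-repo@v1
        with:
          install-dependencies: true  # the documented input; true runs a yarn install
        env:
          NODE_AUTH_TOKEN: ${{ secrets.GH_PACKAGES_TOKEN }}
```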

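For readers unfamiliar with composite actions, a sketch of what such an action.yml could look like follows. It illustrates the composite-action mechanics (an `inputs` block plus `runs.using: composite` steps with an explicit `shell`), not the actual TeamHive implementation; the cache path and key are assumptions.

```yaml
# Hypothetical action.yml for a composite action of this kind
name: set-up-core-repo
description: Set up a project that depends on private @teamhive/core packages

inputs:
  install-dependencies:
    description: Whether to run yarn install after restoring the cache
    required: false
    default: 'true'

runs:
  using: composite
  steps:
    # Restore yarn's cache keyed on the lockfile so repeat runs skip downloads.
    - uses: actions/cache@v4
      with:
        path: ~/.cache/yarn
        key: yarn-${{ runner.os }}-${{ hashFiles('**/yarn.lock') }}

    # Run steps inside a composite action must declare a shell explicitly.
    - if: inputs.install-dependencies == 'true'
      shell: bash
      run: yarn install --frozen-lockfile
```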
batch_size]).round()) for j, x in enumerate(optimizer.param_groups): # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)]) if 'momentum' in x: x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) # Multi-scale if opt.multi_scale: sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size sf = sz / max(imgs.shape[2:]) # scale factor if sf != 1: ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) # Forward # with torch.cuda.amp.autocast(amp): with torch.amp.autocast(device_type='cuda'): pred = model(imgs) # forward loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size if RANK != -1: loss *= WORLD_SIZE # gradient averaged between devices in DDP mode if opt.quad: loss *= 4. # Backward scaler.scale(loss).backward() # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html if ni - last_opt_step >= accumulate: scaler.unscale_(optimizer) # unscale gradients torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients scaler.step(optimizer) # optimizer.step scaler.update() optimizer.zero_grad() if ema: ema.update(model) last_opt_step = ni # Log if RANK in {-1, 0}: mloss = (mloss * i + loss_items) / (i + 1) # update mean losses mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) pbar.set_description(('%11s' * 2 + '%11.4g' * 5) % (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths, list(mloss)) if callbacks.stop_training: return # end batch ------------------------------------------------------------------------------------------------ # Scheduler lr = [x['lr'] for x in optimizer.param_groups] # for loggers scheduler.step() if RANK in {-1, 0}: # mAP callbacks.run('on_train_epoch_end', epoch=epoch) ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) final_epoch = (epoch + 1 == epochs) or stopper.possible_stop if not noval or final_epoch: # Calculate mAP results, maps, _ = validate.run(data_dict, batch_size=batch_size // WORLD_SIZE * 2, imgsz=imgsz, half=amp, model=ema.ema, single_cls=single_cls, dataloader=val_loader, save_dir=save_dir, plots=False, callbacks=callbacks, compute_loss=compute_loss) # Update best mAP fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, [email protected], [email protected]] stop = stopper(epoch=epoch, fitness=fi) # early stop check if fi > best_fitness: best_fitness = fi log_vals = list(mloss) + list(results) + lr callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi) # Save model if (not nosave) or (final_epoch and not evolve): # if save ckpt = { 'epoch': epoch, 'best_fitness': best_fitness, 'model': deepcopy(de_parallel(model)).half(), 'ema': deepcopy(ema.ema).half(), 'updates': ema.updates, 'optimizer': optimizer.state_dict(), 'opt': vars(opt), 'git': GIT_INFO, # {remote, branch, commit} if a git repo 'date': datetime.now().isoformat()} # Save last, best and delete torch.save(ckpt, last) if best_fitness == fi: torch.save(ckpt, best) if opt.save_period > 0 and epoch % opt.save_period == 0: torch.save(ckpt, w / f'epoch{epoch}.pt') del ckpt callbacks.run('on_model_save', 
last, epoch, final_epoch, best_fitness, fi) # EarlyStopping if RANK != -1: # if DDP training broadcast_list = [stop if RANK == 0 else None] dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks if RANK != 0: stop = broadcast_list[0] if stop: break # must break all DDP ranks # end epoch ---------------------------------------------------------------------------------------------------- # end training ----------------------------------------------------------------------------------------------------- if RANK in {-1, 0}: LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') for f in last, best: if f.exists(): strip_optimizer(f) # strip optimizers if f is best: LOGGER.info(f'\nValidating {f}...') results, _, _ = validate.run( data_dict, batch_size=batch_size // WORLD_SIZE * 2, imgsz=imgsz, model=attempt_load(f, device).half(), iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65 single_cls=single_cls, dataloader=val_loader, save_dir=save_dir, save_json=is_coco, verbose=True, plots=plots, callbacks=callbacks, compute_loss=compute_loss) # val best model with plots if is_coco: callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi) callbacks.run('on_train_end', last, best, epoch, results) torch.cuda.empty_cache() return results def parse_opt(known=False): parser = argparse.ArgumentParser() parser.add_argument('--weights', type=str, default='./weights/yolov5s.pt', help='initial weights path') parser.add_argument('--cfg', type=str, default='./models/yolov5s.yaml', help='model.yaml path') parser.add_argument('--data', type=str, default=r'C:data/AAAA.yaml', help='data.yaml path') parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') parser.add_argument('--epochs', type=int, default=100, help='total training epochs') parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs, -1 for autobatch') parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') parser.add_argument('--rect', action='store_true', help='rectangular training') parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') parser.add_argument('--noval', action='store_true', help='only validate final epoch') parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor') parser.add_argument('--noplots', action='store_true', help='save no plot files') parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk') parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name') parser.add_argument('--name', default='welding_defect_yolov5s_20241101_300', help='save to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--quad', action='store_true', help='quad dataloader') parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') parser.add_argument('--save-period', type=int, default=5, help='Save checkpoint every x epochs (disabled if < 1)') parser.add_argument('--seed', type=int, default=0, help='Global training seed') parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') # Logger arguments parser.add_argument('--entity', default=None, help='Entity') parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='Upload data, "val" option') parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval') parser.add_argument('--artifact_alias', type=str, default='latest', help='Version of dataset artifact to use') return parser.parse_known_args()[0] if known else parser.parse_args() def main(opt, callbacks=Callbacks()): # Checks if RANK in {-1, 0}: print_args(vars(opt)) check_git_status() check_requirements() # Resume (from specified or most recent last.pt) if opt.resume and not check_comet_resume(opt) and not opt.evolve: last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run()) opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml opt_data = opt.data # original dataset if opt_yaml.is_file(): with open(opt_yaml, errors='ignore') as f: d = yaml.safe_load(f) else: d = torch.load(last, map_location='cpu')['opt'] opt = argparse.Namespace(**d) # replace opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate if is_url(opt_data): opt.data = check_file(opt_data) # avoid HUB resume auth timeout else: opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' if opt.evolve: if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve opt.project = str(ROOT / 'runs/evolve') opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume if opt.name == 'cfg': opt.name = Path(opt.cfg).stem # use model.yaml as name opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) 
# DDP mode device = select_device(opt.device, batch_size=opt.batch_size) if LOCAL_RANK != -1: msg = 'is not compatible with YOLOv5 Multi-GPU DDP training' assert not opt.image_weights, f'--image-weights {msg}' assert not opt.evolve, f'--evolve {msg}' assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size' assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' torch.cuda.set_device(LOCAL_RANK) device = torch.device('cuda', LOCAL_RANK) dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") # Train if not opt.evolve: train(opt.hyp, opt, device, callbacks) # Evolve hyperparameters (optional) else: # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) meta = { 'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr 'box': (1, 0.02, 0.2), # box loss gain 'cls': (1, 0.2, 4.0), # cls loss gain 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight 'iou_t': (0, 0.1, 0.7), # IoU training threshold 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) 'scale': (1, 0.0, 0.9), # image scale (+/- gain) 'shear': (1, 0.0, 10.0), # image shear (+/- deg) 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) 'mosaic': (1, 0.0, 1.0), # image mixup (probability) 'mixup': (1, 0.0, 1.0), # image mixup (probability) 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) with open(opt.hyp, errors='ignore') as f: hyp = yaml.safe_load(f) # load hyps dict if 'anchors' not in hyp: # anchors commented in hyp.yaml hyp['anchors'] = 3 if opt.noautoanchor: del hyp['anchors'], meta['anchors'] opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' if opt.bucket: os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}') # download evolve.csv if exists for _ in range(opt.evolve): # generations to evolve if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate # Select parent(s) parent = 'single' # parent selection method: 'single' or 'weighted' x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1) n = min(5, len(x)) # number of previous results to consider x = 
x[np.argsort(-fitness(x))][:n] # top n mutations w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0) if parent == 'single' or len(x) == 1: # x = x[random.randint(0, n - 1)] # random selection x = x[random.choices(range(n), weights=w)[0]] # weighted selection elif parent == 'weighted': x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination # Mutate mp, s = 0.8, 0.2 # mutation probability, sigma npr = np.random npr.seed(int(time.time())) g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1 ng = len(meta) v = np.ones(ng) while all(v == 1): # mutate until a change occurs (prevent duplicates) v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) hyp[k] = float(x[i + 7] * v[i]) # mutate # Constrain to limits for k, v in meta.items(): hyp[k] = max(hyp[k], v[1]) # lower limit hyp[k] = min(hyp[k], v[2]) # upper limit hyp[k] = round(hyp[k], 5) # significant digits # Train mutation results = train(hyp.copy(), opt, device, callbacks) callbacks = Callbacks() # Write mutation results keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss', 'val/obj_loss', 'val/cls_loss') print_mutation(keys, results, hyp.copy(), save_dir, opt.bucket) # Plot results plot_evolve(evolve_csv) LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n' f"Results saved to {colorstr('bold', save_dir)}\n" f'Usage example: $ python train.py --hyp {evolve_yaml}') def run(**kwargs): # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') opt = parse_opt(True) for k, v in kwargs.items(): setattr(opt, k, v) main(opt) return opt if __name__ == "__main__": opt = parse_opt() main(opt) 为什么训练之后,他的runs里面并没有显示best.pt跟last.pt 请查找原因
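Editor's note (not from the original post): in the script above, last.pt and best.pt are written only by the per-epoch save block, guarded by `(not nosave) or (final_epoch and not evolve)`, and they land in `save_dir/weights/`, where `save_dir` is an auto-incremented `runs/train/<name>` directory. So the usual causes are: the run aborted before the first epoch completed (for example during dataset checks; the default `--data` path `r'C:data/AAAA.yaml'` is a drive-relative path and worth double-checking), training was launched with `--nosave` or `--evolve`, or you are inspecting an older increment of the run folder. A minimal diagnostic sketch under those assumptions:

```python
# Minimal sketch: run from the YOLOv5 root; assumes only the
# runs/train/<name>/weights layout produced by the script above.
from pathlib import Path

root = Path('runs/train')
runs = sorted(root.glob('*'), key=lambda p: p.stat().st_mtime) if root.exists() else []
if not runs:
    print('runs/train is empty -> training exited before a run directory was created')
else:
    run = runs[-1]  # most recently modified run folder (names may be incremented)
    print('latest run:', run)
    for name in ('last.pt', 'best.pt'):
        print(f'{name}:', 'found' if (run / 'weights' / name).exists() else 'MISSING')
    opt_yaml = run / 'opt.yaml'  # records the flags the run actually used (nosave, evolve, ...)
    if opt_yaml.exists():
        print(opt_yaml.read_text(errors='ignore'))
```

If the latest run folder exists but `weights/` is empty, check the console log for the first traceback: the checkpoint code only executes once an epoch reaches its end.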


PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch
PS E:\PyTorch_Build\pytorch> # 1. Activate the virtual environment
PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. Fix the conda path (only needs to run once)
(pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = "${env:USERPROFILE}\miniconda3\Scripts"
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:PATH += ";$condaPath"
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. Verify the fix
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version  # should print the conda version
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 1. Install the correct MKL version
(pytorch_env) PS E:\PyTorch_Build\pytorch> pip uninstall -y mkl-static mkl-include
Found existing installation: mkl-static 2024.1.0
Uninstalling mkl-static-2024.1.0:
  Successfully uninstalled mkl-static-2024.1.0
Found existing installation: mkl-include 2024.1.0
Uninstalling mkl-include-2024.1.0:
  Successfully uninstalled mkl-include-2024.1.0
(pytorch_env) PS E:\PyTorch_Build\pytorch> pip install mkl-static==2024.1 mkl-include==2024.1
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting mkl-static==2024.1
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d8/f0/3b9976df82906d8f3244213b6d8beb67cda19ab5b0645eb199da3c826127/mkl_static-2024.1.0-py2.py3-none-win_amd64.whl (220.8 MB)
Collecting mkl-include==2024.1
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/1b/f05201146f7f12bf871fa2c62096904317447846b5d23f3560a89b4bbaae/mkl_include-2024.1.0-py2.py3-none-win_amd64.whl (1.3 MB)
Requirement already satisfied: intel-openmp==2024.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2024.2.1)
Requirement already satisfied: tbb-devel==2021.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2021.13.1)
Requirement already satisfied: intel-cmplr-lib-ur==2024.2.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from intel-openmp==2024.*->mkl-static==2024.1) (2024.2.1)
Requirement already satisfied: tbb==2021.13.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from tbb-devel==2021.*->mkl-static==2024.1) (2021.13.1)
Installing collected packages: mkl-include, mkl-static
Successfully installed mkl-include-2024.1.0 mkl-static-2024.1.0
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. Install libuv
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge libuv=1.46
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. Install OpenSSL
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge openssl=3.1
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 4. Verify the installation
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('MKL version:', mkl.__version__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mkl'
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl"
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify all key components
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('✓ MKL installed')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mkl'
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl"
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> dir "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\cudnn*"
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify environment variables
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import os; print('Environment variable check:');
>> print('CUDNN_PATH:', os.getenv('CUDA_PATH'));
>> print('CONDA_PREFIX:', os.getenv('CONDA_PREFIX'))"
Environment variable check:
CUDNN_PATH: E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0
CONDA_PREFIX: None
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Clean and rebuild
(pytorch_env) PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build
(pytorch_env) PS E:\PyTorch_Build\pytorch> python setup.py install
Building wheel torch-2.9.0a0+git2d31c3d
-- Building version 2.9.0a0+git2d31c3d
E:\PyTorch_Build\pytorch\pytorch_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340)
  warnings.warn(
-- Checkout nccl release tag: v2.27.5-1
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_NUMPY=True E:\PyTorch_Build\pytorch
CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy):
  The OLD behavior for policy CMP0126 will be removed from a future version
  of CMake.

  The cmake-policies(7) manual explains that the OLD behaviors of all
  policies are deprecated and that a policy should be set to OLD only under
  specific short-term circumstances.  Projects should be ported to the NEW
  behavior and not rely on setting a policy to OLD.
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. 
-- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support -- Autodetected CUDA architecture(s): 12.0 CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at pytorch_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module missing components: NumPy CMake Warning at cmake/Dependencies.cmake:826 (message): NumPy could not be found. Not building with NumPy. Suppress this warning with -DUSE_NUMPY=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using Kineto with CUPTI support -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib -- Found CUPTI CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/python.exe (found version "3.10.10") -- ROCM_SOURCE_DIR = -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a CMakeLists.txt file. 
-- don't use NUMA
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR)
-- Autodetected CUDA architecture(s): 12.0
-- Autodetected CUDA architecture(s): 12.0
-- Autodetected CUDA architecture(s): 12.0
-- headers outputs:
torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found
torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found
torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
-- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR)
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Found OpenMP_C: -openmp:experimental (found version "2.0")
-- Found OpenMP_CXX: -openmp:experimental (found version "2.0")
-- Found OpenMP_CUDA: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed
-- Configuring build for SLEEF-v3.8.0
   Target system: Windows-10.0.26100
   Target processor: AMD64
   Host system: Windows-10.0.26100
   Host processor: AMD64
   Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe
   CMake: 4.1.0
   Make program: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/ninja.exe
-- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL :
-- SDE : SDE_COMMAND-NOTFOUND
-- COMPILER_SUPPORTS_OPENMP : FALSE
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h
core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h
CMake Error: File E:/PyTorch_Build/pytorch/torch/_utils_internal.py does not exist.
CMake Error at caffe2/CMakeLists.txt:241 (configure_file):
  configure_file Problem configuring file
CMake Error: File E:/PyTorch_Build/pytorch/torch/csrc/api/include/torch/version.h.in does not exist.
CMake Error at caffe2/CMakeLists.txt:246 (configure_file):
  configure_file Problem configuring file
-- NVSHMEM not found, not building with NVSHMEM support.
CMake Error at caffe2/CMakeLists.txt:1398 (add_subdirectory):
  The source directory E:/PyTorch_Build/pytorch/torch does not contain a CMakeLists.txt file.
CMake Warning at CMakeLists.txt:1285 (message):
  Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested.
--
-- ******** Summary ********
-- General:
--   CMake version : 4.1.0
--   CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe
--   System : Windows
--   C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe
--   C++ compiler id : MSVC
--   C++ compiler version : 19.44.35215.0
--   Using ccache if found : OFF
--   CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273
--   Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
--   Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
--   Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
--   Build type : Release
--   Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC
--   CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0
--   CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch
--   USE_GOLD_LINKER : OFF
--
--   TORCH_VERSION : 2.9.0
--   BUILD_STATIC_RUNTIME_BENCHMARK: OFF
--   BUILD_BINARY : OFF
--   BUILD_CUSTOM_PROTOBUF : ON
--     Link local protobuf : ON
--   BUILD_PYTHON : True
--     Python version : 3.10.10
--     Python executable : E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe
--     Python library : E:/Python310/libs/python310.lib
--     Python includes : E:/Python310/Include
--     Python site-package : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages
--   BUILD_SHARED_LIBS : ON
--   CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
--   BUILD_TEST : True
--   BUILD_JNI : OFF
--   BUILD_MOBILE_AUTOGRAD : OFF
--   BUILD_LITE_INTERPRETER: OFF
--   INTERN_BUILD_MOBILE :
--   TRACING_BASED : OFF
--   USE_BLAS : 0
--   USE_LAPACK : 0
--   USE_ASAN : OFF
--   USE_TSAN : OFF
--   USE_CPP_CODE_COVERAGE : OFF
--   USE_CUDA : ON
--     CUDA static link : OFF
--     USE_CUDNN : OFF
--     USE_CUSPARSELT : OFF
--     USE_CUDSS : OFF
--     USE_CUFILE : OFF
--     CUDA version : 13.0
--     USE_FLASH_ATTENTION : OFF
--     USE_MEM_EFF_ATTENTION : ON
--     CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0
--     CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib
--     cudart library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart.lib
--     cublas library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cublas.lib
--     cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib
--     curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib
--     cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib
--     nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib
--     CUDA include path : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include
--     NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe
--     CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe
--     CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__
--     CUDA host compiler :
--     CUDA --device-c : OFF
--     USE_TENSORRT :
--   USE_XPU : OFF
--   USE_ROCM : OFF
--   BUILD_NVFUSER :
--   USE_EIGEN_FOR_BLAS : ON
--   USE_EIGEN_FOR_SPARSE : OFF
--   USE_FBGEMM : OFF
--   USE_KINETO : ON
--   USE_GFLAGS : OFF
--   USE_GLOG : OFF
--   USE_LITE_PROTO : OFF
--   USE_PYTORCH_METAL : OFF
--   USE_PYTORCH_METAL_EXPORT : OFF
--   USE_MPS : OFF
--   CAN_COMPILE_METAL :
--   USE_MKL : OFF
--   USE_MKLDNN : OFF
--   USE_UCC : OFF
--   USE_ITT : ON
--   USE_XCCL : OFF
--   USE_NCCL : OFF
--   Found NVSHMEM :
--   USE_NNPACK : OFF
--   USE_NUMPY : OFF
--   USE_OBSERVERS : ON
--   USE_OPENCL : OFF
--   USE_OPENMP : ON
--   USE_MIMALLOC : ON
--   USE_MIMALLOC_ON_MKL : OFF
--   USE_VULKAN : OFF
--   USE_PROF : OFF
--   USE_PYTORCH_QNNPACK : OFF
--   USE_XNNPACK : ON
--   USE_DISTRIBUTED : OFF
--   Public Dependencies :
--   Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto
--   Public CUDA Deps. :
--   Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB
--   USE_COREML_DELEGATE : OFF
--   BUILD_LAZY_TS_BACKEND : ON
--   USE_ROCM_KERNEL_ASSERT : OFF
-- Performing Test HAS_WMISSING_PROTOTYPES
-- Performing Test HAS_WMISSING_PROTOTYPES - Failed
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed
-- Configuring incomplete, errors occurred!
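All three fatal errors in this configure log point at the same root cause: files that ship with the PyTorch sources (torch/_utils_internal.py, torch/csrc/api/include/torch/version.h.in, and torch/CMakeLists.txt) are missing from E:/PyTorch_Build/pytorch/torch, so the checkout is incomplete or was partially deleted. This is not a CUDA or cuDNN problem. A minimal recovery sketch, assuming the tree is an ordinary git clone and the build is driven by setup.py:

```powershell
# Minimal sketch, assuming E:\PyTorch_Build\pytorch is a git clone.
cd E:\PyTorch_Build\pytorch

# Confirm the exact files the CMake errors named are really missing.
Test-Path .\torch\_utils_internal.py
Test-Path .\torch\csrc\api\include\torch\version.h.in
Test-Path .\torch\CMakeLists.txt

# Restore the working tree and the bundled third-party submodules.
git checkout -- torch
git submodule sync
git submodule update --init --recursive

# Drop the stale CMake cache before reconfiguring.
Remove-Item -Recurse -Force .\build -ErrorAction SilentlyContinue
python setup.py develop
```

If the directory is not a git clone (e.g. an unpacked source zip), re-extracting the archive over the tree achieves the same repair.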
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Permanently fix the unavailable conda command
(pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPaths = @(
>>     "$env:USERPROFILE\miniconda3\Scripts",
>>     "$env:USERPROFILE\anaconda3\Scripts",
>>     "C:\ProgramData\miniconda3\Scripts"
>> )
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> foreach ($path in $condaPaths) {
>>     if (Test-Path $path) {
>>         $env:PATH = "$path;$env:PATH"
>>         [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine")
>>         break
>>     }
>> }
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify the fix
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Set the cuDNN v9.12 path
(pytorch_env) PS E:\PyTorch_Build\pytorch> $cudnnPath = "E:\Program Files\NVIDIA\CUNND\v9.12"
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Add to environment variables
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_ROOT_DIR = $cudnnPath
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$cudnnPath\include"
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$cudnnPath\lib\x64\cudnn.lib"
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Persist permanently
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_ROOT_DIR", $cudnnPath, "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_INCLUDE_DIR", "$cudnnPath\include", "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_LIBRARY", "$cudnnPath\lib\x64\cudnn.lib", "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch> # The original code sits at roughly line 190
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Replace it with the following to force v9.12:
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_VERSION "9.12.0")  # Manually pin the version
CUDNN_VERSION: The term 'CUDNN_VERSION' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_FOUND TRUE)
CUDNN_FOUND: The term 'CUDNN_FOUND' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_INCLUDE_DIR $ENV{CUDNN_INCLUDE_DIR})
InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_LIBRARY $ENV{CUDNN_LIBRARY})
InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "Using manually configured cuDNN v${CUDNN_VERSION}")
InvalidOperation: The variable '$CUDNN_VERSION' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "  Include path: ${CUDNN_INCLUDE_DIR}")
InvalidOperation: The variable '$CUDNN_INCLUDE_DIR' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "  Library path: ${CUDNN_LIBRARY}")
InvalidOperation: The variable '$CUDNN_LIBRARY' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Search precisely for conda.bat
(pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = Get-ChildItem -Path C:\ -Recurse -Filter conda.bat -ErrorAction SilentlyContinue |
>>     Select-Object -First 1 |
>>     ForEach-Object { $_.DirectoryName }
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> if ($condaPath) {
>>     $env:PATH = "$condaPath;$env:PATH"
>>     [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine")
>>     Write-Host "Conda found at: $condaPath" -ForegroundColor Green
>> } else {
>>     Write-Host "Conda not found! Installing miniconda..." -ForegroundColor Yellow
>>     # Automatically install Miniconda
>>     Invoke-WebRequest -Uri "https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe" -OutFile "$env:TEMP\miniconda.exe"
>>     Start-Process -FilePath "$env:TEMP\miniconda.exe" -ArgumentList "/S", "/AddToPath=1", "/InstallationType=AllUsers", "/D=C:\Miniconda3" -Wait
>>     $env:PATH = "C:\Miniconda3\Scripts;$env:PATH"
>> }
Conda not found! Installing miniconda...
/AddToPath=1 is disabled and ignored in 'All Users' installations
Welcome to Miniconda3 py313_25.7.0-2
By continuing this installation you are accepting this license agreement:
C:\Miniconda3\EULA.txt
Please run the installer in GUI mode to read the details.
Miniconda3 will now be installed into this location: C:\Miniconda3
Unpacking payload...
Setting up the package cache...
Setting up the base environment...
Installing packages for base, creating shortcuts if necessary...
Initializing conda directories...
Setting installation directory permissions...
Done!
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch>
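Two distinct problems are visible in this transcript. First, set(...) and message(STATUS ...) are CMake language, not PowerShell: typed at the prompt they are parsed as PowerShell commands, which is exactly why each one fails with "is not recognized" or "The variable '$ENV' cannot be retrieved". Those lines belong in the CMake file the "line 190" comment refers to; alternatively the same values can be handed to the configure step as -D cache variables. Second, conda stays unrecognized because none of the probed Scripts directories existed, and a Machine-scope PATH written with [Environment]::SetEnvironmentVariable only reaches shells started afterwards, so after the silent Miniconda install the current session still has to refresh its own $env:PATH. A hedged sketch of both fixes follows; the variable names simply reuse the transcript's own CUDNN_INCLUDE_DIR/CUDNN_LIBRARY (check cmake/Modules/FindCUDNN.cmake in the tree for the exact names your checkout expects), and note that the transcript's cuDNN path spells "CUNND", which is worth double-checking on disk:

```powershell
# Sketch: pass the cuDNN hints as CMake cache variables instead of typing
# CMake syntax into PowerShell. The folder below is an assumption based on
# the transcript (which spells it "CUNND" -- verify the real directory name).
$cudnnPath = "E:\Program Files\NVIDIA\CUDNN\v9.12"

cmake -S E:\PyTorch_Build\pytorch -B E:\PyTorch_Build\pytorch\build -G Ninja `
    -DUSE_CUDNN=ON `
    -DCUDNN_INCLUDE_DIR="$cudnnPath\include" `
    -DCUDNN_LIBRARY="$cudnnPath\lib\x64\cudnn.lib"

# Make the freshly installed Miniconda visible to *this* session as well;
# Machine-scope PATH changes only affect shells started afterwards.
$env:PATH = "C:\Miniconda3;C:\Miniconda3\Scripts;C:\Miniconda3\condabin;$env:PATH"
conda --version
```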


2025-07-01 15:17:22,311 INFO o.j.r.JARSourceHTTP: Found cached repo
2025-07-01 15:17:22,536 INFO o.j.r.PluginManager: Plugins Status: [jpgc-plugins-manager=1.11, jmeter-core=5.6.3, jmeter-ftp=5.6.3, jmeter-http=5.6.3, jmeter-jdbc=5.6.3, jmeter-jms=5.6.3, jmeter-junit=5.6.3, jmeter-java=5.6.3, jmeter-ldap=5.6.3, jmeter-mail=5.6.3, jmeter-mongodb=5.6.3, jmeter-native=5.6.3, jmeter-tcp=5.6.3, jmeter-components=5.6.3]
2025-07-01 15:17:32,191 INFO o.a.j.s.FileServer: Default base='D:\日常软件\H.Jmeter\apache-jmeter-5.6.3\bin'
2025-07-01 15:17:32,193 INFO o.a.j.g.a.Load: Loading file: D:\日常软件\H.Jmeter\查看结果树.jmx
2025-07-01 15:17:32,193 INFO o.a.j.s.FileServer: Set new base='D:\日常软件\H.Jmeter'
2025-07-01 15:17:32,240 INFO o.a.j.s.SaveService: Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2
2025-07-01 15:17:32,240 INFO o.a.j.s.SaveService: Using SaveService properties file encoding UTF-8
2025-07-01 15:17:32,240 INFO o.a.j.s.SaveService: Using SaveService properties version 5.0
2025-07-01 15:17:32,256 INFO o.a.j.s.SaveService: Loading file: D:\日常软件\H.Jmeter\查看结果树.jmx
2025-07-01 15:17:32,319 INFO o.a.j.p.h.s.HTTPSamplerBase: Parser for text/html is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2025-07-01 15:17:32,319 INFO o.a.j.p.h.s.HTTPSamplerBase: Parser for application/xhtml+xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2025-07-01 15:17:32,319 INFO o.a.j.p.h.s.HTTPSamplerBase: Parser for application/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2025-07-01 15:17:32,319 INFO o.a.j.p.h.s.HTTPSamplerBase: Parser for text/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2025-07-01 15:17:32,319 INFO o.a.j.p.h.s.HTTPSamplerBase: Parser for text/vnd.wap.wml is org.apache.jmeter.protocol.http.parser.RegexpHTMLParser
2025-07-01 15:17:32,319 INFO o.a.j.p.h.s.HTTPSamplerBase: Parser for text/css is org.apache.jmeter.protocol.http.parser.CssParser
2025-07-01 15:17:32,557 INFO o.a.j.s.SampleResult: Note: Sample TimeStamps are START times
2025-07-01 15:17:32,557 INFO o.a.j.s.SampleResult: sampleresult.default.encoding is set to UTF-8
2025-07-01 15:17:32,557 INFO o.a.j.s.SampleResult: sampleresult.useNanoTime=true
2025-07-01 15:17:32,557 INFO o.a.j.s.SampleResult: sampleresult.nanoThreadSleep=5000
2025-07-01 15:17:32,636 INFO o.a.j.r.ClassFinder: Will scan jar D:\日常软件\H.Jmeter\apache-jmeter-5.6.3\lib\ext\jmeter-plugins-manager-1.11.jar with filter ExtendsClassFilter [parents=[interface org.apache.jmeter.visualizers.ResultRenderer], inner=false, contains=null, notContains=null]. Consider exposing JMeter plugins via META-INF/services, and add JMeter-Skip-Class-Scanning=true manifest attribute so JMeter can skip classfile scanning
2025-07-01 15:17:32,636 INFO o.a.j.r.ClassFinder: Will scan jar D:\日常软件\H.Jmeter\apache-jmeter-5.6.3\lib\ext\jmeter-plugins-manager-1.11.jar with filter ExtendsClassFilter [parents=[interface org.apache.jmeter.visualizers.RequestView], inner=false, contains=null, notContains=null]. Consider exposing JMeter plugins via META-INF/services, and add JMeter-Skip-Class-Scanning=true manifest attribute so JMeter can skip classfile scanning
2025-07-01 15:17:32,668 INFO o.a.j.s.FileServer: Set new base='D:\日常软件\H.Jmeter'
2025-07-01 15:17:37,105 INFO o.a.j.e.StandardJMeterEngine: Running the test!
2025-07-01 15:17:37,105 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2025-07-01 15:17:37,105 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2025-07-01 15:17:37,105 INFO o.a.j.e.u.CompoundVariable: Note: Function class names must contain the string: '.functions.'
2025-07-01 15:17:37,105 INFO o.a.j.e.u.CompoundVariable: Note: Function class names must not contain the string: '.gui.'
2025-07-01 15:17:37,254 INFO o.a.j.r.ClassFinder: Will scan jar D:\日常软件\H.Jmeter\apache-jmeter-5.6.3\lib\ext\jmeter-plugins-manager-1.11.jar with filter ExtendsClassFilter [parents=[interface org.apache.jmeter.functions.Function], inner=false, contains=null, notContains=null]. Consider exposing JMeter plugins via META-INF/services, and add JMeter-Skip-Class-Scanning=true manifest attribute so JMeter can skip classfile scanning
2025-07-01 15:17:37,254 INFO o.a.j.g.u.JMeterMenuBar: setRunning(true, *local*)
2025-07-01 15:17:37,254 INFO o.a.j.e.StandardJMeterEngine: Starting setUp thread groups
2025-07-01 15:17:37,254 INFO o.a.j.e.StandardJMeterEngine: Starting setUp ThreadGroup: 1 : tbd 线程组
2025-07-01 15:17:37,254 INFO o.a.j.e.StandardJMeterEngine: Starting 1 threads for group tbd 线程组.
2025-07-01 15:17:37,254 INFO o.a.j.e.StandardJMeterEngine: Thread will continue on error
2025-07-01 15:17:37,254 INFO o.a.j.t.ThreadGroup: Starting thread group... number=1 threads=1 ramp-up=1 delayedStart=false
2025-07-01 15:17:37,270 INFO o.a.j.t.ThreadGroup: Started thread group number 1
2025-07-01 15:17:37,270 INFO o.a.j.e.StandardJMeterEngine: Waiting for all setup thread groups to exit
2025-07-01 15:17:37,270 INFO o.a.j.t.JMeterThread: Thread started: tbd 线程组 1-1
2025-07-01 15:17:37,270 INFO o.a.j.p.h.s.HTTPHCAbstractImpl: Local host = jizhi
2025-07-01 15:17:37,285 INFO o.a.j.p.h.s.HTTPHC4Impl: HTTP request retry count = 0
2025-07-01 15:17:37,424 ERROR o.a.j.u.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``// for use in a JSR223 Sampler import org.apache.http.client.methods.HttpPost; import org . . . '' : Typed variable declaration : Object constructor
2025-07-01 15:17:37,424 WARN o.a.j.e.BeanShellPostProcessor: Problem in BeanShell script: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``// for use in a JSR223 Sampler import org.apache.http.client.methods.HttpPost; import org . . . '' : Typed variable declaration : Object constructor
2025-07-01 15:17:37,460 INFO o.a.j.t.JMeterThread: Thread is done: tbd 线程组 1-1
2025-07-01 15:17:37,460 INFO o.a.j.t.JMeterThread: Thread finished: tbd 线程组 1-1
2025-07-01 15:17:37,475 INFO o.a.j.e.StandardJMeterEngine: All Setup Threads have ended
2025-07-01 15:17:37,539 INFO o.a.j.e.StandardJMeterEngine: No enabled thread groups found
2025-07-01 15:17:37,539 INFO o.a.j.e.StandardJMeterEngine: Notifying test listeners of end of test
2025-07-01 15:17:37,539 INFO o.a.j.g.u.JMeterMenuBar: setRunning(false, *local*)
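The only failure in this run is the BeanShell post-processor. BeanShell's interpreter predates Java 5, so a script that uses generics, the diamond operator, or other newer constructs dies at parse time with exactly this "Typed variable declaration" eval error; the script's own header comment says it was written for a JSR223 sampler, where the Groovy engine would accept such syntax, so moving it into a JSR223 element with Groovy selected is the usual fix. To capture the full stack trace rather than the GUI's one-line summary, the same plan can be re-run in non-GUI mode with a dedicated log file; a sketch using the paths from the log above:

```powershell
# Sketch: non-GUI re-run of the same test plan so the complete BeanShell
# stack trace lands in run.log (paths taken from the JMeter log above).
& "D:\日常软件\H.Jmeter\apache-jmeter-5.6.3\bin\jmeter.bat" `
    -n `
    -t "D:\日常软件\H.Jmeter\查看结果树.jmx" `
    -l "D:\日常软件\H.Jmeter\result.jtl" `
    -j "D:\日常软件\H.Jmeter\run.log"
```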


"C:\Program Files\Java\jdk-1.8\bin\java.exe" -Dpandora.location=E:\apache-maven-3.8.1-bin\taobao-hsf.sar-dev-SNAPSHOT.jar -Xmx1g -XX:TieredStopAtLevel=1 -noverify -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true "-Dmanagement.endpoints.jmx.exposure.include=*" "-javaagent:E:\idea\IntelliJ IDEA 2023.2\lib\idea_rt.jar=53638:E:\idea\IntelliJ IDEA 2023.2\bin" -Dfile.encoding=UTF-8 -classpath C:\Users\Lplayer\AppData\Local\Temp\classpath1138593618.jar com.insigma.InsiisWebApplication fail to download http://mvnrepo.alibaba-inc.com/mvn/repository/com/alibaba/citrus/tool/antx-autoconfig/1.2-jdk9/antx-autoconfig-1.2-jdk9.jar to C:\Users\Lplayer\.autoconf\autoconf-1.2-jdk9.jar C:\Users\Lplayer\.autoconf\autoconf-1.2-jdk9.jar doesn't exist. ____ _ ____ _ | _ \ __ _ _ __ __| | ___ _ __ __ _ | __ ) ___ ___ | |_ | |_) / _` | '_ \ / _` |/ _ \| '__/ _` | | _ \ / _ \ / _ \| __| | __/ (_| | | | | (_| | (_) | | | (_| | | |_) | (_) | (_) | |_ |_| \__,_|_| |_|\__,_|\___/|_| \__,_| |____/ \___/ \___/ \__| :: Pandora Boot :: 2.1.9.1 Set log4j.defaultInitOverride to true. JM.Log:INFO Init JM logger with Log4jLoggerFactory JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ JM.Log:INFO Set pandora log path: C:\Users\Lplayer\logs\pandora JM.Log:INFO Init JM logger with Log4jLoggerFactory JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ JM.Log:INFO Set pandora log path: C:\Users\Lplayer\logs\pandora JM.Log:INFO Init JM logger with Log4jLoggerFactory JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ JM.Log:INFO Set pandolet log path: C:\Users\Lplayer\logs\pandolet INFO: spas-client-initializer init Wed Aug 06 09:00:34 CST 2025 spas-sdk-client's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, spas-sdk-client's ModuleClassLoader Wed Aug 06 09:00:34 CST 2025 spas-sdk-client's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ Wed Aug 06 09:00:34 CST 2025 spas-sdk-client's ModuleClassLoader JM.Log:INFO Set spas log path: C:\Users\Lplayer\logs\spas Wed Aug 06 09:00:34 CST 2025 eagleeye-core's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, eagleeye-core's ModuleClassLoader Wed Aug 06 09:00:34 CST 2025 eagleeye-core's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ Wed Aug 06 09:00:34 CST 2025 eagleeye-core's ModuleClassLoader JM.Log:INFO Set metrics log path: C:\Users\Lplayer\logs\metrics Wed Aug 06 09:00:35 CST 2025 vipserver-client's ModuleClassLoader JM.Log:INFO Init JM logger with Log4jLoggerFactory, vipserver-client's ModuleClassLoader Wed Aug 06 09:00:35 CST 2025 vipserver-client's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ Wed Aug 06 09:00:35 CST 2025 vipserver-client's ModuleClassLoader JM.Log:INFO Set vipsrv-logs log path: C:\Users\Lplayer\logs\vipsrv-logs Wed Aug 06 09:00:35 CST 2025 monitor's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, monitor's ModuleClassLoader Wed Aug 06 09:00:35 CST 2025 monitor's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ Wed Aug 06 09:00:35 CST 2025 monitor's ModuleClassLoader JM.Log:INFO Set tomcat-monitor log path: C:\Users\Lplayer\logs\tomcat-monitor Wed Aug 06 09:00:35 CST 2025 hsf's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, hsf's ModuleClassLoader Wed Aug 06 09:00:35 CST 2025 hsf's ModuleClassLoader 
JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ Wed Aug 06 09:00:35 CST 2025 hsf's ModuleClassLoader JM.Log:INFO Set hsf log path: C:\Users\Lplayer\logs\hsf SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/E:/apache-maven-3.8.1-bin/taobao-hsf.sar-dev-SNAPSHOT.jar!/plugins/hsf!/lib/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/E:/apache-maven-3.8.1-bin/taobao-hsf.sar-dev-SNAPSHOT.jar!/plugins/hsf!/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder] *******************HSF PORT:12200 ************** *************Pandora QOS PORT:12201 ************** *************Tomcat Monitor Port:8006 ************** *******************Sentinel PORT:8719 ************** log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. **************************************************************************************** ** ** ** Pandora Container ** ** ** ** Pandora Host: 10.66.242.118 ** ** Pandora Version: 2.1.4 ** ** SAR Version: edas.sar.V3.5.3 ** ** Package Time: 2025-08-06 09:00:32 ** ** ** ** Plug-in Modules: 19 ** ** ** ** metrics .......................................... 1.7.0 ** ** edas-assist ...................................... 2.0 ** ** pandora-qos-service .............................. edas215 ** ** pandolet ......................................... 1.0.0 ** ** spas-sdk-client .................................. 1.3.0 ** ** eagleeye-core .................................... 1.7.10.1 ** ** tddl-driver ...................................... 1.0.5-SNAPSHOT ** ** vipserver-client ................................. 4.7.9-SNAPSHOT ** ** diamond-client ................................... 3.8.10 ** ** configcenter-client .............................. 1.0.3 ** ** spas-sdk-service ................................. 1.3.0 ** ** dpath ............................................ 1.4 ** ** config-client .................................... 1.9.6 ** ** unitrouter ....................................... 1.0.11 ** ** monitor .......................................... 1.2.3-SNAPSHOT ** ** sentinel-plugin .................................. 2.12.12-edas ** ** ons-client ....................................... 1.8.0-EagleEye ** ** hsf .............................................. 2.2.7.3.1-TLS ** ** pandora-framework ................................ 2.0.8 ** ** ** ** [WARNING] All these plug-in modules will override maven pom.xml dependencies. ** ** More: http://gitlab.alibaba-inc.com/middleware-container/pandora/wikis/home ** ** ** **************************************************************************************** INFO: spas-client-initializer start JM.Log:INFO Init JM logger with Log4jLoggerFactory JM.Log:INFO Log root path: C:\Users\Lplayer\logs\ JM.Log:INFO Set pandora log path: C:\Users\Lplayer\logs\pandora Init available components Scanning for available components in the runtime Starting available components Skip ProjectInfoInitializer. 
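The SLF4J warning above is worth noting: both StaticLoggerBinder copies (logback-classic 1.2.3 and slf4j-log4j12 1.6.1) live inside the Pandora sar's own hsf plugin, not in the project's pom, and the banner explicitly says the plug-in modules override maven dependencies, so a pom exclusion cannot remove them. A small read-only diagnostic sketch to enumerate what the sar actually bundles (the name filter is only a guess at the interesting entries):

```powershell
# Sketch: list the logging jars packed inside the Pandora sar referenced above.
Add-Type -AssemblyName System.IO.Compression.FileSystem
$sar = [IO.Compression.ZipFile]::OpenRead("E:\apache-maven-3.8.1-bin\taobao-hsf.sar-dev-SNAPSHOT.jar")
$sar.Entries |
    Where-Object { $_.FullName -match 'slf4j|logback|log4j' } |
    ForEach-Object { $_.FullName }
$sar.Dispose()
```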
2025-08-06 09:00:37.814 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$dd14cfb2] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:00:37.845 INFO 3324 --- [ main] c.a.c.c.acm.AliCloudAcmInitializer : Initialize acm from acm configuration.
[Spring Boot ASCII-art banner]
:: Spring Boot :: (v2.1.4.RELEASE)
Wed Aug 06 09:00:38 CST 2025 diamond-client's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, diamond-client's ModuleClassLoader
Wed Aug 06 09:00:38 CST 2025 diamond-client's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\
Wed Aug 06 09:00:38 CST 2025 diamond-client's ModuleClassLoader JM.Log:INFO Set diamond-client log path: C:\Users\Lplayer\logs\diamond-client
09:00:38.851 [main] INFO c.t.d.identify.CredentialWatcher - [] [] [] No credential found
2025-08-06 09:00:38.961 INFO 3324 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource {name='diamond', propertySources=[]}
Skip ProjectInfoInitializer.
2025-08-06 09:00:38.981 INFO 3324 --- [ main] com.insigma.InsiisWebApplication : The following profiles are active: redis,datasource,security,mybatis,async
2025-08-06 09:00:42.299 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2025-08-06 09:00:42.299 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2025-08-06 09:00:42.810 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 503ms. Found 21 repository interfaces.
2025-08-06 09:00:42.817 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2025-08-06 09:00:42.817 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2025-08-06 09:00:42.981 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.Aa26Repository.
2025-08-06 09:00:42.982 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.MenuRepository.
2025-08-06 09:00:42.982 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.RoleRepository.
2025-08-06 09:00:42.983 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysCodeRepository.
2025-08-06 09:00:42.983 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysErrorRepository.
2025-08-06 09:00:42.983 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysHolidayRepository.
2025-08-06 09:00:42.983 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysIdMappingRespository.
2025-08-06 09:00:42.984 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysOrgRepository.
2025-08-06 09:00:42.984 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysRoleFunctionRepository.
2025-08-06 09:00:42.984 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysUserAreaRepository.
2025-08-06 09:00:42.985 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysUserRepository.
2025-08-06 09:00:42.986 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysUserRoleRepository.
2025-08-06 09:00:42.986 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.drugs.repository.ProtalDrugsRepository.
2025-08-06 09:00:42.986 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.framework.oplog.repository.OpLogRepository.
2025-08-06 09:00:42.986 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.framework.commons.repository.SysOperateLogRepository.
2025-08-06 09:00:42.987 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.framework.oplog.repository.OpLogFormRepository.
2025-08-06 09:00:42.987 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.web.support.dao.Aa01Repository.
2025-08-06 09:00:42.989 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.web.support.dao.CodeTypeRepository.
2025-08-06 09:00:42.989 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.web.support.dao.MdParamRepository.
2025-08-06 09:00:42.989 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.framework.web.securities.repository.SysLogonLogRepository.
2025-08-06 09:00:43.026 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.insigma.framework.web.securities.repository.SysLogonLogRepository.
2025-08-06 09:00:43.027 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 210ms. Found 0 repository interfaces.
2025-08-06 09:00:43.038 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2025-08-06 09:00:43.039 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2025-08-06 09:00:43.205 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.Aa26Repository.
2025-08-06 09:00:43.205 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.MenuRepository.
2025-08-06 09:00:43.205 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.RoleRepository.
2025-08-06 09:00:43.205 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysCodeRepository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysErrorRepository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysHolidayRepository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysIdMappingRespository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysOrgRepository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysRoleFunctionRepository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysUserAreaRepository.
2025-08-06 09:00:43.206 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysUserRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.sys.repository.SysUserRoleRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.drugs.repository.ProtalDrugsRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.framework.oplog.repository.OpLogRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.framework.commons.repository.SysOperateLogRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.framework.oplog.repository.OpLogFormRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.web.support.dao.Aa01Repository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.web.support.dao.CodeTypeRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.web.support.dao.MdParamRepository.
2025-08-06 09:00:43.207 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.framework.web.securities.repository.SysLogonLogRepository.
2025-08-06 09:00:43.231 INFO 3324 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data Redis - Could not safely identify store assignment for repository candidate interface com.insigma.framework.web.securities.repository.SysLogonLogRepository.
2025-08-06 09:00:43.231 INFO 3324 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 184ms. Found 0 repository interfaces.
2025-08-06 09:00:43.414 WARN 3324 --- [ main] o.m.s.mapper.ClassPathMapperScanner : Skipping MapperFactoryBean with name 'caseInfoDDao' and 'com.insigma.business.bigdata.dao.CaseInfoDDao' mapperInterface. Bean already defined with the same name!
2025-08-06 09:00:43.515 INFO 3324 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=715b0589-9638-3765-9a2d-50e339059fc9
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "queryAdmdvsService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "queryDataDicService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "roleAuthInfoService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "admrolService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "orguntService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "unitService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "userService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "sysUactService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "resuService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "bizrolService" in spring context.
2025-08-06 09:00:43.637 INFO 3324 --- [ main] c.a.b.h.c.HsfConsumerPostProcessor : registered HSFConsumerBean "publicFeedetlChkDetMgtService" in spring context.
Wed Aug 06 09:00:43 CST 2025 dpath's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, dpath's ModuleClassLoader
Wed Aug 06 09:00:43 CST 2025 dpath's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\
Wed Aug 06 09:00:43 CST 2025 dpath's ModuleClassLoader JM.Log:INFO Set dpath log path: C:\Users\Lplayer\logs\dpath
Wed Aug 06 09:00:43 CST 2025 dpath's ModuleClassLoader JM.Log:INFO Can't find method for class ch.qos.logback.classic.AsyncAppender setMaxFlushTime 3000
Wed Aug 06 09:00:43 CST 2025 dpath's ModuleClassLoader JM.Log:INFO Can't find method for class ch.qos.logback.classic.AsyncAppender setNeverBlock true
Wed Aug 06 09:00:46 CST 2025 config-client's ModuleClassLoader JM.Log:INFO Init JM logger with Slf4jLoggerFactory success, config-client's ModuleClassLoader
Wed Aug 06 09:00:46 CST 2025 config-client's ModuleClassLoader JM.Log:INFO Log root path: C:\Users\Lplayer\logs\
09:00:46.888 [HSF-Framework-ExportRefer-14-thread-1] INFO ConfigClientLogger - [] [] [] JM_CC_LOG_RETAIN_COUNT:6, JM_LOG_FILE_SIZE:200MB
Wed Aug 06 09:00:46 CST 2025 config-client's ModuleClassLoader JM.Log:INFO Set configclient log path: C:\Users\Lplayer\logs\configclient
2025-08-06 09:00:49.973 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'queryAdmdvsService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:00:52.994 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'queryDataDicService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:00:56.018 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'roleAuthInfoService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:00:59.040 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'admrolService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:02.062 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'orguntService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:05.082 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'unitService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:08.103 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'userService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:11.121 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'sysUactService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:14.146 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'resuService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:17.170 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'bizrolService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.193 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'publicFeedetlChkDetMgtService' of type [com.taobao.hsf.app.spring.util.HSFSpringConsumerBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.270 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$c0faccb5] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.378 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'redisConfig' of type [com.insigma.web.support.redis.RedisConfig$$EnhancerBySpringCGLIB$$20be0ef0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.401 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'multipleDataSourceConfig' of type [com.insigma.hsaf.common.config.MultipleDataSourceConfig$$EnhancerBySpringCGLIB$$1e06f17d] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.432 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'spring.datasource-org.springframework.boot.autoconfigure.jdbc.DataSourceProperties' of type [org.springframework.boot.autoconfigure.jdbc.DataSourceProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.475 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'com.alibaba.druid.spring.boot.autoconfigure.stat.DruidFilterConfiguration' of type [com.alibaba.druid.spring.boot.autoconfigure.stat.DruidFilterConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.492 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'statFilter' of type [com.alibaba.druid.filter.stat.StatFilter] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.597 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'wallConfig' of type [com.alibaba.druid.wall.WallConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.616 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'wallFilter' of type [com.alibaba.druid.wall.WallFilter] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:20.676 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'slf4jLogFilter' of type [com.alibaba.druid.filter.logging.Slf4jLogFilter] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Wed Aug 06 09:01:20 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:20 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:20 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:20 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:21 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2025-08-06 09:01:21.030 INFO 3324 --- [ main] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
2025-08-06 09:01:21.030 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'primaryDataSource' of type [com.insigma.hsaf.common.config.MultipleDataSourceConfig$DruidDataSourceWrapper] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.074 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker' of type [org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.086 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'spring.jdbc-org.springframework.boot.autoconfigure.jdbc.JdbcProperties' of type [org.springframework.boot.autoconfigure.jdbc.JdbcProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.087 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.boot.autoconfigure.jdbc.JdbcTemplateAutoConfiguration$JdbcTemplateConfiguration' of type [org.springframework.boot.autoconfigure.jdbc.JdbcTemplateAutoConfiguration$JdbcTemplateConfiguration$$EnhancerBySpringCGLIB$$d985fb22] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.097 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'jdbcTemplate' of type [org.springframework.jdbc.core.JdbcTemplate] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.117 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'odinLogServiceImpl' of type [com.insigma.framework.log.service.impl.OdinLogServiceImpl] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.136 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'odinLogAspect' of type [com.insigma.framework.log.OdinLogAspect] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.144 INFO 3324 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$dd14cfb2] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2025-08-06 09:01:21.458 INFO 3324 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9100 (http)
Aug 06, 2025 9:01:21 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-9100"]
log4j:WARN No appenders could be found for logger (org.apache.coyote.http11.Http11NioProtocol).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.StandardService startInternal
INFO: Starting service [Tomcat]
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet engine: [Apache Tomcat/9.0.17]
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: Loaded APR based Apache Tomcat Native library [1.3.1] using APR version [1.7.4].
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.AprLifecycleListener initializeSSL
INFO: OpenSSL successfully initialized [OpenSSL 3.0.14 4 Jun 2024]
Aug 06, 2025 9:01:21 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring embedded WebApplicationContext
2025-08-06 09:01:21.555 INFO 3324 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 42560 ms
2025-08-06 09:01:21.567 INFO 3324 --- [ main] c.i.f.system.safety.SysSafetyProperties : sql: SysSafetyProperties.SQL(badstr=null)
System is starting!
UTF-8: one Chinese character occupies 3 bytes
2025-08-06 09:01:22.651 INFO 3324 --- [ main] c.i.o.f.safe.validate.SignatureValidate : li: anytomcat2023-03-04anyunlimitedinsiisunlimited安徽省医保核心配置文件anyWindowsTf-c4-d3-UZ-d3-1646357567308502[][]10安徽省医保(核三框架)DEVELOP[{}]rule=0, version=6.0delay=0, extend=, format=2, product=insiis, release=1.0
2025-08-06 09:01:22.652 INFO 3324 --- [ main] c.i.o.f.safe.validate.SignatureValidate : 9180cd3c218c8736c1a23333546bff22f4017978
2025-08-06 09:01:22.666 INFO 3324 --- [ main] c.i.o.f.safe.ValidateContExecute : Apache Tomcat/9.0.17 Your core configuration file has expired! Please replace it promptly. Your core config have expired! Please update it!
2025-08-06 09:01:23.038 INFO 3324 --- [ main] c.i.o.f.safe.ValidateContExecute : IP: [127.0.0.1, 0:0:0:0:0:0:0:1, fe80:0:0:0:cb88:c7b3:57ec:13b3%eth3, fe80:0:0:0:6d46:2122:72d0:9648%eth4, 10.66.242.118, fe80:0:0:0:b0b9:6640:37a:c4b2%eth5, fe80:0:0:0:83dc:74e7:38e:235b%eth7, fe80:0:0:0:fc5c:c0e0:f227:7889%wlan0, 192.168.18.1, fe80:0:0:0:b714:55b0:2962:c0f2%eth9, 192.168.163.1, fe80:0:0:0:d554:a3f0:a6ea:47d3%eth10, fe80:0:0:0:aed2:bc06:dc31:db57%wlan1, 192.168.83.243, 240e:45a:48d:67c2:6dbd:62cf:5eda:307c, 240e:45a:48d:67c2:fddf:2d50:d7db:573f, fe80:0:0:0:2dc6:6f4:b194:de03%wlan2]
2025-08-06 09:01:23.038 INFO 3324 --- [ main] c.i.o.f.safe.ValidateContExecute : CPU core: 20
2025-08-06 09:01:23.039 INFO 3324 --- [ main] c.i.o.f.safe.ValidateContExecute : Windows 11 10.0
2025-08-06 09:01:23.100 INFO 3324 --- [ main] c.i.o.f.safe.ValidateContExecute : Computer MAC: []
2025-08-06 09:01:23.409 INFO 3324 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [ name: default ...]
2025-08-06 09:01:23.476 INFO 3324 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core {5.0.12.Final}
2025-08-06 09:01:23.478 INFO 3324 --- [ main] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
2025-08-06 09:01:23.479 INFO 3324 --- [ main] org.hibernate.cfg.Environment : HHH000021: Bytecode provider name : javassist
2025-08-06 09:01:23.522 INFO 3324 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2025-08-06 09:01:23.613 INFO 3324 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5InnoDBDialect
2025-08-06 09:01:23.775 WARN 3324 --- [ main] org.hibernate.id.UUIDHexGenerator : HHH000409: Using org.hibernate.id.UUIDHexGenerator which does not generate IETF RFC 4122 compliant UUID values; consider using org.hibernate.id.UUIDGenerator instead
2025-08-06 09:01:24.127 INFO 3324 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2025-08-06 09:01:25.142 INFO 3324 --- [ main] io.lettuce.core.EpollProvider : Starting without optional epoll library
2025-08-06 09:01:25.143 INFO 3324 --- [ main] io.lettuce.core.KqueueProvider : Starting without optional kqueue library
2025-08-06 09:01:26.137 INFO 3324 --- [ main] c.i.h.c.a.cache.AdmdvsDimCacheManager : init admdvsDim Start
2025-08-06 09:01:27.066 INFO 3324 --- [ main] c.i.h.c.a.cache.AdmdvsDimCacheManager : init admdvsDim End
2025-08-06 09:01:27.635 INFO 3324 --- [ main] o.h.h.i.QueryTranslatorFactoryInitiator : HHH000397: Using ASTQueryTranslatorFactory
2025-08-06 09:01:27.993 INFO 3324 --- [ main] com.alibaba.druid.pool.DruidDataSource : {dataSource-2} inited
2025-08-06 09:01:28.092 INFO 3324 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : Autowired annotation is not supported on static fields: private static boolean com.insigma.sys.common.SysManageMode.tripleMode
2025-08-06 09:01:28.136 INFO 3324 --- [ main] com.insigma.sys.config.UEditorConfig : UEditor configuration file read successfully!
2025-08-06 09:01:28.808 INFO 3324 --- [ main] c.i.hsa.common.base.job.CacheRefreshJob : begin refresh enforce dict cache ...
2025-08-06 09:01:28.809 INFO 3324 --- [ main] c.i.hsa.common.base.job.CacheRefreshJob : success refresh enforce dict cache ...
2025-08-06 09:01:28.809 INFO 3324 --- [ main] c.i.hsa.common.base.job.CacheRefreshJob : begin refresh enforce admdvsDim cache ...
2025-08-06 09:01:28.815 INFO 3324 --- [ main] c.i.h.c.a.cache.AdmdvsDimCacheManager : init admdvsDim Start
2025-08-06 09:01:29.615 INFO 3324 --- [ main] c.i.h.c.a.cache.AdmdvsDimCacheManager : init admdvsDim End
2025-08-06 09:01:29.615 INFO 3324 --- [ main] c.i.hsa.common.base.job.CacheRefreshJob : success refresh enforce admdvsDim cache ...
2025-08-06 09:01:29.627 INFO 3324 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : Autowired annotation is not supported on static methods: public static void com.insigma.hsa.common.excel.ExcelUtils.setPageSize(int)
2025-08-06 09:01:29.735 WARN 3324 --- [ main] c.i.framework.GlobalExceptionCollector : Unified exception-management method [call4Exception] for the public service was not found; this feature will be disabled automatically!
2025-08-06 09:01:30.454 INFO 3324 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2025-08-06 09:01:30.519 WARN 3324 --- [ main] aWebConfiguration$JpaWebMvcConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
Logging initialized using 'class org.apache.ibatis.logging.stdout.StdOutImpl' adapter.
2025-08-06 09:01:31.239 INFO 3324 --- [ main] .s.s.UserDetailsServiceAutoConfiguration : Using generated security password: f0640dd2-67b8-4024-bdd5-51708f0024e3
2025-08-06 09:01:31.321 INFO 3324 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@34fd82cd, org.springframework.security.web.context.SecurityContextPersistenceFilter@18010665, org.springframework.security.web.header.HeaderWriterFilter@2e41e742, org.springframework.security.web.authentication.logout.LogoutFilter@203b2f14, com.insigma.hsaf.security.web.support.SSOUserContextFilter@28d38918, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@19f1d1d, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2a5b5e33, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@14029881, org.springframework.security.web.session.SessionManagementFilter@307d7688, org.springframework.security.web.access.ExceptionTranslationFilter@5e09b1a6, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@ddbbb82]
2025-08-06 09:01:31.325 INFO 3324 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1925b878, org.springframework.security.web.context.SecurityContextPersistenceFilter@e449c7c, org.springframework.security.web.header.HeaderWriterFilter@4464aae2, org.springframework.security.web.authentication.logout.LogoutFilter@247dd07, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@112d42ba, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@707fc9c7, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@7f9b0fb, org.springframework.security.web.session.SessionManagementFilter@238a281f, org.springframework.security.web.access.ExceptionTranslationFilter@63ff8137, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@61a5226d]
2025-08-06 09:01:32.445 INFO 3324 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 7099 (http)
2025-08-06 09:01:32.451 INFO 3324 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 81 ms
Aug 06, 2025 9:01:32 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-7099"]
Aug 06, 2025 9:01:32 AM org.apache.catalina.core.StandardService startInternal
INFO: Starting service [Tomcat]
Aug 06, 2025 9:01:32 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet engine: [Apache Tomcat/9.0.17]
Aug 06, 2025 9:01:32 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring embedded WebApplicationContext
2025-08-06 09:01:32.465 INFO 3324 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2025-08-06 09:01:32.501 INFO 3324 --- [ool-20-thread-1] c.i.h.c.d.impl.DataDicLocalServiceImpl : DataDicLocalServiceImpl---------getDataDict--------start
Aug 06, 2025 9:01:32 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-nio-7099"]
2025-08-06 09:01:32.541 INFO 3324 --- [ool-20-thread-1] c.i.h.c.d.impl.DataDicLocalServiceImpl : configured dictionary codes .size=0
2025-08-06 09:01:32.557 INFO 3324 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 7099 (http) with context path ''
2025-08-06 09:01:32.574 INFO 3324 --- [ool-20-thread-1] c.i.hsa.common.dict.DataDictManager : dictionaries loaded = 1160
2025-08-06 09:01:33.165 INFO 3324 --- [ main] s.a.ScheduledAnnotationBeanPostProcessor : No TaskScheduler/ScheduledExecutorService bean found for scheduled processing
2025-08-06 09:01:33.173 INFO 3324 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 9100 (http) with context path '/insiis7-enforce'
2025-08-06 09:01:33.175 INFO 3324 --- [ main] com.insigma.InsiisWebApplication : Started InsiisWebApplication in 57.123 seconds (JVM running for 67.713)
Aug 06, 2025 9:01:33 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-nio-9100"]
2025-08-06 09:01:33.205 INFO 3324 --- [ main] c.i.b.send.server.init.InitServer : initialization result en:false
Service(pandora boot) startup in 61482 ms
2025-08-06 09:01:33.786 INFO 3324 --- [ool-21-thread-1] c.i.h.c.region.service.PoolareaManager : Loading pooling-area data!
2025-08-06 09:01:33.908 INFO 3324 --- [ool-21-thread-1] c.i.h.c.region.service.PoolareaManager : Finished loading and caching all pooling areas! queryResult.size=4478
2025-08-06 09:01:33.943 INFO 3324 --- [ool-21-thread-1] c.i.h.c.region.service.PoolareaManager : Finished loading pooling-area data! highest.size=469, groupResult.size=373
Wed Aug 06 09:01:34 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Aug 06, 2025 9:01:34 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-08-06 09:01:34.244 INFO 3324 --- [)-10.66.242.118] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2025-08-06 09:01:34.256 INFO 3324 --- [)-10.66.242.118] o.s.web.servlet.DispatcherServlet : Completed initialization in 12 ms
Wed Aug 06 09:01:34 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:34 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:34 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Aug 06 09:01:34 CST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended.
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2025-08-06 09:01:34.363 INFO 3324 --- [)-10.66.242.118] com.alibaba.druid.pool.DruidDataSource : {dataSource-3} inited
2025-08-06 09:01:35.750 INFO 3324 --- [ool-21-thread-1] c.i.h.c.admdvs.bo.impl.AdmdvsBOImpl : Loading administrative-division data...
2025-08-06 09:01:35.778 INFO 3324 --- [ool-21-thread-1] c.i.h.c.admdvs.bo.impl.AdmdvsBOImpl : Administrative-division data cached! provinces=1, prefecture-level cities=1, counties/districts=17
2025-08-06 09:04:28.806 INFO 3324 --- [nio-9100-exec-9] c.i.h.s.w.support.SSOUserContextFilter : in SSOUserContextFilter! securitytype is hsa-sso-mock
2025-08-06 09:04:28.806 INFO 3324 --- [nio-9100-exec-3] c.i.h.s.w.support.SSOUserContextFilter : in SSOUserContextFilter! securitytype is hsa-sso-mock
2025-08-06 09:04:28.806 INFO 3324 --- [nio-9100-exec-1] c.i.h.s.w.support.SSOUserContextFilter : in SSOUserContextFilter!
securitytype is hsa-sso-mock 2025-08-06 09:04:28.829 INFO 3324 --- [nio-9100-exec-9] c.i.f.w.s.web.RepeatRequestFilter : ===repeat=false=== 2025-08-06 09:04:28.829 INFO 3324 --- [nio-9100-exec-3] c.i.f.w.s.web.RepeatRequestFilter : ===repeat=false=== 2025-08-06 09:04:28.830 INFO 3324 --- [nio-9100-exec-1] c.i.f.w.s.web.RepeatRequestFilter : ===repeat=false=== Hibernate: select sysmenu0_.functionid as function1_7_, sysmenu0_.active as active2_7_, sysmenu0_.auflag as auflag3_7_, sysmenu0_.description as descript4_7_, sysmenu0_.developer as develope5_7_, sysmenu0_.digest as digest6_7_, sysmenu0_.funcode as funcode7_7_, sysmenu0_.funorder as funorder8_7_, sysmenu0_.funtype as funtype9_7_, sysmenu0_.icon as icon10_7_, sysmenu0_.idpath as idpath11_7_, sysmenu0_.islog as islog12_7_, sysmenu0_.location as locatio13_7_, sysmenu0_.nodetype as nodetyp14_7_, sysmenu0_.parentid as parenti15_7_, sysmenu0_.rbflag as rbflag16_7_, sysmenu0_.slevel as slevel17_7_, sysmenu0_.title as title18_7_ from SYSFUNCTION sysmenu0_ order by sysmenu0_.funorder 2025-08-06 09:04:31.567 INFO 3324 --- [nio-9100-exec-2] c.i.h.s.w.support.SSOUserContextFilter : in SSOUserContextFilter! securitytype is hsa-sso-mock 2025-08-06 09:04:31.569 INFO 3324 --- [nio-9100-exec-2] c.i.f.w.s.web.RepeatRequestFilter : ===repeat=false=== Hibernate: select sysmenu0_.functionid as function1_7_, sysmenu0_.active as active2_7_, sysmenu0_.auflag as auflag3_7_, sysmenu0_.description as descript4_7_, sysmenu0_.developer as develope5_7_, sysmenu0_.digest as digest6_7_, sysmenu0_.funcode as funcode7_7_, sysmenu0_.funorder as funorder8_7_, sysmenu0_.funtype as funtype9_7_, sysmenu0_.icon as icon10_7_, sysmenu0_.idpath as idpath11_7_, sysmenu0_.islog as islog12_7_, sysmenu0_.location as locatio13_7_, sysmenu0_.nodetype as nodetyp14_7_, sysmenu0_.parentid as parenti15_7_, sysmenu0_.rbflag as rbflag16_7_, sysmenu0_.slevel as slevel17_7_, sysmenu0_.title as title18_7_ from SYSFUNCTION sysmenu0_ order by sysmenu0_.funorder 2025-08-06 09:04:31.570 INFO 3324 --- [nio-9100-exec-6] c.i.h.s.w.support.SSOUserContextFilter : in SSOUserContextFilter! securitytype is hsa-sso-mock 2025-08-06 09:04:31.570 INFO 3324 --- [io-9100-exec-10] c.i.h.s.w.support.SSOUserContextFilter : in SSOUserContextFilter! securitytype is hsa-sso-mock 2025-08-06 09:04:31.572 INFO 3324 --- [io-9100-exec-10] c.i.f.w.s.web.RepeatRequestFilter : ===repeat=false=== 2025-08-06 09:04:31.572 INFO 3324 --- [nio-9100-exec-6] c.i.f.w.s.web.RepeatRequestFilter : ===repeat=false=== 为什么运行不结束
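Answer: nothing is hung. This is a Spring Boot web application: it started two embedded Tomcat servers (ports 7099 and 9100) on non-daemon threads and then, by design, keeps the JVM alive to serve requests — the 09:04 entries are ordinary request handling through the security filter chain. The process only "finishes" if it is stopped, or if it is converted into a run-once job. A minimal sketch of the latter, assuming a batch-style run really is the goal (the class name InsiisWebApplication is taken from the log; runBatchJob() is a hypothetical placeholder for the actual work):

    // Run the Spring context, do the work, then shut down explicitly
    // instead of leaving the embedded Tomcat instances serving forever.
    ConfigurableApplicationContext ctx = SpringApplication.run(InsiisWebApplication.class, args);
    // runBatchJob(ctx);  // hypothetical job logic goes here
    System.exit(SpringApplication.exit(ctx, () -> 0));

Two of the recurring warnings also have one-line fixes, both suggested by the messages themselves. A sketch, assuming the datasource URL lives in application.properties (host, port and database below are placeholders; if the URL is configured in a Druid-specific block instead, append the parameter there):

    # application.properties — silence the two recurring warnings
    spring.datasource.url=jdbc:mysql://<host>:3306/<db>?useSSL=false
    spring.jpa.open-in-view=false

Setting useSSL=false is one valid choice for the MySQL driver warning; the alternative the warning names (useSSL=true plus a truststore) is the right one if the connection should actually be verified.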


root@cpms-linux:~# docker logs fossology-new
*****************************************************
WARNING: No database host was set and therefore the internal database without persistency will be used. THIS IS NOT RECOMENDED FOR PRODUCTIVE USE!
*****************************************************
Starting PostgreSQL 11 database server: main.
*** Running postinstall for common actions***
*** Creating user and group ***
NOTE: group 'fossy' already exists, good.
NOTE: user 'fossy' already exists, good.
*** Making sure needed dirs exist with right ownership/permissions ***
*** clearing file cache ***
NOTE: Repository already exists at /srv/fossology/repository
NOTE: Running the PostgreSQL vacuum and analyze command can result in a large database performance improvement. We suggest that you either configure postgres to run its autovacuum and autoanalyze daemons, or maintagent -D in a cron job, or run Admin > Maintenance on a regular basis. Admin > Dashboard will show you the last time vacuum and analyze have been run.
*** Setting up the FOSSology database ***
NOTE: fossology database already exists, not creating
*** Checking for plpgsql support ***
NOTE: plpgsql already exists in fossology database, good
*** Checking for 'uuid-ossp' support ***
NOTE: 'uuid-ossp' already exists in fossology database, good
*** update the database and license_ref table ***
Old release was 3.3.0
Applying database functions
DB schema has been updated for fossology.
Database schema update completed successfully.
Update reference licenses
*** Instance UUID ***
INSTANCE UUID: a527ab24-e3ad-4472-90d5-68483f7376d3
*** Table copyright already migrated to copyright_event table ***
*** Table author already migrated to author_event table ***
*** Table ecc already migrated to ecc_event table ***
*** Table keyword already migrated to keyword_event table ***
FOSSology postinstall complete, but sure to complete the remaining steps in the INSTALL instructions.
Fossology initialisation complete; Starting up...
Starting periodic command scheduler: cron.
2025-03-21 08:18:26 scheduler [155] :: NOTE: *****************************************************************
2025-03-21 08:18:26 scheduler [155] :: NOTE: *** FOSSology scheduler started ***
2025-03-21 08:18:26 scheduler [155] :: NOTE: *** pid: 155 ***
2025-03-21 08:18:26 scheduler [155] :: NOTE: *** verbose: 3 ***
2025-03-21 08:18:26 scheduler [155] :: NOTE: *** config: /usr/local/etc/fossology ***
2025-03-21 08:18:26 scheduler [155] :: NOTE: *****************************************************************
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
[Fri Mar 21 08:18:26.602843 2025] [core:warn] [pid 158] AH00098: pid file /var/run/apache2/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Fri Mar 21 08:18:26.618724 2025] [mpm_prefork:notice] [pid 158] AH00163: Apache/2.4.38 (Debian) mod_ldap_userdir/1.1.19 configured -- resuming normal operations
[Fri Mar 21 08:18:26.620058 2025] [core:notice] [pid 158] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'
10.194.150.5 - - [21/Mar/2025:08:18:27 +0000] "GET / HTTP/1.1" 200 3380 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.0"
10.194.150.5 - - [21/Mar/2025:08:18:32 +0000] "GET /repo HTTP/1.1" 404 495 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.0"
10.194.150.5 - - [21/Mar/2025:08:19:24 +0000] "-" 408 0 "-" "-"
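The startup itself succeeds; the one real configuration problem is the banner at the top: with no external database host set, the container uses its internal PostgreSQL, so all scan data is lost when the container is removed. A minimal sketch of running against an external database, assuming the official fossology/fossology image, whose entrypoint reads the FOSSOLOGY_DB_* variables (verify the exact names against the README of your image tag; the host and credentials below are placeholders):

    docker run -d --name fossology \
      -e FOSSOLOGY_DB_HOST=db.example.com \
      -e FOSSOLOGY_DB_NAME=fossology \
      -e FOSSOLOGY_DB_USER=fossy \
      -e FOSSOLOGY_DB_PASSWORD=fossy \
      -p 8081:80 \
      fossology/fossology

The two AH00558 lines are harmless; as the message itself says, they disappear once a ServerName directive is set globally in the container's Apache configuration.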


[2025-05-12T16:43:47,345][INFO ][o.e.b.Elasticsearch ] [DESKTOP-A854GSO] version[9.0.1], pid[10096], build[zip/73f7594ea00db50aa7e941e151a5b3985f01e364/2025-04-30T10:07:41.393025990Z], OS[Windows 11/10.0/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/24/24+36-3646]
[2025-05-12T16:43:47,356][INFO ][o.e.b.Elasticsearch ] [DESKTOP-A854GSO] JVM home [D:\elasticsearch\elasticsearch-9.0.1\jdk], using bundled JDK [true]
[2025-05-12T16:43:47,357][INFO ][o.e.b.Elasticsearch ] [DESKTOP-A854GSO] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=CLDR, -Dorg.apache.lucene.vectorization.upperJavaFeatureVersion=24, -Des.distribution.type=zip, -Des.java.type=bundled JDK, --enable-native-access=org.elasticsearch.nativeaccess,org.apache.lucene.core, --enable-native-access=ALL-UNNAMED, --illegal-native-access=deny, -XX:ReplayDataFile=logs/replay_pid%p.log, -Des.entitlements.enabled=true, -XX:+EnableDynamicAgentLoading, -Djdk.attach.allowAttachSelf=true, --patch-module=java.base=lib\entitlement-bridge\elasticsearch-entitlement-bridge-9.0.1.jar, --add-exports=java.base/org.elasticsearch.entitlement.bridge=org.elasticsearch.entitlement,java.logging,java.net.http,java.naming,jdk.net, -XX:+UseG1GC, -Djava.io.tmpdir=C:\Users\鍥AppData\Local\Temp\elasticsearch, --add-modules=jdk.incubator.vector, -Dorg.apache.lucene.store.defaultReadAdvice=normal, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,level,pid,tags:filecount=32,filesize=64m, -Xms7778m, -Xmx7778m, -XX:MaxDirectMemorySize=4078960640, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, --module-path=D:\elasticsearch\elasticsearch-9.0.1\lib, --add-modules=jdk.net, --add-modules=jdk.management.agent, --add-modules=ALL-MODULE-PATH, -Djdk.module.main=org.elasticsearch.server]
[2025-05-12T16:43:47,358][INFO ][o.e.b.Elasticsearch ] [DESKTOP-A854GSO] Default Locale [zh_CN]
[2025-05-12T16:43:47,528][INFO ][o.e.n.NativeAccess ] [DESKTOP-A854GSO] Using [jdk] native provider and native methods for [Windows]
[2025-05-12T16:43:47,660][INFO ][o.a.l.i.v.PanamaVectorizationProvider] [DESKTOP-A854GSO] Java vector incubator API enabled; uses preferredBitSize=256; FMA enabled
[2025-05-12T16:43:47,802][INFO ][o.e.b.Elasticsearch ] [DESKTOP-A854GSO] Bootstrapping Entitlements
[2025-05-12T16:43:52,351][INFO ][o.e.p.PluginsService ] [DESKTOP-A854GSO] loaded module [repository-url]
[... 16:43:52,352 through 16:43:52,372: roughly 85 further "loaded module" INFO lines (rest-root, x-pack-core, lang-painless, transport-netty4, x-pack-security, ingest-geoip, x-pack-sql, x-pack-eql and the remaining bundled modules), all loading without errors ...]
[2025-05-12T16:43:54,309][INFO ][o.e.e.NodeEnvironment ] [DESKTOP-A854GSO] using [1] data paths, mounts [[(D:)]], net usable_space [546.3gb], net total_space [953.8gb], types [NTFS]
[2025-05-12T16:43:54,310][INFO ][o.e.e.NodeEnvironment ] [DESKTOP-A854GSO] heap size [7.5gb], compressed ordinary object pointers [true]
[2025-05-12T16:43:54,399][INFO ][o.e.n.Node ] [DESKTOP-A854GSO] node name [DESKTOP-A854GSO], node ID [oThuyDfhTraitepa21vwPA], cluster name [elasticsearch], roles [ingest, data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold]
[2025-05-12T16:43:57,587][INFO ][o.e.i.r.RecoverySettings ] [DESKTOP-A854GSO] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2025-05-12T16:43:57,830][INFO ][o.e.f.FeatureService ] [DESKTOP-A854GSO] Registered local node features [ES_V_8, ES_V_9, cluster.reroute.ignores_metric_param, cluster.stats.source_modes, linear_retriever_supported, lucene_10_1_upgrade, lucene_10_upgrade, security.queryable_built_in_roles, simulate.ignored.fields]
[2025-05-12T16:43:57,873][INFO ][o.e.c.m.DataStreamGlobalRetentionSettings] [DESKTOP-A854GSO] Updated default factory retention to [null]
[2025-05-12T16:43:57,874][INFO ][o.e.c.m.DataStreamGlobalRetentionSettings] [DESKTOP-A854GSO] Updated max factory retention to [null]
[2025-05-12T16:43:58,428][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [DESKTOP-A854GSO] [controller/15508] [Main.cc@123] controller (64 bit): Version 9.0.1 (Build 5ac89bc732bee2) Copyright (c) 2025 Elasticsearch BV
[2025-05-12T16:43:58,907][INFO ][o.e.x.o.OTelPlugin ] [DESKTOP-A854GSO] OTel ingest plugin is enabled
[2025-05-12T16:43:58,937][INFO ][o.e.x.c.t.YamlTemplateRegistry] [DESKTOP-A854GSO] OpenTelemetry index template registry is enabled
[2025-05-12T16:43:58,953][INFO ][o.e.t.a.APM ] [DESKTOP-A854GSO] Sending apm metrics is disabled
[2025-05-12T16:43:58,954][INFO ][o.e.t.a.APM ] [DESKTOP-A854GSO] Sending apm tracing is disabled
[2025-05-12T16:43:59,004][INFO ][o.e.x.s.Security ] [DESKTOP-A854GSO] Security is enabled
[2025-05-12T16:43:59,379][INFO ][o.e.x.s.a.s.FileRolesStore] [DESKTOP-A854GSO] parsed [0] roles from file [D:\elasticsearch\elasticsearch-9.0.1\config\roles.yml]
[2025-05-12T16:44:00,003][INFO ][o.e.x.w.Watcher ] [DESKTOP-A854GSO] Watcher initialized components at 2025-05-12T08:44:00.002Z
[2025-05-12T16:44:00,143][INFO ][o.e.x.p.ProfilingPlugin ] [DESKTOP-A854GSO] Profiling is enabled
[2025-05-12T16:44:00,162][INFO ][o.e.x.p.ProfilingPlugin ] [DESKTOP-A854GSO] profiling index templates will not be installed or reinstalled
[2025-05-12T16:44:00,170][INFO ][o.e.x.a.APMPlugin ] [DESKTOP-A854GSO] APM ingest plugin is enabled
[2025-05-12T16:44:00,207][INFO ][o.e.x.c.t.YamlTemplateRegistry] [DESKTOP-A854GSO] apm index template registry is enabled
[2025-05-12T16:44:00,764][INFO ][o.e.t.n.NettyAllocator ] [DESKTOP-A854GSO] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2025-05-12T16:44:00,837][INFO ][o.e.d.DiscoveryModule ] [DESKTOP-A854GSO] using discovery type [multi-node] and seed hosts providers [settings]
[2025-05-12T16:44:02,474][INFO ][o.e.n.Node ] [DESKTOP-A854GSO] initialized
[2025-05-12T16:44:02,475][INFO ][o.e.n.Node ] [DESKTOP-A854GSO] starting ...
[2025-05-12T16:44:02,520][INFO ][o.e.x.s.c.f.PersistentCache] [DESKTOP-A854GSO] persistent cache index loaded
[2025-05-12T16:44:02,521][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [DESKTOP-A854GSO] deprecation component started
[2025-05-12T16:44:02,642][INFO ][o.e.t.TransportService ] [DESKTOP-A854GSO] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2025-05-12T16:44:03,031][WARN ][o.e.c.c.ClusterBootstrapService] [DESKTOP-A854GSO] this node is locked into cluster UUID [_h3NONUQTr-ueBQPxrmidA] but [cluster.initial_master_nodes] is set to [DESKTOP-A854GSO]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/docs/deploy-manage/deploy/self-managed/important-settings-configuration?version=9.0#initial_master_nodes
[2025-05-12T16:44:03,176][INFO ][o.e.c.s.MasterService ] [DESKTOP-A854GSO] elected-as-master ([1] nodes joined in term 4)[_FINISH_ELECTION_, {DESKTOP-A854GSO}{oThuyDfhTraitepa21vwPA}{ELpoXthvShu35FsJ-XSBeA}{DESKTOP-A854GSO}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{9.0.1}{8000099-9009000} completing election], term: 4, version: 98, delta: master node changed {previous [], current [{DESKTOP-A854GSO}{oThuyDfhTraitepa21vwPA}{ELpoXthvShu35FsJ-XSBeA}{DESKTOP-A854GSO}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{9.0.1}{8000099-9009000}]}
[2025-05-12T16:44:03,257][INFO ][o.e.c.s.ClusterApplierService] [DESKTOP-A854GSO] master node changed {previous [], current [{DESKTOP-A854GSO}{oThuyDfhTraitepa21vwPA}{ELpoXthvShu35FsJ-XSBeA}{DESKTOP-A854GSO}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{9.0.1}{8000099-9009000}]}, term: 4, version: 98, reason: Publication{term=4, version=98}
[2025-05-12T16:44:03,312][INFO ][o.e.c.c.NodeJoinExecutor ] [DESKTOP-A854GSO] node-join: [{DESKTOP-A854GSO}{oThuyDfhTraitepa21vwPA}{ELpoXthvShu35FsJ-XSBeA}{DESKTOP-A854GSO}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{9.0.1}{8000099-9009000}] with reason [completing election]
[2025-05-12T16:44:03,317][INFO ][o.e.h.AbstractHttpServerTransport] [DESKTOP-A854GSO] publish_address {192.168.10.4:9200}, bound_addresses {[::]:9200}
[2025-05-12T16:44:03,328][INFO ][o.e.x.w.LicensedWriteLoadForecaster] [DESKTOP-A854GSO] license state changed, now [valid]
[2025-05-12T16:44:03,341][INFO ][o.e.n.Node ] [DESKTOP-A854GSO] started {DESKTOP-A854GSO}{oThuyDfhTraitepa21vwPA}{ELpoXthvShu35FsJ-XSBeA}{DESKTOP-A854GSO}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{9.0.1}{8000099-9009000}{ml.allocated_processors_double=16.0, ml.max_jvm_size=8157921280, ml.config_version=12.0.0, xpack.installed=true, transform.config_version=10.0.0, ml.machine_memory=16312721408, ml.allocated_processors=16}
[2025-05-12T16:44:03,387][WARN ][o.e.x.i.s.e.a.ElasticInferenceServiceAuthorizationHandler] [DESKTOP-A854GSO] Failed to revoke access to default inference endpoint IDs: [rainbow-sprinkles], error: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
[2025-05-12T16:44:03,738][INFO ][o.e.x.m.MlIndexRollover ] [DESKTOP-A854GSO] ML legacy indices rolled over
[2025-05-12T16:44:03,739][INFO ][o.e.x.m.MlAnomaliesIndexUpdate] [DESKTOP-A854GSO] legacy ml anomalies indices rolled over and aliases updated
[2025-05-12T16:44:03,751][INFO ][o.e.x.s.a.Realms ] [DESKTOP-A854GSO] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2025-05-12T16:44:03,754][INFO ][o.e.l.ClusterStateLicenseService] [DESKTOP-A854GSO] license [c457fd5c-94b5-44ae-ba97-7a8de1c2a152] mode [basic] - valid
[2025-05-12T16:44:03,756][INFO ][o.e.c.f.AbstractFileWatchingService] [DESKTOP-A854GSO] starting file watcher ...
[2025-05-12T16:44:03,761][INFO ][o.e.c.f.AbstractFileWatchingService] [DESKTOP-A854GSO] file settings service up and running [tid=109]
[2025-05-12T16:44:03,762][INFO ][o.e.r.s.FileSettingsService] [DESKTOP-A854GSO] setting file [D:\elasticsearch\elasticsearch-9.0.1\config\operator\settings.json] not found, initializing [file_settings] as empty
[2025-05-12T16:44:03,775][INFO ][o.e.g.GatewayService ] [DESKTOP-A854GSO] recovered [3] indices into cluster_state
[2025-05-12T16:44:03,878][INFO ][o.e.x.w.LicensedWriteLoadForecaster] [DESKTOP-A854GSO] license state changed, now [not valid]
[2025-05-12T16:44:04,273][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [DESKTOP-A854GSO] Node [{DESKTOP-A854GSO}{oThuyDfhTraitepa21vwPA}] is selected as the current health node.
[2025-05-12T16:44:04,277][INFO ][o.e.c.r.a.AllocationService] [DESKTOP-A854GSO] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.ds-.logs-elasticsearch.deprecation-default-2025.05.12-000001][0], [.ds-ilm-history-7-2025.05.12-000001][0], [.security-7][0]]])." previous.health="RED" reason="shards started [[.ds-.logs-elasticsearch.deprecation-default-2025.05.12-000001][0], [.ds-ilm-history-7-2025.05.12-000001][0], [.security-7][0]]"

Please answer in Chinese.
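This startup is healthy: the node elects itself master, recovers its three indices, and the cluster ends GREEN. The one actionable WARN comes from ClusterBootstrapService — the node is already bootstrapped into cluster UUID _h3NONUQTr-ueBQPxrmidA, so the bootstrap hint should be removed from the configuration, exactly as the message says. A sketch of the change in config\elasticsearch.yml:

    # elasticsearch.yml — remove (or comment out) the bootstrap setting once the
    # node has joined a cluster, to avoid accidental re-bootstrap and data loss:
    # cluster.initial_master_nodes: ["DESKTOP-A854GSO"]

Separately, the mangled temp path in the JVM arguments (C:\Users\鍥AppData\...) points at a non-ASCII Windows user name; if that ever causes trouble, pointing the ES_TMPDIR environment variable at an ASCII-only directory is a common workaround.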


pip install "xinference[all]" Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting xinference[all] Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b6/a0/d0754f1b6d7278bf40cf35832656cf52b18f138aa77cc026588f0e6ad5c0/xinference-1.3.1.post1-py3-none-any.whl (33.2 MB) Collecting xoscar>=0.4.4 (from xinference[all]) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/92/10/bf763c5367b73536bcac15de0af795a10bdbe87aae657e12cc157260a298/xoscar-0.4.6.tar.gz (128 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [32 lines of output] Compiling xoscar\context.pyx because it changed. Compiling xoscar\core.pyx because it changed. Compiling xoscar\_utils.pyx because it changed. Compiling xoscar\backends\message.pyx because it changed. Compiling xoscar\serialization\core.pyx because it changed. [1/5] Cythonizing xoscar\_utils.pyx [2/5] Cythonizing xoscar\backends\message.pyx [3/5] Cythonizing xoscar\context.pyx [4/5] Cythonizing xoscar\core.pyx [5/5] Cythonizing xoscar\serialization\core.pyx warning: xoscar\backends\message.pyx:74:18: Annotation ignored since class-level attributes must be Python objects. Were you trying to set up an instance attribute? warning: xoscar\backends\message.pyx:227:22: Unknown type declaration 'BaseException' in annotation, ignoring warning: xoscar\core.pyx:596:62: Strings should no longer be used for type declarations. Use 'cython.int' etc. directly. warning: xoscar\core.pyx:602:29: Strings should no longer be used for type declarations. Use 'cython.int' etc. directly. warning: xoscar\core.pyx:634:73: Strings should no longer be used for type declarations. Use 'cython.int' etc. directly. warning: xoscar\core.pyx:640:40: Strings should no longer be used for


2025-07-01 15:20:56,557 INFO o.a.j.e.StandardJMeterEngine: Running the test!
2025-07-01 15:20:56,557 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2025-07-01 15:20:56,557 INFO o.a.j.g.u.JMeterMenuBar: setRunning(true, *local*)
2025-07-01 15:20:56,557 INFO o.a.j.e.StandardJMeterEngine: Starting setUp thread groups
2025-07-01 15:20:56,557 INFO o.a.j.e.StandardJMeterEngine: Starting setUp ThreadGroup: 1 : tbd 线程组
2025-07-01 15:20:56,557 INFO o.a.j.e.StandardJMeterEngine: Starting 1 threads for group tbd 线程组.
2025-07-01 15:20:56,557 INFO o.a.j.e.StandardJMeterEngine: Thread will continue on error
2025-07-01 15:20:56,557 INFO o.a.j.t.ThreadGroup: Starting thread group... number=1 threads=1 ramp-up=1 delayedStart=false
2025-07-01 15:20:56,557 INFO o.a.j.t.ThreadGroup: Started thread group number 1
2025-07-01 15:20:56,557 INFO o.a.j.e.StandardJMeterEngine: Waiting for all setup thread groups to exit
2025-07-01 15:20:56,557 INFO o.a.j.t.JMeterThread: Thread started: tbd 线程组 1-1
2025-07-01 15:20:56,677 INFO o.a.j.u.BeanShellTestElement: Classpath: /D:/%e6%97%a5%e5%b8%b8%e8%bd%af%e4%bb%b6/H.Jmeter/apache-jmeter-5.6.3/bin/ApacheJMeter.jar
2025-07-01 15:20:56,677 INFO o.a.j.u.BeanShellTestElement: ✅ ITesseract 类加载成功
2025-07-01 15:20:56,783 INFO o.a.j.t.JMeterThread: Thread is done: tbd 线程组 1-1
2025-07-01 15:20:56,783 INFO o.a.j.t.JMeterThread: Thread finished: tbd 线程组 1-1
2025-07-01 15:20:56,783 INFO o.a.j.e.StandardJMeterEngine: All Setup Threads have ended
2025-07-01 15:20:56,924 INFO o.a.j.e.StandardJMeterEngine: No enabled thread groups found
2025-07-01 15:20:56,924 INFO o.a.j.e.StandardJMeterEngine: Notifying test listeners of end of test
2025-07-01 15:20:56,924 INFO o.a.j.g.u.JMeterMenuBar: setRunning(false, *local*)
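This run finished cleanly, but did no load testing: the only enabled element is the setUp thread group "tbd 线程组", whose single thread runs the BeanShell sampler (confirming that the ITesseract class loads from the JMeter classpath) and then exits. The closing "No enabled thread groups found" means no ordinary Thread Group is enabled, so no samplers were executed after setup; enabling a regular Thread Group containing the actual test samplers will run the test proper.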
