
fortcom: Severe: No such file or directory ... file is 'var_global.f90'

Posted: 2024-06-02 20:10:59 | AIGC | Views: 385
This error occurs when the Fortran compiler cannot find a file it needs. Common causes are a misspelled file name, a wrong file path, or a file that is simply missing. Check that a file named 'var_global.f90' actually exists in your project and that its path is correct. If the file exists, make sure the compiler can access it; if it does not, add it to the project. You can also try specifying the file with an absolute path, which often resolves path-related errors.
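The fortcom message itself is a compile-time error, so the real fix is in the build setup (file present, path correct). For files your program assembles and opens at runtime, though, a defensive `inquire` check turns a bad path into a clear message instead of a crash deep inside an I/O call. A minimal, self-contained sketch; the file name here is purely illustrative:

```fortran
program check_input_file
    implicit none
    logical :: found
    character(len=256) :: fname

    fname = 'var_global.f90'   ! illustrative name; substitute the file you expect
    inquire(file=trim(fname), exist=found)
    if (.not. found) then
        write(*,'(A)') 'Missing file: ' // trim(fname)
        stop 1
    end if
    write(*,'(A)') 'Found: ' // trim(fname)
end program check_input_file
```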
Related questions

Now the error is:

```
reading data ...
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
libifcoremt.so.5   00002AB506BED2F6  for__signal_handl  Unknown  Unknown
libpthread-2.17.s  00002AB504A855D0  Unknown            Unknown  Unknown
da_svd.exe         0000000000454CDD  module_io_mp_read  80       module_io.F90
da_svd.exe         00000000004186CE  da_system_IP_read  989      NUDT_RFS_DA.F90
da_svd.exe         00000000004042CF  MAIN__             236      NUDT_RFS_DA.F90
da_svd.exe         00000000004024EE  Unknown            Unknown  Unknown
libc-2.17.so       00002AB50847A3D5  __libc_start_main  Unknown  Unknown
da_svd.exe         00000000004023F9  Unknown            Unknown  Unknown
```

```fortran
subroutine read_ensemble_forecasts(ensemble_data)
    use module_grid
    use module_initial
    character(len=256) :: filename, filename_prefix
    integer :: iunit, i, j, k, member, ios
    type(model_ensemble_data_ptr), pointer, intent(inout) :: ensemble_data
    type(model_data) :: modelvar_data_temp
    real :: minv, maxv
    character(len=20) :: fmt_str

    filename_prefix = trim(grid_cfg%data_path_ensemble) // '/' // trim(grid_cfg%name_ensemble_prefix)
    fmt_str = '(A,' // trim(grid_cfg%name_ensemble_number) // ')'

    call allocate_mdata(modelvar_data_temp)

    do member = 1, nsample
        ! write(filename, '(A,I3.3)') 'modelvar_', member
        ! grid_cfg%name_ensemble_number = 'I3.3'
        write(filename, fmt_str) trim(filename_prefix), member
        if (myid_monitor == 0) write(*,'(A,I0)') 'Reading ensemble member: ', member

        call read_modelvar(trim(filename), modelvar_data_temp)
        if (myid_monitor == 0) call print_minmax(modelvar_data_temp)
        call grid_interp_c2a_f2h(modelvar_data_temp)

        ensemble_data%pi(grid_is:grid_ie,grid_ks:grid_ke,grid_js:grid_je,member) = &
            modelvar_data_temp%pi(grid_is:grid_ie,grid_ks:grid_ke,grid_js:grid_je)
        ensemble_data%u(grid_is:grid_ie,grid_ks:grid_ke,grid_js:grid_je,member) = &
            modelvar_data_temp%u(grid_is:grid_ie,grid_ks:grid_ke,grid_js:grid_je)
        ensemble_data%v(grid_is:grid_ie,grid_ks:grid_ke,grid_js:grid_je,member) = &
            modelvar_data_temp%v(grid_is:grid_ie,grid_ks:grid_ke,grid_js:grid_je)
```

Line 80 is the assignment to variable `v`, which suggests the pointer setup may start going wrong at the third variable, yet no error was reported for the first two.

Given the error (the SIGSEGV occurs on the line that assigns `ensemble_data%v`) and the shared-memory pointer allocation discussed earlier, the likely causes and troubleshooting steps are as follows:

### **Likely causes**

A segmentation fault is caused by an **illegal memory access**. Given the code logic, focus on the following:

#### 1. **A wrong `disp_unit` shifting the pointer offsets**

If `disp_unit` does not match the byte size of the actual `real` kind (e.g. the data is double precision but `disp_unit = 4`), the computed offsets for `u`, `v`, etc. are wrong and the pointers land in unallocated memory. For example:

- `pi` holds `nx*nz*ny*nsample` `real` elements, i.e. `nx*nz*ny*nsample*disp_unit` bytes.
- `u`'s offset should be `pi_size * disp_unit`.
- `v`'s offset should be `(pi_size + u_size) * disp_unit` (assuming `u` has the same shape as `pi`).

If `disp_unit` is too small (4 where 8 is needed), `v`'s actual offset is half of what it should be, so the pointer lands in the middle of `u`'s storage and the assignment writes out of bounds.

#### 2. **Pointer association failure (`c_f_pointer` not bound correctly)**

`ensemble_data%v` may never have been associated with the shared memory, because:

- `baseptr_ensemble` is null (`MPI_Win_shared_query` failed), or
- the shape argument is wrong (e.g. `[nx, nz, ny, nsample]` does not match the actual array layout).

In that case `v` points at a garbage address and any access segfaults.

#### 3. **Array index out of bounds**

`grid_is:grid_ie`, `grid_ks:grid_ke`, `grid_js:grid_je`, or `member` may exceed the dimensions the shared array was allocated with. For example:

- `grid_ie > nx` or `grid_ke > nz`, so indexing `v(grid_ie, ...)` runs past the array.
- `member > nsample` (the loop runs `member = 1, nsample`, but the allocated `nsample` may actually be smaller).

#### 4. **Shared-memory window too small**

The total size passed to `MPI_Win_allocate_shared` (`ssize_ensemble`) may not include `v`'s storage, e.g. the size was computed from `pi` and `u` only, so `v`'s region was never allocated and writing to it runs past the window.

### **Troubleshooting steps**

#### 1. **Confirm `disp_unit`**

- **Add a debug print**: in the shared-memory allocation code, print `disp_unit` next to the actual byte size of `real` (obtained via `c_sizeof`) and confirm they match.
  Example:
  ```fortran
  use iso_c_binding
  real :: dummy_real
  integer :: expected_disp_unit
  expected_disp_unit = c_sizeof(dummy_real)
  print *, "disp_unit = ", disp_unit, " (configured value)"
  print *, "real size in bytes = ", expected_disp_unit, " (actual value)"
  ```
  If `disp_unit /= expected_disp_unit`, the setting is wrong (e.g. 4 was used for double precision).

#### 2. **Check that the pointer association succeeded**

- **Verify after `c_f_pointer`**: after associating `v`'s pointer, test whether the association worked.
  Example (note that `type(c_ptr)` has no `+` operator, so the offset arithmetic goes through an integer address and `transfer`):
  ```fortran
  ! associate the v pointer (addr is integer(kind=c_intptr_t))
  addr = transfer(baseptr_ensemble, addr) + offset_v
  call c_f_pointer(transfer(addr, baseptr_ensemble), ensemble_data%v, [nx, nz, ny, nsample])
  if (.not. associated(ensemble_data%v)) then
      print *, "Error: ensemble_data%v pointer association failed! offset_v = ", offset_v
      call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
  end if
  ```
  If the error fires, `offset_v` or `baseptr_ensemble` is bad.

#### 3. **Verify the index ranges**

- **Print the index parameters**: before assigning `v`, print `grid_is, grid_ie, nx`, `grid_ks, grid_ke, nz`, `grid_js, grid_je, ny`, and `member, nsample`, and check for out-of-range values.
  Example:
  ```fortran
  if (myid_monitor == 0) then
      print *, "v index check:"
      print *, "nx = ", nx, " grid_is-ie = ", grid_is, grid_ie
      print *, "nz = ", nz, " grid_ks-ke = ", grid_ks, grid_ke
      print *, "ny = ", ny, " grid_js-je = ", grid_js, grid_je
      print *, "nsample = ", nsample, " member = ", member
  end if
  ```
  If `grid_ie > nx` or `member > nsample`, an index is out of bounds.

#### 4. **Check the total shared-memory size**

- **Recompute the total**: verify by hand that `ssize_ensemble` covers every variable, including `v`.
  Assuming `pi`, `u`, and `v` are all fields of shape `nx×nz×ny×nsample`, the total should be:
  ```fortran
  ssize_ensemble = (3 * nx * nz * ny * nsample) * disp_unit  ! 3 variables (pi, u, v)
  ```
  If the original computation missed `v` (e.g. only counted 2 variables), the window is too small.

Further checks worth doing:

- **Confirm `read_modelvar` actually filled the data**: print a few values of `modelvar_data_temp%v` after reading; a failed or corrupted read can also surface here.
- **Single-member test**: run with only `member = 1`; if that works, suspect `nsample`-dependent sizes or indices.
- **Use a debugger**: `gdb` or `valgrind` will show whether the faulting address of `ensemble_data%v` lies inside the allocated shared window.

### **Quick fixes**

- If `disp_unit` is wrong, derive it dynamically (e.g. via `c_sizeof`) so it always matches the `real` byte size.
- If an index is out of bounds, adjust `grid_is`/`grid_ie` etc. so they stay within `1..nx`, `1..nz`, `1..ny`.
- If the pointer association fails, re-check how `offset_v` is computed (does it accumulate the sizes of both `pi` and `u`?).

### **Summary**

The segfault almost certainly means `v`'s pointer targets unallocated or out-of-range memory. Re-check the `disp_unit` setting, the offset arithmetic, the index ranges, and the total window size; debug prints plus association checks will localize the problem quickly.
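To tie these checks together, here is a minimal sketch (not the original code; `nx`, `nz`, `ny`, `nsample` and the node communicator are assumed to be supplied by the caller) of one shared window sliced into `pi`, `u`, and `v`, with `disp_unit` taken from `c_sizeof` and every offset accumulated explicitly:

```fortran
! Minimal sketch, assuming default-kind real data and an intra-node
! communicator already created with MPI_Comm_split_type.
subroutine slice_shared_window(node_comm, node_rank, nx, nz, ny, nsample, &
                               pi, u, v, win)
    use mpi
    use iso_c_binding
    implicit none
    integer, intent(in)  :: node_comm, node_rank, nx, nz, ny, nsample
    real, pointer, intent(out) :: pi(:,:,:,:), u(:,:,:,:), v(:,:,:,:)
    integer, intent(out) :: win
    integer :: ierr, disp_unit
    integer(kind=MPI_ADDRESS_KIND) :: winsize, nbytes
    integer(kind=c_intptr_t) :: addr
    type(c_ptr) :: base
    real :: dummy

    disp_unit = int(c_sizeof(dummy))          ! element size from the compiler, never hard-coded
    nbytes = int(nx, MPI_ADDRESS_KIND) * nz * ny * nsample * disp_unit

    winsize = 0
    if (node_rank == 0) winsize = 3 * nbytes  ! pi + u + v in ONE window
    call MPI_Win_allocate_shared(winsize, disp_unit, MPI_INFO_NULL, &
                                 node_comm, base, win, ierr)
    if (node_rank /= 0) then                  ! other ranks fetch rank 0's base address
        call MPI_Win_shared_query(win, 0, winsize, disp_unit, base, ierr)
    end if

    ! Slice the window: pi at offset 0, u after pi, v after pi + u.
    addr = transfer(base, addr)
    call c_f_pointer(base,                              pi, [nx, nz, ny, nsample])
    call c_f_pointer(transfer(addr + nbytes,     base), u,  [nx, nz, ny, nsample])
    call c_f_pointer(transfer(addr + 2 * nbytes, base), v,  [nx, nz, ny, nsample])
end subroutine slice_shared_window
```

With all three fields in one window, the size check becomes trivial: the window must be exactly `3 * nbytes`, and each slice starts at a multiple of `nbytes`, so `v` cannot silently fall outside the allocation.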

Using CATALINA_OPTS:
touch: /Users/luosifu/Library/tomcat/logs/catalina.out: No such file or directory
/Users/luosifu/Library/tomcat/bin/catalina.sh: line 465: /Users/luosifu/Library/tomcat/logs/catalina.out: No such file or directory

### Fixing the Tomcat startup error "catalina.out No such file or directory"

Per references [1][^1] and [3][^3], this error usually means the `catalina.out` file is missing from Tomcat's log directory. The full steps:

#### 1. **Locate the log directory**
- Inspect Tomcat's startup script (e.g. `catalina.sh`) or the running process (reference [3][^3]):
  ```bash
  ps aux | grep tomcat
  ```
- Look for the value of the `-Dcatalina.base` parameter in the output (e.g. `/data/tomcat_test`); the log directory is normally the `logs` subdirectory under that path.

#### 2. **Create the missing log file**
```bash
# change to the Tomcat log directory (adjust the path)
cd /path/to/your/tomcat/logs

# create catalina.out (reference [1][^1])
sudo touch catalina.out

# set permissions so the Tomcat user can write to it
sudo chmod 666 catalina.out   # or, more strictly: chown tomcat:tomcat catalina.out
```

#### 3. **Verify the directory layout**
Make sure the structure matches what Tomcat expects:
```
tomcat_home/
├── bin/
├── logs/            # must exist
│   └── catalina.out # the file just created
├── conf/
└── webapps/
```

#### 4. **Check environment variables (optional)**
If `CATALINA_BASE` or `CATALINA_HOME` was changed (reference [2][^2]):
```bash
# check the current values
echo $CATALINA_BASE
echo $CATALINA_HOME

# restore if needed
export CATALINA_BASE=/path/to/correct/tomcat
```

#### 5. **Restart Tomcat**
```bash
# stop the service
sudo /path/to/tomcat/bin/shutdown.sh

# start the service and watch the log
sudo /path/to/tomcat/bin/startup.sh && tail -f /path/to/tomcat/logs/catalina.out
```

> **Key point**: reference [1][^1] stresses that the file must be created **in the correct log directory** (not in `bin` or elsewhere), or the problem will recur.

#### Common follow-up checks
1. **Insufficient permissions**: create the file with `sudo` and confirm the Tomcat user has write access.
2. **Missing directory**: if `logs` itself is absent, create it first:
   ```bash
   sudo mkdir -p /path/to/tomcat/logs
   ```
3. **Configuration issues**: check `conf/logging.properties` for a custom log path.

---

### Related questions
1. How do I customize Tomcat's log file path?
2. How do I fix `Permission denied` errors at Tomcat startup?
3. Why might Tomcat stop running unexpectedly? (reference [2][^2])
4. How do I analyze Tomcat's `SEVERE: Context startup failed` error? (reference [4][^4])

[^1]: Source: how to create a missing catalina.out file, and where to create it
[^2]: Source: symptoms to check when Tomcat stops unexpectedly
[^3]: Source: Tomcat process information and directory layout
[^4]: Source: errors associated with Tomcat context startup failures

Related recommendations

FO: Starting Servlet Engine: Apache Tomcat/7.0.59 Sep 03, 2025 1:28:08 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /data/EnterpriseApp/TomcatApp/tomcat-7.0-bdcs_hande/webapps/bdcs Sep 03, 2025 1:28:09 PM org.apache.catalina.startup.ContextConfig processAnnotationsJar SEVERE: Unable to process Jar entry [META-INF/versions/9/module-info.class] from Jar [jar:file:/data/EnterpriseApp/TomcatApp/tomcat-7.0-bdcs_hande/webapps/bdcs/WEB-INF/lib/commons-net-3.10.0.jar!/] for annotations org.apache.tomcat.util.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 19 at org.apache.tomcat.util.bcel.classfile.Constant.readConstant(Constant.java:97) at org.apache.tomcat.util.bcel.classfile.ConstantPool.<init>(ConstantPool.java:55) at org.apache.tomcat.util.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:177) at org.apache.tomcat.util.bcel.classfile.ClassParser.parse(ClassParser.java:85) at org.apache.catalina.startup.ContextConfig.processAnnotationsStream(ContextConfig.java:2089) at org.apache.catalina.startup.ContextConfig.processAnnotationsJar(ContextConfig.java:1965) at org.apache.catalina.startup.ContextConfig.processAnnotationsUrl(ContextConfig.java:1931) at org.apache.catalina.startup.ContextConfig.processAnnotations(ContextConfig.java:1916) at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1330) at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:889) at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:386) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117) at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5412) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Sep 03, 2025 1:28:10 PM org.apache.catalina.startup.TaglibUriRule body INFO: TLD skipped. URI: webwork is already defined log4j:ERROR setFile(null,true) call failed. 
java.io.FileNotFoundException: /logs/bdcs.log (No such file or directory) at java.io.FileOutputStream.open0(Native Method) at java.io.FileOutputStream.open(FileOutputStream.java:270) at java.io.FileOutputStream.<init>(FileOutputStream.java:213) at java.io.FileOutputStream.<init>(FileOutputStream.java:133) at org.apache.log4j.FileAppender.setFile(FileAppender.java:272) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:151) at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:206) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:247) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:123) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:87) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:645) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:603) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:500) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:406) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:432) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:460) at org.apache.log4j.LogManager.<clinit>(LogManager.java:113) at org.apache.log4j.Category.getInstance(Category.java:530) at org.apache.commons.logging.impl.Log4jFactory.getInstance(Log4jFactory.java:140) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:257) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:390) at com.opensymphony.clickstream.ClickstreamListener.<clinit>(ClickstreamListener.java:24) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at java.lang.Class.newInstance(Class.java:442) at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:116) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4932) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5524) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) log4j:ERROR Either File or DatePattern options are not set for appender [bdcs]. 
09-03 13:28:11.438 INFO [ContextLoader.java:142] Root WebApplicationContext: initialization started 09-03 13:28:11.457 INFO [XmlBeanDefinitionReader.java:119] Loading XML bean definitions from class path resource [com/smics/apps/sso/test_sso_context.xml] 09-03 13:28:11.479 INFO [XmlBeanDefinitionReader.java:119] Loading XML bean definitions from class path resource [com/smics/apps/exporthr/test_exporthr_context.xml] 09-03 13:28:11.486 INFO [XmlBeanDefinitionReader.java:119] Loading XML bean definitions from class path resource [com/smics/apps/prodtech/prod_prodtech_context.xml] 09-03 13:28:11.500 INFO [XmlBeanDefinitionReader.java:119] Loading XML bean definitions from class path resource [com/smics/apps/bdcs/mail_content.xml] 09-03 13:28:11.501 ERROR [ContextLoader.java:172] Context initialization failed org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [com/smics/apps/bdcs/mail_content.xml]; nested exception is java.io.FileNotFoundException: class path resource [com/smics/apps/bdcs/mail_content.xml] cannot be opened because it does not exist java.io.FileNotFoundException: class path resource [com/smics/apps/bdcs/mail_content.xml] cannot be opened because it does not exist at org.springframework.core.io.ClassPathResource.getInputStream(ClassPathResource.java:127) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:131) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:99) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:114) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:82) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:87) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:262) at org.springframework.web.context.support.AbstractRefreshableWebApplicationContext.refresh(AbstractRefreshableWebApplicationContext.java:131) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:224) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:150) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:48) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5016) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5524) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

What is the cause?

// (c) Copyright 1986-2022 Xilinx, Inc. All Rights Reserved. // (c) Copyright 2022-2025 Advanced Micro Devices, Inc. All rights reserved. // // This file contains confidential and proprietary information // of AMD and is protected under U.S. and international copyright // and other intellectual property laws. // // DISCLAIMER // This disclaimer is not a license and does not grant any // rights to the materials distributed herewith. Except as // otherwise provided in a valid license issued to you by // AMD, and to the maximum extent permitted by applicable // law: (1) THESE MATERIALS ARE MADE AVAILABLE "AS IS" AND // WITH ALL FAULTS, AND AMD HEREBY DISCLAIMS ALL WARRANTIES // AND CONDITIONS, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING // BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, NON- // INFRINGEMENT, OR FITNESS FOR ANY PARTICULAR PURPOSE; and // (2) AMD shall not be liable (whether in contract or tort, // including negligence, or under any other theory of // liability) for any loss or damage of any kind or nature // related to, arising under or in connection with these // materials, including for any direct, or any indirect, // special, incidental, or consequential loss or damage // (including loss of data, profits, goodwill, or any type of // loss or damage suffered as a result of any action brought // by a third party) even if such damage or loss was // reasonably foreseeable or AMD had been advised of the // possibility of the same. // // CRITICAL APPLICATIONS // AMD products are not designed or intended to be fail- // safe, or for use in any application requiring fail-safe // performance, such as life-support or safety devices or // systems, Class III medical devices, nuclear facilities, // applications related to the deployment of airbags, or any // other applications that could lead to death, personal // injury, or severe property or environmental damage // (individually and collectively, "Critical // Applications"). Customer assumes the sole risk and // liability of any use of AMD products in Critical // Applications, subject only to applicable laws and // regulations governing limitations on product liability. // // THIS COPYRIGHT NOTICE AND DISCLAIMER MUST BE RETAINED AS // PART OF THIS FILE AT ALL TIMES. // // DO NOT MODIFY THIS FILE. // IP VLNV: xilinx.com:ip:xfft:9.1 // IP Revision: 10 // The following must be inserted into your Verilog file for this // core to be instantiated. Change the instance name and port connections // (in parentheses) to your own signal names. 
```verilog
//----------- Begin Cut here for INSTANTIATION Template ---// INST_TAG
xfft_0 your_instance_name (
  .aclk(aclk),                                                // input wire aclk
  .aresetn(aresetn),                                          // input wire aresetn
  .s_axis_config_tdata(s_axis_config_tdata),                  // input wire [7 : 0] s_axis_config_tdata
  .s_axis_config_tvalid(s_axis_config_tvalid),                // input wire s_axis_config_tvalid
  .s_axis_config_tready(s_axis_config_tready),                // output wire s_axis_config_tready
  .s_axis_data_tdata(s_axis_data_tdata),                      // input wire [31 : 0] s_axis_data_tdata
  .s_axis_data_tvalid(s_axis_data_tvalid),                    // input wire s_axis_data_tvalid
  .s_axis_data_tready(s_axis_data_tready),                    // output wire s_axis_data_tready
  .s_axis_data_tlast(s_axis_data_tlast),                      // input wire s_axis_data_tlast
  .m_axis_data_tdata(m_axis_data_tdata),                      // output wire [63 : 0] m_axis_data_tdata
  .m_axis_data_tuser(m_axis_data_tuser),                      // output wire [15 : 0] m_axis_data_tuser
  .m_axis_data_tvalid(m_axis_data_tvalid),                    // output wire m_axis_data_tvalid
  .m_axis_data_tready(m_axis_data_tready),                    // input wire m_axis_data_tready
  .m_axis_data_tlast(m_axis_data_tlast),                      // output wire m_axis_data_tlast
  .event_frame_started(event_frame_started),                  // output wire event_frame_started
  .event_tlast_unexpected(event_tlast_unexpected),            // output wire event_tlast_unexpected
  .event_tlast_missing(event_tlast_missing),                  // output wire event_tlast_missing
  .event_status_channel_halt(event_status_channel_halt),      // output wire event_status_channel_halt
  .event_data_in_channel_halt(event_data_in_channel_halt),    // output wire event_data_in_channel_halt
  .event_data_out_channel_halt(event_data_out_channel_halt)   // output wire event_data_out_channel_halt
);
// INST_TAG_END ------ End INSTANTIATION Template ---------

// You must compile the wrapper file xfft_0.v when simulating
// the core, xfft_0. When compiling the wrapper file, be sure to
// reference the Verilog simulation library.
```

Please regenerate based on this IP core .veo file.

```javascript
// Load the Henan province boundary data
var heNan = ee.FeatureCollection('projects/ee-wu196552/assets/HeNan');

// Define the time range
var time_start = '2001', time_end = '2023';

// Load the MODIS NDVI dataset
var ndvi = ee.ImageCollection("MODIS/061/MOD13A2")
    .select(['NDVI'])
    .filterDate(time_start, time_end);

// Compute VCI
var ndvi_min = ndvi.min().multiply(0.0001);
var ndvi_max = ndvi.max().multiply(0.0001);
var vci = ndvi.map(function (img) {
    var band = img.multiply(0.0001);
    var index = band.expression('((ndvi - min)/(max - min))*100.0', {
        'ndvi': band, 'min': ndvi_min, 'max': ndvi_max
    }).rename('vci');
    return index.copyProperties(img, ['system:time_start', 'system:time_end']);
});

// Compute the median VCI
var vci_median = vci.median();
Map.addLayer(vci_median.clip(heNan), [], 'vci_median', false);

// Print a VCI histogram
print(ui.Chart.image.histogram(vci_median, heNan, 1000));

// VCI classification
var cons = ee.Image.constant(0);
var extreme = cons.where(vci_median.gte(0).and(vci_median.lt(10)), 1);
var severe = extreme.where(vci_median.gte(10).and(vci_median.lt(20)), 2);
var moderate = severe.where(vci_median.gte(20).and(vci_median.lt(30)), 3);
var light = moderate.where(vci_median.gte(30).and(vci_median.lt(40)), 4);
var no1 = light.where(vci_median.gte(40).and(vci_median.lt(60)), 5);
var no2 = no1.where(vci_median.gte(60).and(vci_median.lt(80)), 6);
var no3 = no2.where(vci_median.gte(80), 7);
Map.addLayer(moderate.clip(heNan), {min: 1, max: 7}, 'drought_map', false);

// VCI time-series classification
var time_start = '2001', time_end = '2024';
var ndvi2 = ee.ImageCollection("MODIS/061/MOD13A2")
    .select(['NDVI'])
    .filterDate(time_start, time_end);
var ndvi_min2 = ndvi2.min().multiply(0.0001);
var ndvi_max2 = ndvi2.max().multiply(0.0001);
var vci2 = ndvi2.map(function (img) {
    var band = img.multiply(0.0001);
    var index = band.expression('((ndvi - min)/(max - min))*100.0', {
        'ndvi': band, 'min': ndvi_min2, 'max': ndvi_max2
    }).rename('vci');
    return index.copyProperties(img, ['system:time_start', 'system:ti
```

```python
import os
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torch.optim.lr_scheduler import ReduceLROnPlateau
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns
from tqdm import tqdm

# Configuration parameters
class Config:
    data_root = "C:\\Users\\86190\\Desktop\\超声无损检测\\机器学习\\data"  # data root directory
    sampling_rate = 1e6   # 1 MHz sampling rate (adjust to the actual value)
    window_size = 2048    # sample length
    stride = 512          # sliding-window stride
    batch_size = 32       # adjustable
    lr = 1e-4
    epochs = 100
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ==================== Data preprocessing ====================
class AIDataset(Dataset):
    def __init__(self, config, phase='train', transform=False):
        self.config = config
        self.phase = phase
        self.transform = transform
        self.samples = []
        self.labels = []
        self.class_names = ["undamaged", "light_damage", "medium_damage", "severe_damage"]

        # iterate over each class folder
        for label, folder in enumerate(self.class_names):
            folder_path = os.path.join(config.data_root, folder)
            if not os.path.exists(folder_path):
                continue

            # load matching parameter and waveform files
            param_files = sorted([f for f in os.listdir(folder_path) if "_param.csv" in f])
            waveform_files = sorted([f for f in os.listdir(folder_path) if "_waveform.csv" in f])

            for param_file, waveform_file in zip(param_files, waveform_files):
                param_path = os.path.join(folder_path, param_file)
                waveform_path = os.path.join(folder_path, waveform_file)

                # load the waveform data
                try:
                    waveform = pd.read_csv(waveform_path)['Amplitude'].values.astype(np.float32)
                except Exception as e:
                    print(f"Error loading waveform: {waveform_path}\n{str(e)}")
```

The pointer setup for the two derived-type structures is wrong:

```fortran
! Set up shared memory for the large arrays - split into 5 variables
subroutine setup_shared_memory_data(node_comm, node_rank, ensemble_pi_ptr, &
        ensemble_u_ptr, ensemble_v_ptr, ensemble_th_ptr, ensemble_q_ptr, &
        back_data_ptr, true_data_ptr, &
        win_ensemble_pi, win_ensemble_u, win_ensemble_v, &
        win_ensemble_th, win_ensemble_q, win_back, win_true)
    use module_initial
    use iso_c_binding
    integer, intent(in) :: node_comm, node_rank
    real, pointer, intent(out) :: ensemble_pi_ptr(:,:,:,:), ensemble_u_ptr(:,:,:,:), ensemble_v_ptr(:,:,:,:), &
                                  ensemble_th_ptr(:,:,:,:), ensemble_q_ptr(:,:,:,:)
    type(model_data), pointer, intent(inout) :: back_data_ptr, true_data_ptr
    integer, intent(out) :: win_ensemble_pi, win_ensemble_u, win_ensemble_v, win_ensemble_th, win_ensemble_q, win_back, win_true
    integer(kind=MPI_ADDRESS_KIND) :: ssize_ensemble, ssize_data
    integer :: ierr, disp_unit
    type(c_ptr) :: baseptr_pi, baseptr_u, baseptr_v, baseptr_th, baseptr_q, baseptr_back, baseptr_true
    ! ---------------------------------------------------
    ! back_data_ptr and true_data_ptr are pointers of type model_data
    integer(kind=c_intptr_t) :: addr
    type(c_ptr) :: c_ptr_back, c_ptr_true
    ! ---------------------------------------------------

    disp_unit = 4  ! sizeof(real)

    ! Ensemble data size - each variable allocated separately
    if (node_rank == 0) then
        ssize_ensemble = int(nx * nz * ny * nsample, MPI_ADDRESS_KIND)  ! a single variable
        ssize_data = int(nx * nz * ny * 5, MPI_ADDRESS_KIND)            ! 5 variables
    else
        ssize_ensemble = 0
        ssize_data = 0
    end if

    ! Allocate shared memory for the 5 ensemble variables
    call MPI_Win_allocate_shared(ssize_ensemble * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_pi, win_ensemble_pi, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_ensemble_pi, 0, ssize_ensemble, disp_unit, baseptr_pi, ierr)
    end if
    call c_f_pointer(baseptr_pi, ensemble_pi_ptr, [nx, nz, ny, nsample])

    call MPI_Win_allocate_shared(ssize_ensemble * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_u, win_ensemble_u, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_ensemble_u, 0, ssize_ensemble, disp_unit, baseptr_u, ierr)
    end if
    call c_f_pointer(baseptr_u, ensemble_u_ptr, [nx, nz, ny, nsample])

    call MPI_Win_allocate_shared(ssize_ensemble * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_v, win_ensemble_v, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_ensemble_v, 0, ssize_ensemble, disp_unit, baseptr_v, ierr)
    end if
    call c_f_pointer(baseptr_v, ensemble_v_ptr, [nx, nz, ny, nsample])

    call MPI_Win_allocate_shared(ssize_ensemble * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_th, win_ensemble_th, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_ensemble_th, 0, ssize_ensemble, disp_unit, baseptr_th, ierr)
    end if
    call c_f_pointer(baseptr_th, ensemble_th_ptr, [nx, nz, ny, nsample])

    call MPI_Win_allocate_shared(ssize_ensemble * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_q, win_ensemble_q, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_ensemble_q, 0, ssize_ensemble, disp_unit, baseptr_q, ierr)
    end if
    call c_f_pointer(baseptr_q, ensemble_q_ptr, [nx, nz, ny, nsample])

    ! Allocate shared memory for the background field
    call MPI_Win_allocate_shared(ssize_data * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_back, win_back, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_back, 0, ssize_data, disp_unit, baseptr_back, ierr)
    end if
    ! call c_f_pointer(baseptr_back, back_data_ptr)
    call c_f_pointer(baseptr_back, back_data_ptr%pi, [nx, nz, ny])
    addr = transfer(baseptr_back, addr) + 1 * nx * nz * ny * disp_unit
    c_ptr_back = transfer(addr, c_ptr_back)
    call c_f_pointer(c_ptr_back, back_data_ptr%u, [nx, nz, ny])
    addr = transfer(baseptr_back, addr) + 2 * nx * nz * ny * disp_unit
    c_ptr_back = transfer(addr, c_ptr_back)
    call c_f_pointer(c_ptr_back, back_data_ptr%v, [nx, nz, ny])
    addr = transfer(baseptr_back, addr) + 3 * nx * nz * ny * disp_unit
    c_ptr_back = transfer(addr, c_ptr_back)
    call c_f_pointer(c_ptr_back, back_data_ptr%th, [nx, nz, ny])
    addr = transfer(baseptr_back, addr) + 4 * nx * nz * ny * disp_unit
    c_ptr_back = transfer(addr, c_ptr_back)
    call c_f_pointer(c_ptr_back, back_data_ptr%q, [nx, nz, ny])

    ! Allocate shared memory for the truth field
    call MPI_Win_allocate_shared(ssize_data * disp_unit, disp_unit, MPI_INFO_NULL, &
                                 node_comm, baseptr_true, win_true, ierr)
    if (node_rank /= 0) then
        call MPI_Win_shared_query(win_true, 0, ssize_data, disp_unit, baseptr_true, ierr)
    end if
    ! call c_f_pointer(baseptr_true, true_data_ptr)
    call c_f_pointer(baseptr_true, true_data_ptr%pi, [nx, nz, ny])
    addr = transfer(baseptr_true, addr) + 1 * nx * nz * ny * disp_unit
    c_ptr_true = transfer(addr, c_ptr_true)
    call c_f_pointer(c_ptr_true, true_data_ptr%u, [nx, nz, ny])
    addr = transfer(baseptr_true, addr) + 2 * nx * nz * ny * disp_unit
    c_ptr_true = transfer(addr, c_ptr_true)
    call c_f_pointer(c_ptr_true, true_data_ptr%v, [nx, nz, ny])
    addr = transfer(baseptr_true, addr) + 3 * nx * nz * ny * disp_unit
    c_ptr_true = transfer(addr, c_ptr_true)
    call c_f_pointer(c_ptr_true, true_data_ptr%th, [nx, nz, ny])
    addr = transfer(baseptr_true, addr) + 4 * nx * nz * ny * disp_unit
    c_ptr_true = transfer(addr, c_ptr_true)
    call c_f_pointer(c_ptr_true, true_data_ptr%q, [nx, nz, ny])
end subroutine setup_shared_memory_data
```

```
(pylk312) [hejx@login04 da_svd]$ cat mylog.e21493684
srun: ROUTE: split_hostlist: hl=a3310n[13-14],b3305r8n8,b3306r1n[1-8],b3306r2n[1-5],b3309r2n[4-8],b3309r3n[1-8],b3309r4n[1-3] tree_width 0
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
libifcoremt.so.5   00002B0FCC1622F6  for__signal_handl  Unknown  Unknown
libpthread-2.17.s  00002B0FCBCD45D0  Unknown            Unknown  Unknown
da_svd.exe         000000000041BD81  da_system_IP_setu  852      NUDT_RFS_DA.F90
da_svd.exe         0000000000402AD6  MAIN__             152      NUDT_RFS_DA.F90
da_svd.exe         0000000000401D4E  Unknown            Unknown  Unknown
libc-2.17.so       00002B0FCDE3A3D5  __libc_start_main  Unknown  Unknown
da_svd.exe         0000000000401C59  Unknown            Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
libifcoremt.so.5   00002B5530B802F6  for__signal_handl  Unknown  Unknown
libpthread-2.17.s  00002B55306F25D0  Unknown            Unknown  Unknown
da_svd.exe         000000000041BD81  da_system_IP_setu  852      NUDT_RFS_DA.F90
da_svd.exe         0000000000402AD6  MAIN__             152      NUDT_RFS_DA.F90
da_svd.exe         0000000000401D4E  Unknown            Unknown  Unknown
libc-2.17.so       00002B55328583D5  __libc_start_main  Unknown  Unknown
da_svd.exe         0000000000401C59  Unknown            Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
libifcoremt.so.5   00002B1B63F5B2F6  for__signal_handl  Unknown  Unknown
libpthread-2.17.s  00002B1B63ACD5D0  Unknown            Unknown  Unknown
da_svd.exe         000000000041BD81  da_system_IP_setu  852      NUDT_RFS_DA.F90
da_svd.exe         0000000000402AD6  MAIN__             152      NUDT_RFS_DA.F90
da_svd.exe         0000000000401D4E  Unknown            Unknown  Unknown
libc-2.17.so       00002B1B65C333D5  __libc_start_main  Unknown  Unknown
da_svd.exe         0000000000401C59  Unknown            Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
```
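A likely culprit, given that the traceback points at the setup routine: the line `call c_f_pointer(baseptr_back, back_data_ptr)` is commented out, so `back_data_ptr` is never given storage, and the very next call dereferences `%pi` of an undefined pointer. A minimal sketch of one possible fix (an assumption based on the code above, which suggests `model_data` has pointer-array components `pi, u, v, th, q`): allocate the small derived-type object per process, and let only its components alias the shared window:

```fortran
! Sketch of a fix, not the original code: the model_data object itself
! gets ordinary process-local storage; only its array components point
! into the shared window.
integer(kind=c_intptr_t) :: addr, block_bytes   ! declared with the other locals

if (.not. associated(back_data_ptr)) allocate(back_data_ptr)

block_bytes = int(nx, c_intptr_t) * int(nz, c_intptr_t) * int(ny, c_intptr_t) * disp_unit
addr = transfer(baseptr_back, addr)
call c_f_pointer(transfer(addr,                 baseptr_back), back_data_ptr%pi, [nx, nz, ny])
call c_f_pointer(transfer(addr + 1*block_bytes, baseptr_back), back_data_ptr%u,  [nx, nz, ny])
call c_f_pointer(transfer(addr + 2*block_bytes, baseptr_back), back_data_ptr%v,  [nx, nz, ny])
call c_f_pointer(transfer(addr + 3*block_bytes, baseptr_back), back_data_ptr%th, [nx, nz, ny])
call c_f_pointer(transfer(addr + 4*block_bytes, baseptr_back), back_data_ptr%q,  [nx, nz, ny])
```

`true_data_ptr` needs the same treatment. Note that uncommenting `c_f_pointer(baseptr_back, back_data_ptr)` alone would not be a fix either: it would overlay the derived type's pointer descriptors on top of the data region, which is not what the window was sized for.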

[07:37:43 ERROR]: Could not pass event EntityDamageByEntityEvent to AttributePlus v1.7.5-C org.bukkit.event.EventException: null at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:306) ~[Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62) ~[Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:499) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:484) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.craftbukkit.v1_12_R1.event.CraftEventFactory.callEvent(CraftEventFactory.java:92) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.craftbukkit.v1_12_R1.event.CraftEventFactory.callEntityDamageEvent(CraftEventFactory.java:579) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.craftbukkit.v1_12_R1.event.CraftEventFactory.handleEntityDamageEvent(CraftEventFactory.java:482) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at org.bukkit.craftbukkit.v1_12_R1.event.CraftEventFactory.handleLivingEntityDamageEvent(CraftEventFactory.java:611) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at net.minecraft.server.v1_12_R1.EntityLiving.damageEntity0(EntityLiving.java:1339) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at net.minecraft.server.v1_12_R1.EntityLiving.damageEntity(EntityLiving.java:863) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at net.minecraft.server.v1_12_R1.EntityHuman.attack(EntityHuman.java:1005) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at net.minecraft.server.v1_12_R1.EntityPlayer.attack(EntityPlayer.java:1302) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980] at net.minecraft.server.v1_12_R1.PlayerConnection.a(PlayerConnection.java:1616) [Spigot-1.12.2.jar:git-Spigot-7754231-94b0980]

<Aug 7 10:52:25> <Error> <javax.enterprise.resource.webcontainer.jsf.application> <Error Rendering View[/ncoperation/nclist.jsf] com.sun.faces.mgbean.ManagedBeanCreationException: Unable to create managed bean appealOrgNoResultBeanV3. The following problems were found: - Property sib for managed bean appealOrgNoResultBeanV3 does not exist. at com.sun.faces.mgbean.BeanManager.create(BeanManager.java:243) at com.sun.faces.el.ManagedBeanELResolver.resolveBean(ManagedBeanELResolver.java:200) at com.sun.faces.el.ManagedBeanELResolver.getValue(ManagedBeanELResolver.java:55) at com.sun.faces.el.DemuxCompositeELResolver._getValue(DemuxCompositeELResolver.java:156) at com.sun.faces.el.DemuxCompositeELResolver.getValue(DemuxCompositeELResolver.java:184) Truncated. see log file for complete stacktrace > Aug 07, 2025 10:52:25 AM org.primefaces.application.exceptionhandler.PrimeExceptionHandler logException SEVERE: Unable to create managed bean appealOrgNoResultBeanV3. The following problems were found: - Property sib for managed bean appealOrgNoResultBeanV3 does not exist. com.sun.faces.mgbean.ManagedBeanCreationException: Unable to create managed bean appealOrgNoResultBeanV3. The following problems were found: - Property sib for managed bean appealOrgNoResultBeanV3 does not exist. at com.sun.faces.mgbean.BeanManager.create(BeanManager.java:243) at com.sun.faces.el.ManagedBeanELResolver.resolveBean(ManagedBeanELResolver.java:200) at com.sun.faces.el.ManagedBeanELResolver.getValue(ManagedBeanELResolver.java:55) at com.sun.faces.el.DemuxCompositeELResolver._getValue(DemuxCompositeELResolver.java:156) at com.sun.faces.el.DemuxCompositeELResolver.getValue(DemuxCompositeELResolver.java:184) at com.sun.el.parser.AstIdentifier.getValue(AstIdentifier.java:92) at com.sun.el.parser.AstValue.getBase(AstValue.java:126) at com.sun.el.parser.AstValue.getValue(AstValue.java:175) at com.sun.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:202) at org.jboss.weld.module.web.el.WeldValueExpression.getValue(WeldValueExpression.java:50) at org.jboss.weld.module.web.el.WeldValueExpression.getValue(WeldValueExpression.java:50) at com.sun.faces.facelets.el.TagValueExpression.getValue(TagValueExpression.java:73) at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:170) at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:157) at javax.faces.component.UIData.getValue(UIData.java:736) at org.primefaces.component.api.UIData.getDataModel(UIData.java:768) at org.primefaces.component.datatable.DataTable.loadLazyData(DataTable.java:1141) at org.primefaces.component.datatable.DataTableRenderer.preRender(DataTableRenderer.java:144) at org.primefaces.component.datatable.DataTableRenderer.encodeEnd(DataTableRenderer.java:92) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:595) at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeRecursive(HtmlBasicRenderer.java:286) at com.sun.faces.renderkit.html_basic.GroupRenderer.encodeChildren(GroupRenderer.java:90) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:566) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1647) at com.sun.faces.renderkit.html_basic.CompositeRenderer.encodeChildren(CompositeRenderer.java:54) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:566) at org.primefaces.renderkit.CoreRenderer.renderChild(CoreRenderer.java:86) at org.primefaces.renderkit.CoreRenderer.renderChildren(CoreRenderer.java:73) at org.primefaces.component.dialog.DialogRenderer.encodeContent(DialogRenderer.java:192) at org.primefaces.component.dialog.DialogRenderer.encodeMarkup(DialogRenderer.java:116) at
org.primefaces.component.dialog.DialogRenderer.encodeEnd(DialogRenderer.java:48) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:595) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1654) at javax.faces.render.Renderer.encodeChildren(Renderer.java:152) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:566) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1647) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1650) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1650) at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:468) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:170) at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:132) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:102) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:76) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:199) at javax.faces.webapp.FacesServlet.executeLifecyle(FacesServlet.java:708) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:451) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:295) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:353) at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at org.primefaces.webapp.filter.FileUploadFilter.doFilter(FileUploadFilter.java:81) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at com.lkm.pmi.template.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:21) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:32) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3866) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3829) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344) at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197) at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203) at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71) at weblogic.servlet.internal.WebAppServletContext.processSecuredExecute(WebAppServletContext.java:2502) at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2351) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2326) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2304) at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1779) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1733) at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272) at 
weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352) at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337) at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57) at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41) at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:651) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420) at weblogic.work.ExecuteThread.run(ExecuteThread.java:360) <Aug 7 10:52:25> <Error> <org.primefaces.application.exceptionhandler.PrimeExceptionHandler> <Unable to create managed bean appealOrgNoResultBeanV3. The following problems were found: - Property sib for managed bean appealOrgNoResultBeanV3 does not exist. com.sun.faces.mgbean.ManagedBeanCreationException: Unable to create managed bean appealOrgNoResultBeanV3. The following problems were found: - Property sib for managed bean appealOrgNoResultBeanV3 does not exist. at com.sun.faces.mgbean.BeanManager.create(BeanManager.java:243) at com.sun.faces.el.ManagedBeanELResolver.resolveBean(ManagedBeanELResolver.java:200) at com.sun.faces.el.ManagedBeanELResolver.getValue(ManagedBeanELResolver.java:55) at com.sun.faces.el.DemuxCompositeELResolver._getValue(DemuxCompositeELResolver.java:156) at com.sun.faces.el.DemuxCompositeELResolver.getValue(DemuxCompositeELResolver.java:184) Truncated. see log file for complete stacktrace > Aug 07, 2025 10:52:25 AM org.primefaces.application.exceptionhandler.PrimeExceptionHandler handle SEVERE: Could not handle exception! java.lang.IllegalStateException: Cannot resize buffer - 163828 bytes have already been written (Servlet 2.3, sec. 5.1) at weblogic.servlet.internal.ServletResponseImpl.setBufferSize(ServletResponseImpl.java:415) at com.sun.faces.context.ExternalContextImpl.setResponseBufferSize(ExternalContextImpl.java:974) at com.sun.faces.context.PartialViewContextImpl$DelayedInitPartialResponseWriter.<init>(PartialViewContextImpl.java:664) at com.sun.faces.context.PartialViewContextImpl.getPartialResponseWriter(PartialViewContextImpl.java:343) at org.primefaces.context.PrimePartialViewContext.getPartialResponseWriter(PrimePartialViewContext.java:68) at org.primefaces.application.exceptionhandler.PrimeExceptionHandler.handleRedirect(PrimeExceptionHandler.java:354) at com.lkm.pmi.mainvc.LkmExceptionHandler.handleRedirect(LkmExceptionHandler.java:67) at org.primefaces.application.exceptionhandler.PrimeExceptionHandler.handle(PrimeExceptionHandler.java:100) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:94) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:199) at javax.faces.webapp.FacesServlet.executeLifecyle(FacesServlet.java:708) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:451) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:295) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:353) at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at org.primefaces.webapp.filter.FileUploadFilter.doFilter(FileUploadFilter.java:81) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at com.lkm.pmi.template.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:21) at
weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:32) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:82) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3866) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3829) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344) at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197) at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203) at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71) at weblogic.servlet.internal.WebAppServletContext.processSecuredExecute(WebAppServletContext.java:2502) at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2351) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2326) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2304) at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1779) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1733) at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272) at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352) at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337) at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57) at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41) at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:651) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420) at weblogic.work.ExecuteThread.run(ExecuteThread.java:360) <Aug 7 10:52:25> <Error> <org.primefaces.application.exceptionhandler.PrimeExceptionHandler> <Could not handle exception! java.lang.IllegalStateException: Cannot resize buffer - 163828 bytes have already been written (Servlet 2.3, sec. 5.1) at weblogic.servlet.internal.ServletResponseImpl.setBufferSize(ServletResponseImpl.java:415) at com.sun.faces.context.ExternalContextImpl.setResponseBufferSize(ExternalContextImpl.java:974) at com.sun.faces.context.PartialViewContextImpl$DelayedInitPartialResponseWriter.<init>(PartialViewContextImpl.java:664) at com.sun.faces.context.PartialViewContextImpl.getPartialResponseWriter(PartialViewContextImpl.java:343) at org.primefaces.context.PrimePartialViewContext.getPartialResponseWriter(PrimePartialViewContext.java:68) Truncated. see log file for complete stacktrace >

Jun 04, 2025 4:11:52 PM org.apache.ranger.server.tomcat.EmbeddedServer start
INFO: Webapp file =./webapp, webAppName = /kms
Jun 04, 2025 4:11:52 PM org.apache.ranger.server.tomcat.EmbeddedServer start
INFO: Adding webapp [/kms] = path [./webapp] .....
Jun 04, 2025 4:11:52 PM org.apache.ranger.server.tomcat.EmbeddedServer start
INFO: Finished init of webapp [/kms] = path [./webapp].
Jun 04, 2025 4:11:52 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-9292"]
Jun 04, 2025 4:11:52 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Tomcat
Jun 04, 2025 4:11:52 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.81
Jun 04, 2025 4:11:52 PM org.apache.catalina.startup.ContextConfig getDefaultWebXmlFragment
INFO: No global web.xml found
Jun 04, 2025 4:11:55 PM org.apache.catalina.startup.TldConfig execute
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
java.lang.IllegalArgumentException: Can't get Kerberos realm
    at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:65)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:306)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:291)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:846)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:816)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2954)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2944)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2810)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.security.alias.JavaKeyStoreProvider.initFileSystem(JavaKeyStoreProvider.java:89)
    at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:82)
    at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:49)
    at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:41)
    at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:100)
    at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:71)
    at org.apache.ranger.credentialapi.CredentialReader.getDecryptedString(CredentialReader.java:59)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider.getFromJceks(RangerKeyStoreProvider.java:368)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider.getDBKSConf(RangerKeyStoreProvider.java:118)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider.<init>(RangerKeyStoreProvider.java:82)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider$Factory.createProvider(RangerKeyStoreProvider.java:399)
    at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
    at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:177)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5110)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5633)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1694)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1684)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:88)
    at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
    ... 32 more
Caused by: KrbException: Cannot locate default realm
    at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
    ... 38 more
[The same "Can't get Kerberos realm" trace is then logged four more times, once per credential read; the only frames that change are RangerKeyStoreProvider.getDBKSConf (line 119) and RangerKeyStoreProvider.<init> at lines 83, 84 and 85, plus slightly different "... more" counts.]
[EL Severe]: ejb: 2025-06-04 16:11:57.703--ServerSession(987689800)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Access denied for user 'rangerkms'@'uc-hb2-zqkj-hadoop-hdp-node01' (using password: YES)
Error Code: 1045
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Access denied for user 'rangerkms'@'uc-hb2-zqkj-hadoop-hdp-node01' (using password: YES)
Error Code: 1045
    at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:766)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:204)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:304)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:336)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:302)
    at org.apache.ranger.kms.dao.DaoManager.getEntityManager(DaoManager.java:44)
    at org.apache.hadoop.crypto.key.RangerKMSDB.initDBConnectivity(RangerKMSDB.java:116)
    at org.apache.hadoop.crypto.key.RangerKMSDB.<init>(RangerKMSDB.java:81)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider.<init>(RangerKeyStoreProvider.java:86)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider$Factory.createProvider(RangerKeyStoreProvider.java:399)
    at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
    at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:177)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5110)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5633)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1694)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1684)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Access denied for user 'rangerkms'@'uc-hb2-zqkj-hadoop-hdp-node01' (using password: YES)
Error Code: 1045
    at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
    at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:326)
    at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:138)
    at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162)
    at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.setOrDetectDatasource(DatabaseSessionImpl.java:204)
    at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:741)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:239)
    at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:685)
    ... 20 more
Caused by: java.sql.SQLException: Access denied for user 'rangerkms'@'uc-hb2-zqkj-hadoop-hdp-node01' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:964)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3970)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3906)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:873)
    at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1710)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1226)
    at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2253)
    at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2284)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2083)
    at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:806)
    at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
    at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:410)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:328)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:98)
    ... 25 more
java.io.IOException: Master Key Jceks does not exists
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider.<init>(RangerKeyStoreProvider.java:92)
    at org.apache.hadoop.crypto.key.RangerKeyStoreProvider$Factory.createProvider(RangerKeyStoreProvider.java:399)
    at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
    at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:177)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5110)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5633)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1694)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1684)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
ERROR: Hadoop KMS could not be started

REASON: java.lang.NullPointerException

Stacktrace:
---------------------------------------------------
java.lang.NullPointerException
    at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:178)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5110)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5633)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1694)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1684)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
---------------------------------------------------
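Three distinct failures are chained in this startup log, and the final NullPointerException at KMSWebApp.java:178 is only a knock-on effect of the earlier two: (1) `KrbException: Cannot locate default realm` — the JVM's Kerberos configuration (normally `/etc/krb5.conf`, or whatever `-Djava.security.krb5.conf` points to) is missing or has no `default_realm` entry, so every credential read through Hadoop's `JavaKeyStoreProvider` fails; (2) MySQL rejects `'rangerkms'@'uc-hb2-zqkj-hadoop-hdp-node01'` with error 1045, i.e. the password held in the Ranger credential store does not match the database account, or that account has no grant for this host; (3) as a consequence the master-key jceks is never available, hence `Master Key Jceks does not exists`. The sketch below is a minimal pre-flight check for these three conditions; the jceks path and MySQL port are hypothetical placeholders (only the hostname comes from the log) — substitute the real values from your `dbks-site.xml` / KMS install properties.

```python
import os
import re
import socket
import sys

# Hypothetical placeholders -- substitute the real values from your KMS
# configuration (dbks-site.xml / install.properties). Only DB_HOST is
# taken from the log itself.
KRB5_CONF = "/etc/krb5.conf"
MASTER_KEY_JCEKS = "/etc/ranger/kms/rangerkms.jceks"  # hypothetical path
DB_HOST, DB_PORT = "uc-hb2-zqkj-hadoop-hdp-node01", 3306

def check_krb5_conf() -> bool:
    """'KrbException: Cannot locate default realm' usually means this file
    is missing or has no default_realm entry under [libdefaults]."""
    if not os.path.isfile(KRB5_CONF):
        print(f"[FAIL] {KRB5_CONF} does not exist")
        return False
    with open(KRB5_CONF, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    if re.search(r"^\s*default_realm\s*=\s*\S+", text, re.MULTILINE):
        print(f"[ OK ] default_realm is set in {KRB5_CONF}")
        return True
    print(f"[FAIL] no default_realm entry in {KRB5_CONF}")
    return False

def check_master_key_jceks() -> bool:
    """'Master Key Jceks does not exists' -- the file must exist and be
    readable by the OS user that runs the KMS process."""
    ok = os.path.isfile(MASTER_KEY_JCEKS) and os.access(MASTER_KEY_JCEKS, os.R_OK)
    print(f"[{' OK ' if ok else 'FAIL'}] master key jceks: {MASTER_KEY_JCEKS}")
    return ok

def check_db_reachable() -> bool:
    """Error 1045 is an authentication failure, which a socket test cannot
    diagnose -- but this at least rules out the network path to MySQL."""
    try:
        socket.create_connection((DB_HOST, DB_PORT), timeout=3).close()
        print(f"[ OK ] TCP {DB_HOST}:{DB_PORT} reachable; now verify the "
              "rangerkms password and grant with the mysql client")
        return True
    except OSError as exc:
        print(f"[FAIL] cannot reach {DB_HOST}:{DB_PORT}: {exc}")
        return False

if __name__ == "__main__":
    results = [check_krb5_conf(), check_master_key_jceks(), check_db_reachable()]
    sys.exit(0 if all(results) else 1)
```

If the TCP check passes but error 1045 persists, fix it on the MySQL side — for example `GRANT ALL PRIVILEGES ON rangerkms.* TO 'rangerkms'@'uc-hb2-zqkj-hadoop-hdp-node01';` followed by `FLUSH PRIVILEGES;`, with the account password reset to match the credential store — and then re-run the Ranger KMS setup script, which typically regenerates the credential/master-key jceks files.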

I can see more information in the log:

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.5.RELEASE)

Sep 04, 2025 4:18:19 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-18080"]
Sep 04, 2025 4:18:19 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service [Tomcat]
Sep 04, 2025 4:18:19 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet engine: [Apache Tomcat/9.0.19]
Sep 04, 2025 4:18:20 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring embedded WebApplicationContext
Sep 04, 2025 4:18:20 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-nio-18080"]
Sep 04, 2025 4:18:20 PM org.apache.catalina.util.LifecycleBase handleSubClassException
SEVERE: Failed to start component [Connector[HTTP/1.1-18080]]
org.apache.catalina.LifecycleException: Protocol handler start failed
    at org.apache.catalina.connector.Connector.startInternal(Connector.java:1008)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardService.addConnector(StandardService.java:226)
    at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.addPreviouslyRemovedConnectors(TomcatWebServer.java:259)
    at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.start(TomcatWebServer.java:197)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.startWebServer(ServletWebServerApplicationContext.java:311)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:164)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
    at com.netease.ysf.qa.imtool.ImToolApplication.main(ImToolApplication.java:16)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
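The trace above is cut off before its `Caused by:` line, but `Protocol handler start failed` on `Connector[HTTP/1.1-18080]` is in practice almost always a `java.net.BindException`: another process is already listening on port 18080, quite possibly an earlier instance of the same application that never shut down. A quick way to confirm, sketched below with the port number taken from the log, is to try binding the port yourself; if the bind fails, find the owner (for example with `ss -lntp | grep 18080` on Linux) and either stop it or move this app to a different port via `server.port` in `application.properties`.

```python
import socket

PORT = 18080  # the connector port from the log above

# If bind() raises "Address already in use", another process owns the port
# and Tomcat's ProtocolHandler will fail to start for the same reason.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    try:
        sock.bind(("0.0.0.0", PORT))
        print(f"port {PORT} is free -- the failure has another cause; "
              "check the full log for the Caused by line")
    except OSError as exc:
        print(f"port {PORT} is NOT available: {exc}")
```

For test runs, Spring Boot also accepts `server.port=0`, which picks a random free port and sidesteps the conflict entirely.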
