mv apache-hive-3.1.0-bin hive

Posted: 2025-01-17 18:06:27 · AIGC
当你提到"mv apache-3.1.0-bin hive"这个命令,它看起来像是在Linux或Unix环境下,将`apache-hive-3.1.0-bin`目录移动或重命名为`hive`。Apache Hive是一个数据仓库工具,它允许用户查询存储在Hadoop分布式文件系统(HDFS)上的大规模数据集。这个操作可能是为了简化目录结构或创建对Hive服务更友好的入口点。 如果你想要运行Hive,通常需要先解压下载的Hive包,并配置环境变量,然后启动Hive shell。具体的步骤可能会包括: 1. 解压缩文件: `tar -xvf apache-hive-3.1.0-bin.tar.gz` 2. 进入解压后的目录: `cd apache-hive-3.1.0-bin` 3. 配置环境变量,比如添加到`~/.bashrc`或`.bash_profile`: ```bash export HIVE_HOME=$(pwd) export PATH=$PATH:$HIVE_HOME/bin ``` 4. 使更改生效: `. ~/.bashrc` 或 `source ~/.bash_profile` 5. 启动Hive shell: `hive`
Related questions

Task description

This task: install and configure Hive on top of an existing Hadoop installation, and run it.

Background

To complete this task you need to master:
1. basic Hive installation;
2. MySQL installation and setup;
3. Hive configuration.

Note: this exercise assumes Hadoop has already been installed and configured.

Basic Hive installation

Download the Hive binary package from the official website. Here you don't need to download it yourself; it has already been placed in the `/opt` directory for you.

Enter `/opt`, extract `apache-hive-3.1.0-bin.tar.gz` into that directory, and rename the extracted directory to `hive`:

```
cd /opt
tar -zxvf apache-hive-3.1.0-bin.tar.gz
mv apache-hive-3.1.0-bin hive
```

- Set the environment variables: edit `/etc/profile` (`vi /etc/profile`) and append the following two lines;

```
export HIVE_HOME=/opt/hive
export PATH=$HIVE_HOME/bin:$PATH
```

- Run `source /etc/profile` to make the changes take effect;
- Check whether `hive` is installed: `hive --version`.

![](https://data.educoder.net/api/attachments/eUxxcWwvZHg5N2t5cDc3TXd1VHQ2QT09)

If a Hive version is printed, the installation succeeded. You may still see some warnings, caused by a jar conflict; simply delete the conflicting jar: `rm /opt/hive/lib/log4j-slf4j-impl-2.10.0.jar`.

![](https://data.educoder.net/api/attachments/Vlk5dFNnc0xBK0svNzR5TjdEV243Zz09)

##### MySQL installation and setup

MySQL is already installed on the platform. If you don't have it locally, you can install it with:

```
sudo apt-get install mysql-server          # install the MySQL server
apt-get install mysql-client               # install the MySQL client
sudo apt-get install libmysqlclient-dev    # install related dependencies
```

*Note: you will be prompted to set a password during installation; don't forget it.*

After installation, check whether it succeeded:

`sudo netstat -tap | grep mysql`

If a MySQL socket shows up in the `LISTEN` state, the installation succeeded.

Hive needs a database to store the metastore contents, so we need to configure MySQL for it.

- Install the MySQL JDBC driver. It has already been downloaded into `/opt`; enter that directory and run:

```
tar -zxvf mysql-connector-java-5.1.45.tar.gz
cd mysql-connector-java-5.1.45
cp mysql-connector-java-5.1.45-bin.jar /opt/hive/lib/
```

Next, set up MySQL for Hive.

- Log in to MySQL as `root`;

```
mysql -uroot -p123123 -h127.0.0.1
```

- Create the database instance `hiveDB`;

```
create database hiveDB;
```

- Create the user `bee` with password `123123`;

```
create user 'bee'@'%' identified by '123123';
```

- Grant the user `bee` all privileges on the database instance `hiveDB`;

```
grant all privileges on hiveDB.* to 'bee'@'%' identified by '123123';
```

- Flush the privilege tables.

```
flush privileges;
```

##### Hive configuration

In the `/opt/hive/conf` directory, edit two files: `hive-site.xml` and `hive-env.sh`.

###### hive-site.xml

`hive-site.xml` holds the configuration Hive needs at run time.

- If the file does not exist, create it (`vi hive-site.xml`) and paste the following into it (press `i` to enter insert mode before pasting). The excerpt here is cut off mid-file; it begins with the warehouse and scratch directories:

```
<configuration>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/opt/hive/warehouse</value>
    </property>
    <property>
        <name>hive.exec.scratchdir</name>
        <value>/opt/hive
```

(The excerpt breaks off at this point; see the sketch below for a plausible completion.)
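The excerpt above stops before the metastore connection settings. A plausible completion, wired to the `hiveDB`/`bee` MySQL setup from this exercise, is sketched below. The `javax.jdo.option.*` property names are standard Hive settings, but since the original text is truncated, treat the concrete values as assumptions:

```bash
# Sketch only: write a hive-site.xml matching the MySQL setup above.
# The original excerpt is truncated, so these connection values are assumptions.
cat > /opt/hive/conf/hive-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/opt/hive/warehouse</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hiveDB?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>bee</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123123</value>
  </property>
</configuration>
EOF
```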

### Complete Hive installation and configuration on Hadoop

#### 1. Install Hive

To deploy Hive successfully in a Hadoop environment, follow this procedure:

- Download the specified version of the Hive package and upload it to the server's `/opt` directory[^1].
- Extract the package:
```bash
tar -zxvf hive-1.1.0-cdh5.14.2.tar.gz
```
- Move the extracted directory to the target path `/opt/soft/hive110`:
```bash
mv hive-1.1.0-cdh5.14.2 /opt/soft/hive110
```

#### 2. MySQL as the metastore

Hive needs a relational database to store its metadata; MySQL is the usual recommendation.

- **Install MySQL**: make sure MySQL is installed and its service is running. If not, follow the official documentation or a community tutorial.
- **Create the database and user**: log in to the MySQL console and create a dedicated database and access grants for Hive.
```sql
CREATE DATABASE metastore;
GRANT ALL PRIVILEGES ON metastore.* TO 'hive_user'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```
- **Download a matching JDBC driver**: pick the JDBC driver that matches your MySQL version (e.g. `mysql-connector-java-8.x.x.jar`) and place the jar in Hive's lib directory `/opt/soft/hive110/lib/`[^3].

#### 3. Configure hive-site.xml

Edit `hive-site.xml` in the `/opt/soft/hive110/conf/` directory to define the parameters needed to connect to the external MySQL instance, as well as performance options.

Here is an example covering some of the important properties:

```xml
<configuration>
    <!-- Metastore Configuration -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.cj.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive_user</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>password</value>
    </property>

    <!-- Reducer Settings -->
    <property>
        <name>hive.exec.reducers.max</name>
        <value>-1</value>
    </property>
    <property>
        <name>hive.exec.reducers.bytes.per.reducer</name>
        <value>256000000</value>
    </property>
</configuration>
```

Note that the reducer settings above feed into a specific algorithm that computes the actual number of reducers[^4].

#### 4. Configure hive-env.sh

Copy the template script `hive-env.sh.template` in the same directory to `hive-env.sh`, and add the following environment variable declarations needed for distributed operation:

```bash
export HADOOP_HOME=/path/to/hadoop
export JAVA_HOME=/usr/java/latest
export HIVE_CONF_DIR=/opt/soft/hive110/conf/
```

Finally, verify that the whole system works with a simple query test (see the smoke-test sketch below) to confirm everything is ready.

---
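For that final check, a minimal smoke test might look like the sketch below. The table name is illustrative, not from the original guide; `schematool` ships with Hive and performs the one-time metastore schema initialization:

```bash
# Minimal smoke test, assuming the layout from the steps above.
export PATH=$PATH:/opt/soft/hive110/bin

# One-time metastore schema initialization against the MySQL database.
schematool -dbType mysql -initSchema

# Create a throwaway table, write a row, and read it back.
hive -e "
CREATE TABLE IF NOT EXISTS smoke_test (id INT);
INSERT INTO TABLE smoke_test VALUES (1);
SELECT * FROM smoke_test;
"
```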

```
[root@hadoop01 ~]# cd /export/servers/apache-hive-3.1.3-bin
[root@hadoop01 apache-hive-3.1.3-bin]# bin/schematool -initSchema -dbType mysql -verbos
2025-06-16 19:01:04,676 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-16 19:01:05,853 INFO tools.HiveSchemaHelper: Metastore connection URL: jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true&useSSL=false
Metastore connection URL: jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true&useSSL=false
2025-06-16 19:01:05,854 INFO tools.HiveSchemaHelper: Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore Connection Driver : com.mysql.jdbc.Driver
2025-06-16 19:01:05,854 INFO tools.HiveSchemaHelper: Metastore connection User: root
Metastore connection User: root
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
Error: Table 'CTLGS' already exists (state=42S01,code=1050)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
```

### Solving the Hive metastore initialization error

The error `Table 'CTLGS' already exists` (error code 1050, state 42S01) that you hit when running `schematool -initSchema` means the Hive metastore database already contains leftover table structures. Here is a systematic way to resolve it:

#### Root cause analysis
1. **Repeated initialization**: Hive system tables already exist in the metastore database (MySQL here), in particular `CTLGS`, a metadata table introduced in Hive 3.x
2. **Version conflict**: metadata left behind by a previous Hive installation was never cleaned up[^1]
3. **Permission problem**: the current user is not allowed to overwrite the existing table structures

#### Solution steps

**Method 1: rebuild the metastore database (recommended)**

Log in to the MySQL metastore database (`mysql -u root -p`), then:

```sql
-- drop the existing metastore database
DROP DATABASE metastore;

-- recreate it empty (mind the character set)
CREATE DATABASE metastore CHARACTER SET latin1 COLLATE latin1_bin;

-- grant access to the Hive user
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
```

Afterwards, re-run the initialization:
```bash
schematool -initSchema -dbType mysql -verbose
```

**Method 2: manually drop the conflicting tables (quick fix)**

```sql
USE metastore;
-- drop the conflicting core system tables
DROP TABLE IF EXISTS CTLGS;
DROP TABLE IF EXISTS VERSION;  -- note: VERSION must exist and contain exactly one row[^2]
```
Then re-run the initialization command.

#### Key points to check
1. **Character set**: the metastore database must use the `latin1` character set, otherwise metadata anomalies can occur
2. **Version compatibility**:
   - Hive 3.x stores catalog metadata in the `CTLGS` table
   - When upgrading from Hive 2.x, use `schematool -upgradeSchema` instead of `-initSchema`
3. **Permission check**: make sure the database user configured in hive-site.xml has `DROP`/`CREATE` privileges
4. **Connection check**: confirm that hive-site.xml is configured correctly:
```xml
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop102:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
```

#### Verifying a successful initialization
```sql
-- check that the key table exists
SHOW TABLES LIKE 'VERSION';
SELECT * FROM VERSION;  -- should return a single version row

-- simple functional test[^3]
CREATE DATABASE test_db;
USE test_db;
CREATE TABLE test_tb(id int);
INSERT INTO test_tb VALUES(1);
SELECT * FROM test_tb;
```

> **Important**: always back up the metastore database before doing this in production! If the problem persists, check that Hive's lib directory contains the correct JDBC driver version (e.g. mysql-connector-java-8.0.xx.jar).
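Since Method 1 drops the whole metastore database, it is worth taking a dump first even outside production. A minimal backup/restore sketch, where the database name and credentials follow the examples above and are assumptions:

```bash
# Back up the metastore database before dropping it (names are examples).
mysqldump -u root -p --databases metastore > metastore_backup_$(date +%F).sql

# If the re-initialization goes wrong, restore from the dump:
# mysql -u root -p < metastore_backup_<date>.sql
```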
阅读全文

相关推荐

root@hadoop01 apache-hive-3.1.3-bin]# bin/hive which: no hbase in (:/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/sqoop/bin:) 2025-06-17 18:31:23,734 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml Hive Session ID = 1bb412dc-6394-489e-80ca-943bacb068f6 2025-06-17 18:31:29,776 INFO SessionState: Hive Session ID = 1bb412dc-6394-489e-80ca-943bacb068f6 Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 18:31:29,920 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-17 18:31:33,217 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/1bb412dc-6394-489e-80ca-943bacb068f6 2025-06-17 18:31:33,267 INFO session.SessionState: Created local directory: /tmp/root/1bb412dc-6394-489e-80ca-943bacb068f6 2025-06-17 18:31:33,280 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/1bb412dc-6394-489e-80ca-943bacb068f6/_tmp_space.db 2025-06-17 18:31:33,323 INFO conf.HiveConf: Using the default value passed in for log id: 1bb412dc-6394-489e-80ca-943bacb068f6 2025-06-17 18:31:33,323 INFO session.SessionState: Updating thread name to 1bb412dc-6394-489e-80ca-943bacb068f6 main 2025-06-17 18:31:35,971 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 18:31:36,007 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored 2025-06-17 18:31:36,017 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 18:31:36,019 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml 2025-06-17 18:31:36,021 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml 2025-06-17 18:31:36,021 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 18:31:36,023 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml 2025-06-17 18:31:36,023 INFO conf.MetastoreConf: Found configuration file null 2025-06-17 18:31:36,357 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2025-06-17 18:31:36,899 INFO hikari.HikariDataSource: HikariPool-1 - Starting... 2025-06-17 18:31:37,442 INFO hikari.HikariDataSource: HikariPool-1 - Start completed. 2025-06-17 18:31:37,532 INFO hikari.HikariDataSource: HikariPool-2 - Starting... 
2025-06-17 18:31:37,674 INFO hikari.HikariDataSource: HikariPool-2 - Start completed. 2025-06-17 18:31:38,661 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2025-06-17 18:31:39,025 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 18:31:39,031 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 18:31:39,705 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:39,706 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:39,707 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:39,707 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:39,708 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:39,709 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:43,370 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:43,371 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:43,372 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-17 18:31:48,380 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0 2025-06-17 18:31:48,380 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected] 2025-06-17 18:31:48,917 INFO metastore.HiveMetaStore: Added admin role in metastore 2025-06-17 18:31:48,922 INFO metastore.HiveMetaStore: Added public role in metastore 2025-06-17 18:31:49,028 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 2025-06-17 18:31:49,426 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-17 18:31:49,471 INFO metastore.HiveMetaStore: 0: get_all_functions 2025-06-17 18:31:49,474 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 2025-06-17 18:31:49,645 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 
Hive Session ID = 4bd9c9da-8eb4-44a2-948e-faa5053d60f3 2025-06-17 18:31:49,645 INFO SessionState: Hive Session ID = 4bd9c9da-8eb4-44a2-948e-faa5053d60f3 2025-06-17 18:31:49,697 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4bd9c9da-8eb4-44a2-948e-faa5053d60f3 2025-06-17 18:31:49,703 INFO session.SessionState: Created local directory: /tmp/root/4bd9c9da-8eb4-44a2-948e-faa5053d60f3 2025-06-17 18:31:49,710 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4bd9c9da-8eb4-44a2-948e-faa5053d60f3/_tmp_space.db 2025-06-17 18:31:49,713 INFO metastore.HiveMetaStore: 1: get_databases: @hive# 2025-06-17 18:31:49,714 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-17 18:31:49,716 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-17 18:31:49,720 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-17 18:31:49,755 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-17 18:31:49,756 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-17 18:31:49,768 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 18:31:49,769 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-17 18:31:49,789 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls= 2025-06-17 18:31:49,793 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls= 2025-06-17 18:31:49,795 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 18:31:49,796 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-17 18:31:49,803 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls= 2025-06-17 18:31:49,803 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls= 2025-06-17 18:31:49,803 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized hive>

[root@hadoop01 apache-hive-3.1.3-bin]# hive which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/export/servers/hadoop-3.3.5/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/export/servers/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 2025-06-16 17:53:36,956 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml Hive Session ID = 30846036-47a7-480e-81e3-48f09d764412 2025-06-16 17:53:40,291 INFO SessionState: Hive Session ID = 30846036-47a7-480e-81e3-48f09d764412 Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-16 17:53:40,414 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true 2025-06-16 17:53:43,041 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/30846036-47a7-480e-81e3-48f09d764412 2025-06-16 17:53:43,099 INFO session.SessionState: Created local directory: /tmp/root/30846036-47a7-480e-81e3-48f09d764412 2025-06-16 17:53:43,119 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/30846036-47a7-480e-81e3-48f09d764412/_tmp_space.db 2025-06-16 17:53:43,154 INFO conf.HiveConf: Using the default value passed in for log id: 30846036-47a7-480e-81e3-48f09d764412 2025-06-16 17:53:43,154 INFO session.SessionState: Updating thread name to 30846036-47a7-480e-81e3-48f09d764412 main 2025-06-16 17:53:45,040 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-16 17:53:45,120 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . 
Setting it to value: ignored 2025-06-16 17:53:45,133 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-16 17:53:45,139 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml 2025-06-16 17:53:45,141 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml 2025-06-16 17:53:45,141 INFO conf.MetastoreConf: Found configuration file null 2025-06-16 17:53:45,143 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml 2025-06-16 17:53:45,143 INFO conf.MetastoreConf: Found configuration file null 2025-06-16 17:53:45,603 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2025-06-16 17:53:46,052 INFO hikari.HikariDataSource: HikariPool-1 - Starting... 2025-06-16 17:53:46,556 INFO hikari.HikariDataSource: HikariPool-1 - Start completed. 2025-06-16 17:53:46,645 INFO hikari.HikariDataSource: HikariPool-2 - Starting... 2025-06-16 17:53:46,677 INFO hikari.HikariDataSource: HikariPool-2 - Start completed. 2025-06-16 17:53:47,494 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2025-06-16 17:53:47,815 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-16 17:53:47,820 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-16 17:53:48,285 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:48,286 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:48,287 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:48,287 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:48,288 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:48,288 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:51,987 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:51,988 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:51,988 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:51,988 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:51,989 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:51,989 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored 2025-06-16 17:53:57,252 WARN metastore.ObjectStore: Version information not found in metastore. 
metastore.schema.verification is not enabled so recording the schema version 3.1.0 2025-06-16 17:53:57,253 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected] 2025-06-16 17:53:57,550 INFO metastore.HiveMetaStore: Added admin role in metastore 2025-06-16 17:53:57,560 INFO metastore.HiveMetaStore: Added public role in metastore 2025-06-16 17:53:57,673 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 2025-06-16 17:53:58,030 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0 2025-06-16 17:53:58,085 INFO metastore.HiveMetaStore: 0: get_all_functions 2025-06-16 17:53:58,089 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. 2025-06-16 17:53:58,263 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Hive Session ID = c998401c-9255-4863-8e6f-4932f9f591fa 2025-06-16 17:53:58,266 INFO SessionState: Hive Session ID = c998401c-9255-4863-8e6f-4932f9f591fa 2025-06-16 17:53:58,341 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/c998401c-9255-4863-8e6f-4932f9f591fa 2025-06-16 17:53:58,350 INFO session.SessionState: Created local directory: /tmp/root/c998401c-9255-4863-8e6f-4932f9f591fa 2025-06-16 17:53:58,365 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/c998401c-9255-4863-8e6f-4932f9f591fa/_tmp_space.db 2025-06-16 17:53:58,372 INFO metastore.HiveMetaStore: 1: get_databases: @hive# 2025-06-16 17:53:58,373 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive# 2025-06-16 17:53:58,377 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore 2025-06-16 17:53:58,383 INFO metastore.ObjectStore: ObjectStore, initialize called 2025-06-16 17:53:58,462 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL 2025-06-16 17:53:58,466 INFO metastore.ObjectStore: Initialized ObjectStore 2025-06-16 17:53:58,494 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-16 17:53:58,495 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW 2025-06-16 17:53:58,525 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls= 2025-06-16 17:53:58,526 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls= 2025-06-16 17:53:58,530 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-16 17:53:58,530 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW 2025-06-16 17:53:58,539 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls= 2025-06-16 17:53:58,539 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls= 2025-06-16 17:53:58,539 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized

/home/hadoopmaster/jdk1.8.0_161/bin/java -javaagent:/home/hadoopmaster/idea-IC-221.6008.13/lib/idea_rt.jar=34515:/home/hadoopmaster/idea-IC-221.6008.13/bin -Dfile.encoding=UTF-8 -classpath /home/hadoopmaster/jdk1.8.0_161/jre/lib/charsets.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/deploy.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/cldrdata.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/dnsns.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/jaccess.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/jfxrt.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/localedata.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/nashorn.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/sunec.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/sunjce_provider.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/sunpkcs11.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/ext/zipfs.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/javaws.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/jce.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/jfr.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/jfxswt.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/jsse.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/management-agent.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/plugin.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/resources.jar:/home/hadoopmaster/jdk1.8.0_161/jre/lib/rt.jar:/root/IdeaProjects/kkk/out/production/kkk:/home/hadoopmaster/scala-2.12.15/lib/scala-reflect.jar:/home/hadoopmaster/scala-2.12.15/lib/scala-xml_2.12-1.0.6.jar:/home/hadoopmaster/scala-2.12.15/lib/scala-parser-combinators_2.12-1.0.7.jar:/home/hadoopmaster/scala-2.12.15/lib/scala-swing_2.12-2.0.3.jar:/home/hadoopmaster/scala-2.12.15/lib/scala-library.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/xz-1.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jta-1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jpam-1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/json-1.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/ST4-4.0.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/guice-3.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/ivy-2.5.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/oro-2.0.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/blas-2.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/core-1.1.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/gson-2.2.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/tink-1.6.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/avro-1.10.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jsp-api-2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/okio-1.14.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/opencsv-2.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/shims-0.9.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/xmlenc-0.52.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/arpack-2.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/guava-14.0.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jetty-6.1.26.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jline-2.14.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jsr305-3.0.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/lapack-2.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/log4j-1.2.17.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/minlog-1.3.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/stream-2.9.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/velocity-1.5.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/
generex-1.0.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hk2-api-2.6.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/janino-3.0.16.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jdo-api-3.0.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/objenesis-2.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/paranamer-2.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/py4j-0.10.9.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/pyrolite-4.30.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/HikariCP-2.5.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-io-2.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-cli-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/javax.inject-1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/libfb303-0.9.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/lz4-java-1.7.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/okhttp-3.12.12.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/snakeyaml-1.27.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/stax-api-1.0.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/JTransforms-3.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/aopalliance-1.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/avro-ipc-1.10.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/breeze_2.12-1.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-cli-1.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-net-3.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/derby-10.14.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-jdbc-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hk2-utils-2.6.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/httpcore-4.4.14.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jaxb-api-2.2.11.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jersey-hk2-2.34.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jodd-core-3.5.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/orc-core-1.6.12.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/super-csv-2.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/xml-apis-1.4.01.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/zookeeper-3.6.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/JLargeArrays-1.5.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/activation-1.1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/automaton-1.11-8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-dbcp-1.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-lang-2.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-text-1.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-serde-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-shims-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/javolution-5.5.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/libthrift-0.12.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/orc-shims-1.6.12.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/slf4j-api-1.7.30.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/zjsonpatch-0.3.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/zstd-jni-1.5.0-4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/chill-java-0.10.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/chill_2.12-0.10.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/guice-servlet-3.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoo
p2.7/jars/hadoop-auth-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-hdfs-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-common-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hk2-locator-2.6.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/httpclient-4.5.13.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-xc-1.9.13.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jetty-util-6.1.26.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/joda-time-2.10.10.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kryo-shaded-4.0.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/metrics-jmx-4.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/metrics-jvm-4.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/rocksdbjni-6.20.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spire_2.12-0.17.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/xercesImpl-2.12.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/aircompressor-0.21.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/algebra_2.12-2.0.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/annotations-17.0.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/antlr4-runtime-4.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/api-util-1.0.0-M20.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/arrow-format-2.0.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/arrow-vector-2.0.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/avro-mapred-1.10.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-codec-1.15.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-pool-1.5.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/compress-lzf-1.0.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-beeline-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/javax.jdo-3.2.0-m3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jaxb-runtime-2.3.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jersey-client-2.34.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jersey-common-2.34.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jersey-server-2.34.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/leveldbjni-all-1.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/metrics-core-4.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/metrics-json-4.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/RoaringBitmap-0.9.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/antlr-runtime-3.5.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-math3-3.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-client-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-common-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-core-2.12.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/javassist-3.25.0-GA.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jul-to-slf4j-1.7.30.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/protobuf-java-2.5.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/snappy-java-1.1.8.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/transaction-api-1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/bonecp-0.8.0.RELEASE.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-crypto-1.1.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-digester-1.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-lang3-3.12.0.jar:/home/hadoopm
aster/spark-3.2.1-bin-hadoop2.7/jars/curator-client-2.7.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-exec-2.3.9-core.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-metastore-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-jaxrs-1.9.13.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jakarta.inject-2.6.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/orc-mapreduce-1.6.12.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/scala-xml_2.12-1.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/shapeless_2.12-2.3.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/slf4j-log4j12-1.7.30.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-sql_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/threeten-extra-1.5.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/zookeeper-jute-3.6.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-compress-1.21.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-logging-1.1.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/curator-recipes-2.7.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-yarn-api-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-shims-0.23-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jcl-over-slf4j-1.7.30.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/parquet-column-1.12.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/parquet-common-1.12.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/parquet-hadoop-1.12.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/scala-library-2.12.15.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/scala-reflect-2.12.15.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-core_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-hive_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-repl_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-tags_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-yarn_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/api-asn1-api-1.0.0-M20.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/breeze-macros_2.12-1.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/cats-kernel_2.12-2.1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-httpclient-3.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/flatbuffers-java-1.9.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-llap-common-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-service-rpc-3.1.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-storage-api-2.7.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jetty-sslengine-6.1.26.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/metrics-graphite-4.2.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/netty-all-4.1.68.Final.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/parquet-jackson-1.12.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/scala-compiler-2.12.15.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-mesos_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-mllib_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spire-util_2.12-0.17.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/xbean-asm9-shaded-4.20.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/apacheds-i18n-2.0.0-M15.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/arpack_combined
_all-0.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/arrow-memory-core-2.0.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-beanutils-1.9.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-compiler-3.0.16.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/curator-framework-2.7.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/datanucleus-core-4.1.17.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-shims-common-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-core-asl-1.9.13.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-databind-2.12.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jakarta.ws.rs-api-2.1.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-client-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/macro-compat_2.12-1.1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/parquet-encoding-1.12.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-graphx_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-sketch_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-unsafe_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/univocity-parsers-2.9.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/arrow-memory-netty-2.0.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/datanucleus-rdbms-4.1.19.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-annotations-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-yarn-client-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-yarn-common-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-kvstore_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spire-macros_2.12-0.17.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-collections-3.2.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/commons-configuration-1.6.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/datanucleus-api-jdo-4.2.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-mapper-asl-1.9.13.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jakarta.servlet-api-4.0.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/json4s-ast_2.12-3.7.0-M11.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-catalyst_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-launcher_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/audience-annotations-0.5.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-shims-scheduler-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hive-vector-code-gen-2.3.9.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-annotations-2.12.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jakarta.xml.bind-api-2.3.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/json4s-core_2.12-3.7.0-M11.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-streaming_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spire-platform_2.12-0.17.0.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-apps-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-core-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-node-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-rbac-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/logging-interceptor-3.12.12.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.
7/jars/mesos-1.4.0-shaded-protobuf.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/osgi-resource-locator-1.0.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-kubernetes_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-tags_2.12-3.2.1-tests.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/aopalliance-repackaged-2.6.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/htrace-core-3.1.0-incubating.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/istack-commons-runtime-3.0.8.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jakarta.annotation-api-1.3.5.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jakarta.validation-api-2.0.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/json4s-scalap_2.12-3.7.0-M11.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-batch-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-mllib-local_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jersey-container-servlet-2.34.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/json4s-jackson_2.12-3.7.0-M11.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-common-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-events-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-policy-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-dataformat-yaml-2.12.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-datatype-jsr310-2.11.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-metrics-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-yarn-server-common-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-network-common_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jackson-module-scala_2.12-2.12.3.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-discovery-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/parquet-format-structures-1.12.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-network-shuffle_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-mapreduce-client-app-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-extensions-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-networking-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-scheduling-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-mapreduce-client-core-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-yarn-server-web-proxy-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/jersey-container-servlet-core-2.34.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-autoscaling-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-flowcontrol-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/scala-collection-compat_2.12-2.1.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/spark-hive-thriftserver_2.12-3.2.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-certificates-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-coordination-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-storageclass-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/scala-parser-combinators_2.1
2-1.1.2.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-mapreduce-client-common-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-apiextensions-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-mapreduce-client-shuffle-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/hadoop-mapreduce-client-jobclient-2.7.4.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/kubernetes-model-admissionregistration-5.4.1.jar:/home/hadoopmaster/spark-3.2.1-bin-hadoop2.7/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar kkk.WordCount Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 25/06/02 20:17:03 INFO SparkContext: Running Spark version 3.2.1 25/06/02 20:17:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 25/06/02 20:17:04 INFO ResourceUtils: ============================================================== 25/06/02 20:17:04 INFO ResourceUtils: No custom resources configured for spark.driver. 25/06/02 20:17:04 INFO ResourceUtils: ============================================================== 25/06/02 20:17:04 INFO SparkContext: Submitted application: WordCount 25/06/02 20:17:04 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0) 25/06/02 20:17:04 INFO ResourceProfile: Limiting resource is cpu 25/06/02 20:17:04 INFO ResourceProfileManager: Added ResourceProfile id: 0 25/06/02 20:17:04 INFO SecurityManager: Changing view acls to: root 25/06/02 20:17:04 INFO SecurityManager: Changing modify acls to: root 25/06/02 20:17:04 INFO SecurityManager: Changing view acls groups to: 25/06/02 20:17:04 INFO SecurityManager: Changing modify acls groups to: 25/06/02 20:17:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set() 25/06/02 20:17:05 INFO Utils: Successfully started service 'sparkDriver' on port 41615. 25/06/02 20:17:05 INFO SparkEnv: Registering MapOutputTracker 25/06/02 20:17:05 INFO SparkEnv: Registering BlockManagerMaster 25/06/02 20:17:05 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 25/06/02 20:17:05 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 25/06/02 20:17:05 INFO SparkEnv: Registering BlockManagerMasterHeartbeat 25/06/02 20:17:05 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-68486311-3f69-48e8-8f69-e7afffcf5979 25/06/02 20:17:05 INFO MemoryStore: MemoryStore started with capacity 258.5 MiB 25/06/02 20:17:05 INFO SparkEnv: Registering OutputCommitCoordinator 25/06/02 20:17:06 INFO Utils: Successfully started service 'SparkUI' on port 4040. 25/06/02 20:17:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoopmaster:4040 25/06/02 20:17:06 INFO Executor: Starting executor ID driver on host hadoopmaster 25/06/02 20:17:06 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41907. 
25/06/02 20:17:06 INFO NettyBlockTransferService: Server created on hadoopmaster:41907 25/06/02 20:17:06 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 25/06/02 20:17:06 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoopmaster, 41907, None) 25/06/02 20:17:06 INFO BlockManagerMasterEndpoint: Registering block manager hadoopmaster:41907 with 258.5 MiB RAM, BlockManagerId(driver, hadoopmaster, 41907, None) 25/06/02 20:17:06 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoopmaster, 41907, None) 25/06/02 20:17:06 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoopmaster, 41907, None) 25/06/02 20:17:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 244.0 KiB, free 258.2 MiB) 25/06/02 20:17:09 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.4 KiB, free 258.2 MiB) 25/06/02 20:17:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoopmaster:41907 (size: 23.4 KiB, free: 258.5 MiB) 25/06/02 20:17:09 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:9 Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/hadoopmaster/words.txt at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287) at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315) at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:205) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4(Partitioner.scala:78) at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4$adapted(Partitioner.scala:78) at scala.collection.immutable.List.map(List.scala:293) at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78) at org.apache.spark.rdd.PairRDDFunctions.$anonfun$reduceByKey$4(PairRDDFunctions.scala:322) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:414) at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:322) at kkk.WordCount$.main(WordCount.scala:10) at kkk.WordCount.main(WordCount.scala) 25/06/02 20:17:09 INFO SparkContext: Invoking stop() from shutdown hook 25/06/02 20:17:09 INFO SparkUI: Stopped Spark web UI at http://hadoopmaster:4040 25/06/02 
20:17:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 25/06/02 20:17:09 INFO MemoryStore: MemoryStore cleared 25/06/02 20:17:09 INFO BlockManager: BlockManager stopped 25/06/02 20:17:09 INFO BlockManagerMaster: BlockManagerMaster stopped 25/06/02 20:17:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 25/06/02 20:17:10 INFO SparkContext: Successfully stopped SparkContext 25/06/02 20:17:10 INFO ShutdownHookManager: Shutdown hook called 25/06/02 20:17:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-213e3a91-227c-4e0a-8254-ab5c0f00786d Process finished with exit code 1

[root@hadoop01 apache-hive-3.1.3-bin]# bin/hive
which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin)
2025-06-17 19:17:36,429 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:39,510 INFO SessionState: Hive Session ID = 174edc05-5da2-4b28-a455-d29ac7bdd8fc
Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 19:17:39,639 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 19:17:42,218 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:42,262 INFO session.SessionState: Created local directory: /tmp/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:42,274 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/174edc05-5da2-4b28-a455-d29ac7bdd8fc/_tmp_space.db
2025-06-17 19:17:42,304 INFO conf.HiveConf: Using the default value passed in for log id: 174edc05-5da2-4b28-a455-d29ac7bdd8fc
2025-06-17 19:17:42,304 INFO session.SessionState: Updating thread name to 174edc05-5da2-4b28-a455-d29ac7bdd8fc main
2025-06-17 19:17:44,440 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 19:17:44,512 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-17 19:17:44,530 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 19:17:44,532 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
2025-06-17 19:17:44,534 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml
2025-06-17 19:17:44,535 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 19:17:44,958 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2025-06-17 19:17:45,507 INFO hikari.HikariDataSource: HikariPool-1 - Starting...
2025-06-17 19:17:46,120 INFO hikari.HikariDataSource: HikariPool-1 - Start completed.
2025-06-17 19:17:46,244 INFO hikari.HikariDataSource: HikariPool-2 - Starting...
2025-06-17 19:17:46,262 INFO hikari.HikariDataSource: HikariPool-2 - Start completed.
2025-06-17 19:17:46,944 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2025-06-17 19:17:47,177 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 19:17:47,179 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 19:17:47,661 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,662 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,664 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:47,665 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,340 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,341 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:51,342 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:17:56,088 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
2025-06-17 19:17:56,088 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected]
2025-06-17 19:17:56,489 INFO metastore.HiveMetaStore: Added admin role in metastore
2025-06-17 19:17:56,497 INFO metastore.HiveMetaStore: Added public role in metastore
2025-06-17 19:17:56,607 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
2025-06-17 19:17:56,969 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-17 19:17:57,003 INFO metastore.HiveMetaStore: 0: get_all_functions
2025-06-17 19:17:57,011 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2025-06-17 19:17:57,170 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,170 INFO SessionState: Hive Session ID = 2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,216 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,222 INFO session.SessionState: Created local directory: /tmp/root/2804a115-a38a-441b-9d42-6abad28c99f8
2025-06-17 19:17:57,228 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/2804a115-a38a-441b-9d42-6abad28c99f8/_tmp_space.db
2025-06-17 19:17:57,231 INFO metastore.HiveMetaStore: 1: get_databases: @hive#
2025-06-17 19:17:57,231 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-17 19:17:57,233 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 19:17:57,239 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 19:17:57,272 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 19:17:57,272 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 19:17:57,288 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,288 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,314 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls=
2025-06-17 19:17:57,314 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls=
2025-06-17 19:17:57,316 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,316 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,321 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2025-06-17 19:17:57,321 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2025-06-17 19:17:57,322 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,322 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:17:57,326 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls=
2025-06-17 19:17:57,326 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls=
2025-06-17 19:17:57,326 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
hive>
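The CLI reaches the `hive>` prompt, so this startup is successful. The `which: no hbase` message is harmless: `bin/hive` probes the PATH for an optional HBase client and simply continues without it. The heavily duplicated PATH entries visible in the warning usually mean `/etc/profile` appends to PATH and has been sourced more than once. A shell-only sketch to inspect and de-duplicate the live PATH (not a permanent fix; the proper fix is cleaning up `/etc/profile`):

```bash
# Print one PATH entry per line, drop empty and repeated entries while
# preserving order, then rejoin with colons.
echo "$PATH" | tr ':' '\n' | awk 'NF && !seen[$0]++' | paste -sd: -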

[root@hadoop101 bin]# schematool -dbType mysql -initSchema
2025-03-21 16:06:47,668 INFO [main] conf.HiveConf (HiveConf.java:findConfigFile(187)) - Found configuration file file:/export/server/apache-hive-3.1.2-bin/conf/hive-site.xml
2025-03-21 16:06:47,872 INFO [main] tools.HiveSchemaHelper (HiveSchemaHelper.java:logAndPrintToStdout(117)) - Metastore connection URL: jdbc:mysql://192.168.10.101:3306/metastore?useSSL=false&useUnicode=true&characterEncoding=UTF-8&allowPublicKeyRetrieval=true
Metastore connection URL: jdbc:mysql://192.168.10.101:3306/metastore?useSSL=false&useUnicode=true&characterEncoding=UTF-8&allowPublicKeyRetrieval=true
2025-03-21 16:06:47,872 INFO [main] tools.HiveSchemaHelper (HiveSchemaHelper.java:logAndPrintToStdout(117)) - Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore Connection Driver : com.mysql.jdbc.Driver
2025-03-21 16:06:47,872 INFO [main] tools.HiveSchemaHelper (HiveSchemaHelper.java:logAndPrintToStdout(117)) - Metastore connection User: root
Metastore connection User: root
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
Error: Table 'CTLGS' already exists (state=42S01,code=1050)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
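`Error: Table 'CTLGS' already exists` means a previous `schematool -initSchema` run already created some metastore tables, so the init script aborts partway through. If nothing in the metastore needs to be preserved, the usual remedy is to drop and recreate the database, then re-initialize. A sketch, assuming the host, user, and database name shown in the connection URL above (the password prompt is interactive):

```bash
# WARNING: this destroys all existing Hive metadata -- only appropriate
# when a failed init left a half-created schema and nothing must be kept.
mysql -uroot -p -h192.168.10.101 -e "DROP DATABASE IF EXISTS metastore; CREATE DATABASE metastore;"
# Re-run initialization against the now-empty database.
schematool -dbType mysql -initSchema --verbose
```

If the metadata must be preserved instead, `schematool -dbType mysql -info` and `-validate` can be used to check the current schema state before deciding.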

[root@hadoop01 apache-hive-3.1.3-bin]# $HIVE_HOME/bin/hive
which: no hbase in (/export/servers/hadoop-3.3.5/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin::/export/servers/apache-hive-3.1.3-bin/bin:/export/servers/flume-1.9.0/bin:/export/servers/flume-1.9.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/export/servers/jdk1.8.0_161/bin:/export/servers/hadoop-3.3.5/bin:/export/servers/hadoop-3.3.5/sbin:/export/servers/scala-2.12.10/bin:/root/bin)
2025-06-17 19:30:31,773 INFO conf.HiveConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
Hive Session ID = b6ee71f0-5d43-4149-99c7-808d6c553bb8
2025-06-17 19:30:36,006 INFO SessionState: Hive Session ID = b6ee71f0-5d43-4149-99c7-808d6c553bb8
Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 19:30:36,195 INFO SessionState: Logging initialized using configuration in jar:file:/export/servers/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
2025-06-17 19:30:40,759 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b6ee71f0-5d43-4149-99c7-808d6c553bb8
2025-06-17 19:30:40,863 INFO session.SessionState: Created local directory: /tmp/root/b6ee71f0-5d43-4149-99c7-808d6c553bb8
2025-06-17 19:30:40,874 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b6ee71f0-5d43-4149-99c7-808d6c553bb8/_tmp_space.db
2025-06-17 19:30:40,916 INFO conf.HiveConf: Using the default value passed in for log id: b6ee71f0-5d43-4149-99c7-808d6c553bb8
2025-06-17 19:30:40,916 INFO session.SessionState: Updating thread name to b6ee71f0-5d43-4149-99c7-808d6c553bb8 main
2025-06-17 19:30:43,385 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 19:30:43,505 WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
2025-06-17 19:30:43,521 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 19:30:43,523 INFO conf.MetastoreConf: Found configuration file file:/export/servers/apache-hive-3.1.3-bin/conf/hive-site.xml
2025-06-17 19:30:43,526 INFO conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
2025-06-17 19:30:43,526 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 19:30:43,528 INFO conf.MetastoreConf: Unable to find config file metastore-site.xml
2025-06-17 19:30:43,528 INFO conf.MetastoreConf: Found configuration file null
2025-06-17 19:30:44,060 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2025-06-17 19:30:44,701 INFO hikari.HikariDataSource: HikariPool-1 - Starting...
2025-06-17 19:30:45,564 INFO hikari.HikariDataSource: HikariPool-1 - Start completed.
2025-06-17 19:30:45,707 INFO hikari.HikariDataSource: HikariPool-2 - Starting...
2025-06-17 19:30:45,741 INFO hikari.HikariDataSource: HikariPool-2 - Start completed.
2025-06-17 19:30:46,209 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2025-06-17 19:30:46,656 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 19:30:46,662 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 19:30:47,806 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:47,807 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:47,808 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:47,809 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:47,809 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:47,810 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:51,421 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:51,422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:51,422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:51,422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:51,423 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:51,423 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
2025-06-17 19:30:56,388 WARN metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
2025-06-17 19:30:56,388 WARN metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore [email protected]
2025-06-17 19:30:57,110 INFO metastore.HiveMetaStore: Added admin role in metastore
2025-06-17 19:30:57,116 INFO metastore.HiveMetaStore: Added public role in metastore
2025-06-17 19:30:57,279 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
2025-06-17 19:30:57,779 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
2025-06-17 19:30:57,891 INFO metastore.HiveMetaStore: 0: get_all_functions
2025-06-17 19:30:57,923 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2025-06-17 19:30:58,126 INFO CliDriver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 06c85488-8547-4e38-a0d1-5e386cd373f1
2025-06-17 19:30:58,131 INFO SessionState: Hive Session ID = 06c85488-8547-4e38-a0d1-5e386cd373f1
2025-06-17 19:30:58,173 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/06c85488-8547-4e38-a0d1-5e386cd373f1
2025-06-17 19:30:58,195 INFO session.SessionState: Created local directory: /tmp/root/06c85488-8547-4e38-a0d1-5e386cd373f1
2025-06-17 19:30:58,203 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/06c85488-8547-4e38-a0d1-5e386cd373f1/_tmp_space.db
2025-06-17 19:30:58,210 INFO metastore.HiveMetaStore: 1: get_databases: @hive#
2025-06-17 19:30:58,211 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-17 19:30:58,213 INFO metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2025-06-17 19:30:58,217 INFO metastore.ObjectStore: ObjectStore, initialize called
2025-06-17 19:30:58,253 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
2025-06-17 19:30:58,255 INFO metastore.ObjectStore: Initialized ObjectStore
2025-06-17 19:30:58,272 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:30:58,272 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#db_hive1 pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:30:58,305 INFO metastore.HiveMetaStore: 1: get_multi_table : db=db_hive1 tbls=
2025-06-17 19:30:58,305 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=db_hive1 tbls=
2025-06-17 19:30:58,310 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:30:58,310 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:30:58,327 INFO metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2025-06-17 19:30:58,327 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2025-06-17 19:30:58,327 INFO metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:30:58,327 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#itcast_ods pat=.*,type=MATERIALIZED_VIEW
2025-06-17 19:30:58,332 INFO metastore.HiveMetaStore: 1: get_multi_table : db=itcast_ods tbls=
2025-06-17 19:30:58,332 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=itcast_ods tbls=
2025-06-17 19:30:58,332 INFO metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
hive> SHOW DATABASES;
2025-06-17 19:31:11,344 INFO conf.HiveConf: Using the default value passed in for log id: b6ee71f0-5d43-4149-99c7-808d6c553bb8
2025-06-17 19:31:11,624 INFO ql.Driver: Compiling command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298): SHOW DATABASES
2025-06-17 19:31:12,626 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-17 19:31:12,673 INFO ql.Driver: Semantic Analysis Completed (retrial = false)
2025-06-17 19:31:12,819 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
2025-06-17 19:31:12,982 INFO exec.ListSinkOperator: Initializing operator LIST_SINK[0]
2025-06-17 19:31:13,009 INFO ql.Driver: Completed compiling command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298); Time taken: 1.484 seconds
2025-06-17 19:31:13,011 INFO reexec.ReExecDriver: Execution #1 of query
2025-06-17 19:31:13,011 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-17 19:31:13,012 INFO ql.Driver: Executing command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298): SHOW DATABASES
2025-06-17 19:31:13,034 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
2025-06-17 19:31:13,041 INFO metastore.HiveMetaStore: 0: get_databases: @hive#
2025-06-17 19:31:13,041 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
2025-06-17 19:31:13,057 INFO exec.DDLTask: results : 3
2025-06-17 19:31:13,244 INFO ql.Driver: Completed executing command(queryId=root_20250617193111_702231fb-8543-45f5-b003-86a49e9c4298); Time taken: 0.233 seconds
OK
2025-06-17 19:31:13,245 INFO ql.Driver: OK
2025-06-17 19:31:13,245 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2025-06-17 19:31:13,276 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2025-06-17 19:31:13,381 INFO mapred.FileInputFormat: Total input files to process : 1
2025-06-17 19:31:13,458 INFO exec.ListSinkOperator: RECORDS_OUT_INTERMEDIATE:0, RECORDS_OUT_OPERATOR_LIST_SINK_0:3
db_hive1
default
itcast_ods
Time taken: 1.725 seconds, Fetched: 3 row(s)
2025-06-17 19:31:13,479 INFO CliDriver: Time taken: 1.725 seconds, Fetched: 3 row(s)
2025-06-17 19:31:13,480 INFO conf.HiveConf: Using the default value passed in for log id: b6ee71f0-5d43-4149-99c7-808d6c553bb8
2025-06-17 19:31:13,480 INFO session.SessionState: Resetting thread name to main
hive>
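This session confirms the metastore is healthy: `SHOW DATABASES` returns the three expected databases (`db_hive1`, `default`, `itcast_ods`). If the INFO chatter makes the CLI hard to read, the console log level can be raised for a single session; `hive.root.logger` is the standard property used by Hive's bundled log4j2 configuration, though exact behavior can vary by build:

```bash
# Start the CLI with only WARN and above printed to the console.
$HIVE_HOME/bin/hive --hiveconf hive.root.logger=WARN,console
```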

[root@hadoop01 hive]# bin/schematool -dbType mysql -initSchema --verbose
Metastore connection URL: jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&useSSL=false&serverTimezone=UTC
Metastore connection Driver : com.mysql.cj.jdbc.Driver
Metastore connection User: root
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
Connecting to jdbc:mysql://localhost:3306/hive?
Wuhuijun527@
0: jdbc:mysql://localhost:3306/hive (closed)> !autocommit on
Wuhuijun527@
No current connection
Connection is already closed.
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
	at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:594)
	at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:567)
	at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1517)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:328)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:241)
Caused by: java.io.IOException: Schema script failed, errorcode 2
	at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:1226)
	at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:1204)
	at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:590)
	... 8 more
*** schemaTool failed ***
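Here the `No current connection` / `Connection is already closed.` lines from beeline show that schematool never obtained a working JDBC session, so the schema script fails before it even starts. Typical causes are a wrong password, a malformed connection URL in `hive-site.xml`, or a missing MySQL connector jar. A hedged checklist to run before retrying (paths and credentials below are examples, not taken from a known-good setup):

```bash
# 1. The MySQL connector jar must be in Hive's lib directory:
ls "$HIVE_HOME/lib" | grep -i mysql-connector
# 2. The URL, driver class, user, and password in hive-site.xml must match
#    what the MySQL server actually accepts; test the credentials directly:
mysql -uroot -p -h localhost -e "SELECT 1;"
# 3. Only once both checks pass, re-run initialization:
schematool -dbType mysql -initSchema --verbose
```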
