Handling System.currentTimeMillis values with SQL in Oracle

This post shows how to run time-range searches in SQL when timestamps are stored as millisecond values: by converting between dates and epoch milliseconds you can find the records that fall inside a given window, and also turn the smallest stored timestamp back into a date.


Sometimes a system needs to record time down to the millisecond, so the value returned by System.currentTimeMillis() is written straight into the database. If you then want to search a time range with SQL, you can do it as follows (the systemdate column stores values obtained from System.currentTimeMillis()):

-- Find the rows in myTable whose systemdate falls between
-- 2006/11/28 14:00:00 and 2006/11/28 14:10:00

SELECT *
  FROM myTable
 WHERE systemdate >= (TO_DATE('20061128 14:00:00', 'YYYYMMDD HH24:MI:SS')
                      - TO_DATE('19700101 08:00:00', 'YYYYMMDD HH24:MI:SS')) * 24 * 60 * 60 * 1000
   AND systemdate <= (TO_DATE('20061128 14:10:00', 'YYYYMMDD HH24:MI:SS')
                      - TO_DATE('19700101 08:00:00', 'YYYYMMDD HH24:MI:SS')) * 24 * 60 * 60 * 1000

Subtracting two DATE values in Oracle yields the difference in days, so multiplying by 24 * 60 * 60 * 1000 converts it to milliseconds. The epoch is written as 1970-01-01 08:00:00 rather than midnight because System.currentTimeMillis() counts from midnight UTC and the server here runs at UTC+8.

-- Find the earliest date recorded in myTable

SELECT TO_DATE('19700101 08:00:00', 'YYYYMMDD HH24:MI:SS')
       + (SELECT MIN(systemdate) FROM myTable) / 1000 / 60 / 60 / 24
  FROM dual

Here the conversion runs in reverse: the smallest millisecond value is divided back down to days and added to the epoch date.
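The reverse conversion can be done in application code as well once the MIN(systemdate) value has been fetched. A small sketch under the same assumptions (MillisToDate and toDateString are hypothetical names, and the fixed UTC+8 offset again mirrors the 1970-01-01 08:00:00 epoch in the query):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class MillisToDate {

    // Inverse of the epoch arithmetic above: epoch milliseconds back to a
    // local date-time at UTC+8, formatted the way the article displays dates.
    static String toDateString(long millis) {
        LocalDateTime ldt = LocalDateTime.ofInstant(Instant.ofEpochMilli(millis),
                                                    ZoneOffset.ofHours(8));
        return ldt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        // e.g. the MIN(systemdate) value fetched from myTable
        System.out.println(toDateString(1164693600000L)); // → 2006-11-28 14:00:00
    }
}
```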
