How to quickly batch-insert data into a MySQL database with JDBC
When inserting data into MySQL over JDBC, especially when inserting large volumes of rows in one go (100,000+), how can we improve throughput?
In the JDBC API, two methods of PreparedStatement deserve particular attention:
void addBatch() throws SQLException
Adds a set of parameters to this PreparedStatement object's batch of commands.
int[] executeBatch() throws SQLException
Submits a batch of commands to the database for execution and if all commands execute successfully, returns an array of update counts. The int elements of the array that is returned are ordered to correspond to the commands in the batch, which are ordered according to the order in which they were added to the batch.
Using addBatch() and executeBatch() together makes it possible to process data in batches.
Note, however, that you first need to switch the connection to manual commit with connection.setAutoCommit(false), and then call connection.commit() after the statements have been executed.
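Before the full example below, here is a minimal sketch of that pattern in isolation; the URL, credentials, table name, and sample data are placeholders for illustration only, not taken from the example that follows.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL, credentials, table, and data for illustration only.
        String url = "jdbc:mysql://localhost:3306/test_db";
        String[][] rows = { {"1", "a"}, {"2", "b"}, {"3", "c"} };
        try (Connection conn = DriverManager.getConnection(url, "root", "password")) {
            conn.setAutoCommit(false);                          // switch to manual commit
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO demo_table (id, name) VALUES (?, ?)")) {
                for (String[] row : rows) {
                    ps.setLong(1, Long.valueOf(row[0]));
                    ps.setString(2, row[1]);
                    ps.addBatch();                              // queue this parameter set
                }
                ps.executeBatch();                              // send the whole batch in one round
            }
            conn.commit();                                      // commit once after the batch
        }
    }
}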
package cyl.demo.ipsearcher;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbStoreHelper {

    private String insert_sql;
    private String charset;
    private boolean debug;
    private String connectStr;
    private String username;
    private String password;

    public DbStoreHelper() {
        connectStr = "jdbc:mysql://localhost:3306/db_ip";
        // connectStr += "?useServerPrepStmts=false&rewriteBatchedStatements=true";
        insert_sql = "INSERT INTO tb_ipinfos (iplong1,iplong2,ipstr1,ipstr2,ipdesc) VALUES (?,?,?,?,?)";
        charset = "gbk";
        debug = true;
        username = "root";
        password = "***";
    }

    public void storeToDb(String srcFile) throws IOException {
        BufferedReader bfr = new BufferedReader(new InputStreamReader(new FileInputStream(srcFile), charset));
        try {
            doStore(bfr);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            bfr.close();
        }
    }

    private void doStore(BufferedReader bfr) throws ClassNotFoundException, SQLException, IOException {
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(connectStr, username, password);
        conn.setAutoCommit(false); // switch to manual commit
        int count = 0;
        PreparedStatement psts = conn.prepareStatement(insert_sql);
        String line = null;
        while (null != (line = bfr.readLine())) {
            String[] infos = line.split(";");
            if (infos.length < 5) continue;
            if (debug) {
                System.out.println(line);
            }
            psts.setLong(1, Long.valueOf(infos[0]));
            psts.setLong(2, Long.valueOf(infos[1]));
            psts.setString(3, infos[2]);
            psts.setString(4, infos[3]);
            psts.setString(5, infos[4]);
            psts.addBatch(); // add to the batch
            count++;
        }
        psts.executeBatch(); // execute the whole batch
        conn.commit();       // commit the transaction
        System.out.println("All down : " + count);
        conn.close();
    }
}
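To exercise the helper, a caller might look like the following; the file path is hypothetical and only needs to point at a ';'-separated source file with at least five fields per line, which is the format doStore() expects.

package cyl.demo.ipsearcher;

import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        DbStoreHelper helper = new DbStoreHelper();
        // "ip_data.txt" is a hypothetical path to the ';'-separated source file.
        helper.storeToDb("ip_data.txt");
    }
}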
After the run finishes:
All down : 103498
Convert finished.
All spend time/s : 47
That is 100,000+ rows in total, and the run took 47 seconds.
That throughput is still not great; it falls short of what we want, so further improvement is needed.
Additional parameters can also be added to the MySQL JDBC connection string:
rewriteBatchedStatements=true: the MySQL driver does not really rewrite batched statements by default; turning this parameter on lets the driver rewrite the SQL submitted to the database, merging the batched INSERTs into multi-row statements.
useServerPrepStmts=false: with server-side prepared statements turned off (useServerPrepStmts=false), com.mysql.jdbc.PreparedStatement assembles the SQL locally on the client, so what finally reaches the database is the complete SQL with the ? placeholders already substituted.
With this small improvement, append the following to the connection string (that is, uncomment the corresponding line in the constructor above):
connectStr += "?useServerPrepStmts=false&rewriteBatchedStatements=true";
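Equivalently, these driver parameters can be supplied through a java.util.Properties object rather than appended to the URL; a small sketch, assuming the same database and credentials as the example above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class ConnectionConfigSketch {
    public static Connection open() throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "root");
        props.setProperty("password", "***");
        props.setProperty("useServerPrepStmts", "false");       // build the SQL on the client side
        props.setProperty("rewriteBatchedStatements", "true");  // let the driver rewrite batched INSERTs
        return DriverManager.getConnection("jdbc:mysql://localhost:3306/db_ip", props);
    }
}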
Running it again:
All down : 103498
Convert finished.
All spend time/s : 10
With the same amount of data, this run took only 10 seconds; processing efficiency improves dramatically.
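As a further refinement for even larger source files, the batch can be flushed every few thousand rows instead of accumulating everything in memory before a single executeBatch(). A sketch of how the read loop in doStore() could be adjusted; the batch size of 5000 is an arbitrary example value, and the variables are the same ones used in the class above.

// Sketch: flush the batch periodically instead of holding all queued rows in memory.
final int BATCH_SIZE = 5000;   // arbitrary example value
int count = 0;
String line;
while (null != (line = bfr.readLine())) {
    String[] infos = line.split(";");
    if (infos.length < 5) continue;
    psts.setLong(1, Long.valueOf(infos[0]));
    psts.setLong(2, Long.valueOf(infos[1]));
    psts.setString(3, infos[2]);
    psts.setString(4, infos[3]);
    psts.setString(5, infos[4]);
    psts.addBatch();
    if (++count % BATCH_SIZE == 0) {
        psts.executeBatch();   // send this chunk to the server
    }
}
psts.executeBatch();           // flush any remaining rows
conn.commit();                 // commit once at the end, as before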
That concludes this article; I hope it is helpful for your learning.