Bulk collect for inserting millions of records [message #669666]
Sun, 06 May 2018 20:27
senmng
Messages: 22   Registered: April 2018
Junior Member
Hi - I have this proc, which needs to insert values into a child table. The parent table is about 9k rows, and the row counts of the joined tables are:
table1 = 1 million, table2 = 3 million and table3 = 4 million.
Except for table3 and the child table, all of the tables reside in a remote DB. If that's the case, how can we declare the %ROWTYPE collection for the parent table?
Also, in the loop: will it commit after every 1 million records here and then continue to load the next 1 million records?
CREATE OR REPLACE PROCEDURE fast_way AUTHID CURRENT_USER IS
   -- assumes parent (and child) have exactly these two columns,
   -- so the select list matches the record
   TYPE myarray IS TABLE OF parent%ROWTYPE;
   l_data myarray;

   CURSOR r IS
      SELECT p.part_num, p.part_name
        FROM parent@dblink p
        JOIN table1@dblink t1 ON p.id  = t1.id
        JOIN table2@dblink t2 ON t1.id = t2.id
        JOIN table3 t3        ON t2.id = t3.id;

   BatchSize CONSTANT POSITIVE := 1000;
BEGIN
   OPEN r;
   LOOP
      -- fetch at most BatchSize rows per round trip
      FETCH r BULK COLLECT INTO l_data LIMIT BatchSize;

      FOR j IN 1 .. l_data.COUNT LOOP
         l_data(j).part_num := l_data(j).part_num * 10;
      END LOOP;

      -- bulk-bind the whole batch into child in one statement
      FORALL i IN 1 .. l_data.COUNT
         INSERT INTO child VALUES l_data(i);

      EXIT WHEN l_data.COUNT < BatchSize;
   END LOOP;
   COMMIT;   -- a single commit after all batches, not one per batch
   CLOSE r;
END fast_way;
/
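One alternative I'm wondering about (not sure it behaves the same over a db link) is to declare the cursor first and type the collection off the cursor row, so only the select list has to match rather than the whole remote table - something like:

DECLARE
   CURSOR r IS
      SELECT p.part_num, p.part_name
        FROM parent@dblink p
        JOIN table1@dblink t1 ON p.id  = t1.id
        JOIN table2@dblink t2 ON t1.id = t2.id
        JOIN table3 t3        ON t2.id = t3.id;

   -- typed off the cursor, so the remote table's full structure
   -- never has to be described locally
   TYPE myarray IS TABLE OF r%ROWTYPE;
   l_data myarray;
BEGIN
   OPEN r;
   FETCH r BULK COLLECT INTO l_data LIMIT 1000;
   CLOSE r;
END;
/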
Re: Bulk collect for inserting millions of records [message #669690 is a reply to message #669689]
Tue, 08 May 2018 03:00
Roachcoach
Messages: 1576   Registered: May 2010   Location: UK
Senior Member
It does take longer, but the strong argument in its favour (I'm on the fence personally) is that committing per bulk-collect batch allows restartability, in the sense that the job can pick up almost where it left off should it fail partway through. In an ideal world it won't fail, but life is rarely so convenient.
It's not sufficiently slower that I shout at people for doing it, at least.
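Roughly what I mean, as a sketch only: move the COMMIT inside the loop, and give the cursor some way to skip rows already loaded on a restart. I'm assuming here that child.part_num is enough to identify already-loaded rows, and I've dropped the joins to table1/2/3 for brevity:

DECLARE
   CURSOR r IS
      SELECT p.part_num, p.part_name
        FROM parent@dblink p
       -- skip rows already loaded by an earlier, failed run
       WHERE NOT EXISTS (SELECT NULL
                           FROM child c
                          WHERE c.part_num = p.part_num * 10);
   TYPE myarray IS TABLE OF r%ROWTYPE;
   l_data    myarray;
   BatchSize CONSTANT POSITIVE := 1000;
BEGIN
   OPEN r;
   LOOP
      FETCH r BULK COLLECT INTO l_data LIMIT BatchSize;

      FOR j IN 1 .. l_data.COUNT LOOP
         l_data(j).part_num := l_data(j).part_num * 10;
      END LOOP;

      FORALL i IN 1 .. l_data.COUNT
         INSERT INTO child VALUES l_data(i);

      COMMIT;   -- per-batch commit: completed batches survive a crash

      EXIT WHEN l_data.COUNT < BatchSize;
   END LOOP;
   CLOSE r;
END;
/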
[Updated on: Tue, 08 May 2018 03:01]