Initial load of a large data set

Hi. I asked this question on Slack and was directed to this forum. I’m looking to load a large data set of 90+ million rows. I’d rather not do this in one initial load, as it would impact the db server. I was wondering if there was a way to batch this into smaller data sets, using the cursor (date) to limit each batch run to a range of dates. I’m told that is not possible, but that we could use the fetchSize parameter to run in small batches. My question regarding this: if fetchSize is set and we decide to stop the initial load process after an hour while it still has a lot more rows to process, will rerunning the initial process be smart enough to resume where it left off, or will it need to start from scratch?

Unfortunately no, it will restart from scratch. Something you can do is create a view and manage the load through the view, for example:
create view my_table_batch as
  select * from my_table
  where start_date >= '2021-01-01' and start_date <= '2021-02-01';
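For each subsequent batch you would just widen the date range in the view definition before rerunning the sync. A rough sketch, using the same illustrative table and view names as above:

create or replace view my_table_batch as
  select * from my_table
  where start_date >= '2021-01-01' and start_date <= '2021-03-01';  -- extend the upper bound for the next batch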

Would that still work? Would the process have a problem with the missing older data as the date range is extended in subsequent loads? Or is the cursor only used to look for data newer than the cursor value?

The cursor will only look for new data after the current cursor value.
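In other words, each incremental run effectively issues a query like the sketch below (the column name and cursor value are illustrative), so rows older than the saved cursor are never requested again and the missing older data in the view doesn’t matter:

select *
from my_table_batch
where start_date > '2021-02-01'  -- last cursor value saved from the previous run
order by start_date;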

K. Thanks. I will give it a shot.
