Running UD 8.3 on AIX with plenty of disk space. I have a number of large files (10M-50M records each) that I want to resize to improve overall performance and, more importantly, to prevent any file corruption during the resizing activity.

I am trying to find out what is the optimal number of records that each group should contain:

1- Is there a recommended number of records per group (e.g., keep it under 20 or some other threshold)? This would let me properly size the block size and modulo.

2- Should I keep increasing the block size until I have zero level 1 overflow, effectively ignoring the number of records per group? I would still manage the modulo in conjunction with the block size.

Any insights or recommendations are greatly appreciated.

Thank you

Iyad,

There are some Knowledge Base articles in Rocket Support that you might find useful. Here's one to get you started:

  • 000022041: Using Modeling to Heuristically Determine Best File Sizing Parameters for UniData Files

Regards

JJ


Iyad,

20 records per group is a good target for most cases, but it may need to be adjusted if the records are very small or very large.

It is not always possible to eliminate all level 1 overflow, especially when there is a high standard deviation in item sizes.
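As a rough sketch of that arithmetic, the following Python snippet estimates a block size and a prime modulo from a record count and an average record size. It is a hypothetical illustration, not a Rocket tool: the 20-records-per-group target, the 20% overhead allowance, and the power-of-two block-size list are assumptions you would tune against your actual guide or FILE.STATS output.

```python
# Hypothetical sizing sketch (not a Rocket utility): estimate a block
# size and modulo for a hashed file from basic record statistics.
# Assumptions: target ~20 records per group, power-of-two block sizes
# from 1 KB to 16 KB, and a prime modulo.

def next_prime(n: int) -> int:
    """Smallest prime >= n (trial division; fine at file-sizing scale)."""
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

def suggest_sizing(record_count: int, avg_record_bytes: int,
                   records_per_group: int = 20):
    """Return (block_size, modulo) aiming for records_per_group per group."""
    # Space needed per group: record data plus a rough 20% allowance for
    # keys and per-record overhead (an assumption -- tune from real stats).
    needed = int(records_per_group * avg_record_bytes * 1.2)
    for block in (1024, 2048, 4096, 8192, 16384):
        if block >= needed:
            break
    # With the chosen block, how many records actually fit per group?
    fit = max(1, int(block / (avg_record_bytes * 1.2)))
    modulo = next_prime(-(-record_count // fit))  # ceiling division
    return block, modulo

# Example: 10M records averaging 150 bytes each
print(suggest_sizing(10_000_000, 150))
```

Note this only balances average sizes; with a high standard deviation in item sizes, some groups will still overflow no matter what modulo you pick.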

Is there a specific file that is giving you trouble?

If you can provide either the FILE.STATS output or the output from guide -r, we can discuss how to determine the resize values.