Problem:

We are running a job that has 14 sort steps followed by a COBOL program that does delta reporting. The files range from 1 million to 3 million records, with record sizes from 100 to 400 bytes. The largest file in this test run is just under 2 gigabytes. The entire job takes:

One hour on a Windows Server 2003 machine with 2 gigabytes of memory

One hour and twenty minutes on a Windows XP laptop with 2 gigabytes of memory

Ten minutes on the z/OS mainframe.

We have been experimenting with the following directives in the EXTFH configuration file (sketched below):

SEQDATBUF (tried 4096, 32K, and 132K)

READSEMA=OFF

IGNORELOCK=ON
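For reference, our configuration file currently looks roughly like this (a sketch; the 32K value stands in for the buffer sizes tried, and the [XFH-DEFAULT] tag applies the directives to all files):

    [XFH-DEFAULT]
    SEQDATBUF=32768
    READSEMA=OFF
    IGNORELOCK=ON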

These changes managed about a 10% performance improvement, but we would like to get a lot more.

Are there any more directives we should try (remember, most of this is MFSORT, not COBOL)?

What is the recommended setting for SEQDATBUF when using large sequential files?   

Resolution:

Definitely defragment the disks first, but also set the following:

TMP

Tells the run-time system where to put paging files. Normally it puts these files in the current directory. Using TMP to specify a different drive or path for them can sometimes help performance.
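For example, before running the job you might point TMP at a fast drive that does not hold the input files (a sketch; the drive and directory are illustrative):

    SET TMP=D:\sortwork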

SORTSPACE

The amount of memory to be allocated to internal workspace for SORT operations. This can be specified in several formats: for example, 64M, 2G, and 1000000 give sort memory areas of 64 megabytes, 2 gigabytes, and 1,000,000 bytes respectively.

It is suggested NOT to set this higher than the available physical memory, as this seems to slow it down.
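For instance, on the 2-gigabyte machines described above you might try (a sketch; the value is illustrative and should leave room for the operating system and the run-time):

    REM Leave headroom below the 2 GB of physical memory
    SET SORTSPACE=1G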

SORTCOMPRESS

Tells the system whether or not to execute a compression routine on each record to be sorted. SORTCOMPRESS=5 enables run-length encoding of sort records, resulting in much better performance when records contain many repeated characters.
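Fixed-length records padded with spaces compress well under run-length encoding, so this may pay off for the files described above (a sketch):

    SET SORTCOMPRESS=5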

And do NOT set SEQDATBUF in the EXTFH configuration file.
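Putting the resolution together, a batch file wrapping the job might look like this (a sketch; the paths, the SORTSPACE value, and the job script myjob.bat are illustrative placeholders):

    REM Paging files on a fast drive away from the data
    SET TMP=D:\sortwork
    REM Large in-memory sort workspace, kept below physical memory
    SET SORTSPACE=1G
    REM Run-length encode sort records
    SET SORTCOMPRESS=5
    REM SEQDATBUF is no longer set in the EXTFH configuration file
    CALL myjob.bat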

Old KB# 1292

#netexpress
#AcuCobol
#RMCOBOL
#COBOL
#ServerExpress