
After upgrading to Extend 10.3.1, we are experiencing very long response times when reading large indexed files.

Environment: Debian server running Acucobol (acurcl runtime). Windows PCs connect using acuthin.exe.

We have a search function that reads all records in an indexed file containing more than one million records. The first search takes 10-15 minutes, and users think the search routine has died, or they even receive the message "The remote server is not responding". A second search completes within 10-15 seconds!
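
To illustrate, the search is essentially a full sequential pass over the indexed file, roughly along the lines of the simplified sketch below. The file, key, and field names (CUST-FILE, CUST-ID, CUST-NAME) and the match test are placeholders, not our actual layout.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SEARCH-SKETCH.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT CUST-FILE ASSIGN TO "custfile"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS DYNAMIC
               RECORD KEY IS CUST-ID
               FILE STATUS IS WS-STATUS.
       DATA DIVISION.
       FILE SECTION.
       FD  CUST-FILE.
       01  CUST-REC.
           05  CUST-ID      PIC 9(8).
           05  CUST-NAME    PIC X(40).
       WORKING-STORAGE SECTION.
       01  WS-STATUS        PIC XX.
       01  WS-EOF           PIC X VALUE "N".
       01  WS-MATCHES       PIC 9(9) VALUE ZERO.
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT CUST-FILE
      *    Position at the first record, then read every record in turn.
           MOVE ZEROS TO CUST-ID
           START CUST-FILE KEY IS NOT LESS THAN CUST-ID
           END-START
           PERFORM UNTIL WS-EOF = "Y"
               READ CUST-FILE NEXT RECORD
                   AT END
                       MOVE "Y" TO WS-EOF
                   NOT AT END
      *                Placeholder match test on the record contents.
                       IF CUST-NAME(1:5) = "SMITH"
                           ADD 1 TO WS-MATCHES
                       END-IF
               END-READ
           END-PERFORM
           CLOSE CUST-FILE
           DISPLAY "MATCHES: " WS-MATCHES
           STOP RUN.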

It seems to have something to do with Linux, because even if we close the application and restart it, a new search still completes within 10-15 seconds.

We also had this problem in version 9.1, but not nearly to that extent: the first search was done within a few minutes.

Does anyone have a clue how to optimize the initial read of a large indexed file on Linux?