[Migrated content. Thread originally posted on 18 June 2009]
We set up our clients to run from a mapped drive, and they simply open indexed files from that mapped drive. I am seeing some strange behavior when reading data. We have a couple of programs that share some of the same code to display a list box of records that the user can interact with. The data files this data is pulled from can have upwards of 100k records in them.
Our issues showed up when a customer called and complained that the client processing was extremely slow, but the application server PC (the one everyone maps to) worked normally. Of the two programs mentioned above, one worked fine displaying the data and the other did not. In our lab, what I found was that the one that worked OK would take approximately 3 to 5 seconds to fill and display the list box, whereas the program with the problem took 1 minute, 25 seconds to display the exact same information.
OK, now here's the weird thing that I can't figure out. The difference between these two programs is that one allows you to edit information in the file and the other is simply a report program. The edit program opens the file for I/O and the report program opens it for INPUT. The confusing part is that it's the one that opens the file for INPUT that is extremely slow.
I have verified that opening the file for INPUT is the culprit: if I change the report program to open the file for I/O, it behaves like the edit program and takes only 3 to 5 seconds to read and display the information.
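In case it helps to see it, the only thing that differs between the two programs is the OPEN statement; the read loop that fills the list box is the same. Here is a stripped-down sketch of what the report program does (the file name, key, record layout, and paragraph names are made up for illustration, not our actual code):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. LISTDEMO.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      *    A made-up indexed file sitting on the mapped drive.
           SELECT CUST-FILE ASSIGN TO "F:\DATA\CUSTOMER.DAT"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS DYNAMIC
               RECORD KEY IS CUST-KEY
               FILE STATUS IS WS-STAT.
       DATA DIVISION.
       FILE SECTION.
       FD  CUST-FILE.
       01  CUST-REC.
           05  CUST-KEY   PIC X(10).
           05  CUST-NAME  PIC X(30).
       WORKING-STORAGE SECTION.
       01  WS-STAT        PIC XX.
       PROCEDURE DIVISION.
       MAIN-PARA.
      *    The only line that differs from the edit program is the
      *    OPEN mode. The report programs do this (slow over the LAN):
           OPEN INPUT CUST-FILE
      *    The edit program uses "OPEN I-O CUST-FILE" and is fast.
           PERFORM UNTIL WS-STAT NOT = "00"
               READ CUST-FILE NEXT RECORD
                   AT END CONTINUE
                   NOT AT END PERFORM ADD-TO-LIST
               END-READ
           END-PERFORM
           CLOSE CUST-FILE
           STOP RUN.
       ADD-TO-LIST.
      *    In the real code this adds CUST-REC to the list box.
           CONTINUE.

Swapping that one OPEN statement is the only change I made when testing, and it is enough to move the read time from 1:25 down to the 3 to 5 second range.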
Am I missing something here? We have a 100 Mbps network, and the program that reads with the file open for I/O hits about 16 to 20% bandwidth, whereas reading the exact same data with the file open for INPUT only hits about 7%.
Any help here would be appreciated, as I don't want to have to go through a couple hundred different reporting programs to change how they open files.



