From a database theory point of view this would be a NO-GO!
The index points to where in the file a record resides.
For one index to work for multiple files,
* Each record would have to be the same size across all the files.
* Every record would have to be in every file.
* Every record would have to be updated in each file in the same order every time.
Otherwise, the data record could be in different places in each file. If the record before the one you read is larger in one file than in another, the location within the physical record would differ. One record could even be in an overflow area while the other is in the main record. And if you apply updates in differing orders, the records get placed in the physical file in differing orders.
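As a toy illustration (plain Python with invented record contents, not actual UniVerse internals), here is how physical offsets diverge as soon as one record differs in size between two files:

```python
# Toy model: two "files" holding the same record ids, but one record is
# larger in file B than in file A. Laying records out sequentially shows
# why a single offset-based index cannot be correct for both files.
records_a = {"R1": b"short", "R2": b"payload-2"}
records_b = {"R1": b"a-much-longer-record", "R2": b"payload-2"}

def offsets(records):
    """Return the byte offset of each record if laid out back to back."""
    table, pos = {}, 0
    for key, data in records.items():
        table[key] = pos
        pos += len(data)
    return table

print(offsets(records_a))  # {'R1': 0, 'R2': 5}
print(offsets(records_b))  # {'R1': 0, 'R2': 20}
# An index entry "R2 is at offset 5" is right for file A, wrong for file B.
```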
I do not know the details of the UV indices. If the index stores only the "bucket," and the record's location within the bucket is kept in the bucket itself, my explanation would not apply.
If the index keys include a file indicator, this could work. Each file would effectively have its own unique key range within the single index file. This is the same technique used with client data: include the client id in the key and put all clients in one file.
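A minimal sketch of that compound-key idea (Python; the file ids, record ids, and locations are hypothetical, and real UniVerse indexes are not implemented this way):

```python
# Hypothetical shared index whose keys embed a file (or client) identifier,
# so each file effectively owns a distinct, non-colliding key range.
index = {}

def index_write(file_id, record_id, location):
    # The compound key (file_id, record_id) keeps the files separate even
    # though they share one physical index structure.
    index[(file_id, record_id)] = location

index_write("BRANCH1", "ITEM100", 4096)
index_write("BRANCH2", "ITEM100", 8192)  # same record id, different file

print(index[("BRANCH1", "ITEM100")])  # 4096
print(index[("BRANCH2", "ITEM100")])  # 8192
```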
My question is, "With the files already existing, how would you create and build the index?" Create and build it for one file, then read and rewrite every record in the other files?
This just sends shivers up my spine. For you older folks, I hear the robot saying, "Danger, Will Robinson! Danger!" And for the younger generations, it makes my "spidey sense" tingle.
BTW, I have been an IT professional for 50 years, most of them spent working with various database systems. So, I may be a bit set in my ways. But watch out for unexplained data corruption if you do this.
________________________________________________
Sam Beard
Senior Programmer Analyst
8215 S. Elm Place, Broken Arrow, OK 74011
sambeard@micahtek.com
www.micahtek.com
Phone: 918.449.3300 x3035
Fax: 918.449.3301
Original Message:
Sent: 9/26/2024 4:43:00 PM
From: Mike Rajkowski
Subject: RE: one index for multiple files?
Thomas,
I would worry about how you are handling the writes to the 8 files, if/when locks are involved. What would happen if one of the files did not get updated? I can see a situation where the index returns an item based on data that is not present in the file that failed to write.
As for the performance, I think I see why it is faster.
While I am not 100% sure, I expect that the index would only be updated by the first file that was written. So each update costs 9 writes (1 for each of the 8 files plus 1 index write), roughly half the 16 writes (8 data plus 8 index) that would be needed if each file had its own index.
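The arithmetic behind that estimate, under the stated (unverified) assumption that the shared index is written only once per logical update:

```python
# Assumed model: one logical update touches all 8 data files.
files = 8
shared_index_writes = files + 1    # 8 data writes + 1 shared-index write
per_file_index_writes = files * 2  # 8 data writes + 8 per-file index writes
print(shared_index_writes, per_file_index_writes)  # 9 16
```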
I understand your concern about changing code that works, and the effort may not be worth the gain. But maintaining one file for the 60% of the data that is identical, plus a secondary file that uses the same id for the data unique to each of the 8 accounts, may provide better performance. A bulk import of the common data would then update 2 files, not 9 or 16.
------------------------------
Mike Rajkowski
MultiValue Product Evangelist
Rocket Internal - All Brands
US
------------------------------
Original Message:
Sent: 09-26-2024 15:58
From: Thomas Ludwig
Subject: one index for multiple files?
Thanks JOE for thinking about it.
There are 8 physical data files with the same dictionary in 8 accounts.
About 60% of the data is identical in all 8 data files.
The keys and number of entries in all 8 data files are also identical.
Only fields that are identical in all 8 files are in the index.
best regards
Thomas
------------------------------
Thomas Ludwig
System Builder Developer
Rocket Forum Shared Account
Original Message:
Sent: 09-26-2024 15:54
From: Joe Goldthwaite
Subject: one index for multiple files?
Could you clarify this a bit? What do you mean by "Master Data" vs "item files"? In UniVerse you can have one copy of the file, with Q-pointers in all 8 accounts pointing to that one copy. If there is data specific to the account, each of your 8 accounts can have its own copy of that file.
Having 8 files with one index sounds like you're going to have a lot of mismatches. I must not be understanding the question.
------------------------------
Joe Goldthwaite
Consultant
Phoenix AZ US
Original Message:
Sent: 09-26-2024 15:47
From: Thomas Ludwig
Subject: one index for multiple files?
Our UniVerse application consists of 8 accounts for 8 branches. Each account contains a file for the items. The item file's master data is identical across all accounts; only the transaction data differs. When master data changes are written, they are distributed to all accounts to keep them consistent.
Although this approach is certainly not optimal, we do not want to change it because the programming is not easy to modify.
I conducted a test and used:
SET.INDEX ITEM TO H:\ALPHA\INDEXES\I_ITEM
to point the index of all 8 item files at the same destination.
During a bulk import into the item file, where all 8 files are updated, switching to a single index file roughly doubles the speed.
My question now is: Is there anything that speaks against using a single index file for multiple data files, as long as all indexed fields are identical for all data files?
greetings
Thomas
------------------------------
Thomas Ludwig
System Builder Developer
Rocket Forum Shared Account
------------------------------