This may not be a direct answer to your query, but I will offer the following opinion and observations.
In trying to solve any computing problem, one needs to consider two things: time and space. Given enough time you can solve any problem; likewise, given enough space you can solve any problem. In reality you don't have the luxury of both. Part of the problem in the computing world is that the younger generation of coders feels it has infinite time and space to solve its computing problems.
Part of the genius of the early Pick systems was capitalizing on the concept of paging. A page of memory was only 512 bytes, and you had to deal with disk drives spinning at 1200rpm. Oh, and there was only 32K of memory. You had to be clever in your coding and organize logic so that the most frequently used pieces of logic would stay memory resident and not force the disk to page. Consequently, structured, easy-to-read code flew out the window. When you are forced to live within tight constraints, you develop systems that will survive (and thrive on) advances in technology.
Another part of the early genius was the concept of multi-value. This allowed for the use of indexes which did not have to be constrained by space. Let's say you have a customer record with an attribute that contains a list of orders. This is where multi-value comes to the rescue. Tied with associated attributes, one could minimize the number of disk reads needed to give a snapshot of the "sales" total or the "cost" total without having to read through thousands of order records. Put another way: can you answer what the total sales for a customer are in "one" disk read, or do you have to pull in x records to get a total?
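As a rough sketch of that idea (in Python rather than Pick BASIC, with made-up field names and values), a customer item carrying associated multivalued order totals answers the question from the one customer read, without touching the order file:

```python
# Hypothetical multivalue customer record, modeled here as a dict.
# In Pick, order_ids, sales, and cost would be associated multivalued
# attributes stored together on the single customer item.
customer = {
    "id": "C100",
    "order_ids": ["O1", "O2", "O3"],      # multivalued attribute
    "sales": [125.00, 80.50, 42.25],      # associated multivalues
    "cost": [90.00, 55.00, 30.00],        # associated multivalues
}

# One "disk read" of the customer item yields both totals;
# the thousands of individual order records are never fetched.
total_sales = sum(customer["sales"])
total_cost = sum(customer["cost"])
```

The design choice being illustrated: the aggregate lives with (or alongside) the parent record, so the read cost of the report is independent of the number of orders.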
One also needs to consider the issue of read/write locks in a multi-user environment. If two users are updating the same customer, perhaps even the same order, how will this be handled? Mature systems have these scenarios well thought out and appropriately coded.
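One common discipline, sketched here in Python (in Pick BASIC this is the READU/WRITE pair, which locks an item before reading it for update), keeps two users from losing each other's changes to the same customer:

```python
import threading

# Toy item-level locking, sketching Pick's READU/WRITE discipline:
# lock the record before reading it for update, release on write-back.
customers = {"C100": {"order_count": 0}}
item_locks = {"C100": threading.Lock()}

def add_order(cust_id):
    with item_locks[cust_id]:      # READU: take the item lock, then read
        rec = customers[cust_id]
        rec["order_count"] += 1    # update the record in memory
        customers[cust_id] = rec   # WRITE: write back; lock is released

# Fifty concurrent "users" update the same customer.
threads = [threading.Thread(target=add_order, args=("C100",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock held across the read-modify-write, all 50 updates survive; without it, concurrent updates could silently overwrite one another.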
A well-designed application is ideally efficient whether a customer record has 10 orders or 100,000. Per your query, we need to ask whether thousands of customer records are being updated every hour, or only a handful; by a couple of users, or by hundreds. And is the next generation of programmers who will follow you capable of handling the clever data structures you developed?
In my experience, I have evolved toward simple, flat files with indexes that point to other simple, flat files. The data structures are easy to follow, and they lend themselves (nicely) to ad hoc reporting. Locks are more easily handled. The design supports a large number of users, too.
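A minimal sketch of that flat-file-plus-index layout (again in Python, with invented file and field names): a customer file, an order file, and a separate index file whose items are simply lists of order ids per customer.

```python
# Hypothetical flat order file: one simple record per order.
orders = {
    "O1": {"customer": "C100", "total": 125.00},
    "O2": {"customer": "C100", "total": 80.50},
    "O3": {"customer": "C200", "total": 42.25},
}

# Index file: one item per customer, pointing into the order file.
order_index = {}
for oid, rec in orders.items():
    order_index.setdefault(rec["customer"], []).append(oid)

# Ad hoc reporting follows the index instead of scanning the order file.
c100_sales = sum(orders[oid]["total"] for oid in order_index["C100"])
```

The index file is rebuilt or maintained alongside writes to the order file, so reports read only the index item plus the records it names.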
I have also discovered that Mvbase tends to be faster and more efficient than D3 in handling my approach to data solutions. I suspect this is because Mvbase assumes unlimited main memory (space) and keeps all my data resident. I can count millions of records instantaneously in Mvbase, but the same operation drags under D3. Would Mvbase still be as efficient if the data set grew from 8GB to 8TB? I don't know.
Best wishes on deciding your approach to the "write trigger." It will be an interesting journey, I'm sure.
------------------------------
Michael Archuleta
President
Arcsys Inc
Draper UT US
------------------------------
Original Message:
Sent: 12-22-2021 18:52
From: Jeremy Lockwood
Subject: Opening other files in callx triggers
Anyone out there open other files in write triggers?
Given a relationship such as this:
customers hasmany orders
orders belongsto customer
I'd like to sum orders.total and write it on customer.
Instead of finding each routine that would need to update this value, I was thinking about adding it to the customer write trigger. So every time a customer is updated, this value is evaluated and updated.
What's the cost/risk to implementing it this way?
------------------------------
Jeremy Lockwood
Awesome
ASE Supply Inc
Portland OR US
------------------------------