Given a relationship such as this:
customers hasmany orders
orders belongsto customer
I'd like to sum orders.total and write it on customer.
Instead of finding each routine that would need to update this value, I was thinking about adding it to the customer write trigger. So every time a customer is updated, this value is evaluated and updated.
What's the cost/risk to implementing it this way?
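Roughly, I'm picturing something along these lines (just a sketch; the file names, attribute numbers, and callx argument list below are placeholders, not our actual schema):

SUBROUTINE CUST.TOTAL.TRIGGER(CUST.ID, CUST.REC)
* Minimal sketch of the write-trigger idea: every write to CUSTOMERS
* re-sums that customer's orders and stores the total on the record.
* ORDERS, the order-id list, and the attribute numbers are assumptions.
  VM = CHAR(253)                          ;* value mark
  OPEN 'ORDERS' TO F.ORDERS ELSE RETURN
  TOTAL = 0
  ORDER.IDS = CUST.REC<10>                ;* assumed: multivalued list of the customer's order ids
  FOR I = 1 TO DCOUNT(ORDER.IDS, VM)
    READ ORD.REC FROM F.ORDERS, ORDER.IDS<1,I> ELSE ORD.REC = ''
    TOTAL = TOTAL + ORD.REC<5>            ;* assumed: attribute 5 holds orders.total
  NEXT I
  CUST.REC<20> = TOTAL                    ;* assumed: attribute 20 stores the sum on the customer
  RETURN
END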
------------------------------
Jeremy Lockwood
Awesome
ASE Supply Inc
Portland OR US
------------------------------
From the access() function:
FLAG.INCLUSION = ACCESS(16)
FLAG.EXCLUSION = ACCESS(12)
*
FILEID = ACCESS(10)
FILENAME = ACCESS(11)
OBJECT OF THE OPENED FILE = ACCESS(1)
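So inside the trigger you can grab that context straight from access() and, for example, stash it on whatever record your trigger writes out for later processing (a sketch only, following the mapping above; F.WF is a hypothetical work file opened earlier):

* Sketch only: capture the trigger context via access(), per the
* mapping above, and keep it on the record you write out.
WORK.REC    = ''
WORK.REC<1> = ACCESS(11)      ;* filename
WORK.REC<2> = ACCESS(10)      ;* file id
WORK.REC<3> = ACCESS(16)      ;* inclusion flag
WORK.REC<4> = ACCESS(12)      ;* exclusion flag
WRITE WORK.REC ON F.WF, ACCESS(10):'*':TIME()   ;* F.WF: work file opened earlier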
------------------------------
Alberto Leal
System Analyst
Millano Distribuidora de Auto Pecas Ltda
Varzea Grande MT BR
------------------------------
We write to a temp/work file and then let a phantom process (a program started with ZS and the program name) take care of it, to avoid locking issues.
In your callx you would do something like this:
************************************
* CALLX: queue a work record for the phantom to process later
************************************
OPEN 'your_temp_file' TO F.WF ELSE
   RETURN
END
OPEN 'some_other_file' TO F.SRCF ELSE
   RETURN
END
GOSUB 100   ;* get the next sequence number (ADD.NBR)
*
* Key the work record on the item-id plus a counter, so a record the
* phantom currently has locked is never re-used
ITEM.NAME = ACCESS(10)
ITEM.NAME = ITEM.NAME:"*":ADD.NBR
REC = ""
WRITE REC ON F.WF, ITEM.NAME
RETURN
*
****************************
* GET NEXT NBR FROM some_other_file
****************************
100
*
T.DATE = DATE()
SRCF.KEY = "something"
READU SRCF.REC FROM F.SRCF, SRCF.KEY ELSE SRCF.REC = ""
ON.FILE.DATE = SRCF.REC<1>
IF ON.FILE.DATE = T.DATE THEN
   ADD.NBR = SRCF.REC<2> + 1   ;* same day: bump the counter
END ELSE
   SRCF.REC<1> = T.DATE        ;* new day: restart the counter
   ADD.NBR = 1
END
SRCF.REC<2> = ADD.NBR
WRITE SRCF.REC ON F.SRCF, SRCF.KEY   ;* write releases the update lock
*
RETURN
******************************
You could use COMMON so you don't have to keep opening the files in the callx; we prefer not to.
We add a counter to the "real" order id (in your case) because the phantom process might have it locked. Set the phantom to sleep 3-5 seconds, then scan 'your_temp_file' and do your calculations in the phantom. This is much easier to debug and change because you only have to update the phantom program. If you update your customer file elsewhere, then it may be better to have a "customer-total" file that only gets updated from the phantom. This avoids locking issues.
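Roughly, the phantom side looks like this (names are placeholders, and the totaling itself is left as a stub since it depends on how you can enumerate a customer's orders):

* Hypothetical phantom: wake every few seconds, drain the work file
* written by the callx, recalculate each customer's total and write it
* to a separate customer-total file to stay out of the way of locks.
OPEN 'your_temp_file' TO F.WF ELSE STOP
OPEN 'customer-total' TO F.CTOT ELSE STOP
LOOP
   SLEEP 5                                ;* the 3-5 second nap mentioned above
   SELECT F.WF
   DONE = 0
   LOOP
      READNEXT WORK.ID ELSE DONE = 1
   UNTIL DONE DO
      ITEM.ID = FIELD(WORK.ID, '*', 1)    ;* strip the counter to get the real id back
      GOSUB RECALC                        ;* stub below: work out CUST.ID and TOTAL
      WRITE TOTAL ON F.CTOT, CUST.ID
      DELETE F.WF, WORK.ID                ;* finished with this work record
   REPEAT
REPEAT
*
RECALC:
* stub - derive CUST.ID from ITEM.ID and sum that customer's orders
* into TOTAL, however your data allows (a list on the customer record,
* an index, or a select)
   CUST.ID = ITEM.ID
   TOTAL = 0
   RETURN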
Thanks,
Laust Andersen
Business Accounting Software Canada
------------------------------
Laust Andersen
President
Business Accounting Software Canada
Calgary AB CA
------------------------------
In trying to solve any computing problem, one needs to consider two things: time and space. That is, given enough time you can solve any problem; or, given enough space you can solve any problem. In reality you don't have the luxury of both. Part of the problem in the computing world is that the younger generation of coders feel they have infinite time and space to solve their computing problems.
Part of the genius of the early Pick systems was capitalizing on the concept of paging. A page of memory was only 512 bytes and you had to deal with disk drives spinning at 1200 rpm. Oh, and there was only 32K of memory. You had to be clever in your coding and organize the logic so that the most frequently used pieces of logic stayed memory resident and did not force the disk to page. Consequently, structured, easy-to-read code flew out the window. When you are forced to live within tight constraints, you develop systems that will survive (and thrive on) advances in technology.
Another part of the early genius was the concept of multi-value. This allowed for the use of indexes that did not have to be constrained by space. Let's say you have a customer record that has an attribute containing a list of orders. This is where multi-value comes to the rescue. Tied with associative attributes, one can minimize the number of disk reads needed to give a snapshot of the "sales" total or the "cost" total without having to read through thousands of order records. Put another way: can you answer what the total sales for a customer are in one disk read, or do you have to pull in x records to get a total?
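To illustrate (the attribute numbers here are made up), the customer record alone can answer the question:

* One disk read: the customer record carries an associated multivalued
* list of order ids and order totals, so no order records are read.
OPEN 'CUSTOMERS' TO F.CUST ELSE STOP
READ CUST.REC FROM F.CUST, 'C1001' ELSE STOP   ;* hypothetical customer id
ORDER.TOTALS = CUST.REC<11>                    ;* assumed: associated multivalue of order totals
TOTAL.SALES = 0
FOR I = 1 TO DCOUNT(ORDER.TOTALS, CHAR(253))   ;* CHAR(253) = value mark
   TOTAL.SALES = TOTAL.SALES + ORDER.TOTALS<1,I>
NEXT I
PRINT 'Total sales: ':TOTAL.SALES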
One also needs to consider the issue of read/write locks in a multi-user environment. If two users are updating the same customer with maybe the same order, how will this be handled? Mature systems have these scenarios well thought out and appropriately coded.
A well-designed application is ideally efficient whether a customer record has 10 orders or 100,000. Per your query, we need to ask: are thousands of customer records being updated every hour, or only a handful? By a couple of users, or by hundreds? Is the next generation of programmers who will follow you capable of handling the clever data structures you developed?
In my experience, I have evolved toward simple, flat files with indexes that point to other simple, flat files. The data structures are easy to follow and they lend themselves (nicely) to ad hoc reporting. Locks are more easily handled. The design supports a large number of users, too.
I have also discovered that mvBase tends to be faster and more efficient than D3 in handling my approach to data solutions. I suspect this is because mvBase assumes effectively unlimited main memory (space), so all my data stays resident. I can count millions of records almost instantaneously in mvBase, but the same operation drags under D3. Would mvBase still be as efficient if the data set grew from 8GB to 8TB? I don't know.
Best wishes on deciding your approach to the "write trigger." It will be an interesting journey, I'm sure.
------------------------------
Michael Archuleta
President
Arcsys Inc
Draper UT US
------------------------------
Sometimes I do it a little differently: the trigger only writes an instruction record to a control file, and I have a phantom that is always reading that control file; that phantom is what starts the real processing phantom.
Why? Let's say your trigger kicks off 1,000 changes. You would end up with 1,000 phantoms running at the same time, which slows down the creation of new jobs, so people will think the system is slow. It isn't; it's just that the search for a valid job ID in the queue takes too long, and it can sometimes cause problems with printers.
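Roughly, the dispatcher phantom looks like this (the names are made up; the trigger itself only does one small write into the control file):

* Hypothetical dispatcher: the only phantom that is always running.
* It polls the control file the trigger writes to and starts the real
* processing phantom only when there is work, so a burst of 1,000
* changes does not spawn 1,000 phantoms.
OPEN 'CONTROL-FILE' TO F.CTL ELSE STOP
LOOP
   SLEEP 5
   SELECT F.CTL
   READNEXT CTL.ID ELSE CTL.ID = ''
   IF CTL.ID # '' THEN
      GOSUB START.WORKER    ;* something is queued: kick off the real worker
   END
REPEAT
*
START.WORKER:
* stub - launch the real processing phantom here, however phantoms are
* started on your system, and let it drain CONTROL-FILE itself
   RETURN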
------------------------------
Alberto Leal
System Analyst
Millano Distribuidora de Auto Pecas Ltda
Varzea Grande MT BR
------------------------------