How have others who are using MVS tackled the challenge of paginating result sets?

------------------------------
Jeremy Lockwood
Awesome
ASE Supply Inc
Portland OR United States
------------------------------
Hi, Jeremy. Sorry for just getting to this. I was in Kansas on vacation last week.

I've never done it, but I have a few ideas. First off, best practice dictates using a subroutine and an index. Page# and page depth (items/page) would need to be passed as arguments. Then you could either create and store a list in the pointer-file (and figure out a way to clean it up later), read the list, count down attributes and go from there, or you could do ((page# - 1) * page depth) dummy "key" reads on your index before collecting the page.
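For example, with a page depth of 25, fetching page 3 would mean (3 - 1) * 25 = 50 dummy key reads to skip past pages 1 and 2 before collecting the 25 live ones.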

------------------------------
Brian S. Cram
Principal Technical Support Engineer
Rocket Software
------------------------------

My current routine looks something like this:

subroutine paginate.list(listname, perpage, page, rlist, rcount)

   rlist = ""; rcount = 0

   open 'pointer-file' to f.pf else return
   read clist from f.pf, listname else return

   rcount = dcount(clist, @am)

   * parse the saved list into a dimensioned array for fast element access
   dim matlist(rcount)
   matparse matlist from clist

   * work out which slice of the list belongs to the requested page
   startIdx = (page-1)*perpage + 1
   if startIdx < 1 then
      startIdx = 1
   end

   endIdx = startIdx + perpage - 1
   if endIdx > rcount then
      endIdx = rcount
   end

   for idx = startIdx to endIdx
      rlist<-1> = matlist(idx)
   next idx

return


What would a dummy key read implementation look like?



------------------------------
Jeremy Lockwood
Awesome
ASE Supply Inc
Portland OR United States
------------------------------

Something like:

ROOT 'FILE','INDEXREF' TO I.REF ELSE
   ERROR = "cannot open index"
   GO SubroutineMainExit
END
SKIP = (PAGENO-1)*PERPAGE
PROCESS = PERPAGE
FNXT = "C" ;* "first" key
LOOP
   KEY(FNXT,I.REF,SEARCHKEY,ID) THEN
      FNXT = "N" ;* "next" key
      IF SKIP THEN
         SKIP -= 1
      END ELSE
         IF PROCESS THEN
            PROCESS -= 1
            * do whatever here with ID
         END ELSE FNXT = "" ;* force end of loop
      END
   END ELSE
      FNXT = ""
   END
UNTIL FNXT = "" DO REPEAT
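
Wrapped into the same subroutine shape as yours, it might look roughly like this - a sketch only, with FILE and INDEXREF as placeholders for your data file and indexed attribute:

SUBROUTINE PAGINATE.INDEX(PAGENO, PERPAGE, RLIST, ERROR)

   RLIST = ""
   ERROR = ""

   ROOT 'FILE','INDEXREF' TO I.REF ELSE
      ERROR = "cannot open index"
      RETURN
   END

   SKIP = (PAGENO-1)*PERPAGE ;* entries to pass over before this page
   PROCESS = PERPAGE         ;* entries to collect for this page
   FNXT = "C" ;* "first" key
   LOOP
      KEY(FNXT,I.REF,SEARCHKEY,ID) THEN
         FNXT = "N" ;* "next" key
         IF SKIP THEN
            SKIP -= 1
         END ELSE
            IF PROCESS THEN
               PROCESS -= 1
               RLIST<-1> = ID ;* collect this page's item-id
            END ELSE FNXT = "" ;* force end of loop
         END
      END ELSE
         FNXT = ""
      END
   UNTIL FNXT = "" DO REPEAT

RETURN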

------------------------------
Brian S. Cram
Principal Technical Support Engineer
Rocket Software
------------------------------

GET-LIST listname (1001-2000 *** gets elements 1001 to 2000 of the saved list
CLEARSELECT *** this exhausts the list even if it is empty - do it ALWAYS
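
For example, dropped into a routine shaped like Jeremy's - just a sketch, assuming the saved list already exists and that the select list activated by the executed get-list is picked up by the readnext:

subroutine paginate.getlist(listname, perpage, page, rlist)

   * range of saved-list entries that make up the requested page
   startIdx = (page-1)*perpage + 1
   endIdx = startIdx + perpage - 1

   rlist = ""
   execute "get-list " : listname : " (" : startIdx : "-" : endIdx capturing noop
   loop
      readnext itemid else exit
      rlist<-1> = itemid
   repeat
   clearselect ;* always exhaust the list, even if it came back empty

return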

------------------------------
Stefano Maran
Senior programmer
GTN SpA
Tavagnacco Italy
------------------------------
Thanks, Stefano. I never knew that one. You learn something new every day!

------------------------------
Brian S. Cram
Principal Technical Support Engineer
Rocket Software
------------------------------
Very nice. I haven't tried Brian's root/key solution yet, but based on my benchmark the get-list method is faster than my matlist one (which is already much quicker than walking a dynamic array). Using the test program below, these are my results:

Using a file with ~360K items

iterations = 10000
method = 1, average = 0.0179, min = 0.0163, max = 0.045
method = 2, average = 0.0033, min = 0.0029, max = 0.0293

It's unfortunate that the AQL select doesn't simply support an offset, with a syntax something like this:

SELECT FILE BY A0 OFFSET 100 SAMPLING 25

open 'pointer-file' to f.pf else stop

crt "file: ":; input tfile;
crt "number of iterations: ":; input iterations;

listname="test-list"
execute \select \: tfile :\ by a0\ capturing noop;
execute \save-list \:listname capturing noop;

read thelist from f.pf,listname else stop

itemlist1="";
itemlist2="";
times="";

crt "items in list = ": dcount(thelist, @am);

for x=1 to iterations
   
   itemlist1="";
   
   call timestamp(tstartTime);
   
   dim matlist(dcount(thelist, @am));
   matparse matlist from thelist;
   
   for i=350000 to 350025
      itemlist1=itemlist1:@vm:matlist(i);
   next i
      
   call timestamp(tendTime);
   
   etime=tendTime-tstartTime;
   times<1,-1>=etime;
   
   crt "completed iteration ":x:", method = ":1:", time = ": etime;
next x

crt "------------------------------------";
crt "method = 1, average = ":sum(times<1>)/iterations:", min = ":minimum(times<1>):", max = ":maximum(times<1>);
crt "------------------------------------";

for x=1 to iterations

   itemlist2="";

   call timestamp(tstartTime);
   
   * the executed get-list leaves its select list active for the readnext below
   execute "get-list test-list (350000-350025" capturing noop;
   loop
      readnext itemid else exit
      itemlist2=itemlist2:@vm:itemid;
   repeat
   clearselect;
      
   call timestamp(tendTime);
   etime=tendTime-tstartTime;
   times<2,-1>=etime;
   
   crt "completed iteration ":x:", method = ":2:", time = ": etime;
next x

crt "------------------------------------";
crt "method = 2, average = ":sum(times<2>)/iterations:", min = ":minimum(times<2>):", max = ":maximum(times<2>);
crt "------------------------------------";

crt "iterations = ": iterations;
crt "method = 1, average = ":sum(times<1>)/iterations:", min = ":minimum(times<1>):", max = ":maximum(times<1>);
crt "method = 2, average = ":sum(times<2>)/iterations:", min = ":minimum(times<2>):", max = ":maximum(times<2>);
crt "------------------------------------";

crt itemlist1 = itemlist2 ;* prints 1 if both methods returned the same items

This could probably be tested better by accessing random offsets, but I think this is good enough for me.
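If I wanted random offsets, something like startIdx = rnd(350000)+1 each iteration (building the get-list range from startIdx) would probably do it, but the fixed range is good enough for a rough comparison.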

Given that the initial select is expensive, I think my next step will be to figure out a way to cache the saved list and only invalidate it and rerun the select/save-list when the underlying data changes.
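
Something like this is what I have in mind - purely a sketch, with 'control-file' and the '.changed'/'.built' items standing in for whatever change-stamp the update programs would actually maintain:

subroutine refresh.list(tfile, listname)

   * assumes update programs write a stamp to 'control-file' item tfile:'.changed'
   open 'control-file' to f.ctl else return

   read stamp from f.ctl, tfile:'.changed' else stamp = ""
   read cached from f.ctl, listname:'.built' else cached = ""

   * rebuild and re-save the list only when the data has changed since the last build
   if stamp = "" or stamp # cached then
      execute \select \ : tfile : \ by a0\ capturing noop
      execute \save-list \ : listname capturing noop
      write stamp on f.ctl, listname:'.built'
   end

return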

------------------------------
Jeremy Lockwood
Awesome
ASE Supply Inc
Portland OR United States
------------------------------