Rocket iCluster


iCluster Performance - replication with Local Journaling


    ROCKETEER
    Posted 03-02-2021 17:32
    Edited by Mark Watts 04-06-2021 13:05


    In this series discussing iCluster Performance, we have covered TCP/IP settings and disk unit I/O impacts on IBM Power running IBM i with iCluster replication.  If you missed those topics, you can find them in the list of forum threads under the iCluster topic.

    iCluster replication technology is based on IBM i's ability to record all changes to objects in a journal (log) as they are created, updated or deleted.  iCluster lets you select between two transports for delivering those changes to the selected clustered node: Remote Journaling, a DB2 service included in the base IBM i operating system, or iCluster's own built-in transport, which relies exclusively on local journaling.

    In this entry we will discuss creating a local journal using the most common options to quickly and easily set up journaling for iCluster logical replication.  At the risk of repeating myself, iCluster also fully supports remote journaling, so please don't interpret its omission here as a lack of support; it is simply outside the narrow focus of this discussion, and remote journals will get their own follow-up discussion.

    What benefits does selecting iCluster local journaling deliver to replication?  First, you will experience the ease of configuration as you follow along in this article.  Second is the efficiency of replication communication, because iCluster is not subject to journal pollution.  Journal pollution occurs when the objects journaled to a selected journal far outnumber the objects actually selected for the group.  iCluster selects and sends to the target only the objects specified in the group selection, whereas remote journaling sends every journal entry written to the journal to the target system whether the group selected it or not.  The result is less bandwidth usage when compared to remote journaling.  Lastly, one thing customers like is less complexity.  It's not terribly complicated to use remote journals, but IBM i administrators typically wear multiple hats, and anything that reduces complexity on IBM i makes it easier to maintain expertly.  We can focus on remote journaling in a future post.

    So let's set up the discussion as if the application analysis has already been performed and the result is that we will select the library PAYLIB to replicate the payroll application.  Let's further assume that all the objects for the payroll application are contained within PAYLIB.  It is not uncommon that some of the objects in PAYLIB are already journaled to an application journal (sometimes in the application library) to support recovery or application commitment control.  With iCluster we do not need to investigate this situation further.  Why?  Because if the application vendor (or homegrown app developers) set up journaling for the most critical files in their application, we do not need to change the journaling choices the vendor made, so we can leave this in place just as the vendor planned.
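
    That said, if you are curious which physical files in PAYLIB are already journaled and to which journal, a quick way to check with base IBM i commands is sketched below.  The output file name is just an illustration, and the exact column names for the journaling attributes vary by release, so treat this as a hedged example rather than a required step.
    DSPFD FILE(PAYLIB/*ALL) TYPE(*ATR) FILEATR(*PF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/PAYFILATR)
    RUNQRY QRY(*NONE) QRYFILE((QTEMP/PAYFILATR))
    The attribute output includes each file's journaling status and the name of the journal it is attached to.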

    What we will do is create a 'default journal' designed for 'everything else' that is not already journaled to the application journal.  iCluster will automatically adopt for replication all journals associated with the selection, so there is no need for complex analysis and decisions about what will be journaled where or how many replication groups should be made.  Further, there is no need to interrupt application availability, because no changes would be made to existing journaling.  Now, in full disclosure, once replication is active and any deficiencies in our first pass at configuration are identified, we may make some adjustments to what we have set up, but that is typically not required.

    So let's create a library where we will store our default journals.  Some people like @JOURNALS or AAAJRNLIB for the journal library.  The low alphabetic name helps ensure that during a full system backup the journal library is saved before the application libraries, and that on a restore the journals are back in place before the journaled files that depend on them.  In our example we will use AAAJRNLIB.
    CRTLIB LIB(AAAJRNLIB) TYPE(*PROD) TEXT('Journal Library')

    Next we will create our journal receiver with our preferred options.
    CRTJRNRCV JRNRCV(AAAJRNLIB/PAY0000001) THRESHOLD(3500000) TEXT('PAYJRN receiver')

    Your first question might be, "Why did we increase the journal receiver threshold?"  The primary reason is that we want to prevent the system from creating a new receiver too frequently.  Each time a receiver reaches its size threshold, the system must pause, update the receiver chain, create the new receiver, complete the chain and continue.  It performs this very quickly, but do it too frequently, multiplied across all applications, and application performance could be adversely affected.  Our soft target is to have the system create a new journal receiver only one to four times a business day, and we adjust the threshold accordingly.  This is a good starting point.  There are of course other options available when creating a journal receiver; examine them to see if they are appropriate for your environment.
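
    As a rough, hypothetical sizing example (the volumes here are assumptions for illustration, not measurements from a real system): THRESHOLD is specified in kilobytes, so 3500000 is roughly 3.5 GB per receiver.  If the payroll application generated about 10 GB of journal entries in a business day, that threshold would trigger roughly 10 GB / 3.5 GB, or about three receiver changes a day, which lands inside the one-to-four target.  Measure your own daily journal volume and scale the threshold to match.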

    Next we will create the default journal for our replication group and associate it with the receiver we just created.
    CRTJRN JRN(AAAJRNLIB/PAYJRN) JRNRCV(AAAJRNLIB/PAY0000001) MNGRCV(*SYSTEM) DLTRCV(*NO) RCVSIZOPT(*RMVINTENT *MAXOPT2) MINENTDTA(*FLDBDY) JRNCACHE(*NO) TEXT('PAYJRN Journal')

    Some of the values we selected are the system defaults on most systems, but they are important, so if the defaults have been changed on your system, these are the values we are expecting.  Most are pretty standard, but two values might be a surprise, so let's discuss them briefly.

    First is the journal minimal entry data option, MINENTDTA(*FLDBDY).  This allows the system to write only the changed fields to the journal rather than the entire row.  Depending on the length of the database rows, this can represent a significant savings in collected and transmitted data when table data is changed.  iCluster supports this option, and we recommend the 'Field Boundary' value (*FLDBDY) so that the journal records can still be easily read if you need to investigate them.

    The other option, journal caching turned off (JRNCACHE(*NO)), is a strategic choice rather than a hard requirement.  There are two reasons for this.  First, journal caching is a paid feature a customer must be entitled to from IBM, so we are trying not to require additional cost.  Second, if transactions are cached on the primary before being written, there is a slightly higher risk that some transactions have not yet reached disk when an unexpected outage hits production.  Caching improves journaling performance, but entries that have not been written to disk are not yet available for replication to send to the target node.
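
    Neither choice locks you in.  If you later decide to adjust either option, the base IBM i CHGJRN command accepts both parameters without recreating the journal; this is standard operating system behavior rather than an iCluster command, and turning caching on additionally requires entitlement to the journal caching feature from IBM.
    CHGJRN JRN(AAAJRNLIB/PAYJRN) MINENTDTA(*FLDBDY) JRNCACHE(*NO)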

    Now that we have our journal created, we have two more steps to create a basic replication group and selection utilizing the new default local journal.  This first command is all that is needed to add the replication group to the cluster using LOCAL journaling.  Although there are many override options available to customize the replication definition for the application and the desired result, iCluster automatically sets all of them to their most commonly selected values and therefore reduces the group creation effort.
    DMADDGRP GROUP(PAYROLL) GRPTYPE(*REPL) PRIMNODE(ICDMO74A) BACKUPS(ICDMO74B) STGSTORLIB(#HAPAYROL) DFTDBJRN(AAAJRNLIB/PAYJRN)

    In our simple example we will complete the selection with a single statement.  Of course, most replication groups will contain enough statements to select all the desired objects and will also contain exclusion statements that trim the selection down to the best-performing replication group by eliminating objects that are not useful for recovery and business resumption on the backup node (for example, temporary work objects).  In this case we are not interested in identifying and excluding a few low-activity temporary objects that do not adversely affect replication; for simplicity we will let them be replicated with a generic inclusion statement (a hypothetical exclusion is shown just after it for reference).
    DMSELOBJ GROUP(PAYROLL) OBJ(PAYLIB/*ALL) OBJTYPE(*ALL) INCFLG(*INCLUDE) 
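
    For reference, an exclusion statement simply flips the flag to INCFLG(*EXCLUDE); the WRK* generic name below is purely an assumed illustration, not something our PAYLIB example actually needs.
    DMSELOBJ GROUP(PAYROLL) OBJ(PAYLIB/WRK*) OBJTYPE(*FILE) INCFLG(*EXCLUDE)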

    There we have it.  The local default journal we created will be selected for the replication group, and all the desired objects are ready to begin the SYNC process used for replication.  To keep this forum post a short read, I'll not go into the SYNC process in detail; it will be covered separately.  But rather than leave you hanging, let's discuss how objects that are in selection but not yet journaled get onto our new default journal.

    The primary commands used for replication sync, in combination with a sync process, are DMMRKPOS, DMMRKSYNC and DMSTRGRP, with an option to REFRESH objects as part of the process or to use a previously MARKED position.  I'll cover the easiest of the commands (with the easiest options), which synchronizes a new replication group over the existing established communications link.  This command is so good that it may well deserve a separate post just highlighting it.  Below is an example of how I would use it.  Invoking this command for our new group does the following:
    - all unjournaled objects in the selection are automatically journaled to the new default journal assigned in our group definition, while application objects that were already journaled remain on their prescribed journal and their details are added to the group metadata;
    - a new point-in-time 'mark' is added to the iCluster metadata for all objects in selection;
    - a save file is created and all the objects in scope are saved;
    - an FTP connection is established to the backup node and the save file is delivered;
    - the save file is restored on the backup node;
    - the work save files are optionally deleted from both the primary and backup nodes;
    - the replication group is optionally started at the 'MARKED' journal position.
    I also recommend you submit the task as a batch job rather than performing it interactively.  The command would be submitted from the primary node with the correct access authorities and with the FTP service active on the BACKUP node.  Note: this command is the only command or service in the iCluster application arsenal that uses FTP.  It is used only because of some of FTP's unique capabilities, and it didn't make sense to rewrite them for this exclusive use.  FTP does not need to be available for iCluster replication after this command completes.
     
    SBMJOB CMD(DMMRKSYNC GROUP(PAYROLL) SAVFLIB(ICTEMP) LIBS(*ALL) SYNCBSF(*YES) SAVPVTAUT(*NO) RMTUSRPRF(ABCFTPUSR) RMTPWD(abcftppwd) RTPRMTPWD(abcftppwd) DLTSAVF(*BOTH) STRGRP(*YES)) JOB(SYNCPAYROL) JOBD(ICLUSTER/CSJOBD)

    The above statement is the way I typically submit the job when using this sync method.  Note that it uses a temporary, new work library, ICTEMP, to store the save files created in the step.  You can delete it after a successful completion and group startup.  Be sure to confirm you have the storage space to hold all the temporary save files that will be created during the process (although they are temporary, they can represent significant space depending on the group selections).  For any sync process that uses the existing communications link, also review the iCluster performance forum post regarding TCP/IP and make a rough calculation of the anticipated time to send the resulting data, to confirm you have selected the best sync method for this replication group.
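
    As a purely hypothetical calculation (the figures are assumptions for illustration): if the group's save files total 200 GB and the link sustains an effective 80 Mb/s (about 10 MB per second), the transfer alone takes roughly 200,000 MB divided by 10 MB per second, which is 20,000 seconds or about 5.5 hours, before the restore on the backup node even begins.  If that does not fit your window, another sync method may be the better choice.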

    At this point you simply monitor the job for completion and for the anticipated startup of the replication group as its last step.  To follow along while it is running, you can monitor the size of the save files created on the primary and compare their existence and size on the backup to estimate progress.
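
    A simple way to do that from a command line is to display the work library on each node and watch the save file sizes grow; the assumption here is that you know which library holds the delivered save files on the backup node.
    DSPLIB LIB(ICTEMP)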

    Last thing: journal maintenance.  This mention of journal maintenance in this article is admittedly more of a 'drive by', but here goes.  Use this iCluster command to see all journals on the system and their storage.
    DMWRKJRN JRN(*ALL/*ALL) FILTER(*NO)
    The resulting display is awesome, presenting a great deal of information about system journals at your fingertips.  Our mention here is brief because I wanted to tie it back to the journal receiver threshold we set to 3500000.  iCluster's journal management can have all your journals enrolled and, with either generic command execution or jobs that remain active and periodically check journal storage based on user parameters, can easily maintain the desired retention.

    I just wanted to point out that if we select option 1 to start journal management and select a desired interval, there is also an 'Attach a new receiver' option that runs with every interval.  The point is that we can use iCluster to enforce a once-a-day journal receiver change even if the selected threshold has not been reached.  The combination of an interval of 24 hours and 'attach a new receiver' set to *YES would essentially generate a new receiver each day (or the same command could be used from your favorite scheduler), along with the other desired retention options, and would take away the need to calculate what threshold value might be best.  And of course it is flexible enough to set to any interval you like; just combine the option with a very large receiver threshold setting and let journal maintenance generate new receivers at the frequency you want, based on time rather than size.
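
    If you prefer the scheduler route, the base IBM i equivalent of attaching a fresh, system-named receiver on a timer is a simple CHGJRN; this is standard operating system behavior rather than an iCluster command, so receiver retention and cleanup would still be handled separately.
    CHGJRN JRN(AAAJRNLIB/PAYJRN) JRNRCV(*GEN)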

    The goal of this post was to cover the ease of creating a replication definition in iCluster using LOCAL journal definitions, along with some selection options that improve performance.  I also wanted to show why iCluster administrators find the low complexity of setup and ongoing administration appealing when using LOCAL journaling and iCluster's built-in transport.  Building the final, full replication definitions is simply a matter of repeating these steps and further identifying the library and path objects selected into replication groups.  We would also build a group to select system objects such as user profiles, authority lists, directory entries, scheduled jobs, OUTQs and so on.

    If you want to contribute to this discussion, share how LOCAL journaling with iCluster is used in your environment, or ask questions if you like.  Your feedback on this topic, or any other in the Rocket Forum, is also appreciated.

    #iClusterPerformance
    #IBMi

    ------------------------------
    Mark Watts
    Rocket Software
    ------------------------------