Rocket iCluster

So you want to replicate the IFS. A lot of IFS...

    ROCKETEER
    Posted 11-24-2021 15:57
    Edited by Mark Watts 11-24-2021 16:43
    I recently received a question regarding IFS replication for paths that contain millions of IFS objects.  What are the selection considerations for large IFS replication groups that may contain millions of objects?  Do you use 'journal=*NONE' in your selection, or journaled IFS replication pointing to the default BSF journal in the group?  And if the objects are JPG images or documents that never change once they are created, like PDFs, what is the best practice?

    The engineer asking the question is correct that the two methods of near-real-time IFS replication with iCluster are journaled and non-journaled replication.  The other method is periodic snapshot replication using a *RFSH replication group type (not usually the best choice for millions of objects, but it should be considered and eliminated if it is not applicable).

    For IFS objects that are created, stored, and never change, non-journaled IFS object selection can be a good choice.  On occasion I have heard of applications that do go back and touch the attributes of documents after creation, which causes those objects to show up as exceptions in a later sync check.  Watching for that behavior may be key to deciding which method to use.  (Some customers thought their objects never changed and surmised the exceptions meant iCluster replication was inaccurate.  When investigated, the objects were indeed modified seconds after they were created, but they had already been replicated, so the change was not registered by non-journaled replication.)  Remember that non-journaled replication tracks actions like creations, renames, moves, and deletions, so if your IFS path of objects only sees those types of changes, it is a good candidate for that setting.  You do have to experiment if your first configuration choices do not result in clean operation.  Frankly, the journaled IFS selections work great and hedge your bet that the objects in the IFS folder are touched only once: attribute and content changes are automatically replicated if they occur.
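    For context on the journaled option: iCluster manages starting journaling for you when your selection points at a journal, but as an illustration of what is happening under the covers, natively journaling an IFS tree looks something like the sketch below.  The path and journal names here are hypothetical.

        /* Hypothetical names; iCluster normally drives the equivalent */
        /* of this when an IFS selection names a data journal.         */
        STRJRN OBJ(('/images/incoming'))
               JRN('/QSYS.LIB/ICJRNLIB.LIB/IFSJRN01.JRN')
               SUBTREE(*ALL) INHERIT(*YES)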

    The caveat (when using this option) is that the journal attributes should be changed to set the 'Journal object limit' (JRNOBJLMT) to *MAX10M, preparing the journal to receive changes for more objects than the default limit of 250,000.  Then consider whether the 10M limit will EVER be reached in the current configuration.  If there is a chance it may be reached, consider splitting the paths across multiple data journals, and don't forget to change each journal's attributes to the *MAX10M setting.
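    As a sketch, assuming a data journal named IFSJRN01 in library ICJRNLIB (both names hypothetical), the attribute change looks like this:

        /* Raise the journaled-object limit from the default *MAX250K */
        CHGJRN JRN(ICJRNLIB/IFSJRN01) JRNOBJLMT(*MAX10M)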

    Notice also the HELP note included with the CHGJRN command (regarding the *MAX10M option):

        Runtime performance concerns should be considered when choosing
        this option.  With this new attribute, there is an opportunity
        for a greater number of objects journaled to one journal.  Thus
        there is a potential opportunity of more objects that can be
        actively changing at the same time, which can affect journal
        runtime performance.  Therefore, if the frequency of journal
        entries being deposited to this one journal is causing runtime
        performance concerns, then a better alternative would be to
        split the journaled objects to more than one journal.

    In my opinion, there is low risk in the case we are discussing, but each situation should be considered to avoid update contention.  Highly active IFS paths do exist, and careful consideration is also warranted when using this option in the library/database file storage area.
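    If you do decide to split paths across multiple data journals, each new journal needs its own receiver and the same *MAX10M attribute.  A minimal sketch with hypothetical names (leave receiver deletion to your replication housekeeping):

        /* Second data journal for a split of the IFS path workload */
        CRTJRNRCV JRNRCV(ICJRNLIB/IFSRCV02) THRESHOLD(1500000)
        CRTJRN JRN(ICJRNLIB/IFSJRN02) JRNRCV(ICJRNLIB/IFSRCV02) +
               MNGRCV(*SYSTEM) DLTRCV(*NO) JRNOBJLMT(*MAX10M)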
    Finally, when there are millions of objects, you want to consider whether there are single-threaded components in your replication design and how to mitigate them.

    • Apply processing – Each data journal (associated with a single replication group) you create for the IFS gets a separate HADTUP job in the group.  So to optimize apply processing you have basically two choices:
      • Try to estimate the update rate of the paths in your selection and separate the objects, by selection, into separate data journals so that iCluster assigns separate update processing to each selection.
      • If all your IFS is journaled to the same data journal, split the paths into separate replication groups.  In this case each replication group gets its own scraper job on the primary and its own update job on the target.  There is some redundancy on the primary, since multiple groups are reading the same data journal, but the overhead is typically low and the performance of the replication solution is still well within acceptable ranges.  Most optimal is the previous approach, where the workload is separated by data journal.
      • If, after your initial configuration choices, replication experiences apply latency during peaks, consider further splitting the path workload into additional data journals or replication groups to engage additional apply jobs working concurrently on the workload.  Once the groups are established, and if you are using journaled replication, you can use the Journal Analysis Report (DMANZJRN) to gather more insight into which objects or paths are busiest.
    • Sync Checking – We get one active attribute sync check per replication group.  The sync checks are very rich in options and can accomplish many checks/audits and corrections in a single pass.  To reduce sync check times, large selections of static objects can be moved out to separate replication groups, away from your actively changing objects.  This allows you to run archive sync checks on a different schedule (do you need to check millions of archived objects daily or continuously?) than the active paths of current objects, reducing overall sync check/audit times.  In previous versions of iCluster, checksum sync checks ran single threaded.  Today (and we recommend version 8.3+ of iCluster) checksum sync checks and many other processes are multithreaded.  This is controlled by the iCluster system value 'Max. # of parallel processes' (option 6 off the main menu, or DMSETSVAL; a sketch of setting it follows this list).  The acceptable values are 0 (zero) to 50 and control the number of parallel processes automatically generated when one of the supported commands is run.
           Max. # of processing jobs (MAXNUMJOB) - Help

    Set this parameter to the maximum number of jobs for parallel
    processing of iCluster resource-consuming requests when
    applicable.  The job that processes the request will submit a
    number of parallel processing jobs up to the specified maximum.
    The commands that can be processed in parallel are:

      o  DMMRKPOS
      o  DMSETPOS
      o  DMOBJCNT
      o  DMRPLCVRPT
      o  DMAUDITRPT
      o  DMSTRSC and STRHASC with SCTYPE(*CHECKSUM) or with
         SCTYPE(*FULL) RUNCHKSUM(*YES)
      o  DMSTRSCUSR and STRHASCUSR with SCTYPE(*CHECKSUM)
    • Group Startup – During a normal start of a group, having millions of objects in the group doesn't particularly affect startup times.  If you (or iCluster) start the group with an abnormal group start, rebuilding the metadata prior to startup, a very large group selection could take significant time to rebuild before normal replication resumes.
    • Switching – In current versions of iCluster, switch processing is affected by the 'parallel processing control' system value.  You might say, "Wait a minute, I don't see any switching commands in the list above for the parallel processing help!"  That is true, but remember that one of the long processing steps in the switch process is the DMMRKPOS processing.  Now that it is enabled for parallel processing, very large replication groups also switch much faster.  So for the IFS, the impact of the number of objects during a switch is limited.  Remember the other housekeeping processes are triggers, constraints, identity columns and SQL objects (did I miss any?).  As it applies to IFS objects, the MRKPOS step is a huge efficiency gain when parallel processing is activated.
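    As promised above, here is a sketch of setting the parallel processing system value.  I am assuming DMSETSVAL accepts the MAXNUMJOB keyword named in the help text above; prompt the command (F4) to confirm the exact keyword on your release.

        /* Allow up to 8 parallel jobs for sync checks, DMMRKPOS, etc. */
        /* Keyword assumed from the help text above; valid range 0-50. */
        DMSETSVAL MAXNUMJOB(8)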

    So, how do you know how many objects are in an IFS path?  The built-in functions on IBM i (the RTVDSKINF and PRTDSKINF commands) can tell us the size of a path in bytes but not the number of objects.  However, there is an easy way to use iCluster to count objects in an IFS path.  The high-level steps are below, followed by a quick cross-check you can run outside iCluster:

    1. Create a new replication group (IFSTEST, for example) for your IFS selection.  Do not worry about a default journal definition or setting; we are not going to start this group, and no journaling will be initiated as a result of these steps.
    2. Add the path selection(s) to the group using journal *NONE for each selection.
    3. Do not start the group... Instead, submit a sync check from the primary node using this command example:

           ===> DMSTRSC GROUP(IFSTEST) SCTYPE(*OBJATTR) OUTPUT(*NONE)
                        LOCK(*NO) SBMJOB(*YES) DLTOBSOBJ(*NO)
                        RUNCHKSUM(*NO) RUNCHKOBSL(*NO) REPAIR(*NO)
                        CHKSUS(*NO)
    4. After the sync check job has completed (which could take significant time), go to the BACKUP node and run the DMSCRPT command.  Read the '# Checked' column for the IFSTEST group in the resulting report.  Expect a significant number of exceptions and pay them no mind; we only wanted the object count from the run.  Do not issue any commands to correct the Out Of Sync exceptions.
    5. Remove the replication group IFSTEST.
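    As the promised cross-check outside of iCluster, QShell on IBM i can count the stream files under a path directly.  It walks the directory tree, so it may take a while for millions of objects; the path here is hypothetical.

        /* Count stream files under the path from a command line */
        QSH CMD('find /images/incoming -type f | wc -l')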
    I hope you find this discussion helpful.

    You can ask questions in the Forum related to a post or start your own discussion.  If you want me to proofread a post that you think others will find helpful, feel free to send it directly to my email address.  I would be happy to help you write it, or I can even post it on your behalf if you prefer.

    #IBMi

    ------------------------------
    Mark Watts
    Sr. Software Engineer
    Rocket Software Inc
    Waltham MA US
    ------------------------------