Rocket iCluster

 View Only
  • 1.  iCluster Performance - replication with Remote Journaling

    ROCKETEER
    Posted 04-06-2021 12:55
    Edited by Mark Watts 04-06-2021 13:03

    In this series on iCluster performance, we have discussed TCP/IP settings, disk unit I/O impacts on IBM Power running IBM i with iCluster replication, and, in the last post, creating a local journal with options that perform optimally.  If you missed any of those topics, you can find them in the list of forum threads in the iCluster topic. 

    A quick recap: iCluster replication technology is based on IBM i's ability to record all changes to objects in a journal (log) as they are created, updated, or deleted.  iCluster lets you choose the transport that delivers changes to the selected clustered node: either remote journaling, a DB2 service included in the base IBM i operating system, or a built-in service included in iCluster that uses local journaling exclusively.

    In this post we are going to build on the previous post, iCluster Performance - replication with Local Journaling.  That will spare us a long recap of how to build an efficient replication model.  For HA/DR, redundancy is good; in a forum post, not so much.  Where we left off, we had discussed:
    1. Identifying the application and its libraries
    2. Defining a Replication group to include the application
    3. Creating a Default Journal receiver and Journal for the replication group
    4. Using an automated command to Sync and start the replication group
    5. Maintaining the journal receiver images based on time/age rather than a size threshold

    First, a brief Q&A
    ⦁ We're talking about performance, so is remote journaling better than local journaling? - No.  Local and remote journaling each have their benefits, so iCluster can be configured to use either transport, or to mix them so that the strengths of each are used where needed.
    ⦁ Do I need to change the replication group for it to start using remote journals? - Yes, but it is a surprisingly easy and painless step.
    ⦁ Do I need a new license key for iCluster with remote journaling? - No.  All the features discussed are part of the iCluster offering, with no add-ons or additional features to purchase.
    ⦁ Can some of the replication groups use remote journaling and some local journaling? - Yes, this is a group-by-group configuration setting.
    ⦁ Do I need to create a library on the target system for the remote journals to be stored in? - No, iCluster will create it if needed.
    ⦁ Do I need to create the remote journal? - No, iCluster will create it if needed.
    ⦁ Do I need to activate the remote journal or maintain its state? - No, iCluster will maintain the state of the remote journal.
    ⦁ Sounds too good to be true; is it? - If you are new to replication and remote journaling, iCluster can create everything it needs for the remote journaling transport.  However, there are two items that may need to be investigated and addressed manually:
    1. DDM security settings and authentication
    2. A potential cleanup of old, stale remote journaling configurations, and of the resulting remote journals that need redesign, change, or purging.

    One of these two items we need to address right away so things will work properly: the DDM security settings. If you just said "Oh no!", don't worry; I am going to walk you through the simplest possible setup for DDM authentication that still delivers secure, authenticated DDM connections.  We will assume you have left the default DDM security settings unchanged on both the source and the target, which means DDM connections must be authenticated. (You can also enable encryption if you like.)  You can review this setting with the command CHGDDMTCPA; press F4 to prompt the values. We expect to see one of the authentication methods that require both a user ID and a password (whether encrypted or not), and that the server is set to autostart.
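    As a sketch, the attributes described above could be confirmed or set with a command along these lines; exact PWDRQD values vary by IBM i release and site policy, and *USRIDPWD is just one of the options that require both a user ID and a password:

    CHGDDMTCPA AUTOSTART(*YES) PWDRQD(*USRIDPWD)

    Prompting with F4 instead of entering values directly lets you see which options your release supports before changing anything.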


    The important thing to know is that iCluster requires DDM connections between cluster nodes when using remote journaling, so DDM authentication must be set up properly. There is also a lower-security authentication method, '*USRID', that requires no stored password or authentication entries; however, IBM has changed the defaults to make us more secure, so for the sake of simplicity and security I will not show the less secure method in this example.

    Did you know you can create an IBM i group profile with DDM authentication that can authenticate all of your IT admins and iCluster between the iCluster nodes? That is my favorite method, so that is what we will do right now. There is a 'user by user' authentication method in the iCluster User Guide (in the section titled "Server authentication for secure DDM connections") if you prefer it; we will not discuss it here beyond this mention. Perform the method I will show you on each node that will run iCluster. The password for this ID must be the same on every node, and if you are replicating all users and automatically disabling users on-the-fly, this profile must be excluded from replication so it is not inadvertently DISABLED. (The same is true of the user DMCLUSTER, the user iCluster runs under.)  This authentication method can also easily be undone if it does not suit your site's rules.  Please share your 'why' or 'why not' feedback in the forum.

    Create our group profile on each node:
    CRTUSRPRF USRPRF(ROCKETDDM) PASSWORD() PWDEXP(*NO) STATUS(*ENABLED) USRCLS(*USER) INLMNU(*SIGNOFF) LMTCPB(*YES) TEXT('DDM authentication group profile for iCluster') SPCAUT(*IOSYSCFG) PWDEXPITV(*NOMAX)

    A few notes on this sample user profile if you use it as shown... It is class *USER with password expiration turned off. It cannot sign on at a terminal, INLMNU(*SIGNOFF), and is designed specifically for DDM authentication. The only special authority it requires is *IOSYSCFG. You could instead add DDM authentication to an existing group profile; however, a dedicated group profile for DDM helps the team remember which authorities are granted and prevents extending an authority you did not intend. (Also, this profile requires a password, while other group profiles may be authority-only.)  Remember that if the password for this user is changed, the DDM authentication entries must also be updated to match.  To ease this burden, a small CL program could update both the user and the DDM authentication entry; it would need to be run on all clustered nodes with as little gap in time as possible.  Lastly, there is nothing special about the name ROCKETDDM (other than that Rocket Support will recognize it and your authentication setup); you can use whatever group profile name you wish.
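    As a sketch of that helper (not part of the product; ROCKETDDM and the &PASSWORD parameter are taken from this example), a CL program run on each node could change both items together:

    PGM        PARM(&PASSWORD)
    DCL        VAR(&PASSWORD) TYPE(*CHAR) LEN(10)

    /* Change the group profile's password */
    CHGUSRPRF  USRPRF(ROCKETDDM) PASSWORD(&PASSWORD)

    /* Update the matching DDM/DRDA server authentication entry */
    CHGSVRAUTE USRPRF(ROCKETDDM) SERVER(QDDMDRDASERVER) +
                 USRID(ROCKETDDM) PASSWORD(&PASSWORD)

    ENDPGM

    Calling it on every node in quick succession keeps the stored passwords and authentication entries in step across the cluster.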

    You can display the Authentication Entry commands on this IBM menu: GO CMDAUTE

    As this is a new user, we can go ahead and add the authentication entry (since we know the entry does not already exist). Note that the system value QRETSVRSEC must be '1' (retain server security data) for the password to be stored.
    ADDSVRAUTE USRPRF(ROCKETDDM) SERVER(QDDMDRDASERVER) USRID(ROCKETDDM) PASSWORD(theROCKETDDMpassword)

    Now add the new group profile (ROCKETDDM) to the existing DMCLUSTER user and to any users that will interact with iCluster (users, operators, or admins). Optionally, if you have set the ICLUSTER library to authority *EXCLUDE, you could also grant the group profile authority to that library; then adding the group profile gives a user both library authority and DDM authority, if appropriate for your site. (The group can go in either the group profile or the supplemental groups parameter.  See the example in the included image.)
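    For example (DMCLUSTER comes from this post; the approach is the same for any other user), the group can be added as a supplemental group with:

    CHGUSRPRF USRPRF(DMCLUSTER) SUPGRPPRF(ROCKETDDM)

    Keep in mind that SUPGRPPRF replaces the existing list, so if the user already has supplemental groups, include them all in the parameter.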

    For the new group profile to take effect for iCluster and its users, you may need to end iCluster in a controlled fashion and restart it normally (users may need to log off and back on). When you add a new group profile to an active user, the new authority can be available immediately; when you remove a group profile from an active user, the user seems to need to log off and back on for the change to take effect. The sure way is to have users log off and back on whenever authority changes, and that includes restarting iCluster completely. Before you restart iCluster, review the next paragraph.

    At this point we have all the appropriate authority, and we only need to change the journal location field in the group definition (the group must be inactive to change it; if you have followed along, it is already inactive). Let's try this for the group PAYROLL that we created in the previous post. Use the command DMCHGGRP PAYROLL, press F4 then F9, page down to the Journal location parameter, and change it to *REMOTE.



    Then press <Enter>.

    For the group to recognize the new parameter change and rebuild the metadata enter this command: DMSETPOS GROUP(PAYROLL) JRN(*ALL) JRNPOSLRG(*LASTAPY)
    I also like to identify and clear the staging library for the group before restarting it.
    Now start the group normally with the command: DMSTRGRP GROUP(PAYROLL) STRAPY(*YES)
    The exciting part: in that startup step, the replication group will add and activate the remote journal for the local journal(s), capture the information the group needs to start, and automatically resume where it left off, now using the remote journal transport to deliver the data journal changes to the target system, where the apply processes retrieve the data changes from the remote journal.

    We are finished here if your group became active as expected. I like verification steps; if you do too, try the following. The same procedure will work as you convert additional groups.

    You can verify the conversion journal by journal:
    1. In the iCluster monitor, record the names of all the data journals in use for your group (option 5 next to each journal shows additional details, such as the library it is in).
    2. Use the command DMWRKJRN JRN(*ALL/*ALL) FILTER(*NO) to display all the journals on the system, find your first journal, and take option 12 (WRKJRNA) next to it. From that display, press F16 to investigate the remote journal that was created and its status.
    3. On the target system, run WRKLIB for the library where the remote journal was created (anticipating DMRMTJRN). Whenever iCluster needs to create a remote journal, it first attempts to create it in library DMRMTJRN. If there is a remote journal name collision, it automatically increments the library name and creates the remote journal, with the same name as on the primary, in the new unique library.
    4. Repeat for each data journal in the group you are converting to *REMOTE.

    Lastly, to wrap this up: I mentioned earlier that if this is not your first use of remote journaling, some cleanup may be required due to conflicts, scattered usage, or otherwise. Here is a bold statement: "To make the process as clean and easy to maintain as possible, remove all the remote journals on the primary that were previously used for replication, then go to the target node and clean up all the remote journals and receivers there."  This assumes you have first established replication in iCluster using local journaling and have started those replication groups.  Every administrator has their own deployment plan, and this is not meant to fit every scenario.  When you set up remote journal replication, let iCluster create all the remote journaling objects it needs. iCluster can recognize and adopt existing remote journal configurations, but without going into much more detail, we cannot give every possible scenario a check mark. Just bite the bullet and remove them. If you do, replication setup is very easy and switchover processes will be cleaner and more consistent. Unless you have some other requirement for remote journaling, such as IBM InfoSphere CDC or another vendor or in-house process, remove them all. If you do, then, just as we changed our sample PAYROLL group, all other groups can be converted to remote journals just as easily.
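    As a rough sketch of that cleanup on the primary (the RDB, library, and journal names below are placeholders; verify them against your own configuration before removing anything), a stale remote journal association can be deactivated and removed with:

    /* Deactivate the remote journal, then remove its association */
    CHGRMTJRN RDB(TARGETRDB) SRCJRN(APPLIB/APPJRN) +
              TGTJRN(DMRMTJRN/APPJRN) JRNSTATE(*INACTIVE)
    RMVRMTJRN RDB(TARGETRDB) SRCJRN(APPLIB/APPJRN) +
              TGTJRN(DMRMTJRN/APPJRN)

    On the target node, the leftover remote journal and its receivers can then be deleted with DLTJRN and DLTJRNRCV once nothing else depends on them.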

    Best of luck and happy iClustering!

    #iClusterPerformance
    #IBMi

    ------------------------------
    Mark Watts
    Rocket Software
    ------------------------------


  • 2.  RE: iCluster Performance - replication with Remote Journaling

    PARTNER
    Posted 07-20-2021 21:41
    Hi,

    Can we replicate application journals and their journal receivers?

    Thank you.

    ------------------------------
    Romeo Santos
    Team Leader
    Strategic Synergy, Inc
    Makati City Metro Manila Philippines
    ------------------------------



  • 3.  RE: iCluster Performance - replication with Remote Journaling

    ROCKETEER
    Posted 07-21-2021 09:58
    Hi Romeo,

    If you select *ALL *ALL *ALL for everything in a library, and that library happens to contain a journal and its receivers, iCluster does not replicate the journal and its receivers (and does not need to for replication).  Instead, assuming the objects in scope for replication are actually journaled to that journal, a journal of the same name and location is created on the target system (if it does not already exist) as part of the initial sync process, and if the 'Journal on Backup' control parameter is set to *YES, all the same in-scope objects that are journaled on the primary are journaled on the backup.  Sorry for the longest sentence in history.  Your sync check process with repair option *JRN will help ensure the journal environment is healthy and that all components that need to be journaled for switchover are journaled the same as on the PRIMARY node.

    If you actually want to capture a primary node journal and its receivers and deliver them to the backup, you could save them to a SAVF, and iCluster will replicate the SAVF.  Alternatively, you could use an FTP session to send the SAVF to the BACKUP node.  Be careful not to place it in a library in scope, as a sync check could identify it as an obsolete object on the backup, and depending on the parameters it could be automatically purged. It could also represent a significant amount of data that could interfere with normal replication if included in one of your replication groups.

    Hope that is helpful.

    ------------------------------
    Mark Watts
    Software Engineer
    Rocket Software Inc
    Waltham MA United States
    ------------------------------