Rocket U2 | UniVerse & UniData

  • 1.  I/O too fast for the design

    Posted 10 days ago

    We will soon upgrade UV storage to solid state.

     A couple of times in my life I've seen fast I/O cause unanticipated timing issues, with file updates completing sooner than the application design expected.

     I'm struggling to remember details.  One was a multi-write logical transaction where, at some point, some files were moved to a much faster disk or bus than the others, so the writes got temporarily out of sync, creating confusion for nearly simultaneous queries.  It wasn't a problem when everything was slower.


    Would anyone care to share any such gotchas they've seen, particularly when moving to solid state storage?

     (It does seem an odd thing to complain about:  my system is too fast.)



    ------------------------------
    Chuck Stevenson
    DBA / SW Developer
    Pomeroy
    US
    ------------------------------


  • 2.  RE: I/O too fast for the design

    Posted 9 days ago
    Chuck,

    You may want to check out the BEGIN TRANSACTION/END TRANSACTION statements in the documentation. You do not need to actually turn on disk logging to use transactions.

    It would require some re-coding to do this, but that should keep the transactions in order and should prevent unbalanced I/O speeds from causing any problems.
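    A minimal sketch of the idea in UniVerse BASIC (file, record, and field names here are invented for illustration; the exact transaction syntax can vary by release, so check the BASIC manual for yours):

    ```
    * Sketch only: wrap the related writes of one logical transaction
    * so readers never see a partially applied update. All names are
    * hypothetical, not from this thread.
    OPEN 'ORDERS' TO F.ORDERS ELSE STOP 'Cannot open ORDERS'
    OPEN 'ORDER.LINES' TO F.LINES ELSE STOP 'Cannot open ORDER.LINES'

    BEGIN TRANSACTION
       READU ORD.REC FROM F.ORDERS, ORD.ID ELSE ORD.REC = ''
       READU LINE.REC FROM F.LINES, LINE.ID ELSE LINE.REC = ''
       ORD.REC<7> = NEW.TOTAL
       LINE.REC<3> = NEW.QTY
       WRITE ORD.REC TO F.ORDERS, ORD.ID
       WRITE LINE.REC TO F.LINES, LINE.ID
       COMMIT
    END TRANSACTION
    ```

    Both writes then become visible together at COMMIT, regardless of how fast (or unevenly fast) the underlying storage is.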

    Jon

    ------------------------------
    Jon Kristofferson
    Pick Programmer
    Snap-on Credit LLC
    Libertyville IL US
    ------------------------------



  • 3.  RE: I/O too fast for the design

    Posted 9 days ago
    Good point.

    I use explicit TRANSACTIONs quite a bit, though not as much as I'd like.  They are hard to retrofit into existing old code, but I like to require them for new code.
    Two things make retrofitting difficult:
    1. Locks end up being held longer than originally designed for, and
    2. Deadly embraces (deadlocks) are more likely (e.g., Program1 updates A, then B, but Program2 updates B, then A).
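    For what it's worth, the classic mitigation for point 2 is to agree a single global lock order across programs - a sketch in UniVerse BASIC (all names hypothetical):

    ```
    * Sketch only: every program locks records in the same agreed
    * order (file A before file B), so no program can hold B while
    * waiting on A - the deadly embrace cannot form.
    BEGIN TRANSACTION
       READU REC.A FROM F.A, ID.A ELSE REC.A = ''   ;* always lock A first
       READU REC.B FROM F.B, ID.B ELSE REC.B = ''   ;* ...then B
       WRITE REC.A TO F.A, ID.A
       WRITE REC.B TO F.B, ID.B
       COMMIT
    END TRANSACTION
    ```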

    Thanks for the response.

    ------------------------------
    Chuck Stevenson
    DBA / SW Developer
    Pomeroy
    US
    ------------------------------



  • 4.  RE: I/O too fast for the design

    ROCKETEER
    Posted 8 days ago
    Edited by John Jenkins 6 days ago
    Chuck,

    This should not be a problem for UniVerse, as it uses its internal buffers and standard O/S-level file APIs that should keep everything in sync. Where you could see a difference, however, is with physical writes to media, as these are buffered differently. For example:

    1. UniVerse performs a logical write to a file
    2. The write is buffered in UniVerse memory and O/S API calls are made to write the data. All access within UniVerse will read/write through the same UniVerse memory and O/S APIs and no issues should be apparent.
    3. An O/S API writes to system I/O memory space and buffers and may be deferred depending upon O/S buffer configuration, available memory and storage device I/O queues.
      1.  NOTE: Logical writes to adjacent disk locations may get consolidated in the physical I/O by the O/S or by disk drivers and firmware.
    4. A write by the O/S will go into the storage device I/O drivers and controlling firmware. Depending upon the storage configuration and media this can take a while to get to the point of being actually physically committed to the storage (i.e. written).
    5. If RAID 5 is used then writing the parity disk can take extra storage device write cycles - but the principle is the same.
    6. If SSD is used then the device firmware will have its own optimisation to try and minimise the number of physical writes to the storage device. This usually consists of a combination of cache memory to defer writes until they can be consolidated to minimise the physical writes and optimisation algorithms which can spread wear over the SSD storage space - often by dynamic re-mapping of the storage media blocks.
    Throughout this whole process, everything that goes on below the O/S API is transparent to any file access using the same common O/S API calls. The API call goes through the same O/S memory space, buffers and drivers and ends up at the same storage device.

    HOWEVER:
    This can 'go wrong' in a couple of ways:
    • O/S crash resulting in some storage device buffers never getting written to storage media - typically seen after a hard crash and a subsequent reboot. A key mitigating factor here is the periodic flushing of buffers - even if not yet full. This is a  balancing act of performance vs resiliency/recovery and many O/S variants have tuneable parameters in this area that govern the frequency and size of buffer purges. 
      • NOTE: Many storage devices check their own 'health' and use a 'badmap' to remap suspect blocks, remapping them transparently to a reserve of spare blocks. Over time this reserve can become depleted, which can result in a future hard failure when the reserve is exhausted. Where a disk subsystem has an internal disk 'health check', please do not forget to use it every now and then - and be ready for an early disk replacement, before a failure, if the reserve starts to run low or shows signs of accelerating failure.
    • Direct read or write (raw I/O) - not using the O/S APIs. Though possible, I have not seen this as an issue in normal operational use, save on one occasion when an engineer accidentally attached the same disk to two disk controllers at the same time. I would add that the result was not subtle - chaos ensued. A multiple mount of the same disk subsystem can have similar effects.
    • Incorrect hot swap of media between different machine instances (and serially, multiple times, in the instance I am aware of) - trashing the validity of the file I/O buffers at the O/S level on each of the systems concerned. The result varied from missing updates and minor corruption to trashed files, proportional to the level of activity on the file at the time. The disk block structures were also damaged, requiring a full disk repair to be run.
    To summarise:
    • Barring a failure either in configuration or in the O/S-to-hardware chain, I would not expect the latency of any particular storage medium to affect read/write consistency - use of raw I/O in parallel with the file access APIs excepted.
    • If, however, a failure occurred with an O/S crash - then yes, different media can be affected differently, as the amount of data pending write in both file and system buffers will vary considerably, and that buffered data will be lost in the crash. On recovery, variable levels of mismatches and/or damage would be the expected outcome.
    • Use of transaction boundaries and the U2 Recoverable File System (RFS) will all help mitigate the result of any failure at the file, record and transaction levels.
    • Some backup or file-system cloning methodologies use raw I/O and assume that if a file is not physically open it is not logically open. Using raw I/O and making this assumption can have unintended consequences for the data consistency of the copy or backup being created. Please see the UniData 'dbpause' and the UniVerse 'SUSPEND.FILES ON' commands - there are good reasons for these.
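    As an illustration of that last point, a raw-I/O backup window on UniVerse might be bracketed from TCL roughly like this (sketch only - the middle line is a placeholder for whatever external snapshot or backup tool is in use; check the command reference for your release):

    ```
    SUSPEND.FILES ON
    ... run the external snapshot or raw-I/O backup here ...
    SUSPEND.FILES OFF
    ```

    The point is that pending UniVerse writes are flushed and further updates are held off for the duration of the raw copy, so the backup sees a consistent on-disk state.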
    Hoping this helps - and also noting that disk performance is a massive topic in itself, including areas less familiar to many, such as SDLC or MDLC (for commercial use, MDLC always) and SSD 'conditioning', which can have a major effect on I/O speeds and SSD life expectancy.

    If you have come across a specific instance, please let us know - with storage subsystems becoming ever more modular and complex in their configuration options, there may well be other areas where conflicts can arise.

    Regards,

    JJ

    ------------------------------
    John Jenkins
    Principal Technical Support Engineer
    Rocket Software Limited
    U.K.
    ------------------------------



  • 5.  RE: I/O too fast for the design

    ROCKETEER
    Posted 6 days ago
    Edited by John Jenkins 6 days ago
    Adding:

    • If accessing UniVerse files on a separate UniVerse instance on another server, please use UV/NET. Accessing files via raw NFS mounts will inevitably result in data mismatches at best, as the native UniVerse buffers and locking mechanisms on each file's 'home server' are bypassed. UV/NET exists to ensure lock and file consistency - please use it to ensure data consistency.
    • The UniVerse BASIC User Guide section on data anomalies and 'dirty reads' is definitely worth perusing. As previously mentioned by other contributors, appropriate record locks and transaction boundaries are your best friends here.
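    As a sketch of the record-lock half of that advice (hypothetical file and field names): READU takes an update lock, so another process attempting to read the same record for update waits rather than seeing half-finished data.

    ```
    * Sketch only: READU sets an update lock on the record. Another
    * process issuing READU on the same record will wait (or can use
    * a LOCKED clause to act instead of waiting) until we WRITE or
    * RELEASE. A plain READ, by contrast, ignores locks - the classic
    * source of 'dirty reads'.
    OPEN 'CUSTOMERS' TO F.CUST ELSE STOP 'Cannot open CUSTOMERS'
    READU CUST.REC FROM F.CUST, CUST.ID ELSE CUST.REC = ''
    CUST.REC<2> = NEW.BALANCE
    WRITE CUST.REC TO F.CUST, CUST.ID   ;* WRITE releases the lock
    ```

    If a code path decides not to update after all, RELEASE the record explicitly so the lock is not held for the rest of the session.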

    Regards

    JJ

    ------------------------------
    John Jenkins
    Principal Technical Support Engineer
    Rocket Software Limited
    U.K.
    ------------------------------