
    How the rolling four-hour average is calculated

    WLM is responsible for taking MSU utilization samples for each LPAR at 10-second intervals. Every 5 minutes, WLM records the highest MSU value observed among the 10-second samples. WLM always keeps the past 48 of these 5-minute values for each LPAR: when the 49th value is recorded, the 1st is discarded, and so on. Together, the 48 values always cover 5 minutes * 48 readings = 240 minutes, or the past 4 hours (hence "Rolling 4 Hour Average", R4HA). WLM stores the average of these 48 values in the WLM control block field RCT.RCTLACS. Each time RMF (or the BMC CMF equivalent) creates a Type 70 record, it copies the value from RCT.RCTLACS into the SMF70LAC field, so SMF70LAC holds the average of all 48 MSU values for the LPAR that the record represents.

    Concurrent access to VSAM files from batch and CICS


    SYSB-II® is mainframe software that allows CICS and batch jobs to access VSAM files concurrently while maintaining data integrity. This means you can run batch whenever needed, while CICS applications and VSAM data remain fully available for updates.

    Traditionally, when batch updated VSAM files, organizations had to take CICS applications offline or
    employ techniques such as showing read-only data to CICS users. SYSB-II changes all of that.

    With SYSB-II, organizations can:

    -Keep CICS applications running 24/7 so they can be available on PCs and mobile devices, without
    downtime.

    -Accept new business because organizations have more time both to process online transactions and run batch.

    -Expand to more time zones because CICS applications can be online during traditional “nighttime” hours.

    How SYSB-II works

    SYSB-II uses the documented MVS subsystem interface to intercept batch VSAM requests, translate the input/output (I/O) requests into CICS I/O protocol, and then allow CICS to perform the VSAM operation on behalf of the batch job. SYSB-II communicates between CICS and the batch job using TCP/IP, VTAM, and cross-memory services. This architecture ensures that SYSB-II is upwardly compatible with future releases of CICS Transaction Server and z/OS.

    SYSB-II runs as a legitimate command-level CICS transaction, following CICS rules and standards. Batch jobs look like any other CICS transaction to the CICS application.

    With SYSB-II, batch can take advantage of CICS’s data integrity, recovery tools, and file-locking and updating capability. SYSB-II also supports, but doesn’t require, VSAM RLS, a CICSplex, a sysplex, coupling facilities, and TCP/IP and VTAM protocols.

    How VSAM RLS works

    VSAM RLS is a function introduced by DFSMS/MVS V1.3 and exploited by CICS Transaction Server. It is designed to enable VSAM files to be shared, with full update capability, between many applications running in many CICS regions. Prior to the inception of RLS, VSAM data sets that were opened for update were owned and accessed through a single address space, either by stand-alone batch or by a CICS file-owning region (FOR). With RLS, the VSAM files are owned by the RLS server address space (also known as SMSVSAM). Multiple CICS regions can access the data concurrently with full update integrity, thereby eliminating the CICS File Owning Region that had become a bottleneck and a single point of failure for many installations.

    Logging 

    VSAM RLS files fall under two main categories: recoverable and nonrecoverable. The recoverability of an RLS VSAM file depends entirely on the value specified for the LOG parameter in the VSAM cluster definition. This parameter was introduced to support RLS and has three possible values: NONE, UNDO, and ALL. A recoverable data set is one whose in-flight changes are backed out (by CICS) if a transaction fails. The LOG value used on the RLS recoverable VSAM file definition is either UNDO or ALL.

    • NONE - A value of NONE identifies the data set as nonrecoverable because CICS does not log any changes for the data set and is unable to provide transactional or data set recovery.

    • UNDO - The data set is backward recoverable. That is, any in-flight changes made by a unit of work that does not succeed (uncommitted changes) are backed out. Records written to the CICS log are used to back out any in-flight unit of work that failed. This is also known as transactional recovery.

    • ALL - The data set is both backward and forward recoverable. Forward recovery is possible only when another product such as CICS VSAM Recovery (CICSVR) is used. When ALL is specified, CICS records both the before and after images of any change. In addition to transactional recovery, ALL enables products such as CICSVR to rebuild the data set if hardware fails or software problems occur. This type of recovery is known as data set recovery.
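    The LOG attribute is set in the VSAM cluster definition through IDCAMS. A sketch (the data set and log stream names are hypothetical); note that LOG(ALL) additionally requires a forward recovery log stream:

```
DEFINE CLUSTER (NAME(PROD.CUSTOMER.VSAM) -
        INDEXED -
        KEYS(10 0) -
        RECORDSIZE(200 400) -
        LOG(ALL) -
        LOGSTREAMID(PROD.CUSTOMER.FWDLOG))
```

    With LOG(UNDO) or LOG(NONE), the LOGSTREAMID parameter is omitted.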

    Locking 

    Files opened in RLS mode can be accessed by many CICS regions simultaneously. This means it is impractical for individual CICS regions to attempt to control record locking. Therefore, VSAM maintains a single, central lock structure using the lock-assist mechanism of the coupling facility. This central lock structure provides sysplex-wide locking at a record level. No control interval (CI) locking is used. This is in contrast to the locks for files in non-RLS mode, which are normally limited to a single CICS region and are either CI locks or CICS ENQs.

    VSAM supports two types of locking for files accessed in RLS mode:

    • Shared locks
    Shared locks support read integrity. They ensure that a record is not in the process of being updated during a read-only request. Shared locks can be owned by several tasks at the same time.

    • Exclusive locks
    Exclusive locks protect updates to file resources, both recoverable and nonrecoverable. They can be owned by only one transaction at a time. Any transaction that requires an exclusive lock must wait if another task currently owns an exclusive lock or a shared lock against the requested resource.

    Exclusive locks can be active or retained, whereas shared locks can only be active. When a lock is first acquired, it is an active lock. Normally this lock would eventually be released, but if a unit of work fails, and this would cause the lock to be held for an abnormally long time, the active lock is converted into a retained lock. This has implications for batch processes that require RLS files to be quiesced prior to batch execution because a quiesced data set can be opened only in non-RLS mode if no retained locks are present.

    Integrity 

    To request access to an RLS VSAM data set, a batch program must do one of the following:

    • Specify the RLS read integrity level on the MACRF parameter of the VSAM ACB.

    • Add an RLS parameter to the DD statement for the file in JCL.

    If the RLS parameters are omitted, the program attempts non-RLS access to the file. Depending on the type of OPEN (read or update) and the recovery attribute of the RLS file, this situation could give rise to program abends or, potentially worse, access to data in native VSAM mode with none of the integrity benefits of accessing the shared RLS SMSVSAM buffers.
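    The JCL route can be as simple as one keyword on the DD statement (the DD name and data set name here are hypothetical); the value (NRI, CR, or CRE) selects the read integrity level:

```
//CUSTFILE DD DSN=PROD.CUSTOMER.VSAM,DISP=SHR,RLS=CR
```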

    For files opened by CICS, the read integrity level is specified in the file definition. The RLS recovery option (if not specified) is taken from the VSAM file definition stored in the ICF catalog.

    RLS read integrity 

    VSAM RLS supports three levels of read integrity:

    • NRI (No Read Integrity)
    No record locking is performed by VSAM when a GET/POINT request is issued. Although this avoids the overhead of locking, it can allow the requestor to obtain uncommitted data and is sometimes referred to as a dirty read.

    • CR (Consistent Read)
    A shared lock is obtained by VSAM for GET/POINT requests. This ensures that no uncommitted data is ever returned to the application. GET/POINT requests wait for any pending change to be committed or backed out and for the currently held lock to be released.

    • CRE (Consistent Read Explicit)
    This is similar to CR except that the shared lock is held by VSAM RLS until the unit of recovery or unit of work has been completed. This type of lock is available only to CICS and DFSMS Transactional VSAM (TVS).

    Update integrity 

    For updates to a recoverable file, data integrity is ensured by SMSVSAM by maintaining locks on data changed in the unit of work until CICS explicitly declares that locks can be released. CICS signals this as a result of one of the following:

    • Successful completion of the unit of work
    • Processing of a SYNC call
    • Successful backout of in-flight changes, should a unit of work fail

    RLS provides locking and sysplex-wide parallel shared-data access, while CICS provides the logging and recovery capabilities. Used together, these two features make transactional recovery of a VSAM RLS data set from a failed unit of work possible.

    For updates to a nonrecoverable file, RLS releases a lock when the buffer containing the modified control interval has been written. Because no transactional recovery is ever performed on a nonrecoverable file, changes are not backed out and there is no need to maintain any locks. In fact, it is reasonable to assume that locks against records in a nonrecoverable data set remain held only for the duration of the requests — that is, they are acquired at the start of a request and released upon its completion.

    SHAREOPTION 

    SHAREOPTION is largely ignored under RLS, with the exception of SHAREOPTION (2, x). This means that non-RLS reads of a data set opened in RLS mode are possible. No data integrity is provided for the non-RLS reader. Both CICS and batch can have concurrent read and update access to nonrecoverable data sets. Again, in this instance, no coordination between CICS and batch occurs, so data integrity issues are possible.

    Batch and VSAM RLS recoverable files 

    As previously stated, RLS addresses the limitation of a single CICS address space owning a VSAM file for update and the associated single point of failure. For recoverable VSAM files, CICS read and update integrity is ensured by synchronized SMSVSAM buffers and the coupling facility’s system-wide locks on accessed records. With VSAM RLS, you no longer need to restrict VSAM update activity to a single CICS file-owning region. Now you have the possibility of channeling your workload to any number of additional available CICS regions, allowing you to better balance the workload and evenly distribute access to VSAM data.

    For batch processes that require inquiry access to RLS-managed recoverable VSAM files, read integrity can be obtained through RLS implementation by utilizing the shared SMSVSAM buffers. In other words, shared access is allowed for read only from a batch program.

    For batch processes that require update access to RLS-managed recoverable VSAM files, a batch processing window is still required. This requires deallocating the VSAM file from RLS (and CICS) while batch updates the file natively. When batch has completed, the VSAM file can be reopened under RLS management. If CICS requires inquiry access to the VSAM file during batch processing, this can be achieved with SHAREOPTION 2,3. However, the VSAM file must be opened by CICS in non-RLS mode for inquiry only. This is typically achieved by providing a separate FCT entry for CICS inquiry during batch processing.
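    As a sketch, the quiesce and re-enable around the batch window can be driven with CEMT (the data set name is hypothetical); the first command is issued before the native batch update runs, the second after it completes:

```
CEMT SET DSNAME(PROD.CUSTOMER.VSAM) QUIESCED
CEMT SET DSNAME(PROD.CUSTOMER.VSAM) UNQUIESCED
```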

    Batch and VSAM RLS nonrecoverable files 

    For nonrecoverable files, batch and CICS can process concurrently for both read and update capabilities. By definition, recovery is not ensured for either a CICS transaction or a batch transaction. With nonrecoverable files, if a transaction updates multiple records and fails before the last record update is successful, the unit of work is partially committed.

     For example, a single transaction transfers $100 from a savings account to a checking account. This single transaction issues two record update requests. The first update adds $100 to the checking account and adjusts the balance accordingly. The second update subtracts $100 from the savings account and adjusts that balance accordingly. If the files are defined as nonrecoverable, the $100 might be added to checking while the subsequent $100 subtraction from the savings account fails. This increases the checking account by $100, but the corresponding savings account debit never occurs.

    This potential exposure escalates with batch in a shared environment. Records are committed with every successful write, rewrite, or delete request. If an abend occurs, the files might be left in a state that is less than ideal when auditing takes place. This might not be an issue for batch jobs that can be rerun from the point of failure or rerun without adverse effects on the data. However, such is not typically the case.

    In other words, recovery becomes your responsibility when batch and CICS share update access through RLS for nonrecoverable files. If you choose to restore a file to a point-in-time backup, what happens to the updates that have occurred after the point-in-time backup was established? Implications to sharing nonrecoverable VSAM files for update between batch and CICS need to be carefully considered.

    Understanding VSAM SHAREOPTIONs


    SHAREOPTIONs are settings that provide various levels of shared access to VSAM files. This capability has been available for decades, and on initial review, it appears to be an easy way to provide file sharing. Let's take a closer look at the SHAREOPTIONs and exactly what it takes to implement this approach.

    SHAREOPTION 1 is the most restrictive of all SHAREOPTIONs, disallowing any file sharing if updates occur. This SHAREOPTION permits a single region to write to a VSAM data set or enables many regions to read a data set. If a region is updating the file, all access to the file is denied to any region seeking to perform a read, so this option is often deemed unacceptable. Yet, performance-wise, this is the most effective SHAREOPTION to use.

    SHAREOPTION 2 also prohibits concurrent data set updates, but provides for multiple reads during an update process. This function is typically used where CICS is the updating application with infrequent data reading from batch. A major drawback of SHAREOPTION 2 occurs when a reading region attempts to access data that an update region has modified. It is possible that the reading region will not receive the most recent updates and could terminate abnormally if substantial data changes have occurred.

    SHAREOPTION 3, on the surface, appears to solve the limitations of SHAREOPTION 2 by enabling multiple concurrent updates. Because SHAREOPTION 3 permits multiple regions to both read and write a single data set, it also permits multiple regions to concurrently update identical blocks of records, causing physical data loss. Using SHAREOPTION 3 can lead to VSAM data integrity issues and increased overhead.

    SHAREOPTION 4 requires extensive and complex programming modifications to your application. These modifications will increase I/O overhead to ensure data integrity. Use of SHAREOPTION 4 will not enable a single CICS region to lock onto its required records. An environment using SHAREOPTION 4 and enabling concurrent updates by both CICS and batch could experience a severe data integrity problem in the event of a CICS transaction abend. The CICS dynamic transaction backout will replace all the transaction-updated records with their pre-transaction values, even if a batch job updated the records in the meantime.

    Practical implications: Employment of a higher VSAM SHAREOPTION translates into increased overhead. Additionally, the risk of error increases when there is less data-integrity protection. Employing VSAM SHAREOPTIONs for concurrent update access between CICS regions and batch jobs is rarely a suitable solution for companies striving to ensure processing efficiency and data integrity.
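    For reference, SHAREOPTIONs are assigned in the IDCAMS cluster definition and can be changed later with ALTER (the data set name here is hypothetical). The first operand is the cross-region option discussed above; the second is the cross-system option:

```
ALTER PROD.CUSTOMER.VSAM SHAREOPTIONS(2 3)
```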

    DB2 CHECK PENDING

    CHECK-pending status is set in the following situations:

    1. When you use ALTER TABLE to add a check constraint to an already populated table and the CURRENT RULES special register is DB2®, the check constraint is added to the table description but its enforcement is deferred. Because there might be rows in the table that violate the check constraint, the table is placed in CHECK-pending status.

    2. When a table is loaded with the ENFORCE NO option, the table is left in CHECK-pending status because DB2 bypasses referential integrity and check constraints.

    3. An index might be placed in CHECK-pending status if you recovered an index to a specific RBA or LRSN from a copy and applied the log records, but you did not recover the table space in the same list. The CHECK-pending status can also be set on an index if you specified the table space and the index, but the recovery point in time was not a point of consistency (QUIESCE or COPY SHRLEVEL REFERENCE).

    To reset this status, run the CHECK DATA utility, which locates invalid data and, optionally, removes it. If CHECK DATA removes the invalid data, the remaining data satisfies all check and referential constraints, and the CHECK-pending restriction is removed.

    1. If a table space is in both REORG-pending and CHECK-pending status (or auxiliary CHECK-pending status), run the REORG TABLESPACE utility first and then run CHECK DATA to reset the respective states.

    2. Run the CHECK INDEX utility on the index. If any errors are found, use the REBUILD INDEX utility to rebuild the index from existing data.

    3. Use the REPAIR utility with the SET TABLESPACE statement and the NOCHECKPEND option.
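    As a sketch (using the same placeholder naming as the jobs in this document), a CHECK DATA control statement that checks only the rows still in violation might look like:

```
CHECK DATA TABLESPACE XXXXX.XXXXX
      SCOPE PENDING
```

    A clean run resets the CHECK-pending status; DELETE YES (together with FOR EXCEPTION exception tables) can be added to remove the invalid rows instead of just reporting them.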

    The following job removes COPY-pending and RECOVER-pending status from a table space, and COPY-pending, RECOVER-pending, and REBUILD-pending status from an index space:

    //STEP01 EXEC PGM=DSNUTILB,REGION=0M,
    // PARM='DB2X,MIGRDAN'
    //STEPLIB DD DSN=XXX.XXXXX.SDSNLOAD,DISP=SHR
    //SYSIN DD *    
    REPAIR SET TABLESPACE XXXXX.XXXXX NOCOPYPEND 
    REPAIR SET TABLESPACE XXXXX.XXXXX NORCVRPEND
    REPAIR SET INDEX XXXXX.XXXXX NOCOPYPEND
    REPAIR SET INDEX XXXXX.XXXXX NORCVRPEND
    REPAIR SET INDEX XXXXX.XXXXX NORBDPEND
    /*
    //SYSPRINT DD SYSOUT=* 
    //UTPRINT DD SYSOUT=*

    DB2 COPY PENDING



    COPY-pending is a state in which an image copy of the table space needs to be taken. In this status, the table is available only for SELECT queries (read only); you cannot update the table.

    It occurs in the following situations:

    1. When you load the table with the LOG NO option and fail to include NOCOPYPEND.
    2. When an image copy job fails while copying the data to tape or DASD.

    To remove the COPY-pending status:

    1. Take an image copy; the table space status changes from COPY-pending to RW.

    If you don't want to take an image copy, you can do one of the following:

    2. Use the following REPAIR statement to reset the table space status:
    REPAIR SET TABLESPACE XXXXX.XXXXX NOCOPYPEND 

    3. Execute the following DB2 command to bring the table space status to RW. This command releases most restrictions for the named objects:
    -START DATABASE(dbname) SPACENAM(tablespace-name) ACCESS(FORCE)
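    Taking the image copy (option 1) is an ordinary COPY utility step. A sketch using the same placeholder data set names as the other jobs in this document (the image copy data set name and space values are hypothetical); SHRLEVEL REFERENCE allows read-only access while the copy runs:

```
//COPYSTEP EXEC PGM=DSNUTILB,REGION=0M,
//         PARM='DB2X,IMAGCOPY'
//STEPLIB  DD DSN=XXX.XXXXX.SDSNLOAD,DISP=SHR
//SYSCOPY  DD DSN=XXX.XXXXX.ICOPY,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY TABLESPACE XXXXX.XXXXX
       COPYDDN(SYSCOPY)
       SHRLEVEL REFERENCE
/*
```

    A successful full image copy resets the COPY-pending status.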


    CICS BMS MAP LOW-VALUES


    Input Mapping

    When BMS makes the map available to your program on a RECEIVE MAP operation it
    places data in three areas for each input field. These three areas are the length
    field, the flag/attribute byte and the data field.

    There are several situations at execution time that BMS allows for. In the
    physical map, a length for the field is specified. If more data is keyed in than
    is specified in this length, the data is truncated on the right and the length
    field is set to the truncated length. If less data is keyed in than the length
    specified, the length field is set to the number of characters entered and the
    data field is padded on the right with blanks or on the left with zeros
    (depending on whether the field is alpha or numeric). However, if any data was
    previously in the field and the keyed data failed to cover up the old data, the
    entire field would be returned with a length representing the original field
    length.

    There is an exception to the right-justify, zero-fill feature for a numeric item.
    If the numeric field is initialized with other characters previous to the data
    you entered, your data may not come in right-justified and zero-filled.

    The flag byte is almost always initialized to X'00' when the map comes to you.
    The length field is usually used to tell whether or not any data has been
    entered. Data has been entered in the field when the length field is not equal to
    low-values (nulls). However, a special situation occurs when a field is modified
    but no data is sent (as when a field is modified to low values with the erase to
    end-of-file key). The length field would show as zero and there would be no data
    in the data field. To be able to tell when this has happened, the flag byte is
    set to X'80' and the length area is set to zeros. Therefore, if the flag byte is
    set to X'80', the user has cleared the field. The length and data areas of any
    fields that are defined but not modified are set to low values (X'00').
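    These checks are conventionally coded against the BMS-generated symbolic map fields (suffix L for length, F for flag, I for input data). A sketch in COBOL, assuming a map field named INFLD and a hypothetical receiving field WS-CUSTNAME; DFHBMEOF (X'80') comes from the IBM-supplied DFHBMSCA copybook:

```
     IF INFLDF = DFHBMEOF
        *> user pressed ERASE EOF: the field was cleared
        MOVE SPACES TO WS-CUSTNAME
     ELSE
        IF INFLDL > ZERO
           *> new data was keyed into the field
           MOVE INFLDI TO WS-CUSTNAME
        END-IF
     END-IF.
```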

    Output Mapping

    Do not re-use symbolic mapping areas under BMS unless they are cleared to low
    values. BMS relies on X'00' in the first byte of a field to construct
    an accurate output data stream.
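    For example, assuming the symbolic map area is named CUSTMAPO (a hypothetical name generated from the map definition), clear it before each reuse:

```
     MOVE LOW-VALUES TO CUSTMAPO.
```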

    To place data in the output map, move it to the output data field. Be careful,
    however, not to let any low-value (X'00') characters begin your field or BMS will
    ignore that field. If a field begins with low values, the data in the field is
    not returned to the program.

    It is also possible to change the attribute of the output field. This can be done
    when the input and output portions of the map redefine each other. In that case,
    you have to move the desired attribute byte into the input map's attribute byte
    and your attribute will override what was originally specified. However, when the
    map is received, that byte is reset.

    You may code your own attribute constants to supplement the attributes supplied
    by IBM. Sample explanations for some of the attributes, followed by COBOL
    statements defining some additional attributes, are shown below:

    'I': Moving 'I' to the attribute byte for an alphanumeric field in error will
    make this field high intensity, unprotected and MDT on.

    'R': Moving 'R' to the attribute byte for a numeric field in error will make this
    field high intensity, unprotected, numeric lock on and MDT on.

    '4': Moving '4' to the attribute byte for an edited field that has passed the
    edits will protect this field.

    NOTE: If an attribute is needed for the FSET, BRT combination, define the
    following in WORKING-STORAGE:

    01  FSET-BRT            PIC X VALUE 'I'.

    Using the map fields defined under the "Input Mapping" topic, the move statement
    to set this attribute would be coded:

    MOVE  FSET-BRT TO INFLDA.

    MAPFAIL

    There is an important situation that occurs under BMS called "MAPFAIL". This
    occurs when a RECEIVE MAP command is issued but no data was returned to the
    program. One common way this would happen is when no field is modified on a map.
    When no field is modified, the length, flag and data fields are not changed or
    updated in any way. You must know if data is returned or entered before you start
    your edit checking.

    One technique used to avoid the MAPFAIL condition is to code in the mapset a
    one-byte dummy "autoskip" field with FSET specified, so at least one byte of
    data is sent to the program when the map is returned.

    You can test EIBAID to ensure that data was entered via the ENTER key (or other
    planned key) by checking for which AID key was pressed. The clear or PA key will
    look like data entry, but no data is returned. See the "Input Mapping" topic for
    more information about receiving a map with no data. There is a MAPFAIL
    exceptional condition that is handled by the HANDLE CONDITION command in command-
    level programming.

    In command-level programming, if the HANDLE AID command is specified, it can
    override the MAPFAIL condition specified with HANDLE CONDITION; that is, HANDLE
    AID takes precedence over HANDLE CONDITION. The RESP and RESP2 options
    may also be used to test for exceptional conditions. Since the use of RESP
    implies NOHANDLE, you must be careful when using it with the RECEIVE command,
    because NOHANDLE overrides the HANDLE AID command as well as the HANDLE CONDITION
    command. The result is that PF key responses are ignored.

    MQ: Conditions for a trigger event

    The queue manager creates a trigger message when the following conditions are satisfied:
    1. A message is put on a queue.
    2. The message has a priority greater than or equal to the threshold trigger priority of the queue. This priority is set in the TriggerMsgPriority local queue attribute; if it is set to zero, any message qualifies.
    3. The number of messages on the queue with priority greater than or equal to TriggerMsgPriority was previously, depending on TriggerType:
      • Zero (for trigger type MQTT_FIRST)
      • Any number (for trigger type MQTT_EVERY)
      • TriggerDepth minus 1 (for trigger type MQTT_DEPTH)
      Note
      a. For non-shared local queues, the queue manager counts both committed and uncommitted messages when it assesses whether the conditions for a trigger event exist. Consequently, an application might be started when there are no messages for it to retrieve because the messages on the queue have not been committed. In this situation, consider using the wait option with a suitable WaitInterval, so that the application waits for its messages to arrive.
      b. For local shared queues, the queue manager counts committed messages only.
    4. For triggering of type FIRST or DEPTH, no program has the application queue open for removing messages (that is, the OpenInputCount local queue attribute is zero).
      Note
      a. For shared queues, special conditions apply when multiple queue managers have trigger monitors running against a queue. In this situation, if one or more queue managers have the queue open for input shared, the trigger criteria on the other queue managers are treated as TriggerType MQTT_FIRST and TriggerMsgPriority zero. When all the queue managers close the queue for input, the trigger conditions revert to those conditions specified in the queue definition.
        An example scenario affected by this condition is multiple queue managers QM1, QM2, and QM3 with a trigger monitor running for an application queue A. A message arrives on A satisfying the conditions for triggering, and a trigger message is generated on the initiation queue. The trigger monitor on QM1 gets the trigger message and triggers an application. The triggered application opens the application queue for shared input. From this point on the trigger conditions for application queue A are evaluated as TriggerType MQTT_FIRST, and TriggerMsgPriority zero on queue managers QM2 and QM3, until QM1 closes the application queue.
      b. For shared queues, this condition is applied for each queue manager. That is, a queue manager's OpenInputCount for a queue must be zero for a trigger message to be generated for the queue by that queue manager. However, if any queue manager in the queue-sharing group has the queue open using the MQOO_INPUT_EXCLUSIVE option, no trigger message is generated for that queue by any of the queue managers in the queue-sharing group.
        The change in how the trigger conditions are evaluated occurs when the triggered application opens the queue for input. In scenarios where there is only one trigger monitor running, other applications can have the same effect because they similarly open the application queue for input. It does not matter whether the application queue was opened by an application that is started by a trigger monitor, or by some other application; it is the fact that the queue is open for input on another queue manager that causes the change in trigger criteria.
    5. On WebSphere MQ for z/OS, if the application queue is one with a Usage attribute of MQUS_NORMAL, get requests for it are not inhibited (that is, the InhibitGet queue attribute is MQQA_GET_ALLOWED). Also, if the triggered application queue is one with a Usage attribute of MQUS_XMITQ, get requests for it are not inhibited.
    6. Either:
      • The ProcessName local queue attribute for the queue is not blank, and the process definition object identified by that attribute has been created, or
      • The ProcessName local queue attribute for the queue is all blank, but the queue is a transmission queue. As the process definition is optional, the TriggerData attribute might also contain the name of the channel to be started. In this case, the trigger message contains attributes with the following values:
        • QName: queue name
        • ProcessName: blanks
        • TriggerData: trigger data
        • ApplType: MQAT_UNKNOWN
        • ApplId: blanks
        • EnvData: blanks
        • UserData: blanks
    7. An initiation queue has been created, and has been specified in the InitiationQName local queue attribute. Also:
      • Get requests are not inhibited for the initiation queue (that is, the InhibitGet queue attribute is MQQA_GET_ALLOWED).
      • Put requests must not be inhibited for the initiation queue (that is, the InhibitPut queue attribute must be MQQA_PUT_ALLOWED).
      • The Usage attribute of the initiation queue must be MQUS_NORMAL.
      • In environments where dynamic queues are supported, the initiation queue must not be a dynamic queue that has been marked as logically deleted.
    8. A trigger monitor currently has the initiation queue open for removing messages (that is, the OpenInputCount local queue attribute is greater than zero).
    9. The trigger control (TriggerControl local queue attribute) for the application queue is set to MQTC_ON. To do this, set the trigger attribute when you define your queue, or use the ALTER QLOCAL command.
    10. The trigger type (TriggerType local queue attribute) is not MQTT_NONE.
      If all the required conditions are met, and the message that caused the trigger condition is put as part of a unit of work, the trigger message does not become available for retrieval by the trigger monitor application until the unit of work completes, whether the unit of work is committed or, for trigger type MQTT_FIRST or MQTT_DEPTH, backed out.
    11. A suitable message is placed on the queue, for a TriggerType of MQTT_FIRST or MQTT_DEPTH, and the queue:
      • Was not previously empty (MQTT_FIRST), or
      • Had TriggerDepth or more messages (MQTT_DEPTH)
      and conditions 2 through 10 (excluding 3) are satisfied, if in the case of MQTT_FIRST a sufficient interval (the TriggerInterval queue-manager attribute) has elapsed since the last trigger message was written for this queue.
      This is to allow for a queue server that ends before processing all the messages on the queue. The purpose of the trigger interval is to reduce the number of duplicate trigger messages that are generated.
      Note
      If you stop and restart the queue manager, the TriggerInterval timer is reset. There is a small window during which it is possible to produce two trigger messages. The window exists when the trigger attribute of the queue is set to enabled at the same time as a message arrives and the queue was not previously empty (MQTT_FIRST) or had TriggerDepth or more messages (MQTT_DEPTH).
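The TriggerInterval behavior described for MQTT_FIRST can be sketched in a few lines. This is a simplified Python simulation under stated assumptions, not queue-manager code: timings are passed in as milliseconds, FirstTrigger and on_put are invented names, and only the "queue was previously empty" and "interval has elapsed" checks are modeled.

```python
# Simplified sketch of TriggerInterval suppression for TriggerType MQTT_FIRST:
# a second trigger message for the same queue is suppressed until the interval
# has elapsed, even if the queue becomes non-empty again. All names here are
# illustrative; the real check is internal to the queue manager.

class FirstTrigger:
    def __init__(self, interval_ms):
        self.interval_ms = interval_ms
        self.last_trigger_ms = None   # reset when the queue manager restarts

    def on_put(self, now_ms, depth_before):
        """Return True if this put should generate a trigger message."""
        if depth_before != 0:         # MQTT_FIRST: queue must have been empty
            return False
        if (self.last_trigger_ms is not None
                and now_ms - self.last_trigger_ms < self.interval_ms):
            return False              # interval not elapsed: suppress duplicate
        self.last_trigger_ms = now_ms
        return True

t = FirstTrigger(interval_ms=5000)
print(t.on_put(now_ms=0, depth_before=0))      # True: first trigger
print(t.on_put(now_ms=3000, depth_before=0))   # False: within the interval
print(t.on_put(now_ms=6000, depth_before=0))   # True: interval has elapsed
```

This mirrors the stated purpose of the interval: reducing duplicate trigger messages when a server ends before draining the queue and messages keep arriving.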
    12. The only application serving a queue issues an MQCLOSE call, for a TriggerType of MQTT_FIRST or MQTT_DEPTH, and there is at least:
      • One (MQTT_FIRST), or
      • TriggerDepth (MQTT_DEPTH)
      messages on the queue of sufficient priority (condition 2), and conditions 6 through 10 are also satisfied.
      This is to allow for a queue server that issues an MQGET call, finds the queue empty, and so ends; however, in the interval between the MQGET and the MQCLOSE calls, one or more messages arrive.
      Note
      a. If the program serving the application queue does not retrieve all the messages, this can cause a closed loop. Each time that the program closes the queue, the queue manager creates another trigger message that causes the trigger monitor to start the server program again.
      b. If the program serving the application queue backs out its get request (or if the program abends) before it closes the queue, the same happens. However, if the program closes the queue before backing out the get request, and the queue is otherwise empty, no trigger message is created.
      c. To prevent such a loop occurring, use the BackoutCount field of MQMD to detect messages that are repeatedly backed out. For more information, see Messages that are backed out.
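The loop-prevention advice in note c can be sketched as follows. This is an illustrative Python sketch, not real MQI code: the message is modeled as a plain dictionary, and the names handle and BACKOUT_THRESHOLD are assumptions chosen for the example. A serving program would read BackoutCount from the MQMD of each retrieved message and divert messages that have been backed out too often, instead of backing them out yet again.

```python
# Illustrative sketch of using MQMD BackoutCount to break a backout loop.
# BACKOUT_THRESHOLD and the "divert" action are hypothetical choices; a real
# server would typically move such messages to a backout or dead-letter queue.

BACKOUT_THRESHOLD = 3

def handle(message, backout_threshold=BACKOUT_THRESHOLD):
    """Decide what to do with a retrieved message based on its BackoutCount."""
    if message["BackoutCount"] >= backout_threshold:
        return "divert"      # stop retrying: route the message elsewhere
    return "process"         # normal processing path

print(handle({"BackoutCount": 0}))  # process
print(handle({"BackoutCount": 3}))  # divert
```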
    13. The following conditions are satisfied using MQSET or a command:
        • TriggerControl is changed to MQTC_ON, or
        • TriggerControl is already MQTC_ON and the value of either TriggerType, TriggerMsgPriority, or TriggerDepth (if relevant) is changed,
        and there is at least:
        • One (MQTT_FIRST or MQTT_EVERY), or
        • TriggerDepth (MQTT_DEPTH)
        messages on the queue of sufficient priority (condition 2), and conditions 4 through 10 (excluding 8) are also satisfied.
        This is to allow for an application or operator changing the triggering criteria, when the conditions for a trigger to occur are already satisfied.
      1. The InhibitPut queue attribute of an initiation queue changes from MQQA_PUT_INHIBITED to MQQA_PUT_ALLOWED, and there is at least:
        • One (MQTT_FIRST or MQTT_EVERY), or
        • TriggerDepth (MQTT_DEPTH)
        messages of sufficient priority (condition 2) on any of the queues for which this is the initiation queue, and conditions 4 through 10 are also satisfied. (One trigger message is generated for each such queue satisfying the conditions.)
        This is to allow for trigger messages not being generated because of the MQQA_PUT_INHIBITED condition on the initiation queue, but this condition now having been changed.
      2. The InhibitGet queue attribute of an application queue changes from MQQA_GET_INHIBITED to MQQA_GET_ALLOWED, and there is at least:
        • One (MQTT_FIRST or MQTT_EVERY), or
        • TriggerDepth (MQTT_DEPTH)
        messages of sufficient priority (condition 2) on the queue, and conditions 4 through 10, excluding 5, are also satisfied.
        This allows applications to be triggered only when they can retrieve messages from the application queue.
      3. A trigger-monitor application issues an MQOPEN call for input from an initiation queue, and there is at least:
        • One (MQTT_FIRST or MQTT_EVERY), or
        • TriggerDepth (MQTT_DEPTH)
        messages of sufficient priority (condition 2) on any of the application queues for which this is the initiation queue, and conditions 4 through 10 (excluding 8) are also satisfied, and no other application has the initiation queue open for input (one trigger message is generated for each such queue satisfying the conditions).
        This is to allow for messages arriving on queues while the trigger monitor is not running, and for the queue manager restarting and trigger messages (which are nonpersistent) being lost.
    14. MSGDLVSQ is set correctly. If you set MSGDLVSQ=FIFO, messages are delivered to the queue on a first-in first-out basis. The priority of the message is ignored and the default priority of the queue is assigned to the message. If TriggerMsgPriority is set to a higher value than the default priority of the queue, no messages are triggered. If TriggerMsgPriority is set equal to or lower than the default priority of the queue, triggering occurs for types FIRST, EVERY, and DEPTH. For information about these types, see the description of the TriggerType field under Controlling trigger events.
      If you set MSGDLVSQ=PRIORITY, messages count towards a trigger event only if the message priority is equal to or greater than the TriggerMsgPriority field. In this case, triggering occurs for types FIRST, EVERY, and DEPTH. For example, if you put 100 messages of lower priority than the TriggerMsgPriority, the effective queue depth for triggering purposes is still zero. If you then put another message on the queue whose priority is greater than or equal to the TriggerMsgPriority, the effective queue depth increases from zero to one and the condition for TriggerType FIRST is satisfied.
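The "effective queue depth" idea for MSGDLVSQ=PRIORITY can be shown with a short sketch. This is an assumption-laden illustration, not an MQ API: effective_depth is an invented helper that simply counts messages whose priority meets the TriggerMsgPriority threshold, mirroring the 100-message example above.

```python
# Illustrative sketch: with MSGDLVSQ=PRIORITY, only messages whose priority is
# >= TriggerMsgPriority count towards triggering (condition 2). The function
# name and inputs are hypothetical; the queue manager does this internally.

def effective_depth(priorities, trigger_msg_priority):
    """Count messages that contribute to the effective depth for triggering."""
    return sum(1 for p in priorities if p >= trigger_msg_priority)

low = [3] * 100                       # 100 messages below TriggerMsgPriority 5
print(effective_depth(low, 5))        # 0: none count towards triggering
print(effective_depth(low + [5], 5))  # 1: TriggerType FIRST condition now met
```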
    Note
    1. From step 12 (where trigger messages are generated as a result of some event other than a message arriving on the application queue), the trigger message is not put as part of a unit of work. Also, if the TriggerType is MQTT_EVERY, and if there are one or more messages on the application queue, only one trigger message is generated.
    2. If WebSphere MQ segments a message during MQPUT, a trigger event will not be processed until all the segments have been successfully placed on the queue. However, once message segments are on the queue, WebSphere MQ treats them as individual messages for triggering purposes. For example, a single logical message split into three pieces causes only one trigger event to be processed when it is first MQPUT and segmented. However, each of the three segments causes their own trigger events to be processed as they are moved through the WebSphere MQ network.
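The interplay of TriggerControl, TriggerType, and queue depth that runs through the conditions above can be summarized in one sketch. This is a deliberately simplified Python model, not the MQI: trigger_fires and its parameters are invented for illustration, and only conditions 9-11 are represented (priority, initiation-queue state, and the other conditions are assumed satisfied).

```python
# Simplified model of when a put generates a trigger message, assuming all
# conditions other than TriggerControl (9), TriggerType (10), and the
# FIRST/EVERY/DEPTH rules (11) are already satisfied. Names are illustrative.

def trigger_fires(trigger_type, depth_before, trigger_control="ON",
                  trigger_depth=3):
    """Return True if putting one more message should generate a trigger."""
    if trigger_control != "ON":       # condition 9: TriggerControl is MQTC_ON
        return False
    if trigger_type == "NONE":        # condition 10: TriggerType not MQTT_NONE
        return False
    if trigger_type == "EVERY":       # every arriving message can trigger
        return True
    if trigger_type == "FIRST":      # queue was previously empty
        return depth_before == 0
    if trigger_type == "DEPTH":      # depth reaches TriggerDepth
        return depth_before + 1 >= trigger_depth
    return False

print(trigger_fires("FIRST", depth_before=0))   # True: queue was empty
print(trigger_fires("FIRST", depth_before=4))   # False: already non-empty
print(trigger_fires("DEPTH", depth_before=2))   # True: depth reaches 3
```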