Saturday, November 17, 2012

Initializing VSAM Files


VSAM files have always presented a rather annoying problem: at least one data record must be loaded into the file before the file can be opened for input or update processing. This is because VSAM performs an implicit VERIFY when a file is opened to reset the end-of-file pointer. If the file has never been loaded, the VERIFY fails because the high-used RBA (Relative Byte Address), the HI-USED-RBA, is still zero. Therefore, VSAM files must be initially "loaded" to set the HI-USED-RBA to a value other than zero. This is done by writing a record to the VSAM file in "load" mode and optionally deleting the record to empty the file while leaving the HI-USED-RBA at a non-zero value.


Load processing requires opening the VSAM file for OUTPUT in SEQUENTIAL processing mode and writing at least one record to the file. (COBOL-II does implement a direct-mode load facility, but the initialization of the VSAM file is done behind the scenes by code invoked by COBOL-II.) The common COBOL solution is to code a separate initialization program. The problem with using COBOL as the initialization mechanism is that the information in the FD (File Description) must always match the physical attributes of the file from the DEFINE CLUSTER command. Therefore, there is usually a separate initialization program for each VSAM file, and it must be changed whenever the VSAM file's attributes are modified. This is one more maintenance item that can go wrong. Although it is not very complicated, it is a nuisance to initialize VSAM files every time they are DEFINEd and to maintain a separate program for each file.
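As a sketch, such an initialization program might look like the following. The DD name, key layout and record length here are illustrative and would have to match your own DEFINE CLUSTER attributes.

IDENTIFICATION DIVISION.
PROGRAM-ID. VSAMINIT.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT SEED-FILE ASSIGN TO SEEDDD
        ORGANIZATION IS INDEXED
        ACCESS MODE IS SEQUENTIAL
        RECORD KEY IS SEED-KEY
        FILE STATUS IS WS-STATUS.
DATA DIVISION.
FILE SECTION.
FD  SEED-FILE.
01  SEED-RECORD.
    05  SEED-KEY   PIC X(10).
    05  SEED-DATA  PIC X(70).
WORKING-STORAGE SECTION.
01  WS-STATUS      PIC XX.
PROCEDURE DIVISION.
*   Open in load mode, write one seed record, and close.
*   This sets the HI-USED-RBA to a non-zero value.
    OPEN OUTPUT SEED-FILE
    MOVE HIGH-VALUES TO SEED-KEY
    MOVE SPACES TO SEED-DATA
    WRITE SEED-RECORD
    CLOSE SEED-FILE
    STOP RUN.

The same program can be reused for any KSDS with the same key and record layout, which is exactly why a generic initialization mechanism is preferable to one program per file.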

VSAM FILE MAINTENANCE

Many times, the process of VSAM file maintenance, which uses IDCAMS to reorganize the VSAM file, is also used to perform the task of initially loading the file. The reorganization process consists of unloading (to a sequential file) the data records, DELETEing the VSAM file, DEFINEing the VSAM file again, and reloading the file's data records from the unloaded sequential file. This process reorganizes the VSAM file for more optimal processing by redistributing the free space throughout the file and eliminating "split" data blocks. However, if there are no records in the file at the time of the reorganization, this process will produce a reorganized VSAM file that can't be opened for normal input or update processing because the HI-USED-RBA will still be zero. Loading the file with the IDCAMS REPRO command will not reset the HI-USED-RBA if the input file is empty. The usual error, which results when the VSAM file is subsequently used for input or update processing, is "IEC161I 072-053 jobname,stepname,ddname,,,vsam-dataset-name".
The solution is to create a small program to OPEN the file, WRITE a record, optionally DELETE that record, and then CLOSE the file. This initial record is usually referred to as the "seed" record. The seed record can also be used as an EOF (End of File) indicator for sequential processing by giving it a key higher than any other key in the file, i.e., HIGH-VALUES. Even so, using the FILE STATUS code to determine when end of file occurs is more general and less prone to errors or confusion. Sometimes the seed record is used as a COUNTER record at the start of the file, with a key of LOW-VALUES. Other times, the seed record is deleted because it does not contain valid data. In a few instances, the record is ignored but remains because of the effort required to delete it. Initializing the file does not prevent the reorganization process from being run: simply insert the initialization step between the DEFINE and reload steps, so that even if there are no records in the unloaded file, the VSAM file will be initialized.
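Initialization fits naturally into the reorganization job itself. As an illustrative sketch (the dataset names, DEFINE parameters and the initialization program name VSAMINIT are all placeholders for your own), the job might look like this:

//UNLOAD  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(PROD.MASTER.KSDS) OUTDATASET(PROD.MASTER.UNLOAD)
/*
//REDEF   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE PROD.MASTER.KSDS CLUSTER
  DEFINE CLUSTER (NAME(PROD.MASTER.KSDS) INDEXED -
         KEYS(10 0) RECORDSIZE(80 80))
/*
//* Initialization step inserted between the DEFINE and the reload
//INIT    EXEC PGM=VSAMINIT
//SEEDDD   DD DSN=PROD.MASTER.KSDS,DISP=SHR
//RELOAD  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(PROD.MASTER.UNLOAD) OUTDATASET(PROD.MASTER.KSDS)
/*

Even if PROD.MASTER.UNLOAD turns out to be empty, the INIT step leaves the newly DEFINEd file ready to be opened.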


VSAM FILE TYPES
There are four types of VSAM files: KSDS, RRDS, ESDS and Linear. Each type has a different structure and use. The initialization process is different for the KSDS, RRDS and ESDS files, which are commonly used in application systems.

KSDS:
Keyed VSAM file for direct access by KEY or sequential processing — KSDSes have a data and an index component. The data portion contains the actual data records. The index contains the KEYs and pointers to the records (RBAs [Relative Byte Addresses]). The index is arranged in a tree structure to allow relatively quick direct access to specific records. The lowest level of the index, the SEQUENCE SET, contains all of the keys and the RBAs; the higher levels make up the INDEX SET. Sequential processing by key sequence uses the sequence set of the index to maintain record order.

RRDS :
Relative Record file for direct access by RECORD NUMBER or sequential processing — RRDSes have a data component which contains the data records. Variable length RRDSes also have an index component which contains the RBAs for records based on the record number, or SLOT. Random access uses the record number to access the desired record. For variable length records, the index is accessed to get the RBA for the desired record.

ESDS:
Entry Sequenced file for sequential access — ESDSes have a data component which contains the data records. Access is usually sequential and all data is added to the end of the file.

Linear:
Usually not used in application systems; used by IBM for DB2 databases.

KSDS and RRDS files can be initialized and then the records can be deleted to leave the file initialized but empty. ESDS records can't be deleted; the ESDS convention is to "delete" records in software by flagging them as deleted with X'FF', HIGH-VALUES, in the first byte.




CICS: What is the difference between transaction and task ?



   A transaction is a piece of processing initiated by a single request,
   usually from an end user at a terminal.  A single transaction will consist
   of one or more application programs that, when run, will carry out the
   processing needed.

   In other words, "transaction" means in CICS what it does in everyday
   English:  a single event or item of business between two parties.  In
   batch processing, transactions of one type are grouped together and
   processed in a batch (all the updates to the personnel file in one job, a
   list of all the overdue accounts in another, and so on).  In an online
   system, by contrast, transactions aren't sorted by type, but instead are
   done individually as they arrive (an update to a personnel record here, a
   customer order entered there, a billing inquiry next, and so on).

   Having given you this straightforward definition, we'll immediately
   complicate things a bit by admitting that the word "transaction" is used
   to mean both a single event (as we just described) and a class of similar
   events.  Thus, we speak of adding Mary Smith to the Payroll File with a
   (single) "add" transaction, but we also speak of the "add" transaction,
   meaning all additions to that particular file.

   Things are further complicated by the fact that one sometimes describes
   what the user sees as a single transaction (the addition to a file,
   perhaps) as several transactions to CICS. 
   Now, what about a task?

   Users tell CICS what type of transaction they want to do next by using a
   transaction identifier.  By convention, this is the first "word" in the
   input for a new transaction, and is from one to four characters long,
   although this source of the identifier is sometimes overridden by
   programming.

   CICS looks up the transaction identifier to find out which program to
   invoke first to do the work requested.  It creates a task to do the work,
   and transfers control to the indicated program.  So a task is a single
   execution of some type of transaction, and means the same thing as
   "transaction" when that word is used in its single event sense.

   A task can read from and write to the terminal that started it, read and
   write files, start other tasks, and do many other things.  All these
   services are controlled by and requested through CICS commands in your
   application programs.  CICS manages many tasks concurrently.  Only one
   task can actually be executing at any one instant.  However, when the task
   requests a service which involves a wait, such as file input/output, CICS
   uses the wait time of the first task to execute a second; so, to the
   users, it looks as if many tasks are being executed at the same time.




COBOL: How Do I Use Evaluate?


EVALUATE can be used in place of IF statements, and it can often make the program more readable when you have complex nested conditions.

 In its simplest form, the EVALUATE statement goes something like this:


EVALUATE subject
   WHEN value
        imperative-statements
   WHEN value
        imperative-statements
   ...
END-EVALUATE.


Here's a simple example:

EVALUATE WS-X
   WHEN 1
        ADD 15 TO WS-TOTAL
        PERFORM A-100-REPORT
   WHEN 2
        ADD 16 TO WS-TOTAL
        MOVE 'WS-X IS 2' TO WS-DISPLAY
        PERFORM A-200-REPORT
   WHEN OTHER
        PERFORM X-100-ERROR
END-EVALUATE.


This will check the value of the variable WS-X and execute the statements depending on its value. Note the use of WHEN OTHER: this will be executed if WS-X does not match any of the values, so in the example, if WS-X is not equal to 1 or 2, then PERFORM X-100-ERROR will be executed.

 Sometimes you will want to have multiple conditions with lots of ANDs and ORs in an EVALUATE statement, as you would in an IF statement. To do this with EVALUATE requires a slightly different approach. One way is to use EVALUATE TRUE (or EVALUATE FALSE). For example:
 

EVALUATE TRUE
   WHEN WS-X = 1 AND WS-Y = 2
        PERFORM X-100-PROCESS1
   WHEN WS-X = 1 AND WS-Y NOT = 2
        PERFORM X-200-PROCESS2
END-EVALUATE.
 


Here, the whole condition on the WHEN statement is checked and if it is TRUE then the associated statement(s) are executed.

 The second way to do this is using EVALUATE ... ALSO.
 

EVALUATE WS-AGE ALSO WS-SEX ALSO WS-WEIGHT
   WHEN 21 ALSO 'M' ALSO 150
        PERFORM A-200-ACCEPT
   WHEN OTHER
        PERFORM A-300-DECLINE
END-EVALUATE.


In this example if WS-AGE is 21 AND WS-SEX is 'M' AND WS-WEIGHT is 150 then PERFORM A-200-ACCEPT is executed, if not then PERFORM A-300-DECLINE is executed.

 You can combine ALSO with the TRUE and FALSE conditions, so you could have EVALUATE TRUE ALSO FALSE for example.
 




How can I improve COBOL program performance?


There are various methods of improving the efficiency of your code from the way you code your program to the compiler options you choose. Some of these techniques will not make any major difference to speed unless you are processing high volumes of data and/or code is going to be executed millions of times for each run. The points presented below are guidelines only - there may be better ways of speeding things up - if you are worried about performance, it is always a good idea to discuss it with your local systems programmer.
 

1) Arithmetic. Some arithmetic calculations take longer than others. Multiplication and division take longer than addition or subtraction. So instead of, say, multiplying a number by 2, try adding the number to itself. If you want to make a number negative, you could subtract it from 0 instead of multiplying it by -1. Check all arithmetic to see if there is a more efficient way of doing it.
 

2) Order of comparisons. The way you do comparisons on an IF or EVALUATE statement can affect performance. If you have multiple OR conditions on your IF statement try and make sure that the condition most likely to be true is first in the list. This means that the program will not have to check as many conditions before finding a 'true' one.
 Here's an example to illustrate this. In the example we are checking a VSAM status code. The most likely value is 00 (Executed without any errors).
 

We could code:

IF WS-STATUS = '97' OR '96' OR '95' OR '00'
    DISPLAY WS-STATUS.


The program would first check whether WS-STATUS = '97', then whether WS-STATUS = '96', then whether WS-STATUS = '95', and finally whether WS-STATUS = '00', which would be true, so the DISPLAY would be executed. That is four checks before the 'true' condition was found.

 If we code:

IF WS-STATUS = '00' OR '97' OR '96' OR '95'
    DISPLAY WS-STATUS.


then the program would check whether WS-STATUS = '00' first, which is true, so the DISPLAY would be executed with just one check.
 Conversely, if you have multiple AND conditions on your IF statement, try to make sure that the condition least likely to be true is first in the list. This means that the program will drop out of checking conditions as early as possible.
 

3) Watch those compiler options. Some compiler options can result in longer running times. For example, SSRANGE, which checks for subscripts going out of range, actually adds extra code to your program to check subscripts. SSRANGE would be acceptable in a test environment, but may not be appropriate in the production environment.
 

4) Sorting. If at all possible avoid doing a sort within a COBOL program. COBOL sorts are very inefficient. If you must do a sort in a COBOL program, specifying the FASTSRT compiler option may speed up the sorting process.
 

5) Numbers.
a) How you define numeric fields can have an impact on performance. If you are using a field for arithmetic or as a subscript and the field has 8 or fewer digits, it is quite often best to define it as a signed binary number (COMP or BINARY). This is because binary numbers can be manipulated much faster. If the field has between 8 and 15 digits, then it is often best to define it as a signed packed decimal number (COMP-3) with an odd number of digits, especially if the number is to be used with USAGE DISPLAY items. If the number has more than 18 digits, then decimal arithmetic is always used by the compiler. For more information on this, check out the appropriate COBOL Application Programming Guide. Use signed numbers wherever possible. COBOL does all arithmetic with signed numbers, so if you use unsigned numbers COBOL has to add code to remove signs.
 b) Rounding numbers can often take longer than the calculation so try to avoid rounding numbers.
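As a sketch, field definitions following these guidelines might look like this (the names are illustrative):

WORKING-STORAGE SECTION.
* Signed binary: good for subscripts and small arithmetic fields.
01  WS-SUB     PIC S9(4) COMP.
* Signed packed decimal with an odd number of digits: good for
* larger amounts, especially ones moved to DISPLAY items.
01  WS-AMOUNT  PIC S9(7)V99 COMP-3.
* Unsigned DISPLAY item: forces the compiler to generate extra
* sign-handling and conversion code whenever it is used in arithmetic.
01  WS-SLOW    PIC 9(7).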

 6) CALLs to other programs. When calling a subprogram with USING try to specify as few parameters as possible. Each parameter passed requires an individual BLL cell to be allocated in the called program and may require additional registers to be used.

 7) PERFORMs. If you do a 'PERFORM paragraph', the compiler may convert this into up to six machine instructions, because the compiler must establish where it is to jump to and save the address of the instruction to jump back to at the end of the performed paragraph. PERFORMs with VARYING will require many more machine instructions. This will probably only be a problem if the code is executed millions of times. You should weigh the pros and cons of maintainability against speed. It might be better to opt for writing the program in Assembler if the speed of execution is going to be a problem.

What does that VSAM Status Code Mean?


 Every operation you carry out to access a VSAM file in COBOL will return a file status code to indicate whether the operation completed successfully or not. It is good practice when programming to check the VSAM status code after any operation carried out on a VSAM file to check that the OPEN, READ, etc completed successfully. To use the status code you must define a working storage field that will contain the file status, and use the clause "FILE STATUS IS status" on the SELECT statement.
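A minimal sketch of the declarations involved (the file, DD and field names are illustrative):

INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT CUST-FILE ASSIGN TO CUSTDD
        ORGANIZATION IS INDEXED
        ACCESS MODE IS RANDOM
        RECORD KEY IS CUST-KEY
        FILE STATUS IS WS-STATUS.
    ...
WORKING-STORAGE SECTION.
01  WS-STATUS  PIC XX.
    ...
PROCEDURE DIVISION.
    OPEN INPUT CUST-FILE
*   Check the status after every file operation.
    IF WS-STATUS NOT = '00'
        DISPLAY 'OPEN FAILED, STATUS: ' WS-STATUS
    END-IF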

The codes returned are listed in the table below.

 File Status Cause
00 Operation completed successfully
02 Duplicate Key was found
04 Invalid fixed length record
05 The file was created when opened - Successful Completion
07 CLOSE with REEL or NO REWIND executed for non-tape dataset.
10 End of File encountered
14 Attempted to READ a relative record outside file boundary
21 Invalid Key - Sequence error
22 Invalid Key - Duplicate Key found
23 Invalid key - No record found
24 Invalid Key - key outside boundary of file.
30 Permanent I/O Error
34 Permanent I/O Error - Record outside file boundary
35 OPEN, but file not found
37 OPEN with wrong mode
38 Tried to OPEN a LOCKed file
39 OPEN failed, conflicting file attributes
41 Tried to OPEN a file that is already open
42 Tried to CLOSE a file that is not OPEN
43 Tried to REWRITE without READing a record first
44 Tried to REWRITE a record of a different length
46 Tried to READ beyond End-of-file
47 Tried to READ from a file that was not opened I-O or INPUT
48 Tried to WRITE to a file that was not opened I-O or OUTPUT
49 Tried to DELETE or REWRITE to a file that was not opened I-O
91 Password or authorization failed
92 Logic Error
93 Resource was not available (may be allocated to CICS or another user)
94 Sequential record unavailable or concurrent OPEN error
95 File Information invalid or incomplete
96 No DD statement for the file
97 OPEN successful and file integrity verified
98 File is Locked - OPEN failed
99 Record Locked - record access failed.

Temporary storage in CICS


Temporary Storage Queues

Temporary storage provides a means for storing data records in queues. Like files, these records are identified by a unique symbolic name. Temporary storage queues do not have to be predefined to CICS. They can be created in main storage or on auxiliary storage devices. Once created, these records can be read either sequentially or randomly by any other CICS program.

Temporary storage queues are not directly attached to a task. This means that temporary storage queues are task independent. Once a temporary storage queue is written, it remains intact after the task that created it has terminated.

Temporary Storage Queue Commands

There are three commands that process data in temporary storage queues.
* The WRITEQ TS command allows you to write records to a temporary storage queue. If no queue exists when this command is issued, one will be created and the records will be written to it.
* The READQ TS command allows you to read records, either sequentially or randomly, from a temporary storage queue.
* Records in a temporary storage queue can be updated and rewritten by using the REWRITE option of the WRITEQ TS command.
* The DELETEQ TS command allows you to delete an entire temporary storage queue. Individual records cannot be deleted.
* The queue name specified in a temporary storage command must not exceed eight characters in length.


WRITEQ TS QUEUE (queue name)
 FROM (data area)
[LENGTH (data value)]
[ITEM (data area)]
[MAIN / AUXILIARY]
[SYSID (name)]


READQ TS QUEUE (queue name)
 INTO (data area)
[LENGTH (data area)]
[ITEM (data value) / NEXT]
[NUMITEMS (data area)]
[SYSID (name)]


WRITEQ TS QUEUE (queue name)
 FROM (data area)
[LENGTH (data value)]
[ITEM (data area) [REWRITE]]
[MAIN / AUXILIARY]
[SYSID (name)]


DELETEQ TS QUEUE (queue name)
 [SYSID (name)]
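A sketch of these commands in a COBOL program (the queue name, record layout and lengths are illustrative):

WORKING-STORAGE SECTION.
01  WS-REC   PIC X(80).
01  WS-LEN   PIC S9(4) COMP VALUE 80.
01  WS-ITEM  PIC S9(4) COMP.
PROCEDURE DIVISION.
*   Write a record; the queue TEMPQ001 is created if it
*   does not already exist.
    EXEC CICS WRITEQ TS QUEUE('TEMPQ001')
         FROM(WS-REC) LENGTH(WS-LEN) ITEM(WS-ITEM)
    END-EXEC
*   Read the first item back randomly by item number.
    MOVE 1 TO WS-ITEM
    EXEC CICS READQ TS QUEUE('TEMPQ001')
         INTO(WS-REC) LENGTH(WS-LEN) ITEM(WS-ITEM)
    END-EXEC
*   Delete the entire queue when finished with it.
    EXEC CICS DELETEQ TS QUEUE('TEMPQ001')
    END-EXEC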


Transient Data Queues

Like temporary storage queues, transient data queues are task independent. However, transient data queues can only be read sequentially.

Unlike temporary storage queues, transient data queues must be defined before they are used. This definition takes place in a special CICS table called the Destination Control Table (DCT). The DCT is usually maintained by a systems programmer. One of the fields in each DCT entry tells whether the queue is an intrapartition or extrapartition queue.

Intrapartition Data Queues
Intrapartition transient data queues may only reside on auxiliary storage and can only be read sequentially by other CICS programs. Reading an intrapartition data queue is destructive.
Intrapartition queues may also be associated with Automatic Transaction Initiation (ATI). When the number of records in an intrapartition queue reaches a predefined count, a special task is automatically initiated.

Extrapartition Data Queues
Unlike intrapartition queues, extrapartition queues can be accessed by other CICS programs as well as by batch programs executing outside of the CICS partition or region. They can reside on any sequential device, such as disk or tape, or be sent directly to an offline printer. Reading records in an extrapartition queue is non-destructive.

WRITEQ TD QUEUE (queue name)
 FROM (data area)
[LENGTH (data value)]
[SYSID (name)]


READQ TD QUEUE (queue name)
 INTO (data area)
[LENGTH (data area)]
[SYSID (name)]


DELETEQ TD QUEUE (queue name)
 [SYSID (name)]


* The WRITEQ TD command allows you to write records sequentially to a transient data queue.
* The READQ TD command allows you to read sequentially from a transient data queue.
* The DELETEQ TD command allows you to delete the contents of an intrapartition TD queue.
* Transient data queues are referenced by these commands using a symbolic name which must be predefined in the DCT.
* The queue name specified in transient data commands must not exceed four characters in length.
 

Exceptional Conditions

IOERR - An undetermined error has occurred during input or output
ISCINVREQ - An undetermined error has occurred on a remote system
ITEMERR - The requested item number is invalid
LENGERR - The length of a record is invalid or missing
NOSPACE - A write has failed due to lack of space
QIDERR - The requested queue cannot be found
QZERO - A read has been attempted on an empty queue
SYSIDERR - The specified remote system is unavailable or not defined


VSAM File Handling in CICS

In CICS, VSAM file operations are performed using the READ, WRITE, REWRITE, DELETE, UNLOCK, STARTBR, ENDBR, READPREV and READNEXT commands. These commands are explained in detail below.

The READ Command

READ
 DATASET (file name)
INTO (data area)
RIDFLD (record key area)
[UPDATE]
[EQUAL / GTEQ]
[LENGTH (record length area)]
[GENERIC]
[KEYLENGTH (data value)]
[SYSID (system name)]


The DATASET, INTO and RIDFLD options are required in every READ command. DATASET gives the name, in quotes, of the file that you wish to read. INTO names the data area within your program into which the record should be copied from the file. RIDFLD stands for Record IDentification FieLD. This option names the data area that contains the key value of the record you wish to read from the file.
The UPDATE option is used to establish exclusive control over a record. This is necessary to prepare CICS to update or delete a record later in your program.
EQUAL and GTEQ refer to the collating sequence in which record keys occur in the file. By default, the READ command will only read a record whose key is equal to the key specified in RIDFLD. If GTEQ is specified, the first record whose key is greater than or equal to the RIDFLD is read.
The LENGTH option is mandatory for reading variable length records. It specifies a data area that contains the maximum length input record that the program is expecting. When Cics executes the READ it stores the actual length of the record it reads into the data area specified in the LENGTH option. If this length exceeds the maximum length that the program had specified, the LENGERR exception condition is raised.
GENERIC tells CICS that the RIDFLD specifies only a partial key. KEYLENGTH, which is mandatory with the GENERIC option, tells CICS how many bytes of the RIDFLD key should be used to retrieve the record. The data value associated with the KEYLENGTH option may be a constant number, or it may be a variable data area defined in working storage.
The SYSID option is needed only if your installation uses the Intersystem Communication (ISC) facility to communicate with other systems. The four character system ID that you specify tells Cics on which system the file to be read is located. Whenever you specify the SYSID option, the LENGTH and KEYLENGTH options must be specified.
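A sketch of a READ with the UPDATE option in COBOL (the file, key value and field names are illustrative):

WORKING-STORAGE SECTION.
01  WS-CUST-REC  PIC X(200).
01  WS-CUST-KEY  PIC X(6).
01  WS-REC-LEN   PIC S9(4) COMP VALUE 200.
PROCEDURE DIVISION.
    MOVE '000123' TO WS-CUST-KEY
*   Read the record for update, establishing exclusive control
*   in preparation for a later REWRITE or DELETE.
    EXEC CICS READ DATASET('CUSTMAS')
         INTO(WS-CUST-REC)
         RIDFLD(WS-CUST-KEY)
         LENGTH(WS-REC-LEN)
         UPDATE
    END-EXEC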


The WRITE command


WRITE
 DATASET (file name)
FROM (data area)
RIDFLD (record key area)
[LENGTH (data value)]
[KEYLENGTH (data value)]
[SYSID (system name)]


With the exception of FROM and LENGTH, all of these options are coded in the same way that they are coded for the READ command.
The FROM option indicates the name of the data area in working storage from which the record will be written.
The LENGTH option specifies the exact length of the record to be written. There is no need for CICS to return a length value to the program after the record has been written. Therefore, in the WRITE command, the LENGTH option can be specified by a constant number instead of by the name of the data area, as in the READ command. As with the READ command, the LENGTH option is required for variable length records.


The REWRITE Command


REWRITE
 DATASET (file name)
FROM (data area)
[LENGTH (data value)]
[SYSID (system name)]


After a record has been READ from a file with the UPDATE option, and the program has updated fields within the record, the REWRITE command can be issued to rewrite the record to the file and complete the update option.
Notice that the options that affect the WRITE command also affect the REWRITE command with the exception of the RIDFLD and KEYLENGTH options. These options are unnecessary because, if used at all they must be specified in the READ UPDATE command which must precede a REWRITE.
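Putting READ UPDATE and REWRITE together, an update sequence might be sketched like this (the file and field names are illustrative):

01  WS-CUST-REC  PIC X(200).
01  WS-CUST-KEY  PIC X(6).
01  WS-REC-LEN   PIC S9(4) COMP VALUE 200.
    ...
*   Retrieve the record and hold it under exclusive control.
    EXEC CICS READ DATASET('CUSTMAS')
         INTO(WS-CUST-REC) RIDFLD(WS-CUST-KEY)
         LENGTH(WS-REC-LEN) UPDATE
    END-EXEC
*   ... modify fields within WS-CUST-REC ...
*   REWRITE completes the update; RIDFLD is not needed because
*   the preceding READ UPDATE already identified the record.
    EXEC CICS REWRITE DATASET('CUSTMAS')
         FROM(WS-CUST-REC) LENGTH(WS-REC-LEN)
    END-EXEC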
 

The DELETE Command

DELETE
 DATASET (file name)
[RIDFLD (record key area)]
[KEYLENGTH (data value)]
[SYSID (system name)]


The DELETE command can be used in two ways to delete records. One way is to issue a DELETE command using the DATASET and RIDFLD options.
The other, safer, method is to issue a READ UPDATE command prior to deleting the record. The program can then inspect fields within the record to help determine whether the record should be deleted. If a record is deleted after being retrieved with a READ UPDATE command, the DELETE command may be issued without the RIDFLD option. RIDFLD is unnecessary in this instance because it was already specified on the READ UPDATE command.
The KEYLENGTH and SYSID options need to be issued only when the record to be deleted resides in a file on another system.


The UNLOCK Command


UNLOCK
 DATASET (file name)
[SYSID (system name)]


When a record is read with the UPDATE option, exclusive control for that record remains in effect until the record is either rewritten or deleted or until the transaction is terminated.
If once the record has been read, it is determined that an update is not necessary, exclusive control should be released from the record so that it can be accessed by other transactions.
The UNLOCK command releases the program's exclusive control over a record.
 

Writing a Browse Program
A browse transaction reads and displays multiple records from a file in a single transaction. Browse programs are usually coded to allow the user to continue browsing the file by pressing enter.
All browse programs contain the following Cics commands:
* STARTBR - Initiates the browse by establishing the key of the first record to be read.
* READNEXT - Reads the first and all subsequent records in a browse
* ENDBR - Terminates the Browse.


The STARTBR Command


STARTBR
 DATASET (file name)
RIDFLD (data area)
[GTEQ / EQUAL]
[GENERIC]
[KEYLENGTH (data value)]
[SYSID (system name)]


Most of these options are the same as for the READ command, except that STARTBR does not have INTO or LENGTH options. These options are unnecessary because STARTBR does not actually read a record into the program. It merely sets up a starting record key from which the READNEXT command works. Also note that, unlike the READ command, STARTBR takes GTEQ as a default option.
When a STARTBR command is issued with the GENERIC option the transaction is known as a generic browse.


The READNEXT command


READNEXT
 DATASET (file name)
INTO (data area)
RIDFLD (data area)
[LENGTH (data area)]
[KEYLENGTH (data value)]
[SYSID (system name)]


The READNEXT command reads just one record each time it is executed, so a browse program must include a loop that issues the READNEXT command multiple times. The loop should terminate after enough records have been read to fill up the screen, or when some earlier end point (such as end of file) has been reached.
The READNEXT command looks a lot like the READ command. Unlike STARTBR, it does include the INTO and LENGTH options, since records are actually being read into the program. Unlike READ, READNEXT does not have the GTEQ and GENERIC options, because those options establish the starting browse key, which is taken care of by STARTBR. The KEYLENGTH and SYSID options are required if the file to be read resides on another system.
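A minimal browse loop combining STARTBR, READNEXT and ENDBR might be sketched like this (the file name, field sizes and the screen limit of 15 are illustrative):

01  WS-KEY    PIC X(6).
01  WS-REC    PIC X(200).
01  WS-LEN    PIC S9(4) COMP.
01  WS-COUNT  PIC S9(4) COMP VALUE 0.
PROCEDURE DIVISION.
    MOVE LOW-VALUES TO WS-KEY
*   Position the browse at the first record >= WS-KEY.
    EXEC CICS STARTBR DATASET('CUSTMAS')
         RIDFLD(WS-KEY) GTEQ
    END-EXEC
*   Read records until the screen is full.
    PERFORM UNTIL WS-COUNT = 15
        MOVE 200 TO WS-LEN
        EXEC CICS READNEXT DATASET('CUSTMAS')
             INTO(WS-REC) RIDFLD(WS-KEY) LENGTH(WS-LEN)
        END-EXEC
        ADD 1 TO WS-COUNT
    END-PERFORM
    EXEC CICS ENDBR DATASET('CUSTMAS')
    END-EXEC

A real program would also test each READNEXT for the ENDFILE condition (via HANDLE CONDITION or RESP) and leave the loop early when it occurs.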
 

The ENDBR Command

ENDBR
 DATASET (file name)
[SYSID (system name)]


The ENDBR command has two options, DATASET and SYSID. They perform the same functions in ENDBR as they do in all the other file handling commands.

Using the COMMAREA

After issuing the ENDBR command, you must issue a RETURN command to return control to CICS. Use the COMMAREA option of the RETURN command in case the user wishes to continue the browse. It is common practice to store the last record key returned by the READNEXT command in a COMMAREA field. When the user resumes execution of the program, the key stored in DFHCOMMAREA may be moved to the data area referenced by the RIDFLD option of the STARTBR command.
 

The READPREV Command

A browse that reads a file in descending key sequence is called a reverse browse. A reverse browse program is coded using the READPREV command instead of the READNEXT command. The options of the READPREV command are identical to those of the READNEXT command. A reverse browse program is coded just like a normal browse, except that, after an initial mandatory READNEXT command is issued, the program loop executes the READPREV command instead of the READNEXT command.

The RESETBR Command

The RESETBR command combines the effects of an ENDBR and a STARTBR. Its options are identical to those of STARTBR. The RESETBR command can be used for a skip-sequential browse, in which the starting key of the browse is reset one or more times during the same transaction.


 Exceptional Conditions

Like any CICS command, a file handling command may raise an exceptional condition when executed. Your programs should include either HANDLE CONDITION commands or the RESP option, plus corresponding routines to handle the more common conditions that may occur.

CICS/FEPI



What is FEPI?
FEPI stands for Front End Programming Interface. It is a terminal emulator implemented via a Programming Interface. FEPI is integrated into CICS, so if you have CICS (CICS/ESA V3.3 or above) then you already have access to FEPI. FEPI can communicate with transactions running on CICS or IMS systems.


What can I do with FEPI?
FEPI can be used to integrate your existing CICS applications into one system without needing to alter the existing systems.
 For example, if two companies merge, FEPI can be used to provide a single user interface to both of the merged companies' systems. As another example, suppose you want to access all of a customer's details from a number of different systems in one place: you could have a single screen where the user enters the customer's name, and the FEPI interface goes to, say, the Home Insurance system, the Motor Insurance system and the Accounts system to retrieve details from each. The information from each of these different systems is returned and is available for display on one screen. This could be useful, for example, for cross-selling: you might see that a customer has Motor Insurance but not Home Insurance, and ask the customer if they had considered buying your company's Home Insurance product!
You could also use FEPI to add new functions to a system. Say you want to keep an existing system stable, but want to add new functionality, you could write a program which would use FEPI to access the stable system, and access the programs which contain the new functionality when it is required.
As part of your company's Web enablement you could use FEPI to provide an interface between your CICS systems and your Web site. You could keep the existing CICS systems, but have a FEPI based system to handle enquiries from the Web. The FEPI programs would extract the required information from the existing system and return it to the Web server.
 


Further Reading
CICS Front End Programming Interface User's Guide (IBM SC33-1629-02)
The CICS Programmer's Guide to FEPI (McGraw Hill ISBN 0-07-7077793-8)

CICS Program Compilations under TS


CICS Transaction Server includes an ‘integrated translator component’. This means that you can compile programs to be used under CICS in a single job step: there is no longer a separate CICS translate step.

The CICS translate step was required to convert EXEC CICS commands into something that the language compiler could understand; in COBOL, for example, EXEC CICS commands were converted into CALLs. Once the translator had completed, the generated code was passed to the compiler, and finally the compiled code was passed to the link editor.

For CICS programs using DB2 you had to feed the source code first to the DB2 precompiler, then through the CICS translator, and then on to the compile and link-edit steps.

 The integrated translator in CICS TS combines the translate and compile into one. The integrated translator can handle CICS API commands, CICS SPI (System Programming Interface) commands, CICSPlex SM API commands and DL/I EXEC DLI commands. DB2 commands (EXEC SQL) are translated using the ‘SQL integrated coprocessor’.

You now only need two steps in your job, the Compile and the Link Edit.
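A skeleton two-step job might look something like this (a sketch only: the source, object and load library names are purely illustrative, some DD statements such as SYSPRINT are omitted, and most sites wrap all of this in a catalogued procedure):

```jcl
//COMPILE  EXEC PGM=IGYCRCTL,PARM='CICS,SQL'
//SYSIN    DD  DSN=MY.SOURCE(PROG1),DISP=SHR
//SYSLIN   DD  DSN=&&OBJ,DISP=(NEW,PASS),UNIT=SYSDA,
//             SPACE=(CYL,(1,1))
//*
//LKED     EXEC PGM=IEWL,COND=(4,LT)
//SYSLIN   DD  DSN=&&OBJ,DISP=(OLD,DELETE)
//SYSLMOD  DD  DSN=MY.LOADLIB(PROG1),DISP=SHR
```

The CICS and SQL compiler options are what invoke the integrated translator and the SQL coprocessor during the compile step.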

All the diagnostic messages now appear in one listing, and the compiler statement numbers are the same as they are in the original source code.

The combining of the translator and compiler now means it is possible to have CICS and DB2 commands inside copybooks. Previously the copybooks were not expanded until the compile step, so any CICS or DB2 Commands in a copybook would be missed by the translate and preprocessor phases.

 Another feature of using the integrated translator is that when you CALL another program you no longer have to pass DFHEIBLK and DFHCOMMAREA as arguments. These two areas are defined as GLOBAL in the outermost program and so are available to any called programs.

An important change has taken place with the Execute Interface Block (DFHEIBLK) definitions. All binary (COMP) fields are now defined as COMP-5. This means that they are unaffected by what you specify in the TRUNC compiler option, which has been the source of some problems in the past.

For more information visit http://www.ibm.com/software/ts/cics/v2 or see the IBM document ‘Application development improvements with CICS TS Version 2.2’.

CICS Storage Violations


CICS storage violations can seem to be among the most difficult problems to deal with when debugging. In the worst cases they can bring down CICS; they can also go undetected, which may lead to problems later. A violation that overwrites CICS control blocks or CICS storage will in most cases cause CICS to ‘fall over’, whereas a violation of user storage areas can go undetected and may never cause a CICS failure.
By understanding how CICS detects storage violations, you can make the task of debugging a storage violation easier.
When you request an area of storage in CICS, such as for the program’s commarea, CICS will add 16 bytes of storage to the area of storage. It adds 8 bytes at the start and 8 bytes at the end. CICS will then put a value in each of these areas. If the value changes then CICS will detect a storage violation! However, CICS will not necessarily detect the storage violation when it happens. This is because CICS will only check for a violation when the area of storage is freed up. So the storage violation could occur at the start of your program but not actually be detected until later, for example when the program terminates or calls another program.
The most common reasons for storage violations are programming errors. Common reasons are different length DFHCOMMAREAs when your program calls or is invoked by another program, or subscript errors causing data to be stored beyond the end of the storage area.
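One cheap defence against the mismatched-commarea case (a sketch; WS-COMMAREA-LAYOUT and the abend code 'CALN' are illustrative names) is to validate EIBCALEN before referencing DFHCOMMAREA:

```cobol
           IF EIBCALEN NOT = LENGTH OF WS-COMMAREA-LAYOUT
               EXEC CICS ABEND ABCODE('CALN') END-EXEC
           END-IF.
```

Abending immediately with a recognisable code is usually easier to debug than a storage violation detected long after the overwrite happened.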
It is also possible that another program has overwritten your storage; your program may simply be the one that detects that its storage has been overwritten. If you suspect that it is not your program that is at fault, try doing a CEMT SET PROGRAM NEWCOPY of your program.
As CICS has evolved, new methods of protecting storage have been added (from version 3.3 of CICS it has been possible to switch storage protection on in a CICS region). However, many sites still do not take advantage of them, perhaps because some older software requires that the storage protection options be switched off.
 If you switch on storage protection in a CICS region, you may find that you get many more storage violations in existing programs; this may simply be because those violations previously went undetected.

Multiple CICS Regions (MRO)


What is MRO ?
MRO (or Multiple Region Option) is the term used to describe a set of inter-linked CICS regions. Each region may perform a different function. For example, a File Owning Region (FOR) contains all the definitions for the files that will be used by CICS; when an action such as a READ or WRITE is requested, the request can be routed to the FOR. The CICS regions communicate with each other using interregion communication (IRC), and must be in the same MVS image or the same MVS sysplex to use MRO.
 

Why use MRO ?
In the early days of CICS, before MRO, all the definitions for files, terminals, programs and so on were stored in one CICS region. These definitions all take up CICS storage, so as the numbers of terminals, files etc. increase, the amount of storage available for running programs is reduced. This could often lead to problems; the CICS system might eventually grind to a halt and have to be restarted. The term for this kind of problem is Virtual Storage Constraint (VSC).

Using MRO can help to relieve VSC. Resource definitions can be stored in different CICS regions, linked together, to reduce the amount of storage required in any one region. Different resource types are generally grouped together in a particular region, which gives rise to the term “owning region”: a CICS region that has all the terminal definitions is called a Terminal Owning Region (TOR), and a region that contains the application programs is called an Application Owning Region (AOR).

Another reason for using MRO might be that you have separate development teams that require access to the same files or database but are developing different applications. You could set up two Application Owning Regions, both connected to the same File Owning Region: Team 1 develops its applications in AOR 1 and Team 2 in AOR 2, and if there are any problems with the applications in AOR 1, Team 2 can continue working in AOR 2.


Transaction Routing.
Transaction routing lets you run a transaction in any connected CICS system. When you enter the transaction id, the transaction may be routed to run in any one of the connected CICS regions.
 

Function Shipping.
Function Shipping lets your program access resources by 'shipping' requests to another CICS region. For example if your program requests access to a file, the file control request might be shipped to the File Owning Region. You can also ship requests for access to TSQs, TDQs and databases owned by another CICS region. When you write an application you do not need to know where the resource is located. The CICS resource definitions will specify where the required resources are.
 

Connecting to other Regions in MRO.
When you log on to CICS you will, in most cases, be logging on to an Application Owning Region. If you want to look at, say, the definition of a file in the FOR, you may find that you cannot see its details, or cannot change them, because the file is defined in another region. So how do you get into the FOR? Many sites use some form of session manager (NC-ACCESS, MULTSESS, Tubes, etc.) which will only let you log on to the AORs. To get round this, log on to the CICS region as you normally would, then use the CRTE transaction to route to another region.
e.g. CRTE SYSID=xxxx, where xxxx is the name of the region you want to go to (you can find the names of the connected regions using CEMT INQUIRE CONNECTION).
You then sign on using the CESN transaction. Once you have done all the work you want to, you return to the original CICS region by entering 'CANCEL'.




Submit CICS commands from a batch job

It is possible to submit commands to a CICS region from a JCL job stream. The JCL uses the MVS MODIFY command to execute the CICS commands.

The format of the modify command is :

F jname,cicscommand

 jname is the job name or taskid of the CICS region.
 cicscommand is the CICS command to be executed. 



 Note: in order for this to work, a console entry must have been defined in the CSD. For CICS running under MVS releases prior to SP 4.1 it will be CONSOLE(00); for CICS running under SP 4.1 or later it will be CONSNAME(INTERNAL).

 The following job shows you how to submit commands:


//CICSMOD1 JOB (acct-info),CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1)
//*
//*
//STEP01 EXEC PGM=IEFBR14
// F CICSREG1,'CEMT SET PROG(EBR001) NEW'
// F CICSREG1,'CEMT I TER'
//


You can omit the apostrophes around the command if you wish, but if there are sequence numbers at the end of the line a warning message is displayed on the console (the command will be executed nevertheless).

Coding CICS maps


CICS maps are created using special Assembler Language macros. It is not necessary to know much about assembler to create CICS maps.

 There are special screen painters available (e.g. IBM's SDF II) which take the hard work out of coding assembler macros; these screen painters will generate the necessary assembler macros to create your map.

 This page will show you how to code the assembler macros necessary to create a CICS map for your program.

 This page is intended as a quick guide only; many other options are available. If you want to know more, check out the CICS Application Programming Guide.
 

 Symbolic and Physical Maps
 Two terms commonly used when creating maps are the symbolic map and the physical map. The symbolic map is essentially a copy library member which allows you to refer to fields on the map from your COBOL (or PL/I or C or Assembler) program. The physical map is the code generated by the assembler that allows the system to display the map, i.e. the object module.

 Some Rules
 Before coding your map you must be aware of some rules for coding the assembler statements. There may seem to be a lot of rules, but don't worry: they are fairly simple to get your head round.

 Columns:
 Columns 1 to 71 are where you code your assembler statements.
 Column 72 is for a continuation marker if you need to continue a line.
 Columns 73 to 80 are for a line sequence number or a comment.

 Continuation Lines:
 Any statement can have only two continuation lines. The text of the continuation line must start in column 16.
 To continue a line you put any character (except blank) in column 72.
 

 Comments: Comment lines are indicated by placing an asterisk ('*') in column 1. A comment line can NOT be placed between continuation lines, and comment lines can NOT be continued.

 You cannot have blank lines in the assembler program.

 The Assembler Statement:
 The statement consists of four components: an optional name starting in column 1 (maximum length seven characters), the mandatory operation specifying the assembler instruction or macro, the operands which specify the parameters, and an optional comment.


 Creating a Map
 Once you have created your map, you need to run it through the assembler. The map must be assembled twice, with different parameters. On the first pass through the assembler you specify TYPE=DSECT; this creates a copy library member that you can copy into your CICS/COBOL program. On the second pass you specify TYPE=MAP; this creates an object module which is passed through the link editor (binder) to produce a CICS load library member.
 

 Coding the CICS Map
 There are normally three macros used when coding your map: DFHMSD, DFHMDI and DFHMDF.
 The DFHMSD macro defines a mapset.
 The DFHMDI macro defines a map within the mapset defined by DFHMSD.
 The DFHMDF macro defines a field within the map defined by DFHMDI.
 

Don't worry too much about the first two macros, they won't change much from map to map.
 The first statement in your program should be a DFHMSD macro statement, this will define the mapset.
 It will look something like this:
 

MAPSN DFHMSD TYPE=DSECT,                               X
             CTRL=FREEKB,DATA=FIELD,LANG=COBOL,        X
             MODE=INOUT,TERM=3270,TIOAPFX=YES
 


"MAPSN" is the name of the mapset to be created. "TYPE=" is used to specify whether a copybook member is to be generated (TYPE=DSECT) or an object library member is to be created (TYPE=MAP).
 "CTRL=" specifies the characteristics of the 3270 terminal
 "DATA=FIELD" specifies that data is passed as contiguous fields.
 "LANG=COBOL" specifies the source language for generating the copy library member.
 "MODE=INOUT" specifies that the mapset is to be used for both input and output.
 "TERM=" specifies the terminal type associated with the mapset.
 "TIOAPFX=YES" specifies that fillers should be included in the generated copy library member.
 

The next statement you should include is the DFHMDI macro statement to define the map characteristics.

 It will look something like this:

MAPNM DFHMDI COLUMN=1,DATA=FIELD,                       X
             JUSTIFY=(LEFT,FIRST),LINE=1,               X
             SIZE=(24,80)


 "MAPNM" is the name of the map.
 "COLUMN=1","LINE=1" and "JUSTIFY=(LEFT,FIRST)" establish the position of the map on the page.
 "DATA=FIELD" specifies that the data is passed as a contiguous stream.
 Most of the time the DFHMSD and DFHMDI operands will not need to be changed; the only changes will be the mapset name and the map name.
 Once you have coded these statements you can now get on with defining the fields that will appear in your map. (ie the important bit).


 We'll start out with a couple of sample definitions:


FNAME  DFHMDF POS=(1,5),LENGTH=10,                          X
              ATTRB=(UNPROT,BRT,FSET),                      X
              INITIAL='XXXXXXXXXX',PICIN='X(10)',           X
              PICOUT='X(10)',COLOR=RED
*
DOB    DFHMDF POS=(2,5),LENGTH=8,                           X
              ATTRB=(UNPROT,NORM,NUM),                      X
              INITIAL='00000000',PICOUT='9(8)'
 


First in the definition is the field name ("FNAME" and "DOB"), followed by the DFHMDF macro.
 "POS=(x,y)" specifies where on the screen the field is to be placed. x is the line and y is the column.
 "LENGTH=" specifies the length of the field to be generated.
 "ATTRB=" specifies a list of attributes for the field. UNPROT means you can type data into the field, BRT means the field intensity is BRighT, and NUM means the field is numeric only.
 "INITIAL=" specifies an initial value for the field.
 "PICIN=" and "PICOUT=" allows you to specify a picture clause for the field. This lets you specify editing characters such as Z to suppress leading zeros.
 "COLOR=" is used to define the colour of the field. Note that you must specify MAPATTS=COLOR on the DFHMDI macro to use the COLOR option.
 Once you have specified all the fields to be included on the map (the maximum number of fields is 1023) you must then indicate the end of the mapset. You do this with the DFHMSD macro and the operand TYPE=FINAL, like this:
 DFHMSD TYPE=FINAL
And that's it.


 Here's a quick example mapset definition (note this has not been tested):

 MAPS1 DFHMSD TYPE=&SYSPARM,MODE=INOUT,TIOAPFX=YES,LANG=COBOL, X
              TERM=3270-2
 MAP1  DFHMDI SIZE=(24,80),CTRL=FREEKB,MAPATTS=COLOR
 FNAME DFHMDF POS=(1,5),LENGTH=10,                             X
              ATTRB=(UNPROT,BRT,FSET),                         X
              PICIN='X(10)',PICOUT='X(10)'
 LNAME DFHMDF POS=(1,25),LENGTH=10,                            X
              ATTRB=(UNPROT,BRT,FSET),                         X
              COLOR=RED,                                       X
              PICIN='X(10)',PICOUT='X(10)'
 CRLIM DFHMDF POS=(3,5),LENGTH=8,                              X
              ATTRB=(UNPROT,NUM,NORM,FSET),                    X
              PICOUT='ZZZZ9.99'
       DFHMSD TYPE=FINAL
 END


 Generating the Load module (Physical Map) and Copybook (Symbolic Map)

Once you have defined your map, you must then 'assemble' it. The code you have created has to be passed through the assembler twice: once to create the load module and once to create the copybook layout. As mentioned above, to create the load module the TYPE parameter of the DFHMSD macro must be set to MAP, and to create the copybook it must be set to DSECT. To make life easier you can specify TYPE=&SYSPARM; this lets you pass MAP or DSECT as a parameter when you assemble the map.
 There is an IBM-supplied JCL procedure called DFHMAPS which can be used to assemble the maps; it handles both passes for you, passing the appropriate value for TYPE when required. Alternatively, your site may have its own way of assembling maps.



 Using the Generated Copybooks (Symbolic Map)
 When your map has been generated you will have a new copybook. If you take a look at it you will see that there are two main (level 01) definitions. The first is the map name suffixed by an 'I', for input; the second is the map name suffixed by an 'O', for output. The output definition is used when displaying the map via the SEND MAP CICS command; the input definition is used when reading the screen back via RECEIVE MAP. Within the input definition you will see all the fields you specified. For each of these fields there will also be a length field suffixed by 'L', and an attribute field suffixed by 'A'. There will also be a flag field suffixed by 'F', which isn't normally used.
 You can change the colour and protection attributes by changing the 'A' field. To do this, copy the DFHBMSCA copybook into working storage; it contains definitions for attribute values, e.g. DFHBMASK to set a field to autoskip. Then move the required attribute to the 'A' field.
The length field can be used to set where the cursor will be positioned when the map is displayed. Simply move -1 to the length field, prior to the SEND MAP.
 The input part of the copybook definition should be used as the receive area after the user has pressed Enter (or a PF key): RECEIVE MAP ... INTO ...
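Putting the attribute and cursor techniques together (a sketch which assumes a map field called FNAME on map MAP1 in mapset MAPS1, as in the examples above; the copybook-generated names FNAMEA and FNAMEL follow from the field name):

```cobol
      * In WORKING-STORAGE:
           COPY DFHBMSCA.
      * In the PROCEDURE DIVISION:
           MOVE DFHBMASK TO FNAMEA  *> protect FNAME (autoskip)
           MOVE -1 TO FNAMEL        *> position the cursor at FNAME
           EXEC CICS SEND MAP('MAP1') MAPSET('MAPS1')
                     FROM(MAP1O) ERASE CURSOR
           END-EXEC.
```

The CURSOR option with no value tells BMS to use symbolic cursor positioning, i.e. the field whose length field was set to -1.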



Arrays

You can use BMS macros to define an array of fields with the same name; however, with the assembler macros you can only specify a horizontal array. To do this you use the OCCURS= parameter on the DFHMDF macro.
If you want a vertical array you must define each field in the array separately and then edit the generated symbolic map (copybook) yourself.
 When using a screen painter such as IBM’s SDF II you can specify the array direction, and the screen painter will generate the correct symbolic map without the need for you to edit the generated code.
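For example, a horizontal array of three ten-character fields might be coded like this (a sketch, untested; the field name ITEM is illustrative):

```asm
ITEM   DFHMDF POS=(5,1),LENGTH=10,OCCURS=3,                    X
              ATTRB=(UNPROT,NORM)
```

In the generated copybook the field appears with an OCCURS clause, so your program refers to the occurrences by subscript, e.g. ITEMI(2).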



Common CICS abends


Some of the more common CICS abends are briefly described below. These are only brief descriptions and do not cover all possible causes; check the appropriate IBM manual(s) for full details.

ASRA
This is the most common abend in CICS. It indicates a program check exception, roughly equivalent to getting an S0Cx abend (such as an S0C7) in a batch program. Check for spaces in a packed-decimal numeric field and for changes to the file and record layouts.

AEIx and AEYx
There are numerous abends that start with AEI or AEY. They indicate that an exceptional condition has occurred and that neither HANDLE CONDITION nor RESP (nor NOHANDLE) is in use. The last character indicates the exact condition:
AEI0 (zero) indicates a program-not-found (PGMIDERR) condition.
AEI9 indicates a MAPFAIL condition.
AEIO (letter O) indicates a duplicate key (DUPKEY) condition.
AEIN indicates a duplicate record (DUPREC) condition.
AEID indicates an end-of-file (ENDFILE) condition.
AEIS indicates a file-not-open (NOTOPEN) condition.
AEIP indicates an invalid request (INVREQ) condition.
AEY7 indicates that you are not authorised to use a resource (NOTAUTH).

 See the CICS Messages & Codes Manual for more details.

AICA
 This abend usually occurs when your program is looping. There are CICS parameters that determine how long a task can run without giving up control: the ICVR parameter in the SIT specifies a value for all tasks running in the region, or you can specify a RUNAWAY value when you define a transaction. A looping program may not always get an AICA abend, because the timer can be reset when certain events occur, e.g. some EXEC CICS commands may reset it to zero.

 ATCH and ATCI
These abends indicate that the task was purged. The task may have been purged by someone issuing a CEMT command, or by CICS itself because the deadlock timeout limit was exceeded or because there was not enough virtual storage available to run all the tasks in the region (short on storage).

APCT
A program was not found or was disabled. Check the transaction definition to see if the program name was misspelled. Check that the program is enabled. Check that the program is in an appropriate Load Library (ie one defined to the current CICS system).

AKCP and AKCT
These abends indicate that a timeout of the task occurred. This may be due to a deadlock.

AFCA
A dataset could not be accessed because it was disabled.

ABM0
The specified map was not found in the specified mapset. Check that you have not misspelled the map name.

DB2: Indicator Variables with Host Variables


Indicator variables are small integers that you can use to:
  • Indicate whether the values of associated host variables are null
  • Verify that the value of a retrieved character string has not been truncated
  • Insert null values from host variables into columns.
Retrieving Data into Host Variables: If the value for the column you retrieve is null, DB2 puts a negative value in the indicator variable. If it is null because of a numeric or character conversion error, or an arithmetic expression error, DB2 sets the indicator variable to -2.

If you do not use an indicator variable and DB2 retrieves a null value, an error results.

When DB2 retrieves the value of a column, you can test the indicator variable. If the indicator variable's value is less than zero, the column value is null. When the column value is null, the value of the host variable does not change from its previous value.

You can also use an indicator variable to verify that a retrieved character string value is not truncated. If the indicator variable contains a positive integer, the integer is the original length of the string.

You can specify an indicator variable, preceded by a colon, immediately after the host variable. Optionally, you can use the word INDICATOR between the host variable and its indicator variable. Thus, the following two examples are equivalent:

EXEC SQL                     EXEC SQL
     SELECT PHONENO               SELECT PHONENO
     INTO :CBLPHONE:INDNULL       INTO :CBLPHONE INDICATOR :INDNULL
     FROM DSN8510.EMP             FROM DSN8510.EMP
     WHERE EMPNO = :EMPID         WHERE EMPNO = :EMPID
END-EXEC.                    END-EXEC.

You can then test INDNULL for a negative value. If it is negative, the corresponding value of PHONENO is null, and you can disregard the contents of CBLPHONE.

When you use a cursor to fetch a column value, you can use the same technique to determine whether the column value is null.
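In COBOL the test might look like this (a sketch; WS-PHONE-DISPLAY is an illustrative output field, and INDNULL would be declared as a halfword binary item, PIC S9(4) COMP):

```cobol
           IF INDNULL < 0
               MOVE 'UNKNOWN' TO WS-PHONE-DISPLAY
           ELSE
               MOVE CBLPHONE TO WS-PHONE-DISPLAY
           END-IF.
```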

Inserting Null Values into Columns Using Host Variables: You can use an indicator variable to insert a null value from a host variable into a column. When DB2 processes INSERT and UPDATE statements, it checks the indicator variable (if it exists). If the indicator variable is negative, the column value is null. If the indicator variable is greater than -1, the associated host variable contains a value for the column.

For example, suppose your program reads an employee ID and a new phone number, and must update the employee table with the new number. The new number could be missing if the old number is incorrect, but a new number is not yet available. If it is possible that the new value for column PHONENO might be null, you can code:

EXEC SQL
     UPDATE DSN8510.EMP
     SET PHONENO = :NEWPHONE:PHONEIND
   WHERE EMPNO = :EMPID
END-EXEC.

When NEWPHONE contains other than a null value, set PHONEIND to zero by preceding the statement with:

MOVE 0 TO PHONEIND.

When NEWPHONE contains a null value, set PHONEIND to a negative value by preceding the statement 
with:

MOVE -1 TO PHONEIND.