When dealing with sequential DASD datasets,
MVS does some performance tuning behind your back. Instead of dealing
in one-block chunks, it deals in five-block chunks by default. This means
that for the following blocked datasets you actually move much larger
chunks of data per access:
BLKSIZE     BUFNO=5 (bytes read per access)
 22,000     110,000
  6,160      30,800
Thus when reading a dataset blocked at
22,000 bytes, MVS really works with 110,000-byte chunks by default. The
objective is to minimize the number of requests made to MVS's I/O
subsystem, since each request is an expensive operation.
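As a minimal sketch (the dataset name is hypothetical), a plain DD statement with no BUFNO coded gets this 5-buffer default:

//* No BUFNO coded: QSAM defaults to 5 buffers, so with
//* BLKSIZE=22000 each access moves roughly 110,000 bytes.
//INPUT    DD DSN=MY.TEST.DATA,DISP=SHR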
But are 5 buffers enough? In today's
environment, the answer is simple => NO. You should consider
coding more. Another factor comes into play here: if you exceed
multiples of 31 buffers (or 249,856 total bytes of buffer space), MVS
will break your request into two or more parts and do the I/O in parallel,
reducing the time required to run I/O-bound jobs. For
example, a job that read 100,000 240-byte records blocked at 24,000
took 8 seconds to run, but with 33 buffers it took only 6.5 seconds.
If the job did a lot of processing between requests for data records, the
elapsed time would drop even further!
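BUFNO is coded as a DCB subparameter on the DD statement. Here is a sketch of the 33-buffer case from the example above (dataset name hypothetical):

//* 33 buffers x 24,000-byte blocks = 792,000 bytes of buffer
//* space, well past the 31-buffer/249,856-byte split point,
//* so MVS breaks the request up and overlaps the I/O.
//INPUT    DD DSN=MY.BIG.DATA,DISP=SHR,DCB=BUFNO=33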
You must also recognize that increasing
BUFNO makes your job require extra MEMORY, which can itself have an
impact on your run time. In fact, coding too many buffers can slow things
down. Never code BUFNO < 5 unless you really understand the
implications, and finally do not over-code BUFNO: if your dataset is
only 100K in size, don't code 200K of buffer space.
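A hedged example of right-sizing (dataset name and numbers hypothetical): for a dataset of roughly 100K blocked at 6,160 bytes, a modest BUFNO already covers a healthy fraction of the whole dataset per access:

//* ~100K dataset, BLKSIZE=6160: 10 buffers = 61,600 bytes of
//* buffer space, plenty here. Coding BUFNO=33 would allocate
//* 203,280 bytes, about twice the dataset itself -- wasted.
//SMALL    DD DSN=MY.SMALL.DATA,DISP=SHR,DCB=BUFNO=10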
Also, don't bother with this parm for
SORTIN, SORTWK, or SORTOUT (sort datasets). Sort does its own specialized
I/O processing to reduce EXCPs, and coding BUFNO will only confuse it.