Scenario:
A distributed application puts a high volume (200-800 per second) of request messages onto ten shared mainframe queues (MQ v8.0) via SVRCONN channels. Each of these non-triggered queues has multiple CICS transactions processing the requests and putting responses onto another set of ten shared queues, which are then read by distributed processes. All requests are retrieved by MSGID and all messages are non-persistent. CICS syncpoints are issued every 200 messages.
Goal:
Improve throughput by eliminating as much of the MQ wait time as possible.
Proposal:
Have the application issue all puts and gets outside of syncpoint. The messages are non-persistent, the data is stale after a few seconds, and the application/business is not concerned about message rollback. My understanding is that putting/getting within syncpoint holds the message until a commit/syncpoint is issued: a message put under syncpoint is not visible to getters, and a message got under syncpoint remains locked on the queue. By modifying the application to put/get outside of syncpoint, this delay would be avoided. Is this correct?
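For reference, here is a minimal C MQI sketch of what the proposed change would look like from the distributed side. The queue manager and queue names are placeholders, error handling is trimmed, and it assumes the responding transaction puts the reply carrying the request's MSGID (adjust to CORRELID matching if that is your convention):

#include <string.h>
#include <cmqc.h>

int main(void)
{
    MQHCONN  hConn;
    MQHOBJ   hReq, hRep;
    MQLONG   cc, rc, replyLen;
    MQCHAR48 qmName = "QMGR1";              /* placeholder QMgr name   */
    MQOD     reqOd = {MQOD_DEFAULT};
    MQOD     repOd = {MQOD_DEFAULT};
    MQMD     md    = {MQMD_DEFAULT};
    MQPMO    pmo   = {MQPMO_DEFAULT};
    MQGMO    gmo   = {MQGMO_DEFAULT};
    char     request[] = "request payload";
    char     reply[4096];

    MQCONN(qmName, &hConn, &cc, &rc);

    strncpy(reqOd.ObjectName, "REQUEST.Q1", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &reqOd, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
           &hReq, &cc, &rc);

    strncpy(repOd.ObjectName, "REPLY.Q1", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &repOd, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
           &hRep, &cc, &rc);

    /* Put outside syncpoint: the message is visible to getters as soon
       as the MQPUT returns; nothing is held pending a commit.          */
    md.Persistence = MQPER_NOT_PERSISTENT;
    pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID
                | MQPMO_FAIL_IF_QUIESCING;
    MQPUT(hConn, hReq, &md, &pmo, (MQLONG)sizeof(request), request,
          &cc, &rc);
    /* md.MsgId now holds the generated MSGID used to match the reply  */

    /* Get outside syncpoint: the reply is removed from the queue
       immediately rather than staying locked until commit/backout.    */
    gmo.Options = MQGMO_NO_SYNCPOINT | MQGMO_WAIT
                | MQGMO_FAIL_IF_QUIESCING;
    gmo.WaitInterval = 5000;                /* 5 seconds               */
    MQGET(hConn, hRep, &md, &gmo, (MQLONG)sizeof(reply), reply,
          &replyLen, &cc, &rc);

    MQCLOSE(hConn, &hReq, MQCO_NONE, &cc, &rc);
    MQCLOSE(hConn, &hRep, MQCO_NONE, &cc, &rc);
    MQDISC(&hConn, &cc, &rc);
    return 0;
}

One caveat: the syncpoint defaults differ by platform. On z/OS, MQPUT/MQGET default to in-syncpoint, while on distributed platforms they default to out of syncpoint, so the CICS transactions in particular would need the explicit MQPMO_NO_SYNCPOINT/MQGMO_NO_SYNCPOINT options.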
Note that these transactions are long running (with built-in delays); as a result, the SMF statistics are aggregated, so we cannot accurately isolate the MQ wait times.
My question really is just to confirm that my understanding of syncpoint processing in the above regard is correct.
Thanks.
Allen