Once upon a time, CICS applications were single-purpose, self-contained units. Executing in solitude with the incredible throughput and reliability of the platform, the CICS application did its job – and still does, but the times they are a-changin'.

Our expectations of the humble CICS application have grown: we want more insight, more capacity, and smarter processing to maintain that competitive edge. Applications that began life taking input from a terminal now have to interface with data stores and programs across multiple systems, and invoke requests to external services.

“We are asking more from CICS applications
and we want the results sooner”

Perhaps you started off with one simple database of customer details, but over a number of application enhancements, system consolidations, and business acquisitions spanning several geographies, you now have to fetch data across all parts of the organisation to build up a complete view of the customer.
Or maybe a new business objective means you need to pull together a number of dispersed services provided across your organisation's systems.

We are asking more from CICS applications and we want the results sooner. New applications have aggressive response time goals to meet SLAs and improve client satisfaction, whilst existing apps have to be enhanced without affecting overall response times.

If any of that sounds familiar, then you’re in good company!

A day in the life of Alan

Alan, our senior CICS application programmer, has been asked to create a new app. The app will need to aggregate all the data for a customer, and run some checks to provide a tailored experience. This involves calling other systems in the organisation as well as web services provided by other organisations.

Alan’s aims are simple: to write quality code with a defect-free algorithm that meets the objectives in a timely manner.

Alan begins as usual, by writing some code to step through what needs to be done: call the first service and deal with the response, call the second service and deal with the response, then – you guessed it – call the third service and deal with the response.

The diagram below shows what happens when you call services sequentially.

[Diagram: calling services sequentially]

But what if you could execute those services in parallel? The application's response time is reduced because the waits for each response overlap instead of accumulating.

[Diagram: calling services in parallel]

Even if you don’t have multiple services to call, you can still make response time savings if you can continue processing in the caller thread while waiting for a service response.

By running the service request in a different transaction to the caller, the caller itself is free to continue doing something else, be that business logic or calling another service. Later on, when the result is required, the caller can request the results of the service. If the called service has returned by then, great – if not, then at least time spent waiting is reduced.

Using an asynchronous model

Asynchronous processing isn’t a new idea – but it might not be something Alan is used to. In the past, Alan has tried to create an asynchronous processing model for CICS, but struggled with complexity and unstable results.

In his home-grown asynchronous framework, Alan has a lot of possibilities to consider:

  • How to run work flows simultaneously
  • How to work out when something has completed and returned
  • How to get data to and from different transactions
  • What happens if the service replies late – or not at all
  • What if the calling program has returned before the called service replies?
  • Many other timing windows: services called in varying orders, results returned out of order, stale data
  • Tracking which services have replied, and managing completion states, abend codes, and clean-up tasks
The two new CICS API commands introduced in the CICS TS V5.4 open beta, EXEC CICS RUN TRANSID and EXEC CICS FETCH, enable simple, intuitive development of an asynchronous programming model in CICS. Using the new API commands, an application developer can run an asynchronous transaction in CICS, passing and consuming data with ease and reducing the challenges associated with home-grown asynchronous infrastructure.

    “two new asynchronous API commands introduced in CICS TS V5.4 open beta

    enable simple, intuitive development of an asynchronous programming model in CICS”

The EXEC CICS RUN TRANSID command starts a new child transaction that runs asynchronously to the parent transaction. It is similar in principle to an EXEC CICS START command, but RUN TRANSID returns a CHILD token that enables future coordination with the child and the data it passes back.

Coupled with RUN TRANSID, the EXEC CICS FETCH command lets you specify a previously run child transaction to fetch results from. CICS handles coordination between the parent and child transactions, and if a channel is specified, CICS containers can be used to easily pass data to and return data from the child transactions. The FETCH command also exposes the completion status of the child transaction for easy, transparent processing.
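As a minimal sketch of that flow (the transaction name 'CHLD', channel names, container names, and working-storage data names below are all hypothetical), a COBOL parent might look like this:

```cobol
      * Pass input to the child via a container, then run the child
      * asynchronously. RUN TRANSID returns a token in WS-CHILD-TOKEN.
           EXEC CICS PUT CONTAINER('CUSTNO') CHANNEL('SERV-CHAN')
                FROM(WS-CUSTNO)
           END-EXEC
           EXEC CICS RUN TRANSID('CHLD') CHANNEL('SERV-CHAN')
                CHILD(WS-CHILD-TOKEN)
           END-EXEC

      *    ... the parent is free to do other work here ...

      * Later, retrieve the child's results. FETCH CHILD waits for
      * the child to complete if it has not done so already.
           EXEC CICS FETCH CHILD(WS-CHILD-TOKEN)
                CHANNEL(WS-REPLY-CHAN) COMPSTATUS(WS-STATUS)
           END-EXEC
           IF WS-STATUS = DFHVALUE(NORMAL)
              EXEC CICS GET CONTAINER('RESULT')
                   CHANNEL(WS-REPLY-CHAN) INTO(WS-RESULT)
              END-EXEC
           END-IF
```

The CHILD token returned by RUN TRANSID is what ties the later FETCH back to the right child; the CHANNEL data-area on FETCH receives the name of the channel the child returned its containers on.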

    The CICS asynchronous API commands remove many of the maintenance, management, error and edge-case issues previously associated with asynchronous processing. They enable application developers to concentrate on efficient business logic rather than infrastructure code.

    For a simple example (including passing and fetching data to a child transaction) see the article “How to use the CICS Asynchronous API Commands”.
    To find out more visit the CICS Knowledge Center.

    Try it yourself in the CICS TS V5.4 open beta – and let us know what you think!

    7 comments on “Introducing the New CICS Asynchronous API!”

    1. Michael Erichsen July 25, 2016

      Hi Pradeep

      This is very exciting.

      But as I read the documentation then FETCH waits to return, until the child has ended. Does this mean that I have to FETCH all my children sequentially, in which case I can only handle the simple scenario of waiting until everybody has ended? Then a wish for enhancement would be to be able to wait for the first ending child, and also for a certain subset of children.

      Br Michael Erichsen,
      Xact Consulting

      • PradGohil July 25, 2016

        Hi Michael
        Thanks for taking an interest and feeding back.
        Yes, for the current open beta the FETCH is ‘blocking’. So when you issue the command, it will wait for the child to complete (assuming it has not already completed).
        There are some variations that you can have over a simple scenario – for example you can issue the FETCH commands in a different order (perhaps ordering them for expected quickest responders). The FETCH command is also only needed if you want to check on the child’s completion. In your business logic you could choose to not bother fetching the details of a particular child… in which case CICS would still do the tidy up for the abandoned task and channel.

        Having a “non-blocking” FETCH, and also waiting for the first child to reply, aligns with other feedback and discussions we’ve been having. These are items that we hope to look at in future open beta drops.

        The subset of children is an interesting point. The main objective is to present concurrency in a natural way to the application developer, but also to get out of their way where possible, by making the API simple to use.
        I think that maintaining lists… or (where the conversation seems to progress to) logic operators such as “fetch child-A and child-B or child-C”… is better left to the application's business logic.
        It would be good to know if this sounds reasonable to you, or if you have a particular situation in mind (feel free to follow up directly if you wanted to discuss a particular business scenario in more detail).

        • Michael Erichsen July 26, 2016

          Thanks.
          I think that non-blocking would be sufficient and uncomplicated.
          Leaving the rest to business logic sounds right.
          One thing I think a lot about is a fuzzy logic-UIMA style-Watson like architecture, where you send off a number of parallel requests (typically JSON in these times) and are aware that you cannot achieve completeness nor full consistency. So you accept to have less than all the response within a reasonable period of time and then let the received responses “vote” or do some other logic on them (fastest, cheapest, best, etc). That should definitely be in the application scope, not in the infrastructure.
          Will wait for a future beta drop before starting to experiment.
          Br Michael

          • Hi Michael
            Thought you might be interested to know that (amongst other enhancements) in the latest open beta we have added support for non-blocking EXEC CICS FETCH commands. Simply use the NOSUSPEND parameter on the FETCH command.
            Another enhancement is the addition of the FETCH ANY command, which will return the results of any completed child task. Useful for when you have more than one child and you wish to process the results as soon as they return.
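A sketch of the pattern described in this reply – polling for whichever child finishes first, without blocking – might look like the following (all data names are hypothetical):

```cobol
      * FETCH ANY returns results from any completed child; with
      * NOSUSPEND, a COMPSTATUS of NOTFINISHED means no child has
      * completed yet, so the parent can carry on and retry later.
           EXEC CICS FETCH ANY(WS-ANY-TOKEN)
                CHANNEL(WS-REPLY-CHAN) COMPSTATUS(WS-STATUS)
                NOSUSPEND
           END-EXEC
           EVALUATE WS-STATUS
              WHEN DFHVALUE(NORMAL)
                 EXEC CICS GET CONTAINER('RESULT')
                      CHANNEL(WS-REPLY-CHAN) INTO(WS-RESULT)
                 END-EXEC
              WHEN DFHVALUE(NOTFINISHED)
      *          no child has completed yet - do other work and retry
                 CONTINUE
           END-EVALUATE
```

WS-ANY-TOKEN receives the token of whichever child completed, so the parent can tell which result it is processing.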

    2. Michael Erichsen January 12, 2017

      Hi Prad
      I have noticed, and I am installing the latest beta in this very moment (SMP/E ACCEPT running right now). 😀
      Looking forward to try it.

      Br Michael

      • Awesome!
        If there was a ‘like’ button, I would have clicked it 🙂

        • Michael Erichsen January 12, 2017

          This is fun. Program PARENT runs a child program 3 times in parallel with a zero in the container.
          Each of them adds 1 and returns the container.
          PARENT does three times FETCH ANY NOSUSPEND, and is left with just one 😀
          I always want my bank to do this when I withdraw money.

          In other words: It looks like just what we could need.
