
Common Error Table


When an error occurs, exception-handling processes notify the support team, raise tickets in Remedy, and often use log4j to log error messages on the server. In addition to these, it is good practice to have a common error table in which errors are logged. It helps us analyze system health and integrate directly with other tools.

The common error table can have the following fields.

Error key: This is the primary key of the table. This can be a sequence.

Program Name: Name of the program. This is applicable where there are multiple projects under a single program.

Project Name: Name of the project under the program.

Process Name: Name of the process in which the error occurred.

Event Name: Name of the event/message/entity.

MessageId: The MessageId uniquely identifies the message and can be used for reconciliation. This field is optional.

Source: Source System name.

Target: Target System name.

Error Code: This field contains the error code.

Error Message: Detailed error message.

Error Summary: Brief description of the error.

Create-timestamp: Timestamp when the error occurred.

Incident number: The error can be linked to the Remedy ticket number when a ticket is raised automatically in Remedy.

Connector name: Name of the connector causing the error. This field is optional.

Attribute1, Attribute2, Attribute3: You can store any other important information in these fields, such as IDs or numbers, to help troubleshoot issues.
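As a minimal sketch, the fields above can map onto a plain JDBC insert. The table name COMMON_ERROR_LOG, the column names, and the Oracle-style ERROR_SEQ sequence below are illustrative assumptions, not a prescribed schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

/** Minimal sketch of writing a row into a common error table via JDBC. */
public class CommonErrorLogger {

    // Hypothetical table/column names; ERROR_SEQ assumes an Oracle-style sequence for the key.
    private static final String INSERT_SQL =
        "INSERT INTO COMMON_ERROR_LOG "
      + "(ERROR_KEY, PROGRAM_NAME, PROJECT_NAME, PROCESS_NAME, EVENT_NAME, MESSAGE_ID, "
      + " SOURCE_SYSTEM, TARGET_SYSTEM, ERROR_CODE, ERROR_MESSAGE, ERROR_SUMMARY, CREATE_TS) "
      + "VALUES (ERROR_SEQ.NEXTVAL, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";

    public void logError(Connection conn, String program, String project, String process,
                         String event, String messageId, String source, String target,
                         String errorCode, String errorMessage, String errorSummary) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setString(1, program);
            ps.setString(2, project);
            ps.setString(3, process);
            ps.setString(4, event);
            ps.setString(5, messageId);
            ps.setString(6, source);
            ps.setString(7, target);
            ps.setString(8, errorCode);
            ps.setString(9, errorMessage);
            ps.setString(10, errorSummary);
            ps.setTimestamp(11, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}
```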

Storing the Messages for Retry

Sometimes we need to store messages in a database for retry when they cannot be pushed to the target on the first attempt.

Here again we need to think about which fields the table that stores the messages should have. Otherwise you will keep adding fields afterwards, which may require changes to code and documents. So decide up front which fields are required alongside the message payload. Typically you should have the following fields.

Message ID: This is the primary key of the table. It is a unique id retrieved from the message payload. Each message should have a message id that uniquely identifies it. If you receive a message with the same message id, it is a duplicate and you should reject it. Sometimes the payload does not carry such a unique message id; in that case, try to identify a field or combination of fields that uniquely identifies a message and can act as the key.

Program: Name of the program. This is applicable for large initiatives where there are multiple projects under a single program. For a small project, the program name and project name can be the same.

Project: Name of the project under the program.

Process: Name of the process producing the message.

Message Type: Message or event type. You can use it to correlate the messages to be processed by a particular process. Although you can also correlate by process name, it is good to have this field.

Source System: The source system sending the message to the destination.

Destination System: The target system to which the message is to be sent.

Message-Payload: The message payload. This can be text or CLOB data.

Received-timestamp: Timestamp when the message was received.

Last-retried timestamp: Timestamp of the most recent attempt to push the message to the target.

Retry-flag: A Boolean flag denoting whether the message should still be retried. Once a message is successfully pushed to the destination, the retry flag should be set to “N”.

Retry count: Number of times delivery of the message has been attempted. A retry may not succeed on the first attempt, so increment this counter every time you attempt to push the message to the target.

Message status: Whether the message was sent successfully. Store “success” if the message was delivered to the destination; otherwise store “failure”.

Error Code: This is optional. You may store the error code to record the reason for the retry failure.

Error Description: This is optional. You can store detailed error messages.

Also remember to call the common audit process from the retry process. The retry process should be a common, reusable module invoked by the main process that tries to push the message to the destination.
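A minimal sketch of such a retry process, assuming illustrative table and column names (MESSAGE_STORE, RETRY_FLAG, RETRY_COUNT, LAST_RETRY_TS) and a hypothetical pushToTarget call standing in for the actual delivery step:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

/** Sketch of a retry process that re-pushes stored messages to the target. */
public class MessageRetryJob {

    public void retryPending(Connection conn) throws Exception {
        String select = "SELECT MESSAGE_ID, MESSAGE_PAYLOAD FROM MESSAGE_STORE WHERE RETRY_FLAG = 'Y'";
        String update = "UPDATE MESSAGE_STORE SET RETRY_FLAG = ?, RETRY_COUNT = RETRY_COUNT + 1, "
                      + "MESSAGE_STATUS = ?, LAST_RETRY_TS = ? WHERE MESSAGE_ID = ?";
        try (PreparedStatement sel = conn.prepareStatement(select);
             ResultSet rs = sel.executeQuery()) {
            while (rs.next()) {
                String messageId = rs.getString("MESSAGE_ID");
                String payload = rs.getString("MESSAGE_PAYLOAD");
                boolean delivered = pushToTarget(payload);   // hypothetical delivery call
                try (PreparedStatement upd = conn.prepareStatement(update)) {
                    upd.setString(1, delivered ? "N" : "Y");               // stop retrying on success
                    upd.setString(2, delivered ? "success" : "failure");
                    upd.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
                    upd.setString(4, messageId);
                    upd.executeUpdate();
                }
                // The common audit process would be called here with the outcome.
            }
        }
    }

    private boolean pushToTarget(String payload) {
        // Placeholder for the actual call to the destination system (web service, JMS, etc.).
        return false;
    }
}
```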

Common Audit Log Process


Keeping an audit log is very important in message-oriented middleware. Sometimes the standard audit log features available in ESB tools are not enough and we need to build a custom audit log. We should create a common, reusable process that is called by the main process to create the audit log.

There should be a standard table defined to hold the audit log. However, it becomes slow if every process accesses the table directly to create audit records. The best practice is to create an audit queue to which messages are published, with a subscriber that consumes from the audit queue and writes the audit log to the table.

The audit table should contain metadata about the messages. We can generate various reports from the audit log, and it gives complete insight into how the ESB is performing.

Sometimes we create a custom audit table without giving due importance to its fields, and later we keep adding new ones.

Typically the audit table should have the following attributes.

Audit Key: Primary key of the table. This should be incremented by one every time a record is inserted.

Program Name: Name of the program. This is applicable where there are multiple projects under a single program.

Project Name: Name of the project under the program.

Process Name: Name of the process producing audit log.

Event Name: Name of the event/message/entity.

MessageId: The MessageId uniquely identifies the message. This can be used for reconciliation purposes.

Payload: This is optional. We do not encourage storing the payload in the database; it increases the size of the audit table rapidly and may degrade performance. Also, if the payload contains sensitive data, you should not store it as plain text; encrypt the payload instead (see the sketch after this list).

Source: Source System name.

Target: Target System name.

Error Code: This field contains the error code.

Error Message: Detailed error message.

Status: Success or failure status of the message, in terms of whether it was transferred successfully from source to target.

Received Timestamp: Timestamp of when the message was received.

Sent Timestamp: Timestamp of when the message was sent/committed.

Log Timestamp: Timestamp of when the audit record was logged.

Server Name: In a clustered environment it comes in handy to store which server processed the message; you can then monitor each server's message-processing performance.

Attribute1, Attribute2, and Attribute3: You should have some optional attributes in the table, which you can use to store additional attributes of the messages.
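As referenced above for the Payload field, sensitive payloads should be encrypted before they are stored. A minimal sketch using standard JCE AES-GCM; key management (where the SecretKey comes from) is deliberately left out and would normally be a key store or vault:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

/** Sketch of encrypting a message payload before it is written to the audit table. */
public class PayloadEncryptor {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // The SecretKey is assumed to come from a key store or vault, not generated here.
    public static String encrypt(String payload, SecretKey key) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] cipherText = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));

        // Prepend the IV so the payload can be decrypted later when needed.
        byte[] out = new byte[iv.length + cipherText.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(cipherText, 0, out, iv.length, cipherText.length);
        return Base64.getEncoder().encodeToString(out);
    }
}
```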

This audit log can be used not only for message transfer but also for other patterns such as file transfer and bulk data transfer.

It is better to accumulate all the data and call the audit log process once, at the end of the main process.

You can have a common sub-process, or you can post the data directly to the audit log queue.

We can implement a queue-based audit log with two processes.

First process

The main process calls the audit publisher. This process validates the messages and writes them to the audit queue.

Second process.

This is a subscriber that consumes messages from the audit queue and writes them to the database table.

You may also send the messages directly to the audit queue, in which case you do not need a publisher process. You then need to validate the messages before sending them to the audit queue; otherwise the subscriber may reject them.
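A rough sketch of the publisher side, assuming a JNDI-registered connection factory and an audit queue under the made-up names jms/ConnectionFactory and jms/AuditQueue, with the audit metadata carried as message properties so the subscriber can map them to table columns:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

/** Sketch of the audit publisher: validates and writes an audit record to the audit queue. */
public class AuditPublisher {

    public void publishAudit(String processName, String eventName, String messageId,
                             String source, String target, String status) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue auditQueue = (Queue) ctx.lookup("jms/AuditQueue");   // assumed JNDI names

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(auditQueue);

            TextMessage message = session.createTextMessage(status);
            // Audit metadata travels as message properties so the subscriber can map them to columns.
            message.setStringProperty("processName", processName);
            message.setStringProperty("eventName", eventName);
            message.setStringProperty("messageId", messageId);
            message.setStringProperty("sourceSystem", source);
            message.setStringProperty("targetSystem", target);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```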

Publish Subscribe Delayed Response

This is a publish-subscribe pattern variant.

In this case the integration process needs to send an acknowledgment to the source application once messages are delivered successfully to the target application. However, successful delivery cannot be determined from the first response sent by the target application. The target application may have its own message queue: it stores the message in that queue and sends a response to the caller before applying it to its database. This is the first response. So, to know whether the messages were actually applied and committed in the target application's database, you need to wait a couple of minutes for the final response.

You can have a batch process that periodically invokes a web service to get the final status of the messages from the target application.

The subscriber process reads the message queue and tries to push the messages to the target. It writes success or failure records to a status table.

The batch process reads the status table (populated by the subscriber process), invokes the web service to get the status, and updates the status code in the table. The table can have key fields such as message id, transaction id, batch number, or other business keys, and the web service can take any of these keys as a parameter to retrieve the status.

The source application can invoke a web service periodically to get the status. The integration layer should expose a web service that reads the table using the key values passed in the request and returns the status to the source application.

If the source application can provide a callback web service, the batch process can send the acknowledgement directly to the source application using that callback.
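A rough sketch of the batch status poller, assuming a MESSAGE_STATUS table populated by the subscriber and a hypothetical HTTP status endpoint on the target application that accepts the message id as a query parameter:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/** Sketch of the batch process that polls the target application for the final message status. */
public class DeliveryStatusPoller {

    private final HttpClient http = HttpClient.newHttpClient();

    public void pollPendingStatuses(Connection conn, String statusServiceUrl) throws Exception {
        String select = "SELECT MESSAGE_ID FROM MESSAGE_STATUS WHERE STATUS = 'PENDING'";
        String update = "UPDATE MESSAGE_STATUS SET STATUS = ? WHERE MESSAGE_ID = ?";
        try (PreparedStatement sel = conn.prepareStatement(select);
             ResultSet rs = sel.executeQuery()) {
            while (rs.next()) {
                String messageId = rs.getString("MESSAGE_ID");
                // Hypothetical status endpoint keyed by message id.
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(statusServiceUrl + "?messageId=" + messageId))
                        .GET()
                        .build();
                HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

                try (PreparedStatement upd = conn.prepareStatement(update)) {
                    upd.setString(1, response.body());   // e.g. COMMITTED or FAILED, as returned by the target
                    upd.setString(2, messageId);
                    upd.executeUpdate();
                }
            }
        }
    }
}
```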

Publish Subscribe using Messaging as a Service


This is another variant of the publish-subscribe design pattern.

Sometimes an organization offers “messaging as a service”, which means the source application can publish and subscribe messages directly to the message queue or topic. The messaging service should support the HTTPS protocol, and external applications should connect to it over HTTPS. A JMS connector can be used if the source application is on the intranet.

The target application can listen to the queue directly. But sometimes the target application does not have the capability to listen to the queue or to retrieve messages from it directly. In that case we need a subscriber process, which listens to the queue and pushes the messages to the target.

To implement this design pattern we need a subscriber process and a batch process to handle failed messages. The integration layer does not need any publisher process.

The subscriber and batch processes are the same as described in the main publish-subscribe design pattern.

Additionally, the subscriber process can validate the message. If a message is not valid, it may invoke a callback web service provided by the source application to send the acknowledgment.

The acknowledgement can be sent to the source application from the subscriber process or the batch process, as required.

Publish Subscribe using selector process


This is another publish-subscribe design pattern variant.

We can implement this design pattern using the JMS message selector API. The publisher publishes messages to a common queue. A selector process listens to that queue and sends the messages to destination-specific queues.

The selector process uses specific filter criteria to route the messages to the different destination queues.

 

We need at least four processes to implement this design pattern.

Publisher process: this is the same as described in the earlier section.

Selector process.

 

This process uses selection criteria to select messages from the common queue. It can be a batch process or a listener process.

The selector process selects messages from the common queue on the basis of the filter criteria and moves them to the destination-specific queue.

We can have multiple selector processes depending on the use case, for example a separate selector process for each destination. In that case governance, error handling, and monitoring of the processes are easier.

Subscriber process: as described in the earlier section. Normally you should have a separate subscriber process for each destination.

Batch process to handle failed messages.

You should have a separate batch process to handle the failed messages for each destination; this gives you more control and granularity. If you can parameterize the connection URL, you can run the same batch job with different connection parameters. Normally you should have a single table to hold all the error messages.
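A sketch of one selector process using the JMS message selector API. The targetSystem message property, the JNDI names, and the queue names are assumptions; the publisher is expected to stamp each message with the property the selector filters on:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

/** Sketch of a selector process that routes messages from a common queue to a destination queue. */
public class SelectorProcess {

    public void run(String targetSystem, String destinationQueueJndiName) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue commonQueue = (Queue) ctx.lookup("jms/CommonQueue");              // assumed JNDI names
        Queue destinationQueue = (Queue) ctx.lookup(destinationQueueJndiName);

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // The JMS selector filters on a message property set by the publisher.
            MessageConsumer consumer =
                session.createConsumer(commonQueue, "targetSystem = '" + targetSystem + "'");
            MessageProducer producer = session.createProducer(destinationQueue);
            connection.start();

            // Forward matching messages to the destination-specific queue.
            Message message;
            while ((message = consumer.receive(1000)) != null) {
                producer.send(message);
            }
        } finally {
            connection.close();
        }
    }
}
```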

Publish Subscribe using Topic


 

This is a variant of the publish/subscribe design pattern in which there are multiple subscribers for a single message. We need a JMS topic to implement it. Each subscriber is given its own copy of the messages. In the case of a durable subscription, messages are retained for a subscriber that is not available at a particular point in time.
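A minimal sketch of a durable topic subscriber; the client id, subscription name, and JNDI names are assumptions:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import javax.naming.InitialContext;

/** Sketch of a durable subscriber that receives its own copy of every published message. */
public class DurableTopicSubscriber {

    public void subscribe() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("jms/OrderTopic");    // assumed JNDI names

        Connection connection = factory.createConnection();
        connection.setClientID("order-subscriber-1");           // required for a durable subscription
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicSubscriber subscriber = session.createDurableSubscriber(topic, "order-subscription");
            connection.start();

            // Messages published while this subscriber was offline are retained and delivered here.
            Message message = subscriber.receive(5000);
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } finally {
            connection.close();
        }
    }
}
```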

 

When a zero-message-loss policy is required, it is convenient to use the virtual topic concept provided by some messaging providers. Messages are published to the topic but are forwarded to different queues according to predefined policies, and each subscriber process consumes the messages from its corresponding queue. This makes message tracking easier, and if the messages have persistent storage there will be zero message loss.

 

Publish Subscribe Design Pattern


This design pattern can be implemented using a JMS queue. In this exchange pattern, publishers send messages to the queue and subscribers get the messages from the queue.

We need three processes to implement this design pattern: a publisher process, a subscriber process, and an error-handling batch process.

Publisher process.

  1. The source application invokes this process through a web service.
  2. The publisher sends the source application's messages to the queue after successful validation.
  3. The publisher sends error messages to the source application if validation fails.
  4. The publisher commits the messages to the queue and sends a success response to the source application, or sends a failure response if the commit to the queue fails (see the sketch below).
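A sketch of this publisher step, using a transacted JMS session so that the success or failure response to the source application reflects whether the commit to the queue succeeded. The queue name and validation rule are assumptions:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

/** Sketch of the publisher process: validate, publish to the queue, and report success/failure. */
public class PublisherProcess {

    public String publish(String payload, String messageId) throws Exception {
        if (payload == null || payload.isEmpty()) {
            return "ERROR: validation failed";           // error response back to the source application
        }

        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/InboundQueue");   // assumed JNDI name

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            message.setStringProperty("messageId", messageId);
            producer.send(message);
            session.commit();                             // success response only after the commit succeeds
            return "SUCCESS";
        } catch (Exception e) {
            return "ERROR: could not commit message to queue";
        } finally {
            connection.close();
        }
    }
}
```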

Subscriber process.

  1. The subscriber process listens to the queue.
  2. The messaging system delivers the messages to the subscriber process.
  3. The subscriber enriches and transforms the messages and delivers them to the target application.
  4. Messages that could not be posted are sent to an error queue or error table. (You may use an error table instead of a queue to achieve fine-grained control.)

 

Batch Process.

  1. This process handles the failed messages that could not be posted to the target application.
  2. It is a scheduled process that reads the messages from the error queue and tries to post them to the destination.
  3. Messages that still cannot be posted are written back to the error queue.
  4. If the process supports transactions, the messages will remain in the error queue when they cannot be posted to the destination.

Use Case

This is an asynchronous design pattern variant. You should use this pattern if your integration layer has standard messaging support. It is better to use a queue instead of a database to store the messages, especially in a clustered environment.

Also, where there is a single publisher and multiple subscribers, you should use this pattern; handling that scenario with a database is not recommended.


Asynchronous Design Pattern


In this message exchange pattern, messages from the source application are processed asynchronously.

The source application sends the messages to the integration process. The integration process commits the messages to a database in the integration layer and sends a response (HTTP OK) back to the source application. Another integration process picks up the messages from the database and eventually sends them to the target application.

The response to the source application is normally HTTP 200 OK for success or HTTP 500 (or another error code) for failure. It is the responsibility of the integration layer to deliver the messages to the target application.

We can implement this pattern with two processes.

First Process

  1. The source application invokes the integration process.
  2. The process validates the messages and, after successful validation, commits them to the database.
  3. If validation is unsuccessful, it sends an HTTP error code to the source application.
  4. It sends HTTP OK or an HTTP error code depending on whether the commit succeeded or failed.

 

 

Second Process

  1. This is a scheduled batch process, or it can be a listener process that polls the database.
  2. This process picks up messages from the database, then validates, enriches, and transforms them and invokes the provider web service.
  3. This process deletes the record from the database after successful processing.
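A rough sketch of this second process, assuming a staging table named PENDING_MESSAGES and hypothetical enrichAndTransform / invokeProviderService calls standing in for the transformation logic and the provider web service:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/** Sketch of the second (batch/polling) process in the asynchronous pattern. */
public class AsyncDeliveryJob {

    public void deliverPending(Connection conn) throws Exception {
        String select = "SELECT MESSAGE_ID, PAYLOAD FROM PENDING_MESSAGES";
        String delete = "DELETE FROM PENDING_MESSAGES WHERE MESSAGE_ID = ?";
        try (PreparedStatement sel = conn.prepareStatement(select);
             ResultSet rs = sel.executeQuery()) {
            while (rs.next()) {
                String messageId = rs.getString("MESSAGE_ID");
                String payload = rs.getString("PAYLOAD");

                String transformed = enrichAndTransform(payload);
                boolean delivered = invokeProviderService(transformed);   // hypothetical web service call

                if (delivered) {
                    // Remove the record only after successful processing.
                    try (PreparedStatement del = conn.prepareStatement(delete)) {
                        del.setString(1, messageId);
                        del.executeUpdate();
                    }
                }
            }
        }
    }

    private String enrichAndTransform(String payload) {
        return payload;   // placeholder for enrichment and transformation logic
    }

    private boolean invokeProviderService(String payload) {
        return false;     // placeholder for the actual provider web service invocation
    }
}
```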

Use Case

Use this pattern when message delivery from source to target takes a significant amount of time. Also, for cloud-to-cloud communication, connection failures sometimes occur very frequently; in that situation it is better to choose the asynchronous design pattern. You may also choose this design pattern when the middleware needs a lot of complicated business logic for message enrichment and transformation.

Hybrid Design Pattern

Synchronous Message Processing (Request-Response Pattern) with Failed Requests Handled Asynchronously: a Hybrid Design Pattern.

This is mainly a synchronous design pattern. But sometimes some messages cannot be posted to the destination in real time due to connection failures or other reasons, so we need to process those messages asynchronously. This design pattern is partly synchronous and partly asynchronous, so you can call it a hybrid design pattern. The failed messages need to be stored temporarily in a message queue or database for retry. The ESB main process sends an intermediate status code to the requester application in such cases. A separate ESB process listens to the queue and tries to post the messages to the destination. The requester application needs to provide a callback web service.

The happy path is always synchronous; only the few messages that could not be posted are processed asynchronously.

Failed messages should be published to a queue so they can be posted in near real time. Since fast message processing is of foremost importance in a synchronous design pattern, we should preferably use a message queue instead of a database. If messages are stored in a database, we should write a batch process that periodically tries to post them. We can also use a database to persist the messages when we need fine-grained control.

For the synchronous main process, every process component such as duplicate checking, auditing, and error handling is applicable.

For the second process, which works asynchronously, we may not need a duplicate check, but we should have the other common sub-processes such as audit and error handling. We may store the enriched and transformed messages, or we may store the original messages sent from the source application; in the latter case we need to do message enrichment and transformation again, which may not be necessary and can be avoided.

We need to have two distinct processes to implement this design pattern.

Main Process: invoke the provider and send failed messages to the queue.

The requester invokes the integration process web service and waits for a response.

  1. The main integration process validates the request and, if the messages are OK, enriches and transforms them and invokes the provider service.
  2. The process sends the response in real time to the requester.
  3. The process sends the messages that could not be posted to the message queue (a sketch follows this list).
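A sketch of the main process flow referenced above: try the provider synchronously first, and only park the message on a retry queue (an assumed jms/RetryQueue) with an intermediate status when the real-time call fails. The enrichAndTransform and invokeProviderService calls are hypothetical placeholders:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

/** Sketch of the hybrid main process: synchronous delivery first, queue for async retry on failure. */
public class HybridMainProcess {

    public String handleRequest(String payload) throws Exception {
        String transformed = enrichAndTransform(payload);
        if (invokeProviderService(transformed)) {
            return "DELIVERED";                  // happy path: fully synchronous
        }

        // Could not be posted in real time: park the message on the retry queue
        // and return an intermediate status to the requester.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue retryQueue = (Queue) ctx.lookup("jms/RetryQueue");   // assumed JNDI name

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(retryQueue);
            producer.send(session.createTextMessage(transformed));
        } finally {
            connection.close();
        }
        return "ACCEPTED_FOR_RETRY";             // intermediate status; final result comes via callback
    }

    private String enrichAndTransform(String payload) {
        return payload;   // placeholder for enrichment and transformation
    }

    private boolean invokeProviderService(String payload) {
        return false;     // placeholder for the synchronous provider call
    }
}
```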

Process 2: process messages from the queue.

  1. Process 2 can be a scheduled process or a queue listener process.
  2. Process 2 gets the requester's messages from the queue or database.
  3. Process 2 invokes the provider web service to pass the messages.
  4. Process 2 passes the response to the requester via a callback web service.
  5. If messages could not be delivered, they are placed in the error queue, or the process can keep retrying them. If the messages are stored in a database, a message flag can be updated to indicate successful delivery or delivery failure.
  6. This process repeatedly tries to push the failed messages to the target application.

 

Use Case

Use this pattern when the requester wants to pass messages to the provider preferably in real time, but the integration layer is required to push failed messages to the destination asynchronously. A callback web service should be provided by the requester so that acknowledgements can be sent back to it.