In this requirement, we need to send a single file to multiple targets.
In this design pattern, a single ESB process handles file transfer to multiple destinations. The flowchart is given above. We use a file transfer tracking table to record the status of the file transfer for each target. The process should check this table and skip any target to which the file has already been sent; this matters when we rerun the process after an exception occurs. A minimal sketch of this loop is shown below.
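Purely as a sketch, assuming a SQLite-backed tracking table and a hypothetical send_file() call standing in for the actual ESB transport adapter, the fan-out loop could look like this; targets already marked "delivered" are skipped, so the process is safe to rerun after an exception:

```python
import sqlite3

# Minimal tracking table: one row per (filename, target) pair.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS file_transfer_tracking (
        filename    TEXT NOT NULL,
        file_target TEXT NOT NULL,
        status      TEXT NOT NULL,
        PRIMARY KEY (filename, file_target)
    )
""")

def send_file(filename, target):
    """Hypothetical transport call (FTP/SFTP/MQ put); replace with the real adapter."""
    print(f"sending {filename} to {target}")

def transfer_to_all_targets(filename, targets):
    """Send one file to every target, skipping targets already marked 'delivered'."""
    for target in targets:
        row = conn.execute(
            "SELECT status FROM file_transfer_tracking "
            "WHERE filename = ? AND file_target = ?",
            (filename, target),
        ).fetchone()
        if row and row[0] == "delivered":
            continue  # already sent to this target; skip on rerun
        send_file(filename, target)
        conn.execute(
            "INSERT OR REPLACE INTO file_transfer_tracking "
            "(filename, file_target, status) VALUES (?, ?, 'delivered')",
            (filename, target),
        )
        conn.commit()

transfer_to_all_targets("orders_20240101120000.csv", ["TargetA", "TargetB", "TargetC"])
```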
The file transfer tracking table can have the following fields; some of them are optional. This generic table can be reused across all ESB file transfer processes.
Program name: Name of the program. This is applicable for large initiatives where multiple projects fall under one program.
Project name: Name of the project under the program.
Event name: The event name, message name, entity name, etc. can be stored here.
Process name: Name of the process transferring the file.
Filename: The unique file name. The source normally generates a unique name by suffixing a timestamp or by some other method.
Messageid: The messageid uniquely identifies the contents of a file. We can use this field to check whether a file's content is a duplicate: if a file with that messageid has already been processed, we can ignore it and skip further processing. This can be the primary key of the table.
File Source: Source system from which the file is sent.
File Target: Destination system to which the file is to be sent.
File Sent date: Timestamp of when the file was sent to the target.
Status: Delivery status of the file. We can store a value such as "delivered", as well as intermediate statuses such as "picked up", "archived", etc.
FileTransferid: This can be the primary key of the table. Alternatively, we can use the messageid field as the primary key, in which case this attribute is not needed.
This table can be used in combination with an audit table. A sketch of the full table and the duplicate check on messageid is given below.
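Purely as an illustration (the column names and SQLite types are assumptions, not a prescribed schema), the generic tracking table and a duplicate check on messageid could look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Generic tracking table with the fields described above; several are optional.
conn.execute("""
    CREATE TABLE IF NOT EXISTS file_transfer_tracking (
        file_transfer_id INTEGER PRIMARY KEY AUTOINCREMENT,
        program_name     TEXT,
        project_name     TEXT,
        event_name       TEXT,
        process_name     TEXT,
        filename         TEXT NOT NULL,
        messageid        TEXT NOT NULL,  -- identifies the file contents; used for duplicate checks
        file_source      TEXT,
        file_target      TEXT NOT NULL,
        file_sent_date   TEXT,
        status           TEXT            -- e.g. 'picked up', 'archived', 'delivered'
    )
""")

def is_duplicate(messageid):
    """Return True if a file with this messageid was already processed."""
    row = conn.execute(
        "SELECT 1 FROM file_transfer_tracking WHERE messageid = ?", (messageid,)
    ).fetchone()
    return row is not None
```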
Benefit: A single process can handle file transfer to multiple targets.
Disadvantage: This pattern is suitable only when the number of targets is small (maybe 2 to 3). The more targets there are, the more complex the process becomes. If new targets are added or the logic for a particular target changes, the existing process must be modified, which requires regression testing. Because the process is complex, maintenance will be expensive.