
Optimizing Instant Payments Performance: Batch Payments and Message Compression for MQ Based Queues

September 21, 2022

by Matera



The success of the Pix instant payments scheme in Brazil is well-documented. While Pix leverages REST APIs to exchange ISO 20022 messages, another payment scheme in Brazil called TED (similar to ACH) uses IBM MQ queues.

FedNow is also using IBM MQ queues. Given our experience with IBM MQ queues from processing TED transactions, we know that a burst of payments can turn the queues into a bottleneck. One large client processed 500,000 TEDs in a single business day, sending each payment as a separate message, and the system nearly ground to a halt.

There are ways to optimize the sending of messages to prevent these bottlenecks from occurring. What follows are details related to batch payments and message compression to optimize the FedNow instant payment scheme.

Batch Payments vs. Single Payments

The FedNow instant payments rail plans to allow only a single payment per message at launch. Banks will send one payment at a time to FedNow rather than bundling multiple payments into the same message.

Since November 2020 when Pix launched in Brazil, financial institutions have had the option to send multiple payments in the same message. This enables them to do things like send just one message for a company’s payroll or to allow customers to schedule future payments and then bundle these requests daily to send to the Central Bank in one payment message.

As part of these batch payments, Pix solution providers also figured out a way to prevent duplicate payments (e.g. idempotence). The key was finding a field where a unique identifier could be associated with each payment.

When sending multiple payments in the same message, the Message Identification field can't carry the universally unique identifier (UUID) used to prevent duplicate payments, because that field describes the message as a whole. There's no way to indicate which UUID belongs to which payment.

For Pix, the "EndToEndId" tag in the pacs.008 message was chosen as the field to carry the UUID for each individual payment within the message. Payment messages can then be re-sent without creating duplicate transactions.
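The deduplication logic described above can be sketched as follows. This is a minimal illustration, not a production implementation: the XML fragment uses pacs.008-style element names (CdtTrfTxInf, PmtId, EndToEndId) but omits namespaces and the many required fields of a real message, and the in-memory set stands in for a durable store of processed identifiers.

```python
import xml.etree.ElementTree as ET

# Simplified batch fragment: three payments, where the third repeats
# the EndToEndId of the first (e.g. a re-sent payment).
BATCH_XML = """\
<Batch>
  <CdtTrfTxInf><PmtId><EndToEndId>E001</EndToEndId></PmtId></CdtTrfTxInf>
  <CdtTrfTxInf><PmtId><EndToEndId>E002</EndToEndId></PmtId></CdtTrfTxInf>
  <CdtTrfTxInf><PmtId><EndToEndId>E001</EndToEndId></PmtId></CdtTrfTxInf>
</Batch>"""

def dedupe_payments(xml_text, seen):
    """Return EndToEndIds not yet processed, recording them in `seen`."""
    fresh = []
    for tx in ET.fromstring(xml_text).iter("CdtTrfTxInf"):
        e2e = tx.findtext("PmtId/EndToEndId")
        if e2e not in seen:
            seen.add(e2e)
            fresh.append(e2e)
    return fresh

processed = set()
print(dedupe_payments(BATCH_XML, processed))  # ['E001', 'E002']
print(dedupe_payments(BATCH_XML, processed))  # [] -- a re-sent batch is a no-op
```

Because each payment carries its own identifier, an entire batch can be retransmitted after a timeout or failure and only the previously unseen payments are processed.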

If FedNow were to enable batch payments, it would facilitate easier implementation of use cases such as payroll and scheduling future payments. And, our experience from Brazil confirms that batch payments can be implemented in a secure way.

Benefits of message compression

Whether FedNow allows single or batch payments, enabling message compression can significantly improve network traffic performance.

In Pix, whether sending a batch of payments or a single message, participants have the option of sending the raw XML or compressing it into a gzip file. For single messages, compressing the XML to gzip can more than double network capacity. For batch messages with hundreds of payments, compression can increase network capacity by 5x or more.
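The gains come from how repetitive ISO 20022 XML is: the same tags recur for every payment in a batch, which gzip exploits. The sketch below uses a hypothetical, simplified payload (the element names mimic pacs.008 but real messages carry many more fields); the exact ratio will vary, but the effect is easy to observe.

```python
import gzip

# Hypothetical batch of 500 payments; tag structure repeats per payment,
# which is what makes the XML highly compressible.
payload = "<Batch>" + "".join(
    f"<CdtTrfTxInf><PmtId><EndToEndId>E2E-{i:06d}</EndToEndId></PmtId>"
    f"<IntrBkSttlmAmt Ccy='USD'>10.00</IntrBkSttlmAmt></CdtTrfTxInf>"
    for i in range(500)
) + "</Batch>"

raw = payload.encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw):,} bytes, gzip: {len(compressed):,} bytes, "
      f"ratio: {len(raw) / len(compressed):.1f}x")

# Compression is lossless: the receiver recovers the exact XML.
assert gzip.decompress(compressed) == raw
```

On the receiving side, the message is decompressed before parsing, so compression is transparent to the payment-processing logic; only the bytes on the wire (and in the queue) shrink.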

Conclusion

If batch payments and message compression are enabled in the architecture of FedNow, they can reduce network traffic, prevent messages from piling up in the queue, and, with idempotent payment identifiers, protect against duplicate transactions.