This post originated from an RSS feed registered with .NET Buzz
by Udi Dahan.
Original Post: MSMQ scale out question
Feed Title: Udi Dahan - The Software Simplist
Feed URL: http://feeds.feedburner.com/UdiDahan-TheSoftwareSimplist
Feed Description: I am a software simplist. I make this beast of architecting, analysing, designing, developing, testing, managing, deploying software systems simple.
This blog is about how I do it.
It appears that the questions are getting more concrete around my Autonomous Services article. This one comes from Eric, who asks:
"Udi,
Your article caught my attention because we are currently trying to figure out the "infrastructure implementation" piece of the puzzle in an MSMQ cluster scenario. I didn't understand the term "remote transactional reads"; can you describe it as it relates to this architecture?
The problem we've encountered is how do we make it easy for our ops team to just add/remove servers from the cluster, and have the load redistribute itself properly. MSMQ tied to virtual IPs by itself doesn't seem smart enough. We're toying with both HW load balancing, and/or some NLB/MOM/AppCenter interactions (with programming layers to make it work.)"
Eric, the current version of MSMQ lacks remote transactional receive capability - meaning that if your application tries to perform a Receive operation, within a transaction, on a queue located on a remote machine, the operation will fail. The next version of MSMQ is supposed to "fix" this.
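To make the limitation concrete, here is a minimal sketch in C# against the System.Messaging API (the machine and queue names are made up for the example). Under the current version of MSMQ, the remote receive inside the transaction throws a MessageQueueException instead of returning a message:

```csharp
using System;
using System.Messaging;

class RemoteTransactionalReceive
{
    static void Main()
    {
        // Hypothetical remote private queue, addressed by format name.
        MessageQueue queue =
            new MessageQueue(@"FormatName:DIRECT=OS:OtherServer\private$\orders");
        queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });

        MessageQueueTransaction tx = new MessageQueueTransaction();
        tx.Begin();
        try
        {
            // Receiving from a REMOTE queue inside a transaction is not
            // supported by the current version of MSMQ - this call fails.
            Message msg = queue.Receive(TimeSpan.FromSeconds(5), tx);
            Console.WriteLine("Received: " + (string)msg.Body);
            tx.Commit();
        }
        catch (MessageQueueException ex)
        {
            Console.WriteLine("Remote transactional receive failed: " + ex.Message);
            tx.Abort();
        }
    }
}
```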
Why would you want such a feature? Well, if you put a single queue on a clustered server (for availability) and had many other servers using that queue as their input queue, you'd have the ability to add servers at runtime to handle increases in load. However, since the handling of a message from a queue often requires a transaction, the current version of MSMQ won't support this out of the box.
What you could do is have some sort of dispatcher application that sends messages on to the other servers. When a server is ready to receive a message (i.e. not under heavy load), it would send the dispatcher a message saying just that. The dispatcher would store the return address (the ResponseQueue header of that message) so that when a work message arrives, it could send it on to that server.
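A minimal sketch of what the server (worker) side could look like, again in C# with System.Messaging; the queue paths, the "READY" label and the message layout are all hypothetical:

```csharp
using System;
using System.Messaging;

class Worker
{
    // Hypothetical queue paths - both queues are assumed to be transactional.
    const string LocalInputQueuePath = @".\private$\workerInput";
    const string DispatcherReadyQueuePath =
        @"FormatName:DIRECT=OS:DispatcherServer\private$\dispatcherReady";

    static void Main()
    {
        MessageQueue inputQueue = new MessageQueue(LocalInputQueuePath);
        inputQueue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });

        MessageQueue dispatcherQueue = new MessageQueue(DispatcherReadyQueuePath);

        while (true)
        {
            // Tell the dispatcher "I'm ready for more work"; the ResponseQueue
            // header carries the address of this worker's local input queue.
            Message ready = new Message("READY");
            ready.ResponseQueue = inputQueue;
            dispatcherQueue.Send(ready, MessageQueueTransactionType.Single);

            // The receive is LOCAL, so it can take part in a transaction.
            using (MessageQueueTransaction tx = new MessageQueueTransaction())
            {
                tx.Begin();
                Message work = inputQueue.Receive(tx);  // blocks until dispatched
                Process((string)work.Body);             // do the actual work
                tx.Commit();                            // removed from the queue only on success
            }
        }
    }

    static void Process(string body)
    {
        Console.WriteLine("Processing: " + body);
    }
}
```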
In this scenario, each server would have its own input queue and would know about the dispatcher. The dispatcher would not have to know about the other servers directly - it would just store the return addresses of the messages it receives. The result is that each message gets sent transactionally to exactly one server, which, in turn, transactionally receives it from its local queue and processes it.
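And a matching sketch of the dispatcher (queue names again hypothetical): it keeps no list of servers, only the return addresses it pulls off the "ready" messages, and it receives and forwards each work message inside a single transaction so that the message ends up on exactly one server's queue:

```csharp
using System;
using System.Collections.Generic;
using System.Messaging;

class Dispatcher
{
    static void Main()
    {
        // Hypothetical local transactional queues: one for the workers' "ready"
        // announcements, one for the incoming work to be distributed.
        MessageQueue readyQueue = new MessageQueue(@".\private$\dispatcherReady");
        MessageQueue workQueue  = new MessageQueue(@".\private$\dispatcherWork");
        readyQueue.MessageReadPropertyFilter.ResponseQueue = true;

        Queue<MessageQueue> availableWorkers = new Queue<MessageQueue>();

        while (true)
        {
            // No worker is available - wait for a "ready" message and remember
            // the return address found in its ResponseQueue header.
            if (availableWorkers.Count == 0)
            {
                Message ready = readyQueue.Receive(MessageQueueTransactionType.Single);
                availableWorkers.Enqueue(ready.ResponseQueue);
                continue;
            }

            // Receive the next work message and re-send it to one worker inside
            // the SAME transaction, so it is neither lost nor duplicated.
            using (MessageQueueTransaction tx = new MessageQueueTransaction())
            {
                tx.Begin();
                Message work = workQueue.Receive(tx);
                MessageQueue worker = availableWorkers.Dequeue();
                worker.Send(work, tx);
                tx.Commit();
            }
        }
    }
}
```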
I hope that answers your question, Eric. If you'd like more information, feel free to follow up.