First, the epilogue…
This blog note spurred a paper, “Selective choice ‘feathering’ with XCHANs”, which was accepted for CPA-2013. It is at  to read, but you might perhaps read this blog note first. It is probably a prerequisite.
“In the modern world, many programs spend much of their time waiting. Unfortunately, so do their users. To reduce both kinds of waiting time, it is necessary for the programs to wait faster. Wait times can be significantly reduced by waiting for two or more events concurrently – either waiting for both of them, or (more effectively) waiting just for one of them, whichever happens first. (p1 notes)
I will explain how this is achieved by an example, taken from real life. Most days I go to work by bus. There are two bus routes that pass between my home and my office: a city route, number 9, and a country route, number 7. Let P stand for a decision to take the more interesting city route; and let Q stand for the decision to take the country route, nice in good weather. If I decide P , I find my average waiting time at the bus stop is twenty eight minutes. If I decide Q , I have to wait on average the same time. The busses arrive independently at random, at widely varying intervals. Since the journey time is only ten minutes, I really have a strong motive to reduce my time spent waiting at the bus stop. There is one remarkable way that I can do this: suppose I do not decide in advance on which bus to take, but just take the first bus that arrives; then I find that my average wait time goes down to just eleven minutes! I get to work much quicker, and I have done it, not by speeding up the bus, but simply by waiting twice as fast, or more! You can tell I am waiting faster, because I’m waiting less than half as long. I will now spend a little time to explain this paradox of waiting times, and show how the same technique applied inside a computer can make concurrent programs wait faster.” (p2 notes)
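Hoare's figures (twenty-eight minutes down to eleven) come from buses arriving at widely varying intervals. With memoryless (Poisson) arrivals the gain is exactly a halving, since the minimum of two independent exponential waits is itself exponential with half the mean; heavier-tailed intervals, as in the quote, do even better. A small simulation sketch, assuming a hypothetical mean wait of twenty minutes per route:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// simulate draws n independent waits for two bus routes, each with an
// exponentially distributed wait of the given mean, and returns the
// average wait when committed to one route versus taking the first bus.
func simulate(n int, mean float64) (avgAlone, avgFirst float64) {
	rng := rand.New(rand.NewSource(1)) // fixed seed for repeatability
	var sumAlone, sumFirst float64
	for i := 0; i < n; i++ {
		a := rng.ExpFloat64() * mean // wait for route 9
		b := rng.ExpFloat64() * mean // wait for route 7
		sumAlone += a
		sumFirst += math.Min(a, b)
	}
	return sumAlone / float64(n), sumFirst / float64(n)
}

func main() {
	alone, first := simulate(1_000_000, 20.0)
	fmt.Printf("committed to one route: %.1f min\n", alone)
	fmt.Printf("first of both routes:   %.1f min\n", first)
}
```

With the exponential assumption the second average comes out at very nearly half the first – waiting “twice as fast”.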
But… who is waiting “fastest” – buffered or non-buffered comms?
Obviously, in an asynchronous system all messages being waited for come from a queue. In a synchronous system there is no message queue. Does this mean that a system built with asynchronous messages waits faster than a system based on synchronous communication between processes?
It does not depend on how the messages are delivered, only on how the waiting mechanism is designed. An asynchronous queue is often designed as a message pool in the waiting process, so each bus (using the same metaphor as Hoare), as it arrives at the bus stop, leaves a message that it has arrived (or rather: that it is about to stop). The first-in, first-out (FIFO) buffer makes it obvious that a waiter will get the first bus first.
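In Go terms, this asynchronous message pool can be approximated by a buffered channel, which is strictly FIFO: the first bus to leave its message is the first one the waiter receives. A minimal sketch, with invented route numbers:

```go
package main

import "fmt"

func main() {
	stop := make(chan int, 10) // the waiter's message pool at the bus stop
	for _, route := range []int{7, 9, 13} {
		stop <- route // each bus leaves an "I have arrived" message
	}
	// The FIFO buffer guarantees the first bus is taken first.
	fmt.Println("first bus:", <-stop)
}
```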
The synchronous case I am talking about has a process wait for the first event in a selective choice. In occam it is called ALT, in Google Go select/case, and in Ada select. Unix select is not really the same, but for this discussion it could be.
What is waited for is a message from each of the buses; each bus could be represented by a process. The selective choice is able to wait for all the buses at the same time. Whether the bus-stop message arrives through a buffered channel or an unbuffered (synchronised) channel makes no difference: the choice is triggered when the first message arrives. Just like in the asynchronous case.
So, a process in a selective choice could wait for a synchronous channel (or rendezvous), a buffered channel (both one-to-one) or a shared channel (many-to-one). The last case most closely resembles the asynchronous case I discussed above.
Aside 1: In the most usual asynchronous case, the waiter would have to inspect all buses that arrive at the stop, including those with numbers that will not take him in the right direction. There could be 10 buses stopping, and he may have to discard up to 8 messages before one of the two possible bus numbers arrives. A selective choice would often have a boolean expression guarding each component (in Go this is simulated with a pattern, see ). A selective choice is then able to listen only to the set of buses going in the right direction, so during the waiting it may burn fewer machine cycles to take the first. In some languages it is also possible to enable a sub-set of the many-set of a many-to-one channel.
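Go has no boolean guards on select cases, but the usual pattern is to substitute nil for a channel whose case you want switched off – a receive from a nil channel can never be selected. A sketch with invented route numbers:

```go
package main

import "fmt"

// enabled mimics an occam-style boolean guard: returning nil
// switches the case off, since a nil channel is never selected.
func enabled(ok bool, ch chan int) chan int {
	if ok {
		return ch
	}
	return nil
}

func main() {
	right := make(chan int, 1) // buses going our way
	wrong := make(chan int, 1) // buses going the other way
	right <- 9
	wrong <- 13

	select {
	case r := <-enabled(true, right):
		fmt.Println("boarding bus", r)
	case r := <-enabled(false, wrong): // guard false: case disabled
		fmt.Println("boarding bus", r)
	}
}
```

The disabled case is never taken, even though its channel already holds a message.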
Aside 2: The messages from the uninteresting bus processes may still be pending after the correct one has arrived first, depending on the paradigm used. Messages from blocked senders would have to be picked up eventually, whereas attempted sends with output guards available would enable the sender to flush an uninteresting passing.
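Go does not have output guards in the occam/XCHAN sense, but a non-blocking send – select with a default branch – lets a bus process approximate the flush: offer the arrival message, and drive on if nobody at the stop wants it. A sketch under that assumption:

```go
package main

import "fmt"

// announce offers an arrival message without blocking: if no
// receiver is ready, the bus simply drives on and the offer
// is withdrawn (flushed) rather than left pending.
func announce(stop chan<- int, route int) bool {
	select {
	case stop <- route:
		return true // a waiter picked the message up
	default:
		return false // nobody listening; withdraw the offer
	}
}

func main() {
	stop := make(chan int) // unbuffered: a send needs a ready receiver
	if !announce(stop, 13) {
		fmt.Println("bus 13 passed without stopping")
	}
}
```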
Aside 3: In some languages (like Erlang), a receiver is also able to listen for certain contents of a message and filter out uninteresting messages. I have not discussed this case here. See ERLANG – Expressions. (This paragraph is an update 11Sep2022)
So, whether messages are passed buffered or not does not matter here – only how the waiting mechanism is implemented. If you wait for the first, and the first is guaranteed to get there first, that is all Tony Hoare would ask for. With none of these mechanisms will the first arrive second. (It could, if the second had priority, but that is another matter.) So, neither buffered nor non-buffered communication waits faster than the other.
- Hoare on Wikipedia: http://en.wikipedia.org/wiki/C.A.R._Hoare
- Concurrent programs wait faster, by Tony Hoare, at MSR TechFest 2003 (Microsoft Research Technology Festival(?))
* Hoare on Microsoft Research, Cambridge, UK: here
* 11Sep2022: Since the original 2013 URLs are by now dead, I have been bold enough to place my PDF from 2013 here…
* .. and the notes here (they are needed)
* The PowerPoint source file is also available from the Wayback Machine, like here
- Priority select in Go – a blog note of mine
- Selective choice ‘feathering’ with XCHANs will take you to paper and presentation at CPA-2013