Not so blocking after all

Started 8Sept2014, updated 01Sep2024 (Clos-network added to Wikipedia refs. User forums). This page is in group Technology and is about a facet of multi-threaded (or perhaps single-threaded) programming.

After having heard so often that «blocking is evil», and just as often the opposite – that it's a necessary side of life – I have finally unblocked on the semantics. It's both. And neither. If so, I'd have a conditional admission here..

..and a new phrase: «To yield on a channel» (or to *@%#)


Yielding, blocking and deadlock

I discovered while researching for my previous blog note (*) that the Wikipedia article on non-blocking algorithms (Wiki-refs below) says this:

Literature up to the turn of the 21st century used «non-blocking» synonymously with lock-free. However, since 2003, the term has been weakened to only prevent progress-blocking interactions with a preemptive scheduler. In modern usage, therefore, an algorithm is non-blocking if the suspension of one or more threads will not stop the potential progress of the remaining threads. They are designed to avoid requiring a critical section. Often, these algorithms allow multiple processes to make progress on a problem without ever blocking each other. For some operations, these algorithms provide an alternative to locking mechanisms.

I have finally (**) put together a figure that perhaps catches some of that and some of this:

Figure 2

It’s figure 2! No copyright if used as is.

I believe what the Wiki-text is saying is this; and I have added other situations:

  1. If «blocking» on some shared object pathologically forever stops the whole system from progressing, it is a deadlock. There is always more than one party to a deadlock. Any multi-threaded system may deadlock, but there may be rules to avoid it (or patterns, language primitives or paradigms). A deadlock is a cycle in the dependency graph. In the figure above I have tried to show it as a crossroads with only red traffic lights (and no rules to get out of it!). But,
  2. if «blocking» on some shared object in practice violates a specification by hindering required functionality, we have a system that blocks in the traditional sense. This is the middle part of the figure, which I have tried to show with a STOP sign. But,
  3. if «blocking» on some shared object still allows the rest of the system to progress, it is in fact non-blocking. I have suggested reusing the term to yield for this in the figure above, with a mandatory driving direction sign with modified colours (it was blue). Finally in this context,
  4. else «blocking» on a channel is perfectly fine if the other party is not ready to participate in the communication. Since a channel is also a shared resource (or rather a licence to communicate safely) this really is the same as point 3 above, so it is a yield (assuming no deadlock). See the sketch just below.
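
Point 4 in a minimal, runnable Go sketch (my own example; the names and values are made up): the sender arrives first at an unbuffered channel and «blocks» – that is, yields to the scheduler – while the rest of the program keeps progressing:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ch := make(chan int) // unbuffered: communication is a rendezvous

        // The sender arrives first and «blocks», i.e. yields to the
        // scheduler, using no CPU until a receiver becomes ready.
        go func() {
            ch <- 42
        }()

        // The rest of the system progresses while the sender waits:
        for i := 0; i < 3; i++ {
            fmt.Println("other work", i)
            time.Sleep(10 * time.Millisecond)
        }

        fmt.Println("received:", <-ch) // the rendezvous completes here
    }

Nothing here is «blocked» in the red sense: at every instant some goroutine can run.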

The term yield is traditionally used for threads or processes that run some kind of explicit yield statement, which is often a return to the scheduler. With my suggestion it may now also cover situations where «the paradigm decides» to do this yielding implicitly when needed.
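
In Go the explicit variant is runtime.Gosched (again a minimal sketch of mine); contrast it with the implicit yield of a channel receive:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        done := make(chan struct{})
        go func() {
            fmt.Println("the other goroutine runs")
            close(done)
        }()

        runtime.Gosched() // explicit yield: hand control back to the scheduler
        fmt.Println("main continues")
        <-done // implicit yield: wait here if the goroutine isn't finished yet
    }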

(A Python function becomes a generator if it contains a yield statement. To many programmers the word yield will be flavoured by this. The discussion here doesn’t preempt that bias. See [1] for a great explanation.)

My aha! experience here is the discovery that 4. above is not as «blocking» as I have thought (or been taught). Of course I know the exact semantics of channel communication; it's the non-literal use of the word blocking that I discuss here: the word as used doesn't literally describe anything that's «blocked». I have tried to explain this in many of my other blog notes, as well as in a Talk chapter, «Interactions with preemptive scheduler» or «deadlock»?, that I added at that same Wikipedia article's page. There I ask, as implied in the list above, whether the «blocking» of synchronous channel communication really is the same «blocking» that a «non-blocking algorithm» tries to avoid. Here's part of the Talk page:

It is possible to deadlock also with a non-preemptive scheduler, f.ex. cooperative scheduling. This is often used with CSP-type schedulers, where channel communication often is synchronous. This could mean that if such a system deadlocks (=cannot proceed) then «blocking on a channel» is not the same usage of the word «blocking» in this article’s context, since all other processes that are ready to run may then in fact be scheduled? «Blocking on a channel» (synchronous or full buffer if buffered) is standard procedure in a multi-process system. Any needed asynchronism is added by building on this primitive.

So the «blocking» that «non-blocking algorithm» is meant to avoid is not the same as the «blocking on a channel»? I know that Wikipedia sets up an article about Non-blocking I/O (redirected to Asynchronous I/O) – but I/O and communication is just another side of communicating over shared variables of any sort, which I believe is discussed here? There’s no need to even think about non-blocking algorithms if the ultimate purpose is not to communicate?
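
To make the distinction concrete, here is a deliberately deadlocking Go sketch (mine, not from the Talk page). Both parties try to send first, so neither ever becomes ready to receive – a cycle in the dependency graph. Note that the Go runtime only reports the deadlock when no goroutine at all can run, which is exactly the distinction made above:

    package main

    func main() {
        a := make(chan int)
        b := make(chan int)

        go func() {
            a <- 1 // waits forever: main is also trying to send first
            <-b    // never reached
        }()

        b <- 2 // waits forever: the goroutine never reaches <-b
        <-a    // never reached
        // The runtime stops with «fatal error: all goroutines are
        // asleep - deadlock!»
    }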

I was hoping to get some help on this from the Wikipedia authors (as well as adding some interesting points to the Talk page), but so far it's been silent. Therefore I still really don't know if 4. above is in accordance with how the Wikipedia authors would think, if they were to think about channel communication. But personally I am comfortable with this private – yes – decision: that what the Wiki-page says about non-blocking algorithms also goes for channel communication.

(Aside: Jeremy Martin has discussed deadlock and how to avoid it extensively in his 1996 D. Phil thesis [6]. And my 2012 XCHAN paper (XCHANs: Notes on a New Channel Type) describes a deadlock-free channel.)

An admission

The way I have been talking about «blocking» in many of my blog notes both is and is not how the Wiki-text defines it. I have really been talking, thinking and arguing about yielding, but have used the word blocking (and said it's «good»), qualified with lots of extra words. But this is contrary to blocking proper (which is «evil»).

  • This yielding is also what Rob Pike talks about in Pike & Sutter: Concurrency vs. Concurrency; it isn’t really «blocking» – the show goes on
  • On the other side, what Herb Sutter talked about was the red blocking in the figure; it is blocking – the show stops. This usage covers how we naturally (and literally) understand the word «blocking»

Aside: The asynchronous / synchronous discussion is neither in line with nor orthogonal to this understanding. It's in between. I have tried to write about all this in (too?) many of my blog notes.

Therefore I could have phrased some of my blog notes differently; they would have been easier to understand. From now on I will «yield on a channel». And I will agree that with basically single-threaded code, «blocking is evil». Consequently I will keep telling the people who have been saying the latter all along about Process-oriented Programming and yielding. It's not a fairy-tale.

Disclaimer: I might have heard about yielding on a channel (instead of blocking on a channel) before. When I search the web, I can see traces of it both in my blog notes and in the golang-nuts community. I doubt that this usage is invented here.

golang-nuts and golang-dev

I started the thread Yielding instead of blocking on a channel? on golang-nuts on 25Sep2014. There were some interesting comments. I referred back to here with a disclaimer: I wrote that blog note. It has zero ads and there is zero money involved. It's enthusiasm-based.

The final say there is of course Rob ‘Commander’ Pike’s:

The word «block» is the correct term. Please let’s leave this alone.

I also mentioned this in Update on «lock-free channels» on golang-dev on 29Sept2014. It’s quite interesting.

occam-com

I started the thread The history of the term «to block on a channel» on occam-com on 26Sep2014 (for committee people). I have been allowed to paste some extracts here, individually and by name.
I start with the first, and proceed down. Here’s my intro:

When was the phrase “blocking on a channel” introduced, and by whom? Hoare does not use it in his 1985 book. Roscoe hardly uses it – or I would say, he does not use it at all in this context – in the version of his book that's available as a PDF. If this group does not know this, none will.

I am suggesting using “to yield on a channel” rather than “to block on a channel”. I have a blog note about it [here] and there is a discussion thread on golang-nuts [see above]. I include the figure [here] (and the intro text in golang-nuts also was included)

Comments by Tony Gore, Roger Shepherd, Matt Pedersen, Jon Kerridge, Marc Smith, Larry Dickson, David May, Chris Jones, Rick Beton, Ian East and myself:


Disclaimer: Even if the starting point of this thread was this blog note and search for the history of the use of the term «to block on a channel», the individual authors are only connected in the context of that thread. Explicitly, they are not in any way associated with the rest of this blog note, or any of my other stuff. Disclaimers are always difficult, but you know what I mean!


Tony Gore :-:

I think it was a common term in use by April 1986 when I joined Inmos. I think some terms in common use were used a bit loosely. A channel that was waiting for the other end of the communication to become ready “blocked” the process from proceeding. Thus my recollection is that “blocking on a channel” was commonly used to describe a process that couldn’t proceed until the communication could proceed.

I think you have to remember that at the time the transputer came out, lots of people got their hands on it and started parallel programming. Many of these were engineers, without computer science backgrounds, and so made up the terminology as they went along.


Roger Shepherd :-:

I don’t know the answer to your question.

I can’t say that I like “block” – but its usage is certainly old and is common for multitask systems where the ability to create an extra task/thread/whatever to do communication is considered to be advantageous – hence “non-blocking communication”.

I don’t like “yield” either as this has a meaning along the lines of «to give place or precedence (usually followed by to): to yield to another; Will the senator from New York yield?» (from dictionary.com) and it is not necessarily the case that there is anything else to yield to.

I prefer “wait”.

On the subject of language, I think the term “synchronous” is plain wrong when used to describe (occam) channel communication. The processes are “synchronised” by the communication; the communication is “asynchronous” – there is no clock which causes the communication to happen.

(In fact, you hit on a good point – there is a difference between programmer’s intention (system wide) and what you can determine locally – with occam you can see only a communication or an alternative communication, you know nothing locally about the rest of the system)


Matt Pedersen :-:

Blocking is probably used because a read or write will do exactly that in a one-to-one, straight-up communication. Though used in an alt, blocking might not aptly describe the state, as only communications that are ready (i.e., will _not_ block) will be considered.
I dislike yield because yield gives the idea that you can back off from a blocking read or write call, which you cannot.


Jon Kerridge :-:

Of course if we go back even further there were semaphores to which Dijkstra gave the names P and V for the operations on semaphores and as I understand it P and V were the first letters of the Dutch words for wait and signal.

Just a thought.

Anyway, in my book which is about to be published («Using Concurrency and Parallelism Effectively», a free e-book for download), I have the following description of channel communication between processes:

«A channel is the means by which a process communicates with another process. A channel is a one-way, point-to-point, unbuffered connection between two processes. One process writes to the channel and the other reads from the channel. Channels are used to transfer data from the outputting (writing) process to the inputting (reading) process. If we need to pass data between two processes in both directions then we have to supply two channels, one in each direction. Channels synchronise the processes to pass data from one to the other. Whichever process attempts to communicate first waits, idle, using no processor resource until the other process is ready to communicate. The second process attempting to communicate will discover this situation, undertake the data transfer and then both processes will continue in parallel, or concurrently if they were both executed on the same processor. It does not matter whether the inputting or outputting process attempts to communicate first; the behaviour is symmetrical. At no point in a channel communication interaction does one process cycle round a loop determining whether the other process is ready to communicate. The implementation uses neither polling nor busy-wait-loops and thus does not incur any processor overhead.»

Don’t worry, I will tell everyone when the book is actually available!


Marc Smith :-:

I think you have hit upon a significant language barrier when we discuss channel communications outside of our CSP community. Even though I have heard the phrase «to block on a channel» I never really thought of it as blocking. Not to add another term to the mix — but what you describe as yielding, I thought of as synchronizing. Every communication over a 1-to-1 channel is a synchronization between two processes. When one process reads what another process has written to a channel, they are synchronizing on that event.

To communicate is to synchronize. We already use the term synchronous to describe our channels. What I like about synch’ing versus yielding is a little nuanced. To yield is an intentional act («after you…», «no, after you…», etc.), which may take away from the intentional act of reading or writing. Yielding is something that may happen to a process if it attempts to read or write, and the process on the other end isn’t ready to reciprocate.

On the other hand, when I think of synchronization, I think of it as what happens as a matter of course when a process attempts to read or write to a channel. The synchronization is built-in, by definition of communicating over a synchronous channel.

So that’s the difference to me between yielding and synch’ing. It is admittedly a nuanced difference, and I don’t know that I ever articulated it before, to myself, or anyone else. So, thank you for that. 🙂

The term blocking never bothered me because I just understood it in the sense of synch’ing. But I see your point about impressions this terminology unintentionally makes on the rest of the concurrency community.


Tony Gore :-:

On yield

“Yield” implies to give way and that one person/process therefore has priority over another.

Norway, in driving, has an additional yield concept that doesn’t exist in other places (and certainly not the UK) – the inverted Y sign which instructs drivers to merge (yield) in turn where two lanes turn into one. So I don’t know if Oyvind thinks of “yield” as being symmetric whereas in the UK there is more of a nuanced “priority” in the concept of yield.

Occam channels are a synchronisation point, but that doesn’t mean they are synchronous – there is no clock tick – they are asynchronous in that the communication takes place at some point when both ends of the communication are ready, and it is not determined in advance which one is ready first.

I was in Zurich the last couple of days and it was pointed out to me that Swiss politics are based on consensus; Occam channels strike me more as “consensus” than “yield”. Consensual Communications, maybe?


Larry Dickson :-:

It seems to me that “blocking” applies to an alt as well as a one-to-one communication (or to a timer). It’s not something that CSP people need to be embarrassed about. On the contrary. Any model that does NOT account for blocking is a false model. Think on the instruction level. Each sequential logic primitive (like an add or multiply) is a wrapper around some combinatorial logic that needs time to settle. The system clock gives it that time — so every instruction blocks! And if it did not block, it would give incorrect results.

Before an instruction completes, the conditions necessary for its completion have to hold. If this is a read, write, alt, or timeout, the required blocking can be “macroscopic,” and efficient chips yield control of the instruction stream to other tasks at that point.


David May :-:

The CSP concurrency model is closely related to delay insensitive logic. In delay insensitive logic, it is normal to talk about things starting, doing something and completing (the CSP/occam wording talks about processes starting, doing something and terminating).

A two-party synchronisation, as takes place in a channel communication, involves a two-party completion. In delay insensitive logic, it would be implemented using a special ‘gate’: the Muller C-element. Similarly, the join at the end of a parallel involves a multi-party completion.

The need for synchronised communication in both CSP/occam and delay insensitive logic arises because of alternation (arbitration in delay insensitive logic, normally performed by another special ‘gate’: the asynchronous arbiter). If there is an alternation between two channels which become ready in sequence, synchronising the communications ensures that the ‘correct’ choice is made (a communication delay on the first channel to be ready can not affect the outcome).

The above is why variants of CSP/occam have been adopted for delay insensitive hardware design.

There seems to be widespread confusion about ‘blocking’. The problem is that a common technique in shared memory concurrency involves ‘blocking’ on access to shared state and various techniques have been devised to avoid or reduce the need for this. Although these work for some algorithms, there is no general technique to eliminate ‘blocking’, and many algorithms rely on it – if a process depends on (for example) the sum of the results of 1000 others (a reducing operation) it will have to ‘block’ until they have all completed.

All of these ‘non-blocking’ techniques are more difficult to understand and verify than the blocking equivalents; also they use indivisible instructions (such as compare and swap) which have to access the shared (deep and high latency) parts of the memory hierarchy; also the hardware has to ‘block’ in order to implement the indivisible instructions. So these techniques are probably not as efficient as they may seem.

Are the notions of completion, two-party completion and many-party completion helpful?

It seems to me that the idea (originated now over 35 years ago) that neither processing delays nor communication delays affect the outputs of a system (as seen from a programming or hardware design language) is as important as ever. Hardware designers know this as ‘avoiding race conditions’ and mostly solve it using synchronous design techniques. But now, we have to use asynchronous techniques – in software for the cloud and large scale systems – and (I suspect) increasingly in hardware as we try to overcome energy problems. The CSP/occam model provides a way to do this.

There is, of course, an issue of how to ensure that timing (and energy) are built into the model … but that’s another story …


Chris Jones :-:

I suspect the term has been lifted from the telecoms industry where telephone exchange equipment was considered to be non-blocking when a caller was guaranteed always to be able to get an immediate connection to another non-busy user on a fully functioning, non-blocking exchange. In the UK, local exchanges were non-blocking while trunk connections were not. I would have thought the usage referring to the possibility of sending a communication was still pertinent.


Roger Shepherd :-:

This sounds likely, and the difference between this and what happens in a (for example) occam program is interesting. In an occam program the communication channel is always available, however the “other party” is not. So, in the telephone sense the communication is non-blocking – it’s just that the other party might not be there, and if you are making a call (output) you have to hang on the line until the other party answers.


Øyvind Teig (me) :-:

..and this again corresponds to the Wikipedia article about Non-blocking algorithm (perhaps by accident?), since the telephone connection does not take other connections into consideration (that trying to call on one line should not stop the others from getting a connection)? Or was this actually what they were afraid of in the twenties or whenever: that they should design the exchanges, cabling etc. so that they did not side-effect into other connections and effectively block them?


Chris Jones :-:

When the early telephone exchanges were being devised, they were very concerned about blocking, not just from the point of view of convenience but mechanically. These were electro-mechanical switches with the real possibility of latching up or locking mechanically in unwanted ways – like typewriters when several keys were pressed too close together in time and the arms would lock together, preventing further typing until they were manually released.


Roger Shepherd :-:

The issue in telephones is precisely whether the switch allows another call to be established or whether some calls may be “blocked” because a circuit cannot be established.


Rick Beton :-:

The ‘blocking’ terminology issue seems to me also to be tied up with OS terminology. In particular, standard APIs exist for blocking I/O and for non-blocking I/O, which typically use (or avoid) mutex locks. In the non-blocking I/O case, the OS API provides some means of registering callbacks to deal with state progressions.

The same is very much true for the Java virtual machine also.

For this reason and for reasons others have mentioned, I feel that it is wrong to describe channels with this terminology because it evidently causes nuances and misunderstandings in those not very familiar with a CSP or Occam way of doing things. ‘Waiting’ on a channel communication is helpfully different.

As Roger mentioned, the terms synchronous and asynchronous are widely used outside of their original clocked vs unclocked meanings. Many people would describe a zero-buffer channel as «synchronous» but an infinite-place buffer channel as «asynchronous» because the sender never waits in the latter case. This terminology is flawed in my view; how do you differentiate between an infinite-buffer channel and a one-place buffer channel? The latter could behave «synchronously» or «asynchronously» depending on the current dynamic state; clearly, using synchronous and asynchronous in this way lacks rigour. Alas, it seems to be pervasive though.


Ian East :-:

I’m not sure ‘asynchronous’ should ever apply to clock synchronisation, specifically. Anyway, I think it’s more worthwhile to debate appropriate use of terms now than what their history might have been.

In teaching both hardware and concurrent-process programming, I found it useful to first distinguish ‘synchronous’ and ‘asynchronous’, according to whether both parties must be ‘ready’ prior to communication taking place, or equivalently whether either party might be required to ‘wait’ (suspend execution). Of course, this only concerns implementation, not the semantics of the program, whose trace merely records the common ‘event’ of actual communication. I’d then describe ‘asynchronous’ communication according to the ability of the sender to “fire and forget”, and point out the problem which arises if the receiver tries the same trick (when no message might have arrived). I would illustrate the distinction with a contrast between using the phone (when both parties must first be ready) and email.

I’d then distinguish the three known forms of synchronisation permitting synchronous communication : common clock (as in current digital systems), handshake (used in a typical system bus) and rendezvous.

I found the above a useful platform on which to progress, and I propose its adoption where possible, though I’ve never found any complete and coherent account in print.

If I got it wrong anywhere (or missed anything) then it’s a little late for my poor students, but I’m not yet too old to learn.

Regarding (necessarily finite) buffered communication, is that not just a relative of ‘simple’ synchronous communication (zero-slot buffer), where either party must ‘wait’ (when buffer is empty/full), however synchronisation (with the buffer) is performed? … where both implementation and semantics depends directly on the buffer size. The buffer size does not need to approach infinity for communication to effectively become asynchronous (but for when buffer is empty!). It merely needs to be large enough for the application concerned. (Good luck determining that!) I think this approach lacks more than rigour…


Larry Dickson :-:

I think Rick’s observation is very pertinent. In designing the Graphics Slave (now released, by the way — see
https://www.kickstarter.com/projects/767046121/crawl-space-computing-with-connel/posts ), I had to deal with Windows’ “callback” approach to graphics events. This does nothing that a CSP-type channel approach cannot do, nor does it give quicker response (on the contrary — as the Linux XCB graphics shows), and callbacks are far more complicated, incomprehensible and error-prone. But it allows the pretense that communications are “non-blocking,” by hiding what Rick calls the “state progressions,” which of course are blocking.

Non-blocking in the sense of buffered output is more understandable, because it really does provide some extra utility, and it is NOT incomprehensible. By explicitly programming an ack, you can force it to behave in an “occam-like” fashion if you want. This is what I did in Connel, since all the common channel-analogues offered by modern OSs seem to be buffered.


David May :-:

I have just looked through the early drafts and published versions of occam manuals.

None of them talks about ‘blocking’. It’s all about processes being ‘ready’ (sometimes ‘ready and waiting’) to communicate. On a channel, communication takes place when ‘both processes are ready’.

Maybe I should scan some of these.


Øyvind Teig (me) :-:

Very interesting! Have a look at http://www.transputer.net before you scan. Michael Bruestle has done, and is doing, a great job with scanning. Some are also searchable PDFs.


David May :-:

Discovered ‘hang on a channel’ in an early Inmos-authored magazine article!


Alternative terms

Firefox and Chrome have spinners that rotate anti-clockwise during the phase we discuss here. When data is being downloaded the spinners rotate clockwise. Elegant: this is waiting without time to me!

VERB | EXAMPLE | COMMENT
block | blocking on the channel | A rag that's red from one side and green from the other is semantically flawed in my opinion
await | suspend the execution of the method until the awaited task completes | JavaScript Node.js, C#
wait | waiting on the channel | "Waiting" without time?
hang | to hang on the channel | Passive. But hanging?
ready | get ready for the channel | But is it "not ready" first, then?
connect | connecting on the channel | Active. Always one connection per communication..
request | requesting the channel | Who do we mentally request?
suspend | suspend on the channel | Closer to yield? (*1) (*2)
relinquish | relinquish the process | Rich Hickey (#)
parking | parking the process | Rich Hickey (#)
yield | yielding on the channel | Implicit yield. Dimension of "giving up"
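
As a contrast to the table's verbs, Go shows both meanings side by side (a hedged sketch of mine): a plain receive «blocks» in the yielding sense, while a select with a default clause is a genuinely non-blocking poll:

    package main

    import "fmt"

    func main() {
        ch := make(chan string, 1)

        // Non-blocking poll: with a default clause, select never yields.
        select {
        case msg := <-ch:
            fmt.Println("got:", msg)
        default:
            fmt.Println("nothing ready, carrying on") // taken: channel is empty
        }

        ch <- "hello" // fits in the one-place buffer, so no waiting

        // Plain receive: yields until a value is ready (here it already is).
        fmt.Println("got:", <-ch)
    }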

(*1) Stackless Python would use tasklet suspend, see [2]. In the Python world an alternative could be the PyCSP library [3]. In the PyCSP description they use block even though it is built on either threading or, alternatively, Python’s greenlets – a spin-off of Stackless Python – which as we have seen uses suspend [4]. In other words: the terminology is not very consistent across the paradigms, even when they build on each other. But then, with PyCSP’s CSP roots they should perhaps have used wait, shouldn’t they?

(*2) Kotlin also uses the word suspend. They say that «suspension is for coroutines what blocking is for threads». I guess they mean that «blocking» for coroutines is called «suspension» since it’s perfectly normal and OK, whereas for threads «blocking» is dangerous. Great! (I have started to write about Kotlin somewhat here.)

(#) See Rich Hickey’s lecture at 084:[12]. The difference between blocking and parking is described 19:12 minutes into the lecture. The presentation’s page there goes like this:

Threads and blocking

  • (thread body...)
    • real thread, real blocking
  • (go body...)
    • IOC ‘thread’, state-machine, ‘parking’

So it depends on whether you want real threads available or IOC (Inversion of Control) «threads». IOC is described earlier, at 18:24. There, «blocking calls» within the code are turned into a code block that’s registered as a callback function, and this is called parking. Since JavaScript does not have threads, only go blocks and IOC threads are supported in ClojureScript. This is described in a note by Rich Hickey at [7].

(Update Apr2018) Another interesting thing is the JavaScript Node.js await keyword that always waits on a Promise. The semantics is to introduce some concurrency in JavaScript by building on ECMAScript 6.0’s generators and yield. So await avoids the kind of blocking that single-threaded JavaScript cannot cope with. See «Modern concurrency in JavaScript» here.

C# uses await. In await (C# Reference) Microsoft writes that «The await operator is applied to a task in an asynchronous method to suspend the execution of the method until the awaited task completes. The task represents ongoing work.» I kind of like await better than wait since (at least with my feeling for the English language) await has less of a time dimension to it than just wait. I could more easily await forever than wait forever, without it becoming problematic. In daily use I’ll wait for an hour; it kind of requires the time dimension. For this blog note the whole idea is that we need something positive without a time dimension. (I learnt about C# await at CPA 2016 in Copenhagen, in the lecture Extensions to the Concurrent Communications Library by Kenneth Skovhede and Brian Vinter (it’s about CoCoL, here).)

Examples

XMOS xC language and «distributable»/synchronous tasks with blocking operation

This is described more in xC is C plus x, chapter «XMOS libs: synchronous/blocking when [[distributable]] and asynchronous when [[combinable]]». There synchronous/blocking operation is one of the options available to the user, the other being asynchronous.

Sharing from OS X Preview with Messages

OS X Yosemite 10.10.4 (5July2015). Referenced here. Also posted to Sharing from Preview via Messages blocks Preview at an Apple Support Community.

When I share a picture from Preview via Messages then Preview is blocked while Messages sends. So, if I try to use Preview in that phase it’s unavailable to me and the Preview window «blinks». This is OK if I barely notice it. However, if the internet connection is bad (like sharing the 3G/4G connection from an iPhone 5S with the laptop in a cabin up in a mountain) then Preview hangs, or waits, for several minutes before it gives up and tells me. I cannot proceed with viewing other pictures in this phase. It’s «formally» the «green» type of blocking, but since the solution in practice blocks all of the machine – viewing pictures was all I was going to do – I’d say it’s almost «red» blocking.

If I were Apple I would not have blocked the user in this case.

Apple Metal framework

In the Apple Metal framework reference, about using an MTLCommandBuffer (here, 9.2015), we read this:

  • waitUntilScheduled (Synchronously wait for this command buffer to be scheduled)
  • waitUntilCompleted (Synchronously wait for the execution buffer to complete)

It continues:

After a command buffer has been committed for execution, the only valid operations on the command buffer are to wait for it to be scheduled or completed (using synchronous calls or handler blocks) and to check the status of the command buffer execution. When used, scheduled and completed handlers are blocks that are invoked in execution order. These handlers should perform quickly; if expensive or blocking work needs to be scheduled, defer that work to another thread.

In a multithreaded app, it’s advisable to break your overall task into subtasks that can be encoded separately. Create a command buffer for each chunk of work, then call the enqueue method on these command buffer objects to establish the order of execution. Fill each buffer object (using multiple threads) and commit them. The command queue automatically schedules and executes these command buffers as they become available.

Observe that this is seen from Objective-C or Swift into the GPU (Graphics Processing Unit). Apple is careful to describe what «handler blocks» means; it doesn’t relate to «blocking» but rather to a «block of code». And the functions in question are called waitUntil…, not blockUntil…, even if this is red blocking as defined above. But Apple’s way out is also described: do these calls from threads that may block, such that the overall blocking scheme is «green» – and thus «wait semantics» makes sense, because one such call won’t block the others. Of course, there is complicating underlying semantics here as well: how the jobs are scheduled on the GPU’s cores.
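
Apple's advice translated into a hedged Go sketch (waitUntilCompleted here is a stand-in function I made up, not a real Metal binding): do the waiting call in its own goroutine, so that only that goroutine yields and the overall scheme stays «green»:

    package main

    import (
        "fmt"
        "time"
    )

    // waitUntilCompleted stands in for a call that waits for submitted
    // work to finish – a hypothetical placeholder, not a Metal binding.
    func waitUntilCompleted() {
        time.Sleep(50 * time.Millisecond)
    }

    func main() {
        done := make(chan struct{})

        // Only this goroutine «blocks»; the rest of the app keeps running.
        go func() {
            waitUntilCompleted()
            close(done)
        }()

        fmt.Println("the app stays responsive meanwhile")
        <-done // yield only when we actually need the result
        fmt.Println("command buffer completed")
    }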

(I learnt about Metal listening to «iOS supercomputing with metal and swift» (http://memkite.com), at a CocoaHeads Meetup, Trondheim (CocoaHeads Trondheim) on 30Sept2015)

Suggestion of a channel yielding arrow

Channel yielding arrow by Øyvind Teig

Figure 3

In semiotics (see Wiki-refs below) I read:

Semiotics is the study of meaning-making, the philosophical theory of signs and symbols. This includes the study of signs and sign processes (semiosis), indication, designation, likeness, analogy, metaphor, symbolism, signification, and communication.

With this in mind I have tried to make a symbol showing communication over a yielding-type channel (before this note: a blocking-type channel). There should be no doubt that data flows through the symbol, so there is no reason to depart from the standard arrow. But I also wanted to include the complexity of yielding. The leftmost traffic sign in figure 2 inspired me with its arrows to the right and left. Adding more arrows when we already had one would be cognitive overload. I also did not want to extend outside the arrow proper. So I kept the roads and enveloped them inside the arrow, and made them turn out of and into the arrow.

There is one road ahead (for when the other party is already ready: so this arrow must be used at both the sender and receiver ends, they belong together), one out and down for yielding, and finally one in from above for rescheduling when the data has been moved across. But there is still no notion of first and last. I don’t know how to show that without using an animated gif. I put the turning roads inside the arrow head to imply that this is out of the parties’ control; the action goes on in the channel.

The data is not shown; and besides, there could be no data, just synchronisation. But we would mentally imagine the cars on the roads, and they might carry the data. (A PDF-version of this file is here, it’s vectorized and scales, all made in OSX Pages)


Figure 4

Semantics: Non-buffered channel where first process (goroutine, thread) ready to communicate (sender or receiver) yields to the scheduler. When the second process arrives on the channel the scheduler moves the data from the sender to the receiver and puts the first process into the ready-to-run set. (A PDF-version of this file is here, it’s vectorized and scales).
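
The same semantics in a runnable Go sketch (mine), this time with the receiver arriving first, to stress that the behaviour is symmetrical:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ch := make(chan string)

        // The receiver arrives first and yields to the scheduler.
        go func() {
            fmt.Println("receiver got:", <-ch) // rescheduled when data arrives
        }()

        time.Sleep(20 * time.Millisecond) // let the receiver arrive first
        ch <- "data"                      // second process arrives: data moves across
        time.Sleep(20 * time.Millisecond) // give the receiver time to print
    }
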
I have got this comment about the suggestions in this blog note:

There are two underlying concepts:

  • Co-operating: the two-party synchronisation/completion is agreement to proceed and it is symmetrical. Your symbols and words must reflect this I think.
  • Competing: only one party at a time can proceed. Again I think symbols and words must reflect this.

What I can hope for then, is that the ideas here may inspire others!

Watchdog monitors in embedded systems

In embedded systems (especially safety-critical systems) the criterion for triggering a software watchdog process is often the tested aliveness of each process (task, thread). It could be through a «ping» or something more like a proper message. It could test that the functionality is present and up and running, but often it really only tests whether a process is running and able to receive and send that message.

When processes block (i.e. wait or yield) on an output channel or on a single channel input, this paradigm is void. Checking for aliveness then needs to be done at a «higher» level – along the lines of an internal heartbeat, an alive-message, or a token that’s passed around. All sending of data out of an embedded chip and waiting for data input is asynchronous, meaning that we would never block or busy-wait for any of those actions. Therefore, designing a system that can detect a process that isn’t able to do its work should be possible. Or: it’s as possible as the alternative, which may yield false negatives (too much triggering of the watchdog when the system is wrongly deemed dead). Any internal block (i.e. wait or yield) shouldn’t last long.
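
A minimal sketch of such a «higher level» aliveness check in Go (the names and periods are my own; a real system would be more careful): the worker sends a heartbeat message, and the watchdog trips when none arrives within the deadline:

    package main

    import (
        "fmt"
        "time"
    )

    func worker(heartbeat chan<- string) {
        for i := 0; i < 3; i++ {
            // ... a slice of real work would go here ...
            heartbeat <- "worker" // prove we are alive and able to communicate
            time.Sleep(100 * time.Millisecond)
        }
        // Simulated failure: the worker goes silent.
    }

    func main() {
        heartbeat := make(chan string)
        go worker(heartbeat)

        for {
            select {
            case name := <-heartbeat:
                fmt.Println("alive:", name)
            case <-time.After(500 * time.Millisecond):
                fmt.Println("no heartbeat") // the software watchdog has tripped
                return
            }
        }
    }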

It’s not necessary to make the design non-blocking (i.e. avoid wait or yield) just to satisfy some watchdog scheme. In my opinion that is as bad as bad blocking.

In some CSP-type schedulers, if there is no pending channel communication and no timer or interrupt is awaited, the scheduler would do a CSP-type STOP. It would then no longer kick the HW watchdog, and thus cause a later restart by the HW watchdog.

In the IEC 61508 standard for safety-critical systems the word «watchdog» appears in parts 2 and 6. In the EN 54 standard for Fire detection and fire alarm systems it is also mentioned. Here’s the basic text from the latter (more or less precise wording now, I assume). CIE stands for Control and indicating equipment:

…deals with program monitoring. The program is the software necessary for the CIE to carry out mandatory functions (including any declared options with requirements). The execution of the entire program has to be monitored. Monitoring can be carried out either by means of a hardware watchdog system or by another processor. The program may include software which runs in more than one processor and software in elements supplied to the manufacturer. The degree of monitoring should be sufficient to ensure that the CIE is able to meet the requirements of this standard. In the case of an alphanumeric display module, it is considered to be sufficient to routinely check that data written to the module may be read back from it..

CSP Watchdog process

There’s some theory here that I need to read up on: «Watchdog Transformations for Property-Oriented Model-Checking» by Michael Goldsmith et al. This is for the formal verification stage. It is present in CSPm and the tool FDR3 (now with the University of Oxford, not Formal Systems), where failure_watchdog and trace_watchdog use this concept. See [5].

A quote from HighIntegritySystems

(Update 14March2017) From Semaphores and Mutexes by HighIntegritySystems I read:

Consider the case where a task is used to service a peripheral. Polling the peripheral would be wasteful of CPU resources, and prevent other tasks from running. It is therefore preferable that the task spends most of its time in the Blocked state (allowing other tasks to execute) and only executes itself when there is actually something for it to do.

Exactly! Blocked is even capitalised! (So why don’t they have channels? (Even if they try with Task Notification’s Lightweight Queue here…))

Conclusion & to do list

I have no plan to change the world. I have no goal to defeat blocking as used with CSP-type channels. I have nothing invested in yield as a correct word. And I do like the other suggested alternatives, just as I respect the golang-nuts Commander’s decision to continue to use blocking in Go’s spec because it’s right. But I have kept my figures and arguments here because they are valid in their own sense.

But I have a wish to try to explain that blocking has different connotations in different contexts. From red to green rag. I absolutely think that the different programming camps should not shout at each other because of misunderstandings over the taste of a word. Stop using adjectives.

I am User:Aclassifier at Wikipedia. I will do my best at lexical descriptions there.

In Norwegian

  • Wait = vente, «å vente på en kanal» («to wait on a channel»)
  • Yield = overgi, oppgi, gi etter, svikte, avstå. Hva med «prosessen gir seg på kanalen»? (What about «the process gives way on the channel»?)

A CPA 2015 fringe presentation based on this blog note

See Two CPA 2015 fringe presentations

User forums

users.rust-lang.org

The Rust community is at https://users.rust-lang.org

References

(*) My «previous blog note» is Atomic for all? (**) The obsoleted figure 1 is here: https://www.teigfam.net/oyvind/home/?p=4458. I had left out deadlock and not suggested which terms to use.

Wiki-refs: Blocking, Clos-network, Literal and figurative language, Non-blocking algorithm, Process-oriented programming, Semiotics

  1. Improve Your Python: ‘yield’ and Generators Explained by Jeff Knupp, see https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/. I installed the PyCharm Community Edition IDE and pasted some of the code and ran it. Nice!
  2. Python: stackless.channels, see https://bitbucket.org/stackless-dev/stackless/wiki/Channels
  3. PyCSP: https://pypi.python.org/pypi/pycsp
  4. Getting Started With PyCSP, see https://github.com/runefriborg/pycsp/wiki/Getting_Started_With_PyCSP
  5. Watchdog Transformations for Property-Oriented Model-Checking by Michael Goldsmith, Nick Moffat, A. W. Roscoe, Tim Whitworth, and Irfan Zakiuddin. FME 2003: Formal Methods. LNCS vol. 2805. See http://dx.doi.org/10.1007/978-3-540-45236-2_33
    Seen referred to in the FDR3 document https://www.cs.ox.ac.uk/projects/fdr/manual/cspm/prelude.html
  6. The Design and Construction of Deadlock-Free Concurrent Systems by Jeremy Malcolm Randolph Martin. Thesis submitted for the degree of D. Phil to the School of Sciences in the University of Buckingham, 1996. See http://www.wotug.org/docs/jeremy-martin/thesis.pdf
  7. Clojure core.async Channels, by Rich Hickey (2013), see 084:[12]
