This page is in group Technology.
If you want to follow me in trying to find out why queueing of entry calls in Ada causes nondeterminism (in scheduling? timing? deadlines?), and in trying to understand why the scheduling of processes on the defunct transputer seems to be the opposite, then jump on. And, do bit arrays help at all? At the moment the nondeterminism question is hard to pin down.
- determinism: a theory or doctrine that acts of the will, occurrences in nature, or social or psychological phenomena are causally determined by preceding events or natural laws
- Nondeterminism: see my blog note about Nondeterminism
See first Limited selfrefs. In short: I am not referring to (m)any of my own blog notes directly in the text. I have several notes that touch on some of the chapters here.
- Does it have to do with many-to-one? Ravenscar profile: “forbidden features: (…) An entry call to a protected entry with a call already queued. If an entry call is made on an entry that already has a queued call (i.e. the queue length would become 2) then Program Error is raised”
- Paste from : “Ada tasks can communicate in two ways – via message-passing (rendezvous) or via protected objects. The latter decouples the tasks more and allows more accurate timing (scheduling) analysis to be carried out. The Ravenscar profile is for hard real-time systems where schedulability is crucial.”
- Paste from : “For message passing it uses a many-to-one model – occam uses one-to-one, so queuing is more an issue in the Ada model (though queuing on ALTs takes the place of queuing on entries – Ada programs need less ALTs/selects)”
- Paste from : “However, there could be a big difference from a timing analysis point of view, in that, in the case of an occam ALT, the number of input channels is apparent in the code for the ALT, whereas in the case of an Ada rendezvous, the number of calling tasks cannot be known by merely inspecting the code of the rendezvous.
Maybe it was that aspect that required the rendezvous to be excluded from the Ravenscar Profile.”
- But then, XC will also do multi-client use of servers, but that is still done as an array of one-to-one interfaces. I think that this still is not “queueing”?
- When I handle a bit array to use with scheduling (process is ready) – is that queueing?
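As a hedged sketch (in Go, not Ada; the `entry` type and its names are mine) of the Ravenscar restriction quoted above – at most one caller may be queued on an entry, and a second would-be waiter raises Program_Error instead of silently growing the queue:

```go
package main

import (
	"errors"
	"sync"
)

// entry models an Ada protected entry under the Ravenscar rule:
// at most one caller may be queued. A sketch only, not Ada semantics.
type entry struct {
	mu     sync.Mutex
	queued bool // true while one caller is already waiting
}

var errProgram = errors.New("Program_Error: entry queue length would become 2")

// enqueue registers a caller on the entry. A second concurrent
// caller gets an error rather than a queue of length 2.
func (e *entry) enqueue() error {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.queued {
		return errProgram
	}
	e.queued = true
	return nil
}

// release is called once the queued call has been serviced.
func (e *entry) release() {
	e.mu.Lock()
	e.queued = false
	e.mu.Unlock()
}
```

The point of the restriction is that the worst-case blocking is then bounded by construction, instead of depending on how many tasks happen to call in.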
A private mail thread about occam on the transputer
Here’s a mail thread that I have had with Roger Shepherd. He was on the team that designed the transputer. I have been allowed to quote him on several occasions (just search for Shepherd in the search field), especially in Atomic operations and the transputer.
The mail thread (below) starts with my questions to him (gray lined) and his answers in italic:
I have been trying to understand the reason for the Ravenscar profile, and to understand why occam on transputers might not have that problem.
Ravenscar has disallowed some Ada entry calls and allows only queues of length 1, i.e. no queue. In standard Ada, every time an entry call is not taken because the entry is busy, the call is queued (in a FIFO, I think) and the caller waits. This is bad for determinism (?) and bad for “schedulability analysis”. And a limited-size queue may overflow, since it may be difficult to calculate its maximum size (? Not really, for a statically linked program?)
Occam channels set a “first” bit for the first process on the channel, and when another process arrives to use the channel and finds “first” set, the action is done (the data is moved) and the “first” process is scheduled for this very reason: to run on from its synchronisation point. This scheduling may be taken from a queue, or it may be a bit in a bitset that we just test in some manner. Several processes may have their ready-to-be-scheduled bit set, but there is only ever one cause for each.
In occam I think a bitset would be difficult – the issue is that compared to Ada the creation and destruction of processes is very dynamic, and the identity of a process is rather loose – which means it would be difficult to know to which process a bit referred. In fact, in occam, processes have no name – the only way in which they are referenced is indirectly via a channel.
On the transputer, a process may be running (i.e. actually being executed), interrupted, runnable (on a run queue), on a timer queue and/or waiting on a channel. Non-running, non-interrupted processes are identified by their workspace – which remains static when they are not running, but when they are running the workspace may move (this could be an issue in the implementation of ALTs, but I think they can always be implemented so that the identity used to identify the process remains constant even though the actual workspace has moved – this is what I mean by process identity being loose).
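The “first” bit mechanism I described in my question could be sketched like this (Go, single-threaded and cooperative; the names `chanWord` and `proc` are mine, not transputer mnemonics). The first process to arrive parks its identity in the channel word and blocks; the second finds it, moves the data, and marks the first ready to be scheduled:

```go
package main

// proc stands in for a process workspace: one data slot plus a
// ready flag (which on the transputer would mean "put back on the
// run queue"). A sketch only.
type proc struct {
	ready bool
	slot  int
}

// chanWord is the single word a channel needs: empty (nil) or the
// identity of the "first" process waiting on it.
type chanWord struct {
	first *proc
}

// send: if no receiver is waiting, the sender becomes "first" and
// blocks; otherwise copy into the waiting receiver and wake it.
func (c *chanWord) send(p *proc, v int) (completed bool) {
	if c.first == nil {
		p.slot = v
		c.first = p
		return false // sender is descheduled, waiting in the channel
	}
	r := c.first
	c.first = nil
	r.slot = v     // move the data into the receiver's workspace
	r.ready = true // receiver resumes from its synchronisation point
	return true
}

// recv mirrors send.
func (c *chanWord) recv(p *proc) (completed bool) {
	if c.first == nil {
		c.first = p
		return false // receiver is descheduled
	}
	s := c.first
	c.first = nil
	p.slot = s.slot // move the data from the waiting sender
	s.ready = true
	return true
}
```

Note how the storage for the single waiting process lives in the channel word and the workspace, not in any separate queue.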
Now, Roger, what’s the difference between the standard Ada queue and the possible queue or bitset on cooperative scheduling CSP runtimes? Do we go free?
The replicated ALT takes an array of channels, as do arrays of channels or arrays of interfaces in XC. There are only ever grouped one-to-ones when a one-to-many is needed. In  it was speculated whether this was the difference. Of course, when one builds CSP on top of Java with monitor queues, I guess we have the same situation as in Ada: it’s problematic. Also, even Go has queues on select. But are these problematic in the same way?
For channel-only and channel-SKIP ALTs no queues are needed. The storage needed to hold any waiting processes is allocated in the channels. For the timer, there is a queue – but the implementation uses the “trick” that if there are multiple timer inputs, only the earliest has to be considered – and so a waiting ALT process only needs to be associated with a single time, and hence the workspace can provide the storage needed to support the timer queue.
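Roger’s timer “trick” might be sketched like this (my Go rendering, under the assumption of a time-ordered list threaded through the waiters themselves; each waiter is associated with exactly one wake-up time, so no separate queue storage is needed and only the head has to be checked):

```go
package main

// tproc stands in for a process waiting on a timer: its single
// deadline plus the link, both living in its own "workspace".
type tproc struct {
	wake int64  // the one time this waiter is associated with
	next *tproc // queue link stored in the waiter itself
}

// insert keeps the list sorted by wake time, so the scheduler only
// ever needs to compare the clock against the head.
func insert(head *tproc, p *tproc) *tproc {
	if head == nil || p.wake < head.wake {
		p.next = head
		return p
	}
	cur := head
	for cur.next != nil && cur.next.wake <= p.wake {
		cur = cur.next
	}
	p.next = cur.next
	cur.next = p
	return head
}

// expire pops every waiter whose deadline has passed.
func expire(head *tproc, now int64) (ready []*tproc, rest *tproc) {
	for head != nil && head.wake <= now {
		ready = append(ready, head)
		head = head.next
	}
	return ready, head
}
```

The sorted insert costs something, but the per-tick check is a single comparison against the head – which matters for a scheduler running on every timer interrupt.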
It turns out arrays of channels aren’t a problem to implement. However, if you are trying to make some statement about service time then the number of channels (the size of the array) needs to be known, and this may not be easily visible.
I’m not really familiar with the intricacies of Ada, but if the language has the notion of repeated rendezvous, with the notion of a service agreement as to how multiple processes will be serviced, a queue of waiting processes might be needed. In occam there is no such notion. ALTs are very dynamic – they exist once – they can be repeated, but the emphasis is on the channel, not on the process on the other side of the channel.
Go is more complex – it permits output guards for example.
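A hedged illustration of that last point: Go’s select accepts send (“output guard”) cases alongside receive cases, which an occam ALT does not. In this sketch only the receive case is ready, so the outcome is deterministic here, but when several cases are ready Go picks one pseudo-randomly:

```go
package main

import "fmt"

// pick runs a select with both an input and an output guard and
// reports which one fired. The function name is mine.
func pick(in chan int, out chan int) string {
	select {
	case v := <-in: // input guard, as in an occam ALT
		return fmt.Sprintf("received %d", v)
	case out <- 1: // output guard: allowed in Go, not in occam ALTs
		return "sent"
	}
}
```

With a value already buffered in `in` and nobody receiving on unbuffered `out`, only the input guard is ready and `pick` returns “received 7”.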
P.S. Please feel free to quote any reply – open discussion is good.
Oops, we use bit arrays
In our ChanSched we do use bit arrays (bitarray, bitarrays) for
- ALT set (one bit per channel: ChanAltInputGuards_BitArray_a)
- Interrupts that signal on an asynchronous (no data) channel end (SignalChannelIds_BitArray_a)
- When a process is ready to run a bit is set in a bit array (ProcessIds_BitArray_a)
Our channel looks like this. It’s really interesting to learn from Roger’s comments above that in the general case, with language support, bit arrays are not needed. We have limited the number of channels to some word width (like 32), and the same for the number of processes. It has worked for us, but I do see the points above.
Update 15Jul2019: I discovered some explanation about this. When the scheduler uses a bit array it does the same as it would in a purely queue-based system: it simply finds the next process to schedule, which is what it’s after. This is not busy polling. In this case a rotating mask to pick out the next waiting task (like 64 bits for 64 tasks) would be no different from a queue with at most 64 entries, where the next element is the one holding the id of the next task to be scheduled. However, along some axis I feel that a bit-array word is simpler and will not need the same overhead as a queue. In other words, I am still uncertain whether there is a line of nondeterministic timing problems between bit array and queue – as if bit arrays would not have them, but queues might, or even will?
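A minimal sketch of such a rotating mask (Go; the `readySet` name and the round-robin policy are my assumptions, not a description of ChanSched itself):

```go
package main

import "math/bits"

// readySet is a sketch of a bit-array scheduler for up to 64
// processes: one bit per process id, plus a rotating scan point so
// service is round-robin instead of always favouring low ids.
type readySet struct {
	w    uint64 // bit i set means process i is ready to run
	last uint   // process id serviced most recently
}

func (r *readySet) setReady(id uint) { r.w |= 1 << id }
func (r *readySet) clear(id uint)    { r.w &^= 1 << id }

// next finds the first ready bit after `last`, wrapping around.
// This is a lookup done when a scheduling decision is needed,
// not a busy poll; ok is false when nothing is ready.
func (r *readySet) next() (id uint, ok bool) {
	if r.w == 0 {
		return 0, false
	}
	start := (r.last + 1) % 64
	rot := bits.RotateLeft64(r.w, -int(start)) // bring `start` down to bit 0
	id = (start + uint(bits.TrailingZeros64(rot))) % 64
	r.last = id
	return id, true
}
```

Replacing the rotating scan with a FIFO would give the same worst-case bound (64 entries), but would need the index or pointer management that the single word avoids.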
- 035 – Channels and rendezvous vs. safety-critical systems
- New ALT for Application Timers and Synchronisation Point Scheduling, see http://www.teigfam.net/oyvind/pub/pub_details.html#NewALT