Rich interface component modeling

This page is in group Technology.

Started work on this 1Feb2014, probably finished now (updated 7Jul2020 (table))

Intro

In this note I will describe two articles about software components: «Component Models for Reasoning» (Seceleanu, Crnkovic) [01] and «Tool support for the rapid composition, analysis and implementation of reactive services» (Kraemer, Slåtten, Herrmann) [02]. Both basically accentuate their particular solution. (Note: there is also a tool by Magee and Kramer, much related to my blog notes as such, but not to this particular note [11].)

I will also discuss the matter from another side: how are the CSP-type tools and methodologies facing this?

Disclaimer: I have of course read the references – but I am a computer science reader and blogger, not a computer scientist! I suggest you read the sources first and then perhaps read this note. I am certainly trying to stretch my own knowledge – being a professional (embedded) programmer this is always needed. Therefore, if you too feel somewhat short of knowledge I welcome you to read. If you find something wrong or think I am too biased, please comment or mail me and I will try to rewrite.

There are some implicit Wiki-refs at the bottom.

Reasoning (bottom-up and top-down)

From [03], Wikipedia, the odd article, and from talking with colleagues, the table below has emerged. Some of the points are more or less correctly placed, especially when it comes to concrete methodologies. One could ask why one would want to make such a table. Many times we do top-down and bottom-up simultaneously, and I haven't found a phrase for that. Let it be – the table at least makes me understand more about reasoning. Or does it do the reasoning for me?

Forward engineering | Reverse engineering
Model -> System | System -> Model
Specification (of something that must exist) | Description (of something that exists)
Building with LEGO | ..
Normative or prescriptive | ..
Model-as-original | Model-as-copy
Design model | Analysis model
Process of synthesis | Process of analysis
Object or Process oriented programming | Structured programming
Build by reuse.. | Make for reuse..
Composition | Decomposition
Inductive reasoning | Deductive reasoning
Understanding by seeing: leaves and branches becoming trees, and trees becoming a forest | ..
Bottom-up | Top-down
If the system does not exist, good intuition is needed. Creativity driven, but too much of it may leave unused parts in the final product | Solving an understood problem. Code made this way may be more general and reusable
Combining subdivided modules made with → | Object-oriented (OO) code, starting from the top by subdividing
«Solve a problem» | «Understand a problem»

This table is a surprise to me, every time. Is it top-down reasoning when I describe something that exists? Is tearing apart analysis? Yes I admit. The table seems correct, every time I analyse and reverse engineer it.

(Update 4May2017) Allen Downey (in Scientific American [10]) helps by writing (in the context of Python as «top down») that:

Most textbooks and classes use math to teach signal processing, with students working primarily with paper and pencil. With this approach, the only option is to go “bottom up”, starting with the arithmetic of complex numbers, which is not the most exciting topic, and taking weeks and many pages to get to relevant applications.

With a computational approach, we can go “top down”, starting with libraries that implement the most important algorithms, like Fast Fourier Transform. Students can use the algorithms first and learn how they work later. They can see the most important ideas, like spectral decomposition, without being blinded by details. They can work on real applications, on the first day, that provide the motivation to go deeper. And they can have a lot more fun.

I have included the above table simply because I need it myself; the article writers also refer to many of the terms – one of them perhaps even incorrectly.

Also, whether I am most comfortable with text code or a graphic tool depends both on some common cognitive invariants and on me, myself. Turkle and Papert, I believe, are also concerned with this [04]. That paper is one of my grand favourites, and it's fun reading. The title «Epistemological Pluralism and the Revaluation of the Concrete» made me curious.

What I read from Turkle and Papert's paper is that if I think code is more suitable than some graphical tool, then that's ok. I should say it out loud. But I must be sure to set the scene, since in another context the conclusion may be the opposite. In the first case it may be that the graphics didn't match how you envisaged the problem as well as the text did. We have had letters built from glyphs for a long time, and cave paintings for even longer. However, they probably captured something simpler..

This is not a complaint about f.ex. the UML diagrams and their semantics. They cover many useful situations. They are the result of much thinking and need. But sometimes we don't invent the final solution, and many times a solution gets used beyond what was intended. So also for UML, and probably any other methodology.

Once upon a time: occam 2 toolset

One of my best memories of any programming language is with occam. It was already occam 2 when we started with it around 1990. The occam 2 toolset wasn't graphical – it could have been, but it was running on DOS and on transputer machines on plug-in boards. The transputer-based Transputer Development System (TDS) from around 1985, which introduced folding at all levels (much like a modern IDE's navigator window) and was all written in occam (also the occam compiler), had been rewritten in C and ported to the IBM PC. This was, I believe, the camp's first crisis. One of the free-standing tools was the ilist binary lister. I still have the books, and scanned the summary:

Fig. 1: inmos occam 2 toolset ilist binary lister (scanned summary)

Now look at the option P – «Procedural interface data». It displays any procedural interfaces found in the specified modules. The description of what ilist can do says «channel usage, if applicable». At the time this was quite useful.

Ilist inspected the source code of the module and picked out all channel communications, including direction (? is input and ! is output, like the Go language's <-ch and ch <-, or the XMOS XC language's :> and <:). In occam these are the only primitives through which a PROCess is allowed to have external side effects. Here is the example in the manual (which I still have):

DESCRIPTOR mode: TA H language: OCCAM <ORIGIN DESCRIPTOR>
DESCRIPTOR mode: TA H language:
ws: 52 vs: 378 -- workspace and vector space resource consumption
PROC simple(CHAN OF SP fs, CHAN OF SP ts)
SEQ
fs? -- input on channel fs, proceed to next line when SP-data input presented
-- Use of data not shown by ilist
ts! -- output on channel ts, proceed to next line when SP-data output taken
:
I added the coloured comments. Observe that any channel in occam is synchronous, one-to-one and one-directional. The ilist log would immediately tell a user of simple that it is not a good idea to listen on the other end of fs first, since that is what simple does – both would have deadlocked. And after the user has correctly sent on fs it must listen on ts, since simple will sooner or later send on ts. Observe also that ilist seems to have collapsed all the nice indenting of occam.
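
To see the same usage contract outside occam, here is a minimal Go sketch of my own (not from the toolset): simple first receives on fs and then sends on ts, so a user that respects that order runs fine, while one that tries to receive first deadlocks.

package main

import "fmt"

// simple mirrors the occam PROC above: first input on fs, then output on ts.
func simple(fs <-chan int, ts chan<- int) {
    v := <-fs // fs ?  (wait for input)
    ts <- v   // ts !  (then produce output)
}

func main() {
    fs := make(chan int) // unbuffered: synchronous, like an occam channel
    ts := make(chan int)
    go simple(fs, ts)

    // Correct usage contract: send on fs first, then receive on ts.
    fs <- 42
    fmt.Println(<-ts)

    // Wrong usage (would deadlock): receiving on ts before sending on fs
    // would leave both sides waiting forever:
    //   <-ts
    //   fs <- 42
}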

This was not brought into another tool at the time – but it could have been, and for all the components in a system such a tool could have discovered deadlocks.

This is basically what this blog note is about: even much richer interface descriptions than those the inmos occam 2 toolset and TDS ilist tools have provided since the 1980s.

A newer version of occam (occam-π) has the direction shown in the PROCess parameter list. Aside: occam3 (occam 3, occam-3) in 1993 described support for client-server networks by introducing channel-types and CLAIM and GRANT [09].
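
As an aside to the aside, the channel-type idea can be approximated in Go by bundling a request and a reply channel in a struct. A sketch of my own (not occam3 semantics – there is no CLAIM/GRANT here, so exclusive client access must be arranged by the design):

package main

import "fmt"

// ClientServer is a two-way channel bundle, loosely in the spirit of an
// occam3 channel-type: the client sends a request and waits for the reply.
type ClientServer struct {
    Req chan string
    Rep chan string
}

func server(cs ClientServer) {
    for r := range cs.Req {
        cs.Rep <- "served: " + r // reply to each request in order
    }
}

func main() {
    cs := ClientServer{Req: make(chan string), Rep: make(chan string)}
    go server(cs)

    cs.Req <- "job-1"
    fmt.Println(<-cs.Rep)
}

With CLAIM and GRANT several clients could share such a bundle safely; in this sketch that discipline is left to the design.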

Two papers

Both papers deal with component-based software engineering (CBSE). They are both concerned with going past the old type of API – a name and parameters – to fuller descriptions of functional semantics, such as interface contracts. They both show that this may, at some level, be described by an «external» state machine, even if it does not necessarily have to be exposed to the programmer. The circle is closed with formal verification to show that components are connected and used correctly. «Rich component» and «collaborative building blocks» are close terms.

In this note I will take the view that a component is a «process» and view the papers through those glasses. Not objects or functions, even if I believe that is covered somewhat in the papers.

I have studied both papers’ reference lists, and I see that there seem to be few references to synchronously communicating formalisms. But there is an exception in Seceleanu & Crnkovic, where their tool ProCom’s «lower-level» is «fine-grained with synchronous communication» (described in detail in [05]).

Is this mainly-asynchronous thinking coincidental, or is the «CSP community» less concerned about this «fashion» because that's basically what CSP has been about from the onset? To me it looks like being asynchronous at the bottom (as TLA+ seems to be (?)) only delays the problems until buffers get full.

Or is it the «expressiveness» of the chosen formalisms that shies users away from synchronous modeling? Or have they never been close to it in the first place? Or is it as simple as this «modeling world» reflecting the «real world», where asynchronous programming seems prevalent? I have blogged extensively about this lately, see group Technology or take a pick to the right. I will try not to repeat it here..

Component Models for Reasoning

I will not try to go through either of the two papers in detail, only comment on whatever triggers me. This paper is (Seceleanu, Crnkovic) [01].

I believe that the primary idea of this paper is to introduce their ProCom tool. In addition they list some alternatives, such as AUTOSAR. I have no idea how many such correct-component-connection tools there are out there. But the Arctis tool, as introduced in the second article by Kraemer et al., is not mentioned.

ProCom has two levels. The higher level is called ProSys and the lower level ProSave. ProSys has an asynchronous message-type interface; what they describe reminds me of SDL. By reading over and over again I think it's this level that has what they call read-execute-write semantics. They mention this term once in the semantic paper [05]. There they describe a concerted type of concurrent activity meant to guarantee that data access will have no races and that access to shared data is done correctly. The lower-level ProSave has synchronous communication between components, time-triggered, pipe-and-filter communication, and strict run-to-completion execution semantics. I have not seen time-triggered communication in this context before, and am uncertain about why they do it like this. Pipe-and-filter looks like Erlang to me.
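
Pipe-and-filter itself is easy to picture with channels. A minimal Go sketch of my own (nothing ProSave-specific here – no time-triggering, no run-to-completion machinery):

package main

import "fmt"

// Two filters connected by channels: a classic pipe-and-filter chain.
func double(in <-chan int, out chan<- int) {
    for v := range in {
        out <- 2 * v
    }
    close(out)
}

func addOne(in <-chan int, out chan<- int) {
    for v := range in {
        out <- v + 1
    }
    close(out)
}

func main() {
    a, b, c := make(chan int), make(chan int), make(chan int)
    go double(a, b)
    go addOne(b, c)

    go func() {
        for i := 1; i <= 3; i++ {
            a <- i
        }
        close(a)
    }()

    for v := range c {
        fmt.Println(v) // 3, 5, 7
    }
}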

By having two layers they argue that a system is easier to analyse with respect to different aspects, and also with respect to the different design stages. I believe this is something different from the refinement checking that CSP tools do (hoping that I didn't just compare apples and bananas).

At first glance, and without hands-on experience, this looks quite difficult to grasp. The semantic paper does not mention deadlock, livelock or liveness (have these been excluded by design?) – but it does mention safety and timeliness. I think timeliness guarantees deadlines, since their state machine models (FSMs) are expanded with urgency, priority and implicit timing, but this is not spelled out. The FSMs are translated into timed automata, which according to Wikipedia may treat safety and liveness properties. But liveness isn't timeliness. The Uppaal model checker is also built on these thoughts, it says. So, plugging such verified components together should just work.

From the semantic paper I quote:

The FSM language builds on standard FSM, enriched with finite domain integer variables, guards and assignments on transitions, notions of urgency and priority, as well as time delays in locations. The language assumes an implicit notion of time, making it easy to integrate with various concurrency models (e.g., the synchronous/reactive concurrency model, or discrete event concurrency model).

I like that they have added guards, but how useful are they in an asynchronous system? I don't know what an implicit notion of time is, but I might guess that it means that sequence and order are kept? Isn't this obvious? And why this widens integration I don't understand. Another interesting aspect is that they equate synchronous and reactive, contrary to Kraemer et al. and very, very contrary to the Scala community, who equate asynchronous and reactive. See next chapter. But personally I agree with the semantic paper, or rather: reactiveness is independent of synchrony.
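
For reference, guarded choice over channels is what occam's ALT gives. In Go the common idiom is to disable a select case by setting its channel to nil – a small sketch of my own, just to show what «guards» buy in a channel setting (buffered channels are used only to keep the example self-contained):

package main

import "fmt"

// Guarded choice: a case with a nil channel can never fire, which acts
// like a FALSE guard in an occam ALT.
func main() {
    data := make(chan int, 1)
    ctrl := make(chan string, 1)
    data <- 7
    ctrl <- "pause"

    acceptData := true // the "guard"

    for i := 0; i < 2; i++ {
        d := data
        if !acceptData {
            d = nil // guard is false: this case is disabled
        }
        select {
        case v := <-d:
            fmt.Println("data:", v)
            acceptData = false // e.g. stop taking data after one item
        case c := <-ctrl:
            fmt.Println("ctrl:", c)
        }
    }
}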

About the two layers, an aside (bear with me): I remember the early nineties when I started programming with transputers. They were occam machines, and everything was processes, even interrupt handlers. A process had channel inputs that came from other processes. An interrupt handler was also a process (unlike the newer Timed CSP, occam had no interrupt term) – but its input came from a channel that was placed at some processor pin or some HW event. With only two priorities it was difficult to build interrupt-intensive systems. But since occam was a runnable subset of CSP, it was analysable – though timing analysis was not possible. I believe temporal modeling and verification have come a long way since then. CSP has been extended, and the PAT tool has temporal modeling as well. But stay tuned: the new FDR3 tool also supports Timed CSP.

Tool support for.. reactive services

This paper is (Kraemer et al.) [02] – but I am commenting on the openly available document shown at the reference.

This paper describes a tool called Arctis for «rapid development of reactive systems». This is a theoretical paper, far past my knowledge – but there are lots of understandable words in it.

When I start with such a paper, biased as I may be, I search for «sync». This soon tells me whether it is mainly an asynchronous or a synchronous solution that is treated. This paper is very asynchronous, and it also equates reactive systems with asynchronous systems. Here's a quote where they contrast other modeling approaches with theirs:

While they focus on activities more from a perspective of business processes assuming a central clock or synchronous communication, we need for our activities reactive semantics [KH07a] reflecting the transmission of asynchronous messages between distributed components

I have a comment on this in the previous chapter. I have discussed it in my A reactive manifest – where I have tried to show that reactiveness is independent of synchrony. The (a)synchrony button switches something other than reactiveness on or off. I have tried to understand, but for such a heavy paper it surprises me to read it stated so flatly. Doubt creeps in: is the delusion mine? No! Reactive systems may be async or sync, or even better – a combination. Come on.

I wonder how much of the thinking behind this is caused by the fact that message sequence diagrams with asynchronous communication have sloping communication arrows? I have a figure and some discussion here. Send and forget is such a nice paradigm! I also wonder how much of the talk about temporal matters in both of the present articles has to do with the fact that messaging takes time.

Do messages that «take time» (sloping sequence diagram) introduce a time distortion field which consequently needs to be analyzed?

The paper introduces the term component-oriented, probably to slightly distinguish themselves from component-based (which has a Wikipedia article). I guess the difference is that the components are described from UML (and some more) into machine-generated executable (not external this time) state machines. This state machine is the footpath of the component (my wording). The footpath could be thought of as a dance that the other party has to follow precisely. The pattern is written up in a formal language based on TLA (which handles temporal logic) by an automatic transform from the ESM. Then the dance is verified by the TLC model checker. If the dancers fall, it's possible to re-dance the path to see where the heel broke (or a theorem was violated). The tool also suggests how to avoid the error. If they succeed, then it's possible to celebrate with a cup of Java – also generated by the tool. The final automatically generated code is a component that behaves. So, components are described as UML activity diagrams and automatically transformed into machine-generated executable state machines. (Thanks to F. A. Kraemer for proof reading this!)
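
To make the «external state machine as interface contract» idea a little more concrete, here is a small Go sketch of my own (this is not Arctis' ESM notation): the legal order of use of a component is encoded as an explicit state machine, so misuse can be caught – here at run time, where Arctis instead verifies it at design time.

package main

import (
    "errors"
    "fmt"
)

// State of the component's external protocol: Start must come before Send,
// and Stop ends the session. This is the "footpath" a user has to follow.
type state int

const (
    idle state = iota
    running
    stopped
)

type Component struct{ s state }

func (c *Component) Start() error {
    if c.s != idle {
        return errors.New("protocol violation: Start only allowed when idle")
    }
    c.s = running
    return nil
}

func (c *Component) Send(msg string) error {
    if c.s != running {
        return errors.New("protocol violation: Send only allowed after Start")
    }
    fmt.Println("sent:", msg)
    return nil
}

func (c *Component) Stop() error {
    if c.s != running {
        return errors.New("protocol violation: Stop only allowed after Start")
    }
    c.s = stopped
    return nil
}

func main() {
    var c Component
    fmt.Println(c.Send("too early")) // protocol violation, caught at run time
    _ = c.Start()
    fmt.Println(c.Send("hello")) // sent: hello
    _ = c.Stop()
}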

Future component designers will think of the old APIs as silent black-and-white movies. Even if looking back often is an exercise in acquiring respect for what the founding fathers and mothers achieved. But little Linnéa, Filip, Jakob or Anna may also wonder why their grandparents invented so many languages that needed crutches (like static and dynamic analysis tools), and why we spent so much time testing (when we could have just built correctly from the start).

But this is where we are now: we live inside a picture in a flickering b&w movie from back then.

Sorry, I got carried away there. Back to the nearest future. The Arctis tool's internal workings seem a little different from other formal modeling. I am impressed by the fact that they have plugged it all together. They put it all in the box.

The Arctis tool has refinement semantics (as briefly mentioned above). This means that the (more and more concrete) implementation refines the (as little concrete as possible) specification. CSP was first on this, invented by C.A.R. Hoare. Of course I like it.

The paper goes through an example. It's difficult for me to grasp, since I am not used to activity and structure diagrams. I would in that case have to work myself into them. For an amateur they carry cognitive overload, and I'd like to see a textual form of them («ok», say Turkle & Papert).

They have a chapter called «A Building Block to Handle Mixed Initiatives», which is especially interesting for me because I have done something similar (in this product). At the time I modeled the roles between primary and secondary of a redundant system. I used Promela and formally verified it with Spin. This paper's model is similar to a voter I had to build myself to prove that the solution worked. The authors ended up providing users with a ready-made building block that votes on requests and reports whether primary wins (it cannot lose) or secondary wins, is accepted or overruled. I am on thin ice here, but I am not certain whether their modeling tool has selective choice and guards with expressions. I wonder if the building block they have made would be necessary if they had had this. I'd like feedback on this.
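
To show what I mean by the kind of resolver I had to build, here is a minimal Go sketch of my own (my construction, not the paper's building block): both sides may take the initiative, and when both do in the same round, the primary wins.

package main

import "fmt"

// resolveRound takes the first initiative that is present; if both sides'
// initiatives are pending at once, the primary is given precedence.
func resolveRound(primary, secondary <-chan string) string {
    select {
    case p := <-primary:
        return "primary wins: " + p
    case s := <-secondary:
        select {
        case p := <-primary:
            return "primary wins: " + p + " (secondary overruled: " + s + ")"
        default:
            return "secondary accepted: " + s
        }
    }
}

func main() {
    pri, sec := make(chan string, 1), make(chan string, 1)

    // Only the secondary takes the initiative: it is accepted.
    sec <- "switch-over request"
    fmt.Println(resolveRound(pri, sec))

    // Both take the initiative in the same round: primary cannot lose.
    pri <- "stay primary"
    sec <- "switch-over request"
    fmt.Println(resolveRound(pri, sec))
}

A voter with selective choice and guarded expressions could state the same thing more directly; that is the point I am trying to make above.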

The paper describes special things that they have to do with f.ex. join nodes, where they have to build several state machines to model them. In occam (and the like) a join is invisible in the code; it's just an outdent after all the composed processes have terminated. In the Go language one has to pick up exit-channels from all finished goroutines. Anyhow, to avoid a mess, a join is a synchronising point.
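
A minimal Go sketch of my own of such a join point, where the parent picks up a done channel from each goroutine before it proceeds (sync.WaitGroup is the more idiomatic alternative):

package main

import (
    "fmt"
    "time"
)

// worker signals on its done channel when it has terminated.
func worker(id int, done chan<- struct{}) {
    time.Sleep(time.Duration(id) * 10 * time.Millisecond) // pretend work
    fmt.Println("worker", id, "finished")
    done <- struct{}{}
}

func main() {
    const n = 3
    dones := make([]chan struct{}, n)
    for i := range dones {
        dones[i] = make(chan struct{})
        go worker(i, dones[i])
    }

    // The join: wait for every exit-channel before continuing.
    for _, d := range dones {
        <-d
    }
    fmt.Println("all joined - safe to continue")
}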

They also explain that components communicate via buffered message exchanges flowing across partition borders. As I have blogged a lot about this I won't repeat much. The good thing here is that they formally prove that the buffers can't overflow – at least I think that's what they do in the TLA code example. Often, in real life, buffer sizes have been tuned like heap and stack sizes: if they overflow it's a crash and restart. Why buffering is moved onto the transport route is a mystery to me (so send and forget is just «nice», not much more) – except for some rare cases, and for the fact that any physical route would include buffering, so it would need to be analysed anyway. I'd like overflows to be handled explicitly by the application. I have suggested the XCHAN to join asynchronous and synchronous system thinking.
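
A small Go sketch of my own of what «handled explicitly by the application» could mean (not the paper's scheme, and only loosely XCHAN-like): the sender uses a bounded channel and gets an immediate answer instead of blocking or crashing when the buffer is full.

package main

import "fmt"

// trySend attempts a send on a bounded channel and reports overflow to the
// caller instead of blocking - the application decides what to do next.
func trySend(out chan<- int, v int) bool {
    select {
    case out <- v:
        return true
    default:
        return false // buffer full: overflow is explicit, not a crash
    }
}

func main() {
    border := make(chan int, 2) // the "partition border" buffer, capacity 2

    for i := 1; i <= 4; i++ {
        if trySend(border, i) {
            fmt.Println("sent", i)
        } else {
            fmt.Println("buffer full, application handles item", i)
        }
    }
}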

CSP-type modeling of software components?

I have had a mail thread with Thomas Gibson-Robinson of University of Oxford, Department of Computer Science. He has allowed me to quote him on some of the matters. I have spliced this from several mails back and forth. Starting with me and quoting T. G-R:

As you may see from this blog note I wonder why the papers I am discussing (and others, I have noticed) seem to be so free of CSP-ish thoughts? Is this just because of natural ignorance when being rooted in another tradition, perhaps from both sides?

«I have to confess to not knowing much about software component modelling. I’ve seen various things in the past on this topic, but I’ve never really sat down to try and summarise them. I’ve also never really seen anything to do with CSP in this context. I suspect you might be right that it’s down to ignorance and coming from a different area.»

«Another possible reason is perhaps due to the lack of timing in CSP, at least historically. Many of the papers you summarised on your blog post (which I enjoyed reading) appear to want to model the passage of time, which is (historically) something that has been more difficult in CSP than you would perhaps like. I wonder if this has put people off trying to use CSP as a language for modelling these things.»

I have difficulty grasping propagation time, processor cycle time, relative time and “paradigm time” (like in the figure above). Paper [01] in the blog even talks about “implicit time”.

«Regarding your questions on Timed CSP. It supports discrete time ([08] describes it in full). In theory you can handle real-time systems by using the work of Joel Ouaknine on digitisation; essentially you can convert real-time systems to a discrete-time system such that the same refinement checks holds. It is a complex theory though, so I’d not necessarily recommend applying it directly.»

«Essentially, Timed CSP adds a ‘tock’ event to the language where each occurrence of tock means that one time unit has passed. The CSP operators are then extended so that they synchronise with tock appropriately (for example, [] is changed so that tocks do not resolve the choice – instead both sides perform the tocks, thus allowing time to pass whilst we are waiting for an event to occur).»

So it seems that with tocks on both sides, the “horizontal” message sequence diagrams I talked about for CSP would, in Timed CSP, not stop time for any of the parties. Time continues for the sake of verification, provided any of the parties has related to time.

«You’re right about time continuing. Suppose tock is the event that you are wanting to use to model time, and System is a timed system. It is certainly the case that System should never refuse to perform tock – it must be available in every state (this is actually quite difficult to test for in the presence of divergence – I believe Bill’s new book discusses this somewhat).»

Thread continues some paragraphs below.


Timed CSP (if this is what makes the difference for describing rich interface component modeling) is finally available! However, as for the «time distortion field» that I blatantly suggested above – maybe time is not needed as often as one may think?

I have chosen to reference three quite new papers: «FDR3 — A Modern Refinement Checker for CSP» [06] (probably the most theoretical), «Timed CSP: A Retrospective» [07] and «Model checking Timed CSP» [08].

I am quoting them directly instead of trying to summarize them (it's easier!). I have just aligned the quotes into one paragraph for each (ellipses mark my omissions):

[06]

CSP denotational models (e.g. the traces, failures and failures-divergences models). It is also able to check for more properties, including deadlock-freedom, livelock-freedom and determinism, by constructing equivalent refinement checks. … CSP is a process algebra in which programs or processes that communicate events from a set Σ with an environment may be described. We sometimes structure events by sending them along a channel. … Like recent versions of FDR2, FDR3 supports the Timed CSP language.

As seen below, the «recent version» of FDR2 was 2.94 of May 2012.

[07]

A front-runner amongst timed process algebras. … Initially Timed CSP added a single primitive to the language CSP—WAIT t, for any time t. … Timewise refinement: The idea is simple, yet quite powerful: by syntactically transforming a Timed CSP process into a CSP one (essentially dropping all WAIT t terms), much information is preserved, and under appropriate conditions a number of properties can be formally established of the original Timed CSP process by studying its untimed counterpart. … The notion of liveness in Timed CSP, for example, consists in asserting that an event is never blocked, rather than postulate its eventual occurrence. … a number of additional features, such as infinite choice, infinite observations, timeouts and interrupts, signals, … model checking for Timed CSP. To this end, he defined a “finite-state” version of the language, together with a suitable temporal logic, and applied regions-based algorithms … systematic study of the relationship between (dense-time) Timed CSP and a discrete-time version of it … which resulted in a model checking algorithm for a very wide class of specifications that could be verified on the CSP model checker FDR. … many common specifications on Timed CSP processes (liveness, deadlock-freedom, timestop-freedom, . . . )

I assume that «never blocked» means «not forever blocked», since it's described as «eventual occurrence» and since blocking is normal in CSP? … Actually, this was not the answer! Here's what Thomas Gibson-Robinson replied:

«I’m not that familiar with [07], but I believe that it does really mean the event is never blocked. (This is of course just one definition of liveness – you could imagine weaker variants such as the one you suggest.) I believe this because of the example they give – surely even if the eject button were blocked for half a second that could be fatal?»

Of course! I have already blogged about this – why didn't I realise that «an event is never blocked» in this case is a requirement (which of course also could be weaker)? Building with synchronous channels, this requirement simply means that a sender on the channel must always be able to send. It may be met by different means: buffering, XCHAN, or simply by designing the network of synchronous channels (a channel being a shared event) in such a way that the requirement is fulfilled.
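
One way to meet this by «designing the network», sketched in Go (my own illustration): put a small buffer process between sender and receiver, so that the sender's synchronous send is always accepted at once – up to a stated capacity, which still has to be argued about or verified.

package main

import "fmt"

// bufferProc sits between a sender and a receiver and always stands ready
// to accept input while it has spare capacity, so the sender is not blocked.
func bufferProc(in <-chan int, out chan<- int, capacity int) {
    queue := []int{}
    for {
        var send chan<- int // nil: sending disabled while queue is empty
        var head int
        if len(queue) > 0 {
            send = out
            head = queue[0]
        }
        var recv <-chan int // nil: receiving disabled while queue is full
        if len(queue) < capacity {
            recv = in
        }
        select {
        case v, ok := <-recv:
            if !ok {
                // input closed: drain the queue and stop
                for _, q := range queue {
                    out <- q
                }
                close(out)
                return
            }
            queue = append(queue, v)
        case send <- head:
            queue = queue[1:]
        }
    }
}

func main() {
    in, out := make(chan int), make(chan int)
    go bufferProc(in, out, 4)

    for i := 1; i <= 4; i++ {
        in <- i // accepted at once: the buffer process is always listening
    }
    close(in)

    for v := range out {
        fmt.Println(v)
    }
}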

[08]

Though Timed CSP was developed 25 years ago and the CSP-based refinement checker FDR was first released 20 years ago, there has never been a version of this tool for Timed CSP. In this paper we report on the creation of such a version, based on the digitisation results of Ouaknine and the associated development of discrete-time versions of Timed CSP with associated models.

As a consequence of the work described, Timed CSP was introduced in FDR2 in version 2.94 in May 2012.

Go language

I tried to raise the problem at golang-nuts (here), but so far I have had no response. After all, Go has CSP-type channels between goroutines and rather interesting package and interface concepts. Go would, in my opinion, be an interesting language to build rich interface components from. What would be its pros and cons, and would it need additional features or a subset?

In the second paper here we see Java generated. How would Go cope?
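
As a thought experiment (entirely my own sketch, not a proposal from either paper), a Go component could expose its «rich interface» as a struct of direction-typed channels plus a documented protocol – something a tool could in principle extract and check callers against, much like ilist did:

package main

import "fmt"

// SensorIF is the component's interface: the channel directions (and the
// documented order: Cfg before the first Reading) are the "rich" part that
// a tool could check callers against.
type SensorIF struct {
    Cfg     chan<- int // first: configure (e.g. sample period)
    Reading <-chan int // then: a stream of readings
    Stop    chan<- struct{}
}

// NewSensor starts the component and hands back its interface.
func NewSensor() SensorIF {
    cfg := make(chan int)
    readings := make(chan int)
    stop := make(chan struct{})

    go func() {
        period := <-cfg // protocol step 1: must be configured first
        n := 0
        for {
            select {
            case readings <- n * period: // step 2: deliver readings
                n++
            case <-stop: // step 3: stop ends the session
                close(readings)
                return
            }
        }
    }()

    return SensorIF{Cfg: cfg, Reading: readings, Stop: stop}
}

func main() {
    s := NewSensor()
    s.Cfg <- 10
    fmt.Println(<-s.Reading, <-s.Reading, <-s.Reading) // 0 10 20
    s.Stop <- struct{}{}
}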

Conclusion

I would hope that this blog note might be a stroke of the wing of a butterfly when it comes to rich interface component modeling. Might there be attempts to use tools like FDR3 or PAT for this? Could CSP-based tools join the band?

Afterwards

I have had a pile of IEEE Computer and IEEE Software on my desk for too long now. So I decided to cut the pile down, with a goal of one issue per day. Today I discovered IEEE Software May/June 2011, which is a special issue on Software Components: Beyond Programming. It was edited by Ivica Crnkovic (who also coauthored the first paper discussed here), Judith Stafford and Clemens Szyperski.

It shows the history of component-based software engineering (CBSE) and state of the art. It also has a point / counterpoint section (Kurt Wallnau / Philippe Kruchten). In the counterpoint I read that «ADLs (Architecture Description Languages) have failed to «cross the chasm». Industry hardly ever uses them.» «We’re more or less left with UML-as-an-ADL». He also quotes Steve Mellor who said that «model-driven design and model-driven architecture has been «just two years away from industrial use» for the last 10 years. But now there’s service-oriented architecture..»

Even in that magazine the authors seem to think that asynchronous design is more or less the only way. Personally I am waiting for someone to realise that graphical descriptions need to be less overloaded, that languages need to be more conceptually graphical, and that asynchronous and synchronous are tools, not solutions. I still believe that CSP has more to offer (smaller, more specialized) than UML's present offerings (too wide, too expandable).

References

Wiki-refs: Butterfly effect, Component-based software engineering (CBSE), CSP, Go language, Object-oriented programming, occam-π, PAT model checker, Process-oriented programming, Promela, Refinement, Timed automata, Transputer, Uppaal Model Checker

  1. Component Models for Reasoning, Cristina Seceleanu and Ivica Crnkovic, Mälardalen University, Sweden. In IEEE Computer, Nov. 2013 (vol. 46 no. 11) pp. 40-47. See http://www.computer.org/csdl/mags/co/2013/11/mco2013110040-abs.html (The author has promised a reference to a related article ready for download, but [05] is also interesting)
  2. Tool support for the rapid composition, analysis and implementation of reactive services by Frank Alexander Kraemer, Vidar Slåtten, Peter Herrmann, in The Journal of Systems and Software 82 (2009) 2068–2080. See http://www.item.ntnu.no/people/personalpages/fac/kraemer/publications/krshjss09 (With ref to an initial paper that’s downloadable)
  3. On the difference between analysis and design, and why it is relevant for the interpretation of models in Model Driven Engineering by Gonzalo Génova, María C. Valiente and Mónica Marrero. See http://www.jot.fm/issues/issue_2009_01/column7/
  4. Epistemological Pluralism and the Revaluation of the Concrete, Sherry Turkle and Seymour Papert. See http://www.papert.org/articles/EpistemologicalPluralism.html
  5. Formal Semantics of the ProCom Real-Time Component Model by Aneta Vulgarakis, Jagadish Suryadevara, Jan Carlson, Cristina Seceleanu and Paul Pettersson, Mälardalen Real-Time Research Centre, Mälardalen University, Västerås, Sweden, see http://suryadevara.se/onewebmedia/SEAA09-final.pdf. I don't think this is a peer reviewed and published paper.
  6. FDR3 — A Modern Refinement Checker for CSP by Thomas Gibson-Robinson, Philip Armstrong, Alexandre Boulgakov, and A.W. Roscoe (Accepted to TACAS 2014), see http://www.cs.ox.ac.uk/files/6237/Document.pdf
  7. Timed CSP: A Retrospective by Joël Ouaknine and Steve Schneider, presented at APC 2005 (or ENTCS06?), see http://www.computing.surrey.ac.uk/personal/st/S.Schneider/papers/entcs06.pdf
  8. Model checking Timed CSP by Philip Armstrong, Gavin Lowe and Joël Ouaknine (March 2011), see http://www.cs.ox.ac.uk/joel.ouaknine/publications/timedcsp12.pdf
  9. High-Level Paradigms for Deadlock-Free High-Performance Systems by Peter H. Welch, G.R.R. Justo and C.J. Willcock, in Transputer Applications and Systems ’93, see http://kar.kent.ac.uk/21094/1/HigherWelch.pdf
  10. Programming as a Way of Thinking by Allen Downey (blog in Scientific American, 2017). See https://blogs.scientificamerican.com/guest-blog/programming-as-a-way-of-thinking
  11. Concurrency: State Models & Java Programs by Jeff Magee & Jeff Kramer, see https://www.doc.ic.ac.uk/~jnm/book/. They introduce the Labelled Transition System Analyzer (LTSA)
