- 1 Intro
- 2 What surprised me
- 3 A glimpse of the H&S book
- 4 Postscript
- 5 Fault injection detection
- 6 Acknowledgements
- 7 References
New 3Apr2020. Updated 3Dec2020. Standard disclaimer.
My (n-1)th attempt at a preamble:
I stumbled across the book “The Huawei and Snowden Questions” by Olav Lysne. It made me realise that the technology forests contain trees of systems and solutions of hardware and software, and that I sit on a leaf. However, I was astonished to find that this leaf might withstand more wind and rain than I had seen from my desk. In this note I will try to relate the leaf to the tree and the forest, with Lysne’s book as a source of inspiration.
Hold your breath and read on:
This note started with a weekly letter from editors Ole Petter B. Stokke and Jørgen Jacobsen at kode24.no on Friday 3Apr2020, in the middle of the Corona pandemic. They pointed to an article in kode24.no on why Simula Research Laboratory did not want to open-source the code of the Norwegian coronavirus infection-tracking app (Smittestopp / Infection-stop), which in turn referred to an earlier article on why its author, Indergård, argued that it should have open source code. When I came back to Simula Research Laboratory‘s defence, I discovered their reference to the Huawei and Snowden book by Lysne. (I name it H&S here.) Lysne‘s book is free and can be downloaded. I did.
Now breathe! The reward is that this is one of my shortest notes:
What I saw in Lysne‘s book made me jump on my leaf. At first I thought that I was associating after reading the book. However, reading this note over when it was almost finished, maybe it’s based on dissociating. Anyhow, whatever it was started with the H&S book.
First, the 1987 yellow book by acm PRESS appeared to me. From before the Internet. I still have the book (the version I found on the net has another cover). This book contains two lectures I have never forgotten:
1. Ken Thompson‘s Turing Award lecture “Reflections on Trusting Trust” (1983). Thompson and his companion Dennis M. Ritchie jointly received the prize “for their development of generic operating systems theory and specifically for the implementation of the UNIX operating system”. (Update: my association wasn’t completely off target: after having delved more into H&S I see that Lysne discusses this lecture in Chapter 4: Development of ICT Systems, 4.1 Software Development (p.33).)
2. The acm PRESS book also contains Tony C.A.R. Hoare’s Turing Award lecture “The Emperor’s Old Clothes” (1980). Hoare received the prize “for his fundamental contributions to the definition and design of programming languages”. A quote from the lecture: “An unreliable programming language generating unreliable programs constitutes a far greater risk to our environment and to our society than unsafe cars, toxic pesticides, or accidents at nuclear power stations. Be vigilant to reduce that risk, not to increase it.” He certainly had something special in mind when he issued that warning (it’s named in the lecture). Remember, this is the man behind Hoare logic, Quicksort, null references (his “billion-dollar mistake”), CSP and more. Much of which is also covered in the H&S book. In the lecture Hoare continues: “Let me not end on this somber note. To have our best advice ignored is the common fate of all who take on the role of consultant, ever since Cassandra pointed out the dangers of bringing a wooden horse within the walls of Troy.” Of course, Lysne in H&S also discusses much of this. (Aside: I have met Hoare several times at conferences and have exchanged some words with him. I also have his autograph (see note 072).)
1980 is not as far away as you may think. For one, I was 30 then. It was two years after Hoare’s first publication of CSP, and 5 years before his update, which is still the basis. The tool(s) are up to date some 40 years later. I see that the newest version of FDR4 (Oxford University / Cocotec) is from 19Mar2019. You can run it non-commercially for free at home, or on a cluster, for parallel refinement-checking. It’s not a toy. It handles gigastates for you, those left after the maths has found which states are not necessary to visit.
What surprised me
I have been to numerous conferences, listened to twenty times more numerous lectures, even presented some papers myself – about how to make programs safe. Using the CSP type of methodology. I coded in the late occam for the Transputer, but also as converted to C by the Southampton Portable occam Compiler (SPoC) tool. I wrote run-time systems in C. Recently I have coded in XC (xCORE / XMOS). This blog is so full of all that stuff that I can’t reference it all. I am biased enough. However, I will reference some of my own stuff at the end of this note.
Still, I get surprised when, in the context of the coronavirus outbreak, somebody holds a window up to the world, showing something in there to help with one aspect of making safe code. Then I saw a mirror, reflecting so much of what I have been taught myself, and have tried to practice.
If the coders of WhatsApp did it with Erlang, and Google made the Go programming language along the same line of reasoning, and if some of the young gurus out there now preach it, then maybe I should take a fresh look. Again. I think much of it is about containment, and about what a task or process is. A lesson I have had to re-learn again and again over a career of 40+ years.
|Terms here follow each language’s own idiomatic usage.
I have seen places where a process is defined as “software on a different computer” and a thread as “software on the same computer”. Also, concurrent as code that shares a processor or core, and parallel as running on different cores. And then task, which I guess could be either of those (as indeed both of the others might be..). And then there is multi-core, the memory model, the communication model and the synchronisation model (++). All this adds up to how the process model is defined. Answering this is not the focus here. Instead I try to follow the idiomatic naming used in the different programming languages.
I will not take you through all the references I have included here, but about the H&S book I do want to write a little.
A glimpse of the H&S book
Disclaimer: This is not a summary, nor a review. I see H&S entirely through my own subjective and biased glasses. I haven’t even claimed to have read it all, or understood the implications of what I did read. Nor that I necessarily agree with how he weights some of the facets that I think I know something about. But I do know one thing: you should throw yourself at this free book.
Here is exactly how I was taken to the H&S book. Simula Research Laboratory writes at the end of the chapter Open source and privacy:
|Open source and privacy
Those who are particularly interested in the relationship between source code and securing sensitive personal data in computer equipment can read a thorough review of the professional aspects of this in the book “The Huawei and Snowden questions.” The book is available for free in pdf format here.
In the preface Lysne writes that his intention with the book was to “provide answers to these questions by analysing the technical state of the art in all relevant fields of expertise”. I think the questions are (1) how to analyse equipment to ensure that it has not been designed “to act against the interests of its owners”, but also (2) how to help manufacturers “to (re)build trust in their electronic equipment”.
I always search for my buzz words when I have a PDF in front of me. There are several hits on “safe” and one on “unsafe”. And as mentioned, “Thompson”, “CSP” and “Hoare”.
Since my background is from working with safety-critical systems, I would also add safety from another angle. A car brake or a fire detection system shall only be as risky as the requirement allows. Risk is probability times consequence. The keyword here is IEC 61508 (I have written much about it; search for it in the Search field above).
Lysne goes through several methodologies, what they are able to help with, and what their limitations are. This paragraph somewhat intermingles his wording and mine. Including the fact that there are theoretical limits to what may be done through, for example, code analysis. Even if detecting some types of malware is theoretically possible (the problems are “NP-complete”), we don’t know how long it would take to solve the problem at hand. Also, the tool we use may itself be infected by an undetectable virus, which would make it impossible to find out whether we can trust the result (as Thompson showed). The methodologies he mentions will not solve the problems, Lysne explains. But each will add up to some degree of help. This also includes testing, as we all know. One cannot test a system to be fault free, only see whether the tests fail. Again, for a very small system it’s possible, but for increasingly complex systems waiting for the result is not feasible.
However, in Lysne’s opinion there is one way forward which is more The Way Forward than other ways. On page 112 he writes that:
The conclusion on the current state of the art with respect to our problem is rather bleak. If we do not trust the makers or vendors of our electronic equipment, there seems to be no framework of knowledge that adequately addresses the full complexity of the question. However, some approaches appear to be more promising than others and they should receive the additional attention that the seriousness of the problem seems to justify. We will mention some of them in the following sections. (Lysne, H&S p.112)
Especially promising ways forward
I think he argues that research on the points below might be especially promising. Mostly my wording, except when (quoted):
- Encryption
Because encrypted protocols etc. may provide “live” verification. But they run on volatile hardware.
- Formal Methods
Because it promises to guarantee the absence of unwanted functionality.
But verifying machine code is better than verifying source code.
Even more, verifying the hardware first would make this even better.
Provided its production were also assured.
- Heterogeneity and Containment
Because it has the potential to handle all sides of the problem.
Heterogeneity is duplication, with the added assumption that no single piece of hardware or software may be trusted on its own.
Containment is about having a software model where parts are not soft balloons floating around, easy to hollow out accidentally from the inside by a small needle that came loose, or from the outside by darts. Balloons should be puncture-proof, and so should the communication between them.
“Unlike other conceivable approaches, there are no important parts of the problem that we can identify upfront as impossible to handle.” (p.115)
Bringing it to my leaf
In this chapter I will describe the resonance that some of the points in H&S triggered in me, sitting on this leaf.
This is fantastic, and surprising to me. Surprising, because I did not think that what I have been working with, with the simple aim of making things as good as I possibly could, might also have had some potential for something more. I am writing this simply because I feel that something of what I worked with was ok, nothing more. Disclaimer, disclaimer..
Aside: CRC16, CRC32 and the Birthday Paradox
(The broader picture of this product is described in My aquarium’s data radioed through the shelf.)
Like, the other day, or was it year, when I discovered something called the Birthday Paradox, which causes a CRC16 checksum unpacked from an encrypted message coming in from a radio to fail to detect a fault on as few as every 2^(16/2) = 2^8 = 256th message. An undetected fault on every 256th message – bad! Not, as I had thought, a failure to detect a fault only on every 2^16 ≈ 65 thousandth message, which would have been 2^8 = 256 times better.
But when I added a CRC32 around the payload, the detection might only fail on every 2^((16+32)/2) = 2^24 ≈ 16.7 millionth message. So for high-security messaging even more protocol layering must be added. The encryption key of the RFM69 radio with a Semtech SX1231 chip was 16 bytes. Equal keys on both sides ensure that the right message is read and that it is difficult to eavesdrop, but a CRC16 is still not enough to make sure that what is read is the same as what was sent – over a noisy radio. One in 256 messages may fail catastrophically.
Just my telling you this might enable you to come by the house and, optimistically at least, attempt to pick up some data from the aquarium. But I think I would notice you during the time you’d need to sit there. And the aquarium takes no input, so you cannot boil the fishes. But you may know that the radio still has to receive something, since it wants to send when the air is silent. Anybody there? Listen… Yes! And how is the handling of this Yes coded – the fact that I do receive something but don’t do anything with it?
I coded a library in XC and ran the code on an xCORE XMOS multi-core embedded controller with built-in task scheduling and communication. No library for internal comms. No library for tasks or concurrency. All this is in hardware. With tasks that cannot (easily) be shot down or punctured.
I ported a known radio library called RFM69 with pieces from a library called RadioHead. Once you know that, perhaps you could send some noise on the right frequency (not 433.92 MHz like most, but 433.706 MHz so it won’t keep my car key from opening the doors), and then jam my aquarium controller, and have my XC code fall over and boil the fishes. Could that Yes have made the aquarium SW fall over? Well, max heating is 48 W and I have an over-temp melting fuse. And my software for the heating elements has to retrigger a monostable latch every 10 ms to keep them warm, else no heat. So it might be hard to persuade that software task or process to start pulsing when it should not, from anywhere outside the standard line-of-command interface.
But I don’t have any task duplication and voting or whatever Lysne was thinking of. I will study more what he says about this. But I won’t say more about it here.
Since I don’t sell these units, you would not be able to buy one and test it to failure. And my aquarium is not on an IoT or AIoT box, nor connected to the internet. But yes, I was aware of this, and did make a design and implementation that enable my software to pick up any amount of jammed messages without falling over.
Luckily XC won’t let me do unsafe references without me explicitly saying so. I did have to make a global register that was accessible to all tasks for debug, but it was wrapped in a macro that came with the development system. The design of the XC language has made sure that needles and darts are stored far away. And even unsafe is controlled to a very large extent by the compiler, to make it quite safe.
For XC code I don’t need my knock-come deadlock-free pattern that I modelled and formally verified in Promela with the SPIN model checker. In Promela one builds a model as a set of communicating processes and their surroundings, and lets the tool generate a correctness checker in C that explores the full state space to make sure that everything is executable. I used the knock-come pattern a lot at work, in C with a scheduler we wrote. The code runs in a patented system that makes sure cruise ships and large maritime units may arrive “Safe to Port”. Or the
XCHAN channel type that I tried to formally model in CSPm with the FDR tool. Also a set of communicating processes, but here one writes two models in CSPm: the implementation and the specification. The verification done is that the implementation refines the specification. But for XC I need neither knock-come nor XCHAN, because XC has transactions, state and roles that it handles for me, in a deadlock-free and thread-safe manner (a list:).
I have now described what I discovered on my little leaf. I recognised something there from Lysne’s book, even some flavours of his “most promising ways forward”. Some of it was from my professional years at work; that work was utilised in real products. But it is none the less fun now, as a retiree, to do XC and FPGA matters in my home office. I guess this about ends this note. I hope you have been motivated to read Lysne’s book. And maybe even to poke around in this text to see if any of it might be interesting for you. Wait a little, I added a postscript:
Erlang vs XC
I have no hands-on experience with Erlang. I wish I had spent as many man-hours on it as I have on XC after I retired. I did not use XC at work. I retired into bliss, so to say. I have commented some on Erlang over the years (search for it in the search field (top)), and I will try to write some remarks here, with the intention of keeping them rather inconclusive. In view of this, here is some sort of table. The left column is based on chapter 11.3 Erlang: A Programming Language Supporting Containment in H&S. The right column is according to my own experience, also see My XMOS pages. I have not added any rows. If I had started with XC the table would have been longer. But I did not want to go further than Lysne’s points:
|Erlang according to H&S||Comment on both sides here||XC|
|Multi-platform, open source||Apples vs. oranges. Erlang vs Go even more fruitful?||Proprietary for xCORE multi-core processors by XMOS. Many of the language primitives have been implemented in hardware (scheduler / logical cores, chanends, timers) and are therefore limited|
|“The core component in an Erlang system is a process. This process is strongly isolated from other processes; thus, its existence or performance is not, per se, dependent on the perfect operation of the other processes.”||Process ≈ task. XC process types allow for lighter and lighter threads regarding runtime and scheduling. But both languages have these concurrent objects as lightweight. XC tasks are presumably more tightly coupled than Erlang processes (XC: if a note falls off the score it’s probably more difficult to finish as some other tune)||The core component of XC is a task. A task is strongly isolated from other tasks, always running (1) on a logical core alone (normal), (2) scheduled with others (no common code after while true select loop) (combinable) or (3) more like a function (only communicates with other tasks) (distributable)|
|“Processes can only interact through message passing and there is no shared memory. This means that the memory state of one process cannot be corrupted by another process.”||Erlang may wait on filtered input, but sender has no automatic acknowledge of success. XC supports CSP-type waiting from other tasks and can block  a client while serving another. Both solutions enable a task/process not to have to push and pop messages not wanted in a certain state, avoiding it becoming its own scheduler||Tasks interact with messages through synchronised (zero-buffering) channels or (on top of them) with synchronised interface calls. Topmost, XC supports an asynchronous pattern using sessions. Tasks are fully black box with no explicit sharing of memory. A synchronised channel is a safe permission for copying or zero-copying potentially shared data|
|“The addition and removal of processes are lightweight operations.”||Since tasks in XC are supported in HW, I would assume that it would be even more lightweight than for Erlang||Normal tasks may be started and stopped dynamically, but I don’t think after the binary image has been created (here). RTOS support most probably adds dynamic loading, but I assume that would be for RTOS-native processes (see FreeRTOS here)?|
|..”replacement of misbehaving pieces of software at runtime”..||Erlang runtime code is able to replace user code elements without downtime of the rest of the system. I cannot see that this would be easy for an XC system.||Bare XC or its runtime would not allow any replacement of code. FreeRTOS includes an over-the-air (OTA) update, but I have yet to verify that this goes for the XMOS branch. It is probably not hot OTA.|
Erlang: also green blocking
I did throw in references to two Erlang books (which I have ordered). I also added an excellent chapter where the designer of Erlang, Joe Armstrong (born 1950, like myself, but died a year ago), writes about “Red and Green callbacks”. Red callbacks are standard callbacks, where the possibility of callback hell (as some call it) quickly arises. Some see this as no good way to build software, even if the web page you are reading right now probably contains hundreds of red callbacks. The green callback is the one where even blocking is not dangerous, because it doesn’t block other processes. Repeating two references: Rich Hickey’s excellent lecture about core.async and my note “Not so blocking after all”.
But still Armstrong designed Erlang with asynchronous sending of messages, kind of as if he were scared of blocking in that situation. As opposed to CSP, with which the Erlang community seems to want to be associated (*). The problem of full buffers is there, when a producer is more eager than a consumer. What to do when a buffer is full? And the case where multiple clients make use of a single server would have to introduce some sort of synchronisation, even in Erlang. I assume Erlang belongs in the tradition of SDL (Specification and Description Language), defined by CCITT and used with success in telecom systems. A data packet by “send and forget” is a nice paradigm there. I have also discussed this a lot (with questions answered by Rolv Bræk, a person central to the early development of SDL).
(*) But Armstrong doesn’t mention CSP with a single word either. In my opinion this is probably just as fair to Erlang’s idioms. But I miss a chapter that would outline the history of concurrent programming (511 pages plus a few would have been ok for a glued book, I assume). It would have been natural to include the long lines there. About not allowing side effects, because “side effects and concurrency don’t mix”. About no mutexes, processes that interact through messages, processes that don’t share data but only use “pure message passing”. Erlang is from 1986, preceded by CSP from 1978 and 1985, occam from 1983, Ada from 1980. Being a little paranoid, the people behind Go didn’t tell of their history either, when in the nineties they made the Limbo language. But they did come out of the den with Go: see “Bell Labs and CSP Threads” by Russ Cox and “Origins of Go Concurrency style” by Rob Pike.
Fault injection detection
See Fault injection detection (started 23Apr2020)
Thanks to Edvard Kristoffer Karlsen (MSE) for a private peer review of a late version of this note. After this, you the reader should be better taken care of. Following his comments I also added a Narrative style disclaimer for the first time. Thanks!
Errata in H&S
p.33 = Thomson → Thompson
- English: Ada. Apples and oranges. Birthday Problem (Birthday Paradox). Communicating Sequential Processes (CSP). Erlang. FDR. Tony C.A.R. Hoare. Hoare logic. IEC 61508. Joe Armstrong. Ken Thompson. MSE (sivilingeniør = siv.ing.). NP-completeness. occam. Promela. Specification and Description Language (SDL). Turing machine.
- Norwegian: Olav Lysne (translated).
- The Huawei and Snowden Questions. (“H&S“) Can Electronic Equipment from Untrusted Vendors be Verified? Can an Untrusted Vendor Build Trust into Electronic Equipment?
by OLAV LYSNE (SimulaMet, Oslo. Lysne at Wikipedia (NO)). Simula SpringerBriefs on Computing. ISBN 978-3-319-74949-5 and ISBN 978-3-319-74950-1 (eBook). Read at https://doi.org/10.1007/978-3-319-74950-1 © The Author(s) 2018
- ACM Turing Award Lectures. The First Twenty Years. 1966 to 1985. ACM Press (1987). ISBN: 0201077949
– Reflections on Trusting Trust.
by KEN THOMPSON. To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software. Turing Award Lecture (1983), Communications of the ACM, August 1984 Volume 27 Number 8. Read at https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
– The Emperor’s Old Clothes
by C.A.R. HOARE. Turing Award lecture (1980) is also in the book. Read it at https://www.cs.fsu.edu/~engelen/courses/COP4610/hoare.pdf
- Bekjempe pandemi? «Det finnes en app for det» (Fight a pandemic? “There’s an app for that”) (NO)
by SIGVE INDERGÅRD at Agenda magasin (30. mars 2020). Read at https://agendamagasin.no/kommentarer/bekjempe-pandemi-finnes-app/
- Simula bistår Folkehelseinstituttet (Simula assists the Norwegian Institute of Public Health) (NO)
at SIMULA RESEARCH LABORATORY (OSLO). “Professor Olav Lysne og research professor Ahmed Elmokashfi har ledet arbeidet med uviklingen av appen” (“Professor Olav Lysne and research professor Ahmed Elmokashfi have led the work on the development of the app”). 27. mars 2020. Read at https://www.simula.no/news/simula-bistar-folkehelseinstituttet
- Derfor bør korona-appen ha åpen kildekode (This is why the corona app should have open source code) (NO)
by TROND ARVE WASSKOG CTO at Bekk, at kode24.no (1. april 2020), read at https://www.kode24.no/kodenytt/derfor-bor-korona-appen-ha-apen-kildekode/72320730 (translated). Originally from (31. mars 2020) at https://blogg.bekk.no/fhis-kontaktsporingsapp-som-åpen-kildekode-bfc03c32ff6c (translation failed)
- Derfor vil ikke Simula åpne kildekoden (This is why Simula will not open the source code) (NO)
by Simula Research Laboratory (2. april 2020) at kode24.no, read at https://www.kode24.no/kodenytt/derfor-vil-ikke-simula-apne-kildekoden/72324506 (translated). The source of this article seems to be at Simula’s own page: https://www.simula.no/news/digital-smittesporing-apen-kildekode (translated)
- Programming Erlang. Software for a Concurrent World
by JOE ARMSTRONG. The Pragmatic Programmers (2013) ISBN-13: 9781937785536
- Concurrent programming in Erlang
by JOE ARMSTRONG, Robert VIRDING, Claes WIKSTRÖM and Mike WILLIAMS. Prentice Hall. ISBN-10: 013508301X, ISBN-13: 978-0135083017 (1996)
(Part I of ISBN 013508301X), read at https://erlang.org/download/erlang-book-part1.pdf
- Red and Green callbacks
by JOE ARMSTRONG at his blog “Erlang and other stuff” (2013). Read at https://joearms.github.io/#2013-04-02%20Red%20and%20Green%20callbacks
- Clojure core.async
by RICH HICKEY. He discusses the motivation, design and use of the Clojure core.async library. Lecture (45 mins). See note 084:
- SimulaMet or Simula Metropolitan, Oslo, see https://www.simulamet.no/research/projects/organisation. They are owned by Simula Research Laboratory (Simula), see https://www.simula.no/, and Oslo Metropolitan University (OsloMet)
- The “knock-come” deadlock free pattern, by me (2009), see https://oyvteig.blogspot.com/2009/03/009-knock-come-deadlock-free-pattern.html
- XCHANs: Notes on a New Channel Type by me (2012), see https://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN
- Becoming textual: attempting to model ‘XCHAN’ with CSPm by ØYVIND TEIG (2013), https://www.teigfam.net/oyvind/home/technology/063-lecture-ntnu/
- My XMOS pages by me
- New ALT for Application Timers and Synchronisation Point Scheduling by ØYVIND TEIG and PER JOHAN VANNEBO (2009), see https://www.teigfam.net/oyvind/pub/pub_details.html#NewALT
- Patent documents – Norsk (329859) (granted) and PCT/NO2009/000319 also here – now patent #2353255 (AutroKeeper)
- Is it possible to detect use of “unsafe” by inspecting the object code? – In the XE language, in XE Community (XCore Exchange), by me 6Apr2020
- “Smittestopp”, the Norwegian app to help “infection-stop” (name published 9Apr2020), see https://no.wikipedia.org/wiki/Smittestopp (translated)
- Inside Erlang, The Rare Programming Language Behind WhatsApp’s Success. Facebook’s $19 billion acquisition is winning the messaging wars thanks to an unusual programming language, by AINSLEY O’CONNELL, at FastCompany (21Feb2014)
- Not so blocking after all by me
- Some questions about SDL by me
- Privacy-Preserving Contact Tracing
by APPLE and GOOGLE (iOS and Android). It consists of three parts: Contact Tracing – Bluetooth Specification, Contact Tracing – Cryptography Specification and Contact Tracing – Framework API. This point has explicitly not been mentioned in this note. It arrived just this day. But it is tremendously interesting and full of hope.
- Skriv under mot datahandelsavtalen i WTO – redd vår digitale framtid (Sign against the data trading agreement in the WTO – save our digital future) (NO), by ATTAC, see https://attac.no/fremtiden/nei-til-datahandelsavtalen/ (translated)