3 Basic Rules for Applications in a Safety-Critical Environment

When I wrote my blog post about a backdoor in an air-conditioning system compromising the security of an entire company network, I thought that was bad. But at least the air-conditioning supplier closed the backdoor. Not quickly, but they did. Last week I learned that this is neither the worst possible case, nor is fixing such problems self-evident.



One of the big vendors of infusion pumps in the US implemented a backdoor. Unbelievable, but true: in a worst-case scenario, this feature could kill every patient in a hospital once someone with malicious intent gains physical access to just one of the pumps used there. In the past, many security issues were caused by implementation problems inside core Linux components such as OpenSSH. Some people didn't install the fix for a bug, and sooner or later hackers used that bug to gain access to the system. But in the case of the infusion pumps, an attacker doesn't even need a bug inside Linux. And the device manufacturer claims this is not a bug: it is by design and intention. No, I'm not joking, and I'm not exaggerating. If someone with bad intent uses this "feature", the patient's only hope of survival is to be connected to a (not vulnerable) vital-sign monitor, with a fast nurse and resuscitation team nearby.


This should not be a question of economics!

From the economic point of view, this dangerous backdoor is the result of a cost-benefit calculation. If you want to fix a firmware bug, you need to update all your devices in the field. In this case, it even looks like a service technician has to visit every installed device and perform a local update. Besides the damage to customer reputation (caused by complicated service procedures), there are also high travel and personnel costs for the service technicians. So for some manufacturers it may look like a good idea to do the update only when an issue occurs or when service is due for other reasons. Somehow I can even understand the manufacturers, but nevertheless: we are talking about vital equipment here, and issues like this should be anticipated!


How we can help

For applications in safety-critical environments we offer many working and proven solutions. At Kontron we do a lot of work on software security and boot protection, to make sure your application has not been modified and that it really is your application running at all. Based on proven protection software, we offer whitelisting and blacklisting, including PKI-based signing of all components. Trusted timestamping (with a PKI digital signature) is the process of securely keeping track of the creation and modification time of a document or component. Security here means that no one, not even the owner, should be able to change it once it has been frozen; the timestamp-based integrity is therefore never compromised. Of course, this also includes the possibility of secure remote updates, but only once they have been enabled by a local employee. Many solutions exist for remote updates, but in this case the local acknowledgement is vital.
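To make the timestamping idea concrete, here is a minimal sketch of binding a component's hash to a point in time and signing the pair, so that any later modification is detectable. This is an illustration only: a real trusted timestamp (e.g. RFC 3161) uses an asymmetric PKI signature from a timestamp authority; the HMAC key here is a hypothetical stand-in so the example stays self-contained.

```python
import hashlib
import hmac
import time

# Hypothetical key held by the timestamp authority (stands in for a PKI key).
AUTHORITY_KEY = b"demo-secret-held-by-timestamp-authority"

def issue_timestamp(component: bytes, now: int) -> dict:
    """Bind the component's hash to a timestamp and sign both together."""
    digest = hashlib.sha256(component).hexdigest()
    payload = f"{digest}|{now}".encode()
    signature = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return {"digest": digest, "time": now, "signature": signature}

def verify_timestamp(component: bytes, token: dict) -> bool:
    """Any later modification of the component invalidates the token."""
    digest = hashlib.sha256(component).hexdigest()
    payload = f"{digest}|{token['time']}".encode()
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

firmware = b"firmware image v1.0"
token = issue_timestamp(firmware, int(time.time()))
assert verify_timestamp(firmware, token)       # untouched component: valid
assert not verify_timestamp(b"tampered", token)  # modified component: rejected
```

The important property is exactly the one described above: once the token is issued, neither the component nor the recorded time can be changed without breaking the signature.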

We should always keep in mind that "This system will reboot in 5 minutes" is not a message you want to see on the screen of, for example, the ventilation system your life depends on. There are many important factors that need to be considered when setting up a security concept, and there are experts like us who offer training and consulting. If you build any life-critical product (medical, railway or otherwise) and add connectivity to it, please consult someone like us.


Which basic rules should you follow?

At Kontron we support FIPS-compliant solutions, but this will only work if you follow at least some basic security rules. For me, there are three really basic security commandments you should always keep in mind:


  1. Thou shalt have no features other than those thy customer needs!

Any feature you include is a potential root cause of a vulnerability. Don't implement something just because it's fun to implement; that fun can enable a dangerous security breach.


  2. Thou shalt not covet thy laziness!

I use SSH myself, and certificates without a password are quite handy and speed up login. I know that having no login at all makes debugging even faster, but it also obliterates all the security built into the Linux system. This is the root cause of being able to kill people across an entire hospital here. Use credentials that you can easily remember but that are hard to guess. User "root" with password "toor", or no password at all, is a very bad idea. Using an administrative user which you don't call "root" or "admin" helps at least a bit.
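The laziness this commandment warns about usually shows up in the SSH daemon's settings. A minimal hardening sketch for OpenSSH's `/etc/ssh/sshd_config` (the option names are standard OpenSSH; the values are suggestions, not a complete policy):

```
# Never allow direct root logins; log in as a named user and escalate.
PermitRootLogin no

# Key-based authentication only; the keys themselves should carry a passphrase.
PubkeyAuthentication yes
PasswordAuthentication no

# Reject accounts with empty passwords outright.
PermitEmptyPasswords no
```

Passphrase-protected keys keep most of the convenience of certificate login while avoiding the "no login at all" trap described above.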


  3. Thou shalt not bear the same answer to the same question!

This sounds quite strange to an engineer: if you ask the same question several times, the normal expectation is the same result every time. When talking about security, encryption to be more precise, it's different: it is highly recommended to produce a different result (with the same meaning) every time. A practical example: if you encrypt the message "sunk buoy", the encrypted message has to look different every time (not always "tknesrev eJob"). Otherwise, once an attacker has learned what one encrypted message means, every later message with the same content becomes readable, and it is much harder to detect a logical pattern if the result looks different every time. To achieve this, you encrypt some random garbage, so-called "salt", together with your original message. The Enigma had at least one weakness, and it was a main reason the German submarines were destroyed in World War II: no matter what, a letter was always replaced by another letter, never by itself, so an "A" was never an "A". The Germans were also lazy, and some messages always looked the same. With good knowledge of cryptography and of the structure of a message, one could read everything. With some "salt" mixed in, so that an "A" may become an "A" again, no one would have been able to decrypt the messages.
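The salt idea can be shown in a few lines. This sketch uses salted hashing rather than full encryption so it needs only the Python standard library, but the property demonstrated is the same one described above: the same input message never produces the same output twice, yet each output can still be verified.

```python
import hashlib
import os

def protect(message: bytes) -> bytes:
    """Hash the message with a fresh random salt; prepend the salt to the result."""
    salt = os.urandom(16)                          # new random salt every call
    digest = hashlib.sha256(salt + message).digest()
    return salt + digest                           # salt travels with the output

def verify(message: bytes, protected: bytes) -> bool:
    """Recompute the digest using the stored salt and compare."""
    salt, digest = protected[:16], protected[16:]
    return hashlib.sha256(salt + message).digest() == digest

msg = b"sunk buoy"
first, second = protect(msg), protect(msg)
assert first != second                 # identical messages, different outputs
assert verify(msg, first) and verify(msg, second)
```

Because the salt is random, two protected copies of "sunk buoy" share no visible pattern, which is exactly what the Enigma operators were missing.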

How do you think risks like this could be avoided, or at least reduced, in the future?

