Aedan Coffey is a consultant electronic design engineer working in Ireland.


We got hacked. Some nice person somewhere in the world managed to get into the little black box that connects our house to the internet and modify it. The result was that every time anyone in the house clicked on a link on a web page they were misdirected to some rather unsavoury ones instead.

To us this was merely an unpleasant inconvenience; no permanent damage was done and a few days later we had a new router and it was all fixed. But what happens when a medical device gets hacked? It’s probably not too serious if somebody manages to download all the data from the activity tracker on your wrist, but imagine the consequences of a pacemaker that is suddenly set to defibrillate continuously at its maximum power, an insulin pump that delivers its complete reservoir of insulin in a few moments or a ventilator that just stops working without any alarms going off.

You may think these things can’t happen. However, the ability of hackers to do exactly this has been demonstrated – thankfully only in controlled environments. A well-known “white hat” hacker called Barnaby Jack was scheduled to demonstrate the hacking of a pacemaker from 10m away at a hacking convention in Las Vegas in July 2013. Unfortunately he died a few weeks beforehand.

Whilst in office in 2007, US Vice President Dick Cheney had a defibrillator implanted. The possibility of this device being hacked was deemed serious enough that all wireless access to it was disabled.

How can this be allowed to happen? Nowadays pretty well any device with electronics in it has an embedded computer and some means of external – usually wireless – communication. If an implanted device doesn’t have wireless communication, it may need to be removed from the patient periodically to retrieve valuable diagnostic information or to readjust the therapy settings, which is obviously not desirable.

The computer embedded in a medical device generally has a security hole or two. Its designers didn’t intend them to be there, but over time – as the flaws are discovered – the designers release a new version of the software, and in time more weaknesses are found. It’s an endless race to keep one step ahead of the hackers.

The computer industry tackles this problem by constantly sending out software updates, sometimes as frequently as every few days. These can sometimes be in response to vulnerabilities that were discovered only weeks or even days earlier. These vulnerabilities may have been lurking there, unexploited, for many years, but need to be fixed very quickly once discovered. Contrast this fast turnaround with the medical device and drugs industry, which – correctly – has cautious and extensive testing and approval procedures for changes, and these take a long time. We have a convergence of two industries that travel at vastly different speeds.

So what can we do about this?

It’s really not at all clear.

We could just ask the bad guys to slow their progress to match ours…

We could simply remove all external access to implanted devices, so that once a device is in there the only way to communicate with it would be to perform surgery to ‘plug in a cable’. This would resolve the security issue: nobody could access your device without your knowledge, at least as long as you were conscious. But this approach is clearly not ideal; it would be slow, expensive and inconvenient, not to mention introducing infection and anaesthetic risks.

We could build computers so well that the good guys can get access to them but the bad guys can’t. The only problem with this is that it’s exactly what the computer industry has been trying to do for years without succeeding. Every time a way in is discovered it gets closed off through software updates; hackers then divert their attention to some other part of the system, eventually find a new way in, and so the cycle continues. It’s almost evolutionary in nature.

We could reduce the power levels used for communications to such a low level that their range would be just a few centimetres. This is how the contactless – or Near Field Communication – bank cards that we can now use for small payments work. A design goal of that technology was that it should have a range of no more than 10cm. However, by 2013 academics had already built equipment that could pick up its transmissions at a range of 45cm. We can expect that whatever range a device has today will be substantially greater in a few years, as technology marches relentlessly on.

There is no obvious solution to this problem, but whatever happens it looks like the medical device industry and its regulatory authorities may have to start moving at a much faster pace than they have to date. Finding the correct balance between speed and caution will not be easy.