A recent post titled "Why the ‘Internet of things’ is a ticking bomb" has me deeply concerned.
One of the fastest-growing segments of the so-called Internet of Things (IoT) is mobile and home-use medical devices. Glucose meters and insulin pumps have been around for many years, and they are being joined by baby and elder monitoring systems, other diagnostic testing devices (e.g., for urinalysis), blood pressure devices, home defibrillators, TENS devices, oximeters, rehabilitation devices, and even the much-heralded "tricorder."
No one can deny that outfitting electronic devices to communicate with the caregiver or healthcare provider brings huge benefits. Diabetic patients' glucose levels can be monitored far more closely, elderly patients' activity and vital signs can be checked without constant (costly) trips to a clinic, and parents can keep watch over the baby without staying in the same room all night. Plenty of resources provide guidance on the design of these devices, from the FDA to the semiconductor manufacturers to independent design research groups. Environment of use, user interface design for lay users, electrical and mechanical safety: a host of issues are addressed in these resources.
A crucial concern seems to get short shrift, however: security. As the Internet of Things blog post points out, the more we accept devices with remote connections into our daily lives, the more opportunities we hand the hackers to spy on us, study our behavior, steal our private information, and possibly injure or kill us. I've spoken numerous times about the dangers of a glucose meter built into a mobile phone: all it takes is one hacker bent on revenge against a known diabetic, and a Bluetooth virus that modifies glucose results sent to the doctor's office could be circulating widely. In fact, the potential for hacker attack actually prompted former vice president Dick Cheney to have the wireless programming feature of his implanted cardioverter-defibrillator disabled (NY Times, Ars Technica).
Commenting on LinkedIn about the IoT blog post, John Peter Sabini wrote: "Switches, routers and even VM's are targets of black hat research. If Cisco, a company with great engineers and processes can have their switches and routers compromised then just imagine a smaller company with 10 - 50 people who sell products for IoT and who think security is a non-profitable challenge. Major companies that supply the infrastructure grid are rightfully shy about IoT to the point where they are considering not 'stirring things up' by avoiding any mention of breaches to the public when possible. It opens up the least secure link into their infrastructure."
Really, folks? Security is a "non-profitable challenge"?
What if it were your wife, your child, or your parent that the hacker killed?
When I forwarded the blog post today, one colleague of mine responded "It's interesting how many 'thing'-vendors that have spent their life behind someone else's firewall leave security as an afterthought when they go mobile." Another colleague commented, quite rightly, "There is a solution but only by moving forward. No retreat back to stashing cash in a mattress, or any other individual-based solution will work."
That said, what do we do to bring more attention to designing for security? Is it regulation? How about independent certification? What part will public attention play?
What do you think?
We have met the enemy, and he is us!
All too often, when government, technology, and safety intersect, the Pogo line from our Vietnam-era past comes back to haunt us.
No time like the present. In light of the Federal incentives to adopt electronic health records systems and the "meaningful use" criteria, I've been wondering how soon we would start seeing adverse event issues. Look no further.
The Boston Globe published a fascinating and chilling account of such events on July 20 ("Hazards tied to medical records rush - Subsidies given for computerizing, but no reporting required when errors cause harm").
Several observations occur to me.
Mind you, I understand that there's plenty of blame to go around: systems designed badly, with little thought given to UX (user experience); implementations that have created strange hybrids with confusing overlaps; and the inability of different healthcare departments to communicate with one another (an issue that predates the electronic medical records mess). All of these contribute to the tenuous safety environment.
I happen to believe that the cup is half full, not half empty. These systems CAN improve care, enhance patient safety, and lower healthcare costs. How many more deaths and near misses will it take, however, before we get there?