A recent post titled "Why the ‘Internet of things’ is a ticking bomb" has me deeply concerned.
One of the fastest growing segments of the so-called Internet of Things (IoT) is mobile and home-use medical devices. Glucose meters and insulin pumps have been around for many years, and are being joined by baby or elder monitoring systems, other diagnostic testing devices (e.g. for urinalysis), blood pressure devices, home defibrillators, TENS devices, oximeters, rehabilitation devices, and even the much-heralded "tricorder."
No one can deny that enabling these devices to communicate with the caregiver or healthcare provider brings huge benefits. Diabetic patients' glucose levels can be monitored far more closely, elder patients' activity and vital signs can be checked without constant (costly) trips to a clinic, and parents can keep watch over the baby without staying in the same room all night. Plenty of resources provide information about the design of these devices, from the FDA, to the semiconductor manufacturers, to independent design research groups. Environment of use, user interface design for lay users, electrical and mechanical safety - a host of issues are covered in these resources.
A crucial concern seems to get short shrift, however: security. As the Internet of Things blog post points out, the more we accept devices with remote connections into our daily lives, the more opportunities we hand the hackers to spy on us, study our behavior, steal our private information, and possibly injure or kill us. I've spoken numerous times about the dangers of a glucose meter built into a mobile phone: all it takes is a hacker bent on revenge against a known diabetic, and a Bluetooth virus that modifies the glucose results sent to the doctor's office could soon be circulating widely. In fact, the potential for hacker attack actually prompted former vice president Dick Cheney to have the wireless programming feature of his implanted cardioverter-defibrillator disabled (NY Times, ArsTechnica).
In a comment on LinkedIn about the IoT blog post, John Peter Sabini wrote: "Switches, routers and even VM's are targets of black hat research. If Cisco, a company with great engineers and processes can have their switches and routers compromised then just imagine a smaller company with 10 - 50 people who sell products for IoT and who think security is a non-profitable challenge. Major companies that supply the infrastructure grid are rightfully shy about IoT to the point where they are considering not 'stirring things up' by avoiding any mention of breaches to the public when possible. It opens up the least secure link into their infrastructure."
Really, folks? Security is a "non-profitable challenge"?
What if it were your wife, your child, or your parent that the hacker killed?
When I forwarded the blog post today, one colleague of mine responded "It's interesting how many 'thing'-vendors that have spent their life behind someone else's firewall leave security as an afterthought when they go mobile." Another colleague commented, quite rightly, "There is a solution but only by moving forward. No retreat back to stashing cash in a mattress, or any other individual-based solution will work."
That said, what do we do to bring more attention to designing for security? Is it regulation? How about independent certification? What part will public attention play?
What do you think?
We have met the enemy, and he is us!
All too often, when government, technology, and safety intersect, the Pogo line from our Vietnam-era past comes back to haunt us.
No time like the present. In light of the Federal incentives to adopt electronic health records systems and the "meaningful use" criteria, I've been wondering how soon we would start seeing adverse event issues. Look no further.
The Boston Globe published a fascinating and chilling account of these on July 20 ("Hazards tied to medical records rush - Subsidies given for computerizing, but no reporting required when errors cause harm").
Several observations occur to me.
Mind you, I understand that there's plenty of blame to go around. Systems are badly designed with little thought given to UX (user experience); implementation has created a number of strange hybrids with confusing overlaps; and different healthcare departments remain unable to communicate with each other (an issue regardless of the electronic medical records mess). All of these contribute to the tenuous safety environment.
I happen to believe that the cup is half full, not half empty. These systems CAN improve care, enhance patient safety, and lower healthcare costs. How many more deaths and near misses will it take, however, before we get there?
Together with Nancy Van Schooenderwoert, I'll be attending the AAMI course on Technical Information Report 45 (Guidance on the use of AGILE practices in the development of medical device software) Tuesday and Wednesday this week.
Have you ever considered the improvements you can make in your software development process by adopting an Agile approach? Take a look at the white paper that Nancy and I put together, Q&A - Adopting Agile for Medical Device Development.
Earlier this year, Myshkin Ingawale announced at the TED talks that his company, Biosense Technologies, had developed and released a convenient iPhone-based reading application for dip-and-read urine sticks. (Pocket diagnostics: uChek smartphone app launched)
The application certainly looks extremely convenient: purchase the commercially available urinalysis strips and the uChek kit, dip the strip in a cup of your urine, insert the strip into a slot in the kit's reference card, place it in the "Cuboid" box, and photograph it with your iPhone. The app takes the analysis from there. An obvious innovation in this era of increasingly personalized medicine!
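Biosense's actual image-processing algorithm is proprietary, so this is purely a hypothetical sketch of the general technique such readers rely on: photograph the reagent pad, then match its color against the printed reference swatches to pick a result. All of the color values and labels below are invented for illustration.

```python
# Hypothetical sketch of colorimetric dipstick reading. The real uChek
# algorithm is proprietary; this only illustrates the general technique of
# matching a photographed reagent pad against printed reference swatches.

# Reference swatches for one analyte (glucose), as (R, G, B) -> label.
# These color values are illustrative, not taken from any real chart.
GLUCOSE_SWATCHES = {
    (120, 180, 200): "negative",
    (140, 170, 120): "trace",
    (130, 140, 70):  "1+ (100 mg/dL)",
    (110, 100, 60):  "2+ (250 mg/dL)",
    (90,  60,  50):  "3+ (500 mg/dL)",
}

def color_distance(c1, c2):
    """Squared Euclidean distance in RGB space (a crude proxy for perceptual
    difference; a real system would calibrate lighting and use CIELAB)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def read_pad(pad_rgb, swatches):
    """Return the label of the reference swatch closest to the pad color."""
    best = min(swatches, key=lambda ref: color_distance(pad_rgb, ref))
    return swatches[best]

# A pad whose photographed color sits near the "trace" swatch:
print(read_pad((138, 168, 124), GLUCOSE_SWATCHES))  # trace
```

Notice how much rides on lighting, camera calibration, and the user photographing the strip correctly - exactly the sort of thing usability testing and validation are supposed to probe.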
There's an issue, however. In March, regulatory legal expert Brad Thompson wrote a piece for MD&DI Online (Wanted: FDA App Enforcement) asking whether this system is in fact a medical device and subject to FDA clearance. In a separate piece, Thompson went on to make the case (Brad Thompson: Three reasons FDA’s enforcement helps mobile health) that clearer, more consistent FDA action on mobile applications which clearly qualify as medical devices will actually help spur growth in this area.
Since those pieces were published, the FDA has indeed contacted Biosense Technologies with an "It has come to our attention" letter (FDA Letter to Biosense Technologies), noting that the application qualifies as a medical device, that clearance to market in the U.S. is therefore required, and that the FDA cannot find any clearance number for this application on file. The agency letter also notes that:
though the ... urinalysis dipsticks ... are cleared, they are only cleared when interpreted by direct visual reading ... any company intending to promote their device for use in analyzing, reading, and/or interpreting these dipsticks need to obtain clearance for the entire urinalysis test system (i.e., the strip reader and the test strips, as used together).
The company claims that the device is registered with the FDA as a class I medical device, and several online articles I've checked echo this comment. I find the claim odd: one registers a manufacturing facility with the FDA, but registration in absolutely no way constitutes the agency's permission to sell the product! This and a number of other issues were further discussed in MD&DI editor Heather Thompson's piece (UChek’s Response to an FDA Letter: An Unfolding Saga) on May 31.
In another setting, I've been involved in an exchange of comments over whether the FDA can, in fact, do anything about the uChek app. Biosense Technologies has merely posted the app on Apple's App store, and the urine strips come from another company - the only element they would need to ship to U.S. customers is the kit with the Cuboid and the reference card. That, indeed, may be the only leverage the agency has in this situation.
The bigger issue, in my mind, is whether mobile app developers with a cool idea need to follow the same rules as everyone else. What sort of software validation has Biosense Technologies carried out on the application? Have they conducted any kind of usability testing to determine how average individuals might be able to misuse the system and obtain incorrect results? Do they even have any kind of quality system?
Certainly, Biosense has published performance data for their system - but their entire study consisted of (a) reading a series of positive and negative controls, and (b) comparing their system against a commercial urine-dipstick reading system on serial dilutions of a control urine. That is a good start - but when I worked in clinical diagnostics, those studies were only the opening rounds of much more extensive analytical evaluations. Where, for example, is their multi-patient correlation data?
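For readers outside clinical diagnostics: a "correlation" (method-comparison) study runs paired patient samples on the candidate device and an established reference method, then checks whether slope, intercept, and correlation are close to the ideal 1, 0, and 1. Here is a minimal sketch of that computation with invented paired readings - none of these numbers come from Biosense's data.

```python
# Sketch of a method-comparison ("correlation") study: paired urine glucose
# readings (mg/dL) from a reference analyzer and a candidate reader across
# patient samples. All values below are invented for illustration.
reference = [0, 15, 30, 100, 100, 250, 250, 500, 500, 1000]
candidate = [5, 10, 35, 90, 110, 240, 260, 480, 520, 990]

n = len(reference)
mean_x = sum(reference) / n
mean_y = sum(candidate) / n
sxx = sum((x - mean_x) ** 2 for x in reference)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, candidate))
syy = sum((y - mean_y) ** 2 for y in candidate)

slope = sxy / sxx                    # ordinary least squares; ideally ~1.0
intercept = mean_y - slope * mean_x  # ideally ~0
r = sxy / (sxx * syy) ** 0.5         # Pearson correlation; ideally ~1

print(f"slope={slope:.3f} intercept={intercept:.1f} r={r:.4f}")
```

Even this is only a first pass - in practice, evaluators lean on agreement analyses (Bland-Altman bias plots, Passing-Bablok regression) across a clinically representative patient population, because a high correlation coefficient alone can mask systematic bias.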
Mind you, the FDA itself is still working through the kinks in this topic. In a remotely-delivered talk at the Wireless Convergence Summit 2013, as reported by MD&DI (FDA's Regulation of Mobile Medical Apps Will Probably Confuse You), FDA's Bakul Patel noted that the agency will "regulate smartly" in the mobile app field. General-purpose apps such as electronic textbooks will not be of interest, while "... apps that are used as an accessory to an already regulated medical device, [or] apps that transform a mobile device into a regulated medical device" will be subject to scrutiny. He also noted that app distributors - the Apple app store or the Google Play Store - will not be subject to FDA regulation, even though they are clearly serving as distributors for a medical device.
Lots of questions remain, as the MD&DI article points out. "How will FDA regulate mobile apps sold overseas? What if an app doesn't meet approvals, but is distributed anyway? Will Class I, II, III be applied to apps?" I'll add my own question here: what happens if a field action or recall of such an app is necessary? We'll have to wait until later this year for some word from the FDA, when their updated guidance on mobile apps is published.
What do you think? Can the FDA do anything about apps developed overseas and posted on one or both of the app stores? Should it pursue the companies that develop these apps if they walk, talk, and quack like medical devices? In a clearly global economy where bitstreams flow rapidly across borders, is there some more rational way to assure the reliability and safety of these apps?
Healthcare IT just published a report on a panel discussion at the Design Automation Conference, where a panel of university researchers pointed out the woeful state of security in medical devices. Read the report here.
I have two mutually contradictory reactions.
(1) "Yeah, yeah. Big surprise." It hardly seems newsworthy that our high-tech devices are lacking in security, in or out of the medical device world. Everywhere we look, software-driven gadgets are passing data around willy-nilly, making life oh so convenient both for those of us who own the data, and for the thieves and snoops who have other purposes in mind. We expose our bank accounts and our health information first, and only later get upset and try to close the wide-open barn door after someone has stolen the horse.
(2) "OMG, how can we still be in this state?" Medical device innovators have always had a two-part command - the green and white beacon guiding us to our landing - "Whatever you design, make it work and make it SAFE." In terms of system and software design for security, how is it that we've forgotten or ignored that edict?
Yes, many devices currently in use were designed some time ago.
Yes, for devices on a network such as in a hospital, the IT administrator shares part of the security task.
Yes, the level of hacking sophistication has risen dramatically of late.
Still, can we honestly say that we couldn't foresee potential threats to devices which communicate data in some way? For years, I have described my favorite example - imagine a glucose meter built into (or attached to) a cell phone, which can then transmit data for Mr D (diabetic) to the physician's office for trending. All we need is a hacker, looking to cause trouble for Mr D, who releases a Bluetooth virus which alters the glucose readings to show a massive spike. The resulting erroneous insulin dose could be fatal.
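The tampering in that scenario is exactly the kind of threat a well-known countermeasure addresses: authenticate each reading with a message authentication code so the receiving system can reject altered values. A minimal sketch, assuming a key shared between the meter and the clinic's server (the key name, field names, and values here are placeholders, not any real device's protocol):

```python
import hmac
import hashlib
import json

# Hypothetical mitigation sketch: if each glucose reading is authenticated
# with an HMAC under a key shared by the meter and the clinic's server, a
# virus that alters the value in transit cannot forge a valid tag.
SHARED_KEY = b"device-provisioned-secret"  # placeholder; a real device would
                                           # keep a per-device key in secure storage

def sign_reading(reading: dict) -> dict:
    """Serialize the reading deterministically and attach an HMAC-SHA256 tag."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and compare in constant time; False means tampering."""
    expected = hmac.new(SHARED_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"patient": "Mr D", "glucose_mg_dl": 105, "time": "08:00"})
assert verify_reading(msg)           # authentic reading accepted

# The "virus" rewrites 105 mg/dL into a massive spike - but cannot fix the tag:
tampered = dict(msg, payload=msg["payload"].replace("105", "400"))
assert not verify_reading(tampered)  # spoofed spike rejected
```

This doesn't solve key provisioning, replay attacks, or a compromised phone, but it shows that detecting a forged reading has been within easy reach of standard libraries for years.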
Please, please, please. Medical device cybersecurity is part of design safety. Hazard assessment tools such as fault tree analysis and FMEA are well-established, and security threats don't require much imagination any more. I realize that evaluating cyberhazards isn't nearly as glamorous and sparkly as new technology - but it's one of the many things that bite us sooner or later when we neglect it in our designs!
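To make the FMEA point concrete: security threats slot directly into the standard worksheet as failure modes, scored with the usual Risk Priority Number (severity x occurrence x detection, each rated 1-10) so the worst exposures rise to the top. The failure modes and scores below are illustrative examples, not any real device's analysis:

```python
# FMEA-style sketch: treating security threats as failure modes, scored with
# the standard Risk Priority Number, RPN = severity * occurrence * detection
# (each rated 1-10; higher is worse). Entries and ratings are illustrative.
failure_modes = [
    # (failure mode,                          severity, occurrence, detection)
    ("Bluetooth virus alters glucose value",        10,          4,         8),
    ("Replayed stale reading masks a trend",         7,          5,         7),
    ("Eavesdropping leaks patient data",             6,          6,         6),
    ("Firmware update not signature-checked",        9,          3,         9),
]

scored = [(name, s * o * d) for name, s, o, d in failure_modes]
for name, rpn in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"RPN {rpn:3d}  {name}")
```

The point of the exercise is the ranking, not the arithmetic: once a tampered-reading threat carries the highest RPN on the worksheet, it becomes very hard for a design team to dismiss security as an afterthought.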