It’s Like TSA Pre✓® — But For Medical Devices


2019 was a breakthrough year for digital health. While media coverage has focused primarily on Google’s $2.1 billion acquisition of Fitbit, in the first half of the year alone the digital health sector saw more than 40 acquisitions, 4 public offerings, and more than $4.2 billion in venture capital invested in digital health companies. As a PhD student in cultural anthropology studying how Silicon Valley is transforming the American medical system, I have followed these headlines with great interest. Behind the scenes of these deals are technologies that promise to detect disease earlier, speed the development of new therapeutics, and provide individuals with treatment plans personalized to their unique biologies and life circumstances. These developments, which have substantial implications for our everyday experiences of health and health care, pose serious challenges for regulatory bodies.

The U.S. Food and Drug Administration (FDA), the agency tasked with protecting public health by ensuring the safety, efficacy, and security of drugs and medical devices, has found itself on the front lines of this ‘digital revolution’ in health care. “These are no longer far-fetched ideas,” former FDA Commissioner Scott Gottlieb said in a 2018 speech. “We know that to keep pace with innovation in these fast-moving fields, the FDA itself must do more to leverage digital health tools and analytics internally to help the agency develop new regulatory tools and advance its own work.”

In response to the increasing volume of digital health products and the accelerating pace of product development, the FDA formed the Division of Digital Health which, under the leadership of director Bakul Patel, has since proposed substantial changes to the FDA’s review processes. These changes are intended to address one of the major challenges facing this area of the FDA: adapting processes designed for hardware to adequately review software. Historically, hardware products have been built using a fundamentally different approach to development than software. Take, for example, an intrauterine device (IUD)—to sell an IUD, the product developer would first need to prove to the FDA that the IUD is safe and effective for humans to use. To prove safety and efficacy, the developer would conduct studies of the product to generate the kinds of data required by the FDA for review. Once the product was reviewed and approved by the FDA, the developer would be able to market and sell the IUD. In this standard model of FDA review, the bulk of the review process happens up front during the product’s pre-market phase in an effort to predict and prevent potential harm.

This regulatory model is based on assumptions about the stability and durability of hardware—in this case, that the IUD will stay more or less the same throughout the review process and following commercialization. In other words, the hardware-based model assumes that the risk of using a product once it is made commercially available should be about the same as the risk of using the product at the time it was submitted for review. In the case of software, however, the assumption of a relatively stable and unchanging product like an IUD does not hold up. Unlike a hardware-based medical device, which can take years to build and test, software can be built quickly and involves constant iteration and modification. The speed of software development will only increase as machine learning techniques grow in popularity. In this new paradigm of digital health technologies, how is the FDA supposed to keep up?

Pre✓® Your FDA Submission

In response to the high volume of digital health submissions and the rapid pace of software modification, the FDA has proposed the Digital Health Software Precertification (“Pre-Cert”) Program. I first learned about the Pre-Cert program while working for a startup incubator for digital health companies. The entrepreneurs I worked with were enthusiastic about the Pre-Cert program, which they interpreted as a sign of the FDA’s growing friendliness toward industry. For a startup with limited “runway” (i.e., funding to continue building the company), the time and capital required to achieve FDA approval can be a daunting prospect. Many of my entrepreneurial colleagues welcomed the Pre-Cert program as a process better suited to the unique challenges they face as companies attempting to bridge the divergent worlds of technology and health care.

Patel, the director of the FDA’s digital health division, likens the program to TSA Pre✓® at the airport, which allows travelers who have applied for and passed a background check to speed through security protocols. Modeled after this concept, the Pre-Cert program makes it possible for product developers to undergo an “Excellence Appraisal,” which, like the TSA’s background check, enables the developer to skip the line of the normal review process and speed their products through FDA approval.

What are we to make of the Pre-Cert program? In some ways, this is a big shift in the agency’s approach. Historically, the agency has taken a product-by-product strategy for conducting regulatory reviews, meaning that each product is evaluated on its own terms prior to becoming commercially available. Under the Pre-Cert model, the FDA evaluates product developers, usually companies, in addition to products. If a developer is deemed “excellent,” then that company’s products—at least those deemed to be “lower-risk”—can move through a faster regulatory process than products made by companies without precertification. While pre-certified developers have an expedited experience, like travelers with TSA Pre✓®, they do not avoid security checks altogether.

In other ways, this shift is consistent with the FDA’s broader trend toward sharing oversight activity with private industry. Pre-certified companies collaborate with the FDA to determine “Key Performance Indicators,” the metrics used in this case to evaluate whether a company qualifies as excellent, and agree to provide the FDA with regular reports of “real-world performance analytics” that measure a product’s safety as it is used by people in their daily lives. The Pre-Cert process is currently being tested through a program pilot. Initial pilot participants include Apple, Fitbit, and Alphabet’s Verily, among others.

What is Safe Enough?

While industry has enthusiastically welcomed the Pre-Cert program as a positive development, not everyone is convinced by the proposed changes. In October, Senator Elizabeth Warren (D-Mass.), Senator Patty Murray (D-Wash.), and Senator Tina Smith (D-Minn.) sent a letter to the FDA outlining their concerns about the program. Whereas my former entrepreneurial colleagues expressed approval of the agency’s “common sense” approach, the senators have been less persuaded of the judiciousness of the changes. Their letter voiced concern about the flexibility the new program extends to companies to help determine how a product’s safety should be measured and monitored. Amid growing public concerns about the technology industry’s activities in health care and in society more broadly, the senators asked why the FDA would grant each developer the flexibility, for example, to determine which Key Performance Indicators should be used to evaluate whether they qualify as “excellent” under the Pre-Cert model. The senators further questioned the program’s use of real world performance analytics, asking how the agency could trust the data provided by participants: “[How can the agency] ensure that the [real world performance analytics] it receives from organizations are accurate, timely, and based on all available information?” The senators are not alone in asking these questions. Can big tech companies really be trusted to measure their own ‘excellence’ and effectively monitor the safety of their own products?

Though Silicon Valley has done little to earn public trust in recent years, industry involvement in FDA product evaluations is not new. As anthropologist Linda Hogle has pointed out, the passage of the FDA Modernization Act (FDAMA) in 1997 enabled private sector contractors to review products in areas where the FDA lacked sufficient expertise, and thus opened the door to industry helping set standards and review products. There are also precedents for the use of observational data—what the FDA is now calling real world performance analytics. In its early decades under the Food, Drug, and Cosmetic Act of 1938, the agency took a reactive approach to medical devices, regulating products already on the market based on observational reports of abuses that had already occurred. In fact, it wasn’t until the 1970s—an era that saw widespread debates about corporate abuses and the dangers of technological development—that the agency shifted to the more familiar proactive model, in which certain categories of products like medical devices are reviewed for safety before they can be sold to the public. In some ways, the use of real world performance analytics, a form of observational data, seems to be a return to the FDA’s original reactive regulatory model.

However, even if the novelty of the FDA Pre-Cert initiative can be debated, we should pay close attention to the concerns raised by the senators and other critical voices. In her ethnographic research on the FDA’s regulation of pharmaceutical products, Hogle has shown how studying regulatory processes reveals insights into social processes that carry implications for how we view risk and responsibility for health. From this perspective, debates about the Pre-Cert program are, at bottom, debates about fundamental social values: health, safety, and individual autonomy. What risks are acceptable, and what is the responsibility of government? The conversations that are taking place right now about the regulation of digital health touch on the deepest questions of human health and social life. What kinds of data can help us determine safety? What does it mean for a medical product to be safe enough?

If we take a step back from the galvanized debates, the specialized vocabulary, and the hope and hype of digital health, we might begin to get at some of these deeper questions about what we value as a society. Anthropological research has the potential to help us imagine how a more productive conversation might unfold. What might an anthropological approach look like?

First of all, anthropologists ask questions. If the goal is to ensure that people developing new technologies act in accordance with broader values about health and safety, we might ask how people in different contexts—developers, regulators, patients, and physicians—would answer the questions posed by the senators’ letter. What does safety mean to them? How do they think about risk? We might also study those developing digital health technologies: How do they make decisions about product safety in their everyday work? To bring the daily realities of digital health development into closer alignment with the goals of public health and safety, we ought to start by understanding the day-to-day experiences of the people ‘on the ground’ and how these experiences intersect with and impact others. How do the practical challenges of developing a software product and building a business intersect with the expectations of patients, physicians, and others?

Anthropologists observe the present in order to see what might be possible in the future. That is to say, studying how people understand and act in the world has the potential to help us imagine something different: different development practices, different regulatory processes, and different futures. In the words of anthropologist Kim Fortun, this kind of research has the potential to be “productively creative, creating space for something new to emerge, engineering imaginations and idioms for different futures, mindful of how very hard it is to think outside and beyond what we know presently.”

In an effort to solve a real and pressing problem, the FDA has drawn from the familiar, not only finding inspiration in analogous programs like TSA Pre✓® but also returning to old regulatory models premised on reactive responses over proactive intervention. I think it’s worth asking: Has starting from a place of familiarity limited the possibilities of the program? In an age of substantial technological change, perhaps what we need from regulators is something altogether new—something that attends to the practical challenges of the present while simultaneously opening up new and different possibilities for the future.

Paige Edmiston is a PhD Student in Cultural Anthropology at the University of Colorado Boulder. Her research focuses on how digital technology is changing the American medical system, and how these changes are impacting humans and society.