It’s Like TSA Pre✓® — But For Medical Devices

2019 was a breakthrough year for digital health. While media coverage has focused primarily on Google’s $2.1 billion acquisition of Fitbit, in the first half of the year alone the digital health sector saw more than 40 acquisitions, 4 public offerings, and more than $4.2 billion in venture capital invested in digital health companies. As a PhD student in cultural anthropology studying how Silicon Valley is transforming the American medical system, I have followed these headlines with great interest. Behind the scenes of these deals are technologies that promise to detect disease earlier, speed the development of new therapeutics, and provide individuals with treatment plans personalized to their unique biologies and life circumstances. These developments, which have substantial implications for our everyday experiences of health and health care, pose serious challenges for regulatory bodies.

The U.S. Food and Drug Administration (FDA), the agency tasked with protecting public health by ensuring the safety, efficacy, and security of drugs and medical devices, has found itself on the front lines of this ‘digital revolution’ in health care. “These are no longer far-fetched ideas,” former FDA Commissioner Scott Gottlieb said in a 2018 speech. “We know that to keep pace with innovation in these fast-moving fields, the FDA itself must do more to leverage digital health tools and analytics internally to help the agency develop new regulatory tools and advance its own work.”

In response to the increasing volume of digital health products and the accelerating pace of product development, the FDA formed the Division of Digital Health, which, under the leadership of director Bakul Patel, has since proposed substantial changes to the FDA’s review processes. These changes are intended to address one of the major challenges facing this area of the FDA: adapting processes designed for hardware to adequately review software. Historically, hardware products have been built using a fundamentally different approach to development than software. Take, for example, an intrauterine device (IUD)—to sell an IUD, the product developer would first need to prove to the FDA that the IUD is safe and effective for humans to use. To prove safety and efficacy, the developer would conduct studies of the product to generate the kinds of data required by the FDA for review. Once the product was reviewed and approved by the FDA, the developer would be able to market and sell the IUD. In this standard model of FDA review, the bulk of the review process happens up front during the product’s pre-market phase in an effort to predict and prevent potential harm.

This regulatory model is based on assumptions about the stability and durability of hardware—in this case, that the IUD will stay more or less the same throughout the review process and following commercialization. In other words, the hardware-based model assumes that the risk of using a product once it is made commercially available should be about the same as the risk of using the product at the time it was submitted for review. In the case of software, however, the assumption of a relatively stable and unchanging product like an IUD does not hold up. Unlike a hardware-based medical device, which can take years to build and test, software can be built quickly and involves constant iteration and modification. The speed of software development will only increase as machine learning techniques grow in popularity. In this new paradigm of digital health technologies, how is the FDA supposed to keep up?

Pre✓® Your FDA Submission

In response to the high volume of digital health submissions and the rapid pace of software modification, the FDA has proposed the Digital Health Software Precertification (“Pre-Cert”) Program. I first learned about the Pre-Cert program while working for a startup incubator for digital health companies. The entrepreneurs I worked with were enthusiastic about the Pre-Cert program, which they interpreted as a sign of the FDA’s growing friendliness toward industry. For a startup with limited “runway” (i.e., funding to continue building the company), the time and capital required to achieve FDA approval can be a daunting prospect. Many of my entrepreneurial colleagues welcomed the Pre-Cert program as a process better suited to the unique challenges they face as companies attempting to bridge the divergent worlds of technology and health care.

Patel, the director of the FDA’s digital health division, likens the program to TSA Pre✓® at the airport, which allows travelers who have applied and passed a background check to speed through the security protocols. Modeled after this concept, the Pre-Cert program makes it possible for product developers to undergo an “Excellence Appraisal,” which, like the TSA’s background check, enables developers to skip the line of the normal review process and speed their products through FDA approval.

What are we to make of the Pre-Cert program? In some ways, this is a big shift in the agency’s approach. Historically, the agency has taken a product-by-product strategy for conducting regulatory reviews, meaning that each product is evaluated on its own terms prior to becoming commercially available. Under the Pre-Cert model, the FDA evaluates product developers, usually companies, in addition to products. If a developer is deemed “excellent,” then that company’s products—at least those deemed to be “lower-risk”—can participate in a faster regulatory process than those products made by companies without precertification. While pre-certified developers have an expedited experience, like travelers with TSA Pre✓®, they do not avoid security checks altogether.

In other ways, this shift is consistent with the FDA’s broader trend toward sharing oversight activity with private industry. Pre-certified companies collaborate with the FDA to determine “Key Performance Indicators,” the metrics used in this case to evaluate whether a company qualifies as excellent, and agree to provide the FDA with regular reports of “real-world performance analytics” that measure the product’s safety as it is used by people in their daily lives. The Pre-Cert process is currently being tested through a program pilot. Initial pilot participants include Apple, Fitbit, and Alphabet’s Verily, among others.

What is Safe Enough?

While industry has enthusiastically welcomed the Pre-Cert program as a positive development, not everyone is convinced by the proposed changes. In October, Senator Elizabeth Warren (D-Mass), Senator Patty Murray (D-Wash), and Senator Tina Smith (D-Minn) sent a letter to the FDA outlining their concerns about the program. Whereas my former entrepreneurial colleagues expressed approval of the agency’s “common sense” approach, the senators have been less persuaded of the judiciousness of the changes. Their letter voiced concern about the flexibility the new program extends to companies to help determine how a product’s safety should be measured and monitored. Amid growing public concerns about the technology industry’s activities in health care and in society more broadly, the senators asked why the FDA would grant each developer the flexibility, for example, to determine which Key Performance Indicators should be used to evaluate whether they qualify as “excellent” under the Pre-Cert model. The senators further questioned the program’s use of real world performance analytics, asking how the agency could trust the data provided by participants: “[How can the agency] ensure that the [real world performance analytics] it receives from organizations are accurate, timely, and based on all available information?” The senators are not alone in asking these questions. Can big tech companies really be trusted to measure their own ‘excellence’ and effectively monitor the safety of their own products?

If Silicon Valley has done little to gain public trust in recent years, industry involvement in FDA product evaluations is not new. As anthropologist Linda Hogle has pointed out, the passage of the FDA Modernization Act (FDAMA) in 1997 enabled private sector contractors to review products in areas where the FDA lacked sufficient expertise, and thus opened the door to industry helping set standards and review products. There are also precedents for the use of observational data—what the FDA is now calling real world performance analytics. When the modern FDA took shape under the 1938 Food, Drug, and Cosmetic Act, the agency took a largely reactive approach, regulating products already on the market based on observational reports of abuses that had already occurred. In fact, it wasn’t until the 1970s—an era that saw widespread debates about corporate abuses and the dangers of technological development—that the agency shifted to the more familiar proactive model where certain categories of products like medical devices are reviewed for safety before they can be sold to the public. In some ways, the use of real world performance analytics, a form of observational data, seems to be a return to the FDA’s original reactive regulatory model.

However debatable the novelty of the FDA Pre-Cert initiative may be, we should pay close attention to the concerns raised by the senators and other critical voices. In her ethnographic research on the FDA’s regulation of pharmaceutical products, Hogle has shown how studying regulatory processes reveals insights into social processes that carry implications for how we view risk and responsibility for health. From this perspective, debates about the Pre-Cert program turn out to be debates about fundamental social values: health, safety, and individual autonomy. What risks are acceptable, and what is the responsibility of government? The conversations that are taking place right now about the regulation of digital health touch on the deepest questions of human health and social life. What kinds of data can help us determine safety? What does it mean for a medical product to be safe enough?

If we take a step back from the galvanized debates, the specialized vocabulary, and the hope and hype of digital health, we might begin to get at some of these deeper questions about what we value as a society. Anthropological research has the potential to help us imagine how a more productive conversation might unfold. What might an anthropological approach look like?

First of all, anthropologists ask questions. If the goal is to ensure that people developing new technologies act in accordance with broader values about health and safety, we might ask how people in different contexts—developers, regulators, patients, and physicians—would answer the questions posed by the senators’ letter. What does safety mean to them? How do they think about risk? We might also study those developing digital health technologies: How do they make decisions about product safety in their everyday work? To bring the daily realities of digital health development into closer alignment with the goals of public health and safety, we ought to start by first understanding the day-to-day experiences of the people ‘on the ground’ and how these experiences intersect with and impact others. How do the practical challenges of developing a software product and building a business intersect with the expectations of patients, physicians, and others?

Anthropologists observe the present in order to see what might be possible in the future. That is to say, studying how people understand and act in the world has the potential to help us imagine something different: different development practices, different regulatory processes, and different futures. In the words of anthropologist Kim Fortun, this kind of research has the potential to be “productively creative, creating space for something new to emerge, engineering imaginations and idioms for different futures, mindful of how very hard it is to think outside and beyond what we know presently.”

In an effort to solve a real and pressing problem, the FDA has drawn from the familiar, not only finding inspiration in analogous programs like TSA Pre✓® but also returning to old regulatory models premised on reactive responses over proactive intervention. I think it’s worth asking: Has starting from a place of familiarity limited the possibilities of the program? In an age of substantial technological change, perhaps what we need from regulators is something altogether new—something that attends to the practical challenges of the present while simultaneously opening up new and different possibilities for the future.

Paige Edmiston is a PhD Student in Cultural Anthropology at the University of Colorado Boulder. Her research focuses on how digital technology is changing the American medical system, and how these changes are impacting humans and society. 

Expanding Telecommunications Services in a New Age

How Legal Traditions and Licensing Procedures Impact Telecommunications Industries Around the World

One would expect that lawmakers rely on economic, social, and technical analysis to support their decisions. However, in reality lawmakers’ decisions are often influenced by subjective considerations and politics. When economic, social, and technical analysis is referred to, it is often presented by parties with a vested interest. This is particularly problematic in the telecommunications industry, where those without political capital have historically been left out of the decision-making process.

To address the need for reliable analysis, a group of researchers and policymakers convened at the first Telecommunications Policy Research Conference (TPRC) in 1971. Continuing this tradition, the 47th annual TPRC brought together industry players, academics, and regulators from around the world. Staying true to the conference’s roots, many speakers presented research on the various ways radio spectrum could be better allocated in order to address economic, educational, and other social disparities – like political participation and housing.

I had the pleasure of presenting my research at this year’s conference. Over the past two years, I built a database of the telecommunications industry’s critical points of analysis, and then used those points to support the arguments put forward in my paper, Expanding Telecommunications Services in a New Age: How Legal Traditions and Licensing Procedures Impact Telecommunications Industries Around the World. The paper was selected as a Finalist for the conference’s Student Paper Competition and was featured in the “International” panel.

I first became interested in the digital divide while working on infrastructure improvement projects in Latin America throughout high school and as an undergraduate. My hometown, Gettysburg, Pennsylvania, also struggled to address the digital divide, but not nearly to the same extent as what I saw on those trips. While the work I did generally aimed at improving essential infrastructure and economic opportunity, I noticed that many communities I worked in also lacked any recognizable form of telecommunications infrastructure.

When I started travelling outside of Pennsylvania and Latin America, I realized the digital divide was a common issue that practically all countries share. This is demonstrated by the efforts of members of international organizations, such as the GSM Association (GSMA) and the International Telecommunication Union (ITU). It was not until I began work as a Research Assistant for Professor Dale Hatfield that I understood how the level of economic and social development in each country, at least as it relates to telecommunications industries, is heavily influenced by decisions about how to manage and regulate radio spectrum use.

Two issues influence the telecommunications industry the most – the balance of power between government branches and radio spectrum licensing. So, Professor Hatfield and I agreed it would be worthwhile to research how legal traditions, like common law and civil law, and licensing procedures, like auctions and comparative hearings, influence the quality of services and prices of telecommunications providers in each country. To do this, I ran two separate regression analyses. This allowed me to measure the relative advantages of a civil law tradition as opposed to a common law tradition, and of using auctions as opposed to comparative hearings to assign spectrum.
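The shape of the two regressions described above can be sketched roughly as follows. This is a minimal illustration only: the country data here is randomly generated, the variable names and the simple OLS-with-controls setup are my assumptions, and the paper's actual dataset and model specification are its own.

```python
import numpy as np

# Hypothetical country-level dataset (the numbers are made up purely
# for illustration; they are not the paper's data).
rng = np.random.default_rng(0)
n = 40
civil_law = rng.integers(0, 2, n)        # 1 = civil law, 0 = common law
uses_auction = rng.integers(0, 2, n)     # 1 = auctions, 0 = comparative hearings
gdp_per_cap = rng.normal(20_000, 8_000, n)   # control: GDP per capita
pop_density = rng.normal(100, 40, n)         # control: population density

# Outcome: e.g., broadband subscriptions per 100 people (made-up relationship).
outcome = (5 * civil_law + 4 * uses_auction
           + 0.0004 * gdp_per_cap + 0.01 * pop_density
           + rng.normal(0, 2, n))

def ols_coefs(y, *regressors):
    """Ordinary least squares with an intercept; returns fitted coefficients."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Regression 1: legal tradition, holding the controls fixed.
b_legal = ols_coefs(outcome, civil_law, gdp_per_cap, pop_density)
# Regression 2: licensing procedure, with the same controls.
b_license = ols_coefs(outcome, uses_auction, gdp_per_cap, pop_density)

print(f"civil-law coefficient: {b_legal[1]:.2f}")
print(f"auction coefficient:   {b_license[1]:.2f}")
```

A positive coefficient on the dummy variable, after controlling for the economic and demographic factors, is the kind of evidence the paper points to for a civil law or auction advantage.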

One of the regression analyses indicated that civil law countries’ legal traditions give them a major advantage over common law countries. This may be explained by differences in how power is shared between branches of government. In civil law countries, decision-making authority is traditionally concentrated in the hands of the executive branch and agencies, often at the expense of the judiciary and legislature. In contrast, in common law countries the balance of power is more evenly shared between branches. This is observed in the exercise of judicial review and some legislatures’ ability to limit the scope of agencies’ authority.

Whatever the precise mechanism, the advantage is clear: civil law countries have achieved much higher rates of Internet access, higher subscription rates, and faster broadband speeds – all without significantly increasing consumer prices. This ought to encourage regulators in countries that are struggling to keep up with telecommunications development to experiment with policies that have proven successful in civil law countries. It also indicates that perhaps they should place more faith in their agencies to make effective radio spectrum management decisions.

Licensing procedures also strongly influence outcomes in the telecommunications industry. The second regression analysis indicated that countries that use auctions to assign radio spectrum have delivered cellular and internet services to more people, and at lower costs, than countries that rely on comparative hearings. Interestingly, many countries still use comparative hearings to assign radio spectrum. This may be because comparative hearings, at least in theory, give the regulators conducting the hearing greater discretion in selecting the licensee.

Auctions commonly lead to positive outcomes for consumers and faster broadband speeds, and, when conducted properly, can even help introduce competition by reducing the share of the market held by leading operators. Again, this should encourage regulators to experiment with more dynamic approaches to spectrum regulation. In the paper, I concluded that a best practice has already been established – auctions with minimum criteria for participation and/or buildout requirements – and I encourage regulators to pursue that approach in the future.

My research also helps to confirm something I suspected while travelling abroad and throughout the rural U.S.: the digital divide is shaped by both physical and political barriers. Indeed, the regression analyses indicated that political factors play an even greater role than several factors others have relied on to explain the observed disparities. I can say this with confidence because I controlled for several traditional explanations, including rural population, population density, GDP per capita, and corruption indices. Yet, when compared to legal tradition and licensing procedure, these factors have only a slight, if not statistically insignificant, impact on telecommunications outcomes. Therefore, politicians and telecommunications providers can no longer point solely to economic and physical variables to explain their shortcomings.

Taken together, the research on legal tradition and licensing procedure helps to explain why there are often large disparities between countries. I understand this marks a major departure from much previous thinking on the subject. My hope is that my research will help others to understand how legal tradition and licensing procedure can be used as mechanisms to better develop telecommunications markets.

Freddy is Managing Editor of CTLJ, Volume 18 and a Research Assistant for the University of Colorado’s Silicon Flatirons Center for Law, Technology, and Entrepreneurship. His study and research focus on identifying legal solutions for the issues that arise out of emerging technologies and increasing access to critical technology infrastructure in underdeveloped communities.

Energy & Data – Benefits of Rural Electric Cooperatives as Broadband Providers

For years, lack of access to modern infrastructure threatened to leave rural communities across the United States behind in the race for economic development. The large investor-owned companies that were responsible for deploying the necessities of modern economic life to cities and densely populated areas proved reluctant to make significant investments outside population centers. Lower population density and higher deployment costs limited critical connections for rural America, and only one in ten households had access to reliable modern infrastructure.

This story may sound familiar to rural residents who lack access to reliable broadband internet in 2019. However, this isn’t a new story – it mirrors the snail’s pace of electrification in the 1930s. As rural electrification inched along in the early 20th century, rural electric cooperatives (RECs) proved critical to solving the crisis, and these same entities may be able to address the modern broadband divide as well. Until recently, the largest obstacles to RECs providing broadband were a lack of federal support and restrictive state laws. In the last two years, a wave of state bills and new federal interest have begun to remove these obstacles. RECs are poised to benefit local economies not only by closing the digital divide, but also by folding energy-saving technology and renewable assets into their services.

In the 1930s, rural populations struggled in part due to a lack of the electricity that lit up the rest of America. As the New Deal picked up steam, the federal government sought new solutions to rural electrification. Congress and the White House created the Rural Electrification Administration (REA), which in turn wrote model “Rural Electric Cooperative Corporation” legislation for states. This widely adopted legislative blueprint enabled rural residents to form cooperatives to take advantage of REA funding and build out their own electric grids. These cooperatives combined democratic and corporate structures into a mixed model in which leadership boards are elected by all rate-paying residents, rather than investor-shareholders. They purchased power from large power companies who handled generation and transmission, and then distributed it to their customers. The REA also provided loans and loan guarantees to seed RECs with capital, which would be paid back by member-owners through their monthly electric bills. Hundreds of rural electric co-ops formed across the country and increased electrification rates from ten percent to ninety percent in a span of about eighteen years.

Today, broadband internet access faces similar challenges. The Federal Communications Commission describes broadband as “critical to economic opportunity, job creation, education, and civic engagement.” Deficient broadband access is recognized as a major barrier to effective rural entrepreneurship and economic growth. Sixty percent of American farmers report that they do not have good enough internet to run their businesses. The FCC’s Connect America Fund (CAF) has poured billions into rural development, just last month authorizing another $112 million for the latest auction of CAF project grants. However, the Commission also acknowledges that access to broadband remains twenty to thirty percent lower in rural areas than in population centers. New research from the Purdue Center for Regional Development finds that a large percentage of advertised broadband comes from a DSL connection, which often does not meet the FCC’s modest 25 Mbps download speed and 3 Mbps upload speed definition for broadband. Yet, many urban residents enjoy access to “gigabit” speeds of 1 Gbps or faster, and many believe the FCC should be pushing development by defining 100 Mbps download speeds as the minimum for “broadband” service. The Purdue research also highlights that upload speed is often as important as download speed for economic development because businesses are producing data as much as they are consuming it from outside sources. However, for “symmetrical 25/25 speeds, the share of rural housing units with no access more than doubles from 26.9 to 64.7 percent.” While incumbent corporations, states, and the federal government have proposed various remedies, RECs have also begun stepping up to provide access to broadband in these high-cost rural areas.
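The speed thresholds in the paragraph above can be made concrete with a small sketch. The 25/3 Mbps benchmark and the symmetric 25/25 comparison come from the text; the helper function and the sample connections are hypothetical, purely for illustration.

```python
# FCC benchmark cited above: 25 Mbps download / 3 Mbps upload.
FCC_DOWN, FCC_UP = 25, 3

def meets_broadband_definition(down_mbps, up_mbps,
                               down_req=FCC_DOWN, up_req=FCC_UP):
    """True if a connection meets the given download/upload thresholds."""
    return down_mbps >= down_req and up_mbps >= up_req

# Hypothetical connections for illustration.
dsl_link = (20, 1)        # a rural DSL line: fails the 25/3 benchmark
fiber_link = (1000, 1000) # gigabit fiber: passes even a symmetric 25/25 test

print(meets_broadband_definition(*dsl_link))                            # False
print(meets_broadband_definition(*fiber_link, down_req=25, up_req=25))  # True
```

Swapping in the symmetric 25/25 requirement, as in the last check, is exactly the change under which the Purdue research reports rural non-access jumping from 26.9 to 64.7 percent.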

RECs are well-suited for the task. They have nearly a century of experience managing local infrastructure in difficult, high-cost rural areas. Indeed, REC electric infrastructure connects many of the most distant and rugged parts of the country. This infrastructure and experience allows them to provide fiber to the home at relatively low cost, delivering gigabit speeds in areas where such connectivity would normally be unthinkable. Ownership by their members means that they are only required to break even, enabling RECs to charge more affordable rates than investor-owned companies driven by profitability concerns. Additionally, RECs map well onto many of the areas that could gain the most economic benefit from broadband connectivity. The National Rural Electric Cooperative Association reports that overall 6.3 million households in co-op territory could gain a collective $12 billion in economic benefits if they received reliable access.

Access to funding is an important piece of the puzzle for any rural broadband project. RECs have applied for and received funding from federal sources like the FCC, the National Telecommunications and Information Administration, and the Department of Agriculture. However, when applications have been denied, they have also proven effective at self-funding. Indeed, RECs can leverage existing electrical assets in order to pay for broadband deployment, without having to hike rates for their electric customers.

As member-owned collectives, RECs tend to be highly trusted and responsive local institutions, allaying possible mistrust and conflict with local residents and stakeholders. The American Customer Satisfaction Index reports that these inherently localized institutions enjoy the highest consumer satisfaction of any of the different players in the electricity industry. Their structure provides transparency and voice to their consumers, who are also their owners.

Finally, deploying fiber can enhance an REC’s electric service and expand distributed renewable energy generation. Combining fiber with electric service provides reliability and redundancy for the grid managers. It can also promote more efficient energy usage by allowing for load-management devices like smart thermostats and smart appliances. Perhaps most importantly, as RECs are looking to increase their renewable generation portfolios, building connectivity can improve their “ability to host these generation assets, monitor power sources, and improve forecasting capabilities to integrate the intermittent nature of their production onto the grid.” While many investor-owned monopoly utilities remain reluctant to move away from centralized power plants, RECs’ member-owner structure gives them enormous potential as renewable energy providers. Producing energy on land owned by members in turn boosts economic development by increasing the land’s productivity while developing new sources of rural capital. (For more on the benefits of smart grids and the disruptive potential of distributed generation, see the National Rural Electric Cooperative Association’s “The Value of a Broadband Backbone” and “The Energy Prosumer” by Colorado Law Professor Sharon Jacobs, respectively.)

It may come as a surprise, then, that despite the 1996 Telecommunications Act authorizing grants to multiple types of providers, the FCC has been reluctant to provide Connect America Fund money to RECs, instead reserving grants for telephone companies. Even more surprisingly, in many states RECs faced long-standing legal barriers to getting into the broadband game. For example, North Carolina prevents its RECs from accessing federal grant funding for broadband deployment. Similarly, Georgia began 2019 in a legal limbo, unclear whether RECs were even allowed to provide broadband service at all. The Institute for Local Self-Reliance points out that many direct state barriers are preempted by the 1996 Telecommunications Act. However, RECs often lack the resources, knowledge, and political will to engage in lengthy legal battles with their own state governments. Of course, major national telecom companies are known to lobby fervently against letting any new providers into the market, even in poorly-served areas. For small cooperatives, this creates a daunting political landscape.

Meanwhile, major incumbent electric utilities are equally leery of landowners developing their own renewable energy resources, which injects more competition into the electricity generation market. RECs have typically purchased power from these wholesale power generators and distributed it to their customers. The ability of REC member-owners to produce their own power keeps more money local, but also creates supply competition for regional power providers. RECs trying to empower distributed generation and build broadband connectivity thus face fights on multiple regulatory fronts against incumbent electricity providers as well as telecommunications companies.

But in the last two years, spurred by an increasing demand to close the digital divide, both states and the FCC have been making changes. The 2017 Connect America Fund auction finally opened a relatively small portion of the bidding to non-incumbent carriers like RECs. In this same vein, Tennessee cleared out legal barriers for co-ops and simultaneously provided a pot of money to incentivize build-out. Georgia and Mississippi both passed laws this year allowing their co-ops to get in the game. In a reflection of the bipartisan consensus around removing regulatory obstacles to rural economic development, both pieces of legislation cleared state houses with overwhelming support. In 2016, notably earlier than many of the recent developments in state law, 87 RECs across the country were already advertising fiber networks providing gigabit speeds. Some of these take the form of partnerships with ISPs while others may offer open-access networks to encourage competition. These success stories have no doubt spurred states and the FCC to reconsider RECs more as partners, and less as competitors.

Until 2018, Colorado had its own obstacle for RECs. By law, incumbent telecommunications providers had the right of first refusal whenever a new broadband expansion project was proposed. This restriction enabled telecommunications companies operating in the area to provide a minimum level of service while foreclosing other competitors. The 2018 Broadband Deployment Level Playing Field Act kept this right of first refusal in place, but with an important change. Under the amended law, incumbents that wish to exercise their right of first refusal must match the upstream and downstream rates of a potential competitor’s proposed project, and do so at the same or lower cost. 

In 2016, the Delta-Montrose Electric Association (DMEA), on the western edge of Colorado, was among the first to move forward with a fiber program to stimulate economic growth in the region. Their Elevate program offers a 100 Mbps option and a 1 Gbps option. Similarly, the La Plata Electric Association and Yampa Valley Electric Association, in southern and northern Colorado respectively, are also in the process of expanding broadband subsidiaries. DMEA has also been a leader in the fight to allow for more local electricity generation. The co-op recently followed the example of New Mexico’s Kit Carson Cooperative and reached a settlement to buy out of its contract with incumbent electricity producers, which limited local generation potential. La Plata Electric Association is considering doing the same.

Closing the rural digital divide has been described as an “all hands on deck” effort by the FCC. Increasingly, that means opening the door to RECs as broadband providers. Their community-centered model and time-tested experience with rural infrastructure give them a natural affinity for the task at hand. As the cost of distributed renewable energy generation continues to plummet, the advantages of integrating energy and data infrastructure grow. RECs not only enable data-driven entrepreneurs, they also open the door for struggling farmers and landowners to build profitable, renewable energy resources. However, both our data infrastructure and power generation infrastructure are struggling to grow past the restrictive legacy of a top-down approach, one that relied on regulation and planned economic development rather than market competition and entrepreneurial innovation. In a time when American public sentiment is distrustful of corporate interests and intrigued by cooperative ownership models, lawmakers and regulators should empower RECs. They should have the chance to duplicate the success of the 1930s, compete with incumbent broadband providers in a free market, and participate in competitive power markets. Rural Americans underserved by the existing broadband market should consider whether the groups that proved so successful at electrifying their communities could also be the most reliable bridge across the digital divide.

Conor May is a member of the Colorado Law & Technology Journal and serves on the executive boards of the Environmental Law Society as well as the Silicon Flatirons Student Group. He studies antitrust law, tech policy, and environmental law, with a focus on energy regulation.

Mango Pods and the Regulatory State

The Washington Post has called mom-and-pop vape shops “the small business success story of the decade”: with a product in high demand and a market with relatively low barriers to entry, there’s a reason you’ve been seeing vape shops pop up everywhere recently. In the Netflix documentary Betting on Zero, Zac Kirby from Ponca City, Oklahoma loses a lot of money with the notorious multi-level marketing company Herbalife. His solution? Turn the brick-and-mortar location he had purchased to hawk Herbalife smoothies into a vape shop. “I was one of the lucky ones,” he says, “who found a new and emerging industry to get into.”

As cigarettes have fallen further out of favor, nicotine and THC vaporizers have started to take their place, with people drawn to the lack of obnoxious smell coupled with a nicotine or THC high. Vaporizers have been around for a while, but their original iteration was large and bulky, and their demographic confined to those who wanted to buy or build something larger than an iPod to get a nicotine fix.

You’ve also likely heard about the spate of vape-related illnesses that have popped up over the past year, capturing huge amounts of media and political attention. President Trump, as he is wont to do, has even threatened executive action to stem the so-called vaping crisis. And while the current administration is much more likely to issue executive orders than previous ones, the federal government actually has little control over the sale of nicotine products short of executive action.

In 2009, the Obama administration passed the Family Smoking Prevention and Tobacco Control Act, which gave the FDA the power to regulate certain aspects of the tobacco industry. Importantly, this means that the FDA can pass rules related to tobacco regulation without an explicit mandate from Congress, as long as the rule is within the power granted to the FDA in the Act and enacted in accordance with the Administrative Procedure Act. Prior to the Act, tobacco was regulated through a combination of state, federal, and municipal laws, with no federal agency involvement. The Act granted the FDA authority over tobacco manufacturing, barring states from passing stricter laws related to that aspect of tobacco regulation. While Beverly Hills is allowed to ban cigarette sales outright, it can’t regulate the way cigarettes are manufactured, because that power lies with the federal government.

This Act, however, only preempted some forms of state and local regulation of tobacco and preserved others. For instance, states and municipalities still have the power to ban any or all classes of tobacco, but they can’t impose more stringent labeling requirements than federal statute demands. In Beverly Hills, for instance, gas stations and convenience stores will be prohibited from selling cigarettes beginning in 2021. 90210 is still an extreme outlier in tobacco regulation; the city was one of the first to ban smoking indoors in the late 80s, and will be one of the only areas of the United States where it’s illegal to sell tobacco products.

So why have states and municipalities been so quick to ban vape products while leaving traditional cigarettes and other tobacco products on the shelves? Tradition and history likely have a lot to do with it. The Beverly Hills cigarette ban, for instance, has a carve-out for the cigar lounges that have been in the neighborhood since the days of Old Hollywood. Even Auschwitz prisoners—allowed little else—were allotted three cigarettes per week, such was their importance. As unhealthy as cigarettes demonstrably are, and as successful as advocates have been in cutting the number of smokers in America, they’re still an indelible part of at least some corners of social culture. There’s a reason everyone was swooning over that photo of Phoebe Waller-Bridge celebrating at an Emmys after-party.

Vaporizers, on the other hand, are new, and are especially new to a particular class of young urban professional. Much has been written about how the advent of Juul and similar devices, with their sleek, unobtrusive design, has brought vaporizing to the mainstream. Where vaporizing was once an activity limited to those who wanted to buy or build their own large devices, and nicotine oil had to be purchased at specialty shops rather than at gas stations or convenience stores, it’s now been adopted by people in every demographic. Vaporizers just weren’t popular enough to care about before they looked like USB drives.

But Juul and their competitors have changed all of that—the number of high school students who say they have vaped nicotine has doubled since 2017, from 11 percent to nearly 21 percent. The fact that vaporizers have now become the province of young urban professionals, combined with the fact that there is a legitimate issue with teen use of the products, is arguably what has made vaping such a ripe political target. It’s relatively new, it’s entered the mainstream in a short amount of time, it’s been adopted by teenagers as contraband, it’s made hundreds of people ill—vaporizing was primed to catch the ire of societal moral panic.

There’s also the issue of the tobacco lobby—or Big Tobacco. Decades of pressure on all levels of government meant a hands-off approach to tobacco regulation prevailed until the 90s. And while the vast majority of states and establishments have chosen to ban smoking indoors, banning cigarettes outright would mean losing monetary support from Big Tobacco and angry constituents, in addition to a host of lawsuits.

Tobacco 21

You may have seen pro-21-year-old smoking age ads paid for by Juul or Juul’s parent company Altria in magazines or on TV recently. Why on earth, you might wonder, would the companies accused of aggressively marketing to teens support raising the age to purchase tobacco products? Because those companies want laws that will raise the smoking age while simultaneously preventing states from passing new tobacco regulations, with the goal of eventually passing a preemptive federal law.

So if tobacco companies get their way, a federal law that would raise the smoking age would also grant the federal government preemptive authority over other aspects of the tobacco industry, like the ability to ban certain products. This would mean that cities like Beverly Hills that have outlawed cigarettes, and the multitude of other states and municipalities that have banned vaping products recently, would no longer be able to do so. A smoking age of 21 may seem great to most people—which is what tobacco companies are counting on so that they can slide federal preemption of state and municipal power over tobacco into the statute.

Legalize and Regulate

The Temperance Movement in the United States was the result of a mix of potent cultural forces culminating in the ratification of the 18th Amendment—better known as Prohibition. One of the results of this brief period of constitutional insanity was that, on average, 1,000 Americans died every year of Prohibition from tainted alcohol. Barred from enjoying their vice of choice, Prohibition-era Americans drank industrial-strength alcohols that had been “denatured” with poisons like methanol, many times with fatal or paralyzing results. “Blind drunk” became more than a figure of speech.

There are obvious parallels between Prohibition and the current federal legal status of marijuana. When NBC News enlisted cannabis testing agency CannaSafe to run a battery of tests on 18 separate brands of THC vaporizer cartridges, the 3 purchased from legal dispensaries came back negative for pesticides, heavy metals, and solvents. The other 15—purchased on the street—came back positive for at least one. Amanda Chicago Lewis, a prolific cannabis writer and activist, has been warning about the dangers of unregulated vapes for the past few years. A wholesale ban on vaporizers will likely only serve to make the problem worse. Where marijuana is legal, for instance, many states now limit the parts per million of butane allowed in legitimate THC oil. Indeed, multiple health professionals and advocates have voiced concerns that traditional cigarette smoking will increase if vapes are banned nationwide. Others fear that the market for THC oil will move further underground, with resulting safety concerns. Their fears aren’t unfounded: illegal markets create safety concerns precisely because illegal products and services can’t be regulated or monitored by the government.

Saving Our Spectrum: Handling Radio Layer Vulnerabilities in Wireless Systems

Two of the greatest challenges of the modern technological age are security and privacy. Spectrum, specifically at the radio layer, is particularly vulnerable to attack. How can we better protect our devices and infrastructure? Speakers and panelists at the Silicon Flatirons Center’s Saving Our Spectrum: Handling Radio Layer Vulnerabilities in Wireless Systems Conference came as close as one can to answering this complex question.

A key development in the telecommunications industry is the shift away from transmitting signals over wires and fiber toward transmitting them wirelessly. And while wireless networks reduce capital expenditures and have great potential in tough-to-reach places, the world’s increased dependence on radio comes at a cost. Radio receivers, unlike wired connections, cannot be physically protected from attack, because in order to function they must be open to incoming signals.

Professors Hatfield and Gremban led off the conference’s primer with a discussion of three common types of attacks on these systems: sniffing (listening to wireless transmissions for unencrypted signals), spoofing (one user masquerading as another), and jamming (blocking signals to specific or several devices). One of the difficulties in addressing the radio layer vulnerabilities of 4G and 5G networks is that attacks can occur at any layer, meaning both our devices and the network are potentially vulnerable to attack.

Consumers can protect their devices, but are limited in their ability to do so. Certainly, one can download any number of applications from the app store that promise to detect IMSI catcher (or stingray) devices, which helps combat sniffing. But those applications are known to produce false positives, meaning they alert customers to nonexistent threats, and they are not as effective at stopping bad actors as one would hope. This raises the question: if end users can do little to protect themselves, can, and should, companies be doing more to protect devices by securing the network and improving device hardware?

The mobile communications companies selling devices and telecommunications equipment possess the advanced technology and know-how to address the aforementioned vulnerabilities, but they have little incentive to make improvements that would secure devices. Adding hardware to address vulnerabilities in the devices would be expensive, though probably not impossible, according to some panelists.

Though it may be some time before we see security features in our devices, more and more companies, advocacy groups, and regulators are attempting to address the vulnerability issue by experimenting with new systems and methods. One way this is being done is by using artificial intelligence and machine learning to detect and respond to attacks on devices and at other layers. AI is particularly useful because of its ability to quickly isolate atypical interactions on the network. Additionally, DARPA’s Spectrum Collaboration Challenge (SC2) is “using AI to unlock the true potential of the RF spectrum,” and promises to deliver some viable solutions as well, so the outcome of that competition will be worth following.

Finally, the 3GPP (Third Generation Partnership Project) has used sophisticated technology to address vulnerabilities in the user authentication process. Authentication helps networks determine whether users are who they say they are, which helps in instances of spoofing. With each generation of cellular network technology improving on the last, our ability to authenticate users has improved as well – and this trend has held true for 5G.

Despite the high number of attacks, the situation is not as dismal as it once appeared. In fact, there are already several viable ways to address radio layer vulnerabilities. These include requiring device manufacturers to include protective features, such as advanced hardware and encryption technology, before delivering devices to customers; expanding the use of AI applications; and, as we have done in the past, hoping that new cellular network generations will bring improved authentication processes.

Though the hope is always that something will be done to totally secure our devices and networks, we must recognize our own technological constraints, as well as the capabilities of bad actors. That said, the three examples outlined above seem to be viable solutions and demonstrate stakeholders’ increased awareness of the issue at hand.

Freddy is Managing Editor of CTLJ, Volume 18 and a Research Assistant for the University of Colorado’s Silicon Flatirons Center for Law, Technology, and Entrepreneurship. His study and research focus on identifying legal solutions for the issues that arise out of emerging technologies and increasing access to critical technology infrastructure in under-served communities.

Move Fast, Break Things: How carriers could break networks in the race to 5G

In the race to deploy 5G networks across the U.S., the big carriers have adopted a “move fast, break things” mentality that threatens to break existing network architectures for the speculative promise of faster speeds and better networks. This mentality is in large part motivated by the narrative that 5G is a race in which the U.S. is competing against China to deploy the next generation mobile network. This narrative of a race even led AT&T to push out an OTA update to certain phones that displayed a “5Ge” logo in the corner of the screen when users were actually connected to a legacy 4G LTE network with specialized updates. While modestly faster, it certainly fell short of a generational change in mobile telecommunications.

What is 5G?

For the uninitiated, 5G refers to a set of standards for the next generation of mobile networks. Here is a good summary for those looking for a deep dive on what makes 5G different. Broadly speaking, 5G makes three key improvements over 4G LTE networks:

(1) higher speeds

(2) lower latency

(3) the ability to connect to more devices at once

Like other mobile networks, 5G depends on spectrum allocations through the Federal Communications Commission (FCC), which authorize carriers to transmit through cell towers and cell phones at specific radio frequencies. However, unlike other networks, 5G relies on a wider array of spectrum allocations in order to provide more data to consumers. Generally speaking, lower frequency bands provide better coverage over longer distances but typically don’t provide as much data bandwidth, making low band ideal for rural applications. As frequencies increase, signals typically fall off over a shorter range, but can provide higher data bandwidth. Legacy 4G LTE systems already operate on low and mid band spectrum, but new spectrum allocations in high bands, like millimeter wave (mmWave), promise significantly higher data capacity. 5G also depends on network optimizations that reduce backhaul latency to deliver faster speeds. This is what AT&T tried to argue it had deployed with “5Ge” before eventually settling a lawsuit alleging false advertising.
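The coverage-versus-capacity tradeoff between bands can be summarized with rough, purely illustrative figures (these numbers are representative assumptions for the sake of comparison, not drawn from any specific FCC allocation or carrier deployment):

```python
# Purely illustrative band characteristics; real-world range and
# throughput vary widely with deployment, terrain, and equipment.
BANDS = {
    "low band (<1 GHz)":      {"range_km": 10.0, "peak_mbps": 100},
    "mid band (1-6 GHz)":     {"range_km": 3.0,  "peak_mbps": 900},
    "mmWave (24 GHz and up)": {"range_km": 0.2,  "peak_mbps": 3000},
}

# Lower frequencies travel farther; higher frequencies carry more data.
for name, band in BANDS.items():
    print(f"{name:<24} ~{band['range_km']} km range, ~{band['peak_mbps']} Mbps peak")
```

The pattern, not the specific numbers, is the point: each step up in frequency trades coverage area for capacity.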

High Band Issues: Problems with mmWave and the 24 GHz disaster

While new high-band allocations promise the biggest potential speed gains over legacy 4G LTE networks, these benefits will likely only be available to a select few Americans in specific areas within the biggest metropolitan centers. Due to the propagation characteristics of mmWave technologies, the towers have a very limited range – at best only a couple hundred meters – compared to up to 50-150 km for 3G/4G towers. Thus, to effectively deploy a network using mmWave technologies, a very high degree of “network densification” is necessary to provide service. Essentially, while the 5G towers are smaller, a city needs hundreds, even thousands, to reach the density required for a functioning network. While this “densification” is logistically possible and economically feasible in major cities and football stadiums, the potential promise of downloading movies in seconds will likely remain unavailable to rural Americans.
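The scale of that densification can be sketched with a back-of-the-envelope calculation. Assuming each tower covers an ideal circular cell (a simplification; real coverage depends on terrain, antennas, and capacity planning, and the city size here is an illustrative figure), the tower counts diverge dramatically:

```python
import math

def towers_needed(area_km2, tower_range_km):
    """Rough lower bound on towers to blanket an area, assuming each
    tower covers an ideal, non-overlapping circle of the given radius."""
    cell_area = math.pi * tower_range_km ** 2
    return math.ceil(area_km2 / cell_area)

# Illustrative mid-sized city of ~400 km^2.
print(towers_needed(400, 10.0))  # macro cell, ~10 km range -> 2 towers
print(towers_needed(400, 0.2))   # mmWave cell, ~200 m range -> 3184 towers
```

Even under these generous assumptions, a metro-wide mmWave build-out requires thousands of sites where a handful of macro towers once sufficed.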

Another issue with high-band spectrum is the recent “24 GHz disaster,” which threatens the reliability and accuracy of weather forecasts for the promise of better networks. In the recent “Spectrum Frontiers” auction, the FCC sold carriers access to blocks of spectrum in the 24 GHz band for use in 5G networks. However, this auction was conducted despite objections from NOAA and NASA that mobile allocations in the band would cause significant interference with weather satellites, which depend on the unique characteristics of the 24 GHz band to observe water vapor in the atmosphere. Once mobile service is active in the band, interference could potentially reduce the accuracy of hurricane forecasts by decreasing forecast lead time. As strong hurricanes become more common and hit increasingly underprepared cities, this band allocation could result in an increase in property damage and potentially even additional loss of life from superstorms.

Mid-Band Issues: C-Band & 6 GHz

Beyond the 24 GHz band, other bands the carriers are seeking for 5G have additional interference problems that could threaten incumbent services that are still critical to our telecommunications infrastructure. In the C-Band Proceeding, the FCC is considering a reverse auction to relocate or substantially reduce the number of companies using the 3.7-4.2 GHz band for satellite communications. These C-Band incumbents include satellite companies, cable companies, rural broadband providers, and television broadcasters. While a number of these incumbents are likely to participate in the reverse auction and sell their current licenses, some incumbents have indicated that they are either unwilling or unable to relocate their services out of the band. Mobile carriers and satellite incumbents are also fighting over exactly how the 500 MHz of spectrum should be divided between incumbents and entrants, and whether or not guard bands are necessary to protect earth stations from interference. Regardless of what decision the FCC makes, there will likely be impacts for rural Americans who depend on satellite services to receive internet, television, or other services.

In the nearby 6 GHz band, carriers are currently in conflict with manufacturers of unlicensed devices (think Wi-Fi routers) over a proposed change to service rules in the band that would allow unlicensed users to share spectrum with licensed users, subject to an “automated frequency coordination” scheme designed to prevent interference. This “AFC” technology would be required on any unlicensed device that operates within the 6 GHz band, and would prevent these devices from broadcasting if they might cause interference with a licensed user broadcasting nearby. Unlicensed users argue that as Americans’ appetite for data increases, more spectrum will need to be allocated for unlicensed use to provide more room for Wi-Fi services. While Wi-Fi devices are predominantly low power and used indoors, mobile carriers argue that AFC rules should be applied to all unlicensed devices operating in the band. AT&T argues that without AFC rules applied to all unlicensed devices, interference with mobile operations will be inevitable. In these arguments the carriers stress that while expanded spectrum for Wi-Fi may be critical, 5G should also be considered a possible solution to America’s expanding appetite for data.
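Conceptually, an AFC check boils down to: before transmitting, ask whether the requested channel overlaps a protected licensed user nearby. The sketch below is hypothetical; the data model, function names, and the 5 km protection threshold are invented for illustration and are not drawn from the actual AFC rules or specification:

```python
from dataclasses import dataclass

@dataclass
class LicensedStation:
    freq_mhz: float       # center frequency of the licensed channel
    bandwidth_mhz: float  # channel width
    distance_km: float    # distance from the requesting device (simplified)

def may_transmit(freq_mhz, bandwidth_mhz, stations, protection_km=5.0):
    """Deny transmission if the requested channel overlaps a licensed
    station's channel and that station is within the protection distance.
    (Illustrative logic only -- real AFC uses propagation modeling.)"""
    lo, hi = freq_mhz - bandwidth_mhz / 2, freq_mhz + bandwidth_mhz / 2
    for s in stations:
        s_lo = s.freq_mhz - s.bandwidth_mhz / 2
        s_hi = s.freq_mhz + s.bandwidth_mhz / 2
        overlaps = lo < s_hi and s_lo < hi
        if overlaps and s.distance_km <= protection_km:
            return False
    return True

links = [LicensedStation(freq_mhz=6000.0, bandwidth_mhz=30.0, distance_km=2.0)]
print(may_transmit(5990.0, 20.0, links))  # overlapping channel, nearby -> False
print(may_transmit(6100.0, 20.0, links))  # clear channel -> True
```

A real AFC system would consult a database of registered incumbents and apply propagation models rather than a flat distance cutoff, but the gatekeeping logic is the same: unlicensed devices transmit only where coordination says they can.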

Dynamic Efficiency: Critics Claim a Flawed Premise for the T-Mobile/Sprint Merger

In their filings and messaging around the proposed merger, T-Mobile and Sprint argue that as a combined firm they will be able to provide better 5G service than any existing carrier could alone. This assertion is justified in part by the complementary spectrum assignments each company holds (see the figure below). Together, these assignments will give the ‘New T-Mobile’ access to more spectrum than any other carrier, enabling them to deliver better service. Essentially, they claim a dynamic efficiency gain that outweighs the potential danger of static efficiency losses.

However, the Department of Justice placed conditions on the merger that require Sprint to sell some of its spectrum to DISH, a company with a history of hoarding spectrum. According to critics, this remedy is unnecessarily complicated: there are currently four carriers, and DISH won’t be a viable fourth for quite some time. Additionally, the spectrum divestiture somewhat undermines the initial premise of the merger – that the combined spectrum portfolios would empower the ‘New T-Mobile’ to provide better service than anyone else.

A false premise of the race to 5G?

Ultimately, the narrative of a race to 5G may turn out more beneficial to carriers than to consumers. Critics of the 5G race argue that there is likely no harm to consumers, and possibly even US carriers, if China succeeds in deploying 5G before the United States – because the Chinese government controls spectrum allocations, major carriers, and device manufacturers, it is significantly easier for China to rapidly deploy 5G. In effect, China can ignore all dissent and centrally manage their economy. In this way, a race analogy benefits US carriers because it motivates the FCC to take a light-touch regulatory approach, effectively coordinating or allowing carriers to coordinate in a way that could mirror a centrally managed economy. This may reflect the White House’s focus on delivering “wins” for America, even if those wins come at a heavy cost or start to ape a communist approach to the economy.

Join the CTLJ Content Team at the Silicon Flatirons Saving Our Spectrum Conference on October 10, 2019 at CU Law. Luminaries in the field of spectrum governance and radio propagation will examine these issues, in particular the security vulnerabilities emerging with the next generation of connected devices.

A Day in Your Data

Artwork by Oxyman.

Digital privacy rights remain a fleeting, uneasy feeling to most people, briefly relevant during Facebook’s Cambridge Analytica scandal or data breaches at companies like Equifax and Capital One. Yet consumer data is continuously harvested and utilized in a never-ending effort to turn ‘cookies into cash’. Your information, or rather information about you, is big business – more lucrative than oil, and expanding exponentially.

This pervasive, essentially unavoidable, collection and commercialization of data creates extensive social costs (legalese for knock-on effects in the economy that hurt people – like pollution). While some social costs are moderate (influencing who sees a job notice, for example), some are extreme. As conversations inappropriately recorded and retained by ‘smart’ home assistants like Amazon’s Alexa are used in court, and automated systems without transparency, or indeed even accuracy or accountability, are used to determine sentences, calculate recidivism risks, or award bail, the social costs created by data utilization may eclipse its benefits in certain contexts.

The data economy is full of risks and benefits. In the U.S., consumers are largely responsible for navigating these issues, and are typically bound to whatever decisions they make. Those decisions are of limited value, however, as companies routinely disregard their own policies or violate expressed consumer choices; Google retained and utilized the location data of consumers despite those consumers turning off location services in their settings, just as Apple violated its own commitments not to share consumer listening data with third parties. When evaluating a data decision, consumers must weigh not only the decision itself, but also the risk that they are engaging with a bad actor.

To help illustrate our contemporary data reality, the CTLJ Digital Content Team crafted a brief hypothetical timeline of the average day of a CTLJ member, given the nom de plume “Bob” to protect their privacy, that catalogs how data about us is implicated in every facet of daily life.

5:30AM – Rise ‘n Shine

On weekdays Bob wakes up at 5:30AM. He uses a Google Assistant “routine” that reads the forecast for the day, based on his current location, and plays the most recent episode of the WSJ Tech News Briefing.

Bob is receiving some real benefits here, without spending any money: the alarm itself, the forecast, and the podcast. Most would agree that this is a good result for Bob.

Google also collected useful information about Bob that they can sell, in some form or another, to generate revenue. They track what time he typically gets up in the morning, and potentially how often he snoozes his alarm, which will be integrated into an advertising profile, and possibly a psychographic profile, similar to those created by Cambridge Analytica. They also track his current location and interests based on the “routine”, which could also be used to tailor advertisements, alter the offering of services, influence prices for certain goods for Bob, and may ultimately impact which job or housing postings he has access to.

5:45AM – Struggling to Get Out of Bed

While Bob does occasionally hop straight out of bed in the morning, all too often he spends about 15 minutes lingering under the sheets scrolling through Twitter or Reddit, or watching videos on YouTube. Sometimes Bob stays in bed under the covers all day.

For every YouTube video or Reddit link Bob clicks, more information is added to his various profiles, updating targeting algorithms to serve him ads that he is more likely to find relevant. Most large data collection firms like Facebook maintain their own proprietary profiles on you, even if you don’t use their services, while data brokers and other ill-defined entities create and trade their own.

7:30AM – Hitting the Gym

If everything is going well for Bob so far this morning, he’s hopefully getting to the gym around 7:30AM. He uses a Garmin Fenix watch which tracks his location and heart rate. Garmin also has an app where Bob can enter details on his height, weight, and age. Their app also contains “badges” that incentivize Bob to do things like record “activities” for 7 days in a row, climb a certain number of floors in a day, or run a certain number of miles in a week. These “badges” show up on a public profile where Bob can compete with locals or people around the globe at a similar level. 

While Bob enjoys these training metrics, and derives motivation from the gamification of workouts, Garmin likely collects a swath of information about Bob. This data may be used in ads, it may also be eventually used to determine his health insurance coverage and premiums or even his ability to get or keep a job. Health trackers in particular have also been implicated in national security issues, revealing not only the location of security installations but also potentially patrol routes and patterns of activity within these installations.

8:45AM – Morning Coffee

On his way home from the gym, Bob might stop to grab a cup of coffee from his local coffee shop. This purchase will be tracked in a number of ways. If location services are enabled on Bob’s phone, and likely even if they’re turned off, Google tracks that he visited the Brewing Market on Baseline, and may prompt him to write a review, in addition to storing that information in his profile and likely sharing it with third parties. If Bob uses his credit card, Wells Fargo tracks when, where, and how much money Bob just spent. If Bob forgot his wallet at home and uses Google Pay, his transaction information is also shared with Google. Additionally, the WiFi router in Brewing Market likely logs when Bob enters and leaves the area and shares it with third parties, even if Bob’s phone is in ‘airplane mode’.

12:00PM – Lunch

On most days Bob packs a lunch, but occasionally he has a lazy morning and resigns himself to eating out for lunch. Because Bob is perpetually indecisive, he usually spends a fair amount of time searching around for a new place to eat before eventually giving up and heading to Qdoba. As he tools around on Google Maps mulling over the menu choices at the newest food truck or fast casual restaurant, every single click is recorded and analyzed. 

Because Bob has location services enabled on his phone, after he leaves wherever he went for lunch that day, he is immediately prompted with a notification asking how his experience was and inviting him to write a review. Both Bob’s visit itself and any review he posts will be used to improve his search experience next time, as well as to determine whether Bob is a good prospect for a job, a service, or advertisements. These profiles can become so accurate at predicting and interpreting human behavior that some people are convinced their phones are being used to surreptitiously record them. That is possible, but highly unlikely – several studies have debunked this claim. The reality is that these algorithms and profiles are finely tuned with access to nearly limitless data about you; they are simply that good.

1:30PM – Return to Work

Whether Bob is trying to catch up on readings or email, the first thing he does after booting up his computer is fire up a music app. Bob’s usual choice when he’s trying to be productive is the YouTube channel “lofi hip hop radio” (seriously, check it out). As Bob has been going about his day, algorithms have been hard at work updating his profile with all the data they’ve collected. Perhaps he liked a post by one of the outdoor gear companies he follows on Instagram, so he might receive an ad for a new piece of gear, or maybe Google allowed a third party to analyze his emails and found that Bob has an ailing relative, and he might mysteriously receive an ad for a new breakthrough drug for the disease they were just diagnosed with.

5:30PM – Grocery Shopping

Before Bob heads home he takes care of errands, which usually entails a stop for groceries. Most of the time Bob heads for Whole Foods, tempted by the “Prime Deals” available to Amazon Prime members. Bob is incentivized to use his Prime account every time he shops to receive savings, but at the same time he’s providing Amazon with information including what he likes to eat, how often he buys certain items, and when during the day he likes to shop. All this information can be useful to target advertisements for Bob down the road, as well as evaluate his health, future earning potential, or likelihood for certain personality traits.

Data utilization is a mixed bag: there are real risks, but also real benefits, to using behavioral data. Perhaps Bob found the tailored advertising more relevant than the generic pharmaceutical ads that plague network television. He likely derived real benefit from Google’s recommendation of a new taco truck to try in the area. Platforms can also use this data to better understand consumer preferences and deliver higher quality content. Netflix uses behavioral data to decide which shows to greenlight, to make decisions about the creative team and casting for original content, and to better tailor recommendations of existing content to users. According to a recent survey, 39% of consumers thought that Netflix had the best original content (vs. 14% for HBO and 5% for Amazon Prime Video). These are real, tangible benefits that cannot be discounted when examining data governance.

There are also real costs caused by these data mining operations that aren’t often visible on the surface. For example, when asked about the profiling tool Cambridge Analytica built to study Facebook users, former employee Brittany Kaiser called it “weapons grade” technology. When behavioral data is used to sell us more widgets or promote new content, there is likely a net benefit to society. When data about us is used to influence electoral outcomes, restrict personal and professional opportunities, or expose our deepest secrets, the costs loom large.

Join the CTLJ Content Team at the Silicon Flatirons Near Future of U.S. Privacy Law Conference on September 6, 2019 at CU Law. Luminaries in the field of data governance and privacy will examine these issues, in particular the possibility of a new federal law on data privacy rights.

Releasing Rapunzel(s) from the Trademark Tower: A Consumer’s Real Interest in Trademark Registration

by Rebecca Curtin

The TTAB recently reaffirmed that a consumer can establish standing to oppose a trademark registration because “consumers, like competitors, may have a real interest in keeping merely descriptive or generic words in the public domain, ‘(1) to prevent the owner of a mark from inhibiting competition in the sale of particular goods; and (2) to maintain freedom of the public to use the language involved, thus avoiding the possibility of harassing infringement suits by the registrant against others who use the mark when advertising or describing their own products’” (citation omitted; emphasis is the Board’s). 

In the interest of full disclosure: I am the petitioner in that case, opposing the registration of a word mark, RAPUNZEL for dolls and toy figures that, according to the specimen of use, depict the famous fairy tale character.  I am being represented by the Suffolk Law IP and Entrepreneurship Clinic with pro bono assistance from Workman Nydegger.

At this point, the TTAB has only denied the applicant’s motion to dismiss and allowed the opposition to continue based on adequately pleaded claims that the word mark RAPUNZEL: 1) fails to function as a trademark and 2) is merely descriptive for dolls and toys.

Nonetheless, the TTAB’s opinion thus far is heartening. It recognizes that an entity using its trademark power to gain exclusive rights to a descriptive or generic word mark can harm consumers and contribute to market overreach.  This is because, when an entity establishes trademark rights in a word, the word’s use is restricted in the marketplace in relation to the goods that it describes. If the registered mark is descriptive or generic, then consumers of those goods will experience deleterious effects and hindered competition. 

A trademark registration is a powerful tool in the hands of a registrant.  Take a look at these benefits, as articulated by scholar Rebecca Tushnet:

Rather than having to establish in each individual legal proceeding that its mark is in fact valid, a registrant is accorded a presumption of validity, and under certain circumstances that presumption is irrebuttable.  Other benefits to the trademark owner are nationwide priority over other users even without nationwide use, eligibility for assistance from the Customs Service in avoiding infringing imports, the ability to use the U.S. registration as the basis for extending protection in other countries, and preemption of certain state laws (876).

Trademark holders can also use their trademark registration to more effectively police and enforce their exclusive rights. Even if trademark holders never ultimately pursue an action in court to enforce their mark, registrants may make persuasive reference to a trademark registration in their cease-and-desist letters, which they send when they wish to warn a potential competitor off using a mark that they think will confuse consumers as to the source of the products or services being offered.

In the ordinary course, where the purported mark really is functioning to indicate a product’s source, the benefits of registration can ultimately flow to consumers. Marks can protect consumers from confusion as to which entity is producing, sponsoring, or endorsing the products they buy.  This has been recognized by the Supreme Court as the “general concern” of unfair competition law, with the Court elaborating that “while that concern may result in the creation of ‘quasi property rights’ in communicative symbols, the focus is on protection of consumers, not the protection of producers as an incentive to product innovation.”

In that light, consider the harms that may result from granting trademark registration to a descriptive or generic word.  In other words, if the registration is for a word that describes the qualities or function of the product, or is the generic name for the product, then the trademark registration will make it harder for competing producers to communicate with consumers. In this sense, registration of generic and descriptive marks can cause a different kind of consumer confusion.  The Rapunzel name is a helpful one for thinking about what the stakes in such communication really are.

Rapunzel is the name of a fairy tale character, perhaps best known in the versions first published by the Brothers Grimm in 1812, but with roots in tales much older than that, all sharing an archetype identified by folklorists Aarne and Thompson as the “maiden in the tower.”  For centuries, the character has remained a powerful and poignant touchstone for artists with strikingly different takes on the character, from Anne Sexton’s poetic lesbian re-telling of the tale to Dina Goldstein’s photographic series of Fallen Princesses to Carl Payne’s bronze Rapunzel.

Naturally, this proliferation of different versions of the character has been accompanied by a broad range of dolls and toy figures, from producers large and small, which are, in my experience as a mother of a young girl, cultural artifacts just as important as the higher art.  Accordingly, when consumers hear the word, “Rapunzel,” they think of the well-known fairy tale character, not any one producer of dolls or toys.  These dolls tell us what “Rapunzel” looks like; they tell us (and our children) what “Rapunzel” means.  Having the ability to choose from various interpretations of “Rapunzel” dolls, and being able to find such a doll from more than one producer, is important to consumers—it is important to me as a frequent buyer of fairy-tale-based dolls and toys.

Unlike other marks, the word mark RAPUNZEL for dolls and toys has no substitute. In this way, trademark rights in “Rapunzel” for dolls and toy figures are importantly different from a trademark like, say, “Cinderella Eyebrows Spa” for various health spa services.  There, we can see that the reference to Cinderella playfully suggests all kinds of things about the services—that the technicians will work as cheerfully as Cinderella at her chores, perhaps, or that the results will be so striking that no one will recognize the beautiful new you at the ball—but none of these suggestions will prevent other health spas from getting into the market and telling consumers the same things about their excellent services using other marks.  By contrast, if the maker of a doll that depicts Rapunzel can’t market the doll using that name, consumers will be deprived of important information that can’t be communicated with other words.  Rapunzel dolls are Rapunzels.

Exclusive trademark rights to market dolls and toy figures under the Rapunzel name could chill new entrants to the market through fear of infringement liability.  Even factoring in defenses like the descriptive fair use doctrine, trademark rights in the Rapunzel name for dolls would raise barriers, both to dollmakers offering “Rapunzels” in the marketplace and to consumers finding “Rapunzels,” where only the trademark holder feels free to use that name in big letters across the front of the box.

As the TTAB acknowledged in its opinion allowing the opposition to go forward, it has already “noted that ‘other doll makers interested in marketing a doll that would depict the character [LITTLE MERMAID] have a competitive need to use that name to describe their products.’”

Thus we are fighting for the consumer interest in Rapunzels that look like this, and this, and this, and this, and this, and this, and this.  We want a marketplace that welcomes a diverse array of producers who engage with the cultural legacy of this ancient fairy tale, from the latest interpretation by a giant corporate content provider to antique dealers who need the name Rapunzel to describe rare French bisque toys like this one, from more than a hundred years ago.   We are fighting for access to artisans like this one, who customize their Rapunzel’s skin tone, hair, and dress color to whatever their customer asks.  We are fighting for creative cos-play dads and crafty-do-it-yourselfers, who have long been inspired by the rich market for dolls and toys that express the Rapunzel character in different ways.  No one needs a trademark on the Rapunzel name to join in this market for dolls and toys that depict the character. Giving one to a single company will only chill the use of that name by others to tell us about the unique dolls and toys they make.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, and do not necessarily reflect the official position of CTLJ.

Professor Curtin is a graduate of Princeton University, where she received her A.B. in English, summa cum laude, and of the University of Virginia School of Law, where she served on the editorial board of the Virginia Law Review. Prior to attending law school, she completed her Ph.D. in English and American Literature and Language at Harvard University, and held teaching positions at Harvard University and Brandeis University.

Before joining the faculty at Suffolk Law, Professor Curtin worked as an associate in the IP Transactional practice group at Ropes & Gray LLP, where her practice focused on licensing, collaboration and other commercial agreements involving intellectual property. Professor Curtin teaches courses in Property and Copyright. Her research interests currently include the evolution of intellectual property regimes under the influence of new technologies and licensing transactions.

YouTube Demonetization

by Matthew Martinez

YouTube has recently changed its guidelines for ad revenue and video monetization, which is having a significant impact on content creators. Specifically, content creators are finding that their uploaded videos are flagged for inappropriate content and suffering from demonetization. This is part of a new policy created by YouTube to make the community brand-friendly and attract more advertisers to the platform.

An algorithm was designed to identify the following factors in videos and flag them for demonetization:

  • sexually suggestive content, including partial nudity and sexual humor;
  • violence, including display of serious injury and events related to violent extremism;
  • inappropriate language, including harassment, swearing and vulgar language;
  • promotion of drugs and regulated substances, including selling, use and abuse of such items; and
  • controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies, even if graphic imagery is not shown.
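To see why critics call these categories over-inclusive, consider a toy sketch of how a rule-based flagger might work. This is purely illustrative and not YouTube’s actual system (the keyword lists and function names here are invented for the example): when categories are defined by broad terms, benign videos trip the same rules as genuinely ad-unfriendly ones.

```python
# Hypothetical keyword-based flagger (illustrative only, not YouTube's system).
# Broad category terms inevitably match benign contexts too.
CATEGORY_KEYWORDS = {
    "violence": {"shooting", "injury", "war"},
    "sensitive_events": {"disaster", "tragedy", "attack"},
}

def flag_video(title: str) -> set[str]:
    """Return the set of ad-unfriendly categories a title triggers."""
    words = {w.strip(".,!?").lower() for w in title.split()}
    return {cat for cat, kws in CATEGORY_KEYWORDS.items() if words & kws}

# A charity fundraiser about a tragedy trips the same rule as violent content:
flag_video("Fundraiser for Las Vegas shooting victims")  # → {"violence"}
```

The over-breadth is structural: the rule has no way to distinguish a fundraiser for shooting victims from footage glorifying a shooting, which is exactly the failure mode described below.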

These categories have drawn criticism for being overly broad and vague. Combine this with the inaccuracy of bots learning to enforce the guidelines, and the result is unfair demonetization. Many of the demonetized videos deal with subject matter YouTube has marked “not suitable for advertisers.” However, many of these videos are in fact appropriate and not deserving of demonetization. After the mass shooting at a Las Vegas concert, popular YouTuber Casey Neistat created a video aimed at raising money for the victims of the tragedy, stating that all proceeds from ads would be donated to the victims and their families. A few days after it was uploaded, the video was demonetized.

YouTube’s algorithm has recently become more aggressive and far-reaching in removing ads from videos with even the slightest possibility of being controversial. Because the algorithm is fairly new, it is over-inclusive and affects videos that deserve ad revenue. As a result, certain YouTubers are unable to sustain a career from making videos and are being forced to stop uploading content. Even though YouTube is a private company not subject to the First Amendment constraints that apply to other public forums of communication, removing ads is still a form of censorship. By flagging videos for demonetization, YouTube is rewarding a very specific kind of content while forcing controversial, suggestive, or tangentially related content off of the platform. This significantly impacts LGBTQIA content creators because many of their videos deal with sexuality, the coming out process, and other related content that has been flagged for being “sexually suggestive.”

The underlying impetus for the increased policing of videos and the over-inclusive demonetization was a response to right-wing political groups uploading content that verged on extremism and hate speech. After brands found their ads paired with videos from channels like InfoWars and other conservative content creators, they threatened to remove all support from YouTube. While YouTube’s intentions seem well placed, its execution has alienated creators across the political spectrum.

As YouTube attempts to make the platform brand-friendly and palatable, it is acting like a gatekeeper and actively censoring content it deems inappropriate. Through the use of the current demonetization algorithm, YouTube is favoring certain speech over other speech and unnecessarily harming deserving creators and minority groups. The appeal of the platform is waning, and other services like Patreon are appearing on the horizon as better marketplace alternatives.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, and do not necessarily reflect the official position of CTLJ.

Copyright: (Get It) Out of Fashion

by Caitlin Stover

By interpreting copyright law to provide protection to fashion, has the Supreme Court inadvertently exposed the fashion industry to harm?

At issue in Star Athletica v. Varsity Brands was whether the arrangements of lines, chevrons, and colorful shapes appearing on the surface of Varsity Brands’ cheerleading uniforms were eligible for copyright protection as separable features of the design. Answering this question in the affirmative—after applying the relevant test for “separability” in a markedly different manner than courts have traditionally applied the doctrine—the Supreme Court broadly and categorically changed the game of copyright protection. Now, if a court determines that a design (1) has graphic or pictorial qualities, and (2) could be applied on a painter’s canvas, the test for copyright protection is met.

While perhaps providing some measure of clarity for circuits that are split on the issue, the Court’s opinion generates far more questions than answers.

Are baseball uniforms slim-fit leggings and a buttoned-up top—or are they more? For example, let’s say a baseball uniform designer claims, as “copyrighted works,” rights to the pinstripes or the piping along the seams of jerseys. Under the precedent established by Star Athletica, who would prevail in litigation—the claimant, or the alleged copyright infringer? And if the claimant prevails, what does this mean for baseball uniform vendors? And how will this affect end consumers of baseball uniforms?

Copyright, as applied to many industries, operates on an incentive-based theory: copyright protection exists to encourage the creation and dissemination of creative expression. In practice, this protection serves as a vehicle for the commodification of creations.  When rights to a particular expression become a commodity, the scope of affected interests expands; in a capitalistic society, the protection and enforcement of commodity-based rights inevitably impacts the end consumer. After all, consumers create the market for the products (or euphemistically, “expressions”) that copyright owners want to protect their rights to.

Tempted by the potential for securing a monopoly in one of the most lucrative clothing industries in the country, brands that are in a position to assert copyrights over the original designs of sports uniforms may soon flood the courts. And allegedly infringing brands, beware: copyright infringement liability carries with it a truly staggering range of potential damages. Pursuant to § 504 of the Copyright Act of 1976, an infringer may be on the hook for a minimum of $750 and a maximum of $30,000 in statutory damages—for each individual finding of copyright infringement. Where infringement is found to be willful, that $30,000 maximum climbs to $150,000 (again, because this bears repeating) per infringement.
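The exposure math compounds quickly. As a back-of-the-envelope sketch (a hypothetical scenario, not legal advice), the statutory-damages range of 17 U.S.C. § 504(c)—$750 to $30,000 per infringed work, up to $150,000 where the infringement is willful—can be multiplied out for a defendant accused of copying several designs:

```python
# Statutory-damages ranges under 17 U.S.C. § 504(c), per infringed work.
PER_WORK_MIN = 750
PER_WORK_MAX = 30_000
PER_WORK_WILLFUL_MAX = 150_000

def exposure(works_infringed: int, willful: bool = False) -> tuple[int, int]:
    """Return (minimum, maximum) statutory-damages exposure in dollars."""
    max_per_work = PER_WORK_WILLFUL_MAX if willful else PER_WORK_MAX
    return works_infringed * PER_WORK_MIN, works_infringed * max_per_work

# A vendor found to have willfully copied 10 uniform designs:
exposure(10, willful=True)  # → (7500, 1500000)
```

Ten willfully infringed designs put a defendant’s worst case at $1.5 million—precisely the kind of figure that deters smaller vendors from entering the market at all.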

Sure, money talks—but money can also silence. If companies are suddenly left exposed to unanticipated liability, scrambling to develop an adequately non-infringing alternative to their previously unchallenged iterations of Varsity Brands’ designs, there may not be a market for consumers to choose from.  A market contender toeing the new, Court-drawn line of infringement is not likely to gamble in the face of potentially ruinous pecuniary liability.

Threats of litigation, and risks of huge penalties if unable to settle litigation before trial, will likely be an effective deterrent for many potential market contenders lacking the deep pockets to carry on in the face of uncertain liability. For the Little League baseball teams, the AAU basketball teams, and more, this may translate into a sharp decline in the generic alternatives to the name-brand jersey supplier.

This brings us to yet another question: do the justifications advanced in Star Athletica for increasing copyright protections afforded to the fashion industry actually outweigh the increased costs to consumers?  If not, then perhaps copyright protection should not extend to fashion. After all, this was the conclusion previously reached by legal scholars, suggested by the text of the Copyright Act, asserted by the Copyright Register’s Office, held by a majority of federal circuit courts, and advanced by critics of Star Athletica’s holding.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, and do not necessarily reflect the official position of CTLJ.