Virtual Casinos: Are Video Games Preying on Our Addictive Tendencies?

by Trevor Bervik

Mix Star Wars with . . . well, anything, and you have instant recognition for your product. Under the licensing prowess of Disney, which purchased the franchise for just over $4 billion in 2012, the beloved Star Wars characters have graced such diverse products as boxes of macaroni and cheese, fishing equipment, and inflatable Christmas decorations. From these innocuous products, it seems unlikely that Disney would find itself in the middle of a political controversy, but that’s exactly what has happened in the wake of the new video game Star Wars: Battlefront II, developed by EA DICE and published by video game company Electronic Arts (EA).

In Battlefront II, Electronic Arts, the exclusive producer of Star Wars video games, implemented a “loot-box” system in which players could purchase in-game loot-boxes with real-world dollars. Opening these loot-boxes would unlock digital features, such as new characters or weapon upgrades, but purchasers would not know exactly which features they would unlock until the purchase was completed and the “box” was opened. Even iconic characters, such as Darth Vader and Boba Fett, were initially unavailable unless the player either spent an exorbitant amount of time playing (about 40 hours per locked character) or opened a large number of loot-boxes.

Gamers instantly complained, calling the system “pay-to-win” and pushing back against spending additional money to unlock new content, especially after spending a minimum of $60 for the core game. The controversy exploded when Electronic Arts’ attempt to defend the practice became the most heavily downvoted post in the history of the popular social-media site Reddit.

However, the problems for EA didn’t stop at social media. Regulators in Belgium have begun looking into whether the loot-box system should be considered a form of gambling. Even in the United States, politicians are beginning to take notice. Hawaii State Representative Chris Lee has described loot-boxes as a form of predatory behavior, even comparing the system to an “online casino” for kids.

Conversely, those in the industry dispute the claim that loot-boxes are a form of gambling. Recently, Karl Slatoff, the president of game publisher Take-Two Interactive, came out in defense of loot-boxes, saying the loot-box system is all about content and “[y]ou can’t force the customer to do anything.” Others in the industry compare loot-boxes with buying packs of trading cards, reasoning that since every loot-box contains “something,” each purchase delivers actual content to the player, while true gambling offers no such guarantee.

But these industry reactions may be missing the point. Content from loot-boxes generally can be separated into three categories: 1) common low-level content; 2) rare mid-level content; and 3) extremely rare high-level content.

The type of content that falls into category 1 is typically uninteresting, like a short voice clip or a marginal upgrade for a character. Category 1 content is freely given in loot-boxes, and players may receive duplicates in each successive box.

Category 2 content is slightly more interesting and can include things like costumes and better upgrades. This content appears in nearly every loot-box, but usually as only one item per box (out of the five total items in each loot-box).

Category 3 content includes things like high-powered characters, elaborate costumes, and other highly sought-after content. This is where the gambling may be implicated. As category 3 content generally provides players with the largest competitive advantage, items in this category are highly coveted. While each loot-box does contain some content, players must purchase a large number of loot-boxes to obtain the specific upgrade that they want. Gamers have even reported spending upwards of $90 on loot-boxes and still failing to unlock certain items.
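To see why the “every box contains something” defense sidesteps the real complaint, it helps to run the arithmetic. The drop rate and box price below are invented for illustration (publishers do not disclose their actual odds); the point is only how the geometric distribution behaves when a player chases one specific item:

```python
# Expected cost to obtain one specific rare ("category 3") item,
# modeling each loot-box as an independent random draw.
# NOTE: the drop rate and box price below are hypothetical --
# publishers do not disclose actual odds.

def expected_boxes(drop_rate: float) -> float:
    """Expected number of boxes opened before the item appears
    (mean of a geometric distribution)."""
    return 1.0 / drop_rate

def chance_still_missing(drop_rate: float, boxes_opened: int) -> float:
    """Probability the item has NOT appeared after `boxes_opened` boxes."""
    return (1.0 - drop_rate) ** boxes_opened

DROP_RATE = 0.02   # assumed 2% chance per box
BOX_PRICE = 3.00   # assumed $3.00 per box

print(f"Expected boxes: {expected_boxes(DROP_RATE):.0f}")
print(f"Expected spend: ${expected_boxes(DROP_RATE) * BOX_PRICE:.2f}")
print(f"Chance the item is still missing after 30 boxes "
      f"(${30 * BOX_PRICE:.0f} spent): {chance_still_missing(DROP_RATE, 30):.0%}")
```

Under these assumed numbers, the expected spend to land one specific item is $150, and a player who buys 30 boxes ($90, close to the figure gamers report spending) still misses it roughly 55% of the time. The “something” in each box does nothing to change that lottery-like structure.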

This arrangement “rewards” players for opening a lucky box. Imagine, as an analogy, a casino that charges poker players a $500 entry fee. Under the loot-box model, industry leaders would argue against a “gambling” classification for a casino that gives a losing poker player a stuffed-animal “reward” for participating while giving a winning player a gold bar. The losing player still gets “something,” but it defies logic to say that this “something” keeps the poker game from being gambling.

The industry naturally wants to keep its income stream strong, and profits from loot-boxes are healthy. But the gambling comparisons are apt, and legislatures may want to consider direct action to curb these potentially predatory practices.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.

Technology Competency in a Brave New World of Legal Practice

by Emily Dreiling

The Evolution of “Competence”

A lawyer’s “Duty of Competence” has been around for a long time. Model Rule 1.1 provides that “[a] lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Generally, lawyers have understood competence to mean their substantive knowledge of a certain area of law, in addition to their ability to adequately represent a client. The scope of this competence, however, has changed.

In 2012, the American Bar Association amended its Model Rules to emphasize that lawyers have a duty not only to be competent generally, but also to be “Technology Competent.” Specifically, it amended Model Rule 1.1, Comment 8, “Maintaining Competence,” to read as follows:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject. (Emphasis added.)

Just this year, Nebraska became the 28th state to adopt Comment 8; a majority of states now require a duty of Technology Competence. According to legal tech expert Robert Ambrogi, in a recent interview with the legal tech podcast “The Digital Edge,” this duty will likely soon be adopted by all states.

So, what does it mean to be “Technology Competent”?

Lawyers didn’t need this amendment to understand the importance of technology in the legal profession. The fact that you’re here, reading my post on a Tech Law blog, shows you understand the importance of, or at least value the use of, technology in the law.

Yet, when lawyers Casey Flaherty and Darth Vaughn administered basic technology assessments to hundreds of law school students in 2016, asking them to complete several tasks in MS Word, only about 33% of the students could perform the following tasks on their first attempt:

  • accept/turn-off track changes;
  • cut & paste;
  • replace text;
  • format font and paragraph;
  • fix footers;
  • insert hyperlink;
  • apply/modify style;
  • insert/update cross-references;
  • insert page break;
  • insert non-breaking space;
  • clean document properties; and
  • create comparison document (i.e., a redline).

It seems reasonable that, to be tech competent, lawyers should know the above skills, and arguably even more. Yet, if I am being completely honest, I do not think that I could complete several of these tasks on my own, on the first try, without the aid of Google. Most law students, like myself, can get through school without ever having to do most of the above tasks. This limited exposure leads to limited tech competence. If we lack basic competence in MS Word, what sorts of roadblocks will we encounter when we enter the legal profession?

Given the increasing adoption of mandatory e-filing, service via email, and eDiscovery, lawyers can no longer get by living in technological ignorance. Furthermore, prevalent professional use of modern technologies—such as case-management software, document-management software, billing software, e-mail, PDF systems with redaction, and the MS Office Suite—emphasizes the need for lawyers to be tech competent, at least when it comes to these programs.

So, if students are lacking this competence, how do they get it?

Here’s a call-out to the law schools. Despite the prevalence of tech in the legal profession, few law schools offer any substantive training in its use. This may be due to a fear of tech, or a belief that tech is reserved for the STEM curriculum. Moreover, academic faculty may lack an understanding of tech, and therefore feel uncomfortable teaching it, or may be ignorant of the shifting landscape of the legal profession, causing them to dismiss the importance of tech courses altogether.

Regardless, with the new ethical duty of technology competence, it is vital that law schools begin to teach these basic competencies to their students. Whether in one of the 28 “Comment 8” states or in one of the states that has not yet adopted this change, law schools should not only provide substantive courses on the use of prevalent legal technologies, but require them. And students, in turn, should demand these courses.


The FCC Needs a 21st Century Theory of Video Competition

by Galen Posposil

How will Americans receive video programming in the 21st Century? Over the air? Via traditional multi-channel video programming distributors (MVPDs), such as cable and satellite? Or via over-the-top internet services? What mix of video delivery methods will ensure that every American household has access to competitive sources of video programming? Despite considering dozens of video-related measures, both regulatory and deregulatory, over the last five years, the Federal Communications Commission has yet to articulate a regulatory vision for how the marketplace will deliver competitive and diverse video programming options to all Americans.

Each year, the Commission reports to Congress on the status of video-programming competition. As it prepares its 19th annual report, the video marketplace is bursting with innovation and competition. Depending on where they live, consumers often have the choice of multiple traditional multi-channel video programming distributors, over-the-air television, and internet-delivered programming. Meanwhile, both traditional networks and new entrants like Netflix and Amazon are spending billions on exclusive programming to attract consumers to the latest new network or online service. However, the benefits of this “Golden Age” of television may never reach all Americans, particularly those with disabilities and those in areas without broadband competition.

The explosion of new video production has brought dozens of new shows to television, but “television” itself is also changing. The technology used to deliver video to a screen now includes apps, streaming-video appliances, and advanced set-top boxes. In the race to build the best video service, accessibility features like video description and closed captioning are often left behind. For the deaf and blind, these accessibility features are the difference between participating in the national conversation about that new hit series and being shut out of it. While some online video providers like Netflix and Amazon have voluntarily adopted closed captioning and video description for their exclusive shows, devices have not necessarily kept pace. With TV apps and streaming-video sticks providing more and more video content, consumers cannot be sure that their content will be available in an accessible format on every device.

Rural consumers may also be left behind as the video-distribution landscape shifts. Both cable television and broadband infrastructure have been deployed mostly in urban areas.   Rural consumers are left with only satellite and over-the-air broadcast video services.  As video choices proliferate online, rural consumers are without access to the new services that have broken the stranglehold of traditional media. In other areas, consumers may have access to only a single broadband provider with speeds capable of streaming video.

As part of its 19th video competition report, the Commission should consider what data it needs from cable operators and traditional video programming distributors.  As the Commission considers changes to its data gathering tools, it should examine cable video in the larger context of the video programming market.  As video is increasingly delivered online, that task will require assessing broadband deployment as infrastructure for delivering video programming. Video competition and broadband deployment can no longer be assessed independently.


The Trump Administration & Technology Transfer

By Daniel Insulza

Technology transfer is the process by which one organization transfers scientific findings to another for the purpose of further developing and commercializing that technology. The Patent and Trademark Law Amendment Act of 1980 (also known as the Bayh-Dole Act), which has been described as “the most inspired piece of legislation to be enacted in America over the past half-century,” laid the foundation for the growth of technology transfer in the United States.

The topic of technology transfer has barely been mentioned by the Trump administration. Even during his presidential campaign, Trump did not state an official position with regard to technology transfer or even patent reform. Considering his goal of growing GDP by 3% every year, President Trump could look to technological development as one of the areas to exploit. Technology transfer is a sector that has grown consistently since its establishment, and it is also one that could use some improvements.

Before the enactment of Bayh-Dole, any new technology generated from government-funded research became government property. Unfortunately, the federal government had neither the capabilities nor the manpower to manage these new technologies, and it licensed fewer than 5% of government patents to industry. In addition, there were minimal incentives for academic institutions to carry out government-funded research.

Once Bayh-Dole was enacted, both the responsibilities and the incentives were passed on to research institutions that had the resources to take advantage of these opportunities. An increasing number of universities and research centers have technology transfer offices, which are responsible for patenting and licensing technologies developed by their researchers.

Universities have been particularly successful in commercializing their intellectual property. In 2014, universities earned $2.2 billion in patent licensing revenue alone. In addition, royalties have produced revenues in excess of a billion dollars for several universities. These numbers have steadily increased since Bayh-Dole was enacted, as universities have channeled more and more technological and financial resources through their Technology Transfer Offices (TTOs), which are also responsible for contacting partners, ranging from startups to large companies, to commercialize new technological developments.

As effective as Bayh-Dole has been, there are still improvements to be made in the technology-transfer area. Universities currently take widely different approaches to implementing a technology transfer program, approaches mainly shaped by the resources each university can spend. The process is also harder for startups, which face a much more challenging path from raw technology to marketable product.

The Department of Commerce, through its National Institute of Standards and Technology (NIST), is the agency responsible for improving technology transfer in the US. The Technology Partnerships Office (TPO), a subdivision of NIST, “enables technology transfer to promote US competitiveness, both for NIST and across the Federal government for the Department of Commerce.”

Even though the commercialization of new technologies generates significant revenue and intellectual-property value, establishing technology-transfer goals is rarely a high priority for governments (President Obama, for example, did not significantly address technology transfer until he signed a presidential memorandum toward the end of 2011). Innovation and technology-development policies tend to be unpopular as presidential-campaign proposals because they lack immediate benefits. Unlike initiatives in areas such as infrastructure, housing, and employment, technology-transfer initiatives, however valuable, will not produce tangible results until several years later.

Under the Obama administration, NIST created an initiative called Lab to Market. This program seeks to optimize the management of federally funded patents and discoveries. Its goal is to revise and update Bayh-Dole in order to increase the economic impact of federally funded research. In 2016, NIST issued a notice of proposed rulemaking (NPRM) regarding some of the provisions of the Bayh-Dole Act.

So far, President Trump has directly addressed intellectual property, broadly speaking, just once: a couple of months back, he issued an executive memorandum instructing US agencies to protect US intellectual property from theft by foreign countries. Trump did this right after he blocked the takeover of a US tech company specializing in making chips by a private-equity firm with ties to China.

With respect to technology transfer, an encouraging sign is Trump’s choice for director of NIST, Walter Copan, who has a background in this area. Copan highlighted technology transfer as one of his priorities at the time of his nomination, and the Senate confirmed him on October 5th of this year.

Even though technology transfer is not particularly attractive as an area to support from a political standpoint, its positive impact on the economy through increased productivity is unquestioned. It will be interesting to see whether this administration decides to remain focused on the same technology-transfer goals as the previous one, although, for the time being, it seems the Trump administration will remain relatively quiet on the subject.


Copyright Infringement Analysis of the Videogame Destiny 2

By Zachary Nichols

Law school has a way of making you look at the world a little differently. You examine and analyze things that you wouldn’t have before. Everyday things that you once may have noticed, and then laughed off, now become nagging questions, and you can’t help but dig a little deeper. I believe the saying goes, “if all you have is a hammer, everything looks like a nail.” Each class in law school gives you a new hammer to use. In this article, I am going to use one of my newly acquired hammers, copyright infringement, to look at something that I would normally use as an escape from law school: a videogame, Destiny 2, to be precise. This type of exercise is something any law student can do for practice. When I see something from the classroom in the real world, I use it as a practice issue-spotter, similar to those found on law school exams.

Destiny 2 came out fairly recently, and in my playthrough of the campaign, I noticed a few peculiarities, two of which I discuss in this post. The first has to do with the game’s main villain, Dominus Ghaul. He is referred to in the game simply as “Ghaul” and has a familiar likeness: that of Bane from the movie “The Dark Knight Rises.” Like Bane, Ghaul has a respirator mask that covers his nose and mouth with a strap that goes around the back of his bald head. And the mask makes him speak in a deep, distorted voice.

The second peculiarity that I would like to examine has to do with one of the game’s three main playable classes, the Titan. One of the Titan’s subclasses, the Sentinel, has a special ability that is eerily reminiscent of the Marvel character Captain America. Triggering the Sentinel special ability causes your character to summon a circular shield. Your character can throw that shield, and it will ricochet around the room and off of enemies for a while. And you can charge toward enemies and bash them with the shield, much like Captain America does.

So, the question that I couldn’t help but ask while playing was, do these similarities constitute copyright infringement? For practice, I decided to argue against copyright infringement, and this post will explore that point of view. There are other doctrines that you would want to address on an exam, but this post will only explore an infringement analysis.

If this were copyright infringement, it would be the copyright holder’s reproduction right that Destiny 2 infringes. The test for infringement of the reproduction right has two parts. The first prong is “probative similarity”: we look at the alleged infringer’s access to the copyrighted work as well as the two works’ similarity. Remember that only expressions, not ideas, can be copyrighted. The second prong is “improper appropriation”: here we apply the levels-of-abstraction test, breaking the similarities down into elements that are copyrightable and elements that are not, and then asking whether the copied elements were eligible for copyright protection in the first place.

Under Probative Similarity we ask, has the work actually been copied? To make that determination, we first look at access and similarity.

The creators of Destiny 2 likely had plenty of access to the character Bane. The Dark Knight Rises was a very popular movie, and Bane has been a villain in the Batman comics for quite some time. The next piece is assessing the similarity of Ghaul and Bane. This is first a question for an expert, before it is presented to the lay observer, but because I am not an expert, I will examine it only as a lay observer. To me, the two seem similar, given their appearance and voice.

Likewise, when applying the access and similarity analyses to the Sentinel subclass and Captain America, it is likely that the creators of Destiny 2 had plenty of access to Captain America, who has appeared in multiple movies and comics. Looking at the similarities between the two characters, the round shield, coupled with the various ways it is used to attack enemies, lends itself to the belief that a lay observer could find the two characters similar enough.

After assessing probative similarity, we turn to the improper-appropriation test. There, we ask whether too much of the original work has been copied and used. So we really have to see whether the creators of Destiny 2 copied the heart and expression of Bane and Captain America. We break the similarities down into elements that are copyrightable and elements that are not. Here, the similarities in appearance and voice are ideas behind Bane; the expression of Bane is not copied. The mask and distorted voice are not copyrightable elements: they are aspects of Bane, but not Bane himself, and Ghaul is an alien from another world with an entirely different backstory. The Sentinel, likewise, is just a hero with a shield that ricochets around when thrown. A shield that can be used as a weapon is again just an idea included in Captain America, not the expression of him. So it looks like the similar elements are not the copyrightable elements of the original works. The similarities lie in the ideas, not the expressions, of the characters, and that is why they do not infringe.


As Cell Phone Security Increases, Constitutional Protection Decreases

By Conrad Glover

As the technology used in cell phones advances, the phones become increasingly secure against those trying to access the information they hold. That is, unless the party you seek to keep your information from is the United States Government.

These days, many methods can be used to unlock a smart phone. For example, the new Samsung Galaxy 8 offers five different methods that users can choose from to access their device. Cell phones are now commonly unlocked with four- or six-digit PIN codes or alphanumeric passwords, pattern-unlock methods (where one traces a set pattern through a grid of nine dots), fingerprint scanners, iris scanning, and facial recognition.

These methods are meant to keep the common person from accessing your phone. If you were to lose your phone or if it were stolen, these security methods would make it much more difficult for the average person to get access to it. There are 10,000 possible four-digit PIN codes and 1,000,000 possible six-digit PIN codes, so it is very unlikely that someone could quickly crack such a code on a modern cell phone. The same goes for biometric identification methods like fingerprints, facial recognition, and iris scans. And while one can debate which method of biometric authentication is the most secure, the technologies are constantly improving and becoming more secure with each new iteration. For example, previous facial recognition software could be easily spoofed with a high-resolution photograph of the user. Newer software is more interactive and takes a much more detailed scan of the user’s face, making this method of security harder to fool than previous versions.
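The keyspace arithmetic behind those figures is easy to check. Here is a short sketch; the guesses-per-second rate is an assumption about a person typing codes by hand, and real phones throttle or wipe data after repeated failures, which is what makes even a small keyspace effective:

```python
# Size of the PIN keyspace and naive worst-case guessing time.
# The guess rate is an assumed figure for manual entry; real
# phones rate-limit or erase data after repeated failures.

def keyspace(digits: int) -> int:
    """Number of possible PINs of the given length (digits 0-9)."""
    return 10 ** digits

GUESSES_PER_SECOND = 0.2  # assume one manual guess every 5 seconds

for length in (4, 6):
    n = keyspace(length)
    worst_case_days = n / GUESSES_PER_SECOND / 86_400  # seconds per day
    print(f"{length}-digit PIN: {n:,} combinations, "
          f"~{worst_case_days:.1f} days to try them all by hand")
```

Under these assumptions, exhausting a four-digit keyspace by hand would take well under a day of continuous guessing, while a six-digit keyspace would take nearly two months; the device’s lockout rules, not the raw count alone, are what keep the average thief out.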

However, it is debatable whether these enhanced methods of protecting your cell phone data increase protection against the government. If the government were to produce a warrant, it is very likely that you would be obligated to unlock your cell phone.

The defining issue is whether the method you choose to protect your phone is covered by the Fifth Amendment’s protection against self-incrimination. The Fifth Amendment states that no person shall be compelled in any criminal case to be a witness against himself. The determinative factor in whether an action falls under this protection is whether the conduct in question is considered testimony. Recently, there has been quite a bit of debate as to what is actually protected. In essence, real or physical evidence is not protected by the Fifth Amendment; testimony, or some sort of communication, is required to receive constitutional protection. That is why individuals can be compelled to produce fingerprints or DNA samples, participate in a line-up or one-on-one identification, wear face paint, or put on a blouse. All of these actions involve physical evidence or physical characteristics.

Without a communicative component, without thought being produced, there is no protected testimony. Courts have already found that thumbprints are not protected by the Fifth Amendment. Similarly, since one can already be compelled to appear in a line-up and wear face paint, it is unlikely that Fifth Amendment protection will extend to facial recognition or iris scans. You are protected from having to provide your password, because doing so would require you to produce thought.

Therefore, if security from government intrusion is your concern, consider whether the newest piece of technology is really the right choice.


The Uterine Transplant: A Controversial Means Of Entering Motherhood

By Ethan Tackett

Women with uterine factor infertility (UFI) suffer infertility due either to irreversible uterine damage or to uterine complications that arose during embryonic development. Because treatment options are limited, these women are incapable of getting pregnant or, at best, have an extremely low chance of doing so.

However, last year, the Cleveland Clinic gave hope to women suffering from UFI. On February 24, 2016, Cleveland Clinic performed a historic uterine transplant on Lindsay McFarland, a then-26-year-old woman born without a uterus. After ten hours in the operating room, McFarland became the first woman in the United States to receive a uterine transplant. While this is a great medical feat, the uterine-transplant procedure brings with it many questions, including whether the Affordable Care Act (ACA) will require insurance providers to cover the procedure.

McFarland was the first in a Cleveland Clinic study of ten women with UFI selected to receive a uterine transplant. The procedure begins with stimulating the woman’s ovaries to produce multiple eggs. The eggs are removed, fertilized with sperm via in vitro fertilization, and frozen for future use. The woman then starts anti-rejection medication and undergoes the transplant. Twelve months later, after the uterus fully heals, the embryos are thawed and implanted one at a time. During pregnancy, the mother continues taking anti-rejection medication and is closely monitored through delivery. After delivering one or two babies by C-section, the woman undergoes a hysterectomy to remove the transplanted uterus and stops taking anti-rejection medication.

Though McFarland’s uterine transplant was a success, her transplanted uterus was removed approximately two weeks later, on March 8, 2016, due to a severe yeast infection. The Clinic voluntarily put a hold on the study to consult with infectious-disease specialists and to amend the procedure to prevent this problem from happening again. Dr. Andreas Tzakis, program director of the transplant center and primary investigator of the uterus transplant clinical study, says that the Clinic’s work was not a failure, as it has shown that these transplants are possible.

Although this procedure offers a ray of hope to women incapable of carrying a child, it also raises medical, social, and legal issues that need to be assessed.

First, this procedure carries medical risks both for women receiving a uterine transplant and for children born from a transplanted uterus. As with any major operation, it poses serious risks of surgical and anesthetic complications. These women also face an increased risk of infection, not only from the surgery but from the anti-rejection medication. The procedure requires the woman to take large quantities of anti-rejection medication for an extended period, which suppresses the immune system. Additionally, babies born from a transplanted uterus face risks from prolonged exposure to the anti-rejection medication taken by the mother. By undergoing the uterine-transplant procedure, these women and their children face a great level of risk.

Second, the uterine-transplant procedure reinforces traditional social stereotypes of what it means to be a woman and a mother. The procedure underscores the idea that a uterus is required to be a “real” woman. This affects women born without a uterus, including both ciswomen who suffer from syndromes like Mayer-Rokitansky-Küster-Hauser syndrome and transwomen. The procedure also emphasizes the notion that genetic relation to and gestation of a child are required to be a “real” mother. This affects mothers who adopted or enlisted the help of a surrogate to start a family. Cleveland Clinic’s uterine-transplant procedure challenges modern social interpretations of womanhood and motherhood.

Last, this procedure raises legal questions in the area of insurance law. Because it either introduces a uterus into a woman’s body or replaces a non-functioning one, the uterine transplant is neither a life-saving operation nor an urgent procedure. The potential availability of the procedure thus raises the question of whether insurance providers should be required to cover it. Currently, the Affordable Care Act (ACA) requires every health plan to cover pregnancy and childbirth. As the procedure is further developed and becomes more widely available, Congress and/or the Department of Health and Human Services, the agency responsible for implementing the ACA, will need to decide whether it qualifies under the ACA’s pregnancy-and-childbirth requirement.

Though the Cleveland Clinic’s study is a huge leap forward in reproductive and surgical medicine, the uterine-transplant procedure isn’t without its negative implications. While the procedure offers women like McFarland the otherwise impossible option of experiencing pregnancy, the successful completion of the Clinic’s study may have greater social and legal effects than previously anticipated. As with all great advances in medicine, the researchers, physicians, and bioethicists involved should develop the uterine-transplant procedure and assess its effects at a responsible pace.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.

Tribal Sovereign Immunity & the Patent System

By Trey Reed

In September, the drug company Allergan transferred its patents for Restasis to the Saint Regis Mohawk Tribe for $13.5 million, with up to $15 million per year in royalties. Allergan did this to take advantage of the tribe’s sovereign immunity, which would prevent patent trolls, and anyone else, from challenging the validity of the patents. Tribal sovereign immunity prevents tribes from being sued without their consent unless Congress abrogates that immunity. In general, Congress has plenary power over tribal sovereign immunity, meaning it may alter the immunity’s scope at will, just as it may alter or breach the terms of a treaty with a tribe at will.

Currently, Congress is considering a bill that would prevent tribes from using this immunity to circumvent the patent system. However, views conflict on whether sovereign immunity should be allowed to bypass a key feature of that system. On one side are those who want tribal sovereign immunity left whole, not chipped away into pockets of invalidity. On the other are those who fear drug companies abusing their monopolies and gouging prices unconscionably. Recent history supports both sides. The history of the United States is riddled with stories of Indian tribes being taken advantage of; from treaty abrogation to the taking of land at Standing Rock to build a pipeline, tribes have not been treated well by the federal government, and they have good reason to distrust any federal encroachment on their rights. On the other hand, the news has been full of companies hiking prices on vital, life-saving medicine to increase profits. For example, the price of an EpiPen rose from about $110 to almost $610 because of the monopoly its patent owners hold. Given the ease of abusing the monopoly that patent rights confer, this issue should be examined to ensure that the balance between tribal immunity and sound law is maintained.
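The scale of the price hike described above can be checked with simple arithmetic, using only the approximate figures cited in this post:

```python
# Illustrative arithmetic only, using the approximate EpiPen prices
# cited in the post ($110 before the hike, $610 after).
old_price = 110
new_price = 610

increase = (new_price - old_price) / old_price  # fractional increase
print(f"Price rose by {increase:.0%}")          # roughly a 450% increase
```

In other words, the cited figures amount to a more than five-fold price, which is the kind of jump that fuels the price-gouging concerns on one side of this debate.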

To fix this problem, Congress will likely carve out another exception, restricting the use of sovereign immunity to shield patents from validity challenges. All in all, this would probably not be a large blow to the tribes. The strategy is new and, if allowed, could become a significant source of income for tribes; however, it also bypasses some of the checks that the America Invents Act (AIA) introduced into the patent system to prevent bad patents from being issued and abused.

With the passage of the AIA, the patent system saw the introduction of inter partes review (IPR). IPR is an important post-grant review method that aims to fix some of the issues that plagued the old post-grant review system. In practice, IPR allows third parties, normally competitors, to challenge a patent’s validity without the egregiously expensive litigation that typically accompanies patent disputes. Traditional patent battles cost upwards of $2 million, so the barrier to entry for challengers is quite high. Instituting an IPR proceeding requires the United States Patent and Trademark Office (USPTO) to reexamine the patent in light of the materials, usually prior art, that the challenger contends invalidate it. The average cost of an IPR is estimated at around $450,000. Although hefty, that amount is relatively cheap compared to the millions of dollars required for traditional patent litigation. This relative cheapness gives less-wealthy parties access to the patent system and prevents large companies with deeper pockets from monopolizing technology. Allowing entities to legally avoid this check needs to be carefully considered and balanced.
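The cost gap that makes IPR matter can be made concrete with the two estimates cited above; both figures are rough averages, not exact filing or litigation costs:

```python
# Rough comparison of the cost figures cited in the post.
litigation_cost = 2_000_000  # typical traditional patent litigation
ipr_cost = 450_000           # average inter partes review

ratio = litigation_cost / ipr_cost
print(f"Traditional litigation costs about {ratio:.1f}x an IPR")  # about 4.4x
```

A challenge that is roughly four to five times cheaper is precisely what sovereign immunity would take off the table, which is why the Allergan maneuver drew so much attention.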

In addressing these issues, Congress must weigh keeping tribal sovereign immunity whole against the use of that immunity to avoid the checks the patent system relies on to weed out bad patents. Congress’s response must be measured because, in the end, either tribal sovereign immunity will be diminished or the patent system will be left compromised.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.

Google and Facebook Stopping “Fake News” on Las Vegas Shooting Suspect

By June Torres

In an era in which the internet is the main place people access information, Google, Facebook, and other social networks must continuously manage the fake news publicized on their heavily trafficked sites.

Fake news stories are nothing new; with so many online avenues, however, the authenticity of each story becomes harder to assess. Social networks allow people to exchange information on a much greater scale, removing the economic barriers that once limited the spread of fake news. Although no current law or precedent explicitly defines “fake news,” the term is generally understood to mean any news story that intentionally presents and spreads false information.

Last week, distressing false news emerged in the wake of the mass shooting in Las Vegas. On October 2, 2017, people worldwide rushed to find information about the deadliest mass shooting in modern United States history. The massacre, which killed at least 59 people and injured more than 500, left the world in fear and sadness, and with many questions. Seeking answers, many searched Google for information about the victims and the suspected shooter.

Many of these searches yielded inaccurate information. According to Google, its computer algorithms displayed misinformation about the shooter’s identity. Before the problem was corrected, Google’s top stories displayed a discussion thread from 4chan, an online forum that is a “notorious spawning ground for Internet hoaxes.” The thread spread false claims about the shooter’s motivation and falsely identified him as Geary Danley, “calling him a leftist and Democratic supporter.” 4chan’s fake news gained traction, and consequently appeared in Google’s top stories, because of “Internet sleuths scour[ing] social media to identify the gunman faster than police.” Police later identified the shooter responsible for the massacre, though his motives remain unclear.

Like Google, Facebook is a hub where many people today gather information on current events, and it too faced backlash over misinformation about the gunman’s identity. Facebook’s “Safety Check” page promoted “stories from right-wing news sites…which falsely identified the suspected shooter and included misleading speculation on his motivation.” Facebook soon removed the circulating fake news and told users it would work on fixing the issue. Facebook has made similar assurances in recent months over inaccurate posts, but neither it nor Google has yet adequately prevented misinformation from spreading across their systems at viral speed.

The growing volume of digital news reveals the need for more sophisticated technology that can recognize false and potentially harmful information. In the meantime, defamation law may offer legal recourse to an individual whose reputation is harmed by a defamatory statement. One complication with defamatory statements made on social media, however, is that simply “retweeting a defamatory statement is probably not going to be enough to qualify for republication.” In addition, online platforms such as Google and Facebook are shielded from suit by Section 230 of the Communications Decency Act of 1996: “This federal statute declares that providers of interactive services are not liable for content posted by their users.” Nonetheless, Facebook and Google are careful when removing information from their sites, both to avoid claims of censorship and to preserve their users’ ability to speak freely on these platforms.

Sharing our ideas, concerns, and desires on these platforms matters. But as users of Facebook, Google, and other social media, we should question the information posted on these sites and think critically about the impact of sharing posts and links that contain unsupported claims.

My thoughts and prayers are with those affected by the tragedy in Las Vegas.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.

Trump Administration Prioritizes STEM Education

By Joseph Gaffney

Last week, President Trump signed a memo directing the Secretary of Education to prioritize Science, Technology, Engineering, and Mathematics (STEM) education for K-12 students, including the allocation of $200 million a year toward STEM education. Along with allocating funds, the Secretary of Education is also directed to produce guidance documents and technical assistance supporting the initiative’s goals. A focus on STEM subjects is not new: former President Obama made several pledges to encourage STEM education through grant funds and by securing private investments.

The President’s memo argues that the skills acquired through STEM education are becoming increasingly necessary for individuals to qualify for high-paying jobs in the US and that while the system as a whole has room for growth, certain groups of children in particular are not being adequately served. The memo cites statistics showing that minorities, students in rural areas, and girls are particularly underserved.

The memo does not specify how funding decisions will be made. Instead, it gives the Secretary of Education discretion to allocate grant funds with the goal of promoting STEM subjects, especially computer science. There are indications, however, that underserved populations will be favored in the decision-making process. The Secretary’s annual report to the Office of Management and Budget must include the previous year’s results, including data specific to underserved populations. Additionally, Ivanka Trump, whose meetings with Silicon Valley executives over the past few months helped precipitate this initiative, has stated that the White House will advise the agency to make decisions with gender and racial diversity in mind. The memo identifies the scarcity of STEM teachers as a barrier to success, so it can fairly be assumed that steps toward alleviating this problem will be part of the process.

But one question circulating in the media is where these funds will come from and which other programs will be affected. White House officials have stated that the funds will be taken from the Department of Education’s existing budget, which was $209.1 billion in 2017. The President insists that $200 million is “peanuts,” and next to $209.1 billion it may seem that way. However, if the amount is enough money to make a tangible difference in STEM education, it is likely enough money to diminish other programs.
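Whether $200 million really is “peanuts” next to the full budget is a matter of simple arithmetic, using only the two figures cited above:

```python
# Back-of-the-envelope check of the "peanuts" claim,
# using the figures cited in the post.
stem_allocation = 200_000_000       # annual STEM funding directive
education_budget = 209_100_000_000  # Dept. of Education budget, 2017

share = stem_allocation / education_budget
print(f"STEM allocation is {share:.3%} of the budget")  # under 0.1%
```

Less than a tenth of one percent of the budget supports the President’s framing, but as the paragraph above notes, the real question is whether even that sliver is large enough to crowd out other programs.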

Critics have suggested that shifting more resources toward STEM education and away from humanities, arts, and sports may help prepare students to work as inventors, but leave them unable to be innovators. For example, some have argued that the people skills learned from non-STEM subjects are needed to implement the skills acquired through a STEM education in any meaningful way.

However, psychology research may show that this view of people-skills acquisition is too narrow. People skills are the abilities necessary to maintain positive relationships and generally get along with others. They are acquired in many different social situations, such as arguing with a friend, reciprocating social cues, or handling a bully. Most would agree that social skills are vital to many aspects of life, including employment in a STEM field, and non-STEM classrooms may well be good places to learn them. But because people skills are learned through a broad range of social interactions, children may also be acquiring them in STEM classes, or outside of school altogether.

Moreover, even if non-STEM classrooms were the exclusive domain of social-skills learning, the President’s memo does not advocate for less time spent teaching non-STEM subjects in public schools. Rather, the memo asserts that more and better course offerings in STEM subjects should be encouraged in order to keep the US economically competitive. The effect that additional STEM course offerings will have on non-STEM courses is at this point speculative.

*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.