Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination

by Casey Fiesler[1]*

“Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” – Isaac Asimov[2]

Introduction

We all tell stories—to ourselves, and to others—about the future. These stories typically draw us in two opposite directions: to an optimist utopia, where we imagine how things might be better than they are today, or to a pessimist dystopia where aggressive innovation leads to our destruction.[3] In our current landscape of rapidly emerging technology, it is easy to jump to dystopic scenarios when we imagine the future—and as individuals we often do.[4] Meanwhile, science fiction media like Black Mirror also tell these stories for us, fast-forwarding technologies being developed to the point where the dream of that technology becomes a nightmare instead.[5]

Dystopias as a genre serve as cautionary tales that can warn us of what might lie ahead if we are not careful now.[6] These stories are particularly powerful in the context of unanticipated consequences, where deliberate acts have effects that are unintended or unforeseen.[7] By definition, negative consequences of this type are unforeseeable at the time a technology is designed… but what if they were not? Speculation is the ability to imagine potential futures and alternatives.[8] And science fiction as a narrative genre of speculation can illuminate the likely social impact of change—not just by criticizing naïve optimism about the future, but also by providing a blueprint for a better one.[9]

For both pessimists and optimists, critique is not necessarily negative, but can be a testament to how the world might be instead.[10] Creative speculation, as a method of ethical and legal foresight,[11] can help us foresee potential consequences of emerging technologies. Subsequently, we may be able to use design, implementation, or regulation to mitigate negative outcomes. In fact, a number of scholars have called for multi-stakeholder and interdisciplinary approaches to regulation,[12] and even pointed to the usefulness of science fiction and speculation.[13] Moreover, much like issue-spotting and other traditional ways of “thinking like a lawyer,”[14] creative speculation is a skill that can be practiced and taught.

In this essay, I begin by discussing the problem of unanticipated consequences in the design and regulation of emerging technology, pointing to the difficulty of foresight as an underlying cause. Next, I draw a line between the issue-spotting capabilities of “thinking like a lawyer” and cultivating foresight as a skill. Finally, I describe my own experiences with using science fiction and creative speculation in teaching ethics and policy, and argue for the usefulness of creative speculation as a tool for those who are designing, deploying, and regulating technology. I sometimes describe myself, in the context of technology, as an optimist who believes it is important to think like a pessimist. I believe that such tools can help us create the future that we want rather than the one that we fear.

Unanticipated Consequences and the Challenge of Emerging Technology

Emerging technologies are often both high-risk and high-potential. They offer benefits to society, but with those benefits come ethical and regulatory quandaries. With this in mind, how do we simultaneously leverage an innovation’s anticipated benefits while guarding against its potential harms? This question is particularly difficult to answer when we might not be able to understand the risk associated with a technology until it is sufficiently developed.[15] For example, rapid advancements in artificial intelligence have prompted alarm not just from the general public and regulators, but from the very leaders of the tech companies engaged in its development.[16] Elon Musk called AI “our biggest existential threat” as he asked for regulatory oversight to make sure that “we don’t do something very foolish.”[17]

While AI is not designed to produce negative consequences, it is designed to produce the unforeseen. Artificial intelligence simulates human intelligence—which means that by definition the actions it takes are not all hard-coded and known in advance.[18] Even narrow AI (contrasted with general AI, which remains in the realm of science fiction), which is programmed to perform a specific task, can have significant impacts on society, even when applied carefully.[19] The capability for AI to produce actions for which it is not directly programmed (and therefore, potentially unforeseen) is entirely intentional—but the direct consequences, including the possibility of a loss of control over that AI’s actions, might not be.[20] Even an AI agent with the seemingly harmless goal of making paperclips might have an unmitigated opportunity to effect change on its environment and, in doing so, directly and negatively impact humans.[21] In other words, AI will inherently have unanticipated, if not unintended, consequences.

It is unsurprising, therefore, that the unforeseen aspects of AI have created ethical challenges. To address these challenges, we have seen a scrambling for AI ethics principles and guidelines from a huge variety of relevant actors—from the government of Australia,[22] to the U.S. Department of Defense,[23] to Microsoft[24] and Google,[25] and even religious institutions.[26] Though these principles share some common features, they are still highly divergent on important matters of interpretation and application.[27]

Additionally, AI raises unique legal challenges, much as the internet once did—challenges that resulted in an entirely new subfield of law.[28] In fact, the unforeseeable poses a particularly vexing legal challenge: will legal systems choose to view the actions of some AI systems as unintended/unanticipated, and if so, will system designers escape liability?[29] This type of quandary poses entirely new kinds of public risks.[30] Meanwhile, regulators are tasked not only with thinking about the potential consequences of the technology itself, but also about the possible consequences of regulation.[31]

These ethical and legal challenges are largely created by uncertainty, a common side effect of technological revolutions.[32] However, a question that often arises is whether there really was so much uncertainty, or whether certain problems were in fact foreseeable. For example, consider the case of the Cambridge Analytica scandal, encompassing a number of large ethical, legal, and social issues including privacy violations and manipulation.[33] Arguably, the use of personality traits by political campaigns to attempt to manipulate voters on Facebook could have been foreseeable. In fact, a 2013 paper revealed that undisclosed personality traits (e.g., introversion versus extroversion) could be accurately predicted by Facebook “likes”; the paper concluded by noting the “considerable negative implications” of the research.[34] In the wake of the scandal, much public discourse shifted to the ethical responsibility of both technologists and platforms to anticipate potential problems associated with their technology.[35] Everyone involved in the design of technology should be looking for ethical warning signs, whether they can be inferred from existing data or are more speculative. “I’m just an engineer” is no longer a valid excuse, and those engineers are expected to have considered the social implications of the technologies they create.[36] Even academic researchers have called for their community to “work much harder to address the downsides of our innovations” without simply assuming that computing research will result in a net positive impact on the world.[37]

Of course, this goal may fall into the “easier said than done” category. Neither the designers of technology nor those tasked with regulating it can actually see the future. Foresight is difficult and fraught with pitfalls, including misunderstanding the potential of an emerging technology, misconceiving a scientific trajectory, or failing to predict pivotal events or innovations.[38] For example, few recognized how socially and commercially transformative the internet, or even fax technology, would be until it was widely adopted, nor were they prepared for the legal mischief that would follow.[39]

According to Amara’s Law, people tend to overestimate the impact of technology in the short term, but underestimate its impact in the long term.[40] In 2004, Facebook launched to students at Harvard,[41] and as its user base grew, Facebook showed enough potential to attract investors—but it is reasonable to assume that at the time no one would have predicted that it might someday be so embedded in the social fabric of society that it could influence the course of elections.[42]

According to sociologist Robert Merton’s theory of unanticipated consequences, two of the major causes of negative outcomes when the relevant parties are well-intentioned are: (1) the inability to anticipate every eventuality, making incomplete analysis inevitable; and (2) errors in analysis that arise from methods or habits that may have worked in the past but do not apply to the current problem.[43] Both of these problems critically intersect with law—not only with respect to a potential lack of foresight, but also because the law develops at a snail’s pace compared to technology, and application of the law to new technologies often involves analogy and functional equivalence.[44]

Functional equivalence and the large role that analogy plays in case law make the perfect recipe for Merton’s second challenge for unanticipated consequences, the application of habits that have worked in the past. In his discussion of the parallels between the regulatory challenges for robotics and the internet, one of the lessons that Ryan Calo draws from cyberlaw is that courts will look to how a new digital activity is “like” one for which there are already rules. For example, if a court is determining the appropriate Fourth Amendment protections for an email, it might ask whether an email is more like a postcard or a sealed letter.[45] Similarly, in a 2005 Supreme Court case, the court wrestled with whether a cable internet provider is more an “information service” than a “telecommunications service,” with Justice Scalia’s dissent arguing that it is analogous to a pizza delivery service.[46] The challenge, then, is that when the legal profession fails to keep pace with advancements in technology (due in part to a lack of technical knowledge), and therefore relies on less advanced technology for its analogies, the application of the law may suffer in quality and subsequently produce undesirable consequences.[47]

Though of course we will never be able to solve Merton’s first challenge for unanticipated consequences by gaining the ability to anticipate every eventuality, ethical speculation and legal foresight can help create “pathways into the unknown.”[48] Asimov defined science fiction as the branch of literature that deals with “the reaction of human beings to changes in science and technology.”[49] The introduction to Future Tense: Stories of Tomorrow, a science fiction short story anthology, describes the power of science fiction to shape our reactions productively:

[T]he history of actual technological change … is always heterogeneous, ambivalent, growing out of and elaborating on our existing social structures and norms, cultures and values, and physical environments.… We get used to these changes quite quickly, and once we do, they become unremarkable, even invisible. A good science fiction story can help re-sensitize us by showing us people dangling over different technological precipices, or realizing their potential in once-unimaginable ways.[50]

Perhaps optimists are more inclined to reimagine potential, and pessimists to dangle our possible futures over those precipices. Both are important. The idea is not to regulate now for the HAL 9000s, WALL-Es, or R2-D2s that may or may not exist in any form in the future.[51] However, we can exercise the muscles of our imagination and avoid complacency about the changes around us.

I argue that the most important context for ethical speculation is as part of the design and implementation of new technology, as some small weapon against uncertainty. By the time we get to lawyers and lawmakers, it is often too late, since the regulation of disruptive technology tends to be reactive to problems and challenges that arise out of uncertainty.[52] As we consider speculation as part of education and design, however, there are lessons we can take not only from science fiction, but also from the legal imagination.[53] Next, I consider how the characteristics of legal reasoning are useful for ethical speculation.

Thinking Like a Lawyer… or a Science Fiction Writer

“Thinking like a lawyer” is a skill one is supposed to learn in law school.[54] Traditionally this new way of thinking involves analytical skills, with a focus on thinking rhetorically in a problem-solving context, and in particular on the ability to inductively synthesize a legal principle from a series of cases and to analogize them to others.[55] One way that this skill finds its way into legal pedagogy is via “issue-spotting” exams that require perceiving the analogies between a fact pattern and a set of legal issues, standards, and precedents.[56]

I still remember the exam from my Torts class in the first year of law school. It began with a story (a “fact pattern”) that was about a page and a half long. The story ended with a plane crash, but prior to that there was a cast of potentially liable actors: a co-pilot who had had a drink before the flight, a pilot who was distracted by his affair with a flight attendant, an air traffic controller being trained on the job, a couple of rowdy passengers, the architects of a poorly lit runway, and a number of others I cannot remember. At the end of this story, there was a single prompt: “Discuss all possible torts claims.”

Today, I teach a course on information ethics and policy, and the majority of my students are computer science and information science majors—potential designers of the “emerging technology” of the future that one day we will find challenging to regulate. When it comes to teaching ethics—a topic that very often does not have “right” answers—issue-spotting is one of the most useful skills I can cultivate in my students. In fact, a recent analysis of syllabi from university tech ethics classes showed that variations on being able “to recognize ethical issues in the world” is one of the most common types of desirable learning outcomes.[57]

The fact patterns I present to my students feature some of the same ethical controversies that we see in the news every day—for example, the behavioral microtargeting behind Cambridge Analytica.[58] Who were the bad actors in this scenario? What were the harms, and were they foreseeable? How much did the design or business model of the platform contribute to those harms, and what responsibility might Facebook bear? What about the researchers who first determined that personal attributes are predictable from Facebook’s collected data, and published a paper that noted the “considerable negative implications” of this finding?[59] This type of real-world fact pattern still boils down to a familiar question: “Discuss all possible ethical issues.”

In addition to observational skills like issue-spotting, imagination also plays a critical role in legal reasoning because it fosters development of conceptual metaphors, which are more than just means of expression; they are also the “imaginative means by which we receive the multiple relations of a complex world.”[60] Like the philosophical concept of imagination, the legal imagination requires perceiving connections between the general and the specific[61]—or even the general and the speculative. When asking my students to imagine both the promise and the potential harms of the technology they might create, I am asking them to both extrapolate from the pitfalls of the past and to imagine uses and circumstances beyond their control. They must think now about the consequences that they may not intend but that might, with a little imagination, be foreseeable. In recursively traveling between the general and the specific, we can choose among the possibilities and consider their moral consequences.[62]

Arguments for interdisciplinarity around the regulation of technology often involve the ability to bring in greater technical expertise and to help alleviate multi-stakeholder tensions by having more people in the room from the start.[63] However, engaging multiple perspectives also presents an opportunity to ramp up creative speculation. There have been arguments for engaging the public more with science fiction in order to increase capacity to think critically about our technological futures, as well as to promote science fiction writing as a socially valuable profession with more direct interaction with scientists and technologists.[64] Legal reasoning—including issue-spotting, perceiving analogies, and extrapolation—also provides a skillset that could be useful for technologists.

Perhaps we could create dream teams of technologists, lawyers, and science fiction writers to design and simultaneously consider the regulatory implications for the technologies of the future. However, in the interim, we can consider how creative speculation, like legal reasoning, can be cultivated as a skill.

Teaching Creative Speculation

How best to teach ethics to computer science students or other technologists of tomorrow is an unsettled question, with a variety of pedagogical approaches represented even as the demand for such instruction continues to grow.[65] One approach, as exemplified in the course “Science Fiction and Computer Ethics” taught at the University of Kentucky and the University of Illinois, emphasizes “offer[ing] students a way to cultivate their capacity for moral imagination” through analyzing science fiction stories.[66] The instructors note that a key insight of this course was that “a good technology ethics course teaches students how to think, not what to think, about their role in the development and deployment of technology, as no one can foresee the problems that will be faced in a future career.”[67]

I include analysis of science fiction texts and media in my own teaching, including stories like Cory Doctorow’s “Scroogled”[68] and Naomi Kritzer’s “Cat Pictures Please”[69] in conjunction with scholarly and news articles when covering surveillance and AI, respectively. I also have students write essays about an AI science fiction film of their choice; Ex Machina, Her, and Avengers: Age of Ultron are particularly popular. However, some of my most successful teaching exercises have students not analyzing science fiction but creating it, or engaging in further creative speculation around it. Next, I will discuss two such exercises that I first described in the online article “Black Mirror, Light Mirror”[70] and have also taken on the road to try out in other classes and even beyond the classroom: the first an activity on speculative regulation, and the second an activity on imagining possible harms of future technologies.

Speculative Regulation

The course I teach covers information/technology policy in addition to ethics. I encourage students to use their legal imaginations, considering the intersection of metaphor and speculation. After we watched the Black Mirror episode “The Entire History of You,”[71] which takes place in a future in which every action we take is recorded (i.e., always-on lifelogging) and every memory is accessible (even by others),[72] a student asked whether this technology would put an end to crime. She followed up by asking whether the police would have access to memories at all. Would it be an invasion of privacy? How might the Fourth Amendment apply? Would such a thing constitute an unreasonable search? Someone else asked whether your own memories could be used against you without your consent, or whether that would be self-incrimination. The conversation then led us to a discussion about the FBI-Apple encryption dispute that concerned whether Apple could be compelled to unlock an encrypted iPhone,[73] and then I told them about the Supreme Court ruling in Katz v. United States.[74]

None of these regulatory or ethical issues came up in “The Entire History of You,” which was much more concerned with the human and social consequences of the technology. However, this example highlights a feature we have established about science fiction: it can help us explore our present just as much as our future. The premise of this future technology served as a catalyst for discussing similar complexities we are grappling with today. Just as the creators of the iPhone were likely not thinking about how biometric keys might be used by law enforcement,[75] Alexander Graham Bell likely did not consider the legal privacy implications of the telephone.[76] Today’s technology is yesterday’s science fiction.

I use another Black Mirror episode for a teaching exercise I call “speculative regulation.” In “Be Right Back,” a young widow brings back her deceased husband first via a chatbot-like service, and eventually via an eerily lifelike robot recreation.[77] After watching the episode, class begins with the question: what regulations would exist in a world with this technology? If we could create robot versions of our deceased loved ones, what current laws might regulate this practice, or what new ones would be created?

Again, law can often be reactive in the face of new technology.[78] When Facebook was first launched, no one would have thought to create laws that would regulate the use of such platforms for disinformation campaigns, but after the Cambridge Analytica scandal, this seemed to be a given.[79] Because edge cases and counterfactuals are a critical part of legal analysis, the exercise continues with a series of hypotheticals[80] to shift the conversation and force students to find inconsistencies in their decisions and to follow the downstream effects of regulation. These hypotheticals raise a series of questions for the students to answer and, ultimately, decisions for them to make. Is a robot inheritable property? Are there consequences for mistreatment of a robot? Who is liable for a robot’s behavior? Who is responsible for its care? Can a robot hold a copyright (which nearly always leads to discussion of monkeys[81])? Each decision shapes a set of laws (as well as, e.g., a Terms of Service for the robotics company) that in turn shape the social structure of the world that this fictional technology embodies.

The purpose of this exercise is not to think seriously about how we might regulate this technology; even if we can see the inspiration in current technologies designed around a digital afterlife,[82] this is far-future tech that might not ever come to pass. There are much more pressing matters for our regulatory structures to deal with right now than the potential rights or liabilities for eerily lifelike robots. However, the intended outcome of this activity is to exercise the legal imagination, to learn to think through problems with creative speculation. Also—it’s fun. If students can get excited about thinking through the ethical and legal implications of some technology that someone else might create a hundred years from now, they should be able to do the same with the technology that they are creating right now. The next exercise takes students through an example of that process by giving them the opportunity to be science fiction writers.

The Black Mirror Writers’ Room

I think that one of the reasons Black Mirror has been so successful is that it takes current technologies and pushes them just a step further—most often a foreseeable step, a plausible step. For example, the episode “Nosedive”[83] features widespread adoption of a ratings-based social measurement tool with severe ramifications; a question like “why would we agree with this?” forces reflection about the role of social media and related technologies in our own lives.[84]

“Nosedive” also easily tees up conversations about surveillance (particularly as represented by the social credit system in China) and social media addiction and well-being. Similarly, “The Entire History of You” takes on the ethical and normative implications of lifelogging and provokes memories of the failure of Google Glass. And even “Be Right Back”—as far-fetched as it might seem—begins (before the robot shows up) with a premise that is hardly science fiction at all; there has already been a tech company with the tagline “when your heart stops beating you’ll keep tweeting.”[85]

The common thread between these stories—which anecdotally, my students count among their favorite episodes—is that they take our current anxieties about technology and nudge them forward far enough to make a point, but close enough that you can still easily see the thread from here to there. They are cautionary tales not based on some distant future but based on where we might plausibly go based on the developments (and anxieties) of today.

Science fiction often starts with these same kinds of questions. Author Louisa Hall says that her novel Speak began with imagining what legal, social, and corporate issues artificial intelligence might raise in the future.[86] Similarly, Annalee Newitz considered in her novel Autonomous what the ACLU might think about robot rights.[87] And of course, Black Mirror jumps straight to what might go most wrong with what the tech companies of today might have in development for tomorrow.[88]

Ethics, particularly with respect to emerging technology, is at its core about speculation—because so many potential harms are difficult to anticipate. The writers’ room for Black Mirror certainly manages that kind of anticipation, though. What if you were not only recording your memories, but others could see them? What if you could bring a loved one back as more than a chatbot? What if the social credit system in China were powered by Instagram? What would the cautionary tale be, and what narrative would best tell that story?

As an exercise towards this kind of ethical speculation, I turn my class into this writers’ room, having small groups choose an issue or technology—social media privacy, algorithmic bias, online harassment, misinformation—and then consider where it will be in five or ten years. What could be worthy of a Black Mirror episode? They consider possible harms, and then pitch an episode arc.[89]

I have run this exercise not just in an ethics classroom but in technical computer science classes, with high school students, and even with groups of technology professionals at conferences. The ideas that have come out of it are definitely worthy of television. Sometimes ideas from students are barely science fiction at all. For example, they asked: what if an algorithm can tell from your social media traces that you are sick and sends you medication? But wait, that’s not quite creepy enough; what if a profit-motivated algorithm calculates, based on how depressed you are, whether it is more likely to make a sale by advertising antidepressants or heroin?

Another idea from students was that perhaps in the future advertising will not exist at all; Amazon’s algorithms will know so much about us that we do not have to shop at all anymore. Everything we need will just show up at our door—including, in a Twilight Zone type twist, a book about privacy protection. In one class, having recently discussed the Cambridge Analytica scandal in which political campaigns relied on highly personalized Facebook content to influence voters, we imagined a benevolent AI that uses an even more robust form of personalization to manipulate everyone on Earth into complacency (spoiler: it does not end well for them).

This exercise could easily turn into a pessimist’s dream. Black Mirror, after all, mostly helps convince you that technology is going to destroy us all. However, the imagining of all these possible harms is not the right place to end. The next step—arguably, the more important, if less fun step—is to consider how we do not get to these harms. We talk about stakeholders, responsibilities, and potential regulatory regimes. We also do not stop with “there should be laws for that.” What about design? What might the people involved in creating that technology do to help prevent potential negative consequences? Better yet, where could today’s technology go instead that could benefit society and make things better than they are now? My hope is that helping more people think critically about responsibility and ethics in the context of technology is one way to keep our lives from turning into a Black Mirror episode.

Conclusion: Making Ripples

In the introduction to Daxton Stewart’s book Media Law Through Science Fiction, author Malka Older describes what it means to be a science fiction writer:

“My job, then, is essentially to think up some difference in the world … and make sure that the human reactions to it, the changes society has built around it, feel right. … To do my job well, I need to think through the unintended and unexpected consequences, the second- and third- and fourth-order ripples. … I need to imagine what other applications have come up, both formal and unauthorized. … [C]hange typically doesn’t happen as a single variable while everything else stays constant.”[90]

What Older describes, the ability to think through the unintended/unexpected consequences and “the second- and third- and fourth-order ripples,” is precisely what both technologists and regulators should be doing in the context of emerging technology. As Stewart writes, in addition to encouraging foresight, science fiction “enables us to have good discussions in the present about the world we live in … potentially in anticipation of legal issues before they arrive.”[91]

Science fiction writer and activist Cory Doctorow has taken this idea farther than most, for example by writing short stories intended to illustrate “nightmare scenarios” that could become reality based on the regulatory trajectory of the U.S. Copyright Office.[92] He described his goal as being about taking dry and complicated policy and making it vivid and real, hoping that people “will recognize through fiction what the present-day annoyances will turn into in the future.”[93]

However, just as we might take annoyances and extrapolate to future harms, we can also imagine better futures. One inspiration behind Daniel Wilson’s book Robopocalypse was a real-life plane crash caused by tension between human pilots and an automated system; in his imagined future, simple laws for AI and robotics promote public safety and would prevent this kind of tragedy.[94]

I am an optimist who uses pessimism to prepare. And my preparation is speculation for what the world could be, and how it could be better. In addition to noting that catastrophe is inevitable but solutions are not, Asimov also said that the best way to prevent catastrophe is to take action to prevent it before it happens, and thus to foresee it in time, “but who listens to those who do the foreseeing?”[95]

The answer, I think, is for everyone to do the foreseeing, and to listen to each other, in order to create collective visions of the future. Creative speculation as a design tool can begin in the classroom, but should find its way into practice as well, and it should involve multiple stakeholders whenever possible—as should any consideration of the ethical implications of technology. One of my favorite examples is the Interdisciplinary Ethics Tech Competition, organized by Silicon Flatirons here at the University of Colorado Boulder; the competition “gives students a chance to wrestle with a real-world ethics problem in collaboration with a diverse team of students studying law, business, communication, journalism, engineering, [cybersecurity], information science, or computer science.”[96] Having been a judge for this competition for several years, I have observed that students seem to be more creative and forward-thinking in these interdisciplinary teams than when siloed within their own disciplines.

Imagine what the world might look like if everyone who touched technology examined it critically, in a creative, forward-thinking way. Perhaps the popularity of Black Mirror is a start. The show’s creator Charlie Brooker described it as “about the way we live now—and the way we might be living in 10 minutes’ time if we’re clumsy.”[97] The show is intended as a way to force people to think about possible futures and potential harms of the technology we build and use—not just technologists and lawyers, but the general public, too. The more of us who think ahead, the more pitfalls we might anticipate and avoid. Brooker posits that mankind is “usually clumsy,”[98] but that just means we need to look where we are going.

  1. * Assistant Professor of Information Science, University of Colorado Boulder; Fellow, Silicon Flatirons Center for Law, Technology, and Entrepreneurship; Fellow, Center for Democracy and Technology; PhD in Human-Centered Computing, Georgia Institute of Technology, 2015; JD, Vanderbilt University Law School, 2009. My thanks to the organizers and participants of the 2020 Silicon Flatirons “Technology Optimism and Pessimism” conference for engaging conversations that helped shape this piece—especially the “Conversation about the Future” panel, including Phil Weiser, Patty Limerick, and Karl Schroeder.
  2. . Daxton R. Stewart, Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech? 31 (2020) (quoting Isaac Asimov, How Easy to See the Future, Natural History, 1975).
  3. . Charlie Jane Anders et al., Future Tense Fiction: Stories of Tomorrow 11–13 (Kristen Berg et al. eds., 2019).
  4. . Nazanin Andalibi & Justin Buss, The Human in Emotion Recognition on Social Media: Attitudes, Outcomes, Risks, Proc. ACM CHI Conf. Hum. Factors Computing Sys. at 1, 6 (2020) (describing an interview participant speculating about emotion detection creating a “1984 society”); Blake Hallinan, Jed R. Brubaker & Casey Fiesler, Unexpected Expectations: Public Reaction to the Facebook Emotional Contagion Study, 22(6) New Media & Soc’y 1076, 1081–83 (2020) (describing online reactions to the Facebook emotional contagion experiment that referenced the dystopian novels 1984 and Brave New World).
  5. . Anthony Dunne & Fiona Raby, Speculative Everything: Design, Fiction, and Social Dreaming 74–75 (2013).
  6. . Id. at 73.
  7. . See Robert K. Merton, The Unanticipated Consequences of Purposive Social Action, 1 Am. Soc. Rev. 894, 895 (1936).
  8. . Dunne & Raby, supra note 5, at 3–6, 14.
  9. . Russell Blackford, Science Fiction and the Moral Imagination: Visions, Minds, Ethics 14 (Mark Alpert et al. eds., 2017).
  10. . Dunne & Raby, supra note 5, at 34–35.
  11. . Graeme Laurie, Shawn H.E. Harmon & Fabiana Arzuaga, Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty, 4 L., Innovation & Tech. 1, 3 (2012) (defining “legal foresighting” as “the identification and exploration of possible and desirable future legal or quasi-legal developments aimed at achieving valued social and technological ends”).
  12. . Id. at 10 (“[A] wide range of actors is implicated in the technologies fields, and so a wide range of stakeholders appropriate to the legal foresighting exercise also emerges.”); Gregory N. Mandel, Regulating Emerging Technologies, 1 L., Innovation & Tech. 1, 9 (2009) (“Critical to this proposal for emerging technology governance is wide and diverse stakeholder involvement.”); Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 560 (2015) (“Cyberlaw today is a deeply interdisciplinary enterprise, full of meaningful collaboration across a wide variety of training.”).
  13. . Laurie et al., supra note 11, at 3 (“Legal foresighting should help us create pathways into the unknown, and part of that creation may mean (or demand) a fundamental re-visioning of the legal setting itself, its instruments, institutions, and regulatory or governance mechanisms.”); Clark A. Miller & Ira Bennett, Thinking Longer Term About Technology: Is There Value in Science Fiction-inspired Approaches to Constructing Futures?, 35 Sci. & Pub. Pol’y 597, 604 (2008) (suggesting the value of “[p]romoting critical science fiction writing as a socially valuable profession, and one that interacts with both science and engineering and social and humanistic studies of science and technology”); Kieran Tranter, The Speculative Jurisdiction: The Science Fictionality of Law and Technology, 20 Griffith L. Rev. 815, 820 (2011) (“[L]egal scholarship on technology is kind of an applied futurology – its starting point is images of technological futures that call for law. This is a speculative activity, a creative process of looking at what is and projecting, imaging and dreaming what could be.”); Mitchell Travis, Making Space: Law and Science Fiction, 23 L. & Lit. 241, 242 (2011) (“Science fiction allows for a space in which alternate social and legal systems, conditions, and variables can be considered, and it is beneficial for law to consider these alternate situations, given that they are often inspired by popular attitudes.”).
  14. . See Kurt M. Saunders & Linda Levine, Learning to Think Like a Lawyer, 29 U.S.F. L. Rev. 121, 126 (1994).
  15. . Mandel, supra note 12, at 1.
  16. . Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J. L. & Tech. 353, 355 (2016).
  17. . Id.
  18. . Mark O. Riedl & Brent Harrison, Using Stories to Teach Human Values to Artificial Agents, Ass’n for the Advancement of Artificial Intelligence 1 (2015) (“Recent advances in artificial intelligence and machine learning have led many to speculate that artificial general intelligence is increasingly likely.”).
  19. . Enrico Coiera, The Price of Artificial Intelligence, 28 Y.B. Med. Informatics 14 (2019).
  20. . Scherer, supra note 16, at 365.
  21. . Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence, in Science Fiction & Philosophy: From Time Travel to Superintelligence 277, 280–84 (2003) (describing the paperclip maximizer thought experiment, in which a superintelligence whose goal is the manufacturing of paperclips starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities); Riedl & Harrison, supra note 18, at 105 (“An artificial general intelligence, especially one that is embodied, will have much greater opportunity to affect change to the environment and find unanticipated courses of action with undesirable side effects. This leads to the possibility of artificial general intelligences causing harm to humans; just as when humans act with disregard for the wellbeing of others.”).
  22. . AI Ethics Principles, Austl. Dep’t Indus., Sci., Energy, & Res., https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles [https://perma.cc/53C2-67R7] (last visited Oct. 18, 2020).
  23. . DOD Adopts Ethical Principles for Artificial Intelligence, U.S. Dep’t Def. (2020), https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/ [https://perma.cc/4SHL-DZ9J] (last visited Oct. 18, 2020).
  24. . Microsoft AI Principles, Microsoft, https://www.microsoft.com/en-us/ai/responsible-ai [https://perma.cc/E2KB-LR9X] (last visited Oct. 18, 2020).
  25. . Artificial Intelligence at Google: Our Principles, Google AI, https://ai.google/principles/ [https://perma.cc/P4DV-RGZN] (last visited Oct. 18, 2020).
  26. . Artificial Intelligence: An Evangelical Statement of Principles, Ethics & Religious Liberty Commission of the S. Baptist Convention (2019), https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles [https://perma.cc/58PE-AWZW] (last visited Oct. 18, 2020).
  27. . Anna Jobin, Marcello Ienca & Effy Vayena, The Global Landscape of AI Ethics Guidelines, 1 Nature Machine Intelligence 389 (2019) (“Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.”).
  28. . Calo, supra note 12, at 560.
  29. . Scherer, supra note 16, at 357.
  30. . Id. at 366–67 (describing AI as “a potential source of public risk on a scale that far exceeds the more familiar forms of public risk that are solely the result of human behavior”).
  31. . See, e.g., Andrew W. Brown & David B. Allison, Unintended Consequences of Obesity-Targeted Health Policy, 15 Am. Med. Ass’n J. Ethics 339 (2013); Mark Wolfson & Mary Hourigan, Unintended Consequences and Professional Ethics: Criminalization of Alcohol and Tobacco Use by Youth and Young Adults, 92 Addiction 1159 (1997).
  32. . Laurie et al., supra note 11.
  33. . Ken Ward, Social Networks, the 2016 US Presidential Election, and Kantian Ethics: Applying the Categorical Imperative to Cambridge Analytica’s Behavioral Microtargeting, 33 J. Media Ethics 133 (2018).
  34. . Michal Kosinski, David Stillwell & Thore Graepel, Private Traits and Attributes Are Predictable from Digital Records of Human Behavior, 110 Proc. Nat’l Acad. Sci. U.S. 5802, 5805 (2013) (“[T]he predictability of individual attributes from digital records of behavior may have considerable negative implications, because it can easily be applied to large numbers of people without obtaining their individual consent and without them noticing. Commercial companies, governmental institutions, or even one’s Facebook friends could use software to infer attributes such as intelligence, sexual orientation, or political views that an individual may not have intended to share. One can imagine situations in which such predictions, even if incorrect, could pose a threat to an individual’s well-being, freedom, or even life.”).
  35. . Casey Fiesler, What Our Tech Ethics Crisis Says About the State of Computer Science Education, How We Get to Next (Dec. 5, 2018), https://howwegettonext.com/what-our-tech-ethics-crisis-says-about-the-state-of-computer-science-education-a6a5544e1da6 [https://perma.cc/V7AT-VVN3].
  36. . Id.
  37. . Brent Hecht et al., It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process, ACM Future of Computing Acad. Blog (Mar. 29, 2018), https://acm-fca.org/2018/03/29/negativeimpacts/ [https://perma.cc/8J9R-CFZT].
  38. . Laurie et al., supra note 11, at 7.
  39. . Id. at 2–3.
  40. . Coiera, supra note 19 (citing Roy Amara 1925–2007, American futurologist, in Oxford Essential Quotations (Ratcliffe S. ed., 4th ed. 2016)).
  41. . Sarah Phillips, A Brief History of Facebook, The Guardian (July 25, 2007), https://www.theguardian.com/technology/2007/jul/25/media.newmedia [https://perma.cc/4UEH-Y3W5].
  42. . Multiple research studies have shown that Facebook has a direct impact on voter turnout. Katherine Haenschen, Social Pressure on Social Media: Using Facebook Status Updates to Increase Voter Turnout, 66 J. Comm. 542 (2016). Researchers have also shown the impact that Facebook and similar companies play in shaping elections in the context of political communication and advertising. Daniel Kreiss & Shannon C. McGregor, Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle, 35 Pol. Comm. 155 (2017). Finally, there has been a great deal of speculation about the specific impact of Facebook on the 2016 presidential election, pointing to misinformation, political advertising, and personalized newsfeeds. Alexis C. Madrigal, What Facebook Did to American Democracy, The Atlantic (Oct. 12, 2017), https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/ [https://perma.cc/VY6Y-KD6H].
  43. . Merton, supra note 7, at 898–901 (“The most obvious limitation to a correct anticipation of consequences of action is provided by the existing state of knowledge… A second major factor of unexpected consequences of conduct, which is perhaps as pervasive as ignorance, is error.”).
  44. . Dan Jerker B. Svantesson, The Times They Are A-Changin’, Forum on Pub. Pol’y 4–5 (2015) (“The functional equivalence approach is based on an analysis of the purposes and functions of the traditional paper based requirement with a view to determining how those purposes or functions could be fulfilled through electronic commerce techniques. … Technology has advanced with great speed in recent years. It is likely to continue to do so. Unlike technology, the law tends to develop slowly, usually by reacting to situations only as they arise.”).
  45. . Calo, supra note 12, at 559.
  46. . Nat’l Cable & Telecomm. Ass’n v. Brand X Internet Servs., 545 U.S. 967, 991 (2005).
  47. . Svantesson, supra note 44, at 9.
  48. . Laurie et al., supra note 11, at 3.
  49. . Blackford, supra note 9, at 8 (citing Isaac Asimov, Asimov on Science Fiction (1981)).
  50. . Anders et al., supra note 3, at 11.
  51. . Omar Mubin et al., Reflecting on the Presence of Science Fiction Robots in Computing Literature, 8 ACM Trans. Human-Robot Interaction 1, 7 (2019).
  52. . Mark Fenwick, Wulf A. Kaal & Erik P. M. Vermeulen, Regulation Tomorrow: What Happens when Technology Is Faster than the Law?, 3 Am. U. Bus. L. Rev. (2017).
  53. . Elizabeth Mertz et al., Forty-five Years of Law and Literature: Reflections on James Boyd White’s “The Legal Imagination” and its Impact on Law and Humanities Scholarship, 13 L. & Human. 95, 96 (2019) (describing James White’s 1973 book The Legal Imagination as an approach to legal education that involves “reading law’s instruments, its rhetoric and concepts alongside, above, below and in-between literary works and criticism.”); see Carol Parker, A Liberal Education in Law: Engaging the Legal Imagination Through Research and Writing Beyond the Curriculum, 1 J. Ass’n Legal Writing Directors 130, 132–33 (2008) (examining one specific aspect of this approach, the importance of metaphor and its role in both legal reasoning and imagination).
  54. . See Saunders & Levine, supra note 14.
  55. . Id. at 2.
  56. . Philip C. Kissam, Law School Examinations, 42 Vand. L. Rev. 433, 440 (1989).
  57. . Casey Fiesler, Natalie Garrett & Nathan Beard, What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis, Proc. ACM SIGCSE Tech. Symp. Computer Sci. Educ. 1, 5 (2020).
  58. . See Ward, supra note 33.
  59. . Kosinski et al., supra note 34.
  60. . Parker, supra note 53, at 132 (quoting Steven L. Winter, Death is the Mother of Metaphor, 105 Harv. L. Rev. 745, 759 (1992)).
  61. . Kissam, supra note 56, at 440.
  62. . Parker, supra note 53, at 132–33.
  63. . See Laurie et al., supra note 11.
  64. . See Miller & Bennett, supra note 13.
  65. . Fiesler et al., supra note 57 (describing the content of 100+ syllabi from tech ethics courses).
  66. . Emanuelle Burton, Judy Goldsmith & Nicholas Mattei, How to Teach Computer Ethics through Science Fiction, 61 Comm. ACM 54, 64 (2018).
  67. . Id. at 54.
  68. . Cory Doctorow, Scroogled (2007), https://cmci.colorado.edu/~cafi5706/Scroogled.pdf [https://perma.cc/M85C-TX9F] (last visited Oct. 18, 2020).
  69. . Naomi Kritzer, Cat Pictures Please, Clarkesworld (2016), http://clarkesworldmagazine.com/kritzer_01_15/ [https://perma.cc/X5XG-V2MR] (last visited Oct. 18, 2020).
  70. . Casey Fiesler, Black Mirror, Light Mirror: Teaching Technology Ethics Through Speculation, How We Get to Next (Oct. 5, 2018), https://howwegettonext.com/the-black-mirror-writers-room-teaching-technology-ethics-through-speculation-f1a9e2deccf4 [https://perma.cc/9N64-R4U3].
  71. . Black Mirror: The Entire History of You (Netflix Dec. 11, 2011).
  72. . Casey Fiesler, Ethical Considerations for Research Involving (Speculative) Public Data, 3 GROUP Proc. ACM Hum.-Computer Interactions 249, 249:2 (2019).
  73. . Dan Froomkin & Jenna McLaughlin, FBI vs. Apple establishes a new phase of the crypto wars, Intercept (Feb. 26, 2016 12:13 PM), https://theintercept.com/2016/02/26/fbi-vs-apple-post-crypto-wars/ [https://perma.cc/WJU5-37UK].
  74. . Katz v. United States, 389 U.S. 347 (1967) (establishing reasonable expectation of privacy with respect to phone calls).
  75. . See Opher Shweiki & Youli Lee, Compelled Use of Biometric Keys to Unlock a Digital Device: Deciphering Recent Legal Developments, 67 Dep’t of Just. J. Fed. L. & Pract. 23 (2019).
  76. . Annie Dike, Alexander Graham Bell Day Calls for Patent Trivia: Time to See How “Phone Smart” You Are, 10 Nat’l L. Rev. 272 (Mar. 6, 2018), https://www.natlawreview.com/article/alexander-graham-bell-day-calls-patent-trivia-time-to-see-how-phone-smart-you-are [https://perma.cc/3S8Q-CFSB].
  77. . Black Mirror: Be Right Back (Netflix Feb. 11, 2013).
  78. . Fenwick et al., supra note 52, at 574.
  79. . Casey Newton, Congress just showed us what comprehensive regulation of Facebook would look like, The Verge (July 31, 2018); Fiesler, supra Part I.
  80. . A slide deck containing a set of these hypotheticals can be downloaded at http://cmci.colorado.edu/~cafi5706/blackmirror_speculativeregulation.pptx [https://perma.cc/7EVK-38TX].
  81. . Stephen Schahrer, First, Let Me Take a Selfie: Should a Monkey Have Copyrights to His Own Selfie?, 12 Liberty U. L. Rev. 135–65 (2017).
  82. . Amanda Lagerkvist, The Netlore of the Infinite: Death (and Beyond) in the Digital Memory Ecology, 21 New Rev. Hypermedia & Multimedia 185, 189 (2015).
  83. . Black Mirror: Nosedive (Netflix Oct. 21, 2016).
  84. . Journalism professor Jeremy Littau at Lehigh University uses this example to spur discussion among his students about the future of communication and technology. Stewart, supra note 2, at 10.
  85. . Lagerkvist, supra note 82.
  86. . Stewart, supra note 2, at 15.
  87. . Id. at 18.
  88. . Dunne & Raby, supra note 4, at 74–75.
  89. . Casey Fiesler, The Black Mirror Writers’ Room: A Speculative Exercise (July 8, 2020), https://docs.google.com/presentation/d/1fZah6nYpAhLtUMh1BRy3w1vCHk_-W7bxxv0LeuKZpT0/edit#slide=id.g63d578e5a7_0_0 [https://perma.cc/SE97-9R8Y].
  90. . Stewart, supra note 2, at ix.
  91. . Id. at 185.
  92. . Id. at 9.
  93. . Id. at 9–10.
  94. . Id. at 19.
  95. . Id. at 31.
  96. . Interdisciplinary Ethics Tech Competition, U. of Colo. Boulder, https://www.colorado.edu/law/academics/daniels-fund-ethics-initiative-collegiate-program-colorado-law/programs-and-events-0 [https://perma.cc/33Q7-DNEF] (last visited Oct. 18, 2020).
  97. . Charlie Brooker, The Dark Side of Our Gadget Addiction, The Guardian (Dec. 1, 2011), https://www.theguardian.com/technology/2011/dec/01/charlie-brooker-dark-side-gadget-addiction-black-mirror [https://perma.cc/YB5J-EYXT].
  98. . Id.