{"id":696,"date":"2021-01-29T00:02:29","date_gmt":"2021-01-29T07:02:29","guid":{"rendered":"http:\/\/ctlj.colorado.edu\/?p=696"},"modified":"2021-03-16T15:15:58","modified_gmt":"2021-03-16T21:15:58","slug":"fiesler-1-20-21-final-draft","status":"publish","type":"post","link":"https:\/\/ctlj.colorado.edu\/?p=696","title":{"rendered":"Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination"},"content":{"rendered":"<p>Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination<\/p>\n<p>by Casey Fiesler<sup><a id=\"post-696-footnote-ref-2\" href=\"#post-696-footnote-2\">[1]<\/a><\/sup>*<\/p>\n<p>\u201cScience fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.\u201d \u2013 Isaac Asimov<a id=\"post-696-_Ref54369409\"><\/a><sup><a id=\"post-696-footnote-ref-3\" href=\"#post-696-footnote-3\">[2]<\/a><\/sup><\/p>\n<p><a id=\"post-696-_Toc57817486\"><\/a><a id=\"post-696-_Toc61983026\"><\/a> Introduction<\/p>\n<p>We all tell stories\u2014to ourselves, and to others\u2014about the future. 
These stories typically draw us in two opposite directions: to an optimist utopia, where we imagine how things might be better than they are today, or to a pessimist dystopia where aggressive innovation leads to our destruction.<a id=\"post-696-_Ref54369098\"><\/a><sup><a id=\"post-696-footnote-ref-4\" href=\"#post-696-footnote-4\">[3]<\/a><\/sup> In our current landscape of rapidly emerging technology, it is easy to jump to dystopic scenarios when we imagine the future\u2014and as individuals we often do.<sup><a id=\"post-696-footnote-ref-5\" href=\"#post-696-footnote-5\">[4]<\/a><\/sup> Meanwhile, science fiction media like <em>Black Mirror<\/em> also tell these stories for us, fast-forwarding technologies being developed to the point where the dream of that technology becomes a nightmare instead.<a id=\"post-696-_Ref54367961\"><\/a><sup><a id=\"post-696-footnote-ref-6\" href=\"#post-696-footnote-6\">[5]<\/a><\/sup><\/p>\n<p>Dystopias as a genre serve as cautionary tales that can warn us of what might lie ahead if we are not careful now.<sup><a id=\"post-696-footnote-ref-7\" href=\"#post-696-footnote-7\">[6]<\/a><\/sup> These stories are particularly powerful in the context of <em>unanticipated consequences<\/em>, where deliberate acts have effects that are unintended or unforeseen.<a id=\"post-696-_Ref54368999\"><\/a><sup><a id=\"post-696-footnote-ref-8\" href=\"#post-696-footnote-8\">[7]<\/a><\/sup> By definition, negative consequences of this type are unforeseeable at the time a technology is designed\u2026 but what if they were not? 
<em>Speculation<\/em> is the ability to imagine potential futures and alternatives.<sup><a id=\"post-696-footnote-ref-9\" href=\"#post-696-footnote-9\">[8]<\/a><\/sup> And science fiction as a narrative genre of speculation can illuminate the likely social impact of change\u2014not just by criticizing na\u00efve optimism about the future, but also by providing a blueprint for a better one.<a id=\"post-696-_Ref54369082\"><\/a><sup><a id=\"post-696-footnote-ref-10\" href=\"#post-696-footnote-10\">[9]<\/a><\/sup><\/p>\n<p>For both pessimists and optimists, critique is not necessarily negative, but can be a testimonial towards how the world might be instead.<sup><a id=\"post-696-footnote-ref-11\" href=\"#post-696-footnote-11\">[10]<\/a><\/sup> Creative speculation, as a method of ethical and legal foresight,<a id=\"post-696-_Ref54368005\"><\/a><sup><a id=\"post-696-footnote-ref-12\" href=\"#post-696-footnote-12\">[11]<\/a><\/sup> can help us foresee potential consequences of emerging technologies. Subsequently, we may be able to use design, implementation, or regulation to mitigate negative outcomes. 
In fact, a number of scholars have called for multi-stakeholder and interdisciplinary approaches to regulation,<a id=\"post-696-_Ref54368020\"><\/a><sup><a id=\"post-696-footnote-ref-13\" href=\"#post-696-footnote-13\">[12]<\/a><\/sup> and even pointed to the usefulness of science fiction and speculation.<a id=\"post-696-_Ref54369281\"><\/a><sup><a id=\"post-696-footnote-ref-14\" href=\"#post-696-footnote-14\">[13]<\/a><\/sup> Moreover, much like issue-spotting and other traditional ways of \u201cthinking like a lawyer,\u201d<a id=\"post-696-_Ref54369149\"><\/a><sup><a id=\"post-696-footnote-ref-15\" href=\"#post-696-footnote-15\">[14]<\/a><\/sup> creative speculation is a skill that can be practiced and taught.<\/p>\n<p>In this essay, I begin by discussing the problem of unanticipated consequences in the design and regulation of emerging technology, pointing to the difficulty of foresight as an underlying cause. Next, I draw a line between the issue-spotting capabilities of \u201cthinking like a lawyer\u201d and cultivating foresight as a skill. Finally, I describe my own experiences with using science fiction and creative speculation in teaching ethics and policy, and argue for the usefulness of creative speculation as a tool for those who are designing, deploying, and regulating technology. I sometimes describe myself, in the context of technology, as an optimist who believes it is important to think like a pessimist. I believe that such tools can help us create the future that we want rather than the one that we fear.<\/p>\n<p><a id=\"post-696-_Toc57817487\"><\/a><a id=\"post-696-_Toc61983027\"><\/a> Unanticipated Consequences and the Challenge of Emerging Technology<\/p>\n<p>Emerging technologies are often both high-risk and high-potential. They offer benefits to society, but with those benefits come ethical and regulatory quandaries. 
With this in mind, how do we leverage an innovation\u2019s anticipated benefits while simultaneously guarding against its potential harms? This question is particularly difficult to answer when we might not be able to understand the risk associated with a technology until it is sufficiently developed.<sup><a id=\"post-696-footnote-ref-16\" href=\"#post-696-footnote-16\">[15]<\/a><\/sup> For example, rapid advancements in artificial intelligence have prompted alarm not just from the general public and regulators, but from the very leaders of the tech companies engaged in its development.<a id=\"post-696-_Ref54368037\"><\/a><sup><a id=\"post-696-footnote-ref-17\" href=\"#post-696-footnote-17\">[16]<\/a><\/sup> Elon Musk called AI \u201cour biggest existential threat\u201d as he asked for regulatory oversight to make sure that \u201cwe don\u2019t do something very foolish.\u201d<sup><a id=\"post-696-footnote-ref-18\" href=\"#post-696-footnote-18\">[17]<\/a><\/sup><\/p>\n<p>While AI is not designed to produce negative consequences, it <em>is<\/em> designed to produce the unforeseen. 
Artificial intelligence simulates human intelligence\u2014which means that by definition the actions it takes are not all hard-coded and known in advance.<a id=\"post-696-_Ref54368752\"><\/a><sup><a id=\"post-696-footnote-ref-19\" href=\"#post-696-footnote-19\">[18]<\/a><\/sup> Even narrow AI (contrasted with general AI, still in the realm of science fiction), which is programmed to perform a <em>specific<\/em> task, can have significant impacts on society even when applied carefully.<a id=\"post-696-_Ref54368971\"><\/a><sup><a id=\"post-696-footnote-ref-20\" href=\"#post-696-footnote-20\">[19]<\/a><\/sup> The capability for AI to produce actions for which it is not directly programmed (and therefore, potentially unforeseen) is entirely intentional\u2014but the direct consequences, including the possibility of a loss of control of that AI\u2019s actions, might not be.<sup><a id=\"post-696-footnote-ref-21\" href=\"#post-696-footnote-21\">[20]<\/a><\/sup> Even an AI agent with the seemingly harmless goal of making paperclips might have an unmitigated opportunity to effect change on the environment directly and negatively impact humans.<sup><a id=\"post-696-footnote-ref-22\" href=\"#post-696-footnote-22\">[21]<\/a><\/sup> In other words, AI will inherently have unanticipated, if not unintended, consequences.<\/p>\n<p>It is unsurprising, therefore, that the unforeseen aspects of AI have created ethical challenges. To address these challenges, we have seen a scrambling for AI ethics principles and guidelines from a huge variety of relevant actors\u2014from the government of Australia,<sup><a id=\"post-696-footnote-ref-23\" href=\"#post-696-footnote-23\">[22]<\/a><\/sup> to the U.S. 
Department of Defense,<sup><a id=\"post-696-footnote-ref-24\" href=\"#post-696-footnote-24\">[23]<\/a><\/sup> to Microsoft<sup><a id=\"post-696-footnote-ref-25\" href=\"#post-696-footnote-25\">[24]<\/a><\/sup> and Google,<sup><a id=\"post-696-footnote-ref-26\" href=\"#post-696-footnote-26\">[25]<\/a><\/sup> and even religious institutions.<sup><a id=\"post-696-footnote-ref-27\" href=\"#post-696-footnote-27\">[26]<\/a><\/sup> Though these principles share some common features, they are still highly divergent on important matters of interpretation and application.<sup><a id=\"post-696-footnote-ref-28\" href=\"#post-696-footnote-28\">[27]<\/a><\/sup><\/p>\n<p>Additionally, AI raises unique legal challenges similar to those of the internet, which itself gave rise to a new subfield of law.<sup><a id=\"post-696-footnote-ref-29\" href=\"#post-696-footnote-29\">[28]<\/a><\/sup> In fact, the unforeseeable poses a particularly vexing legal challenge: will legal systems choose to view the actions of some AI systems as unintended\/unanticipated, and if so, will system designers escape liability?<sup><a id=\"post-696-footnote-ref-30\" href=\"#post-696-footnote-30\">[29]<\/a><\/sup> This type of quandary poses entirely new kinds of public risks.<sup><a id=\"post-696-footnote-ref-31\" href=\"#post-696-footnote-31\">[30]<\/a><\/sup> Meanwhile, regulators are tasked not only with thinking about the potential consequences of the technology itself, but also with thinking about the possible consequences of regulation.<sup><a id=\"post-696-footnote-ref-32\" href=\"#post-696-footnote-32\">[31]<\/a><\/sup><\/p>\n<p>These ethical and legal challenges are largely created by <em>uncertainty<\/em>, a common side effect of technological revolutions.<sup><a id=\"post-696-footnote-ref-33\" href=\"#post-696-footnote-33\">[32]<\/a><\/sup> However, a question often arises: was there really so much uncertainty, or were certain problems foreseeable? 
For example, consider the Cambridge Analytica scandal, which encompassed a number of significant ethical, legal, and social issues, including privacy violations and manipulation.<a id=\"post-696-_Ref54369162\"><\/a><sup><a id=\"post-696-footnote-ref-34\" href=\"#post-696-footnote-34\">[33]<\/a><\/sup> Arguably, the use of personality traits by political campaigns to attempt to manipulate voters on Facebook was foreseeable. In fact, a 2013 paper revealed that undisclosed personality traits (e.g., introversion versus extroversion) could be accurately predicted by Facebook \u201clikes\u201d; the paper concluded by noting the \u201cconsiderable negative implications\u201d of the research.<a id=\"post-696-_Ref54369179\"><\/a><sup><a id=\"post-696-footnote-ref-35\" href=\"#post-696-footnote-35\">[34]<\/a><\/sup> In the wake of the scandal, much public discourse shifted to the ethical responsibility of both technologists and platforms to anticipate potential problems associated with their technology.<sup><a id=\"post-696-footnote-ref-36\" href=\"#post-696-footnote-36\">[35]<\/a><\/sup> Everyone involved in the design of technology should be looking for ethical warning signs, whether they can be inferred from existing data or are more speculative. \u201cI\u2019m just an engineer\u201d is no longer a valid excuse, and those engineers are expected to have considered the social implications of the technologies they create.<sup><a id=\"post-696-footnote-ref-37\" href=\"#post-696-footnote-37\">[36]<\/a><\/sup> Even academic researchers have called for their community to \u201cwork much harder to address the downsides of our innovations\u201d without simply assuming that computing research will result in a net positive impact on the world.<sup><a id=\"post-696-footnote-ref-38\" href=\"#post-696-footnote-38\">[37]<\/a><\/sup><\/p>\n<p>Of course, this goal may fall into the \u201ceasier said than done\u201d category. 
Neither the designers of technology nor those who are tasked with regulating it can actually see the future. Foresight is difficult and fraught with pitfalls, including misunderstanding the potential of an emerging technology, misconceiving a scientific trajectory, or failing to predict pivotal events or innovations.<sup><a id=\"post-696-footnote-ref-39\" href=\"#post-696-footnote-39\">[38]<\/a><\/sup> For example, few recognized how socially and commercially transformative the internet, or even fax technology, would be until those technologies were widely adopted, nor was anyone prepared for the legal mischief that would follow.<sup><a id=\"post-696-footnote-ref-40\" href=\"#post-696-footnote-40\">[39]<\/a><\/sup><\/p>\n<p>According to Amara\u2019s Law, people tend to overestimate the impact of technology in the short term, but underestimate its impact in the long term.<sup><a id=\"post-696-footnote-ref-41\" href=\"#post-696-footnote-41\">[40]<\/a><\/sup> In 2004, Facebook launched to students at Harvard,<sup><a id=\"post-696-footnote-ref-42\" href=\"#post-696-footnote-42\">[41]<\/a><\/sup> and as its user base grew, Facebook showed enough potential to attract investors\u2014but it is understandable that, at the time, no one predicted that it might someday be so embedded in the social fabric of society that it could influence the course of elections.<sup><a id=\"post-696-footnote-ref-43\" href=\"#post-696-footnote-43\">[42]<\/a><\/sup><\/p>\n<p>Sociologist Robert Merton\u2019s theory of unanticipated consequences identifies two major causes of negative outcomes when the relevant parties are well-intentioned: (1) the inability to anticipate every eventuality, making incomplete analysis inevitable; and (2) errors in analysis that arise from methods or habits that may have worked in the past but do not apply to the current problem.<sup><a id=\"post-696-footnote-ref-44\" href=\"#post-696-footnote-44\">[43]<\/a><\/sup> Both of these problems 
critically intersect with law\u2014not only with respect to a potential lack of foresight, but also because the law develops at a snail\u2019s pace compared to technology, and application of the law to new technologies often involves analogy and functional equivalence.<a id=\"post-696-_Ref54369044\"><\/a><sup><a id=\"post-696-footnote-ref-45\" href=\"#post-696-footnote-45\">[44]<\/a><\/sup><\/p>\n<p>Functional equivalence and the large role that analogy plays in case law make the perfect recipe for Merton\u2019s second challenge for unanticipated consequences, the application of habits that have worked in the past. In his discussion of the parallels between the regulatory challenges for robotics and the internet, Ryan Calo draws from cyberlaw the lesson that courts will look to how a new digital activity is \u201clike\u201d one for which there are already rules. For example, if a court is determining the appropriate Fourth Amendment protections for an email, it might ask whether an email is more like a postcard or a sealed letter.<sup><a id=\"post-696-footnote-ref-46\" href=\"#post-696-footnote-46\">[45]<\/a><\/sup> Similarly, in a 2005 Supreme Court case, the Court wrestled with whether a cable internet provider is more an \u201cinformation service\u201d than a \u201ctelecommunications service,\u201d with Justice Scalia\u2019s dissent arguing that it is analogous to a pizza delivery service.<sup><a id=\"post-696-footnote-ref-47\" href=\"#post-696-footnote-47\">[46]<\/a><\/sup> A challenge, then, is that when the legal profession fails to keep in step with advancements in technology (due in part to a lack of technical knowledge) and therefore relies on less advanced technology for analogy, the application of the law may suffer in quality and subsequently result in undesirable consequences.<sup><a id=\"post-696-footnote-ref-48\" href=\"#post-696-footnote-48\">[47]<\/a><\/sup><\/p>\n<p>Though of course we will never be able to solve Merton\u2019s 
first challenge for unanticipated consequences by gaining the ability to anticipate every eventuality, ethical speculation and legal foresight can help create \u201cpathways into the unknown.\u201d<sup><a id=\"post-696-footnote-ref-49\" href=\"#post-696-footnote-49\">[48]<\/a><\/sup> Asimov defined science fiction as the branch of literature that deals with \u201cthe reaction of human beings to changes in science and technology.\u201d<sup><a id=\"post-696-footnote-ref-50\" href=\"#post-696-footnote-50\">[49]<\/a><\/sup> The introduction to <em>Future Tense: Stories of Tomorrow<\/em>, a science fiction short story anthology, describes the power of science fiction to shape our reactions productively:<\/p>\n<p>[T]he history of actual technological change \u2026 is always heterogeneous, ambivalent, growing out of and elaborating on our existing social structures and norms, cultures and values, and physical environments.\u2026 We get used to these changes quite quickly, and once we do, they become unremarkable, even invisible. A good science fiction story can help re-sensitize us by showing us people dangling over different technological precipices, or realizing their potential in once-unimaginable ways.<sup><a id=\"post-696-footnote-ref-51\" href=\"#post-696-footnote-51\">[50]<\/a><\/sup><\/p>\n<p>Perhaps optimists are more inclined to reimagine potential, and pessimists to dangle our possible futures over those precipices. Both are important. The idea is not to regulate now for the HAL-9000s, WALL-Es, or R2D2s that may or may not exist in any form in the future.<sup><a id=\"post-696-footnote-ref-52\" href=\"#post-696-footnote-52\">[51]<\/a><\/sup> However, we can exercise the muscles of our imagination and avoid complacency over the changes around us.<\/p>\n<p>I argue that the most important context for ethical speculation is as part of the design and implementation of new technology, as some small weapon against uncertainty. 
By the time we get to lawyers and lawmakers, it is often too late, since the regulation of disruptive technology tends to be <em>reactive<\/em> to problems and challenges that arise out of uncertainty.<a id=\"post-696-_Ref54369347\"><\/a><sup><a id=\"post-696-footnote-ref-53\" href=\"#post-696-footnote-53\">[52]<\/a><\/sup> As we consider speculation as part of education and design, however, there are lessons we can take not only from science fiction, but also from the legal imagination.<a id=\"post-696-_Ref54369200\"><\/a><sup><a id=\"post-696-footnote-ref-54\" href=\"#post-696-footnote-54\">[53]<\/a><\/sup> Next, I consider how the characteristics of legal reasoning are useful for ethical speculation.<\/p>\n<p><a id=\"post-696-_Toc57817488\"><\/a><a id=\"post-696-_Toc61983028\"><\/a> Thinking Like a Lawyer\u2026 or a Science Fiction Writer<\/p>\n<p>\u201cThinking like a lawyer\u201d is a skill one is supposed to learn in law school.<sup><a id=\"post-696-footnote-ref-55\" href=\"#post-696-footnote-55\">[54]<\/a><\/sup> Traditionally this new way of thinking involves analytical skills, with a focus on thinking rhetorically in a problem-solving context, and in particular on the ability to inductively synthesize a legal principle from a series of cases and to analogize them to others.<sup><a id=\"post-696-footnote-ref-56\" href=\"#post-696-footnote-56\">[55]<\/a><\/sup> One way that this skill finds its way into legal pedagogy is via \u201cissue-spotting\u201d exams that require perceiving the analogies between a fact pattern and a set of legal issues, standards, and precedents.<a id=\"post-696-_Ref54369247\"><\/a><sup><a id=\"post-696-footnote-ref-57\" href=\"#post-696-footnote-57\">[56]<\/a><\/sup><\/p>\n<p>I still remember the exam from my Torts class in the first year of law school. It began with a story (a \u201cfact pattern\u201d) that was about a page and a half long. 
The story ended with a plane crash, but prior to that there was a cast of potentially liable actors: a co-pilot who had had a drink before the flight, a pilot who was distracted by his affair with a flight attendant, an air traffic controller being trained on the job, a couple of rowdy passengers, the architects of a poorly lit runway, and a number of others I cannot remember. At the end of this story, there was a single prompt: \u201cDiscuss all possible torts claims.\u201d<\/p>\n<p>Today, I teach a course on information ethics and policy, and the majority of my students are computer science and information science majors\u2014potential designers of the \u201cemerging technology\u201d of the future that one day we will find challenging to regulate. When it comes to teaching ethics\u2014a topic that very often does not have \u201cright\u201d answers\u2014issue-spotting is one of the most useful skills I can cultivate in my students. In fact, a recent analysis of syllabi from university tech ethics classes showed that variations on being able \u201cto recognize ethical issues in the world\u201d is one of the most common types of desirable learning outcomes.<a id=\"post-696-_Ref54369330\"><\/a><sup><a id=\"post-696-footnote-ref-58\" href=\"#post-696-footnote-58\">[57]<\/a><\/sup><\/p>\n<p>These fact patterns present some of the same ethical controversies that we see in the news every day\u2014for example, the behavioral microtargeting behind Cambridge Analytica.<sup><a id=\"post-696-footnote-ref-59\" href=\"#post-696-footnote-59\">[58]<\/a><\/sup> Who were the bad actors in this scenario? What were the harms and were they foreseeable? How much did the design or business model of the platform contribute to those harms, and what responsibility might Facebook bear? 
What about the researchers who first determined that personal attributes are predictable from Facebook\u2019s collected data, and published a paper that noted the \u201cconsiderable negative implications\u201d of this finding?<sup><a id=\"post-696-footnote-ref-60\" href=\"#post-696-footnote-60\">[59]<\/a><\/sup> This type of real-world fact pattern still boils down to a familiar question: \u201cDiscuss all possible ethical issues.\u201d<\/p>\n<p>In addition to observational skills like issue-spotting, imagination also plays a critical role in legal reasoning because it fosters development of conceptual metaphors, which are more than just means of expression; they are also the \u201cimaginative means by which we receive the multiple relations of a complex world.\u201d<sup><a id=\"post-696-footnote-ref-61\" href=\"#post-696-footnote-61\">[60]<\/a><\/sup> Like the philosophical concept of imagination, the legal imagination requires <em>perceiving connections<\/em> between the general and the specific<sup><a id=\"post-696-footnote-ref-62\" href=\"#post-696-footnote-62\">[61]<\/a><\/sup>\u2014or even the general and the speculative. When asking my students to imagine both the promise and the potential harms of the technology they might create, I am asking them to both extrapolate from the pitfalls of the past and to imagine uses and circumstances beyond their control. They must think <em>now<\/em> about the consequences that they may not intend but that might, with a little imagination, be foreseeable. 
In recursively traveling between the general and the specific, we can choose among the possibilities and consider their moral consequences.<sup><a id=\"post-696-footnote-ref-63\" href=\"#post-696-footnote-63\">[62]<\/a><\/sup><\/p>\n<p>Arguments for interdisciplinarity around the regulation of technology often involve the ability to bring in greater technical expertise and to help alleviate multi-stakeholder tensions by having more people in the room from the start.<sup><a id=\"post-696-footnote-ref-64\" href=\"#post-696-footnote-64\">[63]<\/a><\/sup> However, engaging multiple perspectives also has the opportunity to ramp up creative speculation. There have been arguments for engaging the public more with science fiction in order to increase capacity to think critically about our technological futures, as well as to promote science fiction <em>writing<\/em> as a socially valuable profession with more direct interaction with scientists and technologists.<sup><a id=\"post-696-footnote-ref-65\" href=\"#post-696-footnote-65\">[64]<\/a><\/sup> However, legal reasoning\u2014including issue-spotting, perceiving analogies, and extrapolation\u2014also provides a skillset that could be useful for technologists.<\/p>\n<p>Perhaps we could create dream teams of technologists, lawyers, and science fiction writers to design and simultaneously consider the regulatory implications for the technologies of the future. 
However, in the interim, we can consider how creative speculation, like legal reasoning, can be cultivated as a skill.<\/p>\n<p><a id=\"post-696-_Toc57817489\"><\/a><a id=\"post-696-_Toc61983029\"><\/a> Teaching Creative Speculation<\/p>\n<p>How best to teach ethics to computer science students or other technologists of tomorrow is an unsettled question, with a variety of pedagogical approaches represented even as the demand for such instruction continues to grow.<sup><a id=\"post-696-footnote-ref-66\" href=\"#post-696-footnote-66\">[65]<\/a><\/sup> One approach, as exemplified in the course \u201cScience Fiction and Computer Ethics\u201d taught at University of Kentucky and University of Illinois, emphasizes \u201coffer[ing] students a way to cultivate their capacity for moral imagination\u201d through analyzing science fiction stories.<sup><a id=\"post-696-footnote-ref-67\" href=\"#post-696-footnote-67\">[66]<\/a><\/sup> The instructors note that a key insight of this course was that \u201ca good technology ethics course teaches students how to think, not what to think, about their role in the development and deployment of technology, as no one can foresee the problems that will be faced in a future career.\u201d<sup><a id=\"post-696-footnote-ref-68\" href=\"#post-696-footnote-68\">[67]<\/a><\/sup><\/p>\n<p>I include analysis of science fiction texts and media in my own teaching, including stories like Cory Doctorow\u2019s \u201cScroogled\u201d<sup><a id=\"post-696-footnote-ref-69\" href=\"#post-696-footnote-69\">[68]<\/a><\/sup> and Naomi Kritzer\u2019s \u201cCat Pictures Please\u201d<sup><a id=\"post-696-footnote-ref-70\" href=\"#post-696-footnote-70\">[69]<\/a><\/sup> in conjunction with scholarly and news articles when covering surveillance and AI, respectively. I also have students write essays about an AI science fiction film of their choice; <em>Ex Machina<\/em>, <em>Her<\/em>, and <em>Avengers: Age of Ultron <\/em>are particularly popular. 
However, some of my most successful teaching exercises have students not analyzing science fiction but <em>creating<\/em> it, or engaging in further creative speculation around it. Next, I will discuss two such exercises that I first described in the online article \u201cBlack Mirror, Light Mirror\u201d<sup><a id=\"post-696-footnote-ref-71\" href=\"#post-696-footnote-71\">[70]<\/a><\/sup> and have also taken on the road to try out in other classes and even beyond the classroom: the first an activity on speculative regulation, and the second an activity on imagining possible harms of future technologies.<\/p>\n<p><a id=\"post-696-_Toc57817490\"><\/a><a id=\"post-696-_Toc61983030\"><\/a> Speculative Regulation<\/p>\n<p>The course I teach covers information\/technology policy in addition to ethics. I encourage students to use their legal imaginations, considering the intersection of metaphor and speculation. After we watched the <em>Black Mirror<\/em> episode \u201cThe Entire History of You,\u201d<sup><a id=\"post-696-footnote-ref-72\" href=\"#post-696-footnote-72\">[71]<\/a><\/sup> which takes place in a future in which every action we take is recorded (i.e., always-on lifelogging) and every memory accessible (even by others),<a id=\"post-696-_Ref54369374\"><\/a><sup><a id=\"post-696-footnote-ref-73\" href=\"#post-696-footnote-73\">[72]<\/a><\/sup> a student asked whether this would put an end to crime. She followed up by asking if the police would have access to memories at all. Would it be an invasion of privacy? How might the Fourth Amendment apply? Would such a thing constitute an unreasonable search? Someone else asked if your own memories could be used against you without your consent, or was that self-incrimination? 
The conversation then led us to a discussion about the FBI-Apple encryption dispute that concerned whether Apple could be compelled to unlock an encrypted iPhone,<sup><a id=\"post-696-footnote-ref-74\" href=\"#post-696-footnote-74\">[73]<\/a><\/sup> and then I told them about the Supreme Court ruling in <em>Katz v. United States<\/em>.<sup><a id=\"post-696-footnote-ref-75\" href=\"#post-696-footnote-75\">[74]<\/a><\/sup><\/p>\n<p>None of these regulatory or ethical issues came up in \u201cThe Entire History of You,\u201d which was much more concerned with the human and social consequences of the technology. However, this example highlights a feature we have established about science fiction; it can help us explore our present just as much as our future. The premise of this future technology served as a catalyst for discussing similar complexities we are grappling with today. Just as the creators of the iPhone were likely not thinking about how biometric keys might be used by law enforcement,<sup><a id=\"post-696-footnote-ref-76\" href=\"#post-696-footnote-76\">[75]<\/a><\/sup> Alexander Graham Bell likely did not consider the legal privacy implications of the telephone.<sup><a id=\"post-696-footnote-ref-77\" href=\"#post-696-footnote-77\">[76]<\/a><\/sup> Today\u2019s technology is yesterday\u2019s science fiction.<\/p>\n<p>I use another <em>Black Mirror<\/em> episode for a teaching exercise I call \u201cspeculative regulation.\u201d In \u201cBe Right Back,\u201d a young widow brings back her deceased husband first via a chatbot-like service, and eventually via an eerily lifelike robot recreation.<sup><a id=\"post-696-footnote-ref-78\" href=\"#post-696-footnote-78\">[77]<\/a><\/sup> After watching the episode, class begins with the question: what regulations would exist in a world with this technology? 
If we could create robot versions of our deceased loved ones, what current laws might regulate this practice, or what new ones would be created?<\/p>\n<p>Again, law can often be reactive in the face of new technology.<sup><a id=\"post-696-footnote-ref-79\" href=\"#post-696-footnote-79\">[78]<\/a><\/sup> When Facebook was first launched, no one would have thought to create laws that would regulate the use of such platforms for disinformation campaigns, but after the Cambridge Analytica scandal, this seemed to be a given.<sup><a id=\"post-696-footnote-ref-80\" href=\"#post-696-footnote-80\">[79]<\/a><\/sup> Because edge cases and counterfactuals are a critical part of legal analysis, the exercise continues with a series of hypotheticals<sup><a id=\"post-696-footnote-ref-81\" href=\"#post-696-footnote-81\">[80]<\/a><\/sup> to shift the conversation and force students to find inconsistencies in their decisions and to follow the downstream effects of regulation. These hypotheticals raise a series of questions that the students must answer and decisions that they must ultimately make. Is a robot inheritable property? Are there consequences for mistreatment of a robot? Who is liable for a robot\u2019s behavior? Who is responsible for its care? Can a robot hold a copyright (which nearly always leads to discussion of monkeys<sup><a id=\"post-696-footnote-ref-82\" href=\"#post-696-footnote-82\">[81]<\/a><\/sup>)? Each decision shapes a set of laws (as well as, e.g., a Terms of Service for the robotics company) that in turn shape the social structure of the world that this fictional technology embodies.<\/p>\n<p>The purpose of this exercise is not to think seriously about how we might regulate this technology; even if we can see the inspiration in current technologies designed around a digital afterlife,<a id=\"post-696-_Ref54369456\"><\/a><sup><a id=\"post-696-footnote-ref-83\" href=\"#post-696-footnote-83\">[82]<\/a><\/sup> this is far-future tech that might never come to pass. 
There are much more pressing matters for our regulatory structures to deal with right now than the potential rights or liabilities for eerily lifelike robots. However, the intended <em>outcome<\/em> of this activity is to exercise the legal imagination, to learn to think through problems with creative speculation. Also\u2014it\u2019s fun. If students can get excited about thinking through the ethical and legal implications of some technology that someone else <em>might<\/em> create a hundred years from now, they should be able to do the same with the technology that they <em>are<\/em> creating right now. The next exercise takes students through an example of that process by giving them the opportunity to be science fiction writers.<\/p>\n<p><a id=\"post-696-_Toc57817491\"><\/a><a id=\"post-696-_Toc61983031\"><\/a> The Black Mirror Writers\u2019 Room<\/p>\n<p>I think that one of the reasons <em>Black Mirror<\/em> has been so successful is that it takes current technologies and pushes them just a <em>step<\/em> further\u2014most often a foreseeable step, a plausible step. For example, the episode \u201cNosedive\u201d<sup><a id=\"post-696-footnote-ref-84\" href=\"#post-696-footnote-84\">[83]<\/a><\/sup> features widespread adoption of a ratings-based social measurement tool with severe ramifications; a question like \u201cwhy would we agree with this?\u201d forces reflection about the role of social media and related technologies in our own lives.<sup><a id=\"post-696-footnote-ref-85\" href=\"#post-696-footnote-85\">[84]<\/a><\/sup><\/p>\n<p>\u201cNosedive\u201d also easily tees up conversations about surveillance (particularly as represented by the social credit system in China) and social media addiction and well-being. Similarly, \u201cThe Entire History of You\u201d takes on the ethical and normative implications of lifelogging and provokes memories of the failure of Google Glass. 
And even \u201cBe Right Back\u201d\u2014as far-fetched as it might seem\u2014begins (before the robot shows up) with a premise that is hardly science fiction at all; there has already been a tech company with the tagline \u201cwhen your heart stops beating you\u2019ll keep tweeting.\u201d<sup><a id=\"post-696-footnote-ref-86\" href=\"#post-696-footnote-86\">[85]<\/a><\/sup><\/p>\n<p>The common thread between these stories\u2014which, anecdotally, my students count among their favorite episodes\u2014is that they take our current anxieties about technology and nudge them forward far enough to make a point, but close enough that you can still easily see the thread from here to there. They are cautionary tales grounded not in some distant future but in where we might plausibly go, given the developments (and anxieties) of today.<\/p>\n<p>Science fiction often starts with these same kinds of questions. Author Louisa Hall says that her novel <em>Speak<\/em> began with imagining what legal, social, and corporate issues artificial intelligence might raise in the future.<sup><a id=\"post-696-footnote-ref-87\" href=\"#post-696-footnote-87\">[86]<\/a><\/sup> Similarly, Annalee Newitz considered in her novel <em>Autonomous <\/em>what the ACLU might think about robot rights.<sup><a id=\"post-696-footnote-ref-88\" href=\"#post-696-footnote-88\">[87]<\/a><\/sup> And of course, <em>Black Mirror<\/em> jumps straight to what might go most wrong with what the tech companies of today might have in development for tomorrow.<sup><a id=\"post-696-footnote-ref-89\" href=\"#post-696-footnote-89\">[88]<\/a><\/sup><\/p>\n<p>Ethics, particularly with respect to emerging technology, is at its core about speculation, because so many potential harms are difficult to anticipate. The writers\u2019 room for <em>Black Mirror<\/em> certainly manages to anticipate them, though. What if you were not only recording your memories, but others could see them? 
What if you could bring a loved one back as more than a chatbot? What if the social credit system in China were powered by Instagram? What would the cautionary tale be, and what narrative would best tell that story?<\/p>\n<p>As an exercise towards this kind of ethical speculation, I turn my class into this writers\u2019 room, having small groups choose an issue or technology\u2014social media privacy, algorithmic bias, online harassment, misinformation\u2014and then consider where it will be in five or ten years. What could be worthy of a <em>Black Mirror <\/em>episode? They consider possible harms, and then pitch an episode arc.<sup><a id=\"post-696-footnote-ref-90\" href=\"#post-696-footnote-90\">[89]<\/a><\/sup><\/p>\n<p>I have run this exercise not just in an ethics classroom but in technical computer science classes, with high school students, and even with groups of technology professionals at conferences. The ideas that have come out of it are definitely worthy of television. Sometimes ideas from students are barely science fiction at all. For example, they asked: what if an algorithm could tell from your social media traces that you are sick and send you medication? But wait, that\u2019s not quite creepy enough; what if a profit-motivated algorithm calculated, based on how depressed you are, whether it is more likely to make a sale by advertising antidepressants or heroin?<\/p>\n<p>Another idea from students was that perhaps in the future advertising will not exist at all; Amazon\u2019s algorithms will know so much about us that we will no longer have to shop. Everything we need will just show up at our door\u2014including, in a <em>Twilight Zone<\/em>-style twist, a book about privacy protection. 
In one class, having recently discussed the Cambridge Analytica scandal in which political campaigns relied on highly personalized Facebook content to influence voters, we imagined a benevolent AI that uses an even more robust form of personalization to manipulate everyone on Earth into complacency (spoiler: it does not end well for them).<\/p>\n<p>This exercise could easily turn into a pessimist\u2019s dream. <em>Black Mirror<\/em>, after all, mostly helps convince you that technology is going to destroy us all. However, the imagining of all these possible harms is not the right place to end. The next step\u2014arguably, the more important, if less fun step\u2014is to consider how we <em>do not<\/em> get to these harms. We talk about stakeholders, responsibilities, and potential regulatory regimes. We also do not stop with \u201cthere should be laws for that.\u201d What about design? What might the people involved in creating that technology do to help prevent potential negative consequences? Better yet, where could today\u2019s technology go instead that could benefit society and make things <em>better<\/em> than they are now? My hope is that helping more people think critically about responsibility and ethics in the context of technology is one way to keep our lives from turning into a <em>Black Mirror<\/em> episode.<\/p>\n<p><a id=\"post-696-_Toc57817492\"><\/a><a id=\"post-696-_Toc61983032\"><\/a> Conclusion: Making Ripples<\/p>\n<p>In the introduction to Daxton Stewart\u2019s book <em>Media Law Through Science Fiction<\/em>, author Malka Older describes what it means to be a science fiction writer:<\/p>\n<p>\u201cMy job, then, is essentially to think up some difference in the world \u2026 and make sure that the human reactions to it, the changes society has built around it, feel right. \u2026 To do my job well, I need to think through the unintended and unexpected consequences, the second- and third- and fourth-order ripples. 
\u2026 I need to imagine what other applications have come up, both formal and unauthorized. \u2026 [C]hange typically doesn\u2019t happen as a single variable while everything else stays constant.\u201d<sup><a id=\"post-696-footnote-ref-91\" href=\"#post-696-footnote-91\">[90]<\/a><\/sup><\/p>\n<p>What Older describes, the ability to think through the unintended and unexpected consequences and \u201cthe second- and third- and fourth-order ripples,\u201d is precisely what both technologists and regulators should be doing in the context of emerging technology. As Stewart writes, in addition to encouraging foresight, science fiction \u201cenables us to have good discussions in the present about the world we live in \u2026 potentially in anticipation of legal issues before they arrive.\u201d<sup><a id=\"post-696-footnote-ref-92\" href=\"#post-696-footnote-92\">[91]<\/a><\/sup><\/p>\n<p>Science fiction writer and activist Cory Doctorow has taken this idea farther than most, for example by writing short stories intended to illustrate \u201cnightmare scenarios\u201d that could become reality based on the regulatory trajectory of the U.S. Copyright Office.<sup><a id=\"post-696-footnote-ref-93\" href=\"#post-696-footnote-93\">[92]<\/a><\/sup> He described his goal as taking dry, complicated policy and making it vivid and real, hoping that people \u201cwill recognize through fiction what the present-day annoyances will turn into in the future.\u201d<sup><a id=\"post-696-footnote-ref-94\" href=\"#post-696-footnote-94\">[93]<\/a><\/sup><\/p>\n<p>However, just as we might take annoyances and extrapolate to future harms, we can also imagine better futures. 
One inspiration behind Daniel Wilson\u2019s book <em>Robopocalypse <\/em>was a real-life plane crash caused by tension between human pilots and an automated system; in his imagined future, simple laws for AI and robotics promote public safety and would have prevented this kind of tragedy.<sup><a id=\"post-696-footnote-ref-95\" href=\"#post-696-footnote-95\">[94]<\/a><\/sup><\/p>\n<p>I am an optimist who uses pessimism to prepare. And my preparation is speculation about what the world could be, and how it could be better. In addition to noting that catastrophe is inevitable but solutions are not, Asimov also said that the best way to prevent catastrophe is to foresee it in time and take action before it happens, \u201cbut who listens to those who do the foreseeing?\u201d<sup><a id=\"post-696-footnote-ref-96\" href=\"#post-696-footnote-96\">[95]<\/a><\/sup><\/p>\n<p>The answer, I think, is for <em>everyone<\/em> to do the foreseeing, and to listen to each other, in order to create collective visions of the future. Creative speculation as a design tool can begin in the classroom, but it should find its way into practice as well, and it should involve multiple stakeholders whenever possible\u2014as should any consideration of the ethical implications of technology. 
One of my favorite examples is the Interdisciplinary Ethics Tech Competition, organized by Silicon Flatirons here at the University of Colorado Boulder; the competition \u201cgives students a chance to wrestle\u00a0with a real-world ethics\u00a0problem\u00a0in collaboration with a diverse team of students studying\u00a0law, business, communication, journalism, engineering, [cybersecurity], information science, or\u00a0computer science.\u201d<sup><a id=\"post-696-footnote-ref-97\" href=\"#post-696-footnote-97\">[96]<\/a><\/sup> Having been a judge for this competition for several years, I have observed that students working on these interdisciplinary teams seem to be more creative and forward-thinking than when siloed within their own disciplines.<\/p>\n<p>Imagine what the world might look like if everyone who touched technology examined it critically, in a creative, forward-thinking way. Perhaps the popularity of <em>Black Mirror<\/em> is a start. The show\u2019s creator Charlie Brooker described it as \u201cabout the way we live now\u2014and the way we might be living in 10 minutes\u2019 time if we\u2019re clumsy.\u201d<sup><a id=\"post-696-footnote-ref-98\" href=\"#post-696-footnote-98\">[97]<\/a><\/sup>\u00a0The show is intended as a way to force people to think about possible futures and potential harms of the technology we build and use\u2014not just technologists and lawyers, but the general public, too. The more of us who think ahead, the more pitfalls we might anticipate and avoid. 
Brooker posits that mankind is \u201cusually clumsy,\u201d<sup><a id=\"post-696-footnote-ref-99\" href=\"#post-696-footnote-99\">[98]<\/a><\/sup> but that just means we need to look where we are going.<\/p>\n<ol>\n<li id=\"post-696-footnote-2\">* Assistant Professor of Information Science, University of Colorado Boulder; Fellow, Silicon Flatirons Center for Law, Technology, and Entrepreneurship; Fellow, Center for Democracy and Technology; PhD in Human-Centered Computing, Georgia Institute of Technology, 2015; JD, Vanderbilt University Law School, 2009. My thanks to the organizers and participants of the 2020 Silicon Flatirons \u201cTechnology Optimism and Pessimism\u201d conference for engaging conversations that helped shape this piece\u2014especially the \u201cConversation about the Future\u201d panel, including Phil Weiser, Patty Limerick, and Karl Schroeder. <a href=\"#post-696-footnote-ref-2\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-3\">. Daxton R. Stewart, Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech? 31 (2020) (quoting Isaac Asimov, <em>How Easy to See the Future<\/em>, Natural History, 1975). <a href=\"#post-696-footnote-ref-3\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-4\">. Charles J. Anders et al., Future Tense Fiction: Stories Of Tomorrow 11\u201313 (Kristen Berg et al. eds., 2019). <a href=\"#post-696-footnote-ref-4\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-5\">. Nazanin Andalibi &amp; Justin Buss, <em>The Human in Emotion Recognition on Social Media: Attitudes, Outcomes, Risks<\/em>, Proc. ACM CHI Conf. Hum. Factors Computer Sys. 
at 1, 6 (2020) (describing an interview participant speculating about emotion detection creating a \u201c1984 society\u201d); Blake Hallinan, Jed R. Brubaker &amp; Casey Fiesler, <em>Unexpected Expectations: Public Reaction to the Facebook Emotional Contagion Study<\/em>, 22(6) New Media &amp; Soc\u2019y 1076, 1081\u201383 (2020) (describing online reactions to the Facebook emotional contagion experiment that referenced the dystopian novels <em>1984<\/em> and <em>Brave New World<\/em>). <a href=\"#post-696-footnote-ref-5\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-6\">. Anthony Dunne &amp; Fiona Raby, Speculative Everything: Design, Fiction, and Social Dreaming 74\u201375 (2013). <a href=\"#post-696-footnote-ref-6\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-7\">. <em>Id.<\/em> at 73. <a href=\"#post-696-footnote-ref-7\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-8\"><em> . See<\/em> Robert K. Merton, <em>The Unanticipated Consequences of Purposive Social Action<\/em>, 1 Am. Soc. Rev. 894, 895 (1936). <a href=\"#post-696-footnote-ref-8\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-9\">. Dunne &amp; Raby, <em>supra<\/em> note 4, at 3\u20136, 14. <a href=\"#post-696-footnote-ref-9\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-10\">. Russell Blackford, Science Fiction and the Moral Imagination: Visions, Minds, Ethics 14 (Mark Alpert et al. eds., 2017). <a href=\"#post-696-footnote-ref-10\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-11\">. Dunne &amp; Raby, <em>supra<\/em> note 4, at 34\u201335. <a href=\"#post-696-footnote-ref-11\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-12\">. Graeme Laurie, Shawn H.E. Harmon &amp; Fabiana Arzuaga, <em>Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty<\/em>, 4 L., Innovation &amp; Tech. 
1, 3 (2012) (defining \u201clegal foresighting\u201d as \u201cthe identification and exploration of possible and desirable future legal or quasi-legal developments aimed at achieving valued social and technological ends\u201d). <a href=\"#post-696-footnote-ref-12\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-13\">. <em>Id.<\/em> at 10 (\u201c[A] wide range of actors is implicated in the technologies fields, and so a wide range of stakeholders appropriate to the legal foresighting exercise also emerges.\u201d); Gregory N. Mandel, <em>Regulating Emerging Technologies<\/em>, 1 L., Innovation &amp; Tech. 1, 9 (2009) (\u201cCritical to this proposal for emerging technology governance is wide and diverse stakeholder involvement.\u201d); Ryan Calo, <em>Robotics and the Lessons of Cyberlaw<\/em>, 103 Calif. L. Rev. 513, 560 (2015) (\u201cCyberlaw today is a deeply interdisciplinary enterprise, full of meaningful collaboration across a wide variety of training.\u201d). <a href=\"#post-696-footnote-ref-13\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-14\">. Laurie et al., <em>supra<\/em> note 10, at 3 (\u201cLegal foresighting should help us create pathways into the unknown, and part of that creation may mean (or demand) a fundamental re-visioning of the legal setting itself, its instruments, institutions, and regulatory or governance mechanisms.\u201d); Clark A. Miller &amp; Ira Bennett, <em>Thinking Longer Term About Technology: Is There Value in Science Fiction-inspired Approaches to Constructing Futures?<\/em>, 35 Sci. &amp; Pub. Pol\u2019y 597, 604 (2008) (suggesting the value of \u201c[p]romoting critical science fiction writing as a socially valuable profession, and one that interacts with both science and engineering and social and humanistic studies of science and technology\u201d); Kieran Tranter, <em>The Speculative Jurisdiction: The Science Fictionality of Law and Technology<\/em>, 20 Griffith L. Rev. 
815, 820 (2011) (\u201c[L]egal scholarship on technology is kind of an applied futurology \u2013 its starting point is images of technological futures that call for law. This is a speculative activity, a creative process of looking at what is and projecting, imaging and dreaming what <em>could be.<\/em>\u201d); Mitchell Travis, <em>Making Space: Law and Science Fiction<\/em>, 23 L. &amp; Lit. 241, 242 (2011) (\u201cScience fiction allows for a space in which alternate social and legal systems, conditions, and variables can be considered, and it is beneficial for law to consider these alternate situations, given that they are often inspired by popular attitudes.\u201d). <a href=\"#post-696-footnote-ref-14\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-15\"><em> . See <\/em>Kurt M. Saunders &amp; Linda Levine, <em>Learning to Think Like a Lawyer<\/em>, 29 U.S.F. L. Rev. 121, 126 (1994). <a href=\"#post-696-footnote-ref-15\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-16\">. Mandel, <em>supra<\/em> note 11, at 1. <a href=\"#post-696-footnote-ref-16\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-17\">. Matthew U. Scherer, <em>Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies<\/em>, 29 Harv. J. L. &amp; Tech. 353, 355 (2016). <a href=\"#post-696-footnote-ref-17\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-18\"><em> . Id. <\/em> <a href=\"#post-696-footnote-ref-18\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-19\">. Mark O. Riedl &amp; Brent Harrison, <em>Using Stories to Teach Human Values to Artificial Agents<\/em>, Ass\u2019n for the Advancement of Artificial Intelligence 1 (2015) (\u201cRecent advances in artificial intelligence and machine learning have led many to speculate that artificial general intelligence is increasingly likely.\u201d). <a href=\"#post-696-footnote-ref-19\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-20\">. Enrico Coiera, <em>The Price of Artificial Intelligence<\/em>, 28 Y.B. Med. 
Informatics 14 (2019). <a href=\"#post-696-footnote-ref-20\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-21\">. Scherer, <em>supra<\/em> note 15, at 365. <a href=\"#post-696-footnote-ref-21\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-22\">. Nick Bostrom, <em>Ethical Issues in Advanced Artificial Intelligence<\/em>, <em>in <\/em>Science Fiction &amp; Philosophy: From Time Travel to Superintelligence 277, 280\u201384 (2003) (describing the paperclip maximizer thought experiment, in which a superintelligence whose goal is the manufacturing of paperclips starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities); Riedl &amp; Harrison, <em>supra<\/em> note 17, at 105 (\u201cAn artificial general intelligence, especially one that is embodied, will have much greater opportunity to affect change to the environment and find unanticipated courses of action with undesirable side effects. This leads to the possibility of artificial general intelligences causing harm to humans; just as when humans act with disregard for the wellbeing of others.\u201d). <a href=\"#post-696-footnote-ref-22\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-23\">. <em>AI Ethics Principles<\/em>, Austl. Dep\u2019t Indus., Sci., Energy, &amp; Res., <a href=\"https:\/\/www.industry.gov.au\/data-and-publications\/building-australias-artificial-intelligence-capability\/ai-ethics-framework\/ai-ethics-principles\">https:\/\/www.industry.gov.au\/data-and-publications\/building-australias-artificial-intelligence-capability\/ai-ethics-framework\/ai-ethics-principles<\/a> [https:\/\/perma.cc\/53C2-67R7] (last visited Oct. 18, 2020). <a href=\"#post-696-footnote-ref-23\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-24\">. <em>DOD Adopts Ethical Principles for Artificial Intelligence<\/em>, U.S. Dep\u2019t Def. 
(2020), https:\/\/www.defense.gov\/Newsroom\/Releases\/Release\/Article\/2091996\/dod-adopts-ethical-principles-for-artificial-intelligence\/ [https:\/\/perma.cc\/4SHL-DZ9J] (last visited Oct. 18, 2020). <a href=\"#post-696-footnote-ref-24\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-25\">. <em>Microsoft AI Principles<\/em>, Microsoft, https:\/\/www.microsoft.com\/en-us\/ai\/responsible-ai [https:\/\/perma.cc\/E2KB-LR9X] (last visited Oct. 18, 2020). <a href=\"#post-696-footnote-ref-25\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-26\">. <em>Artificial Intelligence at Google: Our Principles<\/em>, Google AI, https:\/\/ai.google\/principles\/ [https:\/\/perma.cc\/P4DV-RGZN] (last visited Oct. 18, 2020). <a href=\"#post-696-footnote-ref-26\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-27\">. <em>Artificial Intelligence: An Evangelical Statement of Principles<\/em>, Ethics &amp; Religious Liberty Commission of the S. Baptist Convention (2019), https:\/\/erlc.com\/resource-library\/statements\/artificial-intelligence-an-evangelical-statement-of-principles [https:\/\/perma.cc\/58PE-AWZW] (last visited Oct. 18, 2020). <a href=\"#post-696-footnote-ref-27\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-28\">. Anna Jobin, Marcello Ienca &amp; Effy Vayena, <em>The Global Landscape of AI Ethics Guidelines<\/em>, 1 Nature Machine Intelligence 389 (2019) (\u201cOur results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.\u201d). <a href=\"#post-696-footnote-ref-28\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-29\">. Calo, <em>supra <\/em>note 11, 560. <a href=\"#post-696-footnote-ref-29\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-30\">. 
Scherer, <em>supra<\/em> note 15, at 357. <a href=\"#post-696-footnote-ref-30\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-31\">. <em>Id. <\/em>at 366\u201367 <em>(<\/em>describing AI as \u201ca potential source of public risk on a scale that far exceeds the more familiar forms of public risk that are solely the result of human behavior\u201d)<em>.<\/em> <a href=\"#post-696-footnote-ref-31\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-32\"><em> . See<\/em>,<em> e.g.<\/em>, Andrew W. Brown &amp; David B. Allison, <em>Unintended Consequences of Obesity-Targeted Health Policy<\/em>, 15 Am. Med. Ass\u2019n J. Ethics 339 (2013); Mark Wolfson &amp; Mary Hourigan, <em>Unintended Consequences and Professional Ethics: Criminalization of Alcohol and Tobacco Use by Youth and Young Adults<\/em>, 92 Addiction 1159 (1997). <a href=\"#post-696-footnote-ref-32\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-33\">. Laurie et al., <em>supra<\/em> note 10. <a href=\"#post-696-footnote-ref-33\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-34\">. Ken Ward, <em>Social Networks, the 2016 US Presidential Election, and Kantian Ethics: Applying the Categorical Imperative to Cambridge Analytica\u2019s Behavioral Microtargeting<\/em>, 33 J. Media Ethics 133 (2018). <a href=\"#post-696-footnote-ref-34\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-35\">. Michal Kosinski, David Stillwell &amp; Thore Graepel, <em>Private Traits and Attributes Are Predictable from Digital Records of Human Behavior<\/em>, 110 Proc. Nat\u2019l Acad. Sci. U.S. 5802, 5805 (2013) (\u201c[T]he predictability of individual attributes from digital records of behavior may have considerable negative implications, because it can easily be applied to large numbers of people without obtaining their individual consent and without them noticing. 
Commercial companies, governmental institutions, or even one\u2019s Facebook friends could use software to infer attributes such as intelligence, sexual orientation, or political views that an individual may not have intended to share. One can imagine situations in which such predictions, even if incorrect, could pose a threat to an individual\u2019s well-being, freedom, or even life.\u201d). <a href=\"#post-696-footnote-ref-35\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-36\">. Casey Fiesler, <em>What Our Tech Ethics Crisis Says About the State of Computer Science Education<\/em>, How We Get to Next (Dec. 5, 2018), https:\/\/howwegettonext.com\/what-our-tech-ethics-crisis-says-about-the-state-of-computer-science-education-a6a5544e1da6 [https:\/\/perma.cc\/V7AT-VVN3]. <a href=\"#post-696-footnote-ref-36\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-37\">. <em>Id.<\/em> <a href=\"#post-696-footnote-ref-37\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-38\">. Brent Hecht et al., <em>It\u2019s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process<\/em>, ACM Future of Computing Acad. Blog (Mar. 29, 2018), https:\/\/acm-fca.org\/2018\/03\/29\/negativeimpacts\/ [https:\/\/perma.cc\/8J9R-CFZT]. <a href=\"#post-696-footnote-ref-38\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-39\">. Laurie et al., <em>supra<\/em> note 10, at 7. <a href=\"#post-696-footnote-ref-39\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-40\"><em> . Id.<\/em> at 2\u20133. <a href=\"#post-696-footnote-ref-40\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-41\">. Coiera, <em>supra<\/em> note 18 (citing <em>Roy Amara 1925\u20132007, American futurologist,<\/em> <em>in<\/em> Oxford Essential Quotations (Ratcliffe S., ed., 4th ed., 2016)). <a href=\"#post-696-footnote-ref-41\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-42\">. 
Sarah Phillips, <em>A Brief History of Facebook,<\/em> The Guardian (July 25, 2007), https:\/\/www.theguardian.com\/technology\/2007\/jul\/25\/media.newmedia [https:\/\/perma.cc\/4UEH-Y3W5]. <a href=\"#post-696-footnote-ref-42\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-43\">. Multiple research studies have shown that Facebook has a direct impact on voter turnout. Katherine Haenschen, <em>Social Pressure on Social Media: Using Facebook Status Updates to Increase Voter Turnout<\/em>, 66 J. Comm. 542 (2016). Researchers have also shown the role that Facebook and similar companies play in shaping elections in the context of political communication and advertising. Daniel Kreiss &amp; Shannon C. McGregor, <em>Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle<\/em>, 35 Pol. Comm. 155 (2017). Finally, there has been a great deal of speculation about the specific impact of Facebook on the 2016 presidential election, pointing to misinformation, political advertising, and personalized newsfeeds. Alexis C. Madrigal, <em>What Facebook Did to American Democracy<\/em>, The Atlantic (Oct. 12, 2017), https:\/\/www.theatlantic.com\/technology\/archive\/2017\/10\/what-facebook-did\/542502\/ [https:\/\/perma.cc\/VY6Y-KD6H]. <a href=\"#post-696-footnote-ref-43\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-44\">. Merton, <em>supra<\/em> note 6, at 898\u2013901 (\u201cThe most obvious limitation to a correct anticipation of consequences of action is provided by the existing state of knowledge\u2026 A second major factor of unexpected consequences of conduct, which is perhaps as pervasive as ignorance, is error.\u201d). <a href=\"#post-696-footnote-ref-44\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-45\">. Dan Jerker B. Svantesson, <em>The Times They Are A-Changin\u2019<\/em>, Forum on Pub. 
Pol\u2019y 4\u20135 (2015) (\u201cThe functional equivalence approach is based on an analysis of the purposes and functions of the traditional paper based requirement with a view to determining how those purposes or functions could be fulfilled through electronic commerce techniques. \u2026 Technology has advanced with great speed in recent years. It is likely to continue to do so. Unlike technology, the law tends to develop slowly, usually by reacting to situations only as they arise.\u201d). <a href=\"#post-696-footnote-ref-45\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-46\">. Calo, <em>supra<\/em> note 11, at 559. <a href=\"#post-696-footnote-ref-46\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-47\">. Nat\u2019l Cable &amp; Telecomm. Ass\u2019n v. Brand X Internet Services, 545 U.S. 967, 991 (2005). <a href=\"#post-696-footnote-ref-47\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-48\">. Svantesson, <em>supra<\/em> note 43, at 9. <a href=\"#post-696-footnote-ref-48\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-49\">. Laurie et al., <em>supra<\/em> note 10, at 3. <a href=\"#post-696-footnote-ref-49\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-50\">. Blackford, <em>supra<\/em> note 8, at 8 (citing Isaac Asimov, Asimov on Science Fiction (1981)). <a href=\"#post-696-footnote-ref-50\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-51\">. Anders et al., <em>supra<\/em> note 2, at 11. <a href=\"#post-696-footnote-ref-51\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-52\">. Omar Mubin et al., <em>Reflecting on the Presence of Science Fiction Robots in Computing Literature<\/em>, 8 ACM Trans. Human-Robot Interaction 1, 7 (2019). <a href=\"#post-696-footnote-ref-52\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-53\">. Mark Fenwick, Wulf A. Kaal &amp; Erik P. M. Vermeulen, <em>Regulation Tomorrow: What Happens When Technology Is Faster than the Law?<\/em>, 3 Am. U. Bus. L. Rev. (2017). 
<a href=\"#post-696-footnote-ref-53\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-54\">. Elizabeth Mertz et al., <em>Forty-five Years of Law and Literature: Reflections on James Boyd White\u2019s \u201cThe Legal Imagination\u201d and its Impact on Law and Humanities Scholarship<\/em>, 13 L. &amp; Human. 95, 96 (2019) (describing James White\u2019s 1973 book <em>The Legal Imagination<\/em> as an approach to legal education that involves \u201creading law\u2019s instruments, its rhetoric and concepts alongside, above, below and in-between literary works and criticism.\u201d); <em>see<\/em> Carol Parker, <em>A Liberal Education in Law: Engaging the Legal Imagination Through Research and Writing Beyond the Curriculum<\/em>, 1 J. Ass\u2019n Legal Writing Directors 130, 132\u201333 (2008) (examining one specific aspect of this approach, the importance of metaphor and its role in both legal reasoning and imagination). <a href=\"#post-696-footnote-ref-54\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-55\"><em> . See<\/em> Saunders &amp; Levine, <em>supra<\/em> note 13. <a href=\"#post-696-footnote-ref-55\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-56\">. <em>Id.<\/em> at 2. <a href=\"#post-696-footnote-ref-56\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-57\">. Philip C. Kissam, <em>Law School Examinations<\/em>, 42 Vand. L. Rev. 433, 440 (1989). <a href=\"#post-696-footnote-ref-57\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-58\">. Casey Fiesler, Natalie Garrett &amp; Nathan Beard, <em>What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis<\/em>, Proc. ACM SIGCSE Tech. Symp. Computer Sci. Educ. 1, 5 (2020). <a href=\"#post-696-footnote-ref-58\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-59\"><em> . See<\/em> Ward, <em>supra<\/em> note 32. <a href=\"#post-696-footnote-ref-59\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-60\">. Kosinski et al., <em>supra<\/em> note 33. 
<a href=\"#post-696-footnote-ref-60\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-61\">. Parker, <em>supra<\/em> note 52, at 132 (quoting Steven L. Winter, <em>Death is the Mother of Metaphor<\/em>, 105 Harv. L. Rev. 745, 759 (1992)). <a href=\"#post-696-footnote-ref-61\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-62\">. Kissam, <em>supra<\/em> note 55, at 440. <a href=\"#post-696-footnote-ref-62\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-63\">. Parker, <em>supra<\/em> note 52, at 132\u201333. <a href=\"#post-696-footnote-ref-63\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-64\">. <em>See <\/em>Laurie et al.,<em> supra<\/em> note 10. <a href=\"#post-696-footnote-ref-64\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-65\"><em> . See<\/em> Miller &amp; Bennett, <em>supra<\/em> note 12. <a href=\"#post-696-footnote-ref-65\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-66\">. Fiesler et al., <em>supra<\/em> note 56 (describing the content of 100+ syllabi from tech ethics courses). <a href=\"#post-696-footnote-ref-66\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-67\">. Emanuelle Burton, Judy Goldsmith &amp; Nicholas Mattei, <em>How to Teach Computer Ethics through Science Fiction<\/em>, 61 Comm. ACM 54, 64 (2018). <a href=\"#post-696-footnote-ref-67\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-68\">. <em>Id.<\/em> at 54. <a href=\"#post-696-footnote-ref-68\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-69\">. Cory Doctorow, <em>Scroogled<\/em> (2007), <a href=\"https:\/\/cmci.colorado.edu\/~cafi5706\/Scroogled.pdf\">https:\/\/cmci.colorado.edu\/~cafi5706\/Scroogled.pdf<\/a> [https:\/\/perma.cc\/M85C-TX9F] (last visited Oct. 18, 2020). <a href=\"#post-696-footnote-ref-69\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-70\">. Naomi Kritzer, <em>Cat Pictures Please<\/em>, Clarkesworld (2016), <a href=\"http:\/\/clarkesworldmagazine.com\/kritzer_01_15\/\">http:\/\/clarkesworldmagazine.com\/kritzer_01_15\/<\/a> [https:\/\/perma.cc\/X5XG-V2MR] (last visited Oct. 
18, 2020). <a href=\"#post-696-footnote-ref-70\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-71\">. Casey Fiesler, <em>Black Mirror, Light Mirror: Teaching Technology Ethics Through Speculation<\/em>, How We Get to Next (Oct. 5, 2018), https:\/\/howwegettonext.com\/the-black-mirror-writers-room-teaching-technology-ethics-through-speculation-f1a9e2deccf4 [https:\/\/perma.cc\/9N64-R4U3]. <a href=\"#post-696-footnote-ref-71\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-72\"><em> . Black Mirror: The Entire History of You<\/em> (Netflix Dec. 11, 2011). <a href=\"#post-696-footnote-ref-72\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-73\">. Casey Fiesler, <em>Ethical Considerations for Research Involving (Speculative) Public Data<\/em>, 3 GROUP Proc. ACM Hum.-Computer Interactions 249, 249:2 (2019). <a href=\"#post-696-footnote-ref-73\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-74\">. Dan Froomkin &amp; Jenna McLaughlin, <em>FBI vs. Apple establishes a new phase of the crypto wars<\/em>, Intercept (Feb. 26, 2016, 12:13 PM), https:\/\/theintercept.com\/2016\/02\/26\/fbi-vs-apple-post-crypto-wars\/ [https:\/\/perma.cc\/WJU5-37UK]. <a href=\"#post-696-footnote-ref-74\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-75\">. Katz v. United States, 389 U.S. 347 (1967) (establishing reasonable expectation of privacy with respect to phone calls). <a href=\"#post-696-footnote-ref-75\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-76\"><em> . See <\/em>Opher Shweiki &amp; Youli Lee, <em>Compelled Use of Biometric Keys to Unlock a Digital Device: Deciphering Recent Legal Developments<\/em>, 67 Dep\u2019t of Just. J. Fed. L. &amp; Pract. 23 (2019). <a href=\"#post-696-footnote-ref-76\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-77\">. Annie Dike, <em>Alexander Graham Bell Day Calls for Patent Trivia: Time to See How \u201cPhone Smart\u201d You Are<\/em>, 10 Nat\u2019l L. Rev. 272 (Mar. 
6, 2018), https:\/\/www.natlawreview.com\/article\/alexander-graham-bell-day-calls-patent-trivia-time-to-see-how-phone-smart-you-are [https:\/\/perma.cc\/3S8Q-CFSB]. <a href=\"#post-696-footnote-ref-77\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-78\"><em> . Black Mirror: Be Right Back<\/em> (Netflix Feb. 11, 2013). <a href=\"#post-696-footnote-ref-78\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-79\">. Fenwick et al., <em>supra<\/em> note 51, at 574. <a href=\"#post-696-footnote-ref-79\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-80\">. Casey Newton, <em>Congress just showed us what comprehensive regulation of Facebook would look like<\/em>, The Verge (July 31, 2018); Fiesler, <em>supra<\/em> Part I. <a href=\"#post-696-footnote-ref-80\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-81\">. A slide deck containing a set of these hypotheticals can be downloaded at http:\/\/cmci.colorado.edu\/~cafi5706\/blackmirror_speculativeregulation.pptx [https:\/\/perma.cc\/7EVK-38TX]. <a href=\"#post-696-footnote-ref-81\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-82\">. Stephen Schahrer, <em>First, Let Me Take a Selfie: Should a Monkey Have Copyrights to His Own Selfie?<\/em>, 12 Liberty U. L. Rev. 135\u201365 (2017). <a href=\"#post-696-footnote-ref-82\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-83\">. Amanda Lagerkvist, <em>The Netlore of the Infinite: Death (and Beyond) in the Digital Memory Ecology<\/em>, 21 New Rev. Hypermedia &amp; Multimedia 185, 189 (2015). <a href=\"#post-696-footnote-ref-83\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-84\"><em> . Black Mirror: Nosedive <\/em>(Netflix Oct. 21, 2016). <a href=\"#post-696-footnote-ref-84\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-85\"><em> . <\/em>Journalism professor Jeremy Littau at Lehigh University uses this example to spur discussion among his students about the future of communication and technology. Stewart, <em>supra<\/em> note 1, at 10. 
<a href=\"#post-696-footnote-ref-85\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-86\">. Lagerkvist, <em>supra<\/em> note 81. <a href=\"#post-696-footnote-ref-86\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-87\">. Stewart, <em>supra<\/em> note 1, at 15. <a href=\"#post-696-footnote-ref-87\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-88\"><em> . Id.<\/em> at 18. <a href=\"#post-696-footnote-ref-88\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-89\">. Dunne &amp; Raby, <em>supra<\/em> note 4, at 74\u201375. <a href=\"#post-696-footnote-ref-89\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-90\">. Casey Fiesler, <em>The <\/em>Black Mirror<em> Writers\u2019 Room: A Speculative Exercise<\/em> (July 8, 2020), https:\/\/docs.google.com\/presentation\/d\/1fZah6nYpAhLtUMh1BRy3w1vCHk_-W7bxxv0LeuKZpT0\/edit#slide=id.g63d578e5a7_0_0 [https:\/\/perma.cc\/SE97-9R8Y]. <a href=\"#post-696-footnote-ref-90\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-91\">. Stewart, <em>supra<\/em> note 1, at ix. <a href=\"#post-696-footnote-ref-91\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-92\"><em> . Id.<\/em> at 185. <a href=\"#post-696-footnote-ref-92\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-93\"><em> . Id.<\/em> at 9. <a href=\"#post-696-footnote-ref-93\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-94\"><em> . Id.<\/em> at 9\u201310. <a href=\"#post-696-footnote-ref-94\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-95\"><em> . Id.<\/em> at 19. <a href=\"#post-696-footnote-ref-95\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-96\"><em> . Id.<\/em> at 31. <a href=\"#post-696-footnote-ref-96\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-97\">. <em>Interdisciplinary Ethics Tech Competition<\/em>, U. of Colo. Boulder, https:\/\/www.colorado.edu\/law\/academics\/daniels-fund-ethics-initiative-collegiate-program-colorado-law\/programs-and-events-0 [https:\/\/perma.cc\/33Q7-DNEF] (last visited Oct. 18, 2020). 
<a href=\"#post-696-footnote-ref-97\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-98\">. Charlie Brooker, <em>The Dark Side of Our Gadget Addiction<\/em>, The Guardian (Dec. 1, 2011), https:\/\/www.theguardian.com\/technology\/2011\/dec\/01\/charlie-brooker-dark-side-gadget-addiction-black-mirror [https:\/\/perma.cc\/YB5J-EYXT]. <a href=\"#post-696-footnote-ref-98\">\u2191<\/a><\/li>\n<li id=\"post-696-footnote-99\"><em> . Id.<\/em> <a href=\"#post-696-footnote-ref-99\">\u2191<\/a><\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination by Casey Fiesler[1]* \u201cScience fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.\u201d \u2013 Isaac Asimov[2] Introduction We all tell stories\u2014to ourselves, and to others\u2014about the future. These stories typically draw us in two [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"",
"background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[6,9,8],"tags":[],"class_list":["post-696","post","type-post","status-publish","format-standard","hentry","category-6","category-printed","category-volume19"],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false,"portfolio_item-thumbnail":false,"portfolio_item-thumbnail@2x":false,"portfolio_item-masonry":false,"portfolio_item-masonry@2x":false,"portfolio_item-thumbnail_cinema":false,"portfolio_item-thumbnail_portrait":false,"portfolio_item-thumbnail_portrait@2x":false,"portfolio_item-thumbnail_square":false},"uagb_author_info":{"display_name":"Casey Fiesler","author_link":""},"uagb_comment_info":0,"uagb_excerpt":"Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination by Casey Fiesler[1]* \u201cScience fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.\u201d \u2013 Isaac Asimov[2] Introduction We all tell stories\u2014to ourselves, and to others\u2014about the future. 
These stories typically draw us in two&hellip;","featured_media_urls":[],"_links":{"self":[{"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=\/wp\/v2\/posts\/696","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=696"}],"version-history":[{"count":4,"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=\/wp\/v2\/posts\/696\/revisions"}],"predecessor-version":[{"id":747,"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=\/wp\/v2\/posts\/696\/revisions\/747"}],"wp:attachment":[{"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=696"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=696"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ctlj.colorado.edu\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=696"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}