by Susan Miller
The futuristic TV show Black Mirror explores various potential technological advancements and their consequences. For example, in the episode “Be Right Back,” the character of Martha recreates her recently deceased boyfriend, Ash, using artificial intelligence (“AI”) technology that draws on his social media and text history to imitate him. Other books, movies, and TV shows have also explored the possibilities of AI. In some alternate worlds and futures, AI leads to subservient and helpful robots, such as R2-D2 in Star Wars, Baymax from Big Hero 6, and the robots in WALL-E. Other futures are darker, with humans enslaved or destroyed by their own creations, as in The Terminator, The Matrix, and 2001: A Space Odyssey. These fictional futures are enjoyable to watch and think about, but as AI becomes the focus of many large tech companies, concerns about its effects grow more real.
AI in the real world is often defined as machine learning: the ability of computers to learn difficult tasks and adapt to changes in their environment. While consumers might cringe at the idea of Martha in “Be Right Back” communicating with technology that acts as her deceased boyfriend after learning his speech patterns and habits, those same consumers might tell their Amazon Echo or Google Home to pause or play the next episode. Companies such as Apple, Amazon, and Google have already created and marketed intelligent personal assistants that can play music, answer questions, follow directions, and open and use apps. While these technologies don’t allow the imitation of real people, the creators of Siri have programmed in many sarcastic or sassy answers to immature or trick questions, which gives Siri a personality without being self-aware.
These intelligent personal assistants offer real benefits to society. For one, they make technology more accessible to many people. The ability to make calls or find recipes hands-free also helps with multitasking in the kitchen or while doing chores. And self-driving cars, another form of AI currently in development, could reduce the number of accidents and provide easier transportation for those unable to drive. (However, using Siri while driving may not prevent accidents caused by distracted drivers, since dictating to Siri may be almost as distracting as actually using a phone.) Perhaps one day a Baymax-like assistant will exist, providing answers, medical care, transportation, and meals.
As these technologies develop and we continue to rely on them to make our lives easier, it is important for creators and government regulators to think about not only their effectiveness but also their effects on society. For example, how might families’ reliance on Alexa and Siri affect the way people consume information? The internet and the availability of information have already changed the way people research and read. How might relying on speech to gather information affect writing, research, and typing skills? Another concern raised by parents and educators is what children will or will not learn from interacting with the Alexas and Siris of the world. There are no consequences, for instance, for shouting or demanding answers from Alexa or Siri. There is currently no need to ask with a “please” or end a conversation with a “thank you,” which could influence how a child learns to interact in conversation. Unfortunately, these social concerns are unlikely to have any regulatory or legislative solution.
Other concerns raised by AI can be regulated or influenced by legislation. For example, concerns that these devices will further broaden the global digital divide as developed countries race ahead in technological innovation might be alleviated by national or international regulation, whether aimed at AI directly or more broadly at access to the internet and technology. Additionally, twenty-one states have enacted legislation regarding testing and pilot programs for self-driving cars, and federal agencies such as the National Highway Traffic Safety Administration have issued guidance on autonomous vehicles. As legislators turn to regulating AI, they should consider a regulation’s effects on innovation and whether they have enough information about the technology before offering solutions.
Science fiction is an enjoyable genre that provides wonderful opportunities for discourse on a variety of real-life problems. While hopefully most of its dystopian predictions remain fiction, it is important for innovators and regulators to consider not only the efficiencies and benefits of AI but also its unintended consequences.
*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.