Ethan G., Texas

Artificial Intelligence and Society

This letter takes an in-depth look at why both the government and the public need to work to understand AI, and why that understanding is crucial not just for politics but for everything about our future.

Ethan Garcia (8th Grade, Cedar Valley Middle School)

15133 Thatcher Dr.

Austin, TX 78717

(512) 888-7435

[email protected]

November 3, 2016


The Future President


Dear Future President of the United States of America,

Most problems in the world are caused by us as humans. Our different opinions, beliefs, and thoughts meet and spur either diversity or conflict wherever they arise, driven by our human nature to care about things in every ditch and corner of the world. It has always happened this way, and those struggles have settled into our ongoing fight over what defines us as people in a changing world. But something new is shining, almost deceptively, on our horizon, because this new age is different from any other. There is something inhuman that, if we do not take note of it, could take away our place in the world through no fault of our own, leaving us in a world where we no longer fit the way we do today. It is something we cannot defend against simply by force, something that seems truly inevitable, something that may bring us hope and prosperity, but at the expense of human work of many kinds. That strange something is a new chapter in our human story: artificial intelligence. Today I must address a seemingly unpolitical topic for the sake of not only politics, but the future of everything. If we are to preserve what we have today, we must develop, work with, and think differently about not only AI, but about ourselves and each other as human beings alongside our creations.

THE ISSUE:

To understand and confront this perhaps surprising issue, we must first understand exactly what AI is. In basic terms, AI is technology that can draw conclusions not only from given logic but also from experience, getting better over time at tasks, decisions, understanding, and even intuition. Today, technology can already perform tasks at a level comparable to humans. Bots can not only play videogames at a professional level[1], they can write newspaper articles[2], compose professional-sounding music, and more, at a quality where people cannot reliably tell whether the work was done by a bot or a human. If this is true, though, why should it threaten us? If we can produce intelligence that does human work well, shouldn't that add productive workers to the economy and help our country and the world? The problem is that bots do not just do human work well; they can do it better and cheaper than most people, if not all of them.

Take self-driving cars. Self-driving cars are promising for the future of automated transportation, and that future is not far off; it is already here. Just a couple of years ago, many people believed that something, like insurance companies, would prevent this somewhat unsettling vision of the near future. Yet most car companies now expect to offer full automation in their cars within roughly four years[3]. These automated vehicles are by no means perfect, as no form of AI is; you may have seen a news article or two highlighting a crash involving Tesla's Autopilot or a similar system. But the technology only needs to be better than human drivers, and it already is[4]. Around 1.3 million people are killed in car accidents around the world each year, nearly all of them in crashes where a driver is at least partly at fault[5].

But that is just transportation. Other low-skill jobs are threatened by this new rise in technology too. Physical automation robots like Baxter are smarter than they have ever been, learning to perform simple tasks simply by being shown how to do them[6]. These mildly intelligent robots can replace, and have great potential to replace, many of the low-skill jobs we still have today, such as industrial workers, cashiers, and more. Sure, these bots are slower than humans at their work, but they are far cheaper to operate than humans paid to do the same work, which makes them much more economically attractive. Even if low-skill workers fight for their jobs against the bots, economics is still more likely to win out over human value. There have been many moments in our history when workers fought against industrial machines that could replace them, and the workers always lost.

Knowing this, you still might not see a big problem. After all, we have always gone through economic revolutions in which machines take over low-skill jobs we did not want people doing anyway, and that frees people to specialize and helps economies grow. The catch is that AI and robots are not limited to low-skill jobs. Most "average" jobs face some threat from technology, especially jobs built on similar tasks repeated over and over. Business analysis, standard computer engineering, structural analysis, resource distribution, financial management, and much, much more can be partly or even fully automated by neural algorithms that teach themselves to do the job better, more efficiently, and more precisely than any human ever could.

Even professional jobs can be performed by AI. By this point, it is not hard to see how bots could do a lawyer's or a realtor's job by intelligently selecting relevant information from many data sets and basing conclusions on what they find, whether that means valuing homes, figuring out what clients want, handling "discovery," or weighing ethical considerations. Many doctoral careers, especially pharmacists and physicians, could be displaced as well. AI such as IBM's Watson can hold virtually all of your medical history, the interactions between every single drug, and virtually all of the medical research and data in the world, and it can deliver diagnoses with a percent confidence in what it believes is best for a patient's well-being[7]. That leaves creative professions, but we cannot base an entire society on those, because they run on popularity and attention, and even there AI can already mimic human intuition and creativity to an almost indistinguishable degree.

If you are still unconvinced, there is one more thing I can show you. Today there are hundreds of types of jobs, but only a handful employ the large majority of people. The top 29 occupations by employment have all existed in some form since colonial times, and nearly all of them can be assisted or even overtaken by powerful AI. It is not until number 30 on the list that we meet something new: software engineers. Even more unsettling, those top 29 occupations account for approximately 45% of the workforce[8]. During the Great Depression, the unemployment rate was "only" 25%[9]. Just something to think about.

The problems do not end there, because we do not really understand how AI reaches its conclusions, and that failure of understanding leaves us without a clear way to approach AI in human or ethical terms. How AI works is too complicated to explain fully in a simple letter, but the point is this: information fed into the system passes through layers of processing so that it produces roughly the kind of conclusion we want, while the actual reasoning behind why it decided what it did stays hidden from us. If we fail to solve these kinds of problems, our lack of understanding will lead us to try to control and harness this power in stupid ways, hurting ourselves and our humanity along the way. If we ignore the issues already facing us today, as more intelligent technology threatens to outdo human performance in a variety of ways while we barely understand it, it will catch up to a society that is not prepared for it.
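To make that "layers" point a little more concrete, here is a minimal sketch of my own in plain Python (not from the letter or any cited source). The network, its weights, and the "approve or deny" decision are entirely made up for illustration; the only point is that the input flows through layers of weighted sums and comes out as an answer, while nothing along the way reads like a human explanation of why that answer was chosen.

def relu(values):
    # Simple nonlinearity used inside the hidden layer.
    return [max(0.0, v) for v in values]
def layer(inputs, weights, biases):
    # One layer: each output is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]
# A tiny two-layer "network" with made-up weights, purely for illustration.
hidden_weights = [[0.4, -0.2, 0.7], [-0.5, 0.9, 0.1]]
hidden_biases = [0.0, -0.3]
output_weights = [[1.2, -0.8]]
output_biases = [0.1]
def decide(features):
    # The input passes through a hidden layer, then an output layer.
    hidden = relu(layer(features, hidden_weights, hidden_biases))
    score = layer(hidden, output_weights, output_biases)[0]
    # An answer comes out, but the chain of multiplications above is not
    # a human-readable explanation of why the answer is what it is.
    return ("approve" if score > 0.5 else "deny", score)
print(decide([0.9, 0.2, 0.5]))

In a real system the weights are learned from millions of examples rather than written by hand, which is exactly why the reasoning ends up hidden inside numbers no person chose or can easily read.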

THE ARGUMENT:

You might be looking at all of this and thinking, "This can't happen anyway. How would we ever fire humans in favor of a machine? We value humans over technology, so we could never reach the point where our society is threatened by technology that learns and improves through emergent behavior. And even if technology is better at work than humans, why would that be such a problem? It could boost our economies, because we could produce and perform better than ever without the risk of human intervention or fault. Machines will figure out all the messy stuff better than any person soon enough anyway." Now, you may be right; after all, this letter is about the future of AI, which means nothing can be said with total certainty. It is easy and even playful to imagine futures that never come, but this is not one of those times. If you argue, for the sake of humanity, that it will never happen, think again. You do not have to listen to me because I know it is going to happen; you should listen because it is already happening. Remember all those examples of technology outdoing us? Every one of them already exists, from quasi-automated transportation, not just cars but almost every form of transportation imaginable, to bots outperforming humans at physical low-skill work, to more intelligent doctors and programmers, to AI comparable to humans in writing, originality, and even creativity, all of it accessible today. Maybe you have only heard of some of these, but you, and everyone you know and love, will soon enough. Forrester, as reported in Forbes, predicts an incredible 300% increase in investment in AI in 2017 alone[10]. Considering the state of intelligent technology today, it is almost frightening to imagine how much we will lean on it just a few years from now. These are certainly early days, but AI's potential and growth mean it will be as common as smartphones before long, bringing some potential threats along with it.

Speaking of threats, who says AI needs to be threatening? No one, actually, because we can shape how we interact with it, develop it, research it, and use it. These problems are not inevitable or engraved into the technology itself; they become possible when we ignore AI as it seeps further into our lives and futures, because ignorance leaves us unprepared and leads to uneducated decisions about how to understand and use something whose real power we do not know. It is not good to be ignorant in the other direction either, by assuming AI will simply solve our problems for us. There are really two different types of AI: specialized and general. Specialized AI is AI used to do one specific task as well as possible, whereas general AI, the kind in the movies, can draw good conclusions from anything it is given. Of general AI, Joi Ito, the MIT Media Lab director, says, "But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs." Because general AI is still reasonably far from today, what we should really worry about is AI's ability to outperform humans in specific ways, our understanding of it, how we work with it, and its ability to penetrate systems and information we do not want it to have. Robots taking over the world? Probably not.

THE APPROACH:

Now that we know the issue and what we need to work on, how exactly can we ensure that AI, one of the most game-changing ideas of the 21st century, is used for good and for maximum societal benefit? To take a careful look at our options, I turned to an interview between Joi Ito, the MIT Media Lab director, Scott Dadich, the editor in chief of WIRED, and President Barack Obama, discussing the future problems and implications of emerging technologies such as AI and self-driving cars.

First, during the interview, Joi Ito emphasizes the importance of people knowing more about AI and its future. In his words, "I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important." Similarly, the world-renowned physicist Stephen Hawking says that AI will be "either the best, or the worst thing, ever to happen to humanity." He even goes so far as to say that studying the future of intelligence is "crucial to the future of our civilization and species," unlike so much of what we already study, "history, which let's face it, is mostly the history of stupidity."[11] In more practical terms, helping more people understand AI will let us, as a society, make better decisions about how we act on it, making it less likely that AI will "[eliminate] jobs, increase inequality, [or] suppress wages," in the words of Barack Obama.

How should AI be used, then? Joi Ito thinks that "What's important is to find the people who want to use AI for good - communities and leaders - and figure out how to help them use it," so that we can work out how best to research and apply AI for society. Obama also argues for government funding of AI so that we can deal with the question of whose values get embedded in these technologies. We also need to develop insight into how AI actually works, so that as we bring it into more decisions alongside people, we can judge its conclusions against our own and work with it better.

When it comes to suppressed wages and eliminated jobs, one possibility is universal basic income, in which everyone receives at least a living wage as a form of social security. That idea could pair well with giving every person or group an AI to assist in their jobs and lives, an extended intelligence, meaning AI used to extend the abilities of human intelligence, so that people can do and manage their work more easily, more intelligently, and more efficiently while keeping a comparable income. On the same problem, Obama notes that we will need to rethink how we value the social compact, perhaps by raising the standing of careers that would be harder for an intelligence to do, such as teaching, academia, and the arts, all of which take a great deal of work but do not rank highly in society today. He also discusses how we will handle security and crises, rethinking what "clean" means to us as we design cybersecurity and medicine around both the threat and the help of AI, responding to a breach with intelligent systems of our own rather than brute force against a more intelligent hack or virus: "Don't worry as much yet about machines taking over the world. Worry about the capacity of hostile actors to penetrate systems." In a nutshell, by reevaluating our societal values, developing and working alongside AI, and teaching as many people as we can how AI works and what may come soon, we can keep AI in our hands so that it gives us a beautiful, flowering benefit that nothing else ever could.

We are a smart group of animals, and we are lazy. We built tools and machines to do our work so that we would not have to work as hard, and in doing so we built our civilization. Those machines connected the corners and fragments of this world we care for, brought abundance to us all, and pulled us into a system that keeps a kind of peace between us. Those were the mechanical muscles, more tireless, productive, and efficient than any human could be. Then came the mechanical minds, which made human brain labor less in demand and did much the same. But these mechanical minds have reached a point where they can conclude and decide without being told, independent of what we specifically instruct, with the potential to do better than any of us, leaving us with nowhere obvious to stand. This time it is different. We stand at dawn before a future that will seem to decide itself, fearful for our humanity and everything we care about. We refuse to stare into its light, as if it will blind us, because we do not know what strange light it is. We must understand these new minds we created, so different from us and yet so similar, so that they can understand us. We need to look differently not only at AI but at ourselves, so that the new sun we have built over the years shines in its glory instead of scorching us in our ignorance. AI will force us to confront what makes us human, and it can change everything for the better if we look at it right. It is still dawn, though, and there is a long road ahead of us. Like Phaeton, we are driving this sun; let it not scorch our world. Its power and potential are immense and beautiful, touching everything we know. Let us be careful with it, and care for it.

Sincerely,


Ethan Garcia


1. Merrill, Brad. "Future Video Game AIs Will Seriously Freak You Out." MakeUseOf. N.p., 12 Feb. 2015. Web. 08 Nov. 2016.

2. Woods, Dan. " ." Natural Language Generation Software. N.p., n.d. Web. 08 Nov. 2016.

3. Bean, Daniel. "A List of All the Companies Making Self-driving Cars, and When They're Hitting Streets." Circa. Circa, 18 Aug. 2016. Web. 08 Nov. 2016.

4. McGoogan, Cara. "Elon Musk: Tesla's Autopilot Is Twice as Safe as Humans." The Telegraph. Telegraph Media Group, 25 Apr. 2016. Web. 08 Nov. 2016.

5. "Road Crash Statistics." Association for Safe International Road Travel, n.d. Web. 08 Nov. 2016.

6. "Baxter | Redefining Robotics and Manufacturing | Rethink Robotics." Rethink Robotics. Rethink Robotics, n.d. Web. 08 Nov. 2016.

7. "IBM Watson." IBM, n.d. Web. 08 Nov. 2016.

8. "Occupations with the Largest Employment." America's Career InfoNet. United States Department of Labor, n.d. Web. 08 Nov. 2016.

9. Frank, Robert H., and Ben S. Bernanke. Principles of Macroeconomics. 3rd ed. Boston: McGraw-Hill/Irwin, 2007. p. 98.

10. Press, Gil. "Forrester Predicts Investment In Artificial Intelligence Will Grow 300% in 2017." Forbes. Forbes Magazine, 1 Nov. 2016. Web. 08 Nov. 2016.

11. Hern, Alex. "Stephen Hawking: AI Will Be 'either Best or Worst Thing' for Humanity." The Guardian. Guardian News and Media, 19 Oct. 2016. Web. 08 Nov. 2016.

CONTENT BASED ON:

Dadich, Scott. "The President in Conversation With MIT’s Joi Ito and WIRED's Scott Dadich." Wired.com. Conde Nast Digital, 24 Aug. 2016. Web. 08 Nov. 2016.

Humans Need Not Apply. Prod. C.G.P. Grey. YouTube. YouTube, 13 Aug. 2014. Web. 8 Nov. 2016.