Maybe you’ve learned about AI from Hollywood dystopian films or from your Facebook news feed. To many, AI can seem like a scary thing of the future, but whether we realize it or not, we are surrounded by and interact with AI every day. Products such as Siri and Echo are common technologies now powered by AI, as are Google Maps, Uber and Lyft, and your email inbox. Yet, despite the pervasive presence of AI, many of us still do not understand the technology or its ethical impacts.

That’s just what gnoviCon 2018 was all about. gnoviCon is an annual academic conference produced by gnovis, a peer-reviewed, student-led academic publication of Georgetown University’s Communication, Culture and Technology (CCT) program. The 7th annual event focused on AI, calling upon US experts from industry and academia to speak on panels about work and ethics. The second panel – The Ethics of Artificial Intelligence – featured the following:

Moderator: Dr. Meg Leta Jones

Panelists: David Robinson, Elana Zeide, Leslie Harris, and Amanda Levendowski

Focusing on the wider societal impacts of AI and our understandings of the technology, the panel covered a swath of topics including incarceration, education, law, implicit bias, and ultimately, what intelligence really means.

The panel began with David Robinson, who sought to debunk the idea that AI is some magical, unknown, or scary thing. Instead, he emphasized that AI is simply another way for humans to recognize patterns and then use those patterns to make predictions in practical and important ways. “When you hear AI, think pattern finder,” Robinson said,

“AI is a mirror for society that we hold up to detect and rely upon patterns.”

It is these patterns, he stressed, that deserve scrutiny: “For any machine learning system it is important to keep two questions in mind: 1. Where are you looking for the patterns, and 2. What are you trying to predict?”

As AI is a technology created by human minds, it is laden with human ethics, and it is within this ethical terrain that Robinson works on topics such as civil rights and incarceration. While AI is increasingly being used in the criminal justice system with tremendous capabilities, he urged us to maintain a skeptical stance. Instead of taking numbers at face value, we all must work to notice the “differences between what data is measured and what we really care about,” because it is in this difference that disparities between past patterns and future hopes arise. Attending to that gap will allow more people “to feel intrepid when in a position of public responsibility, especially when they do not have a background in technology,” which, he said, is exactly what we need. As AI becomes integrated into more aspects of society while also becoming a private enterprise, we must treat it as a critical opportunity to revisit the ethics we assume.

Next, Elana Zeide addressed the use of AI in education, where it is typically deployed in three ways: courses (such as online classes or independent education providers), algorithmic credentials (a new version of transcripts), and early warning systems (continuous collection of students’ activities can help identify youth at risk of dropping out). While these systems are useful and important for bringing education to more people, easing institutional challenges and demands, and providing support, they have the potential to limit intellectual experimentation. Because any activity a student undertakes may be captured by an AI system and recorded in their algorithmic credentials, this ‘permanent record’ may limit students’ willingness to challenge themselves academically, leading them to opt for easier course loads instead.

An additional, and potentially more dangerous, implication is that when institutions increasingly rely upon predictive models to inform decisions, “they are inherently promoting the status quo, and without some other added value being incorporated into the system, either computationally or institutionally, you really risk having that prediction replicate existing patterns of inequity.” This reliance “has fundamental implications for the educational system,” Zeide added, in that the data sets used embed bias.

To illustrate the point, Zeide asked, “Who is the most likely to succeed as a physics major? Well, based upon historical data, probably a white guy.”

In order to alter and eliminate these dangerous biases, educational institutions need structural shifts that account for the influence of humans on data and of data on humans.

Leslie Harris, a CCT professor, discussed the intersection of law and AI. She began by addressing the dominant use of AI by technology companies such as Facebook: “AI has already changed your life and perhaps your world view” in ways most of us do not understand or recognize. While the law may grant us rights to speak and communicate, those rights begin to change when we live on private platforms. “As soon as you log onto a platform, you are in a company town” in which your rights and information are increasingly directed and controlled by those companies. Instead of enjoying an open platform for sharing knowledge, we are becoming limited by the power of algorithms dictating “what we are allowed to see.” To be good, educated citizens we must have knowledge and understanding of all sides of an issue, yet “60% of Facebook users have no idea that they are receiving content based on an algorithm, and 65% of [Americans] only get their information through Facebook.”

“So what does this say? The algorithmic profiling, prediction, and curation of information is being decided by Facebook and shaping our understanding of the world… and robbing us of our autonomy, perhaps… and impoverishing our democracy.”

Harris urged us, as users, to ask what the ethical responsibilities of these companies are. “These are deep ethical issues which are not being asked at these companies,” so it is up to us to ask them. Nor are these issues confined to the AI of social media; they are seeping into more complex realms of law and policy. As citizens, we must be aware of these movements and of our role in them.

Finally, Amanda Levendowski turned the conversation to the challenges of creating and training AI in a world laden with bias. One of the issues with AI, she identified, is that because it is a technological tool, we assume it has no bias.

“Even as AI is increasingly adopted by our banks and our bosses, our cars and our courts, bias… remains a significant and complex problem.” 

AI can pick up, perpetuate, and exacerbate implicit bias. One example Levendowski showcased is that many AI systems are trained on openly available data, such as the Enron emails. This has significant implications for bias and ethics because the data is neither representative nor created with AI training in mind. “Using these emails to train AI highlights how rules of copyright law can privilege low friction data, and has serious implications not just for bias but for ethics, as well.” Ultimately, AI needs good data to create good algorithms, but copyright and other laws limit that data, and our own ethics are skewed by the data that remains.

“AI just means automation of biases at scale.”

Levendowski ended the panel by reinforcing the idea that ethics are not just someone else’s department, but a responsibility of us all.


Want more? Turn to the gnovis journal for more on artificial intelligence.