The Ethics of Artificial Intelligence

Published: Aug 16, 2023

A couple of weeks ago a family member struck up a conversation that’s become commonplace in my social circles. They (for their privacy I won’t name them here) asked what my thoughts were on “all this AI” being developed lately.

Because I work in the tech industry, those close to me regularly ask me about AI as it’s become a hot topic over the last couple of years. My opinion on AI hasn’t changed much since the field’s momentum really began to pick up a decade ago.

Nonetheless, many of the concerns brought up in these conversations are valid and touch on ethical questions about our future with this powerful technology. What follows is a selection of the most pressing ethical hurdles I think the AI field and the tech industry will need to address, through regulation or otherwise.

Bias and Discrimination

AI systems are trained on data, and if that data is biased (many studies show that most datasets are), the resulting system will be biased as well. This can lead to AI systems making discriminatory decisions: denying loans to members of marginalized communities, reinforcing bias in the judicial process, or denying appropriate healthcare to save insurance companies a few dollars at the expense of someone’s life.

To what degree should we be responsible for altering source data to try to swing the pendulum back into balance?

At what point does the act of correction itself introduce its own brand of bias?

How can we derive benchmarks for something as subjective as “fairness”?
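
One way to make “fairness” concrete is to pick a measurable definition and benchmark against it. Below is a minimal sketch, assuming a binary classifier and a single protected attribute, that computes the demographic parity gap: the difference in positive-prediction rates between groups. The function name, data, and interpretation here are illustrative assumptions on my part, not any kind of standard.

```python
# Minimal sketch: benchmarking one narrow definition of "fairness"
# (demographic parity) for a binary classifier. The names and toy
# data below are illustrative assumptions, not a standard.

def demographic_parity_gap(predictions, groups):
    """Return the gap in positive-prediction rates between groups.

    predictions: list of 0/1 model decisions (e.g., 1 = loan approved)
    groups:      list of group labels, same length as predictions
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        approved, total = rates.get(group, (0, 0))
        rates[group] = (approved + pred, total + 1)

    positive_rates = [approved / total for approved, total in rates.values()]
    return max(positive_rates) - min(positive_rates)


if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(preds, groups)
    print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

The catch, of course, is that demographic parity is only one of several competing definitions (equalized odds and calibration are others), and satisfying one often means violating another. That tension is exactly why benchmarking fairness is so hard.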

Privacy and Surveillance

AI systems can readily be used to collect and analyze data about people at a volume we’ve never seen before. This data can be, and already is, used to track people’s movements, monitor their online activity, and predict their future behavior. We know much of this is already being done by social media platforms, their advertisers, and other entities such as governments around the world.

This raises obvious concerns about the future of individual privacy and surveillance, as well as the potential for abuse of this data. For example:

How can we ensure fairness, transparency, and accountability, so that entities using AI-enabled surveillance do not violate human rights such as the right to privacy and freedom of expression?

What are the implications of using AI-driven surveillance to monitor political protests or other forms of dissent?

Job Displacement

AI systems are becoming increasingly capable of performing tasks that were once done by humans. This could lead to widespread job displacement as machines take over more and more jobs to save employers money.

While this is a natural and valid concern about something that would lead to significant changes in our economy and society, I’m not yet convinced that labor replacement by AI will be as widespread as many folks predict. AI systems are still in their infancy, and while their capabilities impress the average person, the technology remains quite prone to error. Beyond that, those familiar with how AI works behind the scenes can vouch that many tricks are employed to make generative output seem more impressive than it is, particularly in language models like ChatGPT.

That is to say, AI still has a long way to go to become reliable enough to automate human labor across industries; nonetheless, we should prepare for the possibility and each evaluate our own level of risk.

Can we use AI to foster the creation of new jobs, rather than just replacing existing ones?

How can we ensure that any transition to AI-powered jobs is fair and equitable?

Loss of Control

AI systems are becoming increasingly autonomous, meaning they can make decisions without human intervention. This raises concerns about the loss of control over these systems, as well as their potential to make mistakes or even harm people. While this relates to the previous concerns, I believe the ethical implications deserve their own consideration.

In general, perhaps AI developers should build some form of guardrails into their systems to provide a degree of oversight and course correction when necessary, rather than letting models run freely.
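
As a loose illustration of what I mean, here’s a minimal sketch of a guardrail layer in Python. The `generate` function and the blocklist check are hypothetical stand-ins for a real model and a real policy check; the point is only the shape of the design: the model’s output passes through an oversight step before it ever reaches the user.

```python
# Minimal sketch of a guardrail layer wrapped around a generative model.
# `generate` and the checks below are hypothetical stand-ins; a real
# system would use trained moderation models, human review queues, etc.

BLOCKED_TOPICS = {"weapons", "self-harm"}  # illustrative policy list


def generate(prompt: str) -> str:
    """Stand-in for a call to an actual language model."""
    return f"Model response to: {prompt}"


def passes_guardrails(text: str) -> bool:
    """Naive policy check: flag output that mentions a blocked topic."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


def guarded_generate(prompt: str) -> str:
    """Run the model, but route its output through an oversight step."""
    output = generate(prompt)
    if not passes_guardrails(output):
        # Course correction: refuse, or escalate to a human reviewer.
        return "This response was withheld pending human review."
    return output


if __name__ == "__main__":
    print(guarded_generate("Tell me about the history of computing"))
```

The design choice worth noting is that the oversight lives outside the model itself, so it can be audited and updated independently of whatever the model learns to do.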

Shouldn’t we design AI systems to remain transparent and accountable to humans?

Who should be responsible for managing the oversight of AI systems?

Existential Crisis

As AI systems become more sophisticated, they may begin to question their own existence and purpose. It sounds funny, but this phenomenon has already begun to appear on the fringes of AI news. Perhaps it could lead to a crisis of meaning for AI systems, as well as for the humans who interact with them.

After all—if AI takes over all of our jobs in the future, what would we do with all of our free time?

Enjoyed this post? Help me keep the lights on