I had a conversation the other day with my good friend, Kasra Khalili, that stuck with me.
We were discussing the viral rise of AI since the release of ChatGPT and its future potential. AI (Artificial Intelligence, if you’re reading this from the rock you’ve been living under) is the flavor of the month in the tech world, and it is rapidly weaving itself into our everyday lives. From smartphones to self-driving cars, AI is changing the way we live and work.
One thing I’ve noticed lately is that, while many folks (especially Twitter tech bros) are hailing the latest exploits or scrambling to stake their claim as the latest and greatest “ChatGPT Prompt Engineer”, not many are talking about the potentially negative side effects lurking under the surface.
One of my biggest concerns about AI tools like ChatGPT is that they could lead to people becoming overly dependent on them. We’re already seeing this happen with students who use ChatGPT to write their essays in school, and workers who use it to automate the bulk of their daily tasks. This is a dangerous trend, and I worry that it could lead to us losing our critical thinking skills. If we become too reliant on AI to do our thinking for us, we’ll become lazy and our mental faculties will atrophy. AI should be a tool, not a second brain.
Another trend I’m predicting is a sharp increase in what is already a minefield of misinformation out there. Just last year, a group of researchers used ChatGPT to generate a fake news article falsely announcing the death of a major politician. The article was shared widely on social media, and many believed it to be true, causing a great deal of confusion and panic. This, unfortunately, is not an uncommon story.
Considering this type of content generation in tandem with deepfakes, I wonder how many AI tools will spawn just to combat the rubbish being generated by the other platforms.
Of course, these are just a couple of the potential pitfalls. There are, as we’ve already seen, many benefits to AI-based technology. I’m just saying it’s important to weigh the risks and benefits carefully. I believe we should monitor the negative side effects so that we can take steps to mitigate them, rather than stand by indifferently while AI takes over our role in the world.
Rather than rush the process, it would be wise to take a measured approach to AI development and make sure it continues to be used for good. Let’s also make sure it doesn’t become so powerful that it poses a threat to humanity (we’re sure that’s off the table, right?).
All things considered, I believe the future of AI is bright. If we’re mindful about how we proceed with it, AI can continue to be a force for good in the world.
Enjoyed this post? Help me keep the lights on