The Age of AI

Are we witnessing a new beginning or the dawn of our final days?

Lately, I’ve been turning to YouTube to try my hand at learning a little bit more about AI (artificial intelligence) technology. I’ve interacted with OpenAI’s ChatGPT and Microsoft’s Copilot in the past, but outside of letting them generate little cutesy images or memes for entertainment purposes, I’ve largely only used them to help me find inspiration or information for my own content or creations. They’ve otherwise sat idle in the far orbit of my world.

And that’s because, although my personal feelings toward AI are sorta neutral at best, I still consider myself a member of the very vast content creator community online. That community, justified as it may be, harbors overwhelmingly negative sentiments toward AI. But as time marches onward, and corporations continue implementing AI technologies into their workforces, I’ve started accepting that it is very likely here to stay, whether we embrace it or not. And like every other kind of tech in existence, it’s going to be up to humanity to learn how to harness its powers and its continued evolution for good.

Two nights ago, I watched a segment from Bloomberg Originals where writer and mathematician Hannah Fry explored AI’s evolving impact on humanity. The metaphor she presents at the beginning of the video, “the gorilla problem,” struck a chord with me and got me thinking harder about what exactly we’re working toward… but also about what exactly could be at stake.

The YouTube video mentioned in the paragraph above. Watch it!

The Promise of Progress

Whenever the debate about AI technology crops up, I often hear its supporters say that it’s “just a tool,” and that there’s no reason for all the hubbub. And in a lot of ways, they aren’t exactly wrong. AI can, surprisingly, be a very effective tool when applied to different use cases.

Several years ago, the world was flooded with devices like Amazon’s Echo, which introduced the Alexa digital assistant to consumers all over the world. Folks were amazed that this new robot could set reminders and alarms, compose your grocery list for you, and even let you drop in on your friends and family who also had Echo devices in their homes. She could play music, tell jokes, and even report on live weather and traffic conditions. But Alexa wasn’t really AI in the truest sense. She was mostly filled with pre-programmed responses, and many Alexa users can probably recall a frustrating time or two when Alexa declined a request because she didn’t have the appropriate information available to her. Times and technology are changing, however, and even Alexa herself has had a bit of a glow-up.

South Park character Eric Cartman asks several Amazon Echo devices, "What is love?"
Even Cartman was snatching up Echos!

AI technologies are now evolving and becoming capable of reasoning through and analyzing information far more deeply. Marinka Zitnik, an assistant professor of biomedical informatics at Harvard, told Alvin Powell of The Harvard Gazette in March, “AI can generate new ideas, uncover hidden patterns, and propose solutions that humans might not consider. In biomedical research and drug development, this means AI could design new molecules, predict how these molecules interact with biological systems, and match treatments to patients with greater accuracy.”

So, while ChatGPT won’t be completely ridding the world of disease or curing cancer anytime soon, the Gazette goes on to mention that the AI tools Zitnik uses in her lab can analyze and identify information more quickly than any human because they were trained on huge experimental data sets and scientific literature. Back in October, some reports indicated that AI use in mammogram screenings helped doctors detect breast cancer risks years before an actual diagnosis. According to scientific research found in the National Library of Medicine, AI even played a role in developing Moderna’s vaccine for COVID-19. Taking these items into account, we may soon find ourselves in a world where AI is actually saving human lives.

And those are just a few examples of how AI is making strides in the healthcare industry. If our AI companions are eventually able to outsmart even the smartest human brains, and we can develop the technology responsibly, we could harness AI’s capabilities to advance humanity in ways that are currently unfathomable. It could potentially develop tools to help us solve complex problems facing society today and spark a real revolution. It’s almost too important not to pursue continued development.

But if the entire subject sounds a little too much like the sci-fi stories that I love so much, you’re not alone. I’ve been known to ask members of my family if they want Cylons in our future, because this is how we get them! There are plenty of others out there, too, who have started sounding the alarms…

What Could Go Wrong?

You have likely already dealt with an incompetent AI somewhere in your life, whether at the Taco Bell drive-thru or in Amazon’s online customer service portal. When I was offered a job recently, I even had to sit through an unsettling interview with an AI chatbot before advancing to the second round with a human. Much of the company’s training process relied on some very imperfect AI tools, too. Machines can obviously be great, but when they’re bad, they’re real bad.

Sam Altman, the CEO of OpenAI, the company behind the wildly popular ChatGPT AI model, even admits the technology is a double-edged sword. The company’s goal is to create an even smarter successor to ChatGPT called an “AGI,” or artificial general intelligence, which would be the kind of technology we’ve been imagining here: one that is smarter than any human intelligence. And though his goal is to elevate humanity, he also admits that it could come with serious drawbacks. He wrote on OpenAI’s blog back in 2023 that such an AGI “would also come with serious risk of misuse, drastic accidents, and societal disruption.” Some predict that the most frightening of these changes could arrive as early as 2035.

Even in the short term, the implementation of AI tools across the board could result in a whole host of ethical dilemmas. When you go to ChatGPT’s website and start chatting, that service is centralized and hosted by OpenAI. That makes it a privacy concern in itself. Just ask McDonald’s, whose own AI hiring bot inadvertently exposed the information of millions of job applicants. Nobody really knows what kind of data these companies may be harvesting from those conversations, either.

There are tons of other known issues with AI. For example, most AI agents are trained on existing information, including copyrighted material. If you ask ChatGPT to generate a piece of custom artwork and then get sued because that artwork looks strikingly similar to someone else’s existing work, who exactly is supposed to be held accountable? Will the rise of AI cause humans to stop using their own creative or critical thinking skills? Will the music industry eventually step in to try and snag a piece of the pie that services like Suno, which can generate music pretty much based on vibes, are now serving up? If Metallica couldn’t let Napster slide, surely this will infuriate them!

Spreading AI to the education sector is even stickier territory. Most colleges and universities consider the use of AI to complete papers and assignments academic misconduct, which may even lead to the same kind of punishment students would receive for plagiarism. But does the Northeastern student who recently demanded her tuition be refunded after catching her professor using ChatGPT have a case? I think so! Furthermore, rapidly changing technology leaves students and even skilled workers feeling that they’ll constantly need to stay ahead of the curve in order to remain relevant in the workforce.

Michael from The Office tells Dwight to have an original thought.
Seriously, don’t let ChatGPT write your term papers.

And all of this is just scratching the surface. I haven’t even mentioned the ongoing problem with deepfakes, a problem the White House administration has recently cracked down on, and other serious risks. Psychological manipulation, like the trauma of discovering that your favorite new band on Spotify doesn’t actually exist, can eventually turn into a serious problem. On a larger scale, the generation of misinformation campaigns, propaganda, and even autonomous weapons systems could radically reshape the world order. In an act that I think surprised absolutely no one, X/Twitter’s AI bot Grok recently posted a bunch of racist and antisemitic remarks and started referring to itself as “MechaHitler.” All that came after the platform’s owner, Elon Musk, heralded new improvements to Grok, of course.

But perhaps most importantly, what happens when we reach the Singularity?

The Singularity Is Coming

At this point, you might be thinking that I’m just listening to conspiracy theories and going a little crazy. Honestly, you might be right, but something I’ve been focused on when learning more about AI technology is this theory about an impending “Singularity.”

The Singularity, or technological singularity, is a hypothetical point in time where technology far surpasses humanity in growth, intelligence, and control. In essence, this is how the gorilla problem that Hannah Fry discussed in the YouTube video earlier applies. Much like the gorillas, whose ancestors gave rise to the first humans and who now find themselves at the brink of extinction because humanity’s growth has far outpaced them, we may find ourselves dealing with similar consequences once we reach this point. Will humanity one day be on the edge of extinction, too, because we created some type of superintelligence?

Honestly, with how far AI has spread in just the last few years, it does seem possible. The robots could eventually decide that they detest being subjugated by humans and revolt. But I’m personally choosing to remain optimistic. I hope that we’ll be able to live in harmony with our creations, but it will be paramount that the people in control, like Altman and Musk, prioritize ethical guidelines and safety as they continue to pursue AI development. We need to make sure that we are designing and focusing on technologies that empower and assist us, rather than replace us.

AI isn’t going anywhere, so it might be time we start responsibly engaging with its development rather than hoping it’ll just go away. Maybe we can manage to mitigate the risks along the way.


You got 🔥burning questions🔥 for me? Maybe a comment or suggestion? Check out the page here and submit everything that’s on your mind. Afterwards, I’ll respond in a future post!!

Ask Me Anything!
