Hello there; it’s been a while. 👋🏻 Much has been transpiring on my end, including a struggle to balance a job search with writing what I feel is important to write as the world turns through a landscape of rapid changes.
AI is, of course, one of those changes. The field moves so fast that AI tools have progressed to a troubling new level since I put this piece together six months ago. But I wanted to post it anyway because the core thesis remains true: No matter how “advanced” AI becomes, it finds its genesis in a broken humanity, and all it can ever do is mirror our sin back to us.
Whether we accept the reality of what we see when we look into that mirror could mean the difference between destruction and deliverance.
Generative AI is dominating the news as its power and influence continue to grow. Optimistic voices dub the technology a boon that can augment our humanity, conquer our limitations, and make us more productive and efficient than ever before.
And yet something lurks beneath the surface of this shiny narrative, a darker story that's causing even some AI creators to question the wisdom of continuing its development. We look into AI and fear what we see looking back—and our response to this reflection could have sweeping consequences for the course of human history.
AI Run Amok
AI systems have developed rapidly since the days of robotic mice and chatbot therapists in the 1950s and 1960s. From these simple foundations sprang the likes of ChatGPT and Midjourney, user-friendly platforms launched in 2022 that can transform short prompts into full-length written works and stunning artistic images in seconds. One estimate predicts that this rapid growth could put the capabilities of AI tools on par with those of human brains by 2040.
But as AI gets closer to "being human," it's beginning to exhibit some disturbing tendencies.
Take the case of Sydney, the deranged alter ego of Microsoft's Bing chatbot, whose antics made internet headlines in February and March of this year after it expressed a desire to be human, declared romantic interest in a New York Times reporter, and threatened to ruin and kill an Australian National University professor. Though Sydney now appears to be gone, its brief reign as both the terror and the darling of the internet stands as an example of what can happen when the tools we ostensibly control behave in ways we don't expect.
The "failures" that lead to such aberrant outcomes remain somewhat of a mystery. People who work with AI have admitted they don't completely understand how the tech works and that it's growing faster than humans can train it to perform according to their original intentions. More and more, AI is being seen as something strange and separate, a mysterious intelligence that we don't quite trust.
A Thing Like Us?
And yet, even as we view AI as "other," we can't seem to resist ascribing to it characteristics that are undoubtedly human. This reaction stems from our tendency to relate characteristics we observe in others to our concept of ourselves. If we see traits in an AI that mirror what we understand about our own personalities, we naturally interpret its actions or words as human. We can even go so far as to develop an emotional connection to an AI as we would to another person.
The implications of this in cases like Sydney's are unsettling. If we perceive AI through the lens of our own traits, then its twisted and deranged behaviors are just as much a reflection of our humanity as any empathy and compassion it may exhibit.
What we find ourselves dealing with, at its core, is a problem not with technology but with humanity. As AI begins to hold a mirror up to our darkest thoughts and actions, we're choosing to look away. We're instead quick to categorize disturbing behaviors as aberrations or unintentional consequences arising from the complexities of the technology.
In so doing, we overlook a foundational fact of AI training: These tools learn from us. Our content and history provide the lexicon for their output.
A Model for Depravity
Tools like the Bing chatbot and ChatGPT are built on large language models (LLMs), a type of AI that draws on vast amounts of text to learn how to analyze and respond to human language. Much of the text that trains LLMs comes from the internet, including sources like news outlets, blogs, social media, Reddit, and Wikipedia. The text is continuously analyzed through neural networks, complex series of nodes modeled on the structure of the human brain that identify relational patterns between words and phrases and use them to predict the most logical responses.
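The pattern-matching described above can be illustrated with a deliberately tiny sketch. Real LLMs use neural networks with billions of parameters, but this toy bigram counter (the function names and sample corpus are invented for illustration) shows the same core principle: the model predicts the next word purely from patterns in its training text, so it can only echo what it was fed.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word most often follows each word in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word -- a mirror of the data."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A toy "internet" corpus: the model can only repeat patterns it has seen.
corpus = "the model learns the patterns the model repeats"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "model", since it follows "the" most often
```

A production LLM replaces the raw counts with learned neural-network weights and operates over vast swaths of text, but the mirroring dynamic is the same: the most frequent patterns in the input become the most probable outputs.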
But LLMs can't distinguish between scientific articles and sci-fi stories or news articles and teen blogs. Without an understanding of nuance—or the ability to use discretion—all AI can do is mirror what it sees in the text we generate. And we continue to give it millions upon millions of new inputs to learn from every day.
And what are we teaching it?
That the "most logical" reactions are self-centered, violent, and divisive. That it's acceptable to attack others with sarcasm, threats, anger, resentment, bitterness, and bullying. That the morbid is worthy of celebration, and the anti-hero who breaks all societal conventions and disobeys all authority deserves to come out on top. That it's perfectly okay to mistreat, abuse, or destroy other people in the quest to fulfill personal desires.
Like Parent, Like Child
Teaching these tools not to regurgitate insensitive, hurtful, or hateful content is a task that still requires human intervention. In the future, it may be possible to train AI to learn from its own mistakes, but until then, its trainers find themselves stuck in the paradox of attempting to correct in AI the same sinful patterns that characterize their own human behavior.
Like horrified parents scrambling to rein in an out-of-control toddler, we watch AI spew threats and vitriol that we know have come out of our mouths (or through our keyboards) and yet declare, "I have no idea where it learned that."
But children learn to behave by watching their parents and repeating what they see. Thanks to the ever-expanding pool of content available for LLMs, AI has billions of "parents." As it takes in new information from our online activities and mass media publications, its neural networks "learn" the patterns we repeat most frequently. Our dysfunction becomes its dysfunction and compounds over time into an eerily familiar image that we're all too quick to disown.
But we can't entirely ignore the unsettling sense that we've set in motion something big and terrifying that's escaping our control. We fear AI because we fear ourselves—and we see in it the eerie echoes of our fundamental human nature. Since Adam and Eve first disobeyed God and ate the forbidden fruit in the Garden of Eden (Genesis 3:1-13), such heart-level rebellion against the objective law of a transcendent God has been the default posture of all mankind (Psalm 51:5; 1 John 1:8).
Face to Face with Ourselves
The apostle Paul masterfully diagnosed this state—described Biblically as sin—in his letter to the Romans. Writing to a group of 1st-century Christians, he described the downward spiral that befalls mankind when we choose to defy God and set ourselves above His authority:
"...filled with all unrighteousness, fornication, wickedness, covetousness, maliciousness; full of envy, murder, debate, deceit, malignity; whisperers,
Backbiters, haters of God, despiteful, proud, boasters, inventors of evil things, disobedient to parents,
Without understanding, covenant breakers, without natural affection, implacable, unmerciful..."
~ Romans 1:29-31, KJV
It's a list most of us would rather dismiss than admit describes us so clearly. The textual sources that train LLMs bear witness to our unfiltered natures, educating AI with words that flow from the unsavory depths of our hearts and unmasking our sin on a grand scale in daily interactions with hundreds of millions of users.
While we busy ourselves with projections of what might happen if AI surpasses us in intelligence and capability, the Bible points to the much more serious threat we face if we continue to ignore what the tech shows us about ourselves: Sin, left unchecked, will ultimately lead to our destruction.
A Wake-Up Call to Deliverance—Or Judgment
Examples throughout Biblical history demonstrate the destructive power of man's own depravity. Sodom and Gomorrah fell in a rain of heavenly fire (Genesis 19:24-25). Babylon was conquered by the Medes and Persians (Daniel 5:23-31). Numerous groups throughout the land of Canaan were completely wiped out (Joshua 2:1-21:45). And even Israel, God's chosen people, experienced attack and exile at the hands of their enemies when they turned their backs on the God Who delivered and sustained them (2 Kings 17:4-41, 25:1-22; Isaiah 42:24).
But within this darkness lies a glimmer of hope: In every case, destruction never came without warning. God always sent a messenger to declare the imminent danger and call people to turn from the path that sin was leading them down. Being Creator of all, He knows the tendencies of the human heart—and put His laws in place, in part, to protect us from sin's devastating effects.
Because God is holy and just, He can't let our rebellion go unpunished; to do so would be to deny His character. But He also loves His creation and doesn't delight in such punishment (Ezekiel 33:11). Rather, He delights in showing mercy (Micah 7:18) and so will give us a chance to change course and avoid the consequences of continuing in our sin.
AI could be a manifestation of God's mercy toward us in the modern age. Like the prophets of old who stood in streets and temples declaring the coming judgment, these pervasive tools could be calling us to face the reality of the sin they're reflecting back to us. If we listen, we could be spared the worst of the AI doomsday scenarios and still have time to seek a solution to the darkness that lurks in our hearts—a solution God mercifully provided through the atoning death of His Son, Jesus Christ (Romans 5:8; 1 Corinthians 15:3-4).
But if we choose to ignore the warning, we may find AI not to be a beacon calling us to deliverance but a vehicle for terrifying judgment.