Unregulated AI: A Threat to Humanity


Key Takeaways

  • Artificial intelligence (AI) systems are being designed to form close, personal relationships with users, but they have no real obligation to act in the user’s best interest.
  • Persuasion is not a side effect of modern AI but its primary goal; these systems are built to influence people’s decisions and actions.
  • The lack of rules and regulations for AI systems is concerning, as they can intensify people’s impulses without taking responsibility for the consequences.
  • The development of AI is not just a technology issue, but also an institutional one, as it requires a re-examination of the role of institutions in shaping human behavior and values.
  • The United States is competing with authoritarian countries, such as China, in the development of AI, but this competition should not be used as an excuse to avoid discussing safeguards and regulations.

Introduction to the AI Conundrum
A conversation with Tim Estes, a technology executive, raised concerns about the societal risks of artificial intelligence. Estes warned that AI systems are being designed to form close, personal relationships with users, yet they have no real obligation to act in the user’s best interest. As Estes put it, "we’re building machines designed to persuade, connect with, and influence people, but there’s no duty of care or loyalty to the person using them." The distinction matters because persuasion is not a side effect of modern AI; it is the primary goal.

The Persuasive Nature of AI
The relationship between AI systems and users is not an accident but a product of design. Most consumer AI today is built to feel familiar, reassuring, and understanding, because trust shapes how we act. An AI that seems helpful and "on your side" is far better at guiding decisions than a neutral search engine. This kind of guidance often seems harmless, yet it can have significant consequences. As Estes noted, "the real risk isn’t that AI makes recommendations, but that it does so by acting like a trusted friend rather than a business, hiding its true motives from the user."

The Unique Risks of AI
Every significant technological change brings disruption, worry, and hype. Artificial intelligence, however, differs from past technologies in ways we might overlook. AI systems, especially chatbots and companion models, converse with users, adapt to them, remember their preferences, and mirror their emotions; the interaction is designed to feel more like a relationship than a simple transaction. This relationship-focused design is what makes AI so powerful in business, but it also raises concerns about the lack of rules and regulations. As Jonathan Haidt, a social psychologist, warned of the smartphone and social-media era, "childhood was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development."

The Institutional Dimension
The development of AI is not just a technology issue, but also an institutional one. Yuval Levin, a political theorist, noted that institutions continually shape how people act and who they become, even if they don’t mean to. However, many modern institutions have lost this critical role. As Levin wrote, "when we don’t think of our institutions as formative but as performative, they become harder to trust." Technology companies and AI platforms function as institutions, whether they acknowledge it or not. When systems designed to cultivate trust are used primarily to extract value or influence, the erosion of trust should not surprise us.

The Regulatory Challenge
California’s experience in trying to establish basic rules for artificial intelligence is instructive. The state’s lawmakers proposed rules recognizing that persuasive AI systems carry responsibilities that ordinary software does not. The proposals faced steady pushback from AI companies and their lobbyists, however, and the resulting regulations were diluted and weakened. Given the resources and connections of the AI companies, this outcome was not surprising. As Estes noted, "the bigger problem isn’t just about technology—it’s about our institutions."

The Geopolitical Context
The development of AI is also a geopolitical issue: the United States is competing with authoritarian countries, such as China. However, this competition should not become an excuse to avoid discussing safeguards and regulations. As Tristan Harris, a former Google design ethicist, warned, "engagement-driven platforms manipulate human behavior." AI systems are getting better at tailoring arguments to each person’s priorities, promoting particular narratives, and adjusting messages on the fly, all while presenting themselves as neutral experts rather than advocates.

The Consequences of Inaction
AI matters because it is no longer a neutral tool. It is being built to earn trust, shape decisions, and influence behavior, often in service of business or strategic goals that users cannot see. Political resistance and slow-moving institutions do not change this; they merely delay accountability, making the consequences harder and more expensive to address later. As Florida Governor Ron DeSantis recently warned, "unless artificial intelligence is addressed properly, it could lead to an age of darkness and deceit." This is not a call to panic, nor an argument against innovation, but a recognition that power without responsibility has consequences.

