Unveiling the Truth: AI, Authoritarianism, and the Ethical Maze of Security
Navigating the Security Maze
Conversations about security often dive into complex and abstract realms, seemingly far removed from our day-to-day lives. But here’s the thing: moral and ethical considerations stand on their own, independent of laws, political games, and religious beliefs. That truth hits home when we talk about security in the world of artificial intelligence (AI) and authoritarianism.
The Ladder and the Shield: Our Instinctive Moves
We humans, we’re a funny bunch. We love building ladders to climb and shields to protect ourselves. It’s in our nature. In the security world, those with authoritarian streaks are no different. They scramble up the ladder to grab power and shield themselves from any backlash. One of their favorite shields? The corporate structure. Decisions get buried in paperwork, making it tough to pinpoint who’s really in charge.
Passing the Buck: The Risky Business of Externalizing
Let’s talk about banks. They’re pros at externalizing risk. Then there’s the legal concept of ‘any person, legal or natural.’ It’s a mouthful, but it means that non-human entities can hold legal personhood while still being bought, sold, and terminated, much as enslaved people once were. The concept has been stretched to cover all sorts of entities, from vehicles and rivers to spirits and even gods.
AI and Security: A Tricky Tango
The AI discussion is full of bold claims. Some argue that current AI systems, such as Large Language Models (LLMs) and other machine-learning (ML) systems, should be treated as human equals, or even as all-knowing deities. Companies like OpenAI are pushing for their AI to be recognized as legal persons. Why? To dodge accountability, of course.
The Microsoft Tay Fiasco: A Lesson Learned
Remember Microsoft’s Tay? The chatbot that, within a day of its 2016 launch, was coaxed by users into spewing extremist content? It’s a stark reminder of the risks involved in deploying AI that learns from public interaction. As Dr. Justine Cassell, a professor at Carnegie Mellon University, put it: ‘When we design AI that interacts with people, we must be careful—it doesn’t just learn from data; it learns from culture.’
The Reality Check: AI Systems Under the Microscope
From personal experience with free AI systems hooked up to search engines, it’s clear that many of their answers aren’t grounded in reality: only a small fraction of the information they produce is accurate and useful. Given how these systems are designed and trained, as statistical pattern-matchers rather than fact-checkers, that’s no surprise.
The UK’s Connect System: A Cautionary Tale
Across the pond, UK government departments such as HMRC (whose Connect data-matching system is the best-known example) and the DWP have built AI-driven enforcement systems. These are being used unlawfully to impose clawbacks and discriminatory fines, targeting those least able to fight back. Driven by political slogans, the systems push people into poverty by presenting false evidence again and again.
The Ripple Effect: Broader Implications
The fallout from these developments is huge. Big Silicon Valley corporations, which have squeezed countries like the UK dry, are now staring bankruptcy in the face due to their AI investments. This could trigger a financial crisis that makes previous recessions look like a walk in the park.
The people responsible for these harms are often shielded by layers of protection, including corporate structures and the push to grant AI legal personhood. That shielding lets them advance oppressive agendas without fear of consequences or oversight.
Wrapping Up: The Intersection of AI, Authoritarianism, and Security
The meeting point of AI, authoritarianism, and security is a complex web of ethical and practical challenges. We need to tackle these issues with a clear understanding of the realities involved and a commitment to ethical principles that rise above legal and political frameworks.
For more on unexpected entities considered as persons, check out this insightful article.