There is no such thing as 'artificial intelligence'.
But there sure is such a thing as natural born organic stupidity.
The TechBros and their BFFs the FinanceBros really seem to go out of their way to remind us that they think everyone else is really stupid and worthless, and they are sooper-gen-ee-us-es, and simply better than any of the rest of us.
From New York to San Francisco, Donald Trump’s return to the White House has greenlighted a corporate cultural regression, allowing companies to instantly backtrack on years of climate goals and diversity and inclusion efforts now that the anti-woke politico is back in power.
Wall Street brokers and tech bros alike are celebrating the switch, claiming that they no longer feel any need to mind how they talk about women, minorities, or disabled people, reported the Financial Times.
“I feel liberated,” one top banker told the paper. “We can say ‘retard’ and ‘pussy’ without the fear of getting cancelled.… It’s a new dawn.”
A new dawn.
Ya’ don’t say.
Well, one omnipresent feature of this new dawn is the spawn of the commingling of the best thinking the TechBros and FinanceBros have on offer, something dubbed ‘AI’ (because it takes too long to say ‘artificial intelligence’, I guess).
But as more applications of ‘artificial intelligence’ are devised and inflicted on the rest of us without consent, there’s something that gets lost in the noise of bleating self-congratulation- that shit just ain’t smart at all:
AIs lack common sense—the ability to reach acceptable, logical conclusions based on a vast context of everyday knowledge that people usually take for granted, says computer scientist Xiang Ren at the University of Southern California…
Previous research suggested that state-of-the-art AIs could draw logical inferences about the world with up to roughly 90 percent accuracy, suggesting they were making progress at achieving common sense. However, when Ren and his colleagues tested these models, they found even the best AI could generate logically coherent sentences with slightly less than 32 percent accuracy.
Let’s be clear: nothing labeled AI is capable of planning and reasoning at all. No ‘artificial intelligence’ possesses awareness of its own activity, nor of the strings of 0s and 1s it sifts and collates.
AI is no more autonomous than any piece of electronic equipment that is left unattended. Every AI function is the product of a group of people combining hardware and code to skim information according to arbitrary (always human-defined) parameters.
What is most dangerous about mechanisms touted as ‘artificial intelligence’ is precisely their lack of judgement, in fact the lack of any means by which judgement could be exercised in their operation. They are indiscriminate, and for motives not evident, those enamored of LLMs choose to ignore this.
Of course, in every instance in which ‘artificial intelligence’ is set in operation with minimal tethering, the need for human supervision and manual override mechanisms is established by way of spectacular, frequently wildly hazardous, failure.
That LLMs are foolishly employed is perhaps not a surprise, nor should it be a surprise that individuals who make their living marketing software code might consider the declaration of the wonders of ‘artificial intelligence’ a testament to (in their view) the crucial importance of iterative processing routines to civilization itself.
What must not escape our attention is the harm inherent in the mythology of machine cognition.
Because ‘artificial intelligence’ is perceived by its creators (and the general public, by way of massive PR campaigns) to be something of an awake and aware calculator, the veracity and reliability of its output are taken for granted, and deemed by many of its acolytes to be superior to those of humans. (Just ask any number of Tesla drivers.)
AI systems are increasingly used in hiring decisions, performance evaluations and promotions. If these systems rely solely on accurate but incomplete data, they risk reinforcing biases and ignoring critical human factors, resulting in unfair or ineffective decisions.
When accuracy is confused with truth, there is a high risk of harm, especially in fields where human judgment and ethical considerations are critical.
Furthermore, AI’s reliance on historical data may exacerbate existing biases and injustices. An AI trained on biased data will produce biased results, regardless of how accurate its predictions appear.
Assume an AI used in criminal justice systems bases its forecasts on historical crime data. In that case, it may disproportionately affect specific communities, reflecting and perpetuating societal biases rather than presenting an objective truth.
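To see that mechanism without the marketing gloss, here is a minimal sketch (synthetic data and scikit-learn’s LogisticRegression; every number in it is invented purely for illustration, not drawn from any real system) of how a model trained on skewed records faithfully reproduces the skew and calls it prediction:

```python
# A minimal, hypothetical sketch: two groups with IDENTICAL underlying
# behaviour, but group A's incidents are recorded far more often
# (over-policing). A model trained on the recorded data "accurately"
# learns the skew and scores group A as higher risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
offended = rng.random(n) < 0.10      # same true rate for everyone

# Biased record-keeping: group A offences are noticed 80% of the time,
# group B offences only 40% of the time.
notice_rate = np.where(group == 0, 0.8, 0.4)
recorded = offended & (rng.random(n) < notice_rate)

model = LogisticRegression()
model.fit(group.reshape(-1, 1), recorded)

risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group A: {risk_a:.3f}")   # roughly 0.08
print(f"predicted risk, group B: {risk_b:.3f}")   # roughly 0.04
# Same behaviour, roughly double the predicted "risk": the bias in the
# records becomes the model's idea of the truth.
```

The model is perfectly ‘accurate’ with respect to the records it was fed; the records are the bias.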
In the realm of navigating our physical and social environments, every form of ‘artificial intelligence’ is easily surpassed by human toddlers.
In the realm of morality, it’s simply absurd- to the point of delusion- to ascribe agency and the wherewithal to express values to wind-up dolls, no matter how elaborately they are costumed.
TechBros and FinanceBros are simply too stupid to understand this (and their livelihoods, to paraphrase Upton Sinclair, depend on their not understanding this).
Unfortunately, a lot of people in roles of public influence and authority, in positions affording them the potential to do a great deal of harm, are every bit as stupid:
As research has shown, many people are unaware of the use of algorithms in their daily activities (Gran et al., 2021; Powers, 2017; Rader & Gray, 2015; Shin et al., 2022). However, as societies become increasingly dependent on algorithms, our enduring biases, prejudices, and underlying assumptions are reflected back in digital form through the algorithmic systems we use. As such, they have the capacity to significantly amplify, magnify, and systematize biases while appearing to be objective, neutral arbiters (Rovatsos et al., 2019). This trend is exacerbated by the extraordinary pace of adoption of artificial intelligence (AI) systems by corporations, nonprofits, and governments, which can scale production massively through increased access to artificial intelligence development tools and internet-sourced datasets. There are legitimate concerns about the effectiveness of these automated systems for the full range of users. In particular, the ability of the system to reproduce, reinforce, or exaggerate undesirable current societal biases (Raji et al., 2020).
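That ‘amplify, magnify, and systematize’ is not a figure of speech. A toy feedback-loop simulation (all parameters invented for illustration; a stand-in for the predict-patrol-retrain cycle, not anyone’s actual system) shows how a small skew in the records compounds once the records start deciding where to look:

```python
# Toy feedback loop, purely illustrative: each cycle the patrols go to
# whichever district the records say is the "hotspot", and only
# patrolled incidents get recorded. Both districts have identical true
# crime, but a tiny initial skew in the records locks in and grows.
import numpy as np

true_incidents_per_cycle = 100        # same in both districts
records = np.array([55, 45])          # slightly skewed starting data

for cycle in range(1, 6):
    hotspot = int(np.argmax(records))              # the "model": go where the data points
    records[hotspot] += true_incidents_per_cycle   # only the patrolled district is observed
    share = records[0] / records.sum()
    print(f"cycle {cycle}: district A holds {share:.0%} of recorded incidents")
# District A's share climbs from 55% toward 100%: the record, not
# reality, decides where attention goes, and the record feeds on itself.
```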
When the most stupid among us are in charge, the only outcome is disaster.
Dietrich Bonhoeffer, living under the Nazi regime in Germany, had something to say about this:
“Stupidity is a more dangerous enemy of the good than malice. One may protest against evil; it can be exposed and, if need be, prevented by use of force. Evil always carries within itself the germ of its own subversion in that it leaves behind in human beings at least a sense of unease. Against stupidity we are defenseless. Neither protests nor the use of force accomplish anything here; reasons fall on deaf ears; facts that contradict one’s prejudgment simply need not be believed — in such moments the stupid person even becomes critical — and when facts are irrefutable they are just pushed aside as inconsequential, as incidental. In all this the stupid person, in contrast to the malicious one, is utterly self-satisfied and, being easily irritated, becomes dangerous by going on the attack. For that reason, greater caution is called for than with a malicious one. Never again will we try to persuade the stupid person with reasons, for it is senseless and dangerous.”
Then again, what society nowadays would put the most stupid among us in charge of everything?
Stupid people elected a stupid person, and he surrounded himself with many people, many of them stupid and some not so stupid. But the people behind all of this are not stupid. They know exactly what they’re doing and they know how to manipulate Trump and all his minions to do their bidding. They will come and go as they screw up, but the super powerful and wealthy know what they’re doing regardless. They have a plan and it’s very dangerous and very few people see it for what it is.