Artificial intelligence may well be the best-known technological buzzword of our time. That is not surprising, given how the media (both old and new) drag it into every conversation about everything from growing potatoes to curing cancer. Such is its prevalence that I will argue AI is no longer just a technology (real or imagined), but a full-blown cultural and financial phenomenon.

Modern fetishization of technology

Pre-modern technologies did not arrive with a lot of fanfare. The stories of their origins are, for the most part, unremarkable. Their arrival was not preceded by a period of expectation, predictions and opinions. As far as we know, they spread organically from one community to another, being adopted by those who found them useful. For the most part, it seems that no special effort was expended to constantly invent new technologies.

In the modern era, things are a little different. We are being bombarded relentlessly with news and opinions about technology. It seems that the world is always on the verge of new breakthroughs. We have been conditioned to assume certain things. One of these assumptions is that technology will make our lives easier. Another is that such an easier life is always better.

soyjak_chatgpt

Also, the push for any technical advancement now comes with some sort of justification. Often, it is either idealistically moral (“saving lives”, “saving the planet”) or crassly immoral (profiteering, indulging). This might be a side-effect of the way research funding works these days (driven more by political and financial motives than pure scientific exploration) and the associated media circus.

Our obsession with technological solutions has fueled the growth of AI into something more than an engineering problem.

AI is a marketing tool

For-profit corporations and fund-seeking academics may have their differences, but they do share a questionable habit: telling a story to get the money rolling in.

The story of AI, though conceived earlier by the creative minds behind sci-fi literature, was refashioned by researchers to help them obtain grants. Labeling a computer program as AI certainly makes a grant application stand out from all the others that simply claim to “use X method to do Y.” Like the practice of calling anything an algorithm, the term has been overused to the point of rendering it almost meaningless. Even Nick Bostrom, a true believer in the emergence of truly intelligent machines that can outsmart humans in every conceivable way, comments in his book Superintelligence: Paths, Dangers, Strategies that the distinction between AI and software is generally not well-defined.

ai_if_then_else

The early days of academic interest in AI did result in some original algorithms such as neural networks and support vector machines, but these failed to reach their potential due to limitations of hardware and supporting theoretical work, resulting in the so-called “AI winter.” Only later, with improvements to efficiency and hardware capability (particularly memory capacity and processing speed), did they become relevant again.

With improvements in computational power and efficiency, data became the limiting factor. This effectively reduced AI development to the problem of running more data through a complex set of rules that continuously adjusts its model of predicted outcomes. Research on basic algorithms continued, but this proved to be the more difficult route, and quickly became the road less traveled.
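To make the "running more data through a set of rules that continuously adjusts its model" description concrete, here is a minimal sketch of that loop: a one-parameter model fit by gradient descent. The toy data and all names are illustrative inventions, not drawn from any real AI system.

```python
# Toy data: outputs are roughly 3x the inputs.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.9, 9.2, 11.8]

w = 0.0    # the model: predict y = w * x
lr = 0.01  # learning rate (step size for each adjustment)

for _ in range(1000):          # run the data through repeatedly...
    for x, y in zip(xs, ys):
        error = w * x - y      # compare prediction to observed outcome
        w -= lr * error * x    # ...and nudge the model to reduce the error

print(round(w, 1))  # settles near 3.0, the pattern hidden in the data
```

The point of the sketch is that nothing here resembles "intelligence": it is repetition of a simple adjustment rule, and more or better data is what makes the fitted model useful.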

ai_mentions

Corporations are more straightforward. It has become increasingly difficult to avoid the phrase “powered by AI” in the technological sphere. Apparently, this is what makes much of the latest and greatest software features possible. The effectiveness of these supposedly AI-powered tools is beside the point. As long as slapping this label on a product helps promote it to consumers and attract investors, companies have a strong incentive to make such claims.

There is proof that this approach does produce financial gains, though they may be unequally distributed. Nvidia’s stock rose by almost 30% after its most recent earnings report, continuing its pattern of benefiting from a much-touted focus on AI and optimistic future outlook.

ai_investor

AI is a procedural shortcut

They were trying to turn a conceptual problem they didn’t understand into a procedural one they could just execute. “We’re very good, humans are, at trying to do the least amount of work that we have to in order to accomplish a task,” Richland told me. Soliciting hints toward a solution is both clever and expedient. The problem is that when it comes to learning concepts that can be broadly wielded, expedience can backfire. (David Epstein, Range: Why Generalists Triumph in a Specialized World)

Humans are exceedingly good at finding shortcuts. This ability underpins our success in making tools and building a technological civilization. The underlying motive behind creating stone hunting tools and automated decision-making bots is the same: to find an expedient solution to an existing problem. Many AI tools appeal to our penchant for expedience, providing fast and simple procedures to solve immediate problems without the need for deep conceptual thinking. The applications range from helping write homework essays to solving epoch-making problems in biotech with worldwide impact. With the flood of AI-generated content in recent times, the writing is on the wall for those who hoped to keep these tools contained behind walls. The genie is out of the bottle.

AI is a religious and cultural symbol

I’m hoping the reader can see that Artificial Intelligence is better understood as a belief system instead of a technology. (Jaron Lanier, One Half a Manifesto)

So far, any form of machine intelligence has been limited to narrow AI (or weak AI), which is extremely competent at very specific tasks, such as playing chess or serving ads. Many thinkers, Bostrom among them, believe in the eventual emergence of artificial general intelligence (AGI), an “intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”, as the philosopher puts it, and have made convincing arguments for its inevitability. At times, the discourse takes on a religious flavor, as if people are expecting (and dreading) the arrival of a messiah.

More importantly, the idea of sentient machines is embedded in modern pop culture. A vast portion of science fiction is devoted to AGI. Much of it envisions a dystopian future with antagonistic machines (The Matrix and Terminator movies), although some still hold out hope for an AGI savior (Asimov’s Foundation). This cultural prevalence, along with the quasi-religious belief in its future existence, has created an interesting belief system that, in my opinion, rivals that of any modern religious or secular ideology, at least insofar as it affects the behavior of a large group of people.

What’s the big deal?

So what if the AI branding helps some people make a few more bucks or publish a few more papers? What if it has inspired a new belief system that is apparently harmless? If it has advantages that actually add value, why not embrace it, despite its limitations and misrepresentations?

Because it could also add negative value. This is not necessarily a win-win situation. A stronger emphasis on AI could take attention and resources away from other aspects of technology. Not only do we limit future development, we also risk forgetting how to do things without AI. How long before the complexity of these systems overtakes our ability to understand them? There are already terms like “explainable AI” and “understandable AI” floating around, surely a sign that we are already in a precarious position.

AI is a fantasy, nothing but a story that we tell about our code. It is also a cover for sloppy engineering. Making a supposed AI program that customizes a feed is less work than creating a greater user interface that allows users to probe and improve what they see on their own terms. And that is so because AI has no objective criteria for success. Who is to say what counts as intelligence in a program? (Jaron Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now)

Many thinkers are concerned about the possibility of an AI takeover and the resulting existential crisis for humanity. My concerns are much shorter-term, relating to the loss of skills and knowledge caused by too much attention given to a technology that, at best, is a black box or, at worst, simply feeds off of existing knowledge.

office_space_ai

There is also the enormous ethical issue of making many human professions obsolete, which is a logical and inevitable result of the advent of intelligent machines.

Like the Luddite Horses in the video above, some of us may still be in denial. But that luxury may not be available for long.

On the question of machine superintelligence arriving at some point in the future, I am agnostic. Perhaps I should consider becoming an ardent skeptic, just to avoid the possibility of torture by our future AI overlords.