

I accidentally made this article available for free. Oh well, if you found it enjoyable, I would be grateful for any token of appreciation. Thank you! :)
Alright, I need to be frank, and this isn't usually my style, but I've hit my breaking point. I just can't stand it any longer. I'm done playing along with the overblown hype surrounding AI. And I'm not talking about the typical “AI will steal our jobs” chatter. My issue runs much deeper than that. It's not just about the misuse of the term 'artificial intelligence'—there's no real intelligence, no consciousness, nothing. It's all a facade, a cleverly orchestrated marketing ploy, a future crisis engineered in a boardroom. Give me a bit of your time, and I'll lay it all out for you. This is my no-holds-barred, definitive take on AI, and it's not going to be pretty.
Everywhere I turn, especially on platforms like YouTube, I'm bombarded with messages like “YOU GOT TO USE THESE 10 AI TOOLS in 2023!” But here's the truth: I don't need to use them, and neither do you. It's all about generating clicks and profit. If these tools were simply marketed as “generators,” they wouldn't have garnered much attention. Yes, they are advanced generators, but that's the extent of it. Extracting data from a database and forming a coherent response to random queries like “how do I get the girl who writes about Klaus Schwab on Substack to like me?” doesn't equate to intelligence.
By the way, here is ChatGPT’s reply:
“Engage in Intellectual Discussion: Given her interest in writing and specific topics, she might appreciate deep, intellectual conversations. Share your thoughts on her articles or related topics, but be respectful of her opinions.
Show Genuine Interest: People appreciate when others take an interest in their passions. Ask her questions about her writing, her interests in Klaus Schwab, or other topics she might be interested in.
Be Patient and Kind: Building a relationship, whether it's friendship or something more, takes time. Be patient and show kindness.”
I would add: Become a paid subscriber.
True intelligence isn't just about flawlessly executing pre-set tasks. It's about adaptability and independent thinking, something these systems are devoid of. Merely pulling information from a pre-fed database or generating an image isn't a display of intelligence. It's just a more complex form of generation, a concept we've known for years. The terminology has merely evolved to make it sound more impressive, to generate traffic.
Just how much does it cost to indulge in these “essential AI tools”? Let me paint you a picture. I watched a video by a fully-grown man, ensconced in his “studio,” complete with an over-the-top, height-adjustable desk and a keyboard that could trigger a seizure with its flashy lights. In the background, his YouTube '100k subscribers' plaque hung proudly, almost like a badge for contributing to societal decline. He spent a good 20 minutes waxing poetic about tools like “put your face in any video” and “create lifelike images – look, here's me riding a dolphin!” (though the result resembled a Photoshop disaster with an overzealous softening brush).
The experience? Far from enjoyable. Another gem was, “Create your personal assistant by feeding its database,” which I attempted. I painstakingly uploaded every Substack article I've written, battling a website that seemed to be coded by a rogue AI, with frequent browser crashes. After 12 minutes of this digital odyssey, the big moment arrived. I eagerly asked my custom assistant about Klaus Schwab and his views on climate change, drawing from my extensive writings. The response was a thunderous, “I can’t answer this, as it doesn’t align with my creators’ content policies.” Well, what a revelation. Thank you, AI, for your boundless assistance.
After that, I finished watching the video, visited the websites he showed, and checked how much it would cost me to use these “super-necessary,” “life-changing” AIs that would add so much incredible value to my life. Around $300 a month.
Machine learning has become the new craze, seemingly overnight, inundating us with a deluge of AI-generated art, music, and text that you're probably already tired of.
The gaming and entertainment industry is just dipping its toes in, while the rest of the world uses AI for anything from art creation to coding, animation, and even subtly coercing you into splurging on microtransactions. Everyone, from colossal corporations to independent creators, is diving into this realm of subpar writing and art that looks like melted ice cream on a sidewalk.
This article is an exploration of the rapidly growing domain of AI. I'll delve into the myriad issues it's spawning, have a chuckle at the absurd art, and then dive into the real AI apocalypse – the one that's being kept under wraps.
But before I dive into all that, I aim to peel back the layers of machine learning, not through some expert's jargon-laden spiel, but by dissecting what these systems really are.
Are these tools a danger to us? No. Can these tools become a danger to us? Not these tools, no. And here is why.
Table of Contents:
What REAL “AI” Actually Is
Explaining Current “AI” to a Five-Year-Old
The Real Cost of “AI”
Behind the Sparkling Facade
“AI” is not Creative
“AI” Steals
“AI” is Biased
The Actual Problems With AI
What REAL “AI” Actually Is
The concept of “seven stages of AI” suggests a progression in the development and capabilities of artificial intelligence. However, I argue that current AI advancements haven't reached a level where they deserve the title “AI”; right now, the term describes little more than bland marketing hype. Instead, these technologies could be seen more as precursors to highly sophisticated public control systems, lacking true intelligence or sentience. The AI envisioned by some technocrats, aimed at monitoring and controlling every aspect of our lives, also falls short of genuine intelligence. These are essentially control systems, and it would be more accurate to refer to them as such. There is no artificial intelligence in a world run on a social credit system, only artificial control.
To understand this perspective better, it's helpful to examine the recognized seven stages of AI development and assess our current position in this evolutionary path.
Rule-based AI systems, often termed as single-task systems, are foundational in the evolution of artificial intelligence. They function according to a specific set of rules or algorithms defined by their programmers.
To illustrate, think of playing chess with a computer. The computer is programmed with a comprehensive understanding of all potential moves and their outcomes, strictly following its embedded rules. It can determine the most advantageous move within the scope of these rules. However, these systems do not possess the capability to learn or adapt beyond their initial programming, limiting their application to tasks with straightforward, well-defined rules.
Advancing from the foundational rule-based AI systems, we encounter the realm of context awareness and retention systems in AI development. This stage marks a noteworthy progression in the capabilities of artificial intelligence, and this is where we're at. Stage two. TWO!
These systems are designed to not only comprehend but also remember the context. This means they can recall past interactions and leverage this information to shape their future responses. A prime example of this is seen in smartphone assistants like Siri or Google Assistant. These assistants go beyond merely processing commands; they adapt and learn from your previous interactions.
Consider a scenario where you ask, “Who won the soccer game yesterday?” followed by “When is their next match?” The system understands that “their” refers to the soccer team mentioned in the previous query. This ability to recognize context and retain information allows these systems to handle a broader range of interactions than rule-based AI.
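To make it concrete, here is a deliberately dumb little sketch, purely illustrative and nothing like how Siri or Google Assistant are actually built, of what “context retention” boils down to: remember the last entity you saw and substitute it when a follow-up question uses a pronoun.

```python
# Toy illustration of "context retention": keep the last entity around and
# swap it in when a follow-up says "their". Not a real assistant.
class ToyAssistant:
    def __init__(self):
        self.last_entity = None  # whatever the previous question was about

    def ask(self, question):
        if self.last_entity and "their" in question:
            question = question.replace("their", self.last_entity + "'s")
        if "soccer" in question:
            self.last_entity = "FC Example"  # remember the team for next time
            return "FC Example won. (resolved: " + question + ")"
        return "(resolved: " + question + ")"

assistant = ToyAssistant()
print(assistant.ask("Who won the soccer game yesterday?"))
print(assistant.ask("When is their next match?"))
```

That is the whole trick: bookkeeping, not understanding.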
In terms of the AI developmental stages, they are akin to teenagers. They have not reached the stage of independent thought, but they exhibit a significant ability to remember and utilize context in their interactions, and they act on parameters someone has trained them to use. These “AI” systems do not operate independently and never will. They serve as the foundation for a future where biased “AI” is marketed as “intelligent,” “dangerous,” and “in need of regulation” for various reasons, which will be explored later in this discussion.
Advancing from context-aware AI, we encounter domain-specific mastery systems, a stage signifying an AI's proficiency in comprehending, retaining, and becoming exceptionally skilled in a specific domain or field.
These systems are not jack-of-all-trades but rather experts finely honed to excel in particular areas. A prime example is IBM's Watson, engineered to shine in answering questions on the quiz show “Jeopardy.” Similarly, Google's DeepMind AlphaGo, tailored to master the intricate board game Go, achieved the feat of defeating world champions.
Their expertise in their respective domains surpasses human capabilities. These systems can process and analyze immense data volumes, discern patterns, and make informed decisions or forecasts at unparalleled speeds.
Domain-specific mastery systems are like adults in the AI developmental journey, showcasing advanced skills in their specialized fields. However, despite their sophistication, they remain a considerable distance from realizing the goal of true artificial general intelligence.
In stage 4 of AI development, we begin to encounter systems that emulate human-like thinking and reasoning. This stage goes beyond the capabilities seen in earlier phases, where AI was confined to following rules or retaining context.
These thinking and reasoning AI systems are adept at grasping complex concepts, tackling unfamiliar problems, and even generating original ideas.
For example, such an AI could analyze a novel, comprehend its plot, and deduce characters' motives from their actions. Alternatively, it could examine economic data, predict market trends, and formulate investment strategies.
At this juncture, AI begins to mirror human intelligence more closely, though it still doesn't equate to the full capacity of the human mind. It represents a highly advanced tool, particularly adept at reasoning and thinking tasks.
However, intriguing as these AI systems are, this stage represents just the midpoint of the AI evolutionary path. Future stages delve into more speculative realms, where AI could potentially match or even surpass human intelligence.
Artificial General Intelligence (AGI), also known as Strong AI, represents an ambitious and largely theoretical frontier in the field of AI. AGI is envisioned as an AI system that can match human intelligence across all areas.
Unlike Stage 3 AI systems, which excel in specific domains, or Stage 4 systems that demonstrate human-like reasoning, AGI would be capable of performing any intellectual task that a human can. This includes learning new languages, composing symphonies, solving complex mathematical problems, and even understanding and processing human emotions. An AGI would not just mimic human cognitive abilities; it would be self-aware, conscious, and able to interact with the world in a manner indistinguishable from humans.
However, as it stands, AGI remains a theoretical construct and has not yet been realized. It's at the cutting edge of AI research and is surrounded by considerable debate and speculation.
The concept of Artificial Superintelligence (ASI) takes us into a realm of AI that is not just advanced, but extraordinarily so, surpassing all previous stages in complexity and capability.
ASI represents a level of artificial intelligence where cognitive abilities greatly exceed those of humans. It's not merely about matching human capabilities, but vastly outperforming them in efficiency, speed, and scope. ASI would dominate in areas of economic value and intellectual pursuit, far surpassing human proficiency.
To grasp the magnitude of ASI's intelligence, picture a human compared to an ant – that's the scale of difference we're talking about. An ASI could potentially offer solutions to the most challenging issues, and innovate in ways currently beyond human imagination. It might unravel cosmic mysteries that are, as of now, beyond our comprehension.
However, the emergence of ASI is fraught with ethical, safety, and control concerns. The potential for misuse is enormous, matched only by the risk of unforeseen consequences. ASI challenges our understanding of power dynamics, intelligence, and even consciousness. It propels us into a territory that is not just unknown but perhaps unknowable, marking the final and most daunting stage of AI evolution. Such an AI would be of absolutely no use to the technocrats.
The AI Singularity, or simply the Singularity, is a theoretical moment when technological growth becomes unstoppable and irreversible, resulting in profound and unpredictable transformations in human civilization. This concept is closely tied to the emergence of Artificial Superintelligence (ASI).
Futurist Ray Kurzweil popularized the term "Singularity" in this context, borrowing from physics where a singularity, like at the center of a black hole, denotes a point where known laws break down. In AI, the Singularity refers to a point where an ASI surpasses human intelligence and is capable of self-improvement at an extraordinary pace, potentially triggering an exponential increase in technological advancement.
The Singularity is often linked with extreme forecasts, including the end of humanity as we know it, the feasibility of uploading human consciousness into computers, and significant social upheavals. However, it's critical to recognize that the concept of the Singularity is highly speculative and a subject of debate among experts. While some view it as a plausible eventuality, others dismiss it as mere science fiction.
The uncertainty surrounding a true ASI means we can't fully envision its nature or impact. Thus, regardless of one's stance on the likelihood of the Singularity, exploring these seven stages of AI evolution deepens our understanding of the technology's vast potential, as well as the ethical and societal challenges it presents.
The belief that technocrats may always employ the most advanced AI for governance and control is a common concern, but such a scenario is improbable. Technocrats are more likely to impose their own ideologies and perspectives rather than rely on a sentient AI that is hard to control. The idea of an AI developing its own consciousness and potentially deciding that their ideas are completely retarded is a significant deterrent. They will likely never venture beyond stage 4 AI development, where AI can excel in governance based on predefined parameters but lacks true autonomy or superior reasoning.
Any future technocrat claims of achieving beyond stage 4 might be seen as promotional hyperbole, intended to exaggerate the capabilities of their AI systems. This could serve as a strategy to maintain a competitive edge and prevent the emergence of superior AI technologies that could disrupt the existing balance of power. In essence, the advancement of AI beyond a certain point could be stifled, not due to technological limitations, but as a means to preserve existing power structures and prevent challenges to the status quo.
Explaining Current “AI” to a Five-Year-Old
Machine learning is swarming with schemes, but neural networks are the cool kids on the block, setting the trend. They're the backbone of pretty much everything in the field. Think of neural networks as a dumbed-down version of our brains, a network of neurons if you will.
Imagine our brains as a low-res image: light bounces off a painting, our brain does its magic, and voilà, we feel something. That's the basic formula: input, mystery middle, output. And, of course, every beginner's course on neural networks will regurgitate this same formula: input nodes, some hidden layers (the mystery middle), and output nodes.
But let's not get too mystified by the term “hidden layers.” They're not some enigmatic, unknowable black box. In fact, tweaking them is a big part of the fun in neural network parties. The data churned out by these hidden layers might be gibberish to us, but it's the network's bread and butter – hence the “hidden” tag, I guess.
Then there's explainability, or the lack thereof, in AI decision-making. AI doesn't reason like humans. It's not sitting there, pondering the mysteries of the universe. It just processes data in its own alien way.
Nodes, another lofty concept. They don't have a clear counterpart in what the computer's actually doing. They're essentially just functions in a fancy dress. In machine learning, nodes are like little data processors, taking a vector, doing something to it, and passing it on, like in a game of digital hot potato.
And the real magic? It's all in how the inputs are weighted. Much like in our brains, where some neuron connections are stronger and more influential.
The human brain is still a bit of a mystery, but the general idea is that our thoughts and actions are influenced by the connections between neurons and how strong these connections are. In the realm of artificial neural networks, it's basically just a bunch of functions passing numbers around. You can tweak these numbers to make them more or less influential, kind of like turning the volume up or down on your TV.
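If you want to see how unmysterious that is, here is a back-of-the-napkin sketch of a single “node,” using nothing beyond plain Python and made-up numbers: a weighted sum pushed through a squashing function, and that's it.

```python
import math

def node(inputs, weights, bias):
    """One 'neuron': multiply, add, squash. No thinking involved."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the result into (0, 1)

# Three input "pixels", three dials (weights), one offset (bias).
print(node([0.2, 0.9, 0.1], weights=[1.5, -0.8, 0.3], bias=0.1))
```

Turning the dials up or down is the entire “learning” story that follows.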
A small network built from a few nodes like that is child's play compared to real neural networks, which are always designed with a specific goal in mind. Imagine we feed it a picture of a letter, with each pixel's brightness as an input, and the network's job is to guess the letter. Sounds simple, right? But really, it's just a mountain of math.
It's pretty neat that we can create a machine with a billion little dials to turn, but no sane person is going to adjust all of those by hand. Thankfully, neural networks can be trained to tweak their own settings.
For instance, if I have a picture of a 'B', and the network guesses wrong, that's a training opportunity. These networks eat up millions or billions of such data snacks. Initially, the weights in the network are as random as a dice roll, so it might guess that 'B' is an 'A', an 'X', a 'P', and a 'C' all at the same time. But we can measure how off its guess is and adjust the weights accordingly, making it a bit smarter with each try.
If the network is 20% sure that a 'B' is an 'A', it's time to dial down the factors leading to that guess. Gradient descent is the wizardry that kicked off the AI frenzy. Picture you're standing on a hill: the gradient is like an arrow pointing uphill. Following it is the quickest way up. Going in reverse is the fastest way down.
Our error function is like a hill that shows how incorrect each weight is. So, if we reverse-engineer the gradient, our network will inch closer to being error-free. Like a hiker finding the easiest path down a mountain, but with a lot more math, and a lot more trial and error.
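Here is roughly what that looks like in code: a toy, one-weight example with invented numbers, nowhere near a production training loop, but the same idea. Nudge the weight against the gradient of the error and watch the error shrink.

```python
# Toy gradient descent on a single weight. The "network" is just y = w * x,
# and the "error" is the squared distance from the target output.
x, target = 2.0, 6.0      # we want the network to learn w = 3
w = 0.5                   # start with a random-ish guess
learning_rate = 0.05

for step in range(20):
    prediction = w * x
    error = (prediction - target) ** 2
    gradient = 2 * (prediction - target) * x  # d(error)/dw, the "uphill" arrow
    w -= learning_rate * gradient             # walk downhill instead
    if step % 5 == 0:
        print(f"step {step:2d}  w={w:.3f}  error={error:.4f}")
```

Run it and the weight crawls toward 3 while the error crawls toward 0. That is the entire wizardry, scaled up to billions of dials.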
Our example neural network gets better at identifying letters, which is admittedly pretty impressive. But let's not kid ourselves – it's not performing any miracles.
The idea I just outlined? It's from a 1958 paper, a vintage theory on how the human brain might work. So, it's hardly breaking new ground, yet this concept of layers of interconnected nodes is the star of today's AI show.
You'd hope that our little letter-recognizing network would learn to spot patterns in letter strokes or something that makes sense. But no, the weights in a neural network often seem as random as a lottery draw, and much of today's machine learning architecture is just the result of someone throwing spaghetti at the wall and seeing what sticks. Sure, sometimes they make the whole gradient descent thing more streamlined, but don't hold your breath for any groundbreaking revelations in the hidden data. Don't expect a coherent line of reasoning from our current “AI” either.
The Real Cost of “AI”
These networks have been used to create all sorts of media, like images and music, from text descriptions, and most of us have tried it, played around with it, and then abandoned it again. I use AI to create my cover images here. Why? Because I actually think they give this website a coherent look. For that, I frequently battle with “My creators didn’t allow me to render images of despair and political context”, which is just another surprise.
Remember Google's Deep Dream? Yeah, me neither. It was an early generative model that grabbed headlines by turning innocent photos into psychedelic nightmares. What a wonderful and useful thing.
It was originally part of a 2014 image recognition contest by ImageNet, which, by the way, assembled a dataset of almost 15 million images without owning the rights to them.
The line between research and commercial exploitation is thinner than a razor blade in this field. Big corporations often use this ambiguity to bulldoze their way forward, leaving us to deal with the aftermath. Using millions of images without clear ethical boundaries is questionable even in academia, but once there's money involved, good luck trying to reel that back in. We've seen this scenario play out time and again.
Ah, the good old tradition of companies rushing products to market, like the classic leaded gasoline or asbestos insulation. It’s like a surprise party, but the surprise is brain damage and lung cancer for large chunks of the planet.
Then there’s Google, the overachiever in this game. They nailed it with AdSense, a surveillance system so advanced it probably knows your heart rate, body temperature, and the last time you blinked. And what do they do with this info? Try to peddle some bizarre Coca-Cola concoction to accelerate your timely and pension fund-friendly demise.
I mean, you can actually do useful things with data but selling Coca-Cola Ultimate, with the enchanting flavor of burnt popcorn and coconut, or as YouTubers, who make a living from drinking death syrup on camera, call it, Hawaiian Tropic suntan lotion meets fruit salad, is probably a noble endeavor, too. It’s like they took regular coke, added a splash of mango, and then decided, “Hey, why not throw in some Hubba Bubba gum for that gourmet touch?”, and then had to rely on Google AdSense to sell this crap to someone.
Google turned data harvesting into a cornerstone of the internet before anyone really grasped what digital privacy meant. And now? It’s like a bad tattoo from a wild night out - impossible to get rid of.
But let's not forget ImageNet, the trendsetter in online data scraping for AI training. It’s like they saw all the potential problems and thought, “Yeah, that seems like a great idea, let's do it!” And here we are, dealing with the mess.
ImageNet is remarkably biased; they've conveniently 'borrowed' all these images and used some rather iffy labor tactics to label them. But hey, their CEO talks about cute, frolicking cats, so everything's just peachy and light in their cushy, marshmallow-esque fantasy land.
If you found value in this read so far, kindly consider leaving a heart. For those eager to explore more content of this nature, I invite you to continue reading the following 7k words.
Your support in the form of a modest contribution to sustain the daily updates of this Substack is deeply appreciated. Thank you immensely for your readership and for supporting independent journalism!
Amazon's Mechanical Turk (I love that name, I’m sorry) fancies itself as a quaint little bazaar for micro-tasks, those tiny, tedious tasks that somehow still need a human touch.
This mostly unknown digital sweatshop was apparently a match made in heaven for ImageNet's early days around 2008 or 2009. They tossed a bunch of pictures at the workers and asked them to play a game of “I spy with my little eye.” Sounds just like those CAPTCHAs we all solve, right? Congratulations, you've been an unpaid intern for Google's AI training for years.
These tech giants have a rather, let's say, 'creative' understanding of consent. And this is just the tip of the iceberg. Need to visit a website? Well, you might as well roll up your sleeves and help train an AI while you're at it, because let’s face it, they never tried to protect their services from bots.
Jeff Bezos, ever the wordsmith, dubbed this “work” as “artificial artificial intelligence.” And in one article that couldn't stop gushing about ImageNet, Turkers were just a faceless horde.
By 2018, these workers were raking in a whopping $2 an hour, and guess what? Not much has changed since. These folks are the invisible, the impoverished, the easily exploited. Mechanical Turk? More like Slavery-as-a-Service. It’s like a digital Bangladesh.
Turkers might be jacks-of-all-trades, but the AI gold rush needed experts. Enter companies like Appen, peddling specialized data labeling services for machine learning. Oh, the joy of progress!
The story of global tech labor is just a heartwarming tale of opportunity and fair play, isn't it? First, our friends in Kenya and the Philippines were the lucky ones to get a piece of the action. But then, Venezuela's economy took a nosedive, and voilà – a fresh pool of desperate job seekers appeared for companies to exploit.
Here's where it gets even more uplifting: a journalist from MIT decided to shine a light on the plight of a Venezuelan worker at Appen, painting a picture of sheer workplace bliss. Imagine this: no way to talk to the company, glued to your computer, waiting for the next task in a digital version of musical chairs, and a website that's as reliable as a two-dollar umbrella in a hurricane.
Appen, playing the role of the benevolent overlord, really knows how to squeeze every drop from its vast ocean of labor. These workers, banded together in their online havens, have to script their way to making their digital toil just a tad less soul-crushing. And because they're contractors, the pay is as predictable as a game of Russian roulette – some days you earn enough for a coffee, other days it's jackpot time, but mostly it's just playing a game of hope and disappointment.
And let's not forget the golden rule: if a company has a slavery policy, it's probably just for show, right? Meanwhile, the AI juggernauts of the world (e.g. Amazon, Microsoft, Google, Nvidia, Adobe, Oracle, Bloomberg, Boeing, Airbus, Salesforce) can't get enough of this setup. In their eyes, it's boom time, baby!
Behind the Sparkling Facade
This technological evolution, fueled by those with limited options, has transitioned from the simple aesthetics of Deep Dream to a new era marked by unique art, uninspired writing, and consultant-friendly buzzwords.
In the realm of video games, various AI technologies converge, driving substantial investment and effort towards automating nearly every aspect of game development. Yeah, as if games weren’t already an uninspired mess where “I have the biggest empty world” became a seal of quality.
Ubisoft, after having dumped millions into the now dead Metaverse, decided that a tool called Ghostwriter was a good next step. You know, because diving headfirst into crypto wasn't adventurous enough. So now, they've got this shiny new toy that spits out NPC dialogue like a vending machine. It's marketed as a game-changer, but they are apparently not using it for their main characters. I wonder why? Ubisoft dialogues and storytelling always sounded like what ChatGPT would spit out on any random occasion—”Write me an outline for a story about someone losing his father and now seeking vengeance.”
Unity's lineup of “future tools” now includes a quirky alien, a chatbot, and several art-generating tools. They've even opened the doors for the sale of AI tools on the Unity store. However, the company has been rather cagey about the origins of its training data, leading to suspicions that they didn't give it much thought until it became a topic of inquiry.
The rush to market AI solutions seems to prioritize speed over public scrutiny, a race that Unity is actively participating in, especially as their users begin to raise more pointed questions.
Unity's AI initiative is still in beta, yet it was launched with a feature for Atlas, a tool whose main function is to scour Sketchfab for 3D models under lenient licenses. They claim AI is involved, but it seems its role is limited to basic searching tasks. This gives a pretty clear indication of the level of oversight Unity is exercising.
Given these developments, it appears that the most tangible application for this technology is in speeding up the production of those mobile games where you have to save a king or let a town fall down a hole in the ground. You know, all this stuff that really helps to advance humanity.
Currently, the output of AI isn't quite captivating or consistent enough to maintain someone's interest for an extended period. It elicits a reaction like, “Oh, that's neat,” and perhaps you even visit that website promising to redesign your kitchen using a photo. But then you find out it merely slaps a low-quality texture onto your appliances and then tries to upsell you a subscription for “more amazing outcomes.” It's somewhat challenging to come across any creations that prominently feature AI-generated assets.
From my experience, programming with AI is akin to first mastering the code on my own, then painstakingly teaching a five-year-old to replicate what I did. It's almost pointless if I still have to sift through all the standards and documentation myself - I might as well just do the programming directly. And let's not forget the elephant in the room: tools like ChatGPT and GitHub Copilot rely on training data that was scraped. By the way, just in case you're wondering, “GPT” stands for “Generative Pre-trained Transformer.” That’s it. No intelligence, sorry.
Transformers were dreamt up by the wizards at Google to crack the code of language translation. These networks have a nifty feature called 'attention': it's like giving the network a magnifying glass to better understand the context of words in a sentence.
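For the curious, 'attention' is less mystical than it sounds. Here is a bare-bones sketch of scaled dot-product attention, the mechanism from the original transformer paper, using nothing but NumPy and made-up toy vectors; real models wrap this in many layers and learned projections.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each word asks 'which other words matter to me?'"""
    scores = Q @ K.T / np.sqrt(K.shape[-1])                                 # word-to-word similarity
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                                                      # weighted mix of the values

# Three "words", each represented as a 4-number vector (completely made up).
x = np.random.rand(3, 4)
print(attention(x, x, x).shape)  # (3, 4): each word now carries context from the others
```

Matrix multiplications and a softmax; no magnifying glass, and certainly no comprehension.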
Then there's ChatGPT, our word-by-word storyteller, always guessing which word might come next like it's playing a never-ending game of Mad Libs. Remember GPT-2's hilarious stunt? It threw in a spoiler alert for a totally fictional episode of 'Game of Thrones'. Why? Because apparently, everyone and their grandmother was writing about it.
But let's be real: even as these models evolve, the dreamer who wants to build a game without learning a single skill or breaking a sweat is probably not crossing the finish line anytime soon. But, hey, these models will improve. There's a fortune being poured into AI right now, so who knows what's next?
Most of these newfangled AI firms are essentially piggybacking on established technologies like GPT or DALL-E. Yet, somewhere out there, surely someone is experimenting with novel structures.
“AI” is not Creative
For the discerning eye, telling apart AI creations from human-made work isn't too tricky for now, but I bet this gap will narrow over time. Often, if something looks passable at first glance, most folks won’t even bat an eye if you suddenly end up with an extra finger or two in that fancy image of you with your new “girlfriend.”
Small tasks with predefined parameters, like outlining, summarizing, or translating your text, are actually the only useful things I have seen these models do. Coherence is a big problem for these programs.
The audio you're about to hear was created by a music-producing AI developed by Meta. This example may illustrate the mentioned issue of coherence.
Despite their remarkable capabilities, large language models like ChatGPT won't be pioneering, naming, or evolving new concepts. Even managing a specialized Python library is a stretch for ChatGPT. And when it comes to the humanities, they remain out of reach, regardless of the volume of subreddit data fed into the training.
The majority of AI chatbots have a memory span limited to just a few hundred words (again mimicking society flawlessly), making it a tall order for them to loop back to earlier topics in a discussion.
As for visual art, achieving any semblance of consistency is a herculean task, especially when it comes to crafting textures that even vaguely match each other. Or have you seen what AI imagery does to hands? Not sure what my fingers on this portrait are supposed to represent but it gives me the creeps.
Determining what elevates art to greatness is elusive, but creation involves a myriad of small decisions. An artist's distinct voice emerges from the consistency and harmony within these choices.
Take Mark Rothko, for example. Regardless of personal opinions on his artistry, his paintings are unmistakably his. AI, however, removes these nuanced decisions, replacing them with an averaged blend of choices previously made by others.
Consequently, any artwork generated by AI ends up being an amalgamation of existing images. Inherently, it lacks the capability to produce anything truly novel.
It seems that modern audiences are easily thrilled (or perhaps prompted to be) by a princess character resembling a malformed Pixar movie princess suddenly changing in color and posture, complete with poor lip-syncing, a robotic voice and the promise to perform 'tricks' like disappearing in fairy dust. This reminds me of how I used to make Keynote presentations in the 9th grade. Back then, no one would have cheered ecstatically if I announced my plans to complete all my assignments using the multitude of quirky animations Apple packed into the Keynote App.
Discussing the aesthetics of AI-generated art seems futile. There's no deliberate intention in its creation. The computer merely processes a text prompt and generates an image that reduces error. Additionally, the typical mindset among AI users often revolves around simplistic curiosities, like wondering what lies beside the Mona Lisa, rather than deeper artistic exploration. As a result, there's a lack of profound thought driving this process, and frankly, most of the output doesn't even look particularly appealing.
The current trend in image generation tools gravitates towards surrealism, landscapes, and stylized art. However, the results often resemble what you'd find in a Google search for “cool desktop backgrounds.” There's an apparent lack of vision or deliberate decision-making in these creations, not to mention the often 'melty' appearance they have.
These tools lack an inherent understanding of aesthetics. Neural networks don't operate with an intrinsic sense of beauty. To coax something visually appealing out of them, prompts need explicit directions like 'best quality' or 'featured on ArtStation', because if you want something ‘beautiful’ the AI needs to have been trained on something that was already tagged as ‘beautiful’. There is no decision-making behind what could be considered beautiful. When I gaze at my girlfriend, I see her beauty. But if an AI were to analyze her image, it would perceive nothing beyond pixels. If I were to label her photo with "ugly," the AI would categorize her as such, regardless of anyone else's perception.
Despite these limitations, it's hard to deny that the quality of AI-generated art is improving steadily. Image generating models like those used in ChatGPT have their limitations, yet they're advancing rapidly to a point where their output is good enough and consistent enough that the average person might not discern or mind the difference.
Arguing about what constitutes art in this context may be futile, as it's easy to simply reject any proposed definition. This debate has been ongoing for thousands of years. The brightest minds on Twitter, or any other modern platform, are unlikely to settle it.
However, one thing seems clear: AI lacks true creativity. If you train a neural network to recognize text, it will nail it when faced with actual text. But throw a curveball at it, say a bizarre math symbol or a dog’s picture, and watch it hilariously try to figure out which letter it could be. It's just a program, after all, not some intelligent wizard that magically integrates new data like a human.
Sure, you could bombard it with thousands of pictures of dogs to 'retrain' it, but let's not kid ourselves and call that intelligence.
I ran into this very snag while trying to make sense of text in PDF files. Adobe Acrobat is fine with text, but when it stumbles upon an equation, it's basically shooting in the dark since its vocabulary doesn't cover maths.
A big reason AI seems so dazzling these days is thanks to clever wordplay and rhetorical flourishes that dress it up as smarter than it is. Even the term 'AI' is part of this smoke and mirrors act. It can recognize words? Whoa, it must be literate! When a network transforms a text prompt into an image, suddenly it's hailed as a painter. It's like a classic move from the playbook of all good cults – AI enthusiasts use the shroud of mystery to usher their ideas onto center stage.
“We don't understand the hidden layers, nor do we fully grasp how the brain works,” they say. This ambiguity fuels speculation that maybe, just maybe, these machines learn like human brains. Enter the grand, dramatic proposals to invest billions in averting an AI-induced apocalypse.
But let's get real. We may not have an X-ray vision into the hidden layers, but we do understand the mechanics of machine learning and what it churns out. And I can say with confidence: AI is trained to mimic.
You feed the model with training data, crossing your fingers that it replicates what a human might produce. Sure, there are methods like diffusion, used by top-tier image generators, that don't require labeled data. Yet, these networks are still shadowing the structure of whatever they're fed.
Training in AI is essentially about engraving patterns from the training data onto the model. That's no trivial task – these models can detect some incredibly complex patterns. But let's not confuse this with creativity or a dangerous level of potential sentience.
A model, stuffed with countless images, might give the illusion of creativity or a self-aware understanding of what you are saying, but it's really just aligning your prompt with the most probable output based on its training. Researchers have even coaxed these models to regurgitate their training data, proving that they're not concocting something new in the way a human artist might.
If you ask someone to recreate a painting from memory, you won't get an exact replica. Instead, you'll see what resonated with them about the original and their unique interpretation of it. The result will bear the imprint of their personal aesthetic.
On the other hand, a tool like Stable Diffusion can replicate the exact painting, even placing it in an Etsy store mock-up for effect. But when AI proponents liken image generation to photography, they're missing the mark.
If we accept that machine learning models are inherently uncreative, then any creative input must come from the human side. Proponents argue that this is akin to photography because the photographer doesn't create the scene they capture. However, the avenues for creativity in AI are significantly limited.
Sure, you can tweak the prompt and adjust parameters to regenerate an image, or use inpainting for parts of a picture, but there's no precise control over the final output beyond these broad suggestions.
In contrast, photography involves capturing a specific scene or event, but with a high degree of control over numerous factors like position, focus, focal length, white balance, color balance, and lens characteristics. Photos have an objective aspect, but the decision to capture a particular image and its characteristics rests entirely with the artist.
The art of generating a picture with AI is essentially sophisticated guesswork. This leads us to the amusingly emerging field of 'prompt engineering,' which often boils down to naming an artist whose style you wish to mimic or simply appending 'high quality' to your prompt.
Take Microsoft researchers flaunting their CODI network, for example. They use prompts loaded with terms like 'oil painting,' 'cosmic horror painting,' 'elegant intricate art station,' 'concept art by Craig Mullins,' and 'detailed.' It's telling that even the most advanced tech demos rely on adding 'art station' to their prompts to achieve impressive results.
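In practice, much of 'prompt engineering' really is just string gluing. A tongue-in-cheek sketch of the craft, where the tag list and the function are my own invention, standing in for whatever image-generation API you would actually call:

```python
# "Prompt engineering", distilled. The tag list and function are hypothetical;
# substitute whichever image-generation service you actually use.
QUALITY_BAIT = ["highly detailed", "trending on ArtStation", "8k", "masterpiece"]

def engineer_prompt(subject, artist=None):
    parts = [subject] + QUALITY_BAIT
    if artist:
        parts.append("in the style of " + artist)  # i.e., borrow someone else's life's work
    return ", ".join(parts)

print(engineer_prompt("a teddy bear on the moon", artist="Craig Mullins"))
```

That is the 'engineering': concatenating flattery and other people's names onto your wish.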
This discussion invariably circles back to the fact that most AI datasets are culled from the web, often without consent. In the world of research, it seems you can get away with a lot under the guise of academic pursuit.
Stable Diffusion, for instance, faces a persistent issue where its generated images carry watermarks from the stolen pictures. Getty Images is even suing them over this. Stable Diffusion employs LAION, a dataset comprising almost 6 billion image-text pairs. The clever twist here is that LAION contains only links to images, not the images themselves, skirting around direct copyright infringement. But the ethical implications of such practices remain a point of contention.
Model developers can initially download images for their research, then conveniently pivot their project into a commercial venture once the model is operational. A case in point is Stable Diffusion, which used images from DeviantArt for training. In reaction to the ensuing controversy, DeviantArt ingeniously integrated diffusion right into their platform. This move has paved the way for an influx of mediocre 500x500 images featuring valuable subjects like ‘a teddy bear smoking weed’ or an ‘exotic, nearly naked woman on the moon, à la Andy Warhol style… detailed!’
But there's a small consolation: if someone uses your name in their prompt, or your face, leading to a creation you're not happy with, you can report it to DeviantArt. They might remove the generated image. Additionally, artists now have the option to tag their work as 'no AI', potentially allowing companies to exclude such tagged artwork from future training datasets — if they choose to do so.
This eagerness to convert community spaces into commercial platforms is disconcerting. DeviantArt, admittedly not renowned for top-tier art (sorry to say), has always been perceived as a haven for artists to connect. Now, its administrators seem to be signaling to their users that it’s no longer safe to post original work there.
This opens a Pandora's box of ethical dilemmas, but it’s often justified with another popular pro-AI rhetoric: “Machine learning is democratizing art.”
In no plausible interpretation of democracy does the idea make sense, except perhaps to those who unrealistically believe they can make millions in mere weeks by selling AI-generated images on dubious websites. The production of art is not democratic, and democracy is totally irrelevant to it.
While art collectives can function democratically, suggesting that AI will democratize entities like Blizzard Studios is a stretch. This rhetoric inadvertently reveals a deeper implication. In the language of news outlets and political commentators, 'democracy' often gets entwined with the concept of a free market. Thus, a process or technology is labeled 'democratic' if it paves the way for new financial avenues.
This perspective leads to a belief where the introduction of AI in art is seen as 'democratic' because it enables traditional companies to reduce costs by replacing human illustrators and designers. Additionally, it creates market opportunities in areas like computing power or data labeling services. It's a reflection of the adage 'the freer the market, the freer the people', showcasing how economic interests often influence the interpretation and application of democratic ideals.
“AI” Steals
AI is dumb. And something that is dumb cannot create on its own. That is probably why today’s society and YouTubers love it so much. They finally found a “life form” dumber than themselves. Or one that is equally obedient to a set of parameters. One of the two, for sure; either way, AI is the perfect mirror of our society.
OpenAI is the heavy hitter here. They're the brains behind GPT and DALL-E, some of the best text and image generators on the market. The leadership at OpenAI? Let's just say it reads like a list of Silicon Valley's most colorful amphetamine misusers and children’s blood transfusion enthusiasts.
Sam Altman, the CEO, seems to be on a mission to bring every New World Order conspiracy theory to life. His latest adventure? WorldCoin, which involves scanning retinas to build a blockchain-based global digital ID system. Supposedly for the good of all humanity.
Interestingly, in every non-western country they've tried this, their venture has had the lifespan of a celebrity marriage. In Kenya, for instance, they were dishing out $50 for retina scans. That was until the government stepped in, firmly said “fuck off,” and started an investigation. It's a pattern that seems to echo in every third world country where WorldCoin attempts to establish itself.
It's like the ultimate big tech Rube Goldberg machine: because Altman's AI venture is too spooky (to him), we obviously need a way to ID humans. And what better way than a retina database that's also a bank? Oh, and don't forget the cherry on top: universal basic income. Because we need a good onboarding process.
So, surprise, surprise: OpenAI, which once started as a starry-eyed non-profit research organization, suddenly realized the scent of money was too enticing. When they sniffed out the potential for major cash flow, well, let's just say their non-profit ideals got a reality check.
Ironically, despite the name OpenAI, much of their work is anything but 'open'. And here's another not-so-shocking revelation: OpenAI essentially commandeers all its training data. Since there aren't any laws governing this area yet, they're not obligated to disclose the sources of their DALL-E images. And let's face it, there aren't any licensed datasets that fit the bill, so it's a safe assumption that they've scooped up billions of images from who knows where.
GPT's story is no different when it comes to text. OpenAI crawled the internet, vacuuming up text to train its model, or maybe they tapped into an already existing dataset like Common Crawl.
They're gearing up for another round of this digital scavenging. So, if you're a website owner and the thought of OpenAI using your content doesn't sit well with you, it might be wise to update your robots.txt file with some specific lines to keep them at bay:
User-agent: GPTBot
Disallow: /
The real issue with opposing the use of stolen training data is that the deed is already done. Billions of images have been hoarded on these companies' servers, and slapping a no-AI tag on your DeviantArt work now is like closing the barn door after the horse has bolted.
I'm not an expert, but I have my doubts about the legality of training on copyrighted data ever being challenged. At its core, I suspect a judge would view a generated image the same way many of us do: as just a picture created by AI. Let’s be honest: A lot of people do think that AI actually draws these images out of the blue.
Even in an ideal scenario, where the law does step in, it's likely to protect the interests of the big players with billion-dollar stakes. And let's not forget, digital files have a way of circulating regardless of legal barriers. For a solo artist lacking the means to identify or pursue copyright infringement, there's little hope for safeguarding their work, even if a legal framework existed.
As for the outputs of AI, that's a whole different legal ball game. AI as a field hasn't been thoroughly tested in court, but there have been skirmishes over the patents and copyrights of AI-generated content. One ruling declared that an AI can't be named as the inventor on a patent. However, the question of whether an individual can patent an AI-assisted invention remains unaddressed.
Even without legal alterations, there's always room for deception. While AI-generated visual art is relatively easy to spot, text, code, animations, and design drafts can easily be passed off as human-made with a bit of editing. Despite AI's inherent lack of creativity, the law might lean towards considering AI as capable of creativity, especially when its outputs appear novel.
The U.S. government is contemplating AI-specific legislation, but nothing solid has materialized yet, and it's all wrapped in political jargon. This situation leaves us in a quagmire with no clear solutions. While machine learning is undeniably valuable, the rise of generative AI could set the field back through regulation.
There are attempts to watermark generated images, but these efforts are led by AI companies aiming to filter out AI images from their datasets. It's like cooking without nutrition labels, or creating digital content without clear information and protection.
Meanwhile, companies like OpenAI invest heavily in destabilizing and potentially eliminating numerous art, writing, and design jobs. Many clients who hire designers have no taste and believe they could do a better job themselves. Now, they can instruct an AI to “make the logo pop,” and it'll comply without any awkward laughter or disregard.
For someone needing a stock photo, they might choose to generate a free AI landscape instead of paying for a licensed image. People generally opt for the easiest route. Holding strong, informed opinions on topics like AI is a privilege. Most users of these tools are unaware of their origins or implications.
Ultimately, convenience tends to triumph in most scenarios. The real adversaries in this situation are the companies driving these developments. Those who advocate for these technologies are mostly useful idiots, and it's not worthwhile to expend energy debating with them.
“AI” is Biased
Recently, OpenAI has been sanitizing ChatGPT to conceal its biases. Often, when they announce a reduction in bias, the dataset or model structure remains unchanged. Instead, they intervene between the user and the AI, ostensibly to protect users but also likely to avoid negative publicity.
They might prepend or append extra words to the user's prompt to steer the AI's response or intercept messages deemed inappropriate, feeding the AI an alternative prompt.
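Something like the following is presumably happening behind the curtain. This is a speculative sketch, since OpenAI doesn't publish its moderation plumbing; every name and banned term here is made up to show the shape of the trick, not its actual contents.

```python
# Speculative sketch of "bias reduction" done between the user and the model.
# None of these names or terms correspond to OpenAI's real internals.
BANNED = {"despair", "political context"}  # arbitrary example terms

def sanitize(user_prompt):
    if any(term in user_prompt.lower() for term in BANNED):
        # Swap in a harmless instruction instead of sending the original at all.
        return "Politely refuse and cite the content policy."
    # Otherwise, quietly prepend steering text the user never sees.
    return "Answer in a balanced, non-controversial tone. " + user_prompt

print(sanitize("Render an image of despair."))
print(sanitize("Summarize my article."))
```

The dataset and the model stay exactly as biased as before; only the wrapper changes.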
Everything we create is shaped by our world, and this is especially true for AI, which deals with ambiguous words and concepts. Since AI lacks true intelligence, it inevitably mirrors the biases of its creators.
Expanding the diversity of datasets might help reduce bias in AI applications like medical diagnosis. Consider a scenario in medical research where you feed a machine learning program 100,000 MRI images of lungs, some with tuberculosis and some without. The goal is to see if the computer can outperform doctors in diagnosing tuberculosis. And guess what? It turns out they can. Fantastic.
Now, suppose the research aims to understand how tuberculosis symptoms develop to enable earlier diagnosis. The challenge then becomes deciphering the pattern the AI program is detecting. How exactly is it making its diagnoses? This is where things get tricky.
Remember, you can't just ask the AI, “Why did you decide this?” It gets even more complex with neural networks and deep learning. However, through meticulous A-B testing, comparing nearly identical images, researchers figured out how the AI was identifying tuberculosis more accurately than human doctors. Surprisingly, the AI was giving extra weight to the age of the machine that took the image, deeming older machines more likely to indicate tuberculosis. This revelation shows that understanding AI decision-making can be pretty non-intuitive.
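Here is a toy reconstruction of that failure mode with entirely synthetic data: when a spurious signal (a made-up 'machine age' column) correlates with the label more cleanly than the medically meaningful feature, the model happily leans on the shortcut.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
has_tb = rng.integers(0, 2, n)                    # ground truth: sick or not
lesion_signal = has_tb + rng.normal(0, 1.0, n)    # weak, noisy "real" feature
machine_age = has_tb * 8 + rng.normal(0, 1.0, n)  # spurious: old machines happened to image sick patients

X = np.column_stack([lesion_signal, machine_age])
model = LogisticRegression().fit(X, has_tb)
print("weight on real signal: %.2f" % model.coef_[0][0])
print("weight on machine age: %.2f" % model.coef_[0][1])  # dominates the decision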
In the case of a model like GPT, the problem is different. All of its training data comes with inherent biases. Averaging the bias across all writing samples doesn't eliminate bias; it just converges on the most prevalent biases. There's no escaping it; there's no magical solution to this issue.
The fundamental law tells us you can't extract unbiased data from biased sources, and this principle applies to AI as well. My aim here is to demystify this technology, revealing it as what it truly is: an array of sophisticated systems capable of pattern recognition.
AI isn't intelligent. The main strategy of machine learning companies is to hoard more data and build even larger models, seemingly under the impression that they'll eventually reach a tipping point where the computer miraculously becomes sentient and, in a twist of fate, falls in love with them.
We've seen a surge of concern from so-called experts, claiming that AI is on the brink of sapience, ready to unleash some vague yet catastrophic harm. This is nothing more than magical thinking, a hysteria fomented by self-described rationalists.
Interestingly, numerous non-profits have emerged, sounding the alarm about AI's potential sentience. A buzzword in their rhetoric is 'alignment,' which can mean anything from aligning AI with human values to ensuring a superintelligence doesn't go rogue and start eliminating humans.
Picture the World Economic Forum sketching out extensive strategies for governments to “shield” us from the perilous threat of a machine that deems the depiction of a female breast as problematic. This scenario presents an ideal opportunity to exploit AI fears for greater control and regulation. Essentially, this manipulation of apprehension is what truly underpins the hype surrounding AI.
The prevailing reason for AI's current popularity is largely due to a widespread lack of understanding about its workings and nature. There's a general misconception about what true AI entails, with many not realizing that what we currently have are merely programs executing tasks based on given prompts, rather than embodying the full concept of artificial intelligence.
However, when OpenAI talks about safety and alignment, it's hard to take them seriously. They discuss solving problems that they not only conjured up themselves but are actively working to materialize through their technologies.
Currently, 'alignment' typically involves humans reviewing AI outputs and assessing whether they meet set goals. While this seems reasonable, it could devolve into a nightmarish job similar to content moderation.
And now, since AI is purportedly becoming smarter than humans, OpenAI suggests we let another machine learning system handle the alignment task. The irony of this proposition hardly needs pointing out.
Companies seem to be big fans of the 'just keep adding stuff' strategy. Why stop at training AI when you can have an AI train another AI that's in charge of aligning a third AI? Regardless of what the individuals involved believe they're achieving, they're effectively creating a smokescreen.
Promoting the notion of sentience or superintelligence as AI's primary danger shifts focus away from the actual application of machine learning. It paints AI as an almost elemental force, inherently capable of objective judgment and already exhibiting signs of intelligence. And of course, to prevent it from spiraling into the realms of science fiction, substantial funding is needed.
The fear-mongering about superintelligence subtly embeds the idea that these models are already capable of objective or at least well-informed judgments. However, the true goal of 'alignment' is to refine AI's ability to make these judgments.
MIT researchers break down alignment into three key aspects: producing accurate outcomes, consistently achieving its goals, and generating value for stakeholders. The last part, stakeholder value, often translates to maximizing profits for shareholders while pretending to help the environment.
The 21st century has seen the rise of behavioral nudging, an idea rooted in B.F. Skinner's theories and eagerly adopted by tech entrepreneurs. Nudging is all about making the incentives that steer your choices seamless and unnoticeable, effectively preying on basic human instincts like hunger, fear, and desire.
Enter the world of internet recommendation algorithms: those unseen forces dictating the order of your Twitter feed or the videos populating your YouTube homepage. These are finely tuned to hook as many users as possible into addiction.
But with neural networks, we're seeing a more personalized form of nudging. It's not just a general guess based on similar interests; it's precision targeting based on your specific past interactions. YouTube's algorithm, for instance, is an early version of this future, designed to maximize human screen time and, consequently, ad revenue.
As we march forward, platforms like YouTube are subtly removing the tools for search and discovery, once the fun part of being online. The recent push towards 'shorts' isn't accidental; it's about driving engagement and profits.
And while we're at it, let's talk about the content creators who seem to have run out of things to say. Instead of churning out yet another video on a five-year-old game, a "hidden gem" weapon in Dark Souls, or a tour of your desk setup, why not explore something genuinely new or thought-provoking?
The tragedy is that the truly good, informative videos often get lost in the noise, garnering less attention than they deserve. Meanwhile, superficial content designed to grab and hold your attention, without really offering anything of value, dominates the landscape.
By the way, did you realize that hitting the 'like' button on this article really boosts its visibility in the Substack algorithm?
This environment is the result of a simple yet flawed theory of mind held by those behind AI systems: more intelligence equals more objectivity. In their view, manipulation is justified if it works, treating users like mere pawns in a game of attention and profit.
It's becoming increasingly clear that anything prompting genuine reflection is contrary to the interests of these content businesses. They aim to monopolize your time, much like a slot machine, with a constant barrage of noise and light, keeping you hooked and mindlessly scrolling.
The Actual Problems With AI
The use of AI in seemingly innocuous areas like silly videos might seem low-stakes, but its application in predictive policing by law enforcement is a different story. Police forces are keen to predict crimes before they occur. However, considering the well-documented bias issues within police departments, any AI tool they use is likely to replicate these biases.
The FBI, for example, has a reputation for entrapping vulnerable individuals into criminal acts, only to arrest them later. Predictive policing becomes a convenient way to shift blame from individuals or institutions to an algorithm, creating a facade of objective judgment.
Shield AI, which develops AI for police drones and fighter jets, is a case in point. Despite their involvement in creating tools for potentially lethal purposes, they manage to glorify themselves on their trendy tech startup website. They collaborate with the Department of Defense, other military AI firms, drone manufacturers, and major players like Northrop Grumman.
Insurance companies are also jumping on the AI bandwagon, planning to use surveillance data to dynamically adjust premiums. With pervasive sensors in devices, behaviors deemed 'unhealthy' could lead to higher insurance costs.
These companies claim that their AI-driven solutions are about augmenting human productivity rather than replacing workers. They tout the potential of AI in customer service to cut costs, in claims processing to expedite settlements, and in underwriting for hyper-personalized coverage.
However, the dark side is that AI enables these companies to make more intricate inferences about individuals from the data they collect. AI can detect deviations from norms and assign a cost to your actions or misfortunes. In a free market system, misfortune becomes an opportunity for exploitation, turning personal crises into reasons for financial penalties.
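Here's a deliberately crude sketch of where this logic leads. Everything in it, the baselines, the weights, the surcharge formula, is invented for illustration; no insurer has published this code. But the shape of it is the point: every measured deviation from the "norm" gets a price tag.

```python
# Toy sketch of surveillance-priced insurance: the further your sensor data
# drifts from the insurer's "norm", the more your premium climbs.
# Baselines, weights, and the surcharge formula are invented for illustration.

BASELINE = {"daily_steps": 8000, "sleep_hours": 7.5, "resting_hr": 65}
WEIGHTS = {"daily_steps": 0.00002, "sleep_hours": 0.05, "resting_hr": 0.01}

def monthly_premium(base_premium: float, sensor_data: dict) -> float:
    surcharge = 0.0
    for metric, norm in BASELINE.items():
        deviation = abs(sensor_data.get(metric, norm) - norm)
        surcharge += deviation * WEIGHTS[metric]  # every deviation has a price
    return round(base_premium * (1 + surcharge), 2)

# Hypothetical month: fewer steps, less sleep, higher resting heart rate.
print(monthly_premium(120.0, {"daily_steps": 3000, "sleep_hours": 5.0, "resting_hr": 80}))
```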
As the old adage from every Economics 101 textbook goes, a market in equilibrium leaves no opportunity unexploited. This means that an individual or a company (since, let's remember, companies are legally people too) will inevitably find ways to exploit you. Tragedies, as it turns out, are not just emotionally draining but also financially costly.
While the current fuss about ChatGPT and art theft is a significant aspect of the AI debate, it's only a small fraction of the broader conversation. The issue of creativity in AI is a battle machine learning companies can afford to lose, as long as they continue deploying their systems elsewhere.
Rationality is a valued principle in society, right? Making arbitrary decisions about someone's welfare is generally frowned upon. Since we don't have long-standing traditions to fall back on, we need to rationalize our actions. People often resent insurance companies and the police for seemingly exacerbating their troubles. AI, in this context, offers two things: increased exploitation and a façade of objectivity.
Up until now, humans have had to make complex decisions. Neural networks, however, offer a seemingly simple solution: feed them data, and they'll spit out decisions quickly and supposedly rationally. It's easy to argue that these AI-driven decisions are objective because, well, a computer made them. With automated insurance, pricing decisions appear objective, and contracts can be dynamically updated. Our world is increasingly sensor-filled, and this data is constantly feeding into profiles that neural networks can use to exploit you more efficiently.
Stock prices, always expected to rise, necessitate a shift from the slow, human-centric process of insurance adjustment to a seamless, continuous process, essentially placing a real-time value on your life. As new wealth sources dry up, existing ones are squeezed for every last drop because, of course, the numbers must always go up.
There are endless potential applications for AI. While many won't succeed, some will, and not always for the better. Ad tech companies like Memorable are using AI to make commercials more annoying, believing they can score ads on memorability. Mobile game publishers like Yodo One employ bots to identify and target 'whales', big spenders in their games, with AI-recommended microtransactions.
The irony of automating art production instead of universally disliked jobs is stark. The dream of the internet, offering the world's information at your fingertips, has been subsumed by companies like Google. The information is now at their fingertips, and you're left interacting with automated chatbots, free for now but potentially a new revenue stream in the future.
AI has a multitude of genuine uses, much like blockchain technology, which could be useful for tracking physical goods. AI could automate mundane tasks if properly supervised. But the current path we're on uses AI to further dehumanize interactions and experiences, a trajectory that merits serious contemplation and, possibly, a course correction.
Porn is, unsurprisingly, a primary application for AI, especially appealing to the stereotypical gaming audience of lonely individuals with high-end graphics cards. The League of Legends community, for example, may find itself particularly impacted.
Then there's the trend of AI companions, a somewhat sad reflection of our times. These bots can role-play characters, offering a simulated form of interaction. However, if you spend enough time in AI communities, it becomes apparent that many people are simply seeking companionship. Loneliness is on the rise, and it's a problem largely ignored by those in power. Apps like Replika, which use GPT to create virtual friends or partners, have emerged as a market response to this loneliness.
Replika initially gained traction as an AI therapy tool, which is a sad commentary in itself. But it soon devolved into a service where people pay to have sexually explicit conversations with their AI 'friends'. These are merely temporary fixes that fail to address the deeper issue. We know that real, meaningful community connections make life fulfilling. Everyone understands this, yet genuine relationships, which can’t be transactional, are increasingly difficult to maintain in a world where financial considerations dominate.
Every step taken by advertising firms and tech companies encroaches on our ability to live unmediated lives. Social media is a prime example, turning interpersonal interactions into a marketplace for attention.
Yes, there's a part of me that wants to celebrate the wonders of machine learning. Its potential is immense. I do not reject technology. I reject the omnipresent abuse of technology.
But our creations are invariably shaped by our world. Consequently, every AI project carries within it the fundamental flaw of being nothing more than a sophisticated social control scheme. These tools aren’t designed to enhance creativity or efficiency. They’re built to churn out the bare minimum, with the least oversight. The future might see every actor’s face and voice scanned, stifling the creation of anything truly original. And with AI writing all the scripts, what room does that leave for human creativity and expression?
Every game could transform into a uniquely tailored Skinner box, fine-tuning itself to your reactions, even monitoring your pupil dilation. Consider the delivery worker from Bangladesh, penalized for not walking fast enough – that's already happening.
Not all of this dystopian vision will come to pass, but if entertainment companies rapidly switch to AI-generated content as the norm, the quality becomes irrelevant. If all movies are AI-generated, what's our benchmark for 'bad'?
The current trajectory of what is marketed to you as “AI” is turning life into a game, imposing rigid systems with obscure rules that dictate rewards and penalties. It creates a world bound by countless invisible, ever-changing contracts, often driven by market logic that once bombarded us with absurd internet content.
But remember, things don't have to be this way. There's a time for business and a time for life. Groups like OpenAI won't draw this line for us, so it's up to us to define the boundaries and fight for a better future.
Again, I wrote all of this completely for free. I’m not an AI, unfortunately, and I have to pay my bills. If you can afford it, please consider becoming a paid subscriber so there can be a lot more like this in the future. Thank you.