Artificial intelligence - it’s the future, but what does it actually do? In the year and a half since ChatGPT launched, much ado has been made about the radical changes AI will make in our lives, and yet everything we’ve been given underwhelms at best and more often horrifies. What is it good for? What is the “use case,” as our Silicon Valley overlords would say? Like cryptocurrency, which so far has only been good for money laundering and buying drugs on the internet, there isn’t much AI can do that’s particularly useful. It can generate unnerving images, it can write bad essays for lazy college students, and it can search the internet and give you awful advice, all while using enough energy to ensure our planet warms to a catastrophic level. It’s a bullshit regurgitator, trawling reams of web text and spitting out hallucinatory invented answers to simple prompts, and no matter how many tech companies try to shoehorn it into their products, the machines have not yet wiped out any industries or streamlined anyone’s life; the CEO of Microsoft, hyping his own company’s Copilot program, could only point to the fact that the model “is helping me compose email better.” OpenAI promises that GPT-5 will have “Ph.D level intelligence,” which means that it will have trouble finding a job. As Ed Zitron explains in his excellent newsletter Where’s Your Ed At?, the AI hype bubble is close to bursting:
It’s time — it’s been time for a while — to accept that artificial intelligence isn’t the digital panacea that will automate our entire lives, and that reality may be far more mediocre than what Sam Altman and his ilk have been selling.
Brownlee remarks in his review that he really wants a “super personalized AI assistant that can do whatever a human assistant could,” and I really want to be clear how far we are off from something like that, because something that can automate your calendar, your emails, your music and your travel in a way that you can actually rely on requires operating system-level access and processing of information that may never exist.
Large Language Models are — and always will be — incapable of reasoning or consciousness, and while Sam Altman is desperate for you to believe that AI agents will soon run your apps, there’s little evidence that these agents will ever exist in a form that resembles a “supersmart assistant.” How’re they gonna pull that off? Training it on hours of people clicking buttons? There are a million different edge cases in every single consumer app, and reporters kvetching about the fear of these agents taking people’s jobs should sit and consider whether these things are actually possible.
AI, in its current state, inspires more imaginative thinking than what large language models can actually produce; that gap will persist as long as humans can think beyond a series of binary choices between things that came before them, which will be long past the time the AI hype surge fizzles out. (I hope.) The idea of artificial intelligence, though, is a perfect jumping-off point for novelists1; a novel is itself a kind of thinking machine, and AI lets a writer think through what makes humans human, what makes machines machines, and what the limits of thinking and intelligence themselves might be. Though there has been a recent uptick in novels about AI – including a much-lauded bad one – a recent reissue from New York Review Books shows that computing and artificial intelligence have been under the microscope for over 60 years. In the novella The Singularity (1960, now out in English in Anne Milano Appel’s translation from NYRB), Dino Buzzati cheerfully skewers the idea of a world-conquering artificial intelligence, showing these systems to be just as flawed and limited as the humans who create them.