The biggest AI flops of 2024



AI slop infiltrated almost every corner of the internet

Generative AI makes creating reams of text, images, videos, and other types of material a breeze. Because only a few seconds pass between entering a prompt and your model of choice spitting out the result, these models have become a quick, easy way to produce content on a massive scale. And 2024 was the year we started calling this (generally poor quality) media what it is—AI slop.

This low-effort way of creating AI slop means it can now be found in pretty much every corner of the internet: from the newsletters in your inbox and books sold on Amazon, to ads and articles across the web and shonky pictures on your social media feeds. The more emotionally evocative these pictures are (wounded veterans, crying children, signals of support in the Israel-Palestine conflict), the more likely they are to be shared, resulting in higher engagement and ad revenue for their savvy creators.

AI slop isn’t just annoying—its rise poses a genuine problem for the future of the very models that helped to produce it. Because those models are trained on data scraped from the internet, the increasing number of junky websites containing AI garbage means there’s a very real danger models’ output and performance will get steadily worse. 

AI art is warping our expectations of real events

2024 was also the year that the effects of surreal AI images started seeping into our real lives. Willy’s Chocolate Experience, a wildly unofficial immersive event inspired by Roald Dahl’s Charlie and the Chocolate Factory, made headlines across the world in February after its fantastical AI-generated marketing materials gave visitors the impression it would be much grander than the sparsely decorated warehouse its producers created.

Similarly, hundreds of people lined the streets of Dublin for a Halloween parade that didn’t exist. A Pakistan-based website used AI to create a list of events in the city, which was shared widely across social media ahead of October 31. Although the SEO-baiting site (myspirithalloween.com) has since been taken down, both events illustrate how misplaced public trust in AI-generated material online can come back to haunt us.

Grok allows users to create images of pretty much any scenario

The vast majority of major AI image generators have guardrails—rules that dictate what AI models can and can’t do—to prevent users from creating violent, explicit, illegal, and other types of harmful content. Sometimes these guardrails are simply meant to ensure that no one makes blatant use of others’ intellectual property. But Grok, an assistant made by Elon Musk’s AI company, xAI, ignores almost all of these principles, in line with Musk’s rejection of what he calls “woke AI.”

