Battle of the Bots at the Social Media Arena
Posted: Wed Feb 28, 2024 4:13 pm
AI is starting to present a real problem... it threatens the viability of the internet as we know it. The following article outlines the all-pervasive nature of bots across the X platform in particular, but notes that other platforms are also struggling to deal with the problem they present... namely, dominating the space.
https://www.abc.net.au/news/science/202 ... /103498070
I guess on one hand there's some irony in seeing platforms set up to profit from your personal data habits being farmed like this, but it surely constitutes a serious threat. How does it play out? Does this lead to a sort of migration away from the internet, or just your average, run-of-the-mill enslavement by machines?
More than a year since Elon Musk bought X with promises to get rid of the bots, the problem is worse than ever, experts say.
And this is one example of a broader problem affecting online spaces.
The internet is filling up with "zombie content" designed to game algorithms and scam humans.
It's becoming a place where bots talk to bots, and search engines crawl a lonely expanse of pages written by artificial intelligence (AI).
Junk websites clog up Google search results. Amazon is awash with nonsense e-books. YouTube has a spam problem.
And this is just a trickle in advance of what's been called the "great AI flood".
Bots liking bots, talking to other bots
But first, let's get back to those reef-tweetin' bots.
Timothy Graham, an expert on X bot networks at the Queensland University of Technology, ran the tweets through a series of bot and AI detectors.
Dr Graham found 100 per cent of the text was AI-generated.
"Overall, it appears to be a crypto bot network using AI to generate its content," he said.
"I suspect that at this stage it's just trying to recruit followers and write content that will age the fake accounts long enough to sell them or use them for another purpose."
That is, the bots probably weren't being directed to tweet about the reef in order to sway public opinion.
Dr Graham suspects these particular bots probably have no human oversight, but are carrying out automated routines intended to out-fox the bot-detection algorithms.
Searching for meaning in their babble was often pointless, he said.
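(For anyone curious what "running tweets through AI detectors" can look like in practice, here's a minimal sketch in Python. It uses an off-the-shelf classifier from Hugging Face; the model name and the sample tweets are my own assumptions for illustration, not the tooling Dr Graham actually used.)

# Illustrative only: a rough approximation of checking text with an
# off-the-shelf AI-text detector. Assumes the transformers library is installed.
from transformers import pipeline

# Load a publicly available GPT-2 output detector (assumed model for this sketch).
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical sample tweets standing in for the reef-tweeting bot content.
tweets = [
    "The Great Barrier Reef is a vibrant testament to the beauty and resilience of nature.",
    "saw a turtle out on the reef today, best day ever",
]

for text in tweets:
    result = detector(text)[0]
    # The model reports a label (e.g. "Fake" for machine-generated, "Real" for
    # human-written) together with a confidence score.
    print(f"{result['label']:>5}  {result['score']:.2f}  {text}")

A single classifier like this is easy to fool, which is presumably why researchers tend to combine several detectors with account-level signals such as posting cadence and follower patterns.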
Towards the end of last year, Dr Graham and his colleagues at QUT paid X $7,800 from a grant fund to analyse 1 million tweets surrounding the first Republican primary debate.
They found the bot problem was worse than ever, Dr Graham said at the time.
Later studies support this conclusion. Over three days in February, cybersecurity firm CHEQ tracked the proportion of bot traffic from X to its clients' websites.
It found three-quarters of traffic from X was fake, compared to less than 3 per cent of traffic from each of TikTok, Facebook and Instagram.
A sign of the scale of X's bot problem is the thriving industry in bot-making.
Bot makers from around the world advertise their services on freelancer websites.
Awais Yousaf, a computer scientist in Pakistan, sells "ChatGPT Twitter bots" for $30 to $500, depending on their complexity.
In an interview with the ABC, the 27-year-old from Gujranwala said he could make a "fully fledged" bot that could "like comments on your behalf, make comments, reply to DMs, or even make engaging content according to your specification".
Mr Yousaf's career tracks the rise of the bot-making economy and successive cycles of internet hype.
Having graduated from university five years ago, he joined Pakistan's growing community of IT freelancers from "very poor backgrounds".
X's bot problem may be worse than other major platforms, but it's not alone.
A growing "deluge" of AI content is flooding platforms that were "never designed for a world where machines can talk with people convincingly", Dr Graham said.
"It's like you're running a farm and had never heard of a wolf before and then suddenly you have new predators on the scene.
"The platforms have no infrastructure in place. The gates are open."
The past few months have seen several examples of this.
Companies are using AI to rewrite other media outlets' stories, including the ABC's, and publish them on their own competing news websites.
A company called Byword claims it stole 3.6 million in "total traffic" from a competitor by copying their site and rewriting 1,800 articles using AI.
"Obituary pirates" are using AI to create YouTube videos of people summarising the obituaries of strangers, sometimes fabricating details about their deaths, in order to capture search traffic.
Authors are reporting what appear to be AI-generated imitations and summaries of their books on Amazon.
Google's search results are getting worse due to spam sites, according to a recent pre-print study by German researchers.
The researchers studied search results for thousands of product-review terms across Google, Bing and DuckDuckGo over the course of a year.
They found that higher-ranked pages tended to have lower text quality but were better designed to game the search ranking algorithm.
"Search engines seem to lose the cat-and-mouse game that is SEO spam," they wrote in the study.