Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’

A single person claims to have authored 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at one of the world’s leading conferences on AI and machine learning – a feat that has raised questions among computer scientists about the state of AI research. The author, Kevin Zhu, recently finished a bachelor’s degree in computer science at the University of California, Berkeley, and now runs Algoverse, an AI research and mentoring company for high schoolers – many of whom are his co-authors on the papers. Zhu himself graduated from high school in 2018. Papers he has put out in the past two years cover subjects such as using AI to locate nomadic pastoralists in sub-Saharan Africa, to evaluate skin lesions, and to translate Indonesian dialects. On his LinkedIn, he touts publishing “100+ top conference papers in the past year”, which have been “cited by OpenAI, Microsoft, Google, Stanford, MIT, Oxford and more”.

Zhu’s papers are a “disaster”, said Hany Farid, a professor of computer science at Berkeley, in an interview. “I’m fairly convinced that the whole thing, top to bottom, is just vibe coding,” he said, referring to the practice of using AI to create software. Farid called attention to Zhu’s prolific publications in a recent LinkedIn post, which provoked discussion of other, similar cases among AI researchers, who said their newly popular discipline faces a deluge of low-quality research papers, fueled by academic pressures and, in some cases, AI tools. In response to a query from the Guardian, Zhu said that he had supervised the 131 papers, which were “team endeavors” run by his company, Algoverse. The company charges high-school students and undergraduates $3,325 for a selective 12-week online mentoring experience, which includes help submitting work to conferences.

“At a minimum, I help review methodology and experimental design in proposals, and I read and comment on full paper drafts before submission,” he said, adding that projects on subjects such as linguistics, healthcare or education involved “principal investigators or mentors with relevant expertise”. The teams used “standard productivity tools such as reference managers, spellcheck, and sometimes language models for copy-editing or improving clarity”, he said in response to a query about whether the papers were written with AI. The review standards for AI research differ from those of most other scientific fields. Most work in AI and machine learning does not undergo the stringent peer-review processes of fields such as chemistry and biology – instead, papers are often presented less formally at major conferences such as NeurIPS, one of the world’s top machine learning and AI gatherings, where Zhu is slated to present. Zhu’s case points to a larger issue in AI research, said Farid.

Conferences including NeurIPS are being overwhelmed by rising numbers of submissions: NeurIPS fielded 21,575 papers this year, up from under 10,000 in 2020. Another top AI conference, the International Conference on Learning Representations (ICLR), reported a 70% increase in yearly submissions for its 2026 conference – nearly 20,000 papers, up from just over 11,000 for the 2025 conference. “Reviewers are complaining about the poor quality of the papers, even suspecting that some are AI-generated. Why has this academic feast lost its flavor?” asked the Chinese tech blog 36Kr in a November post about ICLR, noting that the average score reviewers had awarded papers had declined year over year. Meanwhile, students and academics face mounting pressure to rack up publications and keep up with their peers.

It is uncommon to produce a double-digit number – much less a triple-digit number – of high-quality academic computer science papers in a year, academics said. Farid says that at times, his students have “vibe coded” papers to pad their publication counts. “So many young people want to get into AI. There’s a frenzy right now,” said Farid. NeurIPS reviews papers submitted to it, but its process is far quicker and less thorough than standard scientific peer review, said Jeffrey Walling, an associate professor at Virginia Tech.

This year, the conference has used large numbers of PhD students to vet papers, which a NeurIPS area chair said compromised the process. “The reality is that oftentimes conference referees must review dozens of papers in a short period of time, and there is usually little to no revision,” said Walling. Walling agreed with Farid that too many papers are being published right now, saying he had encountered other authors with over 100 publications in a year. “Academics are rewarded for publication volume more than quality … Everyone loves the myth of super productivity,” he said. On Algoverse’s FAQ page, one answer discusses how the company’s program can help applicants’ future college or career prospects: “The skills, accomplishments, and publications you achieve here are highly regarded in academic circles and can indeed strengthen your college application or résumé.

This is especially true if your research is admitted to a top conference – a prestigious feat even for professional researchers.” Farid says that he now counsels students not to go into AI research, because of the “frenzy” in the field and the large volume of low-quality work being put out by people hoping to better their career prospects. “It’s just a mess. You can’t keep up, you can’t publish, you can’t do good work, you can’t be thoughtful,” he said. Much excellent work has still come out of this process.

Famously, Google’s paper on transformers, Attention Is All You Need – the theoretical basis for the advances in AI that led to ChatGPT – was presented at NeurIPS in 2017. NeurIPS organisers agree the conference is under pressure. In a comment to the Guardian, a spokesperson said that the growth of AI as a field had brought “a significant increase in paper submissions and heightened value placed on peer-reviewed acceptance at NeurIPS”, putting “considerable strain on our review system”. Zhu’s submissions were largely to workshops within NeurIPS, which have a different selection process from the main conference and are often where early-career work gets presented, said NeurIPS organisers. Farid said he did not find this a substantive explanation for one person putting his name on more than 100 papers.

“I don’t find this a compelling argument for putting your name on 100 papers that you could not have possibly meaningfully contributed to,” said Farid. The problem is bigger than a flood of papers at NeurIPS. ICLR used AI to review a large volume of submissions – resulting in apparently hallucinated citations and feedback that was “very verbose with lots of bullet points”, according to a recent article in Nature. The feeling of decline is so widespread that finding a solution to the crisis has itself become the subject of papers. A May 2025 position paper – an academic, evidence-based version of a newspaper op-ed – authored by three South Korean computer scientists proposed a solution to the “unprecedented challenges with the surge of paper submissions, accompanied by growing concerns over review quality and reviewer responsibility”, and won an award for outstanding work at the 2025 International Conference on Machine Learning.

Meanwhile, says Farid, major tech companies and small AI safety organisations now dump their work on arXiv, a site once reserved for little-viewed preprints of math and physics papers, flooding the internet with work that is presented as science – but is not subject to review standards. The cost of this, says Farid, is that it is almost impossible to know what is actually going on in AI – for journalists, the public, and even experts in the field: “You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature. Your signal-to-noise ratio is basically one. I can barely go to these conferences and figure out what the hell is going on.” “What I tell students is that, if what you’re trying to optimize is publishing papers, you know, it’s actually honestly not that hard to do.

Just do really crappy low-quality work and bomb conferences with it. But if you want to do really thoughtful, careful work, you’re at a disadvantage because you’re effectively unilaterally disarmed,” he said.