HOYONEWS

TikTok’s algorithm favored Republican content in 2024 US elections, study finds

about 21 hours ago
A study published Wednesday in the journal Nature finds that TikTok’s algorithm systematically prioritized pro-Republican content in three states leading up to the 2024 US elections. Researchers created hundreds of dummy accounts and conditioned them to mimic real users’ behavior by watching a set of videos aligned with either the US Democratic or Republican party. They then tracked the videos TikTok recommended on these accounts’ For You pages, TikTok’s main feed. “We found a consistent imbalance,” they wrote in Nature.

About 42% of US social media users say these platforms are important for getting involved with political and social issues, according to Pew Research, but it’s not often clear how recommendation algorithms shape what appears in feeds.

Professors Talal Rahwan and Yasir Zaki at New York University’s Abu Dhabi campus set out to study how partisan politics shows up on TikTok, a platform that has become a key source of political information, especially for some young adults. Their study notes that this demographic, ages 18 to 29, shifted by 10 percentage points towards Donald Trump between the 2020 and 2024 elections.

A statement from TikTok read: “This artificial experiment with fake accounts does not reflect how people actually use TikTok. In reality, people discover and watch a wide variety of content on our platform which they continuously shape and can control through more than a dozen tools the authors seem unaware of.”

Bots that were trained on pro-Republican content viewed about 11.5% more content that agreed with their views compared with their pro-Democrat counterparts. That imbalance held for exposure to opposing views, too: bots trained on pro-Democratic content were about 7.5% more likely to see pro-Republican content on their For You page, the study found. The researchers used 323 dummy accounts in the study, setting their locations to New York state, Texas and Georgia.

For 27 weeks of the 2024 presidential campaign, the researchers sifted through more than 280,000 recommended videos using a combination of human and AI review. “Our finding isn’t just about reinforcement; Democratic accounts were shown significantly more anti-Democratic content than Republican accounts were shown anti-Republican content,” said Rahwan, one of the study’s authors. “The algorithm wasn’t just giving people what they want; it was giving one side more of what the other side says about them.”

The types of issues that surfaced in videos differed, too: pro-Democrat accounts in the study were fed disproportionately more cross-partisan content on immigration and crime, while pro-Republican accounts saw more cross-partisan content on abortion. “This suggests the algorithm may amplify content designed to attack the opposing side on its weakest ground, which is a more targeted and arguably more concerning pattern than a uniform ideological drift,” added Hazem Ibrahim, a PhD student at NYU Abu Dhabi who worked on the study.

The bots in the study were located through “mock” GPS and virtual private network (VPN) routing in strongly Democratic New York, strongly Republican Texas and Georgia, a battleground state. The researchers caution that their findings shouldn’t be generalized beyond these states.

The study’s authors acknowledge that many users self-select and curate the content they see on social media platforms. However, they say that TikTok’s For You page gives users less control than the main interfaces of other platforms, as the feed is “almost entirely driven by the platform’s algorithm”, the paper notes. A TikTok spokesperson disputed the claim that users have few customization options.

On TikTok, “users don’t need to follow anyone; the system decides based on behavioral signals like watch time. That makes it a uniquely clean setting for studying algorithmic influence, because user self-selection is minimized,” Ibrahim says. “Skews here are harder to attribute to users’ choices.”

The authors note that while their study unpacked the kind of political content users are exposed to, it doesn’t analyze the influence of these videos on political beliefs and behavior, or the reason the imbalance exists. They also note that the bots captured only the early stages of a user’s experience on the platform, and that the analysis covered English-language video transcripts, which wouldn’t capture political cues conveyed through visuals or other languages.

A TikTok spokesperson said these limitations demonstrate that the study was not representative of real users’ consumption. Still, the authors stress that studying the extent to which political content can be skewed on TikTok feeds is relevant to ongoing debates about platform transparency and algorithmic accountability. The Nature article pointed out: “Under the EU Digital Services Act, large platforms are required to assess and mitigate systemic risks to electoral processes, whereas in the USA, First Amendment protections grant platforms far greater editorial discretion.” Zaki added: “In an environment where margins are thin, systematic differences in the kind of political information recommended to tens of millions of young voters are worth taking seriously.”