UK schools should remove pupils’ online photos as AI blackmail threat grows, say experts

UK schools should remove pictures of pupils’ faces from their websites and social media accounts because blackmailers are using them to create sexually explicit images, experts have said.

Child safety experts and the UK’s National Crime Agency (NCA) warn that criminals are using AI to manipulate photos of children and then demand cash not to publish them. They are recommending educational institutions remove identifiable pictures of children from their websites and social media accounts – or consider not using them at all.

The Internet Watch Foundation (IWF) said an unnamed UK secondary school had recently been subjected to a blackmail attempt after criminals used the institution’s website or social media accounts to take photos of schoolchildren and then, using AI tools, turned them into child sexual abuse material (CSAM). The blackmailers sent the images to the school and threatened to publish them online if they did not receive money.

The IWF, which monitors CSAM online, used a digital tool to turn the blackmail images into a “hash”, or digital fingerprint, which was shared with leading tech platforms in order to prevent them from being uploaded. The watchdog said 150 of the images from the secondary school blackmail attempt could be classified as CSAM under UK law.

Jess Phillips, the minister for safeguarding and violence against women and girls, said the attempted blackmailing of schools was a “deeply worrying emerging threat”. Having already announced a ban on possessing AI models designed to generate CSAM, she said laws on the use of AI to create explicit images would be updated if necessary. “We will not hesitate to go further if necessary and make sure our laws stay up to date with the latest threats,” she said.

The IWF said the secondary school incident, which happened late last year, is not the only blackmail attempt involving manipulated school website or social media account photos that it is aware of in the UK.

The IWF is not naming the school involved in the incident last year, or the police force that contacted it seeking help in blocking distribution of the images.

A UK advisory body on tackling online harms, the early warning working group (EWWG), has issued guidance to schools on protecting pupils from blackmailers. Although the problem is not widespread, the group is concerned it is “only a matter of time” before more schools are targeted.

The group recommended schools remove images that show a student face-on and instead publish images that are harder to misuse, such as pictures taken from a distance, blurred images or portraits taken from behind a pupil. The advice warned against publishing “identifiable information” that could be used to harm or blackmail an individual, such as “names or faces”.

Schools should also consider whether they need pupil photos at all, the guidance said, asking establishments to consider “whether using imagery without children and young people’s faces can still achieve your objectives”. This included “celebrating achievements more safely” by showcasing milestones “while minimising risks”.

Avoiding the use of names or full names when labelling photos will reduce the risk of blackmail, the advice added, while applying privacy settings to a school website or social media account will limit who can view or share content.

A checklist of actions recommended by the EWWG included conducting regular audits of children’s images on websites, social media accounts and promotional material, and regularly seeking re-signing of image consent agreements. If an incident occurs, the group advised schools to contact the police immediately, retain any criminal images and remove from view the original images that had been tampered with.

The EWWG’s members include the NSPCC charity, the IWF, the Welsh government, Education Scotland, the Safeguarding Board for Northern Ireland and the NCA.

The Confederation of School Trusts (CST), whose academy schools educate more than four million primary and secondary schoolchildren across England, said schools would “carefully consider” the guidance and find the “right balance” between “celebrating pupils … and keeping everyone safe”.

“As educators we instinctively want to celebrate children’s achievements and that includes sharing photos and videos of all the good things that go on in our schools – it is deeply depressing that in doing so we potentially have to contend with threats from abusers and scammers,” said Leora Cruddas, the CST’s chief executive.

Blackmailing people over intimate images is known as sextortion, and the crime has become increasingly prevalent, with the advent of generative AI tools giving criminals a new way of extorting victims. Typically, sextortion involves manipulating a child or adult into sending intimate images or videos of themselves and then threatening to send the images to friends or relatives, or release them online, if the victim does not send money or more explicit images.

Sextortion has been linked to the suicides of several British teenagers who killed themselves after receiving extortion threats. In 2024, the Guardian reported the threat was evolving due to advances in AI, with one teenager being sent a fake “nude” image of herself that appeared to have been taken from her Instagram account.

The Report Remove service, which allows children to flag explicit images or videos of themselves that have appeared – or could appear – online, said sextortion attempts are increasing. Last year it received 394 reports from under-18s of blackmail attempts after the victims had been manipulated into sending sexual images to predators, a figure 34% higher than in 2024.

Sextortion has been carried out by criminal gangs based outside the UK, with the NCA pointing to west Africa and Nigeria as hubs. It is understood that the secondary school sextortion attempt involved the use of terms that appear in negotiation “scripts” used by sextortion gangs.

Some schools have already taken action after an increase in the threat from AI tools. Last year, the Loughborough Schools Foundation, which represents three private schools, redesigned its website to remove recognisable images of pupils.
Europe’s AI translation industry told it risks reputation by partnering with US firms

AI companies in Europe risk losing their world-leading status in the field of machine translation, industry figures have said, after the decision by one of the continent’s leading startups to partner with Amazon’s cloud computing division provoked alarm.

While businesses in the EU have generally lagged behind the US and China in AI adoption, a small group of European companies have cornered the global market for high-quality machine translations for professional use. The biggest success story is Cologne-headquartered DeepL, an online translator that regularly outperforms Google Translate in accuracy assessments. Used by governments, courts and half of the Fortune 500 list of highest-earning US companies, it was last year reported to have recorded revenues of $185.2m.


Shivon Zilis, mother of four of Elon Musk’s children, testifies in OpenAI trial

Shivon Zilis, a Neuralink executive and the mother of four of Elon Musk’s children, took the stand on Wednesday as one of the most highly anticipated witnesses in Musk’s case against OpenAI. The ChatGPT maker has argued that, while Zilis worked with OpenAI from 2016 to 2023, she was also involved in a secret relationship with Musk, acting as an informant for him.

Musk’s case against OpenAI alleges that the company’s CEO, Sam Altman, and president, Greg Brockman, co-founders of the company with Musk, broke a founding agreement when they restructured it from a non-profit to a for-profit enterprise. The Tesla CEO accuses Altman and Brockman of unjustly enriching themselves and wants both removed from their positions at the startup, one of the most valuable in the world. He is also seeking the undoing of the for-profit restructuring and $134bn in damages to be redistributed to OpenAI’s non-profit arm.


No flattery please, Claude: I’m British | Brief letters

The otherwise admirable Richard Dawkins should adjust the local settings of the chatbot or tell it to be less obsequious (Richard Dawkins concludes AI is conscious, even if it doesn’t know it, 6 May). Such bots are initially geared to American overenthusiasm and egregiously flattering reinforcement, but just tell them you want British attitude. They’re only simulating, you know.
Brian Reffin Smith
Berlin, Germany

With artificial intelligence bringing “large language models” into everyday use, the LLM after my name has acquired a new meaning. For 70 years I assumed that it referred to my Cambridge master of laws.


TikTok’s algorithm favored Republican content in 2024 US elections, study finds

A study published on Wednesday in the journal Nature finds that TikTok’s algorithm systematically prioritized pro-Republican content in three states in the run-up to the 2024 US elections.

Researchers created hundreds of dummy accounts and conditioned them to mimic real users’ behavior by watching a set of videos aligned with either the US Democratic or Republican parties. They then tracked the videos TikTok recommended on these accounts’ For You pages, TikTok’s main feed. “We found a consistent imbalance,” they wrote in Nature.

About 42% of US social media users say these platforms are important for getting involved with political and social issues, according to Pew Research, but it is often unclear how recommendation algorithms shape what appears in feeds.


‘Your craft is obsolete’: WiseTech staff in limbo as AI touted as better than humans

Staff at WiseTech have been waiting almost three months to be told if they are among the 2,000 people the logistics software company is to cut due to advances in AI, with workers criticising the wait as stressful and “ridiculous”. The comments come as its founder on Tuesday told investors an AI agent could learn a human’s job in just 15 minutes, according to the Australian Financial Review.

The Australian Stock Exchange-listed company announced in late February that it would lay off almost 30% of its workforce across 40 countries, with 2,000 of the 7,000 jobs set to go over the next 18 months. Some areas would be hit harder than others, with product and development and customer service teams expected to be reduced by up to 50%, the chief executive, Zubin Appoo, told an investor briefing in February.

“The era of manually writing code as the core act of engineering is over,” Appoo said.


New Mexico proposes $3.7bn fine for Meta and sweeping changes to its social platforms

Meta has returned to court in the US this week for the second phase of a lawsuit brought by Raúl Torrez, New Mexico’s attorney general, following a March verdict that found the company liable for child safety failures and imposed a $375m fine. On Monday, the state petitioned for a legal sanction against the company, a monetary penalty 10 times the original amount, and a sweeping overhaul of Meta’s child safety protocols.

In the second part of the landmark case, known as the remedies phase, the state is asking for Meta to be declared a public nuisance and for the judge to order the company to pay $3.7bn under an abatement plan. The money would fund programs for law enforcement, mental health services and educators.