Mark Zuckerberg says criminal behavior on Facebook inevitable

Harms to children, such as sexual exploitation and detriments to mental health, are inevitable on Meta’s platforms, the company’s CEO, Mark Zuckerberg, and Instagram head Adam Mosseri said in taped depositions played at a trial in New Mexico on Tuesday and Wednesday. “I just think if you’re serving billions of people, the unfortunate reality is that some very small percent of them are going to be criminals, and we should work as hard as we can to stop that activity from happening,” said Zuckerberg. “I don’t think that the standard for our platforms would be that you should assume that it will ever be perfect.” Meta’s apps, which include Facebook, Instagram and WhatsApp, are among the most popular in the world, each with 3 billion monthly active users. The trial has pitted the social media giant against New Mexico’s attorney general, who alleges that Meta’s platforms put profits and user engagement over child safety.

Raul Torrez has accused the company of knowingly enabling predators to use Facebook and Instagram to exploit children. Meta disputes the allegations, citing changes it has introduced, including teen accounts with default protections that debuted in 2024. The trial, which began in early February, is expected to last about seven weeks. “We have strict, longstanding rules against child exploitation and have invested billions to fight it, both through proactive detection technology and safety features designed to prevent harm,” said a Meta spokesperson. “We provide industry-leading transparency, regularly sharing data on how much violating content we remove and how much we miss.

“No system can ever be perfect, and we’ve never claimed to be.” Jurors were shown recorded depositions of Zuckerberg and Mosseri filmed between March and July last year. The jury also heard that family members of Meta employees had experienced sexual solicitation on Instagram. Prosecutors also presented evidence that the company estimated in 2020 that 500,000 children were receiving sexually inappropriate communications on Instagram each day, including grooming, in which adults attempt to build relationships with minors for sexual purposes. In a statement, a Meta spokesperson said the technology the company used at the time was overly broad and cautious, and as such, interactions that were not inappropriate were included in the count. The company identified the “People you may know” algorithm – which recommends accounts for users to connect with – as a main driver of these interactions, with the tool used to discover victims in 79% of identified cases in 2018.

At the time, about 30% of adults whose accounts were disabled for targeting children had returned to the platform and resumed that behavior, the court heard. Jurors heard that Zuckerberg authorized end-to-end encryption for Facebook Messenger in 2023 despite warnings from the child safety groups Thorn and the National Center for Missing and Exploited Children (NCMEC) that the move could pose risks to children. In a taped deposition played at trial, he said the privacy that encryption affords users was a more pressing issue. Encryption prevents anyone other than the sender and intended recipient from viewing messages by converting text and images into unreadable ciphertext that is decoded on receipt. The content is not stored on Meta’s servers.

A company spokesperson added that Meta can still review and take action on encrypted messages if they are reported by a user. Child safety groups and law enforcement have warned that encrypting Messenger enables predators to share child sexual abuse imagery without detection. Earlier in the trial, a law enforcement officer testified that reports of child sexual abuse material from the platform decreased following encryption. “I think that end-to-end encryption messaging services are what people want,” said Zuckerberg in a taped deposition filmed in March 2025. “They really care about privacy.”

Mosseri said in his deposition that the company has “developed technology that allows us to find accounts that have shown potentially suspicious behavior, for example, an adult account that might have been blocked by another young person, and to stop those accounts from interacting with young people’s accounts”. “We use a range of signals to identify adults who have shown potentially suspicious behavior and avoid recommending these accounts to teens through Facebook’s ‘People you may know’ and Instagram’s ‘Accounts you should follow’ features,” said a Meta spokesperson. “In 2025, we used these signals to identify more than 265 million Facebook accounts and more than 135 million Instagram accounts that had shown potentially suspicious behavior, and proactively prevent them from finding, following or interacting with teens.” An internal presentation discussed at trial stated that Instagram’s wellbeing safety team did not always prevent teen accounts from being recommended to potential violators, and vice versa. A December 2022 internal audit showed Meta continued to recommend minor accounts to some adults.

In September 2024, Meta introduced Teen Accounts, which automatically place users under 18 into stricter settings on Instagram, Facebook and Messenger, including making profiles private by default and limiting who can message them. Researchers have identified gaps in those protections, including exposure to harmful videos through hashtags or recommendations and instances in which safety features did not work as intended. “I certainly want to address any problem that’s even remotely as severe as something like sexual solicitation … Any negative action that happens offline, also to a certain degree, happens online,” said Mosseri. “We’re connecting billions of people. That is going to mean good and bad things happen.”


Sam Altman admits OpenAI can’t control Pentagon’s use of AI

OpenAI’s CEO, Sam Altman, told employees on Tuesday that his company does not control how the Pentagon uses its artificial intelligence products in military operations. Altman’s comments on OpenAI’s lack of input come amid increased scrutiny of how the military uses AI in war and ethical concerns from AI workers over how their technology will be deployed. “You do not get to make operational decisions,” Altman told employees, according to reports by Bloomberg and CNBC. “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that,” Altman reportedly said.


Elon Musk takes witness stand in trial over Twitter takeover

Elon Musk took the stand on Wednesday in a trial brought by Twitter investors, who allege the billionaire committed securities fraud as he was buying the social media company in 2022. The class-action lawsuit alleges Musk agreed to buy Twitter but then waffled for months, attacking the company with the goal of driving down the stock price to get a better bargain. After contentious legal wrangling, Musk did eventually buy Twitter for $54.20 a share, his original offer, totalling around $44bn. Musk testified on Wednesday that he didn’t realize his attacks on the company, mostly made via tweets on Twitter itself, would lower the company’s stock price or hurt its investors.


Joy of teaching English in the age of AI | Letter

Your long read (Teacher v chatbot: my journey into the classroom in the age of AI, 3 March) provides human insight into both the craft and purpose of English teaching in the era of developing AI expertise in language. There is no doubt that if the article were fed into AI models often enough, the teacher’s words and techniques could, at some level, be replicated by AI online teachers. However, reading and writing, especially that which explores the writer’s thoughts and feelings, are surely uniquely human activities. As the writer comes to recognise, exploring human experiences through the written word is a highly valuable communal experience. Reading literature aloud in the classroom is the gateway to discussing personal responses to the author’s words.


Union tries to seize control of works council at Tesla’s German factory

Europe’s largest trade union is trying to gain control of the works council at Elon Musk’s Tesla gigafactory near Berlin, in an industrial relations showdown marked by lawsuits and mutual accusations of slander. The works council, an elected body of employees that negotiates everything from working hours to pay deals with a company’s management, is considered an entrenched aspect of the German corporate world, particularly in the car industry. But it was a bone of contention at the Tesla plant in Grünheide, about 20 miles (30km) south-east of Berlin, even before the gates opened almost four years ago. There have been regular clashes at the plant – which employs about 10,000 workers and is the US electric carmaker’s only production site in Europe – between the turbo-capitalist approach of Tesla’s management and Germany’s tradition of a social market economy, which relies on worker representation and collective bargaining. Voting in elections to the works council, which is currently controlled by non-trade union members, began on Monday and will close on Wednesday.


Europe’s next-generation fighter jet project may collapse if row continues, says warplane maker

France and Germany’s next-generation fighter jet project could soon be “dead”, one of the two companies tasked with delivering it has warned, amid a worsening corporate rift over who gets to build the aircraft. Dassault Aviation, France’s leading warplane maker, said Airbus’s defence arm – which represents Germany and Spain – needed to cooperate on the €100bn programme, otherwise it would collapse. “Airbus doesn’t want to work with Dassault, full stop. I take note. I never said I didn’t want to work with Airbus or with the Germans,” said Éric Trappier, Dassault’s chief executive, speaking via an interpreter while presenting the company’s financial results on Wednesday.


Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself

Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats with the capability to detect people’s emotions and respond in a more human-like way. “Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real