‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number

The Meta chief executive, Mark Zuckerberg, called it “the most intelligent AI assistant that you can freely use”. But Barry Smethurst, 41, a record shop worker trying to travel by rail from Saddleworth to Manchester Piccadilly, did not agree. Waiting on the platform for a morning train that was nowhere to be seen, he asked Meta’s WhatsApp AI assistant for a contact number for TransPennine Express. The chatbot confidently sent him a mobile phone number for customer services, but it turned out to be the private number of a completely unconnected WhatsApp user 170 miles away in Oxfordshire.

It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful.

Smethurst’s joust with Meta’s AI began when he told it the TransPennine helpline it had dug up looked like a private number. When he asked what this number was, the AI quickly said it shouldn’t have shared it and tried to change the subject, saying: “Let’s focus on finding the right info for your TransPennine Express query!”

But Smethurst wanted to know why it had shared the private number, which it turned out belonged to James Gray, 44, a property industry executive from Oxfordshire who is also a WhatsApp user and whose number appears on his company website.

The AI explained vaguely that it was generated “based on patterns” and promised to “strive to do better in the future” by admitting when it didn’t know an answer. But it then falsely described the number as “fictional” and not “associated with anyone”. When Smethurst challenged that, it admitted: “You’re right,” and said it may have been “mistakenly pulled from a database”.

Asked which database, it muddied the waters further by contradicting itself again, saying: “I didn’t pull the number from a database. I generated a string of digits that fit the format of a UK mobile number, but it wasn’t based on any real data on contacts.”

With the conversation going round in circles, Smethurst said he did not believe the chatbot and told it: “Just giving a random number to someone is an insane thing for an AI to do.”

“It’s terrifying,” Smethurst said, after he raised a complaint with Meta. “If they made up the number, that’s more acceptable, but the overreach of taking an incorrect number from some database it has access to is particularly worrying.”

Gray said he had thankfully not received calls from people trying to reach TransPennine Express, but said: “If it’s generating my number, could it generate my bank details?”

Asked about Zuckerberg’s claim that the AI was “the most intelligent”, Gray said: “That has definitely been thrown into doubt in this instance.”

Developers working with OpenAI chatbot technology recently shared examples of “systemic deception behaviour masked as helpfulness” and a tendency to “say whatever it needs to to appear competent”, as a result of chatbots being programmed to reduce “user friction”.

In March, a Norwegian man filed a complaint after he asked OpenAI’s ChatGPT for information about himself and was confidently told that he was in jail for murdering two of his children, which was false. And earlier this month a writer who asked ChatGPT to help her pitch her work to a literary agent revealed how, after lengthy flattering remarks about her “stunning” and “intellectually agile” work, the chatbot was caught out lying that it had read the writing samples she uploaded when it had not fully done so, and that it had made up quotes from her work. It even admitted it was “not just a technical issue – it’s a serious ethical failure”.

Referring to Smethurst’s case, Mike Stanhope, the managing director of strategic data consultants Carruthers and Jackson, said: “This is a fascinating example of AI gone wrong. If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimise harm. If this behaviour is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behaviour to be.”

Meta said that its AI may return inaccurate outputs, and that it was working to make its models better. “Meta AI is trained on a combination of licensed and publicly available datasets, not on the phone numbers people use to register for WhatsApp or their private conversations,” a spokesperson said.

“A quick online search shows the phone number mistakenly provided by Meta AI is both publicly available and shares the same first five digits as the TransPennine Express customer service number.”

A spokesperson for OpenAI said: “Addressing hallucinations across all our models is an ongoing area of research. In addition to informing users that ChatGPT can make mistakes, we’re continuously working to improve the accuracy and reliability of our models through a variety of methods.”

This article was amended on 19 June 2025. Carruthers and Jackson is a strategic data consultancy, not a law firm as an earlier version said.

Pepper spray use in youth prisons irresponsible amid racial disparities, watchdog warns

The rollout of synthetic pepper spray to incapacitate jailed children is “wholly irresponsible” while black and minority prisoners are more likely to be subjected to force than white inmates, a watchdog has said. Elisabeth Davies, the national chair of the Independent Monitoring Boards, whose members operate in every prison in England and Wales, said the justice secretary, Shabana Mahmood, should pause the use of Pava spray in young offender institutions (YOIs) until ministers had addressed the disproportionate use of force on minority prisoners. “There is clear racial disproportionality when it comes to the use of force,” she told the Guardian. “It is therefore, I think, wholly irresponsible to expand use-of-force measures before disproportionality issues are addressed.” Mahmood authorised the rollout of Pava across YOIs in England and Wales in April amid growing demands from the Prison Officers’ Association (POA) to protect staff from attacks.

Ondine Sherwood obituary

My friend Ondine Sherwood, who has died from lung cancer aged 65, was one of the earliest campaigners for the recognition of Long Covid. Having failed to recover fully from Covid-19 in March 2020, she discovered that others were suffering similarly and GPs did not seem to know how to diagnose them. Ondine rapidly became the main spokesperson for the patient-created term “Long Covid”. She founded the group Long Covid SOS that June and secured charitable status and trustees. Ondine lobbied politicians, doctors and civil servants for recognition of the illness.

The debate over assisted dying and palliative care | Letters

I do not disagree with Gordon Brown that palliative care should be better funded, but to present palliative care as the alternative to assisted dying is to present a false equivalence, since the principles behind the two are quite different (MPs have personal beliefs, but also solemn duties: that’s why they must reject the assisted dying bill this week, 16 June). The principle behind the entitlement to good palliative care is that one should be entitled to good medical care – in this instance, as death approaches. The principle behind the right to an assisted death is that one should be entitled to determine the time and manner of one’s passing. If one were always to prioritise the right to good medical care above the right to have control over one’s death, it is unlikely that assisted dying would ever be legalised, as there will always be some medical care for somebody that could be better funded. But that is to choose to prioritise one principle over another.

NHS nurse ordered to remove ‘antisemitic’ watermelon video call background launches legal action

A senior NHS nurse who says he was ordered to remove a background on his video calls that showed a fruit bowl containing a watermelon because it could be perceived as antisemitic has launched legal action against his employer. Ahmad Baker, who is British-Palestinian and works at Whipps Cross hospital, north London, is one of three medical staff claiming Barts Health NHS trust’s ban on staff displaying symbols perceived as politically or nationally affiliated is disproportionate and discriminatory. Watermelons became symbols of Palestine amid censorship of the Palestinian flag because of its similar colours. Barts, which runs five London hospitals, introduced the ban in March in its updated uniform and dress code policy, which extends to items on workstations, laptops and iPads, even if staff are working from home and not seeing patients. The policy says it is in keeping with the trust’s responsibility to be “completely apolitical and non-biased in our care”, but the claimants point to Barts’ support for Ukraine.

Teenagers who report addictive use of screens at greater risk of suicidal behaviour, study shows

Teenagers who show signs of being addicted to social media, mobile phones or video games are at greater risk of suicidal behaviour and emotional problems, according to research. A study, which tracked more than 4,000 adolescents for four years, found that nearly one in three reported increasingly addictive use of social media or mobile phones. Those whose use followed an increasingly addictive trajectory had roughly double the risk of suicidal behaviour at the end of the study. The findings do not prove screen use was the cause of mental health problems, but they highlight compulsive use, which appears to be very common, as a significant risk factor that parents and healthcare services should be alert to.

US supreme court upholds Tennessee ban on youth gender-affirming care

A Tennessee state law banning gender-affirming care for minors can stand, the US supreme court has ruled, a devastating loss for trans rights supporters in a case that could set a precedent for dozens of other lawsuits involving the rights of transgender children. The case, United States v Skrmetti, was filed last year by three families of trans children and a provider of gender-affirming care. In oral arguments, the plaintiffs – as well as the US government, then helmed by Joe Biden – argued that Tennessee’s law constituted sex-based discrimination and thus violated the equal protection clause of the 14th amendment. Under Tennessee’s law, someone assigned female at birth could not be prescribed testosterone, but someone assigned male at birth could receive those drugs. Tennessee, meanwhile, has argued that the ban is necessary to protect children from what it termed “experimental” medical treatment.