Facial recognition technology needs stricter regulation | Letter

The Metropolitan police’s recognition of the value in “some sort of framework or statutory guidance” for live facial recognition is welcome (Live facial recognition cameras may become ‘commonplace’ as police use soars, 24 May). However, it is not just police use of this technology that needs a clear legal framework. Despite the scale and speed of its expansion, there is still no specific law providing a basis for live facial recognition or other emerging biometric technologies, whether these are used in the public or private sector. Biometric surveillance is expanding rapidly, not just in policing but across society: in train stations, schools and supermarkets. Newer biometric systems go further, claiming to infer people’s emotional states, raising serious concerns about their accuracy, ethics and legality.

In 2020, the court of appeal – in the UK’s only judgment on live facial recognition to date – found a deployment by South Wales police was unlawful. It identified deficiencies in the legal framework and set out minimum standards for lawful use. Since then, a patchwork of voluntary guidance has emerged. New research from the Ada Lovelace Institute has found this patchwork is inadequate in practice, creating legal uncertainty, putting fundamental rights at risk and undermining public trust. Crucially, we find that non-police uses, such as those in the private sector or involving inference, are subject to even fewer safeguards and so stand on far shakier legal ground.

Governance is simply not keeping up with technological adoption and advancement. Policymakers must act. We urgently need new legislation to establish a comprehensive framework covering all forms of biometric surveillance and inference – not just police use – and an independent regulator to oversee and enforce it.
Michael Birtwistle
Associate director, the Ada Lovelace Institute

Bar Council is wise to the risk of AI misuse | Letters

In your report (High court tells UK lawyers to stop misuse of AI after fake case-law citations, 6 June), you quote Dame Victoria Sharp’s call that we, the Bar Council, and our solicitor colleagues at the Law Society address this matter urgently. We couldn’t agree more. This high court judgment emphasises the dangers of the misuse of artificial intelligence by lawyers, particularly large language models, and its serious implications for the administration of justice and public confidence in the justice system. The public is entitled to expect from legal professionals the highest standards of integrity and competence in the appropriate understanding and use of new technologies, as well as in all other respects. The Bar Council has already issued guidance on the opportunities and risks surrounding the use of generative AI, which is quoted by the court, and is in the process of setting up a joint working group with the Bar Standards Board to identify how best we can support barristers to uphold those standards with appropriate further training and supervision.


Watch out, hallucinating Humphrey’s about in Whitehall | Brief letters

I doubt that government officials consulted their AI tool, Humphrey, on what it should be called (UK government rollout of Humphrey AI tool raises fears about reliance on big tech, 15 June). It could have advised that in the 1970s the name was used for a milk marketing campaign: “Watch out, there’s a Humphrey about.” That line will now have a whole new meaning. Having spent the last few weeks voting in the Lords to try, in vain, to achieve protections for the creative industries from AI abuse, I fear that meaning might be prophetic. On a personal level, my husband is angry that his name is being stolen again.


Tell us about your best Reddit moment

Reddit celebrates its 20th birthday at the end of the month. With 17 million daily viewers, the online community forum has brought various issues to people’s attention, from the timely and topical to the bizarre. We’d like to hear about your best Reddit moment. Perhaps you’ve been able to share a personal experience and felt you found your tribe discussing it with others. Or maybe you’ve had a complex issue explained to you like a five-year-old, or just found yourself laughing along with a viral moment with millions of others.


DNA testing firm 23andMe fined £2.3m by UK regulator for 2023 data hack

The genetic testing company 23andMe has been fined more than £2.3m for failing to protect the personal information of more than 150,000 UK residents after a large-scale cyberattack in 2023. Family trees, health reports, names and postcodes were among the sensitive data hacked from the California-based company. It only confirmed the breach months after the infiltration started, and once an employee saw the stolen data advertised for sale on the social media platform Reddit, according to the UK Information Commissioner’s Office – which levied the fine. The information commissioner, John Edwards, called the months-long incident across the summer of 2023 a “profoundly damaging breach”.


Meta sacrifices a heap of money at the altar of AI

Mark Zuckerberg announced in April that the company would make huge capital expenditures in the coming year to keep up in the race to develop cutting-edge artificial intelligence. He made good on that promise last week with a $15bn “AI superintelligence” team that will reportedly feature nine-figure salaries, and a 49% investment in Scale AI. Meta also hired Scale’s 28-year-old founder, Alexandr Wang, a former roommate of OpenAI’s Sam Altman. Before Meta’s investment, Scale counted most of the major players in AI among its clients, and some of them were less than thrilled with the development. Bloomberg puts it succinctly: “Scale AI’s Wang Brings to Meta Knowledge of What Everyone Else Is Doing.”