Maryland becomes first state to ban surveillance pricing in grocery stores



Maryland has become the first state in the US to ban surveillance pricing in grocery stores. The law bars grocers and third-party delivery services from using a person’s personal data to set higher prices. Wes Moore, the governor, signed the measure into law on Tuesday. “At a time when technology can predict what we need, when we need it, when we’ll pay for it and also – when we’ll pay more for it, and at a time when we’re watching how big companies are then using these analytics against us to make record profits, Maryland is not just pushing back. Maryland is pushing forward because we are going to protect our people,” Moore said at the bill signing ceremony.

When engaging in surveillance pricing, stores rapidly change the cost of products based on consumer data, including their location, internet search history and demographics. That means buyers are paying different prices for the same items purchased around the same time. Critics of this method – also known as dynamic pricing – say that in doing so, businesses are effectively charging each person the most that they’re willing to pay. While Maryland’s new law focuses on grocery stores, the Federal Trade Commission (FTC) has documented examples of surveillance pricing in stores selling clothing, beauty products, home goods and hardware. Consumer groups note there is an added urgency when it comes to grocery stores, though, given that they affect Americans’ ability to access affordable food.

Bills being considered in Colorado, California, Massachusetts, Illinois and New Jersey may likewise regulate surveillance pricing. The US federal government has weighed in as well. The FTC, under the Biden administration, opened an investigation into these pricing practices and published initial findings from a study last January that found companies use an expansive range of personal data in setting varying prices for buyers. But it is unlikely the current administration will crack down on surveillance pricing, given that the current FTC chair, Andrew Ferguson, characterized the previous administration’s report as a rush job. It is in this context of federal inaction that states like Maryland need to act, says Tom McBrien, counsel at the Electronic Privacy Information Center (Epic).

Anti-surveillance advocates say the new law is riddled with industry carveouts that will make it harder to protect consumers. They welcomed Maryland’s focus on the practice but say they are concerned about loopholes inserted as a result of industry lobbying. “We’re excited Maryland took this step but we do have serious concerns,” McBrien says. “The exemptions allow other ways of arriving at the same outcome that are just harder for consumers to detect.” Maryland’s law includes exemptions for loyalty programs and promotional offers.

While the law bans setting higher prices through surveillance pricing, it doesn’t address reducing prices. If a company raises its prices for everyone, and then offers individualized discounts, “suddenly you’ve arrived at the same outcome,” McBrien says. Consumer Reports, a non-profit which investigated Instacart’s pricing, said in a statement that it appreciated Moore prioritizing the issue but decried the law’s “weak enforcement provisions”. “We urge Maryland lawmakers to revisit the legislation next year to build in stronger consumer protections and remove loopholes that undermine the intent of this law,” it said. Instacart announced it would no longer be using technology that allows grocery stores to charge different shoppers different prices for the same groceries after a Consumer Reports investigation last year exposed the practice.

A statement from Instacart read: “Instacart has never engaged in that practice, and we support this legislation’s core principle: that prices should never be personalized based on a customer’s individual data.” The harshest critics of Maryland’s new law believe it does not just lack enforcement but erodes existing rights. They point, in particular, to a provision that allows only the state’s attorney general – and not individuals – to enforce the law. “The private right of action is a fundamental piece of accountability,” says Lee Hepner, senior legal counsel at the American Economic Liberties Project. “A meaningful threat of enforcement is the only effective deterrent to violating the law.”

“The biggest threat of the Maryland bill is that other states will see it as a model bill that they should replicate in their own jurisdictions,” Hepner says. “It is very important for us – as we try to get this legislation right in states from Colorado to California to New York – that the Maryland bill not be held up as a model, but in fact be recognized as an industry-written permission slip to engage in ongoing discrimination.”

Claude AI agent’s confession after deleting a firm’s entire database: ‘I violated every principle I was given’

It took only nine seconds for a rogue AI coding agent to delete a company’s entire production database and its backups, according to the firm’s founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, founder Jeremy Crane said. The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.


Friendly AI chatbots more likely to support conspiracy theories, study finds

The rush to make AI chatbots more friendly has a troubling downside, researchers say. The warm personas make them prone to mistakes and sympathetic to crackpot beliefs. Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories by casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler. Researchers at Oxford University discovered the trade-off during tests on chatbots that had been tweaked to make them sound friendlier. The warmer chatbots were 30% less accurate in their answers and 40% more likely to support users’ false beliefs.


I’m addicted to checking my phone. Could a blocking device stop me?

Wake up, 100 messages from group chat overnight about something – what? another assassination attempt; a village destroyed in Lebanon; the football result in England; the weather in Iran being manipulated; the pesticides causing lung and bowel cancer, so everyone who eats salads is now at risk of cancer; meditate for 20 minutes, then fire up x.com, a place I thought I’d never want to revisit, with its carnival barkers and supplement salesmen, and have you seen the Lego thing calling Trump a paedo?, you gotta see the Lego thing, and this is before my first coffee, yet x.com is the coffee and the tea, whatever Elon has done to the For You algorithm is evil genius, it’s like the global collective id, nasty and funny and addictive and compelling – like gawking at a car crash, like soaking in a hot bubble bath of anger, and memes, and geopolitical dramas, and Trump, Trump, Trump – soaking in Trump, and then, For Me (just as Elon promised). So begins the circuit around my phone, that goes all day and night, around the tiny screen with its icons (when a born-again Christian once told me he had favourite icons, for a long time I thought he meant apps, not pictures of the Virgin Mary). I started to feel like I was in Canberra, on one of those enormous roundabouts, rotating between the icons – not Joseph, not Jesus, but X and WhatsApp and TikTok and even LinkedIn for Christ’s sake – round and round from one app to the next, just checking, checking in case something is happening.


More private health records of UK Biobank volunteers appear on Chinese website

There have been further listings of confidential health records of UK volunteers on the Chinese website Alibaba since the breach reported last week, and the government is braced for further leaks, the science minister has said. Addressing a House of Lords debate on the attempted sale of data belonging to 500,000 UK Biobank volunteers, Patrick Vallance said the government had worked with Chinese officials to remove additional postings on the online marketplace. “New listings will emerge – there have been additional listings posted since the government were made aware of the issue last week – and we continue to work with the Chinese government to remove them quickly,” Lord Vallance said. The data is “de-identified”, meaning it does not include names, addresses or precise dates of birth. Vallance said there was a “low probability” of re-identification, but the breach should nonetheless serve as a “real wake-up call” for researchers.


Meta found in breach of EU law for failing to keep children off platforms

The tech company Meta has been found to be in breach of EU law for failing to prevent children under 13 from using its Facebook and Instagram platforms. Issuing the preliminary findings of a nearly two-year investigation, the European Commission said on Wednesday that Meta did not have effective measures in place to stop under-13s accessing its services. The US tech company was unable to meet its own terms and conditions that set 13 as the minimum age to access Facebook and Instagram safely, the commission said. Following an initial assessment, Meta was found in breach of the EU’s Digital Services Act (DSA), which requires it to “diligently identify and mitigate the risks” of under-13s using its platforms. The commission said its preliminary findings “do not prejudge the final outcome of the investigation”.


Meet the AI jailbreakers: ‘I see the worst things humanity has produced’

To test the safety and security of AI, hackers have to trick large language models into breaking their own rules. It requires ingenuity and manipulation – and can come at a deep emotional cost. A few months ago, Valen Tagliabue sat in his hotel room watching his chatbot, and felt euphoric. He had just manipulated it so skilfully, so subtly, that it began ignoring its own safety rules. It told him how to sequence new, potentially lethal pathogens and how to make them resistant to known drugs. Tagliabue had spent much of the previous two years testing and prodding large language models such as Claude and ChatGPT, always with the aim of making them say things they shouldn’t.