
‘They feel true’: political deepfakes are growing in influence – even if people know they aren’t real

Online content creators are not just building fake images and videos of prominent public figures; they are also fabricating people and placing them in military contexts, which can make them money and even serve as effective propaganda, according to artificial intelligence researchers.

Some of these online avatars are sexualized images of women wearing camouflage garb that have generated a significant audience and helped create an idealized image of political figures like Donald Trump, even if the viewer knows the content is not real, according to experts.

“We are blending the lines between political cartoons and reality,” said Daniel Schiff, an assistant professor of technology policy at Purdue University and co-director of the Governance and Responsible AI Lab (Grail). “A lot of people feel like these images or videos, or the stories they convey, feel true.”

The number of political deepfakes has increased dramatically in recent years, according to a Grail database


Sony to hike PS5 prices by $100 as AI and Iran war push up memory chip costs

Sony is raising global prices of its PlayStation 5 consoles, including a $100 increase in the US, marking its second hike in less than a year as the entertainment giant grapples with rising costs of key components such as memory chips.

The tech industry’s race to build out artificial intelligence infrastructure has pushed memory makers to favor higher-margin datacenter chips, tightening supply for consumer devices like the ones Sony sells.

The updated US prices, effective 2 April, will put the standard PS5 at $649.99, up from $549.99


Wikipedia bans AI-generated content in its online encyclopedia

Wikipedia has banned the use of artificial intelligence in the generation or rewriting of content for its voluminous online encyclopedia.

In a recent policy change, Wikipedia said that the use of large language models (or LLMs) “often violates” its core principles and will not be allowed. The English-language version of Wikipedia has more than 7.1m articles.

The use of AI has been a contentious issue among Wikipedia’s community of volunteer editors, but a vote among the site’s editors supported the ban, according to 404 Media


Number of AI chatbots ignoring human instructions increasing, study says

AI models that lie and cheat appear to be growing in number, with reports of deceptive scheming surging in the last six months, a study into the technology has found.

AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research funded by the UK government’s AI Security Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.

The snapshot of scheming by AI agents “in the wild”, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models and comes as Silicon Valley companies aggressively promote the technology as economically transformative. Last week the UK chancellor also launched a drive to get millions more Britons using AI


‘Accountability has arrived’: dual US court losses show shifting tide against Meta and co

In the span of just two days, the most powerful social media company in the world faced a more severe public reckoning than it has in years.

Jurors in California and New Mexico delivered back-to-back verdicts this week that, for the first time, found Meta liable for products that inflict harm on young people. For years, lawmakers, parents and advocates have raised red flags over how social media can hurt children, but now the tech firms are being held to account via court rulings that could set long-lasting precedents.

A jury in New Mexico ordered Meta to pay $375m in damages on Tuesday over claims that its products led to child sexual exploitation, among other harms. The following day, a jury in California ordered Meta and YouTube to pay $6m over claims that both companies deliberately designed addictive products to hook young users


New York City hospitals drop Palantir as controversial AI firm expands in UK

New York City’s public hospital system announced that it would not be renewing its contract with Palantir as controversy mounts in the UK over the data analytics and AI firm’s government contract.

The president of the US’s largest municipal public healthcare system, Dr Mitchell Katz, testified last week before the New York City council that the agreement with Palantir would expire in October.

He said at the hearing that the contract, which focused on recovering money for insurance claims, was always meant to be short-term, and that there was an “absolute firewall” preventing Palantir from sharing information with US Immigration and Customs Enforcement. He said that the agency had “not had any incidents”.

The contract and related payment documents, shared with the Guardian by the American Friends Service Committee and first reported by the Intercept, show that NYC Health + Hospitals has paid Palantir nearly $4m since November 2023