Amazon boss tells staff AI means their jobs are at risk in coming years

The boss of Amazon has told white-collar staff at the e-commerce company that their jobs could be taken by artificial intelligence in the next few years.

Andrew Jassy told employees that AI agents – tools that carry out tasks autonomously – and generative AI systems such as chatbots would require fewer employees in certain areas.

“As we roll out more generative AI and agents, it should change the way our work is done,” he said in a memo to staff. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs.

“It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce.”

Amazon employs 1.5 million people worldwide, with about 350,000 working in corporate jobs such as software engineering and marketing.

At the weekend the chief executive of the UK telecoms company BT said advances in AI could lead to deeper job cuts at the company, while Dario Amodei, the chief executive of the AI company Anthropic, said last month that AI could wipe out half of all entry-level office jobs.

Jassy said in the near future there would be billions of AI agents working across companies and in people’s daily lives. “There will be billions of these agents, across every company and in every imaginable field.

“There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast,” he said.

Jassy ended the memo by urging employees to be “curious about AI”, to “educate yourself” in the technology and to take training courses.

“Those who embrace this change, become conversant in AI, help us build and improve our AI capabilities internally and deliver for customers, will be well positioned to have high impact and help us reinvent the company,” he said.

The Organisation for Economic Co-operation and Development – an influential international policy organisation – has estimated the technology could trigger job losses in skilled white-collar professions such as law, medicine and finance.

The International Monetary Fund has calculated that 60% of jobs in advanced economies such as the US and UK are exposed to AI, and that half of these jobs may be negatively affected.

However, the Tony Blair Institute, which has called for widespread adoption of AI in the public and private sectors, has said that while the technology could displace up to 3m private sector jobs in the UK, the net loss will be mitigated by the new roles the technology creates.
Up to 70% of streams of AI-generated music on Deezer are fraudulent, says report

Up to seven out of 10 streams of artificial intelligence-generated music on Deezer are fraudulent, according to the French streaming platform.

The company said AI-made music accounts for just 0.5% of streams on the platform, but its analysis shows that fraudsters are behind up to 70% of those streams.

AI-generated music is a growing problem on streaming platforms. Fraudsters typically generate revenue on platforms such as Deezer by using bots to “listen” to AI-generated songs – and take the subsequent royalty payments, which become sizeable once spread across multiple tracks.

Elon Musk’s X sues New York over hate speech and disinformation law

Elon Musk’s X Corp filed a lawsuit on Tuesday against the state of New York, arguing that a recently passed law compelling large social media companies to divulge how they address hate speech is unconstitutional.

The complaint alleges that bill S895B, known as the Stop Hiding Hate Act, violates free speech rights under the first amendment. The act, which the governor, Kathy Hochul, signed into law last December, requires companies to publish their terms of service and submit reports detailing the steps they take to moderate extremism, foreign influence, disinformation, hate speech and other forms of harmful content.

Musk’s lawyers argue that the law, which goes into effect this week, would require X to submit “highly sensitive information” and compel non-commercial speech, which is subject to greater first amendment protections. The complaint also opposes the possible penalty of $15,000 per violation per day for failing to comply with the law.

How AI pales in the face of human intelligence and ingenuity | Letters

Gary Marcus is right to point out – as many of us have for years – that simply scaling up compute is not going to solve the problems of generative artificial intelligence (When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype, 10 June). But he doesn’t address the real reason why a child of seven can solve the Tower of Hanoi puzzle that broke the computers: we’re embodied animals and we live in the world.

All living things are born to explore, and we do so with all our senses, from birth. That gives us a model of the world and everything in it. We can infer general truths from a few instances, which no computer can do.

Universities face a reckoning on ChatGPT cheats | Letters

I commend your reporting of the AI scandal in UK universities (Revealed: Thousands of UK university students caught cheating using AI, 15 June), but “tip of the iceberg” is an understatement. While freedom of information requests tell us about the universities that are catching AI cheating, the real problem is the universities that are not.

In 2023, Turnitin, a widely used assessment platform, released an AI indicator, reporting high reliability from huge-sample tests. However, many universities opted out of this indicator without testing it. Noise about high “false positives” circulated, but independent research has debunked these concerns (Weber-Wulff et al, 2023; Walters, 2023; Perkins et al, 2024).

Bar Council is wise to the risk of AI misuse | Letters

In your report (High court tells UK lawyers to stop misuse of AI after fake case-law citations, 6 June), you quote Dame Victoria Sharp’s call that we, the Bar Council, and our solicitor colleagues at the Law Society address this matter urgently. We couldn’t agree more.

This high court judgment emphasises the dangers of lawyers misusing artificial intelligence, particularly large language models, and the serious implications for the administration of justice and public confidence in the justice system.

The public is entitled to expect from legal professionals the highest standards of integrity and competence in their understanding and use of new technologies, as in all other respects.

The Bar Council has already issued guidance, quoted by the court, on the opportunities and risks surrounding the use of generative AI, and is in the process of setting up a joint working group with the Bar Standards Board to identify how best we can support barristers to uphold those standards through appropriate further training and supervision.

Watch out, hallucinating Humphrey’s about in Whitehall | Brief letters

I doubt that government officials consulted their AI tool, Humphrey, on what it should be called (UK government rollout of Humphrey AI tool raises fears about reliance on big tech, 15 June). It could have advised that in the 1970s the name was used for a milk marketing campaign: “Watch out, there’s a Humphrey about.” That line will now take on a whole new meaning. Having spent the last few weeks voting in the Lords trying, in vain, to secure protections for the creative industries from AI abuse, I fear that meaning may be prophetic. On a personal level, my husband is angry that his name is being stolen again.