ChatGPT may be polite, but it’s not cooperating with you

about 15 hours ago

After publishing my third book in early April, I kept encountering headlines that made me feel like the protagonist of some Black Mirror episode. “Vauhini Vara consulted ChatGPT to help craft her new book ‘Searches,’” one of them read. “To tell her own story, this acclaimed novelist turned to ChatGPT,” said another. “Vauhini Vara examines selfhood with assistance from ChatGPT,” went a third. The publications describing Searches this way were reputable and fact-based.

But their descriptions of my book – and of ChatGPT’s role in it – didn’t match my own reading. It was true that I had put my ChatGPT conversations in the book, but my goal had been critique, not collaboration. In interviews and public events, I had repeatedly cautioned against using large language models such as the ones behind ChatGPT for help with self-expression. Had these headline writers misunderstood what I’d written? Had I? In the book, I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products.

It’s a dynamic that makes us complicit in big tech’s accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues. The book opens with epigraphs from Audre Lorde and Ngũgĩ wa Thiong’o evoking the political power of language, followed by the beginning of a conversation in which I ask ChatGPT to respond to my writing. The juxtaposition is deliberate: I planned to get its feedback on a series of chapters I’d written to see how the exercise would reveal the politics of both my language use and ChatGPT’s. My tone was polite, even timid: “I’m nervous,” I claimed.

OpenAI, the company behind ChatGPT, tells us its product is built to be good at following instructions, and some research suggests that ChatGPT is most obedient when we act nice to it. I couched my own requests in good manners. When it complimented me, I sweetly thanked it; when I pointed out its factual errors, I kept any judgment out of my tone. ChatGPT was likewise polite by design. People often describe chatbots’ textual output as “bland” or “generic” – the linguistic equivalent of a beige office building.

OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe: “ChatGPT’s default personality deeply affects the way you experience and trust it,” OpenAI recently explained in a blogpost about the rollback of an update that had made ChatGPT sound creepily sycophantic. Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system.

In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread. In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement.

Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech – including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I’m not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn’t attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data – though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.”

Still, by the end of the dialogue, ChatGPT was proposing an ending to my book in which Altman tells me: “AI can give us tools to explore our humanity in ways we never imagined. It’s up to us to use them wisely.” Altman never said this to me, though it tracks with a common talking point emphasizing our responsibilities over AI products’ shortcomings. I felt my point had been made: ChatGPT’s epilogue was both false and biased. I gracefully exited the chat.

I had – I thought – won. Then came the headlines (and, in some cases, articles or reviews referring to my use of ChatGPT as an aid in self-expression). People were also asking about my so-called collaboration with ChatGPT in interviews and at public appearances. Each time, I rejected the premise, referring to the Cambridge Dictionary definition of a collaboration: “the situation of two or more people working together to create or achieve the same thing.” No matter how human-like its rhetoric seemed, ChatGPT was not a person – it was incapable of either working with me or sharing my goals.

OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people using products such as ChatGPT even more than they already are – a goal that is easier to accomplish if people see those products as trustworthy collaborators. Last year, Altman envisioned AI behaving as a “super-competent colleague that knows absolutely everything about my whole life”.

In a Ted interview this April, he suggested this could even function at the societal level: “I think AI can help us be wiser and make better collective governance decisions than we could before.” By this month, he was testifying at a US Senate hearing about the hypothetical benefits of having “an agent in your pocket fully integrated with the United States government”. Reading the headlines that seemed to echo Altman, my first instinct was to blame the headline writers’ thirst for something sexy to tantalize readers (or, in any case, the algorithms that increasingly determine what readers see). My second instinct was to blame the companies behind the algorithms, including the AI companies whose chatbots are trained on published material. When I asked ChatGPT about well-known recent books that are “AI collaborations”, it named mine, citing a few of the reviews whose headlines had bothered me.

I went back to my book to see if maybe I’d inadvertently referred to collaboration myself. At first it seemed like I had. I found 30 instances of words such as “collaboration” and “collaborating”. Of those, though, 25 came from ChatGPT, in the interstitial dialogues, often describing the relationship between people and AI products. None of the other five were references to AI “collaboration” except when I was quoting someone else or being ironic: I asked, for example, about the fate ChatGPT expected for “writers who refuse to collaborate with AI”.

But did it matter that I mostly hadn’t been the one using the term? It occurred to me that those talking about my ChatGPT “collaboration” might have gotten the idea from my book even if I hadn’t put it there. What had made me so sure that the only effect of printing ChatGPT’s rhetoric would be to reveal its insidiousness? How hadn’t I imagined that at least some readers might be convinced by ChatGPT’s position? Maybe my book had been more of a collaboration than I had realized – not because an AI product had helped me express myself, but because I had helped the companies behind these products with their own goals. My book concerns how those in power exploit our language to their benefit – and our complicity in this. Now, it seemed, the public life of my book was itself caught up in this dynamic. It was a chilling experience, but I should have anticipated it: of course there was no reason my book should be exempt from an exploitation that has taken over the globe.

And yet, my book was also about the way in which we can – and do – use language to serve our own purposes, independent from, and indeed in opposition to, the goals of the powerful. While ChatGPT proposed that I close with a quote from Altman, I instead picked one from Ursula K Le Guin: “We live in capitalism. Its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art.

Very often in our art, the art of words.” I wondered aloud where we might go from here: how might we get our governments to meaningfully rein in big tech wealth and power? How might we fund and build technologies so that they serve our needs and desires without being bound up in exploitation? I’d imagined that my rhetorical power struggle against big tech had begun and ended within the pages of my book. It clearly hadn’t. If the headlines I read represented the actual end of the struggle, it would mean I had lost. And yet, I soon also started hearing from readers who said the book had made them feel complicit in big tech’s rise and moved to act in response to this feeling.

Several had canceled their Amazon Prime subscriptions; one stopped soliciting intimate personal advice from ChatGPT. The struggle is ongoing. Collaboration will be required – among human beings.
Technology

No smartphone means no cheap bus fares for teens | Brief letters

I am delighted about the campaign to reduce smartphone usage among under-14s (‘The crux of all evil’: what happened to the first city that tried to ban smartphones for under-14s?, 7 May) but in West Yorkshire, where I work, we have run up against structural issues that make this impossible. The cheapest young person’s bus fares are only available via an app, which requires a smartphone. You can buy a monthly bus pass on a smartcard, but only in person and at limited locations. If your child needs a smartphone to get the bus to school, any hopes of not buying them one fall at the first hurdle. Phil Sage, Skipton, North Yorkshire. Regarding children’s appetites increasing after watching junk food ads (11 May), I wonder if there is a similar effect when Saturday Guardian readers look at the Feast supplement

3 days ago

Australia has been hesitant – but could robots soon be delivering your pizza?

Robots zipping down footpaths may sound futuristic, but they are increasingly being put to work making deliveries around the world – though a legal minefield and a cautious approach to new tech mean they are largely absent in Australia. Retail and food businesses have been using robots for a variety of reasons, with hazard detection robots popping up in certain Woolworths stores and virtual waiters taking dishes from kitchens in understaffed restaurants to hungry diners in recent years. Overseas, in jurisdictions such as California, robots are far more visible in everyday life. Following on from the first wave of self-driving car trials in cities such as San Francisco, humans now also share footpaths with robots. Companies including Serve Robotics and Coco, whose machines have been likened to lockers on wheels, have partnered with Uber Eats and Doordash, which have armies of robots travelling along footpaths in Los Angeles delivering takeaway meals and groceries

3 days ago

AI firms warned to calculate threat of super intelligence or risk it escaping human control

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems. Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945, after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity. In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant” – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be “slightly less” than one in three million

4 days ago

Paul McCartney and Dua Lipa among artists urging Starmer to rethink AI copyright plans

Hundreds of leading figures and organisations in the UK’s creative industries, including Coldplay, Paul McCartney, Dua Lipa, Ian McKellen and the Royal Shakespeare Company, have urged the prime minister to protect artists’ copyright and not “give our work away” at the behest of big tech. In an open letter to Keir Starmer, a host of major artists claim creatives’ livelihoods are under threat as wrangling continues over a government plan to let artificial intelligence companies use copyright-protected work without permission. Describing copyright as the “lifeblood” of their professions, the letter warns Starmer that the proposed legal change will threaten Britain’s status as a leading creative power. “We will lose an immense growth opportunity if we give our work away at the behest of a handful of powerful overseas tech companies and with it our future income, the UK’s position as a creative powerhouse, and any hope that the technology of daily life will embody the values and laws of the United Kingdom,” the letter says. The letter urges the government to accept an amendment to the data bill proposed by Beeban Kidron, the cross-bench peer and leading campaigner against the copyright proposals

4 days ago

‘Tone deaf’: US tech company responsible for global IT outage to cut jobs and use AI

The cybersecurity company that became a household name after causing a massive global IT outage last year has announced it will cut 5% of its workforce in part due to “AI efficiency”. In a note to staff earlier this week, released in stock market filings in the US, CrowdStrike’s chief executive, George Kurtz, announced that 500 positions, or 5% of its workforce, would be cut globally, citing AI efficiencies created in the business. “We’re operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs,” he said. Kurtz said AI “flattens our hiring curve, and helps us innovate from idea to product faster”, adding it “drives efficiencies across both the front and back office”. “AI is a force multiplier throughout the business,” he said.

5 days ago

Leave them hanging on the telephone | Brief letters

Regarding dealing with cold callers (Adrian Chiles, 7 May), it’s irritating I know, but if you don’t mind your phone being inaccessible for a few minutes, why not say: “Hang on, I’ll go and get him/her”, and then leave your phone until the caller rings off? At least you will have wasted some of their day. Robert Walker, Perrancoombe, Cornwall. Re fostering a love of reading in children (Letters, 6 May), one of my fondest memories of my teaching career was story time in the infant class in a local village school. Most of the children came quite a distance on buses. They adored Michael Rosen’s poetry. There were many afternoons when it was home time and they would shout: “Please read another Michael Rosen one, Mrs Mansfield, the driver won’t mind waiting

6 days ago
Food

How to make potato salad – recipe | Felicity Cloake's Masterclass

3 days ago

Tea-licious! 17 awesome ways to use earl grey, from ice-cream and cocktails to strudel and salad

3 days ago

Song He Lou, London W1: no neon, no bunting and not much jostling for tourist dollars – restaurant review | Grace Dent on restaurants

3 days ago

José Pizarro’s recipe for slow-roast pork belly with spring onion mojo verde

4 days ago

Helen Goh’s recipe for matcha madeleines | The sweet spot

5 days ago

Core principles: the return of ‘real’ cider

6 days ago