In a bid to counter rising public scepticism towards artificial intelligence (AI), OpenAI has unveiled a new policy paper outlining a vision for a socially responsible approach to AI development. The initiative comes as the industry faces escalating criticism over AI's societal and economic consequences. The document, titled *Industrial Policy for the Intelligence Age*, reflects a strategic pivot by major players in the AI space, aiming to reshape the narrative while addressing growing public discontent.
OpenAI’s Strategic Shift
OpenAI’s recent announcement diverges from its typical focus on technological advancements; instead, it presents a framework for integrating AI into society in a manner that prioritises human welfare. The 13-page paper introduces concepts such as a four-day work week and a proposed public wealth fund designed to redistribute profits directly to citizens. By framing these ideas as a starting point for dialogue, OpenAI seeks to initiate a broader conversation about the societal impacts of AI.
CEO Sam Altman addressed the growing concerns regarding public perception at a recent BlackRock conference, noting, “AI is not very popular in the US right now.” He underscored the challenges posed by rising electricity prices and job displacements attributed to AI technologies, indicating that the industry is acutely aware of the potential backlash it faces.
The Role of Think Tanks and Lobbying
In parallel with its policy proposals, OpenAI has established a Washington D.C. office and acquired the tech-focused podcast TBPN, intending to foster discussions about AI among policymakers and non-profits. Simultaneously, rival company Anthropic has launched its own think tank, the Anthropic Institute, signalling a collective effort by AI firms to engage with the regulatory landscape proactively.
However, experts caution that these moves may serve dual purposes. While they advocate for regulatory oversight in public discourse, critics argue that AI companies are simultaneously working to weaken regulatory frameworks behind the scenes. Sarah Myers West, co-executive director of the AI Now Institute, remarked, “The OpenAI paper has a lot of the sounds of wanting more regulatory oversight… but they have lobbied very successfully for an administration that has taken a very aggressive deregulatory stance toward AI.”
OpenAI’s lobbying expenditures reached nearly £3 million in 2025, underscoring the financial commitment the firm is making to influence policy outcomes in its favour. The company has also actively supported legislation that shields AI firms from liability in cases of severe societal harm, complicating the discourse surrounding accountability.
Public Sentiment and the AI Dilemma
Despite these efforts, public sentiment towards AI remains deeply ambivalent. Recent surveys indicate a significant lack of trust, with only 26% of voters expressing a favourable opinion of AI technologies. The Pew Research Center found that a mere 16% of Americans believe AI could enhance creativity, while just 5% see it as a tool for fostering meaningful relationships.
This growing distrust may stem from several sources: fears of job displacement, a long-standing aversion to large technology firms, and alarming narratives about AI’s potential risks. As the midterm elections approach, political campaigns are increasingly focusing on the implications of AI, reflecting the urgency with which both the public and lawmakers view the technology’s impact on society.
Why it Matters
The AI sector is at a critical juncture, facing intense scrutiny and a pressing need to rebuild public trust. As OpenAI and its peers attempt to navigate this complex landscape, their initiatives may significantly influence the future of both regulation and public perception. The industry’s capacity to recast its narrative and engage constructively with policymakers will determine not only its reputation but also its role in shaping societal norms around technology. Balancing innovation with responsibility is essential as AI continues to permeate daily life, making it imperative for these firms to act with transparency and accountability.