AI Industry Faces Growing Public Skepticism Amid New Lobbying Strategies

Sarah Jenkins, Wall Street Reporter
6 Min Read

In a bid to address rising public unease regarding artificial intelligence, OpenAI has unveiled a new policy paper aimed at redefining the social contract in the age of AI. This initiative comes as recent surveys indicate increasing disapproval of AI technologies among the public. The 13-page document, titled *Industrial Policy for the Intelligence Age*, signals a concerted effort by leading AI firms to reshape their narrative amidst intensifying scrutiny and calls for regulation.

OpenAI’s New Direction

OpenAI’s recent announcement diverges from its usual focus on technological advancements, opting instead to propose a framework for societal engagement with AI. The company, which recently acquired the tech-oriented podcast network TBPN and plans to establish a new office in Washington D.C., aims to foster discussions between policymakers and non-profit organisations about the implications of AI technology. The proposed “OpenAI workshop” will serve as a platform for these critical conversations.

The policy paper outlines a series of ambitious proposals, including the introduction of a four-day workweek and the establishment of a public wealth fund designed to distribute AI-generated profits back to citizens. These suggestions reflect a shift in tone, moving away from a purely technological focus to addressing broader social impacts and the need for protective measures in the face of AI advancements.

Industry Response to Public Concerns

The broader AI sector is making similar moves. OpenAI’s rival Anthropic has established a think tank, the Anthropic Institute, to explore the societal disruptions posed by AI technologies. The strategic pivot comes in response to growing apprehension about AI’s implications, with industry leaders acknowledging that public sentiment is not in their favour. At a recent conference hosted by investment firm BlackRock, OpenAI CEO Sam Altman remarked, “AI is not very popular in the US right now,” pointing to rising electricity costs linked to data centres and the perception that AI is to blame for widespread layoffs.

Despite these efforts, critics argue that while the rhetoric suggests a desire for regulatory frameworks, the underlying intent may be to diminish independent regulatory efforts. Sarah Myers West, co-executive director of the AI Now Institute, commented on the paper’s duality, suggesting it aims to position AI firms as advocates for regulation while simultaneously lobbying against it.

The Lobbying Landscape

OpenAI’s lobbying expenditures have surged, with the company reportedly spending nearly £3 million in 2025 alone. This financial push is complemented by the formation of a Super PAC co-founded by OpenAI’s president, Greg Brockman, which raised over £125 million last year. This political action committee has already targeted candidates supportive of stricter AI regulations, demonstrating a proactive approach to shaping legislative outcomes.

The AI industry’s lobbying efforts are not isolated; Anthropic has also invested heavily in influencing regulatory discussions. As the political landscape shifts, AI firms are keenly aware of the need to navigate potential state-level regulations that could impose constraints on their operations.

Growing Distrust and Image Challenges

The AI industry is grappling with a significant image problem, particularly in the United States. A Pew Research Center survey from September indicated that only 16% of Americans believe AI will enhance creativity, while a mere 5% think it will improve interpersonal relationships. Furthermore, an NBC News poll revealed that only 26% of voters hold a favourable view of AI, with the technology’s overall perception lagging behind even that of the controversial US Immigration and Customs Enforcement (ICE).

As public sentiment continues to sour, AI companies are adapting their strategies to mitigate backlash. This includes hiring former academics and researchers who can lend credibility to their initiatives while also shifting away from independent peer-reviewed publications towards more controlled in-house research outputs. Critics argue that this trend raises questions about the independence and integrity of the research being produced.

Why it Matters

The AI industry stands at a crossroads, with increasing public scrutiny and a demand for greater accountability. As firms like OpenAI and Anthropic invest heavily in lobbying and narrative-building, the challenge remains: can they authentically engage with societal concerns while promoting their technologies? The outcome of this struggle will not only shape the future of AI regulation but will also determine the extent to which these technologies can be integrated into our lives without compromising public trust and welfare. The stakes are high, and the discourse surrounding AI is more critical than ever.

Sarah Jenkins covers the beating heart of global finance from New York City. With an MBA from Columbia Business School and a decade of experience at Bloomberg News, Sarah specializes in US market volatility, federal reserve policy, and corporate governance. Her deep-dive reports on the intersection of Silicon Valley and Wall Street have earned her multiple accolades in financial journalism.

© 2026 The Update Desk. All rights reserved.