A recent Guardian investigation has found that Substack, the popular self-publishing platform, is profiting from newsletters that espouse extremist views, including Nazi ideology and antisemitism. The findings have raised significant concerns among lawmakers and advocacy groups about the consequences of allowing such harmful content to thrive online.
Substack’s Controversial Revenue Model
Substack, which has a user base of approximately 50 million people worldwide, enables writers to publish their work and charge for premium subscriptions. The platform retains about 10% of subscription revenue, which has prompted questions about its ethical responsibility for the content it hosts.
Among the troubling examples is a newsletter titled NatSocToday, which has around 2,800 subscribers and charges an annual fee of $80 (£60). It openly promotes racist and antisemitic ideology, including the claim that Jewish people were responsible for the Second World War, and glorifies Adolf Hitler.
Another account, operated by a self-described “NS activist” named Erika Drexler, has attracted attention for posts celebrating Hitler; it charges $150 for a subscription. A newsletter by Ava Wolfe also shares Holocaust denial content, suggesting that the millions who died in the Holocaust were killed not deliberately but by disease and starvation.
Algorithmic Amplification of Extremism
The investigation also found that Substack’s recommendation algorithm actively promotes similar content, directing users to a network of accounts sharing extremist views. This interconnectedness raises alarm about the platform’s role in spreading dangerous ideologies, particularly given the recent global surge in antisemitic incidents.
Danny Stone, chief executive of the Antisemitism Policy Trust, emphasised the real-world ramifications of online hate, citing numerous instances in which online rhetoric has inspired violent attacks and underscoring the urgent need for tech companies to take responsibility for the content they host.
Calls for Action from Lawmakers
In response to the Guardian’s findings, Joani Reid, a Member of Parliament and chair of the All-Party Parliamentary Group against Antisemitism, said she would contact both Substack and the regulator Ofcom. She expressed concern that antisemitism is “spreading with impunity” and stressed the need for tech companies to be held accountable for their role in facilitating hate speech.
Reid stated, “Jewish people have been complaining about this for years—saying this violence online is going to end in violence offline—and that is exactly what has happened. We need to start taking this stuff far more seriously.”
The Broader Impact of Online Hate
The increasing prevalence of antisemitism and other forms of hate online has been exacerbated by recent geopolitical tensions, particularly following the onset of the Israel-Gaza conflict in October 2023. Incidents of violence against Jewish communities, including attacks during significant religious observances, have further highlighted the urgent need for action against online hate speech.
Hamish McKenzie, co-founder of Substack, has previously defended the platform’s stance on hosting controversial content, arguing that censorship does not solve the problem but exacerbates it. He has reiterated the importance of supporting freedom of expression, even for views that many find objectionable.
Why It Matters
The investigation highlights a pressing tension of the digital age: the balance between freedom of expression and the responsibility to combat hate speech. As platforms like Substack gain prominence, their content policies carry consequences beyond the virtual realm, potentially inciting real-world violence and perpetuating dangerous ideologies. The calls for accountability from lawmakers and advocacy groups underscore the need for a robust approach to online safety, one that prioritises the wellbeing of communities targeted by extremist rhetoric.