Inside the Lucrative World of Government-Funded “Fact-Checking”
The Quietly Expanding Industry of Misinformation Combatants: Serving Western Governments and Global Brands
Today, Mark Zuckerberg announced that Meta would be ditching its fact-checking program, opting instead for a system reminiscent of X's Community Notes. Zuckerberg framed the shift as a return to “Meta's roots,” emphasizing free expression over what he described as “too many mistakes” and “too much censorship” by politically biased fact-checkers.
This decision marks a significant pivot in how Meta manages content across its platforms, including Facebook, Instagram, and Threads, starting in the U.S. Zuckerberg's rationale was clear: the recent elections felt like a “cultural tipping point” towards prioritizing speech, so a community-driven approach will let users, rather than third-party fact-checkers, provide context and corrections to potentially misleading information.
This announcement didn't occur in a vacuum. It comes at a time when the very concept of truth in the digital age is being renegotiated, often with alarming financial and political implications.
Since the circus of the 2016 election, the market for so-called “misinformation combatants” has exploded, raking in over $300 million, with government money predominantly greasing the wheels. These startups, masquerading as guardians of truth, have become little more than well-funded censors, with taxpayers unwittingly footing the bill for this Orwellian charade.
Consider NewsGuard, which, with a cool $21 million in its coffers, has taken upon itself the divine right to judge media outlets. By pressuring advertisers and third-party vendors to blacklist those it deems “untrustworthy,” NewsGuard isn't just playing watchdog; it's acting as the executioner of free speech. This isn't about ensuring information integrity; it's about controlling the narrative by economic strangulation.
Blackbird.AI boasted a $20 million Series B round last year and claims to shield 2,000 companies and “national security organizations” from the boogeyman of “narrative attacks.” But let's not be naive: protecting against “misinformation and disinformation” is just a euphemism for stifling dissent and controlling public discourse.
The very notion that a company can decide what constitutes a “narrative attack” is a direct assault on democratic principles, where every voice should have the chance to be heard, not just the ones that align with government or corporate agendas.
Storyzy, another player in this dystopian game, offers “round-the-clock monitoring” for the UK government, tracking what they call “disinformation trends and false actors.” This is no longer just surveillance; this is a sophisticated form of thought policing.
Despite all this pomp and expenditure, there's scant evidence—virtually none—that misinformation has ever swayed an election. It's all smoke and mirrors, a multi-million-dollar industry built on the myth that the public is too gullible to think for itself.
This isn't about protecting democracy; it's about manipulating it, ensuring that only approved truths reach the ears of the electorate. The real misinformation here might just be the narrative that these companies are doing any good at all.
The arrest of Telegram's founder, Pavel Durov, by French authorities at the end of August shouldn't have been the surprise it was, particularly not to Durov himself. The EU had been ramping up its rhetoric against Telegram for months, reaching a fever pitch around the June 2024 EU elections. Officials were practically screaming about being “flooded” with disinformation, but let's call it what it is—a thinly veiled attempt to muzzle platforms that don't bow to their censorship demands.
While every major platform got the side-eye, Telegram was singled out for special scrutiny—not because it was the worst offender, but because it dared to stand its ground. A month before the elections, the EU launched an investigation into whether Telegram qualified as a “major online platform” under the Digital Services Act, which came into full effect in February 2024.
The real agenda here? To force Telegram into the same oppressive regulatory framework that crushes free speech under the guise of “protection” from disinformation.
Estonia's Prime Minister didn't mince words in May 2024, accusing Telegram of allowing disinformation to spread “openly and completely unchecked.” But her gripe, and that of her EU cohorts, wasn't just about disinformation; it was about Telegram's refusal to play the censorship game.
However, don't be fooled into thinking that Telegram's content has been left unmonitored. Over the last decade, a lucrative market has sprung up, catering to governments and brands eager to control the narrative. Enter the “MDM” industry—misinformation, disinformation, and malinformation—where companies label, track, and remove content deemed inconvenient or “bad for you,” even if it's the truth.
This industry is ballooning into a behemoth, with startups raking in venture capital like it's going out of style, and well-established firms snagging contracts worth billions. These companies are creating a new market for thought control, where the line between truth and falsehood is drawn by those with the deepest pockets and the most to gain from public ignorance.
As the EU ramped up its scrutiny of Telegram, across the Channel, the UK's Government Communication Service International was busy engaging the Paris-based OSINT platform Storyzy for what it calls “round-the-clock monitoring.”
This Orwellian arrangement, costing a mere $50,000 per seat, was aimed at tracking “disinformation trends and false actors” on platforms like Telegram. Not content with that, Storyzy also joined the ATHENA project, a $3.35 million EU initiative to sniff out “foreign information manipulation and interference,” a fancy term for silencing dissent.
The venture capital world has seen gold in this new market of thought policing. London's Logically, with its “advanced AI to fight misinformation,” has managed to secure $37 million, while Factmata, once backed by Biz Stone and Mark Cuban, was gobbled up in 2022.
Clarity, focused on spotting AI-generated deepfakes, pocketed $16 million, and Reken, led by a former head of trust and safety at Google, raised $10 million to “protect against generative AI threats.” ActiveFence, under the guise of empowering “Trust and Safety,” has amassed $100 million. According to Crunchbase, 16 such startups alone have guzzled over $300 million, all in the name of “combating misinformation.”
The irony? Governments are not just the regulators; they're the biggest customers. Logically, for instance, has been lucratively tethered to the UK government via contracts worth $1.3 million from the National Security Online Information Team (NSOIT), formerly known as the Counter Disinformation Unit.
They've used this tech to flag, among others, a tweet from Dr. Alex de Figueiredo questioning child vaccination policies and an interview by Julia Hartley-Brewer discussing lockdown experiences. NSOIT's rebranding after backlash didn't change its mission; it still aims to “understand disinformation narratives” to ensure the government can “take appropriate action,” which, in plain terms, means silencing opposition.
And let's not forget the UK's free speech watchdog, Big Brother Watch, which exposed Logically for spying on British citizens, including their own director, for merely engaging with or liking posts. Logically also reported Hartley-Brewer to the government for sharing government-supplied statistics on cancer deaths during lockdown, which cancer charities had highlighted.
“I do think there's a massive boom in the proliferation of these fact-checking companies or counter disinformation, AI-based companies,” declared Mark Johnson, an advocacy manager at Big Brother Watch. Johnson, ironically, found his own name in a Logically report to NSOIT due to tweeting a link to a parliamentary petition against vaccine passports.
“They are tapping into a wider kind of trend, which is essentially censoring — the platforms and other big players will say 'moderating' — but really censoring speech based on its perceived veracity and accuracy. This is a trend that's happening across the Western world at the moment.”—Mark Johnson
In the US, this collaboration between for-profit MDM companies and government runs even deeper. In 2021, the Department of Defense handed out a $979 million contract to Peraton to “counter misinformation” for United States Central Command, which oversees operations in the Middle East and Asia. Peraton, a Veritas Capital company (Veritas once owned Raytheon Aerospace), absorbed Northrop Grumman's IT services arm.