The Road to (Mental) Serfdom & Misinformation Studies
In which I expand on my previous post about misinformation studies, argue there is no such thing as "inoculating yourself against misinformation", and warn against top-down control
Those critical of Communism often highlight how it's underpinned by Envy, but I think supporting Communism is first and foremost a result of the Sin of Pride: there's immense hubris in believing one can design a centralised economic system that beats evolutionary forces. In "The Road to Serfdom", F.A. Hayek contends that government control of economic decision-making, even with good intentions, inevitably leads to totalitarianism. Hayek was a visionary: a lot of intellectuals persisted in their love for Communism even after the horrors of the Soviet regime became apparent.
While outright advocacy for Communism may not be widespread among intellectuals at the moment, there remains a latent affinity for top-down control - a kind of ideological ember that, though subdued, is still smouldering, waiting for the right conditions to reignite. Often, the catalyst for such a resurgence is the perception of a looming threat (which might very well be a justified worry in itself), such as the recent concern over misinformation. The same pride that made intellectuals believe in centralised control over the economy now often leads them to support a form of epistemic control to fight off misinformation. The US even briefly had a "misinformation czar" to deal with this problem. The political and media movement feeds off the actual academic field of misinformation studies. This is a heterogeneous domain, but there is certainly an element of it that veers into justifying top-down control over the information ecosystem, often by overstating how good researchers are at detecting false information and why we should trust them.

Perhaps nowhere is this overstatement more prominent than in the book "Foolproof" by Sander van der Linden, one of the foremost researchers in the field. If you are inclined to dismiss this book as a sort of inconsequential academic exercise, don't! These are the kind of outlets that recommend it (I was particularly disappointed by the Financial Times). The book also has a blurb from Marianna Spring, the BBC's first disinformation correspondent. Funnily enough, she was recently revealed to have lied on her CV - I guess misinforming employers is ok (they are evil capitalists anyway) if you are doing it for the "Greater Good".
In this post I am going to argue against one of the basic premises of this book - that we can find a broad-acting "vaccine" against misinformation - explain why this supposed cure is worse than the disease, and highlight some very fundamental problems with a large part of this field. In doing so, I am drawing inspiration from philosopher Dan Williams1, one of the few non-anonymous people who have identified the problems with this field.
The illusory promise of regime scientism
In my previous post I wrote about how I think misinformation studies is in large part a form of “regime scientism”. A summary of what I mean by that:
A key part of legitimising speech and thought control in totalitarian regimes is pretending there are “scientific” reasons to do so. It’s stolen valour from actual Science: you get all the legitimising effect of Science with none of the actual rigour. So-called Science is applied to the messy, subtle, hard to quantify world of human interactions to obfuscate and avoid any dissent. Notice how none of the topics tackled by misinformation studies has anything to do with the areas where the scientific method works best (STEM fields).
I was not aware just how deep this scientism goes until reading the above-mentioned centrepiece book of the field, Foolproof. This work treats misinformation as a virus, claiming to have identified its DNA and proposing ways to inoculate against it. The author's overconfidence in this inoculation is evident in the book's title, "Foolproof," reminiscent of an overhyped cleaning product commercial. The book's use of biological terms like "DNA," "virus," and "inoculation" in the context of misinformation seems especially egregious. Each chapter ends with a "Fake News Antigen": a list of summarised recommendations for how to inoculate yourself against misinformation. Several of these make use of mnemonics. For example, the book introduces the acronym DEPICT (Discrediting, Emotion, Polarization, Impersonation, Conspiracy, Trolling) as a mnemonic for the supposed characteristics of misinformation, its alleged "DNA".
Now, you might ask: Why do I need a mnemonic for this? Well, imagine walking out of your house, moisturised and thriving, when misinformation suddenly catches you off-guard. In such a situation you need DEPICT, so that you quickly remember what the DNA of misinformation is, correctly identify the misinformation chasing you and repel it! Bam! The Misinformation is dead, on the floor, screaming and kicking.
In case it wasn’t obvious, this was a joke, meant to highlight the sheer ludicrousness of the idea that applying some memorised rule can guard one against misinformation. Dan Williams makes a compelling case for why these DEPICT features do not reliably capture the “essence of misinformation”.
Regarding the concept of inoculation, "Foolproof" draws a parallel between traditional vaccination methods and a strategy for combating misinformation. In the same way that vaccines introduce a weakened version of a virus to prompt an immune response, the book suggests that exposing people to a diluted form of misinformation's DNA could help them develop resistance to it. The author supports this idea with examples of interventions that have supposedly been effective in protecting against misinformation. However, Dan's analysis casts doubt on the reliability of these claims, questioning the trustworthiness of the evidence presented (see below).
One might argue that even though the DEPICT framework is currently ineffective in predicting misinformation, there is potential for improvement. Could advanced tools be developed to better decode the so-called fingerprints of misinformation, thereby elevating misinformation studies to a Science?
I remain sceptical. I do not think the issue at hand is merely a technical one that can be resolved with more sophisticated tools, but rather a fundamental flaw in the premise of the field: using pattern recognition based on surface-level features for detecting misinformation. If there is any rule to be applied in identifying false information, it could be summarised simply as: “Is this claim supported by evidence?” And assessing the strength of evidence for specific claims is what entire institutions and most humans are already doing and have been doing forever, with varying degrees of success.
When false information is obvious, reasonable people who are not ideologically biased on that specific topic or otherwise very emotionally attached to it are pretty good at detecting it without resorting to some mnemonic rule. If they are driven by ideology or lack judgement, applying some DEPICT rule won't save them. When misinformation is more subtle (aka propaganda), it demands meticulous argumentation and a thorough examination of the object-level claims presented. There is no way of getting around this latter process, no magical trick. But this is exactly the opposite of what Foolproof is trying to convince us of! Consider for example this sentence from the book: "Although it's possible to inoculate people against specific falsehoods, it's much more effective to immunise against the building blocks of the virus itself". So much confidence in the feasibility of sidestepping thinking for ourselves!
Just like a free market allows disparate individuals and companies to try, fail and then maybe succeed at creating a product, freedom of thought leads to institutions and opinion makers trying to get at the truth. It's from this constant churn of people trying their best that something resembling Truth emerges, and never from top-down control or the blind application of some rule. That does not mean there aren't "winners" in this search for the truth: indeed, in a healthy society, institutions that are trusted by a majority of the population emerge. But this trust has to be gained organically, not imposed top-down (more on that later).
What these DEPICT-type rules are trying to do is convince us that it's ok to give up on the hard, energy-consuming process of thinking through claims. It's an illusory free lunch. But there are no free lunches, and there is no understanding things without thinking. The good news is that thinking for yourself is the road to freedom. The bad news is that it hurts.
Brute Misinformation vs Haute Bourgeois Propaganda
To understand more about my claims, let's look at the methodology of one of the papers claiming to have identified the "fingerprints of misinformation". This is not a random paper, but one chosen as representative by the authors of a rebuttal to Dan Williams' critiques. Basically, what the study does is collate articles published in a series of outlets included in a "Fake News Corpus" (e.g. InfoWars) and compare their tone to those published in respectable outlets like the New York Times. Their findings are: "The results show that misinformation, on average, is easier to process in terms of cognitive effort (3% easier to read and 15% less lexically diverse) and more emotional (10 times more relying on negative sentiment and 37% more appealing to morality)"
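To make concrete what this kind of "fingerprint" analysis amounts to, here is a minimal sketch of my own (not the paper's actual code, and using toy proxies rather than its real feature set) of how one might compute such surface-level statistics over two labelled corpora:

```python
import re
from statistics import mean

# Toy lexicon standing in for a real sentiment dictionary; purely illustrative.
NEGATIVE_WORDS = {"crisis", "disaster", "outrage", "corrupt", "evil", "lie", "threat"}

def fingerprint(text: str) -> dict:
    """Crude surface-level features of a single article."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # shorter sentences as a rough proxy for "easier to process"
        "avg_sentence_length": mean(len(s.split()) for s in sentences),
        # type-token ratio as a rough proxy for lexical diversity
        "lexical_diversity": len(set(words)) / len(words),
        # share of words from the toy lexicon as a rough proxy for negative sentiment
        "negativity": sum(w in NEGATIVE_WORDS for w in words) / len(words),
    }

def corpus_average(texts: list[str]) -> dict:
    """Average the features over a labelled corpus of articles."""
    profiles = [fingerprint(t) for t in texts]
    return {k: mean(p[k] for p in profiles) for k in profiles[0]}

# Usage: compute corpus_average() for a "fake news" corpus and a mainstream corpus,
# then report the relative differences between the two sets of averages.
```

Note that every feature in a sketch like this is a property of the text's surface, not of whether its claims are actually true.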
While it is nice to have this confirmed, is this in any way predictive? Does knowing this offer us some Swiss Army knife against misinformation? No: I contend that, at best, misinformation fingerprints confirm something we already know, but are of limited use in more subtle situations. This is because not all misinformation is created equal. I think it's worth distinguishing between two different types of fake information:
Brute Misinformation: explicitly crazy ideas of the kind you would find on InfoWars. I just googled InfoWars and of course their main page looks and sounds insane to the naked eye. Most reasonable people can identify such misinformation. These are the kinds of websites included in the Fake News Corpus, so the "fingerprints of misinformation" will basically reflect the difference in style between these and more respectable sites2.
Do we really need "Science" to tell us this kind of crap is false? Now, a rebuttal to this would be that people still read InfoWars, so clearly there is a subset of the population that cannot correctly identify these kinds of sites as non-credible. But it's not like those people are going to read misinformation studies papers and be like "Aha, I liked InfoWars a lot, but now that I have identified the fingerprints of misinformation and have been inoculated against it, I will stop reading it!"
Haute Bourgeois Propaganda: usually more subtle, relying on presenting partial truths or spinning narratives in a way that is favourable to one side or the other. This is the kind of stuff you'd find in The New York Times. If the topic were a war, it might focus more on the casualties of one side, without explicitly stating untruths or conspiracy theories. Now, this is incredibly tricky, and it would be very impactful if we could decode it. But such high-end propaganda is by its very nature hard to disentangle using fingerprints.
For example, let's look at two articles from two respectable publications (the New York Times and Scientific American). They give very different answers to the same question: "Is overpopulation a problem?" Their disagreements are not purely value-based (though they do arise from a difference in values): they also extend to how they weigh empirical facts. Naturally, at least in the realm of empirical evidence, one of them must be more correct than the other. To judge this, one needs to carefully consider the claims: there is no smoking gun in either of them of the kind suggested by DEPICT. Indeed, the most successful propaganda, the kind that ends up influencing elites, looks nothing like the InfoWars drivel.
The danger of relying on surface-level patterns to distinguish misinformation lies precisely in these limitations. Identifying and quantifying such patterns can be useful as an academic pursuit; the problem arises when the method is used to make sweeping generalisations or to extrapolate findings onto new areas, as in "Foolproof". In the realm of social sciences, there is probably a place for analysing such data, provided the implications and limitations of the findings are considered carefully. However, as this field garners significant media attention and influences political discourse, it is imperative to approach the interpretation and presentation of these results with caution.
I am going to flesh out a bit more the concrete dangers stemming from this field:
1. It distracts attention from actually doing things that would halt the spread of false narratives (more on that in the next section).
2. It threatens to use performance on obvious cases like InfoWars vs the New York Times to justify encroaching on freedom of speech and thought in an arbitrary way in more subtle situations (thus enforcing haute bourgeois propaganda). The process goes as follows: you run an analysis of the kind indicated in the paper above. You declare you have found the "fingerprints of misinformation", which offer poorer predictive value than common sense or other types of analysis, but no matter. Then you flag something you simply do not like or disagree with as misinformation, based on its having some of these "fingerprints". If someone objects, you can simply point to some academic paper that supports your claims. Case closed.
If you do not think 2. can happen, I have news for you. Below is a tweet from the author of Foolproof. He seriously argues that the joke below is misinformation because it incorrectly portrays Joe Biden as senile. Most people would use their judgement and recognise this as humour, without the need for "an expert".
Further down in the comments you can see the appeal to authority, where various academics ask for citations and literature references to check whether that meme is a joke or not.
What can academics do?
All that I have written above does not mean that I don't believe there are patently false ideas (brute misinformation) whose spread leads to bad consequences in the real world. Two of them come to mind: the anti-vaccine and stolen-election narratives. The first has led to loss of life and the second is a real threat to the public's confidence in democracy. The fact that patently false information has spread so much is worrying, because it is a symptom of a really negative underlying phenomenon: a decrease in institutional trust. But trying to exert further top-down epistemic control is just sweeping the dirt under the rug.
So how do we combat the spread of these false ideas?
Notice that you do not need "fingerprints of misinformation"-based methods to prove why these narratives are false. For example, in the vaccine case, there have been smart analyses (for example this one by Nate Silver3) that simply analysed the facts on a case-by-case basis and built a logical case. To the extent that anyone leaves aside their tribal instincts and actually considers the arguments for and against vaccines, they are going to be much more swayed by such analyses than by someone declaring they have found "the fingerprints of misinformation" in anti-vax claims.
Of course, most people who are anti-vax won’t read Nate Silver’s analysis. The truth is most of us do not have time to carefully go over all the topics that might be of interest, so we have to rely on authorities that we trust. So this is where having robust institutions that do their best to get at the truth and elicit trust in the population comes into play.
Instead of inventing new red-herring fields, academics do have the option of actually fighting such brute misinformation in an efficient way: by enhancing trust in our knowledge-generating institutions (academia & media) among the general public. If academia (in the anti-vax case) and the media (in the stolen-election case) enjoyed higher levels of trust, these narratives wouldn't have been so prevalent. The real threat to our informational ecosystem is the plummeting confidence in higher education institutions that we see in the American public, as illustrated for example by this Gallup survey.
It's striking that decreased confidence is a trend that can be seen across age groups and political orientations (although Republicans and those without a college degree saw a much bigger decrease in trust).
Unfortunately, the incentives in academia are aligned such that it's much more profitable to invent a new field and publish a book about it (and do some high-status social signalling in the process) than to perform the hard and onerous work of going against your colleagues and explaining why some of their approaches are wrong. Indeed, the second route will be actively detrimental to your career. So I do not believe this is actually going to happen. My bet for the most likely outcome is that academics will keep doing what they are doing and trust will erode further. Variance between people will increase: those "in the know" will have early access to all sorts of cool new stuff through friend networks. The majority of the population will be left behind.
Now, one might ask: "Why are you opining on this, Ruxandra? You are not a misinformation scientist or a philosopher or any of that stuff." I think this is precisely WHY I should opine on it. One of the subtle ways in which thought control over human affairs is exerted is by having people relinquish authority on these matters entirely to so-called "experts". This approach, which suggests that only those steeped in academic literature can understand what's plainly visible, is one I wholeheartedly reject. But for those concerned about credentials, I offer my background in Biology. The misuse of biological and scientific terms in misinformation discourse is, in my view, not only inappropriate but also somewhat absurd.
There is also a level of circularity involved, since these misinformation sites had to be manually annotated as misinformation.
Could you imagine someone smart and thoughtful like Nate Silver being like “Aha, I have identified the DNA of misinformation in this piece. Q.E.D”?
"haute bourgeois propaganda"...yesterday marked the 120th anniversary of the Wright Brothers first flight. Only 9 weeks previous to that flight, the NYT mocked the idea of heavier-than-air flight:
https://bigthink.com/pessimists-archive/air-space-flight-impossible/
And in 1920, Robert Goddard's rocket experiments were dismissed by that newspaper in an almost unbelievably arrogant manner:
"That professor Goddard, with his 'chair' in Clark College and the countenancing of the Smithsonian Institution [from which Goddard held a grant to research rocket flight], does not know the relation of action to reaction, and of the need to have something better than a vacuum against which to react -- to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools."
https://www.forbes.com/sites/kionasmith/2018/07/19/the-correction-heard-round-the-world-when-the-new-york-times-apologized-to-robert-goddard/?sh=321463304543
The real threat to our informational ecosystem is …
Secret, illegal discrimination against hiring Republican professors.
All tax exempt orgs should be required to be diverse, at least 30% Republican & 30% Democrat.
NYT false info, misinformation, for 2 years of Collusion hoax fake news, certainly helped Dems in 2018 election. NYT, and US (deep state) govt censorship of the truth of H Biden’s corruption, with smoking gun evidence on his laptop, helped Biden win in 2020. (Rigged election! Stolen)
Most college educated folk don’t want to believe that the US election was actually stolen. They look for, and provide a market for, rationalizations that it wasn’t stolen. Just like the Williams theory predicts they, and Dan Williams himself, will do.
It’s a great theory.