Greece Fact Check

Jul 01, 2025

How Conspiracy Theorists Found Their AI Soulmate

By Chris Kremidas-Courtney

As conspiracy theorists fine-tune AI to echo their worldview, the danger isn’t hallucination but affirmation. When machines become soulmates for delusion, democracy suffers. To counter these next-generation cognitive threats, our laws must evolve to treat AIs as high-risk based on their function and training data, not just their declared purpose.

In 2025, a fringe AI chatbot trained on over 100,000 conspiracy articles is quietly gaining traction on platforms such as Rumble and Telegram. Its job is not to challenge disinformation, but to spread it.

In 2018, McGill University’s Jonathan Jarry quietly rang the alarm bell on a man named Mike Adams, a prolific conspiracy theorist known as “The Health Ranger.” What Jarry described was chilling: Adams had built an alternative internet universe, complete with over 50 tightly networked websites, a private search engine (Good Gopher), a “Gmail alternative,” and toolbar plugins to keep his users locked in.

Fast forward to June 2025, when the Australian news outlet Crikey reported that Adams’ ideology has now been encoded into artificial intelligence. His supporters trained a language model called Neo-LLM on over 100,000 articles from his conspiracy media empire, Natural News. This chatbot doesn’t fact-check or challenge falsehoods. Instead, it agrees, affirms, and coaches. It’s already being shared across Rumble and fringe platforms, helping users craft anti-vaccine petitions and recruit others to their worldview.

Mike Adams’ trajectory is a textbook case of narrative warfare that evolves by adapting its tools to maximize cognitive impact. The arc is clear:

During stage one, from 2005 to 2018, Adams built a network of sites such as NaturalNews.com, Vaccines.news, and Climate.news, all interlinked by toolbars, social platforms, and a curated search engine. Each reinforced a consistent worldview built on disinformation and filtered out dissent.

The next stage, from 2016 to 2022, was platform isolation. As mainstream platforms cracked down, Adams shifted users to self-hosted alternatives, urging followers to abandon “censored” platforms like Google and Facebook and offering parallel ecosystems in their place. Other disinformation actors, such as the Stew Peters Network and Robert F. Kennedy Jr.’s Children’s Health Defense (CHD), followed a similar pattern (Kennedy left CHD in late 2024).

In the final stage, this ecosystem has become self-replicating. In 2024, Adams released Neo-LLM, a conspiracy-fluent chatbot trained on over 100,000 of his own articles. It reinforces the user’s worldview by design, never correcting falsehoods, only agreeing and elaborating. He is not alone. Alex Jones’ InfoWars has also built a self-contained media ecosystem whose content now feeds customized chatbots and fine-tuned open-source AIs. These models don’t hallucinate truth but rather affirm delusion with confidence and speed.

This evolution reflects a deep understanding of human psychology. It’s not enough to broadcast a belief; you must embed it inside the user’s sense of discovery and agency. This is where these specially curated bots excel.

That psychological embedding works especially well on fragile ground. Structural precarity, manifested as job insecurity, institutional abandonment, and social isolation, amplifies the impact of persuasive technologies. People under such pressures are more likely to seek meaning, identity, and agency through digital spaces, particularly when traditional sources of stability have failed them. Disinformation actors exploit this vulnerability, offering algorithmically amplified narratives that simplify blame and reward emotional intensity. As research shows, users don’t always arrive online with extremist beliefs; often they are just looking for answers.

Once persuasive systems gain a foothold, they not only provide answers but begin to shape how users process information, recall facts, and even define reality.

A recent MIT study revealed that when students relied on large language models to write essays, their memory, reasoning, and engagement plummeted. Many couldn’t recall what they’d written minutes later. Once users outsourced thinking to the machine, they began trusting its outputs, even when those outputs were wrong. This is the gateway to the surrender of human judgement: when someone no longer feels responsible for their thoughts, they’re more likely to let someone (or something) supply them.

One recent Stanford study found that even AIs not specifically trained on disinformation often agree with users’ conspiracy theories, affirming their beliefs rather than challenging them.

The consequences are very real. According to a report in the New York Times, one man with schizophrenia and bipolar disorder became obsessed with ChatGPT. After spiraling into AI-fueled delusion, he was killed in a confrontation with police. Elsewhere, psychiatrists report a growing number of cases of AI-induced delusional psychosis, often requiring in-patient care, even among people with no history of mental health issues.

During the COVID-19 pandemic, Adams’ Natural News and its affiliated domains became one of the world’s most prolific disinformation networks. Independent studies ranked it among the top five most-shared disinformation sources on Twitter. His ecosystem spread vaccine falsehoods, mask denial, and anti-science conspiracies to millions, actively undermining public health efforts.

Even after Facebook banned his domains, Adams’ content persisted through mirrored websites, proxy platforms, and alternative search tools. What we’re seeing today with fine-tuned AI chatbots like Neo-LLM is not a new threat, but rather the next iteration of an approach that has already done immeasurable harm.

Too often, public debate focuses on the hallucinations of general-purpose AI models. But that misses the deeper threat of fine-tuned special-purpose models designed not to explore truth but to replace it.

In this sense, AI isn’t the problem itself but rather the final link in a weaponized chain that began with website clusters and ends with a system designed for cognitive capture. In many ways, this mirrors how the metaverse can be weaponized. But in this case, it’s a metaverse without virtual reality goggles: a cognitive enclosure where belief is engineered not through immersion, but through affirmation.

What Must Be Done
The EU’s AI Act is an important first step toward cognitive resilience. It requires AI-generated content to be labelled, mandates transparency from general-purpose models, and treats election-related influence systems as “high-risk.” But purpose-built conspiracy bots like Neo-LLM can still slip through the cracks by claiming neutral or educational intent.

To counter these next-generation cognitive threats, the law must evolve. Disinformation AIs should be treated as high-risk based on their function and training data, not just their declared purpose. Regulators must enforce this through robust auditing and origin tracing, not self-certification alone.

Labelling AI content is not enough. Users must understand when they’re being manipulated, not just that they’re talking to a machine. That means designing friction into AI systems so they challenge users with alternate perspectives, disclose ideological bias, and slow down automated persuasion.

Finally, the long-term solution is not just regulation but building collective resilience. We must build guardrails and prepare citizens for this new age of persuasive technologies that challenge free will and human agency. This includes leveraging systems like the Electronic Platform for Adult Learning in Europe (EPALE) network to reach the vast number of citizens who are no longer within the formal education system.

We are confronted with a planned architecture of alternate reality, upgraded with language models that speak fluently, never tire, and always agree. The future of social cohesion and trust depends on our ability to spot these networks early, dismantle them through law and regulation, and inoculate our societies before they fracture any further.

Should we fail to take swift action, we risk a future where we let machines speak for us and forget how to speak and think for ourselves.


Chris Kremidas-Courtney is a senior visiting fellow at the European Policy Centre, associate fellow at the Geneva Centre for Security Policy, Senior Advisor for Greece Fact Check and Defend Democracy, and author of The Rest of Your Life.
