As Lee Haughen cogently argued on Watts Up With That?, the rise of Large Language Models (the technology behind popular chatbots like ChatGPT and Google Gemini) will further entrench scientific “consensus” – even when that consensus excludes those who dare challenge it. Climate change is a perfect example:
When LLMs are trained on vast amounts of data, their primary objective is to provide responses that align with established facts, most of which are based on widespread human consensus. But what happens when this consensus is wrong? What if the narrative that dominates the conversation is one-sided, incomplete, or even deceptive? In the case of climate change, the dominance of a singular perspective is not the result of an impartial, objective review of all evidence but rather the product of institutional biases, political agendas, and economic incentives.
Every major search engine and AI tool tends to default to sources such as NASA, the IPCC, and the United Nations – organizations that have become synonymous with the promotion of catastrophic climate change narratives. AI, in turn, reflects this consensus, presenting it as incontrovertible truth. In doing so, it stifles genuine debate and prevents alternative viewpoints from receiving fair representation. In reality, there are numerous scientists from a variety of disciplines – including climatology – who question the data, methods, and conclusions drawn by climate change alarmists. Yet their voices are often marginalized, and their work is frequently excluded from mainstream discussions.