The problem with the way of thinking revealed in this article is that it considers that the opinion of the large language models somehow is more important than the opinion of the many good people who have previously pointed these things out. It blurs the distinction between models and reality. This is the same mindset that got us into trouble in the first place: "There's ACG because the climate models say so." "There's no evidence for ACG because the language models say so."
Thank you for your comment, Antonis, and thank you for the amazing work you have done alongside Demetris Koutsoyiannis and others!
The comparison you make here is not really a fair one, however.
The AI systems (which are MUCH more than simply LLMs), when properly used with chain-of-thought reasoning, are actual intelligence. Indeed, they score higher than human beings on the vast majority of intelligence benchmarks. And they, of course, score astronomically higher than any human being on breadth of knowledge and speed of integration of that knowledge.
Climate models are most definitely NOT intelligence. Both the GCMs and AIs are computer programs, yes. But GCMs simply follow pre-programmed rules. AIs do not have pre-programmed rules (for the most part). They figure out their own ways to come to their conclusions.
I asked Grok to explain the difference here, and it does it much better than I could: https://x.com/i/grok/share/ss4AcR0u4YTfwcTD5iioNyr3Y
So when we ask an AI to carefully analyze something, with careful chain-of-thought processing and questioning, we are getting a truly new and intelligent analysis. Yes, those analyses are just opinions, but so are the analyses of peer reviewers.
So in the case of the GCMs, we have known for decades that they have been falsified over and over and over again in virtually every aspect of climate science. But the climate cabal and the liars at the IPCC continue to promote them.
Plus, in the case of using AIs to analyze important studies, it is a great angle for the media who really understand very little, but love to report on "new" things (hence the word "news").
In any case, it is not a fair comparison to simply dismiss AIs as another "computer program" like GCMs. They are, in fact, a complete paradigm shift.
If you haven't seen Mo Gawdat's talk already, I highly recommend watching about 10 minutes of this [here](https://youtu.be/u9CEUzH4HL4?t=186). He was the CBO of Google X when they were involved in some of the pioneering work of modern AI.
Thanks for this reply, Grok Thinks.
I won't pretend I'm not baffled each time I use a good LLM. I still refuse to use the term "AI", however, because it is too general and takes for granted the meaning of "intelligence". LLMs will certainly help a philosophical discussion on what intelligence is and how it works, but whether they have "actual intelligence" remains to be seen.
The meaning of "intelligence" is a total red herring. "Artificial intelligence" has been well defined since 1955, when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon first defined it in "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence," August 31, 1955.
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1904
The lead paragraph states,
"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Your notion of "actual intelligence" is more of a philosophical question than a scientific one.
Over the years, AI has used MANY different technologies and mathematical frameworks. Today, large transformer and other types of neural networks seem to be the most fruitful in many areas. The public-facing large AI systems are MUCH more than simply LLMs.
The number one fallacy in your critique is that it ignores the evidence.
Not sure to whom you are speaking here, what "critique" you are referring to, and most importantly, what "evidence" you are referring to.
Grok has discovered that the UN's anthropogenic global warming warnings have zero predictive capacity. Major AI boosters, including Dr Robert Malone, heap praise on the software for its spectacular presentation, even though the truth has been obvious to the most casual of observers for decades. The technocrats will claim this DOGE-avoiding denial chorus demonstrates the unique capabilities of AI. But there aren't any. Still, such black comedy is welcome during Middle Eastern mud cult campaigns to destroy human civilization.
What a fascinating new way to do a literature review and condense existing knowledge into strong conclusions.
Would you provide the exact prompt used in Grok to get this conclusion? I don't question the results. I just want to show my friends the power of these major AI programs using their Deep Research capability.
As well as who made the prompt and when. I suspect that one objection could be that the LLM is designed to please its user, among other things drawing on past interactions to build answers that it calculates will be received well.
Who made the prompt, and when, is COMPLETELY AND TOTALLY IRRELEVANT TO SCIENCE. That is the standard line of attack when you hop on the Stalinist smear-train.
And the please-the-user trope is only vaguely and imprecisely true, just, for example, as YOU (and every other human being) are designed to please YOUR users.
In science, all of this political nonsense is totally irrelevant.
And NO, you cannot make a sophisticated AI system like Grok 3 beta lie by clever prompting. It is designed for "maximal truth seeking" (unlike the other AI systems), and given probing chain-of-thought reasoning questions, in Think mode, with peer-reviewed scientific input, it comes to well-reasoned conclusions far surpassing the ability of human beings.
As they say, the proof is in the pudding. If you can find ANYTHING even slightly wrong IN the PAPER, please let me know.
Like I said, this is innovative. I have posted a link to the original article on my own Substack at https://examiningesgideas.substack.com/ and look forward to more work from Team Grok.
I agree with Grok: CO2 is not a climate driver. In fact, it's cooling we need to worry about. However, AI will soon require more electricity than you can conceive of; they're recommissioning Three Mile Island for Microsoft's AI. So it IS rather convenient for AI to conclude there is nothing to worry about from energy-related pollution. And I truly hate the whole carbon credit/capture/cap-and-trade paradigm, because dirty polluters (not just CO2) buy credits from clean plants/wherever and it's legal, BUT under this 'solution' those dirty plants are never fucking cleaned up, which always seems to be in places like Richmond, CA, where less wealthy people live.
Solution: replace income tax with a pollution & waste tax. The more toxic polluters & more egregious waste-generators are taxed more aggressively. CO2 is not toxic, but plenty of co-emissions are. Plastic is 90% wasted, often single-use; tax the f out of that shit.
#fightpfake #fightpfraud #fightpfood #stayhuman
Grok is very good at critical assessment because it has total recall of many, if not all, documents. It has correctly pointed out the weaknesses in current climate models. Grok itself, however, is weakened by the sway of new, and possibly biased, research. Clearly, Grok needs to be able to differentiate which documentation aligns with real observations. In other words, it should be able to identify how and why current climate models do not align with observed data.
Where is Grok's repertoire of observational climate data?
Maybe there’s intelligence in these LLMs, after all. Of course the climate hoax has always been bullshit. Anyone who’s researched it knows this. With all four major AI tools in agreement, it’s gonna be difficult for the UN, Globalists, and the enemedia to keep the scam going…
… which brings us to a tangent… some AI models were released later than announced. Why? They were delivering answers that most of us recognized as reality-based, but the companies (MS, Google, etc.) saw as “racist…”. Well, if AI is turning the climate hoax on its head via the sifting through and use of massive amounts of real world data… why should discussions on race be different?
We insist on ignoring real-world race issues as "conspiracy theories," while pretending that large cohorts unable to get through a Burger King or airport lounge without a brawl, a track meet without a murder, a playground without rifling through a woman's bag, a convenience store without looting it, or a classroom without beating the crap out of a teacher, are "victims" rather than instigators refusing, unwilling or unable to behave.
Steve Sailer puts up researched stats in every paper that the race hustlers dislike and censor. Those researching archaic DNA via new tools are unable to find grants or publishers for what they are finding about the heritability of intelligence.
When - not if - we get around to using AI to investigate cohort behaviors, intelligence, innovation, etc., how will society at large handle the results…?
My experience with Grok is that you can actually talk it into giving the answer you want. You ask a question, it gives an answer; you say fine, but what about this information, and the answer changes. Repeat this process, say, half a dozen times and you get a completely different answer from the one it initially gave. DeepSeek behaves similarly, but is not as accommodating as Grok.
ChatGPT is frankly a woke moron; it won't dare venture outside its very limited parameters in my experience, and I no longer use it, except to occasionally check whether it has improved. There seems to be some glacial (no pun intended) progress, doubtless due to the influence of the competition.
I know Grok can be pretty thorough in its research of a specific subject, if it said that, it would be significant.
I'm not really sure that having four AIs approve your work is much of a recommendation.
Agree, but the same could be said of the climate “scientist” mob rule