The English language is a great way to communicate: there are so many ways of saying things that you almost never have to choose just one. The downside is that you can ask what seems, on the surface, to be a perfectly reasonable question, but which actually misses the point of the topic it is supposedly based on.
Take the first question on this odd poll: “Should climate scientists discuss uncertainty in mainstream forums?”
The answer, of course, is YES. Yes, they should. The difficulty is that, thanks to a general lack of understanding of how science works and a general ignorance of statistics, the term “uncertainty”, like the term “theory”, takes on a different meaning for those who don’t spend their time immersed in scientific literature, let alone doing research themselves.
Science deals chiefly in probability, and this means a couple of things. The first is that high standards must be maintained. If you want to call your results significant, you have to show that there is at most a 5% chance you would have seen results like yours if the effect weren’t real; in other words, you need to be at least 95% confident. Following from that requirement is the desire for a large sample size. If you poke five frogs in the butt to test your hypothesis that frogs hop when poked in the butt, you wouldn’t have much certainty in your results. You might have four that hopped and one that screamed instead, and then you’d estimate only an 80% chance that a poked frog hops (and a 20% chance of traumatizing any nearby children) from a very small sample. If that screamer were one out of 100 frogs poked, you’d estimate only a 1% chance of trauma, and you’d be fairly sure your results were reliable, since you’d just poked 100 different frogs.
But you’d still have that 1% uncertainty, and while it’s very small, it’s still there. Were you talking about it on the news, you’d have to admit that you couldn’t be 100% certain that a poked frog would hop, because it might just be a screamer…
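The frog arithmetic above can be sketched in a few lines of Python. This is a rough illustration, not a rigorous analysis: it uses a simple normal-approximation confidence interval, and the function name and numbers are just the post’s made-up frog data.

```python
import math

def hop_estimate(hopped, poked, z=1.96):
    """Observed hop rate plus a rough 95% confidence interval
    (normal approximation; an illustrative sketch, not rigorous stats)."""
    p = hopped / poked
    half_width = z * math.sqrt(p * (1 - p) / poked)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Five frogs, four hoppers, one screamer: an 80% estimate, but a shaky one.
rate, low, high = hop_estimate(4, 5)
print(f"5 frogs:   {rate:.0%} hop, plausibly anywhere in {low:.0%}-{high:.0%}")

# A hundred frogs with one screamer: a 99% estimate you can lean on.
rate, low, high = hop_estimate(99, 100)
print(f"100 frogs: {rate:.0%} hop, plausibly anywhere in {low:.0%}-{high:.0%}")
```

The point the interval makes is the same one the post makes: the estimate barely moves, but the uncertainty around it shrinks dramatically as the sample grows.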
So then look at climate science. We have several thousand scientists actively working on this issue, and most of them have conducted more than one project over the last half century. Each of those projects, to be called significant, has to meet that 95% minimum threshold, and a significant number of them (95%, remember?) do. So let’s say we have a sample size of 10,000 projects with published reports (it’s more than that, but bear with me), and 5% of those are screaming frogs. That leaves us with 500 that don’t meet our lofty standards, and 9,500 that do.
Now, those 9,500 papers point to climate change being a real, man-made problem. There are varying degrees of severity, perhaps, but if we accept that it’s a real, man-made problem, we also have to take their proposed solutions seriously, and acknowledge that by doing nothing we are choosing the risky behavior despite knowing we have a 95% chance of losing.
So now we play an extravagant game of Russian Roulette. We’ve got a really, really big revolver. REALLY big. It’s got 10,000 chambers. Instead of just one of them being loaded, though, 95% of them are full. That means 500 of the 10,000 chambers are empty and pose no threat to us, while 9,500 are loaded and will kill us very, very dead. You would, in fact, have better odds of survival playing with a regular revolver loaded with 5 bullets and one empty chamber, but we’re going for statistical significance, and a 16.667% error rate isn’t going to cut it.
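For what it’s worth, the revolver odds work out as claimed. A tiny sketch, using only the post’s illustrative chamber counts (not real climate data):

```python
# Chance of survival = fraction of empty chambers.
def survival_odds(empty_chambers, total_chambers):
    return empty_chambers / total_chambers

big = survival_odds(500, 10_000)  # the 10,000-chamber monster: 5% survival
regular = survival_odds(1, 6)     # regular revolver, 5 bullets loaded: ~16.7%

print(f"big revolver:     {big:.3%} chance of survival")
print(f"regular revolver: {regular:.3%} chance of survival")
```

Five percent survival versus roughly sixteen: the supposedly “safe” bet of doing nothing is the worse revolver.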
I could belabor the point some more, but I think you get the gist. When a scientist talks about uncertainty, they have a very clear meaning, and it’s NOT the meaning implied when a reporter asks if they should “discuss the uncertainty”.
Discussing the uncertainty is important, but the amount of discussion on it should probably match the amount of uncertainty.
I will now leave you with a discussion of uncertainty that is the appropriate length, given the data and research available.