The Problem With Really Complicated Ideas to Make Dirty Energy Clean
I'd meant to link to this the other day, but Eugene Robinson is saying very sensible things about efforts to "capture" the carbon emissions generated by coal plants.
[W]ould the stuff stay down there? The whole point of the exercise, remember, would be to keep the carbon dioxide from getting into the atmosphere, where it would contribute to climate change. The idea is to confine it in specific types of geological formations that would contain it indefinitely. But scientists acknowledge that they can't be certain that the carbon dioxide will never migrate.
Scientists and engineers will have to prove that the possibility of a sudden, catastrophic carbon dioxide release from a storage site is exceedingly remote. I say "catastrophic" because carbon dioxide is heavier than air, and a ground-hugging cloud would suffocate anyone it enveloped. That is what happened in Cameroon in 1986, when naturally occurring carbon dioxide trapped at the bottom of Lake Nyos erupted and killed 1,746 people in nearby villages. Presumably, storage sites would not be located near population centers.
Perhaps more difficult will be proving that the carbon won't seep out slowly, say at a rate of 1 or 2 percent a year. There would be no health risk from a gradual escape, but we'd have gone to great trouble and expense, and the carbon dioxide would have made its way into the atmosphere after all.
This reminds me of a smart point from R. Douglas Arnold's The Logic of Congressional Action. Imagine you just stop building coal power plants. The probability of that reducing carbon emissions from coal is pretty close to 100 percent. Pretty good, right?
But take a more complicated policy like carbon sequestration. For this to work, a number of things need to happen: We need to develop cost-effective sequestration technology. We need legislation that forces coal plants to use it. We need coal plants to implement the technologies effectively. We need the technology to work 15 years down the road. Etc.
Imagine that each of these things has a pretty good chance of happening: 75 percent, say. The probability that all of them happen is then 0.75 x 0.75 x 0.75 x 0.75, which is to say merely 32 percent. That's not so good.
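The arithmetic here is easy to check. A minimal sketch, assuming (as the post does) four independent steps at 75 percent each; the function name is mine:

```python
from functools import reduce

def chain_probability(step_probs):
    """Probability that every step in a chain of independent events succeeds:
    just the product of the individual probabilities."""
    return reduce(lambda acc, p: acc * p, step_probs, 1.0)

# Four steps: technology, legislation, implementation, durability.
steps = [0.75] * 4
print(round(chain_probability(steps), 2))  # 0.32 -- "merely 32 percent"
```

Note how quickly the product decays: each additional "pretty likely" link cuts the chain's overall reliability by a quarter.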
(Photo credit: David Klobucar / Associated Press)
By
Ezra Klein

June 8, 2009; 12:00 PM ET
Categories:
Climate Change
Posted by: tl_houston  June 8, 2009 12:54 PM  Report abuse
VERY BAD MATH! No cookie for Ezra.
You only get to combine probabilities in that way if they represent independent events, analogous to coin tosses or die tosses, wherein the outcome of toss n+1 does not depend on the outcome of toss n.
But the events you are describing are not independent, so you have to use conditional probabilities. For example, suppose I roll 2 dice and look for the probability that they sum to 8. You can use the regular rules for this calculation since we have no conditions imposed. There are 5 ways for this to happen so the probability of rolling some kind of 8 is 5/36.
Now make it a conditional probability: what is the probability of rolling an 8 subject to the condition that die 1 is a 3 and die 2 is covered up? The probability of rolling an 8 in this situation is no longer 5/36. It is now 1/6, since there is only 1 way to roll an 8: die 2 must be a 5.
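Both numbers can be verified by brute-force enumeration; here's a quick sketch (the variable names are mine):

```python
from itertools import product

# Unconditional: all 36 outcomes of two dice, count those summing to 8.
outcomes = list(product(range(1, 7), repeat=2))
eights = [o for o in outcomes if sum(o) == 8]
print(len(eights), "of", len(outcomes))  # 5 of 36

# Conditional: die 1 is known to be a 3, so only die 2 varies.
# Only die 2 = 5 completes the 8, hence 1 of 6.
conditional = [d2 for d2 in range(1, 7) if 3 + d2 == 8]
print(len(conditional), "of", 6)  # 1 of 6
```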
By setting all the conditional probabilities to the same value, despite the outcomes of prior events, you've actually violated Bayes' Theorem. The only way to justify this is to assume that the prior probabilities are basically equal, but that isn't true. The probability that subsequent events in the chain will occur without prior events having occurred first is zero, not 0.75.
To assume that the outcome of a series of conditional probabilities is the same as the product of the unconditional probabilities, or even that they are of the same order of magnitude, is a subtle statistical fallacy. They are related by the formula for total probability but without further information, the relationship is murky at best.
This is sometimes referred to as the Wyatt Earp effect. Wyatt Earp was famous in part for surviving a large number of duels without ever being harmed. Quite improbable, no? In general, though, it makes no statistical sense to ask after a series of events "what is the probability of this?" Probability has no meaning in retrospect. But if you pose a proper question (When Wyatt Earp was a child, what is the probability that SOMEONE, not necessarily Wyatt Earp, would have survived?), the results are much less impressive. In short, the impressiveness of a series of random events is magnified in the human mind after it is all over since all the contrary outcomes have been discarded by the occurrences themselves.
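A toy calculation makes the Earp point concrete (all numbers below are invented for illustration):

```python
# Wyatt Earp effect: one named person surviving many duels is unlikely,
# but SOMEONE out of a large population doing so is nearly certain.
p_survive_one = 0.9     # assumed survival chance per duel
duels = 20              # assumed number of duels
gunfighters = 1000      # assumed population of duelists

p_one_person = p_survive_one ** duels          # a specific person survives all
p_nobody = (1 - p_one_person) ** gunfighters   # nobody at all survives
p_someone = 1 - p_nobody                       # at least one person survives

print(f"one named person: {p_one_person:.3f}")       # ~0.122
print(f"someone out of {gunfighters}: {p_someone:.6f}")  # ~1.000000
```

The improbable-looking survivor is just the one whose name we learned after the fact.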
To combine probabilities in this case, you need to use the law of total probability: the probability of event A is the sum of the joint probabilities of A with all of its conditions.
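The chain rule for dependent events is the right tool here: P(A and B and C) = P(A) x P(B|A) x P(C|A and B). A sketch with invented numbers, just to show how the conditional version diverges from the independent-product version:

```python
# Chain rule for dependent events, with made-up numbers for the
# sequestration chain (these probabilities are assumptions, not estimates).
p_tech = 0.75              # P(workable technology)
p_law_given_tech = 0.95    # P(legislation | the tech works)
p_impl_given_both = 0.80   # P(effective implementation | tech and law)

p_all = p_tech * p_law_given_tech * p_impl_given_both
print(round(p_all, 3))           # 0.57
print(round(0.75 ** 3, 3))       # 0.422 -- the naive independent product
```

With strong dependence between the links, the true joint probability can sit well above (or below) the naive product.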
There is some discussion of this in John Allen Paulos's book Innumeracy, which (along with A Mathematician Reads the Newspaper) should be required reading for anyone pretending to be a journalist.
Posted by: evenadog  June 8, 2009 2:03 PM  Report abuse
You're certainly right, except that I disagree with the premise: I set this up so the probabilities are basically independent. Pivoting off Gene, I was thinking of the fact that the technology is fully effective, not just effective enough. That the legislation has no loopholes, not just that it gets enough votes to pass and sounds sort of good. That the implementation is bulletproof, not shoddy. Etc.
You can very much imagine a scenario in which decent but imperfect technology is fit into poorly written legislation, then implemented in a half-hearted manner, and then many of the units break down 15 years down the road.
And though I can see the argument that these are not fully independent, it is a stylized example. The basic point is simply that chains of likely events are less reliable by far than one likely event.
Posted by: Ezra Klein  June 8, 2009 2:12 PM  Report abuse
I'd call that a straw man argument. Let's take a much simpler example: what is the probability that Atlanta will get a new circumferential freeway? We can divide this into two events for argument's sake: that funding will be appropriated (E1, with probability P1) and that construction will commence (E2, with probability P2).
Well, P1 is probably pretty small (thank god, we have plenty of sprawl already), but what about P2? Is it a likely event or not? Well, if money is NOT appropriated, then no: P2 = 0. But if money is appropriated, then P2 is as close to 1 as makes no odds. The point is that the value of P2 does not depend on the value of P1; it depends on the OUTCOME of E1. So what do I use for P2? 0? 1? 0.5? None of these is correct, because this is not the correct way to combine conditional probabilities. I could choose 0.5 to make a point about probabilities, but that point would not be applicable to this circumstance. In fact, the probability of a freeway appearing is controlled almost entirely by P1. Once money is available, construction is a certainty.
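Plugging rough numbers into the freeway setup (the numbers are mine, chosen only to show the structure):

```python
# P(freeway) = P(E2 and E1) = P(E2|E1) * P(E1): the chain rule,
# not a product of two separately assigned unconditional probabilities.
p_funding = 0.1                 # P1: small, as argued above (assumed)
p_build_given_funding = 1.0     # "as close to 1 as makes no odds"
p_build_given_no_funding = 0.0  # no money, no construction

p_freeway = p_build_given_funding * p_funding
print(p_freeway)  # 0.1 -- controlled entirely by P1
```

Whatever value you pick for P1, the joint probability tracks it exactly, because the second factor is pinned to 1 by the outcome of E1.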
The issue is not the scenario. It is not about the random numbers you substitute for individual probabilities, nor the reasons you choose them. Conditional probabilities do not combine the same way as unconditional ones. The probability that legislation will pass is heavily dependent on the effectiveness of the technology. In fact, given the scale of the problem, what you SHOULD argue is that if the technology is bulletproof, then legislation is a dead certainty. This is pretty closely analogous to the freeway example. The point is that your numbers have to be if-then numbers, not individually assigned, and you have to combine them accordingly. The calculation you've chosen does not inform; it confuses, since it is not relevant to the sequential process you are trying to model.
If you are trying to make a point about basic independent event probability, this is not the appropriate situation in which to make it. If you are trying to make a point about the effectiveness of carbon sequestration, this is not the appropriate way to make it. You've either made a correct argument for the wrong situation or an incorrect argument for the right situation. In any case, there is a serious error.
Posted by: evenadog  June 8, 2009 3:05 PM  Report abuse
EK: "Imagine you just stop building coal power plants. The probability of that reducing carbon emissions from coal is pretty close to 100 percent."
That would almost certainly stop coal emissions from growing, but as long as the existing plants are still in operation, it's not clear why you think emissions will decrease.
Posted by: tomtildrum  June 8, 2009 4:30 PM  Report abuse
We really don't need new coal plants to meet increasing demand for electricity in America. Natural gas is coming online in huge amounts and burns 60% cleaner than coal. This is where the future lies for generating our electricity, until, in 20-30 years, we can convert virtually all generation to nuclear power. See www.energyplanusa.com for in-depth articles on natural gas, coal and energy policy.
Posted by: Rmoen  June 9, 2009 11:29 AM  Report abuse
Most sequestration schemes involve injecting carbon dioxide into depleted natural gas reservoirs. These reservoirs have already demonstrated an ability to retain high pressure methane for millions, or tens or hundreds of millions, of years. Some are old enough to have retained their integrity through the opening of the Atlantic Ocean and the uplift of the Rocky Mountains.
The problem is in capturing and transporting the waste CO2. Once you've done that, getting rid of it isn't a problem.