According to the proponents of nuclear power, since nuclear plants have a high capacity factor, they are entirely self-sufficient and require no backup. But if they have to purchase $140 million worth of power, that must come from "backup" power sources that are normally "standing by," right?
Whether an element of the grid operates at a capacity factor of 1% (yes, some do) or 85%, it is going to require the same amount of capacity to back it up when it goes down.
If I have a 50MW plant that I count on 20% of the time and it becomes unavailable, I have to find 50MW of backup somewhere.
If I have a 1600MW plant that I count on 85% of the time and it becomes unavailable, I have to find 1600MW of backup somewhere.
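The arithmetic above can be sketched in a few lines. The two plants are the hypothetical ones from the text; the point is that a plant's capacity factor determines its average output, but the backup an operator must find when it trips offline is its full capacity:

```python
# Backup need on a forced outage equals the plant's full capacity,
# regardless of how often the plant runs. Figures are the hypothetical
# examples from the text above.
plants = [
    {"name": "50 MW plant (CF 20%)", "capacity_mw": 50, "capacity_factor": 0.20},
    {"name": "1600 MW plant (CF 85%)", "capacity_mw": 1600, "capacity_factor": 0.85},
]

for p in plants:
    # Average energy delivered differs enormously between the two...
    avg_mw = p["capacity_mw"] * p["capacity_factor"]
    # ...but the replacement need when the plant goes down is the full capacity.
    backup_mw = p["capacity_mw"]
    print(f'{p["name"]}: average output {avg_mw:.0f} MW, '
          f'backup needed on outage {backup_mw} MW')
```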
More important to the grid operator is how much advance notice I have of this need for replacement power. Do I have time to plan and arrange it, or do I have to provide it instantly in order to prevent a massive, cascading blackout?
This is a snip from Amory Lovins in his (unanswered) challenge to Whole Earth Catalog founder Stewart Brand's claims promoting nuclear power:
The “baseload” myth
Brand rejects the most important and successful renewable sources of electricity for one key reason ... he quotes novelist and author Gwyneth Cravens’s definition of “baseload” power as “the minimum amount of proven, consistent, around-the-clock, rain-or-shine power that utilities must supply to meet the demands of their millions of customers.” (Thus it describes a pattern of aggregated customer demand.) Two sentences later, he asserts: “So far [it] comes from only three sources: fossil fuels, hydro, and nuclear.” Two paragraphs later, he explains this dramatic leap from a description of demand to a restriction of supply: “Wind and solar, desirable as they are, aren’t part of baseload because they are intermittent — productive only when the wind blows or the sun shines. If some sort of massive energy storage is devised, then they can participate in baseload; without it, they remain supplemental, usually to gas-fired plants.”
That widely heard claim is fallacious. The manifest need for some amount of steady, reliable power is met by generating plants collectively, not individually. That is, reliability is a statistical attribute of all the plants on the grid combined. If steady 24/7 operation or operation at any desired moment were instead a required capability of each individual power plant, then the grid couldn't meet modern needs, because no kind of power plant is perfectly reliable. For example, in the U.S. during 2003–07, coal capacity was shut down an average of 12.3% of the time (4.2% without warning); nuclear, 10.6% (2.5%); gas-fired, 11.8% (2.8%). Worldwide through 2008, nuclear units were unexpectedly unable to produce 6.4% of their energy output. This inherent intermittency of nuclear and fossil-fueled power plants requires many different plants to back each other up through the grid. This has been utility operators' strategy for reliable supply throughout the industry's history.

Every utility operator knows that power plants provide energy to the grid, which serves load. The simplistic mental model of one plant serving one load is valid only on a very small desert island. The standard remedy for failed plants is other interconnected plants that are working—not “some sort of massive energy storage devised.”

Modern solar and wind power are more technically reliable than coal and nuclear plants; their technical failure rates are typically around 1–2%. However, they are also variable resources because their output depends on local weather, forecastable days in advance with fair accuracy and an hour ahead with impressive precision. But their inherent variability can be managed by proper resource choice, siting, and operation. Weather affects different renewable resources differently; for example, storms are good for small hydro and often for windpower, while flat calm weather is bad for them but good for solar power.
Weather is also different in different places: across a few hundred miles, windpower is scarcely correlated, so weather risks can be diversified. A Stanford study found that properly interconnecting at least ten windfarms can enable an average of one-third of their output to provide firm baseload power. Similarly, within each of the three power pools from Texas to the Canadian border, combining uncorrelated windfarm sites can reduce required wind capacity by more than half for the same firm output, thereby yielding fewer needed turbines, far fewer zero-output hours, and easier integration.
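Lovins's point that reliability is a statistical property of the fleet, not of any one plant, can be illustrated with a toy binomial calculation (this is my sketch, not his model). Assume ten identical, statistically independent units, each forced offline 10% of the time, roughly the 2003–07 U.S. figures he quotes; the numbers are illustrative only:

```python
# Toy fleet-reliability sketch: any single unit is down 10% of the time,
# but the chance that a large share of ten independent units is down
# simultaneously is small. Binomial probabilities, computed exactly.
from math import comb

n, p_down = 10, 0.10  # assumed: 10 units, each unavailable 10% of the time

def prob_at_most_k_down(k):
    """Probability that no more than k of the n units are down at once."""
    return sum(comb(n, i) * p_down**i * (1 - p_down)**(n - i)
               for i in range(k + 1))

print(f"P(a given single unit is up)  = {1 - p_down:.3f}")
print(f"P(at most 2 of 10 are down)   = {prob_at_most_k_down(2):.4f}")
print(f"P(at most 3 of 10 are down)   = {prob_at_most_k_down(3):.4f}")
```

Each unit alone is only 90% available, yet the pooled fleet rarely loses more than a small fraction of its capacity at once; this is the "many plants backing each other up through the grid" that the quote describes.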
Two of the footnotes to that section:
- In utility operators' parlance, “baseload” actually refers to resources with the lowest operating cost, so they are dispatched whenever available. This definition embraces essentially all efficiency and renewables, since their operating cost is below even that of nuclear plants. Economic (“merit-order”) dispatch next uses nuclear, then coal, then gas-fired plants, in order of their increasing operating cost. Utility resource planners use “baseload” to refer to resources of lowest total cost—information that guides acquisition rather than operation. “Baseload” is also often but erroneously applied by laypeople to the big thermal plants that traditionally produce relatively steady output.
- Jim Harding, who led strategic planning for Seattle City Light, says it has no “baseload” resources in Brand’s sense; its assets’ system capacity factor is around 25%, comparable to a mediocre wind turbine’s. Yet retail electricity prices are relatively low and the system is highly reliable. If Brand were right, this would be impossible.
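The merit-order dispatch described in the first footnote can be sketched as follows. The dollar figures and megawatt amounts below are invented placeholders; only their ordering (renewables below nuclear, then coal, then gas) reflects the footnote:

```python
# Minimal merit-order dispatch sketch: stack resources in order of
# increasing operating cost until load is met. Costs and capacities
# are illustrative placeholders, not real data.
resources = [
    # (name, operating cost $/MWh, available MW)
    ("wind/solar", 0, 30),
    ("nuclear",    10, 40),
    ("coal",       25, 50),
    ("gas",        45, 60),
]

def dispatch(load_mw):
    """Return a list of (name, MW taken) serving the load at least cost."""
    schedule = []
    remaining = load_mw
    for name, cost, avail in sorted(resources, key=lambda r: r[1]):
        take = min(avail, remaining)
        if take > 0:
            schedule.append((name, take))
            remaining -= take
    return schedule

print(dispatch(90))  # → [('wind/solar', 30), ('nuclear', 40), ('coal', 20)]
```

With a 90 MW load, the zero-operating-cost renewables run whenever available, nuclear follows, and coal fills the remainder; gas isn't needed. This is "dispatched whenever available" in the footnote's sense.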
It should be noted that the figure Lovins quotes for nuclear's unexpected downtime is pre-Fukushima: it doesn't include either the reactors that actually melted down or the semi-permanent shutdowns in Japan of 44 other reactors due to safety concerns and a lack of trust in the system.