Thursday, August 28, 2008

Dichroics 101

A friend asked for a rundown on dichroics, which are the coatings we put on optical glass (and sunglasses) to optimize the transmission properties of our lenses. So, here is Dichroics 101. You could also check out Wikipedia.

Visible light is made of photons. Each one has a wavelength. Your eyes are receiving a bazillion photons every second in the daytime. At night, your dark-adapted eyes are capable of noticing the flicker of individual photons, but only sometimes. Visible light varies from 420 nm (blue) to 700 nm (red). A nm is a nanometer. 1000 nm is a micron (micrometer, but nobody but Europeans says that), 1000 microns is a mm (millimeter), 1000 mm is a meter.

So, a 420 nm photon is really small. However, atoms are 0.1 to 10 nm across, and we can lay down layers just a few dozen atoms thick, so we can actually build things that are smaller than the wavelength of light. Get back to that in a sec...

Any given transparent material has an index of refraction, which tells you (among other things) the speed of light in the stuff. Air has an index of refraction of about 1, so the speed of light in air is essentially the same as it is in a vacuum: 186,000 miles per second. Glass has an index of refraction of 1.5 to 1.8 (depending on which kind of glass). Plastics (like polycarbonate, which is most likely what your sunglasses are made of) have an index of refraction around 1.5. So, in plastic, the speed of light is 186,000 miles/second / 1.5 = 124,000 miles/second.

Whenever photons go through a surface, like changing from air to glass or back, some will reflect. The number of photons reflected has to do with the change in index of refraction. The equation is: reflected fraction = (Na - Nb)^2 / (Na + Nb)^2, where Na and Nb are the indices of refraction for the two materials. For example, from air (Na = 1.0) to plastic (Nb = 1.5), the fraction is 0.04, or 4%. When you see yourself in a window (or in someone else's sunglasses), this is what you are looking at. Actually, you see two of these, one off the front surface, and one off the back surface of glass.
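That formula is easy to play with. Here is a minimal Python sketch (the 4% air-to-plastic number falls right out of it):

```python
def reflected_fraction(na, nb):
    """Fraction of power reflected at normal incidence when light
    crosses from index of refraction na to index nb."""
    return (na - nb) ** 2 / (na + nb) ** 2

# Air (1.0) to plastic (1.5): the familiar 4% window reflection.
print(reflected_fraction(1.0, 1.5))   # 0.04
# Air to MgF2 (1.38), which comes up in the coating example below.
print(reflected_fraction(1.0, 1.38))  # about 0.025
```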

Suppose we put a 100 nm thick coating of Magnesium Fluoride (a.k.a. MgF2, index of refraction is 1.38) on a piece of glass (index of refraction 1.5). There will be two reflections: one from the air/MgF2 surface (2.5%), and one from the MgF2/glass surface (0.17%). Because there is a smaller change of index of refraction across each surface, each surface reflects less, and the two together (about 2.7%) already beat the 4% of a bare air/glass surface. But the real trick is in that 100 nm thickness.

Those photons act like waves: they have peaks and troughs. That 100 nm thickness wasn't just any old number, it is 1/4 of the wavelength of green light in MgF2. Green is ordinarily 550 nm, but going through MgF2 it's about 550/1.38 = 398.5 nm. The light reflected from the MgF2/glass surface will have travelled two times 100 nm more than the light reflected from the air/MgF2 surface, or one-half a wavelength more. So, the crests of the wave off the MgF2/glass surface will line up with the troughs of the wave off the air/MgF2 surface. When you add those two together, you get... cancellation.

Or... nearly cancellation. The air/MgF2 reflection is a little stronger than the MgF2/glass reflection, so there will be a little wave left over. Reflections cancel as wave amplitudes, not powers, so the residual reflection at 550 nm is about (sqrt(2.5%) - sqrt(0.17%))^2 = 1.4%. There you have it, the first antireflection coating, as developed for German submarine periscopes in World War II.

Now notice that 420 nm (blue) light will not cancel as well, because the two reflections are 65% of a wavelength offset from one another. The peaks and troughs don't quite line up, so it doesn't cancel as well. The same is true of 650 nm (red) light. So, the reflected light will be purplish: it will have some blue and red, but not so much green.

This is the basis of dichroic filters. You can put several layers of stuff onto a glass or plastic surface, and each additional surface will have a reflection. At each wavelength, you can add up those reflections with their wave offsets to get an overall reflectivity. You end up with a graph which shows how much the thing reflects at each wavelength. By varying the materials deposited and the thicknesses, you can get something which has interesting and useful properties, such as reflecting all the IR (> 670 nm) and UV (< 390 nm).
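That add-up can be sketched for the single MgF2 layer from earlier. This is a minimal Python sketch assuming normal incidence, using the standard single-film interference formula (amplitudes, not powers, are summed with their phase offsets):

```python
import cmath

def single_layer_reflectance(wavelength_nm, n_film=1.38, d_nm=100.0,
                             n_air=1.0, n_glass=1.5):
    """Power reflectance of one thin film on glass at normal incidence.
    The two surface reflections are summed as complex amplitudes with
    the round-trip phase delay through the film."""
    r1 = (n_air - n_film) / (n_air + n_film)      # air/film amplitude
    r2 = (n_film - n_glass) / (n_film + n_glass)  # film/glass amplitude
    phase = cmath.exp(-2j * (2 * cmath.pi * n_film * d_nm / wavelength_nm))
    r = (r1 + r2 * phase) / (1 + r1 * r2 * phase)
    return abs(r) ** 2

for wl in (420, 550, 650):
    print(wl, round(single_layer_reflectance(wl), 4))
# Green (550 nm) cancels best; blue and red reflect more,
# which is why the residual reflection looks purplish.
```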

You can buy such a thing at a good camera store. It's called a B+W 486 filter.

Tuesday, August 19, 2008

Grove's plan

Now I've read Andy Grove's plan (at The American, or via Wired) to fix our energy dependence national security mess. Mr. Grove thinks, as I do, that national security is the most immediate problem we face.

There are flaws in both Grove's and Pickens' plans, but we can take good ideas from both and act on them immediately.

There are two steps to either plan:
  1. Switch our vehicles to a non-petroleum energy form
  2. Make that energy domestically
With Pickens' plan, we switch to natural gas to power cars (step 1), and simultaneously free up domestic methane production by building wind turbines (step 2). If we do step 1 without step 2, we end up switching from imported petroleum to imported natural gas, which suits Pickens quite well because he has lots of gas to sell us.

With Grove's plan, if we do step 1 without step 2, we've got a bunch of electric cars which will plug in at night. The extra demand at night will drive utilities to produce more baseload power -- coal, wind, or nuclear, in that order, all of which is domestic. The advantage of Grove's plan is that step 2 is handled by the market.

The problem with Grove's plan is that converting cars and trucks to electricity instead of natural gas is more costly. The added cost will cripple the plan in two ways:
  1. It is so much more costly that the conversion will happen more slowly, so fewer barrels of imported petroleum will be displaced per year.
  2. Fewer vehicles, in the end, will be converted.
I think there are good ideas in both these plans that we should isolate and exploit immediately.

There are no foreseeable battery technologies that will work on long distance trucks. The obvious substitution here is to move long distance freight by electrified train. We already have most of the rail infrastructure (rights of way are the big issue here), and the nation is already switching some cargoes back to rail. But railroads have been sick for a long time, and we have to fix them before they can help America.

Rail's crushing disadvantage compared to trucking is its capital structure -- the fact that the same companies own the road and the trucks. Long distance trucking works because multiple companies run trucks over the same routes, which are owned and paid for by the U.S. government via tolls on the trucks and taxes on the diesel they burn. We should change rail to use this structure. The rail infrastructure should be electrified in the process, so that the independent trains can choose to run on cheaper domestically produced electricity where it is available. All the technology necessary is already developed and in production.

The good idea in Pickens' plan is to build lots (many tens of thousands) of wind turbines. Wind turbines displace imported natural gas with domestic labor, and that is the most useful part of his plan. If you then convert cars to run on natural gas rather than petroleum, you in turn displace some imported petroleum with some imported natural gas. This second step is a fine thing too, as petroleum costs more than natural gas per unit energy, but the first step is what is most important.

The United States made a terrible mistake during the 1990s by building nearly a terawatt of natural gas-fired turbines. The choice was driven by utilities who know that fuel costs can always be passed to the consumer, so that cheap gas turbines minimized investment and so maximized return on investment. The problem here is that utilities were allowed to make investments with large externalized costs. Market forces do not work to the advantage of most citizens unless the market is set up to internalize the costs that matter to the citizens. Because utilities have no sons and daughters to send to war, they cannot be allowed to make investment decisions that force us to send our sons and daughters to war.

Electric freight trains and wind turbines will not fix America's imported energy problem. Both, however, can be pursued immediately, are solid steps in the right direction that will not have to be reversed, and will make market-affecting changes in our consumption of imported energy. Both options will buy us some time during which we must develop better options.

In the medium term, we can build more nuclear power plants. These take longer than wind turbines to come on line, but the eventual impact can be much larger. The public discussion of our nuclear options is becoming more sensible, and I am beginning to hope that we may be able to begin building this infrastructure again after a two decade hiatus that has cost us terribly.

Nuclear power generation, if pursued in a sensible way, can drive the cost of electricity in the U.S. down below the cost of coal power in China, in a predictable, long-term way, which I think should be an explicit goal of our national energy policy. This will have the effect of "onshoring" basic industries that we have been moving overseas for decades. The onshoring effect is actually more powerful than displacing imported petroleum, because the imports that are replaced for a given amount of investment have higher added value.

Monday, August 18, 2008

Unreliable Wind Power

The electricity that arrives at your home or business is extremely reliable (if you live in the U.S.). The electricity that comes from a windmill is unreliable -- it only comes 1/3 of the time, and you never know exactly when it's going to come or how much you are going to get. If you want to sell wind energy, you have to convert unreliable wind energy into reliable power. How is it done?

This paper explains how, but it's a tough slog. What follows is my summary.

The utility companies already solve a similar problem. Electricity is not easily stored, and in modern grids it is generally not stored at all. So, when you flick on a light switch, the immediate effect is that the power dissipated by all the lights in your neighborhood drops a little to compensate. Within seconds, some power turbine perhaps hundreds of miles away must twist a little harder on its generator shaft to get everyone's line voltage back up, and the extra thermal or hydroelectric power fed into the turbine to get this extra twist will be just about what your light bulb burns, plus the inefficiencies of getting the power from the turbine input to your bulb.

Turbines can only throttle up to 100% of their rated capacity, and they get inefficient when they throttle down too far, so utilities will shut down or spin up units to make larger changes. Changes like these take a long time, so utilities predict what the expected load at any given time will be, hours or days in advance, and schedule units to be on or off line to match the predicted load. Utilities keep some fraction of their turbines at partial output so that they can immediately crank up to match unexpected increases in the load.

The biggest increase in the load that they plan for is usually an unexpected dropout of one of the generators. If a 1 gigawatt generator suddenly goes off line, the grid controllers might respond by taking four other generators from 700 to 950 MW output. It would be impossible for this to happen instantly, but luckily, when most generators go offline, they do so gradually, and if they coast down over the course of seconds, other generators can crank up to match. If a circuit breaker pops or a line parts, or something else happens very quickly, then there is usually a temporary brownout as the line voltage drops down to the point where the loads match the generation. The backup generators usually ramp up within seconds, and many devices (like your computer or TV) can ride through a partial loss of power for a second or so.

So, the bottom line is that utilities predict the change in demand on their generators, and there is some variation from their prediction, and being ready for this variation costs money because some turbines (the spinning reserve) must be run at partial throttle which is less efficient than flat-out.

Just as an aside: consider how valuable it would be for the utility company to be able to instantly shut down your air conditioner for just a few minutes. This ability would act as part of their spinning reserve. During the summer, air conditioners are a substantial fraction of the total power burned. I'm pretty sure that for most of the U.S., the ability to shut down even a fraction of the air conditioners for 10 minutes would cover the entire spinning reserve requirement. That could save a lot of money, and PG&E (my local utility in California) is experimenting with just that through their Underfrequency Relay Option on the Base Interruptible Program. Anyway, back to the summary.

Wind farms produce electricity whenever the wind blows. Wind speeds can be predicted, and there is always variation from the prediction. When a wind farm is connected to the grid, the total variation in load on the load-following turbines is larger than without the wind farm, and so more turbines must be run in load-following mode, and these incur a cost associated with wind power that is real but not easy to predict before the wind farm is built.

For instance, part of Denmark's grid is connected to Norway's grid. Norway gets most of its power from hydroelectric plants. Hydroelectric plants are very good at load following and so they are usually the first choice of plant to handle variation from plan. Because Norway has lots of hydroelectric plants, and because it has high-throughput connections to Denmark, Denmark can hook up fairly high powered wind farms to its grid and incur relatively low costs for standby power.

Now that utilities are connecting large amounts of wind power to their grids, they are getting more precise numbers on the costs of doing so.
  • Wind works well where you have year-round high winds near hydroelectric dams.
  • The short-term variation from wind farms is usually quite small, since turbines are small (a few megawatts) and don't all shut off at the same time.
  • Big storms give the worst case variation, since when a wind turbine goes too fast it feathers its blades and shuts down, going from full output to nothing, often in synchrony with other wind turbines around it.
  • Wind farms spread over large geographic areas have less total variation (the wind doesn't die everywhere at the same time). Ideally the spinning reserve would thus scale up slower than the total windpower connected, making marginal wind power less expensive. Unfortunately Denmark is too small to see this effect, and it would require very high throughput long distance power distribution.
Wind turbine manufacturers are working on making their turbines play better with the grid. Variable-frequency turbines go offline more slowly by generating power from their blades as they spin down after losing wind. Many manufacturers are shipping wind turbines with extra-large blades, so that the turbine produces a larger fraction of its output more of the time. This reduces the cost of variability in exchange for an increase in the capital cost of the turbine, which is a tradeoff made possible by an understanding of the cost of that variability.

Wind turbines appear to work economically (when the utilities are prodded by a production tax credit, which I support). As more turbines are installed, the best windy areas are used up and the least expensive spinning reserve is committed. On the other hand, wind turbine costs might be coming down some day (maybe -- materials costs are going up), and the need for spinning reserve is decreasing. It's a fairly tense balance.

Personally, I'm happy to see more wind turbines getting installed, since it's domestically produced, mostly-carbon-free, low marginal cost power, and let's have more of that. There is going to be a limit to how much wind power can be installed, but we're nowhere near that limit yet.

At the same time, it's sobering to consider that the United States once built almost 100 nuclear plants in about two decades, bringing new power online at the average rate of 5 gigawatts a year, at a time when our economy was one-third to one-half the size it is now, in real terms. At its peak, we were building much faster than that. This economic explosion was driven by the business fundamentals as much as it was by overexcited businessmen jumping on the latest bandwagon. And the fundamentals were and are that if we decide on a reactor design (like Palo Verde), we can build and operate them cheaper than coal plants.

To match this performance, and get to 20% of the U.S. grid in 20 years, the wind industry would need to install about 300 gigawatts of nameplate capacity. That would require getting to a peak of 30 gigawatts a year, up from 4 gigawatts in 2007, which is 11 years of sustained 20% growth. The fundamentals for wind are not as good as they were for nuclear in the 1960s. It won't happen without a major breakthrough.
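The arithmetic behind those figures, as a sketch (the 450 GW average grid figure and a 30% wind capacity factor are the assumptions carried over from the rest of these posts):

```python
avg_grid_gw = 450          # average U.S. generation, from the text
capacity_factor = 0.30     # assumed typical wind capacity factor
target_share = 0.20

nameplate_needed = avg_grid_gw * target_share / capacity_factor
print(nameplate_needed)    # 300 GW of nameplate capacity

# Growing from 4 GW/year (2007) at 20% per year:
print(round(4 * 1.2 ** 11, 1))   # ~29.7 GW/year after 11 years
```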

Monday, August 04, 2008

SpaceX launch 3 failure

I watched the YouTube video, which shows launch through 6:16 indicated (T+2:11). Two comments:
  • Can we please get a heater on the on-rocket camera lens cover? It's hard to see much through the condensation.
  • It looks like the rocket has a 5 second roll oscillation that starts as soon as they go supersonic. I don't know if this is fuel slosh, like on the second stage of the last launch, but it doesn't look like other rocket launches to me. Any kind of wobble before separation could cause the first stage to ding the second stage, as happened on the last launch.
I really hope their next attempt goes better. This launch wasn't obviously progress.

Constraints to wind power

Jesse Ausubel has a pretty good essay which describes what he thinks the future of power production will look like. It's a somewhat rosy picture, and although I'm also optimistic (but not for the next few years), I disagree on a couple of points.

He thinks, as I do, that we've got to get away from coal. I don't follow his reasoning for why he thinks we have to get away from coal. It seems he has identified a long term trend towards fuels with less carbon and more hydrogen, and he thinks we should make choices to perpetuate that trend. As near as I can tell, he's skipped the part about why the trend is a good thing. Perhaps he thinks consumers like lower carbon fuels because they tend to burn with fewer combustion byproducts, but he doesn't back this claim up with any market analysis.

I think we as a country need to stop burning coal because
  • we import a lot of oil to burn coal (we spend almost as much on oil to move the coal as on the coal itself), so that the price of coal-fired power is quite sensitive to the cost of oil,
  • it is politically possible to install lots more windpower, but coal is seeing opposition, and it is vital to our economic health to get a lot more electric supply,
  • wind power is inelastic supply, whereas coal power is elastic. That is, a coal plant will shut down if the price of electricity falls below its operating costs, but a wind turbine costs almost nothing to run and will keep generating through a larger swing in electricity prices, which will make our electricity supply more predictable,
  • and finally and perhaps most importantly, because climate change matters.
My biggest point of disagreement is with Jesse's assertion that windpower is impractical due to land use constraints. Other, perhaps clearer-thinking people have made this same point. Jesse makes a very sobering calculation: he figures a wind farm produces 1.2 watts per square meter, average. To produce all of the U.S. grid's 450 gigawatts (average), you'd need a lot of land. Jesse calculates 780,000 square kilometers. The area of the U.S., for reference, is 9.16 million square kilometers, with 1.75 million square kilometers of cultivated cropland. I don't get quite as large a number as Jesse, but we'll take his 780k km^2 for now.
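The land calculation, redone at Jesse's 1.2 watts per square meter (this reproduces the smaller number I get, about 375,000 km^2, rather than his 780,000):

```python
avg_demand_w = 450e9   # 450 GW average U.S. grid demand, from the text
power_density = 1.2    # W/m^2, Jesse's wind farm figure

area_km2 = avg_demand_w / power_density / 1e6  # 1 km^2 = 1e6 m^2
print(area_km2)        # 375,000 km^2
```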

He figures that's just too much land. But this argument is trite. I'll skip over the point that farmland with wind turbines is still farmed land, and instead focus on a more basic question: How much is too much? I think too much is when the next wind turbine to be installed is projected to make no money. That could be all the farmland plus a lot of offshore turbines, or it could be just a few places in North Dakota. It won't be decided by people getting scared of erecting some more infrastructure on 44% of our existing cropland. Farms in the Netherlands in the 1800s were dotted with windmills, because that's what drove the pumps to keep the water out. Farms in the U.S. in the late 1800s were dotted with windmills, with parts shipped at enormous expense across the continent, because that's what pumped the irrigation water wells. Modern farms aren't currently dotted with wind turbines because they've been using oil instead.

Jesse's argument is also trite because it ignores the huge variation in windiness around the U.S. In North Dakota, the entire state is class 4 or above. That means the power available at 50 meters above the ground is 400-500 watts/meter^2. Even during the summer doldrums, the average power available is 300-400 watts/meter^2.

Jesse's 1.2 watts/meter^2 number comes from a wind farm in Lamar, Colorado. That wind farm has 108 1.5 MW turbines spread over an 11,840 acre area. Multiply by a 30% capacity factor, and you get 1.01 watts/meter^2. (I'm not sure how he got the extra 20%.) Why is this number so low?
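For reference, here is the 1.01 watts/meter^2 figure worked through (the 30% capacity factor is the assumption stated above):

```python
turbines = 108
rated_w = 1.5e6            # 1.5 MW per turbine
capacity_factor = 0.30     # assumed, per the text
acres = 11840
m2_per_acre = 4046.86

avg_watts = turbines * rated_w * capacity_factor
density = avg_watts / (acres * m2_per_acre)
print(round(density, 2))   # ~1.01 W/m^2
```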

It's economics. The company that owns the wind turbines pays the company that owns the land on which the turbine is sited approximately $3000 to $6000 per year per turbine. The net present value of that payment stream is $60,000 to $120,000. The turbine costs $1,500,000, which is a lot more. Spacing the turbines farther apart slightly increases the power from each turbine, at small increases in royalty payments and road and cable construction costs. If land scarcity ever becomes an issue for wind farmers, I would expect $ per watt and watts per km^2 to go up. Note that $/watt may go up slightly, while watts per km^2 may go up a lot.

Consider that the first big wind farm, on the Altamont Pass, has a power density of 0.86 watts/m^2, which is lower than Lamar's density. If you follow that link, you'll note that wind farms vary from 0.24 watts/m^2 (Pierce County, N.D.) to 5.3 watts/m^2 (Braes of Doune, Scotland). I think land prices, more than turbine capability, are driving the power density of these farms.

Note that the wind power map above quotes wind at 10 and 50 meters above the ground. Back when the Department of Energy began collecting data for these maps, those were considered the likely bounds of practically sized wind turbines. However, the Lamar turbine towers are 70 m tall. It turns out that the tower costs are mostly just steel, and the higher up you go, the faster the wind blows. After the industry got experience with the costs of siting, permitting, building, bird strikes, aesthetics, and so forth, it turned out worthwhile to spend more on steel in the tower and concrete in the foundation. As a result, watts per km^2 has gone up.

Is there a limit? Placing turbines closer together can collect more wind energy, but fundamentally most wind power is still being dissipated as turbulence and then heat higher up in the atmosphere. Bigger wind turbines reach farther up to capture more energy. It is hard for me to imagine that ground-based wind turbines are going to get substantially taller than they are now, and so I do not expect the average power yield to increase much beyond, say, 2 or 3 watts/m^2 average. 2 watts/m^2 across all of North and South Dakota would yield 750 gigawatts, which is why you hear wind advocates claiming that the Dakotas can power the rest of the U.S. They could, if you could transport the electricity to market.
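The 750 gigawatt figure checks out against the two states' land areas (the areas below are my own approximate numbers, not from the text):

```python
north_dakota_km2 = 183_000   # approximate land area
south_dakota_km2 = 200_000   # approximate land area
density_w_per_m2 = 2.0       # the assumed long-run average yield

total_m2 = (north_dakota_km2 + south_dakota_km2) * 1e6
print(total_m2 * density_w_per_m2 / 1e9)   # ~766 GW, roughly the 750 GW quoted
```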

Finally, I doubt very much that, even if windpower is wildly successful, it will ever account for anything like 100% of the U.S. grid's production. If many coal plants are forced out of production by lower cost wind plants, I would expect that some very efficient mine-mouth plants will remain. I will be astonished (and pleased) if wind ever produces half the U.S. capacity. If that ever happens, wind turbines will be a familiar sight, but not an overwhelming use of land.

Jesse also complains that wind turbines take significantly more steel and concrete than nuclear powerplants. Obviously the steel and concrete are factored into the current prices of turbines, so it's already part of the price comparisons being made. There are two future risks to large use of concrete and steel, however:
  • Wind turbine prices in the future could be more closely tied to raw material prices (which in turn depend on the cost of energy) than on the price of labor (which depends on the state of the economy). This question resolves to whether future wind turbine prices are more sensitive to the cost of imported energy than electricity from coal is. Coal fired electricity is fairly sensitive to oil prices, so I doubt this is a problem.
  • A large bump in wind turbine construction could use so much concrete and steel that it would distort the markets and cause large price increases.
The second issue got me to pull out the calculator again. Here are Jesse's numbers (actually Per Peterson's numbers), in the context of the production necessary to build a 250 GWe average windpower grid (about half U.S. electric consumption):
  • Steel: 460 metric tons per MWe. The U.S. produces about 90 million metric tons of steel every year. Over the 30 years it would take to build a new US grid, wind turbines would require 1.3 years' worth of production.
  • Concrete: 870 cubic meters per MWe. The U.S. ready-mix industry produces about 350 million cubic meters a year, so we'd need 0.6 years' worth of concrete production.
These constitute a nice bump to domestic production, but are significantly less than ordinary year-to-year variation.
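The materials arithmetic, as a sketch:

```python
wind_mwe = 250_000               # 250 GWe of average wind output
steel_t_per_mwe = 460            # Per Peterson's figure
concrete_m3_per_mwe = 870        # Per Peterson's figure

us_steel_t_per_year = 90e6       # annual U.S. steel production
us_concrete_m3_per_year = 350e6  # annual U.S. ready-mix production

steel_years = wind_mwe * steel_t_per_mwe / us_steel_t_per_year
concrete_years = wind_mwe * concrete_m3_per_mwe / us_concrete_m3_per_year
print(round(steel_years, 1))     # ~1.3 years of U.S. steel production
print(round(concrete_years, 1))  # ~0.6 years of concrete production
```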

The bottom line: if the price is right (or even close), let's have all the wind turbines we can build, because it really could help with our foreign trade deficit, economic sensitivity to energy prices, and global warming.

Tuesday, July 22, 2008

Dumping Quicklime into the Oceans

Tim Kruger at Cquestrate has an idea for sequestering large amounts of CO2: dump quicklime (CaO) in the ocean.

The basic idea is to convert limestone (CaCO3) and CO2 into calcium bicarbonate (Ca(HCO3)2).

CaCO3 + energy -> CaO + CO2 Burn limestone into quicklime
CaO + H2O -> Ca(OH)2 Dissolve quicklime in ocean to make calcium hydroxide
Ca(OH)2 + 2CO2 -> Ca(HCO3)2 Calcium hydroxide absorbs CO2 to make calcium bicarbonate

Net:
CaCO3 + H2O + CO2 + 178 kJ/mol -> Ca(HCO3)2

The problem is the amount of energy required. Let's say it comes from coal. Typically, you can get 30 MJ/kg out of coal. To get your 178 kJ above, you'll produce a half mol of CO2 just burning coal, assuming perfect efficiency. That's half your benefit gone right there.
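The half-mol figure, worked through (assuming for simplicity that coal is pure carbon, so every 12 grams burned makes one mole of CO2):

```python
coal_kj_per_kg = 30_000.0        # 30 MJ/kg, from the text
carbon_g_per_mol = 12.0
calcination_kj_per_mol = 178.0   # energy to calcine one mole of CaCO3

coal_g = calcination_kj_per_mol / coal_kj_per_kg * 1000  # grams of coal burned
co2_mol = coal_g / carbon_g_per_mol
print(round(co2_mol, 2))         # ~0.49 mol CO2 emitted per mol CaCO3 calcined
```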

But, it's a high temperature reaction (840 C). That means you have to get the reactants (calcium carbonate, coal and coal oxidizer, e.g. air) up to that temperature, react them, then drop the reaction products back down to normal temperature. To get perfect efficiency, all of the heat from the cooling products has to be transferred to the reactants. There is going to be some loss.

Let's say you lose 25% of the coal heat, and 75% goes to making quicklime. Then, for every 2 kg of coal burned, you will eventually absorb the CO2 that was produced by burning another kg of coal somewhere else.

Bottom line: we'd have to triple the rate at which we burn coal to get carbon neutral with this scheme. That's not practical. It'll get better if we use natural gas or oil, but it won't change the basic calculation that we'd have to multiply our existing consumption of fossil fuels to get carbon neutral.

Now, if someone wants to tell me about a scheme in which limestone is burned in a solar furnace to make cement, I'm all ears. CO2 sequestration from such cement manufacture makes more sense than it does from coal-fired powerplants, because limestone burning (no air) releases pure CO2, whereas coal burning releases CO2 mixed with lots of nitrogen from the air. However, there are lots of other problems.

Sigh. We're not getting out of this mess easily.

Tuesday, July 15, 2008

The Pickens Plan

Check out the guy's website, if you haven't already. There is not a lot of meat there. Basically, the idea is that if we build enough wind turbines to provide 20% of our electricity, we can reduce the amount of natural gas that we burn to make electricity. This natural gas can be used to power special new cars, which will reduce our imports of petroleum.

Mr. Pickens' chief aim is to reduce U.S. petroleum imports. That's great, because that's the energy policy issue I care about most, too. However, I see two problems with his plan:
  1. As things stand now, large fast changes in wind turbine output will have to be accommodated by throttling natural gas turbines. Gas turbines cannot throttle down to zero power efficiently. So, even when the wind is blowing, a large amount of power will have to come from gas turbines running at partial throttle ready to take over if the wind cuts out. If wind is supplying 20% of our domestic power, these partial-load gas turbines will have to supply some similarly large amount, and as a result there may not be a large amount of gas actually saved.
  2. I don't foresee a switch to compressed natural gas burning cars. I suspect it would be cheaper and have a larger, more immediate impact to convert the natural gas (and some coal) into gasoline in a refinery, and then feed that into the existing transportation system.
I have two humble suggestions for Mr. Pickens, or energy policymakers.

1. Switch home heating to electric heat pumps.

In 2006, 5 billion gallons of distillate fuel oil were sold to residential users, almost all of it used to heat their homes. At 42 gallons per barrel, and ignoring refinery gain, that is about 119 million barrels, or about 2.6% of the 4.5 billion barrels of oil imported that year.

Nearly all the houses heated by distillate fuel oil have grid electricity. These houses can be upgraded to air-source heat pumps for a few thousand dollars each. Electricity can come from coal or natural gas, either one of which is better than petroleum. The economics are probably already there for the switch, so some public education and low-cost financing should push homeowners to embrace heat pumps en masse. This can happen a lot sooner than moving the U.S. car fleet to compressed natural gas.

This switch can reduce our oil import bill without requiring the first step of lots of wind turbines. Maybe I'm just nitpicking, but roughly $16 billion per year (119 million barrels at 2008 prices of around $133 per barrel) seems like an interesting amount of money.

2. Make air conditioners work on intermittent electricity

This is also known as "Direct Load Control" or "Demand-side Management".

One of the problems with wind energy is that it's intermittent. Increasing the amount of wind generation in the national grid will increase the variation in load that the other generators must accommodate. This will cost money. It will cost less money if the other generators have 10 or 15 minutes to accommodate variation.

Air conditioners and heat pumps naturally store energy. It takes time to cool or heat a building. Usually, the pump cycles on or off every few minutes. If the utility has a fast way to shut down large numbers of compressors for a few minutes, it can filter out much of the short-term variation in load and supply. Instead of throttling gas turbines from 50% to 100%, a few minutes' notice gives the utility time to turn on gas turbines -- from 0% to 100%. That means that the 50% rated capacity that was otherwise being produced by a gas turbine can be produced by a coal-fired turbine instead, which is much cheaper.

This change is a good idea regardless of whether a massive wind turbine build happens, because it will allow utilities to use less natural gas and more coal. That may alarm some folks. Some may see a hidden agenda here. I think if the same bill in Congress mandates Direct Load Control on HVAC devices, and guarantees a production tax credit for all non-carbon domestic sources for a decade, that should assure doubters and put some real fire in the market.

Right now, hydroelectric turbines are the cheapest load-following generation around. They produce just 7.1% of the electricity in the United States (2006). Unfortunately, all of this load following capacity is already used.

For comparison, HVAC uses more than 29% (page 44 here, plus this, both from the EIA) of our generated electricity. Instantaneous control over this much load would be sufficient to accommodate any amount of wind power that we care to build. Of course, the utilities (really the system operators) can't control HVAC, yet. I don't think this is a problem, because we don't have 450 gigawatts of wind turbines yet either.

I suspect the average lifetime of HVAC equipment is around 20 years. If the government mandated that all HVAC equipment sold after, say, 2009 had Direct Load Control features, then we'd see about 15 new gigawatts of Direct Load Control every year. There is little danger of us building wind turbines faster than that in the near future.
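The 15 GW/year replacement rate follows from fleet size divided by lifetime; the ~300 GW of installed HVAC capacity below is my inference from the post's numbers, not a figure it states:

```python
installed_hvac_gw = 300   # assumed national installed HVAC capacity (inferred)
lifetime_years = 20       # assumed average equipment lifetime, from the post

dlc_gw_per_year = installed_hvac_gw / lifetime_years
print(f"{dlc_gw_per_year:.0f} GW/year of new DLC-capable load")  # 15
```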

Wednesday, July 09, 2008

Burning coal is burning oil

I found some numbers for the oil cost of burning coal.

Freight trains in the United States burn 1 gallon of diesel to move a ton of freight 436 miles.

Average distance coal travels in US: 628 miles from mine mouth to powerplant. At $4.03/gallon, that's $5.80 for the diesel to move a ton of coal from the mine mouth to the powerplant, on average. Wyoming coal costs $9 at the mine mouth. So, electric producers pay almost as much for the diesel to move the coal as for the coal itself. Since marginal petroleum is imported, it's fair to say that coal is not entirely a domestic fuel.
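The diesel figure checks out:

```python
ton_miles_per_gallon = 436   # freight rail efficiency, from the post
haul_miles = 628             # average mine-to-plant distance, US
diesel_price = 4.03          # dollars per gallon (2008)

gallons_per_ton = haul_miles / ton_miles_per_gallon
cost_per_ton = gallons_per_ton * diesel_price
print(f"${cost_per_ton:.2f} of diesel per ton of coal")  # $5.80
```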

The average powerplant cost for coal in the U.S. in 2006 was $34.26/ton. That's because coal mined outside of the Powder River basin in Wyoming costs a lot more to dig out -- the average mine-mouth price across the U.S. in 2006 was $25.16/ton. The difference is $9.10/ton, which is the cost of transport. The cost of diesel was a bit lower in 2006, but it looks like around half the transport cost is the diesel.

If the coal is 22 MJ/kg, and the plant is 35% efficient, then for each kWh at the powerplant you spend on average 1.8 cents for the coal. Just the fuel cost of the coal plant is more than the total operating cost of the Palo Verde nuclear powerplant, per kWh. This result is entirely independent of subsidies or clean coal. The black stuff is apparently just really expensive.
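Working that through as a sketch: with the 2006 average delivered price of $34.26/ton this comes out nearer 1.6 cents, so the post's ~1.8 cents presumably reflects the higher coal prices at the time of writing.

```python
MJ_PER_KWH = 3.6                # energy in one kilowatt-hour

coal_mj_per_kg = 22             # heating value of the coal, from the post
efficiency = 0.35               # plant thermal efficiency, from the post
dollars_per_kg = 34.26 / 1000   # 2006 average delivered price, per kg

thermal_mj = MJ_PER_KWH / efficiency     # heat needed per kWh, ~10.3 MJ
kg_coal = thermal_mj / coal_mj_per_kg    # ~0.47 kg of coal per kWh
cents_per_kwh = kg_coal * dollars_per_kg * 100
print(f"{cents_per_kwh:.1f} cents of coal per kWh")   # ~1.6
```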

A while back, I snarkily suggested that mine mouth coal powerplants were a way to keep the pollution away from rich people. Looks like I was wrong:
  • Transporting a kWh of electricity 1000 miles increases the cost by 19%.
  • Transporting the coal necessary to make that electricity 1000 miles costs $14.49/ton, assuming cost is linear with distance. That's a 58% increase in the cost of the fuel. Assuming the fuel cost is 70% of the cost of producing electricity, that's a 40% increase in the cost of the electricity.
  • 4000 miles (across the continent) by electricity: increase cost by 107%.
  • 4000 miles by coal train: 160% increase.
What about the extra carbon? Transporting 1000 miles as electricity means you must make 8.7% more electricity to cover what gets lost in the wires, which produces 8.7% more CO2. Transporting 1000 miles by coal train burns 6.3 kg of carbon in the diesel to deliver perhaps 800 kg of carbon, which increases the total carbon released by 0.8%. Clearly the diesel locomotive is the lower-carbon, if much more expensive, alternative.
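The comparison, assuming ~2.77 kg of carbon per gallon of diesel and coal that is ~80% carbon by mass (both assumptions of mine, chosen to match the figures in the text):

```python
miles = 1000
ton_miles_per_gallon = 436      # freight rail efficiency
kg_carbon_per_gallon = 2.77     # diesel carbon content, assumed
coal_carbon_fraction = 0.80     # carbon fraction of coal, assumed

diesel_carbon = miles / ton_miles_per_gallon * kg_carbon_per_gallon  # ~6.3 kg
coal_carbon = 1000 * coal_carbon_fraction    # kg of carbon in a metric ton of coal
rail_penalty = diesel_carbon / coal_carbon   # ~0.8% extra carbon
wire_penalty = 0.087                         # 8.7% line loss, from the text

print(f"rail: +{rail_penalty:.1%}, wires: +{wire_penalty:.1%}")
```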

Average distance coal travels in China: 230 miles. They're burning a lot less diesel to take advantage of their domestic coal.

Sunday, June 29, 2008

Keep women away from stairs!

I have summarized for your convenience the top 7 consumer killers in the United States in the year 2001, and swimming pools, for comparison. I think the conclusion here is inescapable: women must be kept away from stairs. This is a significant issue for me because I live in a house with two stories and a basement, with my wife and three daughters. Although the statistics presented are not specific, it does appear that the problem is largely with older women, so we'll definitely have my mother-in-law stay in the downstairs bedroom.

As I write this my two older girls are directly behind me, messing around in the crib that we generally use for our youngest. A quick check shows that I should escort them outside where they can safely fool around in traffic on some ATVs!

Total deaths | Male accidents | Female accidents | Category
202,104 | 767,142 | 1,274,004 | Stairs, Ramps, Landings, Floors
45,964 | 248,445 | 291,530 | Beds, Mattresses, Pillows
30,271 | 203,930 | 252,960 | Chairs, Sofas, Sofa beds
25,023 | 125,312 | 168,238 | Bathroom structures and fixtures
24,750 | 414,008 | 151,660 | Bicycles
21,239 | 169,834 | 38,022 | ATVs, Mopeds, Minibikes
19,085 | 150,667 | 72,498 | Ladders, Stools
5,322 | 88,864 | 72,894 | Swimming Pools, Equipment

Wednesday, June 25, 2008

CO2 sequestration -- size of the kill zone

Sometimes, the underground reservoirs that store natural gas explode. Drilling wells into them makes this more likely. When wells explode, the gas generally ignites, making a spectacular flame that can be seen for miles. Aside from the loss of valuable fuel and equipment damage, well explosions generally aren't too big a problem for people living nearby.

One less noteworthy effect of a well explosion is that the CO2 generated from the combustion of the methane is carried high up into the atmosphere by the heat of combustion, where it is mixed by high-altitude winds (routinely 100 MPH).

One plan for CO2 sequestration from coal-fired powerplants is to inject the CO2 into old, empty gas wells. Like the methane, the CO2 is in a supercritical state in the well -- not so much a liquid as a very dense high pressure gas.

The difference between CO2 and CH4 comes when the well explodes. CO2 does not start a fire. Instead, it expands, and cools, and the cold CO2 will flow with the wind, against the ground, eventually dissipating.

A 1 GW (electrical) coal-fired powerplant will burn 2.2 GW (thermal) of coal (because it's about 45% efficient). That's about 7000 metric tons every day. It will produce 4.7 cubic kilometers of carbon dioxide per year, at standard temperature and pressure. That CO2 is fatal to mammals at concentrations greater than 4%.

So, if a sequestration field explodes after 10 years of sequestering the output from a 1 GW coal plant, it will create an invisible blob of CO2 that will be at least 7 km across before it dissipates to the point of being nonlethal.
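To put a number on "at least 7 km across": modeling the release as a ground-hugging hemisphere (my simplification), the undiluted blob is already several kilometers wide, and by the time it has diluted to the 4% threshold it is much wider:

```python
from math import pi

def hemisphere_diameter_km(volume_km3):
    """Diameter of a ground-hugging hemisphere: V = (2/3) * pi * r^3."""
    radius = (3 * volume_km3 / (2 * pi)) ** (1 / 3)
    return 2 * radius

pure = 4.7 * 10        # km^3: 10 years of CO2 output at STP, from the text
lethal = pure / 0.04   # volume once diluted to the 4% lethal threshold

print(f"pure blob: {hemisphere_diameter_km(pure):.1f} km across")
print(f"at 4%: {hemisphere_diameter_km(lethal):.1f} km across")
```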

Think about this thing for a bit. CO2 inhalation is fatal within a couple of minutes, and I suspect it is disabling well before that. Who is going to detect this blob of gas before being overcome? You cannot see it. You cannot run from it. You cannot stay indoors to escape. You cannot start your car to drive away from it. As the wind wafts it across the scenery, it kills every animal in its path. It could go for 50 kilometers or more before wind shear mixes it with enough air to become safe.

Not in my back yard, if you please.

Monday, June 09, 2008

Anya's Gift


For Anya's 6th birthday, we had a huge party. Over 50 people came. It was a blast.

We've been trying to reduce the accumulation of toys in our house, and presents from that number of people was going to be a problem. So, we told people not to bring presents, or if they did, bring something suitable for the Ronald McDonald house, which is a temporary home for the families of kids undergoing serious treatments at Stanford Hospital.

If anything, the haul got better (oy! consumerism). Here is Anya delivering her presents to the charity.

Monday, June 02, 2008

Discovery Launch


I just got back from watching the Discovery launch. My boss, Ed Lu (former 3-time astronaut, second from left), hosted us, which really made the experience for me because he was able to introduce us to lots of folks. Every time we walked into a restaurant, and every 5 minutes while we were at Kennedy Space Center, someone would smile and come over to talk with Ed. NASA doesn't pay well and most folks don't get to try wacky things like we do at Google, but they seem to have great interpersonal relationships. It's heartwarming to see.



On launch day, we were 3 miles from the pad at the media site. This is as close as you can get. We had a lot of waiting around to do. Here is a cherry spitting contest.



I know there is a great deal of speculation out there about whether hacking on camera hardware at Google makes one a babe magnet. While such questions are only academic for me personally, I can tell you that getting out in the midst of a bunch of media types with some very customized photographic hardware attracts all sorts of attention. I don't actually know who this person is, but I think we can all agree she's gorgeous, and she was very interested in the camera hardware and what Google was doing with it.



From our vantage point 3 miles away, the shuttle stack was just a little bigger than the full moon, which meant that the flame coming out the back was about that size too. There have been some comparisons to the shuttle exhaust being as bright as day....

Let me put that myth to rest. After two years of designing outdoor cameras, I can tell you that just about nothing is as bright as the sun. From our vantage point the plume had more angular size than the sun -- maybe 400 feet long by 100 feet wide, viewed from 3 miles, is 1.5 by 0.5 degrees.  The sun is 0.5 degrees across.  But the Shuttle plume is not as hot as the sun -- 2500 K at most, compared to 6000 K for the sun.  Brightness increases as the 4th power of the temperature, so the Sun's delivered power per square meter is something like 11x larger.  Furthermore, most of the light coming from the Shuttle is in the deep infrared where you can't see it, compared to the Sun's peak right at yellow.  So my guess is that the shuttle was lighting us up with about 9,000 lux of illumination.  That's twice as bright as an operating room, and way brighter than standard office brightness (400 lux).  But it's just nothing like the 100,000 lux that you get outside in bright sunlight.  Nobody's going to get a suntan from the shuttle.  (Yes, the shuttle flame reflects off the exhaust plume, but the sun reflects off clouds, which are much bigger, so there is no relative gain there.)
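That estimate can be reproduced in a few lines. Treating both the plume and the sun as rough rectangles of the stated angular sizes is my simplification:

```python
# Stefan-Boltzmann: radiated power per unit area scales as T^4.
t_sun, t_plume = 6000, 2500                # kelvin
per_area = (t_sun / t_plume) ** 4          # sun emits ~33x more W/m^2

plume_deg2 = 1.5 * 0.5                     # plume angular area, rough rectangle
sun_deg2 = 0.5 * 0.5                       # sun, crudely squared off
angular = plume_deg2 / sun_deg2            # plume covers ~3x more sky

sun_advantage = per_area / angular         # ~11x more delivered light
shuttle_lux = 100_000 / sun_advantage      # vs ~100,000 lux of full sunlight
print(f"sun advantage ~{sun_advantage:.0f}x, shuttle ~{shuttle_lux:.0f} lux")
```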

Anyway, back to the people we got to meet. Here we are at lunch in the KSC cafeteria, the day before the launch. That guy two to my right is... named at the bottom of the blog. Have a guess. He had a really neat retro electronic watch and talked about how much he likes his Segway. Picture was shot by Jim Dutton, one of the F-22 test pilots who is now an unflown astronaut.


Here's a terrible picture of Scott Horowitz (former #2 at NASA, the guy who set the direction for the Ares rockets and Orion capsule) talking with Ed. The two were talking about their airplanes, a subject that gets both of them fairly animated ("I love my airplane. It's trying to kill me.")  Sadly, Ed's plane was destroyed later this year by Hurricane Gustav, while sitting in a supposedly hurricane-proof hangar.

Sorry about the quality, it was incredibly crowded and Ed and Scott weren't posing. This was on the day of the launch. Scott came out and looked at our Street View vehicle, then narrated the launch for us. Scott is a former 4-time astronaut and has a great deadpan delivery ("okay we just burned off a million pounds of propellant"); he's probably done it a hundred times.

Here's Mike Foale, who Ed has closed the hatch on twice (that means Mike was in the crew after Ed at the ISS twice).


I enjoyed meeting the people and looking at the hardware quite a bit more than the spectacle of the actual launch itself. Basically, the Shuttle makes a big white cloud, climbs out, loud noises ensue, and within two minutes you can just make out the SRB separation with your unaided eyes, and it's gone. The Indy 500, for instance, is louder, and more interesting because there are always going to be crashes and various anomalies, which are not usually injurious and therefore lots of fun for the crowd. After meeting all those competent people who are working so hard to thread this finicky beast through a loophole in Murphy's law, I was just praying the thing wouldn't break on the way up.


P.S. That's Steve Wozniak, cofounder of Apple Computer.

Tuesday, April 29, 2008

How GPUs are better than CPUs

Intel has a great CPU core right now, AMD does not, and in combination with Intel having higher-performance silicon, Intel is currently beating AMD handily. Meanwhile, Intel and AMD are both integrating graphics into the CPU and NVidia probably feels sidelined. So NVidia says that the CPU is dead. I agree, a little.

Many things people want to do these days are memory bandwidth limited. Editing/recoding video, or even tweaking still pictures and playing games are all memory bandwidth limited. GPUs have far better memory bandwidth than CPUs, because they are sold differently.

The extra bandwidth comes from five advantages that GPUs enjoy:
  • GPU and memory come together on one board (faster, more pins)
  • point-to-point memory interface (faster, lower power)
  • cheap GPU silicon real estate means more pins
  • occasional bit errors in GPU memory are considered acceptable
  • GPUs typically have less memory than CPUs
When people buy CPUs, they buy the memory separately from the CPU. There are 2 chip carriers, one socket, a PC board, and one DIMM connector between the two. In comparison, when people buy GPUs, they buy the memory and the GPU chip together. There are 2 chip carriers and a PC board between the two.

CPU memory interfaces are expected to be expandable. Expandability has dropped somewhat, so that currently you get two slots, one of which starts out populated and the other of which is sometimes populated and sometimes not. The consequence is that the CPU-to-DRAM connection has multiple drops on each pin.

GPUs always have one DRAM pin to each GPU pin. If they use more DRAM chips, those chips have narrower interfaces. Because they are guaranteed point-to-point interfaces, the interfaces can run at higher speed, generally about twice the rate of CPU interfaces.

CPU silicon is optimized for single-thread performance -- both Intel and AMD have very high performance silicon. As a result, the silicon costs more per unit area than the commodity silicon the GPUs are built with. The "programs" that run on GPUs are much more amenable to parallelization, which is why GPUs can be competitive with lower-performance silicon.

It turns out that I/O pins require drivers and ESD protection structures that have not scaled down with logic transistors over time. As a result, pins on CPUs cost more than pins on GPUs, and so GPUs have more pins. That means they can talk to more DRAM pins and get more bandwidth.
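To see how pin count and interface speed multiply into bandwidth, here is a sketch with made-up but era-typical numbers (dual-channel DDR2-800 for the CPU, a 512-bit ~2000 MT/s graphics memory interface for the GPU; neither figure is from the post):

```python
def bandwidth_gb_s(data_pins, mega_transfers_per_s):
    """GB/s = bytes per transfer (pins / 8) * billions of transfers per second."""
    return (data_pins / 8) * (mega_transfers_per_s / 1000)

cpu = bandwidth_gb_s(128, 800)    # dual-channel DDR2-800, assumed
gpu = bandwidth_gb_s(512, 2000)   # wide point-to-point graphics memory, assumed

print(f"CPU ~{cpu:.1f} GB/s, GPU ~{gpu:.0f} GB/s")  # 12.8 vs 128
```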

All of the above advantages would apply to a CPU if you sold it the same way a GPU is sold. The final two advantages that GPUs enjoy would not apply, but are easy to work around.

The first is the acceptability of bit errors. GPUs do not have ECC. It would be easy to make a CPU/GPU that had a big wide interface with ECC.

The second is the memory size. GPUs typically connect to 8 or 16 DRAM chips with 32b interfaces each. It would be straightforward to connect with 64 DRAM chips with 8b interfaces each. If fanout to the control pins of the DRAMs becomes a problem, off-chip dedicated drivers would be cheap to implement.

So, I think integrated CPU/GPU combinations will be interesting for the market, but I think they will be more interesting once they are sold the way GPUs are sold today. Essentially, you will buy a motherboard from Iwill with an AMD CPU/GPU and 2 to 8 GB of memory, and the memory and processor will not be upgradable.

For servers, I think AMD is going in the right direction: very low power (very cheap) mux chips which connect perhaps 4 or even 8 DRAM pins to each GPU/CPU pin. This solution can maintain point-to-point electrical connections to DIMM-mounted DRAMs, and get connectivity to 512 DRAM chips for 64 GB per GPU/CPU chip.

Sunday, April 20, 2008

Fountain Prototype


Martha and I are building a pool in the back yard. In that pool will be a hot tub, and pouring into that hot tub will be a fountain. I want lots of water flow, and curves, especially since the overall pool will be rectangular (due to the automatic cover). To give you an idea, here's the pool:


The hot tub is circular, and has a 1 foot thick wall that separates it from the pool. Out of the center of that wall, water will leap up, arch over, and fall into the tub. This will pour nicely over your shoulders if you are an adult, and it will make a fancy tube to explore if you are a child.

The trouble is that nobody sells a curved fountain like this. No problem, I'll just assemble it from a number of straight sections. Also, I do not want to use high-pressure pool pumps for this thing. Instead, I want to use low-power, low-pressure pond pumps. The manufacturer of the fountain has specs for the amount of water flow you need, but not the pressure. I smell project risk. Time for a prototype. Here's the overall arrangement: two fountain units, 1 foot wide each, one Sequence 4200seq12 pump, and some pipes to move the water.





I've got a flow gauge, two pressure gauges, and a ball valve so I can figure out how many gallons per minute throws the water how far.


I've also got a peanut gallery. They're interested because they're going to get to dance around in the water in a bit.


The fountains throw water about as far as the manufacturer claims. Note that my flow rates are for two 1 foot units.

Flow rate | Throw | Notes
48 GPM | 26.0 inches | 7 inch rise
45 GPM | 23.5 inches |
41 GPM | 18.7 inches |
37 GPM | 14.7 inches |
35 GPM | 11.5 inches | 3+ psi pressure drop
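The throw grows roughly linearly with flow over this range; a least-squares line through the measurements above gives about 1.1 inches of extra throw per extra GPM:

```python
flows = [48, 45, 41, 37, 35]              # GPM through both 1-foot units
throws = [26.0, 23.5, 18.7, 14.7, 11.5]   # inches of throw

# Ordinary least-squares slope of throw vs. flow.
n = len(flows)
mean_f = sum(flows) / n
mean_t = sum(throws) / n
slope = (sum((f - mean_f) * (t - mean_t) for f, t in zip(flows, throws))
         / sum((f - mean_f) ** 2 for f in flows))
print(f"{slope:.2f} inches of throw per GPM")  # ~1.11
```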

I learned a bunch of things from this prototype:
  • The flow through the two units was not identical. One moved about 8% more water than the other, and threw the water a little further.
  • The flow through each unit was not uniform. The unit throwing farther was throwing farther on one end.
  • With no fine filtration, and just a skimmer before the pump, the fountain units quickly accumulated debris that interfered with the flow.
  • The water sheet from each unit contracts from surface tension as it gets farther from the fountain. A 14 degree included angle between the two units turned out to roughly match the contraction, but this still left a constant gap from one to the next. I may try to fix that by mitering the two fountains together.
  • Martha and I agreed that 15 or 20 GPM per linear foot is not enough. We really like 25 GPM/foot better.
  • The fountain water entering the water surface was the cause of all the noise. The pump was really quiet, and you could only hear it when you walked right over to it.
  • The pump really doesn't prime itself. I had to stuff a hose up the intake and fill it full of water before the pump would move anything.
  • This pump can move just 48 GPM with this setup (which implies it is seeing about 5 feet of head). With more angles and losses in the system, I am going to need more pressure at that flow.
I also noted that the water sheet was rough. Water entry was noisy. I took a high-speed shot of the water, and sure enough, it's breaking up in flight. Note also how much shorter the rear fountain is than the front.


I noticed that the flow gauge was bouncing around a fair bit, so I presume I'm getting a bunch of turbulence, which probably does not help the fountains at all. These units are the "short lip" version of these fountains, which means they have just 1" of flow straightener before they launch the water. The standard version has a 6 inch lip, which I think might damp the turbulence more and lead to a cleaner sheet of water.

Inside the unit there are apparently 3 supports of some sort. These have visible wakes, but I wasn't able to see that the wakes caused more breaking up when they hit the edges.


So, my plan is not yet validated.
  • I need bigger pumps. 3 of the 5100SEQ22 will produce 200 gpm total at 10' head. That should give me enough extra force to push through the extra twists and turns.
  • Each fountain unit is going to need its own throttle. The best way to implement this is probably a bank of eight $20 ball valves, and a separate run to each fountain unit.
  • As long as I'm doing a separate run to each fountain unit, I might arrange for the final connection to be long and straight to reduce turbulence. There will be a lot of turbulence in the fountain unit itself, so maybe this is hopeless.
  • I should order a fountain unit with a 6" lip, and see if I like that flow better.

Tuesday, April 08, 2008

Conservation versus outsourcing

Read "The Wonderful Curse of Natural Gas Price Volatility". It's short, just 12 pages long.

Check out the graph at the top of page 9: "U.S. Industrial Gas Demand Destruction". That's a 22% drop in industrial natural gas utilization between 1997 and 2006. That's not efficiency, that's offshoring! What's going on here?
  • Natural gas is a feedstock for the fertilizer, chemical, and plastics industries, and a fuel for the electric generation industry.
  • Electric power generators are less sensitive to the price of their fuel than the fertilizer, ethanol, and plastics industries, since those three products can all be shipped to us from overseas, and electricity cannot.
  • The electric generation industry is sensitive to the capital necessary to build capacity, because the rent on the capital to build their plants has to be priced into the electricity sold, and different plants do compete to produce and sell electricity. Thus, more capital-intensive plants are more likely to have lower return on investment if electricity prices dip.
  • Gas turbine power plants have exceptionally low capital costs, making them very desirable to the power producers, and gas prices were low during the 1980s and 90s.
  • So, electric generators built 200 gigawatts of gas turbine powerplants during the 1990s and early 00s, so that gas turbine plants now constitute 41% of our nameplate capacity (EIA figures). These gas turbine plants are now running at a capacity factor of 21%, and produce 20% of our domestic power (once again, EIA).
  • Figure 7 of page 8 of the Ventyx report shows that between 1997 and 2006, gas consumption by the power generators rose from 11 to 17 billion cubic feet a day. That's all those gas turbines coming on line.
  • It turns out there is a limited supply of domestic natural gas. Demand rose, supply stayed constant, and thus prices rose.
  • Over the same time, industrial consumption dropped from 23 to 18 billion cubic feet a day. That's domestic fertilizer, chemical, and plastics production being moved overseas in response to higher feedstock costs.
  • U.S. consumption of fertilizer, chemicals, and plastics has not dropped, and conversion from the feedstock to the final product increases value, so offshoring has driven the jobs overseas and also increased our trade deficit by much more than the cost of the natural gas consumed by the electric generation industry.
What we have here is another example of a strong negative correlation between the performance of the U.S. power generation industry and the U.S. economy as a whole. This is a tragedy, partially responsible for our $708 billion/year trade deficit. That's an unpaid $2,360 bill, per man, woman, and child, per year, for everyone in the United States.

This post and the last one may lead some of you to think I'm all for a command economy. No. I'm pretty sure that if we nationalized the electric power generation industry, we'd end up running it less efficiently, which would also lead to higher domestic power costs. I do think we need to bring the measure of performance of the electric power generation industry into better alignment with the domestic economy.

The domestic economy does well with cheap energy. In this context, gas turbines are a disaster, since they redirect a feedstock away from high-value-added uses (plastics) into low-value-added uses (electric generation). We have readily available substitutes for electric generation (coal and nuclear), but not for natural gas. In some sense, all a gas turbine does is convert one kind of energy into another without increasing the domestic supply.

I don't know how to make domestic power producers profit more when the US economy has cheaper energy. The benefit of marginally cheaper power is probably nonlinear, and possibly unmeasurable in any way that would allow accountants to calculate a credit to power producers. I do not want to see more coal powerplants, because of the currently externalized cost of CO2 production, even though they are a cheap source of power. Perhaps the simplest way forward is what we have now: tax credits or subsidies for the obvious answers, like wind and nuclear, and just feel our way through, year by year, guessing which subsidies will distort the electricity market to best serve the interests of our citizens.

I'm sorry to keep harping on this energy and trade stuff, but to be honest, I'm scared. I don't understand how to predict what this trade deficit will do, nor do I understand how big is too big, but $700 billion feels too big. Our trade deficit, national budget deficit, credit crisis, housing market meltdown, and war in Iraq give me the feeling that this nation has derailed and is about to make a very expensive and possibly bloody mess.

The last time we got into a World War, we had just splurged on national infrastructure. Think about this: 90% of the Allied aluminum flying over Germany was made with power from the Grand Coulee Dam, built from 1933 to 1942, i.e. just in time. I'm not saying I expect another World War, but I am saying that when times get tough it's good to have serious infrastructure in your back pocket.

Wednesday, March 26, 2008

Let's drive electricity prices into the ground

Read this report. It's basically a big apology for why electricity prices have been going up.

On page 31, it shows the EIA estimate that a 10% increase in the price of electricity in 2006 would cause a 4% (175 billion kWh/year) drop in electricity demand in 2014, down from 4.2 trillion kWh/year. This is basic supply and demand, with the EIA doing the error-prone work of predicting the demand curve in the future. The first thing I'll note here is that a 10% price increase, coupled to a 4% sales drop, leaves a 6% revenue increase (at least $12 billion/year) coupled with decreased costs for the folks selling electricity. It's an inelastic demand curve. So, if the folks making electricity can do anything to reduce the overall supply, it's well worth their effort.
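The arithmetic behind the inelastic-demand point; the average electricity price below is my assumption (the dollar total depends on it), not a figure from the report:

```python
price_mult = 1.10       # 10% price increase
demand_mult = 0.96      # 4% demand drop
revenue_mult = price_mult * demand_mult
print(f"revenue change: {revenue_mult - 1:+.1%}")   # about +6%

sales_kwh = 4.2e12      # projected 2014 sales, from the report
avg_price = 0.08        # $/kWh, an assumed rough average price
extra_revenue = (revenue_mult - 1) * sales_kwh * avg_price
print(f"${extra_revenue / 1e9:.0f} billion/year")   # comfortably over $12B
```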

When the price of electricity goes up, some of that reduction in demand is accomplished by economic activity (buying a more efficient air conditioner), and some is accomplished by reducing economic activity (shutting down the night shift of a marginal plant). Overall, how much of each? My guess is that the reduction in economic activity is the main reducer of demand. Let's suppose I'm right, and that a 4% drop in electric demand is accompanied by a 1% drop in GDP. That's a $130 billion dollar drop.

You can see that price fixing among electricity producers would be seriously damaging to me and you. It is in the national interest that electricity prices not rise 10%. Note that this is true regardless of whether the utilities make or lose money, because as a nation we are making or losing quite a bit more money than the utilities are.

So let's consider a different investor, the U.S. government. Suppose that the electric demand curve slope is locally smooth. A 10% decrease in the cost of electricity, then, should lead to a 4% increase in sales, and a corresponding 1% increase in GDP. This is what Rod Adams is talking about when he calls electricity an economic lubricant.

How much is that 1% increase worth to the federal government? They tax the GDP at about 18.4%, so it's worth around $24 billion per year. To review:
  • A 10% decrease in the cost of electricity, from $0.07/kWh to $0.063/kWh, would lead, 10 years later, to
  • ...a 4% (175 billion kWh) increase in electricity sales, for a net revenue loss to the industry of
  • ...$12 billion/year. The federal government, however, would be raking in an extra
  • ...$24 billion/year, and the rest of us would be enjoying an additional
  • ...$130 billion/year in GDP.
Sounds good. Let's mandate a drop in prices! Who says we can't have a centrally controlled command economy?

Well, it's not that simple. First, we need to know how much investment is required to drive electric prices down 10%. Presuming that the government has to somehow compensate utilities for taking a $12B/year hit for the team, that leaves $12B/year to pay for the capital required. The federal government currently borrows money for 30 years at 4.5% (they are a better credit risk than you), so the capital required for this investment had better be significantly less than $266B.
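The $266B figure is the interest-only (perpetuity) ceiling on $12B/year of debt service; a true 30-year amortizing loan supports somewhat less:

```python
annual_payment = 12e9   # dollars per year available for debt service
rate = 0.045            # 30-year federal borrowing rate

# Interest-only ceiling: principal never repaid.
perpetuity = annual_payment / rate                            # ~$267B
# Standard annuity formula: principal fully repaid over 30 years.
amortizing = annual_payment * (1 - (1 + rate) ** -30) / rate  # ~$195B

print(f"perpetuity: ${perpetuity / 1e9:.0f}B, amortizing: ${amortizing / 1e9:.0f}B")
```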

The Palo Verde nuclear power plant supplies power for $0.027/kWh, including operations (fuel), maintenance, and interest and depreciation costs. In 2002, the marginal cost (not including capital) was 42% less than that for coal in the area, and since then the difference has increased as coal costs have risen. This is the best lever we can use to drive down electricity prices.

To drive down wholesale prices by 10%, we'd need to bring the cost of production down approximately 10%. Using the Palo Verde area numbers from this report, and assuming we keep the same coal and hydro production (as they are both low cost), but reduce gas and increase nuclear, we'd need 49 gigawatts of new nuclear production nationwide. That's not going to happen by 2014, but we would probably see some fraction of the benefit for some fraction of the cost. Just incidentally, 49 gigawatts of new nuclear production scaled up from Palo Verde's employment base is 89,000 extra jobs here in the U.S., paying an average of 13% more than the average American salary.

Palo Verde cost $5.9 billion, was finished in 1988, has a peak capacity of 3.72 GW, and sustains a capacity factor in excess of 90%. We would need 13 more Palo Verdes to produce enough electricity to make that 10% cost reduction happen, at a present-day cost of around $120 billion [edited; thanks]. The generating utilities are not going to take this on, given that the "benefit" is a $12 billion/year loss to them. But for the U.S. government, looking at $24 billion/year in increased tax revenue, the cost of the plants is easily worth it. What remains is determining a way to have the government provide the capital and offset the revenue losses associated with a huge expansion of the nuclear reactor fleet, without getting ourselves further into the management disaster of a command economy.
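Scaling from Palo Verde: the plant count follows directly, and the dollar figure requires escalating 1988 construction dollars; the ~1.55x multiplier below is my assumption, chosen only to show how a ~$120 billion total arises:

```python
new_gw = 49             # new nuclear capacity needed
palo_verde_gw = 3.72    # Palo Verde peak capacity

plants = new_gw / palo_verde_gw
print(f"{plants:.1f} Palo Verde equivalents")   # ~13

cost_1988 = 5.9e9       # Palo Verde construction cost, 1988 dollars
escalation = 1.55       # assumed 1988 -> late-2000s cost multiplier
cost_now = plants * cost_1988 * escalation
print(f"${cost_now / 1e9:.0f} billion")         # ~$120 billion
```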

I'll note that we're going into a recession, and interest rates are falling. This is a good (cheap) time for the government to borrow a bunch of money to invest in long term economic infrastructure. The reactor buildout I'm proposing would cost about the same as the $300/person economic stimulus package our leaders just conjured up. To my mind, the difference is very much teaching a man to fish versus giving him fish.

Tuesday, March 11, 2008

Clinton's choice

I'm watching replays of the CNN Obama/Clinton debate. This is painful.

The argument that Clinton needs, and is failing to make, is that there is a difference between how Senators and Presidents collect the information they need to make their decisions. The Congress does not have an NSA. The President does. Clinton made her decision, one she regrets, on the basis of information provided by George Bush's team. Had she been in President Bush's position, things would be entirely different because she would have had a completely different set of options, including better discovery of what the facts actually were.

She's not making that argument. I'm not sure why, and it suggests to me that she still isn't thinking about how to be President. She's thinking about how to argue about stuff, not how to find the right answer.

There is another angle that Clinton is missing. To win, the Democratic presidential candidate will have to appeal to some Republicans. What is going to go over better? "I was right, you shouldn't have gone to war, now I'm going to fix your mistake and pin the cost on you?" or "We got into this tragedy together, and I will help get us out of it together?" Obama's Iraq message is actually more divisive.

Finally, for what it's worth, the idea of scheduling a withdrawal scares me a lot. I think our withdrawal from Mogadishu contributed directly to the planning of 9/11. I worry about what we're going to be dealing with in 10 years, and where we're going to be dealing with it.

Monday, February 11, 2008

Dessert Recommendation

On Sunday night my wife and three kids had the "Lemon Meringue Ice Cream Pie" at the Half Moon Bay Inn. It was one of the best desserts I have ever had. For dinner I had the cheeseburger, also one of the best burgers I've ever had.

I'd like to put in a Google Maps link, but Maps doesn't have it! Half Moon Bay Inn is at 401 Main Street, Half Moon Bay, CA 650-560-9758.

Subsidizing wheat in Afghanistan

Afghanistan grows most of the world's opium. Opium is technically an illegal crop there, but it is one of the few crops that makes enough money to support a farmer in Afghanistan. If you grow opium, the central government is officially supposed to stop you, but the local official will probably look the other way if you pay him off. It may actually be cheaper and easier to grow opium in the Taliban-controlled areas, since the Taliban actively helps farmers sell their crop in exchange for a share of the profit. I'm sure many farmers prefer the Taliban for purely economic reasons.

If wheat sold for more money, perhaps 3 times the world price (which is around $350-$400/metric ton), some folks think the value of the wheat crop would be large enough to encourage many farmers to switch to wheat production. Wheat is legal to grow, so there is no disadvantage for a wheat farmer to having a functional Afghan government. Foreign aid organizations could run grain mills which bought wheat at $1100/ton and sold the flour for $350/ton. Bread prices would presumably stay low as flour flooded the market, and Afghanistan would presumably become an exporter of flour.

Folks in Pakistan and Iran would be encouraged to sell grain to Afghanistan for milling. I'm not sure this is an entirely bad thing. Presumably economic conditions do not vary dramatically as you cross the border, so areas just outside Afghanistan are probably also growing opium. And, as long as we stop bulk cargo deliveries of grain to Afghanistan, one would think it would be expensive to move large quantities of grain by, say, mule across the border. There is some subsidy level below which it is not worth moving grain by mule. Hopefully it's cheaper for small Afghan farmers to get their product to the mills than it is for Pakistani importers.

So, how much would this cost? Afghanistan produced 4.4 million metric tons of wheat in 2007/2008, so someone would have to cough up $3.3 billion/year to carry this subsidy. That's real money, and apparently we'd have to keep it up for a decade or so. If there are not large agribusinesses in Afghanistan now, there will be within a year or two. These businesses will get efficient at growing grain in Afghanistan, and start to produce the majority of the grain there. The subsidy on grain will decrease over time, large efficient businesses will capture nearly all of it (as they capture farm subsidies in the U.S.), and the marginal farmers will move back to poppies. I don't have a great deal of hope for this effort.
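The $3.3 billion/year figure falls straight out of the per-ton numbers above. A minimal sketch (milling costs, spoilage, and administration are ignored):

```python
# Rough annual cost of the proposed wheat subsidy.
buy_price = 1100      # $/metric ton paid to farmers (3x world price)
sell_price = 350      # $/metric ton flour resale price
crop = 4.4e6          # Afghanistan's 2007/2008 wheat crop, metric tons

subsidy_per_ton = buy_price - sell_price       # $750/ton
annual_cost = crop * subsidy_per_ton
print(f"Annual subsidy: ${annual_cost / 1e9:.1f}B")   # $3.3B/year
```

Note that this assumes the entire crop flows through the subsidized mills; to the extent it doesn't, the cost is lower but so is the incentive.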

By the way: anyone have a clue what this is?

Tuesday, February 05, 2008

Cost of oil, revisited

Last time I looked, oil was priced at $22/barrel and we were importing 9.14 million barrels a day, which made up 20% of our trade deficit of $374 billion. We were actually importing more, but I hadn't counted the refined stuff. So it was actually 12.6 million barrels/day, so $101 billion or 27% of the trade deficit.

Now, as you know, the oil spot price is around $95/barrel, but $72/barrel is closer to the average price, and we are importing 12.2 million barrels a day (crude plus some refined products). The Census Bureau has nicely summarized the data here; their figures don't quite match the simple math I would do. For Dec 2006-Nov 2007, they see petroleum imports as $283 billion (35%) of an $813 billion deficit.
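For what it's worth, here is the "simple math" version, then and now. The gap between my ~$320B and the Census Bureau's $283B presumably comes from their using actual monthly prices and volumes over a specific Dec 2006-Nov 2007 window rather than one flat average price.

```python
# Back-of-envelope annual petroleum import bill.
days = 365

# Circa the earlier post: $22/barrel, 12.6M barrels/day incl. refined
then_cost = 22 * 12.6e6 * days
print(f"Then: ${then_cost / 1e9:.0f}B/year")   # ~$101B

# Now: ~$72/barrel average, 12.2M barrels/day
now_cost = 72 * 12.2e6 * days
print(f"Now:  ${now_cost / 1e9:.0f}B/year")    # ~$321B
```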

Grim.

How much does a plug-in hybrid help?
  • Over a 20-year lifetime, the car is driven 250k miles.
  • It gets 75 mpg rather than 25 mpg.
  • It burns 80 barrels of oil rather than 320 (and burns a bunch of domestic coal instead).
  • It saves the importation of $15,500 of crude.
  • It saves the user $23,000 in gas.
  • It costs the user $5,800 in electricity: (250k miles) / (3 miles/kWh) × ($0.07/kWh).
My guess is that a practical plug-in hybrid chews up more electricity and gasoline than this, but it still seems pretty good. Unfortunately,
  • It's made by Toyota in Japan, and costs $25,000, so the net trade debt increases. At least the money is going to a responsible nation like Japan. I will concede that eventually Toyota will make most of these plug-in hybrids here, so only the profits will go to Japan.
  • If 10 million cars in the U.S. were plug-in hybrids, it would reduce our oil imports by 282,000 barrels/day, or 2.3%.
That last point is a killer. It is just incredibly hard to replace oil.
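The bullet points above can be sketched in a few lines. The gasoline yield per barrel of crude (~31 gal, implied by the 320-barrel figure) and the $3.45/gal gas price are my assumptions, not the post's, so the barrel and fleet numbers come out close to, but not exactly matching, the figures above.

```python
# Plug-in hybrid lifetime savings, under assumed yield and price inputs.
miles = 250_000               # 20-year lifetime mileage
mpg_old, mpg_new = 25, 75
gal_per_barrel = 31.25        # implied gasoline yield per barrel (assumed)
gas_price = 3.45              # $/gal (assumed)
elec_rate = 0.07              # $/kWh
miles_per_kwh = 3

gal_saved = miles / mpg_old - miles / mpg_new      # ~6,667 gal
barrels_saved = gal_saved / gal_per_barrel         # ~213 barrels
gas_saved = gal_saved * gas_price                  # ~$23,000
elec_cost = miles / miles_per_kwh * elec_rate      # ~$5,833

# Fleet-wide: 10M such cars, savings spread over the 20-year lifetime
fleet_bpd = barrels_saved * 10e6 / (20 * 365)      # ~290k barrels/day
print(barrels_saved, gas_saved, elec_cost, fleet_bpd)
```

The fleet number lands near the 282,000 barrels/day figure above, and either way the conclusion holds: against ~12 million barrels/day of imports, even 10 million plug-in hybrids only dent the total by a few percent.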