Monday, December 29, 2008

"we want these detainees broken"

  • Former Navy General Counsel Alberto Mora: “there are serving U.S. flag-rank officers who maintain that the first and second identifiable causes of U.S. combat deaths in Iraq – as judged by their effectiveness in recruiting insurgent fighters into combat – are, respectively the symbols of Abu Ghraib and Guantanamo.”
  • Jonathan Fredman, chief counsel to the CIA’s CounterTerrorist Center: "If the detainee dies you’re doing it wrong."
  • In mid-August 2003, an email from staff at Combined Joint Task Force 7 headquarters in Iraq requested that subordinate units provide input for a “wish list” of interrogation techniques, stated that “the gloves are coming off,” and said “we want these detainees broken.”
  • JPRA Commander Colonel Randy Moulton’s authorization of SERE instructors, who had no experience in detainee interrogations, to actively participate in Task Force interrogations using SERE resistance training techniques was a serious failure in judgment.
  • Secretary of Defense Donald Rumsfeld’s authorization of aggressive interrogation techniques for use at Guantanamo Bay was a direct cause of detainee abuse there.
I think at this point I'm just sick of all the damage that has been done to my country by Bush and his team.  I doubt that throwing many of them in jail will do much to improve the behavior of similarly-minded people, but I'm all for prosecutions so long as they don't shift attention from the job at hand, which is to fix the economy.

Thursday, December 18, 2008

Insulating the pool

Since this is my first posting about the pool, let me show you around a bit.  South is to the left.  You can see all along the south side of the pool there is a two-foot-wide shelf.  The water over that shelf will be 12 inches deep in the shallow end and 18 inches deep in the deep end.  In the middle of the south side you can see where the hot tub will go.  At the far end (west), you can see the shelf that will eventually support the automatic cover vault.  To the left of the pool, at the far end, you can see the pump vault that will hold the pumps below ground, behind baffles, which should make them completely silent.  The top of the wooden form will be about three inches below the coping around the pool, so you can see that the coping is about 18 inches above grade on the near side.  That coping will be about 16 inches wide, and will form a sort of bench seat most of the way around the pool.  At the far end, the grade level will be raised to match the coping height, which because of the slope of the back yard will only be about six inches.

The blue stuff you are looking at is the 2 inch thick Dow Styrofoam Highload 40 insulation, most of which is glued (yes glued, with polyurethane foam) to the soil behind it.  On the bottom of the pool, the insulation sits on 4 inches of crushed drain rock, which sits on a geotextile fabric membrane.  The next thing to go into the pit is the plumbing and rebar grid, and after that we shoot gunite.

Heat losses for most pools are dominated by evaporation. We're going to be getting a safety cover, and one of the side effects of these covers is that they are close to vapor-tight. We'll keep it closed for most of the day and only open it to go swimming. As a result, I'm expecting evaporative losses to be quite small -- perhaps a half inch a month or so.  On our 18x46 foot pool, that's 260 gallons/month. Multiply by 2270 kJ/kg (water's heat of vaporization), and that's 70,000 BTU/day.  [Update: this evaporation estimate was dead on.]
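The arithmetic is easy to check. A quick sketch in Python (the constants are standard; daily figures assume a 30-day month):

```python
# Check the evaporation estimate: half an inch of water per month
# off an 18 x 46 ft pool, converted to a daily heat loss.
GAL_PER_FT3 = 7.48     # US gallons per cubic foot
KG_PER_GAL = 3.785     # mass of a gallon of water, kg
BTU_PER_KJ = 0.9478
H_VAP = 2270.0         # kJ/kg, heat of vaporization of water

area_ft2 = 18 * 46
gallons_per_month = area_ft2 * (0.5 / 12) * GAL_PER_FT3
btu_per_day = gallons_per_month * KG_PER_GAL * H_VAP * BTU_PER_KJ / 30

print(round(gallons_per_month))  # ~258 gallons/month
print(round(btu_per_day))        # ~70,000 BTU/day
```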

Direct heat loss through the cover will be the largest remaining heat loss.  The 24-hour average outside temperature in spring is around 60 degrees, and the safety cover will be something like R-1, so loss from an 85 degree pool would be 520,000 BTU/day.  During the early spring and late fall, we'll probably lower this loss by putting a bubble-type cover over the safety cover.  (I wasn't able to find an automatic safety cover which insulates.)  The combination of the two covers will be around R-3, so losses will be around 180,000 BTU/day.
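Those figures follow from the usual US R-value formula, Q = area × ΔT / R in BTU/hr per square foot; the post's slightly larger numbers perhaps use the cover area rather than the bare water surface:

```python
area_ft2 = 18 * 46   # water surface; the cover itself is a bit larger
delta_t = 85 - 60    # pool minus average spring air temperature, deg F

for r_value in (1, 3):  # safety cover alone vs. safety + bubble cover
    btu_per_day = area_ft2 * delta_t / r_value * 24
    print(f"R-{r_value}: {round(btu_per_day):,} BTU/day")
# R-1: ~497,000 BTU/day (post: 520,000)
# R-3: ~166,000 BTU/day (post: 180,000)
```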

Most contractors tell me that the dirt under the pool is a fine insulator, but I think they really don't know what they're talking about. Houses use under-slab insulation to insulate their 70 degree interiors from the 55 degree earth heat sink. The pool will be at 85 degrees, which is twice that temperature gradient, so I think insulation will matter even more for the pool. In particular, I'm most concerned about the water table contacting the bottom of the pool in the spring and sucking all the pool's heat into an underground plume of warm water headed towards the San Francisco Bay.  [Update: the water table never gets to the bottom of the pool, as our dirt is well drained.]

Let's suppose that concrete conducts 1.7 W/m-K. To convert that to more familiar units, 8 inches of concrete would be R-0.68 (which is terrible -- single-paned windows are better). I'll guess that the dirt, even when wet, insulates a bit as well, so that the bottom of the pool is about R-2. My pool will have an average depth of 7 feet (it has a 10.5 foot deep diving area), so it'll have an exposed area of about 1790 square feet. If the pool temp is 85 degrees, and the ground temp is 60 degrees, that's 537,000 BTU/day, about three times my expected loss through the top in the spring. That's why I'm insulating the pool.
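Both the unit conversion and the bottom-loss figure check out (5.678 is the standard factor from SI thermal resistance to US R-value):

```python
# Concrete: conductivity 1.7 W/(m*K), 8 inches thick
k = 1.7
thickness_m = 8 * 0.0254
r_us = thickness_m / k * 5.678     # SI m^2*K/W converted to US R-value
print(round(r_us, 2))              # ~0.68

# Bottom loss at R-2: 1790 ft^2 of shell, 25 deg F gradient
btu_per_day = 1790 * (85 - 60) / 2 * 24
print(round(btu_per_day))          # 537,000 BTU/day
```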

The blue Styrofoam has an R value of R-10. It is designed to deflect 5% at 40 psi, and is rated for continuous (dead) load of 13.3 psi. 10.5 feet of water plus 8 inches of concrete will be 5.25 psi, well within the dead load rating of the Styrofoam. The stuff should compress 0.33 mm when the pool is filled, which I don't think will cause cracking anywhere (thermal expansion is probably more than that).
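The load and deflection numbers are consistent, assuming the 5%-at-40-psi deflection spec scales roughly linearly at low loads:

```python
water_psi = 10.5 * 0.433             # 10.5 ft of water at ~0.433 psi/ft
concrete_psi = 150 * (8 / 12) / 144  # 8 in of 150 lb/ft^3 concrete
load_psi = water_psi + concrete_psi
print(round(load_psi, 2))            # ~5.24 psi

strain = 0.05 * load_psi / 40        # linear interpolation of 5% at 40 psi
deflection_mm = strain * 2 * 25.4    # 2-inch-thick panel
print(round(deflection_mm, 2))       # ~0.33 mm
```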

The insulation changes the bottom of the pool from R-2 to R-12, so now I expect to lose 90,000 BTU/day, which is a savings of 447,000 BTU/day.  To put that in perspective, on an average day my solar panels will deliver around 40,000 BTU each.  I think I can squeeze 12 of them up onto the roof.  The insulation is saving about as much heat flow as the panels put in!

To attach the insulation to bare dirt on the walls and shelves, I glued them on with Tap Plastics X-30 two-part expanding polyurethane foam, which I sprayed on with a pair of Wagner power sprayers.  The trick here was to hold the panels firmly (with a few hundred pounds of force) against the dirt while the polyurethane expanded, which takes about 30 minutes.  Once we did that the panels really stuck on well.

Note: I wore complete face covering, goggles, and breathed through an activated charcoal filter to take out the volatile organics.  It was impossible to get the crew to wear the same protective gear.  In the end, everything that was exposed got coated in a fine mist of polyurethane.  The warning label says skin will develop an allergy to the stuff with repeated exposure.  If you try this yourself, be careful, and be careful with your crew!

The soil at the bottom of the pool is firm clay, covered with filter fabric, covered with drain rock.  Initially I tried gluing the insulation to the drain rock, but this doesn't work well.  The insulation does not sit flat against the drain rock, and it rocks around a bit.  Worse still, the polyurethane is quite springy, and was deflecting about a quarter inch under my body weight.  This made me realize that the combination of shelves, drain rock, and insulation is very bad, because the insulation on drain rock might settle, which would cause the pool to hang from the shelves, which would cause the gunite shell to crack.  If I had it to do over again, I would eliminate the soil shelves, which are only there to reduce the cost of the gunite by $1500 or so.

Instead, I ripped up all the bottom insulation that the crews had installed and reinstalled it myself.  I had already compacted the soil, first with the Bobcat, then with my feet (the crew were actually laughing at me), then with a heavy roller.  Then I compacted the drain rock with the heavy roller.  Then I cut the insulation into 15" x 24" squares and hand placed each one, tweaking the rock placement and vibrating the panel so that the rocks settled into a configuration which was flat for each panel.  That took about four partial days, and only got finished because Martha got into the pit with me.  I read later that a skilled laborer with one assistant is expected to place 100 square feet of under-slab insulation per hour, so I was about two times slower than that.

I'm still paranoid about settling, so I'm going to have the south and west walls of the pool done with #5 40 ksi vertical rebar on 6 inch centers.  #5 is strong enough to support the entire pool weight (450,000 pounds filled) from the soil shelves, provided the shelves themselves don't creep under that load.  I also put a 1 foot chamfer on the edge of the shelf under the cover vault.  The fillet of gunite that fills that chamfer, along with the rebar, will spread the torque from the pool wall hanging on that shelf.

[Update: I ended up with a mix of #3 40 ksi and #4 60 ksi rebar, with about half the strength necessary to hold up the whole pool. I figured I only had to hold up one end, and the rebar guy objected strongly to #5 -- he bent most of the rebar with his right knee!]

Another issue is the bond strength between the XPS insulation and the gunite.  I've since built the pump vault, which is insulated as well, and the XPS/concrete bond there is perhaps only 1 to 3 pounds per square inch shear strength.  If I assume 1 psi for the pool shell, then it will take about 140,000 pounds of shear, which is substantially less than the 450,000 pound filled weight of the pool.  This shear strength is comparable to the shear strength of the polyurethane bond to the soil.  This pool shell will be sitting on its bottom.  The polyurethane is substantially more springy than the polystyrene, and I think it will deflect the 0.3mm in shear that will happen when we fill the pool.

I did not predict the cost well.  I used 130 2 foot by 8 foot panels, with a small amount left over that I am using to insulate the pool piping.  These cost $2800.  I used 30 gallons of the X-30, which cost me about $1500.  It took me and a crew of three guys four days to glue on the side panels, and I think that labor cost me $2500.  The bottom I did myself, but the labor for that would probably have been another $1000 or so.

You can try to compare that cost to the cost of the panel array.  The problem with the comparison is that the panels deliver the most heat when it's warm, when you don't need it.  The second problem is that the panel array is limited by the size of my roof, so I can't just spend an arbitrarily large sum on panels.

That said, standard FAFCO-type panels are something like $300 each, and produce 25,000 BTU/day.  To generate the same amount of heat saved, you'd need 18 panels, which would cost $5400.  However, the panels don't really work at all in November and April, when you need the heat most.  The panels I'm using are SunEarth EP40s, which are glazed and insulated copper panels, and cost around $1100 each.  These will deliver around 35,000 BTU/day, and will work in November and April.  But I'd need 13 to match the insulation's savings.  And that would cost $14,000!  So I'm pretty confident the insulation will be a win.
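The panel-count arithmetic, for what it's worth (the savings figure is the 447,000 BTU/day from the insulation section above):

```python
import math

savings_btu_day = 447_000   # heat flow saved by the bottom insulation

options = {
    "FAFCO-type (unglazed)": (300, 25_000),   # ($/panel, BTU/day)
    "SunEarth EP40 (glazed)": (1100, 35_000),
}
for name, (price, output) in options.items():
    n = math.ceil(savings_btu_day / output)
    print(f"{name}: {n} panels, ${n * price:,}")
# FAFCO: 18 panels, $5,400
# EP40: 13 panels, $14,300 (the post rounds to $14,000)
```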

I'm not quite done with the insulation.  You can see a pile of pink insulation at the southeast corner of the pool.  This is standard XPS, and it will be going around all the pool piping.  I tried to find XPS pipe insulation, but the stuff is hideously expensive because it is all custom cut from large billets, usually for municipalities who are insulating their sewer pipes against frost.  So instead I'm gluing together a little box of 2" XPS around all my pipe runs.  Because the pool pump will be running 24 hours a day, the heat losses from the pipes can actually be comparable to the pool shell itself if not insulated.  If you are building a pool, you might want to consider insulating your pipe runs even if you don't insulate the shell, because insulated pipe runs don't suffer any of the structural and installation issues that come with the shell.

Side note: I had a hell of a time finding a Highload 40 distributor. I ended up calling Dow directly, who referred me to White Cap Insulation in San Francisco, who sold me the stuff for $21.58 per 2" x 2' x 8' sheet. Note that standard 20 psi extruded polystyrene has a dead load rating of over 6 psi, so it would have been technically okay for my 5.25 psi of dead load. If I had wanted to save around $600, I would probably have gone for the thinner stuff. It would also be easier to buy. I'm hoping that the extra-sturdy foam is buying me a little margin, which is nice to have because I don't have examples of other pools that have been built this way.  [Update: somehow I miscalculated the amount of gunite in the pool, and it's a bit larger than my estimate here.  So, I'm glad I have lots of extra margin on the load carrying capacity of the insulation.]

Side note two: the space down at the bottom of the deep end experiences really wild temperature swings right now.  Because it is down low, it is radiatively coupled to the sky and not much else.  When the sun is shining down there, it can get well over 100 degrees.  As soon as the sun goes down, the temperature plummets, and because the pool is a depression, chilled air tends to stay down there.

Side note three: If you read this post and insulate your pool, or know of a pool that's been insulated, you might leave a comment so that there is some repository of success stories on the internet for this idea. And please please post if there has been a problem with an insulated pool.

Sunday, December 14, 2008

The World's Underwriter

This graph, from the New York Times, shows the extent of U.S. financial commitments made over the last year to deal with the credit crisis.

From December 2007 through September 2008, we committed $537 billion.  That is not money spent, but it is money put at risk by guaranteeing various financial instruments, and by loaning to banks that could not otherwise get loans.

But in September 2008 the Fed went nuts, nearly doubling the commitment to $1097 billion, or around $5500 per taxpayer.  (There are around 200M taxpayers, right?)

And then in October, the FDIC joined the Fed, and, according to the NYT, together they upped the commitment to $5069 billion.  I am not following the NYT very well here, however, since it appears that $1600 billion of this refers to the size of the commercial paper pool, and not the size of the projected government purchases within that pool.  So it's not like the government is actually selling $5 trillion of T bills.

It appears the Fed has become the world's underwriter.  The commercial paper thing appears pretty transparent, for instance.  The Fed sells T bills at 1% interest, and buys commercial paper at 5 to 10% interest.  So long as the default rate is lower than the spread, the Fed makes money.  Given the size of the money flow here, it is possible that the Fed could either make or lose amounts similar to the size of the national debt over the next couple of years.

Of course the problem is that someone at the Fed has to decide what rate they want for paper from which companies.  Since the decisions required are vast -- they have to price the entire commercial paper market -- one presumes the same people are doing this that just presided over the credit default swap implosion.  So it seems we are more likely to be headed for the "likely to double the national debt" outcome than the "paid back the national debt" outcome.

This is Macroeconomics, for real.  Wow.  It really makes you wish there were a way to get off this train.

Friday, December 12, 2008


Going Nuclear

Steven Chu, the current director of the Lawrence Berkeley National Lab, and the guy that Obama just fingered to be the new Secretary of Energy, signed this paper in August of this year.  The paper is a short, very high-level, but clear and broad statement of why and how we should invest heavily in nuclear power.  Usually, statements like this come from people with no clout.

That white paper got me thinking: what if the government made a bunch of other sensible decisions?
  • They might shut down Yucca Mountain, and require that all nuclear waste be stored on the site of the reactor for 300 years. Nah, won't happen. [Update: They did it!]
  • They might just have NASA cancel Ares-I and Ares-V, and leave it to SpaceX to provide a launcher. This might actually happen. All those folks in Florida and Utah that used to work for NASA contractors? Learn to build windmills. Some of you can learn to build Dragons and Falcons. [Update: Holy crap! They did it!]
  • They might require all air conditioners and heat pumps to have short-term demand management controls. As the newer air conditioners got deployed, we'd have a lot less need for online throttled-down combustion gas turbines to back up all these new wind farms. I've not seen any rumblings of this yet.
  • They might even standardize form factors for rechargeable batteries... [Update: Um, they sort of did it! (Europe has standardized cellphone recharging plugs)]

Holy crap.  This Barack Obama guy seems to be making at least some decisions I agree with.  I'm confused and unused to this feeling, but I think I like it.


Monday, December 01, 2008

Spend, spend, spending our way out of recession

We have a consensus among politicians in this country that we must spend our way out of this recession.  Of course, there are many things to spend money on:
  • For the last six years, the neocons within the Bush Administration have spent around 700 billion dollars of our money on wars, and committed another trillion or so to the aftermath of those wars (caring for the permanently maimed American soldiers).
  • This March, the idea that everyone jumped at was spending money on consumer goods, so every American got a check for $400 from the government, and was exhorted to spend it on consumables.  Pfft!  Just like that, $80 billion gone.
  • In just the last month, the Federal Reserve has spent hundreds of billions by buying stock in badly run banks whose value is justly plummeting as people realize how stupid their management was.
All of these have been colossal failures from an economic point of view.  And I do mean colossal.  The U.S. economy is about $13 trillion a year, and has a historic growth rate of about 3% per year.  It is the most awesome generator of wealth ever.  The economy usually generates about $400 billion a year in growth.  And our government is now throwing away money at a rate which entirely negates the economy's ability to grow at all.  The current trends guarantee that our children will be worse off than we are.  We are within a single order of magnitude of blowing away the entire output of the U.S. economy, which would reduce us to hunter-gatherers amid fancy energy-starved infrastructure within a year or two.  Lest you think that an order of magnitude increase in government spending is preposterous, consider that one department, the Fed, is now spending more than 1% of the U.S. GDP per month propping up just one industry.  The auto industry is lining up at the trough, and others aren't far behind.

Although the things the government spends money on now are crippling the economy, this is actually a good time for government spending to grow, so long as we spend money on the right things.  Private sector returns on investment are low, so the cost of borrowing has dropped, which means investments have longer to make a return.  This is a great time for the government to spend money on things that will cause the economy to grow in the long term:
  • Domestic power infrastructure, like wind farms and nuclear powerplants.  These will make the cost of future energy more predictable.  Predictability means less risk, so that the cost of capital for energy-intensive manufacturing, like fertilizers and aluminum and steel and plastic, will be lower in the U.S. than in other countries without the same infrastructure.  That will give our descendants decades of competitive advantage, which is enough time for not just businesses but industries to grow.
  • Health care efficiency.  I'm not suggesting we spend more on health care itself -- we're spending too much on health care.  I suspect that a huge amount of operational expense can be slashed from health care through radical restructuring without large amounts of investment.  The restructuring will be radical though.  Imagine the number of people put out of work if drug advertising stopped.
  • Electrified transportation.  Hydrocarbon-based transport will always rely on imported fuels subject to ever-more volatile price swings.  The value of real estate depends in part on the cost of transportation (if you drive 40 miles to work at 20 mpg and $4/gallon and 3% discount, that's $100k present value), and so volatility in energy prices causes volatility in housing prices which caps the house value that people can afford.  Worse still, we get situations like the current housing bubble, caused by just 2 million people simultaneously finding out they bought way too much house.  (That's just 1.7% of the 116 million homes in the U.S.!)
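The $100k figure in that last bullet can be reproduced as a simple perpetuity; the sketch below assumes 40 miles of driving every day of the year, which is my guess at the convention behind the number:

```python
miles_per_day, mpg, price = 40, 20, 4.00
annual_fuel = miles_per_day / mpg * price * 365  # ~$2,920/year in gas
present_value = annual_fuel / 0.03               # perpetuity at 3% discount
print(round(present_value))                      # ~$97,000
```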
At least the president-elect seems to have the right idea.  I heard him refer to investment spending as a "two-fer".  I was disappointed at his notion that half the good idea of a "two-fer" is just spending the money at all.  But at least he's looking for the right kind of thing to spend on.

Friday, October 31, 2008

Save GM's workers, but not GM

I've just listened to a discussion on KQED about whether the US government should bail out the auto industry. The reason the solution here is not obvious is that everybody assumes that to save the job of someone who works for GM, you have to save GM. I suspect that's not true.

Toyota makes lots of cars in the United States. I suspect the majority of Toyota's domestic sales are domestically built. It saves them shipping, and avoids exposure to foreign exchange risk. The overall point is that there is no problem with American workers making cars that sell. The problem is that some American workers are making the wrong cars, and that is the fault of their engineering and management.

The usual way for these things to work out is that GM would go bankrupt and its assets would be sold. Toyota would hire some of those workers, and buy some of those assets on the cheap, because they'd know that with GM gone there would be a reduction in the supply of cars, and so they could make more money by increasing their own production.

Crucially, the assets (e.g. assembly plants) would be sold for a fraction of their original cost because Toyota would have to adapt them to Toyota's production style. This is very important -- GM's production style isn't profitable, so those assets have to be changed to become profitable.

Also, crucially, the former GM workers hired by Toyota would have to be retrained. This takes a lot of time for Toyota. The rehiring process also tends to weed out at least some of the poorly producing workers.

All this adaptation of physical capital and retraining of workers is all good stuff. So what's so bad about bankruptcy?

GM's shareholders lose a lot of money. Actually, GM's shareholders lost a lot of money a long time ago. GM's total market cap isn't very large, and hasn't been for a long time, because GM has had bad management for a long time. I claim that this problem is not terrible and does not warrant intervention from the government.

GM's workers and suppliers have no business for a long time. This is the real problem, and this is something that the government can help with.

So, here is what I suggest. I suggest that because GM, Ford, and Chrysler are "too big to fail", the government should intervene before they shut their doors, and negotiate an orderly transfer of their assets to companies that can make those assets perform. I suggest that Toyota and Honda and Mercedes and so forth would be willing to purchase many of the assets of the big three for a fraction of their original price, and would be willing to take government inducements to keep the factories open and the workers employed while they are restructured and retrained.

The result will be that some assembly people will be put out of work, because not all the Big Three's plants will be purchased. The corporate management of the big three will go unemployed, which is fine, as they are responsible for the current distress in those companies. There will be differences between the financial commitments that GM has made to its workers and those that Toyota makes to its own, and the differences between those commitments will have to be worked into the sale price from GM to Toyota.

We need to clarify the idea of "too big to fail". Our commitment is to our people, not to our companies.

Wednesday, October 29, 2008

The Biggest Haul Ever

The $700 billion bailout is about the same size as the dollar cash money supply in North America (that is, excluding dollars held as reserve currency in foreign sovereign banks).

The robbers in the Securitas Depot Robbery of 2006 made off with $92 million. This is the largest robbery in the western world, surpassed only, and hugely, by the $1 billion heist of cash from the Central Bank of Iraq in 2003, which was curiously similar to current events in that it appears to have been made possible by the actions of the Republicans in the U.S. government. In that case, it was the initiation of the bombing of Baghdad. The current mess traces back to the Gramm-Leach-Bliley Act of 1999, which repealed the Glass-Steagall Act of 1933, which was intended to prevent banks from engaging in risky investment practices such as those that had caused the banking collapse in early 1933.

But neither of those robberies is anything like the size of the bailout. Imagine that some group managed to knock over every single bank, not just in one city, but in all of them. Not just the banks, but the convenience stores, groceries, businesses. Not just those but swiped the cash from every house and every wallet of every person in the United States. What a haul! It's positively Grinchian in scope.

Now imagine that those responsible are being paid a salary by the federal government. And that the folks they are turning the money over to are being paid extra to figure out what to do with it!

I've been mugged!

Wednesday, October 01, 2008

Bail out Main Street, not Wall Street

The $700 billion Paulson plan is too expensive and fixes the wrong end of the problem.  A better solution is obvious: bail out the mortgages that are going into default, rather than bailing out the banks that are capitalized by derivatives of those mortgages.

Bailing out the mortgages themselves is much cheaper (around $200 billion), directly eases the problems of millions of homeowners, and should fix the banks' capitalization problems.

Now, if it doesn't fix the capitalization problems, that means there is a lot more bad debt floating around than just these subprime mortgages.  And if this other debt is the problem, let's get that problem front and center so we can deal with it.

Some background:  The subprime crisis is essentially this:
  • Millions of people in the U.S. bought houses more expensive than they could afford, on the assumption that the rise in the price of the house would help them pay for the house.
  • Those people accepted mortgages with initially low rates, that then increase after a period of time.  They figured that before the rates increased, they would either get a job that paid better, or refinance the house based on a higher assessed value which would give them a larger equity stake and therefore make it possible to get a loan with a lower interest rate.
  • Housing prices are not rising, and those people are unable to refinance and don't have a better job.  When the interest rate rises, they can't pay the increased bill and fall behind on their mortgage payments.
After 10 years of excessive spending on houses, and excessive building of expensive houses, the US has too many expensive houses.  The house price collapse will not be resolved until the US economy has grown enough folks with enough income to afford those houses.  My guess is this will take about a decade.

In the interim, we cannot allow those extra expensive houses to go unoccupied.  Unoccupied houses fall apart, and so they lose value quickly.  This loss will be borne by the large institutions in our economy and will be a drag on our net asset growth.  Under the Paulson plan, the government will buy derivatives of these mortgages and these losses will pass to the taxpayer.

So, the houses must remain occupied, and we do not have enough people who can afford them to occupy them.  Until we grow those folks, the best people to occupy those houses are the people who are living in them now.  I have a plan to keep them there, detailed below.

The financial crisis is an extension of the subprime crisis:
  • Banks have been converting bundles of mortgages into mortgage-backed equities, and selling those.  Many institutions now own these assets.
  • The value of those equities depends on the number of mortgages within them that default.
  • The value was initially determined by making assumptions about the usual number of mortgage defaults.  Side note: some mortgage-backed equities are guaranteed, so that the risk of many defaults is separated out into another security.  That does not change the problem, only who owns it.
  • Because the subprime crisis is surprising, nobody is comfortable estimating the number of mortgage defaults any more, and so the value of these equities is not well known.  Their value is plummeting for two reasons:
    • More risk means less value.
    • Banks are trying to sell these MBEs, but nobody is buying.
  • Banks are required to have a dollar of capital for every 12 dollars they loan out.  When the value of their assets drops, they have to sell some of their loans (packaged as mortgage-backed equities) to generate more capital.
  • So, selling MBEs causes the value of the MBEs still held to drop, which forces the bank to sell more MBEs.  The feedback loop is driving the capitalization of the banks below the legal requirements, which causes the banks to fail.
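The feedback loop in those last two bullets can be made concrete with a toy balance-sheet model. This is my own construction, not anything from the post or a real bank; the 5% initial shock and 20% price-impact numbers are arbitrary illustration values:

```python
def deleverage(assets, equity, shock=0.05, cap=12, impact=0.2, rounds=20):
    """Apply a price shock, then repeatedly force the bank back to the
    12:1 leverage cap by selling assets; each sale marks down the
    remaining book, eroding equity further."""
    liabilities = assets - equity
    assets *= (1 - shock)             # initial markdown hits equity 1:1
    equity = assets - liabilities
    path = [round(equity)]
    for _ in range(rounds):
        if equity <= 0 or assets <= cap * equity:
            break
        sale = assets - cap * equity  # sell just enough, at current prices
        liabilities -= sale           # proceeds retire debt
        assets -= sale
        assets *= (1 - impact * sale / (assets + sale))  # fire-sale markdown
        equity = assets - liabilities
        path.append(round(equity))
    return path

# A bank levered 12:1 can be wiped out by a 5% markdown plus forced sales:
print(deleverage(assets=1200, equity=100))  # equity path goes negative
```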
Paulson's plan is to have a U.S. government agency buy $700 billion of mortgage backed assets at some "fair" price.  This price will set a floor on the valuations of the assets owned by banks, so that they don't have to sell so many MBEs, and it will give them the cash they need to improve their asset positions to the point where they can loan money again.

The Paulson plan will fix the Wall Street problem, but it does not address the Main Street problem, that is, the mortgage defaults, except that in avoiding a recession it will reduce the number of mortgage defaults from people losing their jobs.

The alternative is to fix the Main Street problem directly.  If the government guarantees that only the expected number of mortgage defaults happen, then the MBE will have known values, which should also put a floor on the valuation of the bank's held assets and similarly ward off the credit crunch.

Here is how the government makes that guarantee.  Let's say that Harry has an adjustable-rate mortgage which just increased its rate, and he can't afford it.  His original 10% equity in the house is down to 5%.  Ordinarily he would go into default, and at some point the bank would repossess the house and evict him, and then sell the house.  We don't want that to happen because either the house will go vacant for a long period of time, during which it will deteriorate and lose value, or the bank will sell it for a song and lose a great deal of money, causing the bank to go bankrupt.

Instead, the government steps in and offers Harry the following deal:
  • The government buys 25% of the house from Harry.
  • Harry now refinances his 75% of the house.
    • He now has 6.66% equity in his smaller share of the house.
    • His loan amount is 74% of what it was, so his payments are proportionally smaller.
  • At some point in the next 10 years, presumably after the housing market has worked through its excess inventory, Harry gets a one-year notice from the government that he needs to either buy back the government's 25% or sell the house.  He can sell at any earlier time if he likes.
So, how much cash would the government have to invest with this plan?  Suppose the number of subprime mortgages is 9 million, and 25% of those are in serious delinquency.  Suppose the average value of those houses is $320,000.  For the government to purchase a 25% stake in all of those would cost $180 billion.  And note that this is an investment.  After ten years or so, we should expect to see most of that money back, perhaps with some gain.
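
A quick check of the arithmetic above, using Harry's numbers:

```python
# Harry's refinance: $320,000 house, 5% equity remaining after the decline.
house = 320_000
old_loan = 0.95 * house             # 95% of the house is still owed
new_loan = old_loan - 0.25 * house  # government's 25% purchase pays down the loan
harry_share = 0.75 * house          # Harry's remaining share of the house

print(round(new_loan / old_loan, 2))                           # 0.74
print(round(100 * (harry_share - new_loan) / harry_share, 2))  # 6.67

# Aggregate cost: 25% of 9 million subprime mortgages in serious
# delinquency, average house value $320,000, 25% stake in each.
cost = 9e6 * 0.25 * 320_000 * 0.25
print(cost / 1e9)                                              # 180.0
```

The new loan is about 74% of the old one, Harry's equity in his share comes to 6.67%, and the total program cost is $180 billion, matching the figures above.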

Why is my plan cheaper than Paulson's plan?  The primary difference is that my plan keeps people in their houses by purchasing a portion of the house's value.  Paulson's plan requires the government to buy the full value of the mortgage.  What Paulson's plan does not spell out is that the government will be stuck with the loss when the owners default on the mortgage and the house is repossessed and sold for a loss.

Saturday, September 27, 2008

Deer (plural) in the headlights

I watched the first presidential debate last night, as I expect many of you did.

My basic takeaways:
  • I was surprised to find myself liking Obama's foreign policy points more than McCain's.
  • McCain seems really worried about the effect of a defeat in Iraq on the U.S. military.
  • Neither candidate has a clue about the financial meltdown, either what to do about it or how it would affect his administration.
The conventional wisdom was that McCain understands foreign policy better than Obama.

Certainly he's travelled to hot spots far more than Obama has.  This is partially a function of the amount of time that McCain has been in the Senate.  But McCain has also been more interested in going places than Obama, and I'm disappointed that Obama hasn't taken better advantage of his ability, as a Senator, to go places and see first-hand the situation on the ground.  As a leader I think you always have to spot check the information you are getting, otherwise you end up with travesties like Colin Powell presenting bad evidence to the U.N.

But I like Obama's take on South Ossetia better than McCain's.  McCain espouses a simple response: The Russians went in, we want them out.  He made the more interesting point that six months ago he had called for replacing the Russian "peacekeepers" in South Ossetia with troops from other areas, since the Russians were hardly neutral.  Obama called for both sides to cool down, which is significant because he's acknowledging that Georgia's president was being provocative.  He didn't mention any details during the debate, but I think his position may be more realistic here.

Obama tiresomely pointed out that going into Iraq was a bad idea in the first place, which I agree with, but as McCain points out, the next president decides when and how we get out, not whether we go in.  But McCain seems to think that withdrawal is driven by avoiding the stigma of defeat, both for its effect on our military and also for the effect on our adversaries.  I thought it would have been useful for McCain to point out that there is a real link between our defeat in Somalia and the 9/11 attacks: bin Laden took specific inspiration from our debacle in Mogadishu, as reported by one of his lieutenants in the 9/11 commission report.

Obama is right when he points out that the current administration has been completely occupied by Iraq.  As he says, we took our eyes off the ball, which was nailing bin Laden and al Qaeda.  I think there is real value in killing this man and his organization, because it sends the right message to other asymmetrical adversaries: any single person will become exhausted before the U.S. military does.  I think the rest of the world expects us to tear up a lot of ground while uprooting al Qaeda, and we should.

McCain made a good point that Obama has made tactical mistakes: Obama should never have explicitly said that we would attack from Afghanistan into Pakistan, and he should have been more careful about suggesting that he himself, as President, would meet with and thus legitimize Ahmadinejad.  These are screwups, I think reflective of Obama's inexperience, but I think we can live with these kinds of screwups.

What I am impressed by is Obama's view that our Iraq engagement has prevented us from using our military power elsewhere to apply pressure to al Qaeda.  To some extent we've managed to attract foreign adversaries to a field where we are clear to fire, but I think the larger effect has been to grow new adversaries.  Although there is an aspect of bean-counting to it, I think Obama has the more useful perspective of Iraq in the context of the world.  I would like to see him frame our withdrawal as a balance between the risk of emboldening adversaries, as in Mogadishu in 1993, and the risk of allowing adversaries to flourish unopposed, as in Afghanistan during the rise of the Taliban.

Obama mentioned taking four divisions out of Iraq and putting two into Afghanistan.  I wish people would use numbers of troops instead of words like "division", because folks like me don't know how many people are in a division.

Given significant prompting, neither candidate offered a clue about how to react to the financial meltdown, nor how the meltdown might affect their administration. They were like deer in the headlights of an oncoming car.  Obama made the distinction of "Main Street" vs "Wall Street", but that's just a sound bite.  I think the more interesting distinction is whether the government should buy partial interest in houses whose present owners face foreclosure, or if the government should buy credit default swaps whose values are unknown because of the prospect of widespread foreclosures.  More disappointingly, neither candidate had any suggestions for how we might stick the bill to the people who profited so hugely from this debacle ($62 billion in bonuses in 2006!).

Both candidates came out in favor of nuclear power.  Fabulous!  Their differences are over waste: Obama wants to see a better solution for waste storage and reprocessing, a position that has been used in the past as a passive-aggressive stall tactic on nuclear power.  McCain seems convinced that there is some solution to the waste problem, although I suspect he thinks Yucca Mountain is it.  McCain twice pointed out that Obama's position is passively antinuclear, as you can't have power plants unless you store the waste.

On the whole, I was impressed with the dignity of the debate.  Last year some friends predicted a McCain-Obama contest, and I said then that I'd be happy with either result.  I still think that's true.

Thursday, August 28, 2008

Dichroics 101

A friend asked for a rundown on dichroics, which are the coatings we put on optical glass (and sunglasses) to optimize the transmission properties of our lenses. So, here is Dichroics 101. You could also check out Wikipedia.

Visible light is made of photons. Each one has a wavelength. Your eyes are receiving a bazillion photons every second in the daytime. At night, your dark-adapted eyes are capable of noticing the flicker of individual photons, but only sometimes. Visible light varies from 420 nm (blue) to 700 nm (red). A nm is a nanometer. 1000 nm is a micron (micrometer, but nobody but Europeans says that), 1000 microns is a mm (millimeter), 1000 mm is a meter.

So, 420 nm is really small.  But atoms are only 0.1 to 0.5 nm across, and we can lay down layers just a few dozen atoms thick, so we can actually build structures smaller than the wavelength of light.  We'll get back to that in a sec...

Any given transparent material has an index of refraction, which tells you (among other things) the speed of light in the stuff. Air has an index of refraction of about 1, so that the speed of light in air is the same as it is in a vacuum: 186,000 miles per second. Glass has an index of refraction of 1.5 to 1.8 (depending on which kind of glass). Plastics (like polycarbonate, what is most likely used in sunglasses) have an index of refraction around 1.5. So, in plastic, the speed of light is 186,000 miles/second / 1.5 = 124,000 miles/second.

Whenever photons go through a surface, like changing from air to glass or back, some will reflect.  The fraction of photons reflected depends on the change in index of refraction.  The equation is: reflected fraction = (Na - Nb)^2 / (Na + Nb)^2, where Na and Nb are the indices of refraction for the two materials.  For example, from air (Na = 1.0) to plastic (Nb = 1.5), the fraction is 0.04, or 4%.  When you see yourself in a window (or in someone else's sunglasses), this is what you are looking at.  Actually, you see two of these reflections, one off the front surface and one off the back surface of the glass.
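
Plugging the numbers into that equation, as a quick sketch:

```python
def reflected_fraction(na, nb):
    """Fraction of light power reflected at a boundary (normal incidence)."""
    return (na - nb) ** 2 / (na + nb) ** 2

print(reflected_fraction(1.0, 1.5))   # air to plastic: 0.04, i.e. 4%
```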

Suppose we put a 100 nm thick coating of Magnesium Fluoride (a.k.a. MgF2, index of refraction 1.38) on a piece of glass (index of refraction 1.5).  There will be two reflections: one from the air/MgF2 surface (2.5%), and one from the MgF2/glass surface (about 0.2%).  Because there is a smaller change of index of refraction across each surface, each surface reflects less than a bare air/glass surface would.  If the index of refraction of MgF2 were 1.22, the geometric mean of air and glass, the two reflections would be exactly equal in strength, and we'll see in a moment why that would be ideal.  But MgF2 doesn't have that nice property, so why do we bother?

Those photons act like waves: they have peaks and troughs. That 100 nm thickness wasn't just any old number, it is 1/4 of the wavelength of green light in MgF2. Green is ordinarily 550 nm, but going through MgF2 it's about 550/1.38=398.5 nm. The light reflected from the MgF2/glass surface will have travelled two times 100 nm more than the light reflected from the air/MgF2 surface, or one-half a wavelength more. So, the crests of the wave off the MgF2/glass surface will line up with the troughs of the wave off the air/MgF2 surface. When you add those two together, you get... cancellation.

Or... nearly cancellation.  The air/MgF2 reflection is somewhat stronger than the MgF2/glass reflection, so there will be a little wave left over.  Since it is the wave amplitudes, not the powers, that subtract, a 550 nm photon still has about a 1.4% chance of reflecting, down from 4% for bare glass.  There you have it, the first antireflection coating, as developed for German submarine periscopes in World War II.

Now notice that 420 nm (blue) light will not cancel as well, because the two reflections are 65% of a wavelength offset from one another. The peaks and troughs don't quite line up, so it doesn't cancel as well. The same is true of 650 nm (red) light. So, the reflected light will be purplish: it will have some blue and red, but not so much green.
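
Here is a sketch of that wavelength dependence, using the standard single-layer thin-film reflectance formula (the textbook formula, not anything from this post) for the 100 nm MgF2 coating on glass:

```python
import cmath
import math

def reflectance(wavelength_nm, n_film=1.38, d_nm=100.0, n_air=1.0, n_glass=1.5):
    """Power reflectance of a single thin-film layer at normal incidence."""
    r1 = (n_air - n_film) / (n_air + n_film)      # air/film amplitude coefficient
    r2 = (n_film - n_glass) / (n_film + n_glass)  # film/glass amplitude coefficient
    delta = 2 * math.pi * n_film * d_nm / wavelength_nm  # one-pass phase thickness
    phase = cmath.exp(-2j * delta)                # round-trip phase factor
    r = (r1 + r2 * phase) / (1 + r1 * r2 * phase)
    return abs(r) ** 2

for wl in (420, 550, 650):
    print(wl, round(100 * reflectance(wl), 1))
# prints:
# 420 2.0
# 550 1.4
# 650 1.6
```

The reflection dips to about 1.4% at green and climbs back toward blue and red, which is exactly the purplish residual reflection described above.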

This is the basis of dichroic filters. You can put several layers of stuff onto a glass or plastic surface, and each additional surface will have a reflection. At each wavelength, you can add up those reflections with their wave offsets to get an overall reflectivity. You end up with a graph which shows how much the thing reflects at each wavelength. By varying the materials deposited and the thicknesses, you can get something which has interesting and useful properties, such as reflecting all the IR (> 670 nm) and UV (< 390 nm).

You can buy such a thing at a good camera store. It's called a B+W 486 filter.

Tuesday, August 19, 2008

Grove's plan

Now I've read Andy Grove's plan (at The American, or via Wired) to fix our energy dependence national security mess. Mr. Grove thinks, as I do, that national security is the most immediate problem we face.

There are flaws in both Grove's and Pickens' plans, but we can take good ideas from both and act on them immediately.

There are two steps to either plan:
  1. Switch our vehicles to a non-petroleum energy form
  2. Make that energy domestically
With Pickens' plan, we switch to natural gas to power cars (step 1), and simultaneously free up domestic methane production by building wind turbines (step 2). If we do step 1 without step 2, we end up switching from imported petroleum to imported natural gas, which suits Pickens quite well because he has lots of gas to sell us.

With Grove's plan, if we do step 1 without step 2, we've got a bunch of electric cars which will plug in at night. The extra demand at night will drive utilities to produce more baseload power -- coal, wind, or nuclear, in that order, all of which is domestic. The advantage of Grove's plan is that step 2 is handled by the market.

The problem with Grove's plan is that converting cars and trucks to electricity instead of natural gas is more costly. The added cost will cripple the plan in two ways:
  1. It is so much more costly that the conversion will happen more slowly, so less imported petroleum will be displaced each year.
  2. Fewer vehicles, in the end, will be converted.
I think there are good ideas in both these plans that we should isolate and exploit immediately.

There are no foreseeable battery technologies that will work on long-distance trucks.  The obvious substitution here is to move long distance freight by electrified train.  We already have most of the rail infrastructure (rights of way are the big issue here), and the nation is already switching some cargoes back to rail.  But railroads have been sick for a long time, and we have to fix them before they can help America.

Rail's crushing disadvantage compared to trucking is its capital structure -- the fact that the same companies own the road and the trucks.  Long distance trucking works because multiple companies run trucks over the same routes, which are owned and paid for by the U.S. government via tolls on the trucks and taxes on the diesel they burn.  We should change rail to use this structure.  The rail infrastructure should be electrified in the process, so that the independent trains can choose to run on cheaper domestically produced electricity where it is available.  All the technology necessary is already developed and in production.

The good idea in Pickens' plan is to build lots (many tens of thousands) of wind turbines. Wind turbines displace imported natural gas with domestic labor, and that is the most useful part of his plan. If you then convert cars to run on natural gas rather than petroleum, you in turn displace some imported petroleum with some imported natural gas. This second step is a fine thing too, as petroleum costs more than natural gas per unit energy, but the first step is what is most important.

The United States made a terrible mistake during the 1990s by building hundreds of gigawatts of natural gas-fired turbines.  The choice was driven by utilities who know that fuel costs can always be passed to the consumer, so that cheap gas turbines minimized investment and so maximized return on investment.  The problem here is that utilities were allowed to make investments with large externalized costs.  Market forces do not work to the advantage of most citizens unless the market is set up to internalize the costs that matter to the citizens.  Because utilities have no sons and daughters to send to war, they cannot be allowed to make investment decisions that force us to send our sons and daughters to war.

Electric freight trains and wind turbines will not fix America's imported energy problem. Both, however, can be pursued immediately, are solid steps in the right direction that will not have to be reversed, and will make market-affecting changes in our consumption of imported energy. Both options will buy us some time during which we must develop better options.

In the medium term, we can build more nuclear power plants. These take longer than wind turbines to come on line, but the eventual impact can be much larger. The public discussion of our nuclear options is becoming more sensible, and I am beginning to hope that we may be able to begin building this infrastructure again after a two decade hiatus that has cost us terribly.

Nuclear power generation, if pursued in a sensible way, can drive the cost of electricity in the U.S. down below the cost of coal power in China, in a predictable, long-term way, which I think should be an explicit goal of our national energy policy. This will have the effect of "onshoring" basic industries that we have been moving overseas for decades. The onshoring effect is actually more powerful than displacing imported petroleum, because the imports that are replaced for a given amount of investment have higher added value.

Monday, August 18, 2008

Unreliable Wind Power

The electricity that arrives at your home or business is extremely reliable (if you live in the U.S.). The electricity that comes from a windmill is unreliable -- it only comes 1/3 of the time, and you never know exactly when it's going to come or how much you are going to get. If you want to sell wind energy, you have to convert unreliable wind energy into reliable power. How is it done?

This paper explains how, but it's a tough slog. What follows is my summary.

The utility companies already solve a similar problem. Electricity is not easily stored, and in modern grids it is generally not stored at all. So, when you flick on a light switch, the immediate effect is that the power dissipated by all the lights in your neighborhood drops a little to compensate. Within seconds, some power turbine perhaps hundreds of miles away must twist a little harder on its generator shaft to get everyone's line voltage back up, and the extra thermal or hydroelectric power fed into the turbine to get this extra twist will be just about what your light bulb burns, plus the inefficiencies of getting the power from the turbine input to your bulb.

Turbines can only throttle up to 100% of their rated capacity, and they get inefficient when they throttle down too far, so utilities will shut down or spin up units to make larger changes. Changes like these take a long time, so utilities predict what the expected load at any given time will be, hours or days in advance, and schedule units to be on or off line to match the predicted load. Utilities keep some fraction of their turbines at partial output so that they can immediately crank up to match unexpected increases in the load.

The biggest increase in the load that they plan for is usually an unexpected dropout of one of the generators. If a 1 gigawatt generator suddenly goes off line, the grid controllers might respond by taking four other generators from 700 to 950 MW output. It would be impossible for this to happen instantly, but luckily, when most generators go offline, they do so gradually, and if they coast down over the course of seconds, other generators can crank up to match. If a circuit breaker pops or a line parts, or something else happens very quickly, then there is usually a temporary brownout as the line voltage drops down to the point where the loads match the generation. The backup generators usually ramp up within seconds, and many devices (like your computer or TV) can ride through a partial loss of power for a second or so.

So, the bottom line is that utilities predict the change in demand on their generators, and there is some variation from their prediction, and being ready for this variation costs money because some turbines (the spinning reserve) must be run at partial throttle which is less efficient than flat-out.

Just as an aside: consider how valuable it would be for the utility company to be able to instantly shut down your air conditioner for just a few minutes. This ability would act as part of their spinning reserve. During the summer, air conditioners are a substantial fraction of the total power burned. I'm pretty sure that for most of the U.S., the ability to shut down even a fraction of the air conditioners for 10 minutes would cover the entire spinning reserve requirement. That could save a lot of money, and PG&E (my local utility in California) is experimenting with just that through their Underfrequency Relay Option on the Base Interruptible Program. Anyway, back to the summary.

Wind farms produce electricity whenever the wind blows. Wind speeds can be predicted, and there is always variation from the prediction. When a wind farm is connected to the grid, the total variation in load on the load-following turbines is larger than without the wind farm, and so more turbines must be run in load-following mode, and these incur a cost associated with wind power that is real but not easy to predict before the wind farm is built.

For instance, part of Denmark's grid is connected to Norway's grid. Norway gets most of its power from hydroelectric plants. Hydroelectric plants are very good at load following and so they are usually the first choice of plant to handle variation from plan. Because Norway has lots of hydroelectric plants, and because it has high-throughput connections to Denmark, Denmark can hook up fairly high powered wind farms to its grid and incur relatively low costs for standby power.

Now that utilities are connecting large amounts of wind power to their grids, they are getting more precise numbers on the costs of doing so.
  • Wind works well where you have year-round high winds near hydroelectric dams.
  • The short-term variation from wind farms is usually quite small, since turbines are small (a few megawatts) and don't all shut off at the same time.
  • Big storms give the worst case variation, since when a wind turbine goes too fast it feathers its blades and shuts down, going from full output to nothing, often in synchrony with other wind turbines around it.
  • Wind farms spread over large geographic areas have less total variation (the wind doesn't die everywhere at the same time). Ideally the spinning reserve would thus scale up slower than the total windpower connected, making marginal wind power less expensive. Unfortunately Denmark is too small to see this effect, and it would require very high throughput long distance power distribution.
Wind turbine manufacturers are working on making their turbines play better with the grid. Variable-frequency turbines go offline more slowly by generating power from their blades as they spin down after losing wind. Many manufacturers are shipping wind turbines with extra-large blades, so that the turbine produces a larger fraction of its output more of the time. This reduces the cost of variability in exchange for an increase in the capital cost of the turbine, which is a tradeoff made possible by an understanding of the cost of that variability.

Wind turbines appear to work economically (when the utilities are prodded by a production tax credit, which I support). As more turbines are installed, the best windy areas are used up and the least expensive spinning reserve is committed. On the other hand, wind turbine costs might be coming down some day (maybe -- materials costs are going up), and the need for spinning reserve is decreasing. It's a fairly tense balance.

Personally, I'm happy to see more wind turbines getting installed, since it's domestically produced, mostly-carbon-free, low marginal cost power, and let's have more of that. There is going to be a limit to how much wind power can be installed, but we're nowhere near that limit yet.

At the same time, it's sobering to consider that the United States once built almost 100 nuclear plants in about two decades, bringing new power online at the average rate of 5 gigawatts a year, at a time when our economy was one-third to one-half the size it is now, in real terms.  At its peak, we were building much faster than that.  This economic explosion was driven by the business fundamentals as much as it was by overexcited businessmen jumping on the latest bandwagon.  And the fundamentals were and are that if we decide on a reactor design (like Palo Verde), we can build and operate them cheaper than coal plants.

To match this performance, and get to 20% of the U.S. grid in 20 years, the wind industry would need to install about 300 gigawatts of nameplate capacity. That would require getting to a peak of 30 gigawatts a year, up from 4 gigawatts in 2007, which is 11 years of sustained 20% growth. The fundamentals for wind are not as good as they were for nuclear in the 1960s. It won't happen without a major breakthrough.
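
The arithmetic behind those numbers, assuming a 30% capacity factor (my assumption; the post doesn't state one):

```python
import math

# 20% of a 450 GW (average) grid, at an assumed 30% capacity factor
average_needed = 0.20 * 450          # 90 GW average output
nameplate = average_needed / 0.30    # GW of nameplate capacity
print(round(nameplate))              # 300

# Years of sustained 20% annual growth to go from 4 GW/year to 30 GW/year
years = math.log(30 / 4) / math.log(1.20)
print(round(years, 1))               # 11.1
```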

Monday, August 04, 2008

SpaceX launch 3 failure

I watched the YouTube video, which shows launch through 6:16 indicated (T+2:11). Two comments:
  • Can we please get a heater on the on-rocket camera lens cover? It's hard to see much through the condensation.
  • It looks like the rocket has a 5 second roll oscillation that starts as soon as they go supersonic. I don't know if this is fuel slosh, like on the second stage of the last launch, but it doesn't look like other rocket launches to me. Any kind of wobble before separation could cause the first stage to ding the second stage, as happened on the last launch.
I really hope their next attempt goes better. This launch wasn't obviously progress.

Constraints to wind power

Jesse Ausubel has a pretty good essay which describes what he thinks the future of power production will look like. It's a somewhat rosy picture, and although I'm also optimistic (but not for the next few years), I disagree on a couple of points.

He, like me, thinks that we've got to get away from coal, but I don't follow his reasoning.  It seems he has identified a long-term trend towards fuels with less carbon and more hydrogen, and he thinks we should make choices to perpetuate that trend.  As near as I can tell, he's skipped the part about why the trend is a good thing.  Perhaps he thinks consumers like lower-carbon fuels because they tend to burn with fewer combustion byproducts, but he doesn't back this claim up with any market analysis.

I think we as a country need to stop burning coal because
  • we import a lot of oil to burn coal (we spend almost as much on oil to move the coal as on the coal itself), so that the price of coal-fired power is quite sensitive to the cost of oil,
  • it is politically possible to install lots more windpower, but coal is seeing opposition, and it is vital to our economic health to get a lot more electric supply,
  • wind power is inelastic supply, whereas coal power is elastic. That is, a coal plant will shut down if the price of electricity falls below its operating costs, but a wind turbine costs almost nothing to run and will keep generating through a larger swing in electricity prices, which will make our electricity supply more predictable,
  • and finally and perhaps most importantly, because climate change matters.
My biggest point of disagreement is with Jesse's assertion that windpower is impractical due to land use constraints. Other, perhaps clearer-thinking people have made this same point. Jesse makes a very sobering calculation: he figures a wind farm produces 1.2 watts per square meter, average. To produce all of the U.S. grid's 450 gigawatts (average), you'd need a lot of land. Jesse calculates 780,000 square kilometers. The area of the U.S., for reference, is 9.16 million square kilometers, with 1.75 million square kilometers of cultivated cropland. I don't get quite as large a number as Jesse, but we'll take his 780k km^2 for now.
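
For reference, the straightforward division gives a noticeably smaller number than Jesse's 780,000:

```python
grid_average_watts = 450e9   # average U.S. grid generation
watts_per_m2 = 1.2           # Jesse's wind-farm power density

area_km2 = grid_average_watts / watts_per_m2 / 1e6
print(round(area_km2))       # 375000 (square kilometers)
```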

He figures that's just too much land. But this argument is trite. I'll skip over the point that farmland with wind turbines is still farmed land, and instead focus on a more basic question: How much is too much? I think too much is when the next wind turbine to be installed is projected to make no money. That could be all the farmland plus a lot of offshore turbines, or it could be just a few places in North Dakota. It won't be decided by people getting scared of erecting some more infrastructure on 44% of our existing cropland. Farms in the Netherlands in the 1800s were dotted with windmills, because that's what drove the pumps to keep the water out. Farms in the U.S. in the late 1800s were dotted with windmills, with parts shipped at enormous expense across the continent, because that's what pumped the irrigation water wells. Modern farms aren't currently dotted with wind turbines because they've been using oil instead.

Jesse's argument is also trite because it ignores the huge variation in windiness around the U.S. In North Dakota, the entire state is class 4 or above. That means the power available at 50 meters above the ground is 400-500 watts/meter^2. Even during the summer doldrums, the average power available is 300-400 watts/meter^2.

Jesse's 1.2 watts/meter^2 number comes from a wind farm in Lamar, Colorado.  That wind farm has 108 1.5 MW turbines spread over an 11,840-acre area.  Multiply by a 30% capacity factor, and you get 1.01 watts/meter^2.  (I'm not sure how he got the extra 20%.)  Why is this number so low?

It's economics. The company that owns the wind turbines pays the company that owns the land on which the turbine is sited approximately $3000 to $6000 per year per turbine. The net present value of that payment stream is $60,000 to $120,000. The turbine costs $1,500,000, which is a lot more. Spacing the turbines farther apart slightly increases the power from each turbine, at small increases in royalty payments and road and cable construction costs. If land scarcity ever becomes an issue for wind farmers, I would expect $ per watt and watts per km^2 to go up. Note that $/watt may go up slightly, while watts per km^2 may go up a lot.
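
A quick check of those figures (treating the royalty as a perpetuity at a 5% discount rate, which is my assumption):

```python
ACRE_M2 = 4046.86   # square meters per acre

nameplate_w = 108 * 1.5e6              # Lamar's nameplate capacity, watts
area_m2 = 11840 * ACRE_M2
density = nameplate_w * 0.30 / area_m2 # 30% capacity factor
print(round(density, 2))               # 1.01 watts/m^2

# Net present value of the land royalty, as a perpetuity at 5%
for royalty in (3000, 6000):
    print(round(royalty / 0.05))       # 60000, then 120000
```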

Consider that the first big wind farm, on the Altamont Pass, has a power density of 0.86 watts/m^2, which is lower than Lamar's density.  If you follow that link, you'll note that wind farms vary from 0.24 watts/m^2 (Pierce County, N.D.) to 5.3 watts/m^2 (Braes of Doune, Scotland).  I think land prices, more than turbine capability, are driving the energy density of these farms.

Note that the wind power map above quotes wind at 10 and 50 meters above the ground. Back when the Department of Energy began collecting data for these maps, those were considered the likely bounds of practically sized wind turbines. However, the Lamar turbine towers are 70 m tall. It turns out that the tower costs are mostly just steel, and the higher up you go, the faster the wind blows. After the industry got experience with the costs of siting, permitting, building, bird strikes, aesthetics, and so forth, it turned out worthwhile to spend more on steel in the tower and concrete in the foundation. As a result, watts per km^2 has gone up.

Is there a limit? Placing turbines closer together can collect more wind energy, but fundamentally most wind power is still being dissipated as turbulence and then heat higher up in the atmosphere. Bigger wind turbines reach farther up to capture more energy. It is hard for me to imagine that ground-based wind turbines are going to get substantially taller than they are now, and so I do not expect the average power yield to increase much beyond, say, 2 or 3 watts/m^2 average. 2 watts/m^2 across all of North and South Dakota would yield 750 gigawatts, which is why you hear wind advocates claiming that the Dakotas can power the rest of the U.S. They could, if you could transport the electricity to market.
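
The Dakotas claim checks out roughly (the state areas are my figures, not from the post: about 183,000 km^2 for North Dakota and 200,000 km^2 for South Dakota):

```python
dakotas_m2 = (183_000 + 200_000) * 1e6   # combined area in square meters
print(round(2.0 * dakotas_m2 / 1e9))     # 766 GW at 2 watts/m^2
```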

Finally, I doubt very much that, even if windpower is wildly successful, it will ever account for anything like 100% of the U.S. grid's production. If many coal plants are forced out of production by lower cost wind plants, I would expect that some very efficient mine-mouth plants will remain. I will be astonished (and pleased) if wind ever produces half the U.S. capacity. If that ever happens, wind turbines will be a familiar sight, but not an overwhelming use of land.

Jesse also complains that wind turbines take significantly more steel and concrete per watt than nuclear powerplants do. Obviously the steel and concrete are factored into the current prices of turbines, so they're already part of the price comparisons being made. There are two future risks to large use of concrete and steel, however:
  • Wind turbine prices in the future could be tied more closely to raw material prices (which in turn depend on the cost of energy) than to the price of labor (which depends on the state of the economy). This question resolves to whether future wind turbine prices are more sensitive to the cost of imported energy than electricity from coal is. Coal-fired electricity is fairly sensitive to oil prices, so I doubt this is a problem.
  • A large bump in wind turbine construction could use so much concrete and steel that it would distort the markets and cause large price increases.
The second issue got me to pull out the calculator again. Here are Jesse's numbers, actually Per Peterson's numbers, in context of the production necessary to build a 250 GWe average windpower grid (about half U.S. electric consumption):
  • Steel: 460 metric tons per MWe. The U.S. produces about 90 million metric tons of steel every year. Over the 30 years it would take to build a new US grid, wind turbines would require 1.3 years' worth of production.
  • Concrete: 870 cubic meters per MWe. The U.S. ready-mix industry produces about 350 million cubic meters a year, so we'd need 0.6 years' worth of concrete production.
These constitute a nice bump to domestic production, but are significantly less than ordinary year-to-year variation.
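Here's the calculator work, for anyone who wants to poke at it (the annual U.S. steel and ready-mix production rates are the round figures quoted above):

```python
# Check of the steel and concrete requirements for a 250 GWe average
# windpower grid, using Per Peterson's per-MWe numbers as quoted above.
GRID_MWE = 250_000                  # 250 GWe
STEEL_T_PER_MWE = 460
CONCRETE_M3_PER_MWE = 870
US_STEEL_T_PER_YEAR = 90e6          # annual U.S. steel output, as above
US_CONCRETE_M3_PER_YEAR = 350e6     # annual U.S. ready-mix output, as above

steel_years = GRID_MWE * STEEL_T_PER_MWE / US_STEEL_T_PER_YEAR
concrete_years = GRID_MWE * CONCRETE_M3_PER_MWE / US_CONCRETE_M3_PER_YEAR
print(round(steel_years, 1), round(concrete_years, 1))  # 1.3 0.6
```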

The bottom line: if the price is right (or even close), let's have all the wind turbines we can build, because it really could help with our foreign trade deficit, economic sensitivity to energy prices, and global warming.

Tuesday, July 22, 2008

Dumping Quicklime into the Oceans

Tim Kruger at Cquestrate has an idea for sequestering large amounts of CO2: dump quicklime (CaO) in the ocean.

The basic idea is to convert limestone (CaCO3) and CO2 into calcium bicarbonate (Ca(HCO3)2).

CaCO3 + energy -> CaO + CO2      (burn limestone into quicklime)
CaO + H2O -> Ca(OH)2             (dissolve quicklime in the ocean, making calcium hydroxide)
Ca(OH)2 + 2CO2 -> Ca(HCO3)2      (calcium hydroxide absorbs CO2, making calcium bicarbonate)

Net: CaCO3 + H2O + CO2 + 178 kJ/mol -> Ca(HCO3)2

The problem is the amount of energy required. Let's say it comes from coal. Typically, you can get 30 MJ/kg out of coal. Since coal is mostly carbon (12 g/mol), that heat comes with roughly 360 kJ per mol of CO2 emitted, so supplying the 178 kJ above produces about half a mol of CO2 just from burning the coal, even assuming perfect efficiency. That's half your benefit gone right there.

But it's a high-temperature reaction (840 °C). That means you have to get the reactants (calcium carbonate, coal, and a coal oxidizer, e.g. air) up to that temperature, react them, then drop the reaction products back down to normal temperature. For perfect efficiency, all of the heat from the cooling products would have to be transferred to the incoming reactants. There is going to be some loss.

Let's say you lose 25% of the coal heat, and 75% goes to making quicklime. Then, for every 2 kg of coal burned, you will eventually absorb the CO2 that was produced by burning another kg of coal somewhere else.

Bottom line: we'd have to triple the rate at which we burn coal to get carbon neutral with this scheme. That's not practical. It'll get better if we use natural gas or oil, but it won't change the basic calculation that we'd have to multiply our existing consumption of fossil fuels to get carbon neutral.
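Here's the whole back-of-envelope in one place, with my assumptions spelled out (coal treated as pure carbon, and the 25% heat loss is a guess):

```python
# CO2 bookkeeping for the quicklime scheme. Coal is treated as pure
# carbon (12 g/mol), so its 30 MJ/kg of heat comes with 360 kJ per mol
# of CO2 emitted. The 25% heat loss is the guess from the text above.
REACTION_KJ_PER_MOL = 178                  # CaCO3 + H2O + CO2 -> Ca(HCO3)2
COAL_KJ_PER_MOL_CO2 = 30e3 * 0.012         # 360 kJ of heat per mol CO2

# CO2 emitted by the kiln per mol of CaCO3 processed.
perfect = REACTION_KJ_PER_MOL / COAL_KJ_PER_MOL_CO2             # ~0.49 mol
realistic = REACTION_KJ_PER_MOL / (0.75 * COAL_KJ_PER_MOL_CO2)  # ~0.66 mol

# Each mol of CaCO3 absorbs 2 mol of CO2 and re-emits 1 during
# calcination, for a gross benefit of 1 mol. Net of the kiln's coal:
net = 1 - realistic                        # ~0.34 mol actually absorbed
print(round(realistic / net, 1))           # 1.9: burn ~2 kg of coal here to
                                           # absorb ~1 kg's worth elsewhere
```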

Now, if someone wants to tell me about a scheme in which limestone is burned in a solar furnace to make cement, I'm all ears. CO2 sequestration from such cement manufacture makes more sense than it does from coal-fired powerplants, because limestone burning (no air) releases pure CO2, whereas coal burning releases CO2 mixed with lots of nitrogen from the air. However, there are lots of other problems.

Sigh. We're not getting out of this mess easily.

Tuesday, July 15, 2008

The Pickens Plan

Check out the guy's website, if you haven't already. There is not a lot of meat there. Basically, the idea is that if we build enough wind turbines to provide 20% of our electricity, we can reduce the amount of natural gas that we burn to make electricity. This natural gas can be used to power special new cars, which will reduce our imports of petroleum.

Mr. Pickens' chief aim is to reduce U.S. petroleum imports. That's great, because that's the energy policy issue I care about most, too. However, I see two problems with his plan:
  1. As things stand now, large, fast swings in wind turbine output will have to be accommodated by throttling natural gas turbines. Gas turbines cannot throttle down to zero power efficiently, so even when the wind is blowing, a large amount of power will have to come from gas turbines running at partial throttle, ready to take over if the wind cuts out. If wind is supplying 20% of our domestic power, these partial-load gas turbines will have to supply some similarly large amount, and as a result there may not be much gas actually saved.
  2. I don't foresee a switch to cars that burn compressed natural gas. I suspect it would be cheaper, and have a larger and more immediate impact, to convert the natural gas (and some coal) into gasoline in a refinery, and then feed that into the existing transportation system.
I have two humble suggestions for Mr. Pickens, or energy policymakers.

1. Switch home heating to electric heat pumps.

In 2006, 5 billion gallons of distillate fuel oil were sold to residential users, almost all of it used to heat their homes. Ignoring refinery gain, this is 160.8 million barrels, or about 3.6% of the 4.5 billion barrels of oil imported that year.

Nearly all the houses heated by distillate fuel oil have grid electricity. These houses can be upgraded to air-source heat pumps for a few thousand dollars each. Electricity can come from coal or natural gas, either one of which is better than petroleum. The economics are probably already there for the switch, so some public education and low-cost financing should push homeowners to embrace heat pumps en masse. This can happen a lot sooner than moving the U.S. car fleet to compressed natural gas.

This switch can reduce our oil import bill without requiring the first step of lots of wind turbines. Maybe I'm just nitpicking, but $21.4 billion per year (for the 160.8 million barrels imported) seems like an interesting amount of money.

2. Make air conditioners work on intermittent electricity

This is also known as "Direct Load Control" or "Demand-side Management".

One of the problems with wind energy is that it's intermittent. Increasing the amount of wind generation in the national grid will increase the variation in load that the other generators must accommodate. This will cost money. It will cost less if the other generators have 10 or 15 minutes to accommodate the variation.

Air conditioners and heat pumps naturally store energy. It takes time to cool or heat a building. Usually, the pump cycles on or off every few minutes. If the utility has a fast way to shut down large numbers of compressors for a few minutes, it can filter out much of the short-term variation in load and supply. Instead of throttling gas turbines from 50% to 100%, a few minutes' notice gives the utility time to turn on gas turbines -- from 0% to 100%. That means that the 50% rated capacity that was otherwise being produced by a gas turbine can be produced by a coal-fired turbine instead, which is much cheaper.
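To make the idea concrete, here's a toy simulation (every number in it is invented for illustration): a block of compressors is paused during a ten-minute wind lull, and the rest of the fleet sees a perfectly flat load.

```python
# Toy sketch of direct load control: during a wind lull, the system
# operator pauses a block of compressors instead of spinning up gas
# turbines, so the dispatchable generators never see the dip.
WIND_NORMAL_MW = 1000
DEMAND_MW = 3000
HVAC_CONTROLLABLE_MW = 800   # compressors the operator may briefly pause

residual = []
for minute in range(30):
    wind = 400 if 10 <= minute < 20 else WIND_NORMAL_MW  # lull at t=10..19
    shed = min(WIND_NORMAL_MW - wind, HVAC_CONTROLLABLE_MW)
    residual.append(DEMAND_MW - wind - shed)             # load on the rest

print(set(residual))  # {2000}: the other generators see a flat load
```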

This change is a good idea regardless of whether a massive wind turbine build happens, because it will allow utilities to use less natural gas and more coal. That may alarm some folks. Some may see a hidden agenda here. I think if the same bill in Congress mandates Direct Load Control on HVAC devices, and guarantees a production tax credit for all non-carbon domestic sources for a decade, that should assure doubters and put some real fire in the market.

Right now, hydroelectric turbines are the cheapest load-following generation around. They produce just 7.1% of the electricity in the United States (2006). Unfortunately, all of this load-following capacity is already used.

For comparison, HVAC uses more than 29% (page 44 here, plus this, both from the EIA) of our generated electricity. Instantaneous control over this much load would be sufficient to accommodate any amount of wind power that we care to build. Of course, the utilities (really the system operators) can't control HVAC, yet. I don't think this is a problem, because we don't have 450 gigawatts of wind turbines yet either.

I suspect the average lifetime of HVAC equipment is around 20 years. If the government mandated that all HVAC equipment sold after, say, 2009 had Direct Load Control features, then we'd see about 15 new gigawatts of Direct Load Control every year. There is little danger of us building wind turbines faster than that in the near future.
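The 15 GW/yr figure comes from an assumption I should make explicit: roughly 300 GW of installed HVAC electrical load (my inference from the ~29% share above) turning over on a 20-year equipment lifetime.

```python
# Where 15 GW/yr of new Direct Load Control comes from. The 300 GW of
# installed HVAC load is my assumption, not an official figure.
HVAC_INSTALLED_GW = 300   # assumed total HVAC electrical load
LIFETIME_YEARS = 20       # assumed average equipment lifetime

new_dlc_gw_per_year = HVAC_INSTALLED_GW / LIFETIME_YEARS
print(new_dlc_gw_per_year)  # 15.0
```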

Wednesday, July 09, 2008

Burning coal is burning oil

I found some numbers for the oil cost of burning coal.

Freight trains in the United States burn 1 gallon of diesel to move a ton of freight 436 miles.

Average distance coal travels in US: 628 miles from mine mouth to powerplant. At $4.03/gallon, that's $5.80 for the diesel to move a ton of coal from the mine mouth to the powerplant, on average. Wyoming coal costs $9 at the mine mouth. So, electric producers pay almost as much for the diesel to move the coal as for the coal itself. Since marginal petroleum is imported, it's fair to say that coal is not entirely a domestic fuel.

The average powerplant cost for coal in the U.S. in 2006 was $34.26/ton. That's because coal mined outside of the Powder River basin in Wyoming costs a lot more to dig out -- the average mine-mouth price across the U.S. in 2006 was $25.16/ton. The difference is $9.10/ton, which is the cost of transport. The cost of diesel was a bit lower in 2006, but it looks like around half the transport cost is the diesel.

If the coal is 22 MJ/kg, and the plant is 35% efficient, then for each kWh at the powerplant you spend on average 1.8 cents for the coal. Just the fuel cost of the coal plant is more than the total operating cost of the Palo Verde nuclear powerplant, per kWh. This result is entirely independent of subsidies or clean coal. The black stuff is apparently just really expensive.
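Here's the 1.8 cents worked out (I've assumed short tons of about 907 kg; metric tons barely change the answer):

```python
# Fuel cost per kWh at the plant, from the delivered price quoted above.
COAL_PRICE_PER_TON = 34.26    # dollars per ton, delivered, 2006 average
TON_KG = 907.0                # short ton; an assumption on my part
COAL_MJ_PER_KG = 22.0
PLANT_EFFICIENCY = 0.35

thermal_mj_per_kwh = 3.6 / PLANT_EFFICIENCY            # coal heat per kWh
coal_kg_per_kwh = thermal_mj_per_kwh / COAL_MJ_PER_KG
cents_per_kwh = coal_kg_per_kwh / TON_KG * COAL_PRICE_PER_TON * 100
print(round(cents_per_kwh, 1))  # 1.8
```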

A while back, I snarkily suggested that mine mouth coal powerplants were a way to keep the pollution away from rich people. Looks like I was wrong:
  • Transporting a kWh of electricity 1000 miles increases the cost by 19%.
  • Transporting the coal necessary to make that electricity 1000 miles costs $14.49/ton, assuming cost is linear with distance. That's a 58% increase in the cost of the fuel. Assuming the fuel cost is 70% of the cost of producing electricity, that's a 40% increase in the cost of the electricity.
  • 4000 miles (across the continent) by electricity: increase cost by 107%.
  • 4000 miles by coal train: 160% increase.
What about the extra carbon? Transporting 1000 miles as electricity means you must make an extra 8.7% more electricity which gets lost in the wires, which produces 8.7% more CO2. Transporting 1000 miles by coal train burns 6.3 kg of carbon in the diesel to deliver perhaps 800 kg of carbon, which increases the total carbon released by 0.8%. Clearly the diesel locomotive is the lower carbon, if much more expensive, alternative.
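The carbon comparison, with my assumed fuel chemistry (about 2.7 kg of carbon per gallon of diesel, and about 800 kg of carbon per ton of coal):

```python
# Extra carbon released by moving coal 1000 miles by rail, versus the
# 8.7% line loss from moving the electricity instead. The per-gallon
# and per-ton carbon contents are my assumptions.
MILES = 1000
TON_MILES_PER_GALLON = 436     # freight train fuel economy, as above
DIESEL_KG_C_PER_GALLON = 2.7   # assumption
COAL_KG_C_PER_TON = 800        # assumption

diesel_c = MILES / TON_MILES_PER_GALLON * DIESEL_KG_C_PER_GALLON
train_overhead = diesel_c / COAL_KG_C_PER_TON   # extra carbon by rail
wire_overhead = 0.087                           # extra carbon by wire

print(round(diesel_c, 1), round(train_overhead * 100, 1),
      round(wire_overhead * 100, 1))            # 6.2 kg, 0.8% vs 8.7%
```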

Average distance coal travels in China: 230 miles. They're burning a lot less diesel to take advantage of their domestic coal.

Sunday, June 29, 2008

Keep women away from stairs!

I have summarized for your convenience the top 7 consumer killers in the United States in the year 2001, and swimming pools, for comparison.  I think the conclusion here is inescapable: women must be kept away from stairs. This is a significant issue for me because I live in a house with two stories and a basement, with my wife and three daughters. Although the statistics presented are not specific, it does appear that the problem is largely with older women, so we'll definitely have my mother-in-law stay in the downstairs bedroom.

As I write this my two older girls are directly behind me, messing around in the crib that we generally use for our youngest. A quick check shows that I should escort them outside where they can safely fool around in traffic on some ATVs!

Total deaths   Male accidents   Female accidents   Category
     202,104          767,142          1,274,004   Stairs, Ramps, Landings, Floors
      45,964          248,445            291,530   Beds, Mattresses, Pillows
      30,271          203,930            252,960   Chairs, Sofas, Sofa beds
      25,023          125,312            168,238   Bathroom structures and fixtures
      24,750          414,008            151,660   Bicycles
      21,239          169,834             38,022   ATVs, Mopeds, Minibikes
      19,085          150,667             72,498   Ladders, Stools
       5,322           88,864             72,894   Swimming Pools, Equipment

Wednesday, June 25, 2008

CO2 sequestration -- size of the kill zone

Sometimes, the underground reservoirs that store natural gas explode. Drilling wells into them makes this more likely. When wells explode, the gas generally ignites, making a spectacular flame that can be seen for miles. Aside from the loss of valuable fuel and equipment damage, well explosions generally aren't too big a problem for people living nearby.

One less noteworthy effect of a well explosion is that the CO2 generated from the combustion of the methane is carried high up into the atmosphere by the heat of combustion, where it is mixed by high-altitude winds (routinely 100 MPH).

One plan for CO2 sequestration from coal-fired powerplants is to inject the CO2 into old, empty gas wells. Like the methane, the CO2 is in a supercritical state in the well -- not so much a liquid as a very dense high pressure gas.

The difference between CO2 and CH4 comes when the well explodes. CO2 does not start a fire. Instead, it expands, and cools, and the cold CO2 will flow with the wind, against the ground, eventually dissipating.

A 1 GW (electrical) coal-fired powerplant will burn 2.2 GW (thermal) of coal (because it's about 45% efficient). That's about 7000 metric tons every day. It will produce 4.7 cubic kilometers of carbon dioxide per year, at standard temperature and pressure. That CO2 is fatal to mammals at concentrations greater than 4%.

So, if a sequestration field explodes after 10 years of sequestering the output from a 1 GW coal plant, it will create an invisible blob of CO2 that will be at least 7 km across before it dissipates to the point of being nonlethal.
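Here's the blob arithmetic, assuming for simplicity that the cloud is a sphere diluted uniformly to the 4% lethal threshold:

```python
# Size of the lethal CO2 cloud, following the figures above: 4.7 km^3
# of CO2 (at STP) per year of sequestration, ten years of storage, and
# a 4% lethal concentration. The spherical shape is my simplification.
from math import pi

co2_km3 = 4.7 * 10              # ten years of one plant's CO2 output
lethal_km3 = co2_km3 / 0.04     # volume once diluted to the 4% threshold
radius_km = (3 * lethal_km3 / (4 * pi)) ** (1 / 3)

print(round(2 * radius_km, 1))  # 13.1 km across: "at least 7 km" is
                                # actually conservative
```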

Think about this thing for a bit. CO2 inhalation is fatal within a couple of minutes, and I suspect it is disabling well before that. Who is going to detect this blob of gas before being overcome? You cannot see it. You cannot run from it. You cannot stay indoors to escape. You cannot start your car to drive away from it. As the wind wafts it across the scenery, it kills every animal in its path. It could go for 50 kilometers or more before wind shear mixes it with enough air to become safe.

Not in my back yard, if you please.

Monday, June 09, 2008

Anya's Gift

For Anya's 6th birthday, we had a huge party. Over 50 people came. It was a blast.

We've been trying to reduce the accumulation of toys in our house, and presents from that many people were going to be a problem. So we told people not to bring presents, or, if they did, to bring something suitable for the Ronald McDonald House, which is a temporary home for the families of kids undergoing serious treatment at Stanford Hospital.

If anything, the haul got better (oy! consumerism). Here is Anya delivering her presents to the charity.

Monday, June 02, 2008

Discovery Launch

I just got back from watching the Discovery launch. My boss, Ed Lu (former 3-time astronaut, second from left), hosted us, which really made the experience for me because he was able to introduce us to lots of folks. Every time we walked into a restaurant, and every 5 minutes while we were at Kennedy Space Center, someone would smile and come over to talk with Ed. NASA doesn't pay well and most folks don't get to try wacky things like we do at Google, but they seem to have great interpersonal relationships. It's heartwarming to see.

On launch day, we were 3 miles from the pad at the media site. This is as close as you can get. We had a lot of waiting around to do. Here is a cherry spitting contest.

I know there is a great deal of speculation out there about whether hacking on camera hardware at Google makes one a babe magnet. While such questions are only academic for me personally, I can tell you that getting out in the midst of a bunch of media types with some very customized photographic hardware attracts all sorts of attention. I don't actually know who this person is but I think we can all agree she's gorgeous, and she was very interested in the camera hardware and what Google was doing with it.

From our vantage point 3 miles away, the shuttle stack was just a little bigger than the full moon, which meant that the flame coming out the back was about that size too. There have been comparisons of the shuttle exhaust to the brightness of daylight....

Let me put that myth to rest. After two years of designing outdoor cameras, I can tell you that just about nothing is as bright as the sun. From our vantage point the plume had more angular size than the sun -- maybe 400 feet long by 100 feet wide, viewed from 3 miles, is 1.5 by 0.5 degrees, while the sun is 0.5 degrees across.  But the Shuttle plume is not as hot as the sun -- 2500 K at most, compared to 6000 K for the sun.  Radiated power increases as the 4th power of temperature, so the sun's surface is roughly 33 times brighter per unit of solid angle; even after crediting the plume with three times the sun's solid angle, the sun's delivered power per square meter is something like 11x larger.  Furthermore, most of the light coming from the Shuttle is in the deep infrared where you can't see it, whereas the Sun peaks right at yellow.  So my guess is that the shuttle was lighting us up with about 9,000 lux of illumination.  That's twice as bright as an operating room, and way brighter than standard office lighting (400 lux).  But it's just nothing like the 100,000 lux that you get outside in bright sunlight.  Nobody's going to get a suntan from the shuttle.  (Yes, the shuttle flame reflects off the exhaust plume, but the sun reflects off clouds, which are much bigger, so there is no relative gain there.)
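For the skeptical, here's the estimate step by step (the temperatures, angular sizes, and the 100,000 lux full-sun figure are the ones above):

```python
# The shuttle-plume brightness estimate. Solid angles are compared in
# relative units of square degrees; absolute units cancel in the ratio.
T_SUN, T_PLUME = 6000.0, 2500.0      # kelvin
sun_solid = 0.5 * 0.5                # sun: 0.5 x 0.5 degrees
plume_solid = 1.5 * 0.5              # plume: 1.5 x 0.5 degrees

radiance_ratio = (T_SUN / T_PLUME) ** 4                      # ~33x per unit
delivered_ratio = radiance_ratio * sun_solid / plume_solid   # ~11x delivered

plume_lux = 100_000 / delivered_ratio
print(round(delivered_ratio), round(plume_lux, -3))  # 11 9000.0
```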

Anyway, back to the people we got to meet. Here we are at lunch in the KSC cafeteria, the day before the launch. That guy two to my right is... named at the bottom of the blog. Have a guess. He had a really neat retro electronic watch and talked about how much he likes his Segway. Picture was shot by Jim Dutton, one of the F-22 test pilots who is now an unflown astronaut.

Here's a terrible picture of Scott Horowitz (former #2 at NASA, the guy who set the direction for the Ares rockets and Orion capsule) talking with Ed. The two were talking about their airplanes, a subject that gets both of them fairly animated ("I love my airplane. It's trying to kill me.").  Sadly, Ed's plane was later destroyed by Hurricane Gustav, while in a supposedly hurricane-proof hangar.

Sorry about the quality, it was incredibly crowded and Ed and Scott weren't posing. This was on the day of the launch. Scott came out and looked at our Street View vehicle, then narrated the launch for us. Scott is a former 4-time astronaut and has a great deadpan delivery ("okay we just burned off a million pounds of propellant"); he's probably done it a hundred times.

Here's Mike Foale, who Ed has closed the hatch on twice (that means Mike was in the crew after Ed at the ISS twice).

I enjoyed meeting the people and looking at the hardware quite a bit more than the spectacle of the launch itself. Basically, the Shuttle makes a big white cloud, climbs out, loud noises ensue, and within two minutes you can just make out the SRB separation with your unaided eyes, and it's gone. The Indy 500, for instance, is louder, and more interesting because there are always crashes and various anomalies, which are not usually injurious and are therefore lots of fun for the crowd. After meeting all those competent people who are working so hard to thread this finicky beast through a loophole in Murphy's law, I was just praying the thing wouldn't break on the way up.

P.S. That's Steve Wozniak, cofounder of Apple Computer.

Tuesday, April 29, 2008

How GPUs are better than CPUs

Intel has a great CPU core right now, AMD does not, and in combination with Intel having higher-performance silicon, Intel is currently beating AMD handily. Meanwhile, Intel and AMD are both integrating graphics into the CPU and NVidia probably feels sidelined. So NVidia says that the CPU is dead. I agree, a little.

Many things people want to do these days are memory bandwidth limited. Editing/recoding video, or even tweaking still pictures and playing games are all memory bandwidth limited. GPUs have far better memory bandwidth than CPUs, because they are sold differently.

The extra bandwidth comes from five advantages that GPUs enjoy:
  • GPU and memory come together on one board (faster, more pins)
  • point-to-point memory interface (faster, lower power)
  • cheap GPU silicon real estate means more pins
  • occasional bit errors in GPU memory are considered acceptable
  • GPUs typically have less memory than CPUs
When people buy CPUs, they buy the memory separately from the CPU. There are 2 chip carriers, one socket, a PC board, and one DIMM connector between the two. In comparison, when people buy GPUs, they buy the memory and the GPU chip together. There are 2 chip carriers and a PC board between the two.

CPU memory interfaces are expected to be expandable. Expandability has dropped somewhat, so that currently you get two slots, one of which starts out populated and the other of which may or may not be. The consequence is that the CPU-to-DRAM connection has multiple drops on each pin.

GPUs always have one DRAM pin for each GPU pin. If they use more DRAM chips, those chips have narrower interfaces. Because they are guaranteed point-to-point interfaces, they can run at higher speed, generally about twice the rate of CPU interfaces.

CPU silicon is optimized for single-thread performance -- both Intel and AMD have very high performance silicon. As a result, the silicon costs more per unit area than the commodity silicon the GPUs are built with. The "programs" that run on GPUs are much more amenable to parallelization, which is why GPUs can be competitive with lower-performance silicon.

It turns out that I/O pins require drivers and ESD protection structures that have not scaled down with logic transistors over time. As a result, pins on CPUs cost more than pins on GPUs, and so GPUs have more pins. That means they can talk to more DRAM pins and get more bandwidth.
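To see how the pin and clock advantages compound, here's a rough bandwidth comparison with invented but representative 2008-era numbers (128 data pins of dual-channel DDR2 against a 512-bit GDDR3 interface):

```python
# Peak memory bandwidth scales with interface width times transfer rate.
# Both configurations below are illustrative, not any specific product.
def bandwidth_gb_s(data_pins, transfer_rate_gt_s):
    """Peak bandwidth in GB/s for a given width (bits) and rate (GT/s)."""
    return data_pins * transfer_rate_gt_s / 8   # bits -> bytes

cpu = bandwidth_gb_s(data_pins=128, transfer_rate_gt_s=1.066)  # dual-channel DDR2
gpu = bandwidth_gb_s(data_pins=512, transfer_rate_gt_s=2.2)    # wide GDDR3

print(round(cpu, 1), round(gpu, 1))  # 17.1 140.8 -- nearly an order of magnitude
```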

All of the above advantages would apply to a CPU if you sold it the same way a GPU is sold. The final two advantages that GPUs enjoy would not apply, but are easy to work around.

The first is the acceptability of bit errors. GPUs do not have ECC. It would be easy to make a CPU/GPU that had a big wide interface with ECC.

The second is the memory size. GPUs typically connect to 8 or 16 DRAM chips with 32b interfaces each. It would be straightforward to connect with 64 DRAM chips with 8b interfaces each. If fanout to the control pins of the DRAMs becomes a problem, off-chip dedicated drivers would be cheap to implement.

So, I think integrated CPU/GPU combinations will be interesting for the market, but I think they will be more interesting once they are sold the way GPUs are sold today. Essentially, you will buy a motherboard from Iwill with an AMD CPU/GPU and 2 to 8 GB of memory, and the memory and processor will not be upgradable.

For servers, I think AMD is going in the right direction: very low power (very cheap) mux chips which connect perhaps 4 or even 8 DRAM pins to each GPU/CPU pin. This solution can maintain point-to-point electrical connections to DIMM-mounted DRAMs, and get connectivity to 512 DRAM chips for 64 GB per GPU/CPU chip.