Saturday, December 29, 2007

Why are there no GTCC plants doing CO2 sequestration?

Rod Adams makes an excellent point here. Go down a bit. 11th paragraph:
If it is relatively easy to capture the CO2 from an IGCC [Integrated Gasification and Combined Cycle coal-burning plant], why wouldn't we start working to prove that assumption by capturing the CO2 from at least several of the existing GTCC (gas turbine combined cycle) plants that use natural gas as their heat source?
CO2 sequestration for coal-fired powerplants is held out as the major way that America will reduce its CO2 emissions significantly over the next two decades. But CO2 sequestration requires a lot of tinkering with the plant. An IGCC is nice for efficiency, but it is not required. Several other really serious pieces of equipment are required, however:
  • Sequestration costs big money. Since you really don't want to unnecessarily sequester 4 times as much nitrogen as CO2, you separate that nitrogen and vent it. Since you don't want to separate nitrogen from the exhaust gas (you'd have to cool it), you separate it from the incoming airstream. Thus, the air filter on an ordinary plant is replaced with an expensive and energy-hungry plant with cryogenics, multiple turbines, and heat exchangers galore.
  • The exhaust must be compressed and liquefied to inject it into the ground. Most of the heat must be removed from the exhaust in order to compress it. In a normal coal-fired powerplant, a large fraction of the waste heat is rejected by simply venting the exhaust into the air. In a CO2-sequestering facility, you need a big heat exchanger and a cooling tower to do that work. Oh, and a larger fresh water supply.
Rod is right, the economics of all this stuff could be proved out on a GTCC plant, or even a plain old combustion turbine fired by nearly anything. I think it's pretty obvious that the carbon-burning electricity producers (coal and gas) benefit from deferring the installation of CO2 sequestration equipment. And there's no better way to defer installation than to defer development until after the development of a brand-new burner technology (IGCC) which will take a decade or two to roll out.

So, they talk about sequestration while they defer it as long as possible.

Interestingly, one of the side effects of concentrating the oxygen in the gas being burned is that the operating temperature increases, which could improve efficiency. Unfortunately, combustion turbines already run at temperatures higher than the melting point of the turbine blades... and probably cannot be run hotter. My guess is that exhaust CO2 will be cooled, recirculated and recompressed, and then used to dilute the oxygen in the incoming stream to lower flame temperature.

[Update: check the comments on this post. Harry Jaeger makes some nice points.]

Sunday, December 16, 2007

Teddy Bear Tea

I took my daughter to the Ritz-Carlton's Teddy Bear Tea today. $184 for a few dried-out finger sandwiches and a bunch of chocolates, a teddy bear, some singing, and a chance to get pictures with... a person-sized teddy bear. I couldn't help but think of how tasty a $184 dinner can be. Or how fun the local production of "'Twas the Night Before Christmas" had been the day before.
Children of all ages gather for a favorite family tradition at The Ritz-Carlton. Guests enjoy a fun-filled afternoon in festive surroundings featuring a storytelling Teddy Bear, a pianist, hot cocoa, tea, a selection of tea pastries and mini finger sandwiches, and a Christmas candy and sweets buffet table. Each child takes home a teddy bear and photo as souvenirs. $75 per guest, $65 for children 12 years and under, exclusive of tax and gratuity. For additional information or reservations, please call (650) 712-7040.
I could wonder how the Ritz-Carlton could end up serving crud for such an expensive lunch. Stories from Teddy may have happened before we got there, 10 minutes late. But why bother with these specifics? A more important question is: how did I ever end up in such a travesty?

I did ask, several times before going, what exactly this "tea" entailed. Martha was nonspecific. Since the other folks going were all in one of Martha's mother's groups, I knew essentially no-one. I'm antisocial as it is; dropping me into a mother's group without something to specifically contribute to the proceedings turns me into a stone wall. I went because I was led to believe that the event had already been paid for, Martha could not attend as she had a cold, so, I might as well see what we paid for. Instead, I got a 3-digit bill. I think the lesson here is to (a) ask for specifics beforehand, which I did, but then (b) refuse to go when specifics are not provided.

From Kathleen's point of view, there was: (a) nothing to climb on, (b) nothing to legitimately squish with her fingers, (c) nothing with which to draw on herself, nor stickers, fake tattoos, or dress-up clothes, (d) no pool, and (e) no kids singing or doing something else to be emulated. Even a desert wasteland would at least have had rocks to turn over.

If anyone from the mother's club reads this, let me get in a last word: it's not you, it's me. Given something specific to do and at least some semblance of DIY flair, I can have a great time with y'all. But I'm never going to convincingly pull off an hour of small talk.

Tuesday, December 11, 2007

ISS does not smell like old feet

I work with Ed Lu, who is a former astronaut who spent 6 months in the ISS, without taking a shower. I asked the obvious question, didn't you and everything else just stink?

No. Ed says that the air conditioning/purification system was ridiculously good, so much so that the only time you ever smelled anything was when you opened a food packet. Even then, the smell was whisked away pretty quickly.

I asked if there were problems with vapor from breathing condensing all over the interior walls of the spacecraft. Apparently not. The thing has hot spots as well as cold spots, and heat pipes to balance it all out, and lots of insulation over that. Apparently stuff doesn't freeze. Given that the thing is cold soaked in sub-liquid-nitrogen temps 45 of every 90 minutes, I'm amazed. I was expecting a story of two-inch-thick ice sheets on the interior walls.

Thursday, December 06, 2007

The US is building more wind power than coal

I've just read this report from the DOE, and though it doesn't talk about windpower at all, I find it quite exciting for wind's prospects.

The conventional wisdom has been that the small size of the turbines (generally about 2 MW each) and the unreliability of both the wind and the turbines makes it improbable that the bulk of our power needs can be met with wind.

Meantime, the installed cost of windpower has been dropping, and is now at something like $1300/kilowatt of peak capacity, and coal-fired powerplants have been getting more expensive ($2200/kilowatt), and gas-fired powerplants have been getting more expensive to run (they remain cheap to build at $600/kilowatt). That doesn't explain everything, but check out this statistic from the DOE report:

From 2000 to 2007, the U.S. built an average of 293 MW/year of new coal-fired capacity. In that time, wind build rate went from essentially nothing to... about 4000 MW in 2007! Holy cow, that's an order of magnitude more build than coal!

Now I understand that, like the long Nuclear Pause, there has been something of a moratorium on new Coal for a (shorter) while. And, I'm told there are lots of coal-fired plants in planning right now. But just for scale, note that the EIA projects that the U.S. needs 6000 MW/year of new capacity for the next couple decades. Even assuming a 33% utilization rate, wind is within an order of magnitude of producing ALL of that new capacity, right now.
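
A quick back-of-the-envelope check of that claim, using the figures above (the 33% utilization and the 6000 MW/year need are the assumptions stated in the text; the rest is just arithmetic):

```python
wind_built_2007_mw = 4000      # nameplate wind capacity added in 2007
capacity_factor = 0.33         # assume wind delivers about a third of nameplate
needed_mw_per_year = 6000      # assumed annual need for new capacity

effective_wind_mw = wind_built_2007_mw * capacity_factor
print(f"Effective new wind capacity: {effective_wind_mw:.0f} MW")
print(f"Fraction of annual need met: {effective_wind_mw / needed_mw_per_year:.0%}")
# -> roughly 1300 MW, or about a fifth of the assumed 6000 MW/year need
```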

It's no longer a question of whether wind can ever dominate coal... it's a question of whether coal can come back! Look at figure 2 in the DOE report, and project a growth curve for windpower at 1300 MW/year in 2007 rising to 3200 MW/year in 2012. Why is my 2007 wind number small? Because you have to divide windpower by 3 to account for the wind not blowing much of the time.

Anyway, what you see is that wind will outpace coal again in 2008, but coal will win in 2009 and 2010. But after that, all this new wind capacity is going to meet most of the need for new capacity, reducing the need for new coal plants (and greatly increasing the need for long distance power lines at the same time).

And, by the way, there are about a dozen new nuclear plants in the works, perhaps half of which will come online in 2012 or thereabouts. They'll eat even more of the demand that would otherwise go to coal.

Here's a satisfying question to ponder: what year will U.S. coal production peak, not from lack of supply, but from lack of demand?

Friday, November 30, 2007

A Manhattan Project

Charles Cooper wants a Manhattan Project to fix our dependency on foreign oil. The Manhattan Project was a good deal for most folks (U.S. of course, but I'll claim Japanese too) because a bunch of people they never met toiled away and produced something they never had to interact with which eliminated the need for all these people to fight and die.

Trouble is, we need to be saved from ourselves. It can be done, but we're all going to have to do the toil.

The most obvious thing we can do is switch to plug-in hybrids for our cars, so that the energy comes from something domestic (coal, hydro, nuclear) rather than something imported (gasoline). But that's just not enough. Look at the numbers:

EIA Petroleum Imports

EIA Petroleum Usage

For the week ended 11/23/2007, we imported 13.4M of the 15.5M barrels of oil we used each day (the EIA reports these as daily averages for the week). We turned that into 9.0M barrels of gasoline, 4.3M barrels of diesel fuel, and 1.4M barrels of jet fuel, again per day.

Just converting our car fleet to plug-in hybrids won't cut it. Even plug-in hybrids burn gas, just not as much. If, starting today, all cars sold in the U.S. were plug-in hybrids, then in two decades you might eliminate the equivalent of 6M barrels of today's consumption.

What else could we do? How can we convert that diesel usage to electricity? We could electrify our freight trains, and use trucks only for local hauling of cargo from business to freight terminal and back. That might eliminate half of diesel usage, call it 2.2M barrels/day. Together with the plug-in hybrids, that gets us down from 13.4M to 5.2M barrels, every day. Not enough to ignore OPEC.

Carving into that remaining 5.2M barrels/day will be really hard. A rationalization of our transport network might move a lot of freight and some people onto electric trains from planes. There is opportunity there: between parking and security, it takes two hours to get on a plane. If you can get on a 200 MPH train in 10 minutes vs 120 minutes for a 600 MPH plane, the train is faster for journeys shorter than 550 miles.
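
The break-even distance falls straight out of the boarding times and speeds above:

```python
# Solve: train_board + d/train_speed = plane_board + d/plane_speed
train_speed, plane_speed = 200.0, 600.0        # MPH
train_board, plane_board = 10 / 60, 120 / 60   # hours spent getting aboard

d = (plane_board - train_board) / (1 / train_speed - 1 / plane_speed)
print(f"Break-even distance: {d:.0f} miles")   # -> 550 miles
```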

Mr. Cooper thinks we should be investing in nuclear energy. But nuclear doesn't help us break free from OPEC; nuclear saves the environment from all that CO2. It's a separate issue, also very important, and very interesting really, since nuclear waste, even if it gets out, isn't really going to bother most birds and bees, but it is a problem for us bipeds who live to 70 years old and care about property values. If anything, nuclear transfers risk from the rest of the world back to us. Seems we don't like that, even if the total risk is reduced.

Tuesday, November 20, 2007

Geologic CO2 sequestration?

A friend of mine sent me a review of geologic CO2 sequestration in Australia and the United States. Quite interesting, very upbeat. I'm not buying it.

I think costs are a big problem here. Powder River Basin coal costs $5/ton at the mine mouth, and by the time it gets to the various powerplants, it's anywhere from $9/ton to $30/ton. The coal burned is about 75% of the cost of electricity generated, if you believe these guys. In other words, the cost of electricity in the U.S. is driven largely by the cost of transporting coal from mine to powerplant via rail.

Zoom in on the loopy thing at the bottom of the map below: that's a friggin COAL TRAIN at the mine mouth, at what I think may be the Black Thunder mine in Wyoming. These mines are operating at gigantic scale and are very efficient. Coal transport is handled by two competing train operators who are also efficient.


[Embedded Google Map -- View Larger Map]

Now for the problems with sequestration: CO2 weighs about 44/12 = 3.7x as much as the carbon it came from (somewhat less relative to the coal itself, which isn't pure carbon). Right there, big problem. More mass to move costs more.
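
A rough sketch of what that ratio means per ton of coal (the 50% carbon fraction is my assumption for Powder River Basin coal, not a figure from the post):

```python
coal_tons = 1.0
carbon_fraction = 0.5            # assumed carbon content of the coal
co2_per_carbon = 44.0 / 12.0     # mass ratio of CO2 to the carbon in it

co2_tons = coal_tons * carbon_fraction * co2_per_carbon
print(f"CO2 produced per ton of coal burned: {co2_tons:.1f} tons")
# -> roughly 1.8 tons of CO2 per ton of coal, all of which must be cooled,
#    compressed, and piped somewhere instead of vented up a stack.
```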

Worse still, you can't just transport CO2 in an open coal car on a railroad. Instead, you have to cool it (costs energy, capital equipment, access to water or some heat sink, etc), compress it (this costs energy and some capital equipment), then pump it through a high pressure pipeline. That's going to cost more than moving the coal did.

So, if the CO2 is useful for something, like oil or gas extraction with a result worth $0.25/pound or more, then that value can cover a lot of transport costs for the CO2. But if not, the transport cost of the CO2 from powerplant to sequestration site will come to dominate the cost of electricity in the U.S. And I think that any fix for the coal addiction we have now will have to be something that makes electricity for less money, not more.

Anyone want to argue that CO2 pipelines are going to be at least 4x cheaper than coal trains, or that deep CO2 sequestration is going to be more conveniently located than coal mines?

P.S. Southern California's scheme of having a mine-mouth powerplant ship electricity to beautiful people far away from the black stuff is just stupid. Transporting electricity is way more expensive than transporting coal. The scheme only makes sense because beautiful people are willing to pay extra to have their powerplants well downwind and out of sight. It's only a matter of time before Mexico wakes up to this fact and builds a bunch of nuke plants in Tijuana to ship the power across the border.

Wednesday, October 24, 2007

Missing Iniki

Me with a months-old Iniki
A grown-up Iniki with our friend Carol.


Iniki was really gentle and had a very soft mouth. She could eat anything out of your hand without bothering you in the slightest. If the kids bothered her she would lick them until they stopped. She was more of a snuggler than Bailey.

When we were out walking, Iniki always greeted other dogs with a bark, a lunge, and her tail held high. Other dog owners didn't always interpret this as friendly, but Iniki certainly meant well.

I remember once hiking downhill from Schilling Lake with the two dogs and Martha. I think we had just Anya with us. Iniki disappeared over the edge of the trail. I looked over the drop and decided there was no way I was going to try that, so she'd have to make it back up on her own. We could hear her crashing around down there, and she wasn't coming back up, so we decided to walk along a bit, calling to her, to see if she could find a way back up.

About five minutes later there was this incredible thrashing sound that just went on and on, and Iniki eventually emerged up through the bushes, legs tearing into the soft ground, hauling the entire back end of a deer up the cliff with her. She looked absolutely as pleased as could be, tail high in the air, as if to say, "Look what I found! I swear, nobody was using it! It was just sitting there!" She dragged that carcass after us for a mile or so before we got her to let go of it.




We travelled to Mammoth when Anya was just learning to walk. One evening while there I went on a short hike with Anya and the dogs. During this hike, we travelled by a frozen lake. Bailey was timid about getting out on the frozen surface, but Iniki just charged right out. On her way back in, she got to some thin ice and fell through.

Her head and shoulders popped back up, and she started paddling as best she could... back out into the middle of the ice. I think she knew she was in trouble, and she was trying to retrace her steps. Instead of going through 20 feet of thin ice directly to me, she plowed her way through a couple hundred feet of thick ice. The entire way, she would get her front paws up on the ice sheet, struggle to haul her upper body out of the water, only to crash back through the ice and into the freezing water. It took her 20 minutes or more to chop and grind a passage all the way through the ice back to the shore point where she'd first gotten onto it. Bailey and I waited there for her, me with my heart in my mouth wondering if she was going to freeze or drown. When she got out Bailey barked at her and then tackled her, as if to tell her, "You idiot! You scared the hell out of us!"




Another time, hiking above Schilling Lake, we found a recurring mudslide covering the path. Iniki smelled something in the mud, and pushed her nose into it, then her whole muzzle, and finally her whole head. I don't know what she found in there, but it was pretty funny to see this collar on the ground with a lab's body sprouting from it.




Iniki loved water. When we were out walking around, if she found something even moderately damp, she sat on it or got into it. Here she is enjoying a puddle near Blue Oaks in Portola Valley.

Monday, October 22, 2007

Iniki is dead

On Sunday, I was hiking with the family at Ed Levin Park. Our two black Labradors, Iniki and Bailey, found a rattlesnake in the middle of the path. We called, Bailey came, Iniki didn't, the snake tried to warn her off and after about ten seconds bit her on the muzzle. I'm pretty sure she was dead by the time we got her to the parking lot.

We were probably a mile up the trail, the dogs were in front, off leash, when they found the snake. I had Kathleen on my shoulders and Ava in the pack, probably about 70 pounds all told, on a pretty steep part of the trail. When I saw what the dogs were barking at (a few seconds at most), I got Kathleen on the ground headed back to Martha. Martha heard me yell "snake", and called the dogs. Bailey took off back down the trail, Iniki pointed right at the snake, her muzzle about 5 inches from the thing, barking but not striking. It struck at Iniki repeatedly while I tried to maneuver behind Iniki to go for a grab. I grabbed her by her flanks and yanked. Martha thinks the snake made contact right then.
I should have dropped the backpack with Ava immediately, and then advanced on Iniki. Had I done that, I might have figured out that I could separate Ava from the pack and then use it as a projectile. With Ava in the pack I was more awkward.

My brother-in-law says the snake's strike range is about 2/3 of their body length, so I was probably in range when I went for the grab -- not very smart. And I screwed around too long setting up the grab. I basically only asserted myself when the snake started striking.

Martha thinks Iniki might have left the snake if I had moved away from it. I don't think so -- 30 minutes earlier she was barking her head off at some dogs on the other side of a fence.
Iniki was a tough if gentle dog. She only whimpered a bit once I had her separated from the snake. I had her over one shoulder within ten seconds of the bite, and headed back down the trail. At this point I was carrying over 100 pounds on a trail, and I could not run.
Again, I should have dumped Ava with Martha as I passed her. Also, there were several other people within 100 feet. I could have gotten a volunteer to run down the trail with me, trading the load. That would have made a run possible, and also made it possible to check Iniki's airway as she started barfing.
I think the snake bite was very serious. Iniki was barfing and pooping within 2 or 3 minutes, and was unconscious within 5 minutes. This site suggests that death comes from blood loss and then shock "within hours" -- and we just weren't on that schedule.

I made it about halfway down before my arms got seriously wobbly from holding Iniki's weight. Martha caught up, grabbed the dog and kept going. She got 100 feet before she was out of gas. We put Iniki in the baby carrier backpack and I took her the rest of the way down in that. Martha noted when we put her in the pack that her whole rear end was very stiff.
The pack was much easier -- the way to go from the start. I might have been able to run had I started this way. The trouble was I couldn't see Iniki, and I was trying to talk to 911 while walking, and couldn't do that while running either.
Iniki thrashed around a bit about 30 seconds from the parking lot, which I took as a good sign she was still alive. But when I put her in the car a minute later, I'm pretty sure she was dead.
Later, when we got to the clinic, the doc told us she had aspirated vomit and choked to death. I now think she choked just as she got to the parking lot. I should have dumped the pack and checked her airway when I felt her bucking. I'm feeling seriously bad about this mistake right now.

That said, nobody seems to think she would have made it 25 minutes longer, so I'm not sure my mistake changed the outcome.
We were on an unfamiliar side of the Bay Area. I got someone from the dog park there to drive in front of me and lead me to a vet. Unfortunately, neither she nor the 911 operator I was talking with could find an after-hours weekend vet with anti-venin. It turns out there are only two in the Bay Area, one in San Leandro and one in Campbell. It took at least 25 minutes to drive to the one in Campbell. The doc pronounced her dead when I brought her in.

The biggest question in my mind is, what if it was one of the kids? Iniki was 7 years old, 65 pounds, and unable to control her own airway within 15 minutes. Anya weighs 38 pounds and Kathleen is more like 30. To even have a chance if they had been bitten, we would have had to have an ambulance meet us in the parking lot, maybe with anti-venin, and we would have had to run down the trail. I'm not sure the ambulance folks would have time to pick up anti-venin, and I don't think I could have run all the way down the hill. 911 would have worked better, of course, and there would have been a local hospital with the anti-venin, but it still seems pretty grim.

One big overall mistake here was that I fixed on the idea of getting the dog to emergency aid (and specifically anti-venin) as fast as possible, and neglected everything else. That'll work if aid is minutes away, but if not, it's critical to be able to maintain the basic body functions of the animal (or person!) until help arrives. After reviewing the literature, it seems that anti-venin is not a magic instant cure. Instead, snakebites seem like one more thing where most of what medical science has to offer is basic life support (oxygen, fluids) while one's body fixes the problem on its own.

In this context it is sort of irritating that the 911 operator couldn't give me basic instructions: check airway, breathing, heartbeat. Perhaps they would have done this eventually; I don't know because I had no cellphone coverage in the parking lot.

I'm feeling sad now.

Tuesday, October 02, 2007

More Lunar Hopper

Here's a specific mission mass budget:

The goal on the lunar surface is to deploy three HDTV cameras with motorized zoom, pan, and tilt. The cameras shoot stills or video, record to flash, and then send their bits to the main transceiver over an 802.11b link. The radio links require line-of-sight and fairly close range, less than 1 km. The camera and radio are powered-down almost all the time, and the onboard battery has enough juice for perhaps five minutes of camera operation and maybe 20 minutes of radio time.

Each camera sits on a little post with three legs and an anchor that secure it to the lunar surface. The anchor is explosively shot a foot or so into the lunar dust, and then a spring-wound mechanism tensions the hold-down string. The hold-down string is to keep the rocket plume from blowing the post over when the lander jumps.

The mission is to land somewhere with a good view of the surrounding terrain, deploy one camera, look around a bit, upload pictures/video, and let mission control find somewhere interesting to hop, then jump there and deploy another camera. Then do that again. Then do a third jump, after which we just use the camera on the jumper. The idea is that the first and later second cameras can get video of the jumper taking off and landing, then send that video back to the jumper, which sends it to Earth.

The camera weight with zoom and pan/tilt sets the mission weight. I don't know anything about spaceflight-qualified hardware, but I've looked at the MSSS web site like many of you. A little Googling around makes it look like pan/tilt heads are pretty heavy, but these are designed for Earth weather and Earth gravity.

  • HD video/still camera: 500 g, 4 watts
  • Zoom lens: 650 g, 0.5 watts
  • Pan/tilt head: 500 g, 0.5 watts
  • 5-foot post and three legs: 400 g
  • Explosive anchor and spring reel: 500 g
  • Battery: 200 g
  • Radio/computer: 250 g
  • Total: 3000 g


Two of these, plus a pan/tilt on the lander are going to be about 8 kg. My guess is that the lander's radio link will be about 4 kg, and the dry mass of the vehicle necessary to land all this will be another 18 kg for a total of 30 kg.

Descent from lunar orbit, landing, and two more hops will take 2000 m/s delta-V. If we're using a N2O4/UDMH hypergolic motor with 2500 m/s exhaust velocity, then we'll need 37 kg of propellant when in lunar orbit.

I think you want to do the earth exit burn, lunar orbit injection burn, and descent and hopping all with the same motor. You do it with drop tanks, which probably get blown off after the first lunar deorbit burn. This gets the mass in low earth orbit to around 400 kg, which is well inside what a Falcon 1 can lift from Omelek.
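
Here is a minimal sketch of the rocket-equation arithmetic behind those numbers. The 30 kg dry mass, 2000 m/s landing budget, and 2500 m/s exhaust velocity come from the figures above; the Earth-departure and lunar-orbit-insertion delta-Vs and the drop-tank allowance are my assumptions:

```python
import math

def propellant_needed(dry_mass_kg, delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: propellant to give dry_mass_kg the delta_v."""
    return dry_mass_kg * (math.exp(delta_v / exhaust_velocity) - 1.0)

ve = 2500.0    # m/s exhaust velocity
dry = 30.0     # kg of landed hardware

# Descent, landing, and two hops: 2000 m/s.
landing_prop = propellant_needed(dry, 2000.0, ve)
print(f"Propellant needed in lunar orbit: {landing_prop:.0f} kg")   # -> ~37 kg

# Working back to low Earth orbit with assumed burns: trans-lunar injection
# ~3100 m/s and lunar orbit insertion ~900 m/s, plus an assumed 15 kg of
# drop tanks and structure.
mass_in_lunar_orbit = dry + landing_prop
leo_mass = (mass_in_lunar_orbit + 15.0) * math.exp((3100.0 + 900.0) / ve)
print(f"Rough mass in low Earth orbit: {leo_mass:.0f} kg")          # -> ~400 kg
```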

Monday, October 01, 2007

Lunar hopper?

So, yeah, I work for Google, but I have no specific knowledge of the Lunar X Prize. I just took a look at their home page, saw the brief summary of the rules, but didn't find a complete draft. It looks like they are going to revise the rules a bit after some feedback.

Here's what I've been thinking: if you want to land on the moon, look around, and then get close to something else and take pictures of it, you don't really need wheels, because you've already got a rocket that knows how to land.

In the moon's soft gravity, it takes fairly small amounts of delta-v to jump a long way. In the moon's 1.62 m/s^2 gravity, you can get 50 seconds of flight time with an 82 m/s delta-v. Use some more delta-v to go sideways, and a bit more to maneuver for the landing, and you could cover 500 meters with about 100 m/s of delta-v.
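
The hop arithmetic, sketched out (the horizontal speed is my illustrative assumption; the rest follows the numbers above):

```python
g_moon = 1.62                 # m/s^2, lunar surface gravity

# Spend half the vertical delta-v going up and half killing the descent.
delta_v_vertical = 82.0
v_up = delta_v_vertical / 2.0
flight_time = 2.0 * v_up / g_moon
print(f"Flight time: {flight_time:.0f} s")          # -> ~50 s

v_horizontal = 10.0           # m/s sideways, assumed
print(f"Range: {v_horizontal * flight_time:.0f} m") # -> ~500 m
```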

Landing from a lunar orbit takes 1600 m/s of delta-v, so adding a few hundred for a few hops is not a huge increase. Yes, it's exponential, but if done with LOX/kerosene or hypergolics, a 2000 m/s total delta-v budget for the lander implies a very reasonable mass ratio.

Why hasn't it been done before? Multiple rocket hops would have been stupid for the manned mission, because the landing was the highest-risk portion of the mission. It's still the highest-risk portion, and the lunar hopper idea stands a very good chance of crashing one of its landings. But that's okay, because after a few hops the thing will run out of gas and be dead anyway.

Tuesday, September 18, 2007

I blog to think

About 30% of the entries I write for the blog never get posted, because I cannot get my reasoning straightened out. There are some entries I do post that I shouldn't have, for the same reason.

I write these blog postings because when I try to figure out something complicated, it helps to write it down. When I started a company by myself (10x), I had to make progress with no coworkers for three years. There was no-one with whom to talk over complex ideas. I ended up writing essays to myself, each designed to take someone from the state I had been in before writing the essay (confused) to the state I was in after writing the essay (enlightened).

I've just discovered Paul Graham's site. He does a better job of saying the same thing here.

Thursday, September 06, 2007

One of the problems with a Thorium-fuelled Molten Salt Reactor is starting it. Plutonium (and U-238) from reprocessed reactor waste is the most obvious start charge. The problem is that it takes several tons of Plutonium (and thus a ton or more of the U-238 that comes with it) to start the reactor, which will burn just one ton of actinides a year (soon mostly the U-233 introduced from the blanket), so it will be decades before the plutonium level is low enough not to be a problem in the waste stream.

The core geometry that David LeBlanc suggests is quite simple -- one big Hastelloy tube in a big unpressurized vat of blanket salt with no graphite. We could make it more complicated by having a three-fluid reactor, with two fuel salt tubes in a single big blanket. The first fuel salt would contain the start charge of Plutonium/U-238. The second fuel salt would be gradually charged with U-233 recovered from the blanket. Fission product separation would run on the second fuel salt but not the first. The idea is that the fission product buildup in the first fuel salt wouldn't ever rise to a level that would kill the reactor completely, and by immersing the start charge in the neutron flux for decades, you could eventually burn all the transuranics.

One other advantage of this arrangement is that if you had a reactor with a breeding ratio of, say, 0.95, you could insert a small amount (70 kg/year) of reactor-grade Plutonium into the first fuel salt loop to make up for the insufficient breeding. The fission products from these later additions would never add up to the same level as from the initial Plutonium charge, and so they would not poison the reactor either.
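
A quick sketch of that make-up arithmetic, using the one-ton-per-year burn rate from the first paragraph (the fissile fraction of reactor-grade plutonium is my assumption):

```python
burn_rate_kg_per_year = 1000.0   # ~1 ton of actinides fissioned per year
breeding_ratio = 0.95            # example breeding ratio from above

fissile_shortfall = burn_rate_kg_per_year * (1.0 - breeding_ratio)
print(f"Fissile make-up needed: {fissile_shortfall:.0f} kg/year")
# -> ~50 kg/year of fissile material. Reactor-grade plutonium is only
#    roughly two-thirds fissile isotopes (assumed), which is why the
#    make-up charge of plutonium lands around 70 kg/year.
```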

There is one other point I'd like to make about reactors with less than unity breeding ratio: the reactor is quite insensitive to the actual fissile load it carries. It would be quite reasonable to have a big start charge and subsequent make-up charges of Plutonium breed an extra 50% or even 100% more fissile than needed, so that the reactor could go for one or two decades without any further make-up. During those decades, the actinides in the first fuel loop can burn down to nothing. After the third decade of operation, while you are replacing the radiation-damaged tubes in the core, you can separate the Uranium and salt from the two fuel loops, dump the remainder as short-lived waste, and restart the reactor with the Uranium it stopped with, plus another, smaller start charge of Plutonium.

All this excess fissile material is a proliferation hazard in foreign countries. But the worst energy problem in the world is in the United States, where proliferation is not a problem -- we already have the Bomb. We do have to worry about diversion, but I frankly think that's a pretty small problem compared to the national security problem we face due to importation of oil.

Thursday, August 16, 2007

Contrast beyond measurement

Part of the challenge of taking pictures outside is that the world has a lot of dynamic range, and our ability to capture that dynamic range is fundamentally limited by the flare in the lenses that we use. So, I've been working on reducing flare in my lenses for a few months.

It turns out there are some guys, Paul Boynton and Edward Kelley, at NIST who had a similar problem (here's the link). They were trying to measure the contrast of LCD displays. It turns out that customers demand higher contrast from their displays than a standard camera can directly measure, because of limitations of veiling flare in the camera. To reduce flare from air/glass interfaces, they built a camera with no air/glass interfaces by filling it with liquid. Totally cool. But also very geeky, not the kind of thing you'd expect to bump into in the day-to-day world.

This morning the DJ on the radio was reading an ad for a Pioneer LCD TV, and claimed that it had "contrast beyond measurement". The person writing that ad probably doesn't know what that means, but I wonder who he heard it from. I find it funny to think about that phrase working its way from one of the few thousand people who actually care, through executives and ad campaigns and broadcast radio, to me, one of the other few thousand who actually know something of the back story.

Random disorganized blog thread: The high-contrast LCD TV thing raises two leading questions: how are broadcasters producing those high-contrast signals in the first place, if cameras can't capture that much dynamic range, and how is it that customers can discern contrast levels that cameras cannot?

I think the answer to the first question is that broadcasters are stretching image contrast before display, probably to make up for veiling glare on the air/glass interface at the front of the LCD.

The answer to the second is that the human eye probably has less veiling flare than a camera, because it has just one air/liquid interface. I wonder if the human eye has dichroic antireflection coatings/layers on the exterior air/solid interface? I know we've not yet evolved correction for longitudinal or lateral chromatic shift (achromatic and apochromatic lenses), which I think is odd, given the sharpness benefits.

PC sync output from Canon 1D Mark III

Recently I was faced with the problem of generating a TTL compatible pulse from the PC sync output of a Canon 1D Mark III digital SLR.

This ought to be pretty easy. The sync jack on the camera has two contacts: the center pin and the shield, normally disconnected. When the shutter fires, a switch momentarily closes between the two. My understanding is that older flash units would use this switch closing to discharge a capacitor through a xenon flash tube.

So the circuit is trivial: a 5V supply, a resistor from +5V to the TTL output, connect the grounds, and the PC sync goes between the TTL output and ground. When the shutter fires the camera pulls the TTL output low; otherwise it gets pulled high through the resistor.

Except: the Mark III doesn't open the switch until the current has stopped flowing. That's not a bad thing; the camera thinks it is discharging a cap, and keeps going until the cap is drained or the arc in the xenon tube collapses. It does mean that I needed to build a one-shot instead of the single-passive circuit, but that's not really so bad.

The problem is that Canon hasn't documented this behavior anywhere. Worse still, their phone support folks told me the exact operation of the PC sync terminal is proprietary (after keeping me on the phone for over an hour). This is a standard interface! They already have a proprietary flash interface on the top of the camera -- there is hardly any need for another one.

Call me disgusted.

Sunday, August 05, 2007

Buying a house -- lessons learned

We tried to buy a house without using a buyer's agent. We got the house, but ended up with an agent. Here are our lessons learned about the transaction itself:
  1. Watch the language. The legal language is different, and more accurate, than the language used by the agents themselves. The "seller's agent" is legally called the listing agent, and the "buyer's agent" is legally called the selling agent. If you are buying a house, this is supposed to clue you in that "your" agent is really not acting in your interests.
  2. Extract more money from the mortgage broker. The mortgage broker gets a huge kickback from the bank: 1.5% in our case. We had multiple mortgage brokers find us loans, and we told the ones that were more expensive to come up with a better offer. What we did not realize is the size of their kickbacks. We could have asked, for instance, for 1% of the loan amount to be paid back to us at closing by the mortgage broker. We also talked directly to banks, and were unable to get a better loan than what we got through a broker. This seems like stupid behavior on the part of the banks.
  3. It's hard to avoid a selling agent. We found our own house, and told the listing agent that we did not want to use a selling agent. We figured we could save the 3% that the selling agent usually charges.
    1. The listing agent reacted very negatively (as did everyone else in the real estate business to whom we suggested this idea) when she heard this. She said her sellers would not give us a fair hearing unless we had an agent. She had no sensible explanation why. I finally phoned the seller at home, and left a message saying I wanted to hear directly from him that he wanted us to have an agent. What I got was a vague message back from the listing agent hinting that we needed a selling agent. We really liked the house; I caved in. I suspect but don't know that if I had used the "listing/selling agent" terminology instead of "seller's and buyer's broker" terminology, I might have broken through.
    2. We used Todd Beardsley as a selling agent. We found him in a posting at Mike's Lookout (Mike also used Todd). Todd charges 1% and rebates whatever extra the sellers are offering (typical selling agent commissions are 2.5% to 3%). He doesn't help you find the house, he just helps the negotiation. It worked, the listing agent accepted him immediately. I thought Todd was very professional, and would recommend him with one caveat: Like all selling agents, Todd is incented to (a) get you to buy the house, and (b) get you to pay as much as possible. He is a professional, but the incentive leaks through. For instance, as an opening strategy, Todd suggested that we figure out the maximum amount of money we would be willing to pay for the house, and offer that. No way!
    3. A few years ago, we bought a plot of land without a selling agent. In that case, there were no competing bids and the sellers were motivated (the land had been dropping in value and they had been trying to sell it for two years). The way it worked was that the listing agent pretended to represent both buyer and seller, and changed his fees to the seller from 6% down to 3%. The Mike's Lookout post above suggests that it's unusual for the listing agent to renegotiate his commission like this. I don't think so. Another real estate agent that we have worked with has told us that the commissions get renegotiated all the time, for instance when selling agents are trying to close the last 1% or so between the buyer and seller.
  4. Never counter-offer all of your bidders. When you counter-offer your highest bidder, you are rejecting their bid, placing the bird in hand back in the bush. In our case, I'm pretty sure we were the highest opening bidder of three. One other was a low-ball or nonserious bid. The sellers counter-offered all of us, basically trying to ratchet us up. What they managed to do, instead, was tell me that I was the highest bidder. When I lowered my bid, they countered with my original bid, which told me the other bidder hadn't matched my lower bid. I should have gone lower still. Instead, I caved. Even so, the counter-offering-everyone strategy resulted in us paying less.
  5. Pay the selling agent directly, instead of allowing the listing agent to do it. In our case, Todd was able to negotiate this with the listing agent during escrow. It's worth a lot to the buyers, since they pay property tax on the amount paid to the agents for as long as they own the house. But it's also worth money to the sellers: they save transfer tax, and perhaps a few other items.
  6. Try to pay the listing agent directly, instead of allowing the seller to do it. We didn't try this, because we couldn't figure out how to do it. You might phrase your offer as an amount for the house and a fixed amount for the listing agent. That way, the listing agent and seller can renegotiate their terms without involving you, and it doesn't appear that you are incenting the agent to sway his client, to whom he owes a theoretical professional obligation.

Friday, June 22, 2007

Eating their way to carbon neutrality

At work we got to talking about the incredible size of blue whales. The relevant stats are at http://www2.ucsc.edu/seymourcenter/PDF/2.%20Ms.%20B%20measurements.pdf

At the bottom of this document it talks about how much krill these things would have eaten. Krill live near the surface, in perhaps the top 1 meter of water, so 15 billion cubic meters of ocean is something like 15,000 square kilometers, which is the area of maximally krill-swarmed Antarctic water that the blue whale population would have filtered through each year, before we killed nearly all of them. I don't know how fast krill populations reproduce, but it seems like that's enough consumption to materially affect the local environment.

Compare 136 million metric tons of krill per year eaten by all those whales (blue, fin, humpback and sei) to about 700 million metric tons of oil a year consumed by the United States. Obviously krill don't have quite the energy content of crude oil, but the notion that the numbers are even comparable is just boggling.

The document suggests that krill have energy content of 3.8 MJ/kg, which is considerably lower than the approximately 25 MJ/kg of crude oil. Each day a blue whale would eat 3 tons of krill and gain 400 kg, so if the weight gain was mostly fat and not water, the whales would have to be converting nearly all the swallowed krill energy into fat. Later that would be burned off into CO2, so my guess is that these whales were gigantic hydrocarbon burners, consuming energy equivalent to 3% of U.S. oil consumption.
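
The 3% figure follows from the tonnages and energy densities quoted above:

```python
krill_tons_per_year = 136e6      # krill eaten per year by the whale population
oil_tons_per_year = 700e6        # U.S. oil consumption per year
krill_mj_per_kg = 3.8
oil_mj_per_kg = 25.0

krill_energy = krill_tons_per_year * 1000 * krill_mj_per_kg   # MJ/year
oil_energy = oil_tons_per_year * 1000 * oil_mj_per_kg         # MJ/year
print(f"Whale krill energy vs. U.S. oil energy: {krill_energy / oil_energy:.1%}")
# -> about 3%
```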

Almost all those animals are gone now, so I wonder what is happening to all those krill down near Antarctica right now. Nature abhors a vacuum.

Thursday, June 07, 2007

Fertilizing the ocean with iron

John Martin suggested seeding the South Pacific with iron ("The Iron Hypothesis") to increase the photosynthetic activity there. (Here's a clip of Richard Barber describing the idea.) If this increase is to sequester CO2, some of the carbon fixed from the atmosphere has to fall into the deep ocean rather than being respired by animals. Generally, live animals don't fall into the deep ocean, but excrement and other dead organic debris (referred to as "marine snow") do. So far as I know, nobody knows the carbon content or overall rate of this marine snow, and certainly nobody has any idea how it might change if you dumped a bunch of iron into the water.

One thing is clear though: dump iron into any of a number of spots in the ocean and you get a massive increase in biological activity. More phytoplankton, more zooplankton, and according to one report, more larger fish from surrounding areas swarming in to eat the bounty. This makes sense to me: these productivity spikes have probably been happening for millions of years from dust storms. Fish can probably smell the extra nutrients or some other related effect, and I'm sure the effect is like a temporary oasis in a desert.

What is less clear, but certainly possible, is that the increase in productivity at the base of the food chain leads to an increase farther up. That's interesting to me because I don't eat a lot of zooplankton myself, but I do enjoy tuna, salmon, and a number of other pelagic fish which are all under pressure from commercial fishing. I'd certainly support my tax dollars going to a study to find if iron seeding increased the productivity of a fishery. If it did, you'd think the commercial fishermen would be more than willing to take some iron fertilizer out with them on each trip.

Fishery fertilization might significantly improve the global human food supply, both in quantity and quality. If it works, you'd have fairly wide-scale and sustained fertilization, which would make the carbon sequestration (and other) effects of fertilization much easier to study. After a decade or two of that, you might have enough information to know whether more massive fertilization might help with the global warming problem.

Wednesday, May 23, 2007

Britain Prepares to Meet the Energy Challenge

The International Herald Tribune has an article on Britain's white paper: Britain sets out plans to secure energy and fight warming. The British Government has done a good job studying and writing up their position, in order to inform their populace and encourage a better debate. So, why doesn't the article give the reader the link to the white paper itself? What is wrong with the people at Reuters?

As the government points out, conservation is a big idea -- some kinds work, some kinds don't. Overall, everyone would like to see their economies produce more wealth per energy expended.... Most folks would also like to see more wealth, too, so it's a horserace to see whether energy consumption goes up or down. If energy is coming from hydrocarbons, it's uncertain whether CO2 production will go up or down. (This kind of race can go the wrong way: Russia has seen its CO2 production go down since the end of the Soviet Union, which is due to factories shutting down and overall economic slowdown, rather than increased efficiency.)

Nuclear is a well understood way to make a lot of low-carbon energy. If you build enough nuke plants, you can ensure that in any reasonable scenario, you can drive carbon emissions down, even if there is a boom.

Here is the British Government's Energy Review.

Here is the press release on the white paper.

Here is their nuclear energy paper, published as an addendum to their main paper. This is a logical structure for their presentation: the main paper deals with all the approaches to energy security and global warming, and the addendum deals with one of those approaches.

Saturday, May 05, 2007

Read and Enjoy

I think this website is a parody, but even if not, it's just as much fun:

http://www.429truth.com/

The premise is that the tanker crash and fire which destroyed a freeway ramp in Oakland is part of some Al Qaeda-type attack. Some juicy quotes:
  • The name “Macarthur Maze” has 13 letters. Element 13 is Aluminum, an ingredient in Thermite. Aluminum oxide was discovered all over the scene, indicating a massive thermal event involving large amounts of aluminum. Unlike hydrocarbon fire, aluminothermic reactions can fuse steel or destroy it entirely. The police refuse to question Custom Alloy, a pro-aluminum corporation based within easy artillery range of the overpass.
  • We admit our mistakes openly and in good faith to preserve our credibility, advance the truth, and emphasize above all else that this website is not a joke.
I generally like my parodies to be a bit more obvious, like the Onion. This site is a bit edgier since it seems more sincere, but it's still pretty funny.

McCain's visit to Google

John McCain visited Google. My notes:
  1. He said, energy independence is going to require nuclear power, and he referenced France generating 80% of their power from nuclear.
    I would say that mitigating global warming is going to require nuclear power. And not just "part of the mix", but a massive investment of a scale France has never needed to consider -- 500 new gigawatt reactors in 25 years. Energy independence, on the other hand, merely requires domestic energy, which is going to be coal before it's going to be nuclear. I think McCain's sense of this issue is not well nuanced yet. He admitted that 8 years ago he knew very little about global warming, and claimed he'd learned a lot since.
  2. He said, America has to enforce its borders, evict illegal immigrants (put them at the back of the line behind everyone else waiting for a visa), set up a broader-scale temporary guest worker program, and let those who wish to study at our universities come.
    I agree that allowing foreigners to study at our universities, effectively subsidizing their education which they then take back home with them, is a good way to export America's core values, which makes the world a safer place. Sometimes I wonder if I should believe this, when it seems so many foreign tyrants studied at U.S. institutions (e.g. Idi Amin).
  3. Some senator apparently said that America is losing the Iraq war, and McCain apparently claimed that if we lost, Al Qaeda won, and that's not acceptable. One Googler suggested that perhaps everyone lost. McCain had a lame answer for this, essentially that if someone loses, someone else must have won, that's just logic.
    It seems to me the big winners in Iraq are Iran and Al Qaeda. Iran managed to get the Great Satan to fight the war Iran was incapable of winning on its own. The Shi'ite majority in Iraq seems to lean towards Shi'ite Iran, and has clearly gone from oppressed underdog to presumed incumbent in the midst of a civil war.
  4. I had never seen McCain in person before. He clearly enjoys talking with people. He liked taking questions from the audience. He appears spry. He also says a lot of things that would appear to be uncomfortable politically -- he said the folks in charge of Iran had some "cockamamie" (he used that word) idea that (I'm having trouble remembering the exact details) the 13th Imam was going to cause a holocaust wiping the nonbelievers off the earth. He then said that Iran had a large population of well educated people with more moderate views who wanted a less oppressive government. He also said, at another time in the talk, that he felt the neighboring governments around Iraq, Iran included, were going to want to help with Iraq's problems. He contrasted Iran with North Korea, where he implied that the people are not well educated, which seems unnecessarily frank.
    His description of Iran's leader's cockamamie ideas sounds nearly identical to those of our religious right. McCain toed the party line in 2004, and I wonder to what extent he is beholden to the kooks who took control of the Republican party in the 1990s.
  5. McCain claimed the Republican party was the party of small government, but at the same time stood for several other expansive ideas, chief among them the idea that life, liberty, and the pursuit of happiness were something America should export to the rest of the world.
    I'm hopeful that McCain would attempt to be more cautious with the budget, but it's clear to me that neither the Republicans nor the Democrats have any sense of frugality.
  6. McCain said the Republican party was for minimal governmental regulation, but he failed to motivate what the minimum amount of regulation might be.
    It's clear to me that the free market is a game. The government sets the rules and keeps people from cheating, and the players optimize their return within those rules. The rules have to be set carefully to allow creativity to blossom, while at the same time constraining the players not to destroy the community for their own benefit. It would be nice to hear someone espousing small government clarify the difference between small government and anarchy.
Anyway, I was very impressed. McCain seemed much more willing to commit himself to a vision of the future than Hillary Clinton was about a month ago when she visited.

McCain's vision seems a bit like mine. McCain might just love talking more than thinking. (One wonders what he is like talking to foreign, perhaps unfriendly, leaders.) I'm also concerned he opposes the right to abortion. Given that he'll face a Democratic congress, this won't deter my vote for him.

Clinton doesn't seem to have a vision except excessive rosiness and lots of what the government owes the people. Her attitude towards the Iraq war is that Bush started it, and he should finish it, and if he doesn't, she will. But that's really short on detail, and suggests she would prefer it wasn't there. McCain, on the other hand, seems abundantly clear on the implications of pulling out, and simultaneously clear on the mess we have now. I think he's likely to escalate our involvement in an honest attempt to salvage a positive outcome. And, I think he probably has a better sense than anyone else I've seen of how to handle it. And, I think the Iraq war and the larger security situation around it is the central issue of this election.

So long as his running mate isn't some religious wacko (e.g. G. W. Bush clone), I'm probably going to vote for McCain, and pray he stays healthy for four years.

Saturday, April 28, 2007

Better Video -- Gamma and A/D converters

Okay folks, this is going to get somewhat detailed, because I think I have a half-decent and possibly new idea here. As you read along, just keep in mind that the overall goal is to make a ramp-compare A/D converter that has really large dynamic range (16 bits) and goes fast so we can minimize the frame scan time.
[The referenced wikipedia article has some significantly wrong bits. Just reading the referred articles shows the problems.]
First we're going to talk about gamma. Most digital sensors generate digital values from the A/D converter which are linear with the amount of light received by the pixel. One of the first steps in the processing pipeline is to convert this e.g. 12 bit value into a nonlinear 8 bit value. You might wonder why we would go to all that trouble to get 12 bits off the sensor, only to throw away 4 of them.

Consider just four sources of noise in the image for a moment:
  1. Readout noise. This noise is pretty much constant across varying light levels. For the purposes of discussion, let's suppose we have a standard deviation of 20 electrons of readout noise.
  2. kTC noise. Turn off a switch to a capacitor, and you unavoidably sample the state of the electrons diffusing back and forth across the switch. What you are left with is kTC noise, e.g. 28 electrons in a 5 fF well at 300 degrees K. Correlated double sampling (described below) can cancel this noise.
  3. Photon shot noise. This rises as the square root of the electrons captured.
  4. Quantization noise. This is the difference between the true signal and what the digital system is able to represent. The standard deviation is the step size between quantization levels divided by the square root of 12.
You can't add standard deviations but you can add variances. To add these noise sources, take the square root of the sum of the squares. So, if we have a sensor (such as the Kodak KAF-18000) with 20 electrons of readout noise, and a full well capacity of 100,000 electrons, read by a 14 bit A/D with a range that perfectly matches the sensor, then we will see total noise which is dominated by photon shot noise. I've done a spreadsheet which lets you see this here.
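
Here is a minimal sketch of that noise bookkeeping, using the example numbers above (illustrative only, not a reproduction of the linked spreadsheet):

```python
import math

def total_noise_electrons(signal_e, read_noise_e=20.0, full_well_e=100_000, adc_bits=14):
    """Add the independent noise sources in quadrature, in electrons RMS."""
    shot = math.sqrt(signal_e)                 # photon shot noise
    step = full_well_e / 2**adc_bits           # A/D step size in electrons
    quantization = step / math.sqrt(12)        # quantization noise
    return math.sqrt(read_noise_e**2 + shot**2 + quantization**2)

for signal in (100, 1_000, 10_000, 100_000):
    print(f"{signal:>7} e- signal -> {total_noise_electrons(signal):6.1f} e- noise")
# At high signal levels the shot-noise term (square root of the signal)
# dominates both the 20 e- read noise and the ~1.8 e- quantization noise.
```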

Amazingly enough, we can represent the response of this sensor in just 7 bits without adding significant quantization noise. This is why an 8-bit JPG output from a camera with a 12-bit A/D converter really can capture nearly all of what the sensor saw. JPG uses a standard gamma value, which is tuned for visually pleasing results rather than optimal data compression, but the effect is similar. 8-bit JPG doesn't have quite the dynamic range of today's sensors, but it is pretty good.

The ramp-compare A/D converters described in my last blog entry work by comparing the voltage to be converted to a reference voltage which increases over time. When the comparator says the voltages are equal, the A/D samples a digital counter which rises along with the analog reference voltage. Each extra bit of precision requires the time taken to find the voltage value to double. When we realize that much of that time is spent discerning small difference in large signal values that will subsequently be ignored, the extra time spent seems quite wasteful.

Instead of having the reference voltage linearly ramp up, we could have the reference voltage exponentially ramp up, so that the A/D converter would generate the 8b values from the gamma curve directly. The advantage would be that the ramp could take 2^8=256 compares instead of, say, 2^12=4096 compares -- a lot faster!

It's not quite so easy, however. In order to eliminate kTC noise, the A/D converter actually makes two measurements: one of the pixel value after reset (which has sampled kTC noise), and another of the pixel value after exposure (which has the same sample of kTC noise plus the accumulated photoelectrons). Because the kTC sample is the same, the difference between the two has no kTC noise. This technique is called correlated double sampling (CDS), and it is essential. Because gamma-coded values are nonlinear, there is no easy way to subtract them -- you have to convert to linear space, then subtract, then convert back. As I mentioned, for a typical 5 fF capacitance, kTC noise at room temperature is 28 electrons, so this can easily dominate the noise in low illumination operation.

So what we need is an A/D that produces logarithmically encoded values that are easy to subtract. That's easy -- floating point numbers!

If we assume we have a full well capacity of 8000 electrons and we want the equivalent of 10b dynamic range but only need 6b of precision, then the floating-point ramp-compare A/D does the following:
Mantissa 6 bits, 8 e- step size
64 steps of 8 e- to 512 electrons, measure kTC noise
64 steps of 8 e- to 512 electrons
32 more steps of 16 e- to 1024
32 more steps of 32 e- to 2048
32 more steps of 64 e- to 4096
32 more steps of 128 e- to 8192

That's just 256 compares, and gets 10b dynamic range, so it's 4x faster than a normal ramp-compare.
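
Here's a small Python sketch that builds that ramp schedule and counts the compares; the 8000 e- well, 8 e- LSB, and 6-bit mantissa are the assumptions of this example, and the function name is mine:

```python
def fp_ramp_schedule(full_well_e=8000, lsb_e=8, mantissa_bits=6):
    """Step schedule for a floating-point ramp-compare A/D: a full set of fine steps
    first, then half as many steps per octave with the step size doubling each time."""
    steps, level, step = [], 0, lsb_e
    count = 2 ** mantissa_bits                  # 64 fine steps to 512 e-
    while level < full_well_e:
        steps.append((count, step, level + count * step))
        level += count * step
        step *= 2
        count = 2 ** (mantissa_bits - 1)        # 32 coarser steps per further octave
    return steps

schedule = fp_ramp_schedule()
for count, step, top in schedule:
    print(f"{count} steps of {step} e- up to {top} e-")
signal_compares = sum(count for count, _, _ in schedule)
print(f"{signal_compares} compares for the signal, plus 64 for the kTC (reset) level = {signal_compares + 64}")
```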

In the last blog post, I described how you could do sequential, faster exposures per pixel to get increased dynamic range (in highlights, not shadow, of course). For example, each faster exposure might be 1/4 the time of the exposure before. The value from one of these faster exposures would only be used if the well had collected between 2000 and 8000 electrons, since if there are fewer electrons the next longer exposure would be used for more accuracy, and if there are more electrons the well is saturated and inaccurate.

One nice thing about having a minimum of 2000 electrons in the signal you are sampling is that the signal-to-noise ratio will be around 40, mainly due to photon shot noise. kTC noise will be swamped, so there is no need for correlated double sampling for these extra exposures. 40:1 is a good SNR. For comparison, you can read tiny white-on-black text through a decent lens with just 10:1 SNR.

If you make the ratio between exposures larger, say 8:1, then you either lose SNR at the bottom portion of the subsequent exposures, or you need a larger well capacity, and in either case the A/D conversion will take more steps. These highlight exposures are very quick to convert because they don't need lots of high-precision LSB steps.

When digitizing the faster exposures, the ramp-compare A/D converters just do:
64 steps of 64 e- to 4096
32 more steps of 128 e- to 8192

That's 96 compares and gets another 2 bits of dynamic range.

1 base exposure and 3 such faster exposures would give 16b equivalent precision in 544 compares, which is faster than the 10b linear ramp-compare A/D converters used by Micron and Sony. Now as I said in my previous post, this is a dream camera, not a reality. There is a lot of technical risk in this A/D scheme. These ADCs are very touchy devices. For example, 8000 electrons on a 5 fF capacitor is just 0.256 volts and requires distinguishing 0.256 millivolt signal levels. If the compare rate is 50 MHz, you get just 20 ns to make that quarter-millivolt distinction. It's tough.
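
As a sanity check on the analog difficulty, here's the arithmetic behind the 0.256 V and quarter-millivolt figures, using the electron charge and the 5 fF sense capacitance assumed above:

```python
Q_E = 1.602e-19        # electron charge, coulombs
C_SENSE = 5e-15        # assumed 5 fF sense capacitance, farads

full_well_v = 8000 * Q_E / C_SENSE   # ~0.256 V for the 8000 e- well
lsb_v = 8 * Q_E / C_SENSE            # ~0.26 mV for one 8 e- step
compares = 256 + 3 * 96              # base exposure plus three highlight exposures

print(f"full well: {full_well_v * 1e3:.0f} mV, LSB: {lsb_v * 1e6:.0f} uV, {compares} compares")
```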

But, the bottom line is that this scheme can deliver a wall of A/Ds which can do variable dynamic range with short conversion times. The next post will show how we'll use these to construct a very high resolution, high sensitivity, high frame rate sensor for reasonable cost.

Friday, April 20, 2007

Better Video -- A/D converters

Most of what I shoot with my camera is my kids and extended family, vacations and so forth. I need a better camera, one that can do DSLR quality stills, and also better-than-HDTV video. I'm going to write about a few things I'd like to see in that better camera.

Wall of A/D Converters

Modern DSLRs have a focal plane shutter which transits the focal plane in 4 to 5 ms. This shutter is limited to 200k to 300k operations, or about 166 minutes of 30 fps video, so it's incompatible with video camera operation.

Video cameras typically read their images out in 16 to 33 ms with what is known as an electronic rolling shutter. The camera has two counters which count through the rows of pixels on the sensor. Each row pointed to by the first counter is reset, and each row pointed to by the second counter is read. The time delay between the two counters sets the exposure time, up to a maximum of the time between frames, which is usually 33 ms.
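
Here's a toy Python model of that two-counter scheme; the row count, frame time, and exposure are illustrative assumptions rather than any particular sensor's numbers:

```python
def rolling_shutter(rows=1080, frame_ms=33.0, exposure_ms=16.0):
    """Two-counter electronic rolling shutter: the reset counter sweeps the rows,
    and the read counter follows it by one exposure time."""
    row_step = frame_ms / rows
    for r in range(rows):
        reset_t = r * row_step              # first counter resets this row
        read_t = reset_t + exposure_ms      # second counter reads it one exposure later
        yield r, reset_t, read_t

for r, reset_t, read_t in list(rolling_shutter())[:3]:
    print(f"row {r}: reset {reset_t:5.2f} ms, read {read_t:5.2f} ms")
```

The top and bottom rows are read a full sweep apart, which is exactly the skew that makes moving subjects lean.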

A lot can happen in 33 ms, so the action at the top of the frame can look different from that at the bottom. In video, since the picture is displayed with the bottom delayed relative to the top, this can be okay, but it looks weird in still shots. ERS is even worse in most higher resolution CMOS sensors, which can take a hundred or more ms to read out.

It turns out there is a solution which serves both camps. Micron and Sony both have CMOS sensors (Micron's 4MP 200 FPS and Sony's 6.6MP 60 FPS) designed to scan the image out in about the same time as a DSLR shutter. Instead of running all the pixels through one A/D converter (or a handful of them), they have an A/D converter per column, and digitize all the pixels in a row simultaneously. The A/D converters are slower, so there is still a limit to how fast the thing can run, but it is feasible (the Micron chip does it) to read the sensor in 5 ms.

These A/D converters are cool because they allow good-looking stop motion like a focal plane DSLR shutter, they can be used for video, and here's the kicker: you get the capability of 200 frame-per-second video!

Currently these A/D converters have 10 bits of precision. Sony's chip can digitize at 1/4 speed and get 12 bits of precision, matching what DSLRs have delivered for years. We can do better than that.

The basic idea is to combine multiple exposures. Generally this is done by doing one exposure at, say, 16ms, and then another at 4ms immediately afterwards, and combining in software. The trouble with this technique is that there is a minimum delay between the exposures of whatever the readout time of the sensor frame is -- call it 5 ms. Enough motion happens in this 5 ms to blur bright objects which one would otherwise expect to be sharp.

Instead, let's have all the exposures at each pixel be done sequentially with no intervening gaps. Three counters progress down the sensor: a first reset, a second which reads and then immediately resets, and a third which just reads. The delay between the first and second waves is 16 times greater than the delay between the second and third waves. The sensor alternates between reading the pixels on the second and third wave rows, and alternates between resetting the first and second wave rows.

Because one exposure is 16x the other, we get 4 more bits than the basic A/D converter would give us otherwise. If the base A/D converter is 10 bits, this would get us to 14 bits. We don't want to have more than a 16x difference, because pixels that just barely saturate the long-exposure A/D have just 6 bits of precision in the short-exposure A/D. 5 bits or less might look funny: you'd see a little extra quantization noise right at the switchover, compared to the slightly darker pixels still read from the longer exposure.

But we can do still better. These column-parallel 10 bit A/D converters work by sharing a common voltage line which is cycled through 1024 possible voltage levels by a single D/A converter. So a 1000-row sensor has to cycle through 1,024,000 voltages in 5 ms -- the D/A is running at an effective 205 MHz. I'm pretty sure they actually run at 1/2 to 1/4 this clock speed and take multiple samples during each clock cycle. Each column A/D is actually just a comparator which signals when the common voltage exceeds the pixel voltage. If we're willing to have just 9 bits of precision, the thing can run twice as fast. In low light, that gives us ample time for 4 successive exposures down the sensor (not just two), each, say, 8x smaller than the one before. Now we have 9+3+3+3=18 bits of dynamic range, good for about 14 stops of variation in the scene, with at least six significant bits everywhere but the bottom of the range.
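
Putting quick numbers on that paragraph, using only the figures assumed in the text:

```python
rows, levels, readout_s = 1000, 1024, 0.005
print(f"effective D/A rate: {rows * levels / readout_s / 1e6:.0f} MHz")    # ~205 MHz

base_bits, bits_per_8x_exposure, extra_exposures = 9, 3, 3   # each 8x shorter exposure adds 3 bits
print(f"dynamic range: {base_bits + bits_per_8x_exposure * extra_exposures} bits")   # 9+3+3+3 = 18
```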

Why bother? Well, if the sensor has a decent pixel size and reasonably low readout noise (I'm thinking of the Micron sensor, but can't say numbers), then a 16 ms shot with an f/2.8 lens, for example, should capture an EV 4 interior reasonably well (here's the wikipedia explanation of EV). That's a dim house interior, or something like a church. Using the 18b A/D scheme above, we could capture an EV 18 stained glass window in that church and a bride and groom at the same time, with no artificial lighting, assuming the camera is on a tripod. That's pretty cool.

The fact that it takes twice as long (e.g. 10 ms instead of 5 ms) to read the sensor is fine. You'd only do this in low light, where your exposures will have to be long anyway. Even if you could read the sensor in 5 ms, if the exposure is 16 ms you can't possibly have better than 60 frames per second anyway. And people who want slow-motion high-resolution video with natural lighting in church interiors are simply asking for too much.

When the scene doesn't need the dynamic range (say, you are outdoors), you can drop down to 12 bits and run as fast as the 10b column-parallel A/Ds allow in the Micron and Sony chips. This gives you 8 stops of EV range, similar to what most DSLRs deliver today. If you want extra-high frame rates (400 fps full frame), drop to 9 bits of precision.

Actually, if the camcorder back end can handle 8x the data rate, you can imagine very high frame rates (and correspondingly short exposures) done by dropping to 8 or 7 bits of precision, and binning the CMOS pixels together or using a subset of the sensor rows. I think 432-line resolution at 8000 fps would be a pretty awesome feature on a consumer camcorder, even if it couldn't sustain that for more than a second or two after a shutter trigger. By using a subset of the sensor columns or binning CMOS pixels horizontally, you might get the back end data rates down to 1-2x normal operation. That'd be amazing: normal TV resolution, sustained capture of 8000 fps video. Looking at it another way, it gives an idea of how hard it is to swallow the data off a sensor such as I am describing. (I'm getting ahead of myself, talking about resolution here, but bear with me.)

Side note: you don't have and don't want an actual 18b number to represent the brightness at a pixel. Instead, the sensor selects which of the 4 exposures was brightest but not saturated. The output value is then 2 bits to indicate the exposure and 9 bits to indicate the value. This data reduction step happens in the sensor: If the maximum exposure time at full frame rate is 16 ms, then the sensor needs to carry just 1 ms worth of data from the first wave of pixel readouts to the second and later waves... at most about 1/20 of the total number of pixels. That's 560 KB of state for an 8 MP chip. Since the chip is CMOS, that's a pretty reasonable amount of state to carry around.
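
Here's a sketch of that in-sensor data reduction in Python, packing a 2-bit exposure index with a 9-bit value; the 8:1 exposure ratio and the field widths are the assumptions from above, and the function names are mine:

```python
RATIO = 8                         # each exposure is 8x shorter than the one before
VALUE_BITS = 9
SATURATED = 2 ** VALUE_BITS - 1

def encode(exposure_values):
    """Pick the longest (brightest) exposure that isn't saturated; pack (index, value)."""
    for i, v in enumerate(exposure_values):       # index 0 = longest exposure
        if v < SATURATED:
            return (i << VALUE_BITS) | v
    return ((len(exposure_values) - 1) << VALUE_BITS) | SATURATED

def decode(code):
    """Recover an approximate linear brightness in units of the longest exposure."""
    i, v = code >> VALUE_BITS, code & SATURATED
    return v * RATIO ** i

print(decode(encode([511, 511, 300, 40])))   # third exposure is used: 300 * 8**2 = 19200
```

Eleven bits per carried pixel, times roughly 1/20 of an 8 MP frame, is roughly the 560 KB of state mentioned above.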

Stay tuned for an even better place to stuff that 560 KB.

Thursday, March 22, 2007

SpaceX launch!

Apparently they launched with a known faulty GPS beacon on the first stage, and as a result they did not recover the first stage. That seems like a pretty substantial loss. My guess is that scheduling the range is difficult enough that knowing that they would probably lose the first stage was not enough to scrub the launch. That makes me wonder about their claim that range fees are not going to eat their profit margins.

Also, I'll note that Elon Musk was speculating about a stuck helium thruster causing the second stage wobble. I don't think this would be a roll thruster, since that wouldn't get progressively less controllable. Their roll control is with these cold-gas thrusters, so control authority would be constant relative to the unexpected torque. If they could cancel the torque in the first minute of second-stage burn, they'd be able to cancel it until the helium ran out.

But SpaceX uses axial helium cold-gas thrusters to separate the tanks and settle the propellant in the second stage tanks. If one of those thrusters was stuck on, you could end up with some torque from the stage separation that would explain the nozzle hit during the second-stage separation. I'm not sure exactly how a single stuck axial helium thruster could explain the worsening roll wobble, but some coupling is at least conceivable.

Propellant slosh is an issue for SpaceX because they have a partially pressure-stabilized structure, made with thin skins welded to internal stringers and rings. Their interior surface is a lot cleaner than the isogrid surface of, say, a Delta IV or Atlas V, so it damps sloshing less effectively. Once they've got a little roll wobble going, it can really build up over time, especially since very little rotational inertia leaves with the propellant through the drain; you get a bit of the ice-skater effect as the tank empties and any roll slosh concentrates in the remaining propellant.

The Space Shuttle also has welded stringers and so on in its external tank. I'm not sure how they do anti-sloshing. I think I've seen cutaway pictures of the tank with extra stuff in there just to settle down the propellants.

One other thing I noticed about this launch. Last year, they added insulating blankets to the exterior of the vehicle which were ripped away during launch. The blankets were added after a scrub due to helium loss in turn due to excessive liquid oxygen boiloff. This year, the blankets were gone. My guess is that Elon had them build a properly sized LOX supply on Omelek, so that they would have no more troubles with boiloff. ("That'll be SIX sh*tloads this time!")

As for 90% of the risk being retired...
  • Orbital insertion accuracy is a big deal, and no data on that.
  • Ejecting the payload without tweaking it is... at least somewhat tough, no data on that. Consider problems with Skylab's insulation and solar panels, and the antenna on Galileo.
  • Getting away from the payload and reentering is risky too.
Still, I'm happy to see progress. I sure hope that the OSD/NRL satellite is easy & cheap to replace.

Sunday, March 04, 2007

The terrible cost of moving electricity

Wind, solar, and hydro electrical generation are all intermittent, fluctuating power sources that require long-distance power lines to move the generated power to end users. It's a little difficult to know how feasible it is to transmit power across thousands of miles. On the one hand, it's obvious that if you make the conductors thick enough, you can reduce the losses as much as you like. On the other hand, it isn't done already on the massive scale necessary to support such sources.

First, how much does overhead transmission wire cost?

Consider ACSS/AW: soft aluminum, supported by aluminum-clad steel. The largest size that Southwire sells (Joree) is 1.88 inches in diameter, 2.38 pounds per foot of aluminum, 0.309 pounds per foot of steel, 0.0066 ohms/1000 ft DC @ 20 C, rated for 3407 amps at 200 C. As of Dec 1, 2006, it costs $322/CWT. CWT is 100 pounds, so that's $8.66/foot.
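
Working that out in Python, with all figures straight from the quote above:

```python
AL_LB_PER_FT, STEEL_LB_PER_FT = 2.38, 0.309
PRICE_PER_CWT = 322.0                  # dollars per 100 lb, Dec 1, 2006
OHMS_PER_1000_FT = 0.0066              # DC resistance at 20 C

cost_per_ft = (AL_LB_PER_FT + STEEL_LB_PER_FT) * PRICE_PER_CWT / 100
print(f"${cost_per_ft:.2f} per foot")                                        # ~$8.66
print(f"{OHMS_PER_1000_FT * 5280:.1f} ohms per conductor per 1000 miles")    # ~34.8
```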

Now let's consider how much wire we need to move 10 gigawatts across 1000 miles. The more wire (cross section) we use, the less resistance we'll have and the less power will be lost. The optimal point for these kinds of problems is when the marginal cost of the wire is equal to the marginal cost of the electricity lost to resistance. After this point, when you add wire, the cost of the wire increases faster than the value of the power saved, so that you have lost money.

Let's assume the electricity costs $0.04/kW-hr and that we're transmitting an RMS average of 10 gigawatts. The RMS (root mean square) part of this last assumption lets us estimate power losses. Finally, let's assume we transmit with a +/- 500 kV high-voltage DC transmission system, which is the lowest-loss long-distance transmission system available today.

To convert ongoing electricity costs into a present value that we can compare against the cost of the wire, assume a discount rate of 5%.

The optimal point for 10 GW is 4 conductors each way (8 total conductors).
  • wire cost: $366 million
  • resistance: 8.72 ohms
  • power lost: 871 megawatts
  • P.V. lost electricity: $305 million
Here the wire cost doesn't quite equal the present value of the lost electricity because the number of conductors is quantized, and I'm only considering one type of conductor. But, it's close.

One interesting thing about electrical transmission is that the optimal point for wires used doesn't change with distance. Double the distance, double the resistance, double the power lost, and double the wire cost. The total cross section of conductors used is the same. So we can talk about how much more electricity costs after it has moved a distance.

The electricity transmitted has three costs: the cost of the power lost, the rent on the money borrowed to build the transmission lines, and the maintenance and depreciation on the power lines. We just showed the first two will be equal, and the last will be smaller - electric power lines are like dams and bridges, they last for a long time. So the total cost of transmission will be a bit more than twice the cost of the power lost.

This is a really nice rule of thumb because it reduces away the actual costs of power and interest rates and so forth. We can now convert a distance into a cost multiplier. For the geeks among you, the multiplier is (1+power lost)/(1-power lost). Note that the power lost is a function of the relative costs of conductor and electricity, so that hasn't been reduced away, but merely hidden.

After 1000 miles, 8.71% is lost, and delivered power costs at least 19% extra.
After 2000 miles, 17.4% is lost, and delivered power costs at least 42% extra.
After 3000 miles, 26.1% is lost, and delivered power costs at least 71% extra.
After 4000 miles, 34.8% is lost, and delivered power costs at least 107% extra.
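
That table falls straight out of the multiplier. Here's a Python version, taking the 8.71% loss per 1000 miles from the 10 GW example above as given:

```python
LOSS_PER_1000_MI = 0.0871    # fraction of power lost per 1000 miles in the example above

for miles in (1000, 2000, 3000, 4000):
    lost = LOSS_PER_1000_MI * miles / 1000
    extra = (1 + lost) / (1 - lost) - 1        # delivered-cost premium from the rule of thumb
    print(f"{miles} miles: {lost:.1%} lost, delivered power costs {extra:.0%} extra")
```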

This, in a nutshell, is the argument for locating generators near their loads.

There is a hidden assumption above: that the average power distributed (this goes in the denominator for loss%) is equal to the RMS power distributed (this goes in the numerator for loss%). If the power transmitted is peaky, like from an intermittent wind farm, then the average power will be smaller than the peak power, the power lost % grows, and delivered power costs even more.

Delivered costs are actually even worse: typically, when a transmission line is built, its capacity isn't used immediately. In the years until the capacity is reached, you pay for the capacity you are not using. In fact, you always want some reserve capacity, which drives the price up even more.

So, there you have it. If you spread your wind farms over the whole continent, and interconnect them with a high-capacity power grid, then the cost of that power once delivered is substantially more than the cost of producing it. Not only does wind power have to be as cheap as coal, even after you divide by availability, but it has to overcome the extra and substantial cost of distribution.

And the same goes for solar and hydro too.

I'll leave with a note of hope. Hydro ended up being cheap enough that the cost of distribution could be overcome. Maybe solar or wind can get that cheap as well.

Tuesday, February 20, 2007

What if....

John Goff has set off a round of "what if..." Now Mr. X is at it.

The big problem with companies like Masten Space Systems, or Armadillo Aerospace, or SpaceX for that matter, is that they all have to design and build rocket engines before they even start on the curve towards cost-effectiveness. The engine is the single biggest piece of design risk on a rocket. It takes the most time from project start to operational status. It is, for the most part, the cost of entry into the space race.

When the Apollo program was shut down and replaced by the Shuttle, the U.S. already had the engine it needed for a big dumb booster: the F-1. This was a fantastic engine: it ran on the right propellants, it was regeneratively cooled (and so reusable), its turbine ran off a gas generator (so there were no fantastic pressures involved), it had excellent reliability (some test engines ran for the equivalent of 20 missions), and it had respectable if not stellar Isp. By the time Apollo 16 launched, the F-1 was a production-quality engine.

The Saturn V was, however, way too big for unmanned payloads. It did not make sense to keep building these monsters. But a rocket powered by a single F-1, with a Centaur upper stage powered by two RL10s, could have put between 30,000 and 45,000 pounds into low earth orbit, about as much as an Atlas V lofts today. In fact, such a rocket would have essentially been an Atlas V, only we would have had it in 1973, and it would have been built with two engines whose development had already been paid for, one of which was already man-rated, and the other of which was the most reliable U.S. engine ever built.

Alternatively, the J-2 could have been used for the upper stage. This would have had the advantage of already being man-rated, but the disadvantage of being overkill (expensive) for putting satellites into GEO.

This rocket would have served as a great booster, for decades. Over those years, the F-1 could have had a long cost reduction and incremental development program, just as the RL10 actually did. Within a few years it could have been man-rated (using two or three RL10s on the upper stage), and carried astronauts up to a space station. Without the enervating Shuttle development, that space station could have been a bit more meaningful. Heck, without the Shuttle development, we could have had a new Hubble every year.

And, over the years, if it had made sense, we could have tried parachute recovery of those first stages and their valuable F-1s. In short, we could have spent the last three decades ratcheting down the cost of LEO boost, while spending a lot more money on stuff like Cassini.

And, of course, the beauty part is that with the F-1 production lines still running, the U.S. would have had the capability to build a few S-ICs. That's the five-F1 first stage from the Saturn V. In the mid-80s, NASA would have debated the cost of building the Space Station with heavy launch, or with the single-F1 launch vehicle, and my guess is they would have restarted the J-2X program and gone with bigger ISS segments.

Anyway, didn't happen.

Friday, February 16, 2007

Hi Mom! I'm on TV!

My first Google Video is up. I gave this tech talk at Google as part of Dick Lyon's Photo Tech series. If you've ever wanted to know about flare and contrast, now is your chance.

And, if you are wondering what is wrong with me during the first few minutes of the talk, it might help to know that I had just sprinted from parking my car to the conference room, which is up a long flight of stairs. I'm doing my best to smile and not gasp for air, ballerina style.

If you haven't viewed any of the previous videos in this series (mine is #4), go watch Dick in #1 and #2. Dick was one of the founders of Foveon, invented one of the first optical mice, seems to know everyone in the photography world personally, and is a good speaker to boot.

Friday, January 19, 2007

Hello? NASA PR?

I finally found the video shot by the WB-57 chase plane of the STS-114 "Return to Flight" launch. It's fabulous. Keep watching, and near the end you'll see the SRB separation. The plumes from the separation rockets are huge!

NASA article on the development of the WB-57 camera system

Video

I think this is really compelling imagery, but it's grainy and shaky. Still, it's way more interesting than the view of the three white dots of the engines during ascent. A little postproduction work could stabilize the imagery on this video, and yield something even more fun to watch.

Saturday, January 06, 2007

Tyrannosaurus Regina

My daughter Anya is 4.5 years old. She likes to have me make line drawings of things and then paint them. Today she wanted a dinosaur.

"Stegosaurus?"

"No, Tyrannosaurus Rex."

I'm no artist, but I did my best, and I was fairly pleased with the result. You know, the strong nose ridge, the gaping jaw filled with long sharp teeth, the massive tail, and the huge talons at the points of the rear feet.

Anya added a crown. "A princess Tyrannosaurus Rex!"

Then she insisted that I add glass slippers.

These things weighed what, ten tons? A great deal of that was the neck and torso musculature necessary to thrash car-sized animals to death. It's hard to overstate how dangerous these things would have been around princes and princesses and folks with chain mail and so forth. The talons on these things could probably punch through an unreinforced driveway.

And then, the glass slipper. The most impractical possible footwear. Like clogs, inflexible. But also prone to shattering, possessing little traction, and probably heavy once made thick enough to be safe. I'm personally certain that the glass slipper concept is due to some sort of mistranslation. But still, how do you apply glass slippers to a Tyrannosaurus?

Monday, January 01, 2007

Organic Photochemistry

Some of you may know that I've been carrying around a wacky hunch about the operation of the brain for several years. Here's the start of the thread in August 2000, and here is my summary of the idea. Ever since then I occasionally grope around for some way to design an experiment to refine or reject the idea.

Yesterday I had a very interesting conversation with a pair of physical chemists. While they didn't get me to an experiment design, they did provide a lot of insight.

First, I had assumed that if the gated ion channels were exchanging photons, there would be a glow that could be measured. Not so. AJ gave me the impression that some photons get produced and consumed in a way that can't be interrupted: remove the consumer and the producer does not emit. As a result, you'd never see a glow. I've read that magnets and charged particles transmit force through photons... somehow those photons must not be observable either.

Next was the biochemistry of light emission and absorption. Apparently absorbing and emitting light requires violent chemical reactions that tend to destroy the molecules involved. AJ said that much of what retinal cells do is regenerate the rhodopsin as it gets smashed. He would expect to see a lot of biochemical infrastructure to handle free radicals and so forth if the brain was producing and consuming lots of photons. And he figures that people would have seen all that chemistry already if it were there (although maybe they weren't looking for it).

And then there was the issue of wavelength. For best efficiency, a long straight radio antenna typically should be one quarter of the wavelength of the signal being sent or received. For instance, a cell phone uses 1.9 GHz signals, about 6 inches long. Most cell phones have antennas about 1.5 inches long, most of which is buried in the cellphone. So I had assumed that rhodopsin, which receives photons from 400 to 600 nm, would be around 100 to 150 nm long. Not so. As this link shows (look at the third figure down), rhodopsin is at most 9nm long. Apparently the coupling of photons to such small structures is via a completely different mechanism. That's a good thing, because I was expecting 12 micron photons, or thereabouts, and cell membranes are three orders of magnitude smaller. This throws a significant wrench into my hunch that the membranes are acting like waveguides.

I had previously computed that the energy from one ion dropping across a gated ion channel was equivalent to a 12 micron photon, which is very deep in the infrared. So, if you wildly assume (in the spirit of this whole thing) that one ion generates one photon, you'd expect to see 12 micron or longer photons. AJ points out that this is a portion of the spectrum to which most organic molecules (and water too, if I understand correctly) are quite opaque. But of course, that might be a good thing. If the ion channels are exchanging photons through waveguides, it's probably best if the photons propagate well only in those waveguides and not elsewhere, otherwise there could be a fair bit of crosstalk.
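
Here's the back-of-the-envelope behind that 12 micron figure in Python; the roughly 100 mV potential drop for a single monovalent ion is my assumption, since the post doesn't state the number:

```python
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
Q_E = 1.602e-19    # elementary charge, C

membrane_potential_v = 0.10                  # assumed ~100 mV drop across the channel
energy_j = Q_E * membrane_potential_v        # energy released by one ion crossing once
wavelength_um = H * C / energy_j * 1e6
print(f"{wavelength_um:.1f} micron photon")  # ~12 microns, deep infrared
```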

None of this gets me closer to an experimental design, of course. If anyone has a suggestion for a book that discusses organic photochemistry, I'd love to hear about it.

Combat resupply and rescue

I'm not a military guy, I don't know much about how they do things. But I have read Blackhawk Down, and I have some sense that a casualty is a much bigger problem than one guy getting shot. If there is no well-linked rear to which to send a casualty, a fire team has a huge liability. I think the usual rule is that one casualty effectively soaks up four people. It reduces the fire team's mobility and effectiveness, and can rapidly send a mission down a cascade of further problems. So, I got to thinking about how you could improve combat rescue.

Let's assume you control the airspace over the battlefield, or at least the portion of it above small-arms range. Helicopters work pretty well when you want to insert your fire teams, because folks near the target can often be taken by surprise and the choppers can dump their loads and be off before serious resistance is organized. But helicopters are not a good way to get people back out, because they move slowly near the landing zone and are thus pretty easy targets. What you need, getting out, is a lot of acceleration and altitude, right away. You want a rocket.

The wounded guy goes into a stretcher. I'm imagining something like a full-body inflatable splint: it squeezes the hell out of him, totally immobilizing him, and insulating him from cold and wind. You'd design the thing so that it could be popped in a couple of places and still work fine. The stretcher gets attached to a rope attached to a reel at the bottom of the rocket.

The rocket fires a very short exhaust pulse, which sends the thing up 50 feet or so. At this point the rope is entirely unreeled. When the rope goes taut, the main burn starts, accelerating the stretcher at, say, 5G straight up. The exhaust plume is directed out two symmetrical nozzles slightly away from straight down so that the poor guy at the bottom doesn't get burned. Acceleration drops to 1G for ten seconds or so once the guy is at a few hundred miles per hour, and then cuts out. The rocket coasts to a stop at 10,000 feet or so, at which point a parasail pops out.

At this point an autopilot yanking on the control lines can fly the guy ten miles away to get picked up on the ground, or a helicopter or C-130 can grab him out of midair. A midair grab sounds ridiculous but apparently they already use this technique for recovering deorbited film capsules and they haven't dropped any yet. A midair pickup at 2000 feet would have 8 minutes to snatch a guy falling from 10,000 feet at 16 feet/second, which seems plausible with good communication.
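
The 8-minute window is just descent arithmetic, using the altitudes and sink rate above:

```python
drop_alt_ft, pickup_alt_ft, sink_rate_fps = 10_000, 2_000, 16
window_s = (drop_alt_ft - pickup_alt_ft) / sink_rate_fps
print(f"{window_s:.0f} seconds, about {window_s / 60:.1f} minutes, to make the grab")
```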

[Update: apparently they already use midair grabs for picking up people, too. They use a helium balloon and snag that. The trouble is that when they winch the guy in, he generally spins around in the vortex behind the airplane, and when he gets to the tail of the airplane he can get bashed against the fuselage a fair bit before they get him inside.]

A rocket sufficient to boost 300 lbs of payload to 3200 meters needs about 300 m/s delta-V. With a mass ratio of 80% and a Ve of 2600 m/s, the rocket will weigh 120 pounds. That's not something you want to be carrying around with you, but it is something that one guy can manhandle into an upright position. So you have to deliver this heavy, bulky thing to a fire team in the middle of a combat zone which is already distracted by tending to some casualties. Luckily, you can make the rocket pretty tough.
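
Here's a rough Python check on those numbers. It models only an ideal, drag-free vertical coast and the textbook rocket equation, so gravity and drag losses during the burn, plus structural margin, are what push the real figures up toward the ones quoted:

```python
import math

g = 9.81                                       # m/s^2
apogee_m = 3200
ideal_dv = math.sqrt(2 * g * apogee_m)         # ~250 m/s to coast to 3200 m with no losses
print(f"ideal delta-V: {ideal_dv:.0f} m/s")

ve, dv = 2600.0, 300.0                         # exhaust velocity and budgeted delta-V from above
prop_fraction = 1 - math.exp(-dv / ve)         # rocket equation: propellant / liftoff mass
print(f"ideal propellant fraction: {prop_fraction:.1%} of liftoff mass")
```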

I suggest dropping the recovery package (ascent rocket, stretcher, medical kit, ammunition) on the fire team as you might drop a smart bomb. Instead of exploding a warhead, this munition pops a parachute or fires a retrorocket right before impact to minimize the damage to whatever it hits and cushion the blow to the medical kit. Someone on the fire team might use a laser designator to pick the landing spot, so that they have good control over the difficulty of recovering the thing. You'd want to be careful: bomb there, recovery kit here.

I posted about this three years ago in this thread: http://groups-beta.google.com/group/sci.space.tech/browse_thread/thread/efb906c8dd19915a/a355a9c6b2ed55f5?hl=en
Back then I thought you needed the robot paraglider to deliver the recovery package. Now I suspect something more like the smart bombs we already have would be okay.