Saturday, April 28, 2007

Better Video -- Gamma and A/D converters

Okay folks, this is going to get somewhat detailed, because I think I have a half-decent and possibly new idea here. As you read along, just keep in mind that the overall goal is to make a ramp-compare A/D converter that has really large dynamic range (16 bits) and goes fast so we can minimize the frame scan time.
[The referenced wikipedia article has some significantly wrong bits. Just reading the referred articles shows the problems.]
First we're going to talk about gamma. Most digital sensors generate digital values from the A/D converter which are linear with the amount of light received by the pixel. One of the first steps in the processing pipeline is to convert this e.g. 12 bit value into a nonlinear 8 bit value. You might wonder why we would go to all that trouble to get 12 bits off the sensor, only to throw away 4 of them.

Consider just four sources of noise in the image for a moment:
  1. Readout noise. This noise is pretty much constant across varying light levels. For the purposes of discussion, let's suppose we have a standard deviation of 20 electrons of readout noise.
  2. kTC noise. Turn off a switch to a capacitor, and you unavoidably sample the state of the electrons diffusing back and forth across the switch. What you are left with is kTC noise, e.g. 28 electrons in a 5 fF well at 300 K. Correlated double sampling (described below) can cancel this noise.
  3. Photon shot noise. This rises as the square root of the electrons captured.
  4. Quantization noise. This is the difference between the true signal and what the digital system is able to represent. Its standard deviation is the quantization step size divided by the square root of 12 (about 0.29 steps).
You can't add standard deviations but you can add variances. To add these noise sources, take the square root of the sum of the squares. So, if we have a sensor (such as the Kodak KAF-18000) with 20 electrons of readout noise, and a full well capacity of 100,000 electrons, read by a 14 bit A/D with a range that perfectly matches the sensor, then we will see total noise which is dominated by photon shot noise. I've done a spreadsheet which lets you see this here.
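To make the arithmetic concrete, here is a minimal sketch that adds those noise sources in quadrature, using the 20 electron read noise, 100,000 electron full well, and 14-bit A/D from the example (kTC is left out on the assumption that CDS cancels it):

```python
import math

def total_noise(signal_e, read_noise_e=20.0, full_well_e=100_000, adc_bits=14):
    """Combine independent noise sources by adding variances (quadrature sum).
    kTC noise is left out on the assumption that CDS cancels it."""
    shot = math.sqrt(signal_e)              # photon shot noise
    step = full_well_e / 2**adc_bits        # A/D step size in electrons
    quant = step / math.sqrt(12)            # quantization noise, step / sqrt(12)
    return math.sqrt(read_noise_e**2 + shot**2 + quant**2)

for s in (100, 1_000, 10_000, 100_000):
    print(f"{s:>7} e-: total noise {total_noise(s):6.1f} e-  "
          f"(shot noise alone {math.sqrt(s):6.1f} e-)")
```

Except at the very bottom of the range, the total is essentially the shot noise.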

Amazingly enough, we can represent the response of this sensor in just 7 bits without adding significant quantization noise. This is why an 8-bit JPG output from a camera with a 12-bit A/D converter really can capture nearly all of what the sensor saw. JPG uses a standard gamma value, which is tuned for visually pleasing results rather than optimal data compression, but the effect is similar. 8-bit JPG doesn't have quite the dynamic range of today's sensors, but it is pretty good.
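One way to see why so few bits are enough (a rough back-of-the-envelope sketch, not the spreadsheet above): count how many output codes you need if each code is allowed to span a few multiples of the sensor's own noise at that signal level. Depending on how much quantization noise you are willing to add, this lands around 7 to 9 bits:

```python
import math

READ_NOISE = 20.0       # electrons
FULL_WELL  = 100_000    # electrons

def sensor_noise(s):
    # read noise plus photon shot noise, in quadrature (kTC assumed cancelled by CDS)
    return math.sqrt(READ_NOISE**2 + s)

def codes_needed(step_in_sigmas):
    """Number of output codes if each code spans step_in_sigmas of the local noise."""
    codes, s, ds = 0.0, 0.0, 10.0
    while s < FULL_WELL:
        codes += ds / (step_in_sigmas * sensor_noise(s + ds / 2))
        s += ds
    return codes

for k in (1.0, 2.0, 3.5):
    n = codes_needed(k)
    print(f"step = {k} sigma: {n:5.0f} codes (~{math.log2(n):.1f} bits)")
```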

The ramp-compare A/D converters described in my last blog entry work by comparing the voltage to be converted to a reference voltage which increases over time. When the comparator says the voltages are equal, the A/D samples a digital counter which rises along with the analog reference voltage. Each extra bit of precision doubles the time needed to find the voltage value. When we realize that much of that time is spent discerning small differences in large signal values that will subsequently be ignored, the extra time spent seems quite wasteful.

Instead of having the reference voltage linearly ramp up, we could have the reference voltage exponentially ramp up, so that the A/D converter would generate the 8b values from the gamma curve directly. The advantage would be that the ramp could take 2^8=256 compares instead of, say, 2^12=4096 compares -- a lot faster!
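As a toy illustration (the numbers are made up, not from any real converter), here is what an exponential reference ramp with 256 levels spanning a 12-bit linear range looks like; the conversion itself is just a comparator watching the ramp climb past the pixel value:

```python
FULL_SCALE = 4096    # 12-bit linear range, arbitrary units
LEVELS = 256         # one compare per reference level

# Exponential (gamma-like) ramp: most of the codes land on small signals.
ramp = [FULL_SCALE ** (i / (LEVELS - 1)) for i in range(LEVELS)]

def convert(pixel_value):
    """Return the first code whose reference level reaches the pixel value."""
    for code, ref in enumerate(ramp):
        if ref >= pixel_value:
            return code
    return LEVELS - 1

for v in (1, 10, 100, 1000, 4000):
    print(f"linear input {v:4d} -> 8-bit code {convert(v):3d}")
```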

It's not quite so easy, however. In order to eliminate kTC noise, the A/D converter actually makes two measurements: one of the pixel value after reset (which has sampled kTC noise), and another of the pixel value after exposure (which has the same sample of kTC noise plus the accumulated photoelectrons). Because the kTC sample is the same, the difference between the two has no kTC noise. This technique is called correlated double sampling (CDS), and it is essential. Because gamma-coded values are nonlinear, there is no easy way to subtract them -- you have to convert to linear space, then subtract, then convert back. As I mentioned, for a typical 5 fF capacitance, kTC noise at room temperature is 28 electrons, so this can easily dominate the noise in low illumination operation.

So what we need is an A/D that produces logarithmically encoded values that are easy to subtract. That's easy -- floating point numbers!

If we assume we have a full well capacity of 8000 electrons and we want the equivalent of 10b dynamic range but only need 6b of precision, then the floating-point ramp-compare A/D does the following:
Mantissa 6 bits, 8 e- step size
64 steps of 8 e- to 512 electrons, measure kTC noise
64 steps of 8 e- to 512 electrons
32 more steps of 16 e- to 1024
32 more steps of 32 e- to 2048
32 more steps of 64 e- to 4096
32 more steps of 128 e- to 8192

That's just 256 compares, and gets 10b dynamic range, so it's 4x faster than a normal ramp-compare.
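Here is the bookkeeping for that schedule, as a sketch (the step sizes and compare counts are the ones listed above):

```python
# (compares, step size in electrons); the first pass measures the reset (kTC) level.
schedule = [(64, 8),      # reset / kTC measurement, 0..512 e-
            (64, 8),      # signal, 0..512 e-
            (32, 16),     # ..1024 e-
            (32, 32),     # ..2048 e-
            (32, 64),     # ..4096 e-
            (32, 128)]    # ..8192 e-

compares = sum(n for n, _ in schedule)
signal_range = sum(n * step for n, step in schedule[1:])  # skip the reset pass

print(f"total compares: {compares}")                      # 256
print(f"signal range: {signal_range} e- in steps of 8 e- "
      f"= {signal_range // 8} levels (~10 bits of dynamic range)")
```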

In the last blog post, I described how you could do sequential, faster exposures per pixel to get increased dynamic range (in highlights, not shadow, of course). For example, each faster exposure might be 1/4 the time of the exposure before. The value from one of these faster exposures would only be used if the well had collected between 2000 and 8000 electrons, since if there are fewer electrons the next longer exposure would be used for more accuracy, and if there are more electrons the well is saturated and inaccurate.

One nice thing about having a minimum of 2000 electrons in the signal you are sampling is that the signal-to-noise ratio will be around 40, mainly due to photon shot noise. kTC noise will be swamped, so there is no need for correlated double sampling for these extra exposures. 40:1 is a good SNR. For comparison, you can read tiny white-on-black text through a decent lens with just 10:1 SNR.

If you make the ratio between exposures larger, say 8:1, then you either lose SNR at the bottom portion of the subsequent exposures, or you need a larger well capacity, and in either case the A/D conversion will take more steps. These highlight exposures are very quick to convert because they don't need lots of high-precision LSB steps.

When digitizing the faster exposures, the ramp-compare A/D converters just do:
64 steps of 64 e- to 4096
32 more steps of 128 e- to 8192

That's 96 compares and gets another 2 bits of dynamic range.

1 base exposure and 3 such faster exposures would give 16b equivalent precision in 544 compares, which is faster than the 10b linear ramp-compare A/D converters used by Micron and Sony. Now as I said in my previous post, this is a dream camera, not a reality. There is a lot of technical risk in this A/D scheme. These ADCs are very touchy devices. For example, 8000 electrons on a 5 fF capacitor is just 0.256 volts and requires distinguishing 0.256 millivolt signal levels. If the compare rate is 50 MHz, you get just 20 ns to make that quarter-millivolt distinction. It's tough.
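For anyone who wants to poke at the numbers, here is the arithmetic (the 5 fF capacitance and the compare counts are the figures used above):

```python
Q = 1.602e-19    # electron charge, coulombs
C = 5e-15        # sense capacitance, farads (5 fF)

full_well = 8000                      # electrons
volts_full_well = full_well * Q / C   # ~0.256 V
volts_per_lsb   = 8 * Q / C           # 8 e- step: ~0.26 mV

base_exposure = 256                   # compares (schedule above)
fast_exposure = 64 + 32               # 96 compares each
total = base_exposure + 3 * fast_exposure

print(f"full well: {volts_full_well * 1e3:.0f} mV, LSB: {volts_per_lsb * 1e6:.0f} uV")
print(f"compares for 1 base + 3 fast exposures: {total}")   # 544
```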

But, the bottom line is that this scheme can deliver a wall of A/Ds which can do variable dynamic range with short conversion times. The next post will show how we'll use these to construct a very high resolution, high sensitivity, high frame rate sensor for reasonable cost.

Friday, April 20, 2007

Better Video -- A/D converters

Most of what I shoot with my camera is my kids and extended family, vacations and so forth. I need a better camera, one that can do DSLR quality stills, and also better-than-HDTV video. I'm going to write about a few things I'd like to see in that better camera.

Wall of A/D Converters

Modern DSLRs have a focal plane shutter which transits the focal plane in 4 to 5 ms. This shutter is limited to 200k to 300k operations, or about 166 minutes of 30 fps video, so it's incompatible with video camera operation.

Video cameras typically read their images out in 16 to 33 ms with what is known as an electronic rolling shutter. The camera has two counters which count through the rows of pixels on the sensor. Each row pointed to by the first counter is reset, and each row pointed to by the second counter is read. The time delay between the two counters sets the exposure time, up to a maximum of the time between frames, which is usually 33 ms.

A lot can happen in 33 ms, so the action at the top of the frame can look different from that at the bottom. In video, since the picture is displayed with the bottom delayed relative to the top, this can be okay, but it looks weird in still shots. ERS is even worse in most higher resolution CMOS sensors, which can take a hundred or more ms to read out.

It turns out there is a solution which serves both camps. Micron and Sony both have CMOS sensors (Micron's 4MP 200 FPS and Sony's 6.6MP 60 FPS) designed to scan the image out in about the same time as a DSLR shutter. Instead of running all the pixels through a single or small number of A/D converters, they have an A/D converter per column, and digitize all the pixels in a row simultaneously. The A/D converters are slower, so there is still a limit to how fast the thing can run, but it is feasible (the Micron chip does it) to read the sensor in 5 ms.

These A/D converters are cool because they allow good-looking stop motion like a focal plane DSLR shutter, they can be used for video, and here's the kicker: you get the capability of 200 frame-per-second video!

Currently these A/D converters have 10 bits of precision. Sony's chip can digitize at 1/4 speed and get 12 bits of precision, matching what DSLRs have delivered for years. We can do better than that.

The basic idea is to combine multiple exposures. Generally this is done by doing one exposure at, say, 16ms, and then another at 4ms immediately afterwards, and combining in software. The trouble with this technique is that there is a minimum delay between the exposures of whatever the readout time of the sensor frame is -- call it 5 ms. Enough motion happens in this 5 ms to blur bright objects which one would otherwise expect to be sharp.

Instead, let's have all the exposures at each pixel be done sequentially with no intervening gaps. Three counters progress down the sensor: a first reset, a second which reads and then immediately resets, and a third which just reads. The delay between the first and second waves is 16 times greater than the delay between the second and third waves. The sensor alternates between reading the pixels on the second and third wave rows, and alternates between resetting the first and second wave rows.
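Here is a sketch of the three-pointer scan (the row counts and time unit are made up; the real sensor would interleave the two reads and two resets within each row time):

```python
ROWS = 24     # tiny sensor, for illustration
LONG = 16     # long exposure in row-times: delay between wave 1 and wave 2
SHORT = 1     # short exposure in row-times: delay between wave 2 and wave 3

for t in range(LONG + SHORT, ROWS):
    reset_row      = t                   # wave 1: reset, starts the long exposure
    read_reset_row = t - LONG            # wave 2: read long exposure, reset for the short one
    read_row       = t - LONG - SHORT    # wave 3: read short exposure
    print(f"row-time {t:2d}: reset row {reset_row:2d}, "
          f"read+reset row {read_reset_row:2d}, read row {read_row:2d}")
```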

Because one exposure is 16x the other, we get 4 more bits than the basic A/D converter would give us otherwise. If the base A/D converter is 10 bits, this would get us to 14 bits. We don't want to have more than a 16x difference, because pixels that just barely saturate the long-exposure A/D have just 6 bits of precision in the short-exposure A/D. 5 bits or less might look funny (you'd see a little quantization noise right at the switchover where darker pixels had less).

But we can do still better. These column-parallel 10 bit A/D converters work by sharing a common voltage line which is cycled through 1024 possible voltage levels by a single D/A converter. So a 1000-row sensor has to cycle through 1,024,000 voltages in 5 ms -- the D/A is running at an effective 205 MHz. I'm pretty sure they actually run at 1/2 to 1/4 this clock speed and take multiple samples during each clock cycle. Each column A/D is actually just a comparator which signals when the common voltage exceeds the pixel voltage. If we're willing to have just 9 bits of precision, the thing can run twice as fast. In low light, that gives us ample time for 4 successive exposures down the sensor (not just two), each, say, 8x smaller than the one before. Now we have 9+3+3+3=18 bits of dynamic range, good for about 14 stops of variation in the scene, with at least six significant bits everywhere but the bottom of the range.

Why bother? Well, if the sensor has a decent pixel size and reasonably low readout noise (I'm thinking of the Micron sensor, but can't say numbers), then an e.g. 16 ms shot with an f/2.8 lens should capture an EV 4 interior reasonably well (here's the wikipedia explanation of EV). That's a dim house interior, or something like a church. Using the 18b A/D scheme above, we could capture an EV 18 stained glass window in that church and a bride and groom at the same time, with no artificial lighting, assuming the camera is on a tripod. That's pretty cool.

The fact that it takes twice as long (e.g. 10 ms instead of 5 ms) to read the sensor is fine. You'd only do this in low light, where your exposures will have to be long anyway. Even if you could read the sensor in 5 ms, if the exposure is 16 ms you can't possibly have better than 60 frames per second anyway. And people who want slow-motion high-resolution video with natural lighting in church interiors are simply asking for too much.

When the scene doesn't need the dynamic range, (say, you are outdoors), you can drop down to 12 bits and run as fast as the 10b column-parallel A/Ds allow in the Micron and Sony chips. This gives you 8 stops of EV range, similar to what most DSLRs deliver today. If you want extra-high frame rates (400 fps full frame), drop to 9 bits of precision.

Actually, if the camcorder back end can handle 8x the data rate, you can imagine very high frame rates (and correspondingly short exposures) done by dropping to 8 or 7 bits of precision, and binning the CMOS pixels together or using a subset of the sensor rows. I think 432-line resolution at 8000 fps would be a pretty awesome feature on a consumer camcorder, even if it couldn't sustain that for more than a second or two after a shutter trigger. By using a subset of the sensor columns or binning CMOS pixels horizontally, you might get the back end data rates down to 1-2x normal operation. That'd be amazing: normal TV resolution, sustained capture of 8000 fps video. Looking at it another way, it gives an idea of how hard it is to swallow the data off a sensor such as I am describing. (I'm getting ahead of myself, talking about resolution here, but bear with me.)

Side note: you don't have and don't want an actual 18b number to represent the brightness at a pixel. Instead, the sensor selects which of the 4 exposures was brightest but not saturated. The output value is then 2 bits to indicate the exposure and 9 bits to indicate the value. This data reduction step happens in the sensor: If the maximum exposure time at full frame rate is 16 ms, then the sensor needs to carry just 1 ms worth of data from the first wave of pixel readouts to the second and later waves... at most about 1/20 of the total number of pixels. That's 560 KB of state for an 8 MP chip. Since the chip is CMOS, that's a pretty reasonable amount of state to carry around.
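A sketch of that on-sensor data reduction, assuming the four-exposure, 8:1, 9-bit scheme described above (the saturation threshold and the function names are mine):

```python
RATIO   = 8            # each successive exposure is 8x shorter
ADC_MAX = 2**9 - 1     # 9-bit column A/D
SAT     = 480          # codes above this count as saturated (threshold is a guess)

def encode(readings):
    """readings[0] = longest exposure ... readings[3] = shortest, each a 9-bit code.
    Keep the longest exposure that didn't saturate: 2-bit index + 9-bit value."""
    for idx, code in enumerate(readings):
        if code <= SAT:
            return idx, code
    return len(readings) - 1, readings[-1]   # everything clipped: keep the shortest

def decode(idx, code):
    """Approximate linear brightness, in units of the longest exposure's LSB."""
    return code * RATIO ** idx

readings = [ADC_MAX, ADC_MAX, 300, 37]       # a pixel that clips the two longest exposures
idx, code = encode(readings)
print(idx, code, decode(idx, code))          # 2 300 19200 (out of a ~2^18 range)
```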

Stay tuned for an even better place to stuff that 560 KB.

Thursday, March 22, 2007

SpaceX launch!

Apparently they launched with a known faulty GPS beacon on the first stage, and as a result they did not recover the first stage. That seems like a pretty substantial loss. My guess is that scheduling the range is difficult enough that knowing that they would probably lose the first stage was not enough to scrub the launch. That makes me wonder about their claim that range fees are not going to eat their profit margins.

Also, I'll note that Elon Musk was speculating about a stuck helium thruster causing the second stage wobble. I don't think this would be a roll thruster, since that wouldn't get progressively less controllable. Their roll control is with these cold-gas thrusters, so control authority would be constant relative to the unexpected torque. If they could cancel the torque in the first minute of second-stage burn, they'd be able to cancel it until the helium ran out.

But SpaceX uses axial helium cold-gas thrusters to separate the tanks and settle the propellant in the second stage tanks. If one of those thrusters was stuck on, you could end up with some torque from the stage separation that would explain the nozzle hit during the second-stage separation. I'm not sure exactly how a single stuck axial helium thruster could explain the worsening roll wobble, but some coupling is at least conceivable.

Propellant slosh is an issue for SpaceX because they have a partially pressure-stabilized structure, made with thin skins welded to internal stringers and rings. Their interior surface is a lot cleaner than the isogrid surface of, say, a Delta IV or Atlas V, and so it damps sloshing less effectively. Once they've got a little roll wobble going, it can really build up over time, especially since they'll remove very little rotational inertia from the propellant through the drain, so you get a bit of the ice-skater effect as the propellant drains and concentrates any roll slosh into the remaining propellant.

The Space Shuttle also has welded stringers and so on in its external tank. I'm not sure how they do anti-sloshing. I think I've seen cutaway pictures of the tank with extra stuff in there just to settle down the propellants.

One other thing I noticed about this launch. Last year, they added insulating blankets to the exterior of the vehicle which were ripped away during launch. The blankets were added after a scrub due to helium loss in turn due to excessive liquid oxygen boiloff. This year, the blankets were gone. My guess is that Elon had them build a properly sized LOX supply on Omelek, so that they would have no more troubles with boiloff. ("That'll be SIX sh*tloads this time!")

As for 90% of the risk being retired...
  • Orbital insertion accuracy is a big deal, and no data on that.
  • Ejecting the payload without tweaking it is... at least somewhat tough, no data on that. Consider problems with Skylab's insulation and solar panels, and the antenna on Galileo.
  • Getting away from the payload and reentering is risky too.
Still, I'm happy to see progress. I sure hope that the OSD/NRL satellite is easy & cheap to replace.

Sunday, March 04, 2007

The terrible cost of moving electricity

Wind, solar, and hydro electrical generation are all intermittent, fluctuating power sources that require long distance power lines to transfer the generated power to end users. It's a little difficult to know how feasible it is to transmit power across thousands of miles. On the one hand, it's obvious that if you make the conductors thick enough, you can reduce the losses as much as you like. On the other hand, it isn't done already on the massive scale necessary to support such intermittent sources.

First, how much does overhead transmission wire cost?

Consider ACSS/AW: soft aluminum, supported by aluminum-clad steel. The largest size that Southwire sells (Joree) is 1.88 inches in diameter, 2.38 pounds per foot of aluminum, .309 pounds per foot steel, .0066 ohms/1000 ft DC @ 20 C, rated for 3407 amps at 200 C. As of Dec 1, 2006, it costs $322/CWT. CWT is 100 pounds, so that's $8.66/foot.

Now let's consider how much wire we need to move 10 gigawatts across 1000 miles. The more wire (cross section) we use, the less resistance we'll have and the less power will be lost. The optimal point for these kinds of problems is when the marginal cost of the wire equals the marginal cost of the electricity lost to resistance. Beyond this point, when you add wire, the cost of the wire increases faster than the value of the power saved, so you lose money.

Let's assume the electricity costs $0.04/kW-hr and that we're transmitting an RMS average of 10 gigawatts. The RMS (root mean square) part of this last assumption lets us estimate power losses. Finally, let's assume we transmit with a +/- 500 kV high-voltage DC transmission system, which is the lowest-loss long-distance transmission system available today.

To convert ongoing electrical costs into a present value we can compare to the cost of the wire, assume a discount rate of 5%.

The optimal point for 10 GW is 4 conductors each way (8 total conductors).
  • wire cost: $366 million
  • resistance: 8.72 ohms
  • power lost: 871 megawatts
  • P.V. lost electricity: $305 million
Here the wire cost doesn't quite equal the present value of the lost electricity because the number of conductors is quantized, and I'm only considering one type of conductor. But, it's close.
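Here is the arithmetic behind those numbers, as a sketch. I'm taking the 8.72 ohms above as the effective loop resistance of the 8-conductor configuration, and reading 10 GW at the full +/- 500 kV as 10,000 amps:

```python
# Southwire Joree ACSS/AW figures quoted above.
lb_per_ft   = 2.38 + 0.309              # aluminum + steel, pounds per foot
cost_per_ft = lb_per_ft * 322 / 100     # $322 per 100 lb -> about $8.66/ft

conductors = 8                          # 4 each way
miles      = 1000
wire_cost  = cost_per_ft * 5280 * miles * conductors

current = 10e9 / 1e6                    # 10 GW across 1 MV pole-to-pole = 10 kA
loss    = current**2 * 8.72             # I^2 R, using the resistance quoted above

print(f"wire: ${cost_per_ft:.2f}/ft, total ${wire_cost / 1e6:.0f} million")
print(f"lost: {loss / 1e6:.0f} MW of 10,000 MW ({loss / 10e9:.1%})")
```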

One interesting thing about electrical transmission is that the optimal point for wires used doesn't change with distance. Double the distance, double the resistance, double the power lost, and double the wire cost. The total cross section of conductors used is the same. So we can talk about how much more electricity costs after it has moved a distance.

The electricity transmitted has three costs: the cost of the power lost, the rent on the money borrowed to build the transmission lines, and the maintenance and depreciation on the power lines. We just showed the first two will be equal, and the last will be smaller - electric power lines are like dams and bridges, they last for a long time. So the total cost of transmission will be a bit more than twice the cost of the power lost.

This is a really nice rule of thumb because it reduces away the actual costs of power and interest rates and so forth. We can now convert a distance into a cost multiplier. For the geeks among you, the multiplier is (1 + power lost)/(1 - power lost). Note that the power lost is a function of the relative costs of conductor metal and electricity, so that hasn't been reduced away, but merely hidden.

After 1000 miles, 8.71% is lost, and delivered power costs at least 19% extra.
After 2000 miles, 17.4% is lost, and delivered power costs at least 42% extra.
After 3000 miles, 26.1% is lost, and delivered power costs at least 71% extra.
After 4000 miles, 34.8% is lost, and delivered power costs at least 107% extra.
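That table is just the formula applied to multiples of the 1000-mile loss; a quick sketch:

```python
loss_per_1000_miles = 0.0871     # fraction of power lost per 1000 miles, from above

for miles in (1000, 2000, 3000, 4000):
    p = loss_per_1000_miles * miles / 1000
    multiplier = (1 + p) / (1 - p)          # delivered cost relative to generation cost
    print(f"{miles} miles: {p:.1%} lost, delivered power costs {multiplier - 1:.0%} extra")
```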

This, in a nutshell, is the argument for locating generators near their loads.

There is a hidden assumption above: that the average power distributed (this goes in the denominator for loss%) is equal to the RMS power distributed (this goes in the numerator for loss%). If the power transmitted is peaky, like from an intermittent wind farm, then the average power will be smaller than the peak power, the power lost % grows, and delivered power costs even more.

Delivered costs are actually even worse: typically, when a transmission line is built, its capacity isn't used immediately. In the years until the capacity is reached, you pay for the capacity you are not using. In fact, you always want some reserve capacity, which drives the price up even more.

So, there you have it. If you spread your wind farms over the whole continent, and interconnect them with a high-capacity power grid, then the cost of that power once delivered is substantially more than the cost of producing it. Not only does wind power have to be as cheap as coal, even after you divide by availability, but it has to overcome the extra and substantial cost of distribution.

And the same goes for solar and hydro too.

I'll leave with a note of hope. Hydro ended up being cheap enough that the cost of distribution could be overcome. Maybe solar or wind can get that cheap as well.

Tuesday, February 20, 2007

What if....

John Goff has set off a round of "what if..." Now Mr. X is at it.

The big problem with companies like Masten Space Systems, or Armadillo Aerospace, or SpaceX for that matter, is that they all have to design and build rocket engines before they even start on the curve towards cost-effectiveness. The engine is the single biggest piece of design risk on a rocket. It takes the most time from project start to operational status. It is, for the most part, the cost of entry into the space race.

When the Apollo program was shut down and replaced by the Shuttle, the U.S. already had the engine it needed for a big dumb booster: the F-1. This was a fantastic engine: it ran on the right propellants, it was regeneratively cooled (and so reusable), its turbine ran off a gas generator (so there were no fantastic pressures involved), it had excellent reliability (some test engines ran for the equivalent of 20 missions), and it had respectable if not stellar Isp. By the time Apollo 16 launched, the F-1 was a production-quality engine.

The Saturn V was, however, way too big for unmanned payloads. It did not make sense to keep building these monsters. But a rocket powered by a single F-1, with a Centaur upper stage powered by two RL10s, could have put between 30,000 and 45,000 pounds into low earth orbit, about as much as an Atlas V lofts today. In fact, such a rocket would have essentially been an Atlas V, only we would have had it in 1973, and it would have been built with two engines whose development had already been paid for, one of which was already man-rated, and the other of which was the most reliable U.S. engine ever built.

Alternatively, the J-2 could have been used for the upper stage. This would have had the advantage of already being man-rated, but the disadvantage of being overkill (expensive) for putting satellites into GEO.

This rocket would have served as a great booster, for decades. Over those years, the F-1 could have had a long cost reduction and incremental development program, just as the RL10 actually did. Within a few years it could have been man-rated (using two or three RL10s on the upper stage), and carried astronauts up to a space station. Without the enervating Shuttle development, that space station could have been a bit more meaningful. Heck, without the Shuttle development, we could have had a new Hubble every year.

And, over the years, if it had made sense, we could have tried parachute recovery of those first stages and their valuable F-1s. In short, we could have spent the last three decades ratcheting down the cost of LEO boost, while spending a lot more money on stuff like Cassini.

And, of course, the beauty part is that with the F-1 production lines still running, the U.S. would have had the capability to build a few Saturn 1Cs. That's the five-F1 first stage from the Saturn V. In the mid-80s, NASA would have debated the cost of building the Space Station with heavy launch, or with the single-F1 launch vehicle, and my guess is they would have restarted the J-2X program and gone with bigger ISS segments.

Anyway, didn't happen.

Friday, February 16, 2007

Hi Mom! I'm on TV!

My first Google Video is up. I gave this tech talk at Google as part of Dick Lyon's Photo Tech series. If you've ever wanted to know about flare and contrast, now is your chance.

And, if you are wondering what is wrong with me during the first few minutes of the talk, it might help to know that I had just sprinted from parking my car to the conference room, which is up a long flight of stairs. I'm doing my best to smile and not gasp for air, ballerina style.

If you haven't viewed any of the previous videos in this series (mine is #4), go watch Dick in #1 and #2. Dick was one of the founders of Foveon, invented one of the first optical mice, seems to know everyone in the photography world personally, and is a good speaker to boot.

Friday, January 19, 2007

Hello? NASA PR?

I finally found the video shot by the WB-57 chase plane of the STS-114 "Return to Flight" launch. It's fabulous. Keep watching, and near the end you'll see the SRB separation. The plumes from the separation rockets are huge!

NASA article on the development of the WB-57 camera system

Video

I think this is really compelling imagery, but it's grainy and shaky. Still, it's way more interesting than the view of the three white dots of the engines during ascent. A little postproduction work could stabilize the imagery on this video, and yield something even more fun to watch.

Saturday, January 06, 2007

Tyrannosaurus Regina

My daughter Anya is 4.5 years old. She likes to have me make line drawings of things and then paint them. Today she wanted a dinosaur.

"Stegosaurus?"

"No, Tyrannosaurus Rex."

I'm no artist, but I did my best, and I was fairly pleased with the result. You know, the strong nose ridge, the gaping jaw filled with long sharp teeth, the massive tail, and the huge talons at the points of the rear feet.

Anya added a crown. "A princess Tyrannosaurus Rex!"

Then she insisted that I add glass slippers.

These things weighed what, ten tons? A great deal of that was the neck and torso musculature necessary to thrash car-sized animals to death. It's hard to overstate how dangerous these things would have been around princes and princesses and folks with chain mail and so forth. The talons on these things could probably punch through an unreinforced driveway.

And then, the glass slipper. The most impractical possible footwear. Like clogs, inflexible. But also prone to shattering, possessing little traction, and probably heavy once made thick enough to be safe. I'm personally certain that the glass slipper concept is due to some sort of mistranslation. But still, how do you apply glass slippers to a Tyrannosaurus?

Monday, January 01, 2007

Organic Photochemistry

Some of you may know that I've been carrying around a wacky hunch about the operation of the brain for several years. Here's the start of the thread in August 2000, and here is my summary of the idea. Ever since then I occasionally grope around for some way to design an experiment to refine or reject the idea.

Yesterday I had a very interesting conversation with a pair of physical chemists. While they didn't get me to an experiment design, they did provide a lot of insight.

First, I had assumed that if the gated ion channels were exchanging photons, there would be a glow that could be measured. Not so. AJ gave me the impression that some photons get produced and consumed in a way that can't be interrupted: remove the consumer and the producer does not emit. As a result, you'd never see a glow. I've read that magnets and charged particles transmit force through photons... somehow those photons must not be observable either.

Next was the biochemistry of light emission and absorption. Apparently absorbing and emitting light requires violent chemical reactions that tend to destroy the molecules involved. AJ said that much of what retinal cells do is regenerate the rhodopsin as it gets smashed. He would expect to see a lot of biochemical infrastructure to handle free radicals and so forth if the brain was producing and consuming lots of photons. And he figures that people would have seen all that chemistry already if it were there (although maybe they weren't looking for it).

And then there was the issue of wavelength. For best efficiency, a long straight radio antenna typically should be one quarter of the wavelength of the signal being sent or received. For instance, a cell phone uses 1.9 GHz signals, about 6 inches long. Most cell phones have antennas about 1.5 inches long, most of which is buried in the cellphone. So I had assumed that rhodopsin, which receives photons from 400 to 600 nm, would be around 100 to 150 nm long. Not so. As this link shows (look at the third figure down), rhodopsin is at most 9nm long. Apparently the coupling of photons to such small structures is via a completely different mechanism. That's a good thing, because I was expecting 12 micron photons, or thereabouts, and cell membranes are three orders of magnitude smaller. This throws a significant wrench into my hunch that the membranes are acting like waveguides.

I had previously computed that the energy from one ion dropping across a gated ion channel was equivalent to a 12 micron photon, which is very deep in the infrared. So, if you wildly assume (in the spirit of this whole thing) that one ion generates one photon, you'd expect to see 12 micron or longer photons. AJ points out that this is a portion of the spectrum to which most organic molecules (and water too, if I understand correctly) are quite opaque. But of course, that might be a good thing. If the ion channels are exchanging photons through waveguides, it's probably best if the photons propagate well only in those waveguides and not elsewhere, otherwise there could be a fair bit of crosstalk.

None of this gets me closer to an experimental design, of course. If anyone has a suggestion for a book that discusses organic photochemistry, I'd love to hear about it.

Combat resupply and rescue

I'm not a military guy, I don't know much about how they do things. But I have read Blackhawk Down, and I have some sense that a casualty is a much bigger problem than one guy getting shot. If there is no well-linked rear to which to send a casualty, a fire team has a huge liability. I think the usual rule is that one casualty effectively soaks up four people. It reduces the fire team's mobility and effectiveness, and can rapidly send a mission down a cascade of further problems. So, I got to thinking about how you could improve combat rescue.

Let's assume you control the airspace above the battlefield, or at least the portion of it above small-arms range. Helicopters work pretty well when you want to insert your fire teams, because folks near the target can often be taken by surprise and the choppers can dump their loads and be off before serious resistance is organized. But helicopters are not a good way to get people back out, because they move slowly near the landing zone and are thus pretty easy targets. What you need, getting out, is a lot of acceleration and altitude, right away. You want a rocket.

The wounded guy goes into a stretcher. I'm imagining something like a full-body inflatable splint: it squeezes the hell out of him, totally immobilizing him, and insulating him from cold and wind. You'd design the thing so that it could be popped in a couple of places and still work fine. The stretcher gets attached to a rope attached to a reel at the bottom of the rocket.

The rocket fires a very short exhaust pulse, which sends the thing up 50 feet or so. At this point the rope is entirely unreeled. When the rope goes taut, the main burn starts, accelerating the stretcher at, say, 5G straight up. The exhaust plume is directed out two symmetrical nozzles slightly away from straight down so that the poor guy at the bottom doesn't get burned. Acceleration drops to 1G for ten seconds or so once the guy is at a few hundred miles per hour, and then cuts out. The rocket coasts to a stop at 10,000 feet or so, at which point a parasail pops out.

At this point an autopilot yanking on the control lines can fly the guy ten miles away to get picked up on the ground, or a helicopter or C-130 can grab him out of midair. A midair grab sounds ridiculous but apparently they already use this technique for recovering deorbited film capsules and they haven't dropped any yet. A midair pickup at 2000 feet would have 8 minutes to snatch a guy falling from 10,000 feet at 16 feet/second, which seems plausible with good communication.

[Update: apparently they already use midair grabs for picking up people, too. They use a helium balloon and snag that. The trouble is that when they winch the guy in, he generally spins around in the vortex behind the airplane, and when he gets to the tail of the airplane he can get bashed against the fuselage a fair bit before they get him inside.]

A rocket sufficient to boost 300 lbs of payload to 3200 meters needs about 300 m/s delta-V. With a mass ratio of 80% and an Ve of 2600 m/s, the rocket will weigh 120 pounds. That's not something you want to be carrying around with you, but it is something that one guy can manhandle into an upright position. So you have to deliver this heavy, bulky thing to a fire team in the middle of a combat zone which is already distracted by tending to some casualties. Luckily, you can make the rocket pretty tough.

I suggest dropping the recovery package (ascent rocket, stretcher, medical kit, ammunition) on the fire team as you might drop a smart bomb. Instead of exploding a warhead, this munition pops a parachute or fires a retrorocket right before impact to minimize the damage to whatever it hits and cushion the blow to the medical kit. Someone on the fire team might use a laser designator to pick the landing spot, so that they have good control over the difficulty of recovering the thing. You'd want to be careful: bomb there, recovery kit here.

I posted about this three years ago in this thread: http://groups-beta.google.com/group/sci.space.tech/browse_thread/thread/efb906c8dd19915a/a355a9c6b2ed55f5?hl=en
Back then I thought you needed the robot paraglider to deliver the recovery package. Now I suspect something more like the smart bombs we already have would be okay.

Saturday, November 18, 2006

A little message to the Chair Force Engineer

Engineering is mostly data plumbing. What matters is that the right people understand the right bits of the problem. All those meetings, all those revisions of all those specs, it's all about exploring the problem and getting the right bits to the right guys and gals. Everything is specialized. For the most part, that means you aren't ever going to get it all. Those windbags probably mean well.

The good news is, you don't have to get most of it. You need to understand who needs to know what, and how to get them what they need. Know what the critical path is. There will be some small part that's your specialty. Make sure you're on top of that. The rest is plumbing. You're smart enough. Relax.

As for the whole nihilistic thing... What drives you? I'll tell you what drives me: what is the coolest thing I can pull off and actually make work? I've long since realized that a team can make stuff happen that I could never do myself. Right now I'll take the trade of having a smaller part in achieving a more audacious goal.

For the next 30 months, concentrate on getting experience, especially practical experience. You're going to be around for a very long time. This idea that you were going to figure out by 30 what you are going to do in life is a (boring) fantasy. One of the best CPU hackers I know started doing serious astrophysics, did CPUs, moved on to chip assembly tools, and now does something completely different. You know what my dad remembers from hacking on accelerators at Berkeley in the 60s? Plumbing. He's awesome with solder and 3/4" copper.

It sounds like you're motivated by Serving America. Very noble. You can probably see lots of ways of projecting force better than we are now. You don't need to implement that right now. You might take a 20 year detour first. Hell, you might try raising kids. That'll throw a wrench in things, I promise you.

Oh, and when you get out, give me a call.

Friday, November 17, 2006

Comments!

I noticed that ambivalentengineer.blogspot.com got messed up somehow, probably when Google moved to their Blogger in Beta infrastructure. So I switched my account to Blogger in Beta. As part of the process, I realized that I had comments!

This is like an unexpected birthday present! These are great comments, written with care by people who know. Thank you!

This calls for two apologies.

First, for not getting these things published before. Some of these are a year old.

Second, of 42 comments there was one that was inappropriate. In rejecting it, I accidentally erased 7 other comments. They aren't recoverable (which is a bug, btw, and now that I'm a Googler I can file a bug against that!) I know you folks take care writing those, and I'm really sorry about dropping them.

Anyway, thanks a ton for all the great comments.

Wednesday, August 16, 2006

New job, less postings

I'm now working at Google. Specifically, I'm working on Google Earth. The project is irresistibly cool, I work with interesting and talented people, I'm learning a lot, and there is plenty of money and resources to get the job done. I'm pretty happy about the whole thing.

But my posting should be about techie stuff. So, along that vein, I'm sure you are all wondering, does water evaporate from the pool faster or slower on cooler nights? I think I have a counterintuitive answer: cold air speeds evaporation.

The pool water holds vastly more heat than the air above it, so the air in contact with the water will be at the water temperature, pretty much regardless of the ambient temperature. Lower ambient temperatures usually come with lower dew points, which means that once the initially colder air has been heated to the pool temperature, it is drier. More water can evaporate into this drier air.

There is a second effect as well. On a windless night, the convection over the pool is driven by the temperature difference between the pool and the ambient air. As ambient gets colder, the convection gets faster and more dry air is brought to the pool surface, evaporating more water.

The net result is that on a cool night, the area all around the pool gets quite damp. I'm fairly sure this is pool water recondensing out of the air as the air moves away from the pool and cools back down.

Saturday, May 27, 2006

Alligators

Listening to NPR yesterday, I heard a guy from the New Orleans Aquarium describing the mess they found after Katrina. The equipment there had stopped working, so the interior temperature had risen into the 90s, most of the tanks had gone anoxic, and the fish had died and were rotting in the heat. The tanks were a filthy, smelly, bacteria-laden mess.

The alligators, however, loved it. Somehow, perhaps because I'm the owner of two Labrador retrievers who will eat anything, my heart warms to the thought of unadulterated joy in the midst of what would turn a human stomach.

Monday, May 08, 2006

Renewables vs Nuclear

There is a great discussion over at EnergyPulse. The article suggests renewables can replace nuclear. A number of good comments that follow show the claim to be wrongheaded, but perhaps not entirely wrong.

The claim that renewables (primarily windpower) can replace nuclear is not entirely off base. We are building windpower right now faster than we are building nuclear, and the EIA predicts this will continue to be true for 30 years, even with the new subsidies. The existing U.S. nuclear plant fleet is huge, however, and renewables are not forecast to challenge the capacity of the existing nuclear base.

I think the debate between renewables and nuclear is pointless. Both are domestic supplies. I think the more interesting comparison is between domestic and foreign energy supplies, and what we can do to move to more domestic energy. Secondarily, I think there is an interesting comparison between the amounts of CO2 produced by various energy supplies.

We use oil and gas-fired turbines and hydro to follow variations in electricity demand. Our dependence on foreign oil and gas supplies is a major threat to our national security. In the U.S., hydro is all built out. I see two strategies that will help move the U.S. from foreign energy sources to domestic sources. Both could use some legislation.

  • Dynamic pricing. If the price of electricity varies minute-by-minute, many customers can time-shift their loads. This reduces variations in electrical demand and allows domestic sources (coal, nuclear, renewables) to replace foreign sources (oil, gas). Examples are ice-storing commercial cooling plants and aluminum smelters.


  • Plug-in hybrids. These gasoline and battery powered cars can charge up at night. They actually help twice: they directly shift their energy source from a foreign one (crude oil) to a domestic one (coal, nuclear, renewables). But also, the extra demand at night makes it economic to build more domestic-source generation instead of foreign-source generation.


As for the CO2 problem, natural gas was once sold as a cleaner, greener substitute for coal. The cost in blood and money is now clear. We should use all the windpower we can get, and conservation can help too. But I think the big answer here is hundreds of gigawatts of new nuclear generation over the next 30 years.

I foresee the costs of coal going up in the future, from the additional costs of CO2 sequestration. I foresee the costs of nuclear coming down in the future, if many similar plants are built and regulation and operation become more standardized. So I see a second benefit to a massive nuclear buildout: a drop in the price of energy here in the U.S. I think that drop, along with a change to domestic energy supplies, could make a big difference to our balance of trade and national security.

    Tuesday, May 02, 2006

    Three Stage to Orbit

    I'm going to try to convince you [hi Jon!] that a three stage to orbit rocket might be significantly cheaper than a two stage to orbit system lifting the same size payload. My argument centers on the idea that the engines are the expensive portion of the rocket, and that tanks and structure are cheap.

    To get into a 200 km orbit, a two stage rocket generally has a first stage which burns for about 150 seconds, followed by a second stage which burns for around 500 seconds. Second stage engines deliver much more impulse per dollar, primarily because the engine burns so much longer, and also because the engine has a larger expansion ratio and so a higher specific impulse.

    The second stage burn time cannot be extended greatly beyond 500 seconds when going to low earth orbit, because the initial acceleration becomes so low that the stage falls back to earth before it achieves orbital velocity. Similarly, the first stage burn time cannot be pushed much past 150 seconds (assuming LOX/kerosene, for which the Isp is around 300 seconds) because the rocket must start with significantly more than 1 G of acceleration to get off the ground. This is easy to see: if initial acceleration is exactly 1 G, and the vehicle was all fuel and no structure, then the burn time would be exactly the Isp.
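    That last claim is easy to check numerically (a sketch: constant thrust equal to the initial weight, and the vehicle treated as all propellant):

```python
g   = 9.81     # m/s^2
isp = 300.0    # seconds, roughly LOX/kerosene

m0     = 1000.0              # initial mass in kg (arbitrary)
thrust = m0 * g              # exactly 1 G of initial acceleration
mdot   = thrust / (isp * g)  # propellant flow needed for that thrust
burn_time = m0 / mdot        # all propellant, no structure

print(f"burn time = {burn_time:.0f} s, equal to the Isp of {isp:.0f} s")
```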

    The point of a three stage rocket is to significantly extend the first LOX/kerosene stage burn time, and improve the expansion ratio, with the use of an earlier and much cheaper stage. To yield a cost improvement, the entire new stage must cost less than the first stage engines it is replacing. Also, the development cost of the new stage must be low.

    Typically, strap-on solid rocket motors are used as cheap extra launch thrust on American or European launchers. LOX/kerosene launchers, such as the SpaceX Falcon series and many of the Russian launchers, are so cheap that such solid rockets and their handling and cleanup are too expensive to help. I suspect that steam rockets (such as the rocket that hurled Evel Knievel's "motorcycle" over the Snake River Canyon) may be cheap enough. Unfortunately, steam rockets have such low specific impulse (about 50 seconds) that any benefit must come from just a few hundred m/s of delta-V (realistically, around 300 m/s). The rest of this post will show that there is a very large benefit to having just a few hundred m/s of initial boost.

    One way to measure the benefit of the extra stage zero is to compare the payloads of similarly priced rockets, one with and one without the extra stage. Ideally, the rockets would be separately optimized, but that makes cost comparisons looser. Instead, we will imagine that they have the same engines and differ in smaller details like expansion ratio and tank size. The baseline rocket I have chosen is the SpaceX Falcon 1, which is promised to take 570 kg to a 200 km orbit for $6.7M. My simulations show SpaceX has a significant sandbag in there, and that the rocket might deliver 670 kg at the edge of its performance envelope, and it is this larger number that I compare against when calculating the benefit of extra boost.

    The first comparison is between a series of identical Falcons, some with a stage zero delivering various burnout velocities, and one without. As shown, a 300 m/s boost allows the three-stage rocket to lift 41% more payload.

    Falcon 1 with various starting boost

    boost (m/s)    LEO payload (kg)
        0                670
      100                760
      200                866
      300                945
      400               1009
      500               1065


    Because a three stage Falcon does not require its LOX/kerosene engines to lift it off the ground, the engines can be configured for larger expansion and higher vacuum Isp (and thus lower sea-level Isp). Elon Musk at SpaceX has claimed that a vacuum-optimized Merlin would deliver 340 seconds Isp. In the table below, I've assumed 330 seconds, and a drop in sea-level Isp to 190. Now the three-stage 300 m/s boost lifts 61% more mass to orbit. This change carries the reliability penalty that the engines may have to be started in flight to avoid thrust instability due to the exit pressure being much less than sea-level pressure. Note that a Falcon with no stage zero boost cannot get off the ground with higher expansion engines.

    Falcon 1 with various starting boost, Isp=330 s

    boost (m/s)    LEO payload (kg)
        0               dead
      100                778
      200                946
      300               1077
      400               1173
      500               1255


    The full gain of the stage zero is realized by extending the burn time of the first LOX/kerosene stage by extending the tanks. Here I've assumed that the extra tankage and pressurant and so on weighs 4.3% of the extra propellant added. With extended tanks, the three stage 300 m/s boost lifts 138% more mass to orbit.

    Falcon 1 with various starting boost, Isp=330 s, extra propellant

    boost (m/s)    LEO payload (kg)    extra stage 1 propellant (kg)
        0               dead
      100                866                  6000
      200               1288                 12000
      300               1595                 18000
      400               1814                 14400
      500               2039                 24000

    You might wonder if these massive payload increases require the use of a bigger and more powerful upper stage. The answer is no: even hefting a 1595 kg payload, the existing Falcon upper stage delta-V drops from 4973 to 3454 m/s. I did not look at moving propellant from the lower stage to the upper stage, which might improve the payload numbers a bit more.

    138% is a large payload increase, but we have to compare this to the cost of simply scaling up an existing rocket. Fortunately, SpaceX has done some of this work for us. Interpolating between their prices, the 138% payload boost might be sold for $1.8M more.

    Which leaves the question, can a suitable steam rocket be built to heave the Falcon 1 to 300 m/s straight up for less than $1.8M per shot, and minimal development costs?

    Here's what you'd need:
  • Mass Fraction = 80% (Shuttle SRB mass fraction is 85%, it holds 60 atmospheres pressure)
  • Mass, fuelled = 62992 kg
  • Mass, empty = 12600 kg
  • Volume = 100 m^3
  • Size = 3.6 m diameter x 10 m long


    A 12600 kg welded stainless steel tank might cost $250,000, but can probably be recovered and reused with minimal work. I'm going to claim the expensive bit isn't the flight hardware, but rather the ground support, and I'll get to that in another post.
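    A quick check of those stage-zero numbers (a sketch: the roughly 50 s steam Isp is from above; the Falcon 1 gross liftoff mass is my assumption, roughly 27,000 kg for the early vehicle):

```python
import math

g  = 9.81
ve = 50.0 * g                    # ~490 m/s exhaust velocity for a ~50 s Isp steam rocket

stage_full  = 62992.0            # kg, fuelled (from the list above)
stage_empty = 12600.0            # kg, empty
falcon      = 27000.0            # kg, assumed Falcon 1 gross liftoff mass

volume = math.pi * 1.8**2 * 10   # 3.6 m diameter x 10 m long -> ~100 m^3
dv = ve * math.log((stage_full + falcon) / (stage_empty + falcon))

print(f"propellant fraction: {1 - stage_empty / stage_full:.0%}")   # ~80%
print(f"tank volume: {volume:.0f} m^3")
print(f"ideal delta-V: {dv:.0f} m/s (before gravity and drag losses)")
```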

    Tuesday, March 28, 2006

    Testing New Rockets

    Now that a respectful period for SpaceX's loss has passed, it's time to begin the enthusiastic but uninformed Monday morning quarterbacking. Don't be shy. Here, I'll go first.

    First, I don't see why people are at all sad the thing blew up. It was a test article, and test articles break, generally with tons of instrumentation to tell you how. If you want to feel sad, feel sad for the upper stage testing guy, who won't see a video of his machine airstarting for another six to nine months. And that's what motivates the rest of this post.

    Why does the vehicle have to be tested all-up? Testing all-up is gutsy and smart when your expectation of failure is very low, but wasteful if you are pretty sure you have multiple problems to find and fix. Dwayne Day has pointed out that about half of all new rockets succeed on their first launch, but this launch was not like the first launches of most of those other rockets. Those rockets had very expensive Big Government testing programs behind them. This rocket did not. That's good (because it can be more efficient), but it means that an all-up launch is not likely to yield a lot of testing data for the amount of money and time invested.

    At this point, SpaceX has already paid for plenty of expensive lessons at Kwaj, so incremental flight testing might not seem necessary anymore. I'd do it anyway: they'll need it for the Falcon 9 too, and nobody knows how many gremlins remain to be flushed out of the Falcon 1 design, because they didn't get to test much of the vehicle.

    If each stage carries a full load of LOX but only enough kerosene for about 100 m/s delta-V, we should get a short, locally recoverable hop, without the need for an intercontinental missile range. A set of large floats attached to the tail of the rocket might keep the engine from a complete dunking on splashdown, if that is perceived to be a problem. The lack of a range is a huge deal -- they could have done quite a bit of flight testing in parallel with getting onto Kwaj, and my guess is they're going to need plenty of launch experience before Vandenberg will let them launch there.

    So the test plan, then, would be to launch two or three rockets perhaps a dozen or more times from a small island in the middle of a small uninhabited lake in the continental United States. Start by launching single stages by themselves (first and second), and then move on to two stage launches. I've read that Wisconsin wants to have a spaceport, perhaps they'd be willing to cough up the necessary permits.

    While short hops are not going to test the vacuum and high-speed portions of the flight, they will test all sorts of other good stuff, many of which were tested for the first time at Kwaj (where it was more expensive):

  • Launch procedures, except those relating to the interface with the range and the recovery vessel. This would include things like discovering how many shitloads of LOX it takes to load the thing up.


  • Launch in high winds, heat, etc, by picking the time of year and using ballast. Granted, this takes time, but expanding the launch envelope is only needed once you are trying to support a high flight rate.


  • First and second stage structure under some but not all flight loads. Again, ballast necessary.


  • Payload environment in the lower atmosphere.


  • The staging event -- shutdown, separation, propellant settling, ignition, interstage separation. This is huge.


  • Fairing separation, unfortunately with an aerodynamic load. The load could be mitigated by blowing the fairing at the top of an almost vertical flight, perhaps by adding enough delay between first stage engine stop and separation that the vehicle coasts to a stop.


  • Some of the first stage recovery hardware, obviously not including the re-entry sequence.


  • Recovery of first stages -- water handling, flotation, etc.


  • The flight termination system in various stages of flight. Note that since flight termination is nondestructive, they can really test the hell out of it by using FTS to shut down half the time.


  • Reuse of first stages. This could be really big, since some lessons that might otherwise be learned might be erased by reentry.


  • Recovery and reuse of second stages. Yes! They are going to try to do this with Falcon 9, so why not start now? Testing this out with moneymaking operational launches sounds cheap, but you'll never get the same instrumentation or number of tests as you can have with low-altitude test flights.


  • I'll stop here, the list is endless. The point is that a lot of confidence can be built doing cheap flight tests away from the U.S. Government's test range.

    Thursday, March 23, 2006

    Communication Lasers

    Jon Goff notes that MIT has developed a new, more sensitive infrared receiver. The article mentions that data rates to space probes might improve as a result. Brian Dunbar wonders why I think pointing (and though I didn't mention it, antennas) are a big problem.

    The location of Mars is known to great accuracy, and the location of the spacecraft is well known also. The direction the spacecraft is pointing is less well known, and the direction that the laser comes out of it is less well known also, and that's the pointing problem in a nutshell.

    Think in solid angles. An omnidirectional antenna spreads its transmitted energy evenly over the whole sphere: 4*pi steradians. So, if you transmit 100 watts across 4 x 10^8 km (max distance to Mars) to a 70 meter dish at Goldstone, the most signal you can receive is (100 watts) * (Goldstone aperture) / (transmit solid angle) / (distance^2). That's 100 * (pi*35^2) / (4*pi) / (4*10^11)^2, or about 2 * 10^-19 watts. You can see why the folks who built Goldstone wanted it a long way from the nearest 100 kilowatt AM radio station.
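    The omnidirectional case, worked through in a few lines (same numbers as above):

```python
import math

P_transmit  = 100.0             # watts
distance    = 4e11              # 4 x 10^8 km, in meters
aperture    = math.pi * 35**2   # 70 m Goldstone dish area, m^2
solid_angle = 4 * math.pi       # omnidirectional: spread over the whole sphere

received = P_transmit * aperture / (solid_angle * distance**2)
print(f"received power: {received:.1e} W")    # about 2e-19 W
```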

    A 4 meter high gain antenna might transmit 100 watts at 20 GHz. 20 GHz is 1.5 cm wavelength, so the planar beam spread might be about (1.5cm/4m)^2 = 1.4*10^-5 steradians. The Goldstone dish will receive 7*10^4 times as much power from this dish as from the omnidirectional antenna (still a piddly 1.3*10^-14 watts). The downside is that you have to point the antenna to within 1 part in 530, about 0.5 degree accuracy. That's not too hard.

    Now suppose you transmit with a 100 watt 1.55um infrared laser, with an aperture of 4cm. The beam spread is 100 times smaller than that old high gain antenna (and the solid angle is 10,000 times smaller). The Goldstone dish is now receiving about 1.6*10^-9 watts, which is much better, but you'll have to orient the spacecraft that much more accurately. Which means propellant swishing around in your tanks will knock the beam off. Sunlight pressure will knock it off. Worse still, differential heating of your optics will cause the beam to steer relative to the spacecraft. The fact that the spacecraft is in free-fall means that the structure between your star sensors and your laser has changed shape somewhat since you calibrated it on the ground before launch. All of which means that when your computer thinks it's pointing the beam at Earth, it's actually illuminating the Mare Iridium, and Goldstone is getting nothing.

    Except the trouble is that Goldstone can't handle the infrared radiation your laser produces, so you'll need something more like an infrared telescope, which might have a 2 meter aperture instead of Goldstone's 70 meter aperture, and so you've lost three of your four orders of magnitude improvement.
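
    The directional cases follow the same pattern, with the beam solid angle approximated as (wavelength/aperture)^2 the way I did above. Real antenna gains depend on aperture shape and efficiency, so treat this as an order-of-magnitude sketch of where the improvement comes from and where it goes:

      import math

      P_tx = 100.0   # transmit power, watts
      d = 4e11       # max Earth-Mars distance, meters

      def received(wavelength, tx_aperture, rx_radius):
          """Received power, with beam solid angle ~ (wavelength/aperture)^2 sr."""
          omega = (wavelength / tx_aperture) ** 2   # transmit beam solid angle, sr
          rx_area = math.pi * rx_radius ** 2        # receiver collecting area, m^2
          return P_tx * rx_area / (omega * d ** 2)

      dish_at_goldstone  = received(0.015, 4.0, 35.0)     # 20 GHz, 4 m dish -> 70 m dish
      laser_at_goldstone = received(1.55e-6, 0.04, 35.0)  # 1.55 um, 4 cm laser -> 70 m dish
      laser_at_2m_scope  = received(1.55e-6, 0.04, 1.0)   # same laser -> 2 m IR telescope

      print(f"4 m dish at Goldstone:      {dish_at_goldstone:.1e} W")
      print(f"laser at Goldstone:         {laser_at_goldstone:.1e} W")
      print(f"laser at 2 m IR telescope:  {laser_at_2m_scope:.1e} W")

    Even with the 2 meter receiver, the laser link comes out several times better than the 20 GHz dish talking to Goldstone -- three of the four orders of magnitude gained by the laser are given back by the smaller receiving aperture.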

    Now if someone shows me a spacecraft with a multi-meter telescope used to transmit infrared to Earth, I'll be seriously impressed. Courtesy of the fiber optic revolution, we now have an awesome amount of experience with high-data-rate micron (i.e. infrared) lasers. I can imagine a spacecraft telescope where the same mirror is used both to transmit and to take pictures of Earth. The pictures are used for pointing calibration, so the stable optical structure is just the picture sensor and the laser transmit head, and doesn't include the mirror at all. As a side benefit, the same telescope and imaging system can be used for very nice pictures of the probe target, i.e. Mars, although not at the same time, of course. You might even be able to do high-precision lidar (radar with lasers) surface terrain measurement.

    Wednesday, March 22, 2006

    Crackpots and Rocket Science

    There's been an uptick in talk of space elevators. Here's Rand Simberg going at it. I get annoyed when I read this stuff, and lately I've been trying to figure out what it is about space elevators that I find so alarming. Unfortunately I have figured it out and don't like the answer.

    I'll get the tedious bit out of the way first.

  • Space elevators from anywhere in the Earth's atmosphere are not going to be built for a very long time, certainly not in my lifetime, probably not ever. In short, they require engineering miracles (cheap large-scale carbon nanotubes and megawatt lasers), and they do not have a realistic return on investment: a proposed $5 billion elevator would lift one 8-ton cargo per week, while 5% interest and 5% maintenance come to about $10 million per week, or $625 per pound lifted, before the cost of actually lifting the cargo. (The arithmetic is sketched right after this list.)


  • And, should the above engineering miracles occur, they would enable other ways of getting to space that would be better than an elevator. You could, for instance, build a fully reusable single-stage-to-orbit rocket from large-scale carbon nanotubes that would certainly be cheaper than an elevator. All sorts of things from la-la land are possible if you make unreasonable assumptions. It's sort of like trying to figure out if a Tyrannosaurus Rex could win a fight with King Kong....
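
    Here is that return-on-investment arithmetic spelled out in Python, using only the numbers assumed above (a $5 billion elevator, one 8-ton cargo per week, 5% interest plus 5% maintenance). It matches the $625 per pound figure if you round the weekly cost up to an even $10 million.

      capital_cost = 5e9            # proposed elevator cost, dollars
      carrying_rate = 0.05 + 0.05   # 5% interest + 5% maintenance, per year
      cargo_lbs = 8 * 2000          # one 8-ton cargo per week, in pounds

      weekly_cost = capital_cost * carrying_rate / 52   # ~$10M per week
      per_pound = weekly_cost / cargo_lbs               # ~$600/lb, before lift costs
      print(f"${weekly_cost/1e6:.1f}M per week, ${per_pound:.0f} per pound lifted")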


    Back to the bit that alarms me.

    Much of the breathless discussion of elevators is conducted by the same folks who discuss something quite important to me: cheap rocket launches. These people are clearly unable to sanely evaluate engineering propositions. In short, they are dreamers or crackpots.

    (An aside: researchers who are developing carbon nanotube materials are most definitely not crackpots. That's R&D, which is a great thing. It's common for folks working on new materials to suggest outlandish uses. That's fun and harmless so long as they concentrate on their day job, which is figuring out how to make the material in the first place. CNT materials, if developed, are likely to be as popular as carbon fiber is today, and find all sorts of good uses.)

    Anyway, here's the bit I don't like at all: how is someone who does not know a thing about engineering (my mom, for instance) supposed to tell the difference between me and one of the aforementioned crackpots? I'm working on an upcoming post which will suggest that hot water first stages and a little aerodynamic lift could cut LEO launch prices by a factor of about 2. Like the aforementioned folks, I don't work in this industry, and am unlikely to. My suggestions are unlikely to be picked up by others in the industry. Why am I burning my valuable time on this stuff?

    The answer is that I find the engineering entertaining, and I post the bits that I do because I think they might be entertaining to a small group of people who I don't bump into day-to-day. Part of the entertainment is the thought that if I noticed something really useful, I'd act on it. But it's just a thought. I like to think I have a reasonably good sense of the difficulty of making technical progress in a few fields, this one included. Frankly, I decline to make the big investment here (i.e. change careers) because I think the difficulty is too high. It is rocket science, after all.

    Tuesday, February 14, 2006

    Why Merlin 2?

    SpaceX does not need to design and qualify a third, much bigger, engine in order to become a profitable launch company, or to take over payloads intended for the Shuttle, or to fly people to either ISS or a Bigelow hotel. SpaceX should concentrate on execution, development of parallel staging (Falcon 9S9), and on a regeneratively cooled Merlin. Execution includes things like recovery of first stages and getting to one launch per month, with at least one or two Falcon 9 launches each year.

    In this context, a regenerative Merlin 1 (hereafter referred to as Merlin 1x) may make sense because it may improve safety, reliability, and operational costs, and the cost of those improvements may realistically be amortized over dozens of launches. As the engine cycle is more efficient, a small increase in Isp and perhaps thrust is likely.

    So why Merlin 2?

    Elon keeps saying that Merlin 2 will be the biggest single thrust chamber around, but smaller than the F-1. Presumably, he is implying that it will have less total thrust than existing multiple-thrust-chamber engines, in particular the RD-180 (2 chambers, 4152 kN total). That puts a fairly tight bound on the size. Here's a list of the biggest current engines, by thrust per chamber.

    Engine      Thrust per chamber   Isp (s)     Vehicle
    F-1         7740 kN              265 - 304   Saturn V
    RS-68       3312 kN              365 - 420   Delta IV
    RS-24       2278 kN              363 - 453   Space Shuttle
    RD-180      2076 kN              311 - 338   Atlas V
    RD-171      1976 kN              309 - 337   Zenit
    NK-33       1638 kN              297 - 331   N-1/Kistler
    Vulcain 2   1300 kN              318 - 434   Ariane 5


    Taking Elon at his word, Merlin 2 will have between 3312 and 4152 kN of thrust. Call it 4000 kN, 17% more than the Falcon 9 first stage.
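
    To make that bound explicit, here is the table above plus Elon's two constraints in a few lines of Python. The 380 kN figure for a single Merlin 1B is my own assumption for the Falcon 9 first stage comparison, not a SpaceX number.

      # Thrust per chamber in kN, from the table above.
      engines = {
          "F-1": 7740, "RS-68": 3312, "RS-24": 2278, "RD-180": 2076,
          "RD-171": 1976, "NK-33": 1638, "Vulcain 2": 1300,
      }

      biggest_current = max(v for k, v in engines.items() if k != "F-1")  # RS-68, 3312 kN
      rd180_total = 2 * engines["RD-180"]                                 # 4152 kN total

      print(f"Merlin 2 thrust bound: {biggest_current} to {rd180_total} kN "
            f"(and well under the F-1's {engines['F-1']} kN)")

      # Compare the round 4000 kN guess against nine Merlin 1Bs at ~380 kN each.
      falcon9_first_stage = 9 * 380
      print(f"4000 kN is {4000 / falcon9_first_stage - 1:.0%} more than the Falcon 9 first stage")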

    What can SpaceX do with this engine that cannot be done with the Merlin 1B?

    Improve Falcon 9 LEO lift to 11000 kg. That's not even close to lifting stranded Space Shuttle payloads, which is the largest obvious market in terms of LEO mass in the next four years. I'm not sure what payloads are enabled by improving capacity from 9300 to 11000 kg.

    Cost-reduce the Falcon 9. I can't see how this can pay for development, unless the Falcon 9 is launching monthly by 2010. And a further (fractional) price decrease doesn't seem like it would stimulate the market any more in the near-term than the existing, bold, price statement.

    Improve Falcon 9S9 LEO lift to 30000 kg. The promised Falcon 9S9 lift capacity (24750 kg to LEO) is almost exactly that of the Shuttle (24400 kg to LEO), but the shuttle payload bay supports its payload better. The additional weight of a frame inside the 9S9 fairing might make some Shuttle cargoes too heavy to lift. A Merlin 2 based first stage for the Falcon 9S9 would fix this. The trouble is, a modest increase in thrust from the Merlin 1x would also fix the same problem. And it seems the latter change would have to be less costly than a whole new engine.

    If the Falcon 9S9, improved with either Merlin 1x or Merlin 2s in the first stage, took over a dozen Shuttle ISS launches, I can imagine that would be a high enough flight rate to pay for Merlin 2 development. The trouble is that if I were paying to lift a billion-dollar ISS segment, I'd prefer to go on a machine with engine-out capability at launch. Three Merlin 2s don't give you that capability, and 4 implies some kind of vehicle (and its development cost) other than a Falcon 9S9 with single Merlin 2s on the first stages.

    Build a 100,000+ kg to LEO launcher. This is what you get if you want engine-out capability with Merlin 2s. This kind of capacity is not necessary for exploring the solar system with robot probes, or even for building big orbital telescopes. It makes sense only if you want to send people a long way for a long time. Elon Musk passionately wants to build this rocket; that's why it's called the BFR.

    The only organization with any credibility talking about using such heavy launches is NASA, for use in sending people to the Moon and maybe Mars. The BFR is Elon Musk's statement that he wants to take over the U.S. manned space program's launches. The business case for Merlin 2 and BFR must fundamentally rely on the U.S. government privatizing a critical, and the most public, portion of the manned space program. A program which from its outset has been about national pride.

    A more likely scenario is that NASA will spend billions developing its own HLLV in competition with SpaceX, in the process abandoning the Space Station and strangling SpaceX, and will end up being able to afford just two or three launches to the Moon before abandoning VSE for the next thing. The history of heavy launchers is not reassuring. The Saturn V (118,000 kg to LEO, $2.2B per launch in 2004 dollars) was launched 13 times. Energia (85,000 kg to LEO, $1.4B per launch) was launched twice.

    If all this sounds dire, a more reassuring comparison can be made by considering what SpaceX intends the Merlin 2-based vehicle to cost. Mr. Musk has stated several times that the point is to get costs below $1000/kg. A 100,000-kg-to-LEO vehicle priced at $500/kg would be $50M per launch. The history of $50M launchers is much more attractive. The various Delta incarnations (~1300 kg to LEO, ~$50M per launch) were launched hundreds of times. Soyuz (7200 kg to LEO, $45M per launch) has been launched 714 times.
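
    For what it's worth, here is the cost per kilogram implied by the payloads and launch prices quoted above, next to the target price. These are my rough figures from this post, not authoritative data.

      # (payload to LEO in kg, price per launch in dollars), as quoted above
      launchers = {
          "Saturn V": (118_000, 2.2e9),
          "Energia": (85_000, 1.4e9),
          "Soyuz": (7_200, 45e6),
          "100,000 kg BFR at target price": (100_000, 50e6),
      }

      for name, (payload_kg, price) in launchers.items():
          print(f"{name}: ${price / payload_kg:,.0f}/kg")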

    A big rocket at such prices would clamp Falcon 9S9 prices to $30M and also reduce the 9S9 flight rate, at which point the parallel-staged rocket might cost too much to fly profitably. So SpaceX faces a fork in the road: develop either the Merlin 2 or parallel staging, but not both. Parallel staging will cost less to develop, implement, and support; the BFR will cost more but will enable people to get out of low Earth orbit, and perhaps convert the VSE from a boondoggle into a success.

    And thus we get to the vision thing. Elon has at various times stated that SpaceX is out to make money, and at other times stated that the goal of SpaceX is to enable the colonization of space. Let me point out that you can make money first, and enable colonization second, but not the other way around.