
Thursday, December 12, 2013

The SkyBox camera

Christmas (and Christmas shopping) is upon us, and I have a big review coming up, but I just can't help myself...

SkySat-1, from a local startup SkyBox Imaging, was launched on November 21 on a Russian Dnepr rocket, along with 31 other microsatellites and a package bolted to the 3rd stage.  They have a signal, the satellite is alive, and it has seen first light.  Yeehah!

These folks are using area-array sensors.  That's a radical choice, and I'd like to explain why.  For context, I'll start with a rough introduction to the usual way of making imaging satellites.

A traditional visible-band satellite, like the DubaiSat-2 that was launched along with SkySat-1, uses a pushbroom sensor, like this one from DALSA.  It has an array of 16,000 (swath) by 500 (track) pixels.
The "track" pixel direction is divided into multiple regions, which each handle one color, arranged like this:
Digital pixels are little photodiodes, each with an attached capacitor that stores the charge accumulated during the exposure.  A CCD is a special kind of circuit that can shift a charge from one pixel's capacitor to the next.  CCDs are read by shifting the contents of the entire array along the track direction, which in this second diagram would be to the right.  As each line is shifted into the readout line, it is very quickly shifted along the swath direction.  At multiple points along the swath there are "taps" where the stored charge is converted into a digital number representing the brightness of the light on that pixel.

A pushbroom CCD is special in that it has a readout line for each color region.  And, a pushbroom CCD is used in a special way.  Rather than expose a steady image on the entire CCD for tens of milliseconds, a moving image is swept across the sensor in the track direction, and in synchrony the pixels are shifted in the same direction.
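
To make the synchronization concrete, here's a toy simulation of a pushbroom TDI readout in Python; the row count and scene are made up, and this is a sketch of the principle, not of any real sensor's signal chain:

    import numpy as np

    R = 8                          # TDI rows in one color region (made up)
    scene = np.random.rand(100)    # brightness of scene lines sweeping past

    rows = np.zeros(R)             # charge packet held in each CCD row
    out = []
    for t in range(len(scene)):
        # row i holds the packet created i clocks ago, and -- because the
        # image moves in sync with the shifting -- still sees that same line
        for i in range(min(t + 1, R)):
            rows[i] += scene[t - i]
        out.append(rows[-1])       # the last row reaches the readout line
        rows = np.roll(rows, 1)    # shift all packets one row along track
        rows[0] = 0.0

    # once the pipeline fills, out[t] is ~R * scene[t - R + 1]: R times the
    # signal of a single-row exposure, with no extra smear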

A pushbroom CCD can sweep out a much larger image than the size of the CCD.  Most photocopiers work this way.  The sensor is often the full width of the page, perhaps 9 inches wide, but just a fraction of an inch long.  To make an 8.5 x 11 inch image, either the page is scanned across the sensor (page feed), or the sensor is scanned across the page (flatbed).

In a satellite like DubaiSat-2, a telescope forms an image of some small portion of the earth on the CCD, and the satellite is flown so that the image sweeps across the CCD in the track direction.
Let's put some numbers on this thing.  If the CCD has 3.5 micron pixels like the DALSA sensor pictured, and the satellite is in an orbit 600 km up, and has a telescope with a focal length of 3 meters, then the pixels, projected back through that telescope to the ground, would be 70 cm on a side.  We call 70 cm the ground sample distance (GSD).  The telescope might have an aperture of 50cm, which is as big as the U.S. Defense Department will allow (although who knows if they can veto a design from Dubai launched on a Russian rocket).  If so, it has a relative aperture of f/6, which will resolve 3.5 micron pixels well with visible light, if diffraction limited.
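
As a sanity check on those numbers (a sketch using only the assumed values above):

    pixel = 3.5e-6            # m, pixel pitch of the DALSA-style sensor
    alt = 600e3               # m, orbit altitude
    focal = 3.0               # m, telescope focal length
    print(pixel * alt / focal)   # 0.7 m ground sample distance

    aperture = 0.5            # m, the largest allowed aperture
    print(focal / aperture)   # 6.0, i.e. a relative aperture of f/6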

The satellite is travelling at 7561 m/s in a north-south direction, but its ground projection is moving under it at 6911 m/s, because the ground projection is closer to the center of the earth.  The Earth is also rotating underneath it at 400 m/s at 30 degrees north of the equator.  The combined relative velocity is 6922 m/s.  That's 9,900 pixels per second.  9,900 pixels/second x 16,000 pixel swath = 160 megapixels/second.  The signal chain from the taps in the CCD probably won't run well at this speed, so the sensor will need at least 4 taps per color region to get the analog-to-digital converters running at a more reasonable 40 MHz.  This is not a big problem.
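
Here's that arithmetic spelled out; the physical constants are rounded and the 30-degree latitude is just the example above, so treat the outputs as estimates:

    import math

    GM, Re = 3.986e14, 6.371e6           # Earth's GM (m^3/s^2), radius (m)
    r = Re + 600e3                       # orbit radius
    v_orbit = math.sqrt(GM / r)          # ~7560 m/s
    v_ground = v_orbit * Re / r          # ground projection, ~6910 m/s
    v_spin = 465.0 * math.cos(math.radians(30))   # Earth rotation at 30 N
    v_rel = math.hypot(v_ground, v_spin)          # ~6920 m/s combined

    line_rate = v_rel / 0.7              # ~9,900 pixels/second along track
    mpix = line_rate * 16000 / 1e6       # ~160 megapixels/second
    print(mpix / 4)                      # ~40 MHz per ADC with 4 taps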

A bigger problem is getting enough light.  If the CCD has 128 rows of pixels for one color, then the time for the image to slide across that color region will be 13 milliseconds, and that's the effective exposure time.  If you are taking pictures of your kids outdoors in the sun, with a point-and-shoot with 3.5 micron pixels, 13 ms with an f/6 aperture is plenty of light.  Under a tree that'll still work.  From space, the blue sky (it's nearly the same blue looking both up and down) will be superposed on top of whatever picture we take, and images from shaded areas will get washed out.  More on this later.
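
The effective exposure falls straight out of the line rate:

    line_rate = 9900.0        # lines/s, from the velocity numbers above
    rows_per_color = 128      # rows in one color region
    print(rows_per_color / line_rate)   # ~0.013 s: the 13 ms exposure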

Okay, back to SkySat-1.  The Skybox Imaging folks would like to shoot video of things, as well as imagery, and don't want to be dependent on a custom sensor.  So they are using standard area array sensors rather than pushbroom CCDs.

In order to shoot video of a spot on the ground, they have to rotate the satellite at almost 1 degree/second so that the telescope stays pointing at that one point on the ground.  If it flies directly over that spot, it will take about 90 seconds to go from 30 degrees off nadir in one direction to 30 degrees off in the other direction.  In theory, the satellite could shoot imagery this way as well, and that's fine for taking pictures of, ahem, targets.

A good chunk of the satellite imagery business, however, is about very large things, like crops in California's Central Valley.  To shoot something like that, you must cover a lot of area quickly and deal with motion blur, both things that a pushbroom sensor does well.

The image sliding across a pushbroom sensor does so continuously, but the pixel charges get shifted in a discrete manner to avoid smearing them all together.  As a result, a pushbroom sensor necessarily sees about 1 pixel of motion blur in the track direction.  If SkySat-1 also had 0.7 meter pixels and just stared straight down at the ground, then to hold itself to the same motion blur it would need a 93 microsecond exposure.  That is not enough time to make out a signal above the readout noise.

Most satellites use some kind of Cassegrain telescope, which has two mirrors.  It's possible to cancel the motion of the ground during the exposure by tilting the secondary mirror, generally with some kind of piezoelectric actuator.  This technique is used by the Visionmap A3 aerial survey camera.  It seems to me that it's a good match to SkyBox's light problem.  If the sensor is an interline transfer CCD, then it can expose pictures while the secondary mirror stabilizes the image, and cycle the mirror back while the image is read out.  Interline transfer CCDs make this possible because they expose the whole image array at the same time and then, before readout, shift the charges into a second set of shielded capacitors that do not accumulate charge from the photodiodes.

Let's put some numbers on this thing.  They'd want an interline transfer CCD that can store a lot of electrons in each pixel, and read them out fast.  The best thing I can find right now is the KAI-16070, which has 7.4 micron pixels that store up to 44,000 electrons.  They could use a 6 meter focal length F/12 Cassegrain, which would give them 74 cm GSD, and a ground velocity of 9,350 pixels/sec.

The CCD runs at 8 frames per second, so staring straight down, the satellite's view will advance 865 m, or 1170 pixels, along the ground between frames.  This CCD has a 4888 x 3256 pixel format, so we would expect 64% overlap in the forward direction.  This is plenty to align the frames to one another, but not enough to substantially improve signal-to-noise ratio (with stacking) or dynamic range (with alternating long and short exposures).
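
A sketch of the overlap arithmetic, reusing the 600 km altitude assumed earlier:

    pixel = 7.4e-6            # m, KAI-16070 pixel pitch
    alt, focal = 600e3, 6.0   # m; assumed altitude and focal length
    gsd = pixel * alt / focal # 0.74 m

    v_rel = 6922.0            # m/s relative ground velocity, from earlier
    advance = v_rel / 8.0     # ~865 m between frames at 8 fps
    track_px = 3256           # frame height in the track direction
    print(1 - advance / gsd / track_px)   # ~0.64: 64% forward overlap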

And this, by the way, is the point of this post.  Area array image sensors have seen a huge amount of work in the last 10 years, driven by the competitive and lucrative digital camera market.  16 megapixel interline CCDs with big pixels running at 8 frames per second have only been around for a couple of years at most.  If I ran this analysis with the area arrays of five years ago the numbers would come out junk.

Back to Skybox.  When they want video, they can have the CCD read out a 4 megapixel region of interest at 30 fps.  That is easily big enough to fill an HDTV stream.

They'd want to expose for as long as possible.  I figure a 15 millisecond exposure ought to saturate the KAI-16070 pixels looking at a white paper sheet in full sun.  During that time the secondary mirror would have to tilt through 95 microradians, or about 20 seconds of arc for those of you who think in base-60.  Even this exposure will cause shiny objects like cars to bloom a little; any longer, and sidewalks and white roofs will saturate.
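
For scale, here's the implied mirror motion, using the rough rule that tilting a flat mirror steers the beam by twice the tilt angle; a real Cassegrain secondary adds a magnification factor I'm ignoring, so this is only an order-of-magnitude check that lands near the 95 microradian figure:

    v_rel = 6922.0     # m/s apparent ground motion
    t_exp = 0.015      # s exposure
    alt = 600e3        # m
    los = v_rel * t_exp / alt    # ~1.7e-4 rad of line-of-sight sweep
    tilt = los / 2               # flat-mirror rule: beam moves 2x the tilt
    print(tilt * 1e6, tilt * 206265)   # ~87 urad, ~18 arcsec of tilt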

To get an idea of how hard it is to shoot things in the shade from orbit, consider that a perfectly white sheet exposed to the whole sky except the sun will be the same brightness as the sky.  A light grey object with 20% albedo shaded from half the sky will be just 10% of the brightness of the sky.  That means the satellite has to see a grey object through a veil 10 times brighter than the object.  If the whole blue sky is 15% as bright as the sun, then with pixels that saturate at 44,000 electrons, the sky veil comes to 6,600 electrons, and our light grey object generates around 660 electrons of signal, swimming in sqrt(6600+660)=85 electrons of noise.  That's a signal-to-noise ratio of 7.8:1, which actually sounds pretty good.  It's a little worse than what SLR makers consider minimum acceptable noise (SNR=10:1), but better than what cellphone camera makers consider minimum acceptable noise (SNR=5:1, I think).
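
Here's the electron bookkeeping behind those numbers; every ratio is one of the assumptions stated above:

    import math

    full_well = 44000.0      # e-: white sheet in full sun saturates a pixel
    sky = 0.15 * full_well   # sky veil at 15% of full sun: 6,600 e-
    target = 0.10 * sky      # grey object in half-sky shade: 660 e-

    noise = math.sqrt(sky + target)   # shot noise on everything collected
    print(target / noise)             # ~7.8:1 SNR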

But SNR values can't be directly compared, because you must correct for sharpness.  A camera might have really horrible SNR (like 1:1), but I could make the number look better by just blurring out all the high spatial frequency components.  The measure of how much scene sharpness is preserved by the camera is the MTF (Modulation Transfer Function).  For reference, SLRs mounted on tripods with top-notch lenses generally have MTFs around 40% at their pixel spatial frequency.

Roughly speaking, sharpening can double the high-frequency MTF at the cost of halving SNR.  Fancy denoise algorithms change this tradeoff a bit by making assumptions about what is being looked at.  Typical assumptions are that edges are continuous and that colors don't have as much contrast as intensity.

The atmosphere blurs things quite a bit on the way up, so visible-band satellites typically have around 7-10% MTF, even with nearly perfect optics.  If we do simple sharpening to get an image that looks like 40% MTF (like what we're used to from an SLR), that 20% albedo object in the shade will have SNR of around 2:1.  That's not a lot of signal -- you might see something in the noise, but you'll have to try pretty hard.
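
The arithmetic, with an assumed mid-range 8% MTF (the answer moves between roughly 1.6:1 and 2.4:1 across the 7-10% range):

    snr = 7.8            # shaded grey object, from the electron counts
    mtf_raw = 0.08       # ~8% MTF through the atmosphere (assumed)
    mtf_goal = 0.40      # SLR-like sharpness
    print(snr / (mtf_goal / mtf_raw))   # ~1.6:1 after a 5x sharpen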

The bottom line is that recent, fast CCDs have made it possible to use area-array instead of pushbroom sensors for survey satellites.  SkyBox Imaging are the first ones to try this idea.  Noise and sharpness will be about as good as simple pushbroom sensors, which is to say that dull objects in full-sky shade won't really be visible, and everything brighter than that will.

[Updated] There are a lot of tricks to make pushbroom sensors work better than what I've presented here.

  • Most importantly, the sensor can have more rows, maybe 1000 instead of 128 for 8 times the sensitivity.  For a simple TDI sensor, that's going to require bigger pixels to store the larger amount of charge that will be accumulated.  But...
  • The sensor can have multiple readouts along each pixel column, e.g. readouts at rows 32, 96, 224, 480, 736, and 992.  The initial readouts give short exposures, which can see sunlit objects without accumulating huge numbers of photons.  Dedicated short exposure rows mean we can use small pixels, which store less charge.  Small pixels enable the use of sensors with more pixels.  Multiple long exposure readouts can be added together once digitized (see the sketch after this list).  Before adding these long exposures, small amounts of diagonal image drift, which would otherwise cause blur, can be compensated with a single pixel or even half-pixel shift.
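
Here's a sketch of that digital summation step, with made-up line data and drift values; the point is only that each long-exposure tap is digitized separately, nudged by a fraction of a pixel to undo drift, and then added:

    import numpy as np

    # four long-exposure readouts of (nearly) the same scene line,
    # digitized separately at taps further down the pixel column
    taps = [np.random.rand(1024) for _ in range(4)]   # stand-in line data
    drift = [0.0, 0.4, 0.9, 1.3]   # assumed per-tap image drift, pixels

    def deshift(line, px):
        # undo a small (sub)pixel drift with linear interpolation
        x = np.arange(len(line))
        return np.interp(x, x - px, line)

    summed = np.sum([deshift(l, d) for l, d in zip(taps, drift)], axis=0)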

[Updated] I've moved the discussion of whether SkyBox was the first to use area arrays to the next post.

Friday, April 19, 2013

Optical Bar Cameras

[Update: I'd like to thank Phil Pressel, John Gregor, and Gordon Petrie for their corrections to this post.  The changes required have been so extensive I have not marked them.]

[Update 2: Phil Pressel just released his book Meeting the Challenge: The Hexagon KH-9 Reconnaissance Satellite (get it at AIAA or Amazon).  I've ordered it and will post a review after I get it.]

From 1957 to 1965, a high tech startup called Itek made the world's most sophisticated satellite reconnaissance cameras for a single customer -- the CIA.  The company has a fascinating subsequent history, as they ended up building cameras for Apollo and Viking.  Eventually they ended up building the DB-110 reconnaissance pod, which I'll do a blog post on some day.

Merton Davies at RAND apparently originated the idea of using a spinning camera mounted on a spin-stabilized satellite to take panoramic shots with a narrow-angle lens.  Amrom Katz passed the concept to Walter Levinson at Boston University Physical Research Laboratory (BUPRL), who refined it into the oscillating-lens design used in the HYAC-1 panoramic camera flown on Air Force high altitude reconnaissance balloons.  Itek, just a few weeks after incorporation in late 1957, bought BUPRL.

Soon after, the CIA contacted Itek to discuss the camera requirements for the first spy satellites.  All of these initial satellites came to use the rotating panoramic camera.  I think this is the KH-4A or KH-4B Corona.

Itek also built versions of these cameras for use in the U-2 and SR-71 aircraft, in which they were called the OBC (Optical Bar Camera, named for the appearance of the field of regard on the ground).  These were first used in the 1960s and are still in use today.  Here is an Itek Optical Bar Camera that goes in the nose of a U-2:


Take a look at the big, wide blue bar under the airplane.  That's what a single frame of the camera shot.  It's huge.  I've heard that this "bar" looking frame was why they called it the optical bar camera.  However, the NRO's Hexagon documentation refers to the rotating optical assembly (what in many cameras is called the "Optical Tube Assembly") as the optical bar.


After a string of successful programs, requirements ratcheted up and tensions grew between the NRO and the CIA over the next program, Fulcrum.  Early on, Itek was contracting with both the NRO and the CIA on competing projects.  Itek pulled out of the CIA's project, and some combination of the NRO and the CIA took all their (government-owned) work and gave the job to Perkin-Elmer.  When the dust settled the project was renamed Hexagon.  Perkin-Elmer built the KH-9 Optical Bar Camera to their own design rather than Itek's, as they didn't think the Itek design would work.  Here is a look into the aperture of the KH-9 Optical Bar Camera.

The Itek OBCs in the U-2, SR-71, and Apollo spacecraft all had a rotating structure called the roller cage, which I suspect was fixed to the rotating camera.  The Perkin-Elmer design in the KH-9 deleted the roller cage and the rotating fold mirror inside it, and instead had a servo controlled twisting platen.

Here is a comparison of various optical bar cameras built up to 1971 (the launch of the first KH-9).

Camera                          KA-80 (U-2/SR-71/Apollo)  (U-2/SR-71)            (KH-9)
Focal length                    610 mm (24 inches)        760 mm (30 inches)     1524 mm (60 inches)
Aperture                        174 mm (f/3.5)            218 mm (f/3.5?)        508 mm (f/3.0)
Cross-track field of view       108 degrees               140 degrees            120 degrees
Film width                      127 mm (5 inches)         127 mm (5 inches)      168 mm (6.6 inches)
Film length                     2005 m (6500 feet)        3200 m (10,500 feet)   70,000 m (230,000 feet)
Format                          114 x 1073 mm             114 x 1857 mm          155 x 3190 mm
Film resolution                 3.7 micron                3.7 micron             3.7 micron (1000:1 contrast); 7.4 micron (1.6:1 contrast)
Depth of focus                  +/- 13 microns            +/- 13 microns         +/- 11 microns
Format resolution               31k x 290k = 9 Gpix       31k x 502k = 15 Gpix   42k x 862k = 36 Gpix
Frames                          1650                      1640                   21,000
Nominal altitude                24.4 km (80k feet)        24.4 km (80k feet)     152 km (82 nm)
Center ground resolution        14.8 cm                   11.9 cm                37 cm
Swath                           67 km                     134 km                 555 km
In-track field of view, center  4.55 km                   3.66 km                15 km
Nominal overlap                 55%                       55%                    10% (each camera)
Area collected                  226k km^2                 362k km^2              2 x 80M km^2
Nominal ground velocity         1000 m/s                  1000 m/s               7,800 m/s
Cycle time                      2 sec                     1.65 sec               1.73 sec
Film velocity at slit           1.9 m/s                   2.9 m/s                5.5 m/s
Maximum slit size               7.62 mm                   12? mm                 22? mm
Max exposure time               4 ms                      4? ms                  4? ms

Take a look at the area collected by the KH-9.  The Soviet Union was a big place: 20 million km^2.  Each of the four film re-entry capsules could return the entire USSR, in stereo, with plenty of side overlap and margin for images fouled by clouds.  Typically they chose to take more frames with smaller swaths (90 degrees or 60 degrees) to get higher average resolution, which brought the total take down somewhat, to around 60 million km^2.

My resolution numbers up there are slightly inflated.  The film used could only eke out 130 lp/mm when given the maximum possible amount of contrast, such as would be seen at a shadow line.  For lower-contrast detail, something like paint on a road, it was good for about half that.  Pixelated sensors have a much more drastic resolution cliff, of course.  So the KH-9 resolution above, for example, might be compared to something like a 20 to 30 gigapixel camera today.  I'll note that I don't have any of those to suggest.

There are two big concepts here that I think are important.  The first is the mechanical and logistical difficulties of using film.  Below I've spelled out some of the details.  The second is that despite these headaches, until very recently, film has been superior in many ways to electronic sensors for massive survey work.

The basic problems with using film stem from the fact that the sensor surface is a thin, pliable, moving, relatively inexpensive object that has to be held within the camera with very high precision.  There must have been several problems associated with getting the film aligned within +/- 11 microns of the lens's focal plane.  Among other things, the machines resemble Van de Graaff generators, so the film is subject to static electricity buildup and discharges, along with heating that tends to make it even more pliable and sticky.  To reduce the static buildup, many of these cameras slid the film over surfaces with hundreds of pores, floating the film on pressurized air.

I think the Itek designs spun the entire optical assembly at a constant rate.  The spinning assembly sits inside a gimbal which rocks back and forth 1.6 degrees each cycle.  The rocking motion accomplishes forward motion compensation, so that the sweep of the slit across the ground is orthogonal to the direction of flight.  This compensation ensures that the image on the film moves almost exactly with the film, so there is no blurring in the direction of flight during longer exposures.  This rocking motion must have required fairly large torques, and I'm sure this is one of the things that the Perkin-Elmer folks balked at when considering the Itek design in space.  Note that a constantly rotating camera sweeps faster at the outer edges than at the center, so the forward motion compensation probably had to vary its rate of rocking as it went to compensate.


Here is a diagram which shows how the film was wrapped around the roller cage in the Itek designs.  As the optical assembly (including the roller cage) twists counterclockwise, the film is transported clockwise.

However, even with a constant spin rate, the film does not transport at a constant rate.  For instance, in the SR-71 OBC, a frame is exposed for 640 ms, during which time the film rips around the roller cage at 2.9 meters per second (that's what the rollers see).  For the next second, the film advances at just 1 meter per second, so that the frame boundary going across the top of the roller cage can meet up with the slit going around the bottom.  Because of the speed change, many of the freewheeling rollers on the roller cage will contact unexposed film coming from the framing roller with a tangential speed difference of 1.9 meters per second.  As each freewheeling roller changes speed to match the film, it seems to me it would tend to scuff the film.  I'm sure they made sure to make those rollers as lightweight as possible to reduce their rotational momentum.

Note the unlabeled slit after the second mirror, right next to the film wrapped around the roller cage.  Only the portion of the film after this point in the optical chain has light on it, so this is the only spot that must be held accurately.  I don't really know how it was done, since every moving belt of material that I've ever seen has vibrated.  They may have had a glass reseau plate that the film slid across, but sliding film across glass at 2.9 meters per second seems like an excellent way to scratch one or both.  I have no evidence for it yet, but this seems like a good application for the compressed-air film handling scheme.

The Itek forward motion compensation gimbal also took out airplane pitch motions.  Airplane roll motions were taken out by varying the optical tube roll rate.  That's easy enough (it doesn't require torque), but the film rate supplied to the roller cage assembly in the back also had to change to match.

That last diagram gives a pretty good clue to another challenge in this design -- film curvature.  Although I've not found any labelled dimensions, it looks like the roller cage in the Itek designs was about 300 mm in diameter.  The design of the roller cage really had me puzzled for a while, because the film transport path is cylindrical, but the film being exposed by the slit has to be flat.  I finally figured it out when I took a good look at this photo of the Apollo OBC (also made by Itek):


The roller cage is a cylindrical cage of ROLLERS!  Duh!  As the film passes between the rollers, it's locally flat, and that's how they keep the film surface matched to the lens focal plane.  Here's a 5x blowup of the roller cage in the picture above.  It looks like the rollers are packed together as close as they can get in order to minimize the polygonal variation in film distance from the center of roller cage rotation.  I think this variation leads to a (small) variation in film speed at the exposure site.
There should be a spot in the roller cage where there aren't any rollers, and the film becomes planar for a while.  This spot would be over the exposure slit.  In the Apollo OBC pictured here, the gap between the rollers must be at least 7.6mm, and given the orientation of the lens at the top it should be on the back side of the roller cage that we don't see here.


The film is pulled taut around the stuttering, bucking and twisting roller cage with an assembly that looks like the following.  During the exposure, film is pulled around the roller cage at 5.7 meters/second.  Here's the film path:
If the tension on the film is too small, small irregularities in its "set" curvature will make patches of the film lift away from the cage as it goes around.  With a +/- 11 micron depth of focus, it doesn't take much lift-off to cause a problem.  If the tension is too high, the film will wrinkle longitudinally.


The Perkin-Elmer design did not have the roller cage or the gimbal.  Instead, they had a twisting platen assembly at the focal plane.  This would twist back and forth through 130 degrees as the optical bar rotated through 360 degrees.  The two were nearly locked together through the 120 degrees of rotation that were used for exposure.


Because the Perkin-Elmer design has no rocker gimbal doing forward motion compensation, and the optical assemblies rotate at constant speed, the sweep is faster at the edges than in the center, and the area swept in each frame is slightly S shaped.  They may have splayed the roll axes of the optical bars to get first order forward motion compensation, but this doesn't change the S shape.  To keep the image from smearing across the film, the KH-9 has to keep the slit perpendicular to the motion of the slit across the ground, accounting for the changing sweep rate versus spacecraft velocity, as well as the rotation of the Earth, which is 6% of the spacecraft velocity at the equator.

This is why the twisting platen in the KH-9 is servo-locked to the twisting optical assembly.  They have to vary the relative twist of the two a little bit to keep the projected slit perpendicular to its projected motion.

After the picture was shot the film sat around for a month in a temperature and humidity controlled environment, and then was dropped, recovered, and developed.  There was a lot of opportunity for the film to shrink differentially.  All the mechanical twisting, as well as film shrinkage, must have made photogrammetry a nightmare.

The Sunny 16 rule says that you can properly expose ISO 100 film in bright daylight with an f/16 lens and a 10 ms exposure.  The KH-9 used mostly monochrome Kodak 1414, which has an Aerial Film Speed of 15; I think that's equivalent to ISO 38.  In full sun a 1 ms exposure at f/3 would have worked fine.  On the Apollo OBC that corresponds to a 1.9 mm wide slit.  They did have exposure control hardware, and it looks like they could expose for scenes two stops dimmer than full sun.  They might have also stopped down from that, in order to avoid blowouts over ice.
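
Scaling Sunny 16 down to this film and aperture (my arithmetic, using the ISO 38 guess above):

    iso = 38.0                       # ~ISO equivalent of Aerial Film Speed 15
    t_f16 = 1.0 / iso                # Sunny 16: shutter = 1/ISO at f/16
    t_f3 = t_f16 * (3.0 / 16.0) ** 2 # exposure scales as f-number squared
    print(t_f3 * 1e3)                # ~0.9 ms in full sun at f/3

    film_v = 1.9                     # m/s film speed at the Apollo OBC slit
    print(t_f3 * film_v * 1e3)       # ~1.8 mm slit width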



At this point, I'm sure those of you who have worked with digital sensors are feeling pretty smug.  But consider how wonderful film was for capturing and returning large amounts of imagery.

Each of the four re-entry vehicles on the KH-9 would bring back 5,000 36 gigapixel images.  If somehow compressed to 1 bit per pixel, that would be about 20 terabytes.  These days that's about 10 hard disks, and would take about three months to downlink at a constant 25 megabits per second.  Since they were returning these re-entry vehicles every few months, a modern downlink is only barely adequate.  It has only been in the last 10 years or so that disk drive capacities have become high enough to fit the data into the payload of one of those re-entry vehicles -- 30 years after the KH-9 was originally deployed.
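
The data-volume arithmetic, taking the 1 bit/pixel compression at face value:

    frames = 5000                # images per re-entry vehicle
    bits = frames * 36e9         # 36 Gpix each at 1 bit/pixel
    print(bits / 8 / 1e12)       # ~22 terabytes per capsule
    print(bits / 25e6 / 86400)   # ~83 days at a constant 25 Mbit/s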

In 1971, the area of film actively integrating photons in the KH-9 was 155 mm x 15 mm.  The largest, fastest TDI CCD sensors commercially available in 2013 are 64 mm x 1.3 mm.  The pixels on these are 5.2 microns rather than 3.7 as on film.  The smaller integrating length (1.3 mm versus 22 mm) gives a maximum exposure time of  240 microseconds, which is smaller than the 2 milliseconds we would prefer.  155 mm x 10 mm CCDs with 3.7 micron pixels are not commercially available, but could probably be custom made.

Another issue would be the readout rate.  A fast TDI sensor in 2013 reads out lines at 90 kHz.  The KH-9 was exposing film at the slit at 5.5 meters/second, which corresponds to a line readout rate of 1.5 MHz.  This could have been achieved with a custom-built TDI sensor maybe 5 years ago.  It would require 1300 ADCs running at 45 MHz, which would burn a lot of power.  This might be possible with interesting packaging.
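
Checking the line rate and ADC count, treating the 3.7 micron film grain as a pixel pitch:

    film_v = 5.5                 # m/s KH-9 film speed at the slit
    pitch = 3.7e-6               # m, treating film grain like a pixel
    line_rate = film_v / pitch   # ~1.5 MHz line rate
    cols = 0.155 / pitch         # ~42,000 pixels across the 155 mm width
    print(cols * line_rate / 45e6)   # ~1,400 ADCs at 45 MHz each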

The capture rate of the KH-9 was so enormous it completely overwhelmed the ability of the photointerpreters at the CIA to examine the photographs. It's only recently that computers have gotten large enough to store and process imagery at this volume, and it's not at all clear that anyone has yet developed algorithms to find the "unknown unknowns" in a bunch of raw imagery.  I think they call this "uncued image search".

To my knowledge the U.S. never again fielded a spysat with the ability to survey major portions of the earth at high resolution.  Later Keyhole satellites appear to have concentrated more on taking valuable single shots at higher resolution (7 cm GSD), and on having the orbital maneuverability to get those shots.  I think the intelligence folks lost interest in survey satellites when it became clear that they couldn't take advantage of the comprehensive coverage which was their primary feature.  It's kind of ironic that the very problem that Itek was founded to solve (managing the huge volume of survey photography) ended up being a major reason why satellites with survey capacity made possible by Itek's cameras faded away.  It's fascinating for me to see what has become of this problem.

Brian McClendon is on record as saying that Google has 5 million miles and 20 petabytes of Street View imagery.  That's the processed imagery, not the raw take.  The KH-9 raw take that overwhelmed the CIA photointerpreter capacity 30 years ago was less than 1% of what Google Street View shot.  Unlike the KH-9 imagery, most of which I suspect has never been looked at, every one of the Street View panoramas has been seen by a real user.  And Google is hardly the only organization using Big Data like this.  The consumption problem that the CIA and NRO never solved has been utterly crushed by organizations with entirely different goals.  (Granted, uncued search remains unsolved.)

Now that Big Data has caught up with the photo throughput of digital satellites, it's fun to think about what could be built with modern technologies.  But that's probably best saved for another post.

Thursday, February 17, 2011

Ouch

Ed Lu gave a talk at Google today on his B612 foundation.

He mentioned that the asteroid that caused the K-T extinction was probably 10 miles in diameter, um hummm.... which meant that the top had not yet entered the bulk of the atmosphere when the bottom hit the ocean. That image really got me.

The speed of sound through granite is 5950 m/s, which is substantially less than the speed of an incoming asteroid. Things in low earth orbit go at about 7800 m/s, and Ed said incoming asteroids are around 3-4 times that. So that means that when the asteroid smacks into the earth, there is a really good chance that the back of the asteroid will hit the earth before the shock wave gets to it -- it'll punch all the way into the Earth surface. Koeberl and MacLeod have a nice table here which shows a granite on ice impact needs only 6 km/s vertical velocity to punch all the way in (they neglected water as a target material, an odd oversight since the majority of the earth is covered in water, more or less a solid at these velocities). If the incoming velocity is 25 km/s, which is on the low side of what Ed suggested, then anything striking within 76 degrees of vertical is going all the way in. It seems to me that most impacts would be like that.
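
The impact-angle claim is just trigonometry on the vertical velocity component, using the 6 km/s granite-on-ice threshold from that table:

    import math

    v = 25.0         # km/s incoming speed
    v_needed = 6.0   # km/s of vertical velocity to punch all the way in
    print(math.degrees(math.acos(v_needed / v)))   # ~76 degrees off vertical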

So after the impact, most of the energy is added to stuff below the ground surface. That's 1e25 joules for the K-T asteroid. Enough to melt 5e18 kg of rock, which is 100 times as much mass as the asteroid itself. Figure a small fraction of that will vaporize and the whole mess goes suborbital.

For the next hour you have thousands of trillions of tons of glowing molten lava raining down on everything. Everything that can burn (6e14 kg of carbon in the world's biomass) does so, promptly.

And this asteroid impact thing has actually happened, many times. As Ed says, it's like pushing Ctrl-Alt-Del on the whole world.

Side notes: There is 1e18 kg of oxygen in the atmosphere, far more than necessary to burn every scrap of biomass on earth. The story I read was that the oxygen in the atmosphere came from plants. If so, there must be a lot of carbon buried somewhere: 8.6e14 kg of known coal reserves are less than 1%.

Another interesting point, vis-a-vis ocean fertilization: there is about as much carbon in the atmosphere as in the world's biomass. We'd have to boost the productivity of 1/10 of the world's ocean by a factor of 8, from existing productivity (125 gC/m^2/year), to fix the CO2 problem in the atmosphere in 15 years. Ocean CO2 would take a century or more. That productivity boost is like converting that much ocean into a coral reef! This seems like a lot to me.

Thursday, January 01, 2009

5 Ways to Die During Reentry

If you haven't already seen it, the Columbia Crew Survival Investigation Report.

During reentry, there is a 10 minute long window of maximum heating.  They almost made it through all 10 minutes.  Right at the end they lost their hydraulics.  Makes me wonder if they could have flown the orbiter at a funny incoming angle to spare the load on the left wing.  Maybe they wouldn't have gotten Columbia onto the ground, but if it had broken up five minutes later things might have gone a bit better.

There were 40 seconds after loss of control during which the Columbia pitched up into something like a flat spin, and the folks inside tried to get their hydraulic systems back.

After that, they had a depressurization that took less than 17 seconds and probably, hopefully knocked everyone unconscious.  Nobody dropped their visors (which would let their suits handle pressurization).  Apparently they were all in "fix the vehicle" mode and not in "survival as long as possible" mode.

After that the cabin separated from the rest of the vehicle, the crew's shoulder and other restraints mostly didn't work, and they got thrashed to death: fatal trauma to their heads from the insides of their helmets.  Owww.

From my reading, had they dropped their visors, gone to suit oxygen, and braced, several of the crew could have made it through both depressurization and cabin separation.

But then the cabin blew apart and they were in their suits in a Mach 15 airstream.  I didn't actually read this anywhere, but it sounds like most of the suits came off before they hit the ground.

Side note for camera geeks: notice how crappy the home video shots of the breakup look.  Then look at the Apache Helicopter shots of the same thing, especially when it zooms in.  That chopper has some nice telescopes!

Friday, December 12, 2008

Prediction

That white paper got me thinking: what if the government made a bunch of other sensible decisions?
  • They might shut down Yucca Mountain, and require that all nuclear waste be stored on the site of the reactor for 300 years. Nah, won't happen. [Update: They did it!]
  • They might just have NASA cancel Ares-I and Ares-V, and leave it to SpaceX to provide a launcher. This might actually happen. All those folks in Florida and Utah that used to work for NASA contractors? Learn to build windmills. Some of you can learn to build Dragons and Falcons. [Update: Holy crap! They did it!]
  • They might require all air conditioners and heat pumps to have short-term demand management controls. As the newer air conditioners got deployed, we'd have a lot less need for online throttled-down combustion gas turbines to back up all these new wind farms. I've not seen any rumblings of this yet.
  • They might even standardize form factors for rechargeable batteries... [Update: Um, they sort of did it! (Europe has standardized cellphone recharging plugs)]

Monday, June 02, 2008

Discovery Launch


I just got back from watching the Discovery launch. My boss, Ed Lu (former 3-time astronaut, second from left), hosted us, which really made the experience for me because he was able to introduce us to lots of folks. Every time we walked into a restaurant, and every 5 minutes while we were at Kennedy Space Center, someone would smile and come over to talk with Ed. NASA doesn't pay well and most folks don't get to try wacky things like we do at Google, but they seem to have great interpersonal relationships. It's heartwarming to see.



On launch day, we were 3 miles from the pad at the media site. This is as close as you can get. We had a lot of waiting around to do. Here is a cherry spitting contest.



I know there is a great deal of speculation out there about whether hacking on camera hardware at Google makes one a babe magnet. While such questions are only academic for me personally, I can tell you that getting out in the midst of a bunch of media types with some very customized photographic hardware attracts all sorts of attention. I don't actually know who this person is but I think we can all agree she's gorgeous, and she was very interested in the camera hardware and what Google was doing with it.



From our vantage point 3 miles away, the shuttle stack was just a little bigger than the full moon, which meant that the flame coming out the back was about that size too. There have been some comparisons to the shuttle exhaust being as bright as day....

Let me put that myth to rest. After two years of designing outdoor cameras, I can tell you that just about nothing is as bright as the sun. From our vantage point it had more angular size than the sun -- maybe 400 feet long by 100 feet wide, viewed from 3 miles, is 1.5 by 0.5 degrees.  The sun is 0.5 degrees across.  But the Shuttle plume is not as hot as the sun -- 2500 K at most, compared to 6000 K for the sun.  Radiance increases as the 4th power of temperature, so the sun is roughly 33x brighter per unit of apparent area; since the plume covers about three times the sun's solid angle, the sun's delivered power per square meter is something like 11x larger.  Furthermore, most of the light coming from the Shuttle is in the deep infrared where you can't see it, compared to the Sun's peak right at yellow.  So my guess is that the shuttle was lighting us up with about 9,000 lux of illumination.  That's twice as bright as an operating room, and way brighter than standard office lighting (400 lux).  But it's just nothing like the 100,000 lux that you get outside in bright sunlight.  Nobody's going to get a suntan exposing themselves to the shuttle.  (Yes, the shuttle flame reflects off the exhaust plume, but the sun reflects off clouds, which are much bigger, so there is no relative gain there.)
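
The estimate in numbers; the temperatures and the 3x solid-angle ratio are eyeballed, as above:

    t_sun, t_plume = 6000.0, 2500.0
    radiance = (t_sun / t_plume) ** 4   # ~33x, by Stefan-Boltzmann
    ratio = radiance / 3.0              # plume covers ~3x the sun's solid angle
    print(100000.0 / ratio)             # ~9,000 lux under the plume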

Anyway, back to the people we got to meet. Here we are at lunch in the KSC cafeteria, the day before the launch. That guy two to my right is... named at the bottom of the blog. Have a guess. He had a really neat retro electronic watch and talked about how much he likes his Segway. Picture was shot by Jim Dutton, one of the F-22 test pilots who is now an unflown astronaut.


Here's a terrible picture of Scott Horowitz (former #2 at NASA, the guy who set the direction for the Ares rockets and Orion capsule) talking with Ed. The two were talking about their airplanes, a subject that gets both of them fairly animated ("I love my airplane. It's trying to kill me.")  Sadly, Ed's plane was destroyed by Hurricane Gustav (while in a supposedly hurricane-proof hangar) later that year.

Sorry about the quality; it was incredibly crowded, and Ed and Scott weren't posing. This was on the day of the launch. Scott came out and looked at our Street View vehicle, then narrated the launch for us. Scott is a former 4-time astronaut and has a great deadpan delivery ("okay we just burned off a million pounds of propellant"); he's probably done it a hundred times.

Here's Mike Foale, whom Ed has closed the hatch on twice (that means Mike was in the crew after Ed at the ISS twice).


I enjoyed meeting the people and looking at the hardware quite a bit more than the spectacle of the actual launch itself. Basically, the Shuttle makes a big white cloud, climbs out, loud noises ensue, and within two minutes you can just make out the SRB separation with your unaided eyes, and it's gone. The Indy 500, for instance, is louder, and more interesting because there are always going to be crashes and various anomalies, which are not usually injurious and therefore lots of fun for the crowd. After meeting all those competent people who are working so hard to thread this finicky beast through a loophole in Murphy's law, I was just praying the thing wouldn't break on the way up.


P.S. That's Steve Wozniak, cofounder of Apple Computer.

Tuesday, December 11, 2007

ISS does not smell like old feet

I work with Ed Lu, a former astronaut who spent 6 months on the ISS without taking a shower. I asked the obvious question: didn't you and everything else just stink?

No. Ed says that the air conditioning/purification system was ridiculously good, so much so that the only time you ever smelled anything was when you opened a food packet. Even then, the smell was whisked away pretty quickly.

I asked if there were problems with vapor from breathing condensing all over the interior of the spacecraft walls. Apparently not. The thing has hot spots as well as cold spots, and heat pipes to balance it all out, and lots of insulation over that. Apparently stuff doesn't freeze. Given that the thing is cold-soaked at sub-liquid-nitrogen temps for 45 of every 90 minutes, I'm amazed. I was expecting a story of two-inch-thick ice sheets on the interior walls.

Tuesday, October 02, 2007

More Lunar Hopper

Here's a specific mission mass budget:

The goal on the lunar surface is to deploy three HDTV cameras with motorized zoom, pan, and tilt. The cameras shoot stills or video, record to flash, and then send their bits to the main transceiver over an 802.11b link. The radio links require line-of-sight and fairly close range, less than 1 km. The camera and radio are powered-down almost all the time, and the onboard battery has enough juice for perhaps five minutes of camera operation and maybe 20 minutes of radio time.

Each camera sits on a little post with three legs and an anchor that secure it to the lunar surface. The anchor is explosively shot a foot or so into the lunar dust, and then a spring-wound mechanism tensions the hold-down string. The hold-down string is to keep the rocket plume from blowing the post over when the lander jumps.

The mission is to land somewhere with a good view of the surrounding terrain, deploy one camera, look around a bit, upload pictures/video, and let mission control find somewhere interesting to hop, then jump there and deploy another camera. Then do that again. Then do a third jump, after which we just use the camera on the jumper. The idea is that the first and later second cameras can get video of the jumper taking off and landing, then send that video back to the jumper, which sends it to Earth.

The camera weight with zoom and pan/tilt sets the mission weight. I don't know anything about spaceflight-qualified hardware, but I've looked at the MSSS web site like many of you. A little Googling around makes it look like pan/tilt heads are pretty heavy, but these are designed for Earth weather and Earth gravity.

HD video/still camera             500 g    4 watts
Zoom lens                         650 g    0.5 watts
Pan/tilt head                     500 g    0.5 watts
5-foot post and three legs        400 g
Explosive anchor and spring reel  500 g
Battery                           200 g
Radio/computer                    250 g
Total                             3000 g


Two of these, plus a pan/tilt on the lander are going to be about 8 kg. My guess is that the lander's radio link will be about 4 kg, and the dry mass of the vehicle necessary to land all this will be another 18 kg for a total of 30 kg.

Descent from lunar orbit, landing, and two more hops will take 2000 m/s delta-V. If we're using an N2O4/UDMH hypergolic motor with 2500 m/s exhaust velocity, then we'll need 37 kg of propellant when in lunar orbit.
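
That's the rocket equation; here's the check with the numbers above:

    import math

    dv = 2000.0     # m/s: descent, landing, and two hops
    ve = 2500.0     # m/s exhaust velocity, N2O4/UDMH
    m_dry = 30.0    # kg of lander and payload
    print(m_dry * (math.exp(dv / ve) - 1.0))   # ~37 kg of propellant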

I think you want to do the earth exit burn, lunar orbit injection burn, and descent and hopping all with the same motor. You do it with drop tanks, which probably get blown off after the first lunar deorbit burn. This gets the mass in low earth orbit to around 400 kg, which is well inside what a Falcon 1 can lift from Omelek.

Monday, October 01, 2007

Lunar hopper?

So, yeah, I work for Google, but I have no specific knowledge of the Lunar X Prize. I just took a look at their home page, saw the brief summary of the rules, but didn't find a complete draft. It looks like they are going to revise the rules a bit after some feedback.

Here's what I've been thinking: if you want to land on the moon, look around, and then get close to something else and take pictures of it, you don't really need wheels, because you've already got a rocket that knows how to land.

In the moon's soft gravity, it takes fairly small amounts of delta-v to jump a long way. In the moon's 1.62 m/s^2 gravity, you can get 50 seconds of flight time with an 82 m/s delta-v. Use some more delta-v to go sideways, and a bit more to maneuver for the landing, and you could cover 500 meters with about 100 m/s of delta-v.
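
The hop arithmetic, splitting the vertical delta-v evenly between the ascent burn and the landing burn:

    g_moon = 1.62          # m/s^2
    dv_vert = 82.0         # m/s: ~41 burned going up, ~41 to cancel on landing
    print(dv_vert / g_moon)   # ~50 s of ballistic flight time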

Landing from a lunar orbit takes 1600 m/s of delta-v, so adding a few hundred for a few hops is not a huge increase. Yes, it's exponential, but if done with LOX/kerosene or hypergolics, a 2000 m/s total delta-v budget for the lander implies a very reasonable mass ratio.

Why hasn't it been done before? Multiple rocket hops would have been stupid for the manned mission, because the landing was the highest-risk portion of the mission. It's still the highest-risk portion, and the lunar hopper idea stands a very good chance of crashing one of its landings. But that's okay, because after a few hops the thing will run out of gas and be dead anyway.

Thursday, March 22, 2007

SpaceX launch!

Apparently they launched with a known faulty GPS beacon on the first stage, and as a result they did not recover the first stage. That seems like a pretty substantial loss. My guess is that scheduling the range is difficult enough that knowing that they would probably lose the first stage was not enough to scrub the launch. That makes me wonder about their claim that range fees are not going to eat their profit margins.

Also, I'll note that Elon Musk was speculating about a stuck helium thruster causing the second stage wobble. I don't think this would be a roll thruster, since that wouldn't get progressively less controllable. Their roll control is with these cold-gas thrusters, so control authority would be constant relative to the unexpected torque. If they could cancel the torque in the first minute of second-stage burn, they'd be able to cancel it until the helium ran out.

But SpaceX uses axial helium cold-gas thrusters to separate the tanks and settle the propellant in the second stage tanks. If one of those thrusters was stuck on, you could end up with some torque from the stage separation that would explain the nozzle hit during the second-stage separation. I'm not sure exactly how a single stuck axial helium thruster could explain the worsening roll wobble, but some coupling is at least conceivable.

Propellant slosh is an issue for SpaceX because they have a partially pressure-stabilized structure, made with thin skins welded to internal stringers and rings. Their interior surface is a lot cleaner than the isogrid surface of, say, a Delta IV or Atlas V, and so damps sloshing less effectively. Once a little roll wobble gets going, it can really build up over time, especially since very little rotational inertia is removed from the propellant through the drain; you get a bit of the ice-skater effect as the propellant drains and concentrates any roll slosh into the remaining propellant.

The Space Shuttle also has welded stringers and so on in its external tank. I'm not sure how they handle slosh there. I think I've seen cutaway pictures of the tank with extra stuff in there just to settle down the propellants.

One other thing I noticed about this launch. Last year, they added insulating blankets to the exterior of the vehicle which were ripped away during launch. The blankets were added after a scrub due to helium loss in turn due to excessive liquid oxygen boiloff. This year, the blankets were gone. My guess is that Elon had them build a properly sized LOX supply on Omelek, so that they would have no more troubles with boiloff. ("That'll be SIX sh*tloads this time!")

As for 90% of the risk being retired...
  • Orbital insertion accuracy is a big deal, and no data on that.
  • Ejecting the payload without tweaking it is... at least somewhat tough, no data on that. Consider problems with Skylab's insulation and solar panels, and the antenna on Galileo.
  • Getting away from the payload and reentering is risky too.
Still, I'm happy to see progress. I sure hope that the OSD/NRL satellite is easy & cheap to replace.

Tuesday, February 20, 2007

What if....

John Goff has set off a round of "what if..." Now Mr. X is at it.

The big problem with companies like Masten Space Systems, or Armadillo Aerospace, or SpaceX for that matter, is that they all have to design and build rocket engines before they even start on the curve towards cost-effectiveness. The engine is the single biggest piece of design risk on a rocket. It takes the most time from project start to operational status. It is, for the most part, the cost of entry into the space race.

When the Apollo program was shut down and replaced by the Shuttle, the U.S. already had the engine it needed for a big dumb booster: the F-1. This was a fantastic engine: it ran on the right propellants, it was regeneratively cooled (and so reusable), its turbine ran off a gas generator (so there were no fantastic pressures involved), it had excellent reliability (some test engines ran for the equivalent of 20 missions), and it had respectable if not stellar Isp. By the time Apollo 16 launched, the F-1 was a production-quality engine.

The Saturn V was, however, way too big for unmanned payloads. It did not make sense to keep building these monsters. But a rocket powered by a single F-1, with a Centaur upper stage powered by two RL10s, could have put between 30,000 and 45,000 pounds into low earth orbit, about as much as an Atlas V lofts today. In fact, such a rocket would have essentially been an Atlas V, only we would have had it in 1973, and it would have been built with two engines whose development had already been paid for, one of which was already man-rated, and the other of which was the most reliable U.S. engine ever built.

Alternatively, the J-2 could have been used for the upper stage. This would have had the advantage of already being man-rated, but the disadvantage of being overkill (expensive) for putting satellites into GEO.

This rocket would have served as a great booster, for decades. Over those years, the F-1 could have had a long cost reduction and incremental development program, just as the RL10 actually did. Within a few years it could have been man-rated (using two or three RL10s on the upper stage), and carried astronauts up to a space station. Without the enervating Shuttle development, that space station could have been a bit more meaningful. Heck, without the Shuttle development, we could have had a new Hubble every year.

And, over the years, if it had made sense, we could have tried parachute recovery of those first stages and their valuable F-1s. In short, we could have spent the last three decades ratcheting down the cost of LEO boost, while spending a lot more money on stuff like Cassini.

And, of course, the beauty part is that with the F-1 production lines still running, the U.S. would have had the capability to build a few S-ICs. That's the five-F-1 first stage from the Saturn V. In the mid-80s, NASA would have debated the cost of building the Space Station with heavy launch, or with the single-F-1 launch vehicle, and my guess is they would have restarted the J-2X program and gone with bigger ISS segments.

Anyway, didn't happen.

Friday, January 19, 2007

Hello? NASA PR?

I finally found the video shot by the WB-57 chase plane of the STS-114 "Return to Flight" launch. It's fabulous. Keep watching, and near the end you'll see the SRB separation. The plumes from the separation rockets are huge!

NASA article on the development of the WB-57 camera system

Video

I think this is really compelling imagery, but it's grainy and shaky. Still, it's way more interesting than the view of the three white dots of the engines during ascent. A little postproduction work could stabilize the imagery on this video and yield something even more fun to watch.

Saturday, November 18, 2006

A little message to the Chair Force Engineer

Engineering is mostly data plumbing. What matters is that the right people understand the right bits of the problem. All those meetings, all those revisions of all those specs, it's all about exploring the problem and getting the right bits to the right guys and gals. Everything is specialized. For the most part, that means you aren't ever going to get it all. Those windbags probably mean well.

The good news is, you don't have to get most of it. You need to understand who needs to know what, and how to get them what they need. Know what the critical path is. There will be some small part that's your specialty. Make sure you're on top of that. The rest is plumbing. You're smart enough. Relax.

As for the whole nihilistic thing... What drives you? I'll tell you what drives me: what is the coolest thing I can pull off and actually make work? I've long since realized that a team can make stuff happen that I could never do myself. Right now I'll take the trade of having a smaller part in achieving a more audacious goal.

For the next 30 months, concentrate on getting experience, especially practical experience. You're going to be around for a very long time. This idea that you were going to figure out by 30 what you are going to do in life is a (boring) fantasy. One of the best CPU hackers I know started doing serious astrophysics, did CPUs, moved on to chip assembly tools, and now does something completely different. You know what my dad remembers from hacking on accelerators at Berkeley in the 60s? Plumbing. He's awesome with solder and 3/4" copper.

It sounds like you're motivated by Serving America. Very noble. You can probably see lots of ways of projecting force better than we are now. You don't need to implement that right now. You might take a 20 year detour first. Hell, you might try raising kids. That'll throw a wrench in things, I promise you.

Oh, and when you get out, give me a call.

Tuesday, May 02, 2006

Three Stage to Orbit

I'm going to try to convince you [hi Jon!] that a three stage to orbit rocket might be significantly cheaper than a two stage to orbit system lifting the same size payload. My argument centers on the idea that the engines are the expensive portion of the rocket, and that tanks and structure are cheap.

To get into a 200 km orbit, a two stage rocket generally has a first stage which burns for about 150 seconds, followed by a second stage which burns for around 500 seconds. Second stage engines deliver much more impulse per dollar, primarily because the engine burns so much longer, and also because the engine has a larger expansion ratio and so a higher specific impulse.

The second stage burn time cannot be extended greatly beyond 500 seconds when going to low earth orbit, because the initial acceleration becomes so low that the stage falls back to earth before it achieves orbital velocity. Similarly, the first stage burn time cannot be pushed much past 150 seconds (assuming LOX/kerosene, for which the Isp is around 300 seconds) because the rocket must start with significantly more than 1 G of acceleration to get off the ground. This is easy to see: if initial acceleration is exactly 1 G, and the vehicle was all fuel and no structure, then the burn time would be exactly the Isp.
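
That last claim is worth checking in code: thrust is mdot times ve, so a 1-G liftoff fixes mdot at m0*g/ve, and an all-propellant vehicle burns for m0/mdot = ve/g, which is the definition of Isp in seconds:

    g = 9.81
    isp = 300.0                  # s, LOX/kerosene
    ve = isp * g                 # m/s exhaust velocity
    m0 = 1000.0                  # kg, an all-propellant "vehicle"
    mdot = m0 * g / ve           # flow rate implied by a 1-G liftoff thrust
    print(m0 / mdot)             # 300 s: burn time equals the Isp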

The point of a three stage rocket is to significantly extend the first LOX/kerosene stage burn time, and improve the expansion ratio, with the use of an earlier and much cheaper stage. To yield a cost improvement, the entire new stage must cost less than the first stage engines it is replacing. Also, the development cost of the new stage must be low.

Typically, strap-on solid rocket motors are used as cheap extra launch thrust on American or European launchers. LOX/kerosene launchers, such as the SpaceX Falcon series and many of the Russian launchers, are so cheap that such solid rockets and their handling and cleanup are too expensive to help. I suspect that steam rockets (such as the rocket that hurled Evel Knievel's "motorcycle" over the Snake River Canyon) may be cheap enough. Unfortunately, steam rockets have such low specific impulse (about 50 seconds) that any benefit must come from just a few hundred m/s of delta-V (realistically, around 300 m/s). The rest of this post will show that there is a very large benefit to having just a few hundred m/s of initial boost.

One way to measure the benefit of the extra stage zero is to compare the payloads of similarly priced rockets, one with and one without the extra stage. Ideally, the rockets would be separately optimized, but that makes cost comparisons looser. Instead, we will imagine that they have the same engines and differ in smaller details like expansion ratio and tank size. The baseline rocket I have chosen is the SpaceX Falcon 1, which is promised to take 570 kg to a 200 km orbit for $6.7M. My simulations show SpaceX has a significant sandbag in there, and that the rocket might deliver 670 kg at the edge of its performance envelope; it is this larger number that I compare against when calculating the benefit of extra boost.

The first comparison is between a series of identical Falcons, some with a stage zero delivering various burnout velocities, and one without. As shown, a 300 m/s boost allows the three-stage rocket to lift 41% more payload.

Falcon 1 with various starting boost

boost (m/s)    LEO payload (kg)
0              670
100            760
200            866
300            945
400            1009
500            1065


Because a three stage Falcon does not require its LOX/kerosene engines to lift it off the ground, the engines can be configured for a larger expansion ratio and higher vacuum Isp (and thus lower sea-level Isp). Elon Musk at SpaceX has claimed that a vacuum-optimized Merlin would deliver 340 seconds Isp. In the table below, I've assumed 330 seconds, and a drop in sea-level Isp to 190. Now the three-stage 300 m/s boost lifts 61% more mass to orbit. This change carries a reliability penalty: the engines may have to be started in flight to avoid thrust instability due to the exit pressure being much less than sea level pressure. Note that a Falcon with no stage zero boost cannot get off the ground with higher expansion engines.
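
The sea-level number falls out of the pressure-thrust term: ambient air pushing back on the exit area subtracts pa*Ae from the thrust. A rough sketch, where the mass flow and exit area are Merlin-like guesses of mine, not SpaceX figures:

    # Isp_sl = Isp_vac - pa * Ae / (mdot * g): the sea-level penalty of a
    # vacuum-optimized nozzle. Values for mdot and a_exit are assumed.
    g, pa = 9.81, 101325.0        # gravity, sea-level ambient pressure (Pa)
    isp_vac = 330.0               # s, from the text
    mdot, a_exit = 140.0, 1.9     # kg/s, m^2 -- illustrative guesses
    isp_sl = isp_vac - pa * a_exit / (mdot * g)
    print(isp_sl)                 # ~190 s, the sea-level figure assumed above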

Falcon 1 with various starting boost, Isp=330 s

boost (m/s)    LEO payload (kg)
0              dead
100            778
200            946
300            1077
400            1173
500            1255


The full gain of the stage zero is realized by extending the burn time of the first LOX/kerosene stage by extending the tanks. Here I've assumed that the extra tankage and pressurant and so on weighs 4.3% of the extra propellant added. With extended tanks, the three stage 300 m/s boost lifts 138% more mass to orbit.

Falcon 1 with various starting boost, Isp=330 s, extra propellant

boost (m/s)    LEO payload (kg)    extra stage 1 propellant (kg)
0              dead
100            866                 6000
200            1288                12000
300            1595                18000
400            1814                14400
500            2039                24000


You might wonder if these massive payload increases require the use of a bigger and more powerful upper stage. The answer is no: even hefting a 1595 kg payload, the existing Falcon upper stage delta-V drops only from 4973 to 3454 m/s. I did not look at moving propellant from the lower stage to the upper stage, which might improve the payload numbers a bit more.

138% is a large payload increase, but we have to compare this to the cost of simply scaling up an existing rocket. Fortunately, SpaceX has done some of this work for us. Interpolating between their prices, the 138% payload boost might be sold for $1.8M more.

Which leaves the question, can a suitable steam rocket be built to heave the Falcon 1 to 300 m/s straight up for less than $1.8M per shot, and minimal development costs?

Here's what you'd need:
  • Mass fraction = 80% (the Shuttle SRB manages 85% while holding 60 atmospheres of pressure)
  • Mass, fuelled = 62992 kg
  • Mass, empty = 12600 kg
  • Volume = 100 m^3
  • Size = 3.6 m diameter x 10 m long


  • A 12600 kg welded stainless steel tank might cost $250,000, but can probably be recovered and reused with minimal work. I'm going to claim the expensive bit isn't the flight hardware, but rather the ground support, and I'll get to that in another post.
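
As a sanity check on the 300 m/s target, here's the rocket equation applied to the tank numbers above. The Falcon 1 liftoff mass is my guess, not a SpaceX figure:

    import math
    g, isp = 9.81, 50.0                   # steam rocket Isp, from above
    m_full, m_empty = 62992.0, 12600.0    # stage zero masses, from above
    m_falcon = 27000.0                    # assumed fully fueled Falcon 1, kg
    dv = isp * g * math.log((m_full + m_falcon) / (m_empty + m_falcon))
    print(dv)   # ~400 m/s ideal; gravity loss over a short vertical burn
                # (roughly g * t_burn) brings it down near the 300 m/s target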

    Tuesday, March 28, 2006

    Testing New Rockets

    Now that a respectful period for SpaceX's loss has passed, it's time to begin the enthusiastic but uninformed Monday morning quarterbacking. Don't be shy. Here, I'll go first.

    First, I don't see why people are at all sad the thing blew up. It was a test article, and test articles break, generally with tons of instrumentation to tell you how. If you want to feel sad, feel sad for the upper stage testing guy, who won't see a video of his machine airstarting for another six to nine months. And that's what motivates the rest of this post.

    Why does the vehicle have to be tested all-up? Testing all-up is gutsy and smart when your expectation of failure is very low, but wasteful if you are pretty sure you have multiple problems to find and fix. Dwayne Day has pointed out that about half of all new rockets succeed on their first launch, but this launch was not like the first launches of most of those other rockets. Those rockets had very expensive Big Government testing programs behind them. This rocket did not. That's good (because it can be more efficient), but it means that an all-up launch is not likely to yield a lot of testing data for the amount of money and time invested.

    At this point, SpaceX has already paid for plenty of expensive lessons at Kwaj, so incremental flight testing might not seem necessary anymore. I'd do it anyway: they'll need it for the Falcon 9 too, and nobody knows how many gremlins remain to be flushed out of the Falcon 1 design, because they didn't get to test much of the vehicle.

    If each stage carries a full load of LOX but only enough kerosene for about 100 m/s delta-V, we should get a short, locally recoverable hop, without the need for an intercontinental missile range. A set of large floats attached to the tail of the rocket might keep the engine from a complete dunking on splashdown, if that is perceived to be a problem. The lack of a range is a huge deal -- they could have done quite a bit of flight testing in parallel with getting onto Kwaj, and my guess is they're going to need plenty of launch experience before Vandenberg will let them launch there.
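
    For scale, here's how small a 100 m/s hop is, ignoring drag and the brief burn itself:

        g, v = 9.81, 100.0       # burnout velocity from the partial kerosene load
        print(v * v / (2 * g))   # apogee: ~510 m
        print(2 * v / g)         # total up-and-down flight time: ~20 s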

    So the test plan, then, would be to launch two or three rockets perhaps a dozen or more times from a small island in the middle of a small uninhabited lake in the continental United States. Start by launching single stages by themselves (first and second), and then move on to two stage launches. I've read that Wisconsin wants to have a spaceport; perhaps they'd be willing to cough up the necessary permits.

    While short hops are not going to test the vacuum and high-speed portions of the flight, they will test all sorts of other good stuff, much of which was tested for the first time at Kwaj (where it was more expensive):

  • Launch procedures, except those relating to the interface with the range and the recovery vessel. This would include things like discovering how many shitloads of LOX it takes to load the thing up.


  • Launch in high winds, heat, etc, by picking the time of year and using ballast. Granted, this takes time, but expanding the launch envelope is only needed once you are trying to support a high flight rate.


  • First and second stage structure under some but not all flight loads. Again, ballast necessary.


  • Payload environment in the lower atmosphere.


  • The staging event -- shutdown, separation, propellant settling, ignition, interstage separation. This is huge.


  • Fairing separation, unfortunately with an aerodynamic load. The load could be mitigated by blowing the fairing at the top of an almost vertical flight, perhaps by adding enough delay between first stage engine stop and separation that the vehicle coasts to a stop.


  • Some of the first stage recovery hardware, obviously not including the re-entry sequence.


  • Recovery of first stages -- water handling, flotation, etc.


  • The flight termination system in various stages of flight. Note that since flight termination is nondestructive, they can really test the hell out of it by using FTS to shut down half the time.


  • Reuse of first stages. This could be really big, since some lessons that might otherwise be learned might be erased by reentry.


  • Recovery and reuse of second stages. Yes! They are going to try to do this with Falcon 9, so why not start now? Testing this out with moneymaking operational launches sounds cheap, but you'll never get the same instrumentation or number of tests as you can have with low-altitude test flights.


  • I'll stop here, the list is endless. The point is that a lot of confidence can be built doing cheap flight tests away from the U.S. Government's test range.

    Thursday, March 23, 2006

    Communication Lasers

    Jon Goff notes that MIT has developed a new, more sensitive infrared receiver. The article mentions that data rates to space probes might improve as a result. Brian Dunbar wonders why I think pointing (and though I didn't mention it, antennas) are a big problem.

    The location of Mars is known to great accuracy, and the location of the spacecraft is well known also. The direction the spacecraft is pointing is less well known, and the direction that the laser comes out of it is less well known also, and that's the pointing problem in a nutshell.

    Think in solid angles. An omnidirectional antenna spreads its transmitted energy evenly over the whole sphere: 4*pi steradians. So, if you transmit 100 watts across 4 x 10^8 km (max distance to Mars) to a 70 meter dish at Goldstone, the most signal you can receive is (100 watts) * (Goldstone aperture) / (transmit solid angle) / (distance^2). That's 100 * (pi*35^2) / (4*pi) / (4*10^11 m)^2, or about 2 * 10^-19 watts. You can see why the folks who built Goldstone wanted it a long way from the nearest 100 kilowatt AM radio station.

    A 4 meter high gain antenna might transmit 100 watts at 20 GHz. 20 GHz is 1.5 cm wavelength, so the planar beam spread might be about (1.5cm/4m)^2 = 1.4*10^-5 steradians. The Goldstone dish will receive about 9*10^5 times as much power from this dish as from the omnidirectional antenna (still a piddly 1.7*10^-13 watts). The downside is that you have to point the antenna to within the beam spread of 1 part in 270, about 0.2 degrees. That's not too hard.

    Now suppose you transmit with a 100 watt 1.55um infrared laser, with an aperture of 4cm. The beam spread is 100 times smaller than that old high gain antenna (and the solid angle is 10,000 times smaller). The Goldstone dish is now receiving about 1.6*10^-9 watts, which is much better, but you'll have to orient the spacecraft that much more accurately. Which means propellant sloshing around in your tanks will knock the beam off. Sunlight pressure will knock it off. Worse still, differential heating of your optics will cause the beam to steer relative to the spacecraft. The fact that the spacecraft is in free-fall means that the structure between your star sensors and your laser has changed shape somewhat since you calibrated it on the ground before launch. All of which means when your computer thinks it's pointing the beam at Earth, it's actually illuminating Mare Imbrium, and Goldstone is getting nothing.

    Except the trouble is that Goldstone can't handle the infrared radiation your laser produces, so you'll need something more like an infrared telescope, which might have a 2 meter aperture instead of Goldstone's 70 meter aperture, and so you've lost three of your four orders of magnitude improvement.
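
    Here's the whole solid-angle budget in one place, a sketch using only the numbers above (the 2 m telescope stands in for Goldstone in the infrared case):

        import math
        P, d = 100.0, 4e11                     # transmit watts; max Earth-Mars, m
        a_70m = math.pi * 35.0**2              # Goldstone aperture, m^2
        a_2m = math.pi * 1.0**2                # infrared telescope aperture, m^2
        omni = P * a_70m / (4 * math.pi * d**2)            # ~2e-19 W
        dish = P * a_70m / ((0.015 / 4.0)**2 * d**2)       # ~1.7e-13 W
        laser = P * a_70m / ((1.55e-6 / 0.04)**2 * d**2)   # ~1.6e-9 W, if only
                                                           # Goldstone saw infrared
        laser_2m = laser * a_2m / a_70m                    # ~1.3e-12 W
        print(omni, dish, laser, laser_2m)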

    Now if someone shows me a spacecraft with a multi-meter telescope used to transmit infrared to Earth, I'll be seriously impressed. Courtesy of the fiber optic revolution, we now have an awesome amount of experience with high-data-rate micron (i.e. infrared) lasers. I can imagine a satellite's telescope where the same mirror is used to transmit to and also take pictures of Earth. The pictures taken are used for pointing calibration, and now the stable optical structure is just the picture sensor and laser transmit head, and doesn't include the mirror at all. As a side benefit, the same telescope and imaging system can be used for very nice pictures of the probe target, i.e. Mars, although not at the same time, of course. You might even be able to do high-precision lidar (radar with lasers) surface terrain measurement.

    Wednesday, March 22, 2006

    Crackpots and Rocket Science

    There's been an uptick in talk of space elevators. Here's Rand Simberg going at it. I get annoyed when I read this stuff, and lately I've been trying to figure out what it is about space elevators that I find so alarming. Unfortunately I have figured it out and don't like the answer.

    I'll get the tedious bit out of the way first.

  • Space elevators from anywhere in the Earth's atmosphere are not going to be built for a very long time, certainly not in my lifetime, probably not ever. In short, they require engineering miracles (cheap large scale carbon nanotubes and megawatt lasers), and they do not have realistic return-on-investment (a proposed $5 billion elevator would lift one 8-ton cargo per week. 5% interest and 5% maintenance is about $10 million per week, or $625 per pound lifted, and does not include the cost of actually lifting the cargo).


  • And, should the above engineering miracles occur, they would enable other ways of getting to space that would be better than an elevator. You could, for instance, build a fully reusable single-stage-to-orbit rocket from large-scale carbon nanotubes that would certainly be cheaper than an elevator. All sorts of things from la-la land are possible if you make unreasonable assumptions. It's sort of like trying to figure out if a Tyrannosaurus Rex could win a fight with King Kong....


  • Back to the bit that alarms me.

    Much of the breathless discussion of elevators is conducted by the same folks who discuss something quite important to me: cheap rocket launches. These people are clearly unable to sanely evaluate engineering propositions. In short, they are dreamers or crackpots.

    (An aside: researchers who are developing carbon nanotube materials are most definitely not crackpots. That's R&D, which is a great thing. It's common for folks working on new materials to suggest outlandish uses. That's fun and harmless so long as they concentrate on their day job, which is figuring out how to make the material in the first place. CNT materials, if developed, are likely to be as popular as carbon fiber is today, and will find all sorts of good uses.)

    Anyway, here's the bit I don't like at all: how is someone who does not know a thing about engineering (my mom, for instance) supposed to tell the difference between me and one of the aforementioned crackpots? I'm working on an upcoming post which will suggest that hot water first stages and a little aerodynamic lift could cut LEO launch prices by a factor of about 2. Like the aforementioned folks, I don't work in this industry, and am unlikely to. My suggestions are unlikely to be picked up by others in the industry. Why am I burning my valuable time on this stuff?

    The answer is that I find the engineering entertaining, and I post the bits that I do because I think they might be entertaining to a small group of people who I don't bump into day-to-day. Part of the entertainment is the thought that if I noticed something really useful, I'd act on it. But it's just a thought. I like to think I have a reasonably good sense of the difficulty of making technical progress in a few fields, this one included. Frankly, I decline to make the big investment here (i.e. change careers) because I think the difficulty is too high. It is rocket science, after all.

    Tuesday, February 14, 2006

    Why Merlin 2?

    SpaceX does not need to design and qualify a third, much bigger, engine in order to become a profitable launch company, or to take over payloads intended for the Shuttle, or to fly people to either ISS or a Bigelow hotel. SpaceX should concentrate on execution, development of parallel staging (Falcon 9S9), and on a regeneratively cooled Merlin. Execution includes things like recovery of first stages and getting to one launch per month, with at least one or two Falcon 9 launches each year.

    In this context, a regenerative Merlin 1 (hereafter referred to as Merlin 1x) may make sense because it may improve safety, reliability, and operational costs, and the cost of those improvements may be realistically amortized over dozens of launches. As the engine cycle is more efficient, a small increase in Isp and perhaps thrust is likely.

    So why Merlin 2?

    Elon keeps saying that Merlin 2 will be the biggest single thrust chamber around, but smaller than the F-1. Presumably, he is implying that it will have less total thrust than existing multiple thrust chamber engines, in particular the RD-180 (2 chambers, 4152 kN). That puts a fairly tight bound on the size. Here's a list of the biggest current engines, by thrust per chamber.

    Engine       Thrust per chamber   Isp (s)     Vehicle
    F-1          7740 kN              265 - 304   Saturn V
    RS-68        3312 kN              365 - 420   Delta IV
    RS-24        2278 kN              363 - 453   Space Shuttle
    RD-180       2076 kN              311 - 338   Atlas V
    RD-171       1976 kN              309 - 337   Zenit
    NK-33        1638 kN              297 - 331   N-1/Kistler
    Vulcain 2    1300 kN              318 - 434   Ariane 5


    Taking Elon at his word, Merlin 2 will have between 3312 and 4152 kN of thrust. Call it 4000 kN, 17% more than the Falcon 9 first stage.

    What can SpaceX do with this engine that cannot be done with the Merlin 1B?

    Improve Falcon 9 LEO lift to 11000 kg. That's not even close to lifting stranded Space Shuttle payloads, which is the largest obvious market in terms of LEO mass in the next four years. I'm not sure what payloads are enabled by a 9300 to 11000 kg capability improvement.

    Cost-reduce the Falcon 9. I can't see how this can pay for development, unless the Falcon 9 is launching monthly by 2010. And a further (fractional) price decrease doesn't seem like it would stimulate the market any more in the near-term than the existing, bold, price statement.

    Improve Falcon 9S9 LEO lift to 30000 kg. The promised Falcon 9S9 lift capacity (24750 kg to LEO) is almost exactly that of the Shuttle (24400 kg to LEO), but the shuttle payload bay supports its payload better. The additional weight of a frame inside the 9S9 fairing might make some Shuttle cargoes too heavy to lift. A Merlin 2 based first stage for the Falcon 9S9 would fix this. The trouble is, a modest increase in thrust from the Merlin 1x would also fix the same problem. And it seems the latter change would have to be less costly than a whole new engine.

    If the Falcon 9S9, improved with either Merlin 1x or Merlin 2s in the first stage, took over a dozen Shuttle ISS launches, I can imagine that would be a high enough flight rate to pay for Merlin 2 development. The trouble is that if I were paying to lift a billion-dollar ISS segment, I'd prefer to go on a machine with engine-out capability at launch. Three Merlin 2s don't give you that capability, and 4 implies some kind of vehicle (and its development cost) other than a Falcon 9S9 with single Merlin 2s on the first stages.

    Build a 100,000+ kg to LEO launcher. This is what you get if you want engine-out capability with Merlin 2s. This kind of capacity is not necessary for exploring the solar system with robot probes, or even for building big orbital telescopes. It makes sense only if you want to send people a long way for a long time. Elon Musk passionately wants to build this rocket, that's why it's called the BFR.

    The only organization with any credibility talking about using such heavy launches is NASA, for use in sending people to the Moon and maybe Mars. The BFR is Elon Musk's statement that he wants to take over the U.S. manned space program's launches. The business case for Merlin 2 and BFR must fundamentally rely on the U.S. government privatizing a critical, and the most public, portion of the manned space program. A program which from its outset has been about national pride.

    A more likely scenario is that NASA will spend billions developing its own HLLV in competition with SpaceX, in the process abandoning the Space Station and strangling SpaceX, and will end up being able to afford just two or three launches to the Moon before abandoning VSE for the next thing. The history of heavy launchers is not reassuring. The Saturn V (118,000 kg to LEO, $2.2B per launch in 2004 dollars) was launched 13 times. Energia (85,000 kg to LEO, $1.4B per launch) was launched twice.

    If all this sounds dire, a more reassuring comparison can be made by considering what SpaceX intends the Merlin 2-based vehicle to cost. Mr. Musk has stated several times that the point is to get costs below $1000/kg. A 100,000-kg-to-LEO vehicle priced at $500/kg would be $50M per launch. The history of $50M launchers is much more attractive. The various Delta incarnations (~1300 kg to LEO, ~$50M per launch) were launched hundreds of times. Soyuz (7200 kg to LEO, $45M per launch) has been launched 714 times.

    A big rocket at such prices would clamp Falcon 9S9 prices to $30M and also reduce the 9S9 flight rate, at which point the parallel-staged rocket might cost too much to fly profitably. So SpaceX faces a fork in the road: develop either the Merlin 2 or parallel staging, but not both. Parallel staging will cost less to develop, implement, and support, and the BFR will cost more but enable people to get out of low earth orbit, and perhaps convert the VSE from a boondoggle into a success.

    And thus we get to the vision thing. Elon has at various times stated that SpaceX is out to make money, and at other times stated that the goal of SpaceX is to enable the colonization of space. Let me point out that you can make money first, and enable colonization second, but not the other way around.

    Wednesday, February 01, 2006

    Catapult Gain

    The Chair Force Engineer doesn't think much of a catapult start for rockets. His point is that if your catapult gets the rocket going fast horizontally, it just makes the max Q problem and drag losses worse.

    As the rocket accelerates, the dynamic pressure on the front of the craft grows, until the rocket gains enough altitude and vertical velocity that the atmospheric density falls faster than the velocity rises. The maximum pressure experienced is max Q. Max Q is a problem because that drag turns the rocket's forward velocity into heat, which the structure has to deal with somehow, usually with a thin layer of ablatives on the nose.
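
    A toy model shows the shape of the problem: hold the acceleration constant at 2 G straight up through an exponential atmosphere, and watch the dynamic pressure rise and then collapse. All numbers here are round illustrative guesses:

        import math
        rho0, scale_h = 1.225, 8500.0   # sea-level density kg/m^3, scale height m
        a = 19.6                        # assumed constant 2 G vertical acceleration
        def q(t):                       # dynamic pressure 0.5 * rho(h) * v^2
            v, h = a * t, 0.5 * a * t * t
            return 0.5 * rho0 * math.exp(-h / scale_h) * v * v
        q_max, t_max = max((q(t), t) for t in range(1, 120))
        print(t_max, a * t_max, q_max)  # peak near t ~ 29 s, v ~ 580 m/s, ~75 kPa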

    A study done at UC Davis, "A Study of Air Launch Methods for RLVs," looked at dropping the rocket from an airplane, as is done for the Pegasus and SpaceShipOne rockets. They found that the high starting altitude helped, as did releasing the rocket with some significant vertical velocity component.

    The thing that air launch and (vertical) catapults have in common is they both allow the rocket to delay fighting gravity.

    As I pointed out in an earlier post, a rocket coming vertically off a launch pad is losing nearly all of its delta-V to gravity losses. Later on in the flight, once a good bit of propellant has been burned off and the acceleration has improved, a much smaller portion of the delta-V is lost to gravity. Note that the total vertical impulse required is a function of the time spent getting to orbital velocity, and is insensitive to the actual flight path.

    The situation is much different for a rocket firing horizontally. Delta-V expended horizontally adds to the rocket's final velocity regardless of what the current acceleration is. Unfortunately, horizontal velocity added early in the flight adds a lot of drag loss.

    So to minimize gravity losses, the rocket should start out firing at least somewhat horizontally, losing vertical velocity, and only later recover that vertical velocity. There is some limit to this pattern, since too much horizontal velocity too low in the atmosphere will exacerbate drag losses, and since we don't want the rocket to smack into the ground. Starting at high altitude helps, as does starting with some vertical velocity.

    This leads to a concept I'll call "catapult gain", because I haven't read about it and don't know what it's really called. If I start the rocket off with a couple hundred m/s of vertical velocity, I increase the velocity at first stage burnout by more than the initial velocity boost, for two reasons. First, the gravity losses are lower, since the rocket can postpone some of its vertical impulse to when it is more efficient. Second, the first stage rocket can start out with more gas in the tanks, burn longer, and deliver more delta-V, because it doesn't have to have positive vertical acceleration right from the start.

    Catapult gain is limited to the first few hundred m/s of velocity. The first effect is limited because we're reducing gravity losses and increasing drag losses. There is only so much gravity loss to be mined, and drag losses ratchet up quickly. The second effect is limited for the same reason that upper stage minimum accelerations are limited: eventually the extra delta-V is stretched over so much time that the gravity losses start to overcome all the extra delta-V. More starting velocity is always good, of course, but the point is that it reverts to a gain of one. To give a rough idea of catapult gain, some spreadsheet calculations indicate that a 250 m/s jump reduces the upper stage delta-V requirements of a SpaceX Falcon 9 by 500 m/s, for a catapult gain of two. Lest that seem small, note that it would allow the dead weight of the upper stage to increase by around 18%, and I'd guess the payload would increase by at least twice that.
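
    Here's a crude, drag-free version of that spreadsheet for a purely vertical first stage burn, with made-up Falcon-9-ish numbers. Burnout speed is v0 + ve*ln(m0/mf) - g*t_burn, and the catapult start lets you fill the tanks right up to a thrust-to-weight ratio of one:

        import math
        g, isp, thrust = 9.81, 300.0, 4.0e6   # all assumed, roughly Falcon-9-ish
        ve, mdot = isp * g, thrust / (isp * g)
        m_dry = 20e3                          # assumed burnout mass, kg
        def burnout(v0, m_start):             # vertical ascent, no drag
            t_burn = (m_start - m_dry) / mdot
            return v0 + ve * math.log(m_start / m_dry) - g * t_burn
        base = burnout(0.0, 300e3)            # must lift off with T/W > 1
        jumped = burnout(250.0, thrust / g)   # catapulted, tanks filled to T/W = 1
        print(jumped - base)                  # ~370 m/s from a 250 m/s jump

    Even this crude model shows a gain of about 1.5; the fuller trajectory bookkeeping behind the spreadsheet number above does somewhat better.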

    So now that I've established a motivation for a vertical catapult, let me suggest one: a steam rocket. This is a very big bottle of very hot water at very high pressure, which partially flashes to steam as it exits. It has really crappy specific impulse (about 50 seconds), but can produce really large amounts of thrust very cheaply. For a Falcon 9 stage 0, it would be an 80 tonne steel tank (HY 100, safety factor 1.5) holding 400 tonnes of water, starting at 300 degrees C and 86 bar. The thing would produce around 26 MN of thrust for about 7.5 seconds, boosting the Falcon to 280 m/s. A 25 meter pipe, inserted up the throat of the nozzle, would add another 40 m/s by acting as the piston in a cylinder, allowing the Falcon to react against the Earth instead of mere propellants.
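
    Checking those numbers with the rocket equation (the Falcon 9 liftoff mass here is my guess, not a SpaceX figure):

        import math
        g, isp, thrust = 9.81, 50.0, 26e6             # from the text
        m_water, m_tank, m_f9 = 400e3, 80e3, 290e3    # kg; Falcon 9 mass assumed
        mdot = thrust / (isp * g)       # ~53,000 kg/s, the figure quoted below
        t_burn = m_water / mdot         # ~7.5 s
        m0 = m_water + m_tank + m_f9
        dv = isp * g * math.log(m0 / (m0 - m_water)) - g * t_burn
        print(mdot, t_burn, dv)         # dv ~ 285 m/s, close to the quoted 280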

    Let me point out how simple this system is. It has no moving parts on the vehicle, no flight-operational valves, no sequencers. The nozzle is bolted down onto a seal while the water in the tank is heated (pumped through an external heat exchanger). Fire the explosive bolts and it goes. There are no gimbals, and no guidance system. The LOX/kerosene first stage is started while on the pad, and fires directly onto the top of the tank, which, being solid steel backed by water, is unaffected. The interstage is a truss which the engines fire through.

    There is no recovery system: the thing sails through the air for about a minute, coasting up to 4 kilometers high, and then comes crashing down into the sea, where it eventually resurfaces, nozzle down, until it's dragged back to shore. If we wanted to be sure it doesn't sink, we could inflate a balloon inside the casing, which would passively inflate as the internal pressure dropped during boost, and would require no pyrotechnics or interface of any kind.

    The exhaust is hot water and steam, has mild overpressure, and requires no water suppression nor cleanup of nasty chemicals afterwards. I will guess that the tank and nozzle can be built for $500k. Fuel costs for heating it up are about $10k, so an extensive program of test-firings with dummy rockets atop would be cheap.

    I recommend tilting the launch pad slightly to ensure that the thing comes down in the ocean and not back onto the pad.

    The thing vents 53 tonnes of propellant per second, several times more than even the Saturn V first stage, and will produce a singular launch spectacle. So long as pictures of parboiled parrots can be avoided, the PR value alone should be immense. As a side benefit it should allow the Falcon 9S9 to put more payload into orbit than a Delta IV Heavy, and maybe enough more than the Shuttle to allow it to lift Shuttle ISS cargoes with an added strongback.

    EDIT NOTE: In an earlier version of this post I claimed a doubling of throw weight. Math error. That's what I get for posting after midnight.