
Wednesday, January 15, 2014

Sensors, Survey and Surveillance from Space

The SkyBox satellites are the first to use area array rather than pushbroom sensors for survey work, but they certainly aren't the first to use area array sensors.  I think the first satellites to do that were the KH-11 surveillance satellites, versions of which are still the principal US optical spysats in use today.  The first KH-11s sported area array sensors of about the same resolution as a standard-definition TV.  The most recent KH-11s probably have a focal plane similar to this tiled 18,000 x 18,000, 10 micron focal plane (shown below; that circle is a foot in diameter).

Optical spysats have two missions; call them surveillance and survey.  When you already know where the thing is, that's surveillance.  Response time matters, but throughput is usually not a big deal.  When you don't know where your thing is, or you don't even know what it is yet, you are doing survey.  Throughput is king in survey work, and if response time matters, you have a problem.  Coast Guard aerial search and rescue, for example, has this problem.  You can read about the difficulties of search at sea in this NY Times article on the rescue of a fisherman last July.

General Schwarzkopf said after the first Gulf War that spysats (he must have been referring to the earlier KH-11s) could not provide useful, timely imagery.  He was comparing single pictures of targets after a hit to the target camera footage of his planes, which gave him continuous video snippets of the target before, during, and after a hit.  These videos were very popular at press conferences and with his upper management.
Satellites are excellent for getting access to denied airspace -- there is no other way to take pictures inside China and Russia.  But in Iraq, Afghanistan, and Pakistan they are completely outclassed by airplanes and now drones with long-range optics (like the DB-110 reconnaissance pod, which I still haven't written up).  In a 20-year battle against irrelevance, I suspect that getting near-real-time imagery, especially video, from orbit has been a major NRO focus.  I'm sure the Block IV KH-11s launched in 2005, 2011, and most recently in August 2013 can all downlink their imagery in real time through the SDS satellite constellation.  However, the second part of real-time is getting a satellite into position to take the picture quickly.  The three KH-11s in orbit often cannot get to a surprise target in less than 30 minutes, and cannot provide continuous video coverage.  Guaranteeing coverage within 30 minutes would require dozens of satellites.  Continuous coverage, if done with satellites 300 km up, would require around 1000.  The KH-11 series is expensive (they refer to them as "battleships") and the US will not be launching a big constellation of these.

The Next Generation Electro-Optical program, which started in 2008 or so, is probably looking at getting the cost of the satellites down into the sub-$500m range, while still using 2+ meter telescopes, so that a dozen or two can be launched over a decade within a budget that NRO can actually sell to Congress.  My guess is they won't launch one of these until 2018.  In the meantime, SkyBox Imaging and ExactEarth, who are both launching constellations of small imaging sats, will be trying to sell much lower-resolution images that can be had more quickly.  These civilian operators have 50-60 cm apertures and higher orbits, and so can't deliver the resolution that NRO and NGA customers are used to, and they can't or don't use the SDS or TDRS constellations to relay data in real time.  (SkyBox can do video, but then downlinks it 10 to 90 minutes later when they overfly one of their ground stations.)

The second spysat mission is survey: looking for a needle in a haystack.  From 1972 to 1986 we had this in the form of the KH-9 Hexagon, which shot the entire Soviet Union every 2 to 4 months at 1 to 2 foot resolution.  The intel community at the time could not search or inspect all that imagery, but the survey imagery was great once they'd found something surprising.  Surprise, a new site for making nuclear weapons!  Survey answers the question: What did it look like during construction?  Or, How many other things like this are there?  Nowadays, Big Data and computer vision have some handle on finding needles in haystacks, but we no longer have the KH-9 or anything like it to supply the survey imagery to search.  We still use the U-2 for aerial survey imagery, but we haven't flown it into denied airspace (e.g. Russia and China) for many decades.

From 1999 to 2005 Boeing ran the Future Imagery Architecture program, which was intended to make a spy satellite that could do radar, survey, and surveillance.  The program took too long and ran way over budget, and was eventually salvaged by cancelling the optical portion and having the team design a synthetic aperture radar satellite, which did get launched.  (Apparently this was the successor to the Lacrosse radar satellite.)

As I wrote, SkyBox does survey with a low-resolution area array.  They would need about 16,000 orbits to cover the entire surface of the earth, which is 2.7 years with one satellite.  I'm sure they can optimize this down a bit by steering left/right when over the ocean.  But this is 70 cm GSD imagery.
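For a rough sense of where a number like 16,000 orbits comes from, here's a back-of-the-envelope sketch.  The ~600 km altitude is my assumption; the swath width is simply whatever the 16,000-orbit figure implies, and the model ignores frame overlap, ocean steering, and lighting constraints:

```python
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0        # mean Earth radius, m

def orbit_period(alt_m):
    """Circular-orbit period from Kepler's third law."""
    a = R_EARTH + alt_m
    return 2 * math.pi * math.sqrt(a**3 / MU)

orbits = 16_000                              # the post's coverage figure
equator = 2 * math.pi * R_EARTH              # ~40,000 km of longitude to sweep
swath = equator / orbits                     # implied swath width per pass
period = orbit_period(600e3)                 # assumed SkySat-class altitude
years = orbits * period / (365.25 * 86400)

print(f"implied swath: {swath/1000:.1f} km")   # ~2.5 km
print(f"full coverage: {years:.1f} years")     # ~2.9 years
```

The crude model lands within about 10% of the 2.7-year figure above.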

Two of the telescopes designed for FIA were donated to NASA in 2012, and the few details that have emerged tell us about late-1990s spy satellites.  From 300 km up, they could deliver 7 cm imagery, and had a (circular) field of view of about 50,000 pixels.  This could have been used with a 48,000 x 16,000 pixel tiled focal plane array.  Using the simple expedient of shooting frames along the line of the ground track, the ground swath would have been 3.2 km wide, and the satellite could have surveyed the entire Earth in about 2.7 years (the matching number is a coincidence: spysats fly at half the altitude, and this telescope has twice the field of view I presumed for SkyBox).

However, to keep up with the ground track pixel velocity, the sensors would have to read out at over 6 frames per second.  That's almost 5 gigapixels per second.  I don't believe area array sensors that big can yet read out that fast with low enough noise.  (The recent Aptina AR1411 reads out at 1.4 gigapixels per second, but it's much smaller, so the column lines have far less capacitance.)
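Those figures follow from the assumptions above (300 km altitude, 7 cm GSD, a 48,000 x 16,000 tiled array, frames stepped along the ground track):

```python
import math

MU = 3.986004418e14
R_EARTH = 6_371_000.0

alt = 300e3                            # assumed spysat altitude, m
gsd = 0.07                             # 7 cm ground sample distance
swath_px, track_px = 48_000, 16_000    # presumed tiled focal plane

v_orbit = math.sqrt(MU / (R_EARTH + alt))
v_ground = v_orbit * R_EARTH / (R_EARTH + alt)   # speed of the ground projection

px_per_s = v_ground / gsd       # ground pixels flown over per second
fps = px_per_s / track_px       # frames per second to keep up with the track
rate = fps * swath_px * track_px

print(f"frame rate: {fps:.1f} fps")       # over 6 fps
print(f"readout: {rate/1e9:.1f} Gpx/s")   # ~5 Gpx/s
```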

The large number is not a result of the specifics of the telescope or sensor design -- it's fundamental to high resolution orbital survey.  It's just the rate at which the satellite flies over ground pixels.  Getting 5 billion tiny analog charge packets to A/D converters every second is hard.  Once there, getting 20 gigabits/second of digital data to the ground is even harder (I don't think it's been done yet either).  I'll defer that discussion to a later post.

Pushbroom sensors are more practical to arrange.
  • The satellite simply stares straight down at the ground.  Attitude corrections are very slow.
  • It's easy to get lots of A/D converters per sensor: you simply add multiple taps to the readout line.
  • It's easy to tile lots of sensors across the focal plane.  You stagger two rows of sensors, so that ground points that fall between the active areas of the first row are imaged by the second row, like this focal plane from ESA Sentinel-2.  Once stitched, the resulting imagery has no seams.


Tiled area array sensors are more difficult, but have the advantage of being able to shoot video, as well as a few long exposures on the night side of the Earth.
  • The image must be held steady while the field of view slides along the ground.  Although this can be done by rotating the whole satellite, survey work requires rapidly stepping the stabilized field of view forward along the ground track, several times a second.  Fast cycling requires a lightweight optical element, usually the secondary mirror, with a fast and super-precise tip/tilt mechanism to track out the motion.  Cycling this element back into position between shots can add vibration to the satellite.
  • While the secondary mirror is moving the image back into position, the pixel photodiodes must not accumulate charge that affects the values read out.  This typically means that either the cycling time can't be used for readout, or (as in the VisionMap A3) the sensor is an interline CCD with two capacitors per pixel, one of which is shielded.  With this choice comes a bunch of minor but annoying problems.
  • In one line time, charge is transferred from the pixels all the way across the array to the readout.  The bit lines can be long and capacitive and add noise.
  • Take another look at the first pic in this blog post, and note the seams between the active arrays.  These are annoying.  It's possible to take them out with clever combinations of sparse arrays and stepping patterns.
Lenses generally resolve a circular field of view, and pushbroom sensors take a rectangular stripe down the middle.  It's possible to put an area array sensor in the leftover upper or lower crescent around a pushbroom sensor.  This gives a smaller area sensor, but in the context of a 50,000 pixel diameter focal plane, a "smaller" area sensor might be 10,000 pixels on a side, with 50 times the pixel count of an HD video sensor.  This allows for a 10:1 "digital zoom" for context with no loss of display resolution.

If I were building a government spysat today, I'd want it to do survey work, and I'd make surveillance the secondary mission.  Airplanes and drones are better for most surveillance work.  I'd want to shoot the whole Earth each year, which can be done with three satellites at 300 km altitude.  I'd use a staggered pushbroom array as the primary sensor and a smaller area array for surveillance.

The step-stare approach that SkyBox is using makes sense when a big, fast area array sensor covering the whole field of view can be had at low risk.  Sensors are developing quickly, so this envelope is growing over time, but it's still an order of magnitude away from what large-aperture spysats can do.

Maybe I'm wrong about that order of magnitude.  In 2010 Canon announced a 205 mm square CMOS sensor that supposedly reads out 60 frames per second.  Here it is pictured next to a full-frame 35mm DSLR sensor -- it's slightly bigger than the tiled array at the top of this post.  Canon did not announce the resolution, but they did say the sensor had 100 times the sensitivity of a DSLR, which suggests a pixel size around 35 microns.  That's too big for a spysat focal plane, unless it's specifically for use at night.
No subsequent announcement was made suggesting a purpose for this sensor.  Canon claims it was a technology demonstration, and I believe that (they would not have been allowed to show a production part for a spysat to the press).  Who were they demonstrating that technology to?  Is this the focal plane for a Japanese spysat?

Thursday, December 12, 2013

The SkyBox camera

Christmas (and Christmas shopping) is upon us, and I have a big review coming up, but I just can't help myself...

SkySat-1, from a local startup SkyBox Imaging, was launched on November 21 on a Russian Dnepr rocket, along with 31 other microsatellites and a package bolted to the 3rd stage.  They have a signal, the satellite is alive, and it has seen first light.  Yeehah!

These folks are using area-array sensors.  That's a radical choice, and I'd like to explain why.  For context, I'll start with a rough introduction to the usual way of making imaging satellites.

A traditional visible-band satellite, like the DubaiSat-2 that was launched along with SkySat-1, uses a pushbroom sensor, like this one from DALSA.  It has an array of 16,000 (swath) by 500 (track) pixels.
The "track" pixel direction is divided into multiple regions, which each handle one color, arranged like this:
Pixels in a digital sensor are little photodiodes, each with an attached capacitor that stores the charge accumulated during the exposure.  A CCD is a special kind of circuit that can shift a charge from one pixel's capacitor to the next.  CCDs are read by shifting the contents of the entire array along the track direction, which in this second diagram would be to the right.  As each line is shifted into the readout line, it is very quickly shifted along the swath direction.  At multiple points along the swath there are "taps" where the stored charge is converted into a digital number representing the brightness of the light on that pixel.

A pushbroom CCD is special in that it has a readout line for each color region.  And, a pushbroom CCD is used in a special way.  Rather than expose a steady image on the entire CCD for tens of milliseconds, a moving image is swept across the sensor in the track direction, and in synchrony the pixels are shifted in the same direction.

A pushbroom CCD can sweep out a much larger image than the size of the CCD.  Most photocopiers work this way.  The sensor is often the full width of the page, perhaps 9 inches wide, but just a fraction of an inch long.  To make an 8.5 x 11 inch image, either the page is scanned across the sensor (page feed), or the sensor is scanned across the page (flatbed).

In a satellite like DubaiSat-2, a telescope forms an image of some small portion of the earth on the CCD, and the satellite is flown so that the image sweeps across the CCD in the track direction.
Let's put some numbers on this thing.  If the CCD has 3.5 micron pixels like the DALSA sensor pictured, and the satellite is in an orbit 600 km up, and has a telescope with a focal length of 3 meters, then the pixels, projected back through that telescope to the ground, would be 70 cm on a side.  We call 70 cm the ground sample distance (GSD).  The telescope might have an aperture of 50cm, which is as big as the U.S. Defense Department will allow (although who knows if they can veto a design from Dubai launched on a Russian rocket).  If so, it has a relative aperture of f/6, which will resolve 3.5 micron pixels well with visible light, if diffraction limited.
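The GSD is just a similar-triangles relation between pixel pitch, altitude, and focal length.  A sketch, with the diffraction check assuming green light at 550 nm:

```python
pixel = 3.5e-6      # m, DALSA pixel pitch
alt = 600e3         # m, orbit altitude
focal = 3.0         # m, telescope focal length
aperture = 0.5      # m, assumed maximum allowed aperture

gsd = pixel * alt / focal          # pixel projected onto the ground
f_number = focal / aperture
airy = 2.44 * 550e-9 * f_number    # diffraction-limited Airy disk diameter

print(f"GSD: {gsd*100:.0f} cm")                              # 70 cm
print(f"f/{f_number:.0f}, Airy diameter: {airy*1e6:.1f} um") # ~2 pixels wide
```

The Airy disk spans roughly two pixels, i.e. the optics are well matched to 3.5 micron sampling.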

The satellite is travelling at 7561 m/s in a north-south direction, but its ground projection is moving under it at 6911 m/s, because the ground projection is closer to the center of the earth.  The Earth is also rotating underneath it at 400 m/s at 30 degrees north of the equator.  The combined relative velocity is 6922 m/s.  That's 9,900 pixels per second.  9,900 pixels/second x 16,000 pixel swath = 160 megapixels/second.  The signal chain from the taps in the CCD probably will not run well at this speed, so the sensor will need at least 4 taps per color region to get the analog-to-digital converters running at a more reasonable 40 MHz.  This is not a big problem.
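Those velocities follow from circular-orbit mechanics.  A sketch, assuming a 600 km circular orbit at 30 degrees latitude and treating the rotation term as perpendicular to the ground track:

```python
import math

MU = 3.986004418e14
R_EARTH = 6_371_000.0

alt, lat = 600e3, math.radians(30)
r = R_EARTH + alt
v_orbit = math.sqrt(MU / r)                             # ~7561 m/s
v_ground = v_orbit * R_EARTH / r                        # projected to the ground
v_rot = 2 * math.pi * R_EARTH * math.cos(lat) / 86164   # Earth spin, sidereal day
v_rel = math.hypot(v_ground, v_rot)                     # combined relative speed

line_rate = v_rel / 0.70         # 70 cm pixels -> lines per second
px_rate = line_rate * 16_000     # times the swath width

print(f"relative ground speed: {v_rel:.0f} m/s")   # ~6920 m/s
print(f"pixel rate: {px_rate/1e6:.0f} Mpx/s")      # ~160 Mpx/s in round numbers
```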

A bigger problem is getting enough light.  If the CCD has 128 rows of pixels for one color, then the time for the image to slide across the column will be 13 milliseconds, and that's the effective exposure time.  If you are taking pictures of your kids outdoors in the sun, with a point&shoot with 3.5 micron pixels, 13 ms with an f/6 aperture is plenty of light.  Under a tree that'll still work.  From space, the blue sky (it's nearly the same blue looking both up and down) will be superposed on top of whatever picture we take, and images from shaded areas will get washed out.  More on this later.
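The 13 ms figure is just the time the image takes to slide across the 128-row color band:

```python
rows = 128          # TDI rows in one color region
gsd = 0.70          # m, ground sample distance
v_rel = 6922.0      # m/s, relative ground speed from above

t_exp = rows * gsd / v_rel      # time for a ground point to cross the band
print(f"effective exposure: {t_exp*1000:.0f} ms")   # ~13 ms
```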

Okay, back to SkySat-1.  The Skybox Imaging folks would like to shoot video of things, as well as imagery, and don't want to be dependent on a custom sensor.  So they are using standard area array sensors rather than pushbroom CCDs.

In order to shoot video of a spot on the ground, they have to rotate the satellite at almost 1 degree/second so that the telescope stays pointing at that one point on the ground.  If it flies directly over that spot, it will take about 90 seconds to go from 30 degrees off nadir in one direction to 30 degrees off in the other direction.  In theory, the satellite could shoot imagery this way as well, and that's fine for taking pictures of, ahem, targets.

A good chunk of the satellite imagery business, however, is about very large things, like crops in California's Central Valley.  To shoot something like that, you must cover a lot of area quickly and deal with motion blur, both things that a pushbroom sensor does well.

The image sliding across a pushbroom sensor does so continuously, but the pixel charges get shifted in a more discrete manner to avoid smearing them all together.  As a result, a pushbroom sensor necessarily sees about 1 pixel of motion blur in the track direction.  If SkySat-1 also had 0.7 meter pixels and just stared straight down at the ground, then to have the same motion blur it would need a 93 microsecond exposure.  That is not enough time to make out a signal above the readout noise.

Most satellites use some kind of Cassegrain telescope, which has two mirrors.  It's possible to cancel the motion of the ground during the exposure by tilting the secondary mirror, generally with some kind of piezoelectric actuator.  This technique is used by the VisionMap A3 aerial survey camera.  It seems to me that it's a good match to SkyBox's light problem.  If the sensor is an interline transfer CCD, then it can expose pictures while the secondary mirror stabilizes the image, and cycle the mirror back while the image is read out.  Interline transfer CCDs make this possible because they expose the whole image array at the same time and then, before readout, shift the charges into a second set of shielded capacitors that do not accumulate charge from the photodiodes.

Let's put some numbers on this thing.  They'd want an interline transfer CCD that can store a lot of electrons in each pixel, and read them out fast.  The best thing I can find right now is the KAI-16070, which has 7.4 micron pixels that store up to 44,000 electrons.  They could use a 6 meter focal length F/12 Cassegrain, which would give them 74 cm GSD, and a ground velocity of 9,350 pixels/sec.

The CCD runs at 8 frames per second, so if the satellite stares straight down, the ground will advance 865 m, or 1170 pixels, between frames.  This CCD has a 4888 x 3256 pixel format, so we would expect 64% overlap in the forward direction.  This is plenty to align the frames to one another, but not enough to substantially improve signal-to-noise ratio (with stacking) or dynamic range (with alternating long and short exposures).
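A sketch of the frame-overlap arithmetic.  The pixel size and array format are the KAI-16070's; the 6 m focal length, 600 km altitude, and short-axis-along-track orientation are my assumptions:

```python
pixel = 7.4e-6       # m, KAI-16070 pixel pitch
focal = 6.0          # m, assumed Cassegrain focal length
alt = 600e3          # m, orbit altitude
v_ground = 6922.0    # m/s, relative ground speed from earlier
fps = 8
track_px = 3256      # short axis of the 4888 x 3256 array, along-track

gsd = pixel * alt / focal              # ground sample distance
advance = v_ground / fps               # ground advance per frame, m
advance_px = advance / gsd
overlap = 1 - advance_px / track_px

print(f"GSD: {gsd:.2f} m")                 # 0.74 m
print(f"advance: {advance:.0f} m = {advance_px:.0f} px per frame")
print(f"forward overlap: {overlap:.0%}")   # ~64%
```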

And this, by the way, is the point of this post.  Area array image sensors have seen a huge amount of work in the last 10 years, driven by the competitive and lucrative digital camera market.  16 megapixel interline CCDs with big pixels running at 8 frames per second have only been around for a couple of years at most.  If I ran this analysis with the area arrays of five years ago the numbers would come out junk.

Back to Skybox.  When they want video, they can have the CCD read out a 4 megapixel region of interest at 30 fps.  That is easily big enough to fill an HDTV stream.

They'd want to expose for as long as possible.  I figure a 15 millisecond exposure ought to saturate the KAI-16070 pixels looking at a white paper sheet in full sun.  During that time the secondary mirror would have to tilt through 95 microradians, or about 20 seconds of arc for those of you who think in base-60.  Even this exposure will cause shiny objects like cars to bloom a little; any more and sidewalks and white roofs will saturate.
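A sketch of that tilt budget, using the ground speed and altitude assumed above.  The factor of two is because a mirror deflects the reflected beam by twice its own tilt; the result lands in the same ballpark as the ~95 microradian figure:

```python
import math

v_ground = 6922.0     # m/s, relative ground speed
alt = 600e3           # m, orbit altitude
t_exp = 0.015         # 15 ms exposure

los_sweep = v_ground * t_exp / alt     # line-of-sight angle swept during exposure
mirror_tilt = los_sweep / 2            # mirror moves half the beam deflection

arcsec = math.degrees(mirror_tilt) * 3600
print(f"mirror tilt: {mirror_tilt*1e6:.0f} urad = {arcsec:.0f} arcsec")
```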

To get an idea of how hard it is to shoot things in the shade from orbit, consider that a perfectly white sheet exposed to the whole sky except the sun will be the same brightness as the sky.  A light grey object with 20% albedo shaded from half the sky will be just 10% of the brightness of the sky.  That means the satellite has to see a grey object through a veil 10 times brighter than the object.  If the whole blue sky is 15% as bright as the sun, our light grey object would generate around 660 electrons of signal, swimming in sqrt(7260)=85 electrons of noise.  That's a signal to noise ratio of 7.8:1, which actually sounds pretty good.  It's a little worse than what SLR makers consider minimum acceptable noise (SNR=10:1), but better than what cellphone camera makers consider minimum acceptable noise (SNR=5:1, I think).
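The arithmetic behind that SNR, using the electron counts above (shot noise is the square root of everything collected, veil included):

```python
import math

signal = 660          # electrons from the shaded 20%-albedo object
veil = 10 * signal    # sky haze 10x brighter than the object

total = signal + veil            # 7260 electrons collected in all
noise = math.sqrt(total)         # shot noise on the whole pile
snr = signal / noise

print(f"noise: {noise:.0f} e-, SNR: {snr:.1f}:1")   # ~85 e-, a bit under 8:1
```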

But SNR values can't be directly compared, because you must correct for sharpness.  A camera might have really horrible SNR (like 1:1), but I could make the number better by just blurring out all the high-spatial-frequency components.  The measure of how much scene sharpness is preserved by the camera is the MTF (Modulation Transfer Function).  For reference, SLRs mounted on tripods with top-notch lenses generally have MTFs around 40% at their pixel spatial frequency.

As a rule of thumb, sharpening can double the high-frequency MTF at the cost of halving the SNR.  Fancy denoise algorithms change this tradeoff a bit by making assumptions about what is being looked at.  Typical assumptions are that edges are continuous and that colors don't have as much contrast as intensity.

The atmosphere blurs things quite a bit on the way up, so visible-band satellites typically have around 7-10% MTF, even with nearly perfect optics.  If we do simple sharpening to get an image that looks like 40% MTF (like what we're used to from an SLR), that 20% albedo object in the shade will have SNR of around 2:1.  That's not a lot of signal -- you might see something in the noise, but you'll have to try pretty hard.
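The sharpened-SNR estimate, assuming the 10% end of the satellite MTF range and the halving rule above:

```python
snr_raw = 7.8        # shade SNR from the shot-noise estimate
mtf_sat = 0.10       # satellite system MTF at the pixel frequency
mtf_target = 0.40    # SLR-like sharpness we want to restore

gain = mtf_target / mtf_sat      # high-frequency boost applied by sharpening
snr_sharp = snr_raw / gain       # noise is amplified by the same factor

print(f"sharpened SNR: {snr_sharp:.2f}:1")   # ~2:1
```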

The bottom line is that recent, fast CCDs have made it possible to use area-array instead of pushbroom sensors for survey satellites.  SkyBox Imaging are the first ones to try this idea.  Noise and sharpness will be about as good as simple pushbroom sensors, which is to say that dull objects in full-sky shade won't really be visible, and everything brighter than that will.

[Updated] There are a lot of tricks to make pushbroom sensors work better than what I've presented here.

  • Most importantly, the sensor can have more rows, maybe 1000 instead of 128 for 8 times the sensitivity.  For a simple TDI sensor, that's going to require bigger pixels to store the larger amount of charge that will be accumulated.  But...
  • The sensor can have multiple readouts along each pixel column, e.g. readouts at rows 32, 96, 224, 480, 736, and 992.  The initial readouts give short exposures, which can see sunlit objects without accumulating huge numbers of photons.  Dedicated short exposure rows mean we can use small pixels, which store less charge.  Small pixels enable the use of sensors with more pixels.  Multiple long exposure readouts can be added together once digitized.  Before adding these long exposures, small amounts of diagonal image drift, which would otherwise cause blur, can be compensated with a single pixel or even half-pixel shift.
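The last point, summing multiple long-exposure readouts after compensating drift, can be sketched with arrays.  This is a toy model with a known integer-pixel drift; real systems estimate the drift from attitude data and may apply sub-pixel shifts:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((100, 64))    # toy ground strip

# Three long-exposure readouts of the same strip, each drifted
# diagonally by one more pixel than the last (hypothetical drift).
reads = [np.roll(scene, (k, k), axis=(0, 1)) for k in range(3)]

# Shift each readout back by its known drift before summing:
aligned = [np.roll(r, (-k, -k), axis=(0, 1)) for k, r in enumerate(reads)]
summed = np.sum(aligned, axis=0)

# After compensation the exposures add coherently, with no blur:
assert np.allclose(summed, 3 * scene)
```

Without the realignment step, the same sum would smear every ground point across three diagonal pixels.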

[Updated] I've moved the discussion of whether SkyBox was the first to use area arrays to the next post.