Wednesday, January 15, 2014

Sensors, Survey and Surveillance from Space

The SkyBox satellites are the first to use area array rather than pushbroom sensors for survey work, but they certainly aren't the first to use area array sensors.  I think the first satellites to do that were the KH-11 surveillance satellites, versions of which are still the principal US optical spysats in use today.  The first KH-11s sported area array sensors of about the same resolution as a standard-definition TV.  The most recent KH-11s probably have a focal plane similar to this tiled 18,000 x 18,000, 10 micron focal plane (shown below; that circle is a foot in diameter).
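A quick sanity check on the physical size (the 18,000-pixel and 10-micron figures are from the array above; the rest is arithmetic):

```python
# Physical dimensions of an 18,000 x 18,000 pixel, 10 micron focal plane.
pixels = 18_000
pitch_m = 10e-6                       # 10 micron pixel pitch

side_m = pixels * pitch_m             # 0.18 m per side
diagonal_m = side_m * 2 ** 0.5        # ~0.25 m corner to corner

print(f"side {side_m * 100:.0f} cm, diagonal {diagonal_m * 100:.0f} cm")
# side 18 cm, diagonal 25 cm -- it fits inside that 1-foot (30.5 cm) circle.
```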

Optical spysats have two missions; call them surveillance and survey.  When you already know where the thing is, that's surveillance.  Response time matters, but throughput is usually not a big deal.  When you don't know where your thing is, or you don't even know what it is yet, you are doing survey.  Throughput is king in survey work, and if response time matters, you have a problem.  Coast Guard aerial search and rescue, for example, has this problem.  You can read about the difficulties of search at sea in this NY Times article on the rescue of a fisherman last July.

General Schwarzkopf said after the first Gulf War that spysats (he must have been referring to the earlier KH-11s) could not provide useful, timely imagery.  He was comparing single pictures of targets after a hit to the target camera footage of his planes, which gave him continuous video snippets of the target before, during, and after a hit.  These videos were very popular at press conferences and with his upper management.
Satellites are excellent for getting access to denied airspace -- there is no other way to take pictures inside China and Russia.  But in Iraq, Afghanistan, and Pakistan they are completely outclassed by airplanes and now drones with long-range optics (like the DB-110 reconnaissance pod, which I still haven't written up).  In a 20-year battle against irrelevancy, I suspect that getting near-real-time imagery, especially video, from orbit has been a major NRO focus.  I'm sure the Block IV KH-11s launched in 2005, 2011, and most recently in August 2013 can all do real-time downlinks of their imagery through the SDS satellite constellation.  However, the second half of real-time is getting a satellite into position to take the picture quickly.  The three KH-11s in orbit often cannot get to a surprise target in less than 30 minutes, and cannot provide continuous video coverage.  Guaranteeing coverage within 30 minutes would require dozens of satellites.  Continuous coverage, if done with satellites 300 km up, would require around 1000.  The KH-11 series is expensive (they are referred to as "battleships") and the US will not be launching a big constellation of them.
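Here's a rough way to see where that ~1000 figure comes from.  This is a sketch, not anyone's official math: the 60 degree maximum off-nadir angle and the packing overhead are my assumptions.

```python
import math

R = 6371.0     # km, Earth radius
h = 300.0      # km, orbital altitude

# Assume a satellite can usefully image out to ~60 degrees off nadir
# (image quality and geometry degrade quickly beyond that).
max_off_nadir = math.radians(60)
footprint_radius = h * math.tan(max_off_nadir)   # ~520 km (flat-earth approx)
footprint_area = math.pi * footprint_radius**2   # ~8.5e5 km^2

earth_area = 4 * math.pi * R**2                  # ~5.1e8 km^2

# Circular footprints can't tile a sphere; pad the naive area ratio
# by ~70% for realistic constellation geometry.
packing = 1.7
n = packing * earth_area / footprint_area
print(f"~{n:.0f} satellites for continuous coverage")   # ~1000
```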

The Next Generation Electro-Optical program, which started in 2008 or so, is probably looking at getting the cost of the satellites down into the sub-$500m range, while still using 2+ meter telescopes, so that a dozen or two can be launched over a decade within a budget that the NRO can actually sell to Congress.  My guess is they won't launch one of these until 2018.  In the meantime, SkyBox Imaging and Planet Labs, who are both launching constellations of small imaging sats, will be trying to sell much lower-resolution images that can be had more quickly.  These civilian operators have 50-60 cm apertures and higher orbits, and so can't deliver the resolution that NRO and NGA customers are used to, and they can't or don't use the SDS or TDRS constellations to relay data in real time.  (SkyBox can do video, but downlinks it 10 to 90 minutes later, when they overfly one of their ground stations.)

The second spysat mission is survey: looking for a needle in a haystack.  From 1972 to 1986 we had this in the form of the KH-9 Hexagon, which shot the entire Soviet Union every 2 to 4 months at 1 to 2 foot resolution.  The intel community at the time could not search or inspect all that imagery, but the survey imagery was great once they'd found something surprising.  Surprise, a new site for making nuclear weapons!  Survey answers the question: What did it look like during construction?  Or, How many other things like this are there?  Nowadays, Big Data and computer vision have got some handle on finding needles in haystacks, but we no longer have the KH-9 or anything like it to supply the survey imagery to search.  We still use the U-2 for aerial survey imagery, but we haven't flown that into denied airspace (e.g. Russia and China) for many decades.

From 1999 to 2005 Boeing ran the Future Imagery Architecture program, which was intended to make a spy satellite that could do radar, survey, and surveillance.  The program took too long and ran way over budget, and was eventually salvaged by cancelling the optical portion and having the team design a synthetic aperture radar satellite, which did get launched.  (Apparently this was the successor to the Lacrosse radar satellite.)

As I wrote, SkyBox does survey with a low-resolution area array.  They would need about 16,000 orbits to cover the entire surface of the earth, which is 2.7 years with one satellite (see the sketch below).  I'm sure they can optimize this down a bit by steering left/right when over the ocean.  But this is 70 cm GSD (ground sample distance) imagery.
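The arithmetic behind those numbers, as a sketch.  The ~2.5 km usable swath is my presumption for a SkySat-class imager (about 3,500 pixels at 70 cm GSD); it's what makes the 16,000-orbit figure work out.

```python
# Orbits needed for gap-free coverage: successive equator crossings
# must land no more than one swath width apart.
equator_km = 40_075.0
swath_km = 2.5          # presumed usable swath
orbit_min = 90.0        # low Earth orbit period, imaging the sunlit pass

orbits = equator_km / swath_km                      # ~16,000 orbits
years = orbits * orbit_min / (60 * 24 * 365.25)     # ~2.7 years
print(f"{orbits:.0f} orbits, {years:.1f} years")
```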

Two of the telescopes designed for FIA were donated to NASA in 2012, and the few details that have emerged tell us about late-1990s spy satellites.  From 300 km up, they could deliver 7 cm imagery, and had a (circular) field of view of about 50,000 pixels.  This could have been used with a 48,000 x 16,000 pixel tiled focal plane array.  Using the simple expedient of shooting frames along the line of the ground track, the ground swath would have been 3.2 km wide, and the satellite could have surveyed the entire Earth in about 2.7 years (the identical figure is a coincidence -- spysats fly at half the altitude, and this telescope had twice the field of view I presumed for SkyBox).
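The 7 cm figure is consistent with plain diffraction-limited optics.  The 2.4 m aperture is known from the NASA donation; the 550 nm wavelength is the usual assumption.

```python
# Diffraction-limited ground sample distance: GSD ~ wavelength * altitude / aperture.
wavelength_m = 550e-9
aperture_m = 2.4
altitude_m = 300e3

gsd_m = wavelength_m * altitude_m / aperture_m   # ~0.069 m
swath_m = 48_000 * gsd_m                         # cross-track tiles
print(f"GSD ~{gsd_m * 100:.0f} cm, swath ~{swath_m / 1000:.1f} km")
# GSD ~7 cm, swath ~3.3 km -- matching the numbers above.
```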

However, to keep up with the ground track pixel velocity, the sensors would have to read out at over 6 frames per second.  That's almost 5 gigapixels per second.  I don't believe area array sensors that big can yet read out that fast with low enough noise.  (The recent Aptina AR1411 reads out at 1.4 gigapixels per second, but it's much smaller, so the column lines have far less capacitance.)
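Those rates follow from orbital mechanics and the focal plane geometry above; the circular-orbit and nadir-pointing assumptions are mine.

```python
import math

mu = 398_600.0    # km^3/s^2, Earth's gravitational parameter
R = 6371.0        # km
h = 300.0         # km

v_orbit = math.sqrt(mu / (R + h))    # ~7.7 km/s
v_ground = v_orbit * R / (R + h)     # ~7.4 km/s ground-track speed

gsd_m = 0.07
along_track_px = 16_000                    # frame advance per exposure
frame_m = along_track_px * gsd_m           # ~1,120 m of ground per frame

fps = v_ground * 1000 / frame_m            # ~6.6 frames/s
pixel_rate = fps * 48_000 * 16_000         # ~5e9 pixels/s
print(f"{fps:.1f} fps, {pixel_rate / 1e9:.1f} Gpix/s")
```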

The large number is not a result of the specifics of the telescope or sensor design -- it's fundamental to high resolution orbital survey.  It's just the rate at which the satellite flies over ground pixels.  Getting 5 billion tiny analog charge packets to A/D converters every second is hard.  Once there, getting 20 gigabits/second of digital data to the ground is even harder (I don't think it's been done yet either).  I'll defer that discussion to a later post.
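For what it's worth, the 20 gigabit figure follows from the pixel rate if you assume (as I do here) 12 bits per pixel and roughly 3:1 compression:

```python
pixel_rate = 5.0e9      # pixels/s, from the sketch above
bits_per_px = 12        # assumed quantization
compression = 3.0       # assumed mild compression ratio

downlink_gbps = pixel_rate * bits_per_px / compression / 1e9
print(f"~{downlink_gbps:.0f} Gbit/s to the ground")   # ~20 Gbit/s
```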

Pushbroom sensors are more practical to arrange.
  • The satellite simply stares straight down at the ground.  Attitude corrections are very slow.
  • It's easy to get lots of A/D converters per sensor: you simply add multiple taps to the readout line (see the sketch after this list).
  • It's easy to tile lots of sensors across the focal plane.  You stagger two rows of sensors, so that ground points that fall between the active areas of the first row are imaged by the second row, like this focal plane from ESA Sentinel-2.  Once stitched, the resulting imagery has no seams.
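A sketch of why pushbroom readout is so tractable.  The sensor count and taps-per-sensor below are my assumptions, but they're typical of tiled line arrays.

```python
# Per-tap readout rate for a 48,000-pixel pushbroom line at 7 cm GSD.
v_ground_mps = 7_380.0        # ground-track speed from the sketch above
gsd_m = 0.07
line_rate_hz = v_ground_mps / gsd_m              # ~105,000 lines/s

sensors = 12                  # assumed tiles across the swath
taps_per_sensor = 16          # assumed readout taps per tile
px_per_tap = 48_000 / sensors / taps_per_sensor  # 250 pixels per tap per line
tap_rate_mpxs = px_per_tap * line_rate_hz / 1e6  # ~26 Mpix/s per tap
print(f"{line_rate_hz:.0f} lines/s, {tap_rate_mpxs:.0f} Mpix/s per tap")
```

At ~26 megapixels per second per tap, each A/D converter runs at a rate ordinary commercial parts handle easily.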


Tiled area array sensors are more difficult, but have the advantage of being able to shoot video, as well as a few long exposures on the night side of the Earth.
  • The image must be held steady while the field of view slides along the ground.  Although this can be done by rotating the whole satellite, survey work is going to require rapidly stepping the stabilized field forward along the ground track, several times a second (see the timing sketch after this list).  Fast cycling requires a lightweight optical element, usually the secondary mirror, to have a fast and super-precise tip/tilt mechanism to track out the motion.  Cycling this element back into position between shots can add vibration to the satellite.
  • While the secondary mirror is moving the image back into position, the pixel photodiodes must not accumulate charge that affects the values read out.  This typically means that either the cycling time can't be used for readout, or (as in the VisionMap A3) the sensor is an interline CCD with two capacitors per pixel, one of which is shielded.  With this choice comes a bunch of minor but annoying problems.
  • In one line time, charge is transferred from the pixels all the way across the array to the readout.  The bit lines can be long and capacitive and add noise.
  • Take another look at the first pic in this blog post, and note the seams between the active arrays.  These are annoying.  It's possible to take them out with clever combinations of sparse arrays and stepping patterns.
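Here is the timing sketch promised above, for the ~6.6 fps survey case.  The 20% fly-back allowance for the tip/tilt mechanism is my guess.

```python
fps = 6.6                                # from the survey sketch above
frame_period_ms = 1000 / fps             # ~150 ms per frame

flyback_fraction = 0.20                  # assumed mirror re-step + settle budget
flyback_ms = frame_period_ms * flyback_fraction   # ~30 ms
stare_ms = frame_period_ms - flyback_ms           # ~120 ms to expose and read out
print(f"{frame_period_ms:.0f} ms frame, {flyback_ms:.0f} ms fly-back, "
      f"{stare_ms:.0f} ms stabilized stare")
```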
Lenses generally resolve a circular field of view, and pushbroom sensors take a rectangular stripe down the middle.  It's possible to put an area array sensor in the leftover upper or lower crescent around a pushbroom sensor.  This gives a smaller area sensor, but in the context of a 50,000 pixel diameter focal plane, a "smaller" area sensor might be 10,000 pixels on a side, with 50 times the pixel count of an HD video sensor.  This allows for a 10:1 "digital zoom" for context with no loss of display resolution.

If I were building a government spysat today, I'd want it to do survey work, and I'd make surveillance the secondary mission.  Airplanes and drones are better for most surveillance work.  I'd want to shoot the whole Earth each year, which can be done with three satellites at 300 km altitude.  I'd use a staggered pushbroom array as the primary sensor and a smaller area array for surveillance.

The step-stare approach that SkyBox is using makes sense when a big, fast area array sensor covering the whole field of view can be had at low risk.  Sensors are developing quickly, so this envelope is growing over time, but it's still an order of magnitude away from what large-aperture spysats can do.

Maybe I'm wrong about that order of magnitude.  In 2010 Canon announced a 205 mm square CMOS sensor that supposedly reads out 60 frames per second.  Here it is pictured next to a full-frame 35mm DSLR sensor -- it's slightly bigger than the tiled array at the top of this post.  Canon did not announce the resolution, but they did say the sensor had 100 times the sensitivity of a DSLR, which suggests a pixel size around 35 microns.  That's too big for a spysat focal plane, unless it's specifically for use at night.
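Here's the inference, with my assumptions spelled out: sensitivity scales with pixel area, and I take a ~3.5 micron pitch as typical of DSLR sensors of that era.

```python
import math

dslr_pitch_um = 3.5                                # assumed DSLR pixel pitch
canon_pitch_um = dslr_pitch_um * math.sqrt(100)    # 100x sensitivity -> 10x pitch

side_mm = 205.0
px_per_side = side_mm * 1000 / canon_pitch_um      # ~5,900 pixels
print(f"~{canon_pitch_um:.0f} um pixels, ~{px_per_side:.0f} px per side")
```

That works out to roughly 34 megapixels: huge pixels, modest resolution, consistent with a night-side sensor.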
No subsequent announcement was made suggesting a purpose for this sensor.  Canon claims it was a technology demonstration, and I believe that (they would not have been allowed to show a production part for a spysat to the press).  Who were they demonstrating that technology to?  Is this the focal plane for a Japanese spysat?