Friday, October 31, 2014

STS-93: Yikes! We don't need any more of these.

I just found Wayne Hale's blog.  Be careful reading this thing; I just lost nearly an entire night of sleep.  The latest update, which covers the launch of STS-93, is just breathtaking.

Here's a video which documents the folks at mission control scrambling to figure out what is going on with their bird during the ascent.

Sunday, September 14, 2014

Quick trip to the Sierras

On Friday I took a quick trip to the Sierras to grab some Ponderosa pine forest images with a drone.  Initially, the logging road was just gorgeous.


Then I got to some bits that were less than gorgeous.  These roads don't see much use (I saw one other couple in a pickup during several hours on site), and I think these portions are probably just completely ignored until it's time for another logging operation, at which point they probably fill in the worst spots with gravel.  There were 18-inch-deep gullies in places, and nasty rocky bits that looked like a dry stream bed.  I walked several of these before trying them in the minivan.  There were a few uphill sections on which I was glad to have 4 wheel drive.


Eventually I got up to my target location.  That pile of wood is slash from a logging operation that probably happened in the last few years.


Target.  Life is good.

No crashes, nothing broke.  However, there are new noises coming from the minivan's power steering system now.  So, perhaps I did break something.  Overall it was a successful trip.

Sunday, June 29, 2014

Early Days on Street View

A friend was asking about the early Street View timeline, which prompted a trip down memory lane.  It's been at least five years for most of this stuff, so if anyone has corrections I'm more than happy to apply them.

In 1978, DARPA funded MIT to produce the Aspen Movie Map, which was an interactive LaserDisc-based app for tooling around Aspen, Colorado.

In 1996, the Clickwalk project (which I've just learned about thanks to a comment on this post) started in Norway.  By 1999 they had panoramic imagery online.  I'll add more info about this as I find it.

Amazon launched BlockView in January 2005. Because Amazon had no top-down maps interface, it was nearly impossible to navigate using BlockView, and it remained a curiosity until it was canned in September 2006.

In 2001, Larry Page shot a video from the side window of his car while driving around a few spots in the Bay Area.  His point was that Google had no way to index most of the stuff people interacted with every day.  He showed the video to Marc Levoy, a professor at Stanford, and got Marc some funding.  Marc started the Stanford Cityblock project.  This culminated in Augusto Roman's 2006 thesis "Multiperspective imaging for automated urban visualization". Augusto was producing image strips like this by shooting lots of pictures of the side of the street.


I was hired in June 2006 to work on a Google project which had grown out of the Stanford Cityblock project.  At the time I was hired, we had two copies of the first camera set, which I dubbed R1. These had been assembled by bolting five 11-megapixel, CCD-based book-scanning cameras (shown below) to a plywood board, and bolting that to the roof of a car, much of which was accomplished by Elliot Kroo when he was, if I'm not mistaken, 14 years old (youngest intern ever at Google). Neither R1 worked much, due to problems with the cameras, not Elliot!

R1 camera

That summer we put together R2, which repackaged R1's electronics and off-the-shelf Sigma 28mm DSLR lenses.  R2 had eight cameras, rather than five, for higher resolution, which turned out to exacerbate the camera problems we were already having with R1.  R2 looked a lot like a tank turret. The vertical-slot, side-facing port you can see below was for a 4-megapixel, 200-frame-per-second camera that would produce the multiperspective strips from Augusto's thesis, and there was one on each side. The need to sink 1600 megapixels per second from those cameras drove us to a very large and heavy disk array in the van, one of the many flaky pieces of hardware in this rig. The 8-lens pano camera set on top was a backup for a different user interface, in case the strip UI didn't work. There was also an upward-facing 3-megapixel CMOS camera, meant to complete a hemispherical panorama, but it never really worked.  We never figured out that the camera set wasn't watertight, because so many other things in the system were so broken.

R2 camera

The only watertight things in the rig were those Sick LMS291 laser scanners.  The name is German.

Sick LMS291 laser scanner

The multiperspective strip user interface wasn't very usable and the top pano cam became the focus of our attention.

In late 2006 we were shooting San Francisco with R2 and getting some horrible tearing and blooming artifacts. Whenever an image of the sun was projected onto the sensor, the photocurrent from that image would overflow the pixel capacitors and flow into other places. Shiny surfaces like car windows and paint caused overflows into the readout circuitry, which showed up as streaks. If the sun was actually in the field of view, the photocurrent was so large it would locally raise the ground voltage in that portion of the sensor. Locally, the CCD would stop shifting, leading to the tearing effect.

We didn't know it at the time, but the problems we were having were essentially due to the high resolution we were trying to shoot. High resolution from a moving platform leads to short exposure times. Short exposure times require large relative apertures to get enough sensitivity. Compared to small format cameras, our focal lengths were fairly long (28 mm), so that meant much larger apertures, much larger photocurrents, and thus the problems we were fighting. In hindsight, the two real solutions were lower resolution or CMOS sensors. Rather than try those, I tried to make the high resolution CCD work with shutters (the R3 camera) and choppers (R4).  Oops.
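
To put some rough numbers on that reasoning, here is a little Python back-of-the-envelope.  The vehicle speed, ground sample distance, and exposure figures are illustrative assumptions of mine, not the actual R2 specifications.

    # Why high resolution from a moving platform forces short exposures and
    # large apertures.  All numbers are illustrative assumptions, not R2 specs.
    vehicle_speed_mps = 15.0   # assumed shooting speed, ~34 mph
    gsd_m = 0.01               # assumed ~1 cm ground sample distance at the subject
    max_blur_px = 0.5          # tolerate at most half a pixel of motion blur

    max_exposure_s = max_blur_px * gsd_m / vehicle_speed_mps
    print("max exposure: %.2f ms" % (max_exposure_s * 1e3))   # ~0.33 ms

    # To collect the same number of photons per pixel in ~1/30th the time,
    # the aperture area has to be ~30x larger -- hence the large photocurrents
    # behind the blooming and tearing described above.
    comfortable_exposure_s = 10e-3   # assumed exposure a static camera might use
    print("aperture area ratio: %.0fx" % (comfortable_exposure_s / max_exposure_s))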

Anthony Levandowski


Sometime in 2006, Sebastian Thrun started VueTool with six Stanford students who had previously worked on the DARPA/Stanford self-driving car project, one of whom was Anthony Levandowski (later a major player on the Google self-driving car project, who then stole the tech and took it to Uber, and was pardoned by Trump on the last day of his first term). Sebastian had been one of Augusto's thesis advisors, and I think he was a bit exasperated by the slow progress at Google. His team built a far lighter rooftop rig from a Point Grey LadyBug2 camera and three LIDAR scanners, and stitched it together with 80/20 aluminum extrusions. Operating at much lower resolution, the camera avoided the worst of the artifacts we were fighting. They had the rig working, on the road, in months, and had a very slick UI with movie-like transitions between navigation points. In early 2007, Google purchased VueTool and most of the existing Street View team was refocused on deploying the VueTool rig. By October, we'd shot 1 million miles, at which time dew and rain got into and quickly killed the LadyBug2 cameras, which had been designed for indoor use.

LadyBug2 camera

In May 2007, Street View launched with R2 imagery and imagery purchased from Immersive Media. The VueTool rig was such a success that by the end of the year, most if not all of the launched imagery was from that rig.

In the last few months of 2007, we knew that the LadyBug2 was not going to work long-term, so we needed another camera for the 2008 driving season. Point Grey was promising the LadyBug3, which would be watertight. I was promising R3 or R4 (and fighting with shutter reliability problems in the lab).  Jason Holt on the Google Geo cam team put together nine 5-megapixel CMOS cameras (as opposed to the CCD cameras used by R2 and LadyBug2) and called it R5. The race was on. We dropped R3 & R4, and the camera team went nuts that winter getting R5 to work (and be watertight)! I remember assembling the first R5 to use my custom lenses in February 2008 and driving San Francisco... the day before Jason, in Europe, got the first French imagery from an R5 with off-the-shelf lenses. I was still pretty sore from eating crow on the R3/R4 decision, and I wanted to show that at least my lenses worked well.

R5 Camera

LadyBug3 ended up over a year late. R5 had a rough start in Europe, barely making an imagery launch just before the 2008 Tour de France. But it worked.

2008 was a great year for the team, as we had a working rig and strong demand. We spent a lot of time on the assembly line getting the camera yields up and improving focus quality. At the same time, the ops guys were gobbling larger and larger hunks of ground. At each launch, we'd see major traffic blips as the first news reports would let people know they had local imagery.

When an average person discovers that his house is online, he tells a few friends, and some number of them discover their houses are online too. When that number is less than one, which was the case for nearly all our launches, we would see a burst of traffic which was some multiple of the initial population that found out about the new coverage through news reports, etc.

However, when that number is more than one, the discovery ripples through the entire networked population. How big is that network? We found out in August 2008, when we launched all of Australia (and a bit of Japan) in one go. We hit critical mass, and the network effect was insane. I don't remember the exact number, but at one point something like a few percent of the entire Australian population, as well as their overseas friends, were actively clicking on Street View. The traffic spike broke all sorts of things, including the help button on Maps. The next day, I remember seeing one email thread from seriously angry SREs which was getting nasty messages faster than I could read them. SREs are the folks who keep google.com running; they are very smart and enjoy excellent management support -- you do not mess with SREs. So it was important not to be seen high-fiving one another or sporting winning smirks. Thereafter we were required to have very conservative and very expensive budgets for launch traffic spikes.
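
A toy branching-process model captures that less-than-one versus more-than-one distinction.  Here is a minimal Python sketch; the seed size and branching ratios are made-up numbers for illustration, not our actual traffic data.

    # Toy model: each person who discovers their house causes, on average,
    # r further people to discover theirs.  All numbers are made up.
    def total_discoveries(seed, r, generations=30):
        total, current = 0.0, float(seed)
        for _ in range(generations):
            total += current
            current *= r
        return total

    seed = 10000  # assumed initial burst of people who learn about new coverage from news
    print("r=0.5: %.0f people" % total_discoveries(seed, 0.5))  # r < 1: ~2x the seed, then it dies out
    print("r=1.2: %.0f people" % total_discoveries(seed, 1.2))  # r > 1: keeps growing until the real
                                                                # network saturates (this toy has no cap)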

Fresh from their 2007 success, in 2008 Sebastian Thrun and several members of the former VueTool team pointed out that (a) Google now had all the data sources necessary to make its own street vector database, something it had previously licensed from others, particularly TeleAtlas, and (b) those licenses were very expensive. The inevitable project to build this massive database was known as Ground Truth. By year end, not only did the Street View team have a working product, but we had an internal customer laying down a tight schedule to shoot the entire world. This was like spraying lighter fluid on an already-burning heap of charcoal.

Right in the middle of this huge burst of activity, a wrench went into the works.  Over the winter of 2009 and into 2010 we had fleet stoppages due to lawsuits over collection of WiFi packet payloads. Cellphones use WiFi hotspots, even ones which are encrypted, to figure out where they are, because the stronger signal is easier to pick up and takes less power to process than GPS. This works even without a password, because all that is necessary is the station ID (from the unencrypted header), signal strength... and the station location. You can get this data from SkyHook Wireless, but once again Google wanted its own database. So in theory, we should have just grabbed the header. However, a single wrong line in the configuration file caused us to grab the packet payload too. Had we ever audited the data on the disks, we would have found the bug and fixed it, but... nobody ever noticed. Oops.

R7 camera

Quick aside: the field R5s were slowly accumulating moisture, had 1-5 milliseconds of jitter between cameras, had autoexposure issues, and had a low-resolution upward-facing fisheye that led to an ugly resolution change whenever the user looked up at all.  My boss, Chris Uhlik, wanted me to design our own boards rather than rely on the Elphel electronics we had been using, so that we could kill the synchronization problems and bring the electronic reliability issues in-house where we could apply whatever resources we wanted.  So in the summer of 2008, I stole an intern, George Hotz, from Diego Ruspini (who chose the wrong week to go on paternity leave) and started work on R7, shown above.  George, a.k.a. Geohot, was the same guy who had rooted several generations of iPhones, and later went on to become a rap star.  George and I (but mostly George) prototyped the R7 in 2008, and got some basic bits working by the end of the year.  This was my first board level design and first FPGA design, and it showed: the power system and signal integrity were a joke, and (in retrospect this is all so clear) power glitches were causing unending flaky behavior.  I spent the spring of 2009 redesigning the boards.  During this stage Alessandro Temil transferred into our team and we finally had the engineering critical mass to get the thing done.

George Hotz

It was a good thing too.  Sometime in the summer of 2009 Julie Kim (ops) took my Real Soon Now promises too seriously and filled the basement garage of Google's new offices with Subaru Imprezas waiting for new R7 cameras... which weren't working yet.  Once again, getting the thing watertight was proving to be a problem, and the external baffle (the big red ball) was warping while being welded, which led to some of the overlap areas between cameras not getting sampled at all. The WiFi standdown relaxed pressure on the camera team and let us get the assembly line working better.  We finally got the thing into production during the early part of the 2010 driving season. And amid all the secrecy, a bunch of Googlers actually published an article featuring a teardown of the R7 -- without telling me ahead of time!

As I was the tech lead on this camera, and as the details have been published, allow me to brag a bit. For the first time, the circuit boards and firmware were an entirely internal development. The R7 eliminated essentially all of the major problems with earlier cameras: the individual cameras were synchronized to nanoseconds, the auto-exposure was fast and predictable (and it could shoot HDR if needed), it shot the same resolution in all directions, the relative unit camera orientations were very stable, and it was really watertight. It has a 480GB internal FIFO (flash) that lets the camera outpace the disk drive in the car for long periods if needed. I felt this camera is what I had been hired by Google to do. It still had CMOS rolling shutters, which in combination with vehicle movement make for difficult stitching problems.


The R7 is what Google is using today. I expect they'll be using it for quite some time, as advances in sensors haven't really fixed the problem of high resolution from moving platforms. If a CMOS sensor with a global shutter (no rolling shutter) comes out, they might do a new camera based on that, but I'd be very careful to verify that it works when pointed right at the sun first, as you need to read those pixels near the sun before they've marinated in excess photocurrent for too long.

Postscript

Microsoft chose a different path for Bing Maps, partly funding and contributing to the OpenStreetMap project, which is an open-source alternative to the Google Maps street database. To my knowledge, their imagery collection cars do not directly feed into this project, in the sense of having budgets and deadlines set by it. As a result, their StreetSide project has no internal customer or revenue source to drive it.

Navteq and TeleAtlas continue to drive the world's streets with cars similar to Google's, but at a somewhat smaller scale, and the imagery from those collections is not published, as far as I know. One wonders why this imagery doesn't end up at Microsoft, as the oblique imagery from Pictometry does.

The success of Ground Truth cemented Sebastian Thrun's position at Google and made it possible for him to get serious long-term funding for the self-driving car project, which eventually became Chauffeur and then Waymo.

I went on to develop aerial cameras for Google, but that story will have to wait for Google to publish some details.

Tuesday, May 13, 2014

Happy Birthday to me


It's 5:13pm, and I'm sitting in the shade in my back yard, tweaking some really neat flexure mounts, while keeping an eye on two of my kids and two of their friends frolicking in the pool I built years ago. It's hot out, and there is steady traffic between the two hives near the back of the yard and the fountain to my right. A pair of ducks have been watching the kids too, and though they like the look of all that water they're leaving for someplace less noisy.  Lady Jane, our black Labrador, is lying in the grass, which is overdue for mowing, ripping up stems and chewing away. There are stains from fine droplets of sunscreen on the back of my laptop that won't be coming off. Martha will bring my youngest daughter back from gym class in an hour and then we'll head out for my birthday dinner.

At least once a day, at least one person helps me accomplish something I cannot achieve myself, things I am really happy to be working on.  I wonder if I manage to help someone else every day in the same way.

I have a lot to be thankful for.

Wednesday, April 23, 2014

Windows 8.1 is unusable on a desktop

For the last two years I've been doing a lot of SolidWorks Simulation on my Lenovo W520 laptop.  This thing has been great.  But I've started doing fluid flow simulations, and it's time for more CPU than a 3.3 GHz (limited by heat load) dual core Sandy Bridge.

So I built a 6-core Ivy Bridge (i7-4930K), which I overclocked to 4.5 GHz.  Very nice.  However, I installed Windows 8.1 on it, which turns out to have been a mistake.  This post is for people who, like me, figure that Windows 8 problems are old news and Microsoft must have fixed them by now.

Summary: Nope.

I figured that all those folks bellyaching about the new Windows were just whining about minor UI differences.  Windows 8 should benefit from 3 years of code development by thousands of serious engineers at Microsoft.  The drivers should be better, and it definitely starts up from sleep faster (and promptly serves me ads).   I figured I could deal with different menus or whatever.

I have learned that Windows 8.1 is unusable for a workstation.
  • Metro apps are full screen.  Catastrophe.
    • When I click on a datasheet PDF in Windows 7, it pops up in a window and I stick it next to the Word doc and Excel doc that I'm working on.  In Windows 8, the PDF is full screen, with no way to minimize.  I can no longer cut and paste numbers into Skype.  I can no longer close open documents so that DropBox will avoid cache contention problems.
    • Full screen is fine for a tablet, but obliterates the entire value of a 39 inch 4K monitor.  I spent $500 on that monitor so I could see datasheets, spreadsheets, a Word doc, and SolidWorks at the same time.
    • Basically, this is a step back to the Mac that I had 20 years ago, which ran one application, full screen, at a time.
  • Shortly after the build, I cut power to the computer while it was on.  Windows 8 cheerfully told me I had to reinstall the O/S from scratch, and blow away all the data on the machine.  I don't keep important data on single machines, but I still lost two hours of setup work.  That's not nice.  I have not had that problem with Windows 7.
  • I plugged the 4k monitor into my W520 running Windows 7.  It just worked.  My Windows 8 box wants to run different font sizes on it, which look terrible.
  • Windows 8 + Chrome + 4k monitor = display problems.  It appears Chrome is rendering at half resolution and then upscaling.  WTF?  This has pushed me to use Internet Explorer, which I dislike.  Chrome works fine on the 4k on Windows 7.
  • Windows 8 + SolidWorks = unreadable fonts in dialog boxes.  I mean two-thirds of the character height is overwritten and not visible.  So actually unreadable.  The SolidWorks folks know they have a problem, and are working on it.  And, I found a workaround.  But it still looks unnecessarily ugly.
  • Windows 8 + SolidWorks + 4k monitor = display problems.  Not quite the same look as upscaling, but something terrible is clearly happening.  Interestingly, if more than half of the SW window is on my 30" monitor, lines drawn on the 39" look okay.  But when more than half of the SW window is on my 39" monitor, lines look like crap... even the ones on the smaller half of the window still on the 30".
  • Windows 7, to find an application: browse through the list on the Start button.  Windows 8: start by knowing the name of the application.  Go to the upper right corner of the screen, then search, then type in the name.
  • Finally, that upper right corner thing.  I have two screens.  That spot isn't a corner, it's between my two screens.  I keep triggering that thing when moving windows, and can't trigger it easily when I want to.  Microsoft clearly designed this interface for tablets, and was not concerned with how multi-screen desktop users would use it.
And here's the kicker: Microsoft won't swap the Windows 8.1 Pro license I got for a Windows 7 license.  I have to buy Windows 7.

Excel 2013 has one thing I like: multiple spreadsheets open in separate windows, like Word 2010 and like you'd expect.

Word 2013 has two things I dislike: saving my notes file takes 20 seconds rather than being nearly instant (the bug was reported for a year before Microsoft recently acknowledged it), and entering "µm" now takes two more clicks than it used to -- and nothing else has gotten better in exchange.  Lame.

I suggest not upgrading, folks.  No real benefit and significant pain.

You have (another) angry customer, Microsoft.

Here's the difference in SolidWorks rendering, on the SAME MONITOR, running in Windows 8, as I shift the window from being 60% on the 30 inch monitor to 40% on the 30 inch monitor (and 60% on the 39 inch):

30 inch mode: Note that lines are rendered one pixel wide, text is crisp.

39 inch mode: Lines are fatter; antialiasing is attempted but done wrong.


Wednesday, January 15, 2014

Sensors, Survey and Surveillance from Space

The SkyBox satellites are the first to use area array rather than pushbroom sensors for survey work, but they certainly aren't the first to use area array sensors.  I think the first satellites to do that were the KH-11 surveillance satellites, versions of which are still the principal US optical spysats in use today.  The first KH-11s sported area array sensors of about the same resolution as a standard definition TV.  The most recent KH-11s probably have a focal plane similar to this tiled 18,000 x 18,000-pixel, 10-micron focal plane (shown below, that circle is a foot in diameter).

Optical spysats have two missions; call them surveillance and survey.  When you already know where the thing is, that's surveillance.  Response time matters, but throughput is usually not a big deal.  When you don't know where your thing is, or you don't even know what it is yet, you are doing survey.  Throughput is king in survey work, and if response time matters, you have a problem.  Coast Guard aerial search and rescue, for example, has this problem.  You can read about the difficulties of search at sea in this NY Times article on rescue of a fisherman last July.

General Schwarzkopf said after the first Gulf War that spysats (he must have been referring to the earlier KH-11s) could not provide useful, timely imagery.  He was comparing single pictures of targets after a hit to the target camera footage of his planes, which gave him continuous video snippets of the target before, during, and after a hit.  These videos were very popular at press conferences and with his upper management.
Satellites are excellent for getting access to denied airspace -- there is no other way to take pictures inside China and Russia.  But in Iraq, Afghanistan, and Pakistan they are completely outclassed by airplanes and now drones with long-range optics (like the MB-110 reconnaissance pod which I still haven't written up).  In a 20 year battle against irrelevancy, I suspect that getting near-real-time imagery, especially video, from orbit has been a major NRO focus.  I'm sure the Block IV KH-11 launches in 2005, 2011, and recently in August 2013 can all do real-time downlinks of their imagery through the SDS satellite constellation.  However, the second part of real-time is getting a satellite into position to take the picture quickly.  The three KH-11s in orbit often cannot get to a surprise target in less than 30 minutes, and cannot provide continuous video coverage.  Guaranteeing coverage within 30 minutes would require dozens of satellites.  Continuous coverage, if done with satellites 300 km up, would require around 1000.  The KH-11 series is expensive (they refer to them as "battleships") and the US will not be launching a big constellation of these.
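
The "around 1000" figure is easy to sanity-check with some crude geometry.  The maximum off-nadir viewing angle and the packing overhead below are my assumptions, not anything published.

    # Crude estimate of how many satellites at 300 km give continuous coverage.
    import math

    EARTH_AREA_KM2 = 510e6
    altitude_km = 300.0
    max_off_nadir = math.radians(60)   # assumed slew limit for useful imagery
    packing_factor = 1.5               # assumed overlap needed for gapless coverage

    # Flat-Earth approximation of the ground radius one satellite can reach.
    reach_km = altitude_km * math.tan(max_off_nadir)
    footprint_km2 = math.pi * reach_km ** 2
    count = EARTH_AREA_KM2 / footprint_km2 * packing_factor
    print("reach ~%.0f km, roughly %.0f satellites" % (reach_km, count))   # ~900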

The Next Generation Electro-Optical program, which started in 2008 or so, is probably looking at getting the cost of the satellites down into the sub-$500m range, while still using 2+ meter telescopes, so that a dozen or two can be launched over a decade within a budget that NRO can actually sell to Congress.  My guess is they won't launch one of these until 2018.  In the meantime, SkyBox Imaging and ExactEarth, who are both launching constellations of small imaging sats, will be trying to sell much lower-resolution images that can be had more quickly.  These civilian operators have 50-60 cm apertures and higher orbits, and so can't deliver the resolution that NRO and NGA customers are used to, and they can't or don't use the SDS or TDRS constellations to relay data in real time.  (SkyBox can do video, but then downlinks it 10 to 90 minutes later when they overfly one of their ground stations.)

The second spysat mission is survey: looking for a needle in a haystack.  From 1972 to 1986 we had this in the form of the KH-9 Hexagon, which shot the entire Soviet Union every 2 to 4 months at 1 to 2 foot resolution.  The intel community at the time could not search or inspect all that imagery, but the survey imagery was great once they'd found something surprising.  Surprise, a new site for making nuclear weapons!  Survey answers the question: What did it look like during construction?  Or, How many other things like this are there?  Nowadays, Big Data and computer vision have got some handle on finding needles in haystacks, but we no longer have the KH-9 or anything like it to supply the survey imagery to search.  We still use the U-2 for aerial survey imagery, but we haven't flown that into denied airspace (e.g. Russia and China) for many decades.

From 1999 to 2005 Boeing ran the Future Imagery Architecture program, which was intended to make a spy satellite that could do radar, survey, and surveillance.  The program took too long and ran way over budget, and was eventually salvaged by cancelling the optical portion and having the team design a synthetic aperture radar satellite, which did get launched.  (Apparently this was the successor to the Lacrosse radar satellite.)

As I wrote, SkyBox does survey with a low-resolution area array.  They would need about 16,000 orbits to cover the entire surface of the earth, which is 2.7 years with one satellite.  I'm sure they can optimize this down a bit by steering left/right when over the ocean.  But this is 70 cm GSD imagery.
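
Working backwards from those figures as a quick check (the ~90-minute orbital period is my assumption for a low Earth orbit):

    # Convert the 16,000-orbit figure above into years and an implied swath.
    orbits_needed = 16000            # from the paragraph above
    orbit_minutes = 90.0             # assumed LEO orbital period
    years = orbits_needed * orbit_minutes / (60 * 24 * 365.25)
    print("%.1f years with one satellite" % years)          # ~2.7 years

    # Adjacent daylight passes have to tile the equator, so 16,000 orbits
    # implies a usable swath of a couple of kilometers.
    equator_km = 40075
    print("implied swath: %.1f km" % (equator_km / orbits_needed))   # ~2.5 km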

Two of the telescopes designed for FIA were donated to NASA in 2012, and the few details that have emerged tell us about late 1990s spy satellites.  From 300 km up, they could deliver 7 cm imagery, and had a (circular) field of view of about 50,000 pixels.  This could have been used with a 48,000 x 16,000 pixel tiled focal plane array.  Using the simple expedient of shooting frames along the line of the ground track, the ground swath would have been 3.2 km wide, and could have surveyed the entire Earth in about 2.7 years (the same number is a coincidence -- spysats fly at half the altitude and this one had twice my presumed field of view for SkyBox).

However, to keep up with the ground track pixel velocity, the sensors would have to read out at over 6 frames per second.  That's almost 5 gigapixels per second.  I don't believe area array sensors that big can yet read out that fast with low enough noise.  (The recent Aptina AR1411 reads out at 1.4 gigapixels per second, but it's much smaller, so the column lines have far less capacitance.)

The large number is not a result of the specifics of the telescope or sensor design -- it's fundamental to high resolution orbital survey.  It's just the rate at which the satellite flies over ground pixels.  Getting 5 billion tiny analog charge packets to A/D converters every second is hard.  Once there, getting 20 gigabits/second of digital data to the ground is even harder (I don't think it's been done yet either).  I'll defer that discussion to a later post.
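
Before moving on, here is the arithmetic behind those rates, as a quick Python check.  The ~7 km/s ground-track speed, the bit depth, and the compression ratio are my assumptions.

    # Frame rate and data rate for the hypothetical FIA-style survey sensor.
    gsd_m = 0.07                        # 7 cm pixels (from the post)
    swath_px, track_px = 48000, 16000   # tiled focal plane size (from the post)
    ground_speed_mps = 7100.0           # assumed ground-track speed at ~300 km

    frame_length_m = track_px * gsd_m   # ground covered per frame, along track
    fps = ground_speed_mps / frame_length_m
    pix_per_s = fps * swath_px * track_px
    print("%.1f frames/s, %.1f gigapixels/s" % (fps, pix_per_s / 1e9))   # ~6.3 fps, ~4.9 Gpix/s

    # At an assumed 12 bits/pixel raw and ~3:1 compression, the downlink
    # works out to roughly the 20 gigabits/second mentioned above.
    print("~%.0f Gbit/s compressed" % (pix_per_s * 12 / 3 / 1e9))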

Pushbroom sensors are more practical to arrange.
  • The satellite simply stares straight down at the ground.  Attitude corrections are very slow.
  • It's easy to get lots of A/D converters per sensor, you simply add multiple taps to the readout line.
  • It's easy to tile lots of sensors across the focal plane.  You stagger two rows of sensors, so that ground points that fall between the active areas of the first row are imaged by the second row, like this focal plane from ESA Sentinel-2.  Once stitched, the resulting imagery has no seams.


Tiled area array sensors are more difficult, but have the advantage of being able to shoot video, as well as a few long exposures on the night side of the Earth.
  • The image must be held steady while the field of view slides along the ground.  Although this can be done by rotating the whole satellite, survey work is going to require rapidly stepping the stabilized field of view forward along the ground track, several times a second.  Fast cycling requires a lightweight optical element, usually the secondary mirror, to have a fast and super precise tip/tilt mechanism to track out the motion.  Cycling this element back into position between shots can add vibration to the satellite.
  • While the secondary mirror is moving the image back into position, the pixel photodiodes must not accumulate charge that affects the values read out.  This typically means that either the cycling time can't be used for readout, or (as in the VisionMap A3) the sensor is an interline CCD with two capacitors per pixel, one of which is shielded.  With this choice comes a bunch of minor but annoying problems.
  • In one line time, charge is transferred from the pixels all the way across the array to the readout.  The bit lines can be long and capacitive and add noise.
  • Take another look at the first pic in this blog post, and note the seams between the active arrays.  These are annoying.  It's possible to take them out with clever combinations of sparse arrays and stepping patterns.
Lenses generally resolve a circular field of view, and pushbroom sensors take a rectangular stripe down the middle.  It's possible to put an area array sensor in the leftover upper or lower crescent around a pushbroom sensor.  This gives a smaller area sensor, but in the context of a 50,000 pixel diameter focal plane, a "smaller" area sensor might be 10,000 pixels on a side, with 50 times the pixel count of an HD video sensor.  This allows for a 10:1 "digital zoom" for context with no loss of display resolution.
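
The arithmetic behind the "50 times" and "10:1" figures, assuming a standard 1920 x 1080 HD frame:

    # Pixel-count and zoom-range arithmetic for a 10,000 x 10,000 area array.
    area_px = 10000 * 10000
    hd_px = 1920 * 1080
    print("%.0fx the pixel count of an HD sensor" % (area_px / hd_px))   # ~48x

    # Displaying a 1080-pixel-tall crop of a 10,000-pixel-tall sensor gives
    # roughly a 10:1 zoom range with no loss of display resolution.
    print("zoom range %.1f:1" % (10000 / 1080))                          # ~9.3:1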

If I were building a government spysat today, I'd want it to do survey work, and I'd make surveillance the secondary mission.  Airplanes and drones are better for most surveillance work.  I'd want to shoot the whole Earth each year, which can be done with three satellites at 300 km altitude.  I'd use a staggered pushbroom array as the primary sensor and a smaller area array for surveillance.

The step-stare approach that SkyBox is using makes sense when a big, fast area array sensor covering the whole field of view can be had at low risk.  Sensors are developing quickly, so this envelope is growing over time, but it's still an order of magnitude away from what large-aperture spysats can do.

Maybe I'm wrong about that order of magnitude.  In 2010 Canon announced a 205 mm square CMOS sensor that supposedly reads out 60 frames per second.  Here it is pictured next to a full-frame 35mm DSLR sensor -- it's slightly bigger than the tiled array at the top of this post.  Canon did not announce the resolution, but they did say the sensor had 100 times the sensitivity of a DSLR, which suggests a pixel size around 35 microns.  That's too big for a spysat focal plane, unless it's specifically for use at night.
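
For what it's worth, the arithmetic goes like this: sensitivity scales roughly with pixel area, so 100 times the sensitivity suggests about 10 times the linear pixel pitch.  The ~3.5 micron reference pitch below is my assumption for the DSLR being compared against.

    # Rough pixel-pitch estimate from Canon's "100x the sensitivity" claim.
    import math

    sensitivity_ratio = 100.0
    reference_pitch_um = 3.5   # assumed DSLR pixel pitch for the comparison
    pitch_um = reference_pitch_um * math.sqrt(sensitivity_ratio)
    print("implied pixel pitch: %.0f microns" % pitch_um)                  # ~35 microns
    print("pixels across a 205 mm sensor: %.0f" % (205000.0 / pitch_um))   # ~5900
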
No subsequent announcement was made suggesting a purpose for this sensor.  Canon claims it was a technology demonstration, and I believe that (they would not have been allowed to show a production part for a spysat to the press).  Who were they demonstrating that technology to?  Is this the focal plane for a Japanese spysat?