Saturday, September 21, 2024

8K display on laptop

 I've upgraded the display I use at my desk to a 65" Samsung QN800C, which is an 8K display.  I bought it open-box at a Best Buy Outlet for $1349.


The ThinkPad T14s Gen3 that I have drives it natively at 60 Hz, even though the Lenovo documentation does not mention this as a supported configuration.  There were a few things I had to tweak to make the setup work.  I thought I'd share because I spent way too much time trying to figure this stuff out before the purchase.

  • I needed a high bandwidth 8K capable HDMI cable.
  • The monitor must be set to Game Mode On, Intelligent Mode Off.
  • Quite a bit of futzing with the picture settings to get colors that are even moderately acceptable.  I build cameras, but I mostly look at SolidWorks, spreadsheets, and circuit designs, so I can live with terrible color.  If you do photography for a living, do not buy this setup; it will not work for you.
    • Brightness 50
    • Contrast 50
    • Sharpness 10
    • Color 11
    • Tint G0/R0
  • The machine was occasionally taking "coffee breaks" of up to 30 seconds when switching focus between application windows.  This got particularly bad when SolidWorks was running, especially with the SolidWorks window expanded to full screen.  The fix is to change the "UMA frame buffer size" in the BIOS from "Auto" to "4GB".  I have no idea if 2GB would work; I didn't try it.  This also eliminates the frequent warnings from SolidWorks about insufficient system resources.

The display has 187 micron pixels, vs the 157 micron pixels on my laptop screen.  I have it positioned about twice as far away, however: 35 inches from my face.  It's actually mounted to the wall, and my desk sits several inches from that wall.  I find a 150% scale factor on the laptop display and a 175% scale factor on the 8K display work well.
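
If you want to check the geometry, here's the arithmetic (a quick sketch; the panel specs are my assumptions: 16:9 at 7680 pixels wide for the TV, 16:10 at 1920 wide for the T14s, and "about twice as far" taken as 17.5 inches for the laptop):

    import math

    def pitch_um(diag_in, ar_w, ar_h, px_wide):
        # Horizontal pixel pitch (microns) from diagonal size, aspect
        # ratio, and horizontal resolution.
        width_in = diag_in * ar_w / math.hypot(ar_w, ar_h)
        return width_in * 25400 / px_wide

    def pixel_arcsec(pitch, dist_in):
        # Angle one pixel subtends at the eye, in arcseconds.
        return math.degrees(pitch * 1e-6 / (dist_in * 0.0254)) * 3600

    tv = pitch_um(65, 16, 9, 7680)          # ~187 microns
    lap = pitch_um(14, 16, 10, 1920)        # ~157 microns
    print(round(pixel_arcsec(tv, 35)))      # ~43 arcsec at 35 inches
    print(round(pixel_arcsec(lap, 17.5)))   # ~73 arcsec at half the distance

So despite the coarser physical pitch, the 8K panel's pixels are angularly finer at my viewing distance.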

The surface of the display is around 103 degrees F, compared to my laptop screen which is 79 F.  I'm not sure why this is, but it's annoying.  The screen takes up a large enough solid angle from my face that I actually feel a little heat from it, which I dislike.  It's a little like the effect of being near a campfire.  I can eliminate the feeling by blowing a 4" desk fan at the screen, which takes the surface down to the 80s.  Even though that fan is "quiet", this is too noisy for me.  I've purchased a couple of very quiet Sanyo Denki fans, which I'm going to mount behind the screen, along with a yet-to-be-designed 3D printed plenum that will reach under the screen and direct a thin sheet of air up the face.

Overall, I'd say it's useful enough to be worth it.  I park a web browser on the left, and a Word file with notes on the right, and mostly use the center of the screen.  I find I'm still using my laptop display as well.  Maybe I'm just a slob and I like to spread out.

Saturday, June 22, 2024

What Airbus should build next

Boeing should obviously build a 737 replacement next.  As I understand it, this project has been underway for years.

Airbus, however, could do something interesting.  They could build an airplane optimized for much greater passenger comfort without sacrificing fuel burn. 

Here's the idea: make the cabin bigger.  WAY bigger.  This is the number one complaint of passengers.

To be clear, I’m suggesting scaling up the cabin dimensions while keeping the cargo weight capacity the same. Obviously if you increase the weight the plane can carry, it’s going to burn proportionally more fuel and not much has changed.

As the cabin gets bigger, parasitic drag from the fuselage gets larger. This can be fixed by flying the plane at lower dynamic pressure (indicated airspeed). If the surface area is doubled, then flying at half the dynamic pressure keeps drag about the same.

But we don’t want to fly slower, so instead we fly higher. For instance, assuming surface area doubled and dynamic pressure halved, we’d have to fly at 52,000 feet (like the Concorde) instead of 38,000 feet. The speed of sound does not change between these two altitudes (both are in the isothermal part of the stratosphere), so there shouldn’t be any problem with Mach buffet.
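
That altitude checks out against the standard atmosphere (a sketch; above 36,089 feet the ISA is isothermal at 216.65 K, so density decays exponentially with a fixed scale height):

    import math

    # Isothermal stratosphere: density scale height H = R*T/g.
    H_FT = (287.053 * 216.65 / 9.80665) / 0.3048    # ~20,800 ft

    # Dynamic pressure q = 0.5*rho*V^2, so at constant true airspeed,
    # halving q means halving rho:
    dh = H_FT * math.log(2)
    print(round(dh), round(38000 + dh))   # ~14,400 ft higher -> ~52,400 ft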

The lower dynamic pressure will require the wing to fly either at a higher coefficient of lift or with more wing area, and it will require more wing span. This does not imply more drag from the wing! To a first approximation the L/D stays the same, so the wing drag stays the same.

A bigger cabin would have huge benefits for passenger comfort, so why not do it?

The answer is safety in the case of a blowout. At 43,000 feet, someone breathing pure oxygen (from a simple mask) is getting the same partial pressure of oxygen as someone breathing normal air at 7400 feet. That’s about the same as the pressure altitude of the cabin at cruise, and anyone safe to fly a plane normally should be okay.

But at 52,000 feet, someone breathing pure oxygen would be getting the same PPO2 as someone breathing normal air at 18,000 feet. I’ve done that, and I started hallucinating at 17,000 feet and was pretty woozy at 19,340 feet. It wasn’t terrible for me, but I was in my 20s at the time and running 20+ miles a week. For someone with compromised breathing ability, this exposure could kill them.
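
Those equivalent altitudes check out against the standard atmosphere (a sketch; 20.95% oxygen in air, ISA pressures):

    import math

    def pressure_pa(h_ft):
        # ISA pressure: troposphere below 36,089 ft, isothermal layer above.
        h = h_ft * 0.3048
        if h <= 11000:
            return 101325 * (1 - 2.25577e-5 * h) ** 5.25588
        return 22632 * math.exp(-(h - 11000) / 6341.6)

    def air_equivalent_ft(h_ft):
        # Altitude where breathing air (20.95% O2) gives the same O2
        # partial pressure as breathing pure O2 at h_ft.
        total_needed = pressure_pa(h_ft) / 0.2095
        h_m = (1 - (total_needed / 101325) ** (1 / 5.25588)) / 2.25577e-5
        return h_m / 0.3048

    print(round(air_equivalent_ft(43000)))   # ~7,200 ft
    print(round(air_equivalent_ft(52000)))   # ~18,200 ft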

All commercial planes are able to maintain internal pressure above outside pressure, even when a couple of windows are blown out, by pumping huge amounts of air into the cabin. To ensure it could hold an even bigger pressure differential (so that it could cruise at 55,000 to 60,000 feet), the Concorde had tiny windows.

So the safety case has been closed in the past. Airbus could make a plane that flew higher, with a bigger cabin. They might even be able to do it with large windows by fitting larger fresh air pumps.  Or just use tiny windows and let people look around by using the screens in front of them, relaying a live feed from cameras in the wingtips and top of the tail.

The plane could have a simpler wing, by landing at speeds similar to today's planes. The smaller ratio of cruise to landing dynamic pressure would require fewer high-lift devices on the wing. In fact, for the ratio suggested above (cruising at half normal dynamic pressure), it should be possible to have a wing with no flaps at all. That would save a significant amount of weight and cost, and make it possible to load more fuel.

I think it would be a radical and welcome innovation for long distance flying.

Monday, December 30, 2019

My own Pacifica Hybrid review


Car and Driver’s 40,000 mile, 12-month long-term review of the Pacifica Hybrid is not rosy.  We’ve put 44,000 miles on ours in 2.6 years.  Why such a difference?  And what should Chrysler change for future models?

First, C&D had a bunch of mechanical issues that we have not had.
  • C&D had their hybrid battery (!) replaced under warranty.
  • C&D had two instances of low battery coolant.  Worrisome.
  • C&D had the front anti-sway-bar end links replaced under warranty to fix a squeak.
  • C&D had a stuck cupholder which caused the center console to be replaced under warranty.
  • C&D has had intermittent problems with the Uconnect infotainment system.
  • One of the C&D editors reports that the vehicle at 40,000 miles “feels tired”.

Our Pacifica has had the electronics portion of the drivetrain, the Power Inverter Module (PIM), updated in some way at the dealer.  We have one of the first ones built and we think Chrysler had some teething problems with these PIMs.  Our windscreen was damaged by a flying rock and was replaced, covered by insurance.  That’s it.  We have a lot of confidence in the car.

I think we drive our car differently than the C&D editors do, and I think many aspects of our usage are a lot closer to how most people use their minivans.  We drive 17,000 miles a year in California, mostly commuting, and have a level 2 charger at home.  They put on 40,000 miles, mostly on long trips in northern states, and none of their editors have level 2 chargers at home.  The colder environment in Michigan is a significant difference, though admittedly one that makes their experience more representative of typical American and Canadian usage.
  • Our battery gets a full charge every night, and expends that charge almost every day.  On school days we drop the kids off and the car gets topped up before the trip to work.  Over 2.6 years, that’s about 12,000 kWh we’ve put into the car.  My guess is that C&D’s Pacifica got less than 1000 kWh over their trial period.  Our battery has spent the majority of its life at shirtsleeve temperatures and full charge, and their battery has spent the majority of its life at uncomfortable temperatures, mostly cold, and empty of charge.  I’d like to know if the battery on their vehicle is underperforming, leading to current flow limits that are lengthening the time it takes to start the engine.  Slow starts would certainly make the vehicle feel tired.
  • Our Pacifica seems to get about 30.7 MPG when running on gas only, such as on a long trip to southern California.  C&D’s Pacifica got about the same.
  • Our car goes just over 2 miles per kWh, which is much worse than both a Tesla and the EPA estimates.  The former is due to the far more complex (and lossy) drivetrain, and the latter… ugh.  I can’t even imagine what anyone thought MPGe would be useful for.
  • We’re quite happy with 2 miles/kWh however.  At $0.10/kWh, that’s 4.9 cents/mile, less than half the 11.6 cents/mile it costs to run on $3.50/gallon gasoline.  It’s not the $1600 saved that we actually care about, but rather that we spent $1300 on electricity that mostly went to Americans, as opposed to $2900 on gasoline that would mostly have gone to overseas interests directly opposed to our own.  We expect to drive the car for 200,000 miles and expect to save around $12,000, which would cover the extra cost of the hybrid even without government subsidies.
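
The per-mile arithmetic, for anyone who wants to plug in their own rates (a sketch using the numbers above):

    elec_rate = 0.10     # $/kWh at home
    mi_per_kwh = 2.05    # "just over 2 miles per kWh"
    gas_price = 3.50     # $/gallon
    mpg = 30.7

    print(round(100 * elec_rate / mi_per_kwh, 1))   # ~4.9 cents/mile electric
    print(round(100 * gas_price / mpg, 1))          # ~11.4 cents/mile on gas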

The C&D folks complain about the noises coming from the engine compartment.  I know what this is.  When the engine is running (e.g. it’s cold outside), as you come down to a stop you can hear one of the motor/generators spinning faster.  This is disconcerting until you understand that the engine runs at constant RPM and is always connected to the wheels, so the M/G runs backwards, faster, as a generator, soaking up the engine's power and RPM so the vehicle can stop.

Once again I think it comes down to a difference in driving style.  When I drive the car it’s mostly in town, where full throttle is not safe.  At part-throttle operation it’s almost impossible to tell when the engine starts, and torque response to throttle position is lightning fast, better than my Honda S2000.  My impression is that the car is very quiet and very responsive.

C&D editors were driving around with an empty battery and probably mashing on the throttle from stop signs like a bunch of incompetent teenagers.  Because the engine doesn’t run when the car is stopped, there is a transient at full-throttle takeoff where the vehicle (a) has only half power available and (b) has to use some of that power to start the engine.  It’s not great.  I never experience this.  People who want a fantastic stopped-launch experience should get Teslas, perhaps with added noisemakers.

They also complained about there not being an EV-only option on this car.  Hello?  All you have to do is stay under 75 MPH and 85 kilowatts with a battery showing any more than 0% and it’ll keep the engine off.  Sheesh.  Where the hell do these people drive?

Before I get going on what I’d like to see on an updated Pacifica, I’d like to point out how much untapped potential there is in the drivetrain.  In particular, during a full-throttle launch to 70 mph it does not rev the engine anywhere close to maximum power.  It barely gets the engine to its torque peak at 60 mph, and as a side effect it needs only 22 kilowatts from a battery sized to deliver 85.  With an upgrade to MG A’s inverter electronics and cooling (neither of which would require any changes to anything else), the Pacifica could get near its torque peak at just 10 mph and actually send 20 kilowatts back to the battery while doing a full acceleration run (power generation starts to roll off just before 60 mph).  That’s not helpful to the vehicle as it is, but keep it in mind as you read the following.  It’s incredible to realize that a slightly modified vehicle could deliver 105 kilowatts to… I dunno… another motor perhaps?

I should also point out that the Pacifica beats the crap out of its battery.  A Tesla P100D in ludicrous mode will drain its 100 kWh battery at 582 kW, which is a rate of about 6 C.  Regen charging is limited to 60 kW, or 0.6 C, and DC charging is limited to 150 kW, or 1.5 C, and frequent use of DC rapid charging is known to limit battery life.  The Pacifica will discharge its 16 kWh battery at 85 kW, which is 5.3 C, and regen braking sometimes gets as high as 40 kW, which is 2.5 C.  These numbers seem awfully high for a company that doesn’t have much internal expertise in batteries.
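
C rate is just power divided by pack capacity, so these comparisons are one-liners (a sketch using the figures quoted above):

    def c_rate(power_kw, capacity_kwh):
        # Discharge (or charge) power relative to pack capacity.
        return power_kw / capacity_kwh

    print(round(c_rate(582, 100), 1))   # Tesla P100D ludicrous discharge, ~5.8 C
    print(round(c_rate(60, 100), 1))    # Tesla regen limit, 0.6 C
    print(round(c_rate(85, 16), 1))     # Pacifica discharge, ~5.3 C
    print(round(c_rate(40, 16), 1))     # Pacifica regen spikes, 2.5 C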

With that out of the way, let’s talk about what I’d like to see in our next Pacifica.  Basically, I’d like to see Chrysler have options that put the Pacifica into direct competition with the Expedition, Suburban, Yukon, and Sequoia, without making the basic minivan too expensive.  Chrysler does not compete in the big luxury SUV market and their hybrid can win it.
  • $0k: 2 inch tow hitch.  It’s completely stupid the vehicle doesn’t have it standard.  Fix this.
  • $4k option: 4 wheel drive.  Our previous Grand Caravan had 4WD and the lack of it in the 2017 Pacifica nearly broke the deal.  Electric rear wheel drive will be a huge improvement in so many ways:
    • Getting around in snow.  Yes, even in California we visit the snow for fun, and dislike screwing around with chains which you frequently need here if you don’t have 4WD.
    • Increased electric deceleration.  I have no problem with the brake feel but I suspect it can be made better with 4 wheel balanced regen.  In particular I suspect limited traction braking while turning will get better with balanced regen.
    • Increased efficiency, as the rear electric-only drivetrain will be more efficient than the hybrid drivetrain up front.  Also, we’ll get more regeneration, but I think that’s a smaller effect.
    • Balanced tire wear.  Because all acceleration and nearly all deceleration is handled by the front tires, they wear excessively fast.  Spreading the accel/decel loads will reduce the total wear.
  • $6k option: Bigger battery.  16 kWh means something like 11 kWh of actually usable capacity.  About 56% of our mileage is driven from the plug.  (The vehicle reports a much larger electric fraction because the engine only runs some of the time once the battery is depleted.)  A 28 kWh battery would put an end to engine usage on most weekdays for us.  The rear stow&go must stay.  I just do not believe there is a packaging problem for a 28 kWh battery, as the area under the front seats is underutilized.
    • The bigger battery will reduce battery wear by spreading peak regeneration loads over more cells.  Charging batteries fast is hard on them, and I’ve seen our Pacifica push 40 kW into the battery, which is a 2.5 C rate.  Tesla is very aggressive with their battery but limits regen to under 1 C.  Increasing to 28 kWh would reduce the Pacifica to 1.4 C.
    • Tesla has proven that consumers will pay more for a bigger battery.  They charge $8,000 more for an extra 12 kWh and make most of their sales with the larger batteries.  I think Chrysler could charge an extra $6k for the 16->28 kWh upgrade and see it on almost every sale.  Estimates for Chrysler’s actual costs range from $200 to $400 per kWh, so $6k for an extra 12 kWh would guarantee a fat profit.  The upgrade would be directly comparable to Tesla, which would score some points in the cheering section.
  • Higher power limit on motor/generator A.  Right now this thing can deliver 125 N-m of torque, which is only used to start the engine.  It is power limited to 63 kW.  During full-throttle acceleration runs, during which MG A is in generator mode the whole time, the power limit on MG A causes the computer to run the engine slower than its maximum power peak.  With a more powerful inverter and more coolant flow (but no changes to the rotor and magnets and so on), MG A could have its power limit increased to 110 kW.  That won’t do much on a FWD car (as there’s nowhere else to put the power), but a 4WD car can route the extra power to the rear axle.
  • Let the driver get rid of the full-throttle stopped-launch transient by giving him or her a way to force an engine start.  This is a software change that ideally they could roll out to existing Pacificas.  Put your left foot on the brake.  Floor the accelerator.  The engine should start and rev to 1500 rpm and something like 80 N*m torque.  Motor/generator A will put 12 kW into the battery and you will hear that the engine is under some load, just as if you did this in a normal automatic.  Release the brake.  Now there is no need to start the engine while at full acceleration, so there is no transient, just constant acceleration to 24 mph and smoothly decreasing acceleration from there to full speed.  The engine can switch to wide-open throttle and build revs fast enough to keep A generating power, so it should be possible to get a full-speed run even with a depleted battery.
  • $0: Ludicrous mode.  Tesla has made head-snapping acceleration part of the brand of electric cars.  That’s because at anything like legal speeds, acceleration is about torque and not power.  The current Pacifica has whiners complaining about the drivetrain, which is actually really well designed.  4WD, the 28 kWh battery, and the MG A changes are the preparation needed to utterly invert the perception of reviewers at C&D and get Chrysler on board with Tesla fans (which are provably legion).  Frankly, the combination in a minivan will make for very loud PR that leads to lots of sales and market disruption.  Here’s how:
    • The existing hybrid drivetrain delivers constant acceleration of 0.38 G until it gets power limited at 24 mph, and holds on reasonably well to get a 7.8 second 0-60 time.  Amazingly, the existing hybrid drivetrain (with software changes) needs just 22 kW from the battery while doing this, and that’s only because of the MG A power limits mentioned earlier.
    • The existing 16 kWh battery is limited to 85 kW output.  A 28 kWh battery could put out 150 kW with the same strain.
    • On the back axle, I want a 170 kW motor with a corner speed of 30 mph.  This is a no-screwing-around motor, but there is no deal-breaking reason not to install something this large.  It will weigh 525 pounds, just like a Tesla’s rear unit.  If there is a problem with stow&go, increase the wheelbase a few inches to make it fit.  The standard Tesla Model X uses a slightly higher power motor on the rear axle.  With this motor the vehicle will accelerate at over 1 G off the line and will reach 60 mph in 4 seconds (a rough launch model is sketched after this list), curb stomping the long-range Model X and Porsche Cayenne S.
    • The resulting vehicle will steal most Model X sales.  Ludicrous mode acceleration (2.7 sec 0-60) is not practically achievable since it requires a much larger battery or substantial and expensive changes to the hybrid front end.
    • I understand that any sane product manager will insist on a derated ~80 kW rear end as an option, because you can sell the super-go-fast stuff for an extra $20k.  I’d strongly suggest no derated version, since Chrysler really needs something disruptive to grab a lot of electric car sales.
  • Limited slip front differential.  There is no limited slip, and both my wife and I spin the inside right tire frequently while pulling out and turning sharp right into traffic.  We’ve obviously both gotten sloppy about having lots of torque right off the line, but the car could manage it better.  The rear end should have electronic limited slip since it should have one motor for each wheel.
  • Better weight distribution.  I expect electric rear wheel drive and a larger battery pack to move more weight to the back and give a better weight distribution for the common case of 1-2 adult occupants and no cargo.
  • Higher pressure tires.  The hybrid weighs more than the base model and needs higher pressure tires to handle the higher loads, especially in front.  This is just a stupid oversight on Chrysler’s part.
  • $4k option: 4 inch lift option with metal belly pans.  (5.1 inch normal ground clearance, 9.0 inch with the option.)  My sister lives in the mountains.  We were talking about why she wants a new massive SUV rather than a minivan, and it comes down to ground clearance.  She regularly has to drive through roads that are not plowed.  Snow tires are not enough, she simply cannot have that snow pouring over the hood and windshield.  And you know that a jacked minivan with 20 inch rims will look fantastic.  For $6k, swap to air springs on all four corners and dynamically lift the car while driving.
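
For the skeptical, here's a rough two-phase launch model (a sketch, not Chrysler's data; I'm assuming about 2,500 kg with the rear motor aboard, a 1 G traction-limited launch, roughly 330 kW combined at the wheels, and no losses or drag):

    def zero_to_sixty(mass_kg, launch_g, power_kw):
        # Two-phase launch: constant acceleration (traction limited) up to
        # the corner speed where wheel power tops out, then constant power
        # to 60 mph.  Drivetrain losses and aero drag are ignored.
        a = launch_g * 9.81
        v60 = 26.82                               # 60 mph in m/s
        v_corner = power_kw * 1000 / (mass_kg * a)
        t1 = v_corner / a
        t2 = mass_kg * (v60**2 - v_corner**2) / (2 * power_kw * 1000)
        return t1 + t2, v_corner / 0.447          # seconds, corner speed in mph

    # Hypothetical 4WD Pacifica: ~2,500 kg, 1 G launch, ~330 kW at the wheels.
    t, corner = zero_to_sixty(2500, 1.0, 330)
    print(round(t, 1), round(corner))             # ~3.4 s to 60, corner near 30 mph

Add real-world losses and drag and that 3.4 seconds drifts toward the 4-second figure above.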


Thursday, September 21, 2017

Elon Musk should worry about Kim Jong Un

The conventional wisdom is that, if North Korea detonates a nuclear missile near any US or allied territory, be it Guam or Seattle or Tokyo or Seoul, the US will massively retaliate: The NK air defense will be eliminated along with their offensive artillery, their naval assets sunk, their nuclear infrastructure destroyed, and most of the NK offensive capability against South Korea smashed.  Even if the missile was successfully intercepted, the act of firing it would make a certain war now better for the US than an uncertain but possibly more devastating attack in the future.

And, the conventional wisdom is that the North Korean government knows this, and so its nukes will not be fired as a first strike.  They are a deterrent: should the regime's existence be threatened, the additional risk of invasion will seem moot, US deterrence against North Korea vanishes, and the missiles get launched.  So a credible North Korean nuclear missile capability makes the stability of the North Korean regime in the interests of the US.

The missile that North Korea fired over Japan failed during reentry.  Packaging a nuclear weapon into a reentry vehicle is known to be difficult.  It will be some time before they demonstrate reentry capability.  There remains a limited amount of time, perhaps a year or two, during which North Korea will not have the capability to attack the US directly.  If the US wishes to snuff out the North Korean threat before it reaches full maturity it must do so before then, and the person who will ultimately decide the US strategy is Donald Trump.  Mr. Trump has convinced many international leaders that he is unpredictable.

The US has attempted to punish the North Korean regime for decades with economic sanctions.  These have cost the North Koreans terribly.  In particular, millions may have died of starvation when their crops failed and they were unable to import food to make up shortages.  Their leadership has remained undeterred and perhaps unaffected by the sanctions.  The US hope and North Korean fear is that mass starvation will cause the populace to rebel against the regime and replace it with one less interested in bullying other countries.

The sanctions play into the narrative that the Americans are still actively at war with North Koreans, in which they specifically target harm at ordinary people in the country.  North Koreans are frequently reminded that their privations are due to American evil and that their government is actively struggling against that evil.  Their bellicose actions towards South Korea and Japan are woven believably into this narrative.

The logic that prevents North Korea from using its nuclear weapons relies on the assumption that the thing that is threatened (the lives of many American or presumably South Korean or Japanese citizens) is so valuable to people in the US that they are willing to risk the lives of millions of their own and allied citizens to secure against that threat.  North Korea has repeatedly demonstrated, however, that the US will not respond massively to attacks which cause little or no loss of life.

In 2010, North Korea torpedoed and sank a South Korean military ship, killing 46 crew members.  Later that year, they shelled a South Korean island, killing four.  In 1987, North Korean agents blew up a South Korean airliner, killing 115 aboard.  None of these actions provoked a military response from the US or South Korea.

In 2014, North Korean hackers stole internal information from Sony Pictures Entertainment, and then used a combination of blackmail with that information and direct terrorist threats to cause Sony to stop the theatrical release of a movie critical of the North Korean regime.  Sony set aside $15m to cover associated direct damage and lost any money that might have been made on the film.  The next year, New Regency cancelled production of another movie critical of North Korea.

On July 9, 1962, the US detonated a 1.4 megaton thermonuclear bomb 250 miles above Johnston Island and 900 miles from Hawaii.  Because the bomb was above the atmosphere, there was no blast damage at ground level.  However, the electromagnetic pulse from the bomb blew out 300 street lights in Honolulu and damaged the Kauai microwave link, severing phone communication to the island.  The beta particles (high speed electrons) from the bomb, which would have been converted into simple heat in an airburst, instead formed radiation belts around the Earth that disabled three satellites and lasted five years.

Many of the effects had been predicted, but still came as a shock to some in the military.  Long-lived degradation of the Low Earth Orbit environment was not in the interests of either of the superpowers at the time.  The US and USSR signed the Partial Test Ban Treaty the next year, ending exoatmospheric nuclear tests by the two.

North Korea is the first ICBM-armed nation that does NOT have orbital infrastructure.  That makes asymmetrical nuclear warfare possible.  During the next few years when their deterrent appears inevitable but is not yet mature, North Korea has the capability to detonate a nuclear weapon just above the atmosphere.

  • If the weapon can be detonated within 80 km of a satellite, that satellite can be directly killed.  Possible targets include the three US KH-12 optical spy satellites currently in orbit, which North Korea could excuse as legitimate military targets, actively engaged against it, belonging to a state it is still formally at war with.  There is no production line for those satellites.  If they are lost, it will take years to replace them.
  • The Low Earth Orbit radiation environment can be made substantially more adverse.  It is possible to make satellites survive these conditions: the GPS satellites currently orbit partially within the Van Allen radiation belts and are specially hardened to tolerate those conditions.  Geosynchronous and Low Earth Orbit satellites have not generally been built that robust.  In particular, the $100 billion International Space Station is in Low Earth Orbit and is unable to protect its occupants from increased radiation.  A North Korean detonation could cause the astronauts to evacuate and permanently abandon the ISS.
  • The weapon could be detonated over the Sea of Japan between Japan and South Korea, for instance, a bit north of Tsushima.  It would appear to be just another demonstration until the moment of detonation.  A single detonation could damage millions of vehicles in both countries without directly killing anyone and without causing similar damage in North Korea or significant fallout.  The resulting logistical problem would cost many billions of dollars to fix.

This is all so expensive that I think it sufficiently deters the US from initiating a strike on North Korea right now.  The window for snuffing out the North Korean nuclear threat has closed.

Once the North Koreans have a credible re-entry vehicle, however, a new window opens.  It's not clear that the US would be willing to go to war and risk an actual nuclear ground strike in response to a North Korean EMP strike.  So that means North Korea can demonstrate an EMP strike and use the threat of a larger one to extort the US and its allies for billions of dollars a year.

To bring all this around to the clickbait title of this post, I'm concerned that the consequence of those demonstrations is that Low Earth Orbit is going to become uninhabitable in the next few years.  That's going to put a serious crimp in Elon Musk's plans to launch people to the ISS and eventually elsewhere.  So maybe our real life Tony Stark can figure out some way to fix North Korea.

Tuesday, April 14, 2015

Vegetarians use... similar water

The Los Angeles Times has the best tool that I've yet seen for understanding the amount of water used in agriculture.  It lets you put together various proteins, starches, vegetables, and drinks to make a typical dinner, and shows you how much water it takes to produce all those things.  Check it out.

The lowest water-use dinner I could come up with took 135 gallons of water to deliver two eggs, carrots, potatoes, and a glass of beer.  Most dinners take vastly more water than that.  The biggest takeaway here is that each person in my family of five uses more water in the food we eat than we use together for the house and back yard.

[Edit: unfortunately, the LA Times article has a serious error, which I made as well in my original version of this post.  They confounded the dry and as-eaten (boiled) weights of peas, lentils, and chickpeas.  This leads to a very large overestimation of the water used per protein delivered.  I have corrected this error in the content and tables below.  Big thanks to Miciah Masters for finding the problem!]

The biggest consumers of water are all proteins: beef, lamb, and pork.  For most meals the protein consumes way more than half of all the water.  I was quite surprised to see that chicken eggs and meat, and goat meat, are about as efficient as vegetarian staples like peas, soy, chickpeas, and lentils.

Arjen Hoekstra, the founder of Water Footprint Network, has spent years researching agricultural water use.  One of his messages is that "animal products demand considerably higher amounts of water than do most other food types."  (Quote of Mr. Hoekstra from LA Times.)  This makes intuitive sense, because animals consume and do not produce protein and carbohydrates.  If you ate the stuff that the animal eats instead, that would have to be a more efficient way to get those proteins and calories than by eating the animal.

But people, even vegetarians, don't eat the stuff we feed animals.  We feed animals cheap vegetables (like grass and grains and corn mash left over from making ethanol), and eat the expensive ones (like carrots and blueberries) ourselves.  Mr. Hoekstra is correct that many animal products demand more water than vegetarian products.  But that's not true of poultry in particular, and it's important to note that chicken is more popular in the U.S. than beef.

This is great news, of course, since it's far easier for most people to eat more chicken and less beef, than it is for them to switch to a vegetarian diet.  It does somewhat skewer Mr. Hoekstra's underlying goal of motivating vegetarianism, but of course that's the danger of working with real data.

So here's my analysis.  The water-use figures that the LA Times quotes are for an 8 ounce serving of each of the various protein-rich foods.  As you might expect, beef has a lot more protein in it than the same amount of lentils.  If I correct the serving size to get the same amount of protein in all servings, the ranking comes out like this, from best to worst:

Food        Serving Size  Protein Content  Fat Content       Carb Content      Water Used
Peas        36 ounces     56 g (224 cal)   4 g (38 cal)      147 g (588 cal)   112 gallons
Chicken     8 ounces      56 g (224 cal)   29 g (259 cal)    0 g (0 cal)       131 gallons
Eggs        16 ounces     56 g (224 cal)   47 g (420 cal)    6 g (22 cal)      193 gallons
Soy burger  12 ounces     56 g (224 cal)   19 g (168 cal)    42 g (168 cal)    254 gallons
Chickpeas   22 ounces     56 g (224 cal)   19 g (168 cal)    187 g (747 cal)   278 gallons
Goat        7 ounces      56 g (224 cal)   19 g (168 cal)    0 g (0 cal)       282 gallons
Lentils     22 ounces     56 g (224 cal)   0 g (0 cal)       127 g (509 cal)   294 gallons
Milk (1%)   60 ounces     56 g (224 cal)   17 g (151 cal)    84 g (336 cal)    359 gallons
Pork        9.5 ounces    56 g (224 cal)   15 g (139 cal)    4 g (16 cal)      394 gallons
Lamb        5.5 ounces    56 g (224 cal)   48 g (432 cal)    0 g (0 cal)       465 gallons
Beef        8 ounces      56 g (224 cal)   33 g (298 cal)    0 g (0 cal)       850 gallons
Almonds     9 ounces      56 g (224 cal)   126 g (1134 cal)  56 g (224 cal)    1097 gallons
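
The normalization itself is just linear scaling (a sketch; the per-8-ounce peas figures below are back-derived from the table, not taken from the LA Times tool directly):

    def protein_normalize(protein_g_per_8oz, water_gal_per_8oz, target_g=56):
        # Scale the serving up or down until it delivers target_g of
        # protein; the water footprint scales with it.
        scale = target_g / protein_g_per_8oz
        return 8 * scale, water_gal_per_8oz * scale   # ounces, gallons

    oz, gal = protein_normalize(12.44, 24.9)   # peas
    print(round(oz), round(gal))               # ~36 ounces, ~112 gallons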

I've added almonds to the table (using Mr. Hoekstra's data again), as they are rich in protein, use a lot of water, and are getting a bad rap in California for their water use right now.  I've also listed fat and carb calories.  For those of us trying to get to a low-carb, high-protein diet, soy and maybe milk look okay but these other veggie options are not great.

Farmers, of course, don't directly care how much protein their products have, but rather how much profit can be made from them.  I wasn't able to get profit numbers, but some quick Googling came up with the following prices.  If a farmer and his inputs are constrained primarily by water, then poultry farming looks like a good way to go, and once again vegetarian staples appear to be a terrible choice.

Food       Water Used      Farm Price   Price per Gallon
Eggs       1566 m3/tonne   $1.90/pound  1.013 cents/gallon
Goat       5521 m3/tonne   $5.25/pound  0.794 cents/gallon
Pork       5508 m3/tonne   $4.06/pound  0.615 cents/gallon
Chicken    2218 m3/tonne   $1.54/pound  0.579 cents/gallon
Beef       14191 m3/tonne  $5.65/pound  0.332 cents/gallon
Peas       1979 m3/tonne   $0.70/pound  0.295 cents/gallon
Lamb       13007 m3/tonne  $4.28/pound  0.275 cents/gallon
Milk       796 m3/tonne    $0.21/pound  0.220 cents/gallon
Lentils    5874 m3/tonne   $0.90/pound  0.128 cents/gallon
Almonds    16095 m3/tonne  $2.00/pound  0.104 cents/gallon
Soybeans   2145 m3/tonne   $0.23/pound  0.089 cents/gallon
Chickpeas  4177 m3/tonne   $0.29/pound  0.058 cents/gallon
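
The last column is a straight unit conversion from the water footprint (cubic meters of water per tonne of product) and the farm price (a sketch):

    GAL_PER_M3 = 264.172
    LB_PER_TONNE = 2204.62

    def cents_per_gallon(price_per_lb, m3_per_tonne):
        # Water used per pound (in gallons), then revenue per gallon of water.
        gal_per_lb = m3_per_tonne * GAL_PER_M3 / LB_PER_TONNE
        return 100 * price_per_lb / gal_per_lb

    print(round(cents_per_gallon(1.90, 1566), 3))    # eggs: ~1.013 cents/gallon
    print(round(cents_per_gallon(5.65, 14191), 3))   # beef: ~0.332 cents/gallon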

It's worth mentioning that tap water in California goes for about 0.55 cents/gallon.  If farmers paid for water what I pay, most would stop farming.  On the other hand, if most residential water users were charged similar rates to farmers, most would stop conserving water.  Things are pretty out of whack.

Prices aren't profit, and California farmers aren't all water constrained.  In many places they can pump unlimited (or more precisely, unregulated) amounts of groundwater.  In particular, there has been a trend in California of dairies converting to almond orchards.  The second table above suggests this is exactly backwards if those products have similar profit margins and are water limited.  California farmers are making the move because profit margins on dairy are smaller than almonds and they are not limited by water.

Friday, October 31, 2014

STS-93: Yikes! We don't need any more of these.

I just found Wayne Hale's blog.  Be careful reading this thing, I just lost nearly an entire night of sleep.  The latest update, which covers the launch of STS-93, is just breathtaking.

Here's a video which documents the folks at mission control scrambling to figure out what is going on with their bird during the ascent.

Sunday, September 14, 2014

Quick trip to the Sierras

On Friday I took a quick trip to the Sierras to grab some Ponderosa pine forest images with a drone.  Initially, the logging road was just gorgeous.


Then I got to some bits that were less than gorgeous.  These roads don't see much use (I saw one other couple in a pickup during several hours on site), and I think these portions are probably just completely ignored until it's time for another logging operation, at which point they probably fill in the worst spots with gravel.  There were 18-inch-deep gullies in places, and nasty rocky bits that looked like a dry stream bed.  I walked several of these before trying them in the minivan.  There were a few uphill sections on which I was glad to have 4 wheel drive.


Eventually I got up to my target location.  That pile of wood is slash from a logging operation that probably happened in the last few years.


Target.  Life is good.

No crashes, nothing broke.  However, there are new noises coming from the minivan's power steering system now.  So, perhaps I did break something.  Overall it was a successful trip.

Tuesday, May 13, 2014

Happy Birthday to me


It's 5:13pm, and I'm sitting in the shade in my back yard, tweaking some really neat flexure mounts, while keeping an eye on two of my kids and two of their friends frolicking in the pool I built years ago. It's hot out, and there is steady traffic between the two hives near the back of the yard and the fountain to my right. A pair of ducks have been watching the kids too, and though they like the look of all that water they're leaving for someplace less noisy.  Lady Jane, our black Labrador, is lying in the grass, which is overdue for mowing, ripping up stems and chewing away. There are stains from fine droplets of sunscreen on the back of my laptop that won't be coming off. Martha will bring my youngest daughter back from gym class in an hour and then we'll head out for my birthday dinner.

At least once a day, at least one person helps me accomplish something I cannot achieve myself, things I am really happy to be working on.  I wonder if I manage to help someone else every day in the same way.

I have a lot to be thankful for.

Wednesday, April 23, 2014

Windows 8.1 is unusable on a desktop

For the last two years I've been doing a lot of SolidWorks Simulation on my Lenovo W520 laptop.  This thing has been great.  But I've started doing fluid flow simulations, and it's time for more CPU than a 3.3 GHz (limited by heat load) dual core Sandy Bridge.

So I built a 6 core Ivy Bridge (i7 4930k) which I overclocked to 4.5 GHz.  Very nice.  However, I installed Windows 8.1 on it, which turns out to have been wrong.  This post is for people who, like me, figure that Windows 8 problems are old news and Microsoft must have fixed it by now.

Summary: Nope.

I figured that all those folks bellyaching about the new Windows were just whining about minor UI differences.  Windows 8 should benefit from 3 years of code development by thousands of serious engineers at Microsoft.  The drivers should be better, and it definitely starts up from sleep faster (and promptly serves me ads).   I figured I could deal with different menus or whatever.

I have learned that Windows 8.1 is unusable for a workstation.
  • Metro apps are full screen.  Catastrophe.
    • When I click on a datasheet PDF in Windows 7, it pops up in a window and I stick it next to the Word doc and Excel doc that I'm working on.  In Windows 8, the PDF is full screen, with no way to minimize.  I can no longer cut and paste numbers into Skype.  I can no longer close open documents so that DropBox will avoid cache contention problems.
    • Full screen is fine for a tablet, but obliterates the entire value of a 39 inch 4K monitor.  I spent $500 on that monitor so I could see datasheets, spreadsheets, Word docs, and SolidWorks at the same time.
    • Basically, this is a step back to the Mac that I had 20 years ago, which ran one application, full screen, at a time.
  • Shortly after the build, I cut power to the computer while it was on.  Windows 8 cheerfully told me I had to reinstall the O/S from scratch, and blow away all the data on the machine.  I don't keep important data on single machines, but I still lost two hours of setup work.  That's not nice.  I have not had that problem with Windows 7.
  • I plugged the 4k monitor into my W520 running Windows 7.  It just worked.  My Windows 8 box wants to run different font sizes on it, which look terrible.
  • Windows 8 + Chrome + 4k monitor = display problems.  It appears Chrome is rendering at half resolution and then upscaling.  WTF?  This has pushed me to use Internet Explorer, which I dislike.  Chrome works fine on the 4k monitor under Windows 7.
  • Windows 8 + SolidWorks = unreadable fonts in dialog boxes.  I mean two-thirds of the character height is overwritten and not visible.  So actually unreadable.  The SolidWorks folks know they have a problem, and are working on it.  And, I found a workaround.  But it still looks unnecessarily ugly.
  • Windows 8 + SolidWorks + 4k monitor = display problems.  Not quite the same look as upscaling, but something terrible is clearly happening.  Interestingly, if more than half of the SW window is on my 30" monitor, lines drawn on the 39" look okay.  But when more than half of the SW window is on my 39" monitor, lines look like crap... even the ones on the smaller half of the window still on the 30".
  • Windows 7, to find an application: browse through the list on the start button.  Windows 8: start by knowing the name of the application.  Go to the upper right corner of the screen, then search, then type in the name.
  • Finally, that upper right corner thing.  I have two screens.  That spot isn't a corner, it's between my two screens.  I keep triggering that thing when moving windows, and can't trigger it easily when I want to.  Microsoft clearly designed this interface for tablets, and was not concerned with how multi-screen desktop users would use it.
And here's the kicker: Microsoft won't swap the Windows 8.1 Pro license I got for a Windows 7 license.  I have to buy Windows 7.

Excel 2013 has one thing I like: multiple spreadsheets open in separate windows, like Word 2010 and like you'd expect.

Word 2013 has two things I dislike: Saving my notes file takes 20 seconds rather than being nearly instant (bug was reported for a year before Microsoft acknowledged it recently), and entering "µm" now takes two more clicks than it used to -- and nothing else has gotten better in exchange.  Lame.

I suggest not upgrading, folks.  No real benefit and significant pain.

You have (another) angry customer, Microsoft.

Here's the difference in SolidWorks rendering, on the SAME MONITOR, running in Windows 8, as I shift the window from being 60% on the 30 inch monitor to 40% on the 30 inch monitor (and 60% on the 39 inch):

30 inch mode: Note that lines are rendered one pixel wide and text is crisp.

39 inch mode: Lines are fatter; antialiasing is attempted, but done wrong.


Wednesday, January 15, 2014

Sensors, Survey and Surveillance from Space

The SkyBox satellites are the first to use area array rather than pushbroom sensors for survey work, but they certainly aren't the first to use area array sensors.  I think the first satellites to do that were the KH-11 surveillance satellites, versions of which are still the principal US optical spysats in use today.  The first KH-11s sported area array sensors of about the same resolution as a standard definition TV.  The most recent KH-11s probably have a focal plane similar to this tiled 18,000 x 18,000, 10 micron focal plane (shown below; that circle is a foot in diameter).

Optical spysats have two missions; call them surveillance and survey.  When you already know where the thing is, that's surveillance.  Response time matters, but throughput is usually not a big deal.  When you don't know where your thing is, or you don't even know what it is yet, you are doing survey.  Throughput is king in survey work, and if response time matters, you have a problem.  Coast Guard aerial search and rescue, for example, has this problem.  You can read about the difficulties of search at sea in this NY Times article on rescue of a fisherman last July.

General Schwarzkopf said after the first Gulf War that spysats (he must have been referring to the earlier KH-11s) could not provide useful, timely imagery.  He was comparing single pictures of targets after a hit to the target camera footage of his planes, which gave him continuous video snippets of the target before, during, and after a hit.  These videos were very popular at press conferences and with his upper management.

Satellites are excellent for getting access to denied airspace -- there is no other way to take pictures inside China and Russia.  But in Iraq, Afghanistan, and Pakistan they are completely outclassed by airplanes and now drones with long-range optics (like the MB-110 reconnaissance pod which I still haven't written up).  In a 20 year battle against irrelevancy, I suspect that getting near-real-time imagery, especially video, from orbit has been a major NRO focus.  I'm sure the Block IV KH-11 launches in 2005, 2011, and recently in August 2013 can all do real-time downlinks of their imagery through the SDS satellite constellation.  However, the second part of real-time is getting a satellite into position to take the picture quickly.  The three KH-11s in orbit often cannot get to a surprise target in less than 30 minutes, and cannot provide continuous video coverage.  Guaranteeing coverage within 30 minutes would require dozens of satellites.  Continuous coverage, if done with satellites 300 km up, would require around 1000.  The KH-11 series is expensive (they refer to them as "battleships") and the US will not be launching a big constellation of these.

The Next Generation Electro-Optical program, which started in 2008 or so, is probably looking at getting the cost of the satellites down into the sub-$500m range, while still using 2+ meter telescopes, so that a dozen or two can be launched over a decade within a budget that NRO can actually sell to Congress.  My guess is they won't launch one of these until 2018.  In the meantime, SkyBox Imaging and ExactEarth, who are both launching constellations of small imaging sats, will be trying to sell much lower-resolution images that can be had more quickly.  These civilian operators have 50-60 cm apertures and higher orbits, and so can't deliver the resolution that NRO and NGA customers are used to, and they can't or don't use the SDS or TDRS constellations to relay data in real time.  (SkyBox can do video, but then downlinks it 10 to 90 minutes later when they overfly one of their ground stations.)

The second spysat mission is survey: looking for a needle in a haystack.  From 1972 to 1986 we had this in the form of the KH-9 Hexagon, which shot the entire Soviet Union every 2 to 4 months at 1 to 2 foot resolution.  The intel community at the time could not search or inspect all that imagery, but the survey imagery was great once they'd found something surprising.  Surprise, a new site for making nuclear weapons!  Survey answers the question: What did it look like during construction?  Or, How many other things like this are there?  Nowadays, Big Data and computer vision have got some handle on finding needles in haystacks, but we no longer have the KH-9 or anything like it to supply the survey imagery to search.  We still use the U-2 for aerial survey imagery, but we haven't flown that into denied airspace (e.g. Russia and China) for many decades.

From 1999 to 2005 Boeing ran the Future Imagery Architecture program, which was intended to make a spy satellite that could do radar, survey, and surveillance.  The program took too long and ran way over budget, and was eventually salvaged by cancelling the optical portion and having the team design a synthetic aperture radar satellite, which did get launched.  (Apparently this was the successor to the Lacrosse radar satellite.)

As I wrote, SkyBox does survey with a low-resolution area array.  They would need about 16,000 orbits to cover the entire surface of the earth, which is 2.7 years with one satellite.  I'm sure they can optimize this down a bit by steering left/right when over the ocean.  But this is 70 cm GSD imagery.

Two of the telescopes designed for FIA were donated to NASA in 2012, and the few details that have emerged tell us about late 1990s spy satellites.  From 300 km up, they could deliver 7 cm imagery, and had a (circular) field of view of about 50,000 pixels.  This could have been used with a 48,000 x 16,000 pixel tiled focal plane array.  Using the simple expedient of shooting frames along the line of the ground track, the ground swath would have been 3.2 km wide, and could have surveyed the entire Earth in about 2.7 years (the same number is a coincidence -- spysats fly at half the altitude and this one had twice my presumed field of view for SkyBox).

However, to keep up with the ground track pixel velocity, the sensors would have to read out at over 6 frames per second.  That's almost 5 gigapixels per second.  I don't believe area array sensors that big can yet read out that fast with low enough noise.  (The recent Aptina AR1411 reads out at 1.4 gigapixels per second, but it's much smaller, so the column lines have far less capacitance.)
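
Here's a sketch of where those numbers come from (my assumptions: a 300 km circular orbit, the 48,000 x 16,000 pixel focal plane above, and full coverage assembled from contiguous daylit equator crossings, one usable crossing per orbit):

    import math

    MU, R_E = 3.986e14, 6.371e6     # Earth GM (m^3/s^2) and radius (m)

    alt, gsd = 300e3, 0.07          # orbit altitude and ground sample distance, m
    swath_px, track_px = 48000, 16000

    v_orbit = math.sqrt(MU / (R_E + alt))
    v_ground = v_orbit * R_E / (R_E + alt)    # ground-track speed, ~7,400 m/s
    fps = v_ground / gsd / track_px           # frames needed per second
    px_rate = v_ground / gsd * swath_px       # pixels read out per second

    # Full coverage: contiguous daylit equator crossings, one per orbit.
    orbits = 2 * math.pi * R_E / (swath_px * gsd)
    period = 2 * math.pi * math.sqrt((R_E + alt) ** 3 / MU)
    years = orbits * period / 86400 / 365.25
    print(round(fps, 1), round(px_rate / 1e9, 1), round(years, 1))
    # ~6.6 fps, ~5.1 Gpx/s, ~2.0 years (overlap margin pushes toward the 2.7 above)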

The large number is not a result of the specifics of the telescope or sensor design -- it's fundamental to high resolution orbital survey.  It's just the rate at which the satellite flies over ground pixels.  Getting 5 billion tiny analog charge packets to A/D converters every second is hard.  Once there, getting 20 gigabits/second of digital data to the ground is even harder (I don't think it's been done yet either).  I'll defer that discussion to a later post.

Pushbroom sensors are more practical to arrange.
  • The satellite simply stares straight down at the ground.  Attitude corrections are very slow.
  • It's easy to get lots of A/D converters per sensor, you simply add multiple taps to the readout line.
  • It's easy to tile lots of sensors across the focal plane.  You stagger two rows of sensors, so that ground points that fall between the active areas of the first row are imaged by the second row, like this focal plane from ESA Sentinel-2.  Once stitched, the resulting imagery has no seams.


Tiled area array sensors are more difficult, but have the advantage of being able to shoot video, as well as a few long exposures on the night side of the Earth.
  • The image must be held steady while the field of view slides along the ground.  Although this can be done by rotating the whole satellite, survey work is going to require rapidly stepping the stabilized field forward along the optical path, several times a second.  Fast cycling requires a lightweight optical element, usually the secondary mirror, to have a fast and super precise tip/tilt mechanism to track out the motion.  Cycling this element back into position between shots can add vibration to the satellite.
  • While the secondary mirror is moving the image back into position, the pixel photodiodes must not accumulate charge that affects the values read out.  This typically means that either the cycling time can't be used for readout, or (as in the VisionMap A3) the sensor is an interline CCD with two capacitors per pixel, one of which is shielded.  With this choice comes a bunch of minor but annoying problems.
  • In one line time, charge is transferred from the pixels all the way across the array to the readout.  The bit lines can be long and capacitive and add noise.
  • Take another look at the first pic in this blog post, and note the seams between the active arrays.  These are annoying.  It's possible to take them out with clever combinations of sparse arrays and stepping patterns.

Lenses generally resolve a circular field of view, and pushbroom sensors take a rectangular stripe down the middle.  It's possible to put an area array sensor in the leftover upper or lower crescent around a pushbroom sensor.  This gives a smaller area sensor, but in the context of a 50,000 pixel diameter focal plane, a "smaller" area sensor might be 10,000 pixels on a side, with 50 times the pixel count of an HD video sensor.  This allows for a 10:1 "digital zoom" for context with no loss of display resolution.

If I were building a government spysat today, I'd want it to do survey work, and I'd make surveillance the secondary mission.  Airplanes and drones are better for most surveillance work.  I'd want to shoot the whole Earth each year, which can be done with three satellites at 300 km altitude.  I'd use a staggered pushbroom array as the primary sensor and a smaller area array for surveillance.

The step-stare approach that SkyBox is using makes sense when a big, fast area array sensor covering the whole field of view can be had at low risk.  Sensors are developing quickly, so this envelope is growing over time, but it's still an order of magnitude away from what large-aperture spysats can do.

Maybe I'm wrong about that order of magnitude.  In 2010 Canon announced a 205 mm square CMOS sensor that supposedly reads out 60 frames per second.  Here it is pictured next to a full-frame 35mm DSLR sensor -- it's slightly bigger than the tiled array at the top of this post.  Canon did not announce the resolution, but they did say the sensor had 100 times the sensitivity of a DSLR, which suggests a pixel size around 35 microns.  That's too big for a spysat focal plane, unless it's specifically for use at night.

No subsequent announcement was made suggesting a purpose for this sensor.  Canon claims it was a technology demonstration, and I believe that (they would not have been allowed to show a production part for a spysat to the press).  Who were they demonstrating that technology to?  Is this the focal plane for a Japanese spysat?

Thursday, December 12, 2013

The SkyBox camera

Christmas (and Christmas shopping) is upon us, and I have a big review coming up, but I just can't help myself...

SkySat-1, from a local startup SkyBox Imaging, was launched on November 21 on a Russian Dnepr rocket, along with 31 other microsatellites and a package bolted to the 3rd stage.  They have a signal, the satellite is alive, and it has seen first light.  Yeehah!

These folks are using area-array sensors.  That's a radical choice, and I'd like to explain why.  For context, I'll start with a rough introduction to the usual way of making imaging satellites.

A traditional visible-band satellite, like the DubaiSat-2 that was launched along with SkySat-1, uses a pushbroom sensor, like this one from DALSA.  It has an array of 16,000 (swath) by 500 (track) pixels.
The "track" pixel direction is divided into multiple regions, which each handle one color, arranged like this:
Digital pixels are little photodiodes with an attached capacitor which stores charge accumulated during the exposure.  A CCD is a special kind of circuit that can shift a charge from one pixel's capacitor to the next.  CCDs are read by shifting the contents of the entire array along the track direction, which in this second diagram would be to the right.  As each line is shifted into the readout line, it is very quickly shifted along the swath direction.  At multiple points along the swath there are "taps" where the stored charge is converted into a digital number which represents the brightness of the light on that pixel.

A pushbroom CCD is special in that it has a readout line for each color region.  And, a pushbroom CCD is used in a special way.  Rather than expose a steady image on the entire CCD for tens of milliseconds, a moving image is swept across the sensor in the track direction, and in synchrony the pixels are shifted in the same direction.

A pushbroom CCD can sweep out a much larger image than the size of the CCD.  Most photocopiers work this way.  The sensor is often the full width of the page, perhaps 9 inches wide, but just a fraction of an inch long.  To make an 8.5 x 11 inch image, either the page is scanned across the sensor (page feed), or the sensor is scanned across the page (flatbed).

In a satellite like DubaiSat-2, a telescope forms an image of some small portion of the earth on the CCD, and the satellite is flown so that the image sweeps across the CCD in the track direction.

Let's put some numbers on this thing.  If the CCD has 3.5 micron pixels like the DALSA sensor pictured, and the satellite is in an orbit 600 km up, and has a telescope with a focal length of 3 meters, then the pixels, projected back through that telescope to the ground, would be 70 cm on a side.  We call 70 cm the ground sample distance (GSD).  The telescope might have an aperture of 50cm, which is as big as the U.S. Defense Department will allow (although who knows if they can veto a design from Dubai launched on a Russian rocket).  If so, it has a relative aperture of f/6, which will resolve 3.5 micron pixels well with visible light, if diffraction limited.

The satellite is travelling at 7561 m/s in a north-south direction, but its ground projection is moving under it at 6911 m/s, because the ground projection is closer to the center of the earth.  The Earth is also rotating underneath it at 400 m/s at 30 degrees north of the equator.  The combined relative velocity is 6922 m/s.  That's 9,900 pixels per second.  9,900 pixels/second x 16,000 pixel swath = 160 megapixels/second.  The signal chain from the taps in the CCD probably will not run at this speed well, so the sensor will need at least 4 taps per color region to get the analog to digital converters running at a more reasonable 40 MHz.  This is not a big problem.
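
Here's the arithmetic in one place (a sketch; orbital mechanics for a circular 600 km orbit, Earth rotation taken at 30 degrees latitude as above):

    import math

    MU, R_E = 3.986e14, 6.371e6     # Earth GM (m^3/s^2) and radius (m)
    alt, focal, pixel = 600e3, 3.0, 3.5e-6

    gsd = pixel * alt / focal                      # 0.70 m on the ground
    v_orbit = math.sqrt(MU / (R_E + alt))          # ~7,560 m/s
    v_ground = v_orbit * R_E / (R_E + alt)         # ~6,910 m/s under the track
    v_earth = 465.1 * math.cos(math.radians(30))   # ~400 m/s of Earth rotation
    v_rel = math.hypot(v_ground, v_earth)          # ~6,920 m/s combined

    px_per_s = v_rel / gsd
    print(round(px_per_s))                  # ~9,900 pixels/second
    print(round(px_per_s * 16000 / 1e6))    # ~158 Mpx/s across the swath
    print(round(1000 * 128 / px_per_s, 1))  # ~12.9 ms exposure, 128-row color band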

A bigger problem is getting enough light.  If the CCD has 128 rows of pixels for one color, then the time for the image to slide across that color region will be about 13 milliseconds, and that's the effective exposure time.  If you are taking pictures of your kids outdoors in the sun, with a point&shoot with 3.5 micron pixels, 13 ms with an f/6 aperture is plenty of light.  Under a tree that'll still work.  From space, the blue sky (it's nearly the same blue looking both up and down) will be superposed on top of whatever picture we take, and images from shaded areas will get washed out.  More on this later.
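The exposure time falls straight out of the scan rate:

```python
rows_per_color = 128    # track-direction rows in one color region
px_rate        = 9900   # pixels/second, from the sketch above

t_exp = rows_per_color / px_rate
print(f"effective exposure: {t_exp * 1e3:.0f} ms")   # -> ~13 ms
```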

Okay, back to SkySat-1.  The Skybox Imaging folks would like to shoot video of things, as well as imagery, and don't want to be dependent on a custom sensor.  So they are using standard area array sensors rather than pushbroom CCDs.

In order to shoot video of a spot on the ground, they have to rotate the satellite at almost 1 degree/second so that the telescope stays pointing at that one point on the ground.  If it flies directly over that spot, it will take about 90 seconds to go from 30 degrees off nadir in one direction to 30 degrees off in the other direction.  In theory, the satellite could shoot imagery this way as well, and that's fine for taking pictures of, ahem, targets.

A good chunk of the satellite imagery business, however, is about very large things, like crops in California's Central Valley.  To shoot something like that, you must cover a lot of area quickly and deal with motion blur, both things that a pushbroom sensor does well.

The image sliding across a pushbroom sensor does so continuously, but the pixel charges get shifted in a more discrete manner to avoid smearing them all together.  As a result, a pushbroom sensor necessarily sees about 1 pixel of motion blur in the track direction.  If SkySat-1 also has 0.7 meter pixels, and just stared straight down at the ground, then to match that motion blur it would have to use roughly a 100 microsecond exposure (one pixel at the 6922 m/s relative ground speed).  That is not enough time to make out a signal from the readout noise.

Most satellites use some kind of Cassegrain telescope, which has two mirrors.  It's possible to cancel the motion of the ground during the exposure by tilting the secondary mirror, generally with some kind of piezoelectric actuator.  This technique is used by the Visionmap A3 aerial survey camera.  It seems to me that it's a good match to SkyBox's light problem.  If the sensor is an interline transfer CCD, then it can expose pictures while the secondary mirror stabilizes the image, and cycle the mirror back while the image is read out.  Interline transfer CCDs make this possible because they expose the whole image array at the same time and then, before readout, shift the charges into a second set of shielded capacitors that do not accumulate charge from the photodiodes.

Let's put some numbers on this thing.  They'd want an interline transfer CCD that can store a lot of electrons in each pixel, and read them out fast.  The best thing I can find right now is the KAI-16070, which has 7.4 micron pixels that store up to 44,000 electrons.  They could use a 6 meter focal length F/12 Cassegrain, which would give them 74 cm GSD, and a ground velocity of 9,350 pixels/sec.

The CCD runs at 8 frames per second, so, staring straight down, the satellite's ground projection will advance 865 m, or about 1,170 pixels, between frames.  This CCD has a 4888 x 3256 pixel format, so we would expect 64% overlap in the forward direction.  This is plenty to align the frames to one another, but not enough to substantially improve signal-to-noise ratio (with stacking) or dynamic range (with alternating long and short exposures).
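Here's the staring-mode geometry as a sketch, reusing the relative ground speed from the pushbroom section (the focal length and orbit are my assumptions):

```python
pixel_pitch = 7.4e-6   # m, KAI-16070 pixel
focal_len   = 6.0      # m, assumed Cassegrain focal length
altitude    = 600e3    # m, same assumed orbit as before
v_rel       = 6922.0   # m/s, relative ground speed from earlier
fps         = 8        # KAI-16070 full-frame rate
track_px    = 3256     # frame extent in the along-track direction

gsd        = pixel_pitch * altitude / focal_len   # -> 0.74 m
advance_m  = v_rel / fps                          # -> ~865 m between frames
advance_px = advance_m / gsd                      # -> ~1,170 pixels
overlap    = 1 - advance_px / track_px            # -> ~64% forward overlap
print(gsd, advance_m, advance_px, overlap)
```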

And this, by the way, is the point of this post.  Area array image sensors have seen a huge amount of work in the last 10 years, driven by the competitive and lucrative digital camera market.  16 megapixel interline CCDs with big pixels running at 8 frames per second have only been around for a couple of years at most.  If I ran this analysis with the area arrays of five years ago the numbers would come out junk.

Back to SkyBox.  When they want video, they can have the CCD read out a 4 megapixel region of interest at 30 fps.  This is easily big enough to fill an HDTV stream.

They'd want to expose for as long as possible.  I figure a 15 millisecond exposure ought to saturate the KAI-16070 pixels looking at a white paper sheet in full sun.  During that time the secondary mirror would have to tilt through about 90 microradians, or roughly 18 seconds of arc for those of you who think in base-60.  Even this exposure will cause shiny objects like cars to bloom a little; any more and sidewalks and white roofs will saturate.
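The tilt requirement is just the line-of-sight swing during the exposure, halved because tilting a mirror by some angle deflects the beam by twice that angle (a sketch, using my assumed 600 km orbit):

```python
import math

v_rel    = 6922.0   # m/s, relative ground speed
altitude = 600e3    # m, assumed orbit
t_exp    = 0.015    # s, the 15 ms exposure

los_swing   = v_rel / altitude * t_exp   # -> ~173 microradians of line-of-sight motion
mirror_tilt = los_swing / 2              # -> ~87 microradians of mirror tilt
arcsec      = mirror_tilt / math.radians(1 / 3600)   # -> ~18 arc seconds
print(mirror_tilt, arcsec)
```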

To get an idea of how hard it is to shoot things in the shade from orbit, consider that a perfectly white sheet exposed to the whole sky except the sun will be the same brightness as the sky.  A light grey object with 20% albedo shaded from half the sky will be just 10% of the brightness of the sky.  That means the satellite has to see a grey object through a veil 10 times brighter than the object.  If the whole blue sky is 15% as bright as the sun, our light grey object would generate around 660 electrons of signal, swimming in sqrt(7260)=85 electrons of noise.  That's a signal to noise ratio of 7.8:1, which actually sounds pretty good.  It's a little worse than what SLR makers consider minimum acceptable noise (SNR=10:1), but better than what cellphone camera makers consider minimum acceptable noise (SNR=5:1, I think).
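The photon arithmetic behind that 7.8:1, as a sketch (the full well and the 15%-of-sun sky brightness are the assumptions stated above):

```python
import math

full_well = 44_000             # e-, KAI-16070 pixel, saturated by sunlit white
sky       = 0.15 * full_well   # blue-sky veil, ~15% as bright as sunlit white -> 6,600 e-
grey      = 0.10 * sky         # 20%-albedo object shaded from half the sky -> 660 e-

collected = sky + grey             # the pixel sees the object plus the veil
noise     = math.sqrt(collected)   # photon shot noise -> ~85 e-
snr       = grey / noise           # -> ~7.8
print(snr)
```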

But SNR values can't be directly compared, because you must correct for sharpness.  A camera might have really horrible SNR (like 1:1), but I can make the number better by just blurring out all the high spatial frequency components.  The measure of how much scene sharpness is preserved by the camera is MTF (stands for Modulation Transfer Function).  For reference, SLRs mounted on tripods with top-notch lenses generally have MTFs around 40% at their pixel spatial frequency.

As a rule of thumb, sharpening can double the high-frequency MTF at the cost of halving SNR.  Fancy denoise algorithms change this tradeoff a bit, by making assumptions about what is being looked at.  Typical assumptions are that edges are continuous and colors don't have as much contrast as intensity.

The atmosphere blurs things quite a bit on the way up, so visible-band satellites typically have around 7-10% MTF, even with nearly perfect optics.  If we do simple sharpening to get an image that looks like 40% MTF (like what we're used to from an SLR), that 20% albedo object in the shade will have SNR of around 2:1.  That's not a lot of signal -- you might see something in the noise, but you'll have to try pretty hard.
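Applying the rule of thumb above (8.5% is just the midpoint of the 7-10% range):

```python
snr_raw    = 7.8     # in-shade SNR from the photon arithmetic
mtf_sat    = 0.085   # midpoint of the 7-10% satellite MTF range
mtf_target = 0.40    # what we're used to from a tripod-mounted SLR

gain          = mtf_target / mtf_sat   # sharpening boost needed -> ~4.7x
snr_sharpened = snr_raw / gain         # -> ~1.7, i.e. roughly 2:1
print(snr_sharpened)
```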

The bottom line is that recent, fast CCDs have made it possible to use area-array instead of pushbroom sensors for survey satellites.  SkyBox Imaging are the first ones to try this idea.  Noise and sharpness will be about as good as simple pushbroom sensors, which is to say that dull objects in full-sky shade won't really be visible, and everything brighter than that will.

[Updated] There are a lot of tricks to make pushbroom sensors work better than what I've presented here.

  • Most importantly, the sensor can have more rows, maybe 1000 instead of 128 for 8 times the sensitivity.  For a simple TDI sensor, that's going to require bigger pixels to store the larger amount of charge that will be accumulated.  But...
  • The sensor can have multiple readouts along each pixel column, e.g. readouts at rows 32, 96, 224, 480, 736, and 992.  The initial readouts give short exposures, which can see sunlit objects without accumulating huge numbers of photons.  Dedicated short exposure rows mean we can use small pixels, which store less charge.  Small pixels enable the use of sensors with more pixels.  Multiple long exposure readouts can be added together once digitized.  Before adding these long exposures, small amounts of diagonal image drift, which would otherwise cause blur, can be compensated with a single pixel or even half-pixel shift.

[Updated] I've moved the discussion of whether SkyBox was the first to use area arrays to the next post.

Thursday, October 31, 2013

Hyperloop Traffic

This is a huge post, about a subject that may not be terribly interesting. I suspect most of you will want to skim all but the first section, and come back when I refer to it from later posts.

Bottom line: If Hyperloop can get daily commuter traffic, at first within the Bay Area and Los Angeles areas, and later between them, then it can gather at least $7b/year of revenue. This is much larger than the $2.2b/year of revenue projected for California High Speed Rail.

Daily commute traffic is the most important market. The better Hyperloop addresses this market, the more revenue it will get.

The big picture
I have looked at the California High Speed Rail project’s expected traffic volume (example here).  They are expecting an average of 32,600 people/day to take the train between the LA basin and the Bay Area, and another 8,800 people/day between San Diego and the Bay Area.  For comparison, 29,000 people/day currently fly those routes.  So they are expecting everyone who currently flies to take the train instead.  While this is possible, it’s neither likely (door-to-door times using the train will be slower for most people), nor sufficient (it doesn’t bring in enough money), nor interesting (replacing one service with an equivalent doesn’t grow the economy).


The required investment is $68 billion and they expect $2.2 billion/year in revenue.  That’s just not enough revenue.  The goal is apparently to break even on operating costs and not need government subsidy, which I find appalling.  Of what use is a train if it doesn’t get anyone anywhere faster and it doesn’t make money?  About the only other thing it might accomplish is removing traffic from some other system that would otherwise have to be expanded.  The trouble is that the overcrowded system most in need of relief is local highways, and the HSR doesn’t do anything about that.


I think Hyperloop should have three goals:
  • Most Californians should see decreased travel times and improved travel flexibility.
  • 6% return on capital invested.
  • Massive new economic activity beyond the billions spent on the transport system directly.


To bring in an order of magnitude more revenue, Hyperloop must be used by a lot more people a lot more often.  There is only one way to do that: Hyperloop must significantly improve the daily commutes of a million Californians.  Just as the freeway system allows drivers to bypass most surface streets for journeys longer than 20 minutes, Hyperloop must allow drivers to bypass most of the freeway system for journeys longer than 40 minutes.


The average California commute is about 30 minutes.  12% of Bay Area and Los Angeles commuters accept commutes at least an hour long.  There are two opportunities here.  The first and more immediate is to cut 20 or more minutes out of hundreds of thousands of existing commutes within the Bay Area and Los Angeles.  The second is to enable daily commuting between Northern and Southern California, and over larger distances in general.


Practical commuting over distances like this will cause massive changes, just as the automobile disrupted the previous shapes of cities.  Hyperloop can bring together the labor markets in Northern and Southern California, open up gigantic new areas of real estate, save Californians perhaps a hundred million hours a year, and attract a half million passengers a day. (Um, my numbers don't actually support a million per day.)


As detailed below, I project revenue of at least $7b/year.
  • $2.8b from existing north/south traffic
  • $2.3b from existing commuters
  • Eventually, at least $1.9b from new long distance commuters, and perhaps multiple times this much.



The key to faster commuting is quick transitions between Hyperloop and ordinary car travel, so I have diverged from Elon Musk’s proposal.  I will summarize here and leave the details to another post.

  • The capsules I envision have no seats at all -- they are primarily car ferries.
  • Security would be the same as on our freeway system -- open access and zero delay, along with police surveillance.
  • The time between capsules while underway would be 1-2 seconds, similar to that of cars on the freeway.
  • I envision routing the tubes underwater.  I just don’t see voters accepting massive overhead tubes in cities.
My last point of departure is that I propose to carry truck traffic for more diversified revenue.

Northern California to/from Southern California non-commute traffic
The following analysis leads me to expect that, perhaps five years after initial operation, the north-south link would carry about 25 thousand one-way revenue-generating capsule trips per day, from the replacement of trips that people take today.
  • 10k/day replace I-5 truck traffic
  • 8k/day replace I-5 car traffic
  • 7k/day replace flights and subsequent car rentals
  • 350/day replace flights which are segments of longer itineraries


To be attractive to truck traffic, a north/south capsule ride must be priced around $300, which makes a car ride $75 and a bus ride under $20.  North/south revenue will be around $2.8b/year.
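Checking the revenue claim against the trip counts listed above:

```python
# truck bridge + car bridge + flight+rental + airport shuttle
capsules_per_day = 10_000 + 8_000 + 7_000 + 350   # -> 25,350
price   = 300                              # $/capsule, set by the truck bridge case
revenue = capsules_per_day * price * 365   # -> ~$2.8b/year
print(f"${revenue / 1e9:.1f}b/year")
```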

Why Trucks?

Carrying 18-wheeler trailers will require a substantially bigger capsule and tube than carrying sedans, and so substantially more capital investment.  I don’t have an estimate of how much more capital investment, but I do have an estimate for the expected revenue from truck replacement traffic: about $1b/year from 10k capsules/day on the north-south link.  This is perhaps 15% of the total revenue stream.


Nationally, people spend one-third as much on truck freight as on car travel, but they spend twice as much on truck freight as air travel.  This leads me to believe that truck replacement revenue for Hyperloop will eventually be more like 20% of the total revenue stream.
ca. 2009          tonne-km freight   revenue        passenger-km   revenue       2010 user costs
                  (million)          ($/tonne-km)   (million)      ($/pass-km)   (billion $)
Car                                                 4,507,134      0.168*        $757*
Truck             1,929,201          0.113                                       $250
Air               17,559             0.671          887,941        0.075         $110
Intercity Rail    2,309,811          0.021          9,518          0.191         $50


The decision to carry trucks will hinge on the return on an incremental billion dollars of revenue versus the incremental investment for bigger tubes.

I-5 Truck bridge case: 10k capsules/day

A fleet operator with tractors in both LA and SF can move freight between the two more cheaply over Hyperloop than over I-5.

Over-the-road truck drivers (the ones on the road for two weeks at a time) are paid $0.19 to $0.25/km.  The vehicle depreciates $0.06 to $0.07/km.  They burn $0.27/km of diesel.  This adds up to around $0.55/km in 2013.  Trucks averaged 11.3 cents per tonne-km in 2009, which suggests the average load was around 5 tonnes, which seems reasonable.

So, the 600 km from SF to LA costs a truck operator around $330.  It’s possible for Hyperloop to charge a premium, because the capsule trip gets the load to the destination 5 hours sooner.  But the premium will only be paid for a small number of loads.  In order to get most of this business, a capsule trip will cost around $300.
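The per-kilometer arithmetic, as a sketch using midpoints of the ranges above:

```python
driver = (0.19 + 0.25) / 2   # $/km, over-the-road driver pay
depr   = (0.06 + 0.07) / 2   # $/km, tractor-trailer depreciation
diesel = 0.27                # $/km, fuel

cost_per_km = driver + depr + diesel   # -> ~$0.55/km
trip_cost   = 600 * cost_per_km        # -> ~$330 for the SF-LA run
avg_load    = cost_per_km / 0.113      # tonnes implied by 11.3 cents/tonne-km -> ~4.9
print(cost_per_km, trip_cost, avg_load)
```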

The current truck traffic on I-5 is 10k trucks/day (one-way).  10k capsules/day is more traffic than I expect from air traffic replacement.  Because the truck bridge case will also be more price sensitive, it will probably set the capsule trip price for long-distance routes.

Payloads

The initial payloads with the greatest revenue potential are cars and 18-wheeler trailers, and eventually buses and container freight.



Payload                Max weight     Frontal area      Length
Cars (4, end-to-end)   10.5 tonnes    1.7 m x 2.0 m     21.0 m
18-wheeler trailer     30.8 tonnes    4.12 m x 2.43 m   16.15 m (just the trailer)
Bus                    23 tonnes      3.5 m x 2.6 m     13.7 m
Shipping container     32.5 tonnes    2.9 m x 2.5 m     13.7 m

The containing capsule will have a payload diameter of 4 to 5 meters.  The larger number is if we wish to back standard 18-wheelers directly into the capsule.  The smaller number is if we are willing to take the wheels off the trailer first.

Tube diameter will be 6 to 7 meters, about 2x that of the Hyperloop-alpha proposal.

I-5 Car bridge case: 8k capsules/day

Here’s an interesting statistic: more people drive from northern to southern California than fly (source: Caltrans, 2011 California Traffic Volumes).

There are currently 30k cars/day travelling between LA and SF, each burning $60 of fuel and 6 hours of driver time.  As above, four cars can share a Hyperloop capsule, with an amortized ticket cost of $75/car.  For a net $15 ($75 ticket less the $60 of fuel saved), the driver gets back nearly a day of driving.

I’ll assume nearly all drivers will take the Hyperloop: 30k cars/day at four per capsule is 7,500 capsules/day, and traffic may increase due to greater convenience.  Call it 8,000 capsules/day.

Flight+rental case: 7k capsules/day

Consider someone taking a flight down to LA, then renting a car for 5 days.
Shuttle      $70    1:00  (one-way; may have to pay both ways, or pay for airport parking)
Security            1:00
Flight       $138   1:00  (one way)
Rental car   $188   0:30  (5 days, compact)
Total        $396   3:30

8 million people do this every year between the Bay Area and Los Angeles or San Diego.

The Hyperloop is a total win, even if only a single car takes a capsule.
  • 4 cars share a $300 ticket, $75 each, about 5 times cheaper, and you get your own car.  And, you don’t have to pay to park your own car at the airport.
  • Assuming it takes 30 minutes to drive to the Hyperloop station, and an hour to get to LA, you’ve saved two hours in each direction.  
  • If there are more people in the car (say, a family of 4), you must buy extra plane tickets and shuttle fares.  The Hyperloop option costs nothing more.

Assume Hyperloop gets all of this traffic.  At an average vehicle loading of 1.5 people/car, that's 22,000 people/day and 14,800 cars/day.  Assuming an average of 2 cars/capsule (many people will want their own capsule), that's 7380 capsules/day or $800m/year in revenue to Hyperloop.
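The same numbers as a sketch (I round a little differently above, but it lands in the same place):

```python
people_per_year  = 8e6
people_per_day   = people_per_year / 365   # -> ~22,000
cars_per_day     = people_per_day / 1.5    # -> ~14,600 at 1.5 people/car
capsules_per_day = cars_per_day / 2        # -> ~7,300 at 2 cars/capsule
revenue = capsules_per_day * 300 * 365     # -> ~$0.8b/year at the $300 capsule price
print(f"${revenue / 1e6:.0f}m/year")
```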

At 20% to 75% of the price, and less than half the time, we should expect an increase in this traffic volume, and Hyperloop will see all that additional traffic.

Airport Shuttle flight case: 350 capsules/day

Not all the people flying between the Bay Area and Southern California are renting a car.  For 2.5m people per year, this hop is one of at least two.  For instance, when flying from San Francisco to Phoenix one generally stops at LAX along the way.

Airports could run a bus-over-Hyperloop service between airport pairs to move all this traffic off airplanes.  They win in two ways: first, they open up runway slots to more profitable longer-distance routes.  Second, the airports essentially get into a high-margin local airline business.  The airlines win too, because they can pack their airplanes better: passengers may be more willing to accept a one-Hyperloop, one-plane trip instead of a nonstop, if the Hyperloop hop gets them to the destination sooner.

It’s about 7000 people/day.  Assuming buses with 20 people (⅓ full), that’s about 350 capsules per day.  This would be incredibly convenient for passengers, as there would be a bus leaving from each of the three major airports in each area about every 20 minutes.

Commute traffic

Just the traffic from replacing portions of existing long commutes is huge:
  • 105k/day Bay Area commute capsules (half of all existing >50 minute commutes)
  • 125k/day Los Angeles commute capsules (¼ of existing >50 minute commutes)

Capsule rides would average about $60 and carry four cars.  Yearly revenue would be $2.3b/year.  However, this estimate is sensitive to the distribution of Hyperloop terminals, and the time it takes to get through these terminals.

Quick trips really matter. For every minute saved, per trip, I expect an additional 9k/day commute capsules and $135m/year in revenue. This traffic increase is strongly nonlinear, however. If we could get the trip times down to around 35 minutes per trip, we'd expect to see 40k/day extra commute capsules per minute saved (and $600m/year in additional revenue).

Extra terminals (in the right places) would really matter, especially in inland Los Angeles, Orange, San Diego, and Contra Costa counties, where I expect each terminal to support 16k/day commute capsules and $235m/year in revenue.

Because the north/south door-to-door time will be about an hour, it will be possible to have a daily commute between northern and southern California.  Even if just 1% of commuters use Hyperloop over long distance runs, this is a colossal amount of traffic: 25k capsules/day, bringing in $1.9b/year.
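A sketch of that long-distance commuter revenue, assuming the $300 long-haul capsule price and about 250 working days a year:

```python
capsules_per_day = 25_000   # 1% of commuters, per the estimate above
price    = 300              # $/capsule, long-haul price
workdays = 250
revenue = capsules_per_day * price * workdays   # -> ~$1.9b/year
print(f"${revenue / 1e9:.2f}b/year")
```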

Existing Bay Area commuter case: 105k capsules/day

The Bay Area has the largest fraction of long distance commutes in the nation.  2% of commuters travel at least 50 miles and 90 minutes, each way.  About 12% of commuters travel 60 minutes each way, and the average commute is 30 minutes.

Using the 2011 U.S. Census ACS data, I predict there are 420,000 commuters in the Bay Area with at least a 50 minute commute.  As shown in the map below (created with Trulia’s excellent tool), at least half of these commuters could be within 15 minutes of a Hyperloop terminal, and so could reduce their commute by 20 minutes and 15 miles with a Hyperloop jump.  So a local Hyperloop (with 21 terminals as shown) would have a market of around 420k car trips per day.  At four cars per capsule, that’s 105k capsule trips per day.

15 miles of commuting costs around $4.05 each way (using AAA’s $0.27/mile incremental cost for a medium sedan in 2013).  20 minutes of the person’s time is worth something as well, at least $6.  Each local car trip could be sold for $10, so yearly revenue for trips within the Bay Area would be $1.05b.
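Putting the Bay Area case together as a sketch:

```python
long_commuters = 420_000   # Bay Area commuters with a 50+ minute commute
reachable      = 0.5       # fraction within 15 minutes of a terminal

trips_per_day    = long_commuters * reachable * 2   # two trips per commuter -> 420k car trips
capsules_per_day = trips_per_day / 4                # four cars per capsule -> 105k
revenue          = trips_per_day * 10 * 250         # $10/trip, 250 workdays -> ~$1.05b/year
print(capsules_per_day, revenue)
```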

Existing Los Angeles commuter case: 125k capsules/day

The Los Angeles commute market is both more lucrative than the Bay Area’s (620,000 commutes are at least 60 minutes, 1,100,000 are at least 50 minutes) and more problematic, as more of the population is farther from the water.  Nonetheless, a Hyperloop can be run down the coast and reach perhaps ¼ of the population in 15 minutes or less.
Again using ACS data, I predict there are 1 million commuters in Los Angeles with at least a 50 minute commute.  The core 8 Hyperloop transfer stations shown would service 250,000 of these commuters and bring in $1.25b/year.

The map above shows a terminal in the southern San Fernando Valley, which would require a 10 mile tunnel bore through the Santa Monica mountains.  There are several other places where tunnel bores, or perhaps cut-and-cover through lower-cost real estate, could reach lucrative markets.  The map above also shows 5 terminals in Santa Barbara, Ventura, and San Diego, which are not currently commuter suburbs of Los Angeles.

Tunneling cost is not necessarily prohibitive: a 5 mile x 15-foot diameter tunnel was recently completed under San Francisco Bay for $286 million. The tunnel imagined above would be three times the diameter and twice as long, so perhaps four times the cost. A $1b capital outlay to bring in $150m/year seems quite reasonable.

There is a significant externalized benefit: these 250,000 commuters would no longer be on the 405 freeway for most of their trip.  The Hyperloop would unload a huge amount of traffic from the freeway system, which should speed up even those commutes that can’t be serviced by Hyperloop.