Monday, August 27, 2012

ARGUS-IS

Here's an exotic high flying camera: ARGUS-IS.  I've been trying to figure out what these folks have been up to for years, and today I found an SPIE paper they published on the thing.  What follows is a summary and my guesses for some of the undocumented details. [Updated 30-Jan-2013, to incorporate new info from a Nova documentary and a closer reading of the SPIE paper.]
Vexcel Ultracams also use four cameras with interleaved sensors


  • It looks like BAE is the main contractor.  They've subcontracted some of the software, probably the land station stuff, to ObjectVideo.  BAE employs Yiannis Antoniades, who appears to be the main system architect.  The lenses were subcontracted out to yet another unnamed vendor, and I suspect the electronics were too.
  • Field of view: 62 degrees.  Implied by the 6 km altitude and the 7.2 km diameter ground footprint.
  • Image sensor: 4 cameras, each has 92 Aptina MT9P031 5 megapixel sensors.  The paper has a typo claiming MT9P301, but no such sensor exists.  The MT9P031 is a nice sensor, we used it on the R5 and R7 Street View cameras.
    • 15 frame/second rate, 96 MHz pixel clock, 5 megapixels, 2592 x 1944.
    • It's easy to interface with, has high performance (quantum efficiency over 30%, 4 electrons readout noise, 7000 electrons well capacity), and is (or was) easy to buy from Digikey.  (Try getting a Sony or Omnivision sensor in small quantities.)
  • Focal length: 85mm.  Implied by the 2.2 micron pixel, the altitude of 6 km, and the GSD of 15 cm.  Focal plane diameter is 102mm.  The lens must resolve about 1.7 gigapixels.  I must say that two separate calculations suggest that the focal length is actually 88mm, but I don't believe it, since they would have negative sensor overlap if they did that.  (Some of these numbers are checked in the sketch after this list.)
  • F/#: 3.5 to 4.  There is talk of upgrading this system to 3.7 gigapixels, probably by upgrading the sensor to the Aptina MT9J003.  An f/4.0 lens has an Airy disk diameter of 5.3 microns, and it's probably okay for the pixels to be 2.2 microns.  But 1.66 micron pixels won't get much more information from an f/4.0 lens.  So, either the lens is already faster than f/4.0, or they are going to upgrade the lens as well as the sensors.
  • The reason to use four cameras is the same as the Vexcel Ultracam XP: the array of sensors on the focal plane cannot cover the entire field of view of the lens.  So, instead, they use a rectangular array of sensors, spaced closely enough so that the gaps between their active areas are smaller than the active areas.  By the way guys (Vexcel and ObjectVideo), you don't need four cameras to solve this problem; it can be done with three (the patent just expired on 15-Jul-2012).  You will still need to mount bare die.
  • The four cameras are pointed in exactly the same direction.  Offsetting the lenses by one sensor's width would reduce the required lens field of view by 2.86 degrees, to about 59 degrees.  That's not much help.  And you would have to deal with the lenses' nominal distortion, since the same ground point would land at different field heights in different cameras.  Lining up the optical axes means the nominal distortion has no effect on alignment between sensors, which I'm sure is a relief.
  • The sensor pattern shown in the paper has 105 sensors per camera, and at one point they mention 398 total sensors.  The first may be an earlier configuration and the latter is probably a typo.  I think the correct number is 92 sensors per camera, 368 total.  So I think the actual pattern is a 12x9 rectangular grid with 11.33mm x 8.50mm centers.  16 corner sensors (but not 4 in each corner) are missing from the 9x12=108 rectangle, to get to 92 sensors per focal plane.  The smallest package that those sensors come in is 10mm x 10mm, which won't fit on the 8.5mm center-to-center spacing, so that implies they are mounting bare die to the focal plane structure.
  • They are carefully timing the rolling shutters of the sensors so that all the rolling shutters in each row are synchronized, and each row starts its shutter just as the previous row finishes.  This is important, because otherwise when the camera rotates around the optical axis they will get coverage gaps on the ground.  I think there is a prior version of this camera called Gorgon Stare which didn't get this rolling shutter synchronization right, because there are reports of "floating black triangles" in the imagery, which is consistent with what you would see on the outside of the turn if all the rolling shutters were fired simultaneously while the camera was rotating.  Even so, I'm disappointed that the section on electronics doesn't mention how they globally synchronize those rolling shutters, which can be an irritatingly difficult problem.
  • They are storing some of the data to laptop disk drives with 160 GB of storage.  It appears they may have 32 of these drives, in which case they've got enough space to potentially store the entire video stream, but only with very lossy video compression.  The design presented has only JPEG2000 (not video) compression, which is good for stepping through the frames, but the compressed frames will be bulky enough that there is no way they are storing all the video.
  • They have 184 FPGAs at the focal plane for local sensor control, timestamping, and serialization of the data onto 3.3 Gb/s fiber optics.  Supposedly the 3.3 Gb/s SerDes is on the FPGA, which sounds like a Virtex-5 20T.  But something is odd here, because having the SerDes on the FPGA forces them to choose a fairly beefy FPGA, but then they hardly do anything with it: the document even suggests that multiplexing the two sensor data streams, as well as serialization of those streams, happens outside the FPGA (another typo?).  So what's left for a Virtex-5 to do with a pair of sensors?  For comparison, I paired one Spartan-3 3400A with each sensor in R7, and we were able to handle 15 fps compression as well as storage to and simultaneous retrieval from 32 GB of SLC flash, in that little FPGA.  Maybe the SerDes is on some other device, and the FPGA is more of a PLD.
  • The data flows over fiber optics to a pile of 32 6U single board computers, each of which has two mezzanine cards with a Virtex 5 FPGA and two JPEG2000 compressors on it.
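Here's a quick Python sketch that reproduces the back-of-envelope numbers above (field of view, implied focal length, focal plane diameter, Airy disk, raw pixel rate).  The 550 nm wavelength and the f/4 working aperture are my assumptions, not anything stated in the paper.

```python
import math

# Stated or guessed ARGUS-IS numbers (assumptions flagged in comments)
altitude_m     = 6000.0       # operating altitude
footprint_m    = 7200.0       # ground coverage diameter
pixel_m        = 2.2e-6       # MT9P031 pixel pitch
gsd_m          = 0.15         # ground sample distance
sensors        = 368          # 4 cameras x 92 sensors
pix_per_sensor = 2592 * 1944
fps            = 15           # MT9P031 full-resolution frame rate

# Field of view implied by altitude and footprint
fov_deg = 2 * math.degrees(math.atan((footprint_m / 2) / altitude_m))
print(f"field of view: {fov_deg:.1f} degrees")          # ~62

# Focal length implied by pixel pitch, altitude, and GSD
focal_m = pixel_m * altitude_m / gsd_m
print(f"implied focal length: {focal_m*1000:.0f} mm")   # ~88; I argue above that it's really 85

# Focal plane diameter for an 85 mm lens covering that field
fp_diam_mm = 2 * 85.0 * math.tan(math.radians(fov_deg / 2))
print(f"focal plane diameter: {fp_diam_mm:.0f} mm")     # ~102

# Airy disk diameter at f/4 (550 nm wavelength is my assumption)
airy_um = 2.44 * 0.55 * 4.0
print(f"Airy disk at f/4: {airy_um:.1f} microns")       # ~5.4

# Raw pixel rate off the focal plane
pix_rate = sensors * pix_per_sensor * fps
print(f"raw pixel rate: {pix_rate/1e9:.1f} Gpix/s")     # ~28
```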
Now here's my critique of this system design:
  • They pushed a lot of complexity into the lens.
    • It's a wide angle, telecentric lens.  Telecentric means the chief rays coming out the back, heading to the focal plane, are going straight back, even at the edges of the focal plane.  Said another way, when you look in the lens from the back, the bright exit pupil that you see appears to be at infinity.  Bending the light around to do that requires extra elements.  This looks a lot like the lenses used on the Leica ADS40/ADS80, which are also wide angle telecentric designs.  The Leica design is forced into a wide angle telecentric because they want consistent colors across the focal plane, and they use dichroic filters to make their colors.  The ARGUS-IS doesn't need consistent color and doesn't use dichroics... they ended up with a telecentric lens because their focal plane is flat.  More on that below.
    • The focal lengths and distortions between the four lenses must be matched very, very closely.  The usual specification for a lens focal length is +/- 1% of nominal.  If the ARGUS-IS lens were built like that, the image registration at the edge of field would vary by +/- 500 microns.  If my guesses are right, the ARGUS-IS focal plane appears to have 35x50 microns of overlap, so the focal lengths of the four lenses will have to match to within +/- 0.07%.  Wow.  (The arithmetic is sketched just after this list.)
    • "The lenses are athermalized through the choice of glasses and barrel materials to maintain optical resolution and focus over the operational temperature range."  Uh, sure.  The R7 StreetView rosette has 15 5 megapixel cameras.  Those lenses are athermalized over a 40 C temperature range, and it was easy as pie.  We just told Zemax a few temperature points, assumed an isothermal aluminum barrel, and a small tweak to the design got us there.  But those pixels have a field of view of 430 microradians, compared to the pixels behind the ARGUS-IS lens, which have a 25 microradian PFOV.  MIL-STD-810G, test 520.3, specifies -40 C to 54 C as a typical operating temperature range for an aircraft equipment bay.  If they had anything like this temperature range specified, I would guess that this athermalization requirement (nearly 100 degrees!) came close to sinking the project.  The paper mentions environmental control within the payload, so hopefully things aren't as bad as MIL-STD-810G.
    • The lenses have to be pressure compensated somehow, because the index of refraction of air changes significantly at lower pressures.  This is really hard, since glasses, being less compressible than air, don't change their refractive indices as fast as air.  I have no particularly good ideas how to do it, other than to relax the other requirements so that the lens guys have a fighting chance with this one.  Maybe the camera can be specified to only focus properly over a restricted range of altitudes, like 4km to 8km.  (ARGUS-IR specifies 0 to 10km.  It's likely ARGUS-IS is the same, so no luck there.)  Or maybe everything behind their big flat window is pressurized.
  • They made what I think is a classic system design mistake: they used FPGAs to glue together a bunch of specialized components (SerDes, JPEG compressors, single board computers), instead of simply getting the job done inside the FPGAs themselves.  This stems from fear of the complexity of implementing things like compression.  I've seen other folks do exactly the same thing.  Oftentimes writing the interface to an off-the-shelf component, like a compressor or an encryption engine, is just as much work as implementing the equivalent functionality.  They mention that each Virtex-5 on the SBC has two 0.6 watt JPEG2000 chips attached.  It probably burns 200 mW just talking to those chips.  It seems to me that the Virtex-5 could probably do JPEG2000 on 80 Mpix/s in less than 1.4 watts.  Our Spartan-3 did DPCM on 90+ Mpix/s, along with a number of other things, all in less than 1 watt.
  • I think I remember reading that the original RFP for this system had the idea that it would store all the video shot while airborne, and allow the folks on the ground to peruse forward and backward in time.  This is totally achievable, but not with limited power using an array of single-board PCs.
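A minimal sketch of the focal-length matching arithmetic from the lens bullet above, assuming a 102 mm focal plane (51 mm image height at the edge of field) and the roughly 35 micron overlap I guessed at earlier:

```python
# Edge-of-field image height for a 102 mm diameter focal plane
edge_mm = 102.0 / 2

# A +/-1% focal length error scales the image by the same +/-1%,
# so registration at the edge of field moves by:
shift_um = 0.01 * edge_mm * 1000
print(f"+/-1% focal length -> +/-{shift_um:.0f} microns at the edge")  # ~510

# To keep that shift within the ~35 micron sensor overlap:
tol_pct = 0.035 / edge_mm * 100
print(f"required focal length match: +/-{tol_pct:.2f}%")               # ~0.07
```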
Let me explain how they ended up with a telecentric lens.  A natural 85mm focal length lens would have an exit pupil 85mm from the center of the focal plane.  Combine that with a flat focal plane and sensors that accept an f/1.8 beam cone (and no offset microlenses), and you get something like the following picture.  The rectangle on the left is the lens, looking from the side.  The left face of the right rectangle is the focal plane.  The big triangle is the light cone from the exit pupil to a point at the edge of the focal plane, and the little triangle is the light cone that the sensor accepts.  Note that the sensor won't accept the light from the exit pupil -- that's bad.


There are two ways to fix this problem.  One way is to make the lens telecentric, which pushes the exit pupil infinitely far away from the focal plane.  If you do that, the light cone from the exit pupil arrives everywhere with its chief ray (center of the light cone) orthogonal to the focal plane.  This is what ARGUS-IS and ADS-80 do.

The other way is to curve the focal plane (and rename it a Petzval surface to avoid the oxymoron of a curved focal plane).  Your retina is curved behind the lens in your eye, for example.  Cellphone camera designers are now looking at curving their focal planes, but it's pretty hard with one piece of silicon.  The focal plane array in ARGUS-IS is made of many small sensors, so it can be piecewise curved.  The sensors are 7.12 mm diagonally, and the sag of an 85 mm radius sphere across 7.12 mm is 74 microns.  The +/- 9 micron focus budget won't allow that, so curving the ARGUS-IS focal plane isn't going to allow a natural exit pupil.  The best you can do is curve the focal plane with a radius of 360 mm, getting 3.6 mm of sag, and push the exit pupil out to about 180 mm.  It's generally going to be easier to design and build a lens with an exit pupil at 2x focal length rather than telecentric, but I don't know how much easier.  Anyway, the result looks like this:
As I said, the ARGUS-IS designers didn't bother with this, but instead left the focal plane flat and pushed the exit pupil to infinity.  It's a solution, but it's not the one I would have chosen.
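Here's the sag arithmetic behind the curved-focal-plane numbers above, as a small sketch.  The 7.12 mm sensor diagonal and the +/- 9 micron focus budget come from the discussion; the rest is just spherical-cap geometry.

```python
import math

def sag_mm(radius_mm, chord_mm):
    """Depth of a spherical cap of the given radius across the given chord."""
    return radius_mm - math.sqrt(radius_mm**2 - (chord_mm / 2)**2)

diag_mm = 7.12     # sensor diagonal
fp_mm   = 102.0    # focal plane diameter

print(f"sag across one sensor, R=85 mm:  {sag_mm(85.0, diag_mm)*1000:.1f} microns")   # ~75; blows the +/-9 um budget
print(f"sag across one sensor, R=360 mm: {sag_mm(360.0, diag_mm)*1000:.1f} microns")  # ~17.6, i.e. +/-8.8 about mid-focus
print(f"sag of whole focal plane, R=360 mm: {sag_mm(360.0, fp_mm):.1f} mm")           # ~3.6
```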

Here's what I would have done to respond to the original RFP at the time.  Note that I've given this about two hours' thought, so I might be off a bit:
  • I'd have the lenses and sensors sitting inside an airtight can with a thermoelectric cooler to a heat sink with a variable speed fan, and I'd use that control to hold the can interior to between 30 and 40 C (toward the top of the temperature range), or maybe even tighter.  I might put a heater on the inside of the window with a thermostat to keep the inside surface isothermal to the lens.  I know, you're thinking that a thermoelectric cooler is horribly inefficient, but they pump 3 watts for every watt consumed when pumping heat across a small temperature difference.  The reason for the thermoelectric heat pump isn't to get the sensor cold, it's to get tight control.  The sensors burn about 600 mW each, so I'm pumping 250 watts out with maybe 100 watts.
  • I'd use a few more sensors and get the sensor overlap up to 0.25mm, which means +/-0.5% focal length is acceptable.  I designed R5 and R7 with too little overlap between sensors and regretted it when we went to volume production.  (See Jason, you were right, I was wrong, and I've learned.)
  • Focal plane is 9 x 13 sensors on 10.9 x 8.1 mm centers.  Total diameter: 105mm.  This adds 32 sensors, so we're up to an even 400 sensors.
  • Exiting the back of the fine gimbal would be something like 100 flex circuits carrying the signals from the sensors.
  • Hook up each sensor to a Spartan-3A 3400.  Nowadays I'd use an Aptina AR0330 connected to a Spartan-6, but back then the MT9P001 and the Spartan-3A were a good choice.
  • I'd have each FPGA connected directly to 32GB of SLC flash in 8 TSOPs, and a 32-bit LPDDR DRAM, just like we did in R7.  That's 5 bytes per pixel of memory bandwidth, which is plenty for video compression.
  • I'd connect a bunch of those FPGAs, let's say 8, to another FPGA which connects to gigabit ethernet, all on one board, just like we did in R7.  This is a low power way to get connectivity to everything.  I'd need 12 of those boards per focal plane.  This all goes in the gimbal.  The 48 boards, and their power and timing control, are mounted to the coarse gimbal, and the lenses and sensors are mounted to the fine gimbal.
  • Since this is a military project, and goes on a helicopter, I would invoke my fear of connectors and vibration, and I'd have all 9 FPGAs, plus the 8 sensors, mounted on a single rigid/flex circuit.  One end goes on the focal plane inside the fine gimbal and the other goes on the coarse gimbal, and in between it's flexible.
  • I'd connect all 52 boards together with a backplane that included a gigabit ethernet switch.  No cables -- all the gigE runs are on 50 ohm differential pairs on the board.  I'd run a single shielded CAT-6 to the chopper's avionics bay.  No fiber optics.  They're really neat, but power hungry.  Maybe you are thinking that I'll never get 274 megabits/second for the Common Data Link through that single gigE.  My experience is otherwise: FPGAs will happily run a gigE with minimum interpacket gap forever, without a hiccup.  Cheap gigE switches can switch fine at full rate but have problems when they fill their buffers.  These problems are fixed by having the FPGAs round-robin arbitrate between themselves with signals across that backplane.  Voila, no bandwidth problem.
  • The local FPGA does real time video compression directly into the flash.  The transmission compression target isn't all that incredible: 1 bit per pixel for video.  That gets 63 channels of 640x400x15 frames/sec into 274 Mb/s.  The flash should give 1 hour of storage at that rate.  If we want 10 hours of storage, that's 0.1 bits/pixel, which will require more serious video compression.  I think it's still doable in that FPGA, but it will be challenging.  In a modern Spartan-6 this is duck soup.
  • The computer tells the local FPGAs how to configure the sensors, and what bits of video to retrieve.  The FPGAs send the data to the computer, which gathers it up for the common data link and hands it off.
  • I'll make a guess of 2 watts per sensor+FPGA+flash, or 800 watts for the 400 sensors.  Add the central computer and switch and we're at 1 kilowatt.  Making the FPGAs work hard with 0.1 bit/pixel video compression might add another 400 watts, at most.
  • No SSDs, no RAID, no JPEG compression chips, no multiplexors, no fiber optic drivers, no high speed SerDes, no arrays of multicore X86 CPUs.  That's easily half the electronics complexity, gone.
UPDATE 25-Jan-2013: Nova ran a program on 23-Jan-2013 (Rise of the Drones) which talks about ARGUS-IS.  They present Yiannis Antoniades of BAE systems as the inventor, which suggests I have the relationship between BAE and ObjectVideo wrong in my description above.  They also say something stupid about a million terabytes of data per mission, which is BS: if the camera runs for 16 hours the 368 sensors generate 2,000 terabytes of raw data.

They also say that the ARGUS-IS stores the entire flight's worth of data.  I don't think they're doing that at 12 hertz, certainly not on 160 GB drives.  They've got 32 laptop drives in the system (one per single board computer).  If those store 300 GB apiece, that's 10 terabytes of total storage.  16 hours of storage would require 0.05 bits/pixel -- no way without actual video compression.  The JPEG2000 compressor chips are more likely to deliver at best 0.2 bits/pixel, which means they might be storing one of every four frames.
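Here's the storage arithmetic behind the last two paragraphs, assuming 12-bit raw pixels and the 12 hertz frame rate; the 300 GB drive size is the same guess as above.

```python
sensors   = 368
pix_frame = 2592 * 1944          # MT9P031 frame
fps       = 12                   # frame rate quoted in the documentary
hours     = 16

pixels = sensors * pix_frame * fps * hours * 3600
print(f"pixels per mission: {pixels:.2e}")                       # ~1.3e15

# Raw data volume at 12 bits/pixel (my assumption)
print(f"raw data: {pixels * 12 / 8 / 1e12:,.0f} TB")             # ~1,900, i.e. the ~2,000 TB above

# Bits per pixel available on 32 laptop drives of 300 GB each
storage_bits = 32 * 300e9 * 8
print(f"storage budget: {storage_bits / pixels:.3f} bits/pixel") # ~0.06, roughly the 0.05 above
```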

UPDATE 27-Jan-2013: An alert reader (thanks mgc!) sent in this article from the April/May 2011 edition of Science and Technology Review, which is the Lawrence Livermore National Laboratory's own magazine.  It has a bunch of helpful hints, including this non-color-balanced picture from ARGUS-IS which lets you see the 368 sensor array that they ended up with.  It is indeed a 24 x 18 array with 16 sensors missing from each corner, just as I had hypothesized.
The article mentions something else as well: the Persistics software appears to do some kind of super-resolution by combining information from multiple video frames of the same nearly static scene.  They didn't mention the other two big benefits of such a scheme: dynamic range improvement and noise reduction (hence better compression).  With software like this, the system can benefit from increasing the focal plane to 3.8 gigapixels by using the new sensor with 1.66 micron pixels.  As I said above, if the lens is f/3.5 to f/4.0, they won't get any more spatial frequency information out of it with the smaller pixels, but they will pick up phase information.  Combine that with some smart super-resolution software and they ought to be able to find smaller details.  Question though: why not just go to the MT9F002, which gives you 14 million 1.4 micron pixels?  This is a really nice, fast sensor -- I've used it myself.

The article also mentions 1000:1 video compression.  That's very good: for comparison, H.264 level 4 compresses 60 megapixels/second of HDTV into 20 megabits/second, which is 0.33 bits/pixel or 36:1 compression.  This isn't a great comparison, though, because Persistics runs on almost completely static content and H.264 has to deal with action movie sequences.  In any case, I think the Persistics compression is being used to archive ARGUS-IS flight data.  I don't think they are using this compression in the aircraft.

Monday, August 13, 2012

More vision at 360nm

I thought of two other consequences of birds, especially hawks, seeing ultraviolet light.

The first has to do with scattered light.  The nitrogen and oxygen molecules in the atmosphere act like little dipoles, scattering some of the light passing through.  The amount of light scattered increases as the fourth power of frequency (or the inverse fourth power of wavelength).  The process is called Rayleigh scattering (yep, same guy as the last rule).

Because of that strong wavelength dependence, our blue-sensitive cones receive 3 times as much light scattered by nitrogen and oxygen as our red cones, and when we look up at a cloudless sky in the day, we see the sky is blue.  Here are the response curves for the three types of cones and the rods in human vision.  (That's right, you actually have four-color vision, but you can only see blue-green (498nm) with your rods when it's too dim for your cones to see.)


But now imagine what a hawk sees.  It has another color channel at 360nm, which sees 6 times as much scattered light as red.  When looking up, the sky will appear more UV than blue.  But there is more to it than that.
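A quick sketch of those scattering ratios.  The cone peak wavelengths (roughly 565 nm for our red cones, 445 nm for our blue cones, 360 nm for the bird UV cones) are round numbers I'm assuming for the comparison.

```python
# Rayleigh scattering goes as 1/wavelength^4
def rayleigh_ratio(short_nm, long_nm):
    return (long_nm / short_nm) ** 4

print(f"blue (445 nm) vs red (565 nm): {rayleigh_ratio(445, 565):.1f}x")  # ~2.6, the 'about 3x' above
print(f"UV (360 nm) vs red (565 nm):   {rayleigh_ratio(360, 565):.1f}x")  # ~6.1, the 'about 6x'
```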

Rayleigh scattering is not isotropic.  The dipoles scatter most strongly at right angles to the incoming light.  When you look up at the sky, the intensity of blue changes from near the sun to 90 degrees away.  It's a little hard to see because when looking up you also see Mie scattering which adds yellow light to the visible sky near the sun.  But when you look down, like a hawk does, Mie scattering isn't an issue (instead you have less air Rayleigh scattering and more ground signal).  The overall color gradient from Rayleigh scattering that a hawk sees looking down will be twice as strong as the color gradient we see, because the hawk sees in UV. In direct sunlight, the hawk has a measure of the sun's position whenever it is looking at the ground, even when there are no shadows to read.

The other consequence is axial chromatic aberration.  Most materials have dispersion, that is, they present a different index of refraction to different wavelengths of light.  One consequence of dispersion is that blue focusses in front of red (for lenses like the eye).  I used to think this was a bad thing.  But if your cones tend to absorb light of one color and pass light of the others, and your resolution is limited by the density of cones, a little chromatic aberration is a good thing, because it allows you to stack the UV cones in front of the blue cones in front of the green and red cones, and they all get light focussed from the same range.  I know that retinas have layered stacks of rods, cones, and ganglion cells, but I don't know that any animals, hawks in particular, have actually taken advantage of axial chromatic aberration to stack cones of different colors.  It's certainly something I'll be looking for now.

Sunday, August 12, 2012

Hawk eyed



The resolution of a good long-distance camera is limited by diffraction. The simple rule for this is the Rayleigh criterion:

  theta = 1.22 * lambda / D

Theta is the angular resolution, the smallest angular separation between two bright lines that can be differentiated by the system.

Lambda is the wavelength of light. You can see reds down at 700nm, and blues to 400nm. Interestingly, birds can see ultraviolet light at 360nm. The usual explanation for this is that some flowers have features only visible in ultraviolet, but I think raptors are using ultraviolet to improve their visual acuity.

D is the diameter of the pupil. Bigger pupils not only gather more light, but they also improve the diffraction limit of the optical system. The trouble with bigger pupils is that they make various optical aberrations, like spherical aberration, worse. These aberrations are typically minimized near the center of the optical axis and get much worse farther from the optical axis.  So a big pupil is good if you want a high-resolution fovea and are willing to settle for crummy resolution but good light gathering outside that fovea.  This is the tradeoff the human eye makes.

A human's eye has a pupil about 4mm across in bright light. According to the Rayleigh criterion, human resolution at 550nm should be about 170 microradians. According to a Wikipedia article on the eye, humans can see up to 60 cycles per degree, which corresponds to 290 microradians per line pair. That suggests the human eye is not diffraction limited, but rather limited by something else, such as a combination of focal length and the density of cone cells on the retina.

I wasn't able to find a good reference for the pupil diameter of a red-tailed hawk. Judging from various pictures, I'm guessing it could be smaller than a human pupil, since it appears that the hawk's eyeball is quite a bit smaller than the human eye (absolute scale). This doesn't seem good enough, since hawks are reputed to have fabulous vision. The first reference I found online suggested that hawks have visual acuity that's actually worse than humans.

Suppose that this last study was using paints that were undifferentiated in UV, in particular, around 360 nm. The researchers would not have noticed this. Suppose further that hawks are using 360 nm light for high acuity vision. The diffraction limit of a 4 mm aperture in 360 nm light is 110 microradians. This isn't 8 times better than human vision, but it is sufficient to distinguish two twigs 1 cm apart from 100 meters up.
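Here's a small sketch that reproduces the numbers in this post; the 4 mm pupil for the hawk is, as noted, just my guess.

```python
import math

def rayleigh_urad(wavelength_nm, pupil_mm):
    """Rayleigh criterion, theta = 1.22 * lambda / D, in microradians."""
    return 1.22 * wavelength_nm * 1e-9 / (pupil_mm * 1e-3) * 1e6

print(f"human, 550 nm, 4 mm pupil: {rayleigh_urad(550, 4):.0f} urad")    # ~170
print(f"hawk,  360 nm, 4 mm pupil: {rayleigh_urad(360, 4):.0f} urad")    # ~110

# 60 cycles/degree measured acuity, in microradians per line pair
print(f"60 cyc/deg: {math.radians(1/60) * 1e6:.0f} urad per line pair")  # ~290

# Two twigs 1 cm apart seen from 100 m up
print(f"1 cm at 100 m: {0.01 / 100 * 1e6:.0f} urad")                     # 100
```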

Sunday, May 27, 2012

More on hot-rock geothermal


When you drill a hole in the ground and flush water or CO2 through it, you cool the rock you've drilled through.  The amount of heat energy that comes up from that hole depends on how much rock you can cool.  And that depends on how conductive and permeable the rock is.

To get a feel for how much energy we're used to getting out of a hole in the ground, let's consider another industry that gets energy out of drilling holes in the ground: oil wells in the United States.  Half of US oil production comes from wells producing over 118 barrels of oil per day (ftp://www.eia.doe.gov/pub/oil_gas/petrosystem/us_table.html).  That's about 6.6 megawatts of chemical energy, which can be converted into about 4 megawatts of electricity.  These wells typically run for 20 years.

Geothermal wells will have to produce similar amounts of energy or the cost of drilling the well will make them uneconomic.

To get 4 MW of electricity from rocks that you cool from, say, 220 C to 170 C, you first need about 16 megawatts of heat, since your heat source is cooler than an oil flame and the heat engine it runs will have a lower Carnot efficiency.  Over 20 years, that flow will cool about 100,000,000 cubic meters of rock 50 degrees C.  If your well is 1 kilometer deep, you need to have cooled a region about 350 meters in diameter around the well.  Geothermal on a scale large enough to make a difference will need thousands of these wells separated by 500 meters or more, but connected by hot steam pipelines to bring otherwise diffuse steam to turbines large enough to be cost effective.
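Here's the rock-cooling arithmetic, as a sketch.  The rock properties (density about 2650 kg/m^3, specific heat about 800 J/kg/K) and the roughly 25% heat-to-electricity conversion are my assumed values.

```python
import math

heat_w   = 16e6                     # heat needed for ~4 MW electric at ~25% conversion
years    = 20
energy_j = heat_w * years * 3.156e7 # ~1e16 J over the well's life

rho, cp, dT = 2650.0, 800.0, 50.0   # assumed rock density, specific heat, cooldown
volume_m3 = energy_j / (rho * cp * dT)
print(f"rock cooled: {volume_m3:.1e} m^3")              # ~1e8

# Diameter of a 1 km deep cylinder with that volume
diam_m = 2 * math.sqrt(volume_m3 / (math.pi * 1000.0))
print(f"cylinder diameter: {diam_m:.0f} m")             # ~350
```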

If we are to lose at most 10 degrees C of temperature delta to conduction, the cracks in the rock that carry the flow of fluid cooling it must be separated by less than 25 meters.  If there is significant flow restriction in those cracks, the separation must be closer still.  Rock 1 km down has been under 230 atmospheres of pressure for millions of years, which tends to crush the pores and small cracks the rock would otherwise have.

If the rock is heavily fractured, as it is in the Northern California Geysers field, then water flow through the rock can cool enormous expanses like this.  The Geysers geothermal field has been a great success over many decades, but it has unusual geology.

If the rock is not heavily fractured, then the production of heat from the well will look good at first but then fall off as the rock immediately surrounding the well cools off.  For the investment to pay off, heat production must stay high for two decades or so.

My guess is that using CO2 rather than water as a heat transport fluid might improve the permeability of the rock and make more rock viable for geothermal production.  This might make some areas that would not work with water-based heat extraction economically viable, but I don't think it's going to make hot-rock geothermal work across the country.

Tuesday, January 03, 2012

Geothermal is not renewable

You can group geothermal plants into two types:
  • The kind that pump water underground and take steam or hot high pressure water out.
  • The kind that drill holes in the ground and use conduction to get heat out.
Admittedly, I don't know a great deal about geothermal systems, but I do understand heat flow reasonably well. And geothermal systems are all about heat flow. Here are the problems that I see:

Conduction is an impractical way to move utility-scale amounts of heat through anything but the thin walls of a heat exchanger. For instance, ground temperatures typically rise about 3 C for every 100 meters you go underground. Ground conductivity is about 1.5 watts/meter/kelvin. Multiply those two and get 45 kW/km^2. Remember that utility-scale power means you need hundreds of megawatts of heat. Bottom line: geothermal isn't renewable. It works by cooling down some chunk of rock in place, rather than by converting heat that rises from the earth's core.

You might think that a big hunk of rock can provide a lot of heat for a very long time. For instance, a cubic kilometer of granite, cooling 30 C, provides 2 gigawatt-years of heat. Figure 20% of that gets converted to electricity. Over 30 years, that cubic kilometer of granite will run a 13 megawatt power plant. We're going to need dozens of cubic kilometers.
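A sketch of both of those numbers, using the same assumed granite properties (density 2650 kg/m^3, specific heat 800 J/kg/K):

```python
# Heat conducted up from below, per square kilometer
gradient_k_per_m = 3.0 / 100        # ~3 C per 100 m
conductivity     = 1.5              # W/m/K
flux_w_per_km2   = gradient_k_per_m * conductivity * 1e6
print(f"conducted heat: {flux_w_per_km2/1e3:.0f} kW/km^2")      # ~45

# Heat in a cubic kilometer of granite cooled 30 C
rho, cp, dT = 2650.0, 800.0, 30.0
heat_j  = 1e9 * rho * cp * dT
gw_year = 1e9 * 3.156e7
print(f"stored heat: {heat_j/gw_year:.1f} GW-years")            # ~2

# Electricity at 20% conversion, spread over 30 years
mw = heat_j * 0.20 / (30 * 3.156e7) / 1e6
print(f"plant size: {mw:.0f} MW for 30 years")                  # ~13
```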

You might think that dozens of cubic kilometers would be cheap. Ranch land out in Idaho goes for $360,000/km^2. Assuming you can suck the heat out of a vertical kilometer of rock, the 13 megawatts from that ground are going to bring you a present value of $90 million. Sounds great!

But wait... before you get started, you are going to have to shatter that cubic kilometer of rock so you can pump water through it to pick up the heat. The hydrofracking folks have learned quite a lot about getting fluid out of tight underground formations. I think the useful comparison to make is the value of the fluid extracted. Oil is worth $100/barrel right now, which is $850/m^3. Water from which we will extract 30 C of heat to make electricity at 4 cents a kilowatt-hour at 20% efficiency is worth 28 cents/m^3, ignoring the cost of capital to convert the heat to electricity. There is a factor of 3000 difference in the value of that fluid. Now hydrofracking rocks for heat doesn't have to be as thorough as hydrofracking them for oil, since you can count on ground conduction to do some work for you. But I don't think the difference is going to save a factor of 3000 in the fracking cost.

So, that means geothermal is going to be confined to places where the rock is already porous enough to pump water through. Like the Geysers in Northern California, which is a set of successful gigawatt geothermal plants. I think it's interesting that the output has been declining since 1987.

The problem is thought to be partial depletion of the local aquifer that supplies the steam, because steam temperatures have gone up as steam pressures have dropped. This sounds right to me, but I'll point out that the water in the aquifer is probably sitting in a zone of relatively cool rock which has hotter rock above it. As the aquifer has drained, the steam has to travel through more rock, causing more pressure drop, and thus less steam transport. Where water used to contact rock, steam does now, pulling less heat from the rock, so that the rock face heats up from conduction from hotter impermeable areas.

Wait... how did cool rock end up under hotter rock? The 150 or so gigawatt-years of heat that have been pulled out of the 78 km^2 area over the last 50 years have probably cooled a kilometer stack of rock by about 30 C (or maybe a thinner layer of rock by a larger temperature swing). I'm not at all convinced by the USGS claim that the heat source is the magma chamber 7km down. Assuming the magma surface is at 1250 C (the melting point of granite) and the permeable greywacke is at 230 C, a 78 km^2 area 6 km thick will conduct about 20 megawatts, an insignificant fraction of the energy being taken out.

Refilling the aquifer will help pull heat out of the shallower rock, but that's not going to last decades. To keep going longer they'll need to pull heat from deeper rock, and that's going to require hydrofracking the deeper greywacke.

And that's expensive.

Tuesday, July 26, 2011

Wicked

Just read "Wicked: the Life and Times of the Wicked Witch of the West". This after we saw Wicked in London as part of our summer Europe trip. My cousin Christopher plays the drums and various other percussion pieces in that production.

The musical is a blast. We had great seats, the music and singing were fabulous, the staging sometimes overwhelming, the kids loved it, Chris showed us around a little bit... what fun.

When we got back, we got the music, and now the kids have mostly memorized all the songs. It's slightly disturbing seeing my 4 year old singing "No one mourns the wicked".

Martha got the book out of the library, and we both read it. Like all tragedies, it's frustrating. It's a vastly more complex and subtle story than the musical. If you're reading this but haven't read the book or seen the musical, see the musical first, as it'll be tough to enjoy after the book. And don't read the rest of this post.

SPOILERS.

Like most things that I really like, I wanted the book to be better. In particular, a good tragedy should make me feel the inertia of doom, the sense that the characters are carrying themselves towards their downfall. In the book, there was definitely some of that, but I also got the feeling that doom was coming in the form of spunky little Dorothy Gale, and that the characters were bending their wills to the needs of that other story arc. So that wasn't as impressive. And why the hell couldn't a smart girl like Elphaba convert the tactical (and unsuspected) advantage of being able to fly on a broom into a way to pick off the wizard and his chief lieutenants.

The part that I really liked was how Elphaba's desire to do good was a significant part of what drove her to her doom. I find myself strongly agreeing with the idea that the desire to do good is not good in itself, and can actually lead to evil. If you want to be good you have to actually *do* it.

But there were so many things to love about the book. The dialog, the political machinations, tictocism, Elphaba's reaction to Dorothy asking for the forgiveness that Elphaba herself had been denied... I love reading an author who has insights way past my own.

Thursday, February 17, 2011

Ouch

Ed Lu gave a talk at Google today on his B612 foundation.

He mentioned that the asteroid that caused the K-T extinction was probably 10 miles in diameter, um hummm.... which meant that the top had not yet entered the bulk of the atmosphere when the bottom hit the ocean. That image really got me.

The speed of sound through granite is 5950 m/s, which is substantially less than the speed of an incoming asteroid. Things in low earth orbit go at about 7800 m/s, and Ed said incoming asteroids are around 3-4 times that. So that means that when the asteroid smacks into the earth, there is a really good chance that the back of the asteroid will hit the earth before the shock wave gets to it -- it'll punch all the way into the Earth surface. Koeberl and MacLeod have a nice table here which shows a granite on ice impact needs only 6 km/s vertical velocity to punch all the way in (they neglected water as a target material, an odd oversight since the majority of the earth is covered in water, more or less a solid at these velocities). If the incoming velocity is 25 km/s, which is on the low side of what Ed suggested, then anything striking within 76 degrees of vertical is going all the way in. It seems to me that most impacts would be like that.
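Here's the angle arithmetic as a one-liner: if 6 km/s of vertical velocity is enough to punch all the way in, then at a 25 km/s approach speed the threshold angle from vertical is just acos(6/25).

```python
import math

punch_through_kms = 6.0    # vertical speed needed (the granite-on-ice row of that table)
approach_kms      = 25.0   # low end of the quoted asteroid speeds

angle_deg = math.degrees(math.acos(punch_through_kms / approach_kms))
print(f"anything within {angle_deg:.0f} degrees of vertical punches all the way in")  # ~76
```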

So after the impact, most of the energy is added to stuff below the ground surface. That's 1e25 joules for the K-T asteroid. Enough to melt 5e18 kg of rock, which is 100 times as much mass as the asteroid itself. Figure a small fraction of that will vaporize and the whole mess goes suborbital.

For the next hour you have thousands of trillions of tons of glowing molten lava raining down on everything. Everything that can burn (6e14 kg of carbon in the world's biomass) does so, promptly.

And this asteroid impact thing has actually happened, many times. As Ed says, it's like pushing Ctrl-Alt-Del on the whole world.

Side notes: There is 1e18 kg of oxygen in the atmosphere, far more than necessary to burn every scrap of biomass on earth. The story I read was that the oxygen in the atmosphere came from plants. If so, there must be a lot of carbon buried somewhere: 8.6e14 kg of known coal reserves are less than 1%.

Another interesting point, vis-a-vis ocean fertilization: there is about as much carbon in the atmosphere as in the world's biomass. We'd have to boost the productivity of 1/10 of the world's ocean by a factor of 8, from existing productivity (125 gC/m^2/year), to fix the CO2 problem in the atmosphere in 15 years. Ocean CO2 would take a century or more. That productivity boost is like converting that much ocean into a coral reef! This seems like a lot to me.

Friday, October 01, 2010

Do powerplants use too much water?

Coal and nuclear powerplants make heat, convert some of that to electricity, and reject the rest. They use water, and lots of it, to reject the heat.

The USGS says that thermoelectric powerplants (nearly all coal and nuclear) use 49% of the water withdrawn in the US. That sounds like a lot, and it is. It's also misleading.

92% of the water used by powerplants is used for once-through cooling. That means they suck water from the river, use it to cool their condensers, then pump it back into the river at a somewhat higher temperature. There are legal limits to the temperature they can send back out, and as the intake water temperature rises closer to those limits, they have to pump more water, and eventually shut down the powerplant. This has happened, famously, in France during a heat wave, right when everyone wanted to run their air conditioners.

The other 8% of the water used by powerplants is used in recirculating cooling. In these systems water is used to cool the condensers, but then some of that water is evaporated in those familiar hyperbolic cooling towers, which cools the rest, and the water is cycled around. These systems use a lot less water because they only need to make up the water that evaporates. Of that 8%, about 70% is evaporated and 30% returned to the lake or river it came from.

Since 1990, the US has mostly built gas-turbine powerplants. These reject heat in the form of incredibly hot jet exhaust, and don't need water. But they burn natural gas which has caused us to send our plastics industry to China. I don't think many people appreciate how dumb that was.

New nuclear plants in the US will be either on the coastline or evaporatively cooled, because there is no appetite for increasing the amount of once-through freshwater cooling. And I don't think there will be many evaporatively cooled plants at either greenfield or brownfield sites: Greenfield evaporatively cooled plants require new water rights which are very difficult to secure. Brownfield replacements of older coal fired powerplants will be difficult because nuclear plants are much bigger than older coal plants, reject a lot more heat, and so need a lot more water, getting back to the new water rights problem. That leaves new PWR development for areas with a lot of water (US southeast) and coastlines. [Edit: And any new coastline PWR developments are going to face new hurdles as a result of Fukushima.]

Water rights are one reason why I'm so interested in molten salt reactors. MSR cores and turbines run at higher temperatures than those of pressurized water reactor cores, so they can be air cooled without killing their efficiency (and thus jacking up their costs a lot). Air cooling is a good thing because it removes an entire class of regulatory problems, and thus an entire kind of project risk.

Wednesday, August 18, 2010

Deep Ocean O2 Used by Gulf Oil Spill

About 150 million gallons of oil have spilled into the Gulf. Estimates vary, but something like 50 to 100 million gallons of that oil are now dispersed into the water. Bacteria are supposed to "break down" that oil. What does that mean?

It means they will oxidize the oil. On average, oil has two hydrogen atoms to one carbon atom. CH2 + 1.5O2 => CO2 + H2O, so consuming 14 grams of oil requires 48 grams of oxygen. 50 million gallons of oil is roughly 160,000 metric tons, so consuming it requires about 550,000 metric tons of oxygen.

The oxygen concentration of the sea is around 7 mg/kg. So, 550,000 metric tons of oxygen is all the oxygen in about 80 cubic kilometers of seawater. 80 cubic kilometers in the Gulf of Mexico is still approximately a drop in a 5 gallon bucket.
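Here's the arithmetic, as a sketch.  The crude density of about 0.85 kg per liter is my assumption; it moves the answer by tens of percent, which doesn't change the conclusion.

```python
# Oxygen demand for oxidizing the dispersed oil (CH2 + 1.5 O2 -> CO2 + H2O)
gallons = 50e6
oil_kg  = gallons * 3.785 * 0.85      # assumed crude density ~0.85 kg/L
o2_kg   = oil_kg * 48.0 / 14.0        # 48 g O2 per 14 g of CH2
print(f"oil: {oil_kg/1e3:,.0f} t, oxygen needed: {o2_kg/1e3:,.0f} t")  # ~160,000 t oil, ~550,000 t O2

# Volume of seawater holding that much dissolved oxygen at 7 mg/kg
seawater_kg = o2_kg / 7e-6
volume_km3  = seawater_kg / 1025.0 / 1e9   # seawater density ~1025 kg/m^3
print(f"seawater needed: {volume_km3:.0f} km^3")                       # ~77, call it 80
```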

Bottom line: there is plenty of oxygen, even at depth, to oxidize all the oil spilled.

Monday, July 05, 2010

Lanthanum Phosphate

As you know, I've built this nice new pool. The pool is designed to stay at 85 F for most of the year, and so it has solar panels in their own insulated glass boxes so that it can collect heat even when the outside air temperature is low. Because those panels can get ridiculously hot (338 F) in the sun when water is not flowing through them, they are made of copper rather than plastic as is more commonly done. I've been learning the consequences of that difference.

A couple weeks after filling the pool I added chlorine (hypochlorous acid, HClO), which immediately caused a brown stain on my nice white plaster pool. Copper from the panels had leached into the pool water in the form of copper ions. The HClO oxidized those ions, probably to Cu2O, which promptly came out of solution and bound to the plaster. So, now I have a brown pool. It really pissed me off for a while, but I've grown used to the look and now I actually think it looks good in some places. One of the interesting side effects of having all this copper in the water is that the copper is antimicrobial. The pool has often gone for days with zero chlorine, at 92 F, and with tons of phosphate, and has essentially no algae growth. Without copper that would be a recipe for an algae bloom.

I have these little strips that claim to test the copper content of the water, and that test result drops to zero shortly after I add chlorine. After four or five days it will bounce back up to 0.5 ppm (ppm meaning mg/liter in this case). That means my 38500 gallon pool has 2.5 ounces of copper in it that used to be on the roof. Clearly I can't go on like this for too long, or I'm going to have pinhole leaks in the roof panels. Happily, I've noticed recently that the copper seems to be zero even after a few days of no chlorine, and if this is real, it means the copper has stopped leaching into the water, perhaps because I now have a protective patina on the interior of the pipes.

In my various attempts to keep my white plaster white, I added "chelating agents" to the pool. These things are basically dish soap, and load the water with lots of phosphate (HPO4 2-). The idea is that the phosphate ion will preferentially bind to the copper ions and form Cu3(PO4)2, which is insoluble in water. Presumably this copper phosphate doesn't bind to the plaster but instead can be filtered out. In combination with a low pH this was supposed to reduce the Cu2O in the plaster, then convert it to Cu3(PO4)2, but if that was happening it sure wasn't obvious. I eventually gave up. This exercise left the pool with over 2500 ppb phosphate, which is way too high. When the copper levels recently stopped going up without chlorine, an interesting thing happened: algae! So this weekend I determined that it was time to take out that phosphate.

One takes out phosphate by adding Lanthanum Chloride. This reacts with the phosphate to make LaPO4, which immediately comes out of solution and forms this white fluffy stuff on the bottom of the pool. I tried vacuuming this up -- big mistake. The stuff will plug any filter, instantly. I tried a few times adding lots of DE to the filter, and gave up. That's when I read online that you have to vacuum this stuff out of your pool entirely, just dumping the water to the street. Then I realized I hadn't really plumbed my pool with an option to vacuum to waste. D'oh!

[Update: fixed the plumbing so I can now pump to the street or sewer. Since I have very low-resistance pipes, it looks like I can pump over 80 gallons/minute to the street, which is very impressive to see.]

For a little while I was feeling pretty beaten. The phosphate levels weren't reading any lower, the pool was a mess, I had no plan to get it clean, and my fingernails were gone from cleaning the DE out of the backwash tank. This Lanthanum Phosphate is amazing -- a tiny amount turns DE into a watertight membrane, something like Bentonite clay.

Then I got back to work. I pumped the water out of the spa, then set the valves to bypass both the filter and backwash tank, suck from the pool and return to the spa. With my vacuum in place I was vacuuming the pool into the spa. Before turning on the pump I put a sump pump in the spa and pumped it out into a hose which went to the sewer. Yes, this was pumping phosphates into the city's sewer. I think I pumped out about 2.5 pounds of phosphate, or about one box of dishwashing detergent. I had no copper in the water, nor any chlorine, and my pH was 7.5. It shouldn't be a problem for anything downstream. The vacuum sucked everything off the bottom and we dumped about 800 gallons of water, or $2.40. Even better, my phosphate is now down to 300 ppb, which isn't perfect but it's nice to be out of the unmeasurable range.

In doing the research for this blog post I found out that my tap water already has around 190 ppb phosphate in it, added by Los Altos to control corrosion in copper pipes. So, I need to measure my copper levels now that I have low phosphate to see if I'm back into corroding my solar panels. Hopefully something else is protecting the panels.

Also, I've learned that the patina that can protect copper is soluble in ammonium salts. That's significant because ammonia in the pool is what you measure when you measure "combined chlorine" (really chloramines). It seems I'm going to have to stay on top of that, not just because chloramines are smelly eye irritants, but because the stuff attacks my panels.

I can keep after it with adding chlorine, but my main line of defense was supposed to be a nightly dose of ozone, intended to burn anything organic down to N2, CO2, and water. The ozone is on hold until I can re-cover my DE filter grids in stainless steel mesh, instead of the polypropylene mesh they have now. Meh. More work. Some other weekend.

[Update: stainless steel mesh won't hold diatomaceous earth. I got some mesh samples and tried pouring DE mud on them, and it fell right through. Under the microscope, it's clear that DE grains are just tens of microns across, and so a macroscopic mesh isn't going to do it.

I've read that glass is compatible with ozone, so I'm going to try fiberglass as a filter material to hold the DE, and we'll see how that goes.]

[Update2: BGF fiberglass filter fabric style 421 style 580 holds diatomaceous earth just fine. I had to hand sew it with stainless steel wire, since fiberglass strands are completely useless for sewing, as they fray and break at the slightest provocation. Next up: long term test of fiberglass grid covering to see if it breaks or does some other bad thing.]

Sunday, May 02, 2010

A Great Day

Today my dad and I took my daughter Anya to an event hosted by the UC Berkeley Nuclear Engineering group. I got my BA at Berkeley, and my dad was a post-doc there in what was then called the Nuclear Chemistry group. It was great fun showing Anya around a small portion of the campus where I spent nearly seven years.

We started with lunch and about half an hour of Frisbee on Memorial Glade. After that, Anya (7 years old) spent three consecutive hours in 6 classrooms listening to and participating in discussion and demonstrations about radioactivity and nuclear power.

Anya spent probably 20 minutes of that wriggling around in her seat and the rest of the time she was engaged. The presentations were perfect for someone her age. The students doing the presentations were energetic and interesting. We got to see the old reactor room in Etcheverry Hall which now houses a bunch of interesting experiments. I got to meet a bunch of other parents, and it was fabulous to talk with other people who don't freak out when they discover that every home's smoke detector has a little bit of Americium-241 which sends the Geiger counter up several orders of magnitude once you get it close enough that the air isn't shielding it. (Pop - pop - pop - bzzzzzzzeeeeeeee)

Afterwards Anya and I walked around campus some more, watched some students learning to walk on a tightrope, and then went out to dinner together at Zachary's Pizza. For most of the hour long drive home we talked about how to design an "earth-friendly" town -- things like arranging the houses around a central area for the kids to play in, building at the edge of but not in a forest, etc.

So, basically, I had a perfect day.

Monday, March 29, 2010

Pumping pool water

I've been looking into the costs of pumping water through the filters and valves and pipe work around the pool, and it's quite interesting. For context, you need to understand that we live in California, in the San Francisco Bay Area (known to our utility PG&E as area X). That means that we're going to be using the pool 9 months a year, and the marginal electricity we'll be using for the pool will cost us $0.424 to $0.497 per kilowatt-hour (Those are the charges for >24.2 kWh/day and >36.3 kWh/day). That sounds like a lot of money, and it's going to sound worse as you read below.

In 1997, marginal electricity with a basic residential rate was $0.122/kWh, so the average cost growth since then is 10.0% per year. That's substantially faster than the discount rate of 6% per year that I usually apply to future money. If the cost growth of energy exceeds the discount rate into the future, it means that the energy I spend to pump pool water 10 years from now will cost more in current dollars than the energy I spend today.

You can't extrapolate a growth curve indefinitely into the future, and the price of electricity in current dollars cannot rise without limit. That said, I'm fairly confident that over the next 40 years the price of electricity will at least keep up with a 6% discount rate. That means that I'm predicting that the electricity the pumps burn 40 years from now costs me just as much, today, as the electricity that the pumps burn today.

I want to compare the costs of running the pool with the cost of building it, so I can make decisions about what sort of equipment to install. In the face of electricity that does not discount into the future, I have to put some sort of time limit on how long the pool will be operated. I'm going to pick 40 years. At that point my grandkids are likely to have learned to swim, and there is an about even chance that I'll be dead and won't have to worry about money any more. Over 40 years (9 months a year), each kilowatt-hour burned per day costs a total of $4643 in 2010 dollars. The bottom line is that, had I built my new pool in a standard fashion, the electricity to run it would have cost over $150,000 in present value. That's more than the pool! Instead, I've spent a few thousand dollars improving the efficiency of the plumbing so that my projected present value cost is more like $40,000 (which can be improved further).
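Here's where the $4643 comes from: 40 years of 9-month seasons at the lower marginal rate, with no discounting because I'm arguing that price escalation roughly cancels the discount rate.

```python
# Cost of one kilowatt-hour per day, every pool day, for 40 years (undiscounted)
rate_per_kwh = 0.424            # lower marginal tier
days_per_yr  = 365 * 9 / 12     # pool runs 9 months a year
years        = 40

print(f"${rate_per_kwh * days_per_yr * years:,.0f} per daily kWh")   # ~$4,643
```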

Those of you with astronomical electricity bills, and pools, may be wondering how did I do that?

Let's get started with a simple result that is relatively new for the residential pool industry: slower is better. If I run a pump at 42 GPM for 16 hours, it'll turn over about 40,000 gallons. If, instead, I run the pump at 84 GPM for 8 hours, it'll move the same number of gallons. However, and this is really important, the pump has to push about twice as hard to move water twice as fast. Power is pressure times flow, so pumping twice as much water up twice the pressure head takes 4 times as much flow power. 4x the power in half the time is 2x the energy, and energy is what you pay for. Bottom line: slow the pump down.
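Here's the slower-is-better scaling as a sketch, using the approximation above that the pressure the pump works against scales linearly with flow (real plumbing is closer to flow squared, which makes the case for slowing down even stronger).

```python
def pumping_energy(flow_gpm, hours, k_psi_per_gpm=0.1):
    """Relative energy to run the pump: power = pressure * flow, pressure ~ k * flow."""
    pressure_psi = k_psi_per_gpm * flow_gpm     # assumed linear head-vs-flow constant
    return pressure_psi * flow_gpm * hours      # arbitrary units, fine for a ratio

slow = pumping_energy(42, 16)    # 42 GPM for 16 hours -> ~40,000 gallons
fast = pumping_energy(84, 8)     # same gallons moved twice as fast
print(f"energy ratio fast/slow: {fast / slow:.1f}x")   # 2.0
```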

You should also get the most efficient pump possible. If you don't have solar and so don't need variable head, and can mount the pump below the level of the water surface, then get something like a Sequence 5100. Otherwise get an Intelliflow 4x160, which is what I got.

Slower-is-better isn't the complete answer, because of variable pump efficiency and in-line spring-loaded valves. So most pools won't run their pumps 24 hours a day, as I do.

Note that I talked about flow power in the paragraph above. Flow power is pressure (e.g. psi) times gallons per minute. Try it. Most pumps turn electricity into flow power with an efficiency that varies between 15 and 50%. 15% is the efficiency at low speeds (right before the pump stalls and efficiency goes to zero), and 50% is the efficiency at something near top speed. Variable-speed pumps, like the Intelliflow 4x160, usually hit their maximum efficiency at high RPM settings and high flow rates (but I include a counterexample in my spreadsheet link below). As a result, when you run the pump more slowly, less energy is required to move the water, but the pump takes more electricity to produce each joule of flow energy. You can see the pump power and head curves for the 4x160 on page 47 of the user's manual. And I've extracted the numbers off the chart and done efficiency calculations here. Bottom line: best efficiency is at 30 GPM for a normal pool. I'll be forced up to 42 GPM (at a measured 31% efficiency) because the pool is big.

The analysis gets a little complicated because most pools have in-line spring loaded valves, which should be banned. Many solar installations suggest a check valve between the filter and the water line going up to the roof, because the panels drain at the end of the day and you don't want those panels draining backward, through your filter, taking all the scum in there back out through your skimmer. An innocuous little check valve seems like just the ticket. One wrinkle is that the check valve requires something like 1 psi to open, and since the flow is working against that spring, there is a 1 psi drop across the valve at all flow rates. Another wrinkle: yet another spring-loaded valve, in the heater this time, which drops about 5 psi. I'll get to that later.

1 psi doesn't seem like much until you consider that 40,000 gallons pushed through 1 psi at 31% efficiency is 0.935 kilowatt hours. Still not much? The present value is $4343. It's actually a bit worse than that because the constant pressure drop causes the pump to stall at very low speeds, which changes the best operation point to something faster and thus even less efficient.
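Here's where the 0.935 kilowatt-hours and the $4343 come from: hydraulic energy is volume times pressure, divided by the 31% wire-to-water efficiency, then multiplied by the $4643-per-daily-kilowatt-hour figure from above.

```python
gallons = 40_000
m3      = gallons * 3.785e-3        # cubic meters per day
psi_pa  = 6894.76                   # pascals per psi

hydraulic_j  = m3 * psi_pa          # energy to push that water through 1 psi
electric_kwh = hydraulic_j / 0.31 / 3.6e6
print(f"{electric_kwh:.3f} kWh per day per psi")                # ~0.94

print(f"present value: ${electric_kwh * 4643:,.0f} per psi")    # ~$4,340
```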

My last pool (not my design) ran the pump at about 35 psi. As I said, the present value of each psi is $4343, so if this pump had to do the same thing, the present value would be $150,000. That number is so large that you are thinking that it is funny money. It's not. That is this year's electric bill for $3800, and next year's bill for $4000, and the year after's bill for $4200, and so on. It's committed money: if I turn off the pump, the pool turns into a pond and the City folks come by and suggest how to abate mosquitoes. How did I end up with this problem?

Pool equipment is generally designed for low equipment cost, not low energy cost, because the equipment cost is what most homeowners look at. But when we look at cost of energy to drive this equipment, it becomes clear that most pool equipment design is completely insane. The details will come in following blog posts, but in short:

Part | Cost | Pressure drop @ 42 GPM | Power cost | Alternative | Cost | Pressure drop @ 42 GPM | Power cost
Rooftop pressure relief to drain solar panels | $50 | 8.6 psi for 2-story house | $37,400 | Extra 3-port valve and parallel drain to pool | $400 | 0.1 psi | $430
Gas heater with internal spring-loaded valves (check page 9) | $1800 | 5 psi | $21,700 | Bypass heater with 3" 3-port valve and an actuator | $2050 | 0.1 psi | $400
DE filter with multiport valve (check page 8) | $700 | 4 psi | $17,400 | DE filter with 2 3-port valves and ozonator | $2100 | 1 psi | $4,300
100 feet of 2" pipe | $60 | 1.19 psi | $5,170 | 100 feet of 3" pipe | $110 | 0.173 psi | $751
2" check valve | $45 | 1 psi | $4,300 | 3" 2-port valve with actuator | $250 | 0.1 psi | $430
2" 3-port valve | $45 | 0.5 psi | $2,200 | 3" 3-port valve | $90 | 0.1 psi | $430

I've implemented all but one of these on my pool, and it works: I can pump 14 gallons/minute (nearly twice what is required) up through solar panels on my two-story house with 9.5 psi from the pump, and I can pump 40 gallons a minute through the filter at 6 psi. With a fix to bypass the heater (I didn't see that one ahead of time), the pressure drop from circulating water through the solar panels should come down to 5 psi or so.

If you work at Hayward or Pentair and you are wondering how you might fix up your product line to make it more attractive, let me make the following suggestions:
  • A $1500 variable-frequency pump with a proper volute, achieving 80% efficiency when operating at 30 gallons/minute and 5 psi of head, would be a game changer, as it would cut most pool owners' electric bills in half. It would require 3" ports. Please make the main shaft seal ozone compatible.
  • Rewrite your manuals to show pool installers how to use 3-port valves to bypass the heater and drain solar panels without spring-loaded pressure relief or check valves. I'll have diagrams for this on my blog shortly.
  • A DE filter to go with that snazzy pump, with 3" ports and a valve system with less than 1 psi total drop when clean at 40 gpm. Make the grids ozone-compatible (stainless steel?), and figure out a way to either send the separated air back to the ozone generator or through a catalyst to break down the ozone and nitrogen oxides.
  • Introduce or work with Del to produce an ozone generator which has the ozone passively sucked into the intake of the main pump. Alternatively, figure out how to make a low power air pump that forces the ozone into the high pressure water stream after the filter. The current Del air pump is far too power hungry.
  • 3" sweep 90 degree and 45 degree elbows, with interiors matched to PVC schedule 40 IDs. There is a trick where you twist the flow as it goes through the corner which might improve losses a bit.
  • 3" valves with smooth interior bores. This could be the breakthrough that gets people to take you as seriously as Jandy. The gussets on the valve door interiors now save tens of cents of plastic and cost homeowners hundreds of dollars.
  • Since all the big pipework and valves will take up a lot of space, pay careful attention to how all the bits fit together into the space allowed for existing pool equipment pads. Maybe a replacement for the multi-port valve which is made of 3-inch Jandy-type valves would work.
  • Fix the pool control systems so that a basic control box:
    • can handle 10 valves,
    • connects to (and comes with) a flow meter that works down to 10 gpm,
    • and has thermistor inputs for solar return as well as intake water, so you can calculate how many BTUs are coming off the roof.
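That last item is just the heat balance for water: roughly 500 BTU per hour for every gallon per minute and every degree Fahrenheit of temperature rise. A minimal sketch, with an illustrative flow and temperature split:

```python
# Roof BTU accounting from a flow meter plus two thermistors.
# Water is ~8.34 lb/gallon and 1 BTU warms 1 lb of water by 1 degF, so:
#   BTU/hr ~ 8.34 * 60 * GPM * deltaT ~ 500 * GPM * deltaT

def solar_btu_per_hour(gpm: float, t_return_f: float, t_intake_f: float) -> float:
    return 8.34 * 60.0 * gpm * (t_return_f - t_intake_f)

# Example: 40 GPM coming back 4 degF warmer than it went up is ~80,000 BTU/hr.
print(solar_btu_per_hour(40, 82.0, 78.0))
```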
I think that's enough for now. I've been tweaking this post for 10 months, it's time to post!

Saturday, March 13, 2010

Pool is filled


Thursday was a big day for all of us. Any passers-by seeing the girls running around in bathing suits for a few days now may have guessed the reason: we plastered and then filled the swimming pool.

Monday, March 01, 2010

Good, Bad, and Ugly

The Good:
I just made myself a goat cheese and apple omelette. Yum! It could have used just a little red onion, and the apple chunks were too large. I will definitely get that right next time.

The Bad:
SolidWorks is crashing on me about every 30 minutes right now. This is not helping me get ready for my presentation tomorrow.

The Ugly:
My eyes. I have red eye, which has all kinds of consequences. Like, getting to stay at home and make myself omelettes. Like, not being able to read small text anymore (should be temporary -- being even slightly blind would truly suck). Like, not being able to make my presentation tomorrow. I sure hope my boss does a decent job.

Sunday, February 21, 2010

Bill Gates nails it

http://www.huffingtonpost.com/bill-gates/why-we-need-innovation-no_b_430699.html

His essential argument is:
  • We have some agreement on two goals: 30% reduction of CO2 output by 2025, 80% reduction by 2050.
  • Some countries will not make much reduction, and some countries, like China and India, will expand their CO2 output quite a bit as their huge populations pass through their own industrial revolution.
  • Some portions of our western economies will not reduce their CO2 output easily. (I think this is a minor point.)
  • The former goal might be achieved through conservation and improved efficiency.
  • The latter goal requires that CO2 output from two sectors, transportation and electricity generation, be reduced to zero. Still more will be required, but this is a baseline.
  • Once transport and electricity have been reduced to zero CO2 output, conservation in these areas will not improve our CO2 outputs. This is, for instance, why France doesn't bother subsidizing more efficient electric appliances, as many other countries do -- France's electricity is close to zero CO2, so improved electric efficiency doesn't reduce CO2 emissions.
  • Therefore, reworking the economy to reduce transportation and electric consumption does not help towards the 2050 goal. To the extent that it costs money that could otherwise be spent on zero-CO2 electricity and transport, it frustrates progress towards the 2050 goal.
So, what does it take to get to zero CO2 from electricity and transport by 2050? These two subgoals are tied together: transport must be electrified.

Our transport sector currently burns 146 billion gallons of gasoline and diesel every year. In 2050, assuming an increase of 2%/year in transport miles and a fleet efficiency increase from 17 to 23 MPG, it will consume the equivalent of 248 billion gallons of petroleum. If we replace those vehicles with electric vehicles getting 3 km/kWh, those vehicles will consume 3 billion megawatt hours per year. The Nissan Leaf gets 5 km/kWh, so I think an estimate of 3 km/kWh average may be reasonable.
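The conversion chain is short enough to write down; the inputs are just the numbers in the paragraph above:

```python
# 248e9 gallons of fuel at 23 MPG is a number of miles driven; at 3 km/kWh that
# same driving becomes electrical energy.

gallons    = 248e9
mpg        = 23
km_per_mi  = 1.609
km_per_kwh = 3.0

miles = gallons * mpg                        # ~5.7e12 miles per year
kwh   = miles * km_per_mi / km_per_kwh       # ~3.1e12 kWh per year
mwh   = kwh / 1e3

print(mwh)   # ~3.1e9 MWh/year, i.e. about 3 billion megawatt-hours
```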

So, the big question raised by Gates' insight is, what can deliver energy like that? To my mind, there are two contenders, wind and nuclear.

The first problem is generation. And the second problem is storage, to cover variations in production as well as consumption.

Here is the generation problem:

The US consumed an average of 470 gigawatts in 2008. The EIA predicts annual increases of 2%/year, so that the average might be 1038 gigawatts in 2050, for the same uses we have today.

The additional 3 billion MWh per year needed to run the electric car fleet, if spread evenly through the year, amounts to 350 GW, which isn't really so bad in the context of total electric generation. So the grid in 2050 will have to deliver an average of 1400 GW.
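Spelled out, with the roughly 40-year compounding window that the 1038 GW figure implies:

```python
# Baseline grid growth plus the electric-car load, in average gigawatts.

baseline_gw = 470          # average US consumption, 2008
growth      = 1.02
years       = 40           # roughly 2010 through 2050

baseline_2050 = baseline_gw * growth ** years     # ~1040 GW
ev_average_gw = 3.06e9 / 8760 / 1e3               # 3.06e9 MWh/yr over 8760 h -> ~350 GW

print(baseline_2050 + ev_average_gw)              # ~1390 GW, call it 1400
```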

1400 average gigawatts could come from 1 million 5 megawatt wind turbines spread over 1.2 million km^2 (at 1.2 watts/m^2). Right now, the US has 1.75 million km^2 of cultivated cropland, so switching US electricity and transport to wind would require a wind farming sector nearly as physically large as our crop farming sector. This is conceivable. After all, 150 years ago most farms had a wind turbine for pumping water. However, 150 years ago that turbine was not the majority of the capital on the farm. These new turbines will cost about $5000/acre, compared with the $2100/acre that farm real estate is currently worth. From an economic standpoint, wind farming would be a much larger activity than crop farming.
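Here's how the turbine count and land area fall out; the ~28% fleet capacity factor is my assumption, not a figure from the post:

```python
# Sizing the wind build-out.

average_gw      = 1400
turbine_mw      = 5
capacity_factor = 0.28          # assumed fleet-average capacity factor
power_density   = 1.2           # watts per square meter of land

turbines = average_gw * 1e3 / (turbine_mw * capacity_factor)   # ~1.0 million turbines
area_km2 = average_gw * 1e9 / power_density / 1e6              # ~1.17 million km^2

print(turbines, area_km2)
```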

The turbines have a 30-year lifespan, so the cost is more than just the initial capital expense. By 2050 all of the turbines installed in the next decade will have worn out, and we'd be into a continuous replacement mode. Cost? $5 trillion in capital outlay for the turbines, another $5 trillion for the infrastructure, and around $160 billion a year (present dollars) for worn turbine replacement.

Here's the storage problem:

The morning commute in any major US city lasts for about 3 hours, with most of the activity in the last hour. The evening commute is longer and more centrally distributed. If we have east-west transmission lines capable of moving most of the commute peak power, we can smooth the U.S. commutes into two peaks, each about four hours wide. Even with that transmission capacity, electric consumption during commute hours would be about 500 GW above average.

The current thrust of electric-car research is to improve the batteries so significantly that the cars can be charged overnight and the batteries can provide all the power needed for daytime use. That's about 18 kilowatt-hours per car, which sounds possible. There is a problem, however: there simply isn't enough material to make these batteries for all our cars. [Edit: I was wrong, there is. Lead-acid batteries require 240 kg of lead for 18 kWh. Lithium-ion batteries require 8.5 kg of lithium for 18 kWh.]
  • Lead-acid batteries would require 60 million metric tons of lead for the 254 million U.S. cars. World production of lead is around 4 million tons/year, and total reserves are around 170 million tons.
  • Lithium-ion batteries store 75 watt-hours per pound, and can use about 60% of that (although a five-year life is a goal rather than a deliverable). 18 kWh would require 400 pounds of battery per car, which is physically possible. The U.S. fleet would require 2 million tons of lithium. Total recoverable worldwide lithium is 35 million tons.
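The fleet totals follow directly from the per-car figures in the edit above:

```python
# Fleet totals for battery materials, using the per-car figures above.

cars = 254e6

# Lead-acid: ~240 kg of lead per 18 kWh pack.
lead_tonnes = cars * 240 / 1000           # ~61 million metric tons

# Lithium-ion: 18 kWh usable at 60% depth of discharge is a 30 kWh pack;
# at 75 Wh/lb that's ~400 lb of battery, containing ~8.5 kg of lithium.
pack_kwh  = 18 / 0.60                     # 30 kWh
pack_lb   = pack_kwh * 1000 / 75          # ~400 lb
li_tonnes = cars * 8.5 / 1000             # ~2.2 million metric tons

print(lead_tonnes / 1e6, pack_lb, li_tonnes / 1e6)
```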
The most economical way to store electricity is pumped hydro; the cars would then pick up their electricity from metal strips in the freeway rather than from big batteries. Pumped hydro is at least plausible: the energy for the commute surge could be stored by pumping water from Lake Ontario back up to Lake Erie, raising Lake Erie by 60 cm twice a day.
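A rough check on that 60 cm: Lake Erie's surface area (~25,700 km²) and the Erie-to-Ontario drop (~99 m) are my figures, and pumping and turbine losses are ignored:

```python
# How much energy a 60 cm swing in Lake Erie represents.

area_m2 = 25_700e6       # Lake Erie surface area, ~25,700 km^2 (my figure)
rise_m  = 0.60
head_m  = 99             # Erie-to-Ontario elevation difference (my figure)
rho, g  = 1000, 9.81

energy_j   = rho * g * head_m * (area_m2 * rise_m)
energy_gwh = energy_j / 3.6e12

print(energy_gwh)   # ~4,200 GWh -- roughly a 500 GW commute surge for 8 hours
```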

Another way to achieve this goal is with nuclear reactors. Thousands of them. A nuclear electric infrastructure would have five big advantages over a wind infrastructure:
  1. It would cost far less to build.
  2. It would last 60 years or more.
  3. It would not be weather dependent.
  4. It would not require secondary storage (still more cost).
  5. It would have far less environmental impact (no lakes with tides, no dead birds).
And the biggest advantage of all: it could keep getting bigger.

However, if we are to scale up the existing fleet of 104 reactors by over an order of magnitude, some things are going to have to change.
  • Nobody really knows how much it will cost to build the next American reactor. We know that it costs the Koreans and Chinese $1.70/watt, and we know that it used to cost about that much in the U.S. If we build thousands of reactors, the cost will drop back into this range or below (a rough sketch of what "thousands" means follows this list).
  • Most of the new power plants will have to be cooled by seawater or air rather than fresh water, as is most common today. We do not have enough fresh water to cool thousands of plants. Quite the contrary: by 2050, electric power and waste heat from reactors will be used to desalinate seawater for residential use, as is already the case in Florida and some California municipalities.
  • Typical reactor sites will have a dozen or more gigawatt-class reactors, rather than the two or three as is common today. Far from being "extra large", gigawatt reactors are right-sized.
  • Either very large new deposits of uranium will be discovered, or most reactors will be breeder reactors.
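A rough sense of what "thousands of reactors" means, using the numbers above; the 90% capacity factor is my assumption:

```python
# Rough scale of the nuclear build-out.

average_gw      = 1400
capacity_factor = 0.90          # assumed reactor fleet capacity factor
cost_per_watt   = 1.70          # $/watt, the Korean/Chinese figure above

capacity_gw = average_gw / capacity_factor          # ~1,550 GW of nameplate
reactors    = capacity_gw / 1.0                     # ~1,550 gigawatt-class units
capital     = capacity_gw * 1e9 * cost_per_watt     # ~$2.6 trillion

print(reactors, capital / 1e12)
```

Spread over the four decades to 2050, that capital is the same order of magnitude as the $100 billion/year domestic market mentioned below.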
Bill Gates knows that the nuclear option is going to be the one we eventually choose, and he has a company, TerraPower, developing a new reactor which he hopes will cash in on the $100 billion/year domestic market for nuclear plants. I wish him the best.

Friday, February 05, 2010

Prediction, reviewed

In December 2008, Obama tapped Stephen Chu to be the new Secretary of Energy. This put me in such a good mood that I made a bunch of "predictions", things that might be done right. Maybe these were more along the lines of wishful thinking.

Somehow, all this wishful thinking no longer seems wishful.
  • Yucca Mountain shutdown. They did it! The idea of Yucca Mountain was to build a geological repository for spent nuclear fuel. Sounds good, except:
    • Nevada didn't want everyone else dumping their waste in Nevada.
    • The stuff they wanted to bury was spent nuclear fuel from our light water reactors. This stuff is physically hot! These reactors fission hardly any of their fuel and breed almost as much non-weapons plutonium as they burn uranium. [Edit: it's actually the fission products that make most of the heat, and so it doesn't matter that the reactors aren't fuel efficient.  My bad.]  As a result, the stuff that comes out pumps out prodigious amounts of heat for decades, making it very difficult to cool via conduction through solid rock. Storing it aboveground in air cooled containers next to the reactor is a much better idea.
  • As an addendum to the Yucca Mountain thing getting shut down, they've appointed a commission to come up with a new nuclear policy for the US. Per Petersen is on that commission. He is a professor at UC Berkeley who understands the advantages of a fluid-fuelled reactor, and is also doing really good research in how to get there in a practical manner.
  • NASA just canned Ares-I, Ares-V, and Orion in favor of spending that development money on multiple private-sector launch systems that will ferry people to the ISS. What a great idea! This is an astounding choice, one that I talked about four years ago in one of my most popular blog posts ever: Why Merlin 2?
  • Mandating short-term demand management for air conditioners and other heat pumps. This hasn't happened, but at this rate, I guess I won't be shocked if it does.
  • Standardizing rechargeable batteries. In particular, I had in mind cellphones. While this itself hasn't happened, Europe has standardized the cellphone charger, which is a step in the right direction.
I suppose I should have some new wishes. Let's see:
  • I'd like to see at least four of those umpteen nuclear plant license applications actually turn into plants being built. I'd like to see hard hats and concrete.
  • I'd like to see the Sierra Club or Greenpeace change to a pro-nuclear stance.
  • I'd like to see the Federal government "make jobs" on projects that make long-term wealth, not just jobs.

Monday, January 11, 2010

Fountain Advice

So, if I were to design another fountain, how would I do it differently? (This is for you, Diane.)
  • 8 inch flow straighteners. There is no sense in messing around with Reynolds numbers: you want to be around 2000, and that means you need a huge internal cross section (see the sketch after this list). 8 inch PVC is actually reasonably easy to get. SCP in Santa Clara has it and all the fittings you need.
  • Design gimballed nozzles from the beginning. Trying to get this right with careful assembly just did not work, the tolerances are far too tight. I think it can be done with screws between the gimballing nozzle and the glued-in support, so that you could adjust the angle after building it.
  • Rebar connecting the inside and outside rebar curtains. We have a crack all the way around our hot tub which I'm sure is going to make its way through the tile some day.
  • I would build all the fountain jets to a fixture, rather than just half of them. The fixture worked really well and would have worked even better if I'd designed it to be independent of swelling due to moisture. This can be done -- all the surfaces that locate plumbing have to be on radial lines to the center.
  • Make the fixture locate a center #4 rebar spike, and drive the center spike at least 18" into the ground, and leave it in place while shooting the gunite. This will give the gunite crews something to locate off when they are drawing their circles. One problem that we had was that we kept re-finding the center of the hot tub, and as a result the circles for the plumbing and the circles for the tile and gunite are not concentric.
  • I would have the guys doing the gunite get the gunite surface level and ready for tiling. We spent a lot of time levelling that gunite out.
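On the flow-straightener item at the top of this list: the pipe-flow Reynolds number is Re = 4Q/(πDν), and it drops in proportion to the bore. The 5 GPM per-jet flow below is an illustrative number, not our fountain's actual figure:

```python
# Reynolds number in a round pipe: Re = 4*Q / (pi * D * nu), with Q in m^3/s,
# D in meters, and nu ~ 1e-6 m^2/s for water.
from math import pi

NU_WATER   = 1.0e-6       # kinematic viscosity of water, m^2/s
GPM_TO_M3S = 6.309e-5

def reynolds(gpm: float, diameter_in: float) -> float:
    q = gpm * GPM_TO_M3S
    d = diameter_in * 0.0254
    return 4 * q / (pi * d * NU_WATER)

print(reynolds(5, 2))   # 2" pipe at 5 GPM: Re ~ 7,900 -- turbulent
print(reynolds(5, 8))   # 8" pipe at 5 GPM: Re ~ 2,000 -- about where you want it
```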
There were also a couple of things we've learned about the tiling which would have saved us a bunch of time had we known them a year ago.

The glass tile is 1/4" thick, and can be set with just 1/8" of thinset, but this isn't the stackup you want. There are two problems. First, the thinset shrinks, a lot, as it cures, and this tends to bend and eventually break the tile. Second, the plaster guys want the plaster to be 5/8" thick, not 3/8" thick. They can feather down to 3/8", but they don't like it, as explained below.

So, you want to put down on the gunite 1/4" of some kind of low-shrinkage mortar, and screed it so that it is flat. This is the layer that takes out all the unevenness in the wall. However, you can't be sure it's going to stick really well to the gunite. So, the stackup we used under the last mosaics ended up being:
  • 2 coats of Hydroban, sticking out at least 1" past the rest of the stack. This forms a structural watertight barrier. The principal issue being protected against is water leaking through the cold joint between the plaster and the tile, and from there into the gunite. Hydroban is expensive and only comes in 5-gallon buckets, which is enough to cover a ridiculously large area. I think they are trying to make sure you use a lot.
  • Some thinset, as thin as it can be, to adhere the mason mix to the hydroban.
  • 1/4" mason mix (either "deck mud" for the tile on the pool bottom, or "fat mud" for the tile on the walls), screed to be dead flat.
  • After that all sets up (ideally about 4 hours, so it gets a chance to shrink but isn't fully hard; think about covering it with plastic to keep the water in), a super-thin layer of thinset on both the mason mix and the back of the tile.
  • Blue tape out the wazoo for anything on a wall.
  • Then cut the stiff but not yet really hard mason mix away from the edge of the tile. We never let the mason mix set up so hard that it was sticking to the hydroban tightly, so this was pretty easy to do without nicking the hydroban.
On our dam wall, we had to build up the wall top with an angle to hold our teflon strip. The easy way to do this would have been to just do it with wall mud, screed with one of those adjustable angle things running along a strip screwed to the side of the wall.

Now back to the reason the plaster guys want to put down 5/8" of material. For any kind of exposed-aggregate surface, they shoot the mix, then trowel it. The troweling is usually done (on a smooth plaster finish) to bring the "cream" to the surface, but for an exposed-aggregate surface they are after the side effect, which is to compact the aggregate below the surface. Then, when they wash away the surface, they expose more tightly-packed aggregate. Since cement, but not aggregate, erodes from chlorine attack and mechanical wear, it's better to have a surface with more aggregate.

The reason they don't simply mix more aggregate into the mix before they shoot it on the wall is that, when the aggregate and sand grains are randomly oriented, they simply can't pack densely enough. If you subtract sand/cement/water, you don't end up with more aggregate in the as-shot mix, you end up with more air. And air is no good because the plaster has to be watertight (and dense and strong). Once the mix is on the wall, they work it with the trowels, which helps settle the aggregate and sand grains together more tightly. This is why the quality of an exposed aggregate surface has a lot to do with the skill of the guys doing it.

In looking at the samples provided, it's clear to me that most folks aren't actually attempting to get a maximum packing of aggregate, and I don't understand why. Why don't they mix larger and smaller aggregate together, so that the small aggregate packs in the spaces between the larger aggregate?