Postcards from a distant moon

While digging around Unmannedspaceflight.com (which is full of really cool images of the other bodies in the solar system), I got inspired to try to find some of the images that I used to work on when I did my Ph.D. at Lancaster University all those years ago. Fortunately I hadn't lost them, as I'd initially suspected – they're all safely backed up online now – so I'm going to show some of them off in this and later posts and explain a bit about the story behind them.

You’re probably used to seeing planetary images on the news or in an article – a nice panoramic mosaic taken by Spirit or Opportunity on the surface of Mars, or an image of some terrain on a distant moon – but there’s actually quite a lot of work that goes into making these images presentable (and scientifically useful). You can get a lot of the raw images online from NASA nowadays, and new images are being released frequently (for example, head over to the Cassini raw images website, and click “browse latest 500 images” to see what’s hot off the press from the Saturnian system). While these raw images are useful for basic visual examination and interpretation, a lot of the time one needs to process the images somewhat before they’re usable for scientific purposes.

Let’s take the example that I’m going to show you here – a global mosaic of Jupiter’s largest moon Ganymede that I lovingly hand-made back in 1999 (click the image to see the full-size mosaic in all its glory):
[Image: VGR2 Ganymede OBV antijovian]

This mosaic shows the anti-jovian hemisphere of Ganymede – like Earth's moon, Ganymede is tidally locked to Jupiter, which means that one side (the "sub-jovian hemisphere") always faces the planet. We're looking at the opposite side here, the one that always faces away from Jupiter – equivalent to what we'd call the "far side of the moon" if we were talking about Luna. The mosaic is presented as if we were looking at the globe of the planet, with the north pole at the top of the image and the south pole at the bottom (though we can't actually see either here because they weren't covered in the mosaic), and the equator running from left to right across the middle. You can see that Ganymede is brownish in colour, with lighter and darker brown areas (the large dark area at the top right is Galileo Regio, and the lighter linear strip marking its western border is called Uruk Sulcus), and whiter areas where asteroids have smashed into its surface to make craters and expose relatively fresh ice (e.g. Osiris crater at the bottom). I'll probably get on to talking about Ganymede itself more in a later post, but today I want to talk about the epic image processing that went into making the mosaic (if you want to look at some other mosaics of Ganymede that I made, check out my Ganymede gallery on Flickr – I'll probably talk about the Voyager 1 mosaics in a later post).

This mosaic is made up of 18 separate images (some of which you can see as thumbnails in their original format here), taken by the Voyager 2 spacecraft way back in 1979. Now, space probes don't actually take colour images – they take greyscale images through various camera filters, which are then combined (on the ground) to make a colour image. The same thing happens in a modern handheld digital camera – light passes through red, green, and blue filters (or a Bayer filter made up of all three), and the camera combines the results to make the colour image. Conversely, a digital colour image can be broken down into red, green, and blue channels, which all look slightly different because of the way the subject reflects light in the red, green, and blue parts of the visible spectrum (if you've got Photoshop, you can play around with this by opening up a photo, going to the "Channels" window, and turning some of them on or off).
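
If you haven't got Photoshop, a few lines of Python will do the same trick. Here's a minimal sketch using the Pillow imaging library – "photo.jpg" is just a placeholder for whatever colour photo you have lying around:

```python
# Split a colour photo into its red, green, and blue channels and save
# each one as a greyscale image, like toggling channels in Photoshop.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
red, green, blue = img.split()  # three greyscale images, one per channel

# Each channel looks like a slightly different black-and-white photo,
# because the subject reflects red, green, and blue light differently.
red.save("photo_red.png")
green.save("photo_green.png")
blue.save("photo_blue.png")
```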

So to make a colour image from Voyager's greyscale images, we have to combine three greyscale images by putting them in the red, green, and blue (R/G/B) channels. Ideally, to get a "true colour" image, you would take three greyscale images of a target in rapid succession (or even at the same time) – one through a Red filter, one through a Green filter, and one through a Blue filter – and these would be directly equivalent to the R/G/B channels in the colour image. Unfortunately, Voyager's vidicon cameras didn't come with all of those filters – they had an Orange filter but not a Red one, plus Methane (infrared), Violet, and Ultraviolet filters whose responses peak at wavelengths in (and just beyond) the visible spectrum. This means we can't get a truly accurate representation of the colours we would see with our own eyes, but we can get close if we have Orange, Green, and Blue images. We can also make other combinations, such as Orange/Blue/Violet or even Methane (IR)/Green/Ultraviolet if we have images from those filters, but those are progressively more "false-colour". In each case, we put the reddest filter in the Red channel of the colour image, the middle filter in the Green channel, and the bluest filter in the Blue channel – which is what I did in this mosaic: I combined Orange, Blue, and Violet images to make a slightly false-colour view.
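
In software terms, that channel assignment is just a merge of three greyscale images into one colour image. As a rough illustration (the real work was done in ISIS and Photoshop, and these file names are hypothetical stand-ins for calibrated frames that have already been aligned), here's how the Orange/Blue/Violet combination would look with Pillow:

```python
# Combine three greyscale filter images into one false-colour image:
# reddest filter -> R channel, middle -> G, bluest -> B. The input
# frames must be the same size and already co-registered.
from PIL import Image

orange = Image.open("ganymede_orange.png").convert("L")
blue   = Image.open("ganymede_blue.png").convert("L")
violet = Image.open("ganymede_violet.png").convert("L")

false_colour = Image.merge("RGB", (orange, blue, violet))
false_colour.save("ganymede_false_colour.png")
```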

The other thing I had to do was use the ISIS image-processing software to import the images, calibrate them (remove the dots called reseau marks and correct for known distortions in the camera optics), correct the camera pointing (itself an epic tale), find enough matchpoints between each pair of overlapping images in the set, reproject them, account for the way light reflects from the target's surface, and then stitch them all together to make a single mosaic – and I had to do all of that separately for the images taken through each filter I was using.
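
To give you a feel for the shape of that pipeline – and only the shape: none of the function names below are real ISIS programs, they're hypothetical placeholders with pass-through stubs so the skeleton runs – here it is as a Python sketch:

```python
# Hypothetical skeleton of the per-filter pipeline described above.
# Every function is a stand-in for the corresponding ISIS step.
def calibrate(img):
    return img  # remove reseau marks, correct known optics distortions

def fix_pointing(img):
    return img  # correct the recorded camera pointing

def reproject(img):
    return img  # map-project the image onto a common grid

def photometric_correction(img):
    return img  # account for how light reflects from the surface

def stitch(images, matchpoints):
    return images[0]  # lay the images over one another using matchpoints

def build_filter_mosaic(raw_frames, matchpoints):
    # run every frame through the whole chain, then stitch the results
    processed = [photometric_correction(reproject(fix_pointing(calibrate(f))))
                 for f in raw_frames]
    return stitch(processed, matchpoints)

# One mosaic per filter; the three mosaics are later combined as R/G/B.
orange_mosaic = build_filter_mosaic(["frame_1", "frame_2"], matchpoints=[])
```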

Finding the matchpoints was a hugely time-consuming and incredibly tedious task, since I had to do it by hand (if you were at a rich US university you could use a separate IDL program to do it automatically, but that was horribly expensive and completely unaffordable for us – so we did the next best thing). Matchpoints are pairs of (x,y) co-ordinates of features that are visible in two overlapping images – by finding those and telling ISIS where they are in each image pair, the program can lay the images on top of each other so that they match up properly when it's mosaicking them (modern cameras with a "Panorama" mode do the same thing automatically: take three overlapping photos and they're stitched together into a continuous mosaic). Somewhere at home I have a notebook that is completely filled with (at least) three pairs of x and y pixel co-ordinates for each overlapping image pair, which I would then meticulously enter by hand into a text file to pipe into ISIS – this mosaic alone had a total of 81 matchpoints in it (across all of the filters)!
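
To make the idea concrete, here's a toy example of what the matchpoints buy you: given pairs of co-ordinates for the same features in two overlapping images, even the simplest possible fit (a pure translation) drops out as the mean offset. The co-ordinate values are invented for illustration, and ISIS's actual mosaicking is of course rather more sophisticated:

```python
import numpy as np

# Three matchpoints: the same surface features located in image A and
# in the overlapping image B (all values made up for this example).
points_a = np.array([[412.0, 118.0], [530.0, 307.0], [365.0, 402.0]])
points_b = np.array([[113.0, 123.0], [229.0, 311.0], [ 67.0, 408.0]])

# For a pure shift, the least-squares translation is just the mean offset.
dx, dy = (points_a - points_b).mean(axis=0)
print(f"shift image B by ({dx:+.1f}, {dy:+.1f}) pixels to line up with A")
```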

Once that was all done, the three separate mosaics taken through each filter could be combined in Photoshop to create the resulting (false) colour image, which is what you see here (the magenta/yellow/green strips are where the images in each channel don't quite cover each other – if you have, say, a red and a blue component without the green one, the image looks like it's tinted magenta).
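
If you're curious, you can reproduce that tint in a few lines – here's a tiny numpy/Pillow demo (nothing to do with the original processing) where the green channel's coverage stops short of the right-hand edge:

```python
import numpy as np
from PIL import Image

# A neutral grey 100x100 image, except the green-filter "mosaic" only
# covers the leftmost 80 columns – the uncovered strip renders magenta.
mosaic = np.full((100, 100, 3), 140, dtype=np.uint8)
mosaic[:, 80:, 1] = 0  # no green data in the rightmost 20 columns
Image.fromarray(mosaic).save("tint_demo.png")
```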

Now, I made this mosaic back in 1999. I can't exactly recall the specs of the computer we were using – judging by the specs for computers from that era that I can find on the net, it could have been a 400 MHz Pentium III (though more likely whatever came before that) with 128 MB of RAM and a 20 GB hard drive, running Red Hat Linux (the only OS that the ISIS image-processing software would run on). We thought that was pretty awesome at the time, but it's quite mind-boggling to think that I'm currently typing this entry on a home PC that back then would have been called an honest-to-god supercomputer (2.83 GHz Intel Q9550 quad-core CPU, 4 GB DDR3 RAM, and a 500 GB hard drive)! The (estimated) specs may help you appreciate just how big a deal making this mosaic was at the time – I'm pretty sure I had to leave the computer running overnight to stitch everything together (whereas today it'd probably be done in under 30 minutes), and sometimes it didn't work because the computer ran out of scratch memory or the matchpoints weren't quite right!

Of course, as processing power and hard drive space have increased over time, all this has become less time-consuming (though the process hasn't actually changed all that much) – but spare a thought for all the poor saps who had to make these mosaics in the 1970s, '80s, and '90s ;). I will say, though, that I wouldn't have gone to all that effort if I didn't think the results would be worthwhile, and it was rather satisfying to see the final mosaic at the end – as it is, this particular mosaic made it into my thesis as well :).

In my next post, I’ll show you what the other side of Ganymede looks like (as seen by Voyager 1) and demonstrate the difference that filters make to the appearance of the image.

6 Replies to “Postcards from a distant moon”

  1. Very cool! I’m inspired to post some images I took during my Ph.D. work, but unfortunately all I got were lousy spectra, no actual images.

    The odd thing is I now work in image processing, and many of the people here specialise in aligning images, both in different spectral channels and for mosaicing. I could probably run your original raw images through some code I have just lying around and have them aligned within a few seconds. 🙂

    1. Yeah, it's kinda annoying that handheld cameras can just do this on the fly now ;). I wonder if there is any standalone Windows software that can mosaic any images entered into it (and whether it'd work on these).

  2. Thank you for that great explanation of the tedious process that goes into making these gorgeous images. Also, thank you for investing the time into creating these images. Without people like you, we, the average public, would not be able to appreciate the beauty of what is in our universe.