Tag Archive for 'image processing'

Blue Saturn and Rhea Transit

As Emily points out in this post on the Planetary Society blog, it can be fun to have a look through the Recent Cassini Images page and see what’s arrived from Cassini recently. So here are a couple of interesting images I found in today’s batch.

First, here’s a nice picture of Saturn with its satellite Rhea passing in front of it. This is “true colour” (taken with red, green and blue camera filters), though since the component images haven’t yet been calibrated and corrected for camera distortions they’re not quite what we’d see with our own eyes – but it’s close. Saturn’s a little yellowish, but there’s not much colour visible. The coloured ‘afterimages’ of Rhea are caused by its motion across the field of view (from right to left) while each component picture was being taken. Below the ringplane you can see a foreshortened dark dot on Saturn – that’s the shadow of another moon passing between Saturn and the sun, so if you were in the cloud deck at that location you’d see a solar eclipse! The component images are: 65795 (blue), 65794 (green), and 65973 (red).
This next one’s kinda fun – Saturn’s gone blue! It’s actually a false-colour image, made by using a Methane2 (727nm) filter instead of a Red filter (649nm) in the red channel of the image. The Methane2 filter is sensitive to near-infrared wavelengths, which lets us see the details in the cloud deck more clearly (compare with the true colour image above, and you’ll notice that you can’t see the banding in the atmosphere there). Because the Methane image has more varied contrast than the Green and Blue images, it also tints the planet a rather nice shade of blue – it’s amazing how much of a difference that 78 nanometres makes! This is why it pays to have a variety of different filters on a spacecraft. The component images are 65788 (blue), 65789 (green), and 65790 (methane2).

The neat thing is that the Cassini Images site gets updated pretty much on a daily basis, so there’s always something new to look at there!

Seeing in a different light.

In my last post I talked about the fun I had making huge mosaics of Ganymede, and showed off one of Ganymede’s anti-jovian hemisphere made from images taken by Voyager 2. Today I’m going to show you the side of Ganymede that permanently faces Jupiter, imaged by Voyager 1 when it passed through the Jupiter system in January 1979, six months before its sister ship arrived there.

So, without further ado, here it is – the Subjovian hemisphere of Ganymede, again mosaicked by me in 1999 (click the image to see the full-size mosaic in all its glory):
VGR1 Ganymede OBV subjovian

This mosaic shows Perrine Regio (the dark blocks of terrain at top left) and Barnard and Nicholson Regiones (the dark areas in the central part of the hemisphere). The bright ray crater Tros is visible in the bright terrain of Phrygia Sulcus, between Perrine and Barnard Regiones. You can also make out the “polar caps” of Ganymede, visible as the paler terrain at the north and south poles (at top and bottom of the image). If you keep going west around the satellite from Perrine Regio, you’ll come to the eastern edge of Galileo Regio, which was the large area of dark terrain seen at the top right of the Voyager 2 image in my previous post.

This mosaic was made using the same filters as the Voyager 2 mosaic, but if you look closely you’ll see that the top-left part of Ganymede looks blurry. This is due to the motion of the spacecraft as it was taking the pictures. Voyager was moving pretty fast as it travelled through the Jupiter system (I’m not sure what its exact velocity was at the time, but right now Voyager 1 is travelling at about 17 km/s relative to the sun!), and that had to be compensated for when taking pictures by rotating the camera on its scan platform or by rotating the whole spacecraft.

Another issue is that the solar illumination drops as the inverse square of the distance from the sun. This means that if you go twice as far from the sun as the Earth, the illumination there will drop to a quarter (1/4) of what it is at the Earth’s distance (conversely, if you go towards the sun to half the Earth’s distance, the sun will appear four times brighter). At Jupiter’s distance from the sun – just over five times further out than Earth – the sunlight is about 27 times dimmer than at Earth. I don’t think this would actually be all that noticeable to the human eye, and the sun would still probably be bright enough to blind us if we looked at it directly without any protection, but the dimmer illumination does make a difference for cameras, requiring longer exposures – these particular images were taken with 360 second (6 minute) exposures.
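The inverse-square relationship is simple enough to sketch in a few lines of Python (Jupiter’s ~5.2 AU distance is an approximation):

```python
# Relative solar illumination as a function of distance from the Sun,
# following the inverse-square law described above.

def relative_illumination(distance_au):
    """Sunlight intensity relative to Earth's, with distance in AU (Earth = 1.0)."""
    return 1.0 / distance_au ** 2

print(relative_illumination(2.0))        # 0.25 -- a quarter at 2 AU
print(1.0 / relative_illumination(5.2))  # ~27x dimmer at Jupiter's ~5.2 AU
```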

Despite the fast-moving camera (and spacecraft) and the long exposure times, most of the images taken by the Voyagers actually came out really sharp, which is a testament to the skill of the people who planned the images and engineered the spacecraft – but there were still a few cases (like this one) where things didn’t quite go according to plan and the compensation wasn’t enough, and that’s why some of the images are blurry. Unfortunately there’s no way to fix this after the fact.

What about other filter combinations, though? One simple trick I tried was to duplicate the Orange mosaic (changing its brightness and contrast to more closely match the Blue mosaic) and put the modified Orange mosaic in the green channel of the image instead of the blurry Blue one. The result is shown below:
VGR1 Ganymede OOV subjovian

While this does sharpen the mosaic (since we’ve got rid of the blurry parts of the image), it’s purely for aesthetic purposes – it’s not accurate or useful for scientific purposes at all (you might also notice that the overall colour of Ganymede appears more yellow-tinged than in the OBV mosaic).
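That channel substitution can be sketched in numpy. The arrays below are random placeholders for the real mosaics, and matching the mean and standard deviation is just one crude stand-in for the manual brightness/contrast tweaking I actually did:

```python
import numpy as np

def match_brightness_contrast(source, reference):
    """Rescale `source` so its mean and standard deviation roughly match
    `reference` -- a simple approximation of a brightness/contrast tweak."""
    src = source.astype(float)
    ref = reference.astype(float)
    matched = (src - src.mean()) / src.std() * ref.std() + ref.mean()
    return np.clip(matched, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
orange = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # placeholder mosaics
blue   = rng.integers(0, 256, (64, 64), dtype=np.uint8)
violet = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# Red channel: Orange; green channel: tweaked Orange (instead of the blurry
# Blue); blue channel: Violet -- the "OOV" version of the mosaic.
oov = np.dstack([orange, match_brightness_contrast(orange, blue), violet])
```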

What about true colour? You might recall from my previous post that Voyager’s cameras don’t have a Red filter, so the closest that we can get to true colour is an Orange/Green/Blue combination. It so happens that Voyager 1 did take some Green images in this sequence, but unfortunately they don’t cover the whole hemisphere. But at least we can see something close to what Ganymede might look like to human eyes, and that’s shown in the next mosaic:
VGR1 Ganymede OGB subjovian

We’re interested in the central part of the image – running from top to bottom – which is covered by the Orange, Green, and Blue filters. The rest of the image looks tinted magenta and dark green because the Green filter images don’t cover those areas, so we’re just seeing those parts of Ganymede through the red and blue channels of the image. The colour difference is quite dramatic – in the OBV mosaics Ganymede looks a lot browner, whereas in the OGB mosaic it has a more greyish/beige colour. But that’s closer to the true colour of Ganymede that we would see with our own eyes if we were there.

It occurs to me that I might be able to write a program to interpolate between the filters and give a more-accurate-but-still-simulated “true colour” view – so as an example I could take an existing Green image and Violet image, and linearly interpolate a Blue filter image (since Blue is about 43% of the way between Violet and Green in the spectrum). Of course, it wouldn’t be accurate because the way that the surface reflects blue wavelengths probably isn’t on a nice straight line between Violet and Green, but it’d be better than just eyeballing and fiddling with an image like I did in the mosaic where I modified the Orange to replace the Blue. Hrm…
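That interpolation idea is easy to sketch in numpy. The 0.43 weight is the “43% of the way from Violet to Green” figure above, and – as noted – the assumption that reflectance varies linearly between the two filters is only an approximation:

```python
import numpy as np

def interpolate_blue(violet_img, green_img, fraction=0.43):
    """Synthesise a Blue-filter image as a weighted average of Violet and
    Green filter images, with `fraction` = Blue's position between them."""
    v = violet_img.astype(float)
    g = green_img.astype(float)
    blue = (1.0 - fraction) * v + fraction * g
    return np.clip(blue, 0, 255).astype(np.uint8)
```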

All of this goes to show that one has to be careful when looking at colour images taken using different filters, because they are false-colour – not what one would see with the naked eye. That said, there’s still obviously a lot of very useful science that can be done with these images (including subtracting one filter from another, taking ratios, etc.), which can tell us a lot about the nature of the surface material we’re looking at – something I might discuss in a later post. But for me, nothing’s quite as satisfying as being able to see something in as close to true colour as possible, because that’s the closest thing to being there and seeing it with my own eyes – which is essentially why I got into planetary science and astronomy in the first place!

Postcards from a distant moon

While digging around Unmannedspaceflight.com (which is full of really cool images of the other bodies in the solar system), I got inspired to try to find some of the images that I used to work on when I did my Ph.D. at Lancaster University all those years ago. Fortunately I hadn’t lost them as I initially suspected and they’re all safely backed up online now, so I’m going to show some of them off in this and later posts and explain a bit about the story behind them.

You’re probably used to seeing planetary images on the news or in an article – a nice panoramic mosaic taken by Spirit or Opportunity on the surface of Mars, or an image of some terrain on a distant moon – but there’s actually quite a lot of work that goes into making these images presentable (and scientifically useful). You can get a lot of the raw images online from NASA nowadays, and new images are being released frequently (for example, head over to the Cassini raw images website, and click “browse latest 500 images” to see what’s hot off the press from the Saturnian system). While these raw images are useful for basic visual examination and interpretation, a lot of the time one needs to process the images somewhat before they’re usable for scientific purposes.

Let’s take the example that I’m going to show you here – a global mosaic of Jupiter’s largest moon Ganymede that I lovingly hand-made back in 1999 (click the image to see the full-size mosaic in all its glory):
VGR2 Ganymede OBV antijovian

This mosaic shows the anti-jovian hemisphere of Ganymede – like Earth’s moon, Ganymede is tidally locked to Jupiter, which means that one side (the “sub-jovian hemisphere”) always faces the planet. We’re looking at the opposite side here, which always faces away from Jupiter – equivalent to what we’d call the “far side of the moon” if we were talking about Luna here. The mosaic is presented as if we were looking at a globe of the satellite, with the north pole at the top of the image and the south pole at the bottom (though we can’t actually see either here because they weren’t covered in the mosaic), and the equator running from left to right across the middle of the mosaic. You can see that Ganymede is brownish in colour, with lighter and darker brown areas (the large dark area on the top right is Galileo Regio, and the lighter linear strip marking its western border is called Uruk Sulcus), and whiter areas where asteroids have smashed into its surface to make craters and expose relatively fresh ice (e.g. Osiris crater at the bottom). I’ll probably get on to talking about Ganymede itself more in a later post, but today I want to talk about the epic image processing that went into making the mosaic (if you want to look at some other mosaics of Ganymede that I made, check out my Ganymede gallery on Flickr. I’ll probably talk about the Voyager 1 mosaics in a later post).

This mosaic is made up of 18 separate images (some of which you can see as thumbnails in their original format here), taken by the Voyager 2 spacecraft way back in 1979. Now, space probes don’t actually take colour images – they take greyscale images through various camera filters, which are then combined (on the ground) to make a colour image. Something similar happens in a modern handheld digital camera – light passes through a Bayer filter (a mosaic of tiny red, green and blue filters over the sensor), and the camera combines the result into the colour image. Conversely, a digital colour image can be broken down into red, green, and blue channels, which all look slightly different because of the way the subject reflects light in the red, green and blue parts of the visible spectrum (if you’ve got Photoshop, you can play around with this by opening up a photo, going to the “channels” window, and turning some of them on or off).

So to make a colour image from Voyager’s greyscale images, we have to combine three greyscale images by putting them in the red, green, and blue (R/G/B) channels. Ideally, to get a “true colour” image, you would take three greyscale images of a target in rapid succession (or even at the same time) – one through a Red filter, one through a Green filter, and one through a Blue filter – and these would be directly equivalent to the R/G/B channels in the colour image. Unfortunately, Voyager’s vidicon cameras didn’t come with all of those filters – they had an Orange filter but not a Red one, plus Methane (infrared), Violet and Ultraviolet filters whose response peaks at those wavelengths, in and just beyond the visible spectrum. This means we can’t get a truly accurate representation of the colours we would see with our own eyes, but we can get close if we have Orange, Green, and Blue images. We can also make other combinations such as Orange/Blue/Violet or even Methane (IR)/Green/Ultraviolet if we have images from those filters, but those are progressively more “false-colour”. In each case, we put the reddest filter in the Red channel of the colour image, the middle filter in the Green channel, and the bluest filter in the Blue channel, which is what I did in this mosaic – I combined Orange, Blue and Violet images to make a slightly false-colour view.
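The combination step itself can be sketched in a few lines of numpy – stack three greyscale filter images into the R/G/B channels of one colour image, reddest filter first (the flat arrays here are placeholders for real calibrated mosaics):

```python
import numpy as np

def compose_rgb(reddest, middle, bluest):
    """Reddest filter -> red channel, middle filter -> green channel,
    bluest filter -> blue channel, giving one (H, W, 3) colour image."""
    return np.dstack([reddest, middle, bluest])

orange = np.full((4, 4), 180, dtype=np.uint8)   # e.g. Orange filter image
blue   = np.full((4, 4), 120, dtype=np.uint8)   # e.g. Blue filter image
violet = np.full((4, 4), 90,  dtype=np.uint8)   # e.g. Violet filter image

obv = compose_rgb(orange, blue, violet)  # the slightly false-colour OBV combo
```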

The other thing I had to do was to use the ISIS image processing software to import the images, calibrate them (remove the dots called reseau marks and correct for known distortions in the camera optics), correct the camera pointing (itself an epic tale), find enough match-points between each pair of images in the set, reproject them, account for the way light reflects from the target’s surface, and then stitch them all together to make a single mosaic – and I had to do that for all the images taken with each filter that I was using.

Finding the match points was a hugely time-consuming and incredibly tedious task, since I had to do it by hand (if you were at a rich US university you could use a separate IDL program to do it automatically, but that was horribly expensive and completely unaffordable for us – so we did the next best thing). Match points are pairs of (x,y) co-ordinates of features that are visible in two overlapping images – by finding those and telling ISIS where they are in each image pair, the program can lay the images on top of each other when mosaicking them so that they match up properly (modern cameras with a “Panorama” mode do the same thing automatically, so that when you take three overlapping photos they can be stitched together into a continuous mosaic). Somewhere at home I have a notebook completely filled with (at least) three pairs of x and y pixel co-ordinates for each overlapping image pair, which I would then meticulously enter by hand into a text file and pipe into ISIS – this mosaic alone had a total of 81 match points in it (for all of the filters)!
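To give a flavour of what the software does with those co-ordinate pairs: for the simplest possible alignment model – a pure translation between two overlapping images – the least-squares estimate of the offset is just the mean difference of the paired co-ordinates. (ISIS’s real mosaicking model is much more involved, and the co-ordinates below are made-up examples.)

```python
import numpy as np

def estimate_offset(points_a, points_b):
    """Least-squares translation aligning image B to image A, given match
    points: paired (x, y) co-ordinates of the same features in both images."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    return (a - b).mean(axis=0)   # (dx, dy) to shift image B's co-ordinates by

points_a = [(120.0, 45.0), (310.5, 80.0), (205.0, 160.5)]  # features in image A
points_b = [(20.5, 40.0), (210.0, 76.0), (105.5, 155.0)]   # same features in image B
dx, dy = estimate_offset(points_a, points_b)
```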

Once that was all done, the three separate mosaics taken through each filter could be combined in Photoshop to create the resulting (false-)colour image, which is what you see here (the magenta/yellow/green strips are where the images in each channel don’t quite cover each other, so if you have, say, a red and blue component without the green one, the image looks tinted magenta).

Now, I made this mosaic back in 1999. I can’t exactly recall the specs of the computer we were using – looking at the specs for computers from that era that I can find on the net, it could have been a 400 MHz Pentium III (but more likely was whatever came before that) with 128MB RAM, a 20GB hard drive, and running Red Hat Linux (the only OS that the ISIS image processing software would work on). We thought that was pretty awesome at the time, but it’s quite mind-boggling to think that I’m currently typing this entry on a home PC that back then would have been called an honest-to-god supercomputer (2.83 GHz Intel Q9550 quad-core CPU, 4 GB DDR3 RAM, and a 500 GB hard drive)! The (estimated) specs may help you appreciate just how big a deal making this mosaic was at the time, because I’m pretty sure I had to leave the computer running overnight to stitch everything together (whereas it’d probably be done in less than 30 minutes today) – and sometimes it didn’t work, because the computer ran out of scratch memory or the match points weren’t quite right!

Of course, as processing power and hard drive space has increased over time, all this has become less time-consuming (though the process hasn’t actually changed all that much), but spare a thought for all the poor saps who had to make these mosaics in the 1970s, ’80s, and ’90s ;). I will say though that I wouldn’t have gone to all that effort if I didn’t think the results would be worthwhile, and it was rather satisfying to see the final mosaic at the end – as it is, this particular mosaic made it into my thesis as well :).

In my next post, I’ll show you what the other side of Ganymede looks like (as seen by Voyager 1) and demonstrate the difference that filters make to the appearance of the image.