
Webcam Image Processing Part 1


These Image Processing pages explain the procedure I used for webcam images.   

Having since started to use a DSLR and a CCD camera, I now only use the webcam for autoguiding.  These webcam image processing pages will be kept for reference, but will probably not be updated anymore.

 

Introduction (Webcam Image Processing)

I will illustrate the procedure that I use to process images, using my image of M83 as an example.

First a summary of the procedure:

- Acquire, in I420 mode, and save the set of raw images in an AVI file, as well as a set of dark frames

- Select the good raw frames

- Extract the Y (luminosity) component, as well as the RGB (colour) components

- Convert these files into FITS format, with a set of files for each of the Y, R, G, B channels

- Prepare the average dark frames for each of these channels

- Subtract the respective dark frames

- Register the set of images in each channel

- Stack (add together) the aligned images

- Check for noise or uneven sky background and correct as necessary

- Export files in a form readable by Photoshop

- In Photoshop, do histogram stretching on both Luminosity and Colour images

- Noise reduction on Luminosity image

- Combine Luminosity and Colour images

- Colour saturation enhancement

- Final cosmetic adjustments

 

Stacking

Individual frames taken with a webcam, particularly of dim objects, have a lot of noise.  Some of this noise is non-random and can be removed by subtracting dark frames.  However, the remaining noise is largely random.  In addition, single frames from a webcam have a very restricted dynamic range, because the output from the camera is an image of just 8 bits per channel for the Y channel, and effectively less for the colour channels (see below).

The way around this is to add together (stack) many images that have been aligned (registered), so that the weak signal from the multiple images is reinforced and the noise is reduced relative to the signal.  This happens because the signal increases in proportion to the number of frames stacked, while random noise grows only as the square root of that number, so the signal-to-noise ratio improves as the square root of the number of frames.  Later, this article will show the practical effect of stacking.
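As a rough illustration of why this works, here is a toy simulation in Python/numpy (not part of my actual workflow, and assuming purely random Gaussian noise).  It stacks simulated measurements of a single faint pixel and prints the expected signal-to-noise ratio, which grows as the square root of the number of frames:

------------------

import numpy as np

rng = np.random.default_rng(0)
true_signal = 10.0      # faint, constant signal in one pixel (arbitrary units)
noise_sigma = 50.0      # random noise per frame, much larger than the signal

for n_frames in (1, 10, 100, 400):
    # simulate n_frames noisy measurements of the same pixel, then stack (sum) them
    frames = true_signal + rng.normal(0.0, noise_sigma, size=n_frames)
    stacked = frames.sum()
    # signal grows as n, random noise as sqrt(n), so SNR grows as sqrt(n)
    expected_snr = (true_signal * n_frames) / (noise_sigma * np.sqrt(n_frames))
    print(n_frames, round(stacked, 1), round(expected_snr, 1))

-------------------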

 

The use of the Y channel as Luminosity

Some explanation is needed for why I choose to use the Y component of a YUV image as the luminosity component of the final image.

Webcams, because they are designed as USB video devices, are restricted in the bandwidth available to transmit video information to the computer.  Even at the lowest frame rate of 5 fps, there is insufficient bandwidth to transmit full resolution frames in all 3 colour channels (RGB).  Consequently, not all of the data produced by the CCD at every pixel can be transmitted.  During image acquisition, I set the codec used by my camera to I420, which produces a 12-bit per pixel YUV image.  Y is the luminance channel, and U and V are the colour ones.  The human eye resolves differences in brightness better than differences in colour, so in normal video transmission the camera needs to transmit only the Y as a full resolution image (in the case of the SAC7, 640 x 480).  The U and V are subsampled to reduce the bandwidth required.
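The figure of 12 bits per pixel follows directly from the I420 layout: the Y plane is stored at full resolution, while the U and V planes are each subsampled by a factor of two horizontally and vertically.  A quick back-of-the-envelope check for a 640 x 480 frame (a sketch only, assuming the standard I420 4:2:0 planar layout):

------------------

# I420 (4:2:0 planar) storage for one 640 x 480 frame, 8 bits per sample
width, height = 640, 480

y_samples = width * height                 # full-resolution luminance plane
u_samples = (width // 2) * (height // 2)   # chrominance subsampled 2x2
v_samples = (width // 2) * (height // 2)

bits_per_pixel = 8 * (y_samples + u_samples + v_samples) / (width * height)
print(bits_per_pixel)   # 12.0, hence "12 bits per pixel"

-------------------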

The camera can be set to save images in RGB format (eg BMP, with 24 bits per pixel), but the R, G and B components are simply calculated from the YUV.  Thus there is actually no more information in a 24-bit RGB file than in the 12-bit YUV file.  In addition, each of the R, G and B components has in it a part of the subsampled U and V, which compromises resolution.  The G component, because it has the highest proportion of Y, has the highest resolution of the three.
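The YUV to RGB conversion is a simple linear mix.  I do not know the exact coefficients used by the camera driver, but a commonly used BT.601-style conversion for 8-bit values looks like the sketch below; it makes the point that each of R, G and B is just Y plus a weighted dose of the subsampled U and V:

------------------

import numpy as np

def yuv_to_rgb(y, u, v):
    # Approximate BT.601-style YUV -> RGB for 8-bit samples.
    # y, u, v are numpy arrays, with u and v already upsampled to the size of y.
    # The camera driver may use slightly different coefficients.
    y = y.astype(np.float64)
    u = u.astype(np.float64) - 128.0   # U and V are stored offset by 128
    v = v.astype(np.float64) - 128.0

    r = y + 1.402 * v
    g = y - 0.344 * u - 0.714 * v      # G is mostly Y, hence its better resolution
    b = y + 1.772 * u

    return [np.clip(c, 0, 255).astype(np.uint8) for c in (r, g, b)]

-------------------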

A good explanation of the I420 codec and YUV is given by Peter Katreniak, author of K3CCDTools, here.

Below are the results of stacking Y, R, G and B frames from the M83 raws.  The worst of the lot is the B image.  The noise and lack of resolution are readily apparent.  The R image shows that the fine banding seen in the individual raws (see later) has somehow combined into a sort of wider-spaced banding.  This type of artifact is difficult to remove effectively.  The G image is actually not bad, except that in comparison with the Y it has less dynamic range.  In this case the loss of resolution is hard to see, even in the full, uncompressed images, but in other cases in my experience the G has been noticeably worse than the Y image.

 

[Stacked channel images compared: Y (top left), R (top right), G (bottom left), B (bottom right)]

So there is good detail in the Y image but poor detail in the colour components.  How do we incorporate the colour information while still presenting an image with the best possible resolution?  Fortunately, Photoshop (as well as some other programs, including Iris) can do this.  A luminosity layer in Photoshop will retain all its detail while taking its colour from underlying colour layers - this technique is referred to as LRGB, except that since I use the Y channel, I call my technique YRGB.  More on this later.
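Photoshop does this with a luminosity layer, as described above.  Purely as an illustration of the idea, and not of my actual Photoshop steps, the sketch below does an equivalent combination in Python by converting the colour image to Lab space and replacing its lightness channel with the stacked Y image.  It assumes scikit-image is installed and that the two images are already registered and the same size:

------------------

import numpy as np
from skimage import color

def yrgb_combine(y_image, rgb_image):
    # y_image:   2-D float array in 0..1 (the stacked, detailed Y channel)
    # rgb_image: 3-D float array in 0..1 (the stacked, blurrier R, G, B channels)
    lab = color.rgb2lab(rgb_image)    # separate lightness from colour
    lab[..., 0] = y_image * 100.0     # replace lightness (L runs 0..100) with the Y detail
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

-------------------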

 

My Image Processing Procedure

I acquire raw frames in K3CCDTools and save them in an AVI file.  This is a video format, but K3CCDTools uses it as a convenient container for multiple picture frames, even though the exposure time of each frame may be many seconds.  K3CCDTools is an excellent, free image acquisition program for webcams.  It can be found here.

The first thing to notice is that the raw frames captured by a webcam, particularly of dim deep sky objects, are awful!  Below right is a typical good-quality raw that eventually went into the M83 image, with a reduced-size final image at its left for comparison.

Only the bright galaxy core is apparent.  The arms are only vaguely visible.  I have learnt that if I can get this sort of raw frame, with at least a faint suggestion of the dim regions visible, the final picture will be good provided I can stack a hundred or more frames.

There are other things to note about the raw.  There is fine diagonal banding running down from the top left.  This is characteristic of my camera, and I have had reports from others of the same in their webcams.  I acquire at a very high gain setting to get maximum sensitivity for deep sky objects, and this accentuates the banding.  The fact that I do not cool the camera at my ambient temperatures of around 27 deg C doesn't help.  (New:  I've been experimenting with removal of the bands by subtraction - see the Advanced Processing page.)

Some of the bright spots on the raw are not stars, but 'hot pixels' due to imperfections in the CCD chip.  These will be removed by subtracting dark frames.  Also note that at the top left of the raw there is an area with a slight glow.  This is 'amp glow' and is caused by the on-chip amplifier giving off radiation that is captured on the CCD during long exposures.

 

Frame selection

That was one of the better frames; many others are much worse.  My typical exposure time for each raw is 25 s.  All motorised mounts have tracking errors, and these show up as trailed star images on many of my frames.  Here's a typical trailed frame, which was discarded.

Only good frames are selected.  I typically take between 300 and 400 raw frames of each object and keep only about 30%.  The selection is done in K3CCDTools by ticking the frames that are to be kept.
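I do this selection entirely by eye in K3CCDTools.  If you wanted to automate a first pass, one crude option (not something I use, just a sketch) is to rank frames by a simple sharpness score and keep the best 30% or so; badly trailed frames smear their stars and tend to score low.  The sketch assumes the frames are already loaded as 2-D numpy arrays:

------------------

import numpy as np
from scipy import ndimage

def sharpness(frame):
    # Variance of the Laplacian: higher generally means sharper, rounder stars.
    return ndimage.laplace(frame.astype(np.float64)).var()

def select_frames(frames, keep_fraction=0.3):
    # Return roughly the sharpest keep_fraction of the frames, in original order.
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:n_keep]
    return [frames[i] for i in sorted(best)]

-------------------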

 

Export selected frames

It is possible to align and add together, or stack, the selected frames in K3CCDTools, but because I will stack the Y components separately from the R, G and B, and because Iris provides more flexibility, I prefer to export the selected frames.

First the selected frames are exported as YUV components.  Then the same set of frames are exported as RGB. (Note that in order to be able to export frames as YUV, the capture mode of the camera must have been set to I420 at the time of image acquisition.)

For each frame, then, 6 files are created.  I delete the U and V files as I do not need them.  All the colour information is in the RGB.

 

Rename files to Iris format

K3CCDTools writes files in xx0001.bmp, xx0002.bmp, ... format, but the next program in the process, Iris, expects files in the format xx1, xx2, ..., so the files have to be renamed.  I have written a batch file that automates the process of converting the filenames.

This batch file, named 'reindex.bat', contains the following text:

------------------

rem Renames files with other indices into names suitable for IRIS, ie starting with 1

rem using the names stored in result.txt

rem Syntax reindex

for /f "tokens=1,2*" %%i in (result.txt) do rename %%i %%j

-------------------

It assumes that the old filenames are in one column and the new filenames are in an adjacent column of the text file 'result.txt'.  I use Excel to set up this text file.  The advantage of this approach is that any number of files, with any naming convention, can be renamed into an orderly series for further processing.
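As an illustration only (the actual filenames depend on the prefix used when exporting from K3CCDTools, so the names below are made up), 'result.txt' for a set of Y frames might look like this, with the old name in the first column and the new Iris-style name in the second, separated by a space:

------------------

y0001.bmp y1
y0002.bmp y2
y0003.bmp y3
...

-------------------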

 

Prepare dark frames

Most of the time I take a specific set of dark frames for each session, always at the end of the imaging run.  I take 30 frames with the aperture of the telescope covered and with the same settings as the image frames.  The purpose of this is to remove the non-random noise inherent in every CCD, caused by defects (eg hot pixels) or heat.

I follow the same steps as described above to prepare and export the frames for stacking.
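The averaging of the dark frames and their subtraction from the image frames happen further along in the processing, but the idea itself is simple.  A minimal numpy sketch (illustrative only, not the actual commands I use): average the darks into a 'master dark', subtract it from each light frame, and clip negative pixels to zero:

------------------

import numpy as np

def make_master_dark(dark_frames):
    # Average a list of 2-D dark-frame arrays into a single master dark.
    return np.mean(np.stack(dark_frames), axis=0)

def subtract_dark(light_frame, master_dark):
    # Remove the fixed-pattern noise (hot pixels, amp glow) recorded in the darks.
    calibrated = light_frame.astype(np.float64) - master_dark
    return np.clip(calibrated, 0.0, None)   # clip negative values to zero

-------------------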


Next: Part 2, Aligning and Stacking in Iris

(Part 3, Finishing the Image in Photoshop)  (Part 4, Noise Reduction and Colour) (Advanced Processing)

Copyright 2003 to 2014, by TG Tan.  All rights reserved.  Copyright exists in all original material available on this website.  This material is for your personal individual, nonprofit use only.  Redistribution and/or public reproduction of this material is strictly prohibited without prior express written permission from the author.