Long exposure limit

Tsleel2811
Enthusiast
Rather stupid question, I think. I own a 60D and I want to get into astrophotography. I know the usable ISO range is about 1000, maybe 1600 if I'm good with Lightroom noise reduction. The maximum exposure time is 30 sec, but in bulb mode I can leave it open indefinitely as long as I keep the button pressed. Most photography tutorials say that exposure times can reach upwards of 3 minutes if I want visible streaks. My question is as follows: can the sensor be damaged if I leave the shutter open too long in nighttime conditions? I'm thinking overheating or too much light hitting the sensor.

Thanks for your input....

Tsleel2811
Enthusiast
Wow. Thank you very much. The next time I go out, ISO 800 it is.

Did you do the adjustments and stacking in Lightroom?

And what is the 60D filter?


@Tsleel2811 wrote:
Wow. Thank you very much. The next time I go out, ISO 800 it is.

Did you do the adjustments and stacking in Lightroom?

And what is the 60D filter?

Every typical camera has an internal filter which blocks both UV and IR light.  

 

The filter is there for two reasons...

 

(1) The sensor would normally be sensitive to both UV and IR, but people are not.  UV, visible light, and IR all focus at slightly different distances, so out-of-focus UV and IR would soften an image that is sharp in visible light.  Blocking those wavelengths gives you sharper images, and people can't see that light anyway.

 

(2) The visible light spectrum starts at the short end (400nm), where colors are at the "blue" end, and works its way up to the longer wavelengths (700nm), where the colors are "red".   The problem is that humans are not equally sensitive to all of these colors.  We are most sensitive to "green", which happens to sit in the very center of the visible spectrum.  We aren't particularly sensitive to reds... there is a lot more "red" in the world than our eyes ever see.   So the filter doesn't actually allow the full "visible" spectrum to pass through untouched... it begins attenuating reds as early as around 500nm and gradually ramps up how much light it blocks until it reaches 700nm (the end of the visible spectrum), at which point it blocks everything.    This is designed to imitate the sensitivity of the human eye so that the pictures you take resemble what your eyes see.

 

Roughly 90% of the atoms of ordinary matter in the universe are hydrogen (element #1 on the periodic table).  All atoms absorb or emit energy based on their electrons, and hydrogen is one of the simplest cases (it only has one electron).  Its visible-light emission follows a pattern called the Balmer series.  There are four different visible wavelengths that a hydrogen atom can emit (there are a few more, but they are in the UV spectrum so we can't see them).  Of these four, the "Hydrogen-alpha" wavelength is by far the most common (it's the easiest one for the atom to absorb or emit).  That color happens to be a fire-engine red with a wavelength of 656.28nm.  That's not very far from the end of the visible spectrum (it's a rich, intense red).   So it turns out a "normal" camera blocks about 75-80% of the light at that wavelength.
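
To make those numbers concrete, here is a small Python sketch (my own illustration, not from Canon or any filter datasheet) that computes the Balmer wavelengths from the Rydberg formula and then estimates the H-alpha transmission of a stock filter, assuming a simple linear roll-off from 500nm to 700nm as described above:

# Sketch: Balmer-series wavelengths and an estimated H-alpha transmission
# through a stock UV/IR-cut filter.  The linear roll-off from 500 nm to
# 700 nm is an illustrative assumption, not a measured filter curve.

RYDBERG = 1.0973731568e7  # Rydberg constant, 1/m

def balmer_wavelength_nm(n: int) -> float:
    """Wavelength of the hydrogen Balmer line for the transition n -> 2."""
    inv_wavelength = RYDBERG * (1 / 2**2 - 1 / n**2)  # Rydberg formula, 1/m
    return 1e9 / inv_wavelength                       # convert metres to nm

def stock_filter_transmission(wavelength_nm: float) -> float:
    """Assumed transmission: flat to 500 nm, falling linearly to zero at 700 nm."""
    if wavelength_nm <= 500:
        return 1.0
    if wavelength_nm >= 700:
        return 0.0
    return (700 - wavelength_nm) / 200

for n in range(3, 7):  # the four visible Balmer lines
    print(f"n={n}: {balmer_wavelength_nm(n):.1f} nm")

h_alpha = balmer_wavelength_nm(3)  # ~656 nm
transmitted = stock_filter_transmission(h_alpha)
print(f"H-alpha transmitted: ~{transmitted:.0%}, blocked: ~{1 - transmitted:.0%}")

With that assumed roll-off, only about 22% of the H-alpha gets through, i.e. roughly 78% is blocked, which lands right in the 75-80% range mentioned above.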

 

When you take photos of deep-sky objects (particularly emission nebulae), these objects glow with that *specific* wavelength of light.  So if you take a photo of the Rosette Nebula ... or the Horsehead Nebula ... you don't get much with a normal camera.  But if you eliminate that filter so the camera is sensitive to all of the H-alpha light, then you get QUITE a bit more.  So astrophotography cameras are typically either modified to replace the standard filter with a special filter that passes the whole visible spectrum (doing a hard cut at the UV & IR ends but allowing everything else through) -or- they completely eliminate the filter (unfiltered cameras) and the photographer attaches filters to the optical path as needed, depending on what they are imaging.

 

 

As for stacking... I use dedicated stacking software.  Mostly it's images of "deep sky objects" (galaxies, nebulae, etc.) that get stacked.  Images of night-time landscapes with the Milky Way overhead *might* be stacked, but usually are not.  It takes time to collect all the frames for a stacked image, and the sky will be in a slightly different position relative to the foreground in each of those frames, so you'd get a wonky result if you tried to stack the whole scene as one.  A "stacked" image that includes a night-time landscape usually involves taking lots of images of the sky for stacking purposes, plus just a single image of the foreground, and combining the two.
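
A very rough sketch of that sky/foreground combine (my simplification; in practice you'd paint a proper mask rather than use a hard horizon line, and the arrays here are stand-ins for real images):

import numpy as np

# Minimal sketch of combining a stacked sky with a single foreground frame.
# Assumes both images are the same size and already loaded as float RGB
# arrays; the hard horizontal horizon is a placeholder for a real mask.

def composite(sky_stack: np.ndarray, foreground: np.ndarray,
              horizon_row: int) -> np.ndarray:
    """Use sky pixels above the horizon and foreground pixels below it."""
    result = foreground.copy()
    result[:horizon_row, :] = sky_stack[:horizon_row, :]
    return result

sky = np.random.rand(400, 600, 3)      # stand-in for the registered sky stack
ground = np.random.rand(400, 600, 3)   # stand-in for the single foreground shot
final = composite(sky, ground, horizon_row=300)
print(final.shape)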

 

 

It's possible to use Photoshop but I think it's more complicated and there are other programs that are purpose-built for just astrophotography processing.

 

I use something called PixInsight, but I should warn you that it isn't free, it isn't cheap, and it has a very long learning curve.  So I typically don't recommend anyone start with it (but it is very, very powerful).

 

An easier program to start with for images that have "stars" in them is "DeepSkyStacker" (and it's free).  The software uses the positions of the stars in each frame to match up the images for precise registration before stacking.  There is different software for images of planets, because planets are bright enough that the correct exposure on the planet is typically too dim to show any stars... so instead of registering the images using stars, it registers them using the position of the planet's "disk".  But "DeepSkyStacker" uses stars, and it's for stacking images of the deep sky that include stars.
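
DeepSkyStacker's real registration is more sophisticated (it matches star patterns and handles rotation and sub-pixel shifts), but here is a rough numpy sketch of the basic idea, assuming you already have matched star centroids for a reference frame and the frame you want to align:

import numpy as np

# Rough sketch of translation-only star registration.  Assumes ref_stars and
# img_stars are already-matched (row, col) centroids of the same stars in the
# reference frame and in the frame being aligned.

def align_by_stars(image: np.ndarray,
                   ref_stars: np.ndarray,
                   img_stars: np.ndarray) -> np.ndarray:
    """Shift `image` so its stars land on the reference star positions."""
    offset = np.median(ref_stars - img_stars, axis=0)  # robust (row, col) shift
    dy, dx = np.round(offset).astype(int)              # whole-pixel shift only
    return np.roll(image, shift=(dy, dx), axis=(0, 1))

# Toy example: this frame drifted 3 pixels down and 2 pixels left.
ref_stars = np.array([[50.0, 80.0], [120.0, 200.0], [300.0, 40.0]])
img_stars = ref_stars + np.array([3.0, -2.0])
frame = np.zeros((400, 400))
aligned = align_by_stars(frame, ref_stars, img_stars)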

 

If you want star trails, however, this is completely different... the program used to stack those images is called StarStaX (and it's also free). 

 

The stacking software can also deal with noise.  Noise is mostly random, but the positions of the stars are not, so the computer can work out that the stars are "real" while the noise is not, and reduce it (this is the major benefit of stacking: it improves the signal-to-noise ratio of the image).  The noise goes down (and the signal-to-noise ratio goes up) by roughly the square root of the number of frames you shoot.  In other words, if I shoot 16 frames of the same object, I can cut the noise by a factor of 4, since 4 is the square root of 16.
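
Here's a quick numpy sketch (my own demonstration, not from any stacking package) that shows that square-root behaviour by averaging simulated noisy frames:

import numpy as np

# Demonstration of the sqrt(N) noise improvement from averaging frames.
# Each simulated frame is the same "signal" plus independent Gaussian noise.

rng = np.random.default_rng(0)
signal = np.full((200, 200), 100.0)     # flat patch of "sky" with value 100
noise_sigma = 10.0
n_frames = 16

frames = [signal + rng.normal(0.0, noise_sigma, signal.shape)
          for _ in range(n_frames)]

single = frames[0]
stacked = np.mean(frames, axis=0)       # simple average stack

print(f"noise in one frame : {np.std(single - signal):.2f}")   # ~10
print(f"noise after stack  : {np.std(stacked - signal):.2f}")  # ~10/4 = 2.5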

 

Also, if you shoot (typically) at least 10 frames, there are better algorithms.  If I only shoot a few frames, then the noise has to be "averaged" out of the image.  That means the noise gets dimmer, but it doesn't actually go away.  If I have enough samples of the sky (e.g. I shot at least 10 frames), then I can use statistical analysis instead of simple averaging.  Now I can use "sigma clipping" (I use "Winsorized sigma clipping").  The idea is that after the frames are all nicely "registered" (adjusted so all the stars match up in each frame in preparation for stacking), the corresponding pixel in each of the ten frames can be thought of as a "vote" for that pixel's value.  So if an airplane flew through just ONE of my frames but the other 9 are fine, the computer will notice that in 9 out of the 10 frames that pixel is "black" but in one frame the pixel is "white".  It builds a statistical model and realizes that the 10th value is basically an outlier... it sits too many standard deviations away from the mean.  It can therefore safely "ignore" that outlier.  It doesn't reject the whole image... it just rejects the specific pixels which are outliers and uses all the pixels which are not.
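
A minimal sketch of that rejection idea (plain sigma clipping in numpy; the Winsorized variant and the real implementations are more elaborate):

import numpy as np

# Minimal sigma-clipped stack for one pixel position across N frames.
# Values far from the mean (the "airplane trail" frame) are rejected and the
# remaining frames are averaged.  Real tools iterate this and use more robust
# variants (e.g. Winsorized sigma clipping).

def sigma_clipped_mean(values: np.ndarray, k: float = 2.5) -> float:
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std()
    keep = np.abs(values - mean) <= k * std   # reject outliers beyond k sigma
    return values[keep].mean()

# Nine frames where this pixel is dark sky (~20) and one frame where an
# airplane trail made it bright (250).
pixel_votes = np.array([21, 19, 20, 22, 18, 20, 21, 19, 20, 250])
print(f"plain mean  : {pixel_votes.mean():.1f}")               # pulled up by the trail
print(f"clipped mean: {sigma_clipped_mean(pixel_votes):.1f}")  # ~20, trail rejected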

 

The result is that when I stack... the airplane trail that was found in just one image will just magically disappear (it's a wonderful thing).

 

These are examples of things that Photoshop wasn't designed to do.

 

When I'm done processing with astrophotography stacking software, I typically pull the image into Photoshop (or Lightroom) for some final tweaks for artistic value.  I might play with color saturation, contrast, etc.  I also might work a little more on the noise, using tools that reduce noise in dark areas but leave bright areas alone, or that reduce noise in low-contrast areas but stay away from high-contrast edges (where noise reduction would soften the image, and we don't want that).
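
As a rough illustration of that "smooth the flat areas, protect the edges" idea (not what Lightroom or Photoshop actually does internally), here's a short sketch using numpy and scipy:

import numpy as np
from scipy import ndimage

# Rough sketch of edge-aware noise reduction: blur the image, estimate local
# contrast from the gradient magnitude, then keep the original pixels where
# contrast is high (edges) and the blurred pixels where it is low (flat sky).
# This is only an illustration of the idea, not Lightroom's algorithm.

def edge_aware_denoise(image: np.ndarray,
                       blur_sigma: float = 2.0,
                       edge_threshold: float = 0.05) -> np.ndarray:
    smoothed = ndimage.gaussian_filter(image, sigma=blur_sigma)
    gradient = ndimage.gaussian_gradient_magnitude(image, sigma=blur_sigma)
    edge_weight = np.clip(gradient / edge_threshold, 0.0, 1.0)
    return edge_weight * image + (1.0 - edge_weight) * smoothed

rng = np.random.default_rng(1)
clean = np.zeros((100, 100))
clean[:, 50:] = 1.0                                   # one hard edge
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
result = edge_aware_denoise(noisy)
print(f"noise in flat area before: {noisy[:, :40].std():.3f}")
print(f"noise in flat area after : {result[:, :40].std():.3f}")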

 

The book "Lessons from the Masters" is probably one of the best books for explaining techniques.  Each chapter is written by the astrophotographer who is considered to be more or less "the" master of that particular type of astrophotography.  And they explain how they capture *and* process their data.

 

Astrophotography can be expensive... so I jokingly tell the new people that you might want to consider a less expensive hobby such as Formula 1 Racing or Yachting.  😉

 

 

Tim Campbell
5D III, 5D IV, 60Da