03-31-2019 12:41 AM
04-02-2019 11:10 AM
This is my first post; I created an account just to say thank you. I am a noob but aspire to one day take something like your Andromeda photo. I've had a little success on the Orion Nebula ... just enough to get me hooked. I have an AVX equatorial mount, which I agree is pretty much the minimum for tracking deep-sky objects. The OTA is a Meade 70mm APO quad astrograph, and I don't think I'll outgrow that anytime soon. I'm using a Canon T6, which I think is the first thing I'll need to upgrade, but as there is quite a steep learning curve and so much cool stuff in space, I think the T6 will suffice until I work out all the software, plate solving, basic processing, etc.
Your post was awesome, particularly all the info about how the camera sensor applies gain. I'm finding the ISO values to use a bit confusing. I've heard there is a certain point on the histogram (like 1/3rd or something) ... do you think 800 is going to be pretty much always the optimal setting for exposures of up to 2 mins? So far from what I've seen the AVX will track that long. I have a 50mm off axis guidescope/cam but haven't gotten into guiding yet; I soon will via PHD2.
Your Andromeda photo is awesome! Thanks for the info and clear skies.
The optimal ISO will vary based on the camera sensor being used. But the T6 uses the same Canon 18MP sensor that my 60Da uses ... so ISO 800 would also be the optimal ISO for that camera assuming you are shooting deep-sky objects.
The basic idea is that, for deep-sky images, the longer the exposure time, the greater the probability of tracking errors, etc. So it's nice to be able to boost the ISO a little. But ISO is really an application of "gain" (an "amplification" of the information). Your camera knows how to apply both "analog" gain (before the information is converted to digital form) as well as "digital" gain (after the conversion has occurred).
The camera sensor collects photons and converts them into electrons (this is not unlike what a solar panel does). The charge accumulated at each photo-site on the sensor is read out as a voltage, and that voltage can simply be run through an analog amplifier to read higher.
The problem is, all camera sensors have to be "powered up" to work. If you simply power up a sensor and then perform a read-out (without actually taking a photo), you'll find that the values in each photo site are not actually zero ... there's something there. This is the bias level of the sensor. The sensor also gets something called "read noise". There are many causes of noise, but read-noise is probably the biggest contributor to overall noise. So one goal of astrophotography processing is to capture "bias" frames and "dark" frames so the computer software can figure out just how much "noise" is on that sensor (and this helps it suppress some noise when the images are processed).
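The calibration idea can be sketched in a few lines of Python. This is a toy illustration with made-up 4-pixel "frames" (real stacking software works on full sensor arrays and does quite a bit more), but the core operation is the same: median-combine the calibration frames, then subtract the result from each light frame.

```python
from statistics import median

def master_frame(frames):
    """Median-combine calibration frames pixel-by-pixel (frames are flat lists)."""
    return [median(pixels) for pixels in zip(*frames)]

def calibrate(light, master_dark):
    """Subtract the master dark (which includes the bias level) from a light frame."""
    return [max(0, l - d) for l, d in zip(light, master_dark)]

# Toy 4-pixel "frames": each dark read-out shows the bias level plus some noise.
darks = [[102, 99, 101, 100],
         [100, 101, 99, 102],
         [101, 100, 100, 101]]
light = [1100, 612, 101, 900]          # raw light frame (signal + bias/dark)

dark_master = master_frame(darks)      # -> [101, 100, 100, 101]
print(calibrate(light, dark_master))   # -> [999, 512, 1, 799]
```

The median (rather than the mean) is what makes the master frame robust against the occasional outlier pixel in a single calibration frame.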
If you simply apply ONLY analog gain, you'll amplify that noise. At some point it becomes obnoxious. Canon decided that for this 18MP sensor, anything beyond ISO 800 produces more noise from analog gain than they consider acceptable. So they stop doing analog gain, perform the analog-to-digital conversion (ADC), and now you have digital data.
In digital form, they can apply more "gain" by simply multiplying the values. Usually there's some stretching algorithm (not just a linear multiplication) in an effort to bring up the "signal" (the stuff you want) more than the "noise" (the stuff you don't want).
The problem with "digital" gain is that the camera has a fixed cap on the largest digital value it can hold: it's a 14-bit chip, and 14 bits means it can hold values between 0 and 16,383.
Suppose you multiply everything by 2 (to bring up the exposure by 1 full stop). Anything below the mid-brightness value gets doubled. But anything above the mid-brightness level gets doubled to a value which is GREATER than the max value the chip can hold. This results in the data getting "clipped". If you had one pixel at 15,000 and another at 10,000 (both are less than 16,383), then the 15,000 pixel is clearly brighter than the 10,000 pixel (about 50% brighter). But when you multiply both by 2x, they both become 16,383 (the max value) because the data "clipped". Now neither is brighter than the other -- they are the same. You lost tonality, or the ability to distinguish between them.
This basically means the application of digital gain comes with a trade-off: you lose dynamic range.
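Here's that clipping effect in a tiny Python sketch. The 16,383 ceiling is the real 14-bit limit; the two pixel values are just the example numbers above:

```python
MAX_14BIT = 2**14 - 1  # 16383, the largest value a 14-bit ADC can report

def digital_gain(value, factor):
    """Apply digital gain by multiplying, then clip at the 14-bit ceiling."""
    return min(round(value * factor), MAX_14BIT)

a, b = 15000, 10000                             # a is ~50% brighter than b
print(digital_gain(a, 2), digital_gain(b, 2))   # -> 16383 16383 (tonality lost)
print(digital_gain(a, 1), digital_gain(b, 1))   # -> 15000 10000 (still distinct)
```

After the 2x gain both pixels sit at the ceiling and the 50% brightness difference between them is gone for good.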
But in deep-sky objects, you WANT dynamic range. So you try to resist inching up the ISO too much. A common problem is that the deep-sky object gets a bit brighter ... but the stars all get clipped. Clipped stars will just appear "white". In reality the stars should have some color cast. Some are near-white. Some are yellow. Some are a bit blue. Some are a bit orange. etc. But they shouldn't all be uniform white. If you see that, it usually means the shot was over-exposed.
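As a toy illustration of that "uniform white" symptom: a star is fully clipped when all three color channels hit the sensor maximum, at which point any color cast is unrecoverable. (This simple threshold check is an assumption for illustration, not how any particular processing package flags saturation.)

```python
MAX_14BIT = 16383  # ceiling of a 14-bit sensor

def is_clipped_star(r, g, b, ceiling=MAX_14BIT):
    """A star whose channels ALL hit the sensor maximum has lost its color cast."""
    return r >= ceiling and g >= ceiling and b >= ceiling

print(is_clipped_star(16383, 16383, 16383))  # -> True  (uniform white, over-exposed)
print(is_clipped_star(15200, 14100, 12900))  # -> False (still carries a color cast)
```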
Here's an article that may be helpful: http://dslr-astrophotography.com/iso-dslr-astrophotography/
(the article is about 4 years old ... so it doesn't include the newest camera models, but you can still get the idea of how this works.)
As for the histogram and that "1/3rd" point. Generally you'll find the data is down in the lower 1/3rd or lower 1/4 of the histogram.
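You can check this yourself with a quick sketch: compute what fraction of pixel values fall in the lower third of the 14-bit range. The pixel values here are made up for illustration (a mostly-faint sky background plus a couple of bright stars):

```python
MAX_14BIT = 16383

def fraction_below(pixels, cutoff):
    """Fraction of pixel values that fall below the given cutoff."""
    return sum(1 for p in pixels if p < cutoff) / len(pixels)

# Toy frame: mostly faint sky background with a few bright stars.
pixels = [900, 1200, 1500, 2100, 2600, 3000, 16000, 15500]
third = MAX_14BIT / 3
print(f"{fraction_below(pixels, third):.0%} of pixels in the lower third")
```

A well-exposed deep-sky sub typically shows the histogram bulk well left of center like this, with only stars pushing toward the right.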
Here's an example:
[histogram screenshot omitted]
(the above is from Canon DPP 4)
And here's the image it was taken from:
[image omitted]
The above is a single un-processed frame (one of the frames from the finished photo). I took roughly an hour's worth of data (the above image is an 8-minute exposure). You can see it looks like it is upside-down (it is ... that's how it came out of the camera). Notice the contrast isn't as good. The disk structure is weak. It doesn't even look like there is color in it (but there is). Astrophotography images are usually heavily processed and the data is "stretched". The color in the final image is based on separating the image into separate LRGB channels (L = Luminance channel) and then re-combining such that the luminance applies to saturation (the brighter it is, the more strongly it will be saturated).
I primarily used PixInsight to process this ... but I did do a little tweaking in Photoshop and also in Lightroom as well. Learning to process takes a while and frankly is a never-ending quest to learn more (I'm still learning ... and frankly I feel like a beginner. This is one of those ... the more you learn, the more you realize how much you still don't know.)
You will need a *precise* polar alignment. I use a gadget by QHY called a "PoleMaster". It's a camera that has roughly a 5° field of view and is usually attached to the mount (not the scope). It takes an image of the sky near the pole and the software prompts you to turn the mount along the RA axis so it can take about 3 different images of the northern sky (each image is rotated a bit more). It has you identify a star in each of the 3 frames (the same star) and it plots a circle through those three positions. From this it can compute the axis of your mount. It also applies a template to the sky to match up against the stars in the image. From here it finds the TRUE pole ... but also finds your telescope's axis. It then draws a circle on your computer screen and tells you to adjust the mount (using only the knobs -- not the electronics) until that star is centered in the circle. At this point you have an extremely precise polar alignment and can do long exposure images without the stars drifting.
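The circle-fit step is classic geometry: three positions of the same star, rotated about the RA axis, determine a circle whose center is that axis. Here's a Python sketch using the standard circumcenter formula (an illustration of the math, not QHY's actual code):

```python
def circle_center(p1, p2, p3):
    """Center of the circle through three (x, y) points (circumcenter formula)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# The same star imaged at three rotations about the mount's RA axis:
# all three positions lie on a circle centered on that axis.
print(circle_center((5, 0), (0, 5), (-5, 0)))  # -> (0.0, 0.0)
```

Once the software knows where that center (the mount's axis) falls relative to the true pole, it can tell you exactly how far to turn the alt/az knobs.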
Celestron includes a feature they call "All Star Polar Alignment" (or ASPA). I have not used it. You might try it (they have YouTube videos that explain how to use it). I'm not sure how it compares to the precision alignment you get with a PoleMaster. Nearly every astrophotographer I know who has to set up & take down the gear each night (no permanent observatory) uses a PoleMaster. It's very popular because it achieves a shockingly accurate alignment in just a couple of minutes.
An off-axis guider (OAG) is a bit different from what you have. You have a separate guide-scope. An OAG is an adapter that fits on the back of your actual scope (not a separate scope) and has a tiny little pick-off mirror. The guide camera is off at a 90° angle (the "off axis") and the regular imaging camera is still at the back ("on axis"). OAGs are usually used for very long focal length scopes (e.g. big SCTs) to avoid flexure issues. You won't have to worry about that with a 70mm APO.
The 70mm APO is a great way to start because it is more forgiving when it comes to tracking accuracy. I see many frustrated beginners who start with big SCTs ... and if you don't nail the polar alignment, the balance, the flexure issues, tuning the guide-software aggressiveness, etc. you lose all your hair trying to learn. Much easier to start with shorter focal lengths, get some early successes ... then work your way up to longer focal lengths.
I'm not sure what you have as a guide-camera. With your scope you may need minimal guiding.
After you balance the weights on your AVX, slightly "un-balance" them so that the east side of the mount is heavier by just a tiny amount. All mounts have a little gear backlash. By "un-balancing" you keep the weight on one side of the backlash so it doesn't "float" in the backlash (which would result in elongation of stars in the direction of the RA travel).
When you start using a guider, one common mistake is to leave the "aggressiveness" at 100%. Aggressiveness is the amount of movement it will apply to the mount based on the error in star position. The problem is, stars will "wobble" a bit based on atmospheric "seeing" conditions. If aggressiveness is too high, the guider will be "chasing the seeing conditions" (the star didn't *really* move ... it only *appeared* to have moved based on atmospheric distortions). So you don't want to "chase" those subtle movements too aggressively. By de-tuning the aggressiveness it's willing to let the star move by tiny amounts without over-reacting. Common aggressiveness values are in the 70-85% range.
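The idea can be sketched as a one-line control rule (a toy model; PHD2's actual guiding algorithms, with hysteresis and per-axis tuning, are more sophisticated, and the `min_move` threshold here is illustrative):

```python
def guide_correction(measured_error, aggressiveness=0.75, min_move=0.1):
    """Scale the measured star drift before commanding the mount.

    aggressiveness < 1 avoids chasing the seeing; min_move ignores tiny
    wobbles entirely. Units are arbitrary (think arc-seconds).
    """
    correction = measured_error * aggressiveness
    return 0.0 if abs(correction) < min_move else correction

print(guide_correction(2.0))   # real drift: issue most of the correction -> 1.5
print(guide_correction(0.1))   # seeing wobble: below min_move -> 0.0
```

A genuine tracking error gets (most of) a correction over a few guide cycles, while a momentary seeing wobble produces no mount movement at all.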