When you observe stars with your eyes, or with a telescope, you are
receiving starlight that has traveled vast distances. Amazingly, the
light remains virtually unaffected by the first 99.999999999999% or
so of its journey. However, in the trip through the Earth’s
atmosphere, and even through the optics of the telescope, the light
may finally be affected, causing the brightness of the star to differ
from one observation to the next.
Photometry is the process of measuring the amount of light received from an
object. When you display
an image using HOU, you can use the cursor to see the amount of light
registered by each pixel in the image. This value is given in
Counts. The Auto Aperture and Aperture routines add
up all the Counts within a specific range of pixels to give the total
Counts for a star. These routines are designed to subtract
background light caused by other objects and give only the Counts
created by the star itself. The brightness of the stars in an image
also depends on the exposure time for the observation. If the
telescope observes the star for a longer period of time, it will
gather more light so the star will appear brighter. In most cases,
if the exposure time is doubled, the amount of light the telescope
receives is doubled.
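The idea behind aperture routines like these can be sketched in a few lines of Python. This is a simplified illustration, not the actual HOU code: the function name, the aperture and annulus radii, and the use of a median annulus value as the background estimate are all assumptions.

```python
import numpy as np

def aperture_counts(image, x, y, r_ap=5, r_in=8, r_out=12):
    """Sum the Counts in a circular aperture around (x, y), subtracting
    the background estimated from the median of a surrounding annulus."""
    yy, xx = np.indices(image.shape)
    dist = np.hypot(xx - x, yy - y)
    in_aperture = dist <= r_ap
    in_annulus = (dist >= r_in) & (dist <= r_out)
    background_per_pixel = np.median(image[in_annulus])
    # Total Counts in the aperture, minus the sky contribution under it
    return image[in_aperture].sum() - background_per_pixel * in_aperture.sum()

# Toy image: a flat sky of 10 Counts per pixel with a star added at (20, 20)
sky = np.full((40, 40), 10.0)
star = sky.copy()
star[20, 20] += 500.0
print(aperture_counts(star, 20, 20))  # -> 500.0 (sky removed)
```

Subtracting the annulus background is what lets the routine report only the Counts created by the star itself, as described above.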
One way astronomers use photometry is to measure the brightness variation
of an object such as a variable star or a supernova. To measure
variation, images are taken on several successive nights of the same
star and its Counts are compared with those of a steady star in the
image. Another reason for using photometry is to measure the
apparent brightness of a star in order to calculate its distance.
This method involves calibration using a standard star. The
Photometry Techniques Unit explains each of these
processes. Photometry words used in the unit are defined as follows:
Counts - The measure of light that each pixel of the CCD
receives from the star. This measurement is particular to the
equipment used and to the atmospheric conditions during the
observation. When we display an image, the grayness or color at each
pixel is based on the Counts for that pixel.
Apparent Brightness - The amount of light reaching Earth per
second from a star under ideal conditions (as if there were no
atmosphere). This is a standard value that anyone could obtain from
their measurements after correcting for observing conditions. The
units for apparent brightness are Watts/meter².
Luminosity - The amount of light emitted per second by a star.
It is an inherent property of the star, unlike Apparent Brightness,
and is independent of where the observations were made or what
telescope is used. Generally the luminosity of a star cannot be
measured directly but must be inferred from other characteristics of
the star. The units for luminosity are Watts.
Reference Star - A star whose apparent brightness and
luminosity do not change from one night to the next. The apparent
brightness value of the star, however, is typically not known.
Standard Star - Like a reference star, but
with a known, agreed upon value of apparent brightness.
Apparent Magnitude - A measure of apparent brightness commonly
used by astronomers. The magnitude scale is inverse, meaning
brighter stars have lower magnitudes.
Absolute Magnitude - This quantity is analogous to the
luminosity but is expressed on the magnitude scale.
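The inverse magnitude scale can be made concrete with the standard relation between a magnitude difference and a brightness ratio. The relation itself is standard astronomy, though it is not derived in this unit; the numbers below are invented for illustration.

```python
import math

def magnitude_difference(b1, b2):
    """Difference in apparent magnitude (m1 - m2) of two stars with
    apparent brightnesses b1 and b2. Brighter star -> lower magnitude."""
    return -2.5 * math.log10(b1 / b2)

# A star 100 times brighter is 5 magnitudes lower (i.e., brighter)
print(magnitude_difference(100.0, 1.0))  # -> -5.0
```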
Measuring Brightness Variations
Suppose you have images of the same region of the sky taken on two
different nights. The region contains two stars. One is your target
star, the star you have chosen to study. It may be a Cepheid
variable star or a star that has just gone supernova or any other
star for which you wish to measure brightness variation. The other
star is known to have constant luminosity, meaning the brightness of
the star itself does not change from one night to the next. This
star is called the reference star. You do not need to know
the exact brightness of the reference star, just that it remains
constant. For images of objects beyond our own galaxy, foreground
stars are typically chosen. These are stars within our galaxy that
lie along the same line of sight as the more distant object. If the
observing conditions did not change from one night to the next, the
reference star would have the same brightness in both images. If the
second night was clearer than the first, the reference star will be
brighter in the second image than in the first. The target star may
appear brighter or dimmer in either image, but until changing
observing conditions are accounted for, it can be unclear whether the
change in brightness in the target star is caused by changing
observing conditions or by changes in the star itself.
One way to find the brightness variation of the target star over time
is to use the ratio of the Counts measured for the target star to the
Counts measured for the reference star in each image. Variations in
this ratio are comparable to variations in brightness of the target
star:
Counts ratio = Ct / Cr
where Ct = Counts measured for the target star
Cr = Counts measured for the reference star
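The Counts ratio above can be sketched directly; the measurements here are made-up numbers for illustration. Raw Counts rise and fall with the observing conditions, but the ratio isolates the target star's own variation.

```python
def counts_ratio(c_target, c_reference):
    """Counts ratio Ct / Cr; variations in this ratio track real
    brightness variations in the target star."""
    return c_target / c_reference

# Hypothetical (Ct, Cr) measurements on three successive nights.
# On the third night both stars' raw Counts drop (poorer conditions),
# yet the ratio shows the target star returned to its first-night level.
nights = [(5200.0, 2600.0), (3900.0, 2600.0), (2400.0, 1200.0)]
ratios = [counts_ratio(ct, cr) for ct, cr in nights]
print(ratios)  # -> [2.0, 1.5, 2.0]
```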
Calibration To Find Apparent Brightness
The Counts ratio gives a way of finding variation in brightness of a
star but not the Apparent Brightness value itself. You need a
further procedure for finding the brightness of a star that is
independent of observing conditions so that anyone, anywhere on
Earth, under any observing conditions, will get the same brightness.
In addition, each CCD reacts differently to light and yields a
different number of Counts for a given brightness. In order to use
your data in the context of other observations and reference tables,
you need to get the brightness of the star in units that are
independent of a particular CCD.
Calibration allows you to deal with both changing observing
conditions and different CCDs simultaneously. The process of
calibration involves an image of a star whose brightness you want to
measure (the target star) and another image of a standard star. The
standard star should be in the same region of the sky as the target
star so it will experience the same observing conditions as the
target star at any given time. It also helps to have a star within
the optimal brightness range for the telescope (not too bright and
not too dim) to assure a good image. Most importantly, the standard
star is a star with known apparent brightness – an agreed
upon standard value.
With identical observing conditions for the target and standard
stars, the ratio of their Counts is equal to the ratio of their
apparent brightness. This means that on the basis of one pair of
images, not the series of images over time necessary for measuring
variation, the value for the target star's apparent brightness can be
calculated.
Let Ct = Counts measured for the target star
Cs = Counts measured for the standard star
Bt = apparent brightness of target star
Bs = apparent brightness of standard star
Then Ct / Cs = Bt / Bs
or equivalently Ct / Bt = Cs / Bs