Answers
I-1: The eye shares many functions with a camera. To the left is a diagram of the human eye. Key components are the cornea, the iris and pupil, the lens, and the retina. Light from objects near and far passes through the outer layer, or sclera, of the eye into the cornea, which encloses a liquid, the aqueous humor, whose index of refraction is about 1.3 (air is 1.0). Refraction (bending) of light occurs in the cornea, and the light is diverted inward to pass through the pupil, the opening in the iris (whose distinctive color varies among individuals according to the precise composition of a pigment). The iris responds to varying amounts of light (brightness) by changing the diameter of the pupil (as the pupil enlarges [the gap between the lines in the illustration defines the circular pupil], more light is admitted). This refracted light then passes through the lens, a transparent body that serves to focus the light; the light is further refracted as it passes across the lens's curved boundaries. Focusing is done by changing the shape (thickness) of the lens through the action of the ciliary muscle; when thickened, the lens focuses on near objects. These redirected rays then pass through the gelatin-like vitreous humor onto the retina, which can detect rays arriving over a hemispherical field spanning more than 180° of arc. Most of the focused light strikes within 20° of a central point called the fovea. The retina (somewhat analogous to camera film) consists of layers that include nerve cells and light receptors called cones and rods. Individual cones, concentrated in a central region called the macula, are receptive to red, green, or blue light (because of pigments sensitive to each); the rods are tuned to variations in brightness and respond by sending signals in shades of gray. The received light is converted to electrical impulses that travel to the optic nerve (itself about 20° off center, at the eye's blind spot) and eventually to the brain's visual center, where the inverted image formed on the retina is converted to upright and the color and brightness signals are combined to yield what the eye perceives as an image of the external world. (Note: the eye's lens focuses by changing thickness; a camera's lens focuses by moving forward or backward.) Human vision extends over a spectral region (the visible) that covers wavelengths of 400 to 700 nanometers (a nanometer is 10^-9, or a billionth, of a meter). Under contrasting conditions, the eye is capable of seeing a line 2 millimeters wide at a distance of 5 meters. BACK
I-2: Acquisition/measurement; data/information; properties; phenomenon, ... material; recording device; not in contact; measuring fields/radiation; instruments. BACK
I-3: Geospatial connotes the distribution of something in a geographic sense; it refers to entities that can be located by some coordinate system. The three terms "feature; object; class" have different but somewhat overlapping meanings. Feature can refer to "overall appearance", "mark", or "characteristic", and/or to a set of measurable properties, or, more narrowly, to a specific geometric or geomorphic entity on the surface of a planet such as Earth. Object relates to a single entity, of a physical nature, capable of being sensed (e.g., seen or touched), to which a descriptive name can be given, such as "house" or "road". Class has a more general connotation, pointing to a group of features or objects of identical or similar types that have taxonomic significance; examples are "forest"; "urban"; "mountains". Classes can be hierarchical, that is, they can be subdivided into subclasses; thus "urban" includes "inner city/suburbia", "road networks", "neighborhoods", "shopping centers", etc. The term "theme" is often interchangeable with "class". The term "material" is sometimes a substitute for any of the other terms, but it really should be used only to refer to the physical nature of a class or object that affects its spectral properties. BACK
I-4: At first glance, one might think there is an error. The ΔE on the left denotes a change in energy level, so one might expect a delta variable on the right as well, attached to the frequency term ν (the small Greek letter nu, which resembles the letter "v"; some textbooks use the letter "f" for frequency). This seems to give an inconsistency in which both E = hν and ΔE = hν. But, in effect, this is "built in" to the meaning of the equation: when there is a change in energy level, as for example an orbiting electron in an atom moving to a higher energy state, that change is represented by some ΔE which has its own characteristic frequency. Likewise, for the formula E = hν, each discrete energy value E has a corresponding frequency ν unique to that particular E. BACK
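As a quick numerical illustration (not part of the original answer), the short sketch below computes the characteristic frequency associated with a given energy change, using the same rounded constants and the transition energy that appears in answer I-6.

```python
# Frequency corresponding to a given energy change, from delta-E = h * nu
# (constants rounded as in answer I-6; purely illustrative)
h = 6.626e-34        # Planck's constant, joule-seconds
delta_E = 2.10e-19   # energy change in joules (the value used in I-6)

nu = delta_E / h     # characteristic frequency in hertz
print(f"nu = {nu:.3e} Hz")   # about 3.17e14 Hz
```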
I-5: Simply put, "Longer is lower", meaning that longer wavelengths are associated with lower energy levels. It follows that "Shorter is higher". BACK
I-6: The appropriate equation is E = hc/wavelength, or wavelength (in meters) = hc/E. Thus, wavelength = (6.626 x 10^-34)(3.00 x 10^8)/(2.10 x 10^-19) = 9.4657 x 10^-7 m = 946.6 nm = 9466 angstroms = 0.9466 µm (in the near-infrared region of the spectrum). BACK
I-7: Here the operative equation is: wavelength = c/frequency. Thus, wavelength = (3.00 x 10^8)/(120 x 10^6) = 2.5 meters. BACK
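The two conversions in answers I-6 and I-7 can be verified with a short calculation; this is only a checking sketch, using the same rounded constants quoted in the answers.

```python
# Verify the wavelength conversions in answers I-6 and I-7
h = 6.626e-34    # Planck's constant, J*s
c = 3.00e8       # speed of light, m/s

# I-6: wavelength from photon energy, lambda = h*c / E
E = 2.10e-19                     # joules
lam_from_energy = h * c / E      # meters
print(f"I-6: {lam_from_energy:.4e} m = {lam_from_energy * 1e9:.1f} nm")  # ~946.6 nm

# I-7: wavelength from frequency, lambda = c / f
f = 120e6                        # 120 MHz
lam_from_freq = c / f            # meters
print(f"I-7: {lam_from_freq:.1f} m")                                     # 2.5 m
```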
I-8: Go first to the peak close to 500 nanometers of the irradiance curve for sunlight as it reaches the outer atmosphere. On the ordinate, its spectral irradiance reads 2100. For the sea level irradiance curve at the same peak position, the value is estimated to be 1300. Thus, the percent loss is: (2100 - 1300)/2100 x 100 = 38%. Note that at most other spectral locations, the loss is less. BACK
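The percentage loss can be reproduced directly; the irradiance values below are simply the ones read from the curves in the answer above.

```python
# Percent loss of solar irradiance at the ~500 nm peak (values read from the curves)
top_of_atmosphere = 2100   # spectral irradiance at the outer atmosphere
sea_level = 1300           # spectral irradiance at sea level

percent_loss = (top_of_atmosphere - sea_level) / top_of_atmosphere * 100
print(f"{percent_loss:.0f}% lost in passing through the atmosphere")  # about 38%
```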
I-9: 1000 nm per micrometer; 10,000 angstroms per micrometer. BACK
I-10: (Intervals in micrometers unless otherwise noted.) 1) Visible-Near IR (0.4 - 2.5); 2) Mid-IR (3 - 5); 3) Thermal IR (8 - 14); 4) Microwave (1 - 30 centimeters). BACK
I-11: (In micrometers) a) 0.8 - 0.95; b) 0.8 - 0.95; the percent reflectance is less than that of grasslands; c) 0.59; d) 0.57. The four classes should be distinguishable at 0.6 micrometers, provided the instrument's sensitivity (its ability to distinguish small differences in reflectance) is 5% or better. BACK
I-12: In the visible, the red sand (a rock material) is brighter than either vegetation type (water is least bright); however, some rocks can be darker (black shales, for example). In these curves the relationship reverses in the Near-IR (at 1.2 micrometers), so that vegetation (here, pinewoods) is normally brighter than most rock types in this part of the Near-IR. BACK
I-13: By extrapolating to the abscissa and ordinate, the values for 0.5 and 1.1 micrometers respectively are a) water: 10 and 1; b) rock (sand): 40 and 32; c) grasslands: 20 and 40. When these are plotted in the diagram, both rock and vegetation are about equidistant from the unknown X. This is a case where one might choose a different wavelength: try 1.0 micrometers and see if discrimination between those two classes improves. The conclusion you should now reach is a strong argument for having larger numbers of spectral bands (each of which can also cover a narrower interval of wavelengths) on the sensor. BACK
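To make the distance comparison concrete, here is a minimal sketch of the nearest-class test in a two-band feature space, using the reflectance pairs read off above. The coordinates given for the unknown X are hypothetical (they are not stated in the text) and were chosen only so that X falls about equally far from rock and grasslands, as described in the answer.

```python
import math

# Reflectance (band 1 at 0.5 um, band 2 at 1.1 um) for each class, from answer I-13
classes = {
    "water": (10, 1),
    "rock (sand)": (40, 32),
    "grasslands": (20, 40),
}

# Hypothetical position of the unknown X (illustrative value only)
unknown = (30, 36)

# Euclidean distance from X to each class in the two-band feature space
for name, (b1, b2) in classes.items():
    d = math.hypot(unknown[0] - b1, unknown[1] - b2)
    print(f"{name:12s} distance = {d:.1f}")
# Rock and grasslands come out nearly equidistant, so these two bands alone
# cannot separate them for this point; an additional band would help.
```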
I-14: In a false color composite, for example, the forest and growing crops are both likely to appear red (we will find out why later in this Section). But in an image, most fields tend to have straight boundaries (usually rectangles), whereas forests commonly are irregular in shape and distribution. Shape is a prime tool by which an interpreter can distinguish classes that have similar spectral characteristics but are distinct in their outlines. Look at the Salt Lake City image again: note the widespread forests in the mountains and the small, regularly shaped fields in the lowlands. BACK
I-15: For the Non-Vegetated classes, at ~0.87 micrometers; Vegetated at ~0.78 micrometers. BACK
I-16: The gray levels will depend on the specific point locations you chose. As a general rule, a feature, object, or patch seen in a filtered black and white aerial photo (or a space image) will be light-toned if its color is close to the filter's color, and will appear progressively darker in images made through other filters, depending on how far apart the two colors lie in the visible spectrum (red is spectrally farther from blue than from green). BACK
I-17: The white feature is light (clear-toned) in all three color layers (or projections) and represents a white object on the ground, such as light sand. Blue represents materials that are brightest at shorter wavelengths and diminish in reflectance at longer visible wavelengths, as may be the case for soils and some rock materials. Dark red is typical of trees and thick shrubs; light red is associated with grass cover (and also with early leafing or even certain plant diseases). BACK
I-18: This is an example of what I call a "subjective" question, in which no specific answer can easily be given owing to the particular choices of feature, location, etc. Hereafter, I will treat similar questions and their answers with the simple statement: Subjective. However, the red tones in the false color composite in this scene are medium gray in the bottom black and white photo but dark in the other two. BACK
I-19: 13,300 sq. miles; about 34,450 sq. km; 8,512,000 acres. BACK
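The unit conversions can be checked with a quick calculation; the conversion factors below are the standard ones (1 sq. mile = 2.589988 sq. km; 1 sq. mile = 640 acres).

```python
# Convert the scene area in answer I-19 from square miles to the other units
area_sq_miles = 13_300

area_sq_km = area_sq_miles * 2.589988   # 1 sq mile = 2.589988 sq km
area_acres = area_sq_miles * 640        # 1 sq mile = 640 acres

print(f"{area_sq_km:,.0f} sq km")       # roughly 34,450 sq km
print(f"{area_acres:,.0f} acres")       # 8,512,000 acres
```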
I-20: The detectors are electronic in nature and tend to respond similarly to one another when the same number of photons strikes each (over the same time). But they are not exactly matched. So, when several scan lines cross the same ground feature (assume spectral homogeneity), the detectors may have slightly different responses, causing a distinguishable variation in gray level from one line to the next in the image. This helps to single out individual scan lines. Sometimes, too, a given detector will experience a fluctuation from its normal response, so that its line can be notably lighter or darker overall than its two neighboring lines. Computer processing can minimize these differences, so that properly processed images appear almost free of individual lines.
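One simple way such line-to-line differences can be reduced is to rescale each scan line so its statistics match those of the whole image. This is only a sketch of the general idea, not the specific correction applied to operational Landsat products.

```python
import numpy as np

def destripe(image: np.ndarray) -> np.ndarray:
    """Per-line gain/offset adjustment: force every scan line (row) to share
    the global mean and standard deviation. A rough illustration of striping
    removal, not the operational Landsat correction."""
    global_mean = image.mean()
    global_std = image.std()
    out = np.empty_like(image, dtype=float)
    for i, line in enumerate(image):
        line_std = line.std()
        gain = global_std / line_std if line_std > 0 else 1.0
        out[i] = (line - line.mean()) * gain + global_mean
    return out

# Example: a synthetic 4-line patch where one "detector" reads systematically low
scene = np.array([[50, 52, 51, 53],
                  [48, 50, 49, 51],   # the low-responding detector's line
                  [50, 53, 52, 54],
                  [51, 52, 50, 53]], dtype=float)
print(destripe(scene).round(1))
```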
I-21: Subjective. But, at the least you should have found Trenton, New Jersey.
I-22: Band 4 is rather "washed out," in that it shows an overall dark tone with little contrast. But it best shows the silt in the ocean waters. Band 5 shows the sharpest contrast (the greatest spread of gray levels). Vegetation appears in the darkest gray tones; these tones emphasize the ridges in the upper left of this band image. In both Bands 4 and 5, the Pine Barrens are somewhat darker; this speaks to the presence of considerable evergreen vegetation, with its darker needles. Band 6 is dominated by light tones, indicating that highly reflective vegetation is widespread. The dark patches are urban areas. Water in this band, and in Band 7, is extremely dark, as expected from its reflectance of near zero. Band 7 is marked by even lighter tones. Both 6 and 7 show the Barrens to be a bit darker, suggesting that evergreens are somewhat less reflective than deciduous trees. Note that the ridges can be discerned even though they have little contrast with the vegetation-rich valleys between them. Also note that the cities don't stand out in Bands 4 and 5.
I-23: Subjective. You should have found the table's criteria to work fairly well.
I-24: For full scenes printed at a scale of 1:1,000,000 (the standard product, about 10 inches on a side with white margins), the TM and MSS products show strong similarities, even in the details. This is because pixels representing 30 meters on the ground do not look sharply different from pixels representing 79 meters when both are printed at this scale. For an image at this scale, which extends over about 7 inches on a side, the size of a 79 meter pixel is 79/185000 x 7 = 0.003 inches (the 185000 refers to the 185 km ground length of the scene). The 30 meter case gives a pixel size of 0.001 inches. The eye cannot clearly differentiate between these two sizes. The higher resolution of the TM image produces an obvious improvement when the scale is enlarged (1:250,000 produces a full image about 28 inches wide): the TM pixels remain small enough (0.004 inches) not to distract the eye, whereas the 79 meter MSS pixels grow to 0.012 inches, large enough to be seen and to give a tiny but disturbing blocky effect. Likewise, when a subscene (part of a full image) is produced from TM data, it also appears sharper when printed at a typical size (e.g., 10 inches).
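The pixel-size arithmetic above amounts to dividing the ground pixel dimension by the map scale denominator and converting to inches; the short sketch below reproduces figures close to those quoted in the answer (small differences come from rounding the 7-inch image width).

```python
# Printed size of one pixel = ground pixel size / scale denominator, in inches
def pixel_print_size_inches(ground_pixel_m: float, scale_denominator: float) -> float:
    meters_on_paper = ground_pixel_m / scale_denominator
    return meters_on_paper * 39.37     # meters to inches

for scale in (1_000_000, 250_000):
    for pixel in (79, 30):             # MSS and TM ground pixel sizes in meters
        size = pixel_print_size_inches(pixel, scale)
        print(f"1:{scale:,}  {pixel} m pixel -> {size:.4f} in")
# At 1:1,000,000 both pixel sizes are near the eye's limit of resolution;
# at 1:250,000 the 79 m MSS pixel (~0.012 in) becomes noticeably blocky.
```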
I-25: Because individual materials that occur within a class (for example, several mineral species within a mineral group) express their variations in composition as slight shifts in the peaks and troughs of a continuous spectral curve, hyperspectral sensors are not only excellent at identifying the different groups but can also determine the identities of individual members of those groups.
I-26: The more obvious difference is the distortion in the shape of features that have a strong three-dimensional expression, such as mountains. In Landsat images, the mountains near San Francisco appear "normal," that is, the slopes on either side of the mountain crests have similar slope angles. But in the radar image one slope appears stretched out and the opposing slope appears shortened; this is a hallmark of radar imagery known as layover. A second difference relates to gray tonal levels. In radar, some features have tonal signatures quite unlike those in Landsat Visible-Near IR images; the causes are covered in Section 8. A good example is the San Francisco Airport, which in radar is quite black but would show various shades of gray in most Landsat bands.