8th IT, All Subject Question Bank

Download all question Bank

[The link contains the question banks for Advance Computer Network (ACN), Data Compression (DC) & Design and Analysis of Algorithm (DAA)]


Download all MSE-I Syllabus

15 August 2012

Disaster Assessment: 4 questions to be asked in the class test on 16th August 2012






Note:- Please prepare all these 4 questions, as they will be asked in the class test on 16th August, i.e. Thursday. These questions + the material given below are important for the MSE-I examination.


1) What is spectral signature?
Ans:- According to the spectral reflectance curve, different objects have different reflectivity at different wavelengths. This causes each object to form a unique pattern on the spectral reflectance curve. This pattern is referred to as the spectral signature of the object. In other words, measurements of different objects taken at different wavelengths show their spectral signatures.
At certain wavelengths an object appears most distinct on its spectral signature; this helps to recognise the object and allows the observer to select the most suitable wavelength for it. Take the example of water and vegetation: both appear similar in the visible spectrum, but there is a considerable difference between them in the near-infrared range. Therefore, an observer interested in vegetation will prefer an infrared wavelength, as vegetation appears bright there due to its higher reflectivity in the infrared region.
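The idea of choosing the most distinct wavelength can be sketched numerically. The reflectance values below are illustrative placeholders, not measured data:

```python
import numpy as np

# Hypothetical mean reflectance (%) of water and vegetation at four
# wavelength bands: blue, green, red, near-infrared (illustrative values).
bands = ["blue", "green", "red", "nir"]
water      = np.array([8.0, 10.0, 6.0,  2.0])
vegetation = np.array([5.0, 12.0, 4.0, 45.0])

# The most useful band for separating the two classes is the one where
# their spectral signatures differ the most.
best = bands[int(np.argmax(np.abs(water - vegetation)))]
print(best)  # nir
```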


2) Explain image enhancement.
Ans:- Image enhancement
Image reduction
Images that are too large to be viewed on the screen are reduced in size by the image reduction technique. Here the number of pixels is reduced so that the image shrinks to, say, 50% or 25% of its original size. However, important data may be lost in the process, as alternating rows and columns of pixels are removed.

Image magnification
It is the opposite of image reduction: the image is zoomed by increasing the number of pixels. To make the image twice as big, three additional pixels with the same digital value are added for each original pixel.
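Both operations above can be sketched with array indexing; this is a minimal illustration on a toy image, not a production resampling routine:

```python
import numpy as np

# A tiny 4x4 single-band image of digital numbers (DNs).
img = np.arange(16).reshape(4, 4)

# Image reduction to 50%: keep every second row and column
# (alternating rows/columns are discarded, so some data is lost).
reduced = img[::2, ::2]          # shape (2, 2)

# Image magnification to 2x: replicate each pixel into a 2x2 block,
# i.e. three extra pixels with the same DN per original pixel.
magnified = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)  # shape (8, 8)
```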

Colour compositing
There are three basic colours: blue, green and red. Different colours can be obtained by adding proportionate mixtures of these three colours; this method is called additive colour compositing. If three pigments of the primary colours cyan, magenta and yellow are used instead, different colours are obtained through subtraction, and this method is known as subtractive colour compositing.

If the obtained image contains bands from the infrared region, they are converted into visible colours, which means objects appear in completely different colours than in the real-world scene. This is known as a false colour composite. If the image uses only the visible colour region, it is known as a true colour composite.

Transect Extraction
In some images there is a clear distinction between two regions, which suggests that not all bands are needed for the analysis. Transect extraction is therefore carried out to determine which band gives the largest differences in the reflection profile of a particular image.

Contrast enhancement

Contrast enhancement can be carried out in two different ways:

Linear stretch
The range of shades is expanded in order to increase the number of distinguishable objects in the image. As the name suggests, the image's grey levels are stretched linearly.
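A minimal sketch of a linear stretch, assuming an 8-bit output range of 0 to 255:

```python
import numpy as np

def linear_stretch(img, out_min=0, out_max=255):
    """Linearly rescale DNs so the darkest pixel maps to out_min
    and the brightest to out_max."""
    lo, hi = img.min(), img.max()
    return (img - lo) * (out_max - out_min) / (hi - lo) + out_min

# A low-contrast band whose DNs occupy only the range 100..150.
band = np.array([100, 110, 125, 150], dtype=float)
stretched = linear_stretch(band)
print(stretched)  # [  0.   51.  127.5 255. ]
```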

Non linear stretch
It is good for low-contrast images, and there are two main procedures through which a non-linear stretch can be carried out.

Histogram equalisation
With the help of the histogram, brightness values are redistributed so that the shades are spread more evenly across the available range. This method greatly helps in differentiating regions of different shades on the basis of their histogram.
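A minimal histogram equalisation sketch for 8-bit DNs (the exact remapping formula varies between implementations):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalisation sketch for an 8-bit single-band image:
    remap DNs so that brightness values spread over the full range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist)
    # Normalise the cumulative distribution to the 0..levels-1 range.
    cdf = (cdf - cdf.min()) * (levels - 1) / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

img = np.array([[50, 50, 51],
                [51, 52, 200]], dtype=np.uint8)
eq = equalize(img)  # DNs clustered near 50 now span the full range
```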

Gaussian Stretch
The linear stretch and histogram equalisation methods provide different shades for a given image, but they greatly reduce the distinctiveness of the highest and lowest brightness contrasts in the image. A Gaussian stretch greatly improves the distinctiveness of these extreme contrasts; on the other hand, it reduces the differential contrast in the middle region.

Density slicing
It converts the many shades between black and white into different coloured regions, so that the human eye can easily extract information from the image that would otherwise appear as unrecognisable shades. Sometimes symbols are used instead of colours.
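Density slicing can be sketched as binning DNs into a few classes; the slice boundaries below are arbitrary illustrative choices:

```python
import numpy as np

# Density slicing: convert ranges of grey-level DNs into a few
# discrete classes (which a display would render as distinct colours).
dn = np.array([5, 60, 130, 200, 250])

# Hypothetical slice boundaries: 0-84 -> class 0, 85-169 -> class 1,
# 170-255 -> class 2.
classes = np.digitize(dn, bins=[85, 170])
print(classes)  # [0 0 1 2 2]
```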

Spatial filtering
It is known that an image has lower and higher frequencies depending upon the reflectance profiles of different objects. If an image contains objects with almost the same spectral signature, it is said to have low frequency; on the other hand, if the image contains objects with distinct spectral signatures, its frequency is considered high.

Depending upon the requirement, either the higher or the lower frequencies may be wanted. To obtain such images, filtering is carried out, which may be low-pass or high-pass filtering.

Low pass filters remove high frequency brightness values and are known as spatial smoothing filters. High pass filters remove low frequency brightness values and are known as spatial sharpening filters.
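A minimal sketch of both filters, using a 3x3 mean filter as the low-pass and the original-minus-smoothed residual as the high-pass:

```python
import numpy as np

def mean_filter3(img):
    """3x3 low-pass (smoothing) filter: each pixel becomes the mean of
    its 3x3 neighbourhood (edges handled by replication padding)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

img = np.array([[10, 10, 10],
                [10, 90, 10],
                [10, 10, 10]], dtype=float)

low = mean_filter3(img)   # smoothed image (high frequencies removed)
high = img - low          # high-pass residual (sharpening component)
```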

Edge detection
Some natural features, such as geological faults or a passing river, are clearly distinguishable due to abrupt colour differences in the image. In the same way, some man-made features can easily be separated from the background, such as roads, canals, railway lines, etc.

If in some cases this type of contrast does not appear, it has to be enhanced by edge enhancement techniques. Two edge enhancement techniques are discussed below:

Directional filters: These filters enhance linear edges in the horizontal, vertical and diagonal directions.
Non-directional filters: These filters enhance edges regardless of their direction, for example by using Laplacian filters.
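A non-directional (Laplacian) filter can be sketched as a small convolution; the image below is a toy vertical edge:

```python
import numpy as np

# Non-directional edge enhancement with a 3x3 Laplacian kernel:
# it responds to brightness changes in every direction.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def convolve3(img, kernel):
    """Valid-mode 3x3 convolution (no padding) for a single-band image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y+3, x:x+3] * kernel)
    return out

# A vertical edge: dark on the left, bright on the right.
img = np.array([[10, 10, 90, 90],
                [10, 10, 90, 90],
                [10, 10, 90, 90]], dtype=float)
edges = convolve3(img, laplacian)
print(edges)  # strongest response along the 10/90 boundary
```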

Ques3) What is image classification? Explain supervised and unsupervised classification.

Ans:- Image classification
Supervised classification

As the name suggests, supervised classification involves data entry by the analyst. The analyst provides information about several features, such as forest areas, dry sand and water, to the computer by allocating sample regions. These sample regions help the computer determine the DNs for each class; it then autonomously classifies all pixels based on their brightness values.

There are different algorithms for performing the autonomous classification, such as minimum distance to means and maximum likelihood.
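Minimum distance to means can be sketched as follows. The class mean vectors are hypothetical two-band values, not real training statistics:

```python
import numpy as np

# Minimum-distance-to-means classification: each pixel is assigned to
# the class whose mean DN vector (from the sample regions) is nearest.
class_means = {
    "water":      np.array([ 8.0,  3.0]),   # hypothetical (red, NIR) means
    "vegetation": np.array([ 5.0, 45.0]),
    "dry sand":   np.array([40.0, 35.0]),
}

def classify(pixel):
    names = list(class_means)
    dists = [np.linalg.norm(pixel - class_means[n]) for n in names]
    return names[int(np.argmin(dists))]

print(classify(np.array([6.0, 40.0])))  # vegetation
```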

Unsupervised classification

In unsupervised classification the computer decides everything, from the initial information to the final classification. The user need not specify the method on which the classification is based: the computer groups the DNs of the pixels in the image into spectral clusters on its own.

Unsupervised classification is very useful for discovering the spectral classes into which pixels fall, whereas in supervised classification these have to be defined by the user. However, the clusters produced by unsupervised classification have to be analysed further by supervised methods for a precise outcome.
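A common clustering method behind unsupervised classification is k-means; below is a minimal single-band sketch, not the exact algorithm any particular package uses:

```python
import numpy as np

def kmeans_1d(dns, k=2, iters=10, seed=0):
    """A minimal k-means sketch for unsupervised classification of
    single-band DNs: pixels are grouped into k spectral clusters
    without any training data from the analyst."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(dns, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each DN to its nearest cluster centre.
        labels = np.argmin(np.abs(dns[:, None] - centres[None, :]), axis=1)
        # Move each centre to the mean of its assigned DNs.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = dns[labels == j].mean()
    return labels, centres

dns = np.array([10, 12, 11, 200, 198, 205], dtype=float)
labels, centres = kmeans_1d(dns, k=2)
# The dark pixels and the bright pixels end up in different clusters.
```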


Ques4) Explain RADAR.
Ans:- Radar uses microwaves as a medium to sense objects that cannot be, or are hard to be, analysed in the visible spectrum. Microwaves provide distinct information about objects, and many applications exist due to their visibility and penetration ability.
Radar sends and receives continuous pulses in order to collect information about an object's position relative to the radar's position. It also provides details such as the size and shape of objects.
Radar mainly consists of the following components:
Pulse generator: discharges timed pulses
Transmitter: generates successive short bursts at regular intervals
Duplexer: coordinates the actively transmitted and received microwave energy
Directional antenna: shapes and focuses the pulses into a stream of pulses
Receiving antenna: receives the returned pulses and sends them to the receiver
Receiver: converts the returned pulses into a signal suitable for recording
Recording device: stores the information in digital form (on digital tape or hard disk) for later processing
Cathode ray tube monitor: produces a real-time display
Applications of radar
Static radars are used at airports for air traffic control
Navy ships carry radars to sense disturbances and intrusions
Airborne radars are used for taking microwave images of ground areas
Possible infiltration of airspace is effectively countered by surveillance radars on aircraft
Spaceborne radars are used to gather information about planetary atmospheres
Information about metal ores beneath the soil can be easily obtained using radar
Types of radar
PPI (plan position indicator) radar

o This type of radar has a circular display screen on which continuous radar pulses create echoes of the objects
o Its spatial resolution is low; therefore it is used for weather broadcasting, air traffic control and navigation purposes
SLR (side-looking radar)

o Side-looking radar has a side-pointed antenna that directs the transmitted pulses; the antenna is generally placed below the aircraft or spacecraft
o The radar produces continuous side strips that cover huge ground areas parallel to the airborne or spaceborne vehicle
o If the SLR is on an aircraft, it is called SLAR (side-looking airborne radar)
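The ranging principle behind radar (a pulse travelling to the target and back at the speed of light) can be sketched as:

```python
# Radar range from pulse round-trip time: the pulse travels to the
# target and back, so range = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def slant_range(echo_delay_s):
    return C * echo_delay_s / 2.0

# A pulse that returns after 20 microseconds:
r = slant_range(20e-6)
print(round(r))  # 2998 (metres)
```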

 ********************************************************************************************

For Reading Purpose........


Image statistics and sub setting
A digital image contains pixels having different digital numbers (DNs). Each number represents one smallest portion of the image, so in combination these digital numbers provide a complete view of the original scene. Digital numbers are allocated to the image in such a way that specific information can be extracted easily.
It is quite obvious that a larger number of smaller pixels creates a better image, as the smallest features can be captured as pixel size decreases. However, such an image occupies more storage space. On the other hand, if bigger pixels are used, they occupy the least amount of space, but the fineness of the image has to be compromised.
The number of bands used to capture the image also matters when discussing image statistics, as they play a major role in deciding the quality and information content of the image. If a single band is selected for detection by the sensor, it shows everything as varying shades of a single colour; note that in this case the detail depends upon the number of pixels allocated to the image. Such an image, known as a panchromatic image, is however not considered detailed and easily understandable, due to its single-colour display.
A multispectral image, on the other hand, has multiple bands from different colour regions, such that the combination of all bands gives a realistic image in which all colours can be recognised easily, similar to the actual human-eye view.

Sub setting
Sub setting means breaking a bigger image into smaller parts. When the analyst is interested in only a few objects or features in the image, the image is broken into several parts using the function known as sub setting. There are two types of sub setting: spatial sub setting and spectral sub setting.
Spatial sub setting breaks the image into parts by selecting a specific region of pixels from the image.
Spectral sub setting selects particular wavelengths from a multispectral image to acquire information about specific bands. This converts the multispectral image into one with fewer bands, or into a panchromatic image.
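Both kinds of sub setting can be sketched as array slicing on a hypothetical 6-band image:

```python
import numpy as np

# A hypothetical multispectral image: 100 x 100 pixels, 6 bands.
image = np.zeros((100, 100, 6))

# Spatial sub setting: select a specific region of pixels.
spatial_subset = image[20:60, 10:50, :]      # rows 20-59, cols 10-49

# Spectral sub setting: keep only selected bands (e.g. bands 2, 3, 4).
spectral_subset = image[:, :, [1, 2, 3]]
```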

***********************
For Reading Purpose........
Digital interpretation of true colour and false colour composite
The number of colours that can be represented by a monitor is known as its colour resolution. As with radiometric resolution, the number of shades that can be obtained depends upon the number of bits of the sensor. A 4-bit sensor will give 2^4 = 16 different shades on a black-and-white monitor.
A colour monitor has these bits shared equally between the three colours. Therefore, if a 24-bit sensor is used, 8 bits are allotted to each colour, and 2^8 = 256 shades of each colour can be obtained. Suppose 6 different bands are used; then the number of possible three-band colour assignments is 6!/(6-3)! = (1·2·3·4·5·6)/(1·2·3) = 120.
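The arithmetic above can be checked directly:

```python
from math import factorial

# Shades per band for a sensor of a given bit depth.
def shades(bits):
    return 2 ** bits

# Ordered assignments of the 3 display colours (R, G, B) to n bands.
def band_combinations(n):
    return factorial(n) // factorial(n - 3)

print(shades(4))             # 16
print(shades(8))             # 256
print(band_combinations(6))  # 120
```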
A colourful image basically consists of three main colours: blue, green and red. It is the varying proportions of these colours that produce the many different colours. The major advantage is that the captured image is similar in colour to what humans are used to seeing with their eyes.
There are two colour composite types that cover all colourful images captured through remote sensing techniques:
1) True colour composite
2) False colour composite
True colour composite
As the name suggests, this type of image is made up of the three main colours, so the image is almost identical in colour to the actual human-eye view. Such resemblance removes many complexities in understanding the features of the image.
False colour composite
In a false colour composite, objects do not appear in their actual colours; there is a considerable difference compared with what the human eye would see on Earth.
If one of the bands in a multispectral image lies outside the visible region, it becomes necessary to allot a visible colour to that invisible band so that the analyst can identify the differences between various objects in the image.
One of the most common colour composite schemes is shown below,
R = XS3 (NIR band)
G = XS2 (Red band)
B = XS1 (Green band)
where the red display colour is allotted to the NIR band, green to the red band, and consequently blue to the green band.
In a similar way:
R = SWIR band
G = NIR band
B = Red band
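Building a false colour composite amounts to stacking the chosen bands into the display's R, G and B channels; the band arrays below are random placeholders for real imagery:

```python
import numpy as np

# Hypothetical single-band images, 64 x 64 pixels each.
h, w = 64, 64
nir   = np.random.rand(h, w)   # XS3: near-infrared band
red   = np.random.rand(h, w)   # XS2: red band
green = np.random.rand(h, w)   # XS1: green band

# Standard false colour scheme: R = NIR, G = Red, B = Green.
false_colour = np.dstack([nir, red, green])   # shape (64, 64, 3)
```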


*********************
For Reading Purpose........
Visual Image Interpretation
Image interpretation process includes image reading, image measurement and image analysis.
Image reading: - It is the most basic form of image interpretation, carried out using properties of the objects in the image, i.e. shape, size, pattern, tone, texture, colour, shadow, etc.
Image measurement: - As the name suggests, it provides information about dimensional features such as length, location, height, density and temperature.
Image analysis: - It is the extraction of information from the image using the data previously obtained through image reading and image measurement.
The information obtained in this way is then used to create a more understandable map, known as an interpretation map or thematic map.
Elements of visual image interpretation

X,Y location
Size
Shape
Shadow
Tone/colour
Texture
Pattern
Height/ depth
Site/situation/association
X, Y location: - There are two basic methods through which this positional information can be obtained: I) surveying in the field and II) collecting remote sensing data
Size: - It provides information about the length, width, area, perimeter, etc. of the object.
Shape: - It provides information about the shape of the object; for example, an airport can be visualised as a straight strip from space, while a football stadium appears as a large oval object.
Shadow: - The shadow of an object sometimes gives more information than the object itself; in particular, height can be obtained from it. If the time is known, the angle made by the incident sun rays with the surface can be used to find the height. On the other hand, if the height is known, the time at which the image was taken can be found.
Tone: - In a black-and-white image the shade varies from black to white, depending upon the inherent characteristics of the object. In thermal imagery, for example, an object at a higher temperature emits a larger amount of radiation, and therefore differs in tone, from an object at a lower temperature.
Colour: - Vegetation appears green when EM waves in the visible range are sent towards it, because the leaves reflect green and absorb the rest.
Texture: - Texture can be defined as the arrangement and repetition of colour or tone in the image. Smaller individual objects cannot be identified easily; in this case, texture turns out to be a useful feature for detecting the presence of smaller objects as a group.
Pattern: - As the name suggests, patterns are useful in identifying specific areas that differ from others. Take the example of comparing urban areas with the outskirts: urban areas show a denser pattern of house rows and roads than the outskirts.
Height and depth: - The height and depth of objects can be found using parallax, which depends upon the displacement of the object due to a change in observation point.
Site, situation and association: - It depends upon human understanding and intuition. Sometimes there are obvious cues on the map that suggest a particular site exists, such as a coal storage yard and cooling pond near a thermal power station.
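The height-from-shadow relationship mentioned under the Shadow element can be sketched as follows, assuming a flat surface and a known sun elevation angle:

```python
import math

# Height from shadow length: with the sun's elevation angle known
# (from the image acquisition time), height = shadow_length * tan(elevation).
def height_from_shadow(shadow_length_m, sun_elevation_deg):
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 20 m shadow with the sun at 45 degrees elevation:
print(height_from_shadow(20.0, 45.0))  # approximately 20.0 m
```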



