Hyper Spectral Multi Spectral Images Fusion Using Matlab

Description

Abstract:

Multimodal medical image fusion is performed to minimize redundancy while augmenting the necessary information from input images acquired using different medical imaging sensors. The sole aim is to yield a single fused image that is more informative for efficient clinical analysis. This paper presents a multimodal fusion framework using the non-subsampled contourlet transform (NSCT) domain for images acquired using two distinct medical imaging modalities (i.e., magnetic resonance imaging and computed tomography). The major advantage of using NSCT is its shift invariance, directionality, and phase information, which improve the finally fused image. The first stage employs the NSCT domain for fusion, and the second stage enhances the contrast of the diagnostic features using a guided filter. A quantitative analysis of the fused images is carried out using dedicated fusion metrics. The fusion responses of the proposed approach are also compared with other state-of-the-art fusion approaches, demonstrating the superiority of the obtained fusion results. Finally, the tumour region is segmented by applying Fuzzy C-Means clustering.


OVERVIEW AND SCOPE

The main purpose is to fuse medical images using image processing techniques, together with deep learning methods, which also fall under image processing. The framework is applied to diagnostic scans such as CT and MRI.

Existing methods:

  • Image averaging and maximization method
  • Principal component analysis
  • Discrete Cosine Transform

Drawbacks:

  • Contrast information loss due to averaging method
  • Maximizing approach is sensitive to sensor noise
  • Spatial distortion is high
  • Limited performance in terms of edge and texture representation

PROPOSED SYSTEM

  • PREPROCESSING
  • STATIONARY WAVELET TRANSFORM (a MATLAB sketch of this fusion stage follows this list)
  • CONVOLUTIONAL NEURAL NETWORK
  • NSCT
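
As promised above, here is a minimal sketch of the stationary wavelet transform fusion stage. It assumes the Wavelet Toolbox and two pre-registered, equally sized source images with even dimensions; the file names are hypothetical. NSCT itself does not ship with MATLAB, so only the wavelet stage is sketched here.

    % Stationary wavelet fusion sketch (Wavelet Toolbox; hypothetical file
    % names; images must be registered, the same size, and even-sized).
    mri = im2double(imread('mri_slice.png'));
    ct  = im2double(imread('ct_slice.png'));

    % One-level stationary (undecimated) wavelet decomposition of each image.
    [aM, hM, vM, dM] = swt2(mri, 1, 'haar');
    [aC, hC, vC, dC] = swt2(ct,  1, 'haar');

    % Fusion rules: average the approximation bands; keep the
    % larger-magnitude coefficient in each detail band.
    aF = (aM + aC) / 2;
    hF = hM .* (abs(hM) >= abs(hC)) + hC .* (abs(hC) > abs(hM));
    vF = vM .* (abs(vM) >= abs(vC)) + vC .* (abs(vC) > abs(vM));
    dF = dM .* (abs(dM) >= abs(dC)) + dC .* (abs(dC) > abs(dM));

    % The inverse transform yields the fused image.
    fused = iswt2(aF, hF, vF, dF, 'haar');
    imshow(fused, []);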

ADVANTAGES

  • Efficient compression ratio
  • Accuracy is high
  • Visual quality is high
  • Security is high
  • NSCT provides better edges and texture region than other transforms

Block diagram

[Block diagram of the proposed fusion system]


DIGITAL IMAGE PROCESSING

The identification of objects in an image would probably start with image processing techniques such as noise removal, followed by (low-level) feature extraction to locate lines, regions and possibly areas with certain textures.

The clever part is to interpret collections of these shapes as single objects, e.g. cars on a road, boxes on a conveyor belt or cancerous cells on a microscope slide. One reason this is an AI problem is that an object can appear very different when viewed from different angles or under different lighting. Another problem is deciding which features belong to which object and which are background or shadows. The human visual system performs these tasks mostly unconsciously, but a computer requires skillful programming and lots of processing power to approach human performance.

Image processing is the manipulation of data in the form of an image through several possible techniques. An image is usually interpreted as a two-dimensional array of brightness values, and is most familiarly represented by such patterns as those of a photographic print, slide, television screen, or movie screen. An image can be processed optically or digitally with a computer.

To digitally process an image, it is first necessary to reduce the image to a series of numbers that can be manipulated by the computer. Each number representing the brightness value of the image at a particular location is called a picture element, or pixel. A typical digitized image may have 512 × 512, or roughly 262,000, pixels, although much larger images are becoming common. Once the image has been digitized, there are three basic operations that can be performed on it in the computer. For a point operation, a pixel value in the output image depends on a single pixel value in the input image. For local operations, several neighbouring pixels in the input image determine the value of an output image pixel. In a global operation, all of the input image pixels contribute to an output image pixel value.
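
The three operation classes can be illustrated in a few lines of MATLAB; this is an illustrative sketch using base MATLAB functions, with a hypothetical file name.

    % Point, local and global operations on a grayscale image.
    I = im2double(imread('slice.png'));   % hypothetical grayscale image

    % Point operation: each output pixel depends only on the same input pixel.
    negative = 1 - I;

    % Local operation: each output pixel depends on a neighbourhood,
    % here a 3-by-3 mean computed by 2-D convolution.
    smoothed = conv2(I, ones(3)/9, 'same');

    % Global operation: every input pixel contributes to every output value,
    % e.g. the 2-D Fourier transform used in frequency-domain filtering.
    F = fft2(I);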

These operations, taken singly or in combination, are the means by which the image is enhanced, restored, or compressed. An image is enhanced when it is modified so that the information it contains is more clearly evident, but enhancement can also include making the image more visually appealing.

An example is noise smoothing. To smooth a noisy image, median filtering can be applied with a 3 × 3 pixel window. This means that the value of every pixel in the noisy image is recorded, along with the values of its nearest eight neighbours. These nine numbers are then ordered according to size, and the median is selected as the value for the pixel in the new image. As the 3 × 3 window is moved one pixel at a time across the noisy image, the filtered image is formed.
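
A sketch of exactly this 3 × 3 median filter, assuming the Image Processing Toolbox (imnoise, medfilt2) and a hypothetical input file:

    % 3-by-3 median filtering of a salt-and-pepper corrupted image.
    noisy   = imnoise(im2double(imread('slice.png')), 'salt & pepper', 0.05);
    cleaned = medfilt2(noisy, [3 3]);   % the 3 x 3 window described above
    imshowpair(noisy, cleaned, 'montage');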

Another example of enhancement is contrast manipulation, where each pixel's value in the new image depends solely on that pixel's value in the old image; in other words, this is a point operation. Contrast manipulation is commonly performed by adjusting the brightness and contrast controls on a television set, or by controlling the exposure and development time in printmaking. Another point operation is pseudo-colouring a black-and-white image, by assigning arbitrary colours to the gray levels. This technique is popular in thermography (the imaging of heat), where hotter objects (with high pixel values) are assigned one colour (for example, red), and cool objects (with low pixel values) are assigned another colour (for example, blue), with other colours assigned to intermediate values.
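
A pseudo-colouring sketch in the spirit of the thermography example; it assumes a hypothetical single-band 8-bit image and uses base MATLAB with the jet colormap.

    % Pseudo-colouring: map gray levels to arbitrary colours via a colormap.
    gray = imread('thermal.png');     % hypothetical 8-bit single-band image
    rgb  = ind2rgb(gray, jet(256));   % low values -> blue, high values -> red
    imshow(rgb);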

Recognizing object classes in real-world images is a long-standing goal in computer vision. Conceptually, this is challenging due to the large appearance variations of object instances belonging to the same class. Additionally, distortions from background clutter, scale, and viewpoint variations can render appearances of even the same object instance vastly different. Further challenges arise from interclass similarity, in which instances from different classes can appear very similar. Consequently, models for object classes must be flexible enough to accommodate class variability, yet discriminative enough to sieve out true object instances in cluttered images. These seemingly paradoxical requirements of an object class model make recognition difficult.

This paper addresses two goals of recognition: image classification and object detection. The task of image classification is to determine whether an object class is present in an image, while object detection localizes all instances of that class in an image. Toward these goals, the main contribution of this paper is an approach for object class recognition that employs edge information only. The novelty of our approach is that we represent contours by very simple and generic shape primitives, line segments and ellipses, coupled with a flexible method to learn discriminative primitive combinations. These primitives are complementary in nature: line segments model straight contours, and ellipses model curved contours. We choose the ellipse because it is one of the simplest circular shapes, yet sufficiently flexible to model curved shapes.

These shape primitives possess several attractive properties. First, unlike edge-based descriptors, they support abstract and perceptually meaningful reasoning such as parallelism and adjacency. Also, unlike contour fragment features, the storage demands of these primitives are independent of object size; they are efficiently represented with four parameters for a line and five parameters for an ellipse.

Additionally, matching between primitives can be efficiently computed (e.g., with geometric properties), unlike contour fragments, which require comparisons between individual edge pixels. Finally, as geometric properties are easily scale-normalized, they simplify matching across scales. In contrast, contour fragments are not scale invariant, and one is forced either to rescale fragments, which introduces aliasing effects (e.g., when edge pixels are pulled apart), or to resize an image before extracting fragments, which degrades image resolution.

Recent studies show that the generic nature of line segments and ellipses affords them an innate ability to represent complex shapes and structures. While individually less distinctive, a combination of several of these primitives becomes sufficiently discriminative. Here, each combination is a two-layer abstraction of primitives: pairs of primitives (termed shape tokens) at the first layer, and a learned number of shape tokens at the second layer. We do not constrain a combination to have a fixed number of shape tokens, but allow it to adapt automatically and flexibly to an object class. This number influences a combination's ability to represent shapes, where simple shapes favor fewer shape tokens than complex ones. Consequently, discriminative combinations of varying complexity can be exploited to represent an object class. We learn this combination by exploiting the distinguishing shape, geometric, and structural constraints of an object class. Shape constraints describe the visual aspect of shape tokens, while geometric constraints describe their spatial layout (configurations). Structural constraints enforce possible poses/structures of an object via relationships (e.g., an XOR relationship) between shape tokens.

CLASSIFICATION OF IMAGES:

There are three types of images used in digital image processing. They are

  1. Binary Image
  2. Gray Scale Image
  3. Color Image

BINARY IMAGE:

A binary image is a digital image that has only two possible values for each pixel. Typically the two colors used for a binary image are black and white, though any two colors can be used. The color used for the object(s) in the image is the foreground color, while the rest of the image is the background color.

Binary images are also called bi-level or two-level, meaning that each pixel is stored as a single bit (0 or 1). The names black-and-white, monochrome or monochromatic are often used for this concept, but may also designate any image that has only one sample per pixel, such as a gray-scale image.

Binary images often arise in digital image processing as masks or as the result of certain operations such as segmentation, thresholding, and dithering. Some input/output devices, such as laser printers, fax machines, and bi-level computer displays, can only handle bi-level images.
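
A minimal thresholding sketch that produces such a binary image, assuming the Image Processing Toolbox and a hypothetical RGB input file:

    % Global thresholding to a binary mask (Otsu's method by default).
    gray = rgb2gray(imread('cells.png'));   % hypothetical RGB input
    bw   = imbinarize(gray);                % Otsu threshold, then compare
    imshow(bw);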

GRAY SCALE IMAGE

A gray-scale image is a digital image in which the value of each pixel is a single sample; that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray (0-255), varying from black (0) at the weakest intensity to white (255) at the strongest.

Gray-scale images are distinct from one-bit black-and-white images, which in the context of computer imaging are images with only the two colors, black and white (also called bi-level or binary images). Gray-scale images have many shades of gray in between. Gray-scale images are also called monochromatic, denoting the absence of any chromatic variation.

Gray-scale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet), and in such cases they are monochromatic proper when only a given frequency is captured. They can also be synthesized from a full-colour image by a gray-scale conversion.
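
Synthesizing a gray-scale image from a full-colour one takes a single call in MATLAB; peppers.png is a sample image shipped with MATLAB.

    % Full-colour to gray-scale conversion (Image Processing Toolbox).
    rgb  = imread('peppers.png');   % sample image shipped with MATLAB
    gray = rgb2gray(rgb);           % weighted sum of the R, G and B channels
    imshow(gray);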

COLOUR IMAGE:

A (digital) color image is a digital image that includes color information for each pixel. Each pixel has a particular value which determines its apparent color. This value is given by three numbers specifying the decomposition of the color into the three primary colors Red, Green and Blue. Any color visible to the human eye can be represented this way. The decomposition of a color into the three primary colors is quantified by a number between 0 and 255. For example, white is coded as R = 255, G = 255, B = 255; black is (R, G, B) = (0, 0, 0); and, say, bright pink is (255, 0, 255).

In other words, an image is an enormous two-dimensional array of color values (pixels), each of them coded on 3 bytes representing the three primary colours. This allows the image to contain a total of 256 × 256 × 256 ≈ 16.8 million different colours. This technique is known as RGB encoding, and is specifically adapted to human vision.
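
A tiny base-MATLAB sketch of RGB encoding, reproducing the three example colours above pixel by pixel:

    % Build a 1-by-3-pixel RGB image from the three primary-colour channels.
    img = zeros(1, 3, 3, 'uint8');   % rows x columns x colour channels
    img(1, 1, :) = [255 255 255];    % white
    img(1, 2, :) = [0 0 0];          % black
    img(1, 3, :) = [255 0 255];      % bright pink, as in the text
    imshow(img, 'InitialMagnification', 'fit');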

It is observable that our behaviour and social interactions are greatly influenced by the emotions of the people we interact with. Hence a successful emotion recognition system could have great impact in improving human-computer interaction systems, making them more user-friendly and more human-like.

Moreover, there are a number of applications where emotion recognition can play an important role including biometric authentication, high-technology surveillance and security systems, image retrieval, and passive demographical data collections.

It is unarguable that the face is one of the most important features that characterize human beings. By only looking at people's faces, we are not only able to tell who they are but also perceive a lot of information such as their emotions, ages and genders.

This is why emotion recognition from faces has received much interest in the computer vision research community over the past two decades.

IMAGE ACQUISITION

Image acquisition is the process of obtaining a digital image. It requires an image sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or colour TV camera that produces an entire image of the problem domain every 1/30 s, or a line-scan camera that produces a single image line at a time; in the latter case, the object's motion past the line builds up a two-dimensional image.

IMAGE ENHANCEMENT

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
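
Two standard contrast-enhancement operations, sketched with the Image Processing Toolbox; pout.tif is a low-contrast sample image shipped with the toolbox.

    % Contrast enhancement: intensity stretching and histogram equalization.
    gray      = imread('pout.tif');
    stretched = imadjust(gray);   % stretch intensities to the full range
    equalized = histeq(gray);     % flatten the histogram
    montage({gray, stretched, equalized});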

IMAGE RESTORATION

Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
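
A restoration sketch built on an assumed degradation model: the image is blurred with a known motion PSF and then recovered by Wiener deconvolution (Image Processing Toolbox).

    % Model a motion blur, then invert it with Wiener deconvolution.
    gray     = im2double(imread('cameraman.tif'));   % sample image
    psf      = fspecial('motion', 15, 45);           % assumed degradation model
    blurred  = imfilter(gray, psf, 'conv', 'circular');
    restored = deconvwnr(blurred, psf, 0.01);        % 0.01 = noise-to-signal ratio
    montage({blurred, restored});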

CONCLUSION

In this research, we proposed a wavelet-based fusion approach for PET and MRI image diagnosis. The experiment was tested on three datasets: normal axial, normal coronal and Alzheimer's disease brain images. Wavelet decomposition of the dataset was carried out to four levels, separating low- and high-activity regions. The quality of the fused image was evaluated using MSE and PSNR. The proposed method gives 90-95% accuracy for the fusion. The experiment was performed with the Haar wavelet; it can be extended to the Haar and db1 wavelets for fusion of three-dimensional multi-modal medical databases. Medical image fusion plays a dynamic role in medical imaging applications by helping radiologists spot abnormalities, especially tumours in MRI brain images. The proposed image fusion algorithm has been analyzed for different types of MRI and CT images. From the obtained results it is noted that the proposed NSCT method gives better results than the other methods.
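
The MSE/PSNR evaluation mentioned above can be reproduced with the Image Processing Toolbox functions immse and psnr; the file names here are hypothetical placeholders for a reference image and a fused result.

    % Fusion quality metrics: mean squared error and peak SNR.
    ref   = im2double(imread('reference.png'));   % hypothetical reference
    fused = im2double(imread('fused.png'));       % hypothetical fused result
    fprintf('MSE  = %.4f\n', immse(fused, ref));
    fprintf('PSNR = %.2f dB\n', psnr(fused, ref));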


