This paper presents the fusion of three biometric traits, i.e., iris, face and fingerprint, at the matching-score level using a weighted sum of scores. Features are extracted from the pre-processed iris, face and fingerprint images. The features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module, which consists of three major steps: pre-processing, DWT segmentation and image fusion. The final fused score is then used, together with secret-key analysis, to declare the person authentic or not authentic.
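The weighted-sum fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the example weights and the 0.5 acceptance threshold are all assumptions for demonstration.

```python
import numpy as np

def fuse_scores(scores, weights, threshold=0.5):
    """Weighted-sum score-level fusion of normalized matcher scores."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize so weights sum to 1
    fused = float(np.dot(weights, scores))     # weighted sum of the matching scores
    return fused, fused >= threshold           # accept if fused score clears threshold

# Example: iris, face and fingerprint match scores in [0, 1]
fused, accepted = fuse_scores([0.82, 0.60, 0.75], weights=[0.5, 0.2, 0.3])
# fused = 0.5*0.82 + 0.2*0.60 + 0.3*0.75 = 0.755, so the claim is accepted
```

In practice each matcher's scores would first be normalized to a common range (e.g. min–max normalization) before the weighted sum is taken.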
Object identification in an image typically starts with image-processing techniques such as noise removal, followed by (low-level) feature extraction to locate lines, regions and possibly areas with certain textures. The difficult part is interpreting collections of these shapes as single objects, e.g. cars on a road, boxes on a conveyor belt or cancerous cells on a microscope slide. One reason this is an AI problem is that an object can appear very different when viewed from different angles or under different lighting. Another is deciding which features belong to which object and which are background, shadows, etc. The human visual system performs these tasks mostly unconsciously, but a computer requires skilful programming and a lot of processing power to approach human performance.
Image data can be manipulated in several ways for iris, fingerprint and face recognition. The automated method of iris recognition is relatively young, existing in patent only since 1994. The human iris, an annular region located around the pupil and covered by the cornea, provides independent and unique information about a person. A facial recognition system is a computer application capable of identifying or verifying a person from a digital image or a frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database.
It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. Recently, it has also become popular as a commercial identification and marketing tool.
Existing Systems
- Edge detection
- Feature vector
- Existing systems rely on fingerprinting alone. Fingerprinting by itself is not very robust, because fingerprints can be duplicated to spoof the system, and it is not very efficient.
- Only the spatial domain is considered.
- Biometric systems based on the combination of iris, palm-print and fingerprint features for person authentication.
- Principal Component Analysis (PCA) is used to compute the variance and covariance of the feature vectors.
- The sequential Haar coefficients require only two bytes to store each extracted coefficient.
- Cancelling the division in the subtraction results avoids the use of decimal numbers while preserving the difference between two adjacent pixels.
- Such a system gives more security than a uni-modal system because it combines multiple biometric features.
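The PCA step mentioned above can be sketched as below. This is a generic eigen-decomposition of the feature covariance matrix, not the project's MATLAB code; the function name and the toy data are illustrative assumptions.

```python
import numpy as np

def pca(X, k):
    """Project feature vectors (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                   # center the data
    cov = np.cov(Xc, rowvar=False)            # covariance matrix of the features
    vals, vecs = np.linalg.eigh(cov)          # eigen-decomposition (ascending order)
    order = np.argsort(vals)[::-1]            # sort by descending variance
    components = vecs[:, order[:k]]
    explained = vals[order[:k]] / vals.sum()  # fraction of variance per component
    return Xc @ components, explained

# Four 2-D feature vectors; most variance lies along the first axis
X = np.array([[2.0, 0.0], [0.0, 1.0], [4.0, 1.0], [2.0, 2.0]])
projected, explained = pca(X, k=1)  # explained[0] = 0.8
```

The `explained` values show how much of the total variance each retained component carries, which is how the number of components is usually chosen.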
CLASSIFICATION OF IMAGES
- Binary Image
- Gray Scale Image
- Colour Image
Image acquisition is the collection of a digital image. It requires an image sensor and the ability to digitize the signal produced by the sensor. The sensor might be a monochrome or colour TV camera that produces an entire image of the problem domain every 1/30 second, or a line-scan camera that produces a single image line at a time; in the latter case, the object's motion past the line builds up the two-dimensional image.
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
Image restoration is the operation of taking a corrupted/noisy image and estimating the clean original image. Corruption may come in many forms, such as motion blur, noise and camera misfocus. Image restoration differs from image enhancement in that the latter is designed to emphasize features that make the image more pleasing to the observer, but not necessarily to produce realistic data from a scientific point of view. Image enhancement techniques (like contrast stretching or de-blurring by a nearest-neighbour procedure) provided by imaging packages use no a priori model of the process that created the image. With image enhancement, noise can effectively be removed by sacrificing some resolution, but this is not acceptable in many applications; in a fluorescence microscope, for example, resolution in the z-direction is already poor, so more advanced image-processing techniques must be applied to recover the object. Deconvolution is an example of an image restoration method: it can increase resolution (especially in the axial direction), remove noise and increase contrast. In this project, the fingerprint and face images are pre-processed: the fingerprint image is resized and several filters are applied to it.
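Two of the pre-processing operations mentioned above, noise filtering and contrast stretching, can be sketched in a few lines. This is a plain-NumPy illustration under stated assumptions (a 3x3 median filter that leaves border pixels unchanged, and a linear stretch to [0, 255]); it is not the project's actual filter chain.

```python
import numpy as np

def contrast_stretch(img, lo=0, hi=255):
    """Linearly stretch intensities to the full [lo, hi] range (an enhancement step)."""
    img = img.astype(float)
    mn, mx = img.min(), img.max()
    if mx == mn:                              # flat image: nothing to stretch
        return np.full_like(img, float(lo))
    return (img - mn) / (mx - mn) * (hi - lo) + lo

def median_filter3(img):
    """3x3 median filter for impulse-noise removal (border pixels left unchanged)."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# Toy fingerprint patch with one impulse-noise pixel at the center
fp = np.array([[50, 52, 51], [53, 255, 50], [52, 51, 50]], dtype=float)
cleaned = median_filter3(fp)         # the 255 outlier is replaced by the local median 51
stretched = contrast_stretch(cleaned)
```

Filtering before stretching matters here: stretching first would waste most of the output range on the single noise spike.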
The image fusion process is defined as gathering all the important information from multiple images and including it in fewer images, usually a single one. This single image is more informative and accurate than any individual source image, and it contains all the necessary information. The purpose of image fusion is not only to reduce the amount of data but also to construct images that are more appropriate and understandable for human and machine perception. In computer vision, multisensor image fusion is the process of combining relevant information from two or more images into a single image; the resulting image is more informative than any of the input images.
In remote sensing applications, the increasing availability of spaceborne sensors motivates different image fusion algorithms. Several situations in image processing require high spatial and high spectral resolution in a single image. Most of the available equipment is not capable of providing such data convincingly. Image fusion techniques allow the integration of different information sources: the fused image can have complementary spatial and spectral resolution characteristics. However, standard image fusion techniques can distort the spectral information of the multispectral data while merging.
In satellite imaging, two types of images are available. The panchromatic image acquired by satellites is transmitted at the maximum resolution available, while the multispectral data are transmitted at a coarser resolution, usually two or four times lower. At the receiver station, the panchromatic image is merged with the multispectral data to convey more information.
Many methods exist to perform image fusion. The most basic one is the high-pass filtering technique. Later techniques are based on the Discrete Wavelet Transform, uniform rational filter banks and the Laplacian pyramid.
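The basic high-pass filtering technique can be sketched as below: the sharp image's high-frequency detail (the image minus a blurred copy of itself) is injected into the coarser band. This is a minimal sketch under stated assumptions (a simple box blur as the low-pass filter, single-band inputs of the same size); real pan-sharpening pipelines also handle registration and radiometric matching.

```python
import numpy as np

def box_blur(img, r=1):
    """Mean filter over a (2r+1)x(2r+1) window; the high-pass part is img minus this."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i-r):i+r+1, max(0, j-r):j+r+1].mean()
    return out

def hpf_fuse(ms_band, pan):
    """Inject the panchromatic image's high-frequency detail into one MS band."""
    detail = pan - box_blur(pan)       # high-pass component of the sharp image
    return ms_band + detail

# Toy example: a flat multispectral band plus a panchromatic image with a sharp edge
pan = np.array([[0., 0., 255.], [0., 0., 255.], [0., 0., 255.]])
ms = np.full((3, 3), 100.0)
fused = hpf_fuse(ms, pan)              # the edge from pan now appears in the fused band
```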
Why Image Fusion
Multisensory data fusion has become a discipline which demands more general formal solutions to a number of application cases. Several situations in image processing require both high spatial and high spectral information in a single image. This is important in remote sensing. However, the instruments are not capable of providing such information either by design or because of observational constraints. One possible solution for this is?data fusion.
Standard Image Fusion Methods
Image fusion methods can be broadly classified into two groups – spatial domain fusion and transform domain fusion.
Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under the spatial domain approaches. Another important spatial domain fusion method is the high-pass filtering technique, in which the high-frequency details are injected into an upsampled version of the MS images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image, and spectral distortion becomes a problem in further processing such as classification. Spatial distortion is handled well by frequency-domain approaches to image fusion. Multiresolution analysis has become a very useful tool for analyzing remote sensing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. Other fusion methods also exist, such as those based on the Laplacian pyramid or the curvelet transform. These methods give better spatial and spectral quality in the fused image than the other spatial methods of fusion.
Hardware and Software Requirements
- 4 GB of RAM
- 500 GB of Hard disk
- MATLAB 2018b