1. Introduction
The word ‘biometric’ refers to unique characteristics of a
person’s physiology or behavior which do not usually change with
time. Examples of such physiological characteristics include the
fingerprint, iris, palmprint and face. Examples of behavioral
biometrics include the hand-written signature, voice, gait and typing
style on a keyboard.
Biometrics can play a major role in verifying or identifying an individual.
Verification is the process of confirming the identity of a claimant. In
this process, one or more biometric features of the claimant are
validated against the known biometric profile of the individual.
Therefore, the verification process requires a one-to-one match. In
the case of identifying an individual, the biometric identity of the
unknown individual is matched with the biometrics of several others
in an existing database. Hence, identification involves a one-to-many
comparison. The uniqueness of biometric characteristics prevents an
impostor from making false verification and identification
attempts.
The following example can be considered to explain the concept of
the verification process. To avail an Aadhaar-enabled service, the
user types her Aadhaar number to specify her identity. Then, the
system compares the biometric of the user with that of the enrolled
user. Here, one-to-one matching takes place. In this case, an
acceptable similarity between the captured biometric of the
claimant and the biometric of the enrolled user establishes the
genuineness of the claim. Otherwise, the claimant is considered
an impostor and her claim is rejected. Hence, the verification process
attempts to establish the following: “you are who you say you are”.
In contrast, in an identification process, an individual claims
that she is one of the registered members as per the record. As a
part of this process, the individual’s biometric is matched with that
of every member in the record to identify her as the one with whom the
highest similarity score has been found. But, if the highest similarity
score is less than a threshold, then it can be concluded that the
input does not match any of the registered members. This
establishes the claimant as an impostor. Therefore, an identification
process requires 1-to-N comparisons, where N is the number of registered
members. The identification process
establishes an individual as “someone who is already enrolled”.
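As a minimal sketch of these two modes of matching (assuming, for illustration only, that biometric templates are feature vectors compared with cosine similarity against a hypothetical acceptance threshold of 0.9):

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two biometric feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled_template, threshold=0.9):
    """Verification: a one-to-one match against the claimed identity."""
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, database, threshold=0.9):
    """Identification: 1-to-N comparison; returns the best-matching
    identity, or None when even the highest score misses the threshold
    (the claimant is then treated as an impostor)."""
    scores = {name: similarity(probe, tpl) for name, tpl in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The threshold trades off false acceptances against false rejections; real systems tune it per modality.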
This article revisits various biometric traits, the steps involved in
recognition, multimodal biometric systems and the security issues in
biometric recognition systems. The organization of the rest of the
article is as follows: the steps in a biometric recognition system are
depicted in Section 2, various categories of biometric traits are
presented in Section 3, and Section 4 discusses multimodal
systems. Fusion in a multimodal biometric system can take place at
various levels, as discussed in Section 5. Section 6 discusses the
security concerns that a biometric
recognition system is supposed to address. Amidst all these theoretical discussions and
practical challenges, banks across the globe have embraced
biometrics as a factor of authentication. Section 7 provides a glimpse
of such adoptions of biometric by banks and the article concludes
with Section 8.
2. Steps in a Biometric System
Registration (or enrollment) and recognition are the two phases of a
biometric-based recognition system. The block diagram is presented
in Figure 1. The registration phase includes pre-processing, region-of-interest
detection and feature extraction steps. The extracted
features are then stored in the database. The recognition phase
includes pre-processing, region-of-interest detection, feature
extraction, matching and decision-making steps. The matching module
compares the extracted features with the stored features in the
database for either the identification or the verification task.
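As a sketch, the two phases can be expressed as a pair of functions sharing the same front-end steps; the step functions here are placeholders to be supplied per modality, not any specific cited implementation:

```python
def enroll(raw, user_id, database, preprocess, find_roi, extract):
    """Registration phase: pre-process, locate the ROI, extract
    features and store the template under the user's identity."""
    database[user_id] = extract(find_roi(preprocess(raw)))

def recognize(raw, database, preprocess, find_roi, extract, match):
    """Recognition phase: the same front end as enrollment,
    followed by matching against the stored templates."""
    features = extract(find_roi(preprocess(raw)))
    return match(features, database)
```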
2.1. Pre-processing
The pre-processing step is primarily used to improve the acquired
image (or signal) in order to obtain an accurate extraction of the region
of interest and the biometric features. Rescaling, per-example
mean subtraction and feature standardisation are
commonly used as pre-processing in several biometric
recognition systems. Modality-specific
pre-processing approaches are also abundant in the literature. A few examples
are presented below.
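A minimal sketch of the two generic operations named above, per-example mean subtraction and feature standardisation, assuming a NumPy batch of grayscale images:

```python
import numpy as np

def preprocess(images):
    """Generic pre-processing for a batch of images with shape (N, H, W):
    per-example mean subtraction followed by feature standardisation."""
    x = np.array(images, dtype=np.float64)  # copy so the input is untouched
    # Per-example mean subtraction: centre each image around zero.
    x -= x.mean(axis=tuple(range(1, x.ndim)), keepdims=True)
    # Feature standardisation: zero mean, unit variance per pixel position
    # across the batch (a small epsilon guards against division by zero).
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return x
```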
An experimental study of several illumination pre-processing
methods for face recognition is reported in [Han et al, 2013]. These methods have been divided into three main categories: gray-level transformation, gradient or edge extraction and reflectance
field estimation. [Jahanbin et al, 2011] has used the following four steps
as pre-processing for a face recognition system: gamma correction,
Difference of Gaussian (DoG) filtering, masking and equalization of
variation. The axis-symmetric nature of a face is considered to
generate an approximately symmetrical face image in [Xu et al,
2016] for face recognition. This increases the accuracy of face
recognition methods.
Filtering, resampling to equal spacing, and location, size and time normalization are
key pre-processing steps for online signature verification in
[López-García et al, 2014]. To avoid acquisition-device
dependency, the acquired data is also normalized in a fixed range in
an online signature verification system [Tolosana et al, 2015].
2.2. Finding Region of Interest
Locating the Region of Interest (ROI) for a biometric trait is an essential
precursor to the feature extraction step. This step identifies the main or
interesting portion of the image (or signal) from where the biometric
traits are extracted. For example, several studies on palmprint
recognition consider the size of the palm to determine the ROI.
Similarly, iris localization is integral to iris recognition [Lili and Mei,
2005].
The method used to extract the ROI certainly depends on the modality of the
biometric system. The techniques to identify the region of interest
can be grouped into three major divisions, namely:
(i) Bottom-Up Feature Based Approach: This approach does not
assume any a priori information about the region of interest. Hence,
these methods are driven purely by the detection of key points in a
bottom-up manner. For example, face localization can be
carried out using the Scale Invariant Feature Transform (SIFT). A
scale-invariant region detector and a descriptor based on the gradient distribution in the detected regions play a major role in this
approach.
(ii) Top-Down Knowledge Based Approach: This approach is
guided by additional relevant information about the physiological
characteristics. For example, [Jones and Viola, 2006] has considered
an individual’s motion and appearance in determining the region of
interest.
(iii) Appearance Based Approach: This approach considers the
inherent physiological appearance of a biometric trait. As an
example, consider how the region of interest of a palm is
extracted in [Saliha et al, 2014]. A key-point localization method is developed
to spot the crucial points of a palm, which helps in the proper alignment
of the hand image. The approach is further based on a projection onto
the X-axis and the projection of the upper and lower edges; hence, it
extracts the horizontal limits of the hand contour. In [Belahcene et
al, 2014], a 3D face recognition method is proposed by finding regions of
interest in a face, which include the mouth, nose, pair of eyes, etc.
2.3. Feature Extraction
In the feature extraction step, the properties or inherent patterns of
a biometric trait are extracted from the input (and possibly pre-processed) image or signal. The derived properties or patterns
are thus a better representation of the unique elements of an individual’s
biometric trait. Naturally, the type of biometric decides the feature
extraction step. The following paragraph highlights a few examples in
support of this dependency.
Two different feature extraction approaches exist for hand-written signatures, as they aim to capture either the static or the dynamic
features. Geometrical features of the signature are considered in
the static approach. Dynamic features of a hand-written signature
include the speed and the acceleration of the pen movement, pen-up and pen-down times, etc. Ear curves are extracted for an ear-recognition system in [Ghoualmi et al, 2015]. The face recognition
system in [Kumar and Kanhangad, 2015] has used techniques like
the wavelet transform, spatial differentiation and a twin pose testing
scheme for feature extraction from faces. According to [Ukpai et al,
2015], the principal texture pattern and the dual tree complex wavelet
transform produce iris-specific features from an iris image. The next
section, which narrates various biometric traits, will lead to a better
understanding of this dependency.
2.4. Matching and Decision
In this step, the extracted features are compared with the enrolled
features to obtain a matching score. The subsequent decision
making step either accepts or rejects an individual using this
matching score.
3. Types of Biometrics
3.1 Fingerprint
Uniqueness and consistency in performance have established the
fingerprint as the most widely used biometric trait. Usage of
fingerprint can be traced back to previous centuries. Ease in
acquisition, availability of 10 different fingers and its acceptance for
law enforcement and immigration purposes have established it as a
very popular form of biometric.
A fingerprint is obtained from the friction ridges of the finger. The high,
peaked parts of the skin cause the dark lines in the fingerprint,
as shown in Figure 2. The white spaces between the dark lines are due
to the shallow parts of the skin, which are also called valleys. The
ridges and furrows (as appearing in Figure 2) enable us to firmly hold
objects. Their presence causes a friction, which is needed to grab any
object. But the uniqueness of a fingerprint is not due to these ridges and
furrows; it is achieved through the minutiae points, which are
defined as the points where the ridges end, split and join, or appear as a simple dot. The pattern of placement of
these minutiae points leads to uniqueness. The minutiae consist of
bifurcations, ridge dots, ridge endings and enclosures. The minutiae
points are further broken down into sub-minutiae, such as pores,
crossovers and deltas, to ensure further uniqueness. Tiny depressions
within a ridge are called pores. An ‘X’ pattern in
the ridge is called a crossover. A triangle-shaped pattern in the ridge is
called a delta.
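As an illustrative sketch (not any specific cited matcher), minutiae-based comparison can be approximated by counting the probe minutiae that find a nearby template minutia of the same type; real matchers additionally align rotation/translation and compare ridge orientation:

```python
import numpy as np

def minutiae_match_score(probe, template, tol=10.0):
    """Fraction of probe minutiae matched to an unused template minutia
    of the same type (e.g. 'ending', 'bifurcation') within `tol` pixels.
    Each minutia is an (x, y, type) tuple."""
    matched = 0
    used = set()
    for (px, py, ptype) in probe:
        for i, (tx, ty, ttype) in enumerate(template):
            if i in used or ptype != ttype:
                continue
            if np.hypot(px - tx, py - ty) <= tol:  # spatial tolerance
                matched += 1
                used.add(i)
                break
    return matched / max(len(probe), 1)
```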
The widespread adoption of the fingerprint for biometric recognition is
due to several factors, such as its reasonably good accuracy, ease of use
and the small amount of memory needed to store a biometric template.
With the emergence of mobile-based applications, these
strengths have led to the use of fingerprints for mobile
authentication. But the performance of fingerprint recognition drops
with scaly or dirty finger skin and with changes due to age.
The combination of minutiae points of two different fingers of an
individual enables privacy protection in [Li and Kot, 2013].
3.2 Iris
The iris is considered to be another reliable biometric trait. The iris is the
muscle in the eye which controls the pupil size in order to regulate
the amount of light entering the eye. The iris can be identified
as an annular region between the sclera (the white portion of the eye)
and the pupil (Figure 3). The iris pattern of an individual is
unique [Jain et al, 2004]; even a set of twins possesses distinguishable
iris patterns. The speed and accuracy of iris recognition have also
driven the widespread adoption of the iris biometric.
Registration of the iris takes a relatively long time, as it needs several iris
images. A test template is generated upon scanning of an individual’s
iris. Subsequently, the produced template is matched with the
existing templates which were produced at the time of registration.
Zero crossing representation of the one-dimensional wavelet
transform has been proposed in [Radu et al, 2012] to encode the
texture in iris. In [Sun et al, 2005], an integration of Local Feature
Based Classifier (LFC) and an iris blob matcher increases the accuracy
of iris recognition. Noisy images are detected by the iris blob
matcher; therefore, it helps in situations where the LFC cannot
guarantee good performance.
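Iris templates are commonly represented as binary codes and compared with a normalised Hamming distance (a Daugman-style measure); the sketch below assumes boolean NumPy arrays for the codes and their validity masks:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalised Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked, e.g. not occluded by
    eyelids) in both codes. Lower distance means more similar."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0  # no comparable bits: treat as a complete mismatch
    disagreements = np.count_nonzero((code_a ^ code_b) & valid)
    return float(disagreements) / int(valid.sum())
```

A genuine pair typically yields a distance well below an operating threshold (around 0.32 in Daugman-style systems), while impostor pairs cluster near 0.5.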
In [Dong et al, 2011], a set of training images is used to learn a class-specific weight map for an iris matching technique, and experiments have revealed the effectiveness of the technique. Moreover, a pre-processing method has been suggested to enhance the performance
of iris biometrics on mobile phones, which are usually constrained by
computing power. The system has also demonstrated the capability to
decide whether the person is dead or alive.
3.3 Face
The structure of human face is characterized by peaks and valleys at
different altitudes and features which are present at different
specific latitudes and longitudes as demonstrated in Figure 4. This
distinguishes one individual from another.
Figure 4: Extraction of Key Features from Human Face
Earlier attempts of face recognition used simple geometric models.
But sophisticated mathematical representation of features has led to
better models of face recognition [Jain et al, 2004]. Combination of
certain features with AdaBoost leads to a face and eye detection
method in [Parris et al, 2011]. Results are encouraging enough to
adopt face based authentication in mobile phones. The face
recognition method by [Lai et al, 2014] is assisted with motion
sensors. Apple’s iDevice has a face recognition system to lock and
unlock it [Gao et al, 2014]. According to [Srinivasan and
Balamurugan, 2014], Pictet and Banquiers (one of the leading banks
in Switzerland) has deployed an efficient 3D face recognition system
for providing access to its staff within the bank’s environment. A graph based model for face recognition has been proposed in [Cao et
al, 2012]. A face recognition system based on 3D features has also been
proposed to improve performance. For detecting
facial features, the active appearance model has been used in
[Drosou et al, 2012]. A support vector machine based face
recognition system has been proposed in [Hayat et al, 2012]. In this
technique, an elastic graph matching has been utilized to locate the
feature points of the facial image.
A face recognition system fails when the face is partly covered. In
this case of occlusion, the important characteristics of the face
cannot be captured.
3.4 Ear
The appearance and shape of the human ear is also found to be
unique. It changes little during an individual’s lifetime. Three main
steps of an ear biometric system are – (a) imaging of the ear, (b)
image segmentation, and (c) recognition. A camera is used
for image acquisition. Segmentation is carried out to isolate the ear
from the background of the image. A convex curved boundary is
identified to locate the ear, as in [Maity and Abdel-Mottaleb, 2015].
But experiments have revealed a high false positive rate due to occlusion.
Recognition is performed by comparing the ear biometric traits with
stored templates in the database. Local surface patch representation
at 3D space leads to a 3D ear recognition system in [Abate et al,
2006].
Nippon Electric Company (NEC) captures an ear biometric as
the vibration of sound as influenced by the shape of an individual’s
ear, which is claimed to be unique for every person. In this
system, an earphone with a built-in microphone captures the sounds
as they vibrate within the ear.
3.5 Hand Geometry
Measurements of a human hand are used to recognize an individual
in the case of hand geometry as biometric trait [Jain et al, 2004].
Shape and size of the palm along with shape, width and length of
each finger are considered as important measurements in this
context. Edge detectors like Sobel or Canny operators can be used to
detect palm lines. Ease of use of this biometric trait leads to wide
acceptance of this biometric even in mobile devices [Chen et al,
2007]. But lack of uniqueness of this trait is a major drawback.
Hence, its usage is confined only to one-to-one matching. For
example, this can be used for access control, where the concern is
about an individual’s attempt to gain access through someone else’s
access card or personal identification number. The individual’s
physical presence is ensured through the presentation of her hand to
the hand reader. Hand geometry can, though, be combined with other biometric
traits [Javidnia et al, 2016].
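As a sketch of the edge-detection idea mentioned above, the Sobel operator can be applied with plain NumPy to approximate the gradient magnitude of a grayscale hand image; libraries such as OpenCV provide optimised versions, and the function name here is illustrative:

```python
import numpy as np

def sobel_edges(image):
    """Gradient magnitude via the Sobel operator, one way to highlight
    palm lines in a grayscale image (borders are left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```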
3.6 Palm Vein
Vein pattern in the palm is considered to be another unique trait to
recognize an individual. The presence of blood vessels underneath
the skin causes this pattern. It is less susceptible to external
distortion. Forgery is also difficult for this biometric trait. Moreover,
the vein pattern is said to remain static during the lifetime of an
individual. The acquisition device shines an infra-red beam on the
palm as it is placed on the sensor. The veins in the palm are identified as
black lines in the captured image. They are matched with an existing
vein pattern to recognize an individual [Tome and Marcel, 2015].
Haemoglobin, a key ingredient of blood, absorbs near
infra-red (NIR) light. Hence, [Sugandhi et al, 2014] has suggested the
usage of NIR light to acquire the vein image of fingers;
as a result, the vein pattern in the fingers appears as shadows.
Inspired by fingerprint recognition, palm vein recognition system in
[Vaid and Mishra, 2015] also extracts vein minutiae.
3.7 Palmprint
The palmprint (Figure 5) is a comparatively new biometric trait.
The reliable and unique characteristics of the
palmprint justify its high usability. Similar to the fingerprint, the
palmprint has unique features, namely, principal lines, minutiae
features, delta points, wrinkles, and ridges. Additionally, the wider
surface area of the palm (compared with the surface area captured
for a fingerprint) yields a larger number of unique traits. Hence,
the palmprint biometric is believed to be maturing quickly into a reliable
recognition system.
Figure 5: Palmprint
But deformation of images due to challenges of acquisition pulls
down the accuracy of a palmprint recognition system. A contact-based
acquisition device, which can impose constraints on the acquisition
environment, is used to solve this problem. Research is still required
to tackle the issues arising out of positioning, rotating, and stretching
the palm. Moreover, the bigger size of the acquisition device does
not allow its usage over mobile phones. Contact based acquisition
may also be considered unhygienic. Hence, contactless palmprint
acquisition has also been introduced in [Wu and Zhao, 2015]. The
users need not touch the acquisition device.
3.8 Retina
Each individual possesses unique retina vasculature. Replication of it
is not easy. The acquisition environment demands that an individual
focus her eye on a scanner. Moreover, the scan may reveal
medical conditions like hypertension. These are among the reasons why this
biometric system has not received full acceptance from the public.
3.9 Radio Biometric
Certain physical characteristics (such as height and mass), the
condition of the skin, the volume of total body water, and nature of
other biological tissues influence the wireless propagation around
the human body. Radio biometrics is defined as the identity
information carried by a human-affected wireless signal through its
alterations and attenuations. As the chance of two persons having
exactly the same physical and biological characteristics is very small,
the multi-path profiles of electromagnetic waves after interaction
with the human body vary from individual to individual. Consequently,
the human radio biometric, which records how a wireless signal
interacts with a human body, is shaped by an individual’s biological
and physical characteristics and can be viewed as unique among different
individuals.
Radio biometric captures the response of radio waves from the
entire body including the face of an individual. Hence, it shows more
uniqueness than a face. The human identification system in [Xu et al,
2017] uses the entire profile of physical characteristic of an
individual.
3.10 Signature
Signature defines the way in which one individual writes a specific
word (mostly her name or a symbol). It is one of the oldest forms of biometric and has gained acceptance in several application
scenarios. With the progress of technology, two different kinds of
signature biometric have emerged. Offline signature verification considers
the geometrical features of the signature. Online signature verification
uses the dynamic features of the hand-written signature,
which include the speed and the acceleration of
the pen movement, pen-up and pen-down times, etc.
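Two of the dynamic features named above, pen speed and acceleration, can be computed from a sampled pen trajectory with finite differences; the function name and input format are illustrative assumptions:

```python
import numpy as np

def dynamic_features(xs, ys, ts):
    """Pen speed and acceleration from a sampled (x, y, t) pen trajectory,
    as captured by a digitising tablet in online signature verification."""
    xs, ys, ts = (np.asarray(a, dtype=float) for a in (xs, ys, ts))
    dt = np.diff(ts)
    speed = np.hypot(np.diff(xs), np.diff(ys)) / dt  # distance per unit time
    accel = np.diff(speed) / dt[1:]                  # change of speed
    return speed, accel
```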
3.11 Gait
The posture and the way a person walks are unique to that
person. Gait is non-invasive and hard to conceal, and it can be easily
captured even in public places at a low resolution. Unlike other
biometrics, the individual is not required to pay any attention
while her gait is being captured. As the gait can be captured from a
distance, it becomes very useful for security. Gait recognition requires
detection of the subject, silhouette extraction, feature extraction,
feature selection and classification.
A lot of research has been carried out on gait recognition systems.
Body part (especially feet and head) trajectories were introduced for
extracting gait features. A gait energy image leads to a set of view-intensive
features [Liu and Sarkar, 2006]; recognition is carried out
by matching similar patterns in the gait energy image. Silhouette
quality quantification has a key role in the method of [Han and
Bhanu, 2006], where a one-dimensional foreground sum signal model is
used to analyse the silhouettes.
Segmenting the human body into components and subsequently integrating
the results into a common distance metric has led to
improved performance for gait recognition [Vera-Rodriguez et al,
2013]. Using a population-based generic walking model, a gait
recognition system attempts to solve the challenges which arise
due to surface type, movement speed, and
carrying condition [Liu and Sarkar, 2005].
The concept of point cloud registration has been proposed in [Lee et
al, 2009] in order to avoid the problem of occlusion during gait
recognition. Moreover, face and gait characteristics have been
extracted using principal component analysis from a side-face image
and a gait energy image, respectively. A set of synthetic features is
obtained by integrating these features and applying
multiple discriminant analysis, achieving a performance improvement
over the individual biometric features [Tan et al,
2006].
3.12 Voice
The voice recognition system extracts several characteristics of voice
to identify an individual. Enrollment phase of voice biometric records
the voice sample of an individual, extracts a template from it, and
uses it for verification of the individual at later phase.
Apple’s Siri is a question-answering system based on voice
recognition technology. Mel-frequency cepstral coefficients (MFCC)
and a support vector machine are used to recognize an individual
speaker. One of the drawbacks of this biometric is that a pre-recorded voice can easily be played back by an impostor for
unauthorized identification. Moreover, a few specific kinds of illness
(e.g., catching a cold) affect the voice and thus hamper the
voice biometric.
3.13 Key Stroke
It is believed that there is a pattern in how a person types on a
keyboard. This behavioral biometric, which is referred to as keystroke
dynamics, shows traits that can identify or verify a person. But it is not
considered to be as unique as several other biometric traits.
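The raw timing traits of keystroke dynamics are typically dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next); the event-tuple format below is an illustrative assumption:

```python
def keystroke_features(events):
    """Dwell and flight times from a chronologically ordered list of
    (key, press_time, release_time) tuples, the raw material on which
    keystroke-dynamics classifiers are trained."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```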
4. Multimodal Biometric System
A unimodal biometric system identifies or verifies an individual
based on a single biometric trait. The reliability and accuracy of unimodal
biometric systems have improved over time. But they
do not always demonstrate the desired performance in real-world
applications, because of a lack of accuracy in the presence of noisy
data, the non-universal nature of some biometric characteristics, and
spoofing. The problems associated with unimodal biometric systems
are discussed below.
4.1 Noisy Data
A lack of sensor maintenance may introduce noise into
biometric data. For example, the presence of dirt is common on a
fingerprint sensor and generates a noisy fingerprint. An inability to
reproduce the original voice generates noisy data too. Moreover, iris
and face images may not appear clear without an accurate focus of
the camera.
4.2 Non-universality
In a universal biometric system, every individual must be capable of
producing the biometric trait used for recognition. But biometric traits are
not always universal. An estimate reveals that about 2% of a
population may not be able to produce a good-quality fingerprint.
Disabilities may also hinder a smooth registration
process, making successful enrollment impossible for such individuals.
4.3 Lack of Individuality
Sometimes similar traits are extracted from different individuals by a
biometric system. For example, faces may appear quite similar for a
father and son, and even more so for identical twins. As a consequence
of this lack of uniqueness, the false acceptance rate increases.
4.4 Susceptibility to Circumvention
Sometimes biometric traits are spoofed by an impostor. It has been
established that fake fingers can be fabricated from fingerprints,
and these can be used to gain illicit access to a biometric system.
Because of these problems, the error rates are, at times, high for
unimodal biometric systems. Hence, they are not always acceptable
for security applications. Multimodal biometric systems are
conceived to tackle the above-mentioned issues. In a multimodal
biometric system, multiple biometric features are considered for
recognizing an individual. In general, the usage of multiple biometric
features contributes to an improved biometric recognition system.
For example, a typical error can be caused by worn fingerprints; in a
multimodal system, the presence of other biometric modalities may
save the system from failure. Thus, a multimodal biometric system has
a lower failure-to-enroll rate, which is considered to be its main
advantage.
Multimodal biometric system can be of three types based on how
information is fused from various sources of information: (a) fusion
of multiple representations of single biometric, (b) fusion of multiple
classifiers of single biometric, and (c) fusion of multiple biometrics.
A good recognition rate is achieved in a multimodal biometric system
involving multiple evidences of a single biometric through the fusion of
multiple representations or multiple classifiers. But, for a multimodal
biometric system in the true sense, the use of multiple biometric traits
is more beneficial than the usage of multiple forms of a single biometric
in terms of performance issues, including resistance to low-quality
samples, lack of individuality, user acceptance, etc. A detailed review
of multimodal biometric systems can be found in [Oloyede and
Hancke, 2016].
Multimodal biometric systems are of three types: 1) multi-physiological, 2) multi-behavioral, and 3) hybrid multimodal systems.
In multi-physiological category, only physiological characteristics (for
example, fingerprint, retina, face, etc.) are fused. As an example, a
multimodal biometric system in [Chang et al, 2003] combines face
and ear biometrics. Most of the initial research in multimodal
biometrics belongs to this category. Over the past few years, the
rapid developments in human-machine interface have triggered an evolution in behavioral biometric recognition. Hence, the field of
behavior based multimodal biometric system has drawn attention of
many researchers. A multi-behavioral biometric system in [Fridman
et al, 2013] considers inputs from mouse, keyboard, writing sample,
and history of web browsing. In [Bailey et al, 2014], another multi-behavioral biometric system considers inputs from graphical user
interface interactions alongside mouse and keyboard inputs.
Moreover, a hybrid multimodal biometric system combines
physiological and behavioral features. Notable works on hybrid
multimodal biometric include fusion of face, audio and speech using
multiple classifiers by [Fox et al, 2007], fusion of face and gait by
[Tan et al, 2006], and signature, face, and ear biometric fusion at the
score level by [Monwar and Gavrilova, 2009]. Another good hybrid
multimodal biometric system in [Paul et al, 2014] combines
signature, face and ear biometric alongside social network analysis. It
has been shown here that inputs from social network analysis
further strengthen the biometric recognition system.
Contextual information (such as spatiotemporal information,
appearance, and background) has a key role alongside soft
biometrics (for example, height, weight, facial marks, and ethnicity)
in identifying a person [Park and Jain, 2010]. In [Bharadwaj et al,
2014], face biometric and social contextual information have shown
a significant improvement over performance in a challenging
environment. It is to be noted that neither the extraction of
appropriate contextual information nor the acquisition of soft biometrics
is an easy task; both may even require image processing.
Moreover, social behavioral information is a common contributor to
the normal recognition process in the human brain. [Sultana et al,
2017] administers a reinforcing stimulus, in the form of social
behavioral information, to the matching decisions of a traditional face
and ear based biometric recognition system.
5. Levels of Fusion in Multimodal Biometric System
A detailed classification of various fusion techniques for multimodal
biometric can be found in [Dinca and Hancke, 2017]. Fusion in
multimodal biometric can occur at various levels – such as sensor,
feature, matching score, rank and decision level fusion. Each of these
is explained in this section.
5.1 Sensor Level Fusion
This fusion strategy directly mixes raw data from various sensors
(Figure 6) – for example, from the iris and fingerprint sensors. Raw
information is captured at several sensors to fuse at the very first
level to generate raw fused information. Sensor level fusion
strategies can be put into the following three groups: (i) single sensor
multiple instances, (ii) intra-class multiple sensors, and (iii) inter-class
multiple sensors. In the case of single sensor multiple instances, a
single sensor captures multiple instances of the same biometric trait.
For example, a fingerprint sensor may capture multiple images of the
same finger to reduce the effect of noise. Simple or weighted
averaging, and mosaic construction are some of the common fusion
methods in this case [Yang et al, 2005]. Multiple sensors are used to
acquire multiple instances of the same biometric trait in the intra-class multiple sensors category [Yang et al, 2005]. For example, a 3D
face image is obtained by using multiple face images taken from
various cameras. In the case of inter-class multiple sensors, two or
more different biometric traits are used together. For example,
images of palmprint and palm vein can be fused together for
biometric recognition.
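The simple and weighted averaging mentioned for the single-sensor-multiple-instances case can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def fuse_instances(captures, weights=None):
    """Sensor level fusion of multiple captures of the same trait by
    simple (weights=None) or weighted averaging, which suppresses
    per-capture sensor noise before any feature extraction."""
    stack = np.asarray(captures, dtype=float)  # shape: (n_captures, H, W)
    return np.average(stack, axis=0, weights=weights)
```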
Mosaicing is a notable application of sensor level fusion. Several
researchers [Ratha et al, 1998; Jain and Ross, 2002; Ross et al, 2005]
have proposed fingerprint mosaicing. It provides good recognition
accuracy, as it combines multiple images of the same fingerprint, and
can therefore handle recognition difficulties due to poor data
quality. The fingerprint mosaicing technique uses a modified Iterative Closest Point (ICP) algorithm [Jain and Ross, 2002] to
generate 2D or 3D surfaces by considering the inputs from multiple
instances. In [Fatehpuria et al, 2006], a touchless fingerprint system
is developed using a 3D touchless setting with multiple cameras and
structured light illumination (SLI) to generate 2D fingerprint images
and 3D fingerprint shape. This kind of setup is expensive due to the
deployment of multiple cameras. Alternatively, the use of a single
camera and two mirrors is suggested in [Choi et al, 2010], where the
two mirrors capture the side views of the finger.
Sensor level fusion is generally applied to the same trait. There are
also instances of applying sensor level fusion to different traits. A few
of these are mentioned here. Face and palmprint images are
combined in [Jing et al, 2007], where pixel level fusion is preceded by
a Gabor transform of the images. Infrared images of palmprint and
palm vein are fused in [Wang et al, 2008]: at first, image registration
is carried out on these images, and subsequently a pixel level fusion
takes place.
5.2 Feature Level Fusion
Features extracted from several biometric traits are integrated into a
single vector. According to this fusion strategy, biometric sensor signals (from camera or microphone) are preprocessed and feature
vectors are derived from them independently. Then a composite
feature vector is generated by combining these individual feature
vectors (Figure 7). Feature level fusion can perform better than
score level and decision level fusion, as it deals directly with the
discriminative biometric features themselves.
Normalization and selection of features are two important processes
in feature level fusion. Min-max and median-based normalization
techniques are carried out to change the scale and location of
feature values. Scale Invariant Feature Transform (SIFT) features may
also be extracted from the normalized images.
Figure 7: Feature Level Fusion
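The min-max normalization and concatenation described above can be sketched as follows. The feature values and the modality names are illustrative assumptions, not taken from any cited dataset.

```python
# Feature-level fusion sketch: min-max normalize each modality's
# feature vector to [0, 1], then concatenate into one composite vector.
# The feature values and modality names below are illustrative only.

def min_max_normalize(features):
    lo, hi = min(features), max(features)
    if hi == lo:                      # avoid division by zero for constant vectors
        return [0.0] * len(features)
    return [(x - lo) / (hi - lo) for x in features]

def feature_level_fusion(*feature_vectors):
    """Concatenate independently normalized feature vectors."""
    fused = []
    for vec in feature_vectors:
        fused.extend(min_max_normalize(vec))
    return fused

face_features = [0.2, 0.8, 0.5]          # e.g., from a face image
palm_features = [120.0, 80.0, 100.0]     # e.g., from a palmprint image
composite = feature_level_fusion(face_features, palm_features)
print(len(composite))  # 6: both three-element vectors, concatenated
```

Normalizing each modality separately before concatenation keeps one modality's larger numeric range from dominating the composite vector.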
Dimensionality reduction through appropriate feature selection also
enhances the accuracy of the techniques. Sequential forward
selection, sequential backward selection, and Partitioning Around
Medoids are standard feature selection techniques. Particle Swarm
Optimization (PSO) is applied to the feature vector for
dimensionality reduction. The multimodal biometric techniques in
[Raghavendra et al, 2009; 2011] use this concept while combining
face and palmprint features.
Incompatibility of the feature sets among different biometric traits
and non-linearity of the joint feature set of different traits
pose challenges for feature level fusion. A feature vector can be
generated using a weighted average of multiple feature vectors if
those vectors correspond to the same biometric. For example, this
becomes possible if all of these vectors are obtained from fingerprint
images of an individual. If these vectors correspond to different
biometrics, then they are concatenated to obtain a single vector.
Another example of feature level fusion can be found in [Kim et al,
2011], where the simultaneous use of a time-of-flight (ToF) depth
camera and a near infrared (NIR) camera acquires face and hand vein
images in a touchless acquisition setup.
Several multimodal systems also combine face and ear, as the ear is
considered one of the most stable human biometric traits.
Unlike the face, the human ear is not generally affected by
age. PCA based feature extraction and a sparse representation
method for feature level fusion is proposed in [Huang et al, 2013].
Experimental results reveal that this technique performs better than
its unimodal components. Experiments also show that the
performance remains comparable to that of the unimodal systems even
if one of the modalities is corrupted. Local 3D features (L3DF) are generated
from ear and frontal face images in [Islam et al, 2013]. Feature level
fusion is applied in these cases.
A new approach based on interleaved matrix concatenation is
presented in [Ahmad et al, 2016] for face and palmprint biometrics.
Discrete Cosine Transform (DCT) is used here to extract the features.
Then, these features are concatenated in an interleaved matrix
which estimates the parameters of the feature concatenation and
exhibits their statistical distribution.
A fingerprint and iris based multimodal biometric recognition
technique has been proposed in [Gawande et al, 2013]. Minutiae
and wavelet features are extracted from fingerprint images. Haar wavelet and block sum techniques produce features from iris
images. A feature level fusion of these four feature vectors exhibits
better performance than a unimodal fingerprint or iris biometric.
A feature level fusion of fingerprint and palm biometric traits has
been proposed in [Mohi-ud-Din et al, 2011].
Another example of feature level fusion can be found for hand
geometry recognition in the contactless multi-sensor system in
[Svoboda et al, 2015] using an Intel RealSense 3D camera. This
technique carries out foreground segmentation of the acquired hand
image to determine the hand silhouette and contour. Then, the
fingertips and the valleys are located alongside determination of the
wrist line from the identified contour. Subsequently, two feature
vectors are formed: (i) finger lengths, widths, and wrist-valley
distances, and (ii) finger widths computed by traversing the overall
hand surface, together with median-axis-to-surface distances.
A fingerprint and finger-vein based system has been proposed in [Yang
and Zhang, 2012]. Gabor features are extracted and the feature
fusion strategy is based on a Supervised Local-Preserving Canonical
Correlation Analysis (SLPCCAM). In [Yan et al, 2015], feature level
fusion has also been used for a contactless multi-sample palm vein
recognition technique.
Automated access control systems in buildings and other secure
premises are based on the capability of identifying an individual from
a distance. Because of its importance in securing important
establishments, it is emerging as an area of interest to the research
community. Poor lighting conditions or low-resolution surveillance
cameras constrain the recognition of an individual based on her face.
The use of multimodal biometrics involving face and
gait exhibits better performance [Ben et al, 2012; Huang et al, 2012].
Unlike traditional methods of face and gait multimodal biometric recognition, [Xing et al, 2015] fuses the features without
normalization using coupled projections.
A Robust Linear Programming (RLP) method [Miao et al, 2014] for
multi-biometric recognition exhibits good results in noisy
environments in spite of using less training data. It uses uncertain
constraints and concatenates heterogeneous features from different
biometric traits. Each biometric modality has been assigned a weight
to specify its degree of contribution to the fusion; a higher weight
indicates a greater relevance of the corresponding biometric trait.
In the multimodal biometric recognition system by [Chang et al,
2003], features are extracted from face and ear using Principal
Component Analysis (PCA). Subsequently, fusion takes place at the
feature level. A method of liveness detection to prevent spoofing
attacks in [Chetty and Wagner, 2005] also uses feature level fusion.
In a recent development, sparse-based feature-fusion [Huang et al,
2015] of physiological traits has drawn considerable interest from
researchers due to its robust performance.
5.3 Matching Score Level Fusion
In the case of matching score level fusion, matching scores are
separately obtained for each biometric trait and subsequently fused
to arrive at an overall matching score. The block diagram is
presented in Figure 8. Matching score level fusion is also referred to
as measurement level fusion.
There exist three different approaches to matching score based
fusion – density based, classification based, and transformation
based. The density based scheme relies on the distribution of
scores, applying popular models such as the Naive Bayes classifier and
the Gaussian Mixture Model [Murakami and Takahashi, 2015]. In the
classification based approach, the matching scores of individual
matching modules are concatenated to obtain a single feature
vector. The decision to either accept or reject an individual is based
on the classification of this feature vector. According to the
transformation based approach, the scores of individual matching
module are, at first, transformed (normalized) into a pre-decided
range. This transformation changes the position and scale
parameters of the matching score distribution so that these
normalized scores can be combined to obtain a single scalar score
[Murakami and Takahashi, 2015]. Normalization techniques to
handle the dissimilarities among matching scores have also drawn the
attention of researchers.
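A minimal sketch of the transformation-based approach follows. The score ranges, the weights, and the matcher outputs are illustrative assumptions; none of these values come from the cited works.

```python
# Transformation-based score-level fusion sketch: min-max normalize
# each matcher's score into [0, 1], then combine with a weighted sum.
# Score ranges, weights, and raw scores are illustrative assumptions.

def normalize_score(score, score_min, score_max):
    """Map a raw score into [0, 1] given its matcher's score range."""
    return (score - score_min) / (score_max - score_min)

def fuse_scores(scores, ranges, weights):
    """scores: raw matcher outputs; ranges: (min, max) per matcher."""
    total = sum(weights)
    fused = 0.0
    for s, (lo, hi), w in zip(scores, ranges, weights):
        fused += (w / total) * normalize_score(s, lo, hi)
    return fused

# A fingerprint matcher scoring in [0, 100] and an iris matcher in [0, 1].
fused = fuse_scores(
    scores=[80.0, 0.9],
    ranges=[(0.0, 100.0), (0.0, 1.0)],
    weights=[0.6, 0.4],
)
print(round(fused, 2))  # 0.6*0.8 + 0.4*0.9 = 0.84
```

The normalization step is what lets a 0-100 similarity score and a 0-1 score be added meaningfully into a single scalar.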
In another notable recent development, an order-preserving
score fusion method has been proposed in [Liang et al, 2016].
5.4 Rank Level Fusion
Ranking the potential matches between the query template and the
templates in the database generates an ordered list of all templates,
with the first choice being the best match. Such ranked lists are
obtained for every biometric trait. In a multimodal biometric
recognition system, these rank orders are fused to generate a final
ranking of each template in the database. Unlike score level fusion,
normalization is not required for rank level fusion.
Rank level fusion is applied in [Kumar and Shekhar, 2011] for
combining various methods for palmprint identification. A Nonlinear
Weighted Ranks (NWR) method aggregates the ranks as obtained
from individual matching modules.
Rank level fusion may not always perform well in noisy conditions
with low quality data, though it has been applied to low quality
fingerprints [Abaza and Ross, 2009]. That work applies a variant of
the Borda count method which incorporates image quality. The
approach is similar to logistic regression, but unlike logistic
regression it does not need a training phase; image quality is used
in place of learned weights.
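The plain (unweighted) Borda count on which such variants build can be sketched as follows. The identity names and the per-matcher rankings are illustrative assumptions.

```python
# Rank-level fusion via the unweighted Borda count: each matcher
# ranks the enrolled identities; an identity earns points equal to
# the number of identities ranked below it, summed over matchers.
# Identity names and rankings below are illustrative only.

def borda_count(rankings):
    """rankings: list of ordered identity lists, best match first."""
    n = len(rankings[0])
    points = {}
    for ranking in rankings:
        for position, identity in enumerate(ranking):
            points[identity] = points.get(identity, 0) + (n - 1 - position)
    # Final fused ordering: highest Borda score first.
    return sorted(points, key=lambda ident: -points[ident])

face_ranking = ["alice", "bob", "carol"]
iris_ranking = ["bob", "alice", "carol"]
ear_ranking = ["alice", "carol", "bob"]
fused = borda_count([face_ranking, iris_ranking, ear_ranking])
print(fused[0])  # alice: 2 + 1 + 2 = 5 points, the top fused match
```

Note that no score normalization is needed, which is exactly the advantage of rank level fusion noted above.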
As ranks are assigned to only a few of the stored templates with a
possible match, rank level fusion may pose a challenge for large
databases: the ranks do not cover every template in the database. In
this context, a Markov chain based method has been proposed in
[Monwar and Gavrilova, 2011] for rank level fusion. Markov chain is
used to represent a stochastic series of events, where the present or
the preceding states determine the next state. A graph is used to
formally model the Markov chain. A vertex in the graph represents a
state or an event. An edge in the graph denotes the transition from
one state to another state. At first, ranks are generated for each
biometric trait. If the matching module creates partial ranking (for
example, the first three ranking results), elements are inserted
randomly to complete the list. The state transition probabilities are
computed and the stationary distribution of the Markov chain is
obtained. The templates in the database are then ranked in
decreasing order of their stationary distribution scores. This fusion strategy is applied on a
multimodal biometric recognition system involving iris, ear, and face.
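The stationary-distribution step can be sketched with power iteration. The 3x3 transition matrix below is an illustrative assumption, not derived from any cited dataset, and real systems would build it from the pairwise rank preferences of the matchers.

```python
# Stationary-distribution sketch for Markov-chain rank fusion:
# templates are states of the chain, and repeated multiplication of
# the distribution by the row-stochastic transition matrix converges
# to the stationary distribution used for the final ranking.
# The 3x3 transition matrix below is an illustrative assumption.

def stationary_distribution(P, iterations=200):
    """Power iteration on a row-stochastic matrix P (list of lists)."""
    n = len(P)
    dist = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iterations):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# Toy transition matrix over three templates (each row sums to 1).
P = [
    [0.1, 0.6, 0.3],
    [0.4, 0.2, 0.4],
    [0.5, 0.3, 0.2],
]
dist = stationary_distribution(P)
ranking = sorted(range(3), key=lambda j: -dist[j])
print(ranking[0])  # 1: template index 1 has the highest stationary score
```

Sorting the templates by their stationary probabilities, highest first, yields the fused ranking described above.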
[Monwar and Gavrilova, 2009] proposes another method to solve this
problem of rank level fusion. The multimodal biometric recognition
system has three matchers, one each for signature, face, and ear.
Fusion is carried out among the identities output by at least two
matchers.
5.5 Decision Level Fusion
In the case of a decision level fusion, each individual matcher, at
first, takes its own decision. Subsequently, the fusion of various
biometric modalities takes place by combining the decisions of these
individual matchers. Hence, each biometric trait is independently
pre-classified and the final classification is based on the fusion of the
outputs of the various modalities (Figure 9). The simplest forms of
decision level fusion use logical operations such as 'AND' or 'OR'.
Some advanced fusion strategies at this level also use behavior
knowledge space, the Dempster-Shafer theory of evidence, and
Bayesian decision fusion.
Figure 9: Decision Level Fusion
The 'AND' rule positively recognizes a query template only when
every individual decision module supplies a positive outcome;
otherwise, it rejects the query. Hence, the 'AND' rule is generally
reliable, with an extremely low false acceptance rate (FAR), but its
false rejection rate (FRR) is higher than that of the individual
traits. On the contrary, the 'OR' rule gives a positive output for a
query template when at least one decision module responds positively.
As a result, its FRR is extremely low while its FAR is higher than
that of the individual traits. In [Tao and Veldhuis, 2009], an optimized threshold
method has been proposed using the ‘AND’ and ‘OR’ rule. The
thresholds of the classifiers are optimized during the training phase.
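Assuming the two classifiers err independently (an idealization, since real matchers are often correlated, and the rates below are illustrative), the combined error rates under the 'AND' and 'OR' rules follow directly:

```python
# Error-rate arithmetic for the 'AND' and 'OR' decision rules, under
# the assumption that the two classifiers make independent errors.
# The FAR/FRR values used below are illustrative only.

def and_rule_rates(far1, frr1, far2, frr2):
    """'AND' accepts only if both accept: FAR shrinks, FRR grows."""
    far = far1 * far2
    frr = 1.0 - (1.0 - frr1) * (1.0 - frr2)
    return far, frr

def or_rule_rates(far1, frr1, far2, frr2):
    """'OR' accepts if either accepts: FAR grows, FRR shrinks."""
    far = 1.0 - (1.0 - far1) * (1.0 - far2)
    frr = frr1 * frr2
    return far, frr

# Two classifiers with FAR/FRR of (1%, 5%) and (2%, 4%).
far_and, frr_and = and_rule_rates(0.01, 0.05, 0.02, 0.04)
print(round(far_and, 6))  # 0.0002: the FAR drops to 0.01 * 0.02
```

The arithmetic makes the trade-off explicit: the 'AND' rule drives FAR toward zero at the cost of FRR, and the 'OR' rule does the reverse.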
Majority voting is another common approach for decision fusion. If
the majority of the individual traits decide positively about the query
template, then the final decision is positive. The majority voting
method gives equal importance to each individual decision module.
Alternatively, weighted majority voting can be applied, in which
higher weights are assigned to the decision modules that perform
better.
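Weighted majority voting can be sketched as follows; the vote values and the reliability weights are illustrative assumptions.

```python
# Weighted majority voting sketch: each matcher casts an accept (1)
# or reject (0) vote, weighted by its assumed reliability; the query
# is accepted when the weighted accept mass exceeds half the total.
# The votes and weights below are illustrative assumptions.

def weighted_majority_vote(votes, weights):
    """votes: 1 = accept, 0 = reject, one per decision module."""
    total = sum(weights)
    accept_mass = sum(w for v, w in zip(votes, weights) if v == 1)
    return accept_mass > total / 2

# Three matchers: fingerprint (assumed most reliable), face, voice.
votes = [1, 0, 1]
weights = [0.5, 0.3, 0.2]
print(weighted_majority_vote(votes, weights))  # True: 0.7 > 0.5
```

Setting all weights equal recovers plain majority voting as a special case.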
A multi-algorithm decision level fusion is used in [Prabhakar and Jain,
2002] for fingerprints. This method considers four distinct fingerprint
matching algorithms. These are based on Hough transform, string
distance, 2D dynamic programming, and texture. This method
selects appropriate classifiers prior to applying decision fusion.
A threshold for each individual classifier influences the outcome of a
decision level fusion. Here, a threshold specifies the minimum score
used to decide whether a sample is genuine or belongs to an
impostor: if the matching score of the sample is higher than the
threshold, the sample is considered genuine; otherwise it is
attributed to an impostor. The classifiers are assumed to be
independent of one another in some biometric systems, while other
works have assumed dependency among the classifiers. A verification system
has been introduced in [Veeramachaneni et al, 2008] based on two fusion strategies for correlated threshold classifiers. Between these
two strategies, Likelihood Ratio Test (LRT) still depends on the
threshold of each individual classifier. The Particle Swarm
Optimization (PSO) based decision strategy is considered more
effective in comparison; the PSO strategy even performs better than
some of the score level fusion methods. A real time sensor management using
PSO is suggested in [Veeramachaneni et al, 2005] for a multimodal
biometric management. This method performs a real time search of
the optimal sensor configuration and optimal decision rule. A similar
concept is proposed in [Kumar et al, 2010], which uses the Ant Colony
Optimization (ACO) technique for a multimodal biometric system
involving palmprint and hand vein. [Kumar and Kumar, 2015]
extends this experiment on multiple multimodal databases involving
palmprint and iris, fingerprint and face, and face and speech.
Another decision level fusion for multimodal biometric recognition is
proposed in [Paul et al, 2014]. In this multimodal system, signature,
face and ear biometric features are combined with social network
analysis. Features are extracted using the Fisher image method,
which is a combination of PCA and Linear Discriminant Analysis (LDA).
5.6 Hybrid Fusion Model
A hybrid fusion model which uses both pixel level fusion and score
level fusion demonstrates good performance in [Kusuma and Chua,
2011]. This multi-sample face recognition (both in 2D and 3D) in
[Kusuma and Chua, 2011] recombines images using principal
component analysis (PCA). Two recombined images are fused using a
pixel level fusion scheme. Additionally, score level fusion is
applied to produce a good result.
6. Security and Privacy Issues in Biometric
There are several security and privacy concerns associated with the
usage of biometrics. These are listed below:
• Biometric is not secret. Biometric data can be captured by a
third party very easily. At times, even the original user may
not be aware that her biometric data is being spied upon. For
example, a voice recording during a telephonic conversation
or by a rogue mobile app can disclose the user's voice
biometric. Similarly, a video recording in the guise of
surveillance captures the user's face or gait biometric.
• Biometric cannot be changed or revoked. Unlike a password
or PIN, it is impossible to issue a new biometric trait, e.g., a
fingerprint. It is permanent, and a one-time compromise
makes it unusable forever. Moreover, the user can never be
dissociated from the compromised data.
• Biometric can be used to track an individual forever.
There exist eight different ways to attack a biometric system [Ratha
et al, 2001]. These are shown in Figure 10 with numbers, which are
explained below:
1. Fake biometric: Fake biometric is presented to the sensor
with an intention of fooling the system. There have been
several successful demonstrations of this kind of attack.
2. Resubmitting stored signals: The biometric signal can be
pre-recorded and presented to the system at a later
time.
3. Overriding feature extraction process: The feature
extraction module is compromised. The actual feature set
is replaced with the desired one by an attacker.
4. Tampering with biometric representation: The templates
representing the actual biometric trait are replaced with a
desired one by the attacker.
5. Corrupting the matcher: The matching module is
compromised and the attacker generates a matching score
as desired.
6. Compromising stored templates: An attacker may illicitly
gain access to the stored templates. She can steal those
templates to spoof the users' identities.
7. Communication interception: The information passed
between the matcher and the database can be
altered if an attacker intercepts the communication
between these two modules.
8. Overriding the final decision: An attacker may override the
decision being taken by the matcher.
All the above points indicate how a biometric recognition system can
be compromised. But most fraudulent attempts take place in the
context of faking biometric data or tampering with the stored
templates. Fake biometrics can be recreated from the stored
templates, or even acquired directly from the sensor without the
knowledge of the user. Moreover, compromising the database may lead
to editing or deletion of the templates. These two topics are
discussed here at length.
6.1 Faking Biometric Data
Numerous studies reveal how biometrics can be faked. Following
are three easy steps to spoof fingerprint data: (i) the residual
fingerprint of a user can be obtained from a mobile phone or any
other surface; (ii) the lifted fingerprint impression can be used to
create a dummy finger; (iii) the dummy finger can be placed on the
fingerprint sensor to claim the identity. Demonstrations of this kind
of attack against the fingerprint sensors of popular smartphone
brands are publicly available [web1].
Biometric data is often captured by Internet of Things (IoT) devices
with or without the user's knowledge. Hence, these devices pose the
greatest danger to biometrics, as a biometric identity can be spoofed
using the captured data. Recently, 5.6 million fingerprint records
were stolen from the US Office of Personnel Management (OPM).
Such a large scale breach reveals the devastating consequences
of poor data security practices. Moreover, it strengthens the concern
over storage of biometric data, as the OPM is considered to enforce
stringent security practices while storing biometric data in
comparison to several private companies. Apple's Siri, Google Now,
and Microsoft Cortana record every utterance of a user and send the
data back to the servers of their organisations through the Internet
[Sparkes, 2015]. Samsung TVs automatically record conversations of
their users for automatic speech recognition [Matyszczyk, 2015].
Behavioral biometric data is captured by many wearable devices.
Several forms of biometric data can also be captured through a
smartphone; hence, smartphones pose a risk to privacy. Though
misuse of biometric data by big corporations is debatable, numerous
third party applications installed on smartphones may pose a
security and privacy risk. These third party applications often ask
for more permissions on the device than they actually need to
complete their tasks. Permissions for accessing the camera and
the microphone of a smartphone are the most misused ones [Felt et
al, 2011]. These permissions enable the application to capture face,
retina, and voice samples of the user. Moreover, many applications
request root permission to access every device sensor, enabling
fingerprint, gait, and heart monitoring as well as key logging to
obtain behavioral information such as keystrokes and screenshots.
These are even very handy for spying on user credentials. It is debatable
whether these third party applications use the data with malicious
intentions. But their business models, in several cases, allow them to
sell the users' data to advertisers. With the increasing demand for
users' data in the underground market, it may be more lucrative to sell
users' data than to make money through in-app advertising. As per a
2012 report [Labs, 2012] from Zscaler's labs, over 40% of mobile
applications communicate data to third parties. During installation of
a new application, the majority of smartphone users do not check
the permission requests, and many of them may not even be aware of
the implications. Additionally, the entire permission system can be
bypassed using root exploits [Zhou and Jiang, 2012]. Moreover,
storage of data on third party servers poses a risk: past cases show
that there have been breaches even in military servers, so breaching
the security of a small smartphone application vendor may not be
challenging for attackers.
A comprehensive review of biometric authentication in a
smartphone is presented by [Meng et al, 2015]. Twelve types of
biometric authentication can be carried out on a smartphone. Among
those, the following six are physiological: fingerprint, iris, retina,
face, hand, and palmprint. The remaining six types are behavioral in nature.
These are signature, gait, voice, keystroke dynamics, touch
dynamics, and behavioral profiling. A survey of successful attacks on
smartphones is presented in this report [Meng et al, 2015]. Another
successful attack to guess passwords using touch devices is also
reported in [Zhang et al, 2012].
The proposals for securing smartphone authentication schemes, and
authentication in general, are the use of multimodal biometrics,
liveness checks, combination with other authentication techniques
(two-factor authentication), and the use of cancelable biometrics to
store the templates. These proposals are useless if the user installs
malware or a free application including a rootkit that bypasses the
permission system and captures biometric data from the user as she
uses the smartphone.
Several problems exist with smartphone permissions across various
operating systems, and experts have suggested improvements in this
regard. But all of these security efforts may fail at a single point,
i.e., the user. This is because the user sometimes installs rogue
applications without properly checking the source or vendor; numerous
fake applications pretend to be popular ones. Users also often do not
read the permissions while installing applications. Hence, all good
security practices fail. Certain permissions provide an application
access to several device features and, implicitly, to data – biometric
or otherwise. Even if the application owner is not misusing the data,
there may be a breach by a malicious attacker who steals the data for
illicit gains.
An artificial finger can spoof a fingerprint on many fingerprint
sensors; several researchers have demonstrated this time and again
[Cappelli et al, 2007; Galbally et al, 2010; Espinoza et al, 2011].
Research also suggests how this spoofing attack on fingerprints can
be prevented [Marcialis et al, 2010]. Similarly, iris biometric is
vulnerable to spoofing through fake iris scans, and several techniques
[Wei et al, 2008; Rigas and Komogortsev, 2015] suggest how to
detect a fake iris. Another biometric very susceptible to spoofing
attacks is face authentication, which can be fooled using pictures;
many techniques have been proposed to detect this [Maata et al, 2011;
Komulainen et al, 2013; Pereira et al, 2014]. Hand geometry can
also be spoofed by creating fake hands. [Chen et al, 2005]
proposed a practical model using plaster to create fake hands, and the
authors demonstrate that fake hands can be created without the
user's knowledge from hand templates stored in the database.
Other soft biometrics can also be easily spoofed – voice can be
recorded or synthesized artificially [Alegre et al, 2012], and gait can
be spoofed by using a video camera from a distance to capture the
user's motion [Gafurov et al, 2007].
6.2 Template Security
Until a decade ago, it was believed that the original biometric data
could not be recreated from a stored template. Several researchers
proved this wrong [Ross et al, 2007; Jain et al, 2008]. Encryption cannot be used
to prevent a template compromise, as it is not possible to carry out
recognition in the encrypted domain [Jain et al, 2008]. Tamper
resistant storage in a smart card seems feasible for a single template
for verification. Otherwise, it cannot be applied to large biometric
databases. Solutions exist in the form of private templates [Davida et
al, 1998] and cancelable biometrics [Ratha et al, 2007]. Still, several
biometric recognition systems have not adopted these solutions to
secure the templates in the database.
The concept of cancelable biometric to tackle the above stated
problem was first proposed in [Ratha et al, 2007]. Several other
template protection schemes have been developed subsequently.
These schemes can be grouped into the following two categories –
cancelable transformations and biometric cryptosystems.
The characteristics of a template protection scheme are mentioned
here:
• Diversity: The templates of the same individual's biometric
have to be distinct in different databases. This prevents an
attacker from gaining access to multiple systems through a
compromise of one database.
• Revocability: In the case of a compromise of an individual's
biometric template, it should be possible to issue her a new
template from the same biometric data.
• Security: It should not be possible to recreate the original
biometric data from a template; the transformation is
one-way.
• Performance: There should not be any impact on the
performance of the biometric system in terms of false
acceptance rate (FAR) and false rejection rate (FRR).
[Jain et al, 2008] describes the advantages and disadvantages of each
template protection type.
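As a toy illustration of the diversity and revocability properties (this is a sketch of the general idea, not any of the cited schemes), a user- and database-specific seeded random projection shows how both can arise from a single transform parameter:

```python
# Illustrative cancelable-transform sketch: project the feature
# vector through a seeded random matrix. Issuing a new seed revokes
# the old template; distinct seeds per database provide diversity.
# This is a toy sketch only, not any of the cited protection schemes.
import random

def cancelable_transform(features, seed, out_dim=4):
    """Project features through a seeded random matrix (hard to invert
    in practice only when out_dim < len(features) and the seed is secret)."""
    rng = random.Random(seed)
    projection = [[rng.gauss(0, 1) for _ in features] for _ in range(out_dim)]
    return [sum(p * f for p, f in zip(row, features)) for row in projection]

features = [0.3, 0.7, 0.1, 0.9, 0.4, 0.6]
template_db1 = cancelable_transform(features, seed=101)  # per-database seed
template_db2 = cancelable_transform(features, seed=202)  # distinct template
print(template_db1 != template_db2)  # True: diversity across databases
```

Because the projection is deterministic for a given seed, matching can still be performed in the transformed domain, while a compromised template is revoked simply by re-enrolling with a fresh seed.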
There is extensive literature on cancelable uni-biometric schemes
and biometric cryptosystems, thoroughly surveyed in [Rathgeb and
Uhl, 2011]. A subsequent chapter in this Staff Series discusses the
issue of cancelable biometrics in depth.
6.3 Other Types of Attacks on Biometric Systems
Similar to the brute force approach of password guessing, a brute
force attack with a large number of input fingerprints can be
mounted. The difficulty of such an attack is that the search space for
guessing a fingerprint is prohibitively large. But for fingerprint
authentication on several mobile devices, only a part of the full
fingerprint is utilized, which provides the attacker with a much
smaller search space.
On the contrary, a dictionary attack tries only those possibilities
which are deemed most likely to succeed. Although dictionary
attacks have been extensively studied and analyzed for traditional
password-based authentication systems, they have not been
systematically considered by the research community in the context
of fingerprint verification. To perform a guessing attack with
fingerprints, the question arises as to whether some fingerprints are
more likely to match a target than others. It has been observed in
the previous literature that different users have different
performance characteristics based on their fingerprints. [Yager and
Dunstone, 2010] introduced a menagerie consisting of doves (users
with high genuine scores and low impostor scores), chameleons (high
genuine scores and high impostor scores, thus easy to match with
everyone, including themselves), phantoms (hard to match with most
users), and worms (hard to match with themselves but easy to match
with others). [Yager and Dunstone, 2010] have identified the
existence of chameleons in datasets of full fingerprints.
A metric to estimate the strength of a biometric recognition system
against impersonation attacks, namely Wolf Attack Probability
(WAP), is proposed in [Une et al, 2007]. In this context, a wolf
indicates an input sample which wrongly matches with multiple
biometric templates. [Roy et al, 2017] shows how a master print can
be located or generated. Such a master print can be used to match
with multiple biometric templates. In the case of partial
fingerprints, the probability of finding a masterprint and the attack
accuracy both increase. This finding reveals the risks of using
partial fingerprints for authentication. In [Nagar et al, 2012], an
evidential value analysis is carried out for latent fingerprints. It
has been observed that a smaller number of minutiae points or a small
surface area of the latent fingerprint leads to a low evidential
value of the fingerprint. Hence, the
probability of matching error increases in these cases. Based on this
finding, an estimate in [Roy et al, 2017] shows that the probability of
finding masterprints that incorrectly match with a large number of
templates is high for partial fingerprints.
7. Usage of Biometric in Banks and Financial Institutions
Amidst all these practical challenges and concerns over security and
privacy issues in biometrics, banks and financial institutions have
taken significant steps in embracing it. Banks in India have
embraced biometric as one of the authentication factors. Here is a
small illustrative list of initiatives by banks in India:
1. DCB Bank: Fingerprint based cash withdrawal is possible
from the ATMs of DCB Bank. These ATMs connect with the
Aadhaar database to authenticate a customer using her
fingerprint as enrolled with Aadhaar.
2. Federal Bank: A zero balance selfie account opening is
possible with Federal Bank using its banking app. For an
instantaneous account opening, a user scans her Aadhaar
and PAN cards and then, clicks her selfie photo.
3. HDFC Bank: As part of financial inclusion initiatives by HDFC
Bank, the bank has introduced fingerprint verification using a hand-held device or a micro-ATM. Fingerprint based
verification against the Aadhaar database enables the bank to
perform an instant KYC (know your customer) check for its users.
4. ICICI Bank: Voice recognition enables the customers of ICICI
Bank to interact smoothly with the bank's call center. During
such an interaction, customers are not asked for authentication
credentials, as their voice itself authenticates them.
5. State Bank of India: A fingerprint based authentication is
carried out in order to provide the bank’s employees an
access to the core banking system.
There are several use cases of biometrics in the banking and
financial services industry around the globe too. Fingerprints have
commonly been used by mobile banking applications to authenticate
users for the last few years. For example, Bank of America, Chase,
and PNC are some of the institutions which have adopted fingerprint
based user authentication for their mobile applications. MasterCard has
launched a ‘selfie-pay’ to authenticate online purchases through
either face or fingerprint recognition. Citi has registered its
customers’ voice samples.
USAA, which serves members of the military and their families in the
United States, has rolled out three different biometrics – fingerprint,
face and voice recognition – for customer authentication. Pictet &
Cie, Banquiers (one of the leading banks in Switzerland) has
deployed a 3D face recognition system to control staff access within
the bank’s premises.
8. Conclusion
Biometric recognition is gaining popularity for the identification and
verification of individuals through their specific physiological and
behavioral traits. In certain scenarios, it serves as a second factor of
authentication, complementing knowledge-based or possession-based
authentication for transactions that require security. It enables
governments as well as private and public businesses to reduce
identity theft and related crimes.
In this article, the strengths and weaknesses of several biometric
recognition systems have been discussed through a comprehensive
review of the developments in this field. Both unimodal and
multimodal biometric systems have been covered, and various types
of fusion strategies have been explained in the context of
multimodal systems. The article has also discussed the security and
privacy concerns associated with the usage of biometrics. This
compilation of the progress in the field should help readers gain an
overall grasp of the area.
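As a small illustration of the score-level fusion strategies recapped above, matcher scores from different modalities are typically brought onto a common scale before being combined; one classical combination is min-max normalization followed by a weighted sum. The modalities, score ranges and weights below are hypothetical, chosen only to show the mechanics.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score onto [0, 1] given its known range."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, ranges, weights):
    """Weighted-sum score-level fusion across modalities.
    scores/ranges/weights are per-modality lists; weights sum to 1."""
    fused = 0.0
    for s, (lo, hi), w in zip(scores, ranges, weights):
        fused += w * min_max_normalize(s, lo, hi)
    return fused

# e.g. a fingerprint matcher scoring in [0, 100] and a face
# matcher scoring in [-1, 1], weighted 60/40
fused = fuse_scores(scores=[72.0, 0.55],
                    ranges=[(0.0, 100.0), (-1.0, 1.0)],
                    weights=[0.6, 0.4])
# the fused score is then compared against a decision threshold
```

The survey literature cited below covers many refinements of this idea, including quality-weighted, rank-level and decision-level alternatives.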
References
[web1] “Chaos Computer Club Breaks Apple TouchID”, http://www.ccc.de
/en/updates/2013/ccc-breaks-apple-touchid, January 2016
A. F. Abate, M. Nappi, D. Riccio, and S. Ricciardi, “Ear Recognition by Means
of a Rotation Invariant Descriptor,” in Proc. 18th International Conference
on Pattern Recognition (ICPR), pp. 437-440, August 2006
A. Abaza and A. Ross, “Quality Based Rank-Level Fusion in Multibiometric
Systems,” in Proc. IEEE 3rd International Conference on Biometrics, Theory,
Applications and Systems (BTAS), pp. 1-6, September 2009
M. I. Ahmad, W. L. Woo, and S. Dlay, “Non-Stationary Feature Fusion of
Face and Palmprint Multimodal Biometrics,” Neurocomputing, Vol. 177, pp.
49-61, February 2016
F. Alegre, R. Vipperla, N. Evans, and B. Fauve, “On The Vulnerability of
Automatic Speaker Recognition to Spoofing Attacks with Artificial Signals,”
in Proc. 20th European Signal Processing Conference (EUSIPCO), pp. 36-40,
August 2012
K. O. Bailey, J. S. Okolica, and G. L. Peterson, “User Identification and
Authentication Using Multi-Modal Behavioral Biometrics,” Computer
Security, Vol. 43, pp. 77–89, June 2014
M. Belahcene, A. Chouchane, and H. Ouamane, “3D Face Recognition in
Presence of Expressions by Fusion Regions of Interest,” in Proc. 22nd Signal
Processing and Communications Applications Conference (SIU), pp. 2269-
2274, April 2014
X. Y. Ben, M. Y. Jiang, Y. J. Wu, and W. X. Meng, “Gait Feature Coupling for
Low-Resolution Face Recognition,” Electronics Letters, Vol. 48, No. 9, pp.
488-489, April 2012
S. Bharadwaj, M. Vatsa, and R. Singh, “Aiding Face Recognition with Social
Context Association Rule Based Re-Ranking,” in Proc. IJCB, Clearwater, FL,
USA, pp. 1-8, 2014
X. Cao, W. Shen, L. G. Yu, Y. L. Wang, J. Y. Yang, and Z. W. Zhang,
“Illumination Invariant Extraction for Face Recognition Using Neighboring
Wavelet Coefficients,” Pattern Recognition, Vol. 45, No. 4, pp. 1299-1305,
2012
R. Cappelli, D. Maio, A. Lumini, and D. Maltoni, “Fingerprint Image
Reconstruction from Standard Templates,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 29, No. 9, pp. 1489-1503,
September 2007
K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, “Comparison and
Combination of Ear and Face Images in Appearance-Based Biometrics,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 9,
pp. 1160–1165, September 2003
W.-S. Chen, Y.-S. Chiang, and Y.-H. Chiu, “Biometric Verification by Fusing
Hand Geometry and Palmprint,” in Proc. 3rd International Conference on
Intelligent Information Hiding and Multimedia Signal Processing (IIHMSP),
pp. 403-406, November 2007
H. Chen, H. Valizadegan, C. Jackson, S. Soltysiak, and A. K. Jain, “Fake Hands:
Spoofing Hand Geometry Systems,” Biometric Consortium, Washington DC,
USA, 2005
G. Chetty and M. Wagner, “Liveness Detection Using Cross-Modal
Correlations in Face-Voice Person Authentication,” in Proc. 9th European
Conference on Speech Communication and Technology (Eurospeech),
Lisbon, Portugal, pp. 2181–2184, 2005
H. Choi, K. Choi, and J. Kim, “Mosaicing Touchless and Mirror-Reflected
Fingerprint Images,” IEEE Transactions on Information Forensics and
Security, Vol. 5, No. 1, pp. 52-61, March 2010
M. Cornett, “Can Liveness Detection Defeat the M-Commerce Hackers?”,
Biometric Technology Today, Vol. 2015, No. 10, pp. 9-11, October 2015
G. I. Davida, Y. Frankel, and B. Matt, “On Enabling Secure Applications
through Off-Line Biometric Identification,” in IEEE Symposium on Security
and Privacy, pp. 148-157, May 1998
L. M. Dinca and G. P. Hancke, “The Fall of One, the Rise of Many: A Survey
on Multi-Biometric Fusion Methods”, IEEE Access, Vol. 5, pp. 6247-6289,
April 2017
W. Dong, Z. Sun, and T. Tan, “Iris Matching Based on Personalized Weight
Map,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.
33, No. 9, pp. 1744-1757, September 2011
A. Drosou, N. Porfyriou, and D. Tzovaras, “Enhancing 3D face Recognition
Using Soft Biometrics,” in Proc. 3DTV Conference: The True Vision –
Capture, Transmission and Display of 3D Video, pp. 1-4, October 2012
M. Espinoza, C. Champod, and P. Margot, “Vulnerabilities of Fingerprint
Reader to Fake Fingerprints Attacks,” Forensic Science International, Vol.
204, Nos. 1-3, pp. 41-49, January 2011
A. Fatehpuria, D. L. Lau, and L. G. Hassebrook, “Acquiring a 2D Rolled
Equivalent Fingerprint Image from a Non-Contact 3D Finger Scan,” Proc.
SPIE, Vol. 6202, pp. 62020C, April 2006
A. P. Felt, E. Chin, S. Hanna, D. Song, and D. Wagner, “Android Permissions
Demystified,” in Proc. 18th ACM Computer and Communications Security
Conference, New York, USA, pp. 627-638, 2011
N. A. Fox, R. Gross, J. F. Cohn, and R. B. Reilly, “Robust Biometric Person
Identification Using Automatic Classifier Fusion of Speech, Mouth, and Face
Experts,” IEEE Transactions on Multimedia, Vol. 9, No. 4, pp. 701–714, June
2007
A. Fridman et al., “Decision Fusion for Multimodal Active Authentication,” IT
Professional, Vol. 15, No. 4, pp. 29–33, 2013
J. Galbally et al., “An Evaluation of Direct Attacks using Fake Fingers
Generated from ISO Templates,” Pattern Recognition Letters, Vol. 31, No. 8,
pp. 725-732, June 2010
Z. Gao, D. Li, C. Xiong, J. Hou, and H. Bo, “Face Recognition with Contiguous
Occlusion Based on Image Segmentation,” in Proc. International Conference
on Audio, Language and Image Processing (ICALIP), pp. 156-159, July 2014
U. Gawande, M. Zaveri, and A. Kapur, “Bimodal Biometric System: Feature
Level Fusion of Iris and Fingerprint,” Biometric Technology Today, Vol. 2013,
No. 2, pp. 7-8, 2013
L. Ghoualmi, A. Draa, and S. Chikhi, “An Efficient Feature Selection Scheme
Based on Genetic Algorithm for Ear Biometrics Authentication,” in Proc. 12th
International Symposium on Programming and Systems (ISPS), pp. 1-5, April
2015
M. Gomez-Barrero, J. Galbally, and J. Fierrez, “Efficient Software Attack to
Multimodal Biometric Systems And Its Application To Face And Iris Fusion,”
Pattern Recognition Letters, Vol. 36, pp. 243-253, January 2014
J. Han and B. Bhanu, “Individual Recognition Using Gait Energy Image,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 2,
pp. 316-322, February 2006
H. Han, S. Shan, X. Chen, and W. Gao, “A Comparative Study On Illumination
Preprocessing in Face Recognition,” Pattern Recognition, Vol. 46, No. 6, pp.
1691-1699, June 2013
M. Hayat, M. Bennamoun, and A. A. El-Sallam, “Fully Automatic Face
Recognition from 3D Videos,” in Proc. 21st International Conference on
Pattern Recognition (ICPR), pp. 1415-1418, November 2012
Z. Huang, Y. Liu, X. Li, and J. Li, “An Adaptive Bimodal Recognition
Framework Using Sparse Coding for Face and Ear,” Pattern Recognition
Letters, Vol. 53, pp. 69–76, February 2015
Z. Huang, Y. Liu, C. Li, M. Yang, and L. Chen, “A Robust Face and Ear Based
Multimodal Biometric System Using Sparse Representation,” Pattern
Recognition, Vol. 46, No. 8, pp. 2156-2168, August 2013
Y. Huang, D. Xu, and F. Nie, “Patch Distribution Compatible Semi-Supervised
Dimension Reduction for Face and Human Gait Recognition,” IEEE
Transactions on Circuits and Systems for Video Technology, Vol. 22, No. 3,
pp. 479-488, March 2012
S. M. S. Islam, R. Davies, M. Bennamoun, R. A. Owens, and A. S. Mian,
“Multibiometric Human Recognition Using 3D Ear and Face Features,”
Pattern Recognition, Vol. 46, No. 3, pp. 613-627, March 2013
S. Jahanbin, H. Choi, and A. C. Bovik, “Passive Multimodal 2-D+3-D Face
Recognition using Gabor Features and Landmark Distances,” IEEE
Transactions on Information Forensics and Security, Vol. 6, No. 4, pp. 1287-
1304, December 2011
A. K. Jain, S. C. Dass, and K. Nandakumar, “Soft Biometric Traits for Personal
Recognition Systems,” in Biometric Authentication, Berlin, Germany:
Springer, pp. 731-738, 2004
A. K. Jain, K. Nandakumar, and A. Nagar, “Biometric Template Security,”
EURASIP Journal on Advances in Signal Processing, Vol. 2008, pp. 113, April
2008
A. Jain and A. Ross, “Fingerprint Mosaicking,” in Proc. IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 4, pp.
IV-4064-IV-4067, May 2002
H. Javidnia, A. Ungureanu, C. Costache, and P. Corcoran, “Palmprint as a
Smartphone Biometric,” in Proc. IEEE International Conference on
Consumer Electronics (ICCE), pp. 463-466, January 2016
X.-Y. Jing, Y.-F. Yao, D. Zhang, J.-Y. Yang, and M. Li, “Face and Palmprint Pixel
Level Fusion and Kernel DCV-RBF Classifier for Small Sample Biometric
Recognition,” Pattern Recognition, Vol. 40, No. 11, pp. 3209-3224,
November 2007
M. J. Jones and P. Viola, “Method and System for Object Detection in Digital
Images,” US Patent 7,099,510, August 29, 2006
T.-C. Kim, K.-M. Kyung, and K. Bae, “New Biometrics-Acquisition Method
Using Time-Of-Flight Depth Camera,” in Proc. IEEE International Conference
on Consumer Electronics (ICCE), pp. 721-722, January 2011
J. Komulainen, A. Hadid, M. Pietikainen, A. Anjos, and S. Marcel,
“Complementary Countermeasures for Detecting Scenic Face Spoofing
Attacks,” in Proc. International Conference on Biometrics (ICB), pp. 1-7,
June 2013
A. Kumar, M. Hanmandlu, H. Sanghvi, and H. M. Gupta, “Decision Level
Biometric Fusion Using Ant Colony Optimization,” in Proc. 17th IEEE
International Conference on Image Processing (ICIP), pp. 3105-3108,
September 2010
T. S. Kumar and V. Kanhangad, “Face Recognition Using Two-Dimensional
Tunable-Q Wavelet Transform,” in Proc. International Conference on Digital
Image Computing: Techniques and Applications (DICTA), pp. 1-7, 2015
A. Kumar and A. Kumar, “Adaptive Management of Multimodal Biometrics
Fusion Using Ant Colony Optimization,” Information Fusion, Vol. 32, pp. 49-
63, November 2015
A. Kumar and S. Shekhar, “Personal Identification Using Multibiometrics
Rank-Level Fusion,” IEEE Transactions on Systems, Man, and Cybernetics,
Part C (Applications and Reviews), Vol. 41, No. 5, pp. 743-752, September
2011
G. P. Kusuma and C.-S. Chua, “PCA-based Image Recombination for
Multimodal 2D + 3D Face Recognition,” Image and Vision Computing, Vol.
29, No. 5, pp. 306-316, April 2011
Zscaler, “10% of Mobile Apps Leak Passwords, 40% Communicate with
Third Parties | Cloud Security Solutions | Zscaler” [Online]. Available:
https://www.zscaler.com/press/10-mobile-apps-leak-passwords-40-
communicate-third-parties, 2012
K. Lai, S. Samoil, and S. N. Yanushkevich, “Multi-spectral Facial Biometrics in
Access Control,” in Proc. Symposium on Computational Intelligence in
Biometrics and Identity Management (CIBIM), pp. 102-109, December 2014
T. K. Lee, M. Belkhatir, and S. Sanei, “On the Compensation for the Effects
of Occlusion in Fronto-Normal Gait Signal Processing,” in Proc. 11th IEEE
International Symposium on Multimedia (ISM), pp. 165-170, December
2009
S. Li and A. C. Kot, “Fingerprint Combination for Privacy Protection,” IEEE
Transactions on Information Forensics and Security, Vol. 8, No. 2, pp. 350-
360, February 2013
Y. Li, K. Xu, Q. Yan, Y. Li, and R. H. Deng, “Understanding OSN-based Facial
Disclosure Against Face Authentication Systems,” in Proc. ACM Symposium
on Information, Computer and Communications Security, New York, USA,
pp. 413-424, 2014
Y. Liang, X. Ding, C. Liu, and J.-H. Xue, “Combining Multiple Biometric Traits
with an Order-Preserving Score Fusion Algorithm,” Neurocomputing, Vol.
171, pp. 252-261, January 2016
P. Lili and X. Mei, “The Algorithm of Iris Image Preprocessing,” in Proc. 4th
IEEE Workshop on Automatic Identification Advanced Technologies
(AutoID), pp. 134-138, October 2005
Z. Liu and S. Sarkar, “Effect of Silhouette Quality On Hard Problems in Gait
Recognition,” IEEE Transactions on Systems, Man, and Cybernetics, Part B
(Cybernetics), Vol. 35, No. 2, pp. 170-183, April 2005
S. Maity and M. Abdel-Mottaleb, “3D Ear Segmentation and Classification
Through Indexing,” IEEE Transactions on Information Forensics and Security,
Vol. 10, No. 2, pp. 423-435, February 2015
G. Marcialis, F. Roli, and A. Tidu, “Analysis of Fingerprint Pores for Vitality
Detection,” in Proc. 20th International Conference Pattern Recognition
(ICPR), pp. 1289-1292, August 2010
C. Matyszczyk, “Samsung’s Warning: Our Smart TVs Record Your Living
Room Chatter” [Online]. Available: http://www.cnet.com/news/samsungs-warning-our-smart-tvs-record-your-living-room-chatter/, 2015
W. Meng, D. Wong, S. Furnell, and J. Zhou, “Surveying the Development of
Biometric User Authentication on Mobile Phones,” IEEE Communications
Surveys & Tutorials, Vol. 17, No. 3, pp. 1268-1293, 3rd Quarter, 2015
D. Miao, Z. Sun, and Y. Huang, “Fusion of Multi-Biometrics Based on a New
Robust Linear Programming,” in Proc. 22nd International Conference on
Pattern Recognition (ICPR), pp. 291-296, August 2014
S.-U.-D. G. Mohi-ud-Din, A. B. Mansoor, H. Masood, and M. Mumtaz,
“Personal Identification Using Feature and Score Level Fusion of Palm and
Fingerprints,” Signal, Image and Video Processing, Vol. 5, No. 4, pp. 477-
483, August 2011
M. M. Monwar and M. L. Gavrilova, “Multimodal Biometric System Using
Rank-Level Fusion Approach,” IEEE Transactions on Systems, Man, and
Cybernetics, Part B: Cybernetics, Vol. 39, No. 4, pp. 867–878, August 2009
M. M. Monwar and M. Gavrilova, “Markov Chain Model for Multimodal
Biometric Rank Fusion,” Signal, Image and Video Processing, Vol. 7, No. 1,
pp. 137-149, April 2011
T. Murakami and K. Takahashi, “Information-Theoretic Performance
Evaluation of Likelihood-Ratio Based Biometric Score Fusion under Modality
Selection Attacks,” in Proc. 7th International Conference on Biometrics
Theory, Applications and Systems (BTAS), pp. 1-8, September 2015
A. Nagar, H. Choi, and A. K. Jain, “Evidential Value of Automated Latent
Fingerprint Comparison: An Empirical Approach,” IEEE Transactions on
Information Forensics and Security, Vol. 7, No. 6, pp. 1752–1765, December
2012
M. O. Oloyede and G. P. Hancke, “Unimodal and Multimodal Biometric
Sensing Systems: A Review”, IEEE Access, Vol. 4, pp. 7532-7555, September
2016
U. Park and A. K. Jain, “Face Matching and Retrieval Using Soft Biometrics,”
IEEE Transactions on Information Forensics and Security, Vol. 5, No. 3, pp.
406–415, September 2010
J. Parris et al., “Face and Eye Detection on Hard Datasets,” in Proc.
International Joint Conference on Biometrics (IJCB), pp. 1-10, October 2011
P. P. Paul, M. L. Gavrilova, and R. Alhajj, “Decision Fusion for Multimodal
Biometrics Using Social Network Analysis,” IEEE Transactions on Systems,
Man, and Cybernetics: Systems, Vol. 44, No. 11, pp. 1522–1533, November
2014
M. Paunwala and S. Patnaik, “Biometric Template Protection with DCT
Based Watermarking,” Machine Vision and Applications, Vol. 25, No. 1, pp.
263-275, July 2013
T. D. F. Pereira et al., “Face Liveness Detection Using Dynamic Texture,”
EURASIP Journal on Image and Video Processing, Vol. 2014, No. 1, pp. 1-15,
January 2014
N. Poh, J. Kittler, C.-H. Chan, and M. Pandit, “Algorithm to Estimate
Biometric Performance Change Over Time,” IET Biometrics, Vol. 4, No. 4,
pp. 236-245, 2015
S. Prabhakar and A. K. Jain, “Decision-level Fusion in Fingerprint
Verification,” Pattern Recognition, Vol. 35, No. 4, pp. 861-874, 2002
P. Radu, K. Sirlantzis, W. Howells, S. Hoque, and F. Deravi, “Image
Enhancement Vs Feature Fusion in Colour Iris Recognition,” in Proc.
International Conference on Emerging Security Information, Systems and
Technologies (EST), pp. 53-57, September 2012
R. Raghavendra, B. Dorizzi, A. Rao, and G. Hemantha, “PSO versus AdaBoost
for Feature Selection in Multimodal Biometrics,” in Proc. IEEE 3rd
International Conference on Biometrics: Theory, Applications and Systems
(BTAS), pp. 1-7, September 2009
R. Raghavendra, B. Dorizzi, A. Rao, and G. H. Kumar, “Designing Efficient
Fusion Schemes for Multimodal Biometric Systems Using Face and
Palmprint,” Pattern Recognition, Vol. 44, No. 5, pp. 1076-1088, May 2011
N. K. Ratha, S. Chikkerur, J. H. Connell, and R. M. Bolle, “Generating
Cancelable Fingerprint Templates,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. 29, No. 4, pp. 561-572, April 2007
N. K. Ratha, J. H. Connell, and R. M. Bolle, “Image Mosaicing for Rolled
Fingerprint Construction,” in Proc. 14th International Conference on Pattern
Recognition, Vol. 2, pp. 1651-1653, August 1998
N. K. Ratha, J. H. Connell, and R. M. Bolle, “Enhancing Security and Privacy
in Biometrics-based Authentication Systems,” IBM Syst. J., Vol. 40, No. 3,
pp. 614-634, April 2001
C. Rathgeb and A. Uhl, “A Survey on Biometric Cryptosystems and
Cancelable Biometrics,” EURASIP Journal on Information Security, Vol. 2011,
No. 1, pp. 1-25, 2011
I. Rigas and O. V. Komogortsev, “Eye Movement-Driven Defense against Iris
Print-Attacks,” Pattern Recognition Letters, Vol. 68, pp. 316-326, December
2015
R. N. Rodrigues, N. Kamat, and V. Govindaraju, “Evaluation of Biometric
Spoofing in a Multimodal System,” in Proc. 4th IEEE International
Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1-5,
September 2010
A. Ross, S. Dass, and A. Jain, “A Deformable Model for Fingerprint
Matching,” Pattern Recognition, Vol. 38, No. 1, pp. 95-103, 2005
A. Ross, J. Shah, and A. K. Jain, “From Template to Image: Reconstructing
Fingerprints from Minutiae Points,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. 29, No. 4, pp. 544–560, April 2007
A. Roy, N. Memon, and A. Ross, “Masterprint: Exploring the Vulnerability of
Partial Fingerprint-Based Authentication Systems”, IEEE Transactions on
Information Forensics and Security, Vol. 12, No. 9, pp. 2013-2025,
September 2017
A. Saliha, B. Karima, K. Mouloud, D. H. Nabil, and B. Ahmed, “Extraction
Method of Region of Interest from Hand Palm: Application with Contactless
and Touchable Devices,” in Proc. 10th International Conference on
Information Assurance and Security (IAS), pp. 77-82, November 2014
S. Shah, A. Ross, J. Shah, and S. Crihalmeanu, “Fingerprint Mosaicing Using
Thin Plate Splines,” in Proc. Biometric Consortium Conference, 2005, pp. 1-2
H. M. Sim, H. Asmuni, R. Hassan, and R. M. Othman, “Multimodal
Biometrics: Weighted Score Level Fusion Based On Nonideal Iris and Face
Images,” Expert Systems with Applications, Vol. 41, No. 11, pp. 5390-5404,
September 2014
M. Sparkes, “Why Your Smartphone Records Everything You Say To It”,
[Online], Available:
http://www.telegraph.co.uk/technology/news/11434754/Why-your-smartphone-records-everything-you-say-to-it.html, 2015
A. Srinivasan and V. Balamurugan, “Occlusion Detection and Image
Restoration in 3D Face Image,” in Proc. IEEE Region 10 Conference
(TENCON), pp. 1-6, October 2014
N. Sugandhi, M. Mathankumar, and V. Priya, “Real Time Authentication
System Using Advanced Finger Vein Recognition Technique,” in Proc.
International Conference on Communication and Signal Processing (ICCSP),
pp. 1183-1187, April 2014
M. Sultana, P. P. Paul, M. L. Gavrilova, “Social Behavioural Information
Fusion in Multimodal Biometrics”, IEEE Transactions on Systems, Man, and
Cybernetics: Systems, 2017
Z. Sun, Y. Wang, T. Tan, and J. Cui, “Improving Iris Recognition Accuracy Via
Cascaded Classifiers,” IEEE Transactions on Systems, Man, and Cybernetics
Part C: Applications and Reviews, Vol. 35, No. 3, pp. 435-441, August 2005
Y. Sutcu, Q. Li, and N. Memon, “Secure Biometric Templates from
Fingerprint-Face Features,” in Proc. IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR), pp. 1-6, June 2007
J. Svoboda, M. Bronstein, and M. Drahansky, “Contactless Biometric Hand
Geometry Recognition Using a Low-Cost 3D Camera,” in Proc. International
Conference on Biometrics (ICB), pp. 452-457, May 2015
D. Tan, K. Huang, S. Yu, and T. Tan, “Efficient Night Gait Recognition Based
On Template Matching,” in Proc. 18th International Conference on Pattern
Recognition (ICPR), August 2006
Q. Tao and R. Veldhuis, “Threshold-Optimized Decision-Level Fusion and Its
Application to Biometrics,” Pattern Recognition, Vol. 42, No. 5, pp. 823-836,
May 2009
K. Tewari and R. L. Kalakoti, “Fingerprint Recognition and Feature Extraction
Using Transform Domain Techniques,” in Proc International Conference on
Advanced Communication Control and Computing Technologies (ICACACT),
pp. 1-5, August 2014
R. Tolosana, R. Vera-Rodriguez, J. Ortega-Garcia, and J. Fierrez,
“Preprocessing and Feature Selection for Improved Sensor Interoperability
in Online Biometric Signature Verification,” IEEE Access, Vol. 3, pp. 478-489,
2015
P. Tome and S. Marcel, “On the Vulnerability of Palm Vein Recognition to
Spoofing Attacks,” in Proc. International Conference on Biometrics (ICB),
pp. 319-325, May 2015
C. O. Ukpai, S. S. Dlay, and W. L. Woo, “Iris Feature Extraction Using
Principally Rotated Complex Wavelet Filters (PR-CWF),” in Proc.
International Conference Computer Visual Image Analysis Application
(ICCVIA), pp. 1-6, January 2015
U. Uludag, S. Pankanti, S. Prabhakar, and A. Jain, “Biometric Cryptosystems:
Issues and Challenges,” Proc. IEEE, Vol. 92, No. 6, pp. 948-960, June 2004
M. Une, A. Otsuka, and H. Imai, “Wolf Attack Probability: A New Security
Measure in Biometric Authentication Systems,” in Proc. International
Conference on Biometrics, pp. 396–406, 2007
S. Vaid and D. Mishra, “Comparative Analysis of Palm-Vein Recognition
System Using Basic Transforms,” in Proc. International Conference on
Current Trends in Advanced Computing (IACC), pp. 1105-1110, June 2015
K. Veeramachaneni, L. Osadciw, A. Ross, and N. Srinivas, “Decision Level
Fusion Strategies for Correlated Biometric Classifiers,” in Proc. IEEE
Computer Society Conference on Computer Vision and Pattern Recognition.
Workshops (CVPRW), pp. 1-6, June 2008
K. Veeramachaneni, L. A. Osadciw, and P. K. Varshney, “An Adaptive
Multimodal Biometric Management Algorithm,” IEEE Transactions on
Systems, Man, and Cybernetics, Part C, Vol. 35, No. 3, pp. 344-356, August
2005
R. Vera-Rodriguez, J. Fierrez, J. S. D. Mason, and J. Ortega-Garcia, “A Novel
Approach of Gait Recognition Through Fusion with Footstep Information,”
in Proc. International Conference on Biometrics (ICB), pp. 1-6, June 2013
K. Vishi and S. Y. Yayilgan, “Multimodal Biometric Authentication Using
Fingerprint and Iris Recognition in Identity Management,” in Proc. 9th
International Conference Intelligent Information Hiding and Multimedia
Signal Processing, pp. 334-341, October 2013
N. Wang, Q. Li, A. A. A. El-Latif, X. Yan, and X. Niu, “A Novel Hybrid
Multibiometrics Based On the Fusion of Dual Iris, Visible and Thermal Face
Images,” in Proc. International Symposium on Biometrics and Security
Technologies (ISBAST), pp. 217-223, July 2013
J.-G. Wang, W.-Y. Yau, A. Suwandy, and E. Sung, “Person Recognition by
Fusing Palmprint and Palm Vein Images Based On ‘Laplacianpalm’
Representation,” Pattern Recognition, Vol. 41, No. 5, pp. 1514-1527, May
2008
Z. Wei, X. Qiu, Z. Sun, and T. Tan, “Counterfeit Iris Detection Based On
Texture Analysis,” in Proc. 19th International Conference on Pattern
Recognition (ICPR), pp. 1-4, December 2008
P. Wild, P. Radu, L. Chen, and J. Ferryman, “Robust Multimodal Face and
Fingerprint Fusion in the Presence of Spoofing Attacks,” Pattern
Recognition, Vol. 50, pp. 17-25, September 2016
X. Wu and Q. Zhao, “Deformed Palmprint Matching Based On Stable
Regions,” IEEE Transactions on Image Processing, Vol. 24, No. 12, pp. 4978-
4989, December 2015
X. Xing, K. Wang, and Z. Lv, “Fusion of Gait and Facial Features Using
Coupled Projections for People Identification at a Distance,” IEEE
Transactions on Signal Processing, Vol. 22, No. 12, pp. 2349-2353,
December 2015
Q. Xu, Y. Chen, B. Wang, and K. J. R. Liu, “Radio Biometrics: Human
Recognition Through a Wall”, IEEE Transactions on Information Forensics
and Security, Vol. 12, No. 5, pp. 1141-1155, May 2017
Y. Xu, L. Fei, and D. Zhang, “Combining Left and Right Palmprint Images for
More Accurate Personal Identification,” IEEE Transactions on Image
Processing, Vol. 24, No. 2, pp. 549-559, February 2015
Y. Xu, Z. Zhang, G. Lu, and J. Yang, “Approximately Symmetrical Face Images
for Image Preprocessing in Face Recognition and Sparse Representation
Based Classification,” Pattern Recognition, Vol. 54, pp. 68-82, June 2016
X. Yan, W. Kang, F. Deng, and Q. Wu, “Palm Vein Recognition Based On
Multi-Sampling and Feature-Level Fusion,” Neurocomputing, Vol. 151, pp.
798-807, March 2015
F. Yang, M. Paindavoine, H. Abdi, and A. Monopoli, “Development of a Fast
Panoramic Face Mosaicking and Recognition System,” Optical Engineering,
Vol. 44, No. 8, pp. 087005-1-087005-10, 2005
J. Yang and X. Zhang, “Feature-level Fusion of Fingerprint and Finger-vein
for Personal Identification,” Pattern Recognition Letters, Vol. 33, No. 5, pp.
623-628, April 2012
Y.-F. Yao, X.-Y. Jing, and H.-S. Wong, “Face and Palmprint Feature Level
Fusion for Single Sample Biometrics Recognition,” Neurocomputing, Vol. 70,
Nos. 7-9, pp. 1582-1586, March 2007
N. Yager and T. Dunstone, “The Biometric Menagerie,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. 32, No. 2, pp. 220–230,
February 2010
Y. Zhang, P. Xia, J. Luo, Z. Ling, B. Liu, and X. Fu, “Fingerprint Attack Against
Touch-enabled Devices,” in Proc. 2nd ACM Workshop on Security & Privacy
in Smartphones and Mobile Devices, New York, USA, pp. 57-68, 2012
Y. Zhou and X. Jiang, “Dissecting Android Malware: Characterization and
Evolution,” in Proc. IEEE Symposium on Security and Privacy (SP), pp. 95-
109, May 2012.