What is Image Classification

Classification covers a broad range of decision-theoretic approaches to identifying images (or parts of them). All classification algorithms are based on the assumption that the image in question depicts one or more features (e.g., geometric details in the case of a manufacturing classification system, or spectral regions in the case of remote sensing, as shown in the examples below) and that each of these features belongs to one of several distinct and exclusive classes. The classes may be specified a priori by an analyst (as in supervised classification) or clustered automatically (as in unsupervised classification) into sets of prototype classes, where the analyst merely specifies the number of desired categories. (Classification and segmentation have closely related objectives, as the former is another form of component labeling that can result in the segmentation of various features in a scene.)

Definition

Image classification is the process of assigning land cover classes to pixels. It refers to the task of extracting information classes from a multiband raster image. The resulting raster from image classification can be used to create thematic maps. Depending on the interaction between the analyst and the computer during classification, there are two types of classification: supervised and unsupervised. Image classification plays an important role in environmental and socioeconomic applications, and researchers continue to develop advanced classification techniques to improve classification accuracy.

Figure 1 Image classification

Image classification is perhaps the most important part of digital image analysis. It is very nice to have a “pretty picture”, an image showing a multitude of colors that illustrate various features of the underlying terrain, but it is quite useless unless we know what the colors mean.

How does it work?

Image classification analyzes the numerical properties of various image features and organizes data into categories. Classification algorithms typically employ two phases of processing: training and testing. In the initial training phase, characteristic properties of typical image features are isolated, and, based on these, a unique description of each classification category, i.e., a training class, is created. In the subsequent testing phase, these feature-space partitions are used to classify image features. The description of training classes is an extremely important component of the classification process. In supervised classification, statistical processes (i.e., based on a priori knowledge of probability distribution functions) or distribution-free processes can be used to extract class descriptors. Unsupervised classification relies on clustering algorithms to automatically segment the training data into prototype classes. In either case, the motivating criteria for constructing training classes are that they are:

 

  • Independent, i.e., a change in the description of one training class should not change the value of another;
  • Discriminatory, i.e., different image features should have significantly different descriptions; and
  • Reliable, i.e., all image features within a training group should share the common definitive descriptions of that group.

A convenient way of building a parametric description of this sort is via a feature vector (v1, v2, ..., vn), where n is the number of attributes that describe each image feature and training class. This representation allows us to consider each image feature as occupying a point, and each training class as occupying a sub-space (i.e., a representative point surrounded by some spread, or deviation), within the n-dimensional classification space. Viewed as such, the classification problem is that of determining to which sub-space class each feature vector belongs.
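
As a rough illustration of this idea (not part of the original text), the sketch below represents each image feature as a point in an n-dimensional feature space and assigns it to the training class whose prototype (class mean) lies closest; the class names and feature values are invented for the example.

```python
import numpy as np

# Hypothetical training classes: each class is summarized by the mean ("prototype")
# of the feature vectors of its training samples in n-dimensional feature space.
training_data = {
    "water":  np.array([[0.10, 0.80, 0.20], [0.15, 0.75, 0.25]]),
    "forest": np.array([[0.40, 0.30, 0.90], [0.45, 0.35, 0.85]]),
    "urban":  np.array([[0.70, 0.60, 0.40], [0.75, 0.55, 0.45]]),
}
prototypes = {name: samples.mean(axis=0) for name, samples in training_data.items()}

def classify(feature_vector):
    """Assign the feature vector to the class whose prototype is nearest."""
    return min(prototypes, key=lambda name: np.linalg.norm(feature_vector - prototypes[name]))

print(classify(np.array([0.12, 0.78, 0.22])))  # expected: "water"
```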

Image Classification Techniques

Two major categories of image classification techniques include unsupervised (calculated by the software) and supervised (human-guided) classification:

  1. Unsupervised Classification

Unsupervised classification is where the outcomes (groupings of pixels with common characteristics) are based on the software analysis of an image without the user providing sample classes. The computer uses clustering techniques to determine which pixels are related and groups them into classes. The user can specify which algorithm the software will use and the desired number of output classes but otherwise does not aid in the classification process. However, the user must have knowledge of the area being classified, because the groupings of pixels with common characteristics produced by the computer have to be related to actual features on the ground (such as wetlands, developed areas, coniferous forests, etc.).
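
A minimal sketch of this idea follows, assuming scikit-learn is available and using k-means as one possible clustering algorithm; the image data, band count, and number of classes are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available; any clustering routine would do

# A toy 3-band image: rows x cols x bands of reflectance values.
image = np.random.rand(100, 100, 3)
pixels = image.reshape(-1, 3)                  # one feature vector per pixel

n_classes = 5                                  # the only input the analyst provides
labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)

classified = labels.reshape(image.shape[:2])   # thematic raster of cluster IDs
print(classified.shape)
# The analyst must still relate cluster IDs to ground features (water, forest, ...).
```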

  2. Supervised Classification

Supervised classification is based on the idea that a user can select sample pixels in an image that are representative of specific classes and then direct the image processing software to use these training sites as references for the classification of all other pixels in the image. Training sites (also known as training sets or input classes) are selected based on the knowledge of the user. The user also sets the bounds for how similar other pixels must be to group them together. These bounds are often set based on the spectral characteristics of the training area, plus or minus a certain increment (often based on “brightness” or strength of reflection in specific spectral bands). The user also designates the number of classes that the image is classified into. Many analysts use a combination of supervised and unsupervised classification processes to develop the final output analysis and classified maps.
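
The sketch below is one hedged interpretation of this “mean plus or minus an increment” rule, sometimes described as a parallelepiped-style classifier; the training-site pixel values, number of bands, and the factor k are all assumptions made for the example.

```python
import numpy as np

def train_bounds(training_pixels, k=2.0):
    """Per-band bounds: mean +/- k * standard deviation of the training-site pixels."""
    mean, std = training_pixels.mean(axis=0), training_pixels.std(axis=0)
    return mean - k * std, mean + k * std

def classify_pixel(pixel, class_bounds, unclassified=-1):
    """Assign the first class whose per-band bounds all contain the pixel value."""
    for class_id, (low, high) in class_bounds.items():
        if np.all((pixel >= low) & (pixel <= high)):
            return class_id
    return unclassified

# Hypothetical training sites drawn by the analyst (pixel values are invented).
class_bounds = {
    0: train_bounds(np.random.normal(0.2, 0.02, size=(50, 3))),  # e.g. water
    1: train_bounds(np.random.normal(0.6, 0.05, size=(50, 3))),  # e.g. vegetation
}
print(classify_pixel(np.array([0.21, 0.19, 0.20]), class_bounds))  # most likely 0
```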

References

[1] “Image Classification”, Tutorial, available online at:  http://www.microimages.com/documentation/Tutorials/classify.pdf

[2] What is image classification? Available online at: http://desktop.arcgis.com/en/arcmap/latest/extensions/spatial-analyst/image-classification/what-is-image-classification-.htm

[3] S. V. S. Prasad , T. Satya Savithri and Iyyanki V. Murali Krishna, “Techniques in Image Classification; A Survey”, Global Journal of Researches in Engineering: F Electrical and Electronics Engineering, Volume 15, Issue 6, Version 1.0, Year 2015.

What is Machine Vision and How It Works

Vision plays a fundamental role for living beings by allowing them to interact with the environment in an effective and efficient way. Where human vision is best for qualitative interpretation of a complex, unstructured scene, machine vision excels at quantitative measurement of a structured scene because of its speed, accuracy, and repeatability. For example, on a production line, a system can inspect hundreds, or even thousands, of parts per minute. A machine vision system built around the right camera resolution and optics can easily inspect object details too small to be seen by the human eye.

What is it?

Machine vision (also called “industrial vision” or “vision systems”) is the use of digital sensors (wrapped in cameras with specialized optics) that are connected to processing hardware and software algorithms to visually inspect pretty much anything. It is a truly multi-disciplinary field, encompassing computer science, optics, mechanical engineering, and industrial automation. While historically the tools were focused on manufacturing, that’s quickly changing, spreading into medical applications, research, and even movie making.

This technology replaces or complements manual inspections and measurements with digital cameras and image processing. It is used in a variety of industries to automate production, increase production speed and yield, and improve product quality.

Machine vision in operation can be described by a four-step flow:

  • Imaging: Take an image.
  • Processing and analysis: Analyze the image to obtain a result.
  • Communication: Send the result to the system in control of the process.
  • Action: Take action depending on the vision system’s result.

Figure 1: Machine Vision Operations

Machine-Vision System Components

It allows you to obtain useful information about physical objects by automating the analysis of digital images of those objects. This is one of the most challenging applications of computer technology. There are two general reasons for this: almost all vision tasks require at least some judgment on the part of the machine, and the amount of time allotted for completing the task usually is severely limited. While computers are astonishingly good at elaborate, high-speed calculation, they still are very primitive when it comes to judgment.

Such systems have five key components (a minimal capture-and-analysis sketch follows the list below).

  • Illumination: Just as a professional photographer uses lighting to control the appearance of subjects, the user must consider the color, direction, and shape of the illumination. For objects moving at high speed, a strobe can often be used to freeze the action.
  • Camera: For many years, the standard machine-vision camera has been monochromatic. It outputs many shades of gray but not color, provides about 640 × 480 pixels, produces 30 frames per second, uses CCD solid-state sensor technology, and generates an analog video signal defined by television standards.
  • Frame Grabber: A frame grabber interfaces the camera to the computer that is used to analyze the images. One common form for a frame grabber is a plug-in card for a PC.
  • Computer: Often an ordinary PC is used, but sometimes a device designed specifically for image analysis is preferred. The computer uses the frame grabber to capture images and specialized software to analyze them and is responsible for communicating results to automation equipment and interfacing with human operators for setup, monitoring, and control.
  • Software: The key to successful performance is the software that runs on the computer and analyzes the images. Software is the only component that cannot be considered a commodity and often is a vendor’s most important intellectual property.
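
As a hedged illustration of how these components and the four-step flow might fit together in code, the sketch below uses OpenCV (cv2) as a stand-in for the frame grabber driver and the analysis software; the threshold and blob-size values are invented, and the “action” step is only indicated in a comment.

```python
import cv2  # assumed available; stands in for the frame grabber driver and vision library

def inspect_frame(frame, min_area=500):
    """Toy inspection: threshold the image and count bright blobs above a size limit."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > min_area)

camera = cv2.VideoCapture(0)          # "camera + frame grabber" in one object
ok, frame = camera.read()             # Imaging: take an image
if ok:
    defects = inspect_frame(frame)    # Processing and analysis
    print(f"blobs found: {defects}")  # Communication: report the result
    # Action: e.g. signal a reject mechanism if defects > 0 (not implemented here)
camera.release()
```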

Machine Vision Goals

The goals can be divided into the following from a technical point of view:

  • Higher quality: Inspection, measurement, gauging, and assembly verification
  • Increased productivity: Repetitive tasks formerly done manually are now done by the system
  • Production flexibility: Measurement and gauging / Robot guidance / Prior operation verification
  • Less machine downtime and reduced setup time: Changeovers programmed in advance
  • More complete information and tighter process control: Manual tasks can now provide computer data feedback
  • Lower capital equipment costs: Adding vision to a machine improves its performance and avoids obsolescence
  • Lower production costs: One vision system vs. many people / Detection of flaws early in the process
  • Scrap rate reduction: Inspection, measurement, and gauging
  • Inventory control: Optical character recognition and identification
  • Reduced floor space: Vision system vs. operator

References

[1] “Machine Vision Introduction”, Version 2.2, December 2006, SICK IVP, available online at: www.sickivp.com

[2] Bill Silver, Cognex, “An Introduction to Machine Vision a Tutorial”, available online at: https://www.evaluationengineering.com/an-introduction-to-machine-vision-a-tutorial

[3] “Introduction to Machine Vision: A guide to automating process & quality improvements”, available online at: www.cognex.com

[4] David Phillips, “Machine vision: a survey”, Western CEDAR, 2008.

What is Template Matching in Object Recognition

Template matching is one of the areas of profound interest in recent times. It has turned out to be a revolution in computer vision. Template Matching is a high-level machine vision technique that identifies the parts on an image that match a predefined template. Advanced template matching algorithms allow us to find occurrences of the template regardless of their orientation and local brightness. Template Matching techniques are flexible and relatively straightforward to use, which makes them one of the most popular methods of object localization. Their applicability is limited mostly by the available computational power, as identification of big and complex templates can be time-consuming.

What is it?

It is a technique used to classify an object by comparing portions of an image with another image. Template matching is one of the important techniques in digital image processing. Templates are usually employed to recognize printed characters, numbers, and other simple objects. It can be used for the detection of edges in figures, in manufacturing as a part of quality control, and as a means to navigate a mobile robot.

Figure 1: Example of Template Matching

Figure 1 depicts an example of template matching. It is a strategy for discovering zones of an image that match (are identical or very similar to) a template image (patch). We require two crucial components:

  • Source image (I): the image within which we are hoping to find a match to the template image.
  • Template image (T): the patch image that is compared against regions of the source image; the objective is to find the best-matching region. The matching technique does not only rely on a similarity measure; it can also compute the dissimilarity between images by means of the Mean Squared Error (MSE) metric (a minimal MSE-based sketch follows this list).
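
A minimal sketch of MSE-based matching follows, assuming grayscale NumPy arrays; the source image and template are generated artificially, and a production system would use an optimized routine rather than this brute-force loop.

```python
import numpy as np

def match_template_mse(source, template):
    """Slide the template over the source image and return the MSE at every position.

    The best-matching region is the one with the lowest mean squared error.
    """
    H, W = source.shape
    h, w = template.shape
    scores = np.full((H - h + 1, W - w + 1), np.inf)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = source[y:y + h, x:x + w]
            scores[y, x] = np.mean((patch - template) ** 2)
    return scores

# Toy grayscale data: the template is an exact crop of the source.
source = np.random.rand(64, 64)
template = source[20:30, 40:50]
scores = match_template_mse(source, template)
best_y, best_x = np.unravel_index(np.argmin(scores), scores.shape)
print(best_y, best_x)  # expected: (20, 40)
```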

Figure 2 Real-world application

Template Matching Approaches

General categorizations of template or image matching approaches are the feature-based approach and the template (or area) based approach:

  • Feature-based approach: the feature-based method is appropriate when both the reference and template images have strong correspondences in terms of features and control points. The features comprise points, curves, or a surface model which needs to be matched. At this point, the goal is to establish the pairwise association between the reference and template images using their spatial relations or descriptors of components.
  • Template-based approach: a template-based matching approach may require sampling a huge number of points. It is possible to cut down the number of sampling points by reducing the resolution of the search and template images by the same factor and performing the operation on the resulting downsized images (multiresolution, or pyramid, image processing), by providing a search window of data points inside the search image so that the template does not have to be compared against every possible data point, or by a combination of the two.
  • Area-based approach: area-based methods are typically referred to as correlation-like methods or template-matching methods, and blend feature matching, feature detection, motion tracking, occlusion handling, etc. Area-based methods merge the matching part with the feature detection step. These techniques handle the images without attempting to detect salient objects; windows of predefined size are used for the estimation of correspondence.
  • Motion tracking and occlusion handling: for templates that cannot provide an instantaneous match, eigenspaces may be used, which describe how the matching image appears under various conditions such as differing poses, lighting, or expressions. For instance, if one were searching for a specimen, the eigenspace could include templates of the specimen in different positions relative to the camera, with different lighting conditions or expressions. It is also possible for the matching figure to be occluded by another object, or for issues involved in movement to become ambiguous. One possible answer is to divide the template into multiple sub-images and carry out matching on each of them.

Limitation

The following are the limitations of template matching:

  • Templates are not rotation or scale invariant; slight changes in size or orientation can cause problems.
  • It often uses several templates to represent one object, e.g., templates of different sizes and rotations of the same template.
  • Template matching is a very expensive operation, particularly when the entire image is searched or several templates are used; detecting large patterns in an image is very time-consuming and requires high computational power, although the computation is easily parallelized.

References

[1] Paridhi Swaroop and Neelam Sharma, “An Overview of Various Template Matching Methodologies in Image Processing”, International Journal of Computer Applications (IJCA), Volume 153 – No 10, November 2016

[2] T. Mahalakshmi, R. Muthaiah and P. Swaminathan, “Review Article: An Overview of Template Matching Technique in Image Processing”, Research Journal of Applied Sciences, Engineering, and Technology, Volume 4, Number 24, pp. 5469-5473, 2012.

[3] Nazil Perveen, Darshan Kumar and Ishan Bhardwaj, “An Overview on Template Matching Methodologies and its Applications”, International Journal of Research in Computer and Communication Technology, Volume 2, Issue 10, October- 2013.

An Overview of Image Denoising

Any form of signal processing having an image as input and output (or a set of characteristics or parameters of an image) is called image processing. In image processing, we work in two domains: the spatial domain and the frequency domain. The spatial domain refers to the image plane itself, and image processing methods in this category are based on the direct manipulation of pixels in an image. The frequency domain, in contrast, involves the analysis of mathematical functions or signals with respect to frequency rather than time.

Overview

The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. Image denoising refers to the recovery of a digital image that has been contaminated by noise. The presence of noise in images is unavoidable; it may be introduced during the image formation, recording, or transmission phase. Further processing of the image often requires that the noise be removed or at least reduced. Even a small amount of noise is harmful when high accuracy is required. The noise can be of different types; the most common is additive white Gaussian noise (AWGN).

An image denoising procedure takes a noisy image as input and outputs an image in which the noise has been reduced. Numerous and diverse approaches exist. Some selectively smooth parts of a noisy image. Other methods rely on the careful shrinkage of wavelet coefficients. A conceptually similar approach is to denoise image patches by approximating noisy patches with a sparse linear combination of elements of a learned dictionary; such a dictionary is sometimes learned from a noise-free dataset. Other methods learn a global image prior from a noise-free dataset. An image is often corrupted by noise during its acquisition and transmission, and image denoising is used to remove the additive noise while retaining as much as possible of the important signal features. Generally, data sets collected by image sensors are contaminated by noise: imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all corrupt the data of interest. Thus noise reduction is an important technology in image analysis and the first step to be taken before images are analyzed. Therefore, image denoising techniques are necessary to remove this type of corruption from digital images.

Image Denoising Techniques

Various image denoising techniques have been developed so far and their application depends upon the type of image and noise present in the image. Image denoising is classified into two categories:

Spatial domain filtering: Employing spatial filters is the traditional way to remove noise from digital images. Spatial domain filtering is further classified into linear filters and nonlinear filters.

Linear Filters: The mean filter is the optimal linear filter for Gaussian noise in the sense of mean square error. Linear filters tend to blur sharp edges and destroy lines and other fine details of the image. This category includes the mean filter and the Wiener filter.

Transform domain filtering: Transform domain filtering can be subdivided into data-adaptive and non-adaptive filters. The transform domain mainly includes wavelet-based filtering techniques.

Wavelet Transform: Wavelet transform is a mathematical function that analyzes the data according to scale or resolution. Noise reduction using wavelets is performed by first decomposing the noisy image into wavelet coefficients i.e. approximation and detail coefficients. Then, by selecting a proper Thresholding value the detail coefficients are modified based on the Thresholding function. Finally, the reconstructed image is obtained by applying the inverse wavelet transform on modified coefficients. The basic procedure for all Thresholding methods is:

  • Calculate the DWT of the image.
  • Threshold the wavelet components.
  • Compute IDWT to obtain a denoised estimate.

There are two thresholding functions frequently used: the hard threshold and the soft threshold. The hard thresholding function keeps the input if it is larger than the threshold; otherwise, it is set to zero. The soft thresholding function takes the argument and shrinks it toward zero by the threshold. The soft thresholding rule is often chosen over hard thresholding because it yields more visually pleasant images. A small threshold keeps most coefficients, but the result may still be noisy; a large threshold, alternatively, produces a signal with a large number of zero coefficients, which leads to a smooth signal but may remove detail. So much attention must be paid to selecting the optimal threshold.
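
The sketch below is one possible rendering of this procedure using the PyWavelets (pywt) package, which is an assumption rather than something named in the text; the wavelet, decomposition level, and threshold value are likewise arbitrary choices for illustration, and soft thresholding is applied to the detail coefficients.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def denoise_wavelet(noisy, wavelet="db2", level=2, threshold=0.1):
    """Decompose, soft-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)    # 1) DWT of the image
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [
        tuple(pywt.threshold(d, threshold, mode="soft") for d in level_details)
        for level_details in details
    ]                                                       # 2) threshold the wavelet components
    return pywt.waverec2([approx] + shrunk, wavelet)        # 3) IDWT -> denoised estimate

# Toy example: a random "image" with added Gaussian noise; in practice the
# threshold would be chosen from an estimate of the noise level.
noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
denoised = denoise_wavelet(noisy)
print(denoised.shape)
```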

Figure 1 Example of Image Denoising

References

[1] Burger, Harold C., Christian J. Schuler, and Stefan Harmeling, “Image denoising: Can plain neural networks compete with BM3D?” In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2392-2399.

[2] Alisha P B and Gnana Sheela K, “Image Denoising Techniques-An Overview”, IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), Volume 11, Issue 1, Ver. I (Jan. – Feb .2016), PP 78-84

[3] C. Kervrann and J. Boulanger, “Patch-based Image Denoising”, available online at: https://www.irisa.fr/vista/Themes/Demos/Debruitage/ImageDenoising.html

What is Data Compression

Compression is used just about everywhere. Data compression involves the development of a compact representation of information. Most representations of information contain large amounts of redundancy. Redundancy can exist in various forms. Internet users who download or upload files from/to the web, or use email to send or receive attachments will most likely have encountered files in compressed format.

General Overview

With the extended use of computers in various disciplines, the number of data processing applications is also increasing, which requires the processing and storing of large volumes of data. Data compression is primarily a branch of information theory, which deals with techniques related to minimizing the amount of data to be transmitted and stored. It is often called coding, where coding is a general term encompassing any special representation of data that satisfies a given need. Information theory is the study of efficient coding and its consequences, such as the achievable speed of transmission.

What is it?

Today, with the growing demands of information storage and data transfer, data compression is becoming increasingly important. Compression is the process of encoding data more efficiently to reduce file size. One type of compression available is referred to as lossless compression, meaning the compressed file will be restored exactly to its original state with no loss of data during the decompression process; this is essential when a file would be corrupted and unusable should any data be lost. Data compression is the art of reducing the number of bits needed to store or transmit data, and it is one of the enabling technologies for multimedia applications. It would not be practical to put images, audio, and video on websites without compression algorithms, and mobile phones would not be able to provide clear communication without data compression. With compression techniques, we can reduce the consumption of resources such as hard disk space or transmission bandwidth.

Data Compression Principles

Below, data compression principles are listed:

  • It is the substitution of frequently occurring data items, or symbols, with short codes that require fewer bits of storage than the original symbol.
  • Saves space, but requires time to save and extract.
  • Success varies with the type of data.
  • Works best on data with low spatial variability and limited possible values.
  • Works poorly with high spatial variability data or continuous surfaces.
  • Exploits inherent redundancy and irrelevancy by transforming a data file into a smaller one

Figure 1: Data Compression Process

Data Compression Techniques

Data compression is the function of the presentation layer in the OSI reference model. Compression is often used to maximize the use of bandwidth across a network or to optimize disk space when saving data.

There are two general types of compression techniques:

Figure 2: Classification of Compression

Lossless Compression

Lossless compression compresses the data in such a way that when data is decompressed it is exactly the same as it was before compression i.e. there is no loss of data. Lossless compression is used to compress file data such as executable code, text files, and numeric data because programs that process such file data cannot tolerate mistakes in the data. Lossless compression will typically not compress files as much as lossy compression techniques and may take more processing power to accomplish the compression.

Lossless data compression is compression without any loss of data quality. The decompressed file is an exact replica of the original one. Lossless compression is used when it is important that the original and the decompressed data be identical. It is done by re-writing the data in a more space-efficient way, removing all kinds of repetitions (compression ratio 2:1). Some image file formats, notably PNG, use only lossless compression, while those like TIFF may use either lossless or lossy methods.

Lossless Compression Algorithms

The various algorithms used to implement lossless data compression are:

Run Length Encoding

  • This method replaces the consecutive occurrences of a given symbol with only one copy of the symbol along with a count of how many times that symbol occurs. Hence the name ‘run length’.
  • For example, the string AAABBCDDDD would be encoded as 3A2B1C4D (see the sketch after this list).
  • A real-life example where run-length encoding is quite effective is the fax machine. Most faxes are white sheets with occasional black text, so a run-length encoding scheme can take each line and transmit a code for white and then the number of pixels, then the code for black and the number of pixels, and so on.
  • This method of compression must be used carefully. If there is not a lot of repetition in the data then it is possible the run length encoding scheme would actually increase the size of a file.
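
A toy encoder and decoder matching the example above might look as follows; this sketch assumes single characters as symbols and does not follow any particular standard RLE format.

```python
from itertools import groupby

def rle_encode(text):
    """Replace each run of a repeated symbol with its count followed by the symbol."""
    return "".join(f"{len(list(group))}{symbol}" for symbol, group in groupby(text))

def rle_decode(encoded):
    """Invert the encoding: read the count digits, then repeat the following symbol."""
    out, i = [], 0
    while i < len(encoded):
        count = ""
        while encoded[i].isdigit():
            count += encoded[i]
            i += 1
        out.append(encoded[i] * int(count))
        i += 1
    return "".join(out)

print(rle_encode("AAABBCDDDD"))   # 3A2B1C4D
print(rle_decode("3A2B1C4D"))     # AAABBCDDDD
# Note: on data with little repetition (e.g. "ABCD" -> "1A1B1C1D") the output grows.
```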

Differential Pulse Code Modulation

  • In this method first, a reference symbol is placed. Then for each symbol in the data, we place the difference between that symbol and the reference symbol used.
  • For example, using symbol A as the reference symbol, the string AAABBCDDDD would be encoded as A0001123333, since A is the same as the reference symbol, B has a difference of 1 from the reference symbol, and so on (a toy sketch follows this list).
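
A literal rendering of this fixed-reference variant might look as follows; note that practical DPCM usually encodes the difference from the previous (or predicted) sample, and this toy version only works when every difference fits in a single digit.

```python
def dpcm_encode(text):
    """Emit the reference symbol, then each symbol's offset from that reference."""
    reference = text[0]
    # Works only for small, single-digit differences, as in the worked example above.
    return reference + "".join(str(ord(symbol) - ord(reference)) for symbol in text)

print(dpcm_encode("AAABBCDDDD"))  # A0001123333
```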

Dictionary Based Encoding

  • One of the best-known dictionary-based encoding algorithms is the Lempel-Ziv (LZ) compression algorithm.
  • This method is also known as a substitution coder.
  • In this method, a dictionary (table) of variable-length strings (common phrases) is built.
  • This dictionary contains almost every string that is expected to occur in data.
  • When any of these strings occur in the data, then they are replaced with the corresponding index to the dictionary.
  • In this method, instead of working with individual characters in text data, we treat each word as a string and output the index in the dictionary for that word.
  • For example, let us say that the word “compression” has an index of 4978 in one particular dictionary; it is the 4978th word in /usr/share/dict/words. To compress a body of text, each time the string “compression” appears, it would be replaced by 4978 (a toy sketch follows this list).
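
A toy version of this word-to-index substitution might look as follows; the dictionary contents and indices are invented for the example, unlike the 4978 figure quoted above.

```python
# Hypothetical dictionary: in practice this could be a large word list such as a
# system dictionary file; the words and indices here are made up.
dictionary = ["the", "quick", "brown", "fox", "compression", "of", "data"]
index = {word: i for i, word in enumerate(dictionary)}

def dict_encode(text):
    """Replace every known word with its dictionary index; keep unknown words as-is."""
    return [index.get(word, word) for word in text.split()]

def dict_decode(tokens):
    """Map indices back to dictionary words; pass unknown words through unchanged."""
    return " ".join(dictionary[t] if isinstance(t, int) else t for t in tokens)

encoded = dict_encode("compression of the data")
print(encoded)               # [4, 5, 0, 6]
print(dict_decode(encoded))  # compression of the data
```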

Lossy Compression

A lossy compression method is one where compressing data and then decompressing it retrieves data that may well be different from the original, but is “close enough” to be useful in some way. The algorithm eliminates irrelevant information as well and permits only an approximate reconstruction of the original file. Lossy compression is also done by re-writing the data in a more space-efficient way, but more than that: less important details of the image are manipulated or even removed so that higher compression rates are achieved. Lossy compression is dangerously attractive because it can provide compression ratios of 100:1 to 200:1, depending on the type of information being compressed. But the cost is loss of data.

The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any known lossless method, while still meeting the requirements of the application.

Examples of Lossy Methods are:

  • PCM
  • JPEG
  • MPEG

References

[1] “Compression Concepts”, available online at: http://www.gitta.info/DataCompress/en/html/CompIntro_learningObject2.html

[2] Dinesh Thakur, “Data Compression-What is the Data Compression? Explain Lossless Compression and Lossy Compression”, available online at: http://ecomputernotes.com/computer-graphics/basic-of-computer-graphics/data-compression

[3] Gaurav Sethi, Sweta Shaw, Vinutha K, and Chandrani Chakravorty, “Data Compression Techniques”, (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 5 (4), 2014, pp. 5584-5586

[4] Hosseini, Mohammad, “A survey of data compression algorithms and their applications.” Network Systems Laboratory, School of Computing Science, Simon Fraser University, BC, Canada (2012).

Fingerprint Recognition Applications, Advantages and Limitations

Fingerprint identification is one of the most well-known and publicized biometrics. Because of their uniqueness and consistency over time, fingerprints have been used for identification for over a century, more recently becoming automated (i.e. a biometric) due to advancements in computing capabilities. Fingerprint identification is popular because of the inherent ease in acquisition, the numerous sources (ten fingers) available for collection, and their established use and collections by law enforcement and immigration.

What is it?

Fingerprint recognition is one of the most popular and accurate biometric technologies. Fingerprint recognition (identification) is one of the oldest methods of identification with biometric traits. A large number of archaeological artifacts and historical items show the signs of human fingerprints on stones. Ancient people were aware of the individuality of fingerprints, but they were not aware of scientific methods for establishing that individuality.

Fingerprints have remarkable permanency and uniqueness throughout time. Fingerprints offer more secure and reliable personal identification than passwords, ID cards, or keys can provide. For example, computers and mobile phones equipped with fingerprint sensing devices are being implemented to replace ordinary password protection methods with fingerprint-based protection.

Finger-scan technology is the most widely deployed biometric technology, with a number of different vendors offering a wide range of solutions. Among the most remarkable strengths of fingerprint recognition, we can mention the following:

  • Its maturity provides a high level of recognition accuracy.
  • The growing market of low-cost small-size acquisition devices allows its use in a broad range of applications, e.g., electronic commerce, physical access, PC logon, etc.
  • The use of easy-to-use, ergonomic devices does not require complex user-system interaction.

On the other hand, a number of weaknesses may influence the effectiveness of fingerprint recognition in certain cases:

  • Its association with forensic or criminal applications.

State of the Art in Fingerprint Recognition

This section provides a basic introduction to fingerprint recognition systems and their main parts, including a brief description of the most widely used techniques and algorithms.

Figure 1: Main Modules of a Fingerprint Verification System

The main modules of a fingerprint verification system are: a) fingerprint sensing, in which the fingerprint of an individual is acquired by a fingerprint scanner to produce a raw digital representation; b) preprocessing, in which the input fingerprint is enhanced and adapted to simplify the task of feature extraction; c) feature extraction, in which the fingerprint is further processed to generate discriminative properties, also called feature vectors; and d) matching, in which the feature vector of the input fingerprint is compared against one or more existing templates. The templates of approved users of the biometric system, also called clients, are usually stored in a database. Clients can claim an identity and their fingerprints can be checked against stored fingerprints.
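
A hedged skeleton of these four modules is sketched below; the enhancement, feature, and matching steps are deliberately simplistic placeholders (contrast normalization, a histogram, and cosine similarity), not the techniques an actual fingerprint system would use.

```python
import numpy as np

def sense_fingerprint(scanner):
    """a) Sensing: acquire a raw digital representation from the scanner (placeholder)."""
    return scanner.capture()

def preprocess(raw_image):
    """b) Preprocessing: enhance the image, here a toy contrast normalization."""
    image = raw_image.astype(float)
    return (image - image.min()) / (image.max() - image.min() + 1e-9)

def extract_features(image):
    """c) Feature extraction: produce a discriminative feature vector (toy histogram)."""
    return np.histogram(image, bins=32, range=(0.0, 1.0))[0] / image.size

def matches(features, template, threshold=0.9):
    """d) Matching: compare against a stored client template via a similarity score."""
    score = np.dot(features, template) / (
        np.linalg.norm(features) * np.linalg.norm(template) + 1e-9
    )
    return score >= threshold
```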

Strengths and Weaknesses of Fingerprint Recognition

Strengths

  • Proven technology capable of high levels of accuracy
  • Range of deployment environments
  • Ergonomic easy-to-use devices
  • Ability to enroll multiple fingers

Weaknesses

  • Inability to enroll some users
  • Performance deterioration over time
  • Association with forensic applications
  • Need to deploy specialized devices

Applications of Fingerprint Recognition

  • Fingerprint recognition is widely used in various applications ranging from law enforcement and international border control to personal laptop access. Almost all law enforcement agencies worldwide routinely collect fingerprints of apprehended criminals to track their criminal history.
  • To enhance border security in the United States, the US-VISIT program acquires fingerprints of visa applicants to identify high-profile criminals on a watch list and detect possible visa fraud. India’s UIDAI project was initiated to issue a unique 12-digit identification number to each resident. Given the large population in India (approximately 1.2 billion), an identification number for an individual is associated with his biometric information (i.e., ten fingerprints and two irises) to ensure that each resident has only one identification number.
  • Fingerprint recognition systems are now pervasive in our daily life. Disney Parks, for example, captures the fingerprints of visitors when they initially enter the park to link the ticket to the ticket holder’s fingerprint.
  • Fingerprint verification is performed whenever the same ticket is presented for reuse to prevent fraudulent use of the ticket (e.g., sharing of a ticket by multiple individuals).
  • Many automated teller machines (ATMs) in Brazil use fingerprint recognition as a replacement for personal identification numbers (PINs).
  • Also, several laptop computer models are equipped with fingerprint sensors and authenticate users based on their fingerprints.

References

[1] “CHAPTER – 2: Introduction to Fingerprint and Face Recognition”, available online at: http://shodhganga.inflibnet.ac.in/bitstream/10603/130555/8/08_chapter%202.pdf

[2] Fierrez, Hartwig Fronthaler, Klaus Kollreider, and Javier Ortega-Garcia, “Fingerprint Recognition”, pp. 51-90.

[3] Om Sri, Satyasai, and Tatsat Naik, “Study of Fingerprint Recognition System” Btech dissertation, 2011.

[4] Le Hoang Thai and Ha Nhat Tam, “Fingerprint recognition using standardized fingerprint model”, IJCSI International Journal of Computer Science Issues, Volume 7, Issue 3, No 7, May 2010

[5] Soweon Yoon, “Fingerprint recognition: models and applications”, Michigan State University, 2014.

What is Computer Vision and its Applications

Computer vision is the science and technology of machines that see, and seeing, in this case, means that the machine is able to extract from an image some information that is necessary for solving some task. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

General

The human ability to interact with other people is based on the ability to recognize one another. This innate ability to effortlessly identify and recognize objects, even if distorted or modified, has induced research on how the human brain processes these images. This skill is quite reliable, despite changes due to viewing conditions, emotional expressions, aging, added artifacts, or even circumstances that permit seeing only a fraction of the face. Furthermore, humans are able to recognize thousands of individuals during their lifetime. Understanding the human mechanism, in addition to cognitive aspects, would help to build a system for the automatic identification of faces by a machine. However, face recognition is still an area of active research since a completely successful approach or model has not yet been proposed to solve the face recognition problem. Automated face recognition is a very popular field nowadays. Face recognition can be used in a multitude of commercial and law enforcement applications. For example, a security system could grab an image of a person and determine the identity of the individual by matching the image with the one stored in the system database.

Typical tasks of computer vision are:

  • Recognition
  • Motion analysis
  • Scene reconstruction
  • Image restoration

The Difficulty with Computer Vision

At present, a computing machine is not able to actually understand what it sees. This level of comprehension is still a faraway goal for computers, as understanding an image involves far more than collecting some pixels. The human capability to identify an object effortlessly is truly incredible.

Computers “see” just a grid of numbers from the camera or from a disk, and that is how far they can go. Those numbers carry a rather large noise component, so the usable information is quite small in the end. Many computer vision problems are difficult to specify, especially because information is lost in the transformation from the 3D world to a 2D image; given a two-dimensional view of a 3D world, there is no unique solution for reconstructing the 3D scene. Noise in computer vision is typically dealt with using statistical methods, although other techniques account for noise or distortions by building explicit models learned directly from the available data.
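
A tiny illustration of this point: to the machine, an image is nothing more than an array of numbers (here a made-up 3 x 4 grayscale patch).

```python
import numpy as np

# To a computer, an 8-bit grayscale "image" is only a grid of numbers in [0, 255].
image = np.array([
    [ 12,  15, 200, 210],
    [ 10,  18, 205, 215],
    [  9,  14, 198, 220],
], dtype=np.uint8)

print(image.shape)   # (3, 4): 3 rows x 4 columns of pixel intensities
print(image[0, 2])   # 200: the raw number the machine actually "sees"
```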

Figure 1: Fields of Computer Vision

The image seen in Figure 1 displays various fields of computer vision which include pattern recognition and image processing. These fields can be considered as abstractly related because usually, advances in one field could potentially lead to advances in other fields as well. Developing a successful face recognition system requires cumulative knowledge from all of these fields.

Computer Vision: Applications

The good news is that computer vision is being used today in a wide variety of real-world applications, which include:

  • Optical character recognition (OCR): reading handwritten postal codes on letters and automatic number plate recognition (ANPR);
  • Machine inspection: rapid parts inspection for quality assurance using stereo vision with specialized illumination to measure tolerances on aircraft wings or auto body parts or looking for defects in steel castings using X-ray vision;
  • Retail: object recognition for automated checkout lanes;
  • 3D model building (photogrammetric): fully automated construction of 3D models from aerial photographs used in systems such as Bing Maps;
  • Medical imaging: registering pre-operative and intra-operative imagery or performing long-term studies of people’s brain morphology as they age;
  • Automotive safety: detecting unexpected obstacles such as pedestrians on the street, under conditions where active vision techniques such as radar or lidar do not work well.
  • Match move: merging computer-generated imagery (CGI) with live-action footage by tracking feature points in the source video to estimate the 3D camera motion and shape of the environment. Such techniques are widely used in film production; they also require the use of precise matting to insert new elements between foreground and background elements.
  • Motion capture (MOCAP): using retro-reflective markers viewed from multiple cameras or other vision-based techniques to capture actors for computer animation;
  • Surveillance: monitoring for intruders, analyzing highway traffic, and monitoring pools for drowning victims;
  • Fingerprint recognition and biometrics: for automatic access authentication as well as forensic applications.

References

[1] Bradski, G. and Kaehler, A. 2008, Learning OpenCV: Computer Vision with the OpenCV Library. Sebastopol: O’Reilly.

[2] Bambach, S, A survey on recent advances of computer vision algorithms for egocentric video. arXiv preprint arXiv:1501.02825, 2015.

[3] Chuang, Y.-Y., Agarwala, A., Curless, B., Salesin, D. H., and Szeliski, R. (2002), Video matting of complex scenes, ACM Transactions on Graphics (Proc. SIGGRAPH 2002), 21(3):243–248.

[4] Richard Szeliski, “Computer Vision: Algorithms and Applications”, September 3, 2010 draft 2010 Springer.

What is Biometrics Authentication, Types and Applications

One of our highest priorities in the world of information security is confirmation that a person accessing sensitive, confidential, or classified information is authorized to do so. Such access is usually accomplished by a person’s proving their identity by the use of some means or method of authentication. Biometric authentication is a field of technology that has been and is being used in the identification of individuals based on some physical attribute. Biometric authentication is used for automatic personal recognition based on biological traits—fingerprint, iris, face, palm print, hand geometry, vascular pattern, voice—or behavioral characteristics such as gait, signature, and typing pattern. Fingerprinting is the oldest of these methods and has been utilized for over a century by law enforcement officials who use these distinctive characteristics to keep track of criminals.

Basic Overview

In this computer-driven era, identity theft and the loss or disclosure of data and related intellectual property are growing problems. We each have multiple accounts and use multiple passwords on an ever-increasing number of computers and Web sites. Maintaining and managing access while protecting both the user’s identity and the computer’s data and systems has become increasingly difficult. Central to all security is the concept of authentication – verifying that the user is who he claims to be.

Biometric authentication seems to be everywhere these days. Consumer preference has turned the technology into a must-have for the modern smartphone or laptop. Biometric authentication is a security process that relies on the unique biological characteristics of an individual to verify that he is who he says he is. Biometric authentication systems compare a captured biometric sample to stored, confirmed authentic data in a database. If the two samples of biometric data match, authentication is confirmed. Typically, biometric authentication is used to manage access to physical and digital resources such as buildings, rooms, and computing devices.
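
As a hedged illustration of this compare-against-stored-data idea, the sketch below abstracts each biometric sample into a feature vector and accepts a claim when the captured vector is close enough to the enrolled template; the names, vectors, and distance threshold are all invented.

```python
import numpy as np

enrolled_templates = {
    "alice": np.array([0.12, 0.87, 0.44, 0.91]),   # stored, confirmed-authentic template
    "bob":   np.array([0.55, 0.13, 0.78, 0.32]),
}

def authenticate(claimed_identity, captured_features, max_distance=0.15):
    """Compare a fresh biometric capture against the claimed identity's stored template."""
    template = enrolled_templates.get(claimed_identity)
    if template is None:
        return False
    return np.linalg.norm(captured_features - template) <= max_distance

print(authenticate("alice", np.array([0.13, 0.85, 0.45, 0.90])))  # True: close match
print(authenticate("alice", np.array([0.60, 0.10, 0.80, 0.30])))  # False: no match
```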

Figure 1: A Thumb for Biometric Verification

Generally speaking, several physical attributes are used or can be used in user authentication:

  • Fingerprint scans have been in use for many years by law enforcement and other government agencies and are regarded as a reliable, unique identifier.
  • Retina or iris scans have been used to confirm a person’s identity by analyzing the arrangement of blood vessels in the retina or patterns of color in the iris.
  • Voice recognition uses a voice print that analyzes how a person says a particular word or sequence of words unique to that individual.

There are seven basic criteria for biometric security systems: uniqueness, universality, permanence, collectability, performance, acceptability, and circumvention.

Figure 2 Criteria of Biometric Security

Types of Biometrics

A number of biometric methods have been introduced over the years, but few have gained wide acceptance.

Signature dynamics: Based on an individual’s signature, but considered unforgeable because what is recorded isn’t the final image but how it is produced — i.e., differences in pressure and writing speed at various points in the signature.

Typing patterns: Similar to signature dynamics but extended to the keyboard, recognizing not just a password that is typed in but the intervals between characters and the overall speeds and pattern. This is akin to the way World War II intelligence analysts could recognize a specific covert agent’s radio transmissions by his “hand” — the way he used the telegraph key.

Eye scans: This favorite of spy movies and novels presents its own problems. The hardware is expensive and specialized, and using it is slow and inconvenient and may make users uneasy.

In fact, two parts of the eye can be scanned, using different technologies: the retina and the iris.

Fingerprint recognition: Everyone knows fingerprints are unique. They are also readily accessible and require little physical space either for the reading hardware or the stored data.

Voice recognition: This is different from speech recognition. The idea is to verify the individual speaker against a stored voice pattern, not to understand what is being said.

Facial recognition: Uses distinctive facial features, including upper outlines of eye sockets, areas around cheekbones, the sides of the mouth, and the location of the nose and eyes. Most technologies avoid areas of the face near the hairline so that hairstyle changes won’t affect recognition.

Biometrics Authentication Applications

Biometric technology can be used for a great number of applications. Chances are, if security is involved, biometrics can help make operations, transactions, and everyday life both safer and more convenient. Here you will find a list of the many areas of deployment for biometrics and the companies that provide applicable identity solutions.

Biometric Security

As connectivity continues to spread across the globe, it is clear that old security methods are simply not strong enough to protect what’s most important. Thankfully, biometric technology is more accessible than ever before, ready to bring enhanced security and greater convenience to whatever needs protecting, from a door to your car to…

  • Border Control/Airports

A key area of application for biometric technology is at the border. Anyone who’s traveled by air can tell you security checkpoints and border crossings are some of the most frustrating places to have to move through. Thankfully, biometric technology is helping automate the process.

  • Consumer/Residential Biometrics

Recent innovations in mobility and connectivity have created a demand for biometrics in the homes and pockets of consumers. Smartphones with fingerprint sensors, apps that allow for facial and voice recognition, and mobile wallets are the increasingly popular ways that consumers around the world are finding biometrics in their lives.

  • Fingerprint & Biometric Locks

If you have something worth protecting, why not give it the star treatment? Biometric physical access control solutions are stronger authentication methods than keys, key cards, and PINs for a simple reason: they’re what you are, not what you have.

  • Healthcare Biometrics

Biometrics bring security and convenience wherever they’re deployed, but in some instances they also bring increased organization. In the field of healthcare, this is particularly true. Health records are some of the most valuable personal documents out there; doctors need access to them quickly, and they need to be accurate.

References

[1] Tanuj Tiwari, Tanya Tiwari, and Sanjay Tiwari, “Biometrics Based User Authentication”

[2] Russell Kay, “Biometric Authentication”, available online at: https://www.computerworld.com/article/2556908/security0/biometric-authentication.html

[3] “Applications”, available online at: https://findbiometrics.com/applications/

Object Recognition: Computer Vision

Object recognition in computer vision is the task of finding an object in an image or video sequence. It is a fundamental vision problem. Humans recognize a huge number of objects in images with little effort, and computer vision aims to give machines a similar ability to see and understand what is in their surroundings. Images of objects may vary in viewpoint, in size or scale, or even when they are translated or rotated. Object recognition is an important task in image processing, and the field contains methods for acquiring, processing, and analyzing images for computer vision.

Overview

Object recognition plays an important role in computer vision. It is indispensable for many applications in the area of autonomous systems or industrial control. An object recognition system finds objects in the real world from an image of the world, using object models which are known a priori. With a simple glance at an object, humans are able to tell its identity or category despite appearance variations due to changes in pose, illumination, texture, deformation, and occlusion. Furthermore, humans can easily generalize from observing a set of objects to recognizing objects that have never been seen before. Object recognition is concerned with determining the identity of an object being observed in an image from a set of known tags. Humans can recognize any object in the real world easily without any effort; machines, on the contrary, cannot recognize objects by themselves, and implementing algorithmic descriptions of recognition tasks on machines is an intricate task. Thus object recognition techniques need to be developed which are less complex and more efficient.

Definition

Object recognition is a process for identifying a specific object in a digital image or video. Object recognition is concerned with determining the identity of an object being observed in the image from a set of known labels. Oftentimes, it is assumed that the object being observed has been detected or there is a single object in the image. Object recognition algorithms rely on matching, learning, or pattern recognition algorithms using appearance-based or feature-based techniques. Object recognition is useful in applications such as video stabilization, advanced driver assistance systems (ADAS), and disease identification in bioimaging. Common techniques include deep learning-based approaches such as convolutional neural networks, and feature-based approaches using edges, gradients, histograms of oriented gradients (HOG), Haar wavelets, and local binary patterns.

Object recognition methods frequently use extracted features and learning algorithms to recognize instances of an object or images belonging to an object category. Object class recognition deals with classifying objects into a certain class or category, whereas object detection aims at localizing a specific object of interest in digital images or videos. Every object or object class has its own particular features that characterize it and differentiate it from the rest, helping in the recognition of the same or similar objects in other images or videos. Significant challenges remain in the field of object recognition. One main concern is robustness with respect to variation in scale, viewpoint, illumination, non-rigid deformations, and imaging conditions. Another current issue is scaling up to thousands of object classes and millions of images, which is called large-scale image retrieval.

Model Design

The architecture and main components of object recognition are given below.

 

Figure 1: Different Components of Object Recognition

A block diagram showing interactions and information flow among the different components of the system is given in Figure 1.

The model database contains all the models known to the system. The information in the model database depends on the approach used for the recognition. It can vary from a qualitative or functional description to precise geometric surface information. In many cases, the models of objects are abstract feature vectors, as discussed later in this section. A feature is some attribute of the object that is considered important in describing and recognizing the object in relation to other objects. Size, color, and shape are some commonly used features.

The feature detector applies operators to images and identifies locations of features that help in forming object hypotheses. The features used by a system depend on the types of objects to be recognized and the organization of the model database. Using the detected features in the image, the hypothesizer assigns likelihoods to objects present in the scene. This step is used to reduce the search space for the recognizer using certain features. The model base is organized using some type of indexing scheme to facilitate the elimination of unlikely object candidates from possible consideration. The verifier then uses object models to verify the hypotheses and refines the likelihood of objects. The system then selects the object with the highest likelihood, based on all the evidence, as the correct object.

All object recognition systems use models either explicitly or implicitly and employ feature detectors based on these object models. The hypothesis formation and verification components vary in their importance in different approaches to object recognition. Some systems use only hypothesis formation and then select the object with the highest likelihood as the correct object. Pattern classification approaches are a good example of this approach. Many artificial intelligence systems, on the other hand, rely little on hypothesis formation and do more work in the verification phases. In fact, one of the classical approaches, template matching, bypasses the hypothesis formation stage entirely.
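
A toy skeleton of this hypothesize-and-verify flow is sketched below; the “features” are just invented attribute tags and the scoring is deliberately naive, so it only illustrates how the components pass information to one another.

```python
def detect_features(image):
    """Feature detector: extract simple attributes (here just invented size and color)."""
    return {"size": image.get("size"), "color": image.get("color")}

def hypothesize(features, model_database):
    """Hypothesizer: score every model by how many of its expected features are present."""
    return {
        name: sum(features.get(k) == v for k, v in model.items()) / len(model)
        for name, model in model_database.items()
    }

def verify(hypotheses, threshold=0.5):
    """Verifier: keep only hypotheses whose likelihood survives a plausibility check."""
    return {name: score for name, score in hypotheses.items() if score >= threshold}

def recognize(image, model_database):
    """Select the object with the highest refined likelihood, if any."""
    candidates = verify(hypothesize(detect_features(image), model_database))
    return max(candidates, key=candidates.get) if candidates else None

# Invented "model database" and "image" for the example.
models = {"apple": {"size": "small", "color": "red"},
          "bus":   {"size": "large", "color": "yellow"}}
print(recognize({"size": "small", "color": "red"}, models))   # apple
```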

References

[1] Latharani T.R. and M.Z. Kurian, “Various Object Recognition Techniques for Computer Vision”, Journal of Analysis and Computation, Vol. 7, No. 1, (January-June 2011), pp. 39-47

[2] Simon Achatz, “State of the Art of Object Recognition Techniques”, Neuroscientific System Theory, Seminar Report, 2016

[3] “Chapter 15 Object Recognition”, available online at: http://www.cse.usf.edu/~r1k/MachineVisionBook/MachineVision.files/MachineVision_Chapter15.pdf

 

 

Artificial Neural Network (ANN): An Introduction

An Artificial Neural Network (ANN) is an information processing model inspired by the way biological nervous systems, such as the human brain, process information. It is composed of a number of interconnected processing elements known as neurons. In order to solve specific problems for an application, different ANN structures need to be configured using a learning process. Just as adjustments to synaptic connections occur in biological systems, the connecting edges that exist between the neurons update their weights.

Neural Network Definition

Artificial neural networks, commonly referred to as “neural networks”, are highly complex, nonlinear, and parallel information-processing systems. A neural network has the capability to organize its structure (neurons) to perform computations that can be used for tasks such as pattern recognition, perception, and motor control. For example, human vision is an information-processing task whose function is to provide a representation of the environment around us and to supply the data we need to interact with that environment; a neural network likewise accomplishes perceptual tasks.

More specifically, “A neural network is an interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal neuron. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns”.

Artificial neural networks are an attempt at modeling the information processing capabilities of nervous systems. Thus, first of all, we need to consider the essential properties of biological neural networks from the viewpoint of information processing. This will allow us to design abstract models of artificial neural networks, which can then be simulated and analyzed. The human brain has remarkable capabilities for processing information and making instantaneous decisions. Many researchers have shown that the human brain makes computations in a radically different manner from that of binary computers. The brain is a massive network of parallel and distributed computing elements (neurons), and many scientists have been working for the last few decades to build a computational system modeled on it, called a neural network or a connectionist model. A neural network is composed of a set of parallel and distributed processing units called nodes or neurons; these neurons are interconnected by means of unidirectional or bidirectional links and are ordered in layers.

The basic unit of the neural network is the neuron. It has N inputs x1, x2, ..., xN, and each input is multiplied by a connection weight w1, w2, ..., wN. The products of the inputs and weights are summed and fed through a transfer function (activation function) to generate the result (output).
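
A minimal sketch of such a neuron, assuming NumPy and a sigmoid chosen arbitrarily as the transfer function (the input and weight values are invented):

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs passed through a transfer (activation) function."""
    activation = np.dot(inputs, weights) + bias      # sum of input * weight products
    return 1.0 / (1.0 + np.exp(-activation))         # sigmoid transfer function

x = np.array([0.5, 0.3, 0.9])      # N inputs x1..xN (values invented)
w = np.array([0.4, -0.6, 0.2])     # connection weights w1..wN (values invented)
print(neuron(x, w))
```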

Benefits of Neural Networks

It is apparent that a neural network derives its computing power through, first, its massively parallel distributed structure and, second, its ability to learn and therefore generalize. Generalization refers to the neural network’s production of reasonable outputs for inputs not encountered during training (learning). These two information processing capabilities make it possible for neural networks to find good approximate solutions to complex (large-scale) problems that are intractable.

Neural networks offer the following useful properties and capabilities:

Non-linearity: An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear. Moreover, the non-linearity is of a special kind in the sense that it is distributed throughout the network. Non-linearity is a highly important property, particularly if the underlying physical mechanism responsible for the generation of the input signal (e.g., speech signal) is inherently nonlinear.

Input-Output Mapping: A popular paradigm of learning, called learning with a teacher or supervised learning, involves modification of the synaptic weights of a neural network by applying a set of labeled training examples, or task examples. Each example consists of a unique input signal and a corresponding desired (target) response. The network is presented with an example picked at random from the set, and the synaptic weights (free parameters) of the network are modified to minimize the difference between the desired response and the actual response of the network produced by the input signal in accordance with an appropriate statistical criterion. The training of the network is repeated for many examples in the set until the network reaches a steady state where there are no further significant changes in the synaptic weights. The previously applied training examples may be reapplied during the training session but in a different order. Thus the network learns from the examples by constructing an input-output mapping for the problem at hand.
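
As a hedged illustration of learning with a teacher, the sketch below trains a single linear neuron with a simple error-correction (delta) rule; the data, target weights, learning rate, and epoch count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))                   # labeled training examples (inputs)
t = X @ np.array([0.5, -0.2, 0.8])         # desired (target) responses

w = np.zeros(3)                            # free parameters (synaptic weights)
learning_rate = 0.1
for epoch in range(50):                    # repeat until the weights stop changing much
    for x_i, t_i in zip(X, t):
        y_i = w @ x_i                      # actual response of the (linear) neuron
        w += learning_rate * (t_i - y_i) * x_i   # shrink the desired-actual difference

print(w)                                   # approaches [0.5, -0.2, 0.8]
```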

Evidential Response: In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.

Contextual Information: Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally by a neural network.

VLSI Implementability: The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using very-large-scale-integrated (VLSI) technology. One particular beneficial virtue of VLSI is that it provides a means of capturing truly complex behavior in a highly hierarchical fashion.

Uniformity of Analysis and Design: Basically, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks. This feature manifests itself in different ways:

  • Neurons, in one form or another, represent an ingredient common to all neural networks.
  • This commonality makes it possible to share theories and learning algorithms in different applications of neural networks.
  • Modular networks can be built through seamless integration of modules.

 

References

[1] Haykin, Simon S., et al. Neural Networks and Learning Machines, Volume 3, Upper Saddle River, NJ, USA, Pearson, 2009.

[2] Shruti B. Hiregoudar, Manjunath. K and K. S. Patil, “A Survey: Research Summary on Neural Networks”, International Journal of Research in Engineering and Technology, Volume 03, Special Issue: 03, May-2014

[3] Rojas, Raúl. Neural networks: a systematic introduction, Springer Science & Business Media, 2013.
