What is Template Matching in Object Recognition

Template matching has attracted considerable interest in recent years and has had a significant impact on computer vision. Template matching is a high-level machine vision technique that identifies the parts of an image that match a predefined template. Advanced template matching algorithms allow us to find occurrences of the template regardless of their orientation and local brightness. Template matching techniques are flexible and relatively straightforward to use, which makes them one of the most popular methods of object localization. Their applicability is limited mostly by the available computational power, as identification of big and complex templates can be time-consuming.

What is it?

Template matching is a technique used to classify an object by comparing portions of an image with another image. It is one of the important techniques in digital image processing. Templates are commonly employed to identify printed characters, numbers, and other simple objects. Template matching can be used to detect edges in figures, in manufacturing as part of quality control, and as a means to navigate a mobile robot.

Figure 1: Example of Template Matching

Figure 1 depicts an example of template matching. It is a strategy for discovering zones of an image that match (are identical or very similar to) a template image (patch). We require two crucial components:

  • Source image (I): The image within which we expect to find a match to the template image.
  • Template image (T): The patch image that is compared against regions of the source image; the objective is to find the region that matches it best. The matching technique computes not only a similarity measure but also the error between the images, for example by means of the Mean Squared Error (MSE) metric. A minimal matching sketch is shown below.
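As a simple illustration of the idea, the following sketch uses OpenCV's template matching with a normalized squared-difference score, which plays the same role as the MSE criterion mentioned above. The file names are placeholder assumptions, not part of the original text.

```python
# A minimal template-matching sketch (file names are placeholders).
import cv2

source = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)      # source image I
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # template image T
h, w = template.shape

# TM_SQDIFF_NORMED scores each region by its normalized squared difference
# from the template, i.e. an MSE-like error measure; smaller is better.
scores = cv2.matchTemplate(source, template, cv2.TM_SQDIFF_NORMED)
min_val, _, min_loc, _ = cv2.minMaxLoc(scores)

# The location with the smallest difference is the best-matching region.
top_left = min_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print("Best match at", top_left, "with score", min_val)
```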

Figure 2: Real-world application

Template Matching Approaches

Template or image matching approaches are generally categorized into the feature-based approach and the template-based (or area-based) approach:

  • Feature-based approach: The feature-based method is appropriate when both the reference and template images contain strong correspondences in terms of features and control points. The features comprise points, curves, or a surface model that needs to be matched. Here, the goal is to establish the pairwise correspondence between the reference and template images using their spatial relations or feature descriptors.
  • Template-based approach: Template-based matching may require sampling a large number of points. The number of sampling points can be reduced by downsampling the search and template images by the same factor and operating on the resulting reduced images (multiresolution, or image pyramids), by restricting the search to a window of data points within the search image so that the template does not have to be compared at every possible position, or by combining the two.
  • Area-based approach: Area-based methods, often referred to as correlation-like or template-matching methods, combine feature matching, feature detection, motion tracking, and occlusion handling. They merge the matching step with the feature detection step and process the images without attempting to detect salient objects. Windows of predefined size are used to estimate correspondence.
  • Motion tracking and occlusion handling: For templates that cannot provide a direct match, eigenspaces may be used; these describe how the matching image appears under a variety of conditions, such as different poses, lighting, or color contrast. For instance, if one were searching for a particular specimen, the eigenspace may contain templates of that specimen in different positions relative to the camera, under different lighting conditions or with different expressions. It is also possible for the matching figure to be occluded by another object, or for its motion to become ambiguous. One possible answer is to divide the template into multiple sub-images and perform matching on each of them.

Limitations

The following are the limitations of template matching:

  • Templates are not rotation or scale invariant.
  • Slight changes in size or orientation can cause problems.
  • Several templates are often needed to represent one object, for example templates of different sizes or rotations of the same template.
  • Template matching is a very expensive operation, particularly if the entire image is searched or several templates are used, although it is easily parallelized.
  • It requires high computational power, because detecting large patterns in an image is very time-consuming.


An Overview of Image Denoising

Any form of signal processing having an image as input and output (or a set of characteristics or parameters of an image) is called image processing. In image processing, we work in two domains: the spatial domain and the frequency domain. The spatial domain refers to the image plane itself, and image processing methods in this category are based on direct manipulation of the pixels in an image. The frequency domain refers to the analysis of mathematical functions or signals with respect to frequency rather than time.

Overview

The search for efficient image denoising methods remains a valid challenge at the crossroads of functional analysis and statistics. Image denoising refers to the recovery of a digital image that has been contaminated by noise. The presence of noise in images is unavoidable; it may be introduced during the image formation, recording, or transmission phase. Further processing of the image often requires that the noise be removed or at least reduced. Even a small amount of noise is harmful when high accuracy is required. The noise can be of different types, the most common being additive white Gaussian noise (AWGN).
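As a small illustration, the following sketch corrupts an image with AWGN; the file names and the noise level sigma are placeholder assumptions.

```python
# A minimal sketch of adding zero-mean additive white Gaussian noise (AWGN).
import numpy as np
import cv2

image = cv2.imread("clean.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

sigma = 25.0                                       # noise standard deviation (assumed)
noise = np.random.normal(0.0, sigma, image.shape)  # zero-mean Gaussian noise
noisy = np.clip(image + noise, 0, 255).astype(np.uint8)

cv2.imwrite("noisy.png", noisy)
```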

An image denoising procedure takes a noisy image as input and outputs an image in which the noise has been reduced. Numerous and diverse approaches exist. Some selectively smooth parts of a noisy image. Other methods rely on the careful shrinkage of wavelet coefficients. A conceptually similar approach is to denoise image patches by approximating noisy patches with a sparse linear combination of elements of a learned dictionary; the dictionary is sometimes learned on a noise-free dataset. Other methods learn a global image prior from a noise-free dataset. An image is often corrupted by noise during its acquisition and transmission. Image denoising is used to remove the additive noise while retaining as much as possible of the important signal features. Generally, data sets collected by image sensors are contaminated by noise: imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all corrupt the data of interest. Noise reduction is thus an important technology in image analysis and the first step to be taken before images are analyzed; image denoising techniques are therefore necessary to remove this type of corruption from digital images.

Techniques of Image Denoising

Various image denoising techniques have been developed so far and their application depends upon the type of image and noise present in the image. Image denoising is classified into two categories:

Spatial domain filtering: The traditional way to remove noise from digital images is to employ spatial filters. Spatial domain filtering is further classified into linear filters and nonlinear filters.

Linear filters: A mean filter is the optimal linear filter for Gaussian noise in the sense of mean squared error. Linear filters tend to blur sharp edges and destroy lines and other fine details of the image. This category includes the mean filter and the Wiener filter; a minimal mean-filter sketch is shown below.
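The following sketch applies a mean filter in the spatial domain using OpenCV; the 5x5 window size and the file names are placeholder assumptions.

```python
# A minimal spatial-domain mean filter (linear filtering).
import cv2

noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)

# cv2.blur averages the pixels inside the sliding window, i.e. a mean filter.
denoised = cv2.blur(noisy, (5, 5))

cv2.imwrite("mean_filtered.png", denoised)
```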

Transform domain filtering: Transform domain filtering can be subdivided into data-adaptive and non-adaptive filters. The transform domain mainly includes wavelet-based filtering techniques.

Wavelet transform: The wavelet transform is a mathematical tool that analyzes data according to scale or resolution. Noise reduction using wavelets is performed by first decomposing the noisy image into wavelet coefficients, i.e. approximation and detail coefficients. Then, by selecting a proper threshold value, the detail coefficients are modified according to a thresholding function. Finally, the reconstructed image is obtained by applying the inverse wavelet transform to the modified coefficients. The basic procedure for all thresholding methods is:

  • Calculate the DWT of the image.
  • Threshold the wavelet coefficients.
  • Compute the IDWT to obtain a denoised estimate.

Two thresholding functions are frequently used: the hard threshold and the soft threshold. The hard-thresholding function keeps the input if it is larger than the threshold; otherwise, it sets it to zero. The soft-thresholding function takes the argument and shrinks it toward zero by the threshold. Soft thresholding is usually preferred over hard thresholding because it yields more visually pleasant images. With a small threshold the result may still be noisy; a large threshold, alternatively, produces a signal with a large number of zero coefficients, which leads to a smooth signal but may discard detail. Much attention must therefore be paid to selecting the optimal threshold. A minimal wavelet-thresholding sketch is shown below.
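The sketch below follows the three-step procedure above using PyWavelets with soft thresholding; the wavelet, decomposition level, threshold value, and file names are placeholder choices, not prescriptions from the text.

```python
# A minimal sketch of wavelet-based denoising with soft thresholding.
import numpy as np
import pywt
import cv2

noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# 1. Decompose the noisy image into approximation and detail coefficients.
coeffs = pywt.wavedec2(noisy, wavelet="db4", level=2)

# 2. Soft-threshold only the detail coefficients (keep the approximation).
threshold = 20.0  # assumed threshold value
thresholded = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)
    for detail in coeffs[1:]
]

# 3. Reconstruct the denoised estimate with the inverse wavelet transform.
denoised = pywt.waverec2(thresholded, wavelet="db4")
cv2.imwrite("denoised.png", np.clip(denoised, 0, 255).astype(np.uint8))
```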

Figure 1: Example of Image Denoising


Understanding Data Deduplication in the Cloud

Rendering efficient storage and security for all data is very important for the cloud. With the rapidly increasing amounts of data produced worldwide, networked and multi-user storage systems are becoming very popular. However, concerns over data security still prevent many users from migrating data to remote storage.

Data deduplication refers to a technique for eliminating redundant data in a data set. In the process of deduplication, extra copies of the same data are deleted, leaving only one copy to be stored. The data is analyzed to identify duplicate byte patterns and to ensure that the single remaining instance is indeed unique; duplicates are then replaced with a reference that points to the stored chunk.

Data deduplication is a technique to reduce storage space. It identifies redundant data by using hash values to compare data chunks, stores only one copy, and creates logical pointers to the other copies instead of storing additional actual copies of the redundant data. Deduplication reduces the data volume, so disk space and network bandwidth requirements shrink, which lowers costs and energy consumption for running storage systems.

Figure 1: Data De-duplication View

It is a technique whose objective is to improve storage efficiency. To reduce storage space, traditional deduplication systems identify duplicated data chunks and store only one replica of the data; logical pointers are created for the other copies instead of storing the redundant data. Deduplication can reduce both storage space and network bandwidth. However, such techniques can have a negative impact on system fault tolerance: because many files refer to the same data chunk, a chunk that becomes unavailable due to failure reduces the reliability of all of them. Due to this problem, many approaches and techniques have been proposed that not only achieve storage efficiency but also improve fault tolerance.

Applications

Data deduplication provides practical ways to achieve the following goals:

  • Capacity optimization. It stores more data in less physical space. It achieves greater storage efficiency than was possible by using features such as Single Instance Storage (SIS) or NTFS compression. It uses subfile variable-size chunking and compression, which deliver optimization ratios of 2:1 for general file servers and up to 20:1 for virtualization data.
  • Scale and performance. It is highly scalable, resource-efficient, and nonintrusive. It can process up to 50 MB per second in Windows Server 2012 R2, and about 20 MB of data per second in Windows Server 2012. It can run on multiple volumes simultaneously without affecting other workloads on the server.
  • Reliability and data integrity. When it is applied, the integrity of the data is maintained. Data Deduplication uses checksum, consistency, and identity validation to ensure data integrity. For all metadata and the most frequently referenced data, data deduplication maintains redundancy to ensure that the data is recoverable in the event of data corruption.
  • Bandwidth efficiency with BranchCache. Through integration with BranchCache, the same optimization techniques are applied to data transferred over the WAN to a branch office. The result is faster file download times and reduced bandwidth consumption.
  • Optimization management with familiar tools. It has optimization functionality built into Server Manager and Windows PowerShell. Default settings can provide savings immediately, or administrators can fine-tune the settings to see more gains.

Data de-duplication Methods

Data deduplication identifies duplicate data, removing redundancies and reducing the overall volume of data transferred and stored. Two methods, block-level and byte-level data deduplication, deliver the benefit of optimizing storage capacity. When, where, and how the processes work should be reviewed against your data backup environment and its specific requirements before selecting one approach over the other.

  1. Block-level Approaches

Block-level data deduplication segments data streams into blocks, inspecting the blocks to determine if each has been encountered before (typically by generating a digital signature or unique identifier via a hash algorithm for each block). If the block is unique, it is written to disk, and its unique identifier is stored in an index; otherwise, only a pointer to the original, unique block is stored. By replacing repeated blocks with much smaller pointers rather than storing the block again, disk storage space is saved.
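As a toy illustration of the block-level idea, the sketch below chunks a byte stream into fixed-size blocks, fingerprints each block with SHA-256, and keeps only unique blocks plus pointers. The chunk size and the in-memory index are illustrative assumptions, not a production design.

```python
# A minimal sketch of block-level deduplication with fixed-size chunks.
import hashlib

CHUNK_SIZE = 4096  # bytes per block (assumed)

def deduplicate(data: bytes):
    """Return a chunk store (hash -> unique block) and the ordered list of
    pointers (hashes) that reconstruct the original data stream."""
    store = {}      # index of unique blocks keyed by their digital signature
    pointers = []   # references that replace repeated blocks
    for offset in range(0, len(data), CHUNK_SIZE):
        block = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(block).hexdigest()  # unique identifier
        if digest not in store:                     # unseen block: write it once
            store[digest] = block
        pointers.append(digest)                     # otherwise keep only a pointer
    return store, pointers

def reconstruct(store, pointers):
    return b"".join(store[d] for d in pointers)

data = b"ABCD" * 5000                 # highly redundant sample stream
store, pointers = deduplicate(data)
assert reconstruct(store, pointers) == data
print(f"{len(pointers)} blocks, {len(store)} unique")
```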

  2. Byte-level data de-duplication

Analyzing data streams at the byte level is another approach to deduplication. By performing a byte-by-byte comparison of new data streams against previously stored ones, a higher level of accuracy can be delivered. Deduplication products that use this method have one thing in common: they assume that the incoming backup data stream has likely been seen before, so it is reviewed to see whether it matches similar data received in the past.


What is Data Compression

Compression is used just about everywhere. Data compression involves the development of a compact representation of information. Most representations of information contain large amounts of redundancy. Redundancy can exist in various forms. Internet users who download or upload files from/to the web, or use email to send or receive attachments will most likely have encountered files in compressed format.

General Overview

With the extended use of computers in various disciplines, the number of data processing applications is also increasing, and these require the processing and storage of large volumes of data. Data compression is primarily a branch of information theory, which deals with techniques for minimizing the amount of data to be transmitted and stored. It is often called coding, where coding is a general term encompassing any special representation of data that satisfies a given need. Information theory is the study of efficient coding and its consequences in terms of transmission speed.

What is it?

Today, with the growing demands of information storage and data transfer, data compression is becoming increasingly important. Compression is the process of encoding data more efficiently in order to reduce file size. One type of compression available is referred to as lossless compression. This means the compressed file can be restored exactly to its original state, with no loss of data during the decompression process. This is essential for data that would be corrupted and unusable should anything be lost. Data compression is the art of reducing the number of bits needed to store or transmit data, and it is one of the enabling technologies for multimedia applications. It would not be practical to put images, audio, and video on websites without compression algorithms, and mobile phones would not be able to provide clear communication without data compression. With compression techniques, we can reduce the consumption of resources such as hard disk space or transmission bandwidth.

Data Compression Principles

Below, data compression principles are listed:

  • It is the substitution of frequently occurring data items, or symbols, with short codes that require fewer bits of storage than the original symbols.
  • It saves space, but requires time to compress and extract.
  • Success varies with the type of data.
  • It works best on data with low spatial variability and a limited set of possible values.
  • It works poorly on data with high spatial variability or on continuous surfaces.
  • It exploits inherent redundancy and irrelevancy by transforming a data file into a smaller one.

Figure 1: Data Compression Process

Data Compression Techniques

Data compression is the function of the presentation layer in the OSI reference model. Compression is often used to maximize the use of bandwidth across a network or to optimize disk space when saving data.

There are two general types of compression techniques:

Figure 2: Classification of Compression

Lossless Compression

Lossless compression compresses the data in such a way that when data is decompressed it is exactly the same as it was before compression i.e. there is no loss of data. Lossless compression is used to compress file data such as executable code, text files, and numeric data because programs that process such file data cannot tolerate mistakes in the data. Lossless compression will typically not compress files as much as lossy compression techniques and may take more processing power to accomplish the compression.

Lossless data compression is compression without any loss of data quality. The decompressed file is an exact replica of the original one. Lossless compression is used when it is important that the original and the decompressed data be identical. It is done by re-writing the data in a more space-efficient way, removing all kinds of repetitions (compression ratio 2:1). Some image file formats, notably PNG, use only lossless compression, while those like TIFF may use either lossless or lossy methods.

Lossless Compression Algorithms

The various algorithms used to implement lossless data compression are:

Run Length Encoding

  • This method replaces consecutive occurrences of a given symbol with only one copy of the symbol along with a count of how many times that symbol occurs. Hence the name ‘run length’.
  • For example, the string AAABBCDDDD would be encoded as 3A2B1C4D (see the sketch after this list).
  • A real-life example where run-length encoding is quite effective is the fax machine. Most faxes are white sheets with occasional black text, so a run-length encoding scheme can take each line and transmit a code for white and the number of pixels, then the code for black and the number of pixels, and so on.
  • This method of compression must be used carefully. If there is not a lot of repetition in the data, the run-length encoding scheme can actually increase the size of a file.
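A minimal run-length encoder matching the AAABBCDDDD example above; it emits a count even for single symbols, as that example does.

```python
# A minimal sketch of run-length encoding.
def run_length_encode(text: str) -> str:
    if not text:
        return ""
    encoded = []
    current, count = text[0], 1
    for symbol in text[1:]:
        if symbol == current:
            count += 1               # extend the current run
        else:
            encoded.append(f"{count}{current}")  # emit count + symbol
            current, count = symbol, 1
    encoded.append(f"{count}{current}")          # emit the final run
    return "".join(encoded)

print(run_length_encode("AAABBCDDDD"))  # 3A2B1C4D
```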

Differential Pulse Code Modulation

  • In this method, a reference symbol is placed first. Then, for each symbol in the data, we store the difference between that symbol and the reference symbol.
  • For example, using symbol A as the reference symbol, the string AAABBCDDDD would be encoded as A0001123333, since A is the same as the reference symbol, B has a difference of 1 from the reference symbol, and so on (see the sketch after this list).
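A minimal sketch of the differential encoding described above: the first symbol serves as the reference, and every symbol is replaced by its difference (in alphabet position) from that reference.

```python
# A minimal sketch of differential encoding relative to a reference symbol.
def dpcm_encode(text: str) -> str:
    if not text:
        return ""
    reference = text[0]
    # Difference of each symbol from the reference, e.g. A->0, B->1, C->2, D->3.
    diffs = [str(ord(symbol) - ord(reference)) for symbol in text]
    return reference + "".join(diffs)

print(dpcm_encode("AAABBCDDDD"))  # A0001123333
```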

Dictionary Based Encoding

  • One of the best-known dictionary-based encoding algorithms is the Lempel-Ziv (LZ) compression algorithm.
  • This method is also known as a substitution coder.
  • In this method, a dictionary (table) of variable-length strings (common phrases) is built.
  • This dictionary contains almost every string that is expected to occur in data.
  • When any of these strings occur in the data, then they are replaced with the corresponding index to the dictionary.
  • In this method, instead of working with individual characters in text data, we treat each word as a string and output the index in the dictionary for that word.
  • For example, suppose the word “compression” has the index 4978 in one particular dictionary; that is, it is the 4978th word in /usr/share/dict/words. To compress a body of text, each time the string “compression” appears, it would be replaced by 4978. A toy version of this scheme is sketched after this list.
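The sketch below is a toy version of dictionary-based (substitution) coding: each word is replaced by its index in a shared word list. The tiny hard-coded dictionary stands in for something like /usr/share/dict/words and is purely an assumption for illustration.

```python
# A toy sketch of dictionary-based encoding: words become dictionary indices.
dictionary = ["a", "body", "compress", "of", "text", "to"]   # assumed word list
index = {word: i for i, word in enumerate(dictionary)}

def encode(text: str) -> list[int]:
    # Every word is assumed to be present in the shared dictionary.
    return [index[word] for word in text.split()]

def decode(codes: list[int]) -> str:
    return " ".join(dictionary[c] for c in codes)

codes = encode("to compress a body of text")
print(codes)           # [5, 2, 0, 1, 3, 4]
print(decode(codes))   # to compress a body of text
```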

Lossy Compression

A lossy compression method is one where compressing data and then decompressing it retrieves data that may well be different from the original, but is “close enough” to be useful in some way. The algorithm eliminates irrelevant information as well and permits only an approximate reconstruction of the original file. Lossy compression is also done by re-writing the data in a more space-efficient way, but more than that: less important details of the image are manipulated or even removed so that higher compression rates are achieved. Lossy compression is dangerously attractive because it can provide compression ratios of 100:1 to 200:1, depending on the type of information being compressed. But the cost is loss of data.

The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any known lossless method, while still meeting the requirements of the application.

Examples of Lossy Methods are:

  • PCM
  • JPEG
  • MPEG
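As a brief illustration of lossy compression in practice, the sketch below writes a JPEG at reduced quality using Pillow; the input file name and the quality setting are placeholder assumptions. Reopening the JPEG yields an approximation of the original image, not an exact copy, unlike the lossless methods above.

```python
# A minimal lossy-compression sketch using Pillow's JPEG encoder.
from PIL import Image

image = Image.open("photo.png").convert("RGB")

# Lower quality -> smaller file, at the cost of discarded image detail.
image.save("photo_lossy.jpg", quality=30)
```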


Cover Letter Format / Template for Research Paper / Journal Submission

This cover letter format helps you prepare a cover letter for the submission of a research paper. The template can be used by replacing the placeholders with your own information.

___________________________Format Start____________________________________________

My Name
University of Research
Complete address of the Institute
Your mobile number
email@email.com

Editor-in-Chief
“Journal name complete”

[date]

Dear sir/madam:

I am pleased to submit a research article entitled “Title of Research Paper” by “Authors Names” for consideration for publication in “Journal Complete Name”. This manuscript builds on our study of “Key sentence for your research work or solution introduced”.

In this manuscript, we show that: “Key Highlight or relevant problem statement of your paper”. We believe that this manuscript is appropriate for publication by “Journal Complete Name”.

This manuscript has not been published and is not under consideration for publication elsewhere.  We have no conflicts of interest to disclose.

Thank you for your consideration!

Sincerely,
My Name, PhD
Professor, Department
University Address

___________________________Format End____________________________________________
