## Fuzzy C-Means Clustering Algorithm With Steps

The fuzzy C-means (FCM) clustering algorithm was developed to overcome a limitation of k-means clustering. K-means assigns each instance strictly to one class, whereas fuzzy C-means can assign more than one class label to an instance. The algorithm works by giving each data point a membership in each cluster center based on the distance between the cluster center and the data point: the nearer a data point is to a cluster center, the higher its membership in that cluster. The memberships of each data point across all clusters must sum to one.

After each iteration, the memberships and cluster centers are updated according to the following formulas:
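The update rules are not reproduced in the source; the standard FCM updates are commonly written as follows, where m > 1 is the fuzziness exponent (the β mentioned below), C the number of clusters, x_i a data point, and c_j a cluster center:

```latex
u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{2/(m-1)}},
\qquad
c_j = \frac{\sum_{i=1}^{n} u_{ij}^{\,m}\, x_i}{\sum_{i=1}^{n} u_{ij}^{\,m}}
```

The first formula recomputes the membership of point x_i in cluster j from the relative distances to all centers; the second moves each center to the membership-weighted mean of all points.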

Advantages

1. Gives better results than k-means for overlapped data sets.
2. Unlike k-means, where a data point must belong exclusively to one cluster center, here each data point is assigned a membership in every cluster center, so a data point may belong to more than one cluster.

Disadvantages

1. A lower value of β gives a better result, but at the expense of more iterations.
2. Euclidean distance measures can unequally weight underlying factors.


## K-Means Clustering Algorithm with Steps

k-means is one of the simplest unsupervised learning algorithms for the well-known clustering problem. The procedure classifies a given data set into a certain number of clusters in a simple, straightforward way. The main idea is to define k centers, one for each cluster. These centers should be placed carefully, because different locations lead to different results; the better choice is to place them as far away from each other as possible. The next step is to take each point of the data set and associate it with the nearest center. When no point is pending, the first step is complete and an early grouping is done. At this point we re-calculate k new centroids as the barycenters of the clusters resulting from the previous step. With these k new centroids, a new binding is made between the same data set points and the nearest new center, generating a loop. As a result of this loop, the k centers change their location step by step until no more changes occur, or in other words, the centers do not move anymore. Finally, the algorithm aims at minimizing an objective function known as the squared error function, given by:
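The formula is missing from the source; the squared error function is commonly written as:

```latex
J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\lVert x_i^{(j)} - c_j \right\rVert^{2}
```

where x_i^(j) is the i-th point assigned to cluster j, c_j is the j-th cluster center, and the norm is the Euclidean distance between them.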

Advantages

1. Fast, robust, and easy to understand.
2. Relatively efficient: O(t*k*n*d), where n is the number of objects, k the number of clusters, d the dimension of each object, and t the number of iterations. Normally k, t, d << n.
3. Gives the best results when the clusters in the data set are distinct or well separated from each other.

Disadvantages

1. Exclusive assignment – if the data contains two highly overlapping clusters, k-means cannot resolve that there are two clusters.
2. The algorithm is not invariant to non-linear transformations, i.e., different representations of the data give different results (data represented in Cartesian coordinates and in polar coordinates will give different results).
3. Euclidean distance measures can unequally weight underlying factors.
4. The algorithm finds only a local optimum of the squared error function.
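The assign-then-recompute loop described above can be sketched in a few lines of Python; this is a minimal 1-D illustration on hypothetical toy data, not a production implementation:

```python
import random

def kmeans(points, k, iters=20):
    """Plain k-means on 1-D data: repeatedly assign each point to its
    nearest center, then move each center to the mean of its points."""
    centers = random.sample(points, k)  # initial centers picked from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # recompute centroids; keep the old center if a cluster is empty
        new_centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]
        if new_centers == centers:  # centers stopped moving
            break
        centers = new_centers
    return sorted(centers)

random.seed(0)
points = [1.0, 1.2, 0.8, 10.0, 10.2, 9.8]
print(kmeans(points, 2))  # two centers near 1.0 and 10.0
```

With two well-separated groups like this, the loop converges in a couple of iterations regardless of which points are picked as initial centers.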


## Decision Tree C4.5 or J48 – Algorithm, Applications, Advantages & Disadvantages

C4.5 uses information gain as its splitting criterion. This computation does not, in itself, produce anything new; however, it allows a gain ratio to be measured. The gain ratio is defined as follows:
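The definition is missing from the source; the usual form, where Gain(S, A) is the information gain of attribute A on set S and S_1 … S_n are the subsets produced by splitting S on A, is:

```latex
\text{GainRatio}(S, A) = \frac{\text{Gain}(S, A)}{\text{SplitInfo}(S, A)},
\qquad
\text{SplitInfo}(S, A) = - \sum_{i=1}^{n} \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|}
```

Dividing by the split information penalizes attributes with many values, which plain information gain tends to favor.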

The C4.5 Algorithm is as follows.
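The steps themselves are not reproduced in the source; a common textbook outline of the procedure is:

```
C4.5(examples, attributes):
  if all examples have the same class: return a leaf with that class
  if attributes is empty: return a leaf with the majority class
  best = the attribute with the highest gain ratio
  create a decision node that tests best
  for each value v of best:
      add branch C4.5(examples where best = v, attributes - {best})
  prune the tree (e.g. with pessimistic error pruning)
  return the node
```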

Applications of Decision Tree C4.5

In Coal Logistics Customer Analysis

For coal logistics customer analysis, a decision tree is built with the C4.5 algorithm, simplified using Pessimistic Error Pruning (PEP), and the extracted rules are applied to the CRM of a coal logistics company. The results show that the method can accurately classify the types of customers.

In Scholarship Evaluation

Manual evaluation of educational scholarships usually revolves around the rank of the scholarship and the number of students rewarded, rather than analyzing the factors that influence achievement of the scholarship. As a result, the evaluation lacks fairness and efficiency. Hence, a higher-education scholarship evaluation model is built based on the C4.5 decision tree.

In Soil Quality Grade Forecasting

C4.5 is used to establish a soil quality grade prediction model, with the soil composition of Lishu as the training sample. The C4.5 algorithm also expresses the acquired knowledge as quantitative rules. The experimental results show that the knowledge expressed by the C4.5 algorithm is easy to understand, is convenient for practical application, improves forecasting accuracy, and provides a reliable theoretical basis for precision fertilization.

In Cattle Disease Classification

The C4.5 algorithm has also been used in cattle disease classification, where it successfully predicts and classifies diseases in cattle so that they can be treated without further delay.

In English Emotional Classification

The C4.5 decision tree algorithm is used to classify the sentiment (positive, negative, neutral) of English documents. Running C4.5 on 70,000 English positive sentences generates a decision tree from which many association rules of positive polarity are created; likewise, running it on 70,000 English negative sentences generates a decision tree from which many association rules of negative polarity are created. The sentiment of an English document is then identified based on the association rules of positive and negative polarity.

Advantages

1. C4.5 is an easy algorithm to implement.
2. It deals with noise.
3. It is not much affected by missing data.
4. It can convert the tree to rules.
5. It can also deal with continuous attributes.

Disadvantages

1. A small variation in the data can lead to a different decision tree.
2. It does not work well on small training data sets.
3. It is prone to overfitting.
4. Only one attribute at a time is tested for making a decision.


## What is Clustering? – Applications, Advantages and Limitations

Clustering is a technique for separating information: it focuses on grouping data that carry similar information. Two groups contain different information, but the items within a group share the same information. Clustering is an unsupervised learning technique used to group or categorize information with similar patterns. In simple words, clustering is the technique of separating data based on comparable qualities and assigning each item to a group. To understand this, consider a set of mixed fruit, as demonstrated in the figure below. This set of fruit is the data to be categorized; the algorithm separates the fruits, and the outcome of the clustering is the fruits well categorized according to their similarity.

### Applications of Clustering

• Identifying Cancerous Data: Clustering can be used to distinguish cancerous data. First we take some known samples of cancerous and non-cancerous data and calculate centroids from the available information. Different clustering algorithms can be applied to this task; their aim is to compute the centroids by which the different groups of data can be distinguished.
• Search Engines: Clustering algorithms are the backbone of search engines. They attempt to gather similar items into a single group and keep different items apart, returning the results closest to the searched information.
• In Academics: Performance screening is essential for improving students' academic performance. A clustering algorithm can be used to screen students' performance: based on the obtained scores, algorithms such as k-means or FCM can group students according to their performance.
• Identifying Fake News: The algorithm takes the content of a fake news story (the corpus), analyzes the words used, and then clusters them. These clusters are what enable the algorithm to determine which pieces are genuine and which are fake news. Certain words occur more commonly in sensationalized, misleading clickbait articles; a high proportion of such specific terms in an article indicates a higher likelihood that the material is fake news.
• Spam Filters: Clustering algorithms have been shown to be a successful way of identifying spam. They work by looking at the different sections of an email (header, sender, and content); the data are then grouped, and these groups can be classified to recognize which messages are spam. Including clustering in the classification process has been reported to improve the filter's accuracy to 97%, which is good news for people who do not want to miss the newsletters and offers they like.

### Advantages of Clustering

1. Increased resource availability: If one Intelligence Server in a cluster fails, the other Intelligence Servers in the cluster can pick up its workload. This prevents the loss of valuable time and data if a machine fails.
2. Strategic resource usage: Projects can be distributed across nodes in whatever configuration you prefer. This reduces overhead, because not all machines need to run all projects, and lets you use your resources flexibly.
3. Increased performance: Multiple machines provide greater processing power.
4. Greater scalability: As your user base grows and report complexity increases, your resources can grow.
5. Simplified management: Clustering simplifies the management of large or rapidly growing systems.

### Limitations of Clustering

1. The results depend on the choice of algorithm, distance measure, and number of clusters.
2. Many clustering algorithms are sensitive to initialization and to outliers in the data.

## Decision Tree – Terms, Applications, Advantages and Disadvantages

A decision tree is a type of supervised learning in which we continuously split the data according to certain parameters. It has two entities: decision nodes and leaves. Leaves are the final outcomes of the tree's decisions, and the nodes where the data splits are called decision nodes. A decision tree is drawn upside down, with its root at the top. In the figure given below, we have made a decision tree to check whether a person is fit or not. The decision nodes test parameters such as age, eating habits, and exercise, and the data splits at these nodes into further nodes or leaves. The final outcome, Fit or Unfit, can be seen for each particular case.

Decision trees are built using an algorithmic approach that identifies ways to split a data set based on different conditions. They are one of the most widely used and practical methods for supervised learning. Decision trees are a non-parametric supervised learning method used for both classification and regression tasks.

Here are some common terms which we use in the decision tree.

Branches – Subsections of the whole tree are called branches.

Root Node – It represents the entity that will be divided further.

Terminal Node – A node that cannot be split further is called a terminal node.

Pruning – Removal of sub-nodes from a decision node.

Splitting – The division of nodes is called splitting.

Decision Node – A node that will be divided further into different sub-nodes; it may itself be a sub-node of another node.

Parent and Child Node – When a node gets divided further then it becomes a parent node and the divided nodes or the sub-nodes become a child node of the parent node.
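The terms above can be illustrated with a tiny hand-built tree for the fit-or-unfit example mentioned earlier; the attribute names here are hypothetical:

```python
# Decision nodes are dicts; leaves are plain strings ("Fit" / "Unfit").
# The root node splits on age, its children on eating habits / exercise.
tree = {
    "attribute": "age_lt_30",
    True: {"attribute": "eats_junk_food", True: "Unfit", False: "Fit"},
    False: {"attribute": "exercises", True: "Fit", False: "Unfit"},
}

def classify(node, sample):
    """Walk from the root to a leaf, following the sample's attribute values."""
    while isinstance(node, dict):  # still at a decision node
        node = node[sample[node["attribute"]]]
    return node  # reached a leaf (terminal node)

person = {"age_lt_30": True, "eats_junk_food": False}
print(classify(tree, person))  # Fit
```

Each dict is a parent node, its values are child nodes or leaves, and the path taken from root to leaf is a branch.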

Applications:

In past decades, many organizations have built databases to improve their customer services. Decision trees are a practical way to extract useful information from databases, and they have already been used in many applications in the areas of business and management. In particular, decision tree modeling is widely used in customer relationship management and fraud detection, which are introduced in the subsections below.

In Fraud Detection

Another widely used business application is the detection of Fraudulent Financial Statements (FFS). Such an application is particularly significant because the presence of FFS may reduce the government's tax income (Spathis et al., 2003). A conventional way to detect FFS is to use statistical methods. However, it is hard to uncover all hidden information because of the need to make a huge number of assumptions and to re-specify the relationships among a large number of variables in a financial statement.

In Healthcare

As decision tree modeling can be used for making predictions, a growing number of studies examine the use of decision trees in healthcare management. For example, Chang (2007) built a decision tree model on 516 records to explore the hidden knowledge in the clinical history of developmentally delayed children. The model identifies that most illnesses will result in delays in cognitive development, language development, and motor development, with accuracies of 77.3%, 97.8%, and 88.6% respectively. Such findings can help healthcare professionals intervene early with developmentally delayed children and help them catch up with their normal peers in development and growth.

To Find Prospective Clients

Another use of decision trees is in applying demographic data to find prospective customers. They can help streamline a marketing budget and support informed decisions about the target market the business is focused on. Without them, the business may spend its marketing budget without a specific demographic in mind, which will affect its overall revenue.

To Find Prospective Growth Opportunities

One use of decision trees involves evaluating prospective growth opportunities for organizations based on historical data. Historical sales data can be used in ways that may prompt radical changes in a business's strategy to help it develop and expand.

Advantages

1. It creates a comprehensive analysis of the consequences along each branch and identifies the decision nodes that need further analysis.
2. It assigns specific values to each problem, decision path, and outcome.
3. Using specific values identifies the relevant decision paths, reduces uncertainty, clears up ambiguity, and clarifies the financial consequences of various courses of action.
4. It visually presents all of the decision alternatives for quick comparison in a format that is easy to understand with only brief explanations.
5. Missing values in the data do not affect the process of building a decision tree to any considerable extent.
6. It does not require normalization of the data.

Disadvantages

1. A small change in the data can cause an enormous change in the structure of the decision tree, causing instability.
2. For a decision tree, a calculation can sometimes become far more complex compared with other algorithms.
3. A decision tree often requires more time to train the model.
4. Decision tree training is relatively expensive, as the complexity and time taken are greater.
5. The decision tree algorithm is inadequate for regression and for predicting continuous values.


## CART Algorithm – Working, Applications, Advantages & Disadvantages

CART (Classification And Regression Trees) is a widely used machine learning algorithm. It produces as output a tree in which the outcomes are represented in the leaf nodes and the other attributes of the data set are used to create the branches. The CART algorithm was introduced in 1984 by Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone. CART can be applied to both regression and classification. The main elements of CART (and other decision tree algorithms) are:

1. Rules for splitting data at a node based on the value of one variable;
2. Stopping rules for deciding when a branch is terminal and can be split no more; and
3. Finally, a prediction for the target variable in each terminal node.

The CART algorithm works via the following process:

1. The best split point of each input is obtained.
2. Based on the best split points of each input in Step 1, the new “best” split point is identified.
3. The chosen input is split according to the “best” split point.
4. Splitting continues until a stopping rule is satisfied or no further desirable splitting is available.

The algorithm works by using impurity measures to quantify the purity of the nodes. A node is deemed “pure” when its target values or categories are homogeneous and further splits are undesirable.
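CART implementations most commonly quantify impurity with the Gini index, one minus the sum of squared class proportions; a minimal sketch (the exact measure depends on the implementation):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels:
    0.0 for a pure node, approaching 1.0 for highly mixed nodes."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini(["yes", "yes", "yes", "yes"]))  # 0.0 – pure node, no further split needed
print(gini(["yes", "yes", "no", "no"]))    # 0.5 – maximally mixed two-class node
```

A split is chosen so that the weighted impurity of the child nodes is as low as possible.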

Applications of the CART Algorithm

CART for Quick Data Insights

The CART model is used to find relationships between attributes. After building the decision tree, the Cp value is checked across the levels of the tree to find out the optimum level, at which the relative error is minimum. The optimum Cp value is also used during the pruning of the tree.

CART in Blood Donors Classification

The CART decision tree algorithm, as implemented in Weka, was applied to the UCI ML blood transfusion data set. The numerical experimental results, together with the enhancements, helped to identify donor classes. The CART-derived model, along with the extended definition for identifying regular voluntary donors, provided a model with good classification accuracy.

CART for Environmental and Ecological Data

The CART algorithm is adapted to the case of spatially dependent samples, focusing on environmental and ecological applications. Two approaches are considered. The first one takes into account the irregularity of the sampling by weighting the data according to their spatial pattern using two existing methods based on Voronoi tessellation and regular grid, and one original method based on kriging. The second one uses spatial estimates of the quantities involved in the construction of the discriminate rule at each step of the algorithm. These methods are tested on simulations and on a classical dataset to highlight their advantages and drawbacks. They are then applied to an ecological data set to explore the relationship between pollen data and the presence/absence of tree species, which is an important question for climate reconstruction based on paleoecological data.

CART in Psychiatric Services

CART was used to identify potential high users of services among low-income psychiatric outpatients. Sociodemographic variables, clinical variables (e.g., psychiatric diagnosis and type of presenting complaint), source of referral, and the most recent psychiatric treatment setting used were studied. Discharge from inpatient psychiatric treatment right before admission to outpatient psychiatric treatment was found to be the most consistent, the most powerful, and the only necessary predictor of high use of outpatient psychiatric services.

CART in the Financial Sector

The main idea is that the learning sample is consistently replenished with new observations. It means that the CART tree has an important ability to adjust to the current situation in the market. Many banks are using the Basel II credit scoring system to classify different companies to risk levels, which uses a group of coefficients and indicators. This approach, on the other hand, requires continuous correction of all indicators and coefficients in order to adjust to market changes.

Advantages

1. CART requires minimal supervision and produces easy-to-understand models.
2. It focuses on finding interactions and signal discontinuities.
3. It finds important variables automatically.
4. It is invariant to monotone transformations of predictors.
5. It can use any combination of continuous and discrete variables.

Disadvantages

1. It does not use combinations of variables.
2. The tree structure may be unstable.
3. It has a limited number of positions to accommodate the available predictors.


## ID3 Algorithm – Working, Applications, Advantages & Disadvantages

ID3 is a decision tree algorithm; ID3 is an abbreviation for Iterative Dichotomiser 3. It was invented by J. Ross Quinlan in 1975. It uses a fixed set of examples to build a decision tree, and the resulting tree is used to classify further examples or samples. The leaf nodes contain the class names, and the non-leaf nodes are decision nodes. ID3 generates a decision tree from a given data set by employing a top-down greedy search to test each attribute at every node of the tree.

We use two mathematical factors to implement the ID3 algorithm.

1. Entropy

Entropy is a fundamental quantity commonly used in information theory to measure the amount of information relative to its size. Let x be our training set, containing positive and negative examples; then the entropy of x relative to this classification is:
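The formula is missing from the source; for a two-class set it is commonly written as:

```latex
\text{Entropy}(x) = - p_{+} \log_2 p_{+} \; - \; p_{-} \log_2 p_{-}
```

where p₊ and p₋ are the proportions of positive and negative examples in x. Entropy is 0 when all examples share one class and 1 bit when the classes are evenly split.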

2. Information Gain

In multivariate calculus, partial derivatives of each variable relative to the others are used to find a local optimum. In information theory we use a similar idea: we compare the entropy of the original population with the entropy remaining after a split, to measure the information gain of each attribute. For a training set x and an attribute y, the formula for information gain is:
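The formula is missing from the source; the standard definition is:

```latex
\text{Gain}(x, y) = \text{Entropy}(x) - \sum_{v \in \text{Values}(y)} \frac{|x_v|}{|x|} \, \text{Entropy}(x_v)
```

where x_v is the subset of x for which attribute y has value v, so the sum is the weighted average entropy remaining after splitting on y.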

Steps of ID3 Algorithm

1. Calculate the entropy of the data set.
2. For each attribute/feature:
• Calculate the entropy for all of its categorical values.
• Calculate the information gain for the feature.
3. Find the feature with the maximum information gain.
4. Repeat until we get the desired tree.
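Steps 1–3 can be sketched directly in Python; the data set and attribute names below are hypothetical toy values:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, target):
    """Entropy of the whole set minus the weighted entropy
    of the subsets produced by splitting on `attribute`."""
    total = entropy([e[target] for e in examples])
    n = len(examples)
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e[target] for e in examples if e[attribute] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Hypothetical toy data: "wind" perfectly predicts "play".
data = [
    {"wind": "weak", "play": "yes"},
    {"wind": "weak", "play": "yes"},
    {"wind": "strong", "play": "no"},
    {"wind": "strong", "play": "no"},
]
print(information_gain(data, "wind", "play"))  # 1.0
```

ID3 would pick the attribute with the highest information gain as the root, then recurse on each subset.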

Applications of ID3 Algorithm

ID3 In Information Assets Identification

The algorithm selects the attribute with the largest information gain as the test attribute of the current node. This minimizes the amount of information needed to classify the data and reflects the principle of minimum randomness. The ID3 algorithm is applied to the recognition of the value of information assets; thus we can obtain asset-value identification rules and provide important support for information security risk evaluations.

In Soil Quality Grade Forecasting Model

ID3 is used to establish a soil quality grade prediction model, with the soil composition of Lishu as the training sample. The ID3 algorithm also expresses the acquired knowledge as quantitative rules. The experimental results show that the knowledge expressed by the ID3 algorithm is easy to understand, is convenient for practical application, improves forecasting accuracy, and provides a reliable theoretical basis for precision fertilization.

In Cattle Disease Classification

The ID3 algorithm has also been used in cattle disease classification, where it successfully predicts and classifies diseases in cattle so that they can be treated without further delay.

In Scholarship Evaluation

Manual evaluation of educational scholarships usually revolves around the rank of the scholarship and the number of students rewarded, rather than analyzing the factors that influence achievement of the scholarship. As a result, the evaluation lacks fairness and efficiency. Hence, a higher-education scholarship evaluation model is built based on the ID3 decision tree.


Advantages

1. ID3 creates understandable prediction rules from the training data.
2. It builds the tree quickly.
3. Only enough attributes need to be tested until all data is classified.
4. The whole data set is searched to create the tree.
5. It easily handles irrelevant attributes.

Disadvantages

1. If a small sample is tested, the data may be overfitted or over-classified.
2. Only one attribute at a time is tested for making a decision.
3. It does not handle streaming data easily.
4. Classifying continuous data may be computationally expensive, as many trees must be generated to see where to break the continuum.


## AES Algorithm – Working, Features, Advantages and Disadvantages

The AES algorithm is a symmetric block cipher that takes plaintext input in blocks of 128 bits and converts it into ciphertext using keys of 128, 192, or 256 bits. It is used by the US government to protect classified information, and it is implemented in software and hardware throughout the world to encrypt sensitive data. It is essential for government computer security, cybersecurity, and electronic data protection. The National Institute of Standards and Technology (NIST) started the development of AES in 1997.

AES is an iterative rather than a Feistel cipher. It is based on a ‘substitution–permutation network’: it comprises a series of linked operations, some of which replace inputs with specific outputs (substitutions) and others of which shuffle bits around (permutations). The features of AES are as follows −

1. Symmetric-key symmetric block cipher.
2. 128-bit data, 128/192/256-bit keys.
3. Stronger and faster than Triple-DES.
4. Full specification and design details are provided.
5. Implementable in software in C and Java.

Working of AES Algorithm

The AES algorithm uses a substitution–permutation (SP) network with multiple rounds to produce ciphertext. The number of rounds depends on the key size: a 128-bit key dictates 10 rounds, a 192-bit key 12 rounds, and a 256-bit key 14 rounds. Each of these rounds requires a round key, but since only one key is input to the algorithm, this key must be expanded to obtain a key for each round, including round 0. Each round of the algorithm consists of four steps.

Byte Substitution

(SubBytes) The 16 input bytes are substituted by looking them up in a fixed table (S-box) given in the design. The result is a matrix of four rows and four columns.

Shifting of Rows

Each of the four rows of the matrix is shifted to the left. Any entries that ‘fall off’ are re-inserted on the right side of the row. A shift is carried out as follows −

1. The first row is not shifted.
2.  The second row is shifted one (byte) position to the left.
3. The third row is shifted two positions to the left.
4. The fourth row is shifted three positions to the left.
5.  The result is a new matrix consisting of the same 16 bytes but shifted with respect to each other.
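The row shifts can be illustrated on a 4×4 state matrix; this sketches the ShiftRows step only, with the state shown as rows of example byte values:

```python
def shift_rows(state):
    """AES ShiftRows: rotate row i of the 4x4 state left by i positions;
    bytes that 'fall off' the left re-enter on the right."""
    return [row[i:] + row[:i] for i, row in enumerate(state)]

state = [
    [0x00, 0x01, 0x02, 0x03],  # row 0: not shifted
    [0x10, 0x11, 0x12, 0x13],  # row 1: shifted left by 1
    [0x20, 0x21, 0x22, 0x23],  # row 2: shifted left by 2
    [0x30, 0x31, 0x32, 0x33],  # row 3: shifted left by 3
]
for row in shift_rows(state):
    print([hex(b) for b in row])
```

The same 16 bytes remain in the state; only their positions relative to each other change.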

Mixing of Columns

Each column of four bytes is now transformed using a special mathematical function. This function takes as input the four bytes of one column and outputs four completely new bytes, which replace the original column. The result is another new matrix consisting of 16 new bytes. It should be noted that this step is not performed in the last round.

Addition of Round Key

The 16 bytes of the matrix are now considered as 128 bits and are XORed with the 128 bits of the round key. If this is the last round, the output is the ciphertext; otherwise, the resulting 128 bits are interpreted as 16 bytes and another similar round begins.

Decryption Process

The process of decryption of an AES ciphertext is similar to the encryption process in the reverse order. Each round consists of the four processes conducted in the reverse order −

– Add round key

– Mix columns

– Shift rows

– Byte substitution

Since the sub-processes in each round run in reverse order, unlike in a Feistel cipher, the encryption and decryption algorithms must be implemented separately, although they are very closely related.

AES Features

NIST specified the new AES algorithm must be a block cipher capable of handling 128-bit blocks, using keys sized at 128, 192, and 256 bits. Other criteria for being chosen as the next AES algorithm included the following:

Security

Competing algorithms were to be judged on their ability to resist attack — as compared to other submitted ciphers. Security strength was to be considered the most important factor in the competition.

Cost

Intended to be released on a global, nonexclusive, and royalty-free basis, the candidate algorithms were to be evaluated on computational and memory efficiency.

Implementation

Factors to be considered included the algorithm’s flexibility, suitability for hardware or software implementation, and overall simplicity.

Advantages

1. AES is a very strong algorithm.
2. It supports keys of up to 256 bits.
3. It requires less memory space.
4. It gives rapid results.

Disadvantages

1. Theoretical attacks more effective than brute force are known.
2. Its design is 32-bit oriented and does not take full advantage of 64-bit platforms.


## 5G Technology – Working, Applications

At the end of 2018, the industry association 3GPP defined any system using “5G NR” (5G New Radio) software as “5G”, the fifth generation of cellular network technology. 5G brings three new aspects: higher speed, lower latency, and the connection of many more devices, both as sensors and as IoT devices. Current 5G systems are non-stand-alone networks because they still need active 4G support for the initial connection; several more years of development are needed before 5G becomes a stand-alone system.

The 5th generation mobile network offers key technological features beyond what legacy 4G currently provides (5GPPP 2015).

1. Very low latency: less than 1 ms.
2. Higher data speeds: up to 10 Gbps.
3. Significantly higher wireless capacity (mmWave spectrum), allowing massive device connectivity.
4. Reduced energy consumption.
5. Unconventional resource virtualization.
6. On-demand, service-oriented resource allocation.
7. Automated management and orchestration.
8. Multi-tenancy.

How 5G Works

5G technology will introduce advances throughout network architecture. 5G New Radio, the global standard for a more capable 5G wireless air interface, will cover spectrum not used in 4G. New antennas will incorporate a technology known as massive MIMO (multiple input, multiple output), which enables multiple transmitters and receivers to transfer more data at the same time. But 5G technology is not limited to the new radio spectrum: it is designed to support a converged, heterogeneous network combining licensed and unlicensed wireless technologies, which will increase the bandwidth available to users.

5G architectures will be software-defined platforms, in which networking functionality is managed through software rather than hardware. Advancements in virtualization, cloud-based technologies, and IT and business process automation enable 5G architecture to be agile and flexible and to provide anytime, anywhere user access. 5G networks can create software-defined subnetwork constructs known as network slices. These slices enable network administrators to dictate network functionality based on users and devices.

5G also enhances digital experiences through machine-learning (ML)-enabled automation. Demand for response times within fractions of a second (such as those for self-driving cars) requires 5G networks to enlist automation with ML and, eventually, deep learning and artificial intelligence (AI). Automated provisioning and proactive management of traffic and services will reduce infrastructure costs and enhance the connected experience.

Applications of 5G Technology

Autonomous Vehicles

Autonomous vehicles are one of the most anticipated 5G applications. Vehicle technology is advancing rapidly to support the autonomous vehicle future. 5G networks will be an enormous enabler for autonomous vehicles, due to the dramatically reduced latency, as vehicles will be able to respond 10-100 times faster than over current cellular networks. The ultimate goal is a vehicle-to-everything (V2X) communication network. This will enable vehicles to automatically respond to objects and changes around them almost instantaneously. A vehicle must be able to send and receive messages in milliseconds to brake or change direction in response to road signs, hazards, and people crossing the street.
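
These latency figures translate directly into distance traveled before a vehicle can react to a network message. A back-of-the-envelope calculation, with illustrative speed and latency values:

```python
def reaction_distance_m(speed_kmh, latency_ms):
    """Distance a vehicle covers before a network message arrives."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# At 100 km/h: a typical 4G round trip (~50 ms) versus the 5G target (1 ms)
d_4g = reaction_distance_m(100, 50)  # about 1.39 m
d_5g = reaction_distance_m(100, 1)   # about 0.03 m
print(f"4G: {d_4g:.2f} m, 5G: {d_5g:.2f} m")
```

Cutting latency from 50 ms to 1 ms shrinks the "blind" distance by a factor of 50, which is why V2X braking decisions demand 5G-class latency.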

5G IoT in Smart City Infrastructure and Traffic Management

Many cities around the world today are deploying intelligent transportation systems (ITS), and are planning to support connected vehicle technology. Aspects of these systems are relatively easy to install using current communications systems that support smart traffic management to handle vehicle congestion and route emergency vehicles. Connected vehicle technology will enable bidirectional communications from vehicle to vehicle (V2V), and vehicle to infrastructure (V2I), to promote safety across transportation systems. Smart cities are now installing sensors in every intersection to detect movement and enable connected and autonomous vehicles to react as needed.

5G IoT Applications in Industrial Automation

The key benefits of 5G in the industrial automation space are wireless flexibility, reduced costs, and the viability of applications that are not possible with current wireless technology. With 5G, industrial automation applications can cut the cord and go fully wireless, enabling more efficient smart factories.

Augmented Reality (AR) and Virtual Reality (VR)

The low latency of 5G will make AR and VR applications both immersive and far more interactive. In industrial applications, for example, a technician wearing 5G AR goggles could see an overlay of a machine that would identify parts, provide repair instructions, or show parts that are not safe to touch. The opportunities for highly responsive industrial applications that support complex tasks will be extensive. In business environments, you can have AR meetings where it appears two people are sitting together in the same room, turning boring phone or 2D video conferences into more interactive 3D gatherings. Sporting events and experiences will likely be some of the top applications for 5G in the consumer space. Anytime you need to react quickly to a stimulus, such as in a sports training application, it must happen with minimal latency.

5G IoT Applications for Drones

Drones have a vast and growing set of use cases today beyond the consumer use for filming and photography. With 5G, however, you will be able to put on goggles to “see” beyond current limits with low latency and high-resolution video. 5G will also extend the reach of controllers beyond a few kilometers or miles. These advances will have implications for use cases in search and rescue, border security, surveillance, drone delivery services, and more.

1.  Increased speed and bandwidth.
2.  Greater device density aids mobile e-commerce.
3.  Improved WAN connections.
4.  Better battery life for remote IoT devices.
5.  Enhanced security with hardened endpoints.

Reference

## Resource Provisioning: A Significant View

The cloud computing paradigm offers users rapid, on-demand access to computing resources such as CPU, RAM, and storage, with minimal management overhead. Recent commercial cloud platforms organize a shared resource pool for serving their users. Virtualization technologies help cloud providers pack their resources into different types of virtual machines (VMs) for allocation to cloud users. Under static provisioning, the cloud assembles its available resources into different types of VMs based on simple heuristics or historical VM demand patterns, before the auction starts. Under dynamic provisioning, the cloud conducts VM assembling in an online fashion upon receiving VM bundle bids, targeting maximum possible social welfare given the current bid profile [15].
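
Static provisioning based on historical demand can be sketched as a simple greedy heuristic: carve the shared pool into VM types in rounds proportional to past demand. The VM types, pool sizes, and weights below are hypothetical, and a real cloud would also account for pricing and placement:

```python
# Each VM type is (cpu_cores, ram_gb); demand weights come from history.
VM_TYPES = {
    "small":  (1, 2),
    "medium": (2, 8),
    "large":  (8, 32),
}

def static_provision(pool_cpu, pool_ram, demand_weights):
    """Greedily assemble VMs in rounds proportional to demand weights."""
    counts = {t: 0 for t in VM_TYPES}
    progress = True
    while progress:
        progress = False
        for vm_type, weight in demand_weights.items():
            cpu, ram = VM_TYPES[vm_type]
            for _ in range(weight):
                if pool_cpu >= cpu and pool_ram >= ram:
                    pool_cpu -= cpu
                    pool_ram -= ram
                    counts[vm_type] += 1
                    progress = True
    return counts, (pool_cpu, pool_ram)

counts, leftover = static_provision(
    pool_cpu=32, pool_ram=128,
    demand_weights={"small": 2, "medium": 1, "large": 1},
)
print(counts, leftover)
```

Dynamic provisioning would instead run a step like this each time a bundle bid arrives, choosing the assembly that maximizes welfare for the current bid profile rather than following fixed weights.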

The system model for a cloud environment comprises a cloud producer, a virtual machine repository, cloud brokers, and cloud consumers, as shown in Figure

Services are offered to users on a rental basis to run their applications, with pay-by-the-time billing for creating instances. Since these services are publicly available, they are often referred to as public clouds. Parallel to these are private clouds, which are managed for a solitary purpose and dedicated to either intra-organization or single-consumer services. A number of hybrid clouds, i.e., combinations of private and public clouds, are also available to consumers. Many cloud services are also available that share infrastructure between several organizations from a specific community with common concerns, for example security, compliance, and jurisdiction. These can be managed internally or by a third party, and hosted internally or externally [16]. All these models are shown in Figure

Resource provisioning will vary from consumer to consumer as requirements vary. In general, a consumer makes a request to the producer for a specific resource. On receiving the consumer's request, the producer searches its list of available resources. If the resource is available, the producer allocates it based on the priority of the requests for that particular resource. If the resource is not available, the consumer has to send a request to another resource provider. In such cases the producer encounters a matchmaking problem, i.e., for every request made by different consumers, the producer has to initiate the search mechanism. The consumer, on the other end, will send the request to multiple producers and opt for the fastest available resource [17].
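
The producer-side request flow described above can be sketched as a minimal matchmaking loop; the resource and consumer names below are hypothetical:

```python
def handle_request(available, resource, priority_queue):
    """Producer-side matchmaking: allocate if available, else refuse.

    `available` maps resource names to free counts; `priority_queue`
    is a list of (priority, consumer) pairs waiting for `resource`.
    Returns the winning consumer, or None if the consumer must try
    another producer.
    """
    if available.get(resource, 0) > 0 and priority_queue:
        available[resource] -= 1
        # The highest-priority pending request wins the resource.
        priority_queue.sort(reverse=True)
        _, consumer = priority_queue.pop(0)
        return consumer
    return None

pool = {"gpu-node": 1}
waiting = [(1, "consumer-a"), (5, "consumer-b")]
print(handle_request(pool, "gpu-node", waiting))  # consumer-b (priority 5)
print(handle_request(pool, "gpu-node", waiting))  # None: pool exhausted
```

The `None` return is the point where, in the text's model, the consumer falls back to querying other producers and takes whichever responds with an available resource first.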