
Ant Colony Optimization Algorithm in Machine Learning

There are increasing efforts in searching for and developing algorithms that can find solutions to combinatorial optimization problems. In this vein, the Ant Colony Optimization metaheuristic takes inspiration from biology and has produced successively more efficient algorithm variants.

Overview

Ant Colony Optimization (ACO) is a paradigm for designing metaheuristic algorithms for combinatorial optimization problems. The essential trait of ACO algorithms is the combination of a priori information about the structure of a promising solution with a posteriori information about the structure of previously obtained good solutions.

ACO is a class of algorithms, whose first member, called the Ant System, was initially proposed by Colorni, Dorigo, and Maniezzo. The main underlying idea, loosely inspired by the behavior of real ants, is that of a parallel search over several constructive computational threads based on local problem data and on a dynamic memory structure containing information on the quality of previously obtained results. The collective behavior emerging from the interaction of the different search threads has proved effective in solving combinatorial optimization (CO) problems.

More specifically, we can say that “Ant Colony Optimization (ACO) is a population-based, general search technique for the solution of difficult combinatorial problems which is inspired by the pheromone trail laying behavior of real ant colonies.”

ACO Principle

Ant Colony Optimization principles are based on the natural behavior of ants. In their daily life, one of the tasks ants have to perform is to search for food, in the vicinity of their nest. While walking in such a quest, the ants deposit a chemical substance called pheromone in the ground.

At first, the ants wander randomly. When an ant finds a source of food, it walks back to the colony leaving "markers" (pheromones) that show the path has food. When other ants come across the markers, they are likely to follow the path with a certain probability. If they do, they then populate the path with their own markers as they bring the food back. As more ants find the path, it gets stronger until there are a couple of streams of ants traveling to various food sources near the colony. Because the ants drop pheromones every time they bring food, shorter paths are more likely to be stronger, hence optimizing the "solution."

Ant Colony Optimization (ACO) is an example of how inspiration can be drawn from seemingly random, low-level behavior to counter problems of great complexity. A specific focus lies on the collective behavior of ants after being confronted with a choice of path when searching for a food source (see Figure 1). Ants deposit pheromones on the ground having selected a path, with the result that fellow ants tend to follow the path with a higher pheromone concentration. This form of communication allows ants to transport food back to their nest in a highly effective manner. Following random fluctuations, one of the bridges presents a higher pheromone concentration. Eventually, the entire colony converges toward the use of the same bridge.
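The positive-feedback loop described above can be illustrated with a small simulation. This is a toy sketch rather than a faithful model of ant behavior: two "bridges" start with equal pheromone, ants choose probabilistically in proportion to pheromone strength, and random early choices get amplified.

```python
import random

def choose_bridge(pheromone, rng, alpha=2.0):
    """Pick a bridge with probability proportional to pheromone**alpha."""
    weights = [p ** alpha for p in pheromone]
    r = rng.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # guard against float rounding

def simulate(n_ants=1000, deposit=1.0, seed=7):
    """Two equal bridges: random early fluctuations get amplified."""
    rng = random.Random(seed)
    pheromone = [1.0, 1.0]            # both trails start equal
    for _ in range(n_ants):
        bridge = choose_bridge(pheromone, rng)
        pheromone[bridge] += deposit  # every crossing reinforces that trail
    return pheromone

trails = simulate()
```

Running the simulation repeatedly shows that one trail usually ends up far stronger than the other, mirroring the convergence toward a single bridge shown in Figure 1.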


Figure 1: This shows the potential paths a colony can take from nest (N) to food (F), with the route eventually converging as a result of random fluctuations in pheromone deposits.

ACO Algorithm

The ant colony algorithm is an algorithm for finding optimal paths, based on the behavior of ants searching for food. Here we present the general ACO algorithm.

Table 1: ACO Algorithm

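The loop in Table 1 can be sketched in Python, here applied to the travelling salesman problem. The parameter values (alpha, beta, rho, q) are illustrative defaults, not tuned settings.

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            rho=0.5, q=1.0, seed=42):
    """Basic Ant System for the symmetric travelling salesman problem.

    dist  : square matrix of pairwise distances
    alpha : influence of the pheromone trail
    beta  : influence of the heuristic desirability (1 / distance)
    rho   : evaporation rate
    q     : pheromone deposit constant
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on every edge
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # transition probability ~ tau^alpha * (1/dist)^beta
                weights = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                           for j in unvisited]
                r = rng.uniform(0, sum(w for _, w in weights))
                acc = 0.0
                for j, w in weights:
                    acc += w
                    if r <= acc:
                        break
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        # evaporation on all edges
        for row in tau:
            for j in range(n):
                row[j] *= 1.0 - rho
        # deposit: shorter tours lay proportionally more pheromone
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Toy instance: four cities on the corners of a unit square (optimal length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
         for (bx, by) in pts] for (ax, ay) in pts]
tour, length = aco_tsp(dist, n_ants=10, n_iters=30)
```

On small instances like this, the algorithm quickly converges to the optimal tour; on larger ones, the quality depends strongly on the parameter choices.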

References

[1] Paul Sharkey, “Ant Colony Optimization: Algorithms and Applications”, March 6, 2014, available online at: http://www.lancaster.ac.uk/pg/sharkeyp/Topic1.pdf

[2] Maniezzo, Vittorio, and Antonella Carbonaro, "Ant colony optimization: an overview", In Essays and surveys in metaheuristics, pp. 469-492, Springer, Boston, MA, 2002.

[3] Parsons, Simon, "Ant Colony Optimization by Marco Dorigo and Thomas Stützle, MIT Press, 305 pp.", The Knowledge Engineering Review 20, no. 1 (2005): 92.

Layered View of Cyber Security Framework

What is Cyber Security and Why it is Required

The term cyber security is often used interchangeably with the term information security. Cyber security is the activity of protecting information and information systems (networks, computers, databases, data centers, and applications) with appropriate procedural and technological security measures. Cybersecurity has become a matter of global interest and importance. It refers to a set of techniques used to protect the integrity of networks, programs, and data from attack, damage, or unauthorized access.

Definition

Cyber security is the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance, and technologies that can be used to protect the cyber environment and organization and user’s assets. Organization and user assets include connected computing devices, personnel, infrastructure, applications, services, telecommunications systems, and the totality of transmitted and/or stored information in the cyber environment. It strives to ensure the attainment and maintenance of the security properties of the organization and user’s assets against relevant security risks in the cyber environment.

Cyber security refers to the body of technologies, processes, and practices designed to protect networks, devices, programs, and data from attack, damage, or unauthorized access. Cyber security may also be referred to as information technology security.

Why is it required?

The core functionality involves protecting information and systems from major cyber threats. These cyber threats take many forms (e.g., application attacks, malware, ransomware, phishing, and exploit kits). Unfortunately, cyber adversaries have learned to launch automated and sophisticated attacks using these tactics – at lower and lower costs. As a result, keeping pace with security strategy and operations can be a challenge, particularly in government and enterprise networks where, in their most disruptive form, cyber threats often take aim at secret, political, military, or infrastructural assets of a nation, or its people. Some of the common threats are outlined below in detail.

  • Cyberterrorism is the disruptive use of information technology by terrorist groups to further their ideological or political agenda. This takes the form of attacks on networks, computer systems, and telecommunication infrastructures.
  • Cyber warfare involves nation-states using information technology to penetrate another nation’s networks to cause damage or disruption. In the U.S. and many other nations, cyber warfare has been acknowledged as the fifth domain of warfare (following land, sea, air, and space). Cyber warfare attacks are primarily executed by hackers who are well-trained in exploiting the intricacies of computer networks and operate under the auspices and support of nation-states. Rather than “shutting down” a target’s key networks, a cyber warfare attack may intrude into networks to compromise valuable data, degrade communications, impair such infrastructural services as transportation and medical services, or interrupt commerce.
  • Cyber espionage is the practice of using information technology to obtain secret information without permission from its owners or holders. Cyber espionage is most often used to gain strategic, economic, political, or military advantage, and is conducted using cracking techniques and malware.

Types of cyber security threats

Ransomware:  Ransomware is a type of malicious software. It is designed to extort money by blocking access to files or the computer system until the ransom is paid. Paying the ransom does not guarantee that the files will be recovered or the system restored.

Malware: Malware is a type of software designed to gain unauthorized access or to cause damage to a computer.

Social engineering: Social engineering is a tactic that adversaries use to trick you into revealing sensitive information. They can solicit a monetary payment or gain access to your confidential data. Social engineering can be combined with any of the threats listed above to make you more likely to click on links, download malware, or trust a malicious source.

Phishing: Phishing is the practice of sending fraudulent emails that resemble emails from reputable sources. The aim is to steal sensitive data like credit card numbers and login information. It’s the most common type of cyber attack. You can help protect yourself through education or a technology solution that filters malicious emails.
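As a toy illustration of such a filtering solution (not a production technique), a rule-based scorer can combine a few suspicion signals. The keyword lists, weights, and threshold below are invented for the example; real filters rely on machine-learned models trained on large corpora.

```python
# Illustrative signal lists; a real filter would learn these from data.
URGENCY = ("urgent", "immediately", "verify your account", "suspended")
CREDENTIALS = ("password", "login", "credit card", "ssn")

def phishing_score(subject, body, sender_domain, link_domains):
    """Return a heuristic score; higher means more suspicious."""
    text = (subject + " " + body).lower()
    score = 0
    score += sum(2 for kw in URGENCY if kw in text)      # urgency pressure
    score += sum(2 for kw in CREDENTIALS if kw in text)  # asks for secrets
    # links pointing somewhere other than the claimed sender are a red flag
    score += sum(3 for d in link_domains if d != sender_domain)
    return score

def is_suspicious(subject, body, sender_domain, link_domains, threshold=5):
    return phishing_score(subject, body, sender_domain, link_domains) >= threshold
```

For example, an email titled "URGENT: verify your account" that asks for a password and links to a domain other than the sender's would clear the threshold, while an ordinary internal message would not.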


Figure 1: Layered View of Cyber Security Framework

Cyber Security Trends

  • IT regulations improvement
  • Data theft turning into data manipulation
  • Demand will continue to rise for security skills
  • Security in the Internet of Things (IoT)
  • Attackers will target consumer devices
  • Attackers will become bolder, more commercial, and less traceable
  • Cyber risk insurance will become more common
  • New job titles appearing – CCO (chief cybercrime officer)

References

[1] Atul M. Tonge and Suraj S. Kasture, “Cyber security: challenges for society- literature review”, IOSR Journal of Computer Engineering (IOSR-JCE), Volume 12, Issue 2 (May. - Jun. 2013), pp. 67-75

[2] “What is Cyber security? A Definition of Cyber security”, available online at: https://www.paloaltonetworks.com/cyberpedia/what-is-cyber-security

[3] “What Is Cyber security?”, https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html

[4] Rossouw von Solms and Johan van Niekerk, “From information security to cyber security”, computers & security 38 (2013), pp. 97-102

Business Intelligence Cycle

What is Business Intelligence

Every business is dynamic in nature and is affected by various external and internal factors. These factors include external market conditions, competitors, internal restructuring and re-alignment, operational optimization, and paradigm shifts in the business itself. New regulations and restrictions, in combination with the above factors, contribute to the constant evolutionary nature of compelling, business-critical information; the kind of information that an organization needs to sustain and thrive. Business Intelligence (“BI”) is a broad term that encapsulates the process of gathering information pertaining to a business and the market it functions in.

What is it?

Business intelligence (BI) has two basic meanings related to the use of the term intelligence. The first, less frequent, is the human intelligence capacity applied in business affairs and activities: intelligence of business is a field investigating the application of human cognitive faculties and artificial intelligence technologies to management and decision support for different business problems. The second relates to intelligence as information valued for its currency and relevance: expert information, knowledge, and technologies for the efficient management of organizational and individual business. In this second sense, business intelligence is a broad category of applications and technologies for gathering, providing access to, and analyzing data for the purpose of helping enterprise users make better business decisions.

Business Intelligence (BI) is a vital subject that covers a vast area of interest for today’s businessmen. BI consists of both internal and external categories that deal with the ability of a company to determine what its competitors are doing as well as understand what forces may be at work against them. Finally, how does your business incorporate the data that it collects into useful information yielding a competitive advantage? The field of BI is frequently murky and can easily cross the confused boundaries of business ethics as well as federal law. Using current academic literature, case studies, and an interview with a BI provider, we have outlined the key aspects of BI that your business needs to understand in today’s competitive environment.


Figure 1: Business Intelligence Cycle

Defining BI

The term Business Intelligence (BI) refers to technologies, applications, and practices for the collection, integration, analysis, and presentation of business information. The purpose of Business Intelligence is to support better business decision-making. Essentially, Business Intelligence systems are data-driven Decision Support Systems (DSS). Business Intelligence is sometimes used interchangeably with briefing books, report and query tools, and executive information systems.

“Business Intelligence is the art of gaining a business advantage from data by answering fundamental questions, such as how various customers rank, how business is doing now and if continued the current path, what clinical trials should be continued and which should stop having money dumped into!”

With a strong BI, companies can support decisions with more than just a gut feeling. Creating a fact-based “decisioning” framework via a strong computer system provides confidence in any decisions made.

Business Intelligence (BI) is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information which can be used to enable more effective strategic, tactical, and operational insights and decision-making. Within this are included a variety of technologies, including data quality and master data management.

BI Examples

Business intelligence (BI) is the use of data analysis in taking strategic decisions in the business environment. This definition might seem somewhat abstract if it is not illustrated with some concrete examples.

Stock Optimization

Sectors with a pronounced seasonal business cycle often find it very difficult to optimize their stock. For example, if sales of a particular product shoot up in summer or at Christmas, it is a big challenge to store the right amount of stock in order to maximize profit.

To address this issue, some companies in the canning, preserving and general food sector have been able to increase profitability by nearly 10% using BI techniques based on:

  • The adoption of a decision support system (DSS).
  • The exhaustive analysis of historical sales and stocktaking data for warehouse products.

In many cases, the results obtained have made possible a much more efficient and profitable redesign of the entire logistical and productive warehousing process.
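A minimal sketch of the historical-sales analysis behind such a DSS: average past sales per month, then add a safety buffer to set stock targets. The sales figures and the safety factor below are invented for the example.

```python
from collections import defaultdict

def monthly_stock_targets(sales_history, safety_factor=1.2):
    """Suggest per-month stock levels from historical sales.

    sales_history: list of (year, month, units_sold) records.
    safety_factor: buffer above the historical monthly average.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for year, month, units in sales_history:
        totals[month] += units
        counts[month] += 1
    return {month: round(totals[month] / counts[month] * safety_factor)
            for month in totals}

# Toy history: canned-food sales that spike in December.
history = [
    (2022, 6, 100), (2022, 12, 400),
    (2023, 6, 120), (2023, 12, 440),
]
targets = monthly_stock_targets(history)
```

Real systems would add trend and promotion effects, lead times, and stock-out costs, but the core idea is the same: let the historical data, not intuition, set the seasonal stock levels.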

Increasing Customer Loyalty

Business intelligence processes are also very useful for identifying the most profitable customers of, for example, a supermarket or clothing chain, who can subsequently be brought into loyalty schemes.

To do this, a great deal of data must be correctly analysed in order to find the ideal profile: age, sex, geographical location, marital status, number of children, etc. A good way of obtaining this information might be the creation of "discount cards", where, in exchange for a card, the client has to provide a range of personal details.

Detecting and Correcting Budget Deviations

There are plenty of companies, especially large ones, that are affected by significant budget deviations: discrepancies between the operational parameters and targets estimated at the beginning of the year and the actual results produced twelve months later.

An analysis of the strategic objectives of the company itself by means of a Balanced Scorecard can quickly detect the reason for these deviations and enable their rapid correction. Sometimes, the problem might be a mismatch between a company's advertising and marketing operations and its real needs.

Problems for Small Businesses

The view that BI is only of any use to large companies is as widely held as it is wrong. Simple business intelligence systems can be of great help to small businesses in deciding, for example, what the best opening hours are, or what day of the week is best to take off.

BI Vendors

The BI market is in constant flux. New vendors frequently appear, and just as frequently disappear or are acquired by larger companies. Some notable BI vendors are listed below:

Cloud-based 1010data provides big data discovery options within the same location where the data is stored, speeding up important business decisions by giving all users easier, quicker access with fewer clicks.

Actuate’s BIRT business intelligence software, known for its focus on open source, utilizes an Eclipse platform to streamline reports and help generate useful insights with three unique types of reporting tools.

Alteryx’s BI platform is powered by a unique data blending capability, which seamlessly unites cloud data, third-party data, and internal company information, creating a smoother, more efficient workflow.

Arcplan offers its customers two platforms for delivering Business Intelligence functionality, the Enterprise platform and the Engage platform, both of which can integrate with other BI tools.

Birst’s BI solutions include a wide variety of features, such as big data, data warehouse automation, and data mashups. Users can choose between two platforms based on their needs.

References

[1] Dejan Zdraveski and Igor Zdravkoski, “Business Intelligence Tools for Data Analysis and Decision Making”.

[2] Jayanthi Ranjan, “Business Intelligence: Concepts, Components, Techniques and Benefits”, Journal of Theoretical and Applied Information Technology, Volume 9, Number 1, pp 060 – 070, 2005-2009

[3] Greg Nelson, "Introduction to the SAS® 9 Business Intelligence Platform: A Tutorial", In SAS Global Forum. 2007.

[4] Captio, “Some practical examples of the use of business intelligence”, available online at: https://www.captio.com/blog/some-practical-examples-of-the-use-of-business-intelligence

[5] Justin Heinze, “The Ultimate List of Business Intelligence Vendors”, available online at: https://www.betterbuys.com/bi/business-intelligence-vendors/

Social Network Community Structure

What is Community Detection

Community detection is one of the most relevant topics related to the machine learning technique of clustering. The term community is used to indicate a group of similar objects based on their behavior. Advances in technology and computation have made it possible to collect and mine a massive amount of real-world data. Mining such “big data” allows us to understand the structure and function of real systems and to find unknown and interesting patterns. This section provides a brief overview of community structure.

General

In today's interconnected world, with the rise of online social networks, graph mining and community detection have become highly topical. Understanding the formation and evolution of communities is a long-standing research topic in sociology, in part because of its fundamental connections with the studies of urban development, criminology, social marketing, and several other areas. With the increasing popularity of online social network services like Facebook, the study of community structures assumes even more significance. Identifying and detecting communities are not only of particular importance but have immediate applications. For instance, for effective online marketing, such as placing online ads or deploying viral marketing strategies [1], identifying communities in social networks could often lead to more accurate targeting and better marketing results. Although online user profiles or other semantic information are helpful for discovering user segments, this kind of information is often at a coarse-grained level and overlooks the community structure that reveals rich information at a fine-grained level.

Significance

Many real-world complex systems, such as social or computer networks, can be modeled as large graphs, called complex networks. Because of the increasing volume of data and the need to understand such huge systems, complex networks have been extensively studied over the last ten years. Communities clearly overlap in real-world systems, especially in social networks, where every individual belongs to various communities: family, colleagues, groups of friends, etc. Finding all these overlapping communities in a huge graph is very complex: in a graph of n nodes, there are 2^n possible communities, and an even larger number of possible community structures. Even if these communities could be efficiently computed, the results might be uninterpretable. Because of the complexity of overlapping community detection, most studies have restricted the community structure to a partition, where each node belongs to one and only one community [14].

Identifying network communities can be viewed as a problem of clustering a set of nodes into communities, where a node can belong to multiple communities at once. Because nodes in communities share common properties or attributes, and because they have many relationships among themselves, there are two sources of data that can be used to perform the clustering task. The first is the data about the objects (i.e., nodes) and their attributes. Known properties of proteins, users’ social network profiles, or authors’ publication histories may tell us which objects are similar, and to which communities or modules they may belong. The second source of data comes from the network and the set of connections between the objects. Users form friendships, proteins interact, and authors collaborate [14].

Definition

Community detection is a key to understanding the structure of complex networks, and ultimately extracting useful information from them. An excessively studied structural property of real-world networks is their community structure. The community structure captures the tendency of nodes in the network to group together with other similar nodes into communities. This property has been observed in many real-world networks. Despite excessive studies of the community structure of networks, there is no consensus on a single quantitative definition for the concept of community and different studies have used different definitions. A community, also known as a cluster, is usually thought of as a group of nodes that have many connections to each other and few connections to the rest of the network. Identifying communities in a network can provide valuable information about the structural properties of the network, the interactions among nodes in the communities, and the role of the nodes in each community [15].

In the clustering framework, a community is a cluster of nodes in a graph, but a very important question is what a cluster is. Most of the time, the objects in a cluster must be more similar to each other than to objects outside the cluster: the objects are clustered or grouped based on the principle of maximizing the intra-class similarity and minimizing the inter-class similarity. Note that this definition implies the need to define a similarity measure and/or a cluster fitness measure.
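A minimal sketch of this clustering idea is the label propagation heuristic, in which every node repeatedly adopts the label most common among its neighbours. It is shown here on a toy graph of two triangles joined by a bridge edge, as a simplified illustration rather than a production community detection method.

```python
import random

def label_propagation(adj, n_rounds=20, seed=1):
    """Each node repeatedly adopts the label most common among its
    neighbours (ties broken at random) until labels stabilize.

    adj: dict mapping node -> list of neighbouring nodes.
    Returns a dict mapping node -> community label.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}  # every node starts in its own community
    nodes = list(adj)
    for _ in range(n_rounds):
        rng.shuffle(nodes)        # asynchronous updates in random order
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts.values())
            labels[v] = rng.choice(
                [lbl for lbl, c in counts.items() if c == best])
    return labels

# Toy graph: two triangles joined by a single bridge edge (2-3).
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
communities = label_propagation(adj)
```

On this graph the two triangles typically end up with distinct labels, matching the intuition of dense groups sparsely connected to each other, though label propagation is stochastic and can occasionally merge them.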


Figure 1: Social Network Community Structure

Community detection has been widely used in social network analysis to study the behavior and interaction patterns of people in social networks. Community detection is also important to identify powerful nodes in the network, based on their structural position to initiate influential campaigns.

References

[1] Chayant Tantipathananandh, “Detecting and Tracking Communities in Social Networks”, Dissertation, Northwestern University, 2013

[2] J. Chang and D. M. Blei, Relational topic models for document networks. In AISTATS ’09, 2009

[3] Clara Granell, Sergio G´omez and Alex Arenas, “Data clustering using community detection algorithms”, Int. J. Complex Systems in Science volume 1 (2011), pp. 21–24

[4] S. Fortunato, “Community detection in graphs”, Physics Reports, vol. 486, no. 3-5, pp. 75 – 174, 2010, online available at: http://www.sciencedirect.com/science/article/B6TVP-4XPYXF1- 1/2/99061fac6435db4343b2374d26e64ac1

Application of Brain Computer Interface

What is Brain Computer Interface

As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to making some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone's brain that would allow them to see, hear, or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn't just about convenience: for severely disabled people, the development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we'll learn all about how BCIs work, their limitations, and where they could be headed in the future.

What is it?

Brain-computer interface technology represents a rapidly growing field of research with practical application systems. Its contributions in medical fields range from prevention to neuronal rehabilitation for serious injuries. Brain-Computer Interface (BCI) technology is a powerful communication tool between users and systems. It does not require any external devices or muscle intervention to issue commands and complete the interaction.

Definition

A BCI is a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device to carry out a desired action. Thus, BCIs do not use the brain’s normal output pathways of peripheral nerves and muscles. This definition strictly limits the term BCI to systems that measure and use signals produced by the central nervous system (CNS). Thus, for example, a voice-activated or muscle-activated communication system is not a BCI. Furthermore, an electroencephalogram (EEG) machine alone is not a BCI because it only records brain signals but does not generate an output that acts on the user’s environment. It is a misconception that BCIs are mind-reading devices. Brain-computer interfaces do not read minds in the sense of extracting information from unsuspecting or unwilling users but enable users to act on the world by using brain signals rather than muscles. The user and the BCI work together. The user, often after a period of training, generates brain signals that encode intention, and the BCI, also after training, decodes the signals and translates them into commands to an output device that accomplishes the user’s intention.

Following is the depiction of the Brain-computer interaction scenario in Figure 1:


Figure 1: Basic Block Diagram of Brain-Computer Interface System incorporating Signal detection, processing, and deployment
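The acquire-analyze-translate pipeline of Figure 1 can be sketched with simulated data. The signal model, the power feature, and the threshold below are all invented for illustration; a real BCI would read from an EEG amplifier and use far richer features and trained classifiers.

```python
import math
import random

def acquire_signal(active, n_samples=256, seed=0):
    """Simulate one EEG window; a real BCI would read from an amplifier.
    'Active' windows carry a stronger oscillation (purely illustrative)."""
    rng = random.Random(seed)
    amplitude = 4.0 if active else 1.0
    return [amplitude * math.sin(2 * math.pi * 10 * t / n_samples)
            + rng.gauss(0, 0.5) for t in range(n_samples)]

def extract_feature(window):
    """Signal power (mean square) as a single crude feature."""
    return sum(x * x for x in window) / len(window)

def translate(feature, threshold=4.0):
    """Map the feature to an output-device command."""
    return "MOVE_CURSOR" if feature > threshold else "IDLE"

# Full pipeline: acquisition -> feature extraction -> translation.
command = translate(extract_feature(acquire_signal(active=True)))
```

The key structural point survives even in this toy version: the BCI never "reads the mind"; it only maps a measured signal, through a trained mapping, to a command for an output device.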

BCI Research areas

Brain-computer interface is a method of communication based on the neural activity generated by the brain and is independent of its normal output pathways of peripheral nerves and muscles. The goal of BCI is not to determine a person’s intent by eavesdropping on brain activity, but rather to provide a new channel of output for the brain that requires voluntary adaptive control by the user.

  • Bioengineering applications: Devices with assisting purposes for disabled people.
  • Human subject monitoring: Research and detection of sleep disorders, neurological diseases, attention monitoring, and/or overall “mental state”.
  • Neuroscience research: real-time methods for correlating observable behavior with recorded neural signals.
  • Human-Machine Interaction: Interface devices between humans, computers, or machines.

Brain-computer interfaces have contributed to various fields of research. As summarized in Figure 2, they are involved in the medical, neuroergonomics and smart environment, neuromarketing and advertisement, educational and self-regulation, games and entertainment, and security and authentication fields.


Figure 2: Applications of BCI

  1. Medical

The healthcare field has a variety of applications that could take advantage of brain signals in all associated phases, including prevention, detection, diagnosis, rehabilitation, and restoration.

  2. Neuroergonomics and smart environment

As previously mentioned, deploying brain signals is not exclusive to the medical field. Smart environments such as smart houses, workplaces, or transport systems could also exploit brain-computer interfaces to offer further safety, comfort, and physiological control in humans' daily life. They are also expected to witness cooperation between Internet of Things (IoT) and BCI technologies.

  3. Neuromarketing and advertisement

The marketing field has also been of interest for BCI research. BCI-based assessment measures the attention generated while watching advertisements. Researchers have also considered the impact of another cognitive function in the neuromarketing field: estimating the memorization of TV advertisements, thus providing another method for advertising evaluation.

  4. Educational and self-regulation

Neurofeedback is a promising approach for enhancing brain performance by modulating human brain activity. It has entered educational systems, which can use brain electrical signals to determine how clearly studied information has been understood, establishing a personalized interaction with each learner according to the measured response.

  5. Games and entertainment

Entertainment and gaming applications have opened the market for nonmedical brain-computer interfaces. Various games have been presented, such as one in [81] where helicopters are made to fly to any point in either a 2D or 3D virtual world.

  6. Security and authentication

Security systems involve knowledge-based, object-based, and/or biometrics-based authentication. These have been shown to be vulnerable to several drawbacks, such as simple insecure passwords, shoulder surfing, theft, and non-cancelable biometrics. Cognitive biometrics, or electrophysiology, where only modalities using biosignals (such as brain signals) are used as sources of identity information, offers a solution to those vulnerabilities.

References

[1] Erik Andreas Larsen, “Classification of EEG Signals in a Brain-Computer Interface System”, Master Dissertation report, Norwegian University of Science and Technology, June 2011

[2] Abdulkader, Sarah N., Ayman Atia, and Mostafa-Sami M. Mostafa, "Brain computer interfacing: Applications and challenges", Egyptian Informatics Journal 16, no. 2 (2015): pp. 213-230.

[3] Ed Grabianowski, “How Brain-computer Interfaces Work”, available online at: https://computer.howstuffworks.com/brain-computer-interface.htm

[4] Shih, Jerry J., Dean J. Krusienski, and Jonathan R. Wolpaw, "Brain-computer interfaces in medicine", In Mayo Clinic Proceedings, Volume 87, Number 3, pp. 268-279, Elsevier, 2012.

Context Aware Computing

What is Context Aware Computing

Context-aware computing promises a smooth interaction between humans and technology, but few studies have been conducted regarding how autonomously an application should act. Context-aware computing is a style of computing in which situational and environmental information about people, places, and things is used to anticipate immediate needs and proactively offer enriched, situation-aware, and usable content, functions, and experiences. The notion of context is much more widely appreciated today. The term “context-aware computing” is commonly understood by those working in ubiquitous/pervasive computing, where context is felt to be key in the effort to disperse and enmesh computation into our lives.

Overview

Context is a powerful, longstanding concept in human-computer interaction. Interaction with computation occurs through explicit acts of communication (e.g., pointing to a menu item), while the context remains implicit (e.g., default settings). Context can be used to interpret explicit acts, making communication much more efficient. Thus, carefully embedding computing into the context of our lived activities can serve us with minimal effort on our part. Communication becomes not only effortless but also a natural fit with our ongoing activities.

A great deal of effort has gone into the field of context-aware computing over the past few years, building applications that have a greater awareness of the physical and social situations in which they are embedded. From a computational perspective, there are four goals for context-aware computing:

  • Increasing the number of input channels into computers
  • Pushing towards the more implicit acquisition of data
  • Creating better models that can take advantage of this increased input
  • Using the increased input and improved models in new and useful ways

Context-aware computing is not a new concept, but the ongoing mobile revolution makes it both necessary and feasible. Context-aware computing involves first acquiring context and then taking context-dependent actions.

  • Necessary, because the mobile phone display is small and information must be delivered with much higher relevance and precision to meet user needs.
  • Feasible, because small, lightweight mobile devices allow users to carry them almost everywhere, and much can be learned via a phone about its user’s habits and states.
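The two-step pattern described above, first acquiring context and then taking context-dependent actions, can be sketched in a few lines. This is a minimal illustration only: the sensor fields, thresholds, and action names are hypothetical, and a real system would read live sensors rather than stubbed values.

```python
def acquire_context():
    """Gather situational data about the user and device.
    Stubbed here with fixed readings; a real system would query sensors."""
    return {"location": "office", "ambient_light_lux": 40, "battery_pct": 15}

def act_on_context(ctx):
    """Map the acquired context to context-dependent actions via simple rules."""
    actions = []
    if ctx["ambient_light_lux"] < 100:
        actions.append("dim_backlight")        # dark room: lower the backlight
    if ctx["battery_pct"] < 20:
        actions.append("enable_power_saving")  # low battery: conserve energy
    if ctx["location"] == "office":
        actions.append("silence_ringer")       # workplace: mute notifications
    return actions

print(act_on_context(acquire_context()))
# -> ['dim_backlight', 'enable_power_saving', 'silence_ringer']
```

In practice the rule set would be replaced by learned models, as the third goal above suggests, but the acquire-then-act loop stays the same.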


Figure 1: Context-Aware Computing

Context-aware computing can be applied to benefit applications in many areas including but not limited to information retrieval, facility management, and productivity enhancement, in addition to the aforementioned three examples representing power management, health care, and commerce, respectively.

Defining Context Aware Computing

The Context-Aware Computing group uses "context knowledge", such as where we are, how we feel, and what we have done, to help machines infer our intentions and work with us. Context awareness is the ability of a system or system component to gather information about its environment at any given time and adapt its behavior accordingly. Contextual or context-aware computing uses software and hardware to automatically collect and analyze data in order to guide responses.

Context-aware computing is essentially a type of computing that anticipates cases of use; in other words, it behaves in customized ways based on the context of user activities. This can apply either to a user's activities on the device or to the physical environment in which the device is being used. Context-aware computing has much in common with the principles of human-computer interaction; one notable difference, however, is that most of the solutions delivering this more sophisticated functionality are applied at runtime, according to input, based on the overall context of that particular use.

Example

Examples of context-aware computing include mobile devices that switch between portrait and landscape orientation depending on how they are held. Another example is devices that adjust their screens and backlighting according to the amount of light in the room where they are used. A newer concept that could be called context-aware computing is the inclusion of mechanical and sensory elements in future mobile devices that help them adjust themselves to minimize damage when dropped.

Context-aware computing seeks to anticipate the ways that computers will need to support users in specific situations, whether indoors or outdoors, on manufacturing floors or in offices, or in any other setting where a person relies on a piece of hardware to complete a task. This is a major element in the design of cutting-edge technology for today's consumer and business markets.
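The orientation-switching example above can be reduced to a tiny decision rule: compare how much gravity each accelerometer axis senses and let the dominant axis pick the orientation. This is a simplified sketch under assumed conventions (x along the screen's width, y along its height, readings in units of g); real devices also debounce and handle face-up/face-down cases.

```python
def orientation(ax, ay):
    """Return the screen orientation from accelerometer x/y components (in g).
    The axis feeling more of gravity's pull determines how the device is held."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(orientation(0.1, 0.98))   # device held upright -> portrait
print(orientation(0.95, 0.2))   # device tipped on its side -> landscape
```

The ambient-light example works the same way: a light-sensor reading is compared against thresholds, and the backlight level is chosen from the result.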

References

[1] Chang, Edward Y., "Context-aware computing: opportunities and open issues", Proceedings of the VLDB Endowment 6, no. 11 (2013), pp. 1172-1173.

[2] Schilit, Bill, Norman Adams, and Roy Want, "Context-aware computing applications", In Mobile Computing Systems and Applications, WMCSA, First Workshop on, pp. 85-90, IEEE, 1994.

[3] Context-Aware Computing, available online at: https://www.techopedia.com/definition/31012/context-aware-computing
