
Digital Identity in Network Security

Having an identity and being able to express it have been important since early times. Digital identity mechanisms are at the core of the security of modern systems, networks, and applications. In an increasingly borderless and digital world, privacy and security cannot be ensured simply by building walls around sensitive information. Identity is the new frontier of privacy and security, where the very nature of an entity is what allows it to complete some transactions and bars it from others. To understand the importance of identity and the criticality of strong identity protocols that protect against cyber risk and suit the needs of transacting parties, it is essential to understand what identity is and its role in enabling transactions.

Overview of Digital Identity

Digital identity is said to be at the heart of many contemporary strategic modernizations and innovations, in areas ranging from crime prevention and internal and external security to new business models. These initiatives typically require disclosing personal information within a ubiquitous digital environment.

A digital identity is an online or networked identity adopted or claimed in cyberspace by an individual, organization, or electronic device. These users may also project more than one digital identity through multiple communities. In terms of digital identity management, key areas of concern are security and privacy.

Trusting the link between a real identity and a digital identity first requires someone to validate the identity, or in other words, to prove you are who you say you are. Once established, using a digital identity involves some type of authentication: a way to prove it is really you when you use digital connections such as the Internet. The more valuable the digital identity, the more work is required to validate it and establish secure authentication.

For example, you can set up webmail with no validation of your identity other than an email address, and then use the email address and a password to provide authentication. For something more valuable, like cell phone service, your carrier will make sure they know who you are and where to send your bills.
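To make the validation/authentication distinction concrete, here is a minimal Python sketch of the authentication step only, using salted password hashing from the standard library. The function names (register, authenticate) and their parameters are illustrative, not taken from any of the sources; a production system would add secure credential storage and rate limiting.

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    """Store only a salt and a derived key, never the raw password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, key

def authenticate(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Prove 'it is really you' by re-deriving the key from the claimed password."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_key)

salt, key = register("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, key))  # True
print(authenticate("wrong guess", salt, key))                   # False
```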

Definition of Digital Identity

This identity is the network or Internet equivalent to the real identity of a person or entity when used for identification in connections or transactions from PCs, cell phones, or other personal devices. Whether physical or digital in nature, identity is a collection of individual information or attributes that describe an entity and is used to determine the transactions in which the entity can rightfully participate.  Identities can be assigned to three main kinds of entities:

  • Individuals, the entity we most associate with identity;
  • Legal entities, like corporations, partnerships, and trusts; and
  • Assets, which can be tangible, e.g., cars, buildings, smartphones; or intangible, e.g., patents, software, data sets

The identity of each of these entities is based on all of its individual attributes, which fall into three main categories (a minimal data-model sketch follows this list):

  • Inherent: “Attributes that are intrinsic to an entity and are not defined by relationships to external entities.” Inherent attributes for individuals include age, height, date of birth, and fingerprints; for a legal entity they include business status and industry (e.g., retail, technology, or media); and for an asset they include the nature of the asset and the asset’s issuer.
  • Accumulated:  “Attributes that are gathered or developed over time.  These attributes may change multiple times or evolve throughout an entity’s lifespan.”  For individuals, these include health records, job history, Facebook friend lists, and sports preferences.
  • Assigned: “Attributes that are attached to the entity, but are not related to its intrinsic nature.  These attributes can change and generally are reflective of relationships that the entity holds with other bodies.”  For individuals, these include e-mail address, login IDs and passwords, telephone number, social security IDs, and passport numbers.
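As a rough illustration of this attribute model, the following Python sketch groups an entity's attributes into the three categories above and gates a transaction on required attributes. The class and field names are invented for illustration and are not part of any standard identity schema.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalIdentity:
    """An entity's identity as a collection of attributes, grouped as in the text."""
    entity_type: str                                  # "individual", "legal entity", or "asset"
    inherent: dict = field(default_factory=dict)      # e.g. date of birth, business status
    accumulated: dict = field(default_factory=dict)   # e.g. job history, health records
    assigned: dict = field(default_factory=dict)      # e.g. email address, passport number

    def can_transact(self, required: dict) -> bool:
        """Allow a transaction only if every required attribute is present and matches."""
        merged = {**self.inherent, **self.accumulated, **self.assigned}
        return all(merged.get(k) == v for k, v in required.items())

alice = DigitalIdentity(
    entity_type="individual",
    inherent={"age_over_18": True},
    assigned={"email": "alice@example.com"},
)
print(alice.can_transact({"age_over_18": True}))  # True
```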

Categories of Digital Identity

From time immemorial, identity systems have been based on face-to-face interactions and on physical documents and processes. But the transition to a digital economy requires radically different identity systems. In a world that is increasingly governed by digital transactions and data, our existing methods for managing security and privacy are no longer adequate. Data breaches, identity theft, and large-scale fraud are becoming more common. In addition, a significant portion of the world’s population lacks the necessary digital credentials to fully participate in the digital economy. Digital identity systems fall into five basic categories (a minimal sketch of the external/federated authentication idea follows the list).

  • The first is internal identity management. In this kind of system, the same party serves as both the identity provider and the relying party. For example, a company might let employees access different services based on their attributes.
  • The second type of system is external authentication. It’s similar to the first type of system, but with an extra set of identity providers to authenticate users. The advantage here is that users can use one set of credentials rather than maintaining different usernames and passwords for each service.
  • The third is centralized identity. In this type of system, one party (such as a government) is an identity provider that transfers user attributes to relying parties. An example is a citizen registry that lets users vote, file taxes, and so forth. A relying party can be a public entity or a private one; a private entity might access data after paying a fee and obtaining user consent.
  • Next are federated authentication systems where one identity provider uses a set of third parties to authenticate users to relying parties. These systems are similar to centralized identity systems except that a variety of private brokers issue the digital identities as a service to whoever subscribes.
  • Lastly, distributed identity systems connect many identity providers to many relying parties. This type of system sets users up with a digital “wallet” that serves as a universal login to multiple websites and applications. Generally, these systems are privately held and rely on common operating standards rather than a governing body.
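The sketch below illustrates, in highly simplified form, the idea shared by external and federated authentication: a relying party accepts an assertion signed by an identity provider instead of managing its own credentials. The shared key, field names, and message format are hypothetical; real deployments use standards such as SAML or OpenID Connect rather than this ad hoc scheme.

```python
import hashlib
import hmac
import json

IDP_SHARED_KEY = b"demo-key-shared-with-relying-party"  # hypothetical out-of-band secret

def idp_issue_assertion(user_id: str, attributes: dict) -> dict:
    """Identity provider signs a statement about the user it has authenticated."""
    payload = json.dumps({"sub": user_id, **attributes}, sort_keys=True).encode()
    tag = hmac.new(IDP_SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def relying_party_accept(assertion: dict) -> bool:
    """Relying party re-computes the tag instead of managing its own passwords."""
    expected = hmac.new(IDP_SHARED_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["tag"])

token = idp_issue_assertion("alice", {"role": "employee"})
print(relying_party_accept(token))  # True: one set of credentials, many services
```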

 

Figure 1: Categories of Digital Identity

References

[1] G. Ben Ayed, “Chapter 2:  Digital Identity”, Springer Theses, Springer International Publishing Switzerland 2014, pp. 11-55

[2] “What is digital identity?”, available online at: https://www.justaskgemalto.com/en/what-is-digital-identity/

[3] Irving Wladawsky-Berger, “Digital Identity: The Key to Privacy and Security in the Digital World”, available online at: http://ide.mit.edu/news-blog/blog/digital-identity-key-privacy-and-security-digital-world

[4] “Picture Perfect: A Blueprint for Digital Identity”, available online at: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Financial-Services/gx-fsi-digital-identity-online.pdf

 

 


An Introduction to Vehicular Ad Hoc Networks (VANET)

Vehicular Ad Hoc Networks (VANETs) are a technology that will enable connectivity for users on the move and help implement Intelligent Transportation Systems (ITS). In a VANET, nodes cannot move freely around an area or surface; their movements are restricted to the roads. A VANET is a type of Mobile Ad Hoc Network (MANET), which is a group of mobile wireless nodes that cooperatively form an IP-based network. A node communicates directly with nodes within its wireless communication range. Nodes of the MANET beyond each other's wireless range communicate using a multi-hop route through intermediate nodes. As the network topology changes with time, the multi-hop routes change as well. The best route is determined using a routing protocol such as DSDV, DSR, AODV, TORA, or ZRP.

Vehicular Ad Hoc Networks (VANETs) are used to provide communication among nearby vehicles and between vehicles and nearby fixed equipment, usually described as Road Side Units (RSUs). VANET technologies aim at enhancing traffic safety for drivers, providing comfort, and reducing transportation time and fuel consumption. A VANET uses moving cars as nodes to create a network: it turns every car into a wireless router or node, allowing cars approximately 100 to 300 meters apart to connect to each other and, car by car, create a network with a wide range. One of the greatest challenges of VANETs is to establish cost-effective connections between vehicles, or between vehicles and RSUs.

Vehicular Ad Hoc Networks (VANETs) are an emerging technology for intelligent inter-vehicle communication and seamless Internet connectivity, resulting in improved road safety, essential alerts, and access to comfort and entertainment services. The technology integrates WLAN/cellular and ad hoc networks to achieve continuous connectivity. Broadcasting in VANETs is emerging as a critical area of research. One of the challenges posed by this problem is that routing is often confined to vehicle-to-vehicle (V2V) scenarios rather than also utilizing the wireless infrastructure. At a fundamental level, safety and transport efficiency are a mandate for current car manufacturers, and these capabilities have to be provided by the cars on the road themselves rather than relying solely on the existing wireless communications infrastructure.

VANETs offer significant advantages to companies of any size. Vehicles gain access to high-speed Internet, which turns the automobile's onboard system from a convenient widget into essential productivity equipment, making nearly any Internet technology accessible in the car. This does raise specific safety concerns: for example, no one can safely type an email while driving. However, this does not limit VANET's potential as productivity equipment: it allows time that would otherwise be wasted waiting ("dead time") to be turned into time used to accomplish tasks ("live time").

A traveler can download email and turn a traffic jam into productive time, letting the onboard system read messages aloud while traffic is stuck. A passenger can browse the Internet while waiting in the car for a relative or friend. If a GPS system is integrated, it can provide traffic reports that help identify the fastest way to work. Finally, it would permit free calling services, such as Skype or Google Talk, among workers, reducing telecommunication charges.

The main goal of Vehicular Ad Hoc Networks (VANET) is to provide safety and comfort for passengers. To this end, special electronic devices will be placed inside each vehicle which will provide an Ad-hoc network and server communication. Each vehicle equipped with a VANET device will be a node in the ad-hoc network and can receive and relay other messages through the wireless network. There are also multimedia and internet connectivity facilities for passengers, all provided within the wireless coverage for each car.

Figure: Example of a Vehicular Ad Hoc Network (VANET)

Characteristics of VANET

  • Rapid topology changes and frequent fragmentation, resulting in a small effective network diameter
  • Virtually no power constraints
  • Variable, highly dynamic scale and network density
  • The driver might adjust his behavior in reaction to data received from the network, causing further topology changes

High dynamic topology: Vehicle speed and choice of path define the dynamic topology of a VANET. If we assume two vehicles moving away from each other at a speed of 60 mph (about 25 m/s) and a transmission range of about 250 m, then the link between these two vehicles will last only about 5 seconds (250 m / 50 m/s). This illustrates the highly dynamic topology.
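The arithmetic above can be expressed as a small estimator. The following Python sketch assumes a one-dimensional highway model with the two vehicles moving apart; the function name and signature are illustrative only.

```python
import math

def link_lifetime(p1, v1, p2, v2, tx_range=250.0):
    """Estimate how long two vehicles stay within radio range (1-D highway model,
    vehicles moving apart). p1, p2: positions in metres; v1, v2: velocities in m/s."""
    gap = abs(p2 - p1)
    separation_speed = abs(v2 - v1)       # rate at which the gap grows
    if separation_speed == 0:
        return math.inf                   # same speed: the link never expires in this model
    return max(tx_range - gap, 0.0) / separation_speed

# Two vehicles side by side, driving apart at 25 m/s each (about 60 mph):
print(link_lifetime(p1=0.0, v1=-25.0, p2=0.0, v2=25.0))  # 5.0 seconds
```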

Frequently disconnected network: The above feature means that roughly every 5 seconds, a node needs to establish a new link with a nearby vehicle to maintain seamless connectivity. When this fails, particularly in low-vehicle-density zones, frequent disruptions of network connectivity occur. Such problems are sometimes addressed by roadside deployment of relay nodes.

Mobility modeling and prediction: Maintaining connectivity therefore requires knowledge of node positions and movements, which are very difficult to predict given the nature and pattern of each vehicle's movement. Nonetheless, a mobility model and node prediction based on a predefined roadway model and vehicle speeds are of paramount importance for effective network design.

Communication environment: The mobility model varies greatly between highway and city environments, so node prediction and routing algorithms need to adapt to these changes. The highway mobility model, which is essentially one-dimensional, is rather simple and easy to predict. In the city mobility model, however, the street structure, variable node density, and the presence of buildings and trees that obstruct even short-distance communication make modeling very complex and difficult.


Features of VANET

  • The nodes in a Vehicular Ad Hoc Network (VANET) are vehicles and roadside units
  • The movement of these nodes is very fast
  • The motion patterns are restricted by road topology
  • Each vehicle acts as a transceiver, i.e., sending and receiving at the same time, creating a highly dynamic network that is continuously changing.
  • Vehicular density varies over time; for instance, it might increase during peak office hours and decrease at night.

Applications of VANET

Three major classes of applications are possible in a VANET:

  • Safety oriented
  • Convenience oriented
  • Commercial oriented

 Routing protocols in VANET

  • Ad-hoc routing
  • Position-based routing
  • Cluster routing
  • Broadcast-based routing
  • Geocast based routing

Ad hoc routing: AODV (Ad Hoc On-Demand Distance Vector) and DSR (Dynamic Source Routing) can be applied to VANETs. However, simulations of these algorithms in VANETs show frequent communication breaks due to the highly dynamic nature of the nodes. To meet the VANET challenges, these existing algorithms are suitably modified. One such model applies the following:

  • A highly partitioned highway scenario is used where most path segments are relatively small.
  • The initial simulation with the AODV algorithm resulted in frequent link breaks, as expected, owing to the dynamic nature of the nodes’ mobility.
  • Two prediction-based modifications are added to AODV to upgrade the algorithm.
  • In the first (PR-AODV), node position and speed information are fed into AODV to predict link lifetime, and a new alternate link is constructed before the end of the estimated link lifetime (in standard AODV, a new link is created only after connectivity fails).
  • The second modified algorithm (PR-AODV-M) selects the route with the maximum predicted lifetime among the various options (in contrast to selecting the shortest path, as in PR-AODV or AODV).
  • Simulations of both showed an improved packet delivery ratio.
  • However, the success of these algorithms largely depends on the accuracy of node position and mobility information.

In another model, AODV is modified to forward the route request only within a zone of relevance (ZOR), rectangular or circular, around the point of event occurrence, making the algorithm more effective.

Position-based routing: This technique relies on vehicles’ awareness of the positions of other vehicles to develop the routing strategy. One of the best-known position-based routing protocols is GPSR (Greedy Perimeter Stateless Routing), which works on the principle of combining greedy forwarding and face routing (a minimal greedy-forwarding sketch follows the list below). The algorithm has the following advantages and constraints:

  • It works best in open-space scenarios (highways) with evenly distributed nodes. The relative absence of obstacles in highway scenarios accounts for its good performance.
  • In simulations of highway scenarios, GPSR generally compares favorably with DSR.
  • In city conditions, GPSR suffers from many problems:
  1. Greedy forwarding is restricted owing to obstacles
  2. Routing performance degrades because of the longer path resulting in higher delays
  3. Node mobility can induce routing loops for face routing
  4. The packet can at times be forwarded in the wrong direction resulting in higher delays
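A minimal sketch of the greedy-forwarding step described above is given below; positions are 2-D coordinates, and the fallback to perimeter (face) routing is only indicated, not implemented.

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbours, destination):
    """GPSR-style greedy step: forward to the neighbour geographically closest to
    the destination, but only if it is closer than the current node. Returns None
    when greedy forwarding is stuck (a local maximum), which is where real GPSR
    falls back to perimeter/face routing."""
    best = min(neighbours, key=lambda n: distance(n, destination), default=None)
    if best is None or distance(best, destination) >= distance(current, destination):
        return None
    return best

# Current node at (0, 0), destination at (1000, 0):
print(greedy_next_hop((0, 0), [(200, 50), (150, -300)], (1000, 0)))  # (200, 50)
```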

Cluster-based routing: In cluster-based routing, several clusters of nodes are formed, each represented by a cluster head. Communication between different clusters is carried out through the cluster heads, whereas communication within each cluster uses direct links. This cluster algorithm is, in general, more appropriate for MANETs; in VANETs, owing to high speeds and unpredictable variations in mobility, the links within a cluster often break. Certain modified algorithms, such as COIN (Clustering for Open IVC Networks, put forth by Blum et al.) and LORA-CBF (Location-based Routing Algorithm using Cluster-Based Flooding, suggested by Santos et al.), incorporate a dynamic movement scheme, the expected decisions of a driver under a given scenario, and an enhanced tolerance limit for inter-vehicle distances; these are observed to provide a more stable structure at the cost of a little additional overhead.

Broadcast-based routing: This is the most frequently used routing approach in Vehicular Ad Hoc Networks (VANETs), especially for communicating safety-related messages. The simplest broadcast method is flooding, in which each node rebroadcasts the message to all other nodes. This ensures that messages reach all targeted destinations but has a high overhead cost, and it works well only with a small number of nodes: a larger density of nodes causes an exponential increase in message transmissions, leading to collisions, higher bandwidth consumption, and a drop in overall performance. Several selective forwarding schemes, such as BROADCOMM (by Durresi et al.), UMB (Urban Multihop Broadcast protocol), Vector-based Tracking Detection (V-TRADE), and History-Enhanced V-TRADE (HV-TRADE), have been proposed to counter this network congestion.

  • BROADCOMM scheme: The highway is segmented into virtual cells that move along with the vehicles. Only a selected few nodes in each virtual cell (cell reflectors) are responsible for handling messages within the cell and for forwarding/receiving messages to/from neighboring cell reflectors. The protocol works well with a small number of nodes and a simple highway structure.
  • UMB: In the UMB protocol, each node, while broadcasting a message, assigns only the farthest node to forward (rebroadcast) it. At street intersections, repeaters are installed to forward the packet to all road segments. This scheme has a higher success ratio and can largely overcome interference and packet collisions.
  • V-TRADE / HV-TRADE: These are GPS-based protocols. Based on position and movement information, each node classifies its neighboring nodes into different groups and, while forwarding a message, assigns only a few border nodes of each group to forward the packets. Because fewer nodes are assigned for multi-hopping, bandwidth utilization improves significantly.
Geocast-based routing: This is a location-based multicast routing protocol. As the name implies, each node delivers the message/packet only to nodes that lie within a specified geographic region, the zone of relevance (ZOR). The philosophy is that the sender node need not deliver the packet to nodes beyond the ZOR, because the information (about an accident or an important alert, for example) is of little importance to distant nodes. The scheme follows a directed flooding strategy within the defined ZOR so that it limits the message overhead (a minimal ZOR check is sketched below).
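A minimal sketch of the ZOR membership test for a circular zone is shown below; the packet fields and the radius are invented for illustration.

```python
import math

def in_circular_zor(node_pos, event_pos, zor_radius):
    """A node rebroadcasts a geocast packet only if it lies inside the circular
    zone of relevance (ZOR) centred on the event."""
    return math.hypot(node_pos[0] - event_pos[0],
                      node_pos[1] - event_pos[1]) <= zor_radius

def forward_geocast(packet, my_pos):
    """Directed flooding: rebroadcast inside the ZOR, drop outside it."""
    if in_circular_zor(my_pos, packet["event_pos"], packet["zor_radius"]):
        return "rebroadcast"
    return "drop"

accident = {"event_pos": (0.0, 0.0), "zor_radius": 500.0, "msg": "accident ahead"}
print(forward_geocast(accident, my_pos=(120.0, 300.0)))   # rebroadcast
print(forward_geocast(accident, my_pos=(900.0, 900.0)))   # drop
```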

 

Key Management for Secure VANET

Key management mechanisms for secure VANET operation turn out to be a surprisingly intricate and challenging endeavor because of multiple, seemingly conflicting requirements. On one hand, vehicles need to authenticate the vehicles they communicate with, and road authorities would like to trace drivers who abuse the system. On the other hand, VANETs need to protect drivers’ privacy; in particular, drivers may not wish to be tracked wherever they travel.

A VANET key management mechanism should provide the following desirable properties:

Authenticity: A vehicle needs to authenticate other legitimate vehicles, and messages sent out by other legitimate vehicles. A vehicle should filter out bogus messages injected by a malicious outsider and accept only messages from legitimate participants.
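As a rough sketch of this authenticity requirement, the snippet below signs and verifies a safety message with ECDSA using the third-party pyca/cryptography package. The certificate infrastructure that would bind the public key to a legitimate vehicle (and support revocation) is assumed and not shown.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Each legitimate vehicle holds a key pair (certified by the authority, not shown here).
vehicle_key = ec.generate_private_key(ec.SECP256R1())
vehicle_pub = vehicle_key.public_key()

message = b"EMERGENCY BRAKE at (48.137, 11.575)"
signature = vehicle_key.sign(message, ec.ECDSA(hashes.SHA256()))

# A receiving vehicle filters out bogus messages by verifying the signature.
try:
    vehicle_pub.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message accepted")
except InvalidSignature:
    print("bogus message rejected")
```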

Privacy: RSUs and casual observers should not be able to track a driver’s trajectory in the long term. Authorities can already trace vehicles through cameras and automatic license-plate readers; however, Vehicular Ad Hoc Networks (VANETs) should not make such tracing any simpler. The privacy requirement seemingly contradicts the authenticity requirement: suppose each vehicle presents a certificate to vouch for its validity; then different uses of the same certificate can be linked to each other. In particular, suppose a vehicle presents the certificate to an RSU in one location and later presents the same certificate to another RSU in a different location. If these two RSUs compare the information they have collected, they can easily learn that the owner of the certificate has traveled from one location to another.

Traceability and Revocation: An authority should be able to trace a vehicle that abuses the Vehicular Ad Hoc Networks (VANETs). In addition, once a misbehaving vehicle has been traced, the authority should be able to revoke it in a timely manner. This prevents any further damage that the misbehaving vehicle might cause to the VANET.

Efficiency: To make VANETs economically viable, the OBUs (on-board units) have resource-limited processors. Therefore, the cryptography used in a VANET should not incur heavy computational overhead.

References

[1] Feliz Kristianto Karnadi, Kun-chan Lan and Zhi Hai Mo, “Rapid Generation of Realistic Mobility Models for VANET”

[2] Fan Bai, Priyantha Mudalige and Varsha Sadekar, “Broadcasting in VANET”

[3] Rezwana Karim, “VANET: Superior System for Content Distribution in Vehicular Network”

[4] Different Routing Techniques in VANET

[5] Aamir Hassan, “VANET Simulation”

[6] Sanket Nesargi, Ravi Prakash, “MANETconf: Configuration of Hosts in a Mobile Ad Hoc Network”

[7] Pavlos Sermpezis, Georgios Koltsidas, and Fotini-Niovi Pavlidou, “Investigating a Junction-based Multipath Source Routing algorithm for VANETs”

[8] Rongxing Lu, Xiaodong Lin, Haojin Zhu, and Xuemin (Sherman) Shen, “SPARK: A New VANET-based Smart Parking Scheme for Large Parking Lots”

[9] Ahren Studer, Elaine Shi, Fan Bai, and Adrian Perrig, “TACKing Together Efficient Authentication, Revocation, and Privacy in VANETs”, March 14, 2008


Object Recognition: Computer Vision

Object recognition in computer vision is the task of finding an object in an image or video sequence. It is a fundamental vision problem. Humans recognize a huge number of objects in images with little effort, even though images of the same object may vary across viewpoints, sizes and scales, or when the object is translated or rotated. Computer vision, by analogy, is the ability of machines to see and understand what is in their surroundings. Object recognition is an important task in image processing, a field that contains methods for acquiring, processing, and analyzing images for computer vision.

Overview

Object recognition plays an important role in computer vision. It is indispensable for many applications in the area of autonomous systems and industrial control. An object recognition system finds objects in the real world from an image of the world, using object models that are known a priori. With a simple glance at an object, humans are able to tell its identity or category despite appearance variations due to changes in pose, illumination, texture, deformation, and occlusion. Furthermore, humans can easily generalize from observing a set of objects to recognizing objects that have never been seen before. Object recognition is concerned with determining the identity of an object observed in an image from a set of known tags. Humans can recognize objects in the real world easily and without effort; machines, on the contrary, cannot recognize objects by themselves, and implementing algorithmic descriptions of recognition tasks on machines is an intricate task. Object recognition techniques therefore need to be developed that are less complex and more efficient.

Definition

Object recognition is a process for identifying a specific object in a digital image or video. It is concerned with determining the identity of an object being observed in the image from a set of known labels. Oftentimes, it is assumed that the object being observed has been detected or that there is a single object in the image. Object recognition algorithms rely on matching, learning, or pattern recognition algorithms using appearance-based or feature-based techniques. Object recognition is useful in applications such as video stabilization, advanced driver assistance systems (ADAS), and disease identification in bioimaging. Common techniques include deep learning approaches such as convolutional neural networks, and feature-based approaches using edges, gradients, histograms of oriented gradients (HOG), Haar wavelets, and local binary patterns.
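The sketch below illustrates one feature-based route mentioned above: HOG descriptors fed to a linear classifier. It uses the third-party scikit-image and scikit-learn packages and random toy data, so it only shows the shape of the pipeline, not a working recognizer.

```python
# pip install scikit-image scikit-learn   (third-party packages; a sketch, not the
# specific pipeline referenced in the text)
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(image):
    """Turn a grayscale image into a HOG feature vector (appearance/feature-based)."""
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Toy training data: random 64x64 grayscale "images" with binary labels.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = rng.integers(0, 2, size=20)

features = np.array([hog_descriptor(img) for img in images])
classifier = LinearSVC().fit(features, labels)   # feature-based recognition step
print(classifier.predict(features[:3]))
```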

Object recognition methods frequently use extracted features and learning algorithms to recognize instances of an object or images belonging to an object category. Object class recognition deals with classifying objects into a certain class or category, whereas object detection aims at localizing a specific object of interest in digital images or videos. Every object or object class has its own particular features that characterize it and differentiate it from the rest, helping in the recognition of the same or similar objects in other images or videos. Significant challenges remain in the field of object recognition. One main concern is robustness with respect to variation in scale, viewpoint, illumination, non-rigid deformations, and imaging conditions. Another current issue is scaling up to thousands of object classes and millions of images, which is called large-scale image retrieval.

Model Design

The architecture and main components of an object recognition system are given below.

 

Figure 1: Different Components of Object Recognition

A block diagram showing interactions and information flow among different components of the system is given in Figure 1

The model database contains all the models known to the system. The information in the model database depends on the approach used for the recognition. It can vary from a qualitative or functional description to precise geometric surface information. In many cases, the models of objects are abstract feature vectors, as discussed later in this section. A feature is some attribute of the object that is considered important in describing and recognizing the object in relation to other objects. Size, color, and shape are some commonly used features.

The feature detector applies operators to images and identifies locations of features that help in forming object hypotheses. The features used by a system depend on the types of objects to be recognized and the organization of the model database. Using the detected features in the image, the hypothesizer assigns likelihoods to objects present in the scene. This step is used to reduce the search space for the recognizer using certain features. The model base is organized using some type of indexing scheme to facilitate the elimination of unlikely object candidates from possible consideration. The verifier then uses object models to verify the hypotheses and refines the likelihood of objects. The system then selects the object with the highest likelihood, based on all the evidence, as the correct object.

All object recognition systems use models either explicitly or implicitly and employ feature detectors based on these object models. The hypothesis formation and verification components vary in their importance in different approaches to object recognition. Some systems use only hypothesis formation and then select the object with the highest likelihood as the correct object. Pattern classification approaches are a good example of this approach. Many artificial intelligence systems, on the other hand, rely little on hypothesis formation and do more work in the verification phases. In fact, one of the classical approaches, template matching, bypasses the hypothesis formation stage entirely.
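As a small illustration of the template-matching approach mentioned above, the following sketch uses OpenCV's matchTemplate; the file names and the 0.8 acceptance threshold are placeholders.

```python
# pip install opencv-python   (a sketch of classical template matching; image file
# names below are placeholders)
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score each position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

# Accept the hypothesis only if the verification score is high enough.
if best_score > 0.8:
    print(f"object found at {best_loc} with score {best_score:.2f}")
else:
    print("no confident match")
```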

References

[1] Latharani T.R. and M.Z. Kurian, “Various Object Recognition Techniques for Computer Vision”, Journal of Analysis and Computation, Vol. 7, No. 1, (January-June 2011), pp. 39-47

[2] Simon Achatz, “State of the Art of Object Recognition Techniques”, Neuroscientific System Theory, Seminar Report, 2016

[3] “Chapter 15 Object Recognition”, available online at: http://www.cse.usf.edu/~r1k/MachineVisionBook/MachineVision.files/MachineVision_Chapter15.pdf

 

 


Asynchronous Transfer Mode (ATM)

Various network applications are requiring increasingly higher bandwidth and generating a heterogeneous mix of network traffic. Existing networks cannot provide the transport facilities to efficiently support a diversity of traffic with various service requirements. ATM was designed to be potentially capable of supporting heterogeneous traffic (e.g., voice, video, data) in one transmission and switching fabric technology. It promised to provide greater integration of capabilities and services, more flexible access to the network, and more efficient and economical service.

Overview of ATM

Internet applications have been written in the context of an IP-based network, and do not take advantage at all of the ATM network capabilities since they are hidden by this connectionless IP layer. The provision of Internet applications directly on top of ATM, described here, removes the overhead and the functional redundancies of a protocol layer, and makes it possible to take advantage of the various service categories offered by ATM networks, while maintaining the interworking with the IP-based network.

Asynchronous Transfer Mode (ATM) has emerged as one of the most promising technologies for supporting future broadband multimedia communication services. To accelerate the deployment of ATM technology, the ATM Forum, a consortium of service providers and equipment vendors in the communication industries, was created to develop implementation and specification agreements. ATM was intended to provide a single unified networking standard that could support both synchronous and asynchronous technologies and services while offering multiple levels of quality of service for packet traffic.

ATM allows the user to select the required level of service, provides guaranteed service quality, and makes reservations and preplans routes so those transmissions needing the most attention are given the best service.

ATM Architecture

Asynchronous Transfer Mode (ATM), sometimes called cell relay, is a widely deployed, high-speed, connection-oriented backbone technology that is easily integrated with technologies such as SDH, Frame Relay, and DSL. ATM uses short, fixed-length packets called cells to carry data, and combines the benefits of circuit switching (guaranteed capacity and constant transmission delay) with those of packet switching (flexibility and efficiency for intermittent traffic). ATM is more efficient than synchronous technologies such as time-division multiplexing (TDM), in which each user is assigned a specific time slot that no other station can use. Because ATM is asynchronous, time slots are available on demand. An ATM network is made up of ATM switches and ATM end systems (e.g., workstations, switches, and routers). An ATM switch is responsible for cell transit through an ATM network. It accepts an incoming cell from an ATM end system (or another ATM switch), reads and updates the cell header information, and switches the cell to the appropriate output interface.
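A toy model of the switching step just described is sketched below; the table contents, port numbers, and VPI/VCI values are made up for illustration, and real switches perform this lookup in hardware.

```python
# A toy ATM switch: read the cell header (VPI/VCI), look up the translation table,
# rewrite the header, and forward the cell on the chosen output port.
SWITCH_TABLE = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)
    (1, 0, 32): (3, 5, 77),
    (2, 1, 40): (3, 5, 78),
}

def switch_cell(in_port, cell):
    key = (in_port, cell["vpi"], cell["vci"])
    out_port, new_vpi, new_vci = SWITCH_TABLE[key]           # header lookup
    forwarded = {**cell, "vpi": new_vpi, "vci": new_vci}     # header rewrite
    return out_port, forwarded

port, cell = switch_cell(1, {"vpi": 0, "vci": 32, "payload": b"x" * 48})
print(port, cell["vpi"], cell["vci"])   # 3 5 77
```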

Characteristics of ATM

Asynchronous Transfer Mode (ATM) is a cell-oriented switching and multiplexing technology that uses fixed-length cells. ATM is a connection-oriented technology, in which a connection is established between the two endpoints before the actual data exchange begins. ATM is a transfer protocol with the following characteristics:

  • It is scalable and flexible. It can support megabit-to-gigabit transfer speeds and is not tied to a specific physical medium.
  • It efficiently transmits video, audio, and data through the implementation of several adaptation layers.
  • Bandwidth can be allocated as needed, lessening the impact on and by high-bandwidth users.
  • It transmits data in fixed-length packets, called cells, each of which is 53 bytes long, containing 48 bytes of payload and 5 bytes of header (a minimal segmentation sketch follows this list).
  • It is asynchronous in the sense that although cells are relayed synchronously, particular users need not send data at regular intervals.
  • It is connection-oriented, using a virtual circuit to transmit cells that share the same source and destination over the same route.
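To make the 53-byte cell format concrete, the following sketch segments a byte stream into cells, as noted in the list above; the all-zero header and the zero-padding stand in for the real header fields and adaptation-layer trailers, which are not modeled.

```python
CELL_SIZE = 53                            # bytes per ATM cell
HEADER_SIZE = 5                           # bytes of header
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE    # 48 bytes of payload

def segment_into_cells(data: bytes, header: bytes = b"\x00" * HEADER_SIZE):
    """Split a byte stream into fixed-length 53-byte cells (5-byte header +
    48-byte payload); the last payload is zero-padded."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        payload = data[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"A" * 100)
print(len(cells), len(cells[0]))   # 3 cells, each 53 bytes
```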

Benefits of ATM

The high-level benefits delivered through ATM services deployed on ATM technology using international ATM standards can be summarized as follows:

  • Dynamic bandwidth for bursty traffic, meeting application needs and delivering high utilization of networking resources; most applications are, or can be viewed as, inherently bursty. For example, voice is bursty, as neither party speaks all of the time, and video is bursty, as the amount of motion and the required resolution vary over time.
  • A small header relative to the data makes efficient use of bandwidth.
  • It can handle mixed network traffic very efficiently: a variety of packet sizes makes traffic unpredictable, and network equipment would otherwise need elaborate software to manage packets of various sizes; ATM handles this efficiently with its fixed-size cell.
  • Cell network: All data is loaded into identical cells that can be transmitted with complete predictability and uniformity.
  • Class-of-service support for multimedia traffic allowing applications with varying throughput and latency requirements to be met on a single network.
  • Scalability in speed and network size supporting link speeds of T1/E1 to OC–12 (622 Mbps).
  • Common LAN/WAN architecture allowing ATM to be used consistently from one desktop to another; traditionally, LAN and WAN technologies have been very different, with implications for performance and interoperability. But ATM technology can be used either as a LAN technology or a WAN technology.
  • International standards compliance in central-office and customer-premises environments allowing for multivendor operation.

References

[1] Siu, Kai-Yeung, and Raj Jain. "A brief overview of ATM: protocol layers, LAN emulation, and traffic management." ACM SIGCOMM Computer Communication Review 25, no. 2 (1995): 6-20.

[2] Durresi, Arjan, and Raj Jain. "Asynchronous Transfer Mode (ATM)." Handbook of Computer Networks: LANs, MANs, WANs, the Internet, and Global, Cellular, and Wireless Networks, Volume 2: 183-199.

[3] “Archived: What is Asynchronous Transfer Mode (ATM)?”, available online at: https://kb.iu.edu/d/aequ

[4] “Asynchronous Transfer Mode (ATM)”, available online at: https://www.technologyuk.net/telecommunications/communication-technologies/asynchronous-transfer-mode.shtml

[5] “Lesson 6: Asynchronous Transfer Mode Switching (ATM)”, Version 2 CSE IIT, Kharagpur, available online at: http://nptel.ac.in/courses/106105080/pdf/M4L6.pdf

 

 

 


Artificial Neural Network (ANN): An Introduction

An Artificial Neural Network (ANN) is an information processing model inspired by the way biological nervous systems, such as the human brain, process information. It is composed of a number of interconnected processing elements known as neurons. To solve a specific problem for an application, a suitable ANN structure has to be configured through a learning process. Just as biological systems learn by adjusting the synaptic connections between neurons, an ANN learns by updating the weights of the connections between its neurons.

Neural Network Definition

An artificial neural network, commonly referred to simply as a “neural network,” is a highly complex, nonlinear, parallel information-processing system. It has the capability to organize its structure (its neurons) to perform computations such as pattern recognition, perception, and motor control. Consider human vision, for example: it is an information-processing task whose function is to provide a representation of the environment around us and to supply the data we need to interact with that environment. A neural network accomplishes such perceptual tasks.

More specifically, “A neural network is an interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal neuron. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns”.

Artificial neural networks are an attempt at modeling the information-processing capabilities of nervous systems. Thus, first of all, we need to consider the essential properties of biological neural networks from the viewpoint of information processing. This allows us to design abstract models of artificial neural networks, which can then be simulated and analyzed. The human brain is remarkably capable of processing information and making instantaneous decisions. Many researchers have shown that the human brain computes in a radically different manner from binary computers: it is a massive network of parallel and distributed computing elements. Scientists have therefore been working for the last few decades to build a computational system called a neural network, also known as a connectionist model. A neural network is composed of a set of parallel and distributed processing units called nodes or neurons, interconnected by unidirectional or bidirectional links and arranged in layers.

Neural Network Example

The basic unit of a neural network is the neuron. It receives N inputs, represented by x1, x2, …, xN, and each input is multiplied by a connection weight, represented by w1, w2, …, wN. The weighted inputs are simply summed and fed through a transfer function (activation function) to generate the output, i.e., y = f(w1·x1 + w2·x2 + … + wN·xN).
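A single neuron's forward pass, as described above, can be written in a few lines of NumPy; the tanh activation and the example numbers are arbitrary illustrative choices.

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """One artificial neuron: weighted sum of the inputs plus a bias,
    passed through a transfer (activation) function."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])      # N = 3 inputs
w = np.array([0.8, 0.2, -0.5])      # one weight per input (connection strengths)
b = 0.1                             # bias term
print(neuron(x, w, b))              # the neuron's output
```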

Benefits of Neural Networks

It is apparent that a neural network derives its computing power through, first, its massively parallel distributed structure and, second, its ability to learn and therefore generalize. Generalization refers to the neural network’s production of reasonable outputs for inputs not encountered during training (learning). These two information processing capabilities make it possible for neural networks to find good approximate solutions to complex (large-scale) problems that are intractable.

Neural networks offer the following useful properties and capabilities:

Non-linearity: An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear. Moreover, the non-linearity is of a special kind in the sense that it is distributed throughout the network. Non-linearity is a highly important property, particularly if the underlying physical mechanism responsible for the generation of the input signal (e.g., speech signal) is inherently nonlinear.

Input-Output Mapping: A popular paradigm of learning, called learning with a teacher or supervised learning, involves modification of the synaptic weights of a neural network by applying a set of labeled training examples, or task examples. Each example consists of a unique input signal and a corresponding desired (target) response. The network is presented with an example picked at random from the set, and the synaptic weights (free parameters) of the network are modified to minimize the difference between the desired response and the actual response of the network produced by the input signal in accordance with an appropriate statistical criterion. The training of the network is repeated for many examples in the set until the network reaches a steady state where there are no further significant changes in the synaptic weights. The previously applied training examples may be reapplied during the training session but in a different order. Thus the network learns from the examples by constructing an input-output mapping for the problem at hand.
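The following sketch is a minimal instance of learning with a teacher: a single logistic neuron trained by gradient descent on labeled examples of the AND function. The learning rate, epoch count, and delta-rule update are illustrative choices, not a prescription from the text.

```python
import numpy as np

# Labeled training examples: inputs and desired (target) responses for logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # synaptic weights (free parameters)
b = 0.0
lr = 0.5                            # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    y = sigmoid(X @ w + b)          # actual response of the network
    error = y - t                   # difference from the desired (target) response
    # Delta-rule update for a logistic output unit (cross-entropy gradient):
    w -= lr * X.T @ error / len(X)
    b -= lr * error.mean()

print(np.round(sigmoid(X @ w + b), 2))   # moves toward the target [0, 0, 0, 1]
```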

Evidential Response: In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.

Contextual Information: Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally by a neural network.

VLSI Implementability: The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using very-large-scale-integrated (VLSI) technology. One particular beneficial virtue of VLSI is that it provides a means of capturing truly complex behavior in a highly hierarchical fashion.

Uniformity of Analysis and Design: Basically, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks. This feature manifests itself in different ways:

  • Neurons, in one form or another, represent an ingredient common to all neural networks.
  • This commonality makes it possible to share theories and learning algorithms in different applications of neural networks.
  • Modular networks can be built through seamless integration of modules.

 

References

[1] Haykin, Simon S., et al. Neural Networks and Learning Machines, Volume 3, Upper Saddle River, NJ, USA, Pearson, 2009.

[2] Shruti B. Hiregoudar, Manjunath. K and K. S. Patil, “A Survey: Research Summary on Neural Networks”, International Journal of Research in Engineering and Technology, Volume 03, Special Issue: 03, May-2014

[3] Rojas, Raúl. Neural networks: a systematic introduction, Springer Science & Business Media, 2013.


What is Grid Computing

Grid computing is a technology that supports the sharing and coordinated use of diverse resources in dynamic virtual organizations (VOs), that is, the creation of virtual computing systems from geographically and organizationally distributed components, integrated to deliver the desired QoS [1]. Grid computing was first developed to enable resource sharing within far-flung scientific collaborations.

Applications

The applications of Grid computing include:

  • Collaborative visualization of large scientific datasets
  • Distributed computing for computationally demanding data analyses (pooling of computing power and storage)
  • Coupling of scientific instruments with remote computers and archives (increasing functionality as well as availability).

Grid computing was initially designed for scientific and technical computing applications and later extended to commercial distributed computing applications, including enterprise applications and business-to-business (B2B) partner collaboration. For commercial computing, it matters not primarily as a means of enhancing capability, but as a solution to new challenges related to building reliable, scalable, and secure distributed systems.

Issues and Challenges

The allocation of resources and the scheduling of tasks are basic problems in grid computing. Insight into the performance of grid resources is mainly obtained by means of two mechanisms: monitoring and prediction. Grid resource monitoring aims to acquire the status, distribution, load, and faults of resources through monitoring methods, while grid resource prediction aims to capture the behavior and running traces of grid resources by modeling and analyzing historical monitoring data. Monitoring provides historical and current information, while prediction provides information about future variation (a minimal sketch of this split follows the list below). The grid needs a large amount of monitoring and prediction data: [2]

  • To carry out performance analysis, service control, bottleneck elimination, and fault diagnosis;
  • For providing reliable direction for grid resource allocation, job scheduling as well as dynamic load balancing;
  • To help grid users finish computing tasks while minimizing the cost of time, space, and money.
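The sketch below separates the two mechanisms in the simplest possible way: a sliding window of load samples (monitoring) and a moving-average forecast (prediction). The class and method names are invented; real grid monitoring and prediction systems use far richer models.

```python
from collections import deque

class ResourceMonitor:
    """Sliding window of load samples (monitoring) plus a moving-average
    forecast (prediction); a toy illustration of the monitoring/prediction split."""
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def record(self, load):           # monitoring: current and historical information
        self.samples.append(load)

    def predict_next(self):           # prediction: estimate of future variation
        return sum(self.samples) / len(self.samples) if self.samples else None

cpu = ResourceMonitor(window=3)
for load in [0.42, 0.55, 0.61, 0.58]:
    cpu.record(load)
print(round(cpu.predict_next(), 2))   # average of the last 3 samples: 0.58
```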

References

[1] Ian Foster, Carl Kesselman, Jeffrey M. Nick, Steven Tuecke, "The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration", IBM Corporation, Poughkeepsie, NY 12601

[2] Liang Hu, Xiaochun Cheng, Xilong Che, “Survey of Grid Resource Monitoring and Prediction Strategies”, International Journal of Intelligent Information Processing Volume 1, Number 2, December 2010 (Available at: https://www.researchgate.net/publication/220500493_Survey_of_Grid_Resource_Monitoring_and_Prediction_Strategies)

[3] [Image Source] https://computer.howstuffworks.com/grid-computing.htm
