Simple Intrusion Detection Systems

What is an Intrusion Detection System (IDS)?

The Internet is a global public network. With the growth of the Internet and its potential, there has been a corresponding change in the business models of organizations across the world. More and more people connect to the Internet every day to take advantage of the new business model popularly known as e-business. Internetwork connectivity has therefore become a critical aspect of today's e-business.

Intrusion is unauthorized access to a system with the intent of stealing information or harming the system. Intrusion detection is the act of monitoring the events occurring in a computer system and spotting the suspicious or unusual activities that may indicate a possible attack.

If the computer is left unattended, any person can attempt to access and misuse the system. The problem is, however, far greater if the computer is connected to a network, particularly the Internet. Any user from around the world can reach the computer remotely (to some capacity) and may attempt to access private or confidential information, or launch some form of attack to bring the system to a halt or prevent it from functioning effectively.

Overview

An intrusion detection system complements firewall security. The firewall protects an organization from malicious attacks originating on the Internet, while the intrusion detection system detects attempts to break through the firewall, or successful breaches that reach a system on the trusted side, and alerts the system administrator when security has been compromised. Firewalls do a very good job of filtering incoming traffic from the Internet; however, there are ways to circumvent them. For example, external users can connect to the intranet by dialing in through a modem installed in the private network of the organization. This kind of access is never seen by the firewall.

A vulnerability is a known or suspected flaw in the hardware, software, or operation of a system that exposes the system to penetration or to accidental disclosure of information. Penetration is obtaining unauthorized (undetected) access to files and programs or to the control state of the computer system. An attack is a specific formulation or execution of a plan to carry out a threat; an attack is successful when penetration occurs. Lastly, an intrusion is a set of actions aimed at compromising the security goals, namely the integrity, confidentiality, or availability of a computing and networking resource. Figure 1 illustrates a simple intrusion detection system.


Figure 1: Simple Intrusion Detection Systems

Intrusion detection systems (IDSs) are security systems used to monitor, recognize, and report malicious activities or policy violations in computer systems and networks. IDSs are based on the hypothesis that an intruder’s behavior will be noticeably different from that of a legitimate user and that many unauthorized actions are detectable. Some of the security violations that would create abnormal patterns of system usage include unauthorized users trying to get into the system, legitimate users doing illegal activities, trojan horses, viruses, and denial of service.

The goal of intrusion detection is to identify, preferably in real time, unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. The intrusion detection problem is becoming more challenging due to the great increase in computer network connectivity, thriving technological advancement, and the ease with which hackers for hire can be found.

Therefore, an intrusion detection system (IDS) is a security system that monitors computer systems and network traffic, analyzing that traffic for possible hostile attacks originating from outside the organization as well as for system misuse or attacks originating from inside the organization.
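
To make the hypothesis above concrete, here is a minimal sketch of anomaly-based detection in Python: learn a baseline of normal event rates and raise an alert when an observed rate deviates sharply. The traffic figures and threshold are illustrative assumptions, not values from any particular IDS product.

```python
# Minimal anomaly-based detection sketch: flag traffic whose event rate
# deviates sharply from a learned baseline. All numbers are illustrative.
from statistics import mean, stdev

# Baseline: connection attempts per minute observed during normal operation.
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(connections_per_minute: float, z_threshold: float = 3.0) -> bool:
    """Return True if the observed rate deviates more than z_threshold
    standard deviations from the baseline mean."""
    z = abs(connections_per_minute - mu) / sigma
    return z > z_threshold

# A burst of 60 connections/minute is far outside normal behavior.
for rate in [14, 18, 60]:
    print(rate, "->", "ALERT" if is_anomalous(rate) else "ok")
```

Real systems learn far richer baselines (per user, per host, per protocol), but the alerting logic follows the same pattern: model "normal", then flag deviations.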

Different Intrusion Definitions 

There are many types of intrusion, which makes it difficult to give a single definition of the term. An intrusion can, however, be described by the stages it typically passes through:

  • Surveillance/probing stage: The intruder attempts to gather information about target computers by scanning for vulnerabilities in software and configurations that can be exploited. This may include password cracking. (A simple probe detector is sketched after this list.)
  • Activity (exploitation) stage: Once weaknesses have been identified, the intruder can obtain administrator rights of the host. This will give the intruder free access to violate the system. This stage may also include Denial of Service (DoS) attacks.
  • Mark stage: Next, the attacker may be free to steal information from the system, destroy data (including logs that may reveal the attack), plant a virus or spyware, or use the host to conduct further attacks. In this stage, the attacker achieves the goal of the attack.
  • Masquerading stage: In this final stage, the intruder will attempt to remove traces of the attack by, for example, deleting log entries that reveal the intrusion.
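
As promised above, here is a hedged sketch of how the surveillance/probing stage might be spotted: a single source contacting many distinct ports within a short window is a classic port-scan signature. The log format, window size, and threshold below are assumptions for illustration.

```python
# Toy probe detector: flag a source IP that contacts many distinct
# destination ports within a time window (a common port-scan signature).
from collections import defaultdict

# (timestamp_seconds, source_ip, destination_port) -- assumed log format
events = [
    (0, "10.0.0.5", 22), (1, "10.0.0.5", 23), (2, "10.0.0.5", 25),
    (3, "10.0.0.5", 80), (4, "10.0.0.5", 443), (5, "10.0.0.5", 8080),
    (2, "10.0.0.9", 443), (7, "10.0.0.9", 443),
]

WINDOW = 60       # seconds per detection window
PORT_LIMIT = 5    # distinct ports allowed per source per window

# Group distinct destination ports per (source, window) bucket.
ports_seen = defaultdict(set)
for ts, src, dport in events:
    ports_seen[(src, ts // WINDOW)].add(dport)

for (src, window), ports in ports_seen.items():
    if len(ports) > PORT_LIMIT:
        print(f"ALERT: possible port scan from {src} "
              f"({len(ports)} distinct ports in window {window})")
```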

Example of Dimensionality Reduction

What is Dimensionality Reduction? Methods, Advantages and Disadvantages

The recent explosion in data set size, in the number of records as well as attributes, has triggered the development of a number of big data platforms and parallel data analytics algorithms. At the same time, it has pushed the adoption of data dimensionality reduction procedures. Dealing with many dimensions can be painful for machine learning algorithms: high dimensionality increases computational complexity, increases the risk of overfitting (as the algorithm has more degrees of freedom), and makes the data sparser. Hence, dimensionality reduction projects the data into a space with fewer dimensions to limit these effects.

What is it?

The problem of unwanted growth in dimensionality is closely related to the fact that data is now measured and recorded at a far more granular level than in the past. This is in no way a recent problem, but it has gained more importance lately due to the surge in data.

In machine learning classification problems, there are often too many factors on the basis of which the final classification is done. These factors are variables called features. The higher the number of features, the harder it gets to visualize the training set and then work on it. Sometimes many of these features are correlated, and hence redundant. This is where dimensionality reduction algorithms come into play. Dimensionality reduction is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction. Dimensionality reduction is not only useful for speeding up algorithm execution, but it can also help the final classification/clustering accuracy. Noisy or even faulty input data often leads to less-than-desirable algorithm performance; removing uninformative or misleading data columns can help the algorithm find more general classification regions and rules, and achieve better performance on new data.
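
As a rough illustration of the selection/extraction split, the sketch below uses scikit-learn (assumed to be available); the iris data set is just a convenient stand-in.

```python
# Sketch: feature selection keeps a subset of the original columns,
# while feature extraction builds new ones. Assumes scikit-learn.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)  # 4 original features

# Feature selection: keep the 2 columns most associated with the label.
X_selected = SelectKBest(f_classif, k=2).fit_transform(X, y)

# Feature extraction: derive 2 new features as linear combinations.
X_extracted = PCA(n_components=2).fit_transform(X)

print(X.shape, X_selected.shape, X_extracted.shape)  # (150, 4) (150, 2) (150, 2)
```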

Manifold learning is a significant problem across a wide variety of information processing fields, including pattern recognition, data compression, machine learning, and database navigation. In many problems, the measured data vectors are high-dimensional, but we may have reason to believe that the data lie near a lower-dimensional manifold. In other words, we may believe that high-dimensional data are multiple, indirect measurements of an underlying source, which typically cannot be directly measured. Learning a suitable low-dimensional manifold from high-dimensional data is essentially the same as learning this underlying source. Dimensionality reduction can also be seen as the process of deriving a set of degrees of freedom that can be used to reproduce most of the variability of a data set. Consider a set of images produced by the rotation of a face through different angles: each image may contain thousands of pixels, yet the images effectively vary along a single degree of freedom, the rotation angle.
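
A brief, hedged sketch of manifold learning with scikit-learn's Isomap (one of several manifold learners; the swiss-roll data set is a stock example, not from this article):

```python
# Sketch: recover a 2-D manifold from 3-D "swiss roll" data.
# Assumes scikit-learn is installed.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points lying near a rolled-up 2-D surface.
X, color = make_swiss_roll(n_samples=1000, random_state=0)

# Unroll the surface into 2 coordinates via geodesic distances.
X_2d = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(X.shape, "->", X_2d.shape)  # (1000, 3) -> (1000, 2)
```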

Example

An intuitive example of dimensionality reduction can be discussed through a simple e-mail classification problem, where we need to classify whether an e-mail is spam or not. This can involve a large number of features, such as whether or not the e-mail has a generic title, the content of the e-mail, whether the e-mail uses a template, and so on. However, some of these features may overlap. Similarly, a classification problem that relies on both humidity and rainfall can be collapsed into just one underlying feature, since the two are correlated to a high degree. Hence, we can reduce the number of features in such problems. A 3-D classification problem can be hard to visualize, whereas a 2-D one can be mapped to a simple two-dimensional space and a 1-D problem to a simple line. Figure 1 below illustrates this concept, where a 3-D feature space is split into two 2-D feature spaces; if the remaining features are later found to be correlated, their number can be reduced even further.
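
The humidity/rainfall case can be reproduced numerically: two strongly correlated columns collapse onto a single derived feature with little loss of variance. The synthetic data below is an assumption for illustration, using numpy and scikit-learn.

```python
# Sketch: two highly correlated features (humidity, rainfall) collapse
# into one derived feature with almost no information loss.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
humidity = rng.uniform(40, 90, size=200)
rainfall = 0.8 * humidity + rng.normal(0, 2, size=200)  # tracks humidity
X = np.column_stack([humidity, rainfall])

pca = PCA(n_components=1)
X_1d = pca.fit_transform(X)  # 2 features -> 1 feature
print("correlation:", np.corrcoef(humidity, rainfall)[0, 1].round(3))
print("variance kept by 1 component:", pca.explained_variance_ratio_[0].round(3))
```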


Figure 1: Example of Dimensionality Reduction

Common Methods for Dimensionality Reduction

There are many methods to perform dimensionality reduction. I have listed the most common ones below:

Principal Component Analysis (PCA): In this technique, variables are transformed into a new set of variables that are linear combinations of the original variables. These new variables are known as principal components. They are obtained in such a way that the first principal component accounts for as much of the variation in the original data as possible, and each succeeding component in turn has the highest variance possible while remaining uncorrelated with the preceding components.
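
A short sketch of PCA in scikit-learn (assumed available) showing that components come out ordered by explained variance; standardizing first is a common precaution, since PCA is sensitive to feature scale.

```python
# Sketch: principal components are ordered by the variance they explain.
# The iris data set is just a stand-in.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)  # PCA is scale-sensitive

pca = PCA().fit(X)
print(pca.explained_variance_ratio_)  # first component explains the most
```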

Factor Analysis: Let’s say some variables are highly correlated. These variables can be grouped by their correlations, i.e., all variables in a particular group can be highly correlated among themselves but have low correlation with variables of other groups. Here each group represents a single underlying construct or factor. These factors are few in number compared to the large number of dimensions, but they are difficult to observe directly. There are basically two methods of performing factor analysis (an EFA sketch follows this list):

  • EFA (Exploratory Factor Analysis)
  • CFA (Confirmatory Factor Analysis)
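
scikit-learn provides an exploratory factor analysis estimator; confirmatory factor analysis generally requires specialized packages, so the sketch below covers EFA only. The synthetic data is an assumption for illustration.

```python
# Sketch of exploratory factor analysis: recover a few latent factors
# behind correlated observed variables. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))    # 2 hidden factors
loadings = rng.normal(size=(2, 6))    # each factor drives several variables
X = latent @ loadings + 0.1 * rng.normal(size=(300, 6))  # 6 observed variables

fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_.shape)  # (2, 6): loading of each factor on each variable
```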

Decision Trees: This is one of my favorite techniques. It can be used as an all-round solution to tackle multiple challenges such as missing values, outliers, and identifying significant variables. Several data scientists have used decision trees for this, and it has worked well for them.

Random Forest: Similar to the decision tree is the random forest. I would also recommend using the built-in feature importance provided by random forests to select a smaller subset of input features. Just be careful: random forests have a tendency to be biased towards variables with a larger number of distinct values, i.e., they favor numeric variables over binary/categorical ones.
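
A hedged sketch of this workflow with scikit-learn: fit a forest, inspect the importances, and keep the features scoring above the mean importance (the default threshold). The iris data is a stand-in.

```python
# Sketch: use random-forest feature importances to pick a smaller
# feature subset. Keep the bias toward high-cardinality variables
# (mentioned above) in mind when interpreting these scores.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.feature_importances_)  # one score per input feature

# Keep only features whose importance exceeds the mean importance.
X_reduced = SelectFromModel(forest, prefit=True).transform(X)
print(X.shape, "->", X_reduced.shape)
```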

High Correlation: Dimensions exhibiting high correlation can lower the performance of a model. Moreover, it is not good to have multiple variables carrying similar information or variation, a condition known as multicollinearity. You can use a Pearson (continuous variables) or polychoric (discrete variables) correlation matrix to identify variables with high correlation, and select among them using the VIF (Variance Inflation Factor). Variables with high values (VIF > 5) can be dropped.
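
A sketch of the correlation-then-VIF check, assuming pandas and statsmodels are available; the synthetic columns are illustrative.

```python
# Sketch: flag highly correlated columns, then inspect VIF to decide
# which to drop.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(0)
a = rng.normal(size=200)
df = pd.DataFrame({
    "a": a,
    "b": a + 0.05 * rng.normal(size=200),  # nearly a duplicate of "a"
    "c": rng.normal(size=200),             # independent column
})

print(df.corr(method="pearson").round(2))  # "a" and "b" correlate strongly

X = add_constant(df)                       # intercept column for a fair VIF
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print({k: round(v, 1) for k, v in vif.items()})  # drop variables with VIF > 5
```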

Advantages of Dimensionality Reduction

  • It helps in data compression, and hence reduces storage space.
  • It reduces computation time.
  • It also helps remove redundant features, if any.

Disadvantages of Dimensionality Reduction

  • It may lead to some amount of data loss.
  • PCA tends to find linear correlations between variables, which is sometimes undesirable.
  • PCA fails in cases where mean and covariance are not enough to define datasets.
  • We may not know how many principal components to keep; in practice, some rules of thumb are applied.
