K-Means Clustering Algorithm with Steps

k-means is one of the simplest unsupervised learning algorithms for the well-known clustering problem. The procedure classifies a given data set into a chosen number of clusters, k. The main idea is to define k centers, one for each cluster. These centers should be placed carefully, because different initial locations lead to different results; a good heuristic is to place them as far away from each other as possible. The algorithm then proceeds as follows:

  1.  Take each point in the data set and associate it with the nearest center. When no point is pending, this first pass is complete and an early grouping is done.
  2.  Re-calculate k new centroids as the barycenters of the clusters resulting from the previous step.
  3.  Bind the same data set points to their nearest new center. This generates a loop: the k centers change their location step by step until no more changes occur, or in other words, the centers do not move anymore.

Finally, this algorithm aims at minimizing an objective function known as the squared error function, given by:

  J = \sum_{j=1}^{k} \sum_{i=1}^{n_j} || x_i^{(j)} - c_j ||^2

where n_j is the number of points in cluster j, x_i^{(j)} is the i-th point assigned to cluster j, and c_j is that cluster's centroid.
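The steps above can be sketched in a few lines of Python (a minimal illustration using NumPy; the function name and the choice of sampling k data points as initial centers are assumptions, not prescribed by the text):

```python
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    """Minimal k-means sketch: sample k data points as initial centers,
    then alternate nearest-center assignment and barycenter updates."""
    rng = np.random.default_rng(seed)
    # Initial centers: k distinct points drawn from the data set.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iters):
        # Assign each point to its nearest center (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the barycenter of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Stop when the centers no longer move.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

With well-separated data this converges in a handful of iterations; note that the sketch does not guard against a cluster becoming empty mid-run.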


Advantages:

  1.  Fast, robust, and easy to understand.
  2.  Relatively efficient: O(t*k*n*d), where n is the number of objects, k the number of clusters, d the dimensionality of each object, and t the number of iterations. Normally, k, t, d << n.
  3.  Gives the best results when the clusters are distinct and well separated from each other.


Disadvantages:

  1.  The use of exclusive assignment: if two clusters overlap heavily, k-means will not be able to resolve that there are two clusters.
  2.  The learning algorithm is not invariant to non-linear transformations, i.e., with a different representation of the data we get different results (data represented in Cartesian coordinates and in polar coordinates will give different results).
  3.  Euclidean distance measures can unequally weigh underlying factors. 
  4.  The learning algorithm converges only to a local optimum of the squared error function; different initializations can yield different final clusterings.
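Because each run converges only to a local optimum, a standard mitigation is to restart k-means from several random initializations and keep the solution with the lowest sum of squared errors (SSE). A minimal sketch of this idea (the helper names and NumPy usage are illustrative assumptions, not from the original):

```python
import numpy as np

def kmeans_sse(X, k, seed, max_iters=100):
    """One k-means run from a seeded random initialization;
    returns the centers and the final sum of squared errors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    sse = ((X - centers[labels]) ** 2).sum()  # the squared error objective J
    return centers, sse

def kmeans_restarts(X, k, n_init=10):
    """Run k-means n_init times and keep the lowest-SSE solution."""
    runs = [kmeans_sse(X, k, seed=s) for s in range(n_init)]
    return min(runs, key=lambda run: run[1])
```

This is the same idea behind the `n_init` parameter in common k-means implementations: more restarts cost more time but reduce the chance of reporting a poor local optimum.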

