K-modes

I recently read an interesting Wired story about Chris McKinlay (a fellow alum of Middlebury College), who used a clustering algorithm to understand the pool of users on the dating site OkCupid (and successfully used this information to improve his own profile). The data involved was answers to multiple-choice questions, which is very similar to the categorical data that I discussed a few posts back. But instead of translating the data into vectors, as I discussed in that post, McKinlay used an algorithm called K-modes that works directly on this type of data. So, I thought this would be a good excuse to write a post about K-modes and how it compares to translating the data into vectors and then running K-means.

First, I want to review how we could turn answers to multiple-choice questions into vector data, and what the K-means algorithm would do with it. To make things simple, let's say that we have a multiple-choice questionnaire with ten questions, each with four possible choices. A number of people fill out the questionnaire and we record their responses.

As I described in the post on token data, we can convert each of the filled-out questionnaires into a data point in a 40-dimensional space as follows: The first four dimensions/features will record the response to the first question. If they answered (A) to the first question, the first four values would be (1,0,0,0). If they answered (B), it would be (0,1,0,0) and so on. Similarly, the next four features would record the response to the second question in the same way, and so on, as in the Figure to the right. So each questionnaire would give us a 40-dimensional vector with ten 1s and the remaining places all 0s.
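To make this concrete, here is a small Python sketch of that encoding. The list-of-letters representation of a questionnaire is just an illustrative assumption, not from the original post:

```python
# A minimal sketch of the one-hot encoding described above.
# Each filled-out questionnaire is assumed to be a list of answers
# like ["A", "C", "B", ...] with ten entries, one per question.
CHOICES = ["A", "B", "C", "D"]

def encode(responses):
    """Turn a list of multiple-choice answers into a 0/1 vector."""
    vector = []
    for answer in responses:
        vector.extend(1 if choice == answer else 0 for choice in CHOICES)
    return vector

# Ten questions become a 40-dimensional vector with exactly ten 1s.
example = encode(["A", "C", "B", "D", "A", "A", "B", "C", "D", "B"])
print(len(example), sum(example))  # 40 10
```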

Recall that K-means works by selecting a small number of special points called centroids, which are in the data space but not necessarily data points, and working out which of the centroids each data point is closest to. Then, it replaces each centroid with the new centroid/center of mass of the data points that were associated with it.

To understand how this works for the type of data that we get from a questionnaire, let's start by looking at the step where we find the new centroids. Given a collection of data points, we find their center of mass by adding them all together and then dividing by the number of data points. (This is basically an average, but we use the fancy term centroid because an average usually refers to a single number rather than a vector.) When we add the vectors together, we’re just adding up all the 1s in a particular dimension/feature. So in the resulting vector, the value in the first spot will be equal to the number of questionnaires that answered (A) to the first question. The value in the second spot will be the number that answered (B) to the first question, etc. The value in the fifth spot will be the number who answered (A) to the second question and so on.

Next, we divide by the number of data points, which means we divide each entry of the vector by this number. The first spot in the resulting vector is the number of questionnaires that answered (A) to the first question, divided by the total number of questionnaires. This number will be between zero and one, and you can think of it as the percentage of questionnaires that answered (A) to the first question. (Technically, to get the percentage, you have to multiply the number by 100.) The number in the second spot is the percentage who answered (B), and so on. Think of this like a bar graph showing the responses to each question, where the value in each spot is the height of the bar.
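Here is a sketch of that centroid computation on already-encoded 0/1 vectors; the tiny two-question cluster is made up purely for illustration:

```python
# The centroid/center of mass of a cluster of 0/1 vectors: each entry is
# the fraction of questionnaires that gave the corresponding answer.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Two questionnaires, two questions (choices A-D), one-hot encoded:
# both answered (A) on question 1; one answered (B), one (C) on question 2.
cluster = [
    [1, 0, 0, 0,  0, 1, 0, 0],
    [1, 0, 0, 0,  0, 0, 1, 0],
]
print(centroid(cluster))  # [1.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.0]
```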

The step where we decide which centroid is closest to each data point is a little trickier. To find the standard Euclidean distance between a data point and a centroid, we subtract the number in each spot of the data vector from the number in the corresponding spot of the centroid vector (and take the absolute value so that the resulting number is always positive). Then we square each of these numbers, add them all up, and take the square root of the final number. (There are other possible types of distance to calculate, but that’s a digression for another time.)
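In code, the calculation looks like this, again with a made-up two-question example and the centroid from the sketch above:

```python
# Euclidean distance between a one-hot data vector and a fractional centroid:
# subtract entry by entry, square, sum, and take the square root.
def euclidean(point, center):
    return sum((p - c) ** 2 for p, c in zip(point, center)) ** 0.5

point  = [1, 0, 0, 0,  0, 1, 0, 0]                   # answered (A), then (B)
center = [1.0, 0.0, 0.0, 0.0,  0.0, 0.5, 0.5, 0.0]   # centroid from the sketch above
print(round(euclidean(point, center), 3))  # 0.707: agrees on Q1, partially on Q2
```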

That’s getting a bit more technical than I usually like, so let me just point out three key things to understand about this calculation: First, if a lot of the data points that were used to make the centroid agree with the new data point on a given question, this will make the distance lower (as we would hope). Second, if a lot of the original data points agreed on a different answer than the new data point to a given question, then this will make the distance higher (again, as we would hope). Third, if a lot of the original data points disagree with the new data point, but are fairly evenly spread among the other possible answers, then this will add a little bit to the distance, but not as much as if they had all agreed on a single answer that disagreed with the new data point.

What this process does is to pick out the questionnaire responses that are most similar to each centroid, and thus to each other. In other words, we would expect that the data points that are closest to any given centroid will have a lot of responses in common with each other, so that when we calculate the centroids the next time, there will be a stronger majority on each question.

So, we run K-means by repeating this process over and over until the centroids stabilize/converge to places that define our new clusters. We can then read off the “typical” response to the first question in each cluster by looking at the values of the first four spots in each centroid, and choosing the one that’s the highest. We can read off the “typical” response to the second question by looking at the next four entries, and so on. If the data set has well-defined clusters, and we pick our K correctly, we would expect the values defining these “typical” responses to be pretty close to 1 (i.e. 100%).
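Putting the two steps together, a bare-bones version of the whole loop might look something like the sketch below. It is an illustration, not an optimized implementation; real K-means code would also check for convergence rather than running a fixed number of iterations:

```python
import random

CHOICES = ["A", "B", "C", "D"]

def euclidean(p, c):
    return sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def k_means(points, k, iterations=20, seed=0):
    random.seed(seed)
    centers = [list(c) for c in random.sample(points, k)]  # start from k random data points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            nearest = min(range(k), key=lambda j: euclidean(p, centers[j]))
            clusters[nearest].append(p)
        # recompute each centroid (keep the old one if its cluster is empty)
        centers = [centroid(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

def typical_answers(center):
    """Read off the highest-valued choice in each block of four entries."""
    return [CHOICES[max(range(4), key=lambda j: center[q + j])]
            for q in range(0, len(center), 4)]
```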

The K-modes algorithm is based on a very similar idea, but as I mentioned above, it skips the intermediate step of transforming the questionnaire data into vectors. It consists of the same two steps, but they look slightly different. For the step in which we compute the centroids, we again start by adding up the number of questionnaires that responded with each possible answer to each of the questions. So far, this isn’t too different; it’s essentially the same as adding up the 40-dimensional vectors like we did with K-means.

However, instead of dividing by the number of questionnaires like we did with K-means, the K-modes algorithm simply records which answer to each question got the most votes. This is the mode of the responses (the most common answer), which is where the name K-modes comes from. So each centroid is in the same form as the original questionnaire data (a set of responses to the different questions) rather than a 40-dimensional vector.
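A sketch of this centroid step in code, using Python's Counter to tally the votes; the small three-question cluster is made up for illustration:

```python
from collections import Counter

def mode_centroid(responses_list):
    """For each question, take the most common answer in the cluster."""
    num_questions = len(responses_list[0])
    return [Counter(r[q] for r in responses_list).most_common(1)[0][0]
            for q in range(num_questions)]

# Three questionnaires, three questions.
cluster = [["A", "C", "B"], ["A", "B", "B"], ["A", "C", "D"]]
print(mode_centroid(cluster))  # ['A', 'C', 'B']
```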

The next step is again to calculate the “distance” from each data point to each centroid. For K-means, we were able to use the standard Euclidean distance, since we were working with vector data. For K-modes, we’re going to have to come up with a notion of distance from scratch, but this turns out not to be too hard. The most obvious notion of distance is as follows: For each data point and each centroid, we can define the distance to be the number of questions they disagree on. As with the Euclidean distance we used in K-means, when they agree on a question, this will make the distance lower, and when they disagree on a question, it will make the distance higher.
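That distance is easy to write down directly (continuing the made-up example above); the assignment step then just picks the centroid with the fewest disagreements:

```python
def mismatches(responses, center):
    """Count the questions on which a questionnaire and a centroid disagree."""
    return sum(1 for r, c in zip(responses, center) if r != c)

centers = [["A", "C", "B"], ["D", "D", "A"]]
point = ["A", "B", "B"]
nearest = min(range(len(centers)), key=lambda i: mismatches(point, centers[i]))
print(nearest, mismatches(point, centers[nearest]))  # 0 1: disagrees on one question
```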

Note, however, that the third point about the Euclidean distance doesn’t hold here: The K-modes centroids don’t keep track of how close the margin was between the answer that got the most responses and the second most popular answer. So, if the data point disagrees with the most popular response among the original data points, it gets the same penalty whether or not the original data points strongly agreed on that answer.

This difference is a trade-off rather than a deficiency. The problem with the way K-means calculates distances is that it can lead to centroids where no one answer is much higher than the others. Essentially, K-means never makes a strong decision about which data points to abandon. Since K-modes forces the centroids to make this decision, it can lead to much better defined clusters. Of course, for data where there aren’t strong correlations to be found, having to make this decision (especially in the early rounds of K-means/K-modes) could make things worse.

As usual, the question of which algorithm is better depends entirely on the data set and the goals of the project. Both algorithms rely heavily on picking the right value for K, and the Wired article does a nice job of describing how McKinlay did this through trial and error. (I may have to borrow the lava lamp analogy.) McKinlay’s analysis was complicated somewhat by the fact that each profile in the data set that he looked at answered a slightly different set of multiple-choice questions. While there was a lot of overlap between the sets, each question would have had a lot of no-answer data. There are a number of ways one could deal with this, and I don’t know which one McKinlay used, but as always the best solution to a problem like this depends on the particular data set.


39 Responses to K-modes

  1. rrenaud says:

    What I’d love to see is a discussion or characterization of problems when you expect K-modes will outperform K-means and vice versa. Likewise, mentioning particular problems where the K-means averaging step doesn’t really make any sense and so it’s not even really a consideration, compared to K-modes.

  2. outputlogic says:

    A simple 2-D or 3-D visualization would be helpful to understand the K-means and K-modes approaches.

    • nikhil says:

      Hi,

      I got K-modes clusters for categorical data (by converting it into binary), with 10 clusters.

      Can you provide guidance on how to visualise it in Jupyter (2D or 3D), and how to check accuracy?

  3. Pingback: Sam svoj Kupid | Udomačena statistika

  4. Pingback: If you did not already know: “K-Modes” | Data Analytics & R

  5. Pingback: Think Big Expands Capabilities for Building Data Lakes With Apache Spark

  6. Pingback: Think Big breidt mogelijkheden uit om data lakes te bouwen met Apache Spark | CloudWorks

  7. Wiisco says:

    Hey Jesse,
    thank you for the great post. I’m working on my Master’s thesis right now and stumbled upon this very easily understandable explanation of K-modes and how it works. I would like to use K-modes to cluster the data I’m going to collect for my thesis, but I can’t seem to find software I could do it with. Do you have any idea that could help me? You can PM me as well. I’d be glad to hear from you. Keep up the good work!

  8. Wiisco says:

    Hey Jesse,
    thank you for the info. I found that solution shortly after posting my first comment. The problem then and now is that I never used Python before. I’ll probably learn it one day but right now that won’t happen 😛 But that’s all right since the solution in R Peter told me about works just fine for me.
    Anyways, thanks for your reply and another thumbs up for your website.

  9. Yuval Cohen says:

    Thank you very much for your post, it was very helpful!

  10. ahrmlee says:

    Hi Jesse,
    Thanks for your post!
    I also use K-modes algorithms for clustering my categorical variables.
    One thing that I struggle to solve is how to choose the optimal number of clusters.
    I tried calculating the silhouette width, but as the number of clusters increases, the silhouette width also increases.
    In the case of the elbow method, the graph was continuously jittery…

    So I could not find a proper way to choose the number of clusters.

    How did you deal with this?

    • That’s a good question. I haven’t used K-modes for a particular project, so I don’t have much insight into how to choose the right number of clusters. Generally, I think it makes sense to run the algorithm multiple times with different numbers of clusters, and choose the result that is most suitable for your application.

    • Gdog says:

      Having the same issue. Even with a domain expert (in my case, clinical data), I can’t determine the best K.

  11. Drew says:

    Nice article – thanks

  12. Pingback: Think Big, a Teradata Company Expands Capabilities for Building Data Lakes with Apache Spark - Data Science Tidings

  13. Klevin DCunha says:

    Can you use K-modes if the data set is a mix of categorical and numerical data points?

  14. Pingback: Unsupervised Learning – My Learnings

  15. waficjfahme says:

    This is one amazing blog post; it is very helpful, as I struggled with questionnaire data. I want to clarify one point on how K-modes works. You mentioned that to select a centroid, K-modes will record which answer to each question got the most votes, but looking at the matrix created above, how can the algorithm know which answers (features) belong to which questions, as the questions are not visible in the matrix?

  16. Pingback: Clustering continuous and categorical data with Alteryx – Vizery Business

  17. Pingback: Clustering categorical data in Alteryx - The Data School

  18. Pingback: Blogging for Freedom | Best links of the week #15

  19. Deepa says:

    I want to know if K-modes can be used for clustering ordered categorical data.

  20. Pingback: Popular DS, AI, ML Blogs & Articles – pmlogy

  21. Pingback: Popular DS, AI, ML Blogs & Articles -

  22. Pingback: Unsupervised Learning - My Learnings

  23. Pingback: Important DS AI ML Articles -

  24. Pingback: Practical Consideration in K Means Algorithm – My Blog

  25. Pingback: Teradata Expands Capabilities For Information Lakes With Apache Spark - TheBestEntrepreneurship

  26. Pingback: Teradata Expands Capabilities For Knowledge Lakes With Apache Spark - GIST BEAST

  27. Pingback: Teradata Expands Capabilities For Knowledge Lakes With Apache Spark - Houston Limo Association

  28. Pingback: Teradata Expands Capabilities For Knowledge Lakes With Apache Spark - Trust Businesses

  29. Pingback: Teradata Expands Capabilities For Knowledge Lakes With Apache Spark - TheBestEntrepreneurship

  30. Pingback: Teradata Expands Capabilities For Data Lakes With Apache Spark - Infor trabajo
