SVM Variants and Comparisons

Abhishek Biswas
5 min read · Mar 11, 2023


Table of Contents:

  • Variants of support vector machines
  • Examples of each in real life
  • Comparison of all variants
  • Pros and cons of each variant
  • Alternative or similar models for each

Introduction to the variants:

  1. Linear SVM: This is the basic form of SVM that separates classes using a linear boundary in feature space.
  2. Nonlinear SVM: When the data is not linearly separable, a nonlinear SVM is used. It uses kernel functions to transform the input data into a higher-dimensional feature space, where the data can be linearly separated.
  3. One-Class SVM: This variant of SVM is used for outlier detection or novelty detection. It learns the boundary of a set of observations that contain no anomalies, and then detects new observations that fall outside of this boundary.
  4. Support Vector Regression (SVR): This variant of SVM is used for regression problems, where the goal is to predict a continuous output variable. It fits a function that keeps predictions within a tolerance (epsilon) of the actual outputs wherever possible, penalizing only larger deviations.
  5. Nu-SVM: This variant of SVM introduces a parameter “nu” (between 0 and 1) that acts as an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors, giving direct control over the margin width.
  6. Weighted SVM: This variant of SVM allows for assigning different weights to different classes in the training data, which can help in cases where the classes are imbalanced.
  7. Multiple Kernel Learning (MKL): This variant of SVM allows for combining multiple kernel functions to achieve better performance on complex classification problems.

Examples of each in real life:

(A minimal scikit-learn sketch for each of these examples follows the list.)

  1. Linear SVM: Linear SVM is used when the data is linearly separable. For example, let’s say we have a dataset of iris flowers with their sepal length and sepal width measurements. We want to predict whether a flower belongs to the setosa or versicolor species. We can use a linear SVM to separate the two species using a linear boundary in the feature space of sepal length and sepal width.
  2. Nonlinear SVM: Nonlinear SVM is used when the data is not linearly separable. For example, let’s say we have a dataset of images of handwritten digits, and we want to classify each image as one of the 10 digits (0–9). We can use a nonlinear SVM with a polynomial kernel function to transform the input data into a higher-dimensional feature space, where the data can be linearly separated.
  3. One-Class SVM: One-Class SVM is used for outlier detection or novelty detection. For example, let’s say we have a dataset of credit card transactions, and we want to identify transactions that are likely to be fraudulent. We can use a one-class SVM to learn the boundary of a set of transactions that contain no anomalies, and then detect new transactions that fall outside of this boundary as potential fraudulent transactions.
  4. Support Vector Regression (SVR): SVR is used for regression problems, where the goal is to predict continuous output variables. For example, let’s say we have a dataset of housing prices with their features such as square footage, number of bedrooms, and location. We can use an SVR to predict the price of a new house based on its features.
  5. Nu-SVM: Nu-SVM is a variant of SVM that introduces a parameter “nu” that controls the number of support vectors and the margin width. For example, let’s say we have a dataset of email messages, and we want to classify each message as spam or not spam. A larger value of “nu” permits more support vectors and a wider, softer margin, while a smaller value enforces a stricter fit; tuning this trade-off can help reduce the number of false positives (i.e., classifying a non-spam message as spam).
  6. Weighted SVM: Weighted SVM is used when the classes in the training data are imbalanced. For example, let’s say we have a dataset of medical images, and we want to classify each image as benign or malignant. If the dataset has more benign images than malignant images, we can use a weighted SVM to assign a higher weight to the malignant class, which can help to improve the performance of the classifier on the minority class.
  7. Multiple Kernel Learning (MKL): MKL is a variant of SVM that allows for combining multiple kernel functions to achieve better performance on complex classification problems. For example, let’s say we have a dataset of protein sequences, and we want to classify each sequence as belonging to one of several protein families. We can use MKL to combine multiple kernel functions that capture different aspects of the protein sequences, such as their amino acid composition, secondary structure, and evolutionary conservation.
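
For the iris example (item 1), here is a minimal sketch using scikit-learn’s SVC with a linear kernel on the two sepal features; the setosa-vs-versicolor split is exactly the scenario described above.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = datasets.load_iris()
# keep setosa (0) vs. versicolor (1), using sepal length and sepal width only
mask = iris.target < 2
X, y = iris.data[mask, :2], iris.target[mask]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear", C=1.0)  # linear decision boundary
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```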
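
For the handwritten-digit example (item 2), a sketch along the same lines uses scikit-learn’s bundled digits dataset with a polynomial kernel:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 images of handwritten digits, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)
clf = SVC(kernel="poly", degree=3)  # polynomial kernel for a non-linear boundary
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```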
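
For fraud-style outlier detection (item 3), the sketch below trains scikit-learn’s OneClassSVM on hypothetical “normal” transaction features; the synthetic data is a stand-in for real transaction records.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# hypothetical "normal" transactions: [amount, hour of day] (synthetic stand-in)
normal = rng.normal(loc=[50.0, 12.0], scale=[20.0, 3.0], size=(500, 2))

clf = OneClassSVM(kernel="rbf", nu=0.05)  # nu ~ expected fraction of outliers
clf.fit(normal)

new = np.array([[55.0, 13.0], [900.0, 3.0]])  # second row is anomalous
print(clf.predict(new))  # +1 = inlier, -1 = potential fraud
```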
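
For the housing example (item 4), a minimal SVR sketch; scikit-learn’s California housing dataset stands in for whatever housing data you have (it downloads on first use), and training is subsampled because SVR scales poorly with dataset size.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# features include median income, house age, average rooms, location, etc.
X, y = fetch_california_housing(return_X_y=True)  # downloads on first use
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# scaling matters for SVR; epsilon is the "no-penalty" tube around the target
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
reg.fit(X_train[:2000], y_train[:2000])  # subsample for tractable training time
print("R^2 on held-out data:", reg.score(X_test, y_test))
```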
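
For the spam example (item 5), scikit-learn exposes the nu-parameterized classifier as NuSVC; the synthetic features below are a stand-in for real email features such as word frequencies.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVC

# synthetic stand-in for email features (e.g., word frequencies)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# nu in (0, 1]: an upper bound on the fraction of margin errors and a
# lower bound on the fraction of support vectors; larger nu -> softer margin
clf = NuSVC(nu=0.3, kernel="rbf")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print("support vectors:", len(clf.support_))
```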
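
For the imbalanced medical-imaging example (item 6), scikit-learn’s SVC accepts per-class weights via class_weight; synthetic imbalanced data stands in for real image features.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# imbalanced synthetic stand-in: 0 = benign (majority), 1 = malignant (minority)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# up-weight the minority class; class_weight="balanced" infers weights instead
clf = SVC(kernel="rbf", class_weight={0: 1, 1: 9})
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```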
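
For MKL (item 7), scikit-learn has no built-in MKL solver, so the sketch below approximates the idea with a fixed-weight combination of two base kernels fed to SVC as a precomputed kernel; true MKL would learn the combination weight from data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def combined_kernel(A, B, w=0.5):
    # fixed-weight convex combination of two base kernels;
    # true MKL would learn w as part of training
    return w * rbf_kernel(A, B, gamma=0.1) + (1 - w) * linear_kernel(A, B)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X_train, X_train), y_train)  # train-vs-train Gram matrix
print("test accuracy:", clf.score(combined_kernel(X_test, X_train), y_test))
```

Dedicated MKL implementations learn the kernel weights jointly with the classifier rather than fixing them up front.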

Comparison of all variants:

  • Linear SVM: classification; assumes linearly separable data; the fastest to train and the simplest to tune.
  • Nonlinear SVM: classification; handles non-linear boundaries via kernel functions; more expensive and needs more tuning.
  • One-Class SVM: outlier/novelty detection; trained on a single “normal” class, so it suits unsupervised settings.
  • SVR: regression; predicts continuous outputs and is relatively robust to outliers.
  • Nu-SVM: classification or regression; the “nu” parameter directly controls the support-vector/margin-error trade-off.
  • Weighted SVM: classification on imbalanced data; per-class weights shift emphasis toward the minority class.
  • MKL: classification on complex, heterogeneous data; combines several kernels at additional computational cost.

Pros and cons of each variant:

  1. Linear SVM:

Pros:

  • Works well for linearly separable data.
  • Fast training time on large datasets.
  • Performs well on high-dimensional data.

Cons:

  • Not suitable for non-linear data.
  • Limited capacity to capture complex relationships between features.

2. Nonlinear SVM:

Pros:

  • Works well for non-linearly separable data.
  • Can capture complex relationships between features.
  • Performs well on high-dimensional data.

Cons:

  • Can be computationally expensive on large datasets.
  • May require more hyperparameter tuning than linear SVMs.

3. One-Class SVM:

Pros:

  • Can identify outliers and anomalies.
  • Can work with very small amounts of labeled data.
  • Can be used in unsupervised learning settings.

Cons:

  • Only works with one class of data.
  • Sensitive to the choice of hyperparameters.
  • May not work well for highly variable data.

4. Support Vector Regression (SVR):

Pros:

  • Can be used for continuous variable prediction.
  • Robust to outliers.
  • Can handle non-linear relationships between features.

Cons:

  • Requires careful hyperparameter tuning.
  • Can be computationally expensive for large datasets.
  • May not work well for highly variable data.

5. Nu-SVM:

Pros:

  • Can control the number of support vectors and margin width.
  • Works well for imbalanced datasets.
  • Can be tuned toward a low false-positive rate.

Cons:

  • Sensitive to the choice of hyperparameters.
  • May not work well for highly variable data.
  • Can be computationally expensive on large datasets.

6. Weighted SVM:

Pros:

  • Can handle imbalanced datasets.
  • Allows for greater emphasis on one class than the other.
  • Performs well when the minority class is of greater interest.

Cons:

  • Sensitive to the choice of weighting.
  • May not work well for highly variable data.
  • Can result in overfitting if the weighting is not carefully chosen.

7. Multiple Kernel Learning (MKL):

Pros:

  • Can combine multiple kernel functions for improved performance.
  • Works well for complex, heterogeneous data.
  • Can capture complex relationships between features.

Cons:

  • Can be computationally expensive.
  • Requires careful selection and combination of kernel functions.
  • May not work well for highly variable data.

Alternative or similar models for each:

  • Linear SVM: logistic regression, the perceptron, and linear discriminant analysis are common linear alternatives.
  • Nonlinear SVM: kernel logistic regression, random forests, gradient-boosted trees, and neural networks also model non-linear boundaries.
  • One-Class SVM: isolation forest, local outlier factor (LOF), and autoencoders are popular alternatives for anomaly detection.
  • SVR: kernel ridge regression, Gaussian process regression, and gradient-boosted regression trees address similar problems.
  • Nu-SVM: the standard C-parameterized SVM is an equivalent reparameterization of the same underlying model.
  • Weighted SVM: cost-sensitive logistic regression, or resampling techniques such as SMOTE paired with any standard classifier.
  • MKL: Gaussian processes with composite kernels, stacked ensembles over different feature views, and deep networks that learn feature combinations.
