A Tour of Machine Learning Algorithms

  • I was hoping to see ‘(Multiple) Correspondence Analysis’ under the list of “Dimensionality Reduction” methods.
  • It’s good and well explained, with the algorithms nicely clustered.

Originally published by Jason Brownlee in 2013, it is still a goldmine for all machine learning professionals. The algorithms are broken down into several categories. Here we provide a high-level summary; a much longer and more detailed version can be found here. You can even download an algorithm map from the original article. Below is a much smaller version.

It would be interesting to list, for each algorithm,

and, generally speaking, to compare these algorithms. I would add HDT, Jackknife regression, density estimation, attribution modeling (to optimize the marketing mix), linkage (in fraud detection), indexation (to create taxonomies or to cluster large data sets consisting of text), bucketisation, and time series algorithms.
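Of these suggested additions, density estimation is perhaps the most standard, so here is a minimal sketch using scikit-learn’s KernelDensity, assuming scikit-learn and NumPy are installed; the sample data and bandwidth are purely illustrative.

    # Minimal sketch of kernel density estimation (assumes scikit-learn and NumPy).
    import numpy as np
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    X = rng.normal(loc=0.0, scale=1.0, size=(500, 1))  # toy one-dimensional sample

    kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X)
    grid = np.linspace(-4, 4, 9).reshape(-1, 1)
    print(np.exp(kde.score_samples(grid)))  # estimated density at the grid points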

Ensemble methods to fit data: see original paper
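As a rough illustration of what an ensemble fit looks like in practice, here is a minimal sketch, assuming scikit-learn; the dataset and settings are illustrative and not taken from the original paper. It compares a single decision tree with a random forest, i.e. a bagged ensemble of trees, on the same data.

    # Minimal sketch (assumes scikit-learn): a single tree vs an ensemble of trees.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    single_tree = DecisionTreeClassifier(random_state=0)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)

    print("single tree:  ", cross_val_score(single_tree, X, y, cv=5).mean())
    print("random forest:", cross_val_score(forest, X, y, cv=5).mean())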

I love these kinds of lists! Nicely done. Biggest surprise? The nice list of dimensionality reduction methods (that includes the Sammon Projection). 

But I have to make a couple of quibbles:

1) Back-propagation is an algorithm used to fit a multi-layer perceptron (MLP) neural network. Other algorithms for finding the weights include QuickProp, R-Prop, Conjugate Gradient, and Levenberg-Marquardt. The entry labelled “back-prop” should be changed to MLP (a sketch of this distinction follows the quibbles).

2) The Kohonen Self-Organizing Map: I wouldn’t put this under instance-based algorithms at all; it’s really a clustering algorithm, very much like K-Means clustering. It’s related to instance-based algorithms in the same way K-Means is related to k-NN: they typically use the same distance metric (Euclidean distance), but Kohonen, like K-Means, is an unsupervised method (see the second sketch below).
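To illustrate the first quibble: the MLP is the model, and the algorithm used to find its weights is a separate choice. Here is a minimal sketch, assuming scikit-learn is available; scikit-learn exposes “lbfgs”, “sgd”, and “adam” as solvers (QuickProp, R-Prop, and Levenberg-Marquardt are not offered there), and the toy data is illustrative.

    # Minimal sketch (assumes scikit-learn): the same MLP architecture
    # trained with three different weight-fitting algorithms (solvers).
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

    for solver in ("lbfgs", "sgd", "adam"):
        mlp = MLPClassifier(hidden_layer_sizes=(10,), solver=solver,
                            max_iter=2000, random_state=0)
        mlp.fit(X, y)
        print(f"{solver}: training accuracy = {mlp.score(X, y):.2f}")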
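And for the second quibble, a minimal sketch of the K-Means versus k-NN contrast, again assuming scikit-learn and using toy data: K-Means is fit on the features alone, k-NN also needs labels, and both default to Euclidean distance.

    # Minimal sketch (assumes scikit-learn): unsupervised K-Means vs supervised k-NN.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_blobs(n_samples=300, centers=3, random_state=0)

    # Unsupervised: K-Means never sees y; it partitions X by Euclidean distance.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("first ten cluster labels:", kmeans.labels_[:10])

    # Supervised: k-NN stores the labelled instances and classifies by nearest neighbours.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    print("training accuracy:", knn.score(X, y))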
