Variables that are measured at different scales do not contribute equally to the analysis and can end up creating a bias.
For example, a variable that ranges between 0 and 1000 will outweigh a variable that ranges between 0 and 1. Using these variables without standardization effectively gives the variable with the larger range a weight of 1000 in the analysis. Transforming the data to comparable scales prevents this problem.
This is why we need standardization or normalization.
Normalization is a good technique to use when you do not know the distribution of your data or when you know the distribution is not Gaussian.
Standardization assumes that your data has a Gaussian (bell curve) distribution. This does not strictly have to be true, but the technique is more effective if your attribute distribution is Gaussian.
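The two transformations above can be sketched with plain numpy. The small matrix below is made-up toy data: one column on a 0–1000 scale and one on a 0–1 scale, mirroring the example earlier.

```python
import numpy as np

# Toy data: one large-range feature next to a small-range one.
X = np.array([[200.0, 0.2],
              [400.0, 0.4],
              [800.0, 0.9],
              [1000.0, 0.1]])

# Normalization (min-max): rescale each column to the [0, 1] range.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Standardization (z-score): zero mean and unit variance per column.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_norm)
print(X_std)
```

After normalization both columns lie in [0, 1]; after standardization each column has mean 0 and standard deviation 1, so neither dominates by scale alone.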
Any machine learning algorithm that computes the distance between data points needs feature scaling (standardization or normalization). This includes all distance-based algorithms.
Example:

KNN (K Nearest Neighbors)

SVM (Support Vector Machine)

Logistic Regression

KMeans Clustering
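A quick illustration of why these algorithms need scaling: with two toy points whose features live on different scales, the Euclidean distance that KNN or KMeans would compute is dominated entirely by the larger-range feature. The feature ranges used for scaling here (0–1000 and 0–1) are assumptions matching the earlier example.

```python
import numpy as np

# Two toy points with features on very different scales.
a = np.array([300.0, 0.2])
b = np.array([900.0, 0.9])

# Unscaled Euclidean distance: the first feature dominates entirely.
d_raw = np.linalg.norm(a - b)

# Min-max scale both features to [0, 1], assuming ranges 0-1000 and 0-1.
ranges = np.array([1000.0, 1.0])
d_scaled = np.linalg.norm(a / ranges - b / ranges)

print(d_raw)     # the 0.7 gap in the second feature is invisible here
print(d_scaled)  # both features now contribute to the distance
```

Before scaling, the distance is essentially just the 600-unit gap in the first feature; after scaling, the 0.6 and 0.7 gaps contribute comparably.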
Algorithms used for matrix factorization, decomposition, and dimensionality reduction also require feature scaling.
Example:

PCA (Principal Component Analysis)

SVD (Singular Value Decomposition)