Feature Scaling in Machine Learning
• Absolute Maximum Scaling: MaxAbs scaling is a method used to scale each feature
by its maximum absolute value (i.e., the maximum value by absolute magnitude),
mapping all values into the range [-1, 1] while preserving the sign of the data. It is
especially useful when your data contains both positive and negative values.
• Formula: X_scaled = X / max(|X|)
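As a minimal sketch of the formula above, applied to a small hypothetical feature column with NumPy (the sample values are illustrative):

```python
import numpy as np

# Hypothetical feature with both positive and negative values.
x = np.array([2.0, -5.0, 1.0, 4.0])

# MaxAbs scaling: divide by the largest absolute value; signs are preserved.
x_scaled = x / np.max(np.abs(x))

print(x_scaled)  # every value now lies in [-1, 1]
```

Note that the value with the largest magnitude (-5.0) maps to -1, and all other values keep their sign.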
• Min-Max Scaling: Min-Max Scaling is a feature scaling technique that transforms
the data into a specific range, typically [0, 1]. This is achieved by subtracting the
minimum value of the feature and dividing by the range (the difference between the
maximum and minimum values of the feature).
• Formula: X_scaled = (X - X_min) / (X_max - X_min)
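A minimal sketch of min-max scaling with NumPy, using an illustrative feature column:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 50.0])

# Min-Max scaling: subtract the minimum, divide by the range (max - min).
x_scaled = (x - x.min()) / (x.max() - x.min())

print(x_scaled)  # [0.   0.25 0.5  1.  ]
```

The minimum always maps to 0 and the maximum to 1; everything else falls proportionally in between.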
• Standardization: Standardization (also known as Z-score normalization) is a
technique used to scale features so that they have a mean of 0 and a standard deviation
of 1. This is done by subtracting the mean of the feature and dividing by the standard
deviation.
• Formula: X_scaled = (X - μ) / σ, where μ is the mean and σ the standard deviation of the feature
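A minimal sketch of standardization with NumPy on an illustrative feature column; after scaling, the feature has mean 0 and standard deviation 1:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# Standardization: subtract the mean, divide by the standard deviation.
x_scaled = (x - x.mean()) / x.std()

print(x_scaled.mean())  # ~0
print(x_scaled.std())   # ~1
```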
• Robust Scaling: Robust Scaling is a feature scaling technique that uses the median
and interquartile range (IQR) to scale the data. Because it does not rely on the mean
and standard deviation, both of which can be significantly affected by outliers, it is
much more resistant to the influence of extreme values.
• Formula: X_scaled = (X - median(X)) / IQR, where IQR = Q3 - Q1
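A minimal sketch of robust scaling with NumPy; the illustrative data includes an outlier (100.0) to show that the bulk of the values are still scaled sensibly:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # 100.0 is an outlier

# Robust scaling: subtract the median, divide by the interquartile range.
q1, q3 = np.percentile(x, [25, 75])
x_scaled = (x - np.median(x)) / (q3 - q1)

print(x_scaled)  # the four inliers land near 0; the outlier stays extreme
```

Unlike min-max scaling, the outlier does not compress the inliers into a tiny interval; it simply remains far from 0 after scaling.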
• Z-score normalization: Z-score normalization (also known as Standardization) is a
technique used in machine learning to transform features so that they have a mean of
0 and a standard deviation of 1. This transformation is important when the data has
different scales or units, which can hurt the performance of many machine learning
algorithms, especially those based on distance metrics (such as k-nearest neighbors)
and those trained with gradient-based optimization.
• Formula: Z = (X - μ) / σ
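To illustrate why this matters for distance-based algorithms, here is a minimal NumPy sketch (with made-up values) of z-scoring two features that start on very different scales; after scaling, both columns contribute comparably to any distance computation:

```python
import numpy as np

# Two hypothetical features on very different scales (columns).
X = np.array([[1.0, 1000.0],
              [2.0, 3000.0],
              [3.0, 2000.0]])

# Z-score each column: subtract the column mean, divide by the column std.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled)  # both columns now have mean 0 and std 1
```

Before scaling, the second column would dominate any Euclidean distance; afterwards, both features are on equal footing.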