Hello!
I’m a beginner in Data Science and Machine Learning, so I’m sorry if this is kind of a silly question.
I understand the importance of standardizing/normalizing features, and in my studies I always come across StandardScaler(). Reading the scikit-learn documentation, I saw that there is also preprocessing.scale(), and in practice both StandardScaler() and plain scale() gave the same result in my test.
The documentation says that StandardScaler() implements the "Transformer API". What is that? And what is the difference between using preprocessing.scale() vs preprocessing.StandardScaler()?
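From the docs, I gather that the "Transformer API" means an estimator object with fit(), transform(), and fit_transform() methods, so if I understood correctly this one-liner should be equivalent to both of my tests below (my assumption, please correct me if I'm wrong):

from sklearn.preprocessing import StandardScaler
X_scaled = StandardScaler().fit_transform(X_train)  # should match preprocessing.scale(X_train)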
My tests:
from sklearn import preprocessing
import numpy as np
X_train = np.array([[ 1., -1.,  2.],
                    [ 2.,  0.,  0.],
                    [ 0.,  1., -1.]])
X_scaled = preprocessing.scale(X_train)
X_scaled  # standardized features
Out[ ]:
array([[ 0. , -1.22474487, 1.33630621],
[ 1.22474487, 0. , -0.26726124],
[-1.22474487, 1.22474487, -1.06904497]])
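Just to check, I also verified that each column of X_scaled now has zero mean and unit standard deviation (this check is my own addition, not from the docs):

X_scaled.mean(axis=0)  # -> array([0., 0., 0.])
X_scaled.std(axis=0)   # -> array([1., 1., 1.])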
Using StandardScaler():
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
scaler.mean_   # per-feature means learned from X_train
scaler.scale_  # per-feature standard deviations learned from X_train
scaler.transform(X_train)
Out[ ]:
array([[ 0. , -1.22474487, 1.33630621],
[ 1.22474487, 0. , -0.26726124],
[-1.22474487, 1.22474487, -1.06904497]])
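Both give the same numbers on X_train, so my guess is that the difference only shows up with new data: preprocessing.scale() always standardizes an array using that array's own mean and standard deviation, while the fitted scaler reuses the statistics it learned from X_train. Here is a small sketch of what I mean (X_test is just made-up data for illustration):

X_test = np.array([[-1., 1., 0.]])  # hypothetical new sample
scaler.transform(X_test)            # standardized with X_train's mean_ and scale_
# -> array([[-2.44948975,  1.22474487, -0.26726124]])
preprocessing.scale(X_test)         # would recompute mean/std from X_test itself

Is that the point of the Transformer API?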
Thank you very much!