ML with sklearn: a detailed guide to the make_pipeline, RobustScaler, KFold, and cross_val_score functions (code explanation and usage)

Contents

sklearn's make_pipeline: code explanation and usage

Code explanation of make_pipeline

Usage of make_pipeline

1. Using the Pipeline class to scale data with MinMaxScaler and then train an SVM

2. Creating a pipeline with make_pipeline

sklearn's RobustScaler: code explanation and usage

Code explanation of RobustScaler

Usage of RobustScaler

sklearn's KFold: code explanation and usage

Code explanation of KFold

Usage of KFold

sklearn's cross_val_score: code explanation and usage

Code explanation of cross_val_score

Objects accepted by the scoring parameter

Usage of cross_val_score

1. Regression: the diabetes dataset

2. Classification: the iris dataset

 

sklearn's make_pipeline: code explanation and usage

To simplify the process of building chains of transformations and models, scikit-learn provides the Pipeline class, which joins multiple processing steps into a single scikit-learn estimator. Pipeline itself has fit, predict, and score methods and behaves like any other scikit-learn model.

Code explanation of make_pipeline

def make_pipeline(*steps, **kwargs):
    """Construct a Pipeline from the given estimators.

    This is a shorthand for the Pipeline constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set  to the lowercase of their types automatically.

    Parameters
    ----------
    *steps : list of estimators,

    memory : None, str or object with the joblib.Memory interface, optional
        Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of  the transformers before fitting. Therefore, the transformer  instance given to the pipeline cannot be inspected directly. Use the attribute ``named_steps`` or ``steps`` to  inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.

In short: make_pipeline is shorthand for the Pipeline constructor; it neither requires nor permits naming the estimators, whose names are instead set automatically to the lowercased class names. The optional memory argument caches the pipeline's fitted transformers (off by default; a string is treated as the path to a caching directory). Because caching clones the transformers before fitting, the instances passed to the pipeline cannot be inspected directly; use the named_steps or steps attribute instead. Caching pays off when fitting is time-consuming.

    Examples
    --------
    >>> from sklearn.naive_bayes import GaussianNB
    >>> from sklearn.preprocessing import StandardScaler
    >>> make_pipeline(StandardScaler(), GaussianNB(priors=None))
    ...     # doctest: +NORMALIZE_WHITESPACE
    Pipeline(memory=None,
             steps=[('standardscaler',
                     StandardScaler(copy=True, with_mean=True, with_std=True)),
                    ('gaussiannb', GaussianNB(priors=None))])

    Returns
    -------
    p : Pipeline
    """
    memory = kwargs.pop('memory', None)
    if kwargs:
        raise TypeError('Unknown keyword arguments: "{}"'
                        .format(list(kwargs.keys())[0]))
    return Pipeline(_name_estimators(steps), memory=memory)

 

 

 

Usage of make_pipeline


1. Using the Pipeline class to scale data with MinMaxScaler and then train an SVM

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])
pipe.fit(X_train, y_train)      # X_train/y_train: your training split
pipe.score(X_test, y_test)

 

2. Creating a pipeline with make_pipeline

Building a pipeline with the Pipeline class is somewhat verbose, and we usually do not need user-specified names for each step. In that case, make_pipeline can create the pipeline for us and name each step automatically after its class, as verified in the sketch below.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

pipe = make_pipeline(MinMaxScaler(), SVC())
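To verify the automatic naming, a quick sketch continuing from the pipe defined above (as the docstring states, each step name is simply the lowercased class name):

# Step names are derived from the lowercased class names
print([name for name, _ in pipe.steps])   # ['minmaxscaler', 'svc']

# Individual steps remain accessible through named_steps
print(pipe.named_steps["minmaxscaler"])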

 

References
《Python机器学习基础教程》 (Introduction to Machine Learning with Python), the section on building pipelines (make_pipeline)
Python sklearn.pipeline.make_pipeline() Examples

 

sklearn's RobustScaler: code explanation and usage

Code explanation of RobustScaler

class RobustScaler(BaseEstimator, TransformerMixin):
    """Scale features using statistics that are robust to outliers.

    This Scaler removes the median and scales the data according to  the quantile range (defaults to IQR: Interquartile Range).
    The IQR is the range between the 1st quartile (25th quantile)  and the 3rd quartile (75th quantile). Centering and scaling happen independently on each feature (or each sample, depending on the ``axis`` argument) by computing the relevant statistics on the samples in the training set. Median and  interquartile  range are then stored to be used on later data using the ``transform`` method.

    Standardization of a dataset is a common requirement for many  machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and  the interquartile range often give better results.

    .. versionadded:: 0.17

    Read more in the :ref:`User Guide <preprocessing_scaler>`.

    Parameters
    ----------
    with_centering : boolean, True by default
        If True, center the data before scaling.  This will cause ``transform`` to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in  memory.

    with_scaling : boolean, True by default
        If True, scale the data to interquartile range.

    quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
        Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR
        Quantile range used to calculate ``scale_``.

        .. versionadded:: 0.18

    copy : boolean, optional, default is True
        If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be
        returned.

    Attributes
    ----------
    center_ : array of floats
        The median value for each feature in the training set.

    scale_ : array of floats
        The (scaled) interquartile range for each feature in the training set.

        .. versionadded:: 0.17
           *scale_* attribute.

    See also
    --------
    robust_scale: Equivalent function without the estimator API.

    :class:`sklearn.decomposition.PCA`
        Further removes the linear correlation across features with   'whiten=True'.

    Notes
    -----
    For a comparison of the different scalers, transformers, and normalizers, see :ref:`examples/preprocessing/plot_all_scaling.py
    <sphx_glr_auto_examples_preprocessing_plot_all_scaling.py>`.

    https://en.wikipedia.org/wiki/Median_(statistics)
    https://en.wikipedia.org/wiki/Interquartile_range
    """

 

In short: RobustScaler scales features using statistics that are robust to outliers. It removes the median and scales the data according to a quantile range, by default the IQR (the interquartile range, between the 25th and 75th percentiles). Centering and scaling are computed independently per feature on the training set, and the stored median and IQR are reused by the transform method on later data. Standardization typically removes the mean and scales to unit variance, but outliers can distort the sample mean and variance; in such cases the median and IQR often give better results.

The key parameters are with_centering and with_scaling (both True by default; centering is refused on sparse matrices because it would build a dense matrix that is usually too large to fit in memory), quantile_range (default (25.0, 75.0), i.e. the IQR), and copy (when False, in-place scaling is attempted but not guaranteed, e.g. a copy may still be returned if the input is not a NumPy array or scipy.sparse CSR matrix). After fitting, center_ holds the per-feature medians of the training set and scale_ the (scaled) per-feature interquartile ranges. See also robust_scale, the equivalent function without the estimator API.

 

    def __init__(self, with_centering=True, with_scaling=True,
                 quantile_range=(25.0, 75.0), copy=True):
        self.with_centering = with_centering
        self.with_scaling = with_scaling
        self.quantile_range = quantile_range
        self.copy = copy

    def _check_array(self, X, copy):
        """Makes sure centering is not enabled for sparse matrices."""
        X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
                        estimator=self, dtype=FLOAT_DTYPES)

        if sparse.issparse(X):
            if self.with_centering:
                raise ValueError(
                    "Cannot center sparse matrices: use `with_centering=False`"
                    " instead. See docstring for motivation and alternatives.")
        return X

    def fit(self, X, y=None):
        """Compute the median and quantiles to be used for scaling.

        Parameters
        ----------
        X : array-like, shape [n_samples, n_features]
            The data used to compute the median and quantiles
            used for later scaling along the features axis.
        """
        if sparse.issparse(X):
            raise TypeError("RobustScaler cannot be fitted on sparse inputs")
        X = self._check_array(X, self.copy)
        if self.with_centering:
            self.center_ = np.median(X, axis=0)

        if self.with_scaling:
            q_min, q_max = self.quantile_range
            if not 0 <= q_min <= q_max <= 100:
                raise ValueError("Invalid quantile range: %s" %
                                 str(self.quantile_range))

            q = np.percentile(X, self.quantile_range, axis=0)
            self.scale_ = (q[1] - q[0])
            self.scale_ = _handle_zeros_in_scale(self.scale_, copy=False)
        return self

    def transform(self, X):
        """Center and scale the data.

        Can be called on sparse input, provided that ``RobustScaler`` has been
        fitted to dense input and ``with_centering=False``.

        Parameters
        ----------
        X : {array-like, sparse matrix}
            The data used to scale along the specified axis.
        """
        if self.with_centering:
            check_is_fitted(self, 'center_')
        if self.with_scaling:
            check_is_fitted(self, 'scale_')
        X = self._check_array(X, self.copy)

        if sparse.issparse(X):
            if self.with_scaling:
                inplace_column_scale(X, 1.0 / self.scale_)
        else:
            if self.with_centering:
                X -= self.center_
            if self.with_scaling:
                X /= self.scale_
        return X

    def inverse_transform(self, X):
        """Scale back the data to the original representation

        Parameters
        ----------
        X : array-like
            The data used to scale along the specified axis.
        """
        if self.with_centering:
            check_is_fitted(self, 'center_')
        if self.with_scaling:
            check_is_fitted(self, 'scale_')
        X = self._check_array(X, self.copy)

        if sparse.issparse(X):
            if self.with_scaling:
                inplace_column_scale(X, self.scale_)
        else:
            if self.with_scaling:
                X *= self.scale_
            if self.with_centering:
                X += self.center_
        return X

 

 

Usage of RobustScaler

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import Lasso, ElasticNet

lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.5, random_state=1))
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.5, l1_ratio=.9, random_state=3))
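To see what the scaler actually learns, a minimal sketch on a single feature with one extreme outlier; per the docstring above, center_ is the per-feature median and scale_ the IQR:

import numpy as np
from sklearn.preprocessing import RobustScaler

# One feature whose last value is an extreme outlier
X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

scaler = RobustScaler()   # defaults: with_centering=True, quantile_range=(25.0, 75.0)
X_scaled = scaler.fit_transform(X)

print(scaler.center_)   # median of the column: [3.]
print(scaler.scale_)    # IQR = 75th - 25th percentile: [2.]

# Each value becomes (x - median) / IQR, so the outlier stretches its own
# transformed value but not the scale applied to the rest of the data.
print(X_scaled.ravel())  # [-1.  -0.5  0.   0.5 48.5]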

 

 

 

sklearn's KFold: code explanation and usage

Code explanation of KFold

class KFold Found at: sklearn.model_selection._split

class KFold(_BaseKFold):
    """K-Folds cross-validator
    Provides train/test indices to split data in train/test sets. Split  dataset into k consecutive folds (without shuffling by default).
    Each fold is then used once as a validation while the k - 1 remaining  folds form the training set.
    Read more in the :ref:`User Guide <cross_validation>`. 
    Parameters
    ----------
    n_splits : int, default=3
    Number of folds. Must be at least 2.
    
    shuffle : boolean, optional
    Whether to shuffle the data before splitting into batches.
    
    random_state : int, RandomState instance or None, optional, 
     default=None
    If int, random_state is the seed used by the random number  generator;
    If RandomState instance, random_state is the random number  generator;
    If None, the random number generator is the RandomState  instance used  by `np.random`. Used when ``shuffle`` == True.

In short: KFold provides train/test indices to split data into train and test sets, dividing the dataset into k consecutive folds (without shuffling by default); each fold is used once as validation while the remaining k-1 folds form the training set. n_splits (default 3) must be at least 2; shuffle controls whether the data is shuffled before splitting into batches; random_state seeds that shuffling (an int seed, a RandomState instance, or None to use np.random) and only takes effect when shuffle == True.
    Examples
    --------
    >>> from sklearn.model_selection import KFold
    >>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
    >>> y = np.array([1, 2, 3, 4])
    >>> kf = KFold(n_splits=2)
    >>> kf.get_n_splits(X)
    2
    >>> print(kf)  # doctest: +NORMALIZE_WHITESPACE
    KFold(n_splits=2, random_state=None, shuffle=False)
    >>> for train_index, test_index in kf.split(X):
    ...    print("TRAIN:", train_index, "TEST:", test_index)
    ...    X_train, X_test = X[train_index], X[test_index]
    ...    y_train, y_test = y[train_index], y[test_index]
    TRAIN: [2 3] TEST: [0 1]
    TRAIN: [0 1] TEST: [2 3]
    
    Notes
    -----
    The first ``n_samples % n_splits`` folds have size
    ``n_samples // n_splits + 1``, other folds have size
    ``n_samples // n_splits``, where ``n_samples`` is the number of 
     samples.
    
    See also
    --------
    StratifiedKFold
    Takes group information into account to avoid building folds with  imbalanced class distributions (for binary or multiclass  classification tasks).
    GroupKFold: K-fold iterator variant with non-overlapping groups.
    RepeatedKFold: Repeats K-Fold n times.
    """
    def __init__(self, n_splits=3, shuffle=False, 
        random_state=None):
        super(KFold, self).__init__(n_splits, shuffle, random_state)
    
    def _iter_test_indices(self, X, y=None, groups=None):
        n_samples = _num_samples(X)
        indices = np.arange(n_samples)
        if self.shuffle:
            check_random_state(self.random_state).shuffle(indices)
        n_splits = self.n_splits
        fold_sizes = (n_samples // n_splits) * np.ones(n_splits, dtype=np.int)
        fold_sizes[:n_samples % n_splits] += 1
        current = 0
        for fold_size in fold_sizes:
            start, stop = current, current + fold_size
            yield indices[start:stop]
            current = stop
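A quick check of the fold-size rule from the Notes in the docstring: with 7 samples and 3 folds, the first 7 % 3 = 1 fold gets 7 // 3 + 1 = 3 samples and the remaining folds get 2 each.

import numpy as np
from sklearn.model_selection import KFold

X = np.zeros((7, 2))  # 7 samples; the feature values are irrelevant here
for train_index, test_index in KFold(n_splits=3).split(X):
    print(len(test_index), test_index)
# 3 [0 1 2]
# 2 [3 4]
# 2 [5 6]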
 

 

 

Usage of KFold

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.model_selection import KFold
    >>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
    >>> y = np.array([1, 2, 3, 4])
    >>> kf = KFold(n_splits=2)
    >>> kf.get_n_splits(X)
    2
    >>> print(kf)  # doctest: +NORMALIZE_WHITESPACE
    KFold(n_splits=2, random_state=None, shuffle=False)
    >>> for train_index, test_index in kf.split(X):
    ...    print("TRAIN:", train_index, "TEST:", test_index)
    ...    X_train, X_test = X[train_index], X[test_index]
    ...    y_train, y_test = y[train_index], y[test_index]
    TRAIN: [2 3] TEST: [0 1]
    TRAIN: [0 1] TEST: [2 3]
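By default KFold keeps the samples in order; a minimal sketch of shuffled but reproducible folds, enabled by shuffle=True with a fixed random_state:

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(12).reshape(6, 2)

# With a fixed random_state, the shuffled folds are identical on every run
kf = KFold(n_splits=3, shuffle=True, random_state=42)
for fold, (train_index, test_index) in enumerate(kf.split(X)):
    print("fold %d: TRAIN %s TEST %s" % (fold, train_index, test_index))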

 

 

 

 

 

sklearn's cross_val_score: code explanation and usage

Code explanation of cross_val_score

def cross_val_score Found at: sklearn.model_selection._validation

def cross_val_score(estimator, X, y=None, groups=None, scoring=None, cv=None,  n_jobs=1, verbose=0, fit_params=None,   pre_dispatch='2*n_jobs'):
    """Evaluate a score by cross-validation
    Read more in the :ref:`User Guide <cross_validation>`.


  Parameters
    ----------
    estimator : estimator object implementing 'fit' The object to use to fit the data.
    
    X : array-like
    The data to fit. Can be for example a list, or an array.
    
    y : array-like, optional, default: None
    The target variable to try to predict in the case of  supervised learning.
    
    groups : array-like, with shape (n_samples,), optional
    Group labels for the samples used while splitting the dataset into  train/test set.
    
    scoring : string, callable or None, optional, default: None
    A string (see model evaluation documentation) or a scorer callable object / function with signature  ``scorer(estimator, X, y)``.
    
    cv : int, cross-validation generator or an iterable, optional
    Determines the cross-validation splitting strategy.
    Possible inputs for cv are:
    - None, to use the default 3-fold cross validation,
    - integer, to specify the number of folds in a `(Stratified)KFold`,
    - An object to be used as a cross-validation generator.
    - An iterable yielding train, test splits.
    For integer/None inputs, if the estimator is a classifier and ``y`` is  either binary or multiclass, :class:`StratifiedKFold` is used. In all  other cases, :class:`KFold` is used.
    
    Refer :ref:`User Guide <cross_validation>` for the various cross-validation strategies that can be used here.
    
    n_jobs : integer, optional
    The number of CPUs to use to do the computation. -1 means   'all CPUs'.
    
    verbose : integer, optional
    The verbosity level.
    
    fit_params : dict, optional
    Parameters to pass to the fit method of the estimator.
    
    pre_dispatch : int, or string, optional
    Controls the number of jobs that get dispatched during parallel  execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched  than CPUs can process. This parameter can be:
    
    - None, in which case all the jobs are immediately  created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
    - An int, giving the exact number of total jobs that are spawned
    - A string, giving an expression as a function of n_jobs, as in '2*n_jobs'
    
    Returns
    -------
    scores : array of float, shape=(len(list(cv)),)
    Array of scores of the estimator for each run of the cross validation.

In short: cross_val_score evaluates an estimator by cross-validation and returns an array of shape (len(list(cv)),) with the score of each run. The estimator must implement fit; X is the data to fit and y the target in the supervised-learning case; groups holds the sample labels used when splitting the dataset into train/test sets. scoring is a metric name (see the model evaluation documentation) or a scorer callable with signature scorer(estimator, X, y). cv determines the splitting strategy: None for the default 3-fold cross-validation, an integer for the number of folds in a (Stratified)KFold, an object used as a cross-validation generator, or an iterable yielding train/test splits; for integer/None inputs, StratifiedKFold is used when the estimator is a classifier and y is binary or multiclass, and KFold in all other cases. n_jobs sets the number of CPUs used for the computation (-1 meaning all CPUs), verbose the verbosity level, and fit_params extra arguments passed to the estimator's fit method. pre_dispatch caps how many jobs are dispatched during parallel execution, to avoid an explosion of memory consumption when more jobs are dispatched than the CPUs can process: None spawns all jobs immediately (for lightweight, fast-running jobs), an int gives the exact number of total jobs, and a string gives an expression in terms of n_jobs, as in '2*n_jobs'.

    Examples
    --------
    >>> from sklearn import datasets, linear_model
    >>> from sklearn.model_selection import cross_val_score
    >>> diabetes = datasets.load_diabetes()
    >>> X = diabetes.data[:150]
    >>> y = diabetes.target[:150]
    >>> lasso = linear_model.Lasso()
    >>> print(cross_val_score(lasso, X, y))  # doctest: +ELLIPSIS
    [ 0.33150734  0.08022311  0.03531764]
    
    See Also
    ---------
    :func:`sklearn.model_selection.cross_validate`:
    To run cross-validation on multiple metrics and also to return  train scores, fit times and score times.
    
    :func:`sklearn.metrics.make_scorer`:
    Make a scorer from a performance metric or loss function.
    
    """
    # To ensure multimetric format is not supported
    scorer = check_scoring(estimator, scoring=scoring)
    cv_results = cross_validate(estimator=estimator, X=X, y=y, groups=groups, 
        scoring={'score':scorer}, cv=cv, 
        return_train_score=False, 
        n_jobs=n_jobs, verbose=verbose, 
        fit_params=fit_params, 
        pre_dispatch=pre_dispatch)
    return cv_results['test_score']
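As the function body shows, cross_val_score is a thin wrapper around cross_validate with a single scorer. A minimal sketch of calling cross_validate directly, which additionally returns fit/score times and supports several metrics at once (the diabetes data here is purely illustrative):

from sklearn import datasets, linear_model
from sklearn.model_selection import cross_validate

diabetes = datasets.load_diabetes()
X, y = diabetes.data[:150], diabetes.target[:150]

# cross_validate returns a dict with one "test_<metric>" entry per metric,
# plus per-fold fit and score times.
results = cross_validate(linear_model.Lasso(), X, y, cv=3,
                         scoring=("r2", "neg_mean_squared_error"))
print(results["test_r2"])
print(results["test_neg_mean_squared_error"])
print(results["fit_time"], results["score_time"])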

Objects accepted by the scoring parameter

https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter

Scoring                          Function                                 Comment

Classification
'accuracy'                       metrics.accuracy_score
'balanced_accuracy'              metrics.balanced_accuracy_score
'average_precision'              metrics.average_precision_score
'neg_brier_score'                metrics.brier_score_loss
'f1'                             metrics.f1_score                         for binary targets
'f1_micro'                       metrics.f1_score                         micro-averaged
'f1_macro'                       metrics.f1_score                         macro-averaged
'f1_weighted'                    metrics.f1_score                         weighted average
'f1_samples'                     metrics.f1_score                         by multilabel sample
'neg_log_loss'                   metrics.log_loss                         requires predict_proba support
'precision' etc.                 metrics.precision_score                  suffixes apply as with 'f1'
'recall' etc.                    metrics.recall_score                     suffixes apply as with 'f1'
'jaccard' etc.                   metrics.jaccard_score                    suffixes apply as with 'f1'
'roc_auc'                        metrics.roc_auc_score
'roc_auc_ovr'                    metrics.roc_auc_score
'roc_auc_ovo'                    metrics.roc_auc_score
'roc_auc_ovr_weighted'           metrics.roc_auc_score
'roc_auc_ovo_weighted'           metrics.roc_auc_score

Clustering
'adjusted_mutual_info_score'     metrics.adjusted_mutual_info_score
'adjusted_rand_score'            metrics.adjusted_rand_score
'completeness_score'             metrics.completeness_score
'fowlkes_mallows_score'          metrics.fowlkes_mallows_score
'homogeneity_score'              metrics.homogeneity_score
'mutual_info_score'              metrics.mutual_info_score
'normalized_mutual_info_score'   metrics.normalized_mutual_info_score
'v_measure_score'                metrics.v_measure_score

Regression
'explained_variance'             metrics.explained_variance_score
'max_error'                      metrics.max_error
'neg_mean_absolute_error'        metrics.mean_absolute_error
'neg_mean_squared_error'         metrics.mean_squared_error
'neg_root_mean_squared_error'    metrics.mean_squared_error
'neg_mean_squared_log_error'     metrics.mean_squared_log_error
'neg_median_absolute_error'      metrics.median_absolute_error
'r2'                             metrics.r2_score
'neg_mean_poisson_deviance'      metrics.mean_poisson_deviance
'neg_mean_gamma_deviance'        metrics.mean_gamma_deviance
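Note the neg_ prefix on the error metrics above: scorers follow the convention that greater is better, so error metrics are negated. A minimal sketch of recovering an ordinary RMSE from 'neg_mean_squared_error' (using the diabetes data, as in the example below):

import numpy as np
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_score

diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target

# "neg_mean_squared_error" returns negated MSE values (higher is better);
# negate them back and take the square root to obtain RMSE per fold.
neg_mse = cross_val_score(linear_model.Lasso(), X, y, cv=5,
                          scoring="neg_mean_squared_error")
rmse = np.sqrt(-neg_mse)
print(rmse.mean())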

 

 

Usage of cross_val_score

1. Regression: the diabetes dataset

    >>> from sklearn import datasets, linear_model
    >>> from sklearn.model_selection import cross_val_score
    >>> diabetes = datasets.load_diabetes()
    >>> X = diabetes.data[:150]
    >>> y = diabetes.target[:150]
    >>> lasso = linear_model.Lasso()
    >>> print(cross_val_score(lasso, X, y))  # doctest: +ELLIPSIS
    [ 0.33150734  0.08022311  0.03531764]
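The cv argument also accepts a cross-validation object, which ties cross_val_score back to KFold; a minimal sketch reusing the same diabetes data:

from sklearn import datasets, linear_model
from sklearn.model_selection import KFold, cross_val_score

diabetes = datasets.load_diabetes()
X, y = diabetes.data[:150], diabetes.target[:150]

# Passing a KFold instance instead of an integer gives explicit control
# over shuffling and the random seed of the splits.
kf = KFold(n_splits=3, shuffle=True, random_state=0)
print(cross_val_score(linear_model.Lasso(), X, y, cv=kf))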

 

2. Classification: the iris dataset

from sklearn import datasets                                           # built-in datasets
from sklearn.model_selection import train_test_split, cross_val_score  # data splitting and cross-validation
from sklearn.neighbors import KNeighborsClassifier                     # a simple model with a single hyperparameter K, similar in spirit to K-means
import matplotlib.pyplot as plt

iris = datasets.load_iris()     # load sklearn's built-in iris dataset
X = iris.data                   # the feature matrix
y = iris.target                 # the label of each sample
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=1/3, random_state=3)  # hold out 1/3 of the data as the test set

k_range = range(1, 31)
cv_scores = []                  # holds the mean cross-validated score of each model
for n in k_range:
    knn = KNeighborsClassifier(n)   # a KNN model; a single hyperparameter can be tuned in a loop like this, for several use GridSearchCV (see the sketch below)
    scores = cross_val_score(knn, train_X, train_y, cv=10, scoring='accuracy')  # cv: number of folds; accuracy as the metric (may be omitted to use the default)
    cv_scores.append(scores.mean())

plt.plot(k_range, cv_scores)
plt.xlabel('K')
plt.ylabel('Accuracy')          # pick the best parameter from the plot
plt.show()

best_knn = KNeighborsClassifier(n_neighbors=3)  # refit with the best value K=3
best_knn.fit(train_X, train_y)                  # train the model
print(best_knn.score(test_X, test_y))           # accuracy on the held-out test set
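The loop comment above mentions GridSearchCV for tuning several hyperparameters at once; a minimal sketch of the same search extended to two parameters (the weights values are illustrative):

from sklearn import datasets
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
train_X, test_X, train_y, test_y = train_test_split(
    iris.data, iris.target, test_size=1/3, random_state=3)

# GridSearchCV runs the same 10-fold cross-validation over every
# combination in param_grid instead of a hand-written loop.
param_grid = {"n_neighbors": list(range(1, 31)),
              "weights": ["uniform", "distance"]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=10, scoring="accuracy")
grid.fit(train_X, train_y)

print(grid.best_params_, grid.best_score_)
print(grid.score(test_X, test_y))   # the refit best model, evaluated on the test set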
