
ValidationCurveDisplay

Validation Curve visualization.

It is recommended to use `from_estimator` to create a `ValidationCurveDisplay` instance. All parameters are stored as attributes.

Read more in the User Guide for general information about the visualization API and detailed documentation regarding the validation curve visualization.

Python Reference

Constructors

constructor()

Signature

new ValidationCurveDisplay(opts?: object): ValidationCurveDisplay;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts?` | `object` | - |
| `opts.param_name?` | `string` | Name of the parameter that has been varied. |
| `opts.param_range?` | `ArrayLike` | The values of the parameter that have been evaluated. |
| `opts.score_name?` | `string` | The name of the score used in `validation_curve`. It overrides the name inferred from the `scoring` parameter. If `scoring` is `undefined`, "Score" is used when `negate_score` is `false` and "Negative score" otherwise. If `scoring` is a string or a callable, the name is inferred from it: underscores are replaced by spaces and the first letter is capitalized; a leading `neg_` is replaced by "Negative" when `negate_score` is `false`, or simply removed otherwise. |
| `opts.test_scores?` | `ArrayLike[]` | Scores on the test set. |
| `opts.train_scores?` | `ArrayLike[]` | Scores on the training sets. |

Returns

ValidationCurveDisplay

Defined in: generated/model_selection/ValidationCurveDisplay.ts:25
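For illustration, here is the shape of the options object the constructor accepts when a validation curve has already been computed. The parameter name, range, and scores below are made-up values; the commented-out calls assume a live `PythonBridge` (see `init()` below) and are not executed here:

```typescript
// Constructor options mirroring the documented signature. Scores come from
// a prior validation_curve run: one row per parameter value, one column
// per CV fold (all values here are illustrative).
const opts = {
  param_name: 'gamma',
  param_range: [0.001, 0.01, 0.1, 1],
  train_scores: [
    [0.91, 0.93, 0.92],
    [0.95, 0.96, 0.94],
    [0.99, 0.98, 0.99],
    [1.0, 1.0, 1.0],
  ],
  test_scores: [
    [0.88, 0.9, 0.89],
    [0.93, 0.94, 0.92],
    [0.9, 0.91, 0.9],
    [0.85, 0.84, 0.86],
  ],
  score_name: 'Accuracy',
};

// With a live bridge (hypothetical setup; requires the package's exported
// class and an initialized PythonBridge):
//   const display = new ValidationCurveDisplay(opts);
//   await display.init(py);
//   await display.plot({});
//   await display.dispose();

// Each row of scores corresponds to one entry of param_range.
console.log(opts.param_range.length === opts.train_scores.length); // → true
```

Note that the score arrays are per-fold, not pre-averaged: the display computes the mean and standard deviation across folds when plotting.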

Methods

dispose()

Disposes of the underlying Python resources.

Once dispose() is called, the instance is no longer usable.

Signature

dispose(): Promise<void>;

Returns

Promise<void>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:122

from_estimator()

Create a validation curve display from an estimator.

Read more in the User Guide for general information about the visualization API and detailed documentation regarding the validation curve visualization.

Signature

from_estimator(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts` | `object` | - |
| `opts.X?` | `ArrayLike[]` | Training data, where `n_samples` is the number of samples and `n_features` is the number of features. |
| `opts.ax?` | `any` | Axes object to plot on. If `undefined`, a new figure and axes are created. |
| `opts.cv?` | `number` | Determines the cross-validation splitting strategy. |
| `opts.error_score?` | `"raise"` | Value to assign to the score if an error occurs during estimator fitting. If set to `"raise"`, the error is raised. If a numeric value is given, a `FitFailedWarning` is raised. |
| `opts.errorbar_kw?` | `any` | Additional keyword arguments passed to `plt.errorbar`, used to draw the mean score and the score standard deviation. |
| `opts.estimator?` | `any` | An object of that type which is cloned for each validation. |
| `opts.fill_between_kw?` | `any` | Additional keyword arguments passed to `plt.fill_between`, used to draw the score standard deviation. |
| `opts.fit_params?` | `any` | Parameters to pass to the `fit` method of the estimator. |
| `opts.groups?` | `ArrayLike` | Group labels for the samples used while splitting the dataset into train/test sets. Only used in conjunction with a "Group" `cv` instance (e.g., `GroupKFold`). |
| `opts.line_kw?` | `any` | Additional keyword arguments passed to `plt.plot`, used to draw the mean score. |
| `opts.n_jobs?` | `number` | Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets. `undefined` means 1 unless in a `joblib.parallel_backend` context. `-1` means using all processors. See the Glossary for more details. |
| `opts.negate_score?` | `boolean` | Whether or not to negate the scores obtained through `validation_curve`. This is particularly useful when using error metrics denoted by `neg_*` in scikit-learn. Default Value: `false`. |
| `opts.param_name?` | `string` | Name of the parameter that will be varied. |
| `opts.param_range?` | `ArrayLike` | The values of the parameter that will be evaluated. |
| `opts.pre_dispatch?` | `string \| number` | Number of predispatched jobs for parallel execution (default is all). This option can reduce the allocated memory. The string can be an expression like `'2*n_jobs'`. Default Value: `'all'`. |
| `opts.score_name?` | `string` | The name of the score used to decorate the y-axis of the plot. It overrides the name inferred from the `scoring` parameter. If `scoring` is `undefined`, "Score" is used when `negate_score` is `false` and "Negative score" otherwise. If `scoring` is a string or a callable, the name is inferred from it: underscores are replaced by spaces and the first letter is capitalized; a leading `neg_` is replaced by "Negative" when `negate_score` is `false`, or simply removed otherwise. |
| `opts.score_type?` | `"both" \| "test" \| "train"` | The type of score to plot. Can be one of `"test"`, `"train"`, or `"both"`. Default Value: `'both'`. |
| `opts.scoring?` | `string` | A string (see "The scoring parameter: defining model evaluation rules") or a scorer callable object/function with signature `scorer(estimator, X, y)` (see "Defining your scoring strategy from metric functions"). |
| `opts.std_display_style?` | `"errorbar" \| "fill_between"` | The style used to display the score standard deviation around the mean score. If `undefined`, no representation of the standard deviation is displayed. Default Value: `'fill_between'`. |
| `opts.verbose?` | `number` | Controls the verbosity: the higher, the more messages. Default Value: `0`. |
| `opts.y?` | `ArrayLike` | Target relative to `X` for classification or regression; `undefined` for unsupervised learning. |

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:141
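As a hedged sketch, a typical options object for `from_estimator` might look like the following. The toy data, parameter name (`C`), and range are illustrative assumptions; the `estimator` handle and the call itself (commented out) require a live Python bridge and an initialized display, so they are not executed here:

```typescript
// Options object for from_estimator. `clf` would be an estimator handle
// (e.g. an SVC) created through the same bridge, so it is left commented.
const fromEstimatorOpts = {
  // estimator: clf,               // requires a live PythonBridge
  X: [[0, 1], [1, 0], [1, 1], [0, 0]],   // toy training data
  y: [0, 1, 1, 0],                        // toy targets
  param_name: 'C',                        // parameter to vary
  param_range: [0.01, 0.1, 1, 10, 100],   // values to evaluate
  cv: 5,                                  // 5-fold cross-validation
  score_type: 'both' as const,            // plot train and test scores
  std_display_style: 'errorbar' as const, // error bars instead of bands
  negate_score: false,
  n_jobs: -1,                             // use all processors
};

// With an initialized display (hypothetical):
//   await display.from_estimator(fromEstimatorOpts);
```

Each value in `param_range` triggers one cross-validated evaluation, so a range of 5 values with `cv: 5` fits the estimator 25 times per score type.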

init()

Initializes the underlying Python resources.

This instance is not usable until the Promise returned by init() resolves.

Signature

init(py: PythonBridge): Promise<void>;

Parameters

| Name | Type |
| --- | --- |
| `py` | `PythonBridge` |

Returns

Promise<void>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:68

plot()

Plot the visualization.

Signature

plot(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts` | `object` | - |
| `opts.ax?` | `any` | Axes object to plot on. If `undefined`, a new figure and axes are created. |
| `opts.errorbar_kw?` | `any` | Additional keyword arguments passed to `plt.errorbar`, used to draw the mean score and the score standard deviation. |
| `opts.fill_between_kw?` | `any` | Additional keyword arguments passed to `plt.fill_between`, used to draw the score standard deviation. |
| `opts.line_kw?` | `any` | Additional keyword arguments passed to `plt.plot`, used to draw the mean score. |
| `opts.negate_score?` | `boolean` | Whether or not to negate the scores obtained through `validation_curve`. This is particularly useful when using error metrics denoted by `neg_*` in scikit-learn. Default Value: `false`. |
| `opts.score_name?` | `string` | The name of the score used to decorate the y-axis of the plot. It overrides the name inferred from the `scoring` parameter. If `scoring` is `undefined`, "Score" is used when `negate_score` is `false` and "Negative score" otherwise. If `scoring` is a string or a callable, the name is inferred from it: underscores are replaced by spaces and the first letter is capitalized; a leading `neg_` is replaced by "Negative" when `negate_score` is `false`, or simply removed otherwise. |
| `opts.score_type?` | `"both" \| "test" \| "train"` | The type of score to plot. Can be one of `"test"`, `"train"`, or `"both"`. Default Value: `'both'`. |
| `opts.std_display_style?` | `"errorbar" \| "fill_between"` | The style used to display the score standard deviation around the mean score. If `undefined`, no standard deviation representation is displayed. Default Value: `'fill_between'`. |

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:313
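The styling parameters can be sketched as a plain options object. The matplotlib keyword names used below (`linewidth`, `alpha`) are real matplotlib arguments but their values are illustrative; the call itself requires an initialized display and is shown only as a comment:

```typescript
// Styling options for plot(): the *_kw objects are forwarded to the
// corresponding matplotlib calls on the Python side.
const plotOpts = {
  score_type: 'both' as const,               // draw train and test curves
  std_display_style: 'fill_between' as const, // shaded std-deviation bands
  line_kw: { linewidth: 2 },                  // forwarded to plt.plot
  fill_between_kw: { alpha: 0.2 },            // forwarded to plt.fill_between
  negate_score: false,
  score_name: 'Accuracy',                     // y-axis label
};

// With an initialized display (hypothetical):
//   await display.plot(plotOpts);
```

Because `std_display_style` is `'fill_between'` here, `fill_between_kw` applies and `errorbar_kw` would be ignored; the reverse holds for the `'errorbar'` style.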

Properties

_isDisposed

boolean = false

Defined in: generated/model_selection/ValidationCurveDisplay.ts:23

_isInitialized

boolean = false

Defined in: generated/model_selection/ValidationCurveDisplay.ts:22

_py

PythonBridge

Defined in: generated/model_selection/ValidationCurveDisplay.ts:21

id

string

Defined in: generated/model_selection/ValidationCurveDisplay.ts:18

opts

any

Defined in: generated/model_selection/ValidationCurveDisplay.ts:19

Accessors

ax_

Axes with the validation curve.

Signature

ax_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:395

errorbar_

When `std_display_style` is `"errorbar"`, this is a list of `matplotlib.container.ErrorbarContainer` objects. If another style is used, `errorbar_` is `undefined`.

Signature

errorbar_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:449

figure_

Figure containing the validation curve.

Signature

figure_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:422

fill_between_

When `std_display_style` is `"fill_between"`, this is a list of `matplotlib.collections.PolyCollection` objects. If another style is used, `fill_between_` is `undefined`.

Signature

fill_between_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:503

lines_

When `std_display_style` is `"fill_between"`, this is a list of `matplotlib.lines.Line2D` objects corresponding to the mean train and test scores. If another style is used, `lines_` is `undefined`.

Signature

lines_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/model_selection/ValidationCurveDisplay.ts:476

py

Signature

py(): PythonBridge;

Returns

PythonBridge

Defined in: generated/model_selection/ValidationCurveDisplay.ts:55

Signature

py(pythonBridge: PythonBridge): void;

Parameters

| Name | Type |
| --- | --- |
| `pythonBridge` | `PythonBridge` |

Returns

void

Defined in: generated/model_selection/ValidationCurveDisplay.ts:59