
Class: ValidationCurveDisplay

Validation Curve visualization.

It is recommended to use from_estimator to create a ValidationCurveDisplay instance. All parameters are stored as attributes.

Read more in the User Guide for general information about the visualization API and detailed documentation regarding the validation curve visualization.

Python Reference

Constructors

new ValidationCurveDisplay()

new ValidationCurveDisplay(opts?): ValidationCurveDisplay

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| opts? | object | - |
| opts.param_name? | string | Name of the parameter that has been varied. |
| opts.param_range? | ArrayLike | The values of the parameter that have been evaluated. |
| opts.score_name? | string | The name of the score used in validation_curve. It overrides the name inferred from the scoring parameter. If score_name is undefined, "Score" is used if negate_score is false and "Negative score" otherwise. If scoring is a string or a callable, the name is inferred: underscores are replaced by spaces and the first letter is capitalized; a neg_ prefix is replaced by "Negative" if negate_score is false, or simply removed otherwise. |
| opts.test_scores? | ArrayLike[] | Scores on the test sets. |
| opts.train_scores? | ArrayLike[] | Scores on the training sets. |

Returns ValidationCurveDisplay

Defined in generated/model_selection/ValidationCurveDisplay.ts:25
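
The following is a hedged usage sketch of constructing a display from precomputed scores. The import path and the sample values are assumptions for illustration and are not part of this reference.

```ts
import { ValidationCurveDisplay } from 'sklearn'

// Hypothetical cross-validation results for four parameter values,
// each evaluated over three folds (shape: n_param_values x n_folds).
const display = new ValidationCurveDisplay({
  param_name: 'gamma',
  param_range: [0.001, 0.01, 0.1, 1],
  train_scores: [
    [0.9, 0.91, 0.89],
    [0.93, 0.92, 0.94],
    [0.95, 0.96, 0.94],
    [0.97, 0.96, 0.98],
  ],
  test_scores: [
    [0.85, 0.86, 0.84],
    [0.88, 0.87, 0.89],
    [0.9, 0.91, 0.89],
    [0.88, 0.87, 0.86],
  ],
  score_name: 'Accuracy',
})
```

Note that from_estimator is the recommended entry point; the constructor is mainly useful when the scores have already been computed.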

Properties

| Property | Type | Default value | Defined in |
| --- | --- | --- | --- |
| _isDisposed | boolean | false | generated/model_selection/ValidationCurveDisplay.ts:23 |
| _isInitialized | boolean | false | generated/model_selection/ValidationCurveDisplay.ts:22 |
| _py | PythonBridge | undefined | generated/model_selection/ValidationCurveDisplay.ts:21 |
| id | string | undefined | generated/model_selection/ValidationCurveDisplay.ts:18 |
| opts | any | undefined | generated/model_selection/ValidationCurveDisplay.ts:19 |

Accessors

ax_

Get Signature

get ax_(): Promise<any>

Axes with the validation curve.

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:351


errorbar_

Get Signature

get errorbar_(): Promise<any>

When the std_display_style is "errorbar", this is a list of matplotlib.container.ErrorbarContainer objects. If another style is used, errorbar_ is undefined.

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:405


figure_

Get Signature

get figure_(): Promise<any>

Figure containing the validation curve.

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:378


fill_between_

Get Signature

get fill_between_(): Promise<any>

When the std_display_style is "fill_between", this is a list of matplotlib.collections.PolyCollection objects. If another style is used, fill_between_ is undefined.

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:459


lines_

Get Signature

get lines_(): Promise<any>

When the std_display_style is "fill_between", this is a list of matplotlib.lines.Line2D objects corresponding to the mean train and test scores. If another style is used, lines_ is undefined.

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:432
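
Which of these artist collections is populated depends on std_display_style. A hedged sketch follows; it assumes a display that has already been created, initialized, and populated with scores (for example via from_estimator below).

```ts
import { ValidationCurveDisplay } from 'sklearn'

// Assumed to be init()-ed and populated already; this only illustrates the accessors.
declare const display: ValidationCurveDisplay

// With std_display_style "fill_between", lines_ and fill_between_ are
// populated and errorbar_ is undefined.
await display.plot({ std_display_style: 'fill_between' })
const lines = await display.lines_
const bands = await display.fill_between_

// With std_display_style "errorbar", errorbar_ is populated instead.
await display.plot({ std_display_style: 'errorbar' })
const errorbars = await display.errorbar_
```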


py

Get Signature

get py(): PythonBridge

Returns PythonBridge

Set Signature

set py(pythonBridge): void

Parameters

| Parameter | Type |
| --- | --- |
| pythonBridge | PythonBridge |

Returns void

Defined in generated/model_selection/ValidationCurveDisplay.ts:55

Methods

dispose()

dispose(): Promise<void>

Disposes of the underlying Python resources.

Once dispose() is called, the instance is no longer usable.

Returns Promise<void>

Defined in generated/model_selection/ValidationCurveDisplay.ts:111
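
A small teardown sketch, assuming the display was initialized earlier:

```ts
import { ValidationCurveDisplay } from 'sklearn'

declare const display: ValidationCurveDisplay // assumed to be init()-ed already

// Release the Python-side resources once the plot is no longer needed.
await display.dispose()

// After dispose() resolves, the instance must not be used again;
// create and init() a new ValidationCurveDisplay instead.
```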


from_estimator()

from_estimator(opts): Promise<any>

Create a validation curve display from an estimator.

Read more in the User Guide for general information about the visualization API and detailed documentation regarding the validation curve visualization.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.ax? | any | Axes object to plot on. If undefined, a new figure and axes is created. |
| opts.cv? | number | Determines the cross-validation splitting strategy. |
| opts.error_score? | "raise" | Value to assign to the score if an error occurs during estimator fitting. If set to "raise", the error is raised. If a numeric value is given, FitFailedWarning is raised. |
| opts.errorbar_kw? | any | Additional keyword arguments passed to plt.errorbar, used to draw the mean score and the standard deviation of the score. |
| opts.estimator? | any | An estimator object that is cloned for each validation. |
| opts.fill_between_kw? | any | Additional keyword arguments passed to plt.fill_between, used to draw the standard deviation of the score. |
| opts.fit_params? | any | Parameters to pass to the fit method of the estimator. |
| opts.groups? | ArrayLike | Group labels for the samples used while splitting the dataset into train/test sets. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold). |
| opts.line_kw? | any | Additional keyword arguments passed to plt.plot, used to draw the mean score. |
| opts.n_jobs? | number | Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets. undefined means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary for more details. |
| opts.negate_score? | boolean | Whether or not to negate the scores obtained through validation_curve. This is particularly useful when using error metrics denoted by neg_* in scikit-learn. |
| opts.param_name? | string | Name of the parameter that will be varied. |
| opts.param_range? | ArrayLike | The values of the parameter that will be evaluated. |
| opts.pre_dispatch? | string \| number | Number of pre-dispatched jobs for parallel execution (default is all). This option can reduce the allocated memory. The string can be an expression like "2*n_jobs". |
| opts.score_name? | string | The name of the score used to decorate the y-axis of the plot. It overrides the name inferred from the scoring parameter. If score_name is undefined, "Score" is used if negate_score is false and "Negative score" otherwise. If scoring is a string or a callable, the name is inferred: underscores are replaced by spaces and the first letter is capitalized; a neg_ prefix is replaced by "Negative" if negate_score is false, or simply removed otherwise. |
| opts.score_type? | "both" \| "test" \| "train" | The type of score to plot. Can be one of "test", "train", or "both". |
| opts.scoring? | string | A string (see The scoring parameter: defining model evaluation rules) or a scorer callable object / function with signature scorer(estimator, X, y) (see Defining your scoring strategy from metric functions). |
| opts.std_display_style? | "errorbar" \| "fill_between" | The style used to display the standard deviation of the score around the mean score. If undefined, no representation of the standard deviation is displayed. |
| opts.verbose? | number | Controls the verbosity: the higher, the more messages. |
| opts.X? | ArrayLike[] | Training data, where n_samples is the number of samples and n_features is the number of features. |
| opts.y? | ArrayLike | Target relative to X for classification or regression; undefined for unsupervised learning. |

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:130
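
A hedged end-to-end sketch follows. The import paths, the PythonBridge setup, and the SVC estimator are assumptions for illustration; only the method and option names come from this reference.

```ts
import { SVC, ValidationCurveDisplay } from 'sklearn'
import type { PythonBridge } from 'sklearn' // assumed type export

// `py` is assumed to be a connected PythonBridge; creating one depends
// on your environment and is not covered here.
declare const py: PythonBridge
declare const X: number[][] // training data, shape (n_samples, n_features)
declare const y: number[]   // targets, shape (n_samples,)

// Hypothetical estimator whose gamma parameter is varied.
const estimator = new SVC({ kernel: 'rbf' })
await estimator.init(py)

const display = new ValidationCurveDisplay()
await display.init(py)

// Vary gamma over a small grid and plot train/test scores with a shaded
// standard-deviation band.
await display.from_estimator({
  estimator,
  X,
  y,
  param_name: 'gamma',
  param_range: [0.001, 0.01, 0.1, 1],
  score_type: 'both',
  std_display_style: 'fill_between',
  n_jobs: 2,
})
```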


init()

init(py): Promise<void>

Initializes the underlying Python resources.

This instance is not usable until the Promise returned by init() resolves.

Parameters

| Parameter | Type |
| --- | --- |
| py | PythonBridge |

Returns Promise<void>

Defined in generated/model_selection/ValidationCurveDisplay.ts:68
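
A minimal initialization sketch, assuming a PythonBridge is available from your runtime setup (creating the bridge itself is outside the scope of this page):

```ts
import { ValidationCurveDisplay } from 'sklearn'
import type { PythonBridge } from 'sklearn' // assumed type export

declare const py: PythonBridge // assumed to be connected already

const display = new ValidationCurveDisplay({ param_name: 'gamma' })

// No other method may be called until this Promise resolves.
await display.init(py)
```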


plot()

plot(opts): Promise<any>

Plot visualization.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.ax? | any | Axes object to plot on. If undefined, a new figure and axes is created. |
| opts.errorbar_kw? | any | Additional keyword arguments passed to plt.errorbar, used to draw the mean score and the standard deviation of the score. |
| opts.fill_between_kw? | any | Additional keyword arguments passed to plt.fill_between, used to draw the standard deviation of the score. |
| opts.line_kw? | any | Additional keyword arguments passed to plt.plot, used to draw the mean score. |
| opts.negate_score? | boolean | Whether or not to negate the scores obtained through validation_curve. This is particularly useful when using error metrics denoted by neg_* in scikit-learn. |
| opts.score_name? | string | The name of the score used to decorate the y-axis of the plot. It overrides the name inferred from the scoring parameter. If score_name is undefined, "Score" is used if negate_score is false and "Negative score" otherwise. If scoring is a string or a callable, the name is inferred: underscores are replaced by spaces and the first letter is capitalized; a neg_ prefix is replaced by "Negative" if negate_score is false, or simply removed otherwise. |
| opts.score_type? | "both" \| "test" \| "train" | The type of score to plot. Can be one of "test", "train", or "both". |
| opts.std_display_style? | "errorbar" \| "fill_between" | The style used to display the standard deviation of the score around the mean score. If undefined, no representation of the standard deviation is displayed. |

Returns Promise<any>

Defined in generated/model_selection/ValidationCurveDisplay.ts:276
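

A hedged sketch of re-plotting an existing display with different options; the display is assumed to have been created, initialized, and populated earlier (for example via from_estimator):

```ts
import { ValidationCurveDisplay } from 'sklearn'

declare const display: ValidationCurveDisplay // assumed init()-ed and populated

// Show train and test scores with an errorbar representation of the
// standard deviation; capsize is a standard plt.errorbar keyword.
await display.plot({
  score_type: 'both',
  std_display_style: 'errorbar',
  errorbar_kw: { capsize: 3 },
})

// The resulting matplotlib objects can then be retrieved asynchronously.
const ax = await display.ax_
const fig = await display.figure_
```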