Class: GaussianProcessRegressor
Gaussian process regression (GPR).
The implementation is based on Algorithm 2.1 of [RW2006].
In addition to the standard scikit-learn estimator API, GaussianProcessRegressor:
- allows prediction without prior fitting (based on the GP prior)
- provides an additional method sample_y(X), which evaluates samples drawn from the GPR (prior or posterior) at given inputs
- exposes a method log_marginal_likelihood(theta), which can be used externally for other ways of selecting hyperparameters, e.g., via Markov chain Monte Carlo
Constructors
new GaussianProcessRegressor()
new GaussianProcessRegressor(opts?): GaussianProcessRegressor
Parameters
Parameter | Type | Description |
---|---|---|
opts ? | object | - |
opts.alpha ? | number | ArrayLike | Value added to the diagonal of the kernel matrix during fitting. This can prevent a potential numerical issue during fitting by ensuring that the calculated values form a positive definite matrix. It can also be interpreted as the variance of additional Gaussian measurement noise on the training observations. Note that this is different from using a WhiteKernel. If an array is passed, it must have the same number of entries as the data used for fitting and is used as a datapoint-dependent noise level. The ability to specify the noise level directly as a parameter is mainly for convenience and for consistency with Ridge. |
opts.copy_X_train ? | boolean | If true, a persistent copy of the training data is stored in the object. Otherwise, just a reference to the training data is stored, which might cause predictions to change if the data is modified externally. |
opts.kernel ? | any | The kernel specifying the covariance function of the GP. If undefined is passed, the kernel ConstantKernel(1.0, constant_value_bounds="fixed") * RBF(1.0, length_scale_bounds="fixed") is used as the default. Note that the kernel hyperparameters are optimized during fitting unless the bounds are marked as "fixed". |
opts.n_restarts_optimizer ? | number | The number of restarts of the optimizer for finding the kernel's parameters which maximize the log-marginal likelihood. The first run of the optimizer is performed from the kernel's initial parameters; the remaining ones (if any) start from thetas sampled log-uniformly at random from the space of allowed theta-values. If greater than 0, all bounds must be finite. Note that n_restarts_optimizer == 0 implies that one run is performed. |
opts.n_targets ? | number | The number of dimensions of the target values. Used to decide the number of outputs when sampling from the prior distributions (i.e. calling sample_y before fit). This parameter is ignored once fit has been called. |
opts.normalize_y ? | boolean | Whether or not to normalize the target values y by removing the mean and scaling to unit variance. This is recommended for cases where zero-mean, unit-variance priors are used. Note that, in this implementation, the normalization is reversed before the GP predictions are reported. |
opts.optimizer ? | "fmin_l_bfgs_b" | Can either be one of the internally supported optimizers for optimizing the kernel's parameters, specified by a string, or an externally defined optimizer passed as a callable. If a callable is passed, it must have the signature optimizer(obj_func, initial_theta, bounds) and return the best found theta together with the corresponding value of the objective function. |
opts.random_state ? | number | Determines the random number generation used to draw the starting points for the optimizer restarts. Pass an int for reproducible results across multiple function calls. See Glossary. |
Returns GaussianProcessRegressor
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:25
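A minimal end-to-end sketch of constructing and using the estimator with the options documented above. The package entry point `sklearn` and the exported `PythonBridge` type are assumptions about how the wrapper is imported; the constructor options and method names follow the tables on this page.

```ts
import { GaussianProcessRegressor, type PythonBridge } from 'sklearn' // import path is an assumption

// Construct, fit, predict, and clean up; the caller supplies a connected PythonBridge.
async function runGPR(py: PythonBridge, X: number[][], y: number[]) {
  const gpr = new GaussianProcessRegressor({
    alpha: 1e-6,             // jitter added to the kernel diagonal during fitting
    normalize_y: true,       // remove the mean and scale targets to unit variance
    n_restarts_optimizer: 5, // extra optimizer restarts from randomly sampled thetas
    random_state: 0,
  })

  await gpr.init(py)                    // must resolve before any other method is called
  await gpr.fit({ X, y })               // optimizes the kernel hyperparameters
  const mean = await gpr.predict({ X }) // posterior mean at the training inputs
  await gpr.dispose()                   // releases the underlying Python resources
  return mean
}
```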
Properties
Property | Type | Default value | Defined in |
---|---|---|---|
_isDisposed | boolean | false | generated/gaussian_process/GaussianProcessRegressor.ts:23 |
_isInitialized | boolean | false | generated/gaussian_process/GaussianProcessRegressor.ts:22 |
_py | PythonBridge | undefined | generated/gaussian_process/GaussianProcessRegressor.ts:21 |
id | string | undefined | generated/gaussian_process/GaussianProcessRegressor.ts:18 |
opts | any | undefined | generated/gaussian_process/GaussianProcessRegressor.ts:19 |
Accessors
alpha_
Get Signature
get alpha_(): Promise<ArrayLike>
Dual coefficients of training data points in kernel space.
Returns Promise<ArrayLike>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:623
feature_names_in_
Get Signature
get feature_names_in_(): Promise<ArrayLike>
Names of features seen during fit. Defined only when X has feature names that are all strings.
Returns Promise<ArrayLike>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:704
kernel_
Get Signature
get kernel_(): Promise<any>
The kernel used for prediction. The structure of the kernel is the same as the one passed as parameter but with optimized hyperparameters.
Returns Promise<any>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:569
L_
Get Signature
get L_(): Promise<ArrayLike[]>
Lower-triangular Cholesky decomposition of the kernel in X_train_.
Returns Promise<ArrayLike[]>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:596
log_marginal_likelihood_value_
Get Signature
get log_marginal_likelihood_value_(): Promise<number>
The log-marginal-likelihood of self.kernel_.theta.
Returns Promise<number>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:650
n_features_in_
Get Signature
get n_features_in_(): Promise<number>
Number of features seen during fit.
Returns Promise<number>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:677
py
Get Signature
get py(): PythonBridge
Returns PythonBridge
Set Signature
set py(pythonBridge): void
Parameters
Parameter | Type |
---|---|
pythonBridge | PythonBridge |
Returns void
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:80
X_train_
Get Signature
get X_train_(): Promise<ArrayLike[]>
Feature vectors or other representations of training data (also required for prediction).
Returns Promise<ArrayLike[]>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:515
y_train_
Get Signature
get y_train_(): Promise<ArrayLike>
Target values in training data (also required for prediction).
Returns Promise<ArrayLike>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:542
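A sketch of reading back the fitted state through the accessors above. It assumes an instance that has already completed init() and fit(); the import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Inspect the fitted state exposed by the accessors documented above.
async function describeFit(gpr: GaussianProcessRegressor) {
  const kernel = await gpr.kernel_                      // kernel with optimized hyperparameters
  const lml = await gpr.log_marginal_likelihood_value_  // log-marginal likelihood at kernel_.theta
  const nFeatures = await gpr.n_features_in_            // number of features seen during fit
  const XTrain = await gpr.X_train_                     // stored training inputs
  console.log({ kernel, lml, nFeatures, XTrain })
}
```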
Methods
dispose()
dispose(): Promise<void>
Disposes of the underlying Python resources.
Once dispose() is called, the instance is no longer usable.
Returns Promise<void>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:136
fit()
fit(opts): Promise<any>
Fit Gaussian process regression model.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.X ? | ArrayLike[] | Feature vectors or other representations of training data. |
opts.y ? | ArrayLike | Target values. |
Returns Promise<any>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:153
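A hedged sketch of a fit() call on a small 1-D toy problem, showing the expected shapes of X and y. It assumes init() has already resolved; the import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Fit on a tiny 1-D dataset: y = sin(x) sampled at a few points.
async function fitToy(gpr: GaussianProcessRegressor) {
  const X = [[0], [1], [2], [3], [4]]   // n_samples x n_features
  const y = X.map(([x]) => Math.sin(x)) // one target value per sample
  await gpr.fit({ X, y })
}
```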
get_metadata_routing()
get_metadata_routing(opts): Promise<any>
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.routing ? | any | A MetadataRequest encapsulating routing information. |
Returns Promise<any>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:194
init()
init(py): Promise<void>
Initializes the underlying Python resources.
This instance is not usable until the Promise returned by init() resolves.
Parameters
Parameter | Type |
---|---|
py | PythonBridge |
Returns Promise<void>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:93
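A sketch of the initialization contract: no other method may be called until init() resolves. The createPythonBridge() helper and the import path are assumptions about how the bridge is obtained; substitute whatever your application uses.

```ts
import { GaussianProcessRegressor, createPythonBridge } from 'sklearn' // both names are assumptions

// Obtain a bridge, construct the estimator, and wait for init() before using it.
async function makeRegressor() {
  const py = await createPythonBridge() // hypothetical helper; use whatever bridge your app provides
  const gpr = new GaussianProcessRegressor({ normalize_y: true })
  await gpr.init(py)                    // the instance is unusable until this resolves
  return gpr                            // caller is responsible for calling gpr.dispose() later
}
```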
log_marginal_likelihood()
log_marginal_likelihood(opts): Promise<number>
Return log-marginal likelihood of theta for training data.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.clone_kernel ? | boolean | If true, the kernel attribute is copied. If false, the kernel attribute is modified, but may result in a performance improvement. |
opts.eval_gradient ? | boolean | If true, the gradient of the log-marginal likelihood with respect to the kernel hyperparameters at position theta is returned additionally. If true, theta must not be undefined. |
opts.theta ? | any | Kernel hyperparameters for which the log-marginal likelihood is evaluated. If undefined, the precomputed log_marginal_likelihood of self.kernel_.theta is returned. |
Returns Promise<number>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:230
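A sketch of querying the log-marginal likelihood: omitting theta returns the precomputed value for self.kernel_.theta, while passing theta with clone_kernel avoids mutating the fitted kernel. Assumes a fitted instance; the import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Compare the stored log-marginal likelihood with the value at another theta.
async function compareLogMarginalLikelihood(
  gpr: GaussianProcessRegressor,
  theta: number[]
) {
  const atOptimum = await gpr.log_marginal_likelihood({}) // precomputed value for kernel_.theta
  const atTheta = await gpr.log_marginal_likelihood({
    theta,
    clone_kernel: true, // evaluate on a copy so the fitted kernel is not modified
  })
  return { atOptimum, atTheta }
}
```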
predict()
predict(opts): Promise<ArrayLike>
Predict using the Gaussian process regression model.
We can also predict based on an unfitted model by using the GP prior. In addition to the mean of the predictive distribution, optionally also returns its standard deviation (return_std=True) or covariance (return_cov=True). Note that at most one of the two can be requested.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.return_cov ? | boolean | If true, the covariance of the joint predictive distribution at the query points is returned along with the mean. |
opts.return_std ? | boolean | If true, the standard deviation of the predictive distribution at the query points is returned along with the mean. |
opts.X ? | ArrayLike[] | Query points where the GP is evaluated. |
Returns Promise<ArrayLike>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:282
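A sketch of prediction with uncertainty, reflecting the note above that at most one of return_std and return_cov may be requested. Assumes a fitted instance (an unfitted one would predict from the GP prior); the import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Query the posterior mean, optionally with per-point standard deviations.
async function predictWithUncertainty(gpr: GaussianProcessRegressor, X: number[][]) {
  const mean = await gpr.predict({ X })                      // mean only
  const withStd = await gpr.predict({ X, return_std: true }) // mean plus standard deviation
  // return_cov: true would request the full covariance instead; at most one of the two is allowed.
  return { mean, withStd }
}
```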
sample_y()
sample_y(opts): Promise<any>
Draw samples from Gaussian process and evaluate at X.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.n_samples ? | number | Number of samples drawn from the Gaussian process per query point. |
opts.random_state ? | number | Determines random number generation to randomly draw samples. Pass an int for reproducible results across multiple function calls. See Glossary. |
opts.X ? | ArrayLike[] | Query points where the GP is evaluated. |
Returns Promise<any>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:332
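A sketch of drawing function samples at query points: before fit() the draws come from the GP prior (with n_targets controlling the output dimension), afterwards from the posterior. The import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Draw reproducible function samples evaluated at a few query points.
async function drawSamples(gpr: GaussianProcessRegressor) {
  const X = [[0], [0.5], [1]] // query points, n_samples x n_features
  return gpr.sample_y({ X, n_samples: 5, random_state: 0 })
}
```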
score()
score(opts): Promise<number>
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.sample_weight ? | ArrayLike | Sample weights. |
opts.X ? | ArrayLike[] | Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. |
opts.y ? | ArrayLike | True values for X. |
Returns Promise<number>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:384
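A sketch of computing \(R^2\) on held-out data, optionally with per-sample weights. Assumes a fitted instance; the import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Compute R^2 on held-out data, optionally with per-sample weights.
async function evaluate(gpr: GaussianProcessRegressor, XTest: number[][], yTest: number[]) {
  const r2 = await gpr.score({ X: XTest, y: yTest })
  const weighted = await gpr.score({
    X: XTest,
    y: yTest,
    sample_weight: yTest.map(() => 1), // uniform weights, shown only for illustration
  })
  return { r2, weighted }
}
```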
set_predict_request()
set_predict_request(opts): Promise<any>
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are: true (the metadata is requested and passed to predict if provided), false (the metadata is not requested), or a string (the metadata should be passed to the meta-estimator under this alias instead of its original name).
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.return_cov ? | string | boolean | Metadata routing for the return_cov parameter in predict. |
opts.return_std ? | string | boolean | Metadata routing for the return_std parameter in predict. |
Returns Promise<any>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:434
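A sketch of flagging return_std for metadata routing; this only has an effect when scikit-learn runs with enable_metadata_routing=True and the estimator is used inside a meta-estimator. The import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Ask meta-estimators to route return_std through to predict().
async function requestStd(gpr: GaussianProcessRegressor) {
  await gpr.set_predict_request({ return_std: true })
}
```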
set_score_request()
set_score_request(opts): Promise<any>
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are: true (the metadata is requested and passed to score if provided), false (the metadata is not requested), or a string (the metadata should be passed to the meta-estimator under this alias instead of its original name).
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.sample_weight ? | string | boolean | Metadata routing for the sample_weight parameter in score. |
Returns Promise<any>
Defined in generated/gaussian_process/GaussianProcessRegressor.ts:479
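Analogously, a sketch of requesting that sample_weight be routed to score(); the import path is an assumption.

```ts
import type { GaussianProcessRegressor } from 'sklearn' // import path is an assumption

// Ask meta-estimators to route sample_weight through to score().
async function requestSampleWeight(gpr: GaussianProcessRegressor) {
  await gpr.set_score_request({ sample_weight: true })
}
```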