Class: LinearSVR
Linear Support Vector Regression.
Similar to SVR with parameter kernel='linear', but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.
The main differences between LinearSVR and SVR lie in the loss function used by default, and in the handling of intercept regularization between those two implementations.
This class supports both dense and sparse input.
Read more in the User Guide.
Constructors
new LinearSVR()
new LinearSVR(opts?): LinearSVR
Parameters
Parameter | Type | Description |
---|---|---|
opts ? | object | - |
opts.C ? | number | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
opts.dual ? | boolean | "auto" | Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=false when n_samples > n_features. dual="auto" will choose the value of the parameter automatically, based on the values of n_samples, n_features and loss. If n_samples < n_features and the optimizer supports the chosen loss, then dual will be set to true, otherwise it will be set to false. |
opts.epsilon ? | number | Epsilon parameter in the epsilon-insensitive loss function. Note that the value of this parameter depends on the scale of the target variable y. If unsure, set epsilon=0. |
opts.fit_intercept ? | boolean | Whether or not to fit an intercept. If set to true, the feature vector is extended to include an intercept term: [x_1, ..., x_n, 1], where 1 corresponds to the intercept. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered). |
opts.intercept_scaling ? | number | When fit_intercept is true, the instance vector x becomes [x_1, ..., x_n, intercept_scaling], i.e. a "synthetic" feature with a constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note that liblinear internally penalizes the intercept, treating it like any other term in the feature vector. To reduce the impact of the regularization on the intercept, the intercept_scaling parameter can be set to a value greater than 1; the higher the value of intercept_scaling, the lower the impact of regularization on it. Then, the weights become [w_x_1, ..., w_x_n, w_intercept * intercept_scaling], where w_x_1, ..., w_x_n represent the feature weights and the intercept weight is scaled by intercept_scaling. This scaling allows the intercept term to have a different regularization behavior compared to the other features. |
opts.loss ? | "epsilon_insensitive" | "squared_epsilon_insensitive" | Specifies the loss function. The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss ('squared_epsilon_insensitive') is the L2 loss. |
opts.max_iter ? | number | The maximum number of iterations to be run. |
opts.random_state ? | number | Controls the pseudo random number generation for shuffling the data. Pass an int for reproducible output across multiple function calls. See Glossary. |
opts.tol ? | number | Tolerance for stopping criteria. |
opts.verbose ? | number | Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context. |
Returns LinearSVR
Defined in generated/svm/LinearSVR.ts:29
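A minimal construction sketch. It assumes the package is imported as `sklearn` and exposes `LinearSVR`; the option values shown are illustrative, not verified defaults.

```ts
import { LinearSVR } from 'sklearn'

// Epsilon-insensitive (L1) loss with a moderate regularization strength.
// C, epsilon, and max_iter are illustrative values, not recommendations.
const estimator = new LinearSVR({
  C: 1.0,
  epsilon: 0.0,
  loss: 'epsilon_insensitive',
  fit_intercept: true,
  dual: 'auto',
  max_iter: 1000,
})
```

The instance must still be initialized with `init()` before any method can be called; see init() below.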
Properties
Property | Type | Default value | Defined in |
---|---|---|---|
_isDisposed | boolean | false | generated/svm/LinearSVR.ts:27 |
_isInitialized | boolean | false | generated/svm/LinearSVR.ts:26 |
_py | PythonBridge | undefined | generated/svm/LinearSVR.ts:25 |
id | string | undefined | generated/svm/LinearSVR.ts:22 |
opts | any | undefined | generated/svm/LinearSVR.ts:23 |
Accessors
coef_
Get Signature
get coef_(): Promise<ArrayLike[]>
Weights assigned to the features (coefficients in the primal problem).
coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.
Returns Promise<ArrayLike[]>
Defined in generated/svm/LinearSVR.ts:397
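A short sketch of reading the fitted parameters. It assumes an async context and an `estimator` that has already been initialized and fitted (see fit() below); both accessors resolve asynchronously.

```ts
// Both accessors return Promises; await them only after fit() has resolved.
const coef = await estimator.coef_
const intercept = await estimator.intercept_
console.log('coefficients:', coef, 'intercept:', intercept)
```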
feature_names_in_
Get Signature
get feature_names_in_(): Promise<ArrayLike>
Names of features seen during fit. Defined only when X has feature names that are all strings.
Returns Promise<ArrayLike>
Defined in generated/svm/LinearSVR.ts:468
intercept_
Get Signature
get intercept_(): Promise<ArrayLike>
Constants in decision function.
Returns Promise<ArrayLike>
Defined in generated/svm/LinearSVR.ts:420
n_features_in_
Get Signature
get n_features_in_(): Promise<number>
Number of features seen during fit.
Returns Promise<number>
Defined in generated/svm/LinearSVR.ts:443
n_iter_
Get Signature
get n_iter_(): Promise<number>
Maximum number of iterations run across all classes.
Returns Promise<number>
Defined in generated/svm/LinearSVR.ts:493
py
Get Signature
get py(): PythonBridge
Returns PythonBridge
Set Signature
set py(pythonBridge): void
Parameters
Parameter | Type |
---|---|
pythonBridge | PythonBridge |
Returns void
Defined in generated/svm/LinearSVR.ts:102
Methods
dispose()
dispose(): Promise<void>
Disposes of the underlying Python resources.
Once dispose() is called, the instance is no longer usable.
Returns Promise<void>
Defined in generated/svm/LinearSVR.ts:153
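A sketch of the teardown step, assuming `estimator` was created and initialized earlier. After this call the instance must not be reused.

```ts
// Release the Python-side object once the estimator is no longer needed.
await estimator.dispose()
```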
fit()
fit(opts): Promise<any>
Fit the model according to the given training data.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.sample_weight ? | ArrayLike | Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. |
opts.X ? | ArrayLike | Training vector, where n_samples is the number of samples and n_features is the number of features. |
opts.y ? | ArrayLike | Target vector relative to X. |
Returns Promise<any>
Defined in generated/svm/LinearSVR.ts:170
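A fitting sketch with toy data, assuming an initialized `estimator` (see init() below). The training values are illustrative only.

```ts
// Toy training data: 3 samples, 2 features.
const X = [
  [0, 0],
  [1, 1],
  [2, 2],
]
const y = [0, 1, 2]

await estimator.fit({ X, y })

// Optionally weight individual samples:
// await estimator.fit({ X, y, sample_weight: [1, 1, 2] })
```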
get_metadata_routing()
get_metadata_routing(opts): Promise<any>
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.routing ? | any | A MetadataRequest encapsulating routing information. |
Returns Promise<any>
Defined in generated/svm/LinearSVR.ts:214
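A minimal sketch of querying the routing information. The exact shape of the resolved object is not documented here, so the example only logs it.

```ts
// Resolves to a MetadataRequest-like object describing metadata routing.
const routing = await estimator.get_metadata_routing({})
console.log(routing)
```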
init()
init(py): Promise<void>
Initializes the underlying Python resources.
This instance is not usable until the Promise returned by init() resolves.
Parameters
Parameter | Type |
---|---|
py | PythonBridge |
Returns Promise<void>
Defined in generated/svm/LinearSVR.ts:115
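An initialization sketch. Obtaining the PythonBridge via `createPythonBridge()` is an assumption here; substitute whatever bridge factory your setup actually provides.

```ts
import { createPythonBridge, LinearSVR } from 'sklearn' // createPythonBridge is assumed, not confirmed by this page

// Acquire a PythonBridge and hand it to the estimator.
const py = await createPythonBridge()

const estimator = new LinearSVR({ epsilon: 0.0 })
await estimator.init(py) // the instance is unusable until this resolves
```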
predict()
predict(opts): Promise<any>
Predict using the linear model.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.X ? | any | Samples. |
Returns Promise<any>
Defined in generated/svm/LinearSVR.ts:248
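A prediction sketch, assuming the estimator has been fitted as shown under fit(). The sample values are illustrative.

```ts
// Predict on two unseen samples with the same number of features as X.
const predictions = await estimator.predict({ X: [[1.5, 1.5], [3, 3]] })
console.log(predictions)
```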
score()
score(opts): Promise<number>
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.sample_weight ? | ArrayLike | Sample weights. |
opts.X ? | ArrayLike [] | Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted) , where n_samples_fitted is the number of samples used in the fitting for the estimator. |
opts.y ? | ArrayLike | True values for X . |
Returns Promise<number>
Defined in generated/svm/LinearSVR.ts:281
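A scoring sketch on held-out data, assuming a fitted estimator; the test values are illustrative.

```ts
// R^2 on held-out data: 1.0 is perfect, 0.0 matches a constant mean predictor.
const r2 = await estimator.score({
  X: [[0, 0], [2, 2]],
  y: [0, 2],
})
console.log('R^2:', r2)
```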
set_fit_request()
set_fit_request(opts): Promise<any>
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are: true (the metadata is requested and passed to fit if provided), false (the metadata is not requested and will not be passed to fit), or a string (the metadata is passed to fit under this alias instead of the original name).
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.sample_weight ? | string | boolean | Metadata routing for sample_weight parameter in fit . |
Returns Promise<any>
Defined in generated/svm/LinearSVR.ts:327
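A sketch of requesting sample_weight routing for fit, assuming metadata routing has been enabled in scikit-learn.

```ts
// Request that sample_weight passed via metadata routing reaches fit().
// Only has an effect when enable_metadata_routing=True in scikit-learn.
await estimator.set_fit_request({ sample_weight: true })
```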
set_score_request()
set_score_request(opts): Promise<any>
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are: true (the metadata is requested and passed to score if provided), false (the metadata is not requested and will not be passed to score), or a string (the metadata is passed to score under this alias instead of the original name).
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.sample_weight ? | string | boolean | Metadata routing for sample_weight parameter in score . |
Returns Promise<any>
Defined in generated/svm/LinearSVR.ts:363
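A companion sketch for the score side of the routing machinery, under the same metadata-routing assumption as set_fit_request above.

```ts
// Ask the routing machinery not to pass sample_weight to score().
await estimator.set_score_request({ sample_weight: false })
```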