Class: IncrementalPCA
Incremental principal components analysis (IPCA).
Linear dimensionality reduction using Singular Value Decomposition of the data, keeping only the most significant singular vectors to project the data to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD.
Depending on the size of the input data, this algorithm can be much more memory efficient than a PCA, and allows sparse input.
This algorithm has constant memory complexity, on the order of batch_size * n_features, enabling use of np.memmap files without loading the entire file into memory. For sparse matrices, the input is converted to dense in batches (in order to be able to subtract the mean), which avoids storing the entire dense matrix at any one time.
The computational overhead of each SVD is O(batch_size * n_features ** 2), but only 2 * batch_size samples remain in memory at a time. There will be n_samples / batch_size SVD computations to get the principal components, versus 1 large SVD of complexity O(n_samples * n_features ** 2) for PCA.
For a usage example, see Incremental PCA.
Read more in the User Guide.
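A minimal end-to-end sketch, assuming the createPythonBridge() helper and py.disconnect() shutdown shown in the package's top-level examples; the toy data is purely illustrative:

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function main(): Promise<void> {
  // Assumed helper: starts the Python process that backs every estimator.
  const py = await createPythonBridge()

  // Reduce 3-dimensional toy data to 2 components, 3 samples per batch.
  const pca = new IncrementalPCA({ n_components: 2, batch_size: 3 })
  await pca.init(py)

  const X = [
    [-1, -1, 1],
    [-2, -1, 2],
    [-3, -2, 3],
    [1, 1, -1],
    [2, 1, -2],
    [3, 2, -3],
  ]

  await pca.fit({ X })
  console.log(await pca.transform({ X })) // 6 rows x 2 columns

  // Free the Python-side estimator and stop the bridge.
  await pca.dispose()
  await py.disconnect()
}

main().catch(console.error)
```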
Constructors
new IncrementalPCA()
new IncrementalPCA(opts?): IncrementalPCA
Parameters
Parameter | Type | Description |
---|---|---|
opts ? | object | - |
opts.batch_size ? | number | The number of samples to use for each batch. Only used when calling fit. If batch_size is undefined, then batch_size is inferred from the data and set to 5 * n_features, to provide a balance between approximation accuracy and memory consumption. |
opts.copy ? | boolean | If false, X will be overwritten. copy=False can be used to save memory but is unsafe for general use. |
opts.n_components ? | number | Number of components to keep. If n_components is undefined, then n_components is set to min(n_samples, n_features). |
opts.whiten ? | boolean | When true (false by default) the components_ vectors are divided by n_samples times components_ to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making data respect some hard-wired assumptions. |
Returns IncrementalPCA
Defined in generated/decomposition/IncrementalPCA.ts:33
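A short sketch of the constructor options (the estimator still needs init(py) before any method call); the values below are illustrative only:

```ts
import { IncrementalPCA } from 'sklearn'

// Defaults: n_components = min(n_samples, n_features), batch_size = 5 * n_features.
const defaults = new IncrementalPCA()

// Keep 10 components, 200 samples per batch, and whiten the projected output.
const tuned = new IncrementalPCA({ n_components: 10, batch_size: 200, whiten: true })

// copy: false lets the estimator overwrite X to save memory (unsafe for general use).
const inPlace = new IncrementalPCA({ copy: false })
```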
Properties
Property | Type | Default value | Defined in |
---|---|---|---|
_isDisposed | boolean | false | generated/decomposition/IncrementalPCA.ts:31 |
_isInitialized | boolean | false | generated/decomposition/IncrementalPCA.ts:30 |
_py | PythonBridge | undefined | generated/decomposition/IncrementalPCA.ts:29 |
id | string | undefined | generated/decomposition/IncrementalPCA.ts:26 |
opts | any | undefined | generated/decomposition/IncrementalPCA.ts:27 |
Accessors
batch_size_
Get Signature
get batch_size_(): Promise<number>
Inferred batch size from batch_size.
Returns Promise<number>
Defined in generated/decomposition/IncrementalPCA.ts:761
components_
Get Signature
get components_(): Promise<ArrayLike[]>
Principal axes in feature space, representing the directions of maximum variance in the data. Equivalently, the right singular vectors of the centered input data, parallel to its eigenvectors. The components are sorted by decreasing explained_variance_.
Returns Promise<ArrayLike[]>
Defined in generated/decomposition/IncrementalPCA.ts:540
explained_variance_
Get Signature
get explained_variance_(): Promise<ArrayLike>
Variance explained by each of the selected components.
Returns Promise<ArrayLike>
Defined in generated/decomposition/IncrementalPCA.ts:565
explained_variance_ratio_
Get Signature
get explained_variance_ratio_(): Promise<ArrayLike>
Percentage of variance explained by each of the selected components. If all components are stored, the sum of explained variances is equal to 1.0.
Returns Promise<ArrayLike>
Defined in generated/decomposition/IncrementalPCA.ts:590
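The fitted attributes above resolve asynchronously. A sketch of reading them from an estimator that has already been init()-ed and fit()-ed (the helper name is arbitrary):

```ts
import { IncrementalPCA } from 'sklearn'

// Summarize a fitted IncrementalPCA instance.
async function summarize(pca: IncrementalPCA): Promise<void> {
  console.log('components kept:        ', await pca.n_components_)
  console.log('variance ratio per axis:', await pca.explained_variance_ratio_)
  console.log('principal axes:         ', await pca.components_)
  console.log('batch size used:        ', await pca.batch_size_)
}
```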
feature_names_in_
Get Signature
get feature_names_in_(): Promise<ArrayLike>
Names of features seen during fit. Defined only when X has feature names that are all strings.
Returns Promise<ArrayLike>
Defined in generated/decomposition/IncrementalPCA.ts:811
mean_
Get Signature
get mean_(): Promise<ArrayLike>
Per-feature empirical mean, aggregated over calls to partial_fit.
Returns Promise<ArrayLike>
Defined in generated/decomposition/IncrementalPCA.ts:640
n_components_
Get Signature
get n_components_(): Promise<number>
The estimated number of components. Relevant when n_components is undefined.
Returns Promise<number>
Defined in generated/decomposition/IncrementalPCA.ts:711
n_features_in_
Get Signature
get n_features_in_(): Promise<number>
Number of features seen during fit.
Returns Promise<number>
Defined in generated/decomposition/IncrementalPCA.ts:786
n_samples_seen_
Get Signature
get n_samples_seen_(): Promise<number>
The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls.
Returns Promise<number>
Defined in generated/decomposition/IncrementalPCA.ts:736
noise_variance_
Get Signature
get noise_variance_(): Promise<number>
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf.
Returns Promise<number>
Defined in generated/decomposition/IncrementalPCA.ts:686
py
Get Signature
get py(): PythonBridge
Returns PythonBridge
Set Signature
set py(pythonBridge): void
Parameters
Parameter | Type |
---|---|
pythonBridge | PythonBridge |
Returns void
Defined in generated/decomposition/IncrementalPCA.ts:64
singular_values_
Get Signature
get singular_values_(): Promise<ArrayLike>
The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.
Returns Promise<ArrayLike>
Defined in generated/decomposition/IncrementalPCA.ts:615
var_
Get Signature
get var_(): Promise<ArrayLike>
Per-feature empirical variance, aggregated over calls to partial_fit.
Returns Promise<ArrayLike>
Defined in generated/decomposition/IncrementalPCA.ts:663
Methods
dispose()
dispose(): Promise<void>
Disposes of the underlying Python resources.
Once dispose() is called, the instance is no longer usable.
Returns Promise<void>
Defined in generated/decomposition/IncrementalPCA.ts:116
fit()
fit(opts): Promise<any>
Fit the model with X, using minibatches of size batch_size.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.X ? | ArrayLike | Training data, where n_samples is the number of samples and n_features is the number of features. |
opts.y ? | any | Not used, present for API consistency by convention. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:133
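A hedged sketch of fit(), again assuming the createPythonBridge() setup from the package's top-level examples; the data is illustrative:

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function fitExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 2, batch_size: 3 })
  await pca.init(py)

  // fit() streams X through the estimator in minibatches of size batch_size.
  await pca.fit({
    X: [
      [0.0, 0.0], [1.0, 0.1], [2.1, 1.9],
      [3.0, 3.2], [4.2, 3.9], [5.1, 5.0],
    ],
  })
  console.log(await pca.n_samples_seen_) // total samples processed (6 here)

  await pca.dispose()
  await py.disconnect()
}
```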
fit_transform()
fit_transform(opts): Promise<any[]>
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.fit_params ? | any | Additional fit parameters. |
opts.X ? | ArrayLike [] | Input samples. |
opts.y ? | ArrayLike | Target values (undefined for unsupervised transformations). |
Returns Promise<any[]>
Defined in generated/decomposition/IncrementalPCA.ts:172
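A sketch of fit_transform() under the same bridge assumptions; each returned row has n_components columns:

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function fitTransformExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 1 })
  await pca.init(py)

  // Fit the projection and return the embedded samples in one call.
  const embedded = await pca.fit_transform({
    X: [[1, 2], [2, 4], [3, 6], [4, 8], [5, 10]],
  })
  console.log(embedded) // 5 rows x 1 column

  await pca.dispose()
  await py.disconnect()
}
```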
get_covariance()
get_covariance(opts): Promise<any>
Compute data covariance with the generative model.
cov = components_.T * S**2 * components_ + sigma2 * eye(n_features)
where S**2 contains the explained variances, and sigma2 contains the noise variances.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.cov ? | any | Estimated covariance of data. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:216
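A sketch of recovering the model-based covariance estimate after fitting (bridge setup assumed as above, data illustrative):

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function covarianceExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 2 })
  await pca.init(py)

  await pca.fit({
    X: [[0, 0, 0], [1, 0, 1], [2, 1, 1], [3, 2, 2], [4, 4, 3]],
  })

  // n_features x n_features covariance reconstructed from components_ and noise_variance_.
  console.log(await pca.get_covariance({}))

  await pca.dispose()
  await py.disconnect()
}
```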
get_feature_names_out()
get_feature_names_out(opts): Promise<any>
Get output feature names for transformation.
The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.input_features ? | any | Only used to validate feature names with the names seen in fit. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:250
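A sketch of the generated output names (bridge setup assumed as above):

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function featureNamesExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 2 })
  await pca.init(py)

  await pca.fit({ X: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]] })

  // Names follow the "<lowercased class name><index>" pattern.
  console.log(await pca.get_feature_names_out({}))
  // e.g. ['incrementalpca0', 'incrementalpca1']

  await pca.dispose()
  await py.disconnect()
}
```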
get_metadata_routing()
get_metadata_routing(opts): Promise<any>
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.routing ? | any | A MetadataRequest encapsulating routing information. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:286
get_precision()
get_precision(opts): Promise<any>
Compute data precision matrix with the generative model.
Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.precision ? | any | Estimated precision of data. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:322
init()
init(py): Promise<void>
Initializes the underlying Python resources.
This instance is not usable until the Promise returned by init() resolves.
Parameters
Parameter | Type |
---|---|
py | PythonBridge |
Returns Promise<void>
Defined in generated/decomposition/IncrementalPCA.ts:77
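A sketch of the init()/dispose() lifecycle, assuming the createPythonBridge() helper and py.disconnect() shutdown from the package's examples:

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function lifecycleExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA()

  // No other method is usable before init() resolves.
  await pca.init(py)

  try {
    await pca.fit({ X: [[0, 0], [1, 1], [2, 2], [3, 3]] })
  } finally {
    // Always release the Python-side object, even if fitting throws.
    await pca.dispose()
    await py.disconnect()
  }
}
```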
inverse_transform()
inverse_transform(opts): Promise<any>
Transform data back to its original space.
In other words, return an input X_original whose transform would be X.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.X ? | ArrayLike [] | New data, where n_samples is the number of samples and n_components is the number of components. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:356
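A round-trip sketch: project, then map back to the original feature space (bridge setup assumed as above, data illustrative):

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function roundTripExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 2 })
  await pca.init(py)

  const X = [
    [2.5, 2.4, 0.5], [0.5, 0.7, 1.5], [2.2, 2.9, 0.6],
    [1.9, 2.2, 0.9], [3.1, 3.0, 0.4], [2.3, 2.7, 0.8],
  ]
  await pca.fit({ X })

  const reduced = await pca.transform({ X })
  const restored = await pca.inverse_transform({ X: reduced })
  console.log(restored) // approximates X up to the variance dropped with the 3rd component

  await pca.dispose()
  await py.disconnect()
}
```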
partial_fit()
partial_fit(opts): Promise<any>
Incremental fit with X. All of X is processed as a single batch.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.check_input ? | boolean | Run check_array on X. |
opts.X ? | ArrayLike [] | Training data, where n_samples is the number of samples and n_features is the number of features. |
opts.y ? | any | Not used, present for API consistency by convention. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:390
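A streaming sketch with partial_fit(), feeding the data chunk by chunk instead of all at once (bridge setup assumed as above):

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function streamingExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 2 })
  await pca.init(py)

  // Each chunk is processed as one batch (each has at least n_components rows).
  const chunks = [
    [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    [[2, 1, 1], [1, 2, 1], [1, 1, 2]],
    [[3, 3, 0], [0, 3, 3], [3, 0, 3]],
  ]
  for (const chunk of chunks) {
    await pca.partial_fit({ X: chunk })
  }

  console.log(await pca.n_samples_seen_) // accumulates across calls (9 here)

  await pca.dispose()
  await py.disconnect()
}
```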
set_output()
set_output(opts): Promise<any>
Set output container.
See Introducing the set_output API for an example on how to use the API.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.transform ? | "default" | "pandas" | "polars" | Configure output of transform and fit_transform. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:436
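A hedged sketch of set_output(); how a pandas or polars container is marshalled back across the Python bridge is not specified by this reference:

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function setOutputExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 2 })
  await pca.init(py)

  // Ask the underlying estimator to wrap transform()/fit_transform() output in a pandas container.
  await pca.set_output({ transform: 'pandas' })

  await pca.fit({ X: [[0, 1], [1, 0], [2, 2], [3, 1]] })
  console.log(await pca.transform({ X: [[1, 1], [2, 0]] }))

  await pca.dispose()
  await py.disconnect()
}
```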
set_partial_fit_request()
set_partial_fit_request(opts): Promise<any>
Request metadata passed to the partial_fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- true: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.
- false: metadata is not requested and the meta-estimator will not pass it to partial_fit.
- undefined: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.check_input ? | string | boolean | Metadata routing for check_input parameter in partial_fit. |
Returns Promise<any>
Defined in generated/decomposition/IncrementalPCA.ts:472
transform()
transform(opts): Promise<ArrayLike[]>
Apply dimensionality reduction to X.
X is projected on the first principal components previously extracted from a training set, using minibatches of size batch_size if X is sparse.
Parameters
Parameter | Type | Description |
---|---|---|
opts | object | - |
opts.X ? | ArrayLike | New data, where n_samples is the number of samples and n_features is the number of features. |
Returns Promise<ArrayLike[]>
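A sketch of applying a learned projection to unseen data (bridge setup assumed as above, data illustrative):

```ts
import { createPythonBridge, IncrementalPCA } from 'sklearn'

async function transformExample(): Promise<void> {
  const py = await createPythonBridge()
  const pca = new IncrementalPCA({ n_components: 1 })
  await pca.init(py)

  // Learn the projection from training data...
  await pca.fit({ X: [[0, 0], [1, 1], [2, 2], [3, 3], [4, 4]] })

  // ...then apply it to samples the estimator has never seen.
  const projected = await pca.transform({ X: [[1.5, 1.5], [10, 10]] })
  console.log(projected) // one column per kept component

  await pca.dispose()
  await py.disconnect()
}
```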