MLPClassifier

Multi-layer Perceptron classifier.

This model optimizes the log-loss function using LBFGS or stochastic gradient descent.

Python Reference

Constructors

constructor()

Signature

new MLPClassifier(opts?: object): MLPClassifier;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts? | object | - |
| opts.activation? | "identity" \| "logistic" \| "tanh" \| "relu" | Activation function for the hidden layer. Default value: 'relu'. |
| opts.alpha? | number | Strength of the L2 regularization term. The L2 regularization term is divided by the sample size when added to the loss. Default value: 0.0001. |
| opts.batch_size? | number | Size of minibatches for stochastic optimizers. If the solver is 'lbfgs', the classifier will not use minibatches. When set to "auto", batch_size=min(200, n_samples). Default value: 'auto'. |
| opts.beta_1? | number | Exponential decay rate for estimates of the first moment vector in adam, should be in [0, 1). Only used when solver='adam'. Default value: 0.9. |
| opts.beta_2? | number | Exponential decay rate for estimates of the second moment vector in adam, should be in [0, 1). Only used when solver='adam'. Default value: 0.999. |
| opts.early_stopping? | boolean | Whether to use early stopping to terminate training when the validation score is not improving. If set to true, it will automatically set aside 10% of the training data as a validation set and terminate training when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs. The split is stratified, except in a multilabel setting. If early stopping is false, training stops when the training loss does not improve by more than tol for n_iter_no_change consecutive passes over the training set. Only effective when solver='sgd' or 'adam'. Default value: false. |
| opts.epsilon? | number | Value for numerical stability in adam. Only used when solver='adam'. Default value: 1e-8. |
| opts.hidden_layer_sizes? | any | The ith element represents the number of neurons in the ith hidden layer. |
| opts.learning_rate? | "constant" \| "invscaling" \| "adaptive" | Learning rate schedule for weight updates. Default value: 'constant'. |
| opts.learning_rate_init? | number | The initial learning rate used. It controls the step size in updating the weights. Only used when solver='sgd' or 'adam'. Default value: 0.001. |
| opts.max_fun? | number | Only used when solver='lbfgs'. Maximum number of loss function calls. The solver iterates until convergence (determined by tol), the number of iterations reaches max_iter, or this number of loss function calls is reached. Note that the number of loss function calls will be greater than or equal to the number of iterations for the MLPClassifier. Default value: 15000. |
| opts.max_iter? | number | Maximum number of iterations. The solver iterates until convergence (determined by tol) or this number of iterations. For stochastic solvers ('sgd', 'adam'), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps. Default value: 200. |
| opts.momentum? | number | Momentum for gradient descent update. Should be between 0 and 1. Only used when solver='sgd'. Default value: 0.9. |
| opts.n_iter_no_change? | number | Maximum number of epochs to not meet tol improvement. Only effective when solver='sgd' or 'adam'. Default value: 10. |
| opts.nesterovs_momentum? | boolean | Whether to use Nesterov's momentum. Only used when solver='sgd' and momentum > 0. Default value: true. |
| opts.power_t? | number | The exponent for inverse scaling learning rate. It is used in updating the effective learning rate when learning_rate is set to 'invscaling'. Only used when solver='sgd'. Default value: 0.5. |
| opts.random_state? | number | Determines random number generation for weights and bias initialization, the train-test split if early stopping is used, and batch sampling when solver='sgd' or 'adam'. Pass an int for reproducible results across multiple function calls. See the Glossary. |
| opts.shuffle? | boolean | Whether to shuffle samples in each iteration. Only used when solver='sgd' or 'adam'. Default value: true. |
| opts.solver? | "lbfgs" \| "sgd" \| "adam" | The solver for weight optimization. Default value: 'adam'. |
| opts.tol? | number | Tolerance for the optimization. When the loss or score is not improving by at least tol for n_iter_no_change consecutive iterations, unless learning_rate is set to 'adaptive', convergence is considered to be reached and training stops. Default value: 0.0001. |
| opts.validation_fraction? | number | The proportion of training data to set aside as a validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is true. Default value: 0.1. |
| opts.verbose? | boolean | Whether to print progress messages to stdout. Default value: false. |
| opts.warm_start? | boolean | When set to true, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. See the Glossary. Default value: false. |

Returns

MLPClassifier

Defined in: generated/neural_network/MLPClassifier.ts:23
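
Example

A construction sketch. The import path and the createPythonBridge() helper are assumptions about the package entry point; the option names map directly to the scikit-learn parameters in the table above.

```ts
import { createPythonBridge, MLPClassifier } from 'sklearn'; // assumed package exports

// Configure the estimator; any option omitted here keeps its scikit-learn default.
const clf = new MLPClassifier({
  hidden_layer_sizes: [32, 16],
  activation: 'relu',
  solver: 'adam',
  max_iter: 300,
  random_state: 42,
});

// The constructor only records options; init() must resolve before any other call.
const py = await createPythonBridge();
await clf.init(py);
```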

Methods

dispose()

Disposes of the underlying Python resources.

Once dispose() is called, the instance is no longer usable.

Signature

dispose(): Promise<void>;

Returns

Promise<void>

Defined in: generated/neural_network/MLPClassifier.ts:268

fit()

Fit the model to data matrix X and target(s) y.

Signature

fit(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.X? | ArrayLike | The input data. |
| opts.y? | ArrayLike | The target values (class labels in classification, real numbers in regression). |

Returns

Promise<any>

Defined in: generated/neural_network/MLPClassifier.ts:285
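
Example

A minimal end-to-end sketch of fit() on a toy dataset, under the same import assumptions as the constructor example above.

```ts
import { createPythonBridge, MLPClassifier } from 'sklearn'; // assumed package exports

const X = [[0, 0], [0, 1], [1, 0], [1, 1]]; // input data, one row per sample
const y = [0, 1, 1, 0];                     // class labels

const py = await createPythonBridge();
const clf = new MLPClassifier({ hidden_layer_sizes: [8], max_iter: 2000, random_state: 0 });
await clf.init(py);

await clf.fit({ X, y });               // train on data matrix X and targets y
console.log(await clf.predict({ X })); // predicted labels for the training rows

await clf.dispose();                   // release the underlying Python resources
```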

get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Signature

get_metadata_routing(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.routing? | any | A MetadataRequest encapsulating routing information. |

Returns

Promise<any>

Defined in: generated/neural_network/MLPClassifier.ts:327

init()

Initializes the underlying Python resources.

This instance is not usable until the Promise returned by init() resolves.

Signature

init(py: PythonBridge): Promise<void>;

Parameters

| Name | Type |
| --- | --- |
| py | PythonBridge |

Returns

Promise<void>

Defined in: generated/neural_network/MLPClassifier.ts:198
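
Example

init() wires the instance to a running Python process. A minimal ordering sketch; the bridge factory name is an assumption.

```ts
import { createPythonBridge, MLPClassifier } from 'sklearn'; // assumed package exports

const clf = new MLPClassifier();
// Any method call before init() resolves would fail: the Python estimator does not exist yet.
const py = await createPythonBridge();
await clf.init(py);  // creates the underlying sklearn.neural_network.MLPClassifier
// ... fit / predict / score ...
await clf.dispose(); // free the Python-side object when finished
```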

partial_fit()

Update the model with a single iteration over the given data.

Signature

partial_fit(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.X? | ArrayLike | The input data. |
| opts.classes? | any[] | Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in subsequent calls. Note that y doesn't need to contain all labels in classes. |
| opts.y? | ArrayLike | The target values. |

Returns

Promise<any>

Defined in: generated/neural_network/MLPClassifier.ts:362
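
Example

An incremental-training sketch. It assumes clf is an initialized MLPClassifier using a stochastic solver ('sgd' or 'adam'), since partial_fit is not available with 'lbfgs'.

```ts
// First call: declare every class the model will ever see.
await clf.partial_fit({
  X: [[0, 0], [1, 1]],
  y: [0, 1],
  classes: [0, 1],
});

// Subsequent calls: classes may be omitted, and y need not contain every label.
await clf.partial_fit({ X: [[0, 1], [1, 0]], y: [1, 1] });
```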

predict()

Predict using the multi-layer perceptron classifier.

Signature

predict(opts: object): Promise<ArrayLike>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.X? | ArrayLike | The input data. |

Returns

Promise<ArrayLike>

Defined in: generated/neural_network/MLPClassifier.ts:409
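
Example

A prediction sketch, assuming clf has been initialized and fitted as in the examples above.

```ts
const labels = await clf.predict({ X: [[0.9, 0.1], [0.2, 0.8]] });
console.log(labels); // one predicted class label per input row
```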

predict_log_proba()

Return the log of probability estimates.

Signature

predict_log_proba(opts: object): Promise<ArrayLike[]>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.X? | ArrayLike[] | The input data. |

Returns

Promise<ArrayLike[]>

Defined in: generated/neural_network/MLPClassifier.ts:442

predict_proba()

Probability estimates.

Signature

predict_proba(opts: object): Promise<ArrayLike[]>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.X? | ArrayLike | The input data. |

Returns

Promise<ArrayLike[]>

Defined in: generated/neural_network/MLPClassifier.ts:477
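
Example

A sketch covering both probability methods, assuming a fitted clf; the accessors are read as awaited properties. Columns follow the order of the classes_ attribute.

```ts
const proba = await clf.predict_proba({ X: [[0.9, 0.1]] });        // class probabilities per row
const logProba = await clf.predict_log_proba({ X: [[0.9, 0.1]] }); // element-wise log of the same estimates
console.log(await clf.classes_, proba, logProba);
```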

score()

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set be predicted correctly for each sample.

Signature

score(opts: object): Promise<number>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.X? | ArrayLike[] | Test samples. |
| opts.sample_weight? | ArrayLike | Sample weights. |
| opts.y? | ArrayLike | True labels for X. |

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:512
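
Example

A scoring sketch, assuming a fitted clf; sample_weight is optional.

```ts
const accuracy = await clf.score({
  X: [[0, 0], [1, 1], [0, 1]],
  y: [0, 1, 1],
  sample_weight: [1, 1, 2], // weight the third sample twice as heavily
});
console.log(accuracy); // mean accuracy in [0, 1]
```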

set_partial_fit_request()

Request metadata passed to the partial_fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are: true (the metadata is requested and passed to partial_fit if provided), false (the metadata is not requested), or a string (the metadata is passed to partial_fit under that alias instead of its original name).

Signature

set_partial_fit_request(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.classes? | string \| boolean | Metadata routing for the classes parameter in partial_fit. |

Returns

Promise<any>

Defined in: generated/neural_network/MLPClassifier.ts:563

set_score_request()

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are: true (the metadata is requested and passed to score if provided), false (the metadata is not requested), or a string (the metadata is passed to score under that alias instead of its original name).

Signature

set_score_request(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| opts | object | - |
| opts.sample_weight? | string \| boolean | Metadata routing for the sample_weight parameter in score. |

Returns

Promise<any>

Defined in: generated/neural_network/MLPClassifier.ts:602
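
Example

A routing sketch, assuming clf is used inside a meta-estimator and enable_metadata_routing=True on the Python side; otherwise these calls have no practical effect.

```ts
// Request that sample_weight passed to a meta-estimator be routed into score().
await clf.set_score_request({ sample_weight: true });

// Or route a differently named piece of metadata under an alias.
await clf.set_score_request({ sample_weight: 'my_weights' });
```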

Properties

_isDisposed

boolean = false

Defined in: generated/neural_network/MLPClassifier.ts:21

_isInitialized

boolean = false

Defined in: generated/neural_network/MLPClassifier.ts:20

_py

PythonBridge

Defined in: generated/neural_network/MLPClassifier.ts:19

id

string

Defined in: generated/neural_network/MLPClassifier.ts:16

opts

any

Defined in: generated/neural_network/MLPClassifier.ts:17

Accessors

best_loss_

The minimum loss reached by the solver throughout fitting. If early_stopping=True, this attribute is set to undefined. Refer to the best_validation_score_ fitted attribute instead.

Signature

best_loss_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:685

best_validation_score_

The best validation score (i.e. accuracy score) that triggered the early stopping. Only available if early_stopping=True, otherwise the attribute is set to undefined.

Signature

best_validation_score_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:760

classes_

Class labels for each output.

Signature

classes_(): Promise<ArrayLike>;

Returns

Promise<ArrayLike>

Defined in: generated/neural_network/MLPClassifier.ts:637

coefs_

The ith element in the list represents the weight matrix corresponding to layer i.

Signature

coefs_(): Promise<any[]>;

Returns

Promise<any[]>

Defined in: generated/neural_network/MLPClassifier.ts:808
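
Example

A sketch of reading the fitted weights, assuming a fitted clf; the accessors are read as awaited properties.

```ts
const coefs = await clf.coefs_;           // weight matrices, one per layer transition
const intercepts = await clf.intercepts_; // bias vectors, one per non-input layer
const nLayers = await clf.n_layers_;
console.log(coefs.length, intercepts.length, nLayers - 1); // all three are equal
```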

feature_names_in_

Names of features seen during fit. Defined only when X has feature names that are all strings.

Signature

feature_names_in_(): Promise<ArrayLike>;

Returns

Promise<ArrayLike>

Defined in: generated/neural_network/MLPClassifier.ts:881

intercepts_

The ith element in the list represents the bias vector corresponding to layer i + 1.

Signature

intercepts_(): Promise<any[]>;

Returns

Promise<any[]>

Defined in: generated/neural_network/MLPClassifier.ts:831

loss_

The current loss computed with the loss function.

Signature

loss_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:662

loss_curve_

The ith element in the list represents the loss at the ith iteration.

Signature

loss_curve_(): Promise<any[]>;

Returns

Promise<any[]>

Defined in: generated/neural_network/MLPClassifier.ts:710
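
Example

A convergence-inspection sketch, assuming a fitted clf trained with a stochastic solver ('sgd' or 'adam'), since the loss curve is not recorded for 'lbfgs'.

```ts
const curve = await clf.loss_curve_; // loss value recorded at each training iteration
const nIter = await clf.n_iter_;     // iterations actually run by the solver
console.log(`final loss ${curve[curve.length - 1]} after ${nIter} iterations`);
```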

n_features_in_

Number of features seen during fit.

Signature

n_features_in_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:856

n_iter_

The number of iterations the solver has run.

Signature

n_iter_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:906

n_layers_

Number of layers.

Signature

n_layers_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:929

n_outputs_

Number of outputs.

Signature

n_outputs_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:954

out_activation_

Name of the output activation function.

Signature

out_activation_(): Promise<string>;

Returns

Promise<string>

Defined in: generated/neural_network/MLPClassifier.ts:979

py

Signature

py(): PythonBridge;

Returns

PythonBridge

Defined in: generated/neural_network/MLPClassifier.ts:185

Signature

py(pythonBridge: PythonBridge): void;

Parameters

| Name | Type |
| --- | --- |
| pythonBridge | PythonBridge |

Returns

void

Defined in: generated/neural_network/MLPClassifier.ts:189

t_

The number of training samples seen by the solver during fitting.

Signature

t_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/neural_network/MLPClassifier.ts:785

validation_scores_

The score at each iteration on a held-out validation set. The score reported is the accuracy score. Only available if early_stopping=True, otherwise the attribute is set to undefined.

Signature

validation_scores_(): Promise<any[]>;

Returns

Promise<any[]>

Defined in: generated/neural_network/MLPClassifier.ts:735