
CountVectorizer

Convert a collection of text documents to a matrix of token counts.

This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.

If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection, then the number of features will be equal to the vocabulary size found by analyzing the data.

For an efficiency comparison of the different feature extractors, see FeatureHasher and DictVectorizer Comparison.

Read more in the User Guide.
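
As a quick orientation, here is a minimal end-to-end sketch in TypeScript. The import path and bridge helpers (`createPythonBridge`, `py.disconnect`) are assumptions about this package's conventions; the method calls themselves follow the signatures documented below.

```ts
import { CountVectorizer, createPythonBridge } from 'sklearn'

// Assumption: createPythonBridge() starts the Python process backing this binding.
const py = await createPythonBridge()

const vectorizer = new CountVectorizer()
await vectorizer.init(py)

const corpus = [
  'This is the first document.',
  'This document is the second document.',
  'And this is the third one.',
  'Is this the first document?',
]

// Learn the vocabulary and build the document-term count matrix in one pass.
const X = await vectorizer.fit_transform({ raw_documents: corpus })
console.log(await vectorizer.get_feature_names_out({}))
// ['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']

await vectorizer.dispose()
await py.disconnect()
```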

Python Reference

Constructors

constructor()

Signature

```ts
new CountVectorizer(opts?: object): CountVectorizer;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts?` | `object` | - |
| `opts.analyzer?` | `"word" \| "char" \| "char_wb"` | Whether the feature should be made of word n-grams or character n-grams. Option `'char_wb'` creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed, it is used to extract the sequence of features out of the raw, unprocessed input. Default Value: `'word'` |
| `opts.binary?` | `boolean` | If `true`, all non-zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts. Default Value: `false` |
| `opts.decode_error?` | `"ignore" \| "strict" \| "replace"` | Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default it is `'strict'`, meaning that a UnicodeDecodeError will be raised. Other values are `'ignore'` and `'replace'`. Default Value: `'strict'` |
| `opts.dtype?` | `any` | Type of the matrix returned by `fit_transform()` or `transform()`. |
| `opts.encoding?` | `string` | If bytes or files are given to analyze, this encoding is used to decode. Default Value: `'utf-8'` |
| `opts.input?` | `"filename" \| "file" \| "content"` | If `'filename'`, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If `'file'`, the sequence items must have a `read` method (file-like object) that is called to fetch the bytes in memory. If `'content'`, the input is expected to be a sequence of items of type string or bytes. Default Value: `'content'` |
| `opts.lowercase?` | `boolean` | Convert all characters to lowercase before tokenizing. Default Value: `true` |
| `opts.max_df?` | `number` | When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if `vocabulary` is not `undefined`. Default Value: `1.0` |
| `opts.max_features?` | `number` | If not `undefined`, build a vocabulary that only considers the top `max_features` terms ordered by term frequency across the corpus. Otherwise, all features are used. This parameter is ignored if `vocabulary` is not `undefined`. |
| `opts.min_df?` | `number` | When building the vocabulary, ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if `vocabulary` is not `undefined`. Default Value: `1` |
| `opts.ngram_range?` | `any` | The lower and upper boundary of the range of n-values for different word n-grams or char n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used. For example, an `ngram_range` of (1, 1) means only unigrams, (1, 2) means unigrams and bigrams, and (2, 2) means only bigrams. Only applies if `analyzer` is not callable. |
| `opts.preprocessor?` | `any` | Override the preprocessing (`strip_accents` and `lowercase`) stage while preserving the tokenizing and n-grams generation steps. Only applies if `analyzer` is not callable. |
| `opts.stop_words?` | `any[] \| "english"` | If `'english'`, a built-in stop word list for English is used. There are several known issues with `'english'` and you should consider an alternative (see Using stop words). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if `analyzer == 'word'`. If `undefined`, no stop words will be used. In this case, setting `max_df` to a higher value, such as in the range (0.7, 1.0), can automatically detect and filter stop words based on intra-corpus document frequency of terms. |
| `opts.strip_accents?` | `"ascii" \| "unicode"` | Remove accents and perform other character normalization during the preprocessing step. `'ascii'` is a fast method that only works on characters that have a direct ASCII mapping. `'unicode'` is a slightly slower method that works on any characters. `undefined` (default) means no character normalization is performed. Both `'ascii'` and `'unicode'` use NFKD normalization from unicodedata.normalize. |
| `opts.token_pattern?` | `string` | Regular expression denoting what constitutes a "token", only used if `analyzer == 'word'`. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator). If there is a capturing group in `token_pattern`, then the captured group content, not the entire match, becomes the token. At most one capturing group is permitted. |
| `opts.tokenizer?` | `any` | Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if `analyzer == 'word'`. |
| `opts.vocabulary?` | `any` | Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents. Indices in the mapping should not be repeated and should not have any gap between 0 and the largest index. |

Returns

CountVectorizer

Defined in: generated/feature_extraction/text/CountVectorizer.ts:29
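
A hedged sketch of configuring common options at construction time. The option names match the parameter table above; `ngram_range` is passed as a two-element array standing in for Python's `(min_n, max_n)` tuple, which is an assumption about how this binding marshals tuples.

```ts
// Count unigrams and bigrams, drop rare terms, and record presence only.
const bigramVectorizer = new CountVectorizer({
  analyzer: 'word',
  ngram_range: [1, 2], // assumed array form of the (min_n, max_n) tuple
  min_df: 2,           // ignore terms seen in fewer than 2 documents
  binary: true,        // every nonzero count becomes 1
})
await bigramVectorizer.init(py) // `py` as in the bridge setup sketched earlier
```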

Methods

build_analyzer()

Return a callable to process input data.

The callable handles preprocessing, tokenization, and n-grams generation.

Signature

```ts
build_analyzer(opts: object): Promise<any>;
```

Parameters

| Name | Type |
| ------ | ------ |
| `opts` | `object` |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:237

build_preprocessor()

Return a function to preprocess the text before tokenization.

Signature

```ts
build_preprocessor(opts: object): Promise<any>;
```

Parameters

| Name | Type |
| ------ | ------ |
| `opts` | `object` |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:265

build_tokenizer()

Return a function that splits a string into a sequence of tokens.

Signature

```ts
build_tokenizer(opts: object): Promise<any>;
```

Parameters

| Name | Type |
| ------ | ------ |
| `opts` | `object` |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:293

decode()

Decode the input into a string of unicode symbols.

The decoding strategy depends on the vectorizer parameters.

Signature

```ts
decode(opts: object): Promise<any>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.doc?` | `string` | The string to decode. |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:323

dispose()

Disposes of the underlying Python resources.

Once dispose() is called, the instance is no longer usable.

Signature

```ts
dispose(): Promise<void>;
```

Returns

Promise<void>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:218

fit()

Learn a vocabulary dictionary of all tokens in the raw documents.

Signature

```ts
fit(opts: object): Promise<any>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.raw_documents?` | `any` | An iterable which generates either str, unicode or file objects. |
| `opts.y?` | `any` | This parameter is ignored. |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:356

fit_transform()

Learn the vocabulary dictionary and return document-term matrix.

This is equivalent to fit followed by transform, but more efficiently implemented.

Signature

```ts
fit_transform(opts: object): Promise<any[]>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.raw_documents?` | `any` | An iterable which generates either str, unicode or file objects. |
| `opts.y?` | `any` | This parameter is ignored. |

Returns

Promise<any[]>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:396
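
To make the equivalence above concrete, here is a small sketch reusing `vectorizer` and `corpus` from the earlier examples; it relies only on methods documented on this page.

```ts
// One pass: the corpus is analyzed once.
const X1 = await vectorizer.fit_transform({ raw_documents: corpus })

// Two passes: same resulting matrix, but the corpus is analyzed twice.
await vectorizer.fit({ raw_documents: corpus })
const X2 = await vectorizer.transform({ raw_documents: corpus })
```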

get_feature_names_out()

Get output feature names for transformation.

Signature

```ts
get_feature_names_out(opts: object): Promise<any>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.input_features?` | `any` | Not used, present here for API consistency by convention. |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:434

get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Signature

```ts
get_metadata_routing(opts: object): Promise<any>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.routing?` | `any` | A MetadataRequest encapsulating routing information. |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:472

get_stop_words()

Build or fetch the effective stop words list.

Signature

```ts
get_stop_words(opts: object): Promise<any>;
```

Parameters

| Name | Type |
| ------ | ------ |
| `opts` | `object` |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:507

init()

Initializes the underlying Python resources.

This instance is not usable until the Promise returned by init() resolves.

Signature

```ts
init(py: PythonBridge): Promise<void>;
```

Parameters

| Name | Type |
| ------ | ------ |
| `py` | `PythonBridge` |

Returns

Promise<void>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:160

inverse_transform()

Return terms per document with nonzero entries in X.

Signature

```ts
inverse_transform(opts: object): Promise<any[]>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.X?` | `ArrayLike` | Document-term matrix. |

Returns

Promise<any[]>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:535
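
A small round-trip sketch, continuing the earlier examples:

```ts
const X = await vectorizer.fit_transform({ raw_documents: corpus })
const docTerms = await vectorizer.inverse_transform({ X })
// docTerms[i] lists the vocabulary terms with nonzero counts in corpus[i];
// word order and per-document repetition counts are not recovered.
```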

set_fit_request()

Request metadata passed to the fit method.

Note that this method is only relevant if `enable_metadata_routing=True` (see `sklearn.set_config`). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

- `true`: metadata is requested, and passed to `fit` if provided. The request is ignored if metadata is not provided.
- `false`: metadata is not requested and the meta-estimator will not pass it to `fit`.
- `undefined`: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- A string: metadata should be passed to the meta-estimator with this given alias instead of the original name.

Signature

```ts
set_fit_request(opts: object): Promise<any>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.raw_documents?` | `string \| boolean` | Metadata routing for the `raw_documents` parameter in `fit`. |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:574

set_transform_request()

Request metadata passed to the transform method.

Note that this method is only relevant if `enable_metadata_routing=True` (see `sklearn.set_config`). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

- `true`: metadata is requested, and passed to `transform` if provided. The request is ignored if metadata is not provided.
- `false`: metadata is not requested and the meta-estimator will not pass it to `transform`.
- `undefined`: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- A string: metadata should be passed to the meta-estimator with this given alias instead of the original name.

Signature

```ts
set_transform_request(opts: object): Promise<any>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.raw_documents?` | `string \| boolean` | Metadata routing for the `raw_documents` parameter in `transform`. |

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:613

transform()

Transform documents to document-term matrix.

Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor.

Signature

```ts
transform(opts: object): Promise<any[]>;
```

Parameters

| Name | Type | Description |
| ------ | ------ | ------ |
| `opts` | `object` | - |
| `opts.raw_documents?` | `any` | An iterable which generates either str, unicode or file objects. |

Returns

Promise<any[]>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:651
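
The usual pattern is to fit the vocabulary on training text and reuse it for unseen text. In this sketch, `trainDocs` and `testDocs` are hypothetical arrays of strings:

```ts
await vectorizer.fit({ raw_documents: trainDocs })
const XTest = await vectorizer.transform({ raw_documents: testDocs })
// Terms in testDocs that are absent from the fitted vocabulary are ignored,
// so XTest always has one column per fitted vocabulary term.
```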

Properties

_isDisposed

boolean = false

Defined in: generated/feature_extraction/text/CountVectorizer.ts:27

_isInitialized

boolean = false

Defined in: generated/feature_extraction/text/CountVectorizer.ts:26

_py

PythonBridge

Defined in: generated/feature_extraction/text/CountVectorizer.ts:25

id

string

Defined in: generated/feature_extraction/text/CountVectorizer.ts:22

opts

any

Defined in: generated/feature_extraction/text/CountVectorizer.ts:23

Accessors

fixed_vocabulary_

True if a fixed vocabulary of term-to-index mapping is provided by the user.

Signature

```ts
fixed_vocabulary_(): Promise<boolean>;
```

Returns

Promise<boolean>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:709

py

Signature

```ts
py(): PythonBridge;
```

Returns

PythonBridge

Defined in: generated/feature_extraction/text/CountVectorizer.ts:147

Signature

```ts
py(pythonBridge: PythonBridge): void;
```

Parameters

| Name | Type |
| ------ | ------ |
| `pythonBridge` | `PythonBridge` |

Returns

void

Defined in: generated/feature_extraction/text/CountVectorizer.ts:151

stop_words_

Terms that were ignored because they either:

- occurred in too many documents (`max_df`)
- occurred in too few documents (`min_df`)
- were cut off by feature selection (`max_features`)

This is only available if no vocabulary was given.

Signature

```ts
stop_words_(): Promise<any>;
```

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:734

vocabulary_

A mapping of terms to feature indices.

Signature

```ts
vocabulary_(): Promise<any>;
```

Returns

Promise<any>

Defined in: generated/feature_extraction/text/CountVectorizer.ts:684
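
After fitting, the learned mapping can be inspected through these accessors. A short sketch continuing the earlier examples:

```ts
const vocab = await vectorizer.vocabulary_
console.log(vocab['document']) // column index of 'document' in the count matrix

console.log(await vectorizer.fixed_vocabulary_)
// false here, since the vocabulary was learned from the corpus
// rather than supplied via the `vocabulary` constructor option
```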