Settings tab

Our goal is to ensure a seamless onboarding experience for the Digital Agent and the comprehensive suite of Digital Studio products.

Within the project settings, personalization options are available, but some are specifically designed for advanced users. It's essential to proceed carefully and have a thorough understanding before making adjustments.

Before delving into settings, it's recommended to save the current version in case of any issues. You can easily revert changes at any time using the Project Version History feature.

Basic project settings include three categories:

  1. Configuration: Customize project settings such as project description, the starting node of the flow, and advanced configurations.

  2. Vocabulary: Customize the vocabulary used within the conversation flow.

  3. Stopwords: Add specific words that halt or stop the conversation flow when encountered.

For more detailed information and guidance regarding each setting, explore the specific sections within the project settings. Taking a cautious approach while modifying settings ensures a smooth and optimal project configuration.


Start node

The name of the node in the conversation flow from which the conversation begins. The default is set as START. Most of the time, when building digital agents in the flow editor, it is not necessary to change this setting.


Description

A brief description/note about the project; not utilized in the code or flow.

Allow jumping

allow_jumping (bool):

  • If set to True, users can leave the conversation tree and start a new topic accessible from the start_node.

  • If set to False, users can only visit graph nodes allowed in the conversation flow or nodes presented in allow_jumping_to_nodes.

allow_jumping_to_nodes (List[str]):

  • List of intents that can be jumped to from other intents. The confidence of such an intent prediction incurs a penalty, so that intents within conversation trees are preferred.
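A minimal sketch of how these two settings interact (the function name and data shapes are illustrative, not the platform's API):

```python
def reachable_intents(tree_intents, start_node_intents,
                      allow_jumping, allow_jumping_to_nodes):
    """Return the set of intents a user utterance may resolve to.

    tree_intents: intents allowed by the current conversation tree.
    start_node_intents: topics accessible from the start_node.
    """
    if allow_jumping:
        # Users may leave the tree and start any topic from the start_node.
        return set(tree_intents) | set(start_node_intents)
    # Otherwise, only the tree itself plus explicitly whitelisted jump targets.
    return set(tree_intents) | set(allow_jumping_to_nodes)
```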

Disabled utterance transformers

This configuration parameter allows advanced users to specify a list of transformers that should be disabled and not used during the preprocessing stage of the conversation flow. Each transformer is identified by its name as seen in the logging or diagnostic information.


  • Users can refer to tech logs (e.g., in Test in debug mode) to identify the transformers they want to disable based on their names.

  • Transformers may perform various preprocessing tasks such as normalization, filtering, or augmentation of utterances.

  • Disabling specific transformers can be useful for customizing the preprocessing pipeline according to specific project requirements or preferences.


Example:

  - numbers
  - chitchat_annoyance
  - chitchat_greeting

In this example:

  • The numbers transformer is disabled, which means the preprocessing step responsible for normalizing numbers and digits will be skipped.

  • The chitchat_annoyance transformer is disabled, indicating that preprocessing related to filtering out offensive language or annoyance expressions will not be applied.

  • The chitchat_greeting transformer is disabled, suggesting that preprocessing related to normalizing greetings and farewells will be omitted.

By selectively disabling transformers, users have finer control over the preprocessing pipeline, allowing for tailored customization of the conversation processing flow. This feature caters to advanced users who are familiar with the underlying preprocessing mechanisms and wish to fine-tune them for specific use cases or preferences.
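The mechanism can be pictured as a simple filter over the preprocessing pipeline. The transformer functions below are illustrative stand-ins, not the platform's internals; only the skip-by-name behaviour mirrors the setting:

```python
import re

# Hypothetical transformers keyed by the names shown in the tech logs.
TRANSFORMERS = {
    "numbers": lambda u: re.sub(r"\btwo\b", "2", u),                  # digit normalization (toy)
    "chitchat_greeting": lambda u: re.sub(r"\bhi\b", "", u).strip(),  # greeting removal (toy)
}

def preprocess(utterance, disabled_utterance_transformers):
    for name, transform in TRANSFORMERS.items():
        if name in disabled_utterance_transformers:
            continue  # skip transformers listed in the configuration
        utterance = transform(utterance)
    return utterance
```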

Note: This feature is relevant only when using a custom-trained model for intent recognition or smart functions, as they operate on preprocessed utterances. GPT intent recognition and generative AI nodes work with raw utterances (current_utterance) as obtained from chat or speech-to-text, unless it's manually set to differ.


Language

Specifies the language code, which is crucial for various functionalities that depend on the language setting.

It impacts:

  1. Text-to-Speech and Speech-to-Text Services:

    • The language setting influences the accuracy and effectiveness of text-to-speech and speech-to-text services. These services are tailored to be language-specific.

    • Different languages have unique phonetic characteristics, pronunciation rules, and accents. Therefore, the underlying algorithms and models need to be adjusted accordingly for optimal performance.

    • Neural voice options for text-to-speech depend on the language setting: for example, Czech offers only two public neural voice personas (male and female), whereas English offers dozens of options with regional accents, sentiment features, age variety, a multitude of female and male personas, and even unisex neural voices.

  2. Word to Vec Library Functionality:

    • The Word to Vec library functionality may be adjusted based on the language setting. This library is commonly used for word embedding, where words are represented as vectors in a high-dimensional space.

    • When using a custom training set, the Word to Vec library can be optimized to capture the language's unique features. This optimization helps improve various natural language processing tasks such as semantic similarity and intent recognition.

  3. Entity Extraction:

    • Entity extraction involves identifying and extracting specific entities or information from text, such as dates, names, addresses, and numerical values like phone numbers or registration plate formats.

    • Language-specific formats and conventions exist for different types of entities. For example, date formats may vary between languages, as demonstrated by the example of MM/DD/YYYY format in US-English versus DD/MM/YYYY in UK-English and Europe.

    • Therefore, entity extraction rules need to be tailored to recognize and extract entities according to the specific formats and patterns of the language being processed.

    • This means that for languages like Slovak, the system may not recognize Czech cities when extracting addresses from text, and vice versa. This limitation arises due to the language-specific nature of the entity dictionaries or lists used for extraction.

  4. Preprocessing Tasks:

    • Preprocessing tasks encompass various text processing steps applied before feeding data into the downstream natural language processing (NLP) engine.

    • Language-specific preprocessing includes tasks such as handling default stopwords (common words like "the", "and", "is" that are often removed as they carry little semantic meaning), correction of spelling errors, text transformation (e.g., lowercase conversion), and other linguistic transformations.

    • Different languages may have distinct sets of stopwords and spelling correction rules, necessitating language-specific preprocessing pipelines to ensure accurate and consistent processing of text data.
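The date-format point can be made concrete with a tiny sketch. The language-to-format mapping below is illustrative only; the platform's actual entity extractor is more sophisticated:

```python
from datetime import datetime

def parse_date(text, language):
    # Illustrative only: US-English reads MM/DD/YYYY, most European
    # languages read DD/MM/YYYY, so the same string yields different dates.
    fmt = "%m/%d/%Y" if language == "en" else "%d/%m/%Y"
    return datetime.strptime(text, fmt).date()
```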

Supported languages:

  • cs (Czech 🇨🇿),

  • sk (Slovak 🇸🇰),

  • en (English 🇺🇸🇬🇧🇦🇺🇨🇦),

  • de (German 🇩🇪🇦🇹🇨🇭),

  • pl (Polish 🇵🇱),

  • hu (Hungarian 🇭🇺),

  • fr (French 🇫🇷🇧🇪🇨🇦),

  • nl (Dutch 🇳🇱),

  • pt (Portuguese 🇵🇹),

  • ro (Romanian 🇷🇴),

  • ru (Russian 🇷🇺),

  • es (Spanish 🇪🇸), es-mx (Mexican Spanish 🇲🇽).

Classifier (lemmatizer, tokenizer, aspell, stopwords)

Configuration of the Natural Language Understanding (NLU) module.

  • tokenizer (str): Default value is 'nist'.

  • aspell (str):

    • Options: 'replace', 'duplicate', 'whitelist', 'null'.

    • 'replace': Replace misspelled words by their correct forms.

    • 'duplicate': Add the correct form of the misspelled word to the end of the utterance.

    • 'whitelist': Use only whitelist corrections, ignore aspell (suitable for speech channel).

    • 'null': No correction or accentuation is applied.

  • stopwords (bool):

    • If True, remove stopwords from text (e.g., 'the', 'be', 'and').

    • A default stop word list is set for each language. You may also define a custom stop word list on the project level. In that case, the default stop word list is ignored on your project.
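The aspell modes can be sketched as follows. The function and the corrections dictionary are a simplified illustration, not the NLU module's actual code:

```python
def apply_spelling_correction(utterance, corrections, mode):
    """corrections: misspelling -> correct form (a hypothetical dictionary)."""
    tokens = utterance.split()
    if mode == "replace":
        # Replace misspelled words by their correct forms.
        return " ".join(corrections.get(t, t) for t in tokens)
    if mode == "duplicate":
        # Append the correct forms to the end of the utterance.
        extras = [corrections[t] for t in tokens if t in corrections]
        return " ".join(tokens + extras)
    # 'null' (and 'whitelist' with no whitelist entries): leave the text unchanged.
    return utterance
```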

Intent thresholds

These intent thresholds serve as criteria for deciding whether the system can confidently determine a user's intent based on the input provided. Here's a more detailed explanation:


intent_threshold:

  • This threshold sets the minimum level of confidence required for the system to consider an intent valid. If the confidence score of the top-ranked intent falls below this threshold, the system lacks confidence in identifying the user's intent accurately. In such cases, instead of making a potentially erroneous intent selection, the system returns the I_DONT_UNDERSTAND intent, signaling to the user that their input couldn't be confidently interpreted. Subsequently, the flow continues to the target node set as the fallback.

  • The intent threshold setting is global, meaning it's the same for all the ANS nodes in the flow.

  • The default value is 0.6; the recommended range is from 0.55 to 0.9.


intent_relative_threshold:

  • This threshold introduces a comparative aspect to the intent selection process. It ensures that the confidence of the top-ranked intent exceeds the confidence of the next-ranked intent by a specified margin.

  • If the confidence of the top-ranked intent is not higher than the confidence of the second-ranked intent multiplied by this threshold value, there is no clear distinction in confidence levels between the top two intents. Consequently, the system returns the I_DONT_UNDERSTAND intent, indicating uncertainty in intent prediction despite having multiple options, and the flow moves to the next state based on the fallback target.


Threshold test and score boosting: Test is performed only if more than one intent is considered. Intent resolver predicts pairs of original confidences and labels (oc1, label1), (oc2, label2), …, where original confidence is a number in the range [0, 1], and the sum of original confidences is in the range [0, ∞).

The flow cuts off unreachable nodes and nodes that do not pass the extractors' entry conditions.

We also operate with Semantically Same Intent (SSI) groups. The rule is as follows: while the label with the highest score is in the same SSI group as the second-best node, merge scores of these nodes and keep the label of the winner.

Example intent resolver scores:

The above scores are normalized as follows:

After merging SSI nodes and computing the normalized scores, the threshold tests follow:

Original Threshold Test:

If the first label original confidence is less than the original threshold, no intent is selected.

  • With original_intent_threshold=0.001, the label yes_plain passes, and the test continues.

  • With original_intent_threshold=0.005, the label yes_plain fails, and no intent is selected.

Normalized Threshold Test:

If the first label normalized confidence is greater than the intent threshold, the first label is selected; otherwise, the next test follows.

  • With intent_threshold=0.8, the label yes_plain passes and is returned (intent is selected).

  • With intent_threshold=0.9, the label yes_plain fails, and the next test follows.

Relative Threshold Test:

If the best label confidence is X times greater than the second-best label confidence, the best label is accepted, and the intent is selected.

  • With intent_relative_threshold=10, the label yes_plain passes (0.067 * 10 < 0.87), and the intent is selected.

  • With intent_relative_threshold=20, the label yes_plain fails (0.067 * 20 > 0.87), and no intent is selected.
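Putting the three tests together, the decision logic can be sketched as follows. The scores below are made-up values chosen to reproduce the pass/fail pattern described above, and the function is an illustration, not the resolver's actual code:

```python
def resolve_intent(scores, original_intent_threshold=0.001,
                   intent_threshold=0.8, intent_relative_threshold=10):
    """scores: list of (original_confidence, label) pairs from the resolver."""
    ranked = sorted(scores, reverse=True)
    (oc1, label1), (oc2, _label2) = ranked[0], ranked[1]
    # 1. Original threshold test: reject a weak absolute winner.
    if oc1 < original_intent_threshold:
        return "I_DONT_UNDERSTAND"
    # 2. Normalized threshold test: accept a dominant winner outright.
    total = sum(oc for oc, _ in ranked)
    n1, n2 = oc1 / total, oc2 / total
    if n1 > intent_threshold:
        return label1
    # 3. Relative threshold test: accept if the winner clearly beats the runner-up.
    if n1 > n2 * intent_relative_threshold:
        return label1
    return "I_DONT_UNDERSTAND"

# Hypothetical original confidences; yes_plain normalizes to roughly 0.88.
scores = [(0.003, "yes_plain"), (0.0002, "no_plain"), (0.0002, "stop")]
```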

Training epoch and learning rate

Model parameters for deep neural network training.

  • epoch (int): Specifies how long the training takes. To be exact, this parameter specifies the number of cycles the neural network goes through your training set during the training process. The recommended value is between 10 and 30, depending on language and training set volume.

  • lr (float): Learning rate determining how fast the neural network learns. Must be a positive number. A recommended value is between 0.1 and 0.5.

❗This configuration is relevant only when utilizing a custom training set for intent recognition. If intent recognition is driven by methods such as GPT or keyword matching, this setting is unnecessary as a custom neural network model is not employed.

Speed and delay

speed_coefficient (float):

  • A smaller value results in a smaller delay between messages sent by the chatbot to the user. A value of 1 allows enough time for the average user to read everything before the next message is displayed.

max_delay_milliseconds (int):

  • Specifies the maximum delay between messages sent by the chatbot in milliseconds.

❗The speed and delay settings concern chat-based digital assistants only!
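One plausible way to picture how the two values interact is the sketch below. The per-character constant and the formula are assumptions made for illustration only, not the platform's actual timing algorithm:

```python
def message_delay_ms(message, speed_coefficient=1.0,
                     max_delay_milliseconds=3000, ms_per_char=50):
    # ms_per_char is a made-up reading-speed constant for this sketch.
    # The coefficient scales the delay, and the cap bounds it from above.
    delay = len(message) * ms_per_char * speed_coefficient
    return min(int(delay), max_delay_milliseconds)
```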


Vocabulary

The Vocabulary feature serves as a dictionary of corrections used during utterance processing. It allows users to specify custom corrections for words or phrases, which take precedence over the default corrections provided for each language.

Key Concepts:

  1. Custom Corrections:

    • Users can define custom corrections by specifying key-value pairs. The key represents the word or phrase to be corrected, while the value indicates the replacement text.

  2. Case Sensitivity:

    • Vocabulary corrections are case-sensitive. Each key is matched against the input utterance as a whole word, ensuring that partial matches of syllables within words are not corrected.

  3. Regular Expressions:

    • Vocabulary supports the use of regular expressions, enabling users to provide multiple word forms or patterns with a single correction entry.

  4. Sequential Evaluation of Vocabulary Corrections

    The Vocabulary feature evaluates corrections sequentially in the order they appear in the configuration. This means that if multiple corrections conflict, the correction listed first will be applied. Therefore, it is not advisable to order corrections alphabetically.

    Example: Suppose we have the following corrections:

    • "Hello" corrected to "Greeting"

    • "Hello World" corrected to "bonus program Hello word"

    If the input sentence is "I am interested in Hello World", the first correction will be applied, transforming the sentence into "I am interested in Greeting World". The second correction will not be applied because "hello world" is no longer present in the sentence. This can potentially impact intent recognition, as the sentence has been altered suboptimally.
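The sequential, whole-word behaviour described above can be sketched like this (a simplified model of the corrector, not its actual implementation):

```python
import re

def apply_vocabulary(utterance, corrections):
    """corrections: ordered list of (key, replacement) pairs.

    Keys are matched case-sensitively as whole words; keys may themselves
    be regular expressions (plain words here for simplicity).
    """
    for key, replacement in corrections:  # evaluated in listed order
        utterance = re.sub(rf"\b{key}\b", replacement, utterance)
    return utterance
```

Running the example from the text shows the conflict: the first-listed correction wins, and whole-word matching leaves partial matches untouched.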

💡Here are some tips for custom vocabulary use-cases:

Semantic recognition enhancement

Custom corrections can expand or rewrite expressions to improve semantic recognition. For instance, mapping "car" to "automobile" ensures that both terms are recognized as referring to the same concept.

Acronym and abbreviation interpretation

Mapping acronyms to their expanded forms aids in interpreting user inputs containing abbreviations. For example, associating "ATM" with "automatic teller machine" ensures that users' requests involving ATMs are correctly understood.

Internal terminology understanding

Specifying internal terms or expressions improves the language model's understanding.

For instance, mapping "Joy" to "tariff Joy" and "Holiday" to "tariff Holiday" helps the system comprehend user requests related to specific tariff plans, e.g., the utterance "I would like to purchase Joy/Holiday".

(In)consistent transcription from speech-to-text

Vocabulary corrects systematically misspelled words or phrases from speech-to-text.

For example, a common issue with Czech and Slovak speech-to-text is the transcription of the number six as the word "šest". Mapping it to "6" in the custom vocabulary ensures consistent transcription for smart functions to work with.

Normalization of user vocabulary

Standardizing user expressions reduces the need for extensive training data. Mapping synonymous terms to a common expression streamlines intent recognition. For example, mapping "invoice" to "bill" ensures that both terms are recognized interchangeably.

Slang and regional expression normalization

Vocabulary can map slang or regional expressions to more recognizable terms. For instance, mapping "hella" to "very", "barák" to "dům", "šalina" to "tramvaj", etc. ensures that regional expressions are correctly interpreted in a wider context.

Preserving specific input

Users can specify transcriptions they prefer not to be altered. By specifying a key-value pair where the key and value are the same, certain transcriptions remain unchanged.

For example, mapping "Yello" to "Yello" ensures that the term "Yello" (a company name) is preserved without alteration by default corrector or other transformer.


Stopwords

Stopwords are common words in a language that are typically filtered out during text processing, as they do not carry significant meaning and are unlikely to contribute to the understanding of the text. Examples of stopwords include articles, conjunctions, and prepositions.

Importance of Removing Stopwords:

  • Enhances Text Processing: By removing stopwords, text processing algorithms can focus on more meaningful content, improving the accuracy of tasks such as sentiment analysis, topic modeling, and text classification.

  • Reduces Noise: Stopwords often occur frequently in text but convey little semantic information. Removing them helps reduce noise and extract the most relevant information from the text.

Key Features of Stopwords Handling:

  1. Default stopword list:

    • For each language, a default list of stopwords is provided. These lists contain common stopwords in the respective language.

  2. Stopword removal configuration:

    • Stopword removal can be toggled on or off in the project configuration. This allows users to customize whether stopwords are removed during text processing.

  3. Custom stopword list:

    • Users have the flexibility to specify a custom list of stopwords. In cases where the default stopwords do not suit the project's requirements, the custom stopword list takes precedence. The custom list is used for stopwords removal, and the default list is ignored.

  4. Preprocessing order:

    • Stopword removal is one of the initial preprocessing steps. If stopwords are removed, subsequent preprocessing steps such as vocabulary corrections do not operate on them, as they are already eliminated from the text.
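The precedence rules above can be modelled minimally as follows (the word lists are illustrative samples, not the platform's actual per-language lists):

```python
DEFAULT_STOPWORDS = {"the", "be", "and", "is"}  # sample of a per-language default list

def remove_stopwords(utterance, enabled=True, custom_stopwords=None):
    if not enabled:
        return utterance  # removal toggled off in the project configuration
    # A custom project-level list takes precedence; the default is then ignored.
    stopwords = set(custom_stopwords) if custom_stopwords else DEFAULT_STOPWORDS
    return " ".join(t for t in utterance.split() if t not in stopwords)
```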
