2.6: Testing your AI

1. What the Testing section is for

The Testing section lets you evaluate how your AI assistant responds before going live, and again whenever you change its configuration.

It helps you:

  • Verify that the assistant answers correctly based on your store information and policies

  • Check that the tone matches your brand voice

  • Improve performance through structured feedback

2. How to test your assistant

Before you start

Testing happens in a controlled environment where you can preview how your assistant will behave with real customers.

A few important things to keep in mind:

  • Testing always runs in a live chat format; it cannot fully simulate email-style conversations or tone

  • You can still use testing to validate your assistant’s tone, since AI Persona adjustments apply normally

⚠️ Testing works best when the assistant has the right foundations in place. In Konvo, there are two main components you may be testing:

Testing knowledge (answers)

If you are testing how the assistant responds to customer questions, the required information must already exist in your Knowledge Hub (article 2.1).

For example, to answer correctly about returns, shipping, or subscriptions, your policies and support content need to be available to the AI.

If the knowledge is missing, the assistant may respond with incomplete or generic answers.

Testing skills (actions)

If you are testing an assistant action, such as:

  • tracking an order

  • cancelling an order

  • processing a refund

  • editing a subscription

then the relevant Skill (and Process) must be fully configured and activated beforehand (articles 2.3 and 2.4).

Testing an action also requires realistic customer inputs. For example:

  • To test “Where is my order?”, you need a valid order ID and the customer email associated with it

  • To test cancellations or refunds, the order must meet the conditions defined in your setup

The assistant can only act when it has the correct data and permissions.
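The input requirements above can be sketched as a simple pre-flight check you run before testing an action. This is purely illustrative: the function, field names (`order_id`, `customer_email`), and validation rules are assumptions for the sketch, not part of Konvo's actual data model.

```python
import re

def validate_action_test_input(order_id: str, customer_email: str) -> list[str]:
    """Return a list of problems with the test input; an empty list means it looks usable."""
    problems = []
    # A real order ID must be present; adjust the pattern to your store's format.
    if not re.fullmatch(r"[A-Za-z0-9\-#]+", order_id or ""):
        problems.append("order_id is missing or malformed")
    # The email must be the one associated with the order, or the action cannot run.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", customer_email or ""):
        problems.append("customer_email is not a valid email address")
    return problems
```

Running a quick check like this before each action test saves a round of "Poor" ratings caused by bad test data rather than a bad Skill setup.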


Add and run test questions

Once your foundations are in place, you can begin testing directly from the Test section in your Konvo dashboard.

Most teams start by typing the most common customer requests they receive, such as:

  • “Where is my order?”

  • “Can I return this item?”

  • “Can you cancel my subscription?”

You can also start from one of the pre-built templates, which include some of the most common customer questions.

You can add as many questions as needed and return to them anytime.


Review and rate responses

For each test question, KonvoAI generates a response as if a real customer had asked it.

After reviewing the answer, you can rate it:

  • Good, if the response is correct and aligned with your expectations

  • Poor, if the response is incorrect, incomplete, or not appropriate

This feedback helps you identify what needs improvement before the assistant interacts with real customers.


If an answer is Poor, apply the right fix

When you mark an answer as Poor, you will be asked to choose a reason. This helps you understand what needs adjustment.

Common scenarios include:

Missing or incorrect knowledge

If the assistant does not have the right information to answer correctly, you may need to update your Knowledge Hub.

This often applies to questions about:

  • return policies

  • shipping rules

  • subscription terms

  • product-specific details

Adding or refining content in the Knowledge Hub (usually through Custom Replies) is the best way to improve these answers (article 2.1).


Skill or process not configured

If the assistant fails to complete an action, the issue is usually related to the Skill setup.

Make sure that:

  • the Skill is activated

  • the Process configuration is complete

  • the test input is valid (a real order ID, the correct customer email, etc.)

Actions can only run when the assistant has the correct permissions and data.


Tone or style needs adjustment

If the answer is correct but does not match your brand voice, you can refine the assistant’s tone through your AI Persona settings (article 2.2).

Persona changes apply globally, so use them when you want consistent tone improvements across all conversations.


Iterate and re-test

Testing is an iterative process.

After making improvements, you can:

  • re-run the same question

  • confirm that the response has improved

  • continue testing new scenarios

Most teams repeat this loop until the assistant performs reliably across their most common customer requests.
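The iterate-and-re-test loop above can be modeled as a small log of ratings per question. This is a conceptual sketch only; the class and function names here are hypothetical and do not correspond to any Konvo feature or API.

```python
from dataclasses import dataclass, field

@dataclass
class TestQuestion:
    """Tracks one test question across iterations of the improve/re-test loop."""
    text: str
    ratings: list = field(default_factory=list)  # "good" or "poor", oldest first

    def rate(self, rating: str) -> None:
        self.ratings.append(rating)

    @property
    def resolved(self) -> bool:
        # A question counts as resolved once its most recent run was rated Good.
        return bool(self.ratings) and self.ratings[-1] == "good"

def ready_for_next_scenario(questions) -> bool:
    """Move on only when every tracked question is resolved."""
    return all(q.resolved for q in questions)
```

The design choice here mirrors the article's loop: only the latest rating matters, because an answer that was Poor two iterations ago but Good now is considered fixed.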


3. Best practices

Test multi-turn conversations

Customer support requests often involve follow-up questions, not just single messages.

KonvoAI allows you to simulate this by continuing the conversation inside the Test environment.

To do this:

  1. Open a test question

  2. Click Add user response

  3. Write a realistic follow-up message

  4. Review whether the assistant stays consistent and uses context correctly

This is especially useful for flows like returns, order changes, or subscription updates.


Go-live readiness checklist

Before enabling the assistant for real customers, it is recommended to confirm that:

  • Your 5 most common customer questions are answered correctly

  • Your key policies are reflected accurately in responses

  • Activated Skills behave as expected with valid inputs

  • Tone matches your brand voice

  • Complex or edge cases escalate properly when needed

Once these areas are covered, you are ready to confidently move toward production use.
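If you want to track the checklist above while testing, a minimal script works well. The item names below are paraphrased from this article for illustration; nothing here is a built-in Konvo feature.

```python
# Each item starts unchecked; flip it to True as you verify it in the Test section.
GO_LIVE_CHECKLIST = {
    "top_5_questions_answered_correctly": False,
    "key_policies_reflected_accurately": False,
    "skills_behave_with_valid_inputs": False,
    "tone_matches_brand_voice": False,
    "edge_cases_escalate_properly": False,
}

def remaining_items(checklist: dict) -> list[str]:
    """List the checklist items that still block go-live."""
    return [item for item, done in checklist.items() if not done]

def ready_to_go_live(checklist: dict) -> bool:
    return not remaining_items(checklist)
```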


What to Explore Next

→ 2.7 Deploy your channels
