
Chat with AI

This document is a step-by-step tutorial designed for beginners. It will guide you through the process of implementing AI chat functionality in your Casibase knowledge base system.

Introduction

In previous sections, we have deployed Casdoor and Casibase, integrated a storage provider, a model provider, and an embedding provider with Casibase, and added a store that uses these providers.

Refer to the Add a Store section of our previous documentation for more information about stores.

Now, let's implement AI chat functionality in Casibase.

Step 1: Add a New Chat

Click the Chats button on the home page and then click the New Chat button to add a chat.

Casibase-chats-add

Step 2: Send a Message

Write a message and click the Send button to send it.

Casibase-chats-davinci003

Step 3: Knowledge Base Chat

Additionally, you can chat with the chatbot in the knowledge base.

There are some requirements for the knowledge base chat:

  • The knowledge base must have a store.
  • The store must have a model provider.
  • The store must have an embedding provider.
  • The store must have a storage provider.
  • The storage provider must contain a readable document (e.g., a Markdown, DOCX, or PDF file).

Once you have met these requirements, you can return to the Stores page and click the Refresh Vectors button to embed the knowledge base data.

Casibase-stores-refresh-button

The button will be disabled when the embedding is in progress.

After the embedding is complete, you can click the Vectors button in the navigation bar to view the vectors.

Result:

Casibase-vectors-refresh-result
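Behind the scenes, refreshing vectors roughly means splitting each readable document into chunks and asking the embedding provider to turn every chunk into a vector that is then stored for later retrieval. The sketch below only illustrates that idea; the helper names, chunk size, and toy embedding are assumptions for illustration, not Casibase's actual code.

```python
# Conceptual sketch of a "Refresh Vectors" pipeline.
# All helpers here are hypothetical; Casibase's real code calls the
# configured embedding provider instead of the toy embed() below.
from typing import List

CHUNK_SIZE = 512  # assumed chunk length in characters


def split_into_chunks(text: str, size: int = CHUNK_SIZE) -> List[str]:
    """Split one document into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(chunk: str) -> List[float]:
    """Toy stand-in for one call to the embedding provider."""
    # Deterministic dummy vector so the sketch runs end to end.
    return [len(chunk) / CHUNK_SIZE, chunk.count(" ") / max(len(chunk), 1)]


def refresh_vectors(documents: List[str]) -> List[List[float]]:
    """Embed every chunk of every readable document in the store."""
    return [embed(chunk) for doc in documents for chunk in split_into_chunks(doc)]
```

Each entry on the Vectors page corresponds to one such embedded chunk of your documents.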

Let's chat with the chatbot in the knowledge base.

Casibase-chats-get-answer

Compare the result with a chat that does not use the knowledge base:

Casibase-chats-answer-wrong-casdoor
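The knowledge base chat gives a different (and usually better) answer because the question itself is embedded, the closest document chunks are retrieved from the vector store, and only then is the model prompted with that context. The sketch below shows this retrieval step under simple assumptions; the function names and prompt format are illustrative, not Casibase's actual implementation.

```python
# Minimal retrieval sketch: rank stored chunk vectors by cosine similarity
# to the question vector, then build a context-grounded prompt.
# Names and prompt wording are assumptions for illustration only.
import math
from typing import List, Tuple


def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(question_vec: List[float],
             index: List[Tuple[str, List[float]]],
             top_k: int = 3) -> List[str]:
    """Return the top_k chunks whose vectors are closest to the question."""
    ranked = sorted(index,
                    key=lambda item: cosine_similarity(question_vec, item[1]),
                    reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]


def build_prompt(question: str, context_chunks: List[str]) -> str:
    """Prepend the retrieved chunks so the model answers from the knowledge base."""
    context = "\n\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Without this retrieval step, the model answers from its general training data alone, which is why the non-knowledge-base chat can get the Casdoor-specific question wrong.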

attention

The embedding rate depends on two factors:

  • The documents in the knowledge base.
    • Document count: the more documents, the longer the embedding takes.
    • Document size: the larger the documents, the longer the embedding takes.
  • The embedding provider.
    • API rate limit: the higher the rate limit, the faster the embedding.
    • API concurrency: the more concurrent requests allowed, the faster the embedding.

For example, if you use the OpenAI API as the embedding provider, the embedding rate depends on your OpenAI API rate limit and concurrency.
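If you want a rough feel for how long a refresh will take, you can estimate it from the chunk count, the rate limit, and the concurrency. Every number in the sketch below is an assumption to replace with your own document count and your provider's actual limits.

```python
# Back-of-the-envelope estimate of embedding time.
# All figures are assumptions, not OpenAI's actual limits.
def estimate_embedding_minutes(num_chunks: int,
                               requests_per_minute: int,
                               concurrency: int) -> float:
    """Throughput is capped by the rate limit and by how many requests run at once."""
    # Assume roughly one second per embedding request.
    effective_rpm = min(requests_per_minute, concurrency * 60)
    return num_chunks / effective_rpm


# Example: 2,000 chunks, a 500 requests/minute limit, 5 concurrent requests.
print(estimate_embedding_minutes(2000, 500, 5))  # ≈ 6.7 minutes (2000 / 300)
```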

Conclusion

In this guide, we have learned how to implement AI chat functionality in Casibase.

Now, you can chat with the chatbot in Casibase. Enjoy it!

More information about Casibase can be found in the Core Concepts section of our documentation.