The Carbon AI chat lets you provide your own server for the chat to interact with. It allows for both streamed and non-streamed results, or a mixture of both.
The Carbon AI chat provides a MessageRequest when someone sends a message and expects a MessageResponse to be returned. You can also stream the MessageResponse. See ChatInstanceMessaging.addMessageChunk for an explanation of the streaming format.
For more information, see the examples page.
Inside the MessageResponse, the Carbon AI chat accepts various response_types. You can view the properties for each response_type by visiting the base GenericItem type.
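As an illustration, a minimal MessageResponse carrying a single text item might look like the following sketch. The field names follow the documented MessageResponse and GenericItem types; consult those types for the full set of fields and supported response_types.

```typescript
// A minimal sketch of the MessageResponse shape with one text item.
// See the GenericItem type for the other supported response_types.
const response = {
  output: {
    generic: [
      {
        response_type: "text",
        text: "Hello! How can I help you today?",
      },
    ],
  },
};
```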
The Carbon AI chat takes custom messaging server configuration as part of its PublicConfig. You are required to provide a messaging.customSendMessage function (see PublicConfigMessaging.customSendMessage) that the Carbon AI chat calls any time the user sends a message. It is also called if you make use of the send function on ChatInstance.
In this function, the Carbon AI chat passes three parameters: the MessageRequest being sent, a request options object, and the ChatInstance object.

This function can return nothing, or it can return a promise. If you return a promise, the Carbon AI chat does the following actions:

- It queues any further messages the user sends until your promise settles.
- It applies the messaging.messageTimeoutSecs timeout identified in your PublicConfig. See PublicConfigMessaging.messageTimeoutSecs.

If you do not return a promise, the Carbon AI chat does not queue messages for you.
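As a rough sketch, wiring a customSendMessage function into your PublicConfig could look like the following. The /api/chat endpoint is a placeholder for your own server, and the type names are shown assuming they are exported by the package you are using; adjust to match your version.

```typescript
import type { ChatInstance, MessageRequest, PublicConfig } from "@carbon/ai-chat";

// Placeholder endpoint for your own messaging server.
const CHAT_ENDPOINT = "/api/chat";

async function customSendMessage(
  request: MessageRequest,
  requestOptions: unknown,
  instance: ChatInstance,
) {
  // POST the MessageRequest to your server and wait for a MessageResponse.
  const res = await fetch(CHAT_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });

  // Feed the non-streamed MessageResponse back into the chat.
  instance.messaging.addMessage(await res.json());
}

const config: PublicConfig = {
  messaging: {
    customSendMessage, // returning a promise lets the chat queue messages
    messageTimeoutSecs: 60,
  },
};
```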
By default, the Carbon AI chat sends a MessageRequest with input.text set to a blank string and history.is_welcome_request set to true when a user first opens the chat. This allows you to inject a hard-coded greeting response for the user. If you do not wish to use this functionality, you can set messaging.skipWelcome to true. See PublicConfigMessaging.skipWelcome.
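If you keep the welcome request, one way to handle it is to short-circuit it inside customSendMessage with a hard-coded greeting. Continuing the sketch above (the greeting text is an arbitrary example):

```typescript
// Inside customSendMessage: answer the welcome request locally instead
// of forwarding it to your server.
if (request.history?.is_welcome_request) {
  instance.messaging.addMessage({
    output: {
      generic: [{ response_type: "text", text: "Welcome! How can I help?" }],
    },
  });
  return;
}
```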
Once the Carbon AI chat calls messaging.customSendMessage, you need to feed responses back into the chat. The ChatInstance passed into the function exists for this purpose. For streaming operations, see ChatInstanceMessaging.addMessageChunk. For non-streaming responses, see ChatInstanceMessaging.addMessage. Your assistant can return responses in either format and can switch between them.
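A rough streaming sketch, again inside customSendMessage, could look like the following. The exact chunk shapes are defined by ChatInstanceMessaging.addMessageChunk; the field names here are assumptions for a single streamed text item, so verify them against that documentation.

```typescript
// Stream one text item piece by piece, then finalize the message.
// Chunk shapes assumed per ChatInstanceMessaging.addMessageChunk.
const fullText = "Streamed text response.";

for (const piece of ["Streamed ", "text ", "response."]) {
  instance.messaging.addMessageChunk({
    partial_item: {
      response_type: "text",
      text: piece,
      streaming_metadata: { id: "1" },
    },
  });
}

// Mark the item as complete with its final text.
instance.messaging.addMessageChunk({
  complete_item: {
    response_type: "text",
    text: fullText,
    streaming_metadata: { id: "1" },
  },
});

// Close out the whole response with the final MessageResponse.
instance.messaging.addMessageChunk({
  final_response: {
    output: {
      generic: [{ response_type: "text", text: fullText }],
    },
  },
});
```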
Your history store returns an array of HistoryItem members. There is currently no recommended strategy for storing your LLM-friendly history, if that is part of your use case.
The Carbon AI chat allows you to define a messaging.customLoadHistory function in your PublicConfig. See PublicConfigMessaging.customLoadHistory. This function returns a promise that resolves when you have finished loading messages into the chat. During the Carbon AI chat's hydration process, it calls this function with the ChatInstance object as its only parameter. You can then call instance.messaging.insertHistory to load a conversation into the chat. See ChatInstanceMessaging.insertHistory.
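As a sketch, a customLoadHistory implementation backed by a hypothetical /api/history endpoint that returns an array of HistoryItem members might look like this:

```typescript
import type { ChatInstance, HistoryItem } from "@carbon/ai-chat";

// A sketch of loading stored messages during hydration. The
// /api/history endpoint is a placeholder for your own history store.
async function customLoadHistory(instance: ChatInstance) {
  const res = await fetch("/api/history");
  const history: HistoryItem[] = await res.json();
  await instance.messaging.insertHistory(history);
}
```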
Some use cases can have more than one conversation attached to the chat. You can optionally forgo using messaging.customLoadHistory and directly call instance.messaging.insertHistory as needed. In a use case where a user can switch between different conversations, you can call instance.messaging.clearConversation to clear out the previous conversation before calling instance.messaging.insertHistory. See ChatInstanceMessaging.clearConversation.
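A sketch of that flow, assuming a hypothetical /api/conversations endpoint that returns an array of HistoryItem members for a given conversation:

```typescript
import type { ChatInstance, HistoryItem } from "@carbon/ai-chat";

// Clear the current conversation, then insert the stored history for
// the newly selected one. The endpoint is a placeholder for your store.
async function switchConversation(instance: ChatInstance, conversationID: string) {
  await instance.messaging.clearConversation();
  const res = await fetch(`/api/conversations/${conversationID}`);
  const history: HistoryItem[] = await res.json();
  await instance.messaging.insertHistory(history);
}
```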