Carbon AI Chat
    Server communication

    The Carbon AI Chat allows you to provide your own server for the chat to interact with. It supports streaming and non-streaming results, or a mixture of the two. This page covers the life cycle of sending a message from the chat to your assistant and back.

    The Carbon AI Chat provides a MessageRequest when someone sends a message. The Carbon AI Chat expects a MessageResponse to be returned. You can stream the MessageResponse. See ChatInstanceMessaging.addMessageChunk for an explanation of the streaming format.

    For more information, see the examples page.

    Inside the MessageResponse the Carbon AI Chat can accept response_types. You can navigate to the properties for each response_type by visiting the base GenericItem type.
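
    For example, a minimal non-streaming MessageResponse containing a single text item looks like the following sketch (the id and text values are placeholders):

    const response: MessageResponse = {
      id: "unique-response-id",
      output: {
        generic: [
          {
            response_type: MessageResponseTypes.TEXT,
            text: "Hello! How can I help you today?",
          },
        ],
      },
    };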

    The Carbon AI Chat takes custom messaging server configuration as part of its PublicConfig. You are required to provide a messaging.customSendMessage (see PublicConfigMessaging.customSendMessage) function that the Carbon AI Chat calls any time the user sends a message. It also gets called if you make use of the send function on ChatInstance.

    The Carbon AI Chat passes three parameters to this function:

    1. MessageRequest: The message being sent.
    2. CustomSendMessageOptions: Options about that message. This includes an abort signal to cancel the request.
    3. ChatInstance: The Carbon AI Chat instance object.

    This function can return nothing, or it can return a promise. If you return a promise, the Carbon AI Chat does the following:

    1. Sets up a message queue and only passes the next message to your function when the current one completes.
    2. Shows a loading indicator if the message takes a while to return (or to return its first chunk if streaming).
    3. Throws a visible error and triggers the abort signal if waiting for the message exceeds the messaging.messageTimeoutSecs timeout defined in your PublicConfig (see PublicConfigMessaging.messageTimeoutSecs).

    If you do not return a promise, the Carbon AI Chat does not queue messages for you and does not show a loading indicator while waiting for the first chunk.
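
    For example, a minimal non-streaming customSendMessage might look like the following sketch; callYourBackend is a hypothetical placeholder for your own HTTP request that resolves to a MessageResponse:

    import {
      ChatInstance,
      CustomSendMessageOptions,
      MessageRequest,
      MessageResponse,
    } from "@carbon/ai-chat";

    // Hypothetical helper: replace with your own call to your messaging server.
    declare function callYourBackend(
      request: MessageRequest,
      signal?: AbortSignal,
    ): Promise<MessageResponse>;

    async function customSendMessage(
      request: MessageRequest,
      requestOptions: CustomSendMessageOptions,
      instance: ChatInstance,
    ) {
      // Returning this promise lets the Carbon AI Chat queue messages and show a loading indicator.
      const response = await callYourBackend(request, requestOptions.signal);
      await instance.messaging.addMessage(response);
    }

    const config = {
      messaging: {
        customSendMessage,
      },
    };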

    For streaming operations, see ChatInstanceMessaging.addMessageChunk. For non-streaming responses, see ChatInstanceMessaging.addMessage. Your assistant can return responses in either format and can switch between them.

    The streaming API uses three types of chunks (StreamChunk) to progressively build and finalize a message response.

    Partial item chunks (PartialItemChunk) allow you to stream incremental updates to individual message items. Each chunk contains a partial_item holding the item data along with its streaming_metadata.id, a top-level streaming_metadata.response_id identifying the message, and optionally a partial_response with message-level options.

    The client automatically merges partial chunks into the existing item based on the item's streaming_metadata.id. For text items, new text is appended. Multiple items can stream in parallel within the same message by using different item IDs.

    Example:

    const chunk: StreamChunk = {
      partial_item: {
        response_type: MessageResponseTypes.TEXT,
        text: `${new_chunk}`,
        streaming_metadata: {
          id: "1", // Identifies this item within the message
          cancellable: true, // Shows "stop streaming" button
        },
      },
      streaming_metadata: {
        response_id: responseID, // Identifies the entire message
      },
      partial_response: {
        message_options: {
          response_user_profile: userProfile,
          chain_of_thought: currentSteps,
        },
      },
    };
    await instance.messaging.addMessageChunk(chunk);

    A complete item chunk (CompleteItemChunk) finalizes a specific item before the entire message is done. This is useful when a message streams several items and you want to mark one as finished, or when you need to correct content sent in earlier partial chunks.

    The complete item should contain all final data for that item, including any corrections to previous chunks.

    Example:

    const chunk: StreamChunk = {
      complete_item: {
        response_type: MessageResponseTypes.TEXT,
        text: finalText, // Complete, corrected text
        streaming_metadata: {
          id: "1",
          stream_stopped: wasCancelled, // Indicates if user cancelled
        },
      },
      streaming_metadata: {
        response_id: responseID,
      },
      partial_response: {
        message_options: {
          response_user_profile: userProfile,
          chain_of_thought: finalSteps,
        },
      },
    };
    await instance.messaging.addMessageChunk(chunk);

    If you're only streaming a single item, you can skip this step and go directly to the final response.

    The final response chunk (FinalResponseChunk) signals the end of all streaming and provides the authoritative final state of the entire message.

    Example:

    const finalResponse: MessageResponse = {
      id: responseID,
      output: {
        generic: [
          {
            response_type: MessageResponseTypes.TEXT,
            text: finalText,
            message_item_options: {
              feedback: feedbackOptions,
            },
          },
        ],
      },
      message_options: {
        response_user_profile: userProfile,
        chain_of_thought: chainOfThought,
      },
    };

    await instance.messaging.addMessageChunk({
      final_response: finalResponse,
    });
    Putting it all together, a typical streaming flow is:

    1. Generate a unique response_id for the message.
    2. Loop through your streaming source, sending partial item chunks for each update.
    3. (Optional) Send complete item chunks when individual items are finalized.
    4. Send a final response chunk with the complete message.

    The Carbon AI Chat handles merging partial updates, rendering streaming text, and transitioning to the final state automatically.
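
    The following sketch ties those four steps together. getTokenStream is a hypothetical placeholder for your own streaming source (an SSE or fetch stream, for example), and message options such as response_user_profile are omitted for brevity:

    import {
      ChatInstance,
      MessageResponse,
      MessageResponseTypes,
      StreamChunk,
    } from "@carbon/ai-chat";

    // Hypothetical streaming source that yields text tokens for a user message.
    declare function getTokenStream(text: string): AsyncIterable<string>;

    async function streamResponse(instance: ChatInstance, userText: string) {
      const responseID = crypto.randomUUID(); // 1. Unique id for the whole message
      let fullText = "";

      // 2. Send a partial item chunk for each update from your streaming source.
      for await (const token of getTokenStream(userText)) {
        fullText += token;
        const chunk: StreamChunk = {
          partial_item: {
            response_type: MessageResponseTypes.TEXT,
            text: token,
            streaming_metadata: { id: "1", cancellable: true },
          },
          streaming_metadata: { response_id: responseID },
        };
        await instance.messaging.addMessageChunk(chunk);
      }

      // 3. (Optional) A complete item chunk could be sent here to finalize item "1".

      // 4. Send the final response with the complete message.
      const finalResponse: MessageResponse = {
        id: responseID,
        output: {
          generic: [
            {
              response_type: MessageResponseTypes.TEXT,
              text: fullText,
            },
          ],
        },
      };
      await instance.messaging.addMessageChunk({ final_response: finalResponse });
    }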

    When streaming content, users can request to stop the stream in two ways:

    1. Clicking the "stop streaming" button in the input field
    2. Restarting or clearing the conversation

    Both actions trigger request cancellation. To handle cancellation, first set cancellable: true in the ItemStreamingMetadata of your partial item chunks so that the "stop streaming" button is shown, then detect the cancellation using one of the two options described below:

    const chunk: StreamChunk = {
      partial_item: {
        response_type: MessageResponseTypes.TEXT,
        text: streamedText,
        streaming_metadata: {
          id: "1",
          cancellable: true, // Shows the "stop streaming" button
        },
      },
      streaming_metadata: {
        response_id: responseID,
      },
    };

    Option A: Use the abort signal. The CustomSendMessageOptions.signal abort signal is triggered when a message request is cancelled. When aborted, the signal's reason property contains one of the values from the CancellationReason enum, such as STOP_STREAMING, CONVERSATION_RESTARTED, or TIMEOUT.

    You can check if the request was cancelled using signal.aborted or by listening to the "abort" event, and access the specific reason via signal.reason.

    import {
      CancellationReason,
      ChatInstance,
      CustomSendMessageOptions,
      MessageRequest,
    } from "@carbon/ai-chat";

    async function customSendMessage(
      request: MessageRequest,
      requestOptions: CustomSendMessageOptions,
      instance: ChatInstance,
    ) {
      let isCanceled = false;

      // Listen to abort signal (handles stop button, restart/clear, and timeout)
      const abortHandler = () => {
        isCanceled = true;
        const reason = requestOptions.signal?.reason;

        // Use enum for type-safe comparisons
        if (reason === CancellationReason.STOP_STREAMING) {
          console.log("User clicked stop streaming");
        } else if (reason === CancellationReason.CONVERSATION_RESTARTED) {
          console.log("Conversation was restarted/cleared");
        } else if (reason === CancellationReason.TIMEOUT) {
          console.log("Request timed out");
        }

        // Stop your streaming loop and prepare to send the final response
      };
      requestOptions.signal?.addEventListener("abort", abortHandler);

      try {
        // Your streaming logic here, checking isCanceled periodically
        while (!isCanceled && hasMoreData) {
          // Stream chunks...
        }
      } finally {
        requestOptions.signal?.removeEventListener("abort", abortHandler);
      }
    }

    Option B: Subscribe to the BusEventType.STOP_STREAMING event. Note that this event is only fired for stop button clicks, not for conversation restarts or clears:

    let isCanceled = false;

    const stopGeneratingEvent = {
      type: BusEventType.STOP_STREAMING,
      handler: () => {
        isCanceled = true;
        // Stop your streaming loop and prepare to send the final response
        instance.off(stopGeneratingEvent); // Clean up the listener
      },
    };

    instance.on(stopGeneratingEvent);

    Note: Using the abort signal (Option A) is recommended as it provides unified handling for all cancellation scenarios.

    When cancellation is detected, exit your streaming loop and send the final response chunk. You have two options:

    You can skip the complete item chunk and go directly to the final response:

    const finalResponse: MessageResponse = {
      id: responseID,
      output: {
        generic: [
          {
            response_type: MessageResponseTypes.TEXT,
            text: partialText, // The text generated before cancellation
          },
        ],
      },
    };

    await instance.messaging.addMessageChunk({
      final_response: finalResponse,
    });

    If you want to explicitly indicate the stream was stopped (which triggers appropriate a11y states), you can optionally send a CompleteItemChunk with stream_stopped: true before the final response:

    // Optional: Send complete item with stream_stopped flag
    const chunk: StreamChunk = {
      complete_item: {
        response_type: MessageResponseTypes.TEXT,
        text: partialText, // The text generated before cancellation
        streaming_metadata: {
          id: "1",
          stream_stopped: true, // Triggers appropriate a11y states and messaging
        },
      },
      streaming_metadata: {
        response_id: responseID,
      },
    };

    await instance.messaging.addMessageChunk(chunk);

    // Then send the final response
    const finalResponse: MessageResponse = {
      id: responseID,
      output: {
        generic: [
          {
            response_type: MessageResponseTypes.TEXT,
            text: partialText,
          },
        ],
      },
    };

    await instance.messaging.addMessageChunk({
      final_response: finalResponse,
    });

    After receiving the final response, the Carbon AI Chat will hide the "stop streaming" button and enable normal input functionality.

    By default, if the home screen is disabled, the Carbon AI Chat sends a MessageRequest with input.text set to a blank string and history.is_welcome_request set to true when a user first opens the chat. This allows you to inject a hard-coded greeting response for the user. If you do not wish to use this functionality, you can set messaging.skipWelcome to true. See PublicConfigMessaging.skipWelcome.

    If you want to send your own "welcome" message (for example, different text depending on the user), you can set messaging.skipWelcome to true and call instance.messaging.addMessage (ChatInstanceMessaging.addMessage) yourself.
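
    As a sketch of handling the default welcome request inside customSendMessage (the greeting text is a placeholder), you can check history.is_welcome_request and add a hard-coded response instead of calling your server:

    import {
      ChatInstance,
      CustomSendMessageOptions,
      MessageRequest,
      MessageResponseTypes,
    } from "@carbon/ai-chat";

    async function customSendMessage(
      request: MessageRequest,
      requestOptions: CustomSendMessageOptions,
      instance: ChatInstance,
    ) {
      if (request.history?.is_welcome_request) {
        // Hard-coded greeting; adjust the text for your own use case.
        await instance.messaging.addMessage({
          id: crypto.randomUUID(),
          output: {
            generic: [
              {
                response_type: MessageResponseTypes.TEXT,
                text: "Welcome! Ask me anything about our products.",
              },
            ],
          },
        });
        return;
      }
      // Otherwise, send the request to your messaging server as usual.
    }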

    By default, the chat shows a loading indicator if it does not get back a chunk or message before messaging.messageLoadingIndicatorTimeoutSecs expires. You can turn off this automatic loading indicator by setting messaging.messageLoadingIndicatorTimeoutSecs to 0. If your message takes a long time to stream, or involves many thinking steps or long-running API calls, you may want to toggle the loading indicator manually using ChatInstance.updateIsChatLoadingCounter.
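
    A minimal sketch of these settings in the PublicConfig (the timeout values are placeholders, and customSendMessage is the function shown earlier):

    const config = {
      messaging: {
        customSendMessage,
        // Never auto-show the loading indicator while waiting for the first chunk.
        messageLoadingIndicatorTimeoutSecs: 0,
        // Placeholder: throw a visible error and abort if a message takes longer than this.
        messageTimeoutSecs: 120,
      },
    };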

    The Carbon AI Chat allows you to implement custom history loading to restore previous conversations when the chat is opened. History is represented as an array of HistoryItem objects, where each item contains either a MessageRequest or MessageResponse along with a timestamp.

    Note: The Carbon AI Chat only handles UI-level history (displaying previous messages). There is currently no recommended strategy for storing LLM-friendly conversation history if that is part of your use case.

    Each HistoryItem contains the message itself (a MessageRequest or MessageResponse) and the time it was sent. The messages should include their history property (MessageRequestHistory or MessageResponseHistory), which stores metadata like timestamps, labels, error states, and feedback.
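
    A sketch of what such an array might look like; the message and time property names are assumed here, so check the HistoryItem type for the exact shape:

    // Property names below (time, message) are assumptions; verify against HistoryItem.
    const historyItems = [
      {
        time: "2024-01-01T12:00:00.000Z",
        message: {
          input: { text: "What can you help me with?" },
        },
      },
      {
        time: "2024-01-01T12:00:05.000Z",
        message: {
          output: {
            generic: [
              {
                response_type: MessageResponseTypes.TEXT,
                text: "I can answer questions about your account.",
              },
            ],
          },
        },
      },
    ];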

    To automatically load history when the chat opens, define a PublicConfigMessaging.customLoadHistory function in your PublicConfig:

    const config = {
      messaging: {
        customLoadHistory: async (instance: ChatInstance) => {
          // Fetch history from your backend
          const history = await fetchHistoryFromAPI();

          // Return array of HistoryItem objects
          return history;
        },
      },
    };

    This function is called automatically when the chat opens and should return a promise that resolves to an array of HistoryItem objects. The Carbon AI Chat then automatically calls ChatInstanceMessaging.insertHistory with the returned items.

    For advanced use cases (like switching between conversations), you can skip customLoadHistory and directly call ChatInstanceMessaging.insertHistory:

    // Load history manually
    await instance.messaging.insertHistory(historyItems);

    This method inserts the provided HistoryItem objects into the current conversation view.

    When users need to switch between different conversations:

    // Clear the current conversation
    await instance.messaging.clearConversation();

    // Load the new conversation's history
    await instance.messaging.insertHistory(newConversationHistory);

    ChatInstanceMessaging.clearConversation removes all messages from the current conversation so that a different conversation's history can be loaded.

    When using PublicConfigMessaging.customLoadHistory, the Carbon AI Chat automatically shows a fullscreen loading indicator during the hydration process. You do not need to manually control the loading state.

    However, if you manually call ChatInstanceMessaging.clearConversation or ChatInstanceMessaging.insertHistory (for example, when switching conversations), you may want to show a loading indicator while fetching data:

    async function switchToConversation(conversationId: string) {
      // Show loading indicator
      instance.updateIsChatLoadingCounter("increase");

      try {
        // Fetch history from your backend
        const history = await fetchHistoryFromAPI(conversationId);

        // Clear current conversation and load new one
        await instance.messaging.clearConversation();
        await instance.messaging.insertHistory(history);
      } finally {
        // Hide loading indicator
        instance.updateIsChatLoadingCounter("decrease");
      }
    }

    ChatInstance.updateIsChatLoadingCounter controls the fullscreen hydration loading state. The indicator shows when the internal counter is greater than zero. Always pair "increase" with "decrease" to ensure proper cleanup.

    For a complete example, see the history example.
