BrXM AI Content Assistant developer guide

Overview

This feature is available since Bloomreach Experience Manager version 16.4.0 and requires a standard or premium license. Please contact Bloomreach for more information.

The BrXM AI Content Assistant, powered by Loomi, brings generative AI capabilities directly into the BrXM CMS, helping you automate and streamline content creation, editing, and management.

This guide shows you how to configure, initialize, and use the AI chat assistant, focusing on technical setup and supported operations. The AI chat assistant is the central feature, enabling developers and content teams to interact with large language models (LLMs) from supported providers. It can help you perform tasks like summarization, translation, and SEO optimization—all within the CMS interface.

Prerequisites

Before you begin, make sure you meet the following requirements:

  • Access to a BrXM CMS instance (version 16.4 or later).

  • Access to the console or the project's properties files.

  • Understanding of your organization’s data privacy and residency requirements.

  • An account and an API key with a supported AI provider, such as OpenAI or VertexAI Gemini, or an AI model running locally, such as Ollama or LiteLLM.

  • If using a Vector Store, a working Redis instance is required.

To access ChatGPT models, create an account on the OpenAI signup page and generate an API key on the API Keys page.
To authenticate with your VertexAI credentials, set up Application Default Credentials (ADC) using the ADC setup guide.
Ollama can be downloaded and run locally. LiteLLM must be installed and configured before use; it can be installed locally, or a cloud (managed) service can be used.

Supported LLM providers

The AI Content Assistant can use AI models from the following providers:

  • OpenAI
  • VertexAI Gemini
  • Ollama
  • LiteLLM

Installation

Please see Initialize and configure the AI Content Assistant documentation.

Technical architecture overview

The Content GenAI integration is built on a modular architecture that separates the UI, backend AI service, and model providers. Here’s how the main components interact:

  • Document editor: The main UI where users trigger AI operations.

  • AI backend service: Handles all AI-related requests from the UI. Initially exposed as an internal service, with a REST API layer planned for future releases.

  • Spring AI bridge: Acts as a middleware between the backend service and various model providers.

  • Model providers: External LLM services such as OpenAI, Gemini, or Ollama.

  • Vector Store providers: A local or external vector store implementation for storing content embeddings.

  • Ingestion process: A background process that populates your Vector Store with embeddings generated by your configured model provider, either on document save or on document publication.

How the AI Content Assistant works

  1. The user initiates an AI operation in the document editor.

  2. The UI sends a request to the AI backend service.

  3. The backend service prepares and forwards the request to the selected model provider via the Spring AI bridge.

  4. The model provider processes the request, optionally calls available tools, and returns a response.

  5. The backend service sends the result back to the UI for display or further action.

This architecture abstracts the complexity of model integration and ensures that only approved operations are exposed to users, improving security and maintainability.
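The request flow above can be sketched in code. This is an illustrative sketch only: the class and method names below are invented for the example and do not correspond to BrXM's actual internal APIs.

```python
# Illustrative sketch of the request flow; names are hypothetical,
# not BrXM's real internal APIs.

class ModelProvider:
    """Stands in for an external LLM service (OpenAI, Gemini, Ollama, ...)."""
    def complete(self, prompt: str) -> str:
        # A real provider would call the LLM here.
        return f"summary of: {prompt}"

class SpringAiBridge:
    """Middleware that hides provider-specific details from the backend."""
    def __init__(self, provider: ModelProvider):
        self.provider = provider

    def send(self, prompt: str) -> str:
        return self.provider.complete(prompt)

class AiBackendService:
    """Receives UI requests and forwards them through the bridge."""
    def __init__(self, bridge: SpringAiBridge):
        self.bridge = bridge

    def handle(self, operation: str, document_text: str) -> str:
        prompt = f"{operation}: {document_text}"   # step 3: prepare the request
        return self.bridge.send(prompt)            # steps 4-5: call and return

# Steps 1-2: the document editor sends an operation to the backend service.
backend = AiBackendService(SpringAiBridge(ModelProvider()))
result = backend.handle("summarize", "Release notes for 16.4.0")
print(result)
```

Because the UI only ever talks to the backend service, swapping the model provider (or adding a new one) does not affect the editor.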

Extensibility

The AI module provides a number of extensibility points; refer to the Extensibility Guide for details.

Vector Store Maintenance

Runtime manipulation of your Vector Store is possible via Groovy scripts, from within the CMS interface. This is an administrative operation aiming to ease the maintenance and updating of your embeddings. See details in Maintenance Groovy Scripts.

Supported operations

The AI chat assistant supports a range of content operations. You can trigger these actions directly via the assistant panel.

Refer to the BrXM AI Content Assistant User Manual for full details on usage and capabilities.

Enabling/Disabling the assistant

Assuming you have the assistant up and running, there are cases where you may need to temporarily disable it. You can disable the backend and frontend parts of the AI module separately:

  • Disable the backend model provider by omitting the value for the property brxm.ai.provider. This also disables the Vector Store. Requires a pod restart to take effect.

  • Disable only the Vector Store by omitting the value for the property brxm.ai.vectorstore. This also disables the background ingestion process. Requires a pod restart to take effect.

  • Hide the "Ask AI" button in the CMS by removing the value of the property frontend:appPath at the path /hippo:configuration/hippo:frontend/cms/cms-static/ai-service-client-perspective.
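For example, disabling both the model provider and the Vector Store could look like the following in a properties file. This is a sketch: the property names come from this guide, but the exact file in which you set them is deployment-specific.

```
# Sketch only; the properties file location depends on your deployment.
# Leaving the values empty disables the backend provider and the Vector Store.
brxm.ai.provider=
brxm.ai.vectorstore=
```

Remember that both changes require a pod restart to take effect.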

Conversation history (available from v16.6.0)

Session behavior and infrastructure considerations:

  • The CMS session is tied to the pod(s) serving your requests. If the serving pod changes, the CMS creates a new session and the AI chat history displayed in the UI is cleared.

  • Conversation logs are retained in system logs and remain available even if the visible chat history is cleared.

Conversation auto-naming

The auto-generation of the conversation name is initiated shortly after the first message. Note that generating the name may take a few messages; until then, the default “New conversation” name is displayed.

Auto-naming is disabled automatically once a name is generated or once the user has entered a custom name.

The auto-naming request consumes tokens, which are charged to the account of the user who owns the conversation. The request is included in the logs and can be monitored there.

Conversation logs

All data transferred to and from the AI provider is available for inspection in your logs. To examine the requests and responses from the AI model provider, two loggers are provided; both must be enabled in your log4j configuration before logging starts:

  • A prompt logger that logs the conversation as typed by the user, as well as the responses from the AI model. To enable, set <Logger name="com.bloomreach.brx.ai.service.impl.client.advisors.PromptLoggerAdvisor" level="info"/>

  • Spring's default SimpleLoggerAdvisor. To enable, add the following logger: <Logger name="org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor" level="debug"/>
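Both loggers can be enabled together in a single log4j2 configuration fragment. The two Logger elements are taken from this guide; the surrounding Loggers element is the standard log4j2 wrapper and is assumed to already exist in your configuration file.

```xml
<Loggers>
  <!-- Conversation prompts and model responses, as typed by the user -->
  <Logger name="com.bloomreach.brx.ai.service.impl.client.advisors.PromptLoggerAdvisor" level="info"/>
  <!-- Spring AI's default request/response logging -->
  <Logger name="org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor" level="debug"/>
</Loggers>
```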

The logs are printed to the terminal or to Humio, depending on your configuration. Each log entry of the PromptLoggerAdvisor is formatted with identification and discoverability in mind. An example can be seen below:

INFO  http-nio-8080-exec-4 [PromptLoggerAdvisor.before:88] >>>>> Outgoing message for admin, conversation 559df082-955e-41ad-a1a7-39535ad4b20d (type:USER)  >>>>>
[INFO] What do you see in this document
[INFO] >>>>>

The logs contain the following information:

  • The user and conversation IDs.

  • All outgoing and incoming requests (including document references and auto-naming requests).

  • Total number of tokens consumed by the user after each request.

Conversation token usage

The implementation provides insights into token usage, per conversation and per user. This information is available in the logs by setting the log level of the audit logger:

<Logger name="com.bloomreach.xm.ai.service.impl.audit.UsageLoggingStore" level="info"/>

Limitations

  • In version 16.4, the AI Content Assistant can access only the published and unpublished variants of a document; draft versions are not supported, so users must save their changes for the AI to see the most up-to-date document information. This limitation is addressed in version 16.4.1, where the assistant supports draft versions, allowing users to access the most current document information without saving their changes first.

  • Assets (fields and document types) are not supported.

  • Value list fields and document types are not supported.

  • The assistant is only available in the content perspective; other perspectives are not supported.

  • Document-level operations may require the user to manually import generated content.

Important: Incubating Features

Bloomreach is introducing a formal process to release some new functionalities as "Incubating Features" to accelerate innovation, particularly in rapidly evolving technologies. While these features are production-ready and tested, they may undergo significant changes (including backward-incompatible modifications or removal) outside of standard major releases. 

Such changes will not affect the out-of-the-box CMS experience, but may require configuration updates in custom integrations or extensions using these features. If you customize or extend an incubating feature, you may need to update your custom solution in subsequent minor or patch releases. All incubating features will be clearly documented and marked as such.

Please refer to the Incubating Features Policy for more information.

As of v16.6.0, the AI Content Assistant includes Incubating features and modules. See also Upgrading to 16.6.0.