BrXM AI Content Assistant developer guide

Overview

This feature is available since Bloomreach Experience Manager version 16.4.0 and requires a Standard or Premium license. Please contact Bloomreach for more information.

The BrXM AI Content Assistant, powered by Loomi, brings generative AI capabilities directly into the BrXM CMS, helping you automate and streamline content creation, editing, and management.

This guide shows you how to configure, initialize, and use the AI chat assistant, focusing on technical setup and supported operations. The AI chat assistant is the central feature, enabling developers and content teams to interact with large language models (LLMs) from supported providers. It can help you perform tasks like summarization, translation, and SEO optimization—all within the CMS interface.

Prerequisites

Before you begin, make sure you meet the following requirements:

  • Access to a BrXM CMS instance (version 16.4 or later).

  • Access to the console.

  • Understanding of your organization’s data privacy and residency requirements.

  • An API key from a supported AI provider, such as OpenAI or VertexAI Gemini.

Supported LLM providers

The AI Content Assistant can use AI models from the following providers:

  • OpenAI
  • VertexAI Gemini
  • Ollama

Technical architecture overview

The Content GenAI integration is built on a modular architecture that separates the UI, backend AI service, and model providers. Here’s how the main components interact:

  • Document editor: The main UI where users trigger AI operations.

  • AI backend service: Handles all AI-related requests from the UI. Initially exposed as an internal service, with a REST API layer planned for future releases.

  • Spring AI bridge: Acts as a middleware between the backend service and various model providers.

  • Model providers: External LLM services such as OpenAI, Gemini, or Ollama.

How the AI Content Assistant works

  1. The user initiates an AI operation in the document editor.

  2. The UI sends a request to the AI backend service.

  3. The backend service prepares and forwards the request to the selected model provider via the Spring AI bridge.

  4. The model provider processes the request and returns a response.

  5. The backend service sends the result back to the UI for display or further action.

This architecture abstracts the complexity of model integration and ensures that only approved operations are exposed to users, improving security and maintainability.

Initialize the AI Content Assistant

You can initialize the AI Content Assistant with the Essentials application. To do so:

  1. Go to Essentials.

  2. Go to Library and make sure Enterprise features are enabled.

  3. Look for Content AI and click Install feature.

  4. Rebuild and restart your project.

  5. Once your project has restarted, go to Installed features.

  6. Find Content AI and click Configure.

  7. Choose the desired AI Model from the available options of supported providers.

  8. Configure the other details such as API URL (endpoint), API key, and so on. Each provider has different configuration options (see the Configuration options section below).

  9. Once you’re done, click Save.

  10. Lastly, rebuild and restart your project again.

Configuration options

This feature only works if you configure the API key / project ID.

Each model provider can have different settings to configure, like:

  • API key/project ID: Enter your own configurable API keys or project ID. This gives you flexibility and control over data privacy but requires you to manage provider agreements and keys.

  • Model to use: Specify the exact model name and version to use in the AI Assistant. This allows you to choose the best-performing model for a particular type of task.

  • Temperature: Set the sampling temperature of the model that controls the creativity and depth of the generated output.

  • Max tokens: Limit the maximum number of tokens to generate in the chat completion. This helps you keep your token usage in check, so it doesn't exceed your allowed limit.

  • Max messages (available from v16.6.0): Limit the maximum number of messages allowed in a single conversation. Users cannot send more messages once they have exhausted the limit, thereby keeping token usage in check.

The default message displayed to the user on exceeding the maximum message limit in a conversation is “Unexpected error”.

Conversation history

The functionality to save different chat sessions (conversations) in the history is available for all providers starting from v16.6.

Conversation auto-naming

The system automatically assigns a name to each conversation for identification purposes. This identifier can be edited by the user at any time.

The request to generate a conversation name is initiated shortly after the first message. Note that the conversation name may take a few messages to generate; until then, the default “New conversation” name is displayed.

The auto-naming request is charged to the account of the user making the conversation, and can be monitored in the logs.

Conversation logs

To examine the requests and responses from the AI model provider, we provide two loggers, which need to be enabled in your log4j configuration:

  • A prompt logger that logs the conversation as typed by the user, as well as the responses from the AI model. To enable, set <Logger name="com.bloomreach.brx.ai.service.impl.client.advisors.PromptLoggerAdvisor" level="info"/>

  • Spring's default SimpleLoggerAdvisor. To enable, add the following property: <Logger name="org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor" level="debug"/>
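For example, both loggers could be enabled together in a log4j2 configuration file. The surrounding <Configuration> and <Loggers> elements shown here are the standard log4j2 structure and will already exist in your project's logging configuration:

```xml
<Configuration>
  <Loggers>
    <!-- Logs the user's prompts and the AI model's responses -->
    <Logger name="com.bloomreach.brx.ai.service.impl.client.advisors.PromptLoggerAdvisor" level="info"/>
    <!-- Spring AI's default request/response logger -->
    <Logger name="org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor" level="debug"/>
  </Loggers>
</Configuration>
```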

The logs are printed in the terminal or Humio, depending on your configuration. Each log entry is formatted appropriately for identification and discoverability.

The logs contain the following information:

  • The user and conversation IDs.

  • All outgoing and incoming requests (including document references and auto-naming requests).

  • Total number of tokens consumed by the user after each request.

LiteLLM support

LiteLLM is a versatile LLM model gateway. You can integrate with your LiteLLM account to use the available models in the AI Content Assistant.

To configure it, use the following settings:

  1. Select the OpenAI connector as the AI Model.

  2. Enter your LiteLLM API URL and API Key.

  3. Enter the Model to use in the format openai/gpt-4o.
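Expressed as Spring AI properties (using the OpenAI connector property names listed in the Spring AI configuration section below), this setup might look as follows. The endpoint URL and API key are placeholders for your own LiteLLM details:

```properties
# LiteLLM is accessed through the OpenAI connector; URL and key are placeholders
brxm.ai.provider=OpenAI
spring.ai.openai.api.url=https://my-litellm-host.example.com
spring.ai.openai.api-key=<your LiteLLM API key>
spring.ai.openai.chat.options.model=openai/gpt-4o
```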

Spring AI configuration

To configure the AI Assistant with Spring AI, use the supported properties for your provider in the project's properties file.

The brxm.ai.provider property is used to specify the name of the model provider.

The brxm.ai.chat.max-messages property is used to set the maximum number of messages allowed in a single conversation.

The brxm.ai.provider property was previously named spring.ai.provider.
If you use the old name in the Spring AI configuration, update the property name in your properties file to avoid errors.
If your configuration is stored in the Java Content Repository (JCR), an additional change is needed: the intermediate node with the model provider name (for example, /OpenAI) is no longer needed. Move all its properties one level up (that is, move them under /hippo:configuration/hippo:modules/ai-service/hipposys:moduleconfig).
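A minimal sketch of these two properties in the project's properties file (the values shown are illustrative):

```properties
# Model provider name and per-conversation message limit (illustrative values)
brxm.ai.provider=OpenAI
brxm.ai.chat.max-messages=50
```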

The name of each model provider and its supported properties are listed below:

OpenAI

  • spring.ai.openai.api.url (required)
  • spring.ai.openai.api-key (required)
  • spring.ai.openai.chat.options.model (required)
  • spring.ai.openai.chat.options.temperature
  • spring.ai.openai.chat.options.maxTokens
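A sketch of an OpenAI configuration using these properties; the API key and model values are placeholders to replace with your own:

```properties
spring.ai.openai.api.url=https://api.openai.com
spring.ai.openai.api-key=<your OpenAI API key>
spring.ai.openai.chat.options.model=gpt-4o
# Optional tuning parameters
spring.ai.openai.chat.options.temperature=0.7
spring.ai.openai.chat.options.maxTokens=1024
```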

You need an OpenAI API key to access ChatGPT models.

Create an account at the OpenAI signup page and generate a key on the API Keys page.

VertexAI Gemini

  • spring.ai.vertex.ai.gemini.project-id (required)

  • spring.ai.vertex.ai.gemini.location (required)

  • spring.ai.vertex.ai.gemini.chat.options.model (required)

  • spring.ai.vertex.ai.gemini.chat.options.temperature
  • spring.ai.vertex.ai.gemini.chat.options.max-tokens
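A sketch using these properties; the project ID, location, and model values are placeholders to adapt to your Google Cloud setup:

```properties
spring.ai.vertex.ai.gemini.project-id=<your GCP project ID>
spring.ai.vertex.ai.gemini.location=us-central1
spring.ai.vertex.ai.gemini.chat.options.model=gemini-1.5-pro
# Optional tuning parameters
spring.ai.vertex.ai.gemini.chat.options.temperature=0.5
spring.ai.vertex.ai.gemini.chat.options.max-tokens=1024
```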

To authenticate with your credentials, set up Application Default Credentials (ADC) using the ADC setup guide.

Ollama

  • spring.ai.ollama.api.url (required)
  • spring.ai.ollama.chat.options.model (required)
  • spring.ai.ollama.chat.options.model.pull.strategy (required)
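A sketch for a local Ollama instance; the port shown is Ollama's default, while the model name and pull strategy value are assumptions to adapt to your setup:

```properties
spring.ai.ollama.api.url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama3
spring.ai.ollama.chat.options.model.pull.strategy=when_missing
```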

You can download and run Ollama locally.

Using the AI Content Assistant

Please check the AI Content Assistant User Manual for more information.

Supported operations and API usage

The AI chat assistant supports a range of content operations. You can trigger these actions directly from the document editor or via the assistant panel.

Refer to the BrXM AI Content Assistant User Manual for full details on usage and capabilities.

Field-specific operations:

Field-specific operations generate a response from the AI for a single field. To update the content of that field, you need to manually copy the AI’s response and paste it into the desired field. Example operations include:

  • Summarize a field: Generate a concise summary of the selected field.

  • Expand content: Elaborate on existing text or expand bullet points.

  • Spelling and grammar checks: Identify and fix errors in a specific field.

Document-level operations:

These operate on the whole document. Example operations include:

  • Summarize a document: Create a summary of the entire document.

  • Tag extraction: Identify key themes or keywords for categorization.

  • Translate a document: Convert content into different languages.

  • Sentiment analysis: Analyze the emotional tone of the content.

  • SEO optimization suggestions: Get recommendations for improving search engine visibility.

Image-based operations (Available from v16.4.1):

The AI assistant can now use images as a primary context. When you view or edit an image in the CMS, the assistant can be prompted to perform tasks related to that image, such as analysis or generating descriptive text.

If you prompt the AI assistant on an image type unsupported by the configured AI model, it may return a general response like "We are having trouble processing your request right now. Please try again later."

Repeatedly getting such a response on an image-based operation might mean that its file type is unsupported by the model used.

Extended Context with References (Available from v16.6.0):

The AI can reference and process content from multiple specified documents within the BrXM repository. This allows the AI to draw on broader internal CMS knowledge for creating and editing content accurately and consistently.

Please note that making requests to the model with references attached as context may consume more tokens.

Limitations

  • In version 16.4, the AI Content Assistant accesses only the unpublished document content; draft versions are not supported, so users must save changes to provide the AI with the most current document information. As of version 16.4.1, the assistant supports draft versions, allowing users to access the most current document information without needing to save changes.
  • Assets (fields and document types) are not supported.

  • Value list fields and document types are not supported.

  • The assistant is only available in the content perspective; other perspectives are not supported.

  • Document-level operations may require manual import of generated content via the Content API.

Important: Incubating Features

Bloomreach is introducing a formal process to release some new functionalities as "Incubating Features" to accelerate innovation, particularly in rapidly evolving technologies. While these features are production-ready and tested, they may undergo significant changes (including backward-incompatible modifications or removal) outside of standard major releases. 

Such changes will not affect the out-of-the-box CMS experience, but may require configuration updates in custom integrations or extensions using these features. If you customize or extend an incubating feature, you may need to update your custom solution in subsequent minor or patch releases. All incubating features will be clearly documented and marked.

Please refer to the Incubating Features Policy for more information.

As of v16.6.0, the AI Content Assistant includes Incubating Features and modules.

Changes in v16.6

GroupId

The groupId changed from com.bloomreach.brx.ai to com.bloomreach.xm.ai.

Artifacts

The suffix '-incubating' was appended to two artifactIds:

<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-impl-incubating</artifactId>
</dependency>

<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-rest-incubating</artifactId>
</dependency>

The following artifacts only need the <groupId> to change:

<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-client-bootstrap</artifactId>
</dependency>

<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-client-assistant-angular</artifactId>
</dependency>

 

 
