Bloomreach XM performance test suite

The Bloomreach XM performance test suite measures CMS editorial performance against a running Bloomreach Experience Manager environment. It combines browser-based UI latency tests with API load tests so teams can validate CMS responsiveness and multi-user editing behavior, and catch performance regressions before a release or go-live.

The suite is intended for development, test, staging, and acceptance environments, whether Bloomreach Experience Manager runs in Bloomreach Cloud or is self-hosted. Use an environment with production-like resources when you want results that are representative of production. Avoid running destructive setup or load tests against a live production CMS.

What the suite tests

The suite contains two complementary test types:

Test type | Technology | What it measures
UI latency tests | Playwright and Chromium | User-perceived CMS latency for editorial workflows such as navigation, document editing, saving, publishing, and concurrent editing
API load tests | Gatling | Backend Content Service performance for authenticated document editing flows, without browser rendering overhead

The generated report combines UI and API results into a single dashboard with latency metrics, scaling information, and optional baseline comparison.

Where to run the tests

Run performance tests on an environment that is representative of the target production setup:

  • Use a dedicated test or acceptance environment.
  • Use production-like resources when possible.
  • Ensure the test runner has network access to the CMS application.
  • Do not store valuable editorial content under /content/documents/_brxmperf; the setup tool owns this folder and may recreate it.

For Bloomreach Cloud, mark an environment as Acceptance before running performance tests if you need production-like resources without testing on the live production environment. For self-hosted deployments, use a staging or acceptance environment that mirrors the production topology, database, and relevant infrastructure settings as closely as possible.

Prerequisites

Before running the suite, make sure you have:

  • A running Bloomreach Experience Manager instance.
  • Java 17 or later.
  • Maven 3.8 or later.
  • Node.js 20 or later and npm 10 or later.
  • Access to the CMS.
  • Access to the SUT REST service (e.g. /cms/ws/qa/jcr/) from the machine or CI runner that runs the tests.
  • Administrative credentials for fixture setup and cleanup.
  • The performance bootstrap module deployed to the CMS.

Add the performance bootstrap module

The test suite uses a lightweight content type and namespace for generated test documents. Add the perf-bootstrap dependency to the CMS module and redeploy the CMS:

<dependency>
  <groupId>com.bloomreach.xm</groupId>
  <artifactId>bloomreach-xm-perf-bootstrap</artifactId>
  <version>${hippo.release.version}</version>
  <scope>runtime</scope>
</dependency>

After the CMS starts, verify that the /hippo:namespaces/brxmperf namespace exists in the repository console.
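
To script this check, for example in CI, you can probe the repository over the SUT REST service instead of opening the repository console. The sketch below assumes the QA JCR service at /cms/ws/qa/jcr/ (see the prerequisites) exposes nodes by path; that URL shape is an assumption, so verify it against your installation. Run with Node.js 20 as an ES module:

// Sketch: probe the brxmperf namespace node through the SUT REST service.
// The path shape below is an assumption, not a documented API.
const base = process.env.CMS_BASE_URL ?? 'http://localhost:8080';
const user = process.env.SUT_ADMIN_USER ?? 'admin';
const password = process.env.SUT_ADMIN_PASSWORD ?? 'admin';
const auth = Buffer.from(`${user}:${password}`).toString('base64');

const res = await fetch(`${base}/cms/ws/qa/jcr/hippo:namespaces/brxmperf`, {
  headers: { Authorization: `Basic ${auth}` },
});
console.log(res.ok ? 'brxmperf namespace present' : `not found (HTTP ${res.status})`);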

The bootstrap module provides:

  • The brxmperf namespace.
  • The brxmperf:brxmperfdoc document type.
  • Editor templates for title, introduction, rich text content, and date fields.

Prepare the test runner

Go to the performance test suite directory:

cd enterprise/performance-tests

Install the UI test dependencies once before running the full suite:

cd ui-latency-tests
npm ci
npx playwright install chromium
cd ..

Run a full performance test

The recommended command is the full lifecycle run:

./run-tests.sh --full --url=http://localhost:8080

This command performs the following steps:

  1. Installs test fixtures.
  2. Runs UI latency tests at 1, 3, and 5 concurrent users.
  3. Runs API load tests.
  4. Cleans up test fixtures.
  5. Generates the combined HTML report.

For a remote environment, pass the CMS base URL:

./run-tests.sh --full --url=https://cms.example.com

If the CMS uses non-default administrative credentials for the SUT REST service, provide them through environment variables:

SUT_ADMIN_USER=<admin-user> \
SUT_ADMIN_PASSWORD=<admin-password> \
./run-tests.sh --full --url=https://cms.example.com

Test fixtures and perf-env.json

The setup phase creates a controlled set of CMS users, folders, documents, and a manifest file named perf-env.json.

By default, setup creates:

  • Five author users named perf-author-01, perf-author-02, and so on.
  • Five editor users named perf-editor-01, perf-editor-02, and so on.
  • One test document per configured user.
  • A test content folder at /content/documents/_brxmperf.
  • A temporary folder at /content/documents/_brxmperf/temp-created for UI-created documents.

The generated perf-env.json file contains:

  • Installation metadata, including the target CMS URL and content size.
  • The resources created by setup, used during cleanup.
  • The users and documents consumed by the UI and API tests.

Keep perf-env.json in the performance test suite root directory when running the UI and API tests separately.
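
The exact manifest schema is owned by the setup tool, but the TypeScript shape below sketches what the list above implies. The field names here are illustrative assumptions, not the real schema; inspect a generated perf-env.json for the authoritative structure:

// Illustrative shape for perf-env.json; actual field names are defined
// by the setup tool and may differ.
interface PerfEnv {
  cmsUrl: string;                                    // target CMS base URL
  contentSize: number;                               // size of the generated content set
  users: { name: string; role: 'author' | 'editor' }[];
  documents: { path: string; owner: string }[];
  createdResources: string[];                        // repository paths removed during cleanup
}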

Common commands

Use these commands from the performance test suite root directory.

Command | Description
./run-tests.sh --full --url=http://localhost:8080 | Run setup, UI tests, API tests, cleanup, and report generation
./run-tests.sh --setup --url=http://localhost:8080 | Install fixtures only
./run-tests.sh --status --url=http://localhost:8080 | Check CMS connectivity, the bootstrap namespace, and fixture status
./run-tests.sh --cleanup --url=http://localhost:8080 | Remove fixtures using perf-env.json
./run-tests.sh --ui --url=http://localhost:8080 | Run the UI latency suite when fixtures already exist
./run-tests.sh --api --url=http://localhost:8080 | Run the API load tests when fixtures already exist
./run-tests.sh --list-baselines | List stored performance baselines

Use --users=N to control how many test users and documents setup creates:

./run-tests.sh --full --url=http://localhost:8080 --users=10

The default UI suite runs 1, 3, and 5 concurrent users. If you need custom UI concurrency levels, run the UI suite directly from ui-latency-tests:

npm run test:suite -- --users=1,3,5,10

UI latency tests

The UI latency tests use Playwright and Chromium to execute real CMS editorial workflows. The tests wait for CMS UI activity to become idle before recording timings, so the metrics represent user-perceived latency rather than only browser request timings.
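
As a minimal sketch of that measurement idea (not the suite's actual implementation), a Playwright script can start a timer at the triggering action and only stop it once the page has gone quiet. The selector and the networkidle wait below are illustrative stand-ins for the suite's richer idle detection:

import { chromium } from 'playwright';

// Sketch: time one editorial operation from click to UI idle.
async function measureClick(url: string, selector: string): Promise<number> {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle' });

  const start = Date.now();
  await page.click(selector);
  await page.waitForLoadState('networkidle'); // crude stand-in for CMS idle detection
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}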

The UI suite covers:

  • Folder and document navigation.
  • Document edit and save operations.
  • Author and editor document actions.
  • Multi-step editorial journeys.
  • Concurrent editing scenarios.

Useful environment variables:

Variable | Default | Description
CMS_BASE_URL | http://localhost:8080 | CMS base URL
CMS_CONTEXT_PATH | /cms | CMS web application context path
HEADLESS | true | Set to false to show browser windows
WARMUP_ITERATIONS | 1 | Warmup iterations before measurement
MEASURED_ITERATIONS | 10 | Measured iterations for atomic operations
JOURNEY_ITERATIONS | 3 | Measured iterations for journey tests
UI_IDLE_TIMEOUT | 30000 | Timeout in milliseconds while waiting for the UI idle state
CONCURRENT_USERS | 1 | Number of browser sessions for direct concurrent test runs
MAX_USERS | 5 | Maximum users for the degradation curve test

Examples:

cd ui-latency-tests

# Run the single-user suite.
npm test

# Run the standard multi-user suite.
npm run test:suite

# Show browser windows while debugging.
HEADLESS=false npm run test:suite

# Increase the UI idle timeout for a slower environment.
UI_IDLE_TIMEOUT=60000 npm test

API load tests

The API load tests use Gatling to exercise the Channel Manager Content Service at /cms/ws/content/. They bypass browser rendering and focus on backend and repository behavior during document editing.

Each virtual user runs an edit loop:

  1. Authenticate with the CMS.
  2. Fetch document type metadata.
  3. Lock a document.
  4. Update one or more fields.
  5. Save the draft variant.
  6. Release the lock.

The tests use documents from perf-env.json when available. For accurate measurements, create at least as many test documents as concurrent API users so users do not compete for the same document lock.
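
The sketch below illustrates that constraint: assign each virtual user its own fixture document and fail fast when there are fewer documents than users. The field names follow the illustrative manifest shape shown earlier, not a documented schema:

import { readFileSync } from 'node:fs';

// Sketch: one fixture document per virtual user so edit loops never
// contend for the same lock. Field names are illustrative assumptions.
const env = JSON.parse(readFileSync('perf-env.json', 'utf8'));
const users = Number(process.env.USERS ?? '5');

if (env.documents.length < users) {
  throw new Error(`need at least ${users} documents, found ${env.documents.length}`);
}
const assignment = Array.from({ length: users }, (_, i) => env.documents[i].path);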

Examples:

cd api-load-tests

# Verify that test documents are available.
mvn gatling:test -Dgatling.simulationClass=com.bloomreach.xm.performance.DiscoverySimulation

# Run a one-user smoke test.
mvn gatling:test -Dgatling.simulationClass=com.bloomreach.xm.performance.SmokeSimulation

# Run the default Content Service simulation with custom load.
mvn gatling:test -Dusers=5 -DrampDuration=10 -DtestDuration=180

Common Gatling properties:

Property | Default | Description
baseUrl | http://localhost:8080 | CMS base URL when no perf-env.json is loaded
users | 1 | Number of concurrent virtual users
rampDuration | 10 | Seconds to ramp up to full load
testDuration | 120 | Seconds to run at steady state
maxResponseTime | 5000 | Maximum response time assertion in milliseconds
minSuccessRate | 95 | Minimum successful request percentage

Reports

After a full run, open:

reports/index.html

The combined dashboard includes:

  • An executive summary with an overall verdict.
  • UI latency heatmaps for each user count.
  • Journey, journey operation, and standalone operation metrics.
  • Scaling information across user counts.
  • API throughput and latency metrics.
  • Baseline deltas when a baseline is available.

Detailed reports are written to:

Report | Location
Combined dashboard | reports/index.html
Machine-readable summary | reports/summary.json
Playwright report | ui-latency-tests/playwright-report/index.html
Gatling report | api-load-tests/target/gatling/<simulation>/index.html
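
Because reports/summary.json is machine-readable, a CI job can gate on it. The sketch below assumes a top-level verdict field; that name is an assumption, so inspect a generated summary.json for the actual structure:

import { readFileSync } from 'node:fs';

// Sketch: fail the build on a regression verdict. The 'verdict' field
// name is assumed; check your generated summary.json.
const summary = JSON.parse(readFileSync('reports/summary.json', 'utf8'));
if (summary.verdict === 'regression') {
  console.error('Performance regression detected');
  process.exit(1);
}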

Baseline comparison

Baselines help detect performance regressions between releases.

Save a baseline after a known-good run:

./run-tests.sh --full --url=http://release-server:8080 --save-baseline=release-17.0

Compare a later run against a named baseline:

./run-tests.sh --full --url=http://test-server:8080 --compare=release-17.0

List available baselines:

./run-tests.sh --list-baselines

Baseline verdicts use these thresholds:

Verdict | Meaning
Pass | Faster than the baseline, or at most 5% slower
Warning | More than 5% and up to 10% slower than the baseline
Regression | More than 10% slower than the baseline
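
Expressed as a small function over those thresholds (a sketch of the rule, not the report generator's actual code):

// Sketch: classify a metric against its baseline using the thresholds above.
type Verdict = 'pass' | 'warning' | 'regression';

function classify(baselineMs: number, currentMs: number): Verdict {
  const delta = (currentMs - baselineMs) / baselineMs; // 0.07 means 7% slower
  if (delta <= 0.05) return 'pass';   // includes anything faster than the baseline
  if (delta <= 0.10) return 'warning';
  return 'regression';
}

// For example, classify(1000, 1040) is 'pass' and classify(1000, 1080) is 'warning'.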

Cleanup

Use cleanup after manual setup or partial test runs:

./run-tests.sh --cleanup --url=http://localhost:8080

Cleanup reads perf-env.json, deletes the generated documents, folders, and users, and removes the manifest file. If cleanup cannot find the manifest, provide it explicitly through the setup CLI:

cd perf-setup
mvn exec:java -Dexec.args="cleanup --manifest=../perf-env.json --base-url=http://localhost:8080"

Troubleshooting

perf-env.json is missing

Run setup before running UI or API tests separately:

./run-tests.sh --setup --url=http://localhost:8080

The status check cannot reach the CMS

Verify that the CMS is running and reachable from the test runner:

curl http://localhost:8080/cms/

For remote environments, verify network routing, TLS configuration, and the CMS base URL passed through --url.

Setup fails because the namespace is missing

Confirm that the bloomreach-xm-perf-bootstrap dependency is deployed in the CMS and that /hippo:namespaces/brxmperf exists.

UI tests time out while waiting for the CMS to become idle

Increase the UI idle timeout:

UI_IDLE_TIMEOUT=60000 npm test

If the timeout persists, run the tests headed and inspect whether dialogs, overlays, or long-running CMS requests remain active:

HEADLESS=false npm run test:suite

 
