A/B Testing

Pro

A/B testing analysis is a Pro-only feature.

Burst Statistics Pro automatically detects A/B test variants in your campaign data and enriches the statistics datatable with winner and significance information. No separate test configuration is required — detection is driven entirely by naming conventions in your UTM or campaign parameters.

How It Works

When Burst renders a campaign conversion datatable, it scans every row for campaign parameter values that contain the string variation-a or variation-b (case-insensitive). Rows that share the same normalized campaign key but differ only by the -a / -b suffix are grouped as a test pair.

Once a complete pair (one A variant and one B variant) is found, Burst:

  1. Determines the winner based on conversion rate. If rates are equal, the variant with higher traffic wins.
  2. Runs a two-proportion z-test at 95% confidence to assess statistical significance.
  3. Annotates every matched row with winner, significant, and is_ab_test fields that the datatable UI uses to highlight results.
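The pairing and winner logic above can be sketched as follows. This is a simplified illustration, not Burst's internal API: the function name is hypothetical, and it reduces the winner to a single `'a'`/`'b'` label rather than the per-row boolean Burst writes.

```php
<?php
// Hypothetical sketch of picking a winner from a matched A/B pair.
// Rows carry 'sessions' and 'conversions' counts (names illustrative).
function burst_ab_pick_winner( array $row_a, array $row_b ): string {
	$rate = fn( array $r ): float =>
		$r['sessions'] > 0 ? $r['conversions'] / $r['sessions'] : 0.0;

	$rate_a = $rate( $row_a );
	$rate_b = $rate( $row_b );

	if ( $rate_a === $rate_b ) {
		// Equal conversion rates: the variant with more traffic wins.
		return $row_a['sessions'] >= $row_b['sessions'] ? 'a' : 'b';
	}

	return $rate_a > $rate_b ? 'a' : 'b';
}
```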

Naming Convention

To have Burst recognize your variants, include variation-a or variation-b anywhere in a campaign parameter value.

| Variant | Trigger string (case-insensitive) |
| --- | --- |
| A | `variation-a` |
| B | `variation-b` |

Example UTM content values:

utm_content=hero-banner-variation-a
utm_content=hero-banner-variation-b

The two rows are grouped by stripping the trailing -a / -b, so both resolve to the key hero-banner-variation. All other campaign parameter values on the row are included in the grouping key, so tests across different campaigns are kept separate.
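The normalization step can be sketched like this. The helper name is hypothetical; only the `variation-a` / `variation-b` convention and the suffix-stripping behavior come from the documentation above.

```php
<?php
// Hypothetical sketch: detect the variant marker and strip the trailing
// -a / -b so both variants resolve to the same grouping key.
function burst_ab_group_key( string $value ): ?array {
	if ( preg_match( '/variation-(a|b)/i', $value, $m ) ) {
		// Only the trailing -a / -b is removed; 'variation' stays in the key.
		$key = preg_replace( '/-(a|b)$/i', '', $value );
		return [ 'key' => $key, 'variant' => strtolower( $m[1] ) ];
	}
	return null; // not part of an A/B test
}
```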

Significance Calculation

Burst uses a two-proportion z-test. The result stored in each row's significant field is one of three string values:

| Value | Meaning |
| --- | --- |
| `significant` | The difference between variants is statistically significant at the 95% confidence level (z ≥ 1.95). |
| `still_running` | Not enough data yet to draw a conclusion. |
| `no_winner` | Total sessions ≥ 300 but the absolute conversion-rate difference is < 2 percentage points; the test has reached a futility cutoff. |

Thresholds used internally:

| Threshold | Value |
| --- | --- |
| Z-score for 95% confidence | 1.95 |
| Minimum sessions for futility check | 300 |
| Minimum effect size (futility) | 2 percentage points (0.02) |
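A minimal sketch of a two-proportion z-test using these thresholds is shown below. This is an illustration of the statistical technique, assuming a pooled standard error; it is not Burst's internal implementation, and the function name is hypothetical.

```php
<?php
// Illustrative two-proportion z-test with the documented thresholds:
// z >= 1.95 for significance, 300 total sessions and a 2-point
// minimum effect size for the futility cutoff.
function burst_ab_significance( int $n_a, int $c_a, int $n_b, int $c_b ): string {
	$p_a = $n_a > 0 ? $c_a / $n_a : 0.0;
	$p_b = $n_b > 0 ? $c_b / $n_b : 0.0;

	// Pooled proportion and standard error for the two-proportion z-test.
	$pool = ( $c_a + $c_b ) / max( 1, $n_a + $n_b );
	$se   = sqrt( $pool * ( 1 - $pool ) * ( 1 / max( 1, $n_a ) + 1 / max( 1, $n_b ) ) );

	$z = $se > 0 ? abs( $p_a - $p_b ) / $se : 0.0;

	if ( $z >= 1.95 ) {
		return 'significant';
	}
	if ( $n_a + $n_b >= 300 && abs( $p_a - $p_b ) < 0.02 ) {
		return 'no_winner'; // futility: enough data, effect too small
	}
	return 'still_running';
}
```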

Row Fields Added

After processing, every row that belongs to a detected A/B test receives the following additional fields:

| Field | Type | Description |
| --- | --- | --- |
| `is_ab_test` | bool | `true` for both the A and B rows. |
| `winner` | bool | `true` on the winning row, `false` on the losing row. |
| `significant` | string | One of `significant`, `still_running`, or `no_winner`. |
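Putting it together, a matched pair of rows might look like this after processing. The metric values and the `conversions` column name are illustrative; the three added fields match the table above.

```php
<?php
// Illustrative shape of an annotated A/B pair; numbers are made up.
$rows = [
	[
		'utm_content' => 'hero-banner-variation-a',
		'sessions'    => 1200,
		'conversions' => 96,
		'is_ab_test'  => true,
		'winner'      => true,
		'significant' => 'significant',
	],
	[
		'utm_content' => 'hero-banner-variation-b',
		'sessions'    => 1180,
		'conversions' => 59,
		'is_ab_test'  => true,
		'winner'      => false,
		'significant' => 'significant',
	],
];
```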

Hooks & Filters

burst_datatable_data

Filters the flat row array returned for the statistics datatable. The A/B Tests class hooks into this filter at priority 10 to append A/B test metadata to matched rows. You can hook at a higher priority to act on the enriched data.

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `$data` | array | Flat array of datatable rows. Each row is an associative array of column name → value. |
| `$query_data` | `Burst\Admin\Statistics\Query_Data` | The query definition used to fetch `$data`, including the select columns and campaign filters. |

Return value: The filtered $data array, potentially with is_ab_test, winner, and significant keys added to matched rows.

Example — reading A/B results after Burst has processed them:

add_filter( 'burst_datatable_data', function( array $data, $query_data ): array {
	foreach ( $data as $row ) {
		if ( ! empty( $row['is_ab_test'] ) ) {
			$label       = $row['winner'] ? 'WINNER' : 'loser';
			$significant = $row['significant']; // 'significant' | 'still_running' | 'no_winner'
			error_log( sprintf( '[Burst A/B] %s — %s — significance: %s', $row['utm_content'] ?? '', $label, $significant ) );
		}
	}
	return $data;
}, 20, 2 );

Example — suppressing A/B test annotations entirely:

add_filter( 'burst_datatable_data', function( array $data, $query_data ): array {
	foreach ( $data as &$row ) {
		unset( $row['is_ab_test'], $row['winner'], $row['significant'] );
	}
	return $data;
}, 20, 2 );
caution

The A/B detection runs only when the datatable query selects at least one of sessions, visitors, or pageviews and is identified as a campaign conversion query. Datatables that do not meet both conditions are returned unchanged and will not contain A/B test fields.

Requirements

  • Burst Statistics Pro license active.
  • Campaign tracking enabled and conversion goals configured.
  • UTM / campaign parameter values must follow the variation-a / variation-b naming convention.
  • Both the A and B variants must appear in the same datatable result set. If only one variant is present in the selected date range, no A/B annotations are added.