A/B Testing
A/B testing analysis is a Pro-only feature.
Burst Statistics Pro automatically detects A/B test variants in your campaign data and enriches the statistics datatable with winner and significance information. No separate test configuration is required — detection is driven entirely by naming conventions in your UTM or campaign parameters.
How It Works
When Burst renders a campaign conversion datatable, it scans every row for campaign parameter values that contain the string `variation-a` or `variation-b` (case-insensitive). Rows that share the same normalized campaign key but differ only by the `-a` / `-b` suffix are grouped as a test pair.
Once a complete pair (one A variant and one B variant) is found, Burst:
- Determines the winner based on conversion rate. If rates are equal, the variant with higher traffic wins.
- Runs a two-proportion z-test at 95% confidence to assess statistical significance.
- Annotates every matched row with `winner`, `significant`, and `is_ab_test` fields that the datatable UI uses to highlight results.
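The winner rule can be sketched in Python (a simplified illustration with hypothetical field names; Burst's actual implementation is PHP):

```python
def pick_winner(a: dict, b: dict) -> dict:
    """Pick the winning variant row: higher conversion rate wins;
    if rates are equal, the variant with more traffic wins."""
    rate_a = a["conversions"] / a["sessions"] if a["sessions"] else 0.0
    rate_b = b["conversions"] / b["sessions"] if b["sessions"] else 0.0
    if rate_a != rate_b:
        return a if rate_a > rate_b else b
    # Tie on conversion rate: higher traffic decides.
    return a if a["sessions"] >= b["sessions"] else b
```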
Naming Convention
To have Burst recognize your variants, include `variation-a` or `variation-b` anywhere in a campaign parameter value.
| Variant | Trigger string (case-insensitive) |
|---|---|
| A | variation-a |
| B | variation-b |
Example UTM content values:
```
utm_content=hero-banner-variation-a
utm_content=hero-banner-variation-b
```
The two rows are grouped by stripping the trailing `-a` / `-b`, so both resolve to the key `hero-banner-variation`. All other campaign parameter values on the row are included in the grouping key, so tests across different campaigns are kept separate.
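A minimal sketch of this key derivation in Python (the function name and return shape are illustrative assumptions, not Burst's API):

```python
import re

VARIANT_RE = re.compile(r"variation-([ab])", re.IGNORECASE)

def grouping_key(value: str):
    """Return (key, variant) for an A/B variant value, or None otherwise.
    'hero-banner-variation-a' -> ('hero-banner-variation', 'a')."""
    m = VARIANT_RE.search(value)
    if m is None:
        return None
    # Drop only the '-a' / '-b' suffix so both variants share one key.
    key = (value[:m.start()] + "variation" + value[m.end():]).lower()
    return key, m.group(1).lower()
```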
Significance Calculation
Burst uses a two-proportion z-test. The result stored in each row's `significant` field is one of three string values:
| Value | Meaning |
|---|---|
| `significant` | The difference between variants is statistically significant at the 95% confidence level. |
| `still_running` | Not enough data yet to draw a conclusion. |
| `no_winner` | Total sessions ≥ 300 but the absolute conversion-rate difference is under 2 percentage points; the test has reached its futility cutoff. |
Thresholds used internally:
| Threshold | Value |
|---|---|
| Z-score for 95% confidence | 1.95 |
| Minimum sessions for futility check | 300 |
| Minimum effect size (futility) | 2 percentage points (0.02) |
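Putting the test and thresholds together, the calculation can be approximated in Python (a sketch under the stated thresholds, not Burst's exact PHP code):

```python
from math import sqrt

Z_CRITICAL   = 1.95   # z threshold used for 95% confidence (table above)
MIN_SESSIONS = 300    # combined-session floor for the futility check
MIN_EFFECT   = 0.02   # minimum conversion-rate difference (2 pp)

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> str:
    """Two-proportion z-test returning one of the three status strings."""
    if n_a == 0 or n_b == 0:
        return "still_running"
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se if se > 0 else 0.0
    if z >= Z_CRITICAL:
        return "significant"
    if n_a + n_b >= MIN_SESSIONS and abs(p_a - p_b) < MIN_EFFECT:
        return "no_winner"
    return "still_running"
```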
Row Fields Added
After processing, every row that belongs to a detected A/B test receives the following additional fields:
| Field | Type | Description |
|---|---|---|
| `is_ab_test` | bool | `true` for both the A and B rows. |
| `winner` | bool | `true` on the winning row, `false` on the losing row. |
| `significant` | string | One of `significant`, `still_running`, or `no_winner`. |
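For illustration, an enriched row might look like this (column names and values are hypothetical; the exact columns depend on your datatable query):

```json
{
  "utm_content": "hero-banner-variation-a",
  "sessions": 412,
  "conversions": 37,
  "is_ab_test": true,
  "winner": true,
  "significant": "significant"
}
```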
Hooks & Filters
burst_datatable_data
Filters the flat row array returned for the statistics datatable. The A/B Tests class hooks into this filter at priority 10 to append A/B test metadata to matched rows. You can hook at a higher priority to act on the enriched data.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `$data` | array | Flat array of datatable rows. Each row is an associative array of column name → value. |
| `$query_data` | `Burst\Admin\Statistics\Query_Data` | The query definition used to fetch `$data`, including the select columns and campaign filters. |
Return value: The filtered `$data` array, potentially with `is_ab_test`, `winner`, and `significant` keys added to matched rows.
Example — reading A/B results after Burst has processed them:
```php
add_filter( 'burst_datatable_data', function( array $data, $query_data ): array {
	foreach ( $data as $row ) {
		if ( ! empty( $row['is_ab_test'] ) ) {
			$label       = $row['winner'] ? 'WINNER' : 'loser';
			$significant = $row['significant']; // 'significant' | 'still_running' | 'no_winner'
			error_log( sprintf( '[Burst A/B] %s — %s — significance: %s', $row['utm_content'] ?? '', $label, $significant ) );
		}
	}
	return $data;
}, 20, 2 );
```
Example — suppressing A/B test annotations entirely:
```php
add_filter( 'burst_datatable_data', function( array $data, $query_data ): array {
	foreach ( $data as &$row ) {
		unset( $row['is_ab_test'], $row['winner'], $row['significant'] );
	}
	return $data;
}, 20, 2 );
```
The A/B detection runs only when the datatable query selects at least one of `sessions`, `visitors`, or `pageviews` and is identified as a campaign conversion query. Datatables that do not meet both conditions are returned unchanged and will not contain A/B test fields.
Requirements
- Burst Statistics Pro license active.
- Campaign tracking enabled and conversion goals configured.
- UTM / campaign parameter values must follow the `variation-a` / `variation-b` naming convention.
- Both the A and B variants must appear in the same datatable result set. If only one variant is present in the selected date range, no A/B annotations are added.