# AJO's Send Time Optimization Isn't Doing What You Think It Is
The STO reporting widget disappears after 24 hours. The platform doesn't flag which profiles were actually time-shifted. Here's how to find out if STO is running on your audience — and what to build when it isn't.
## The Report That Started the Investigation
The AJO Live report showed an "Optimized vs Normal" split in the STO widget. Open rates looked strong. The assumption was that STO was working — personalising send times, lifting engagement.
Then the Global report for the same journey showed no such split. The comparison had disappeared entirely.
That's not a bug. That's how AJO's STO reporting works: the Optimized vs Normal widget only exists in the 24-hour Live report window. Once a journey ages out of Live view, the comparison is gone. The Global report can't reconstruct it.
Which raised an obvious question: if the UI can't tell you whether STO actually did anything, how do you find out?
## What AJO's Reporting Doesn't Tell You
AJO's STO widgets show you a split between "Optimized" and "Normal" sends — but only while the Live report is active. The platform doesn't stamp an "Optimized" or "Normal" flag on the underlying feedback or tracking event records.
This means:
- You can't query historical STO performance in Query Service using a flag
- You can't compare optimised vs normal cohorts post-hoc through the UI
- The only way to evaluate STO retrospectively is to infer it from send-time distribution data
| What AJO Shows You | What AJO Doesn't Store |
|---|---|
| Optimized vs Normal split (Live only) | "Optimized" flag on feedback events |
| Overall open rate | Which profiles were actually time-shifted |
| Send volume | How much dispersion STO introduced |
The implication: most teams are making decisions about STO based on a reporting widget that disappears after 24 hours, with no way to audit the historical record.
## The Query That Reveals the Truth
To get past the UI limitation, run a send-time distribution query in Query Service against `ajo_message_feedback_event_dataset`, looking at how sends are distributed across the STO window.
If STO is actively personalising send times, you should see dispersion: sends spread across the full optimisation window, with different profiles receiving messages at different times based on their predicted engagement peak.
If instead you see all profiles receiving their message within a narrow window — seconds or minutes of the scheduled time — STO is in cold-start. The model defaulted. Every profile got the scheduled time regardless of what the widget says.
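The dispersion check itself is simple once send timestamps have been exported from the feedback dataset. A minimal sketch of the logic, assuming you have a list of per-profile send times for one journey execution (the five-minute threshold here is an illustrative assumption, not a documented value):

```python
from datetime import datetime
from statistics import pstdev

def sto_dispersion_minutes(send_times: list[datetime]) -> float:
    """Spread of sends, in minutes: population standard deviation
    of minutes-past-midnight for each send in the batch."""
    minutes = [t.hour * 60 + t.minute for t in send_times]
    return pstdev(minutes)

def looks_like_cold_start(send_times: list[datetime],
                          threshold_minutes: float = 5.0) -> bool:
    # Assumed heuristic: if nearly every profile was sent within a few
    # minutes of the scheduled slot, STO is very likely falling back.
    return sto_dispersion_minutes(send_times) < threshold_minutes
```

A batch where every send lands between 09:00 and 09:02 flags as cold-start; a batch spread from 08:00 to 14:00 does not.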
This is more common than most teams realise. And the platform still labels these sends as "Optimized."
## What Cold-Start Actually Means in Production
AJO's STO model requires historical engagement data to make predictions. Profiles without sufficient AJO engagement history — opens, clicks, tracked interactions — fall back to the scheduled send time.
The audiences most affected:
- New subscribers — no engagement history yet
- Re-engagement campaigns — lapsed profiles with sparse recent history
- New AJO implementations — limited historical data in AEP overall
These are often the highest-value audiences in your programme. And they're exactly the ones where STO has the least data to work with.
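The fallback behaviour can be reasoned about as a simple gate. This is a toy model of what's observed, not Adobe's implementation; the event threshold is a placeholder, since no activation criterion is published:

```python
from dataclasses import dataclass
from typing import Optional

# Placeholder: Adobe does not publish the real activation criterion,
# so treat this number as an assumption for reasoning, not fact.
MIN_ENGAGEMENT_EVENTS = 10

@dataclass
class Profile:
    engagement_events: int                # opens/clicks tracked in AJO
    predicted_peak_hour: Optional[int]    # model output, if any

def effective_send_hour(profile: Profile, scheduled_hour: int) -> int:
    """Cold-start gate: without enough history, every profile
    collapses onto the scheduled send time."""
    if profile.engagement_events < MIN_ENGAGEMENT_EVENTS \
            or profile.predicted_peak_hour is None:
        return scheduled_hour             # cold-start fallback
    return profile.predicted_peak_hour    # personalised send time
```

The point of the sketch: a new subscriber and a twenty-campaign veteran both show up as "Optimized" in the widget, but only one of them ever leaves the `scheduled_hour` branch.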
| Audience Type | STO Effectiveness | Why |
|---|---|---|
| Active, long-tenure subscribers | Moderate to good | Sufficient engagement history |
| New subscribers | Cold-start fallback | No AJO engagement history |
| Lapsed / re-engagement | Weak | Sparse or outdated history |
| Post-migration audiences | Unreliable | History may not have transferred |
Good open rates on an STO-enabled journey don't confirm STO is working. They may simply reflect smart manual scheduling into high-engagement time slots. The two look identical in the UI.
## The Cold-Start Threshold Problem
AJO's documentation doesn't publish a specific engagement threshold for STO activation. Based on observed behaviour, the model appears to require a meaningful engagement history before it deviates from the scheduled time.
For many real-world implementations, a significant portion of the active audience is in cold-start at any given time — particularly in growth programmes where new subscriber acquisition is ongoing.
The practical implication: if you're running STO, audit your send-time distribution before assuming the model is active. If you're seeing minimal dispersion across the window, you're in cold-start territory — and the STO label in your reporting is misleading you.
## What Actually Works: Building Your Own Engagement Intelligence
The limitation of native STO isn't that personalised send times don't work — it's that the model needs data you may not have yet, and the platform gives you no visibility into whether it's actually running.
The alternative is to build your own send-time logic directly in Query Service. Instead of waiting for Sensei to accumulate enough history, you query your own engagement data, calculate preferred engagement windows per profile, and use those as journey conditions.
A well-structured query against `ajo_email_tracking_experience_event_dataset` can surface — per profile — their historical open times, time-to-open distribution, and peak engagement windows. You can classify profiles into engagement tiers and preferred time buckets. That data gets written back to the profile and consumed directly in your journey canvas as a condition node.
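The classification step can be sketched offline once open timestamps are exported. The bucket names and boundaries below are illustrative choices, not part of any AJO schema; pick boundaries that match your sending regions and local-time handling:

```python
from collections import Counter
from datetime import datetime

# Illustrative time buckets covering all 24 hours; boundaries are
# an assumption, tune them to your audience's timezone spread.
BUCKETS = {
    "early_morning": set(range(5, 9)),
    "morning": set(range(9, 12)),
    "afternoon": set(range(12, 17)),
    "evening": set(range(17, 22)),
    "night": {22, 23, 0, 1, 2, 3, 4},
}

def preferred_bucket(open_times: list[datetime]) -> str:
    """Most frequent open-hour bucket across a profile's history;
    'unknown' when there is no history to classify."""
    counts = Counter()
    for t in open_times:
        for name, hours in BUCKETS.items():
            if t.hour in hours:
                counts[name] += 1
    return counts.most_common(1)[0][0] if counts else "unknown"
```

A profile with two evening opens and one morning open classifies as "evening"; a profile with no history classifies as "unknown", which doubles as your explicit cold-start marker instead of a silent fallback.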
The result is a send-time layer you can fully audit, fully control, and fully explain. No black box. No cold-start problem. No reporting that disappears after 24 hours.
The query architecture to make this work correctly — handling new profiles, refresh cadence, writing attributes back in a way AJO can actually consume — is the part that takes precision to get right.
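The write-back half can also be sketched. The field names here are hypothetical — they must match whatever XDM schema you define on your profile-enabled dataset — but the shape illustrates the pattern: flat attributes, an identity field, and a refresh timestamp so stale classifications are detectable:

```python
from datetime import datetime, timezone

def sendtime_profile_record(email: str, bucket: str, tier: str) -> dict:
    """Shape computed send-time attributes as a flat record for
    ingestion into a profile-enabled dataset. Field names are
    hypothetical and must match your own XDM schema."""
    return {
        "identityEmail": email,
        "sendTimeBucket": bucket,   # e.g. "evening", or "unknown" for new profiles
        "engagementTier": tier,     # e.g. "high" / "lapsed"
        "computedAt": datetime.now(timezone.utc).isoformat(),
    }
```

New profiles get an explicit "unknown" bucket rather than being silently dropped, and the `computedAt` stamp lets a scheduled refresh query pick up anything older than your chosen cadence.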
## The Diagnostic Checklist
Before your next STO-enabled send:
1. Run a send-time distribution query — how much dispersion are you actually seeing across the STO window?
2. What percentage of your audience has sufficient engagement history to avoid cold-start fallback?
3. Is your engagement data clean — Apple MPP filtered, bot traffic excluded?
4. Does your campaign timeline conflict with a multi-day hold window?
5. Are STO-held messages consuming frequency cap budget before delivery?
If question 1 shows minimal dispersion, the rest of the list is academic — STO isn't running on your audience in any meaningful way.
Native STO works when the conditions are right. Most implementations don't have those conditions for a meaningful portion of their audience — and the reporting makes it impossible to tell. If you want to audit whether STO is actually running on your journeys, understand your cold-start exposure, or build a custom send-time intelligence layer from Query Service — that's exactly what the 60-minute working session covers.