Breaking news shouldn’t break your site: why publishing platforms fail

Breaking news doesn’t politely knock. It kicks down the door. Here's how to stop your site from breaking.

Talk to any CTO at a major publisher and they’ll tell you the same thing: traffic spikes during breaking news or popular events are inevitable. What isn’t inevitable is whether your platform survives them. Whether your site slows down or crashes isn’t luck; it’s architecture.

Let’s break down why publishing platforms fail under pressure, and what actually works when performance matters most.

The “Average Tuesday” trap: why websites fail

Most publishing platforms crash for one reason: they were designed for the average Tuesday afternoon, not for breaking news. It isn’t the traffic itself that kills you; it’s how you build for it.

On a typical day, traffic is predictable, pages are served from cache, and your database hums quietly in the background. But breaking news changes everything. Traffic can spike 50-fold in five minutes, with everyone hitting the same page whilst editors publish in real time and systems compete for the same resources.

The 3 most common bottlenecks in media tech stacks

1. Your origin server blocks first

If every request hits your origin server, that’s fine on an average Tuesday afternoon, but not during breaking news. The cascade is predictable: PHP workers max out, Node processes start queuing, CPU spikes, and latency snowballs. Once the origin struggles, everything else follows.

The fix is to design your platform so the origin is rarely touched during spikes.
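One way to keep the origin out of the hot path is a cache that serves stale content while a refresh happens, so a viral page costs the origin at most one render per cache window. A minimal in-memory sketch (the class name and parameters are illustrative, not a real library API; a production setup would do this at the CDN or a reverse proxy):

```python
import time

class StaleWhileRevalidateCache:
    """Tiny in-memory cache that keeps serving stale content past its
    freshness window, so the origin sees at most one request per key
    per cache lifetime instead of one per reader."""

    def __init__(self, fetch, ttl=30, stale_ttl=300):
        self.fetch = fetch          # callable that renders via the origin
        self.ttl = ttl              # seconds an entry is considered fresh
        self.stale_ttl = stale_ttl  # seconds stale content may still be served
        self.store = {}             # key -> (value, fetched_at)
        self.origin_hits = 0        # how often the origin was actually touched

    def get(self, key):
        now = time.time()
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]         # fresh hit: origin untouched
        if entry and now - entry[1] < self.stale_ttl:
            # Serve stale immediately; a real system would refresh
            # asynchronously in the background here.
            return entry[0]
        value = self.fetch(key)     # cold miss: hit the origin once
        self.origin_hits += 1
        self.store[key] = (value, now)
        return value
```

Even this toy version shows the shape of the fix: a thousand readers on the same breaking-news URL translate to a single origin render.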

2. Databases get hammered

Breaking news creates two simultaneous demands. Readers want pages (read-heavy loads on homepages and articles), whilst editors want to publish (write-heavy loads for live blogs, updates, and comments). When reads and writes compete for the same database resources, performance collapses fast.

The solution involves several approaches: separate read replicas, aggressive query optimisation, caching database-heavy fragments, and maintaining separate databases for editorial and public content.
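The read/write split can live in a thin routing layer in front of your connection pool: SELECTs fan out to replicas, everything else goes to the primary. A hypothetical sketch, with fake connections standing in for real database handles:

```python
import random

class FakeConn:
    """Stand-in for a real database connection, recording what it ran."""
    def __init__(self, name):
        self.name = name
        self.queries = []

    def execute(self, sql, params=()):
        self.queries.append(sql)
        return f"{self.name} ran: {sql}"

class ReadWriteRouter:
    """Send SELECTs to read replicas and everything else to the primary,
    so reader traffic during a spike cannot starve editorial writes."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas

    def execute(self, sql, params=()):
        verb = sql.lstrip().split(None, 1)[0].lower()
        conn = random.choice(self.replicas) if verb == "select" else self.primary
        return conn.execute(sql, params)
```

The caveat, as with any replica setup, is replication lag: editors who need read-your-own-writes (previewing a live blog update, say) should be pinned to the primary.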

3. Caching is too shallow (or missing)

A common mistake we hear: “But we have caching.” In practice, that usually means short TTLs, logged-in users bypassing the cache entirely, and personalisation killing cache efficiency.

During breaking news, weak caching is as good as none.

To fix this, we’d recommend a layered caching strategy: CDN edge caching for anonymous traffic, full-page caching where possible, and fragment caching for dynamic elements.

Your goal? Serve most requests without touching the application.
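Fragment caching is the layer teams most often skip: cache the expensive widgets (live blog ticker, “most read”) separately from the page shell, each with its own TTL, so one fast-moving element doesn’t force the whole page back to the application. A minimal sketch (the function names and TTLs are illustrative):

```python
import time

_fragment_cache = {}  # key -> (html, expires_at)

def cached_fragment(key, ttl, render):
    """Return a cached HTML fragment, re-rendering only after the TTL
    expires. The page shell stays fully cached while each dynamic
    widget refreshes on its own schedule."""
    now = time.time()
    hit = _fragment_cache.get(key)
    if hit and hit[1] > now:
        return hit[0]               # fragment still fresh: no re-render
    html = render()                 # expired or missing: render once
    _fragment_cache[key] = (html, now + ttl)
    return html

def render_article_page(article_html):
    # The live ticker refreshes every few seconds; "most read" can
    # comfortably live for minutes. Neither forces a full page render.
    ticker = cached_fragment("live-ticker", ttl=5,
                             render=lambda: "<ul>ticker items</ul>")
    most_read = cached_fragment("most-read", ttl=300,
                                render=lambda: "<ol>popular stories</ol>")
    return article_html + ticker + most_read
```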

CDN selection isn’t optional anymore

Especially for media websites.

Images, video, embeds, charts: all of these explode during breaking news.

If your CDN can’t cache aggressively, handle sudden global spikes, or serve media independently of your origin, then it’s not doing its job.

What to look for in a CDN:

Your CDN needs proven performance under burst traffic, fine-grained cache control, instant purge and revalidation, and strong media delivery capabilities including image resizing and video streaming.
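Much of the cache-control side comes down to response headers. A hypothetical helper showing the pattern: standard `Cache-Control` directives (`s-maxage` for the edge, `stale-while-revalidate` per RFC 5861) for aggressive caching, plus a vendor-specific tagging header (e.g. Fastly’s `Surrogate-Key`) so you can instantly purge every page that mentions a story:

```python
def cdn_headers(content_type, surrogate_keys, max_age=60, swr=600):
    """Build response headers for aggressive edge caching with instant purge.

    - s-maxage: how long shared caches (the CDN edge) keep the page fresh
    - stale-while-revalidate: how long the edge may serve stale content
      while it refetches in the background (RFC 5861)
    - Surrogate-Key: vendor-specific tags (this header name is Fastly's);
      purging a tag evicts every page carrying it in one API call
    """
    return {
        "Content-Type": content_type,
        "Cache-Control": f"public, s-maxage={max_age}, stale-while-revalidate={swr}",
        "Surrogate-Key": " ".join(surrogate_keys),
    }

# A breaking-news article tagged with its story ID and the homepage:
# purging "story-123" later refreshes every page that embeds it.
headers = cdn_headers("text/html", ["story-123", "homepage"])
```

Check your CDN’s documentation for its equivalent of the tagging header; the `Cache-Control` directives themselves are standard HTTP.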

CDN selection for media is no longer a nice-to-have. It’s core infrastructure.

[Diagram: how a CDN works]

The bottom line

Publishing platforms don’t fail because breaking news is unpredictable.

They fail because architecture wasn’t designed for extremes, caching strategies were an afterthought, and CDN choices didn’t match real-world demand.

The good news? All of this is fixable with the right decisions made early.

[Image: screenshot of Standfirst Publish, showing the sidebar and Magazine issues from The Fence magazine]
Standfirst Publish, specially designed for news and media publishers

Use a publishing platform that survives traffic spikes and viral news

If you’ve been firefighting performance issues and looking for an alternative, maybe it’s time to rethink your stack.

We built Standfirst Publish specifically for media and publishing companies. It’s SEO-focused, comes with a page builder, and won’t break under viral traffic surges.

If you’d like to know more, drop us a message and we’ll be in touch.