Hacker News

Don't forget that the version of WordPress you download is very different from the version that's on WordPress.com


It's actually not that different, and of course they make as much use of caching as possible.

So does my self-hosted WP blog, for that matter. If you use WP Super Cache correctly, you can handle significant amounts of traffic. I've seen peaks of hundreds of simultaneous users on my blog with a server load < 1. The server is a modest AWS small instance.
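The full-page caching the comment relies on can be sketched roughly like this. This is a toy model of the idea, not WP Super Cache's actual implementation; the cache directory, file naming, and the 5-minute TTL are all assumptions:

```python
import os
import tempfile
import time

CACHE_DIR = os.path.join(tempfile.gettempdir(), "page-cache")  # hypothetical location
TTL = 300  # assumed 5-minute expiry

def render_page(path):
    # Stand-in for the expensive WordPress render (DB queries, templates, plugins).
    return f"<html><body>Rendered {path} at {time.time()}</body></html>"

def serve(path):
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = path.strip("/").replace("/", "_") or "index"
    cache_file = os.path.join(CACHE_DIR, name)
    # If a fresh cached copy exists, serve it directly; in the real plugin,
    # anonymous visitors never touch PHP or MySQL on this path.
    if os.path.exists(cache_file) and time.time() - os.path.getmtime(cache_file) < TTL:
        with open(cache_file) as f:
            return f.read()
    # Cache miss: render once, store the HTML, serve it.
    html = render_page(path)
    with open(cache_file, "w") as f:
        f.write(html)
    return html
```

Because every anonymous request after the first is a file read rather than a render, a small instance can keep load under 1 even at hundreds of simultaneous readers.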


Single-author mini-blogs without logged-in users are super easy to cache.

But multi-author blogs with thousands of logged-in users are a nightmare with WP.

I suspect 90%+ of WP blogs are in the mini-blog category though.


Why would multiple authors make a difference?


Every time an article is created, saved, or published, WP issues hundreds of non-cacheable queries.

Every time an article is published, it causes the cache to be deleted for not only that article but related pages, which means all those pages have to be rendered again.

For one author, that can be managed. Many authors, the cache is constantly being defeated.


What matters is the ratio of reads to writes. Blogs with multiple authors have a higher ratio than blogs with single authors.


>For one author, that can be managed. Many authors, the cache is constantly being defeated.

That's irrelevant. It's the page view count that counts, not how many authors are on the same CMS.


I know they replace the database class on WP.com to support replication (I think they even released the code once), but I don't think they make extensive changes otherwise, unless you've actually read that they do.


It's called HyperDB, and they did indeed release it: http://codex.wordpress.org/HyperDB



