sliqua-jcooter wrote:
nat4200 wrote:
EDIT: Also Sliqua people PLEASE consider putting an Expires header on resources like images, etc. An ETag is nice, but my browser still has to make a bunch of HTTP requests to confirm the images have not changed. As the site's cookies also get sent to the WP CDN domain and are several KB in size, a single page load (even with a primed cache) results in quite a bit of data going back and forth in HTTP headers.

We don't control the CDN or what headers it exposes
You should contact the company that does control the WP CDN domain then, as cdn.wrongplanet.net is falsely claiming to be "Server: Sliqua Server Environment" in its HTTP responses.
If you mean Cloudflare is sitting in front of cdn.wrongplanet.net, then please be serious: Cloudflare is not stripping Expires headers and replacing them with ETags, and Cloudflare could probably make even fewer requests back to the Sliqua servers if it could benefit from that Expires information itself. Cloudflare is not the problem.
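For anyone who wants to check this for themselves, here is a rough sketch (Python standard library only; the image path is just a placeholder, point it at any asset the CDN actually serves and assume the server answers a plain HEAD request) that prints the headers in question:

# Rough sketch: print the caching-related headers an asset actually returns.
# The asset path is a placeholder - substitute any real image URL served
# from cdn.wrongplanet.net.
import urllib.request

ASSET = "http://cdn.wrongplanet.net/images/example.png"  # hypothetical path

req = urllib.request.Request(ASSET, method="HEAD")
with urllib.request.urlopen(req) as resp:
    for name in ("Server", "ETag", "Last-Modified", "Expires", "Cache-Control"):
        print(name + ":", resp.headers.get(name, "(not set)"))

If an Expires or Cache-Control: max-age were present in that output, the browser could skip the request entirely on a warm cache instead of revalidating every time.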
sliqua-jcooter wrote:
nor is it within our right to enforce an expiry header on our customers' data.
Fair enough, but I suspect you can ask "Hey Alex - can we do a good thing that makes things slightly faster for end users at next to no effort?"
You have previously asserted Alex is not technical, and you seem to help him with such decisions... if I'm wrong and you just talk to him about cars and drinking and girls, then I'm sorry I misunderstood your familiarity.
sliqua-jcooter wrote:
Furthermore, since practically every modern browser can do concurrent requests, and we're talking about <120 bytes per request
No we aren't. We're talking a lot more data per request than that, but you're right that a lot of it will be sent in parallel...
Refreshing the forum page ( www.wrongplanet.net/posts194419-start75.html ) with a fully primed cache for that unchanged page generated 16 requests to cdn.wrongplanet.net that came back 304 Not Modified, for which a total of 19.3KB in headers was uploaded (about 1.2KB per request). Sure, downstream these 16 requests pulled only about 184 bytes each...
Because of the way you have ETags configured (corresponding to inodes, I believe), a further two requests pulled down images in full, because the server those requests were sent to had a different idea about which ETag it should serve. This added a further 2.5KB up and an additional 3.3KB down.
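If you want to see that for yourself, here is another small stdlib sketch (same placeholder path caveat as above, and a caching layer in front may hide the variation on any single run) that requests the same file a few times and collects the ETag values it is given:

# Sketch: fetch the same asset several times and collect the ETag values.
# If the backend servers generate inode-based ETags, identical files can
# come back with different validators, which defeats If-None-Match.
import urllib.request

ASSET = "http://cdn.wrongplanet.net/images/example.png"  # hypothetical path

etags = set()
for _ in range(5):
    req = urllib.request.Request(ASSET, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        etags.add(resp.headers.get("ETag"))

print("distinct ETag values seen:", etags)

More than one distinct value in that set means a browser with a perfectly warm cache still gets told its copy is stale and re-downloads a file it already has.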
Additionally there were 11 static assets from www.wrongplanet.net which were also answered with 304 Not Modified - the additional cookies on this domain meant that 69.6KB was sent in request headers just for the browser to confirm those assets had not changed (the 304 responses weighed only 3.4KB combined).
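To put the cookie weight in perspective, here is a toy calculation; the cookie value, asset path and validator below are invented placeholders of roughly the sizes I'm seeing, so substitute the real ones from your own browser:

# Toy calculation: how big is one conditional GET once the site's cookies
# ride along? Everything below is illustrative, not captured traffic.
fake_cookie = "wp_session=" + "x" * 6000  # ~6KB placeholder cookie data

request_headers = "\r\n".join([
    "GET /styles/some-stylesheet.css HTTP/1.1",   # hypothetical asset path
    "Host: www.wrongplanet.net",
    "If-Modified-Since: Thu, 01 Jan 2009 00:00:00 GMT",
    'If-None-Match: "1a2b3c-4d5e-6f7890"',        # hypothetical validator
    "Cookie: " + fake_cookie,
    "",
    "",
])

typical_304_response = 300  # bytes, roughly what one bare 304 weighed here

print("bytes uploaded to revalidate one asset:", len(request_headers))
print("bytes downloaded in return (approx):   ", typical_304_response)

With a far-future Expires (or Cache-Control: max-age) on those assets, a warm cache would make none of those requests in the first place.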
Sending around 90KB to wrongplanet domains on every page load is significant when it is almost completely avoidable - 90KB up is heavy! And I feel it's an easy thing to make go away...
90KB is a drop in the bucket compared to the overall page size, and it's also a drop in the bucket in terms of the average user's available throughput. It would have a negligible impact on end users, and has the potential to cause more issues with stale caches on the CDN and Cloudflare layers. There is no compelling reason for me to make such a suggestion, no compelling reason to waste time debating it, and no compelling reason for me to justify my actions to anyone.