Earth Notes: On Website Technicals (2017-07)
Updated 2024-03-30 15:03 GMT. By Damon Hart-Davis.
2017-07-31: Meta Charset
The canonical form is <meta charset="UTF-8"/> (maybe without the trailing '/' ("solidus") to taste). But since W3C says the match is case-insensitive, and most of the rest of the text and HTML boilerplate is lowercase, it should be better for compression to make it <meta charset="utf-8"/>. Lo and behold, making that change knocks 2 bytes off the zopfli pre-compressed mobile index page's size.
(And indeed as of 2024 for a while that has been whittled down to a minimal <meta charset=utf-8>.)
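A rough way to check this sort of micro-optimisation (a sketch: it assumes a local copy of the page named index.html and a zopfli binary on the PATH) is to compare the compressed sizes of the two casings directly:
# Compare zopfli output sizes for the UTF-8 vs utf-8 spellings (sketch).
sed 's/charset="UTF-8"/charset="utf-8"/' index.html > index-lc.html
zopfli -c index.html | wc -c
zopfli -c index-lc.html | wc -c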
2017-07-30: Share42 Update
Today I updated the (excellent) Share42
social media buttons to 32x32 to make them easier to click on mobile in particular, and replaced Pinterest with a button for the RSS (well, Atom) feed. The new JavaScript (and zopfli
-precompressed file) and PNG icons were given new names to avoid cache clash with the old versions. (The old ones may be deleted at some point, to reduce clutter, especially the copies for the mobile side).
2017-07-29: Waiting for Godot (OK, Google)
Google's Search Console "HTML Improvements" page as of the 26th lists only one remaining minor issue (fixed some time ago) on the main site.
The m. mobile site has been free of reported issues for a while, but then it's almost entirely free of indexed pages too, I think because the main site holds the canonical versions. Plenty of Google search traffic continues to land at the mobile pages anyway.
Also, just because it's been annoying me, I have removed the implicit (and redirected to /) /index.html entries from the main and mobile site simple URL list sitemaps. Those entries add nothing, and force a redirect, so begone...
For the XML/Atom sitemaps/feeds I have re-written the /index.html entry to / rather than zapping it, retaining the value of the timestamp for the search engine. I do want an SE to re-index a newly-updated homepage swiftly.
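For what it's worth, the rewrite can be done mechanically at sitemap generation time; a sketch (the hostname and sitemap filename here are illustrative, not necessarily the real ones):
# Rewrite the homepage entry from /index.html to / while keeping its <lastmod> (sketch).
sed 's#<loc>https://www.example.org/index.html</loc>#<loc>https://www.example.org/</loc>#' \
    sitemap.xml > sitemap.xml.new && mv sitemap.xml.new sitemap.xml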
2017-07-26: FeedValidator
Attempted to validate my shiny new Atom feeds with FeedValidator [fv]. They did not validate, so I have updated them to do so, while keeping them as slim as reasonably possible (after gzip content encoding) for fetch efficiency for the search engines which are their primary target.
2017-07-23: The Dark Side: Bing
I have gone over to the dark side and signed up for Microsoft's Bing Webmaster Tools (BWT) today, though I have long used Google Search Console (GSC). It is probably worth seeing what the #2 search engine by volume is seeing, even though, bizarrely, I was partly persuaded to sign up by various claims on blogs that BWT had died, with no posts on its own blog in half a year!
Anyway, the BWT sign-up process is OK and naturally a lot of GSC features map pretty well 1:1. But there are a few different bits, such as the BWT "SEO Analyzer", which did find a fault in one of my pages, now partly fixed. BWT also shows a directory hierarchy for your site that you can navigate down, revealing the view Bing has of the site structure and content, which is useful. Google clearly has to be fairly Delphic and aloof to avoid being gamed and blamed, but Bing can maybe sail closer to the wind on some points.
I also added a warning to my page build script when a page is very thin (not much text) as it will be difficult for search engines to classify such articles, and probably I am not working hard enough for my visitors. The upshot is that I have significantly expanded the information on a handful of key pages, and will probably do a few more soon. If I have my own script give me a warning, my engineering head finds it hard to resist fixing that warning pronto! When a page is very low on text I am also now automatically keeping ads off it to avoid SPAMming users.
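A sketch of the kind of thin-page check meant here (the 200-word threshold and the crude tag-stripping are illustrative, not the build script's actual rule):
# Warn when a page has little visible text (sketch).
PAGE=index.html                                   # hypothetical page under test
WORDS=`sed -e 's/<[^>]*>//g' "$PAGE" | wc -w`
if [ "$WORDS" -lt 200 ]; then
    echo "WARNING: $PAGE looks thin ($WORDS words)" >&2
fi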
Atom
I have added a very basic Atom sitemap feed for articles which may serve as a bare-bones general feed too. I have knocked down the default expiry time on all these .xml sitemap file variants so as not to put search engines off rechecking them frequently, especially the Atom feed.
There is now a basic Atom feed for (recently-updated) data file/logs also.
A (free) PubSubHubbub hub with Superfeedr might help grease the Atom update flow, with a ping to it potentially pinging many downstream consumers such as search engines.
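Such a ping is just a small POST to the hub naming the updated feed; a sketch (the hub endpoint and feed URL here are illustrative):
# PubSubHubbub 'publish' ping after the Atom feed is regenerated (sketch).
curl -s -d hub.mode=publish \
     -d hub.url=https://www.example.org/atom.xml \
     https://pubsubhubbub.superfeedr.com/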
2017-07-16: Keep-Alive, ETag, Expires
(Still on the trail of minimising HTTP response overheads to improve perceived performance for users on slow connections in particular...)
OK, this is more than a little geeky, but I noticed the response header:
Keep-Alive: timeout=5, max=100
100 is comfortably above the number of responses on a single HTTP connection over TCP, which is why it was chosen, so why not 99 and save a byte on every HTTP response? There's a directive for that:
MaxKeepAliveRequests 99
Actually it could probably be set to 9 to save another byte without affecting most connections, or to 0 for unlimited (to save 9 bytes on every response) if we trust Apache not to leak memory for example.
Another word of content squeezed into the first response packet, potentially! Woot!
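As a quick check that the smaller limit is actually being advertised (a sketch; hostname illustrative), the header can be inspected with curl:
# Expect something like: Keep-Alive: timeout=5, max=99
curl -sI http://m.example.org/ | grep -i '^Keep-Alive'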
Also, by default, Apache embeds the file's i-node number in the ETag, which is a bit dubious for various reasons, and takes up a few more bytes. The suggested cure is the following directive, the default from Apache 2.4 anyway:
FileETag MTime Size
Applying this server-wide brings a typical mobile-site response header to ~343 bytes including CRLFs:
Date: Sun, 16 Jul 2017 08:55:44 GMT
Server: Apache
Vary: Accept-Encoding
Last-Modified: Sun, 16 Jul 2017 07:56:24 GMT
Etag: "2ca8-5546a9e64fdc0"
Content-Length: 11432
Cache-Control: max-age=1382400
Expires: Tue, 01 Aug 2017 08:55:44 GMT
Keep-Alive: timeout=5
Connection: Keep-Alive
Content-Type: text/html
Content-Encoding: gzip
Turning off the ETag entirely is another possibility, then relying on other headers to control cacheing.
ETag Begone
That needs to be considered more selectively: maybe for all pre-compressed HTML files, where the tag is on the critical rendering path (CRP), and for very short files, where the ETag's overhead probably exceeds any saving not already provided by the other cacheing headers. The unconditional directives are:
Header unset Etag
FileETag none
Given that a couple of the header fields depend on content length (and are smaller when the content is), virtue is its own reward: very short pages automatically save a couple more HTTP header bytes.
For the mobile site which is almost entirely small-ish pre-compressed HTML pages, the unconditional removal of ETag is probably safe, getting the HTTP response header down to ~315 bytes all-in, ~70 bytes less for every mobile HTML page request since yesterday!
Expires Begone
For anything other than ancient HTTP/1.0 browsers, sending both Expires and the newer Cache-Control is redundant, and the former is larger, so can usefully be dropped. Note that sadly very few visitors are repeat visitors anyway, so a bit of cache wobble is unlikely to hurt overall.
In this case the Expires/Cache-Control headers are generated by Apache on the fly, relative to the access time, so the information in one is entirely captured by the other.
So again, for the mobile site I have removed Expires while leaving Cache-Control (and Last-Modified) in place.
Header unset Expires
A typical mobile page HTTP response overhead is now ~275 bytes, more than 100 bytes (~25%) shaved off since yesterday! And visibly more content text is arriving in the initial render/packet.
Date: Sun, 16 Jul 2017 09:34:11 GMT
Server: Apache
Vary: Accept-Encoding
Last-Modified: Sun, 16 Jul 2017 09:23:41 GMT
Content-Length: 12068
Cache-Control: max-age=1382400
Keep-Alive: timeout=5
Connection: Keep-Alive
Content-Type: text/html
Content-Encoding: gzip
Note that all of these responses are preceded by something like HTTP/1.1 200 OK and a CRLF, so ~17 bytes extra, for a total HTTP overhead of ~290 bytes.
Note that this site/server is already cookie free.
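The overall overhead is easy to measure from the command line (a sketch; hostname illustrative): the byte count below includes the status line, all headers, CRLFs and the terminating blank line.
# Total response header overhead for a mobile page, in bytes (sketch).
curl -sI -H 'Accept-Encoding: gzip' http://m.example.org/ | wc -c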
922222
Reducing the mobile site default expiry to ~11.5 days cuts the Cache-Control max-age value to 6 digits, saving one more byte! (For HTTP/2 HPACK it may also be possible to choose a value whose Huffman encoding is minimal eg with a leading digit 9 and the rest 0, 1, or 2 at 5 encoded bits each, so 922222 seconds may be optimal on that score.)
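A toy check of that HPACK claim (a sketch, assuming the RFC 7541 Huffman code lengths of 5 bits for the digits 0-2 and 6 bits for 3-9):
# Count the Huffman-coded bits for the candidate max-age value (sketch).
echo 922222 | awk '{ bits=0; for(i=1;i<=length($0);i++){ d=substr($0,i,1)+0; bits += (d<=2) ? 5 : 6 }
    print bits " bits, " int((bits+7)/8) " octets" }'
# Prints: 31 bits, 4 octets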
A typical 304 (not modified) mobile page response is now down to:
Date: Sun, 16 Jul 2017 10:40:48 GMT
Server: Apache
Connection: Keep-Alive
Keep-Alive: timeout=5
Cache-Control: max-age=922222
Vary: Accept-Encoding
Possibly I could do without the Last-Modified header, at least for the critical-path pre-compressed HTML pages. Removing that header would be like shunning an old friend though!
I currently verify my site ownership to Google with a meta tag in the home page (along with other methods); I could skip that and save ~56 bytes from the HTML head after compression for that one page. If I suddenly have crowds of 2G/dial-up visitors to that page then maybe I should!
background-color
Having been writing Web pages since the dinosaurs were using Mosaic, which defaulted to a nasty grey background that was difficult to read black text against, I instinctively set a white page background colour for the body (having migrated from bgcolor="white" to CSS background-color:#fff styling). However, every browser I can lay my hands on now seems to default to a white background, and I have not been explicitly setting a contrasting text colour, which I should have been. So to save another ~28 bytes of raw lead-in (maybe ~8 after compression), I am abolishing this. The browser should at least select sane background and text colours if not black and white, and my green sidebars set their text to be black.
Dial-up First Render Preview
I can get some idea of what is likely to make the first packet and thus first render on slow dial-up with some variation on:
dd < FILE.htmlgz bs=1160 count=1 | gzip -d | tail -5
2017-07-15: HTTP Response Header Diet
For extra credit it may be worth minifying the HTTP response header: eg minimising or eliminating the Server header, and dropping Accept-Ranges for small objects where the HTTP overhead would swamp any value from a range fetch, or where it adds significant delay to the first usable content seen by the browser on a critical path, such as the smallish pre-compressed HTML files.
A typical response header may currently be something like the below, weighing in at about 387 bytes with CRLF line endings and a blank line terminating the headers.
Date: Sat, 15 Jul 2017 15:24:30 GMT
Server: Apache/2.2.22
Vary: Accept-Encoding
Last-Modified: Sat, 15 Jul 2017 10:43:08 GMT
Etag: "1f502a-1cb4-55458d4d6de77"
Accept-Ranges: bytes
Content-Length: 7348
Cache-Control: max-age=1296000
Expires: Sun, 30 Jul 2017 15:24:30 GMT
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html
Content-Encoding: gzip
By setting ServerTokens Prod in the 'security' configuration it is possible to avoid including the server version details, which saves a grand total of ~7 bytes for every HTTP response. Amazingly that's ~2% of HTTP/1.1 response overhead gone. Removing the header entirely does not seem possible.
For just the mobile site I have added a single directive to stop Apache from advertising ranges support with every response, a feature that is simply not useful for the objects on that site, saving another 22 bytes on every mobile response.
Header unset Accept-Ranges
It should be possible to selectively disable the Accept-Ranges for small and pre-compressed objects for the main site, but that level of engineering feels too much right now given that I am already allowing an extra ~100 bytes in a fatter head for those HTML pages anyway!
Those 29 bytes saved for every mobile page lead-in could allow a few more words of content into the first packet on the wire to reach the browser. That won't show up in TTFB metrics, but may improve perceived speed and usability a bit.
If a dial-up/2G audience were to prove to be key, then removing superfluous HTML tags and attributes (and even attribute quotes) in body text, and moving some of the aside/sidebar material down the flow of text so that real words start as early as possible, would help.
A WebPageTest fetching this page over a nominal dial-up (~50kbps, 120ms RTT) from Manchester ie ~200 miles away, a radius within which most of this site's readers are likely to be, shows a start to the content on first paint (1.3s) which I also take to be the contents of the first packet, unpacked. See Figure "mobile first render screenshot". WebPageTest gives a "C" for the TTFB of 874ms, against a target of 607ms. (Note that at 1.5s the full above-the-fold content is rendered.)
A WebPageTest fetch of the non-mobile page, with a fatter HTTP and HTML head, shows significantly less of the page rendered in the first paint/packet, indicating the potential value of chipping away even in these small ways. See Figure "non-mobile first render screenshot". (Note that at 1.5s the full above-the-fold content is rendered.)
Bloat is bad!
HTTP/2 header compression should significantly reduce response overheads, even for the first, critical-path, HTML page response, but these optimisations should not hurt.
2017-07-14: Fat Head
While (just for a laugh) assessing how well this site loads under more extreme conditions, eg on dial-up, I was yet again struck by how there is about 2kB of stuff in the HTML before the first actual word of core content arrives. A couple of seconds of delay to a dial-up user may not be the end of the world, though is not great. But even after compression that means that no core above-the-fold content arrives in the first TCP frame, even if the frame is 1460 bytes rather than 512 bytes.
I try hard to keep my head from getting too fat [hhead, uo], ho ho, and want to see bytes of real content delivered to the user as fast as possible, so I have been looking at it carefully again and decided to kill the 'meta keywords', which no human reads and most search engines ignore, or treat as a negative signal if anything. I am sure there's more than one opinion to be had, but I have slimmed the head down by more than 80 bytes pre-compression, and I am seeing what else I can squeeze out, such as superfluous levels of tag nesting. Anything to get the first word of content nearer the start of the data flow towards the client.
This effort is not limited to the head (though the head is key, since none of it is visible in the page): all the boilerplate low-information preamble such as navigation is in my sights too!
I have added code to the makefile to fail pages whose preamble is too large:
...
    @.work/wrap_art > .$@.tmp .$@ "$(PAGES)"
    @SZ=`awk '{print} /<main>/ {exit}' < .$@.tmp | gzip | wc -c`; \
    if [ $$SZ -gt ${MAXCOMPHEADSIZE} ]; then \
        echo "$@ header too large ($$SZ; max ${MAXCOMPHEADSIZE})."; \
        exit 1; \
    fi
...
As things stand a minimal page (eg minimal title, description, og:image, no ads) would compress to just about fit in a single typical 1460-byte TCP frame, though that does not account for HTTP response headers.
It is probably difficult to fit an entire meaningful, pleasant HTML page response body into one TCP frame (for a new visitor) without (say) displacing everything into other objects, such as by avoiding inlining any CSS. But it may be worth aiming for, especially if the experience for first-time visitors with an empty cache can be kept fast.
2017-07-13: .htmlgz
I am adding the following to my global Apache mime.conf configuration file to enable basic static gzip encoding/compression for common types for all sites. (Some other content negotiation configuration is needed too.)
AddEncoding gzip .jsgz .cssgz .htmlgz .xmlgz
AddType application/x-javascript .jsgz
AddType text/css .cssgz
AddType text/html .htmlgz
AddType application/xml .xmlgz
(The Apache mod_mime configuration is clear that the x- type should be used in config where the client may ask with or without.)
Having restarted Apache I can now directly request the .htmlgz version of files, a little smaller and with some signs of a reduced (and more stable) TTFB. Taking tens of milliseconds out of the critical path is like moving a little closer to the user...
Indeed it is evident that when the CPU is busy, the on-the-fly gzip can take ~10 times longer than the pre-compressed version to deliver the final byte, eg 962ms in one case for the 53.0kB-on-the-wire .html fetch vs ~96ms for the 52.8kB-on-the-wire .htmlgz fetch. (At the moment gzip -9 is being used for pre-compression rather than zopfli, so the on-the-wire savings will improve.) With an idle CPU those delivery times drop to more like ~90ms and ~30ms.
This throws up an interesting potential duplicate content problem, possibly robustly fixed by ensuring that even the www/main page version points back to the .html version of itself as canonical, which may help avoid problems where external links add spurious parameters for example. But slightly annoying to be immediately expanding the head to deal with this.
I initially enable this static serving for the normal .html URLs on just the main site with this in the site configuration:
RewriteEngine on
# Serve pre-compressed content (eg HTML) if possible.
# If client accepts compressed files...
RewriteCond %{HTTP:Accept-Encoding} gzip
# ... and if the pre-compressed file exists...
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME}gz -s
# ... then send .XXXgz content instead of .XXX compressed on the fly.
RewriteRule ^(.+)\.(html|css|js|xml)$ $1.$2gz [L]
Note that %{DOCUMENT_ROOT}%{REQUEST_FILENAME}gz is required for virtual hosts to check for the pre-compressed file.
I then copied these rules to the mobile (m.) configuration.
This set of rules seems to correctly capture an inbound '/' request internally rewritten to '/index.html' and then by these to '/index.htmlgz'.
This tweak should typically shave tens of milliseconds off the critical path to displaying content, especially when the server is at all busy.
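A quick way to confirm that the rewrite really is serving the pre-compressed file (a sketch; URL illustrative) is to check that the downloaded size matches the .htmlgz file rather than the on-the-fly mod_deflate output:
# Bytes actually transferred when the client accepts gzip (sketch).
curl -s -o /dev/null -w '%{size_download} bytes\n' \
     -H 'Accept-Encoding: gzip' http://www.example.org/index.html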
Smaller, Faster, Less CPU
Switching to using zopfli (on default settings) to precompress the HTML seems to be yielding a few percent further transfer savings, eg down from 50588 to 48566 bytes for the largest (compressed) mobile page, and down from 7695 to 7497 bytes for the main site index page. The compression time on the RPi2 seems to be at most a few seconds.
Zopflipng For Key Image
As an experiment zopflipng is now used when generating an image that is shown above the fold on the front page, taking several hundred bytes off optipng's best efforts (down to 2029 bytes from about 2637).
Because It's There
Since all the stuff was in place I created the .jsgz files alongside the plain .js files for the Share42 buttons. Saves a grand total of 21 bytes per transfer, probably, but does avoid having to fire up mod_deflate at all, which may be better under load. It may yet be better to remove these again to simplify life.
2017-07-12: Zopfli and JS
There is only one, tiny (already minified other than one small copyright notice) JavaScript file currently used over most of the site, to support the Share42 social-media buttons.
% ls -al share42.js
-rw-r--r--+ 1 dhd staff 3298 31 May 05:53 share42.js
% gzip -v6 < !$ | wc -c
gzip -v6 < share42.js | wc -c
 61.3%
    1277
% gzip -v9 < share42.js | wc -c
 61.4%
    1271
% ./zopfli -c share42.js | wc -c
    1256
% ./zopfli --i5000 -c share42.js | wc -c
    1256
So zopfli is able to save a maximum of about 15 bytes over gzip -9, maybe 21 bytes over on-the-fly compression. Also, loading this file is not on the critical path for page display. So unless the time to start up and run the dynamic compression is significantly more than a pure static sendfile() serve, there is little point supporting the complexity to handle this specially, compared to the risk of breaking something subtle! The response probably all just about fits in a single TCP frame anyway.
In an as-fast-as-possible London-based WebPageTest run the time to first byte once the connection is opened for each of the (similarly-sized) static icons PNG, compressed-on-the-fly favicon.ico, and compressed-on-the-fly JS file is 21 or 26ms, suggesting that any mod_deflate start-up time is negligible, at least for small files.
The new Chrome 59 coverage tool suggests that about half the JavaScript is not used in this configuration and could thus probably be manually removed to much greater effect!
Note though that TTFB for a typical HTML file (34kB uncompressed, 14kB transferred) is 32ms (followed by 11ms download time), while the monster 202kB uncompressed HTML file (28kB transferred) has 251ms TTFB (+24ms download). Thus this may benefit from being precompressed, and indeed the largest pages could possibly be served up to ~250ms faster if so. That TTFB can also be highly variable, eg 69ms to 759ms in one set of runs, which may be mod_deflate, or may be Apache starting a new thread to serve the page.
Maybe this could save as much as 1ms per uncompressed 1kB, implying that mod_deflate manages a very decent ~1MB/s throughput on the RPi2, and potentially accelerates delivery as-is on any user-side line slower than about ~8Mbps.
2017-07-10: Zopfli
Following on from the precompression thoughts I downloaded and built zopfli and zopflipng from the official repo [zopfli]. No 'apt-get'-able package seemed to be available for my RPi. The makefile worked first time on my Mac.
Straight out of the box zopflipng seemed able to strip significant space from a PNG already shrunk by optipng (on moderate settings), and conversely optipng -o7 was unable to improve on zopflipng. (Also see [ZOLFB].)
% ./zopflipng 20170707-16WWmultisensortempL.png out.png
Optimizing 20170707-16WWmultisensortempL.png
Input size: 20633 (20K)
Result size: 17619 (17K). Percentage of original: 85.392%
Result is smaller
% optipng -o7 out.png
** Processing: out.png
1280x480 pixels, 8 bits/pixel, 17 colors in palette
Input IDAT size = 17499 bytes
Input file size = 17619 bytes
Trying:
out.png is already optimized.
I then tried for 'maximum' lossless compression with zopflipng, with brute-force exploration of filters etc per the command-line help (it also offers some lossy modes), which worked my Mac very hard for ~30 minutes and would thus be untenable for the RPi:
% ./zopflipng --iterations=500 --filters=01234mepb 20170707-16WWmultisensortempL.png out.png
Optimizing 20170707-16WWmultisensortempL.png
Input size: 20633 (20K)
Result size: 17562 (17K). Percentage of original: 85.116%
Result is smaller
I think that running more iterations might nominally gain some more compression, but returns are likely to be vanishingly small; the default number of iterations is reported to be 15!
In any case, zopflipng may well be worth using instead of optipng when available, subject to runtime.
I then manually compressed the social-media-buttons PNG and saved a few bytes, which should very slightly reduce bandwidth for each new visitor (~60 bytes!), but it's a one-off job...
% ./zopflipng --iterations=5000 --filters=01234mepb icons.png icons-out.png
Optimizing icons.png
Input size: 2318 (2K)
Result size: 2255 (2K). Percentage of original: 97.282%
Result is smaller
HTML Super-compression
I then got to the main target: precompressing the HTML files.
I downloaded a snapshot of this page via my browser as a representative test file.
% ls -al test.html
-rw-r--r--@ 1 dhd staff 15643 10 Jul 10:53 test.html
% gzip -v1 < test.html | wc -c
 54.8%
    7062
% gzip -v3 < test.html | wc -c
 56.1%
    6861
% gzip -v6 < test.html | wc -c
 58.3%
    6511
% gzip -v9 < test.html | wc -c
 58.4%
    6507
% ./zopfli -c test.html | wc -c
    6351
% ./zopfli -c --i5000 test.html | wc -c
    6346
Annoyingly zopfli does not use similar-style command-line arguments to all the other *nix compression tools such as pack/compress/gzip.
Most of the extra compression with zopfli happens with default settings and is quick (less than ~0.1s on my Mac with this test file). Plenty fast enough probably even on the RPi to precompress a single HTML file after wrapping it from sources.
Pushing up to 5000 iterations barely compresses more, but takes 13s.
By comparison, gzip -9 reports 0.00s, ie less than 0.01s.
Repeating with the current largest HTML pages on the site:
% ls -al test2.html
-rw-r--r--@ 1 dhd staff 209700 10 Jul 11:35 test2.html
% gzip -v6 < test2.html | wc -c
 86.0%
    29169
% gzip -v9 < test2.html | wc -c
 86.5%
    28244
% ./zopfli -c test2.html | wc -c
    26340
Here zopfli takes a little under 2.5s on my Mac (so it will be getting slow on the RPi!) to save ~1 TCP frame compared to gzip -9 and ~2 TCP frames compared to likely on-the-fly compression. This would also save server CPU compression effort each time, which may be significant, including for TTFB depending on configuration.
Anyhow, zopfli seems likely to give me a few extra percent compression over gzip -9 (or the -6 probably used on-the-fly by Apache's mod_deflate) for negligible run time per semi-static HTML file.
My strategy could be to statically compress with zopfli if available, else with the universally-available gzip -9.
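That strategy is easy to express in the build script; a sketch (filenames illustrative; both tools emit gzip-format output here):
# Prefer zopfli for static compression, falling back to gzip -9 (sketch).
if command -v zopfli >/dev/null 2>&1; then
    zopfli -c page.html > page.htmlgz
else
    gzip -9 -c page.html > page.htmlgz
fi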
2017-07-09: Description Length
I have pushed up my minimum (meta) description length a little, and tried to shift the descriptions to be calls to action or otherwise less passive.
2017-07-06: Precompression And Bot Defence
Just because I can, I am contemplating pre-compressing some of the more popular (text, but could also be data) content, letting me use gzip -9 for better compression just once when a page is re-wrapped, rather than on the fly every time with mod_deflate (which would still be the default).
My favourite search engine dug out this suggestion for me, of which the key bit is:
AddEncoding gzip .jsgz .cssgz .htmlgz .datagz
AddType application/x-javascript .jsgz
AddType text/css .cssgz
AddType text/html .htmlgz
AddType text/plain .datagz
# ...
RewriteEngine on
# If client accepts compressed files
RewriteCond %{HTTP:Accept-Encoding} gzip
# and if compressed file exists
RewriteCond %{REQUEST_FILENAME}gz -f
# send .html.gz instead of .html
RewriteRule ^(.+)\.(html|css|js|data)$ $1.$2gz [L]
Magic: the .htmlgz files can get a bit more compression at HTML generation time, and Apache can selectively serve the pre-compressed results with sendfile(), thus maximising throughput and minimising server load. This could even reduce Time To First Byte (TTFB), which correlates moderately well with perceived site speed.
It could also be a wicked way to entirely statically serve a logic bomb to those annoying bots continually trying to break into my server.
In late news: nearly half of the "Short meta descriptions" warnings have gone from the Google Search Console. That was fast!
2017-07-03: Meta Descriptions
Google's Search Console keeps whining about some of my meta descriptions being too short (the longest reported as such being ~50 characters). So I added code to my wrapper/build script to warn if shorter than 60 characters (or longer than 160), and I went and fixed all the 'short' descriptions. I expect the search console complaints to hang around for weeks though!
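A sketch of that length check (DESC and PAGE are hypothetical variables already set by the wrapper script; the 60 and 160 character bounds are as described above):
# Warn on meta descriptions outside the 60-160 character window (sketch).
LEN=`printf '%s' "$DESC" | wc -c`
if [ "$LEN" -lt 60 ] || [ "$LEN" -gt 160 ]; then
    echo "WARNING: $PAGE: meta description is $LEN characters (want 60-160)" >&2
fi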
Interestingly those descriptions are often shown in search result snippets, so should probably be tweaked to be an enticing call to action in a sentence or two. Later...
2017-07-02: ATF Ad Injection
I have adjusted the ad-injection code to attempt to insert an above-the-fold ad (for desktops) just above the first substantial heading after ~50 words of text. (Ads will be omitted on noindex pages, pages with 'raw' AdSense or multiple ads, etc, as before.)
I also added a NOADS header directive to prevent injection of any ads in pages marked up with it, beyond those excluded above, to keep pages clear of ads where problematic in some way.
The aim is to need no explicit ad markup on most pages (because it's a partly presentational task that gets in the way of creating good content) while still catching some decent revenue opportunities.
I will have to keep an eye out for interference with (eg) floats.
2017-07-01: Footer Ad Injection
I am trying a new scheme to inject a standard responsive ad into the bottom of each page provided that there is no raw AdSense already on the page, and no more than one generic ad already present (to avoid busting the AdSense 3-per-page limit). Also this trailing ad is excluded from pages marked 'noindex' since they may be 'thin' on original content.
I note that my AdSense console is reporting not many more page impressions in 30 days than I estimate non-robot page views per week. Some of that ~4:1 discrepancy will be because I don't have ads on every page, and I don't run (separate) analytics so as to try to keep page weight down. But much will be due to rampant ad blocking by visitors.