Getting the job done, minimising energy use and CO2 production...
Is it possible to tune a piece of software to do the same job as before but with less power draw and less CO2 generation (related goals, though not exactly the same), or even to write software from scratch to be 'greener' and more energy-efficient?
The answer is yes, though this goal is easier to achieve on hardware that is already energy-efficient, such as smart handhelds, laptops and newer desktops/servers designed and built after it became fashionable to compete on computing power per watt (~2006)!
Here's a (lightly-edited) sample from LessWatts.org on how good design and coding can let the energy-efficiency features in hardware do their thang:
- Avoid polling. Avoid frequent, unnecessary polling, which wakes the CPU without doing useful work; prefer event-driven notification.
- Race to Idle. Save power by running at the highest speed. Processors tend to be so good at saving power during idle that it's often better to go as fast as possible so that you can then be idle for longer.
- Turn devices off. Devices left open can prevent the system from entering power-saving states.
- Group Timers. Many programs use timers, so group them to reduce idle wakeups.
- Use large buffers. Media playback requires a large buffer, large enough for a minute of audio or 20 minutes of video.
- Optimize Sleep Duty Cycle. It matters how frequently you go in and out of idle. Stay in idle for long periods of time. Avoid interrupting idle as much as possible.
- Beware of high-level languages. High-level languages are convenient tools for achieving results quickly, and often have features for doing complex things with minimal effort. However, be aware that some of these constructs are hard to implement, and sometimes the runtime environment that implements the high-level language does so by polling at a high frequency. When using high-level languages such as Java*, Visual C#*, Python* and Ruby, check the end result and try to avoid some of the more complex threading primitives. In addition, where you have a choice of runtime environment provider, evaluate different alternatives and versions.
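As a concrete sketch of the first point above, in Python: a sleep-based polling loop wakes the CPU continually even when nothing has changed, whereas blocking on a queue lets the process stay idle until real work arrives. (The worker and its doubling of inputs are invented purely for illustration.)

```python
import queue
import threading

# Wasteful pattern: wakes the CPU ~100 times/second even with no work:
#   while not done:
#       time.sleep(0.01)
# Blocking on a queue instead causes no periodic wakeups at all.

work = queue.Queue()
results = []

def worker():
    while True:
        item = work.get()    # blocks without spinning until work arrives
        if item is None:     # sentinel value: shut down cleanly
            break
        results.append(item * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    work.put(i)
work.put(None)
t.join()
```

The same blocking-not-polling shape applies to sockets (`select`/`epoll`), file events (`inotify`) and GUI event loops.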
Also, techniques to reduce or group disc and network activity can help, possibly allowing a quiet device to go into a low-power sleep mode.
You may want to tune the OS a little too, for example to wait a little longer to write cached non-critical data to disc.
Wherever you carefully tune to reduce energy consumption, you can often improve responsiveness and performance at the same time. Energy-efficiency is not your enemy!
All the above techniques can help reduce the overall energy consumed in handling a particular computational workload, but not all watts and kilowatt-hours are equal in CO2 terms.
For example, if you draw energy when your local electricity grid demand is high then each unit of energy (eg kWh) may be more expensive and generated by more carbon-intense methods, and puts more strain on the grid. Postponing exactly the same work until a lower-demand time of day may reduce the CO2 generated, and if you have time-of-day metering (for example because you are in a large corporate data centre), you may also reduce your bills. Peak-demand times vary by season and location, but weekday evenings are often bad, as are summers in hot places because of air-con-driven demand, and winters in cold/dark places because of lighting and heating demand. Time your background and non-essential processing to run at night and you're probably saving both money and CO2 generation.
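A minimal sketch of such deferral, assuming a fixed overnight off-peak window (the window chosen here is illustrative; real peaks shift by season and region, as noted above):

```python
from datetime import datetime, time

# Illustrative overnight off-peak window: adjust for your local grid.
OFF_PEAK_START = time(23, 0)   # 11pm
OFF_PEAK_END = time(6, 0)      # 6am

def is_off_peak(now=None):
    """True when 'now' falls inside the wrap-around overnight window."""
    t = (now or datetime.now()).time()
    return t >= OFF_PEAK_START or t < OFF_PEAK_END
```

A batch job can call `is_off_peak()` on start-up and simply exit (to be retried by its scheduler later) when the answer is no.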
You can go one better if you have local RE (Renewable Energy) generation and/or you can monitor local time-of-day pricing explicitly, deferring non-essential processing as long as possible until energy is relatively abundant and cheap.
For example, for my Multimedia Gallery I defer such things as background AI work and self-healing data correction for up to several days, until the local system detects good energy availability. For servers in data centres that is handled simply by time of day, coupled with an understanding of local grid demand peaks, via a couple of cron jobs that set/clear a 'low-power' flag file. For my primary server the flag file(s) are set/cleared in response to the state of charge of the local RE system's battery: when the battery is fully charged and the server/laptop is off-grid, the Gallery absorbs the 'excess' energy by catching up on deferred tasks; when on-grid it always minimises its consumption.
I've tuned other applications to be sensitive to the same flags, eg by having them skip some cron-driven runs when the low-power flags are set. Simple to do, and with no significant loss of performance/usefulness.
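The flag-file check itself is tiny; here is a sketch with a hypothetical flag path (the real path, and the cron/battery-monitor logic that raises and clears the flag, are site-specific):

```python
import os
import tempfile

# Hypothetical flag path: in the scheme above this file would be
# created/deleted by cron jobs or the RE-battery monitor.
LOW_POWER_FLAG = os.path.join(tempfile.gettempdir(), "low-power-demo-flag")

def should_defer(flag=LOW_POWER_FLAG):
    """Cron-driven tasks call this first and exit quietly in low-power periods."""
    return os.path.exists(flag)

open(LOW_POWER_FLAG, "w").close()   # simulate cron raising the flag
deferred = should_defer()           # True: skip this run
os.remove(LOW_POWER_FLAG)           # simulate cron clearing it
running = not should_defer()        # True: safe to run
```

Using file existence as the signal is deliberately crude but robust: any language, shell script or cron entry can test it with no shared libraries or daemons.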
An alternative mechanism for computers always connected to the Internet (eg servers in data centres, or PCs/laptops on home broadband) would be to poll, regularly and automatically (maybe every few minutes), a centrally-managed Web site that reports when the grid is under stress. When such 'stress' (eg high demand and/or high CO2-intensity) is detected, the system could trim energy use: reducing the maximum CPU speed, dimming a laptop's display one notch, bringing forward screen blanking, spinning down discs where possible, batching activity better, and so on. The software to do this could be entirely free and out of sight, and would likely go almost unnoticed by users.

This directable reduction in demand might be quite significant, eg ~10%, and provided that the infrastructure to support it is not itself expensive or power-hungry (there's no especial reason to believe it would be), this could be a significant, easy and largely transparent addition to the load/demand management toolkit of grid operators. In the UK the service could be run by the National Grid, or (say) NETA which already publishes real-time data on the Web, or the local DNO (electricity distributor), or even one of the 'green' suppliers/generators such as Ecotricity.
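The client side of such a scheme could be very small indeed. Here is a sketch in which the URL, the JSON payload shape and the CO2-intensity threshold are all invented for illustration, since no such central service is assumed to exist:

```python
import json
import urllib.request

# Hypothetical central grid-status service and payload format.
GRID_STATUS_URL = "https://example.org/grid-status.json"

def grid_stressed(status):
    """Decide from a status payload whether this machine should trim energy use."""
    high_demand = status.get("demand") == "high"
    dirty = status.get("co2_intensity_g_per_kwh", 0) > 400  # illustrative threshold
    return high_demand or dirty

def poll_once(url=GRID_STATUS_URL, timeout=10):
    """Fetch the current status; intended to be called every few minutes."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return grid_stressed(json.load(resp))
```

On a `True` result the machine would apply its local trims (CPU cap, display dimming, disc spin-down) until a later poll reports the stress has passed.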
Note that for machines able to detect their local mains frequency (or voltage), this can be used as an addition/substitute for the Internet input. When the mains frequency drops (or in more extreme cases the voltage) it is an indication that the grid is under strain and can be used as a cue to trim CPU speed, display brightness, etc. This is probably harder in laptops with external adaptors than in desktops. This is like the Scottish 'Teleswitch' system (with radio rather than Internet). The same approach can be used in 'dumber' but energy-hungry domestic appliances such as the fridge/freezer, washing machine and dishwasher, to mitigate their draw when the mains frequency is low by adjusting thermostats slightly and reducing the rate of resistive heating, spreading their energy consumption over a longer time and away from the crisis. See my note on the potential value of "dynamic demand" control.
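The frequency cue reduces to a one-line comparison once a measurement is available; in this sketch the strain margin is illustrative and would need calibrating against real grid behaviour, not an official operating limit:

```python
# 50Hz is nominal for the UK grid (60Hz in North America); a sustained sag
# below nominal suggests demand is outrunning supply.
NOMINAL_HZ = 50.0
STRAIN_MARGIN_HZ = 0.2  # illustrative margin, not an official limit

def grid_under_strain(measured_hz, nominal=NOMINAL_HZ):
    """True when the measured mains frequency has sagged below the margin."""
    return measured_hz < nominal - STRAIN_MARGIN_HZ
```

In practice the measurement should be smoothed over several mains cycles before triggering any trim, to avoid reacting to momentary noise.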
The machine that serves this site is powered by local off-grid solar; draw is ~1W.
Copyright © Damon Hart-Davis 2007-2017.