Engaging Hard Problems

While related to the odd approaches to software bugs discussed here, NewScientist’s “Let’s cut them some slack” (2 April 2016, paywall), by Paul Marks, makes the case that engaging with the harder computational problems requires us to accept being generally correct while being precisely incorrect, and to use that acceptance to strategically underpower tomorrow’s big scientific supercomputers – a dimension of computing that I’d never considered.

Let me illustrate the current context: Most programmers worry exclusively about functionality, which is to say that the proper answer is computed on demand. A few get tasked, often after the initial solution has been designed and implemented, with questions of performance – did the user go get a cup of coffee while we computed the solution? (The extreme example of this is the P=NP question discussed here.) And then come the large problems that consume unsupportable amounts of resources – the scalability problem. The goal is to reduce consumption of resources – commonly CPU cycles, memory, and access to databases – while still computing a proper solution, such that, analogous to performance, consumption of resources does not grow in step with the input but instead grows sub-linearly, such as log₂ N.
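To make that concrete, here is a toy comparison (my own illustration, not from the article) of how the work grows for a linear scan versus a binary search over sorted data – the kind of sub-linear growth that scalability work chases:

```python
import math

# Work grows with input size N: a linear scan takes ~N steps, while a
# binary search over sorted data takes ~log2(N) steps -- the sub-linear
# growth that scalability work aims for.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"N={n:>13,}  linear steps={n:>13,}  log2(N) steps={math.log2(n):5.1f}")
```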

But this problem is literally that of power:

Next-generation “exaflop” machines, which are capable of 10¹⁸ operations a second, could consume as much as 100 megawatts, the output of a small power station.

And they want to engage this problem in hardware, not software. How?

[Krishna] Palem’s answer was to design a probabilistic version of CMOS technology that was deliberately unstable. His team built digital circuits in which the most significant bits – those representing values that need to be accurate – get a regular 5-volt supply, but the least significant bits get 1 volt. “The significant bits are running at a proper, well-behaved voltage, but the least significant get really slack,” says Palem. As many as half the bits representing a number can be hobbled like this.

This means that Palem’s version of an adder, a common logic circuit that simply adds two numbers, doesn’t work with the usual precision (see “Missing bits”). “When it adds two numbers, it gives an answer that is reasonably good but not exact,” he says. “But it is much cheaper in terms of energy use.”
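To convince myself I understood the idea, here’s a toy software analogue (my own sketch – Palem’s work is in hardware, not code): compute the sum exactly, then scramble the low-order “slack” bits to stand in for the under-volted transistors.

```python
import random

def inexact_add(a: int, b: int, slack_bits: int = 8) -> int:
    """Compute the sum exactly, then scramble the low-order `slack_bits`
    bits -- a stand-in for the unreliable, low-voltage least significant
    bits in Palem's circuits."""
    exact = a + b
    mask = (1 << slack_bits) - 1
    return (exact & ~mask) | random.getrandbits(slack_bits)

# "Reasonably good but not exact": the error is bounded by 2**slack_bits - 1.
print(inexact_add(123_456, 654_321))   # near 777,777, low bits scrambled
```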

Assuming all the main memory is treated this way, it seems a bit like … metaphors fail me. The alternative, however, implies a language¹ that can control which pieces of memory must be precise, and which can be fuzzy. While the hackers are no doubt drooling at the thought, the functional paradigm folks are probably twitching. (Not that this is completely unprecedented: C’s register keyword lets you ask – it’s only a hint, which implementations are free to ignore – that certain variables be kept in the CPU’s registers rather than main memory. Registers are much faster to access than main memory.)
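Pure speculation on my part, but such a language might let you mark values something like this (everything below is invented for illustration; nothing like it appears in the article):

```python
import random

def fuzzy(value: float, slack_bits: int = 8) -> float:
    """Invented stand-in for a 'may be stored sloppily' annotation:
    jiggle the value by an amount on the order of the bottom
    `slack_bits` bits of a double's 52-bit mantissa."""
    return value * (1.0 + random.uniform(-1.0, 1.0) * 2.0 ** -(52 - slack_bits))

cloud_water = fuzzy(0.37)   # noise in the low bits is tolerable here
grid_index  = 14_201        # indexing and control flow must stay exact
```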

Researchers think application selection will be key. Climate forecasting is one area of keen interest:

The pay-offs could be huge. Today’s climate models tackle Earth’s atmosphere by breaking it into regions roughly 100 kilometres square and a kilometre high. [Tim Palmer, a climate physicist at the University of Oxford,] thinks inexact computing would get this down to cubes a kilometre across – detailed enough to model individual clouds.

“Doing 20 calculations inexactly could be much more useful than 10 done exactly,” says Palmer. This is because at 100-kilometre scales, the simulation is a crude reflection of reality. The computations may be accurate, but the model is not. Cutting precision to get a finer-grained model would actually give you greater accuracy overall. “It is more valuable to have an inexact answer to an exact equation than an exact answer to an inexact equation,” he says. “With the exact equation, I’m really describing the physics of clouds.”

But some parts of the job require exact precision, and some don’t, right? Do they know how to pick?

Researchers are attacking the problem from several different angles. Mostly, it comes down to devising ways to specify thresholds of accuracy in code so that programmers can say when and where errors are acceptable. The software then computes inexactly only in parts that have been designated safe.

Ah, no doubt I’d read this once already and it triggered my above speculations about computer languages, even though I don’t recall it. Brain inexactitude. But I do have my own observation here: if you can partition the computing space, and you know the desired result for some particular set of inputs – or, better yet, for multiple pairs of inputs and desired outputs – then you should be able to run repeated simulations in which members of the computing space are randomly assigned various levels of precision, within some constraint on total computation. Run it a few hundred or thousand times, apply some statistical analysis, and soon you (or at least your computer) may understand where precision is required, and where it is not, in your problem model.
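Here is roughly what I have in mind, as a runnable toy (entirely my own invention – the component names and the stand-in model are made up): assign each piece of the model a random precision, score the result against a known-good answer, repeat many times, and see which pieces drive the error when they are run sloppily.

```python
import random
from statistics import mean

COMPONENTS = ["advection", "radiation", "convection", "boundary"]
PRECISIONS = [8, 16, 32, 64]    # precision levels (bits) to hand out
BUDGET = 120                    # crude cap on total bits spent per run

# Stand-in for the real simulation: "radiation" secretly matters a lot and
# "boundary" barely at all; the experiment should rediscover that.
SENSITIVITY = {"advection": 0.5, "radiation": 4.0, "convection": 1.0, "boundary": 0.1}

def run_model(assignment):
    # Error against the known-good answer under this precision assignment.
    return sum(w * 2.0 ** -assignment[c] for c, w in SENSITIVITY.items())

def random_assignment():
    while True:
        a = {c: random.choice(PRECISIONS) for c in COMPONENTS}
        if sum(a.values()) <= BUDGET:   # respect the total-computation constraint
            return a

def where_precision_matters(trials=10_000):
    low_precision_error = {c: [] for c in COMPONENTS}
    for _ in range(trials):
        a = random_assignment()
        err = run_model(a)
        for c, bits in a.items():
            if bits == min(PRECISIONS):            # this component ran sloppily
                low_precision_error[c].append(err)
    return {c: mean(errs) for c, errs in low_precision_error.items() if errs}

# Components with the largest mean error when starved of precision are the
# ones that genuinely need to stay exact.
print(where_precision_matters())
```

In a real setting, run_model would be the actual simulation scored against the known input/output pairs; here it’s a stand-in so the loop has something to chew on.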

This all does make me wish I worked in this field. Sounds quite challenging.

And remember my “brain inexactitude” moment, above?

But there is a huge discrepancy in power consumption between the brain and a supercomputer, says Palmer (see “Power hungry”). “A supercomputer needs megawatts of power, yet a human brain runs on the power of a light bulb.” What could account for this?

Palmer and colleagues at the University of Sussex in Brighton, UK, are exploring whether random electrical fluctuations might provide probabilistic signals in the brain. His theory is that this is what lets it do so much with so little power. Indeed, the brain could be the perfect example of inexact computing, shaped by pressure to keep energy consumption down.

I’m somewhat suspicious of attempts to rate the brain in any measure we use for computers, be it flops or instructions/second. However, the overall point holds true: how do we solve the problems we solve while consuming power equivalent to that used by a light bulb? (And, yes, I understand the allusion to the old visual joke.)

And, finally, even scientists can be drama queens – which doesn’t mean they are wrong.

What’s clear is that to make computers better, we need to make them worse. Palmer is convinced that partly abandoning Turing’s concept of how a computer should work is the way forward if we are to discover the true risks we face from global warming. “It could be the difference between climate change being a relatively manageable problem and one that will be an existential problem for humanity.”

My bold.


¹We’ll just skip assembly and machine language.

