{"id":3458,"date":"2016-04-16T10:08:46","date_gmt":"2016-04-16T15:08:46","guid":{"rendered":"http:\/\/huewhite.com\/umb\/?p=3458"},"modified":"2016-04-16T10:08:46","modified_gmt":"2016-04-16T15:08:46","slug":"engaging-hard-problems","status":"publish","type":"post","link":"https:\/\/huewhite.com\/umb\/2016\/04\/16\/engaging-hard-problems\/","title":{"rendered":"Engaging Hard Problems"},"content":{"rendered":"<p>While related to the odd approaches to software bugs discussed <a href=\"https:\/\/huewhite.com\/umb\/2015\/12\/26\/future-software-design\/\" target=\"_blank\">here<\/a>, <em><strong>NewScientist&#8217;s<\/strong><\/em> &#8220;<a href=\"http:\/\/www.newscientist.com\/article\/mg23030672-500-to-make-computers-better-let-them-get-sloppy\/\" target=\"_blank\"><em>Let\u2019s cut them some slack<\/em><\/a>&#8221; (2 April 2016, paywall), by Paul Marks, makes the case that engaging with the harder computational problems require us to be able to accept being generally correct while being precisely incorrect, and using that acceptance to strategically <em>underpower<\/em> tomorrow&#8217;s big scientific supercomputers &#8211; a dimension to computing that I&#8217;ve never considered.<\/p>\n<p>Let me illustrate the current context: Most programmers worry exclusively about functionality, which is to say that the proper answer is computed on demand. A few get tasked, often after the initial solution has been designed and implemented, with questions of performance &#8211; did the user go get a cup of coffee while we computed the solution? (The extreme example of this is the P=NP question discussed <a href=\"http:\/\/en.wikipedia.org\/wiki\/P_versus_NP_problem\" target=\"_blank\">here<\/a>.) And then come the large problems that consume unsupportable amounts of resources &#8211; the scalability problem. 
The goal is to reduce consumption of resources &#8211; commonly CPU cycles, memory, and access to databases &#8211; while still computing a proper solution such that, analogous to\u00a0performance, growth in consumption of resources\u00a0does not correspond\u00a0to growth in input, but instead has a sublinear correspondence, such as <em>log<sub>2<\/sub>N<\/em>.<\/p>\n<p>But <em>this<\/em> problem is literally that of <em>power<\/em>:<\/p>\n<blockquote><p>Next-generation \u201cexaflop\u201d machines, which are capable of 10<sup>18<\/sup> operations a second, could consume as much as 100 megawatts, the output of a small power station.<\/p><\/blockquote>\n<p>And they want to engage this problem in hardware, not software. How?<\/p>\n<blockquote><p>[Krishna] <a href=\"http:\/\/www.cs.rice.edu\/~kvp1\/\" target=\"_blank\">Palem\u2019s<\/a> answer was to design a probabilistic version of CMOS technology that was deliberately unstable. His team built digital circuits in which the most significant bits \u2013 those representing values that need to be accurate \u2013 get a regular 5-volt supply, but the least significant bits get 1 volt. \u201cThe significant bits are running at a proper, well-behaved voltage, but the least significant get really slack,\u201d says Palem. As many as half the bits representing a number can be hobbled like this.<\/p>\n<p>This means that Palem\u2019s version of an adder, a common logic circuit that simply adds two numbers, doesn\u2019t work with the usual precision (see \u201c<a href=\"https:\/\/www.newscientist.com\/article\/mg23030672-500-to-make-computers-better-let-them-get-sloppy\/#bx306725B1\">Missing bits<\/a>\u201d). \u201cWhen it adds two numbers, it gives an answer that is reasonably good but not exact,\u201d he says. \u201cBut it is much cheaper in terms of energy use.\u201d<\/p><\/blockquote>\n<p>Assuming all the main memory is treated this way, it seems a bit like &#8230; metaphors fail me. 
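<\/p>
<p>As a purely hypothetical software illustration &#8211; none of this code is from the article &#8211; Palem-style inexact addition can be mimicked by computing the exact sum and then randomly flipping each of the unreliable low-order bits:<\/p>

```python
import random

def inexact_add(a, b, slack_bits=4, flip_prob=0.1):
    # Compute the exact sum, then model the low-voltage storage by
    # flipping each of the bottom slack_bits with probability flip_prob.
    total = a + b
    for bit in range(slack_bits):
        if random.random() > 1.0 - flip_prob:
            total ^= 2 ** bit  # flip one unreliable bit
    return total
```

<p>With flip_prob at zero the adder is exact; with four slack bits the answer is never off by more than fifteen &#8211; generally correct, precisely incorrect, exactly the trade the article describes.<\/p>
<p>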
The alternative, however, implies a language<sup>1<\/sup> that can control which pieces of memory must be precise, and which can be fuzzy. While the hackers are no doubt drooling at the thought, the <a href=\"http:\/\/en.wikipedia.org\/wiki\/Functional_programming\" target=\"_blank\">functional paradigm<\/a> folks are probably twitching. (Not that this is completely unprecedented, as <a href=\"http:\/\/en.wikipedia.org\/wiki\/C_(programming_language)\" target=\"_blank\">C<\/a> used to support &#8211; and no doubt some implementations still do &#8211; the ability to request that certain variables be kept in the CPU&#8217;s registers rather than in main memory. Registers are much faster to access than main memory.)<\/p>\n<p>Researchers think application selection will be\u00a0key. Climate forecasting is one area of keen interest:<\/p>\n<blockquote><p>The pay-offs could be huge. Today\u2019s climate models tackle Earth\u2019s atmosphere by breaking it into regions roughly 100 kilometres square and a kilometre high. [<a href=\"http:\/\/www2.physics.ox.ac.uk\/contacts\/people\/palmer\" target=\"_blank\">Tim Palmer<\/a>, a climate physicist at the University of Oxford,] thinks inexact computing would get this down to cubes a kilometre across \u2013 detailed enough to model individual clouds.<\/p>\n<p>\u201cDoing 20 calculations inexactly could be much more useful than 10 done exactly,\u201d says Palmer. This is because at 100-kilometre scales, the simulation is a crude reflection of reality. The computations may be accurate, but the model is not. Cutting precision to get a finer-grained model would actually give you greater accuracy overall. \u201cIt is more valuable to have an inexact answer to an exact equation than an exact answer to an inexact equation,\u201d he says. \u201cWith the exact equation, I\u2019m really describing the physics of clouds.\u201d<\/p><\/blockquote>\n<p>But some parts of the job require exact precision, and some don&#8217;t, right? 
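<\/p>
<p>Just to make my earlier language speculation concrete &#8211; these names are invented, not any real proposal &#8211; such a feature might amount to a per-variable tag telling the runtime which cells may be stored sloppily:<\/p>

```python
class Fuzzy:
    # Hypothetical annotation: a value whose low-order bits live in
    # unreliable storage. The loss is modelled here by simply zeroing
    # the bottom slack_bits on write.
    def __init__(self, value, slack_bits=4):
        self.value = value - value % 2 ** slack_bits

def read(x):
    # Reading a Fuzzy cell returns its degraded contents; plain
    # (precise) values pass through untouched.
    return x.value if isinstance(x, Fuzzy) else x

# A fuzzy operand next to a precise one: 1000 loses its bottom bits,
# while 23 is kept exact.
total = read(Fuzzy(1000)) + read(23)
```

<p>A compiler could route the <em>Fuzzy<\/em> cells to low-voltage memory while keeping loop counters and pointers on well-behaved silicon.<\/p>
<p>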
Do they know how to pick?<\/p>\n<blockquote><p>Researchers are attacking the problem from several different angles. Mostly, it comes down to devising ways to specify thresholds of accuracy in code so that programmers can say when and where errors are acceptable. The software then computes inexactly only in parts that have been designated safe.<\/p><\/blockquote>\n<p>Ah, no doubt I&#8217;d read this once already and it triggered my above speculations about computer languages, even though I don&#8217;t recall it. Brain inexactitude. But I do have my own observation here: if you can partition the computing space and you know the desired result for some particular set of inputs, or, better yet, multiple pairs of\u00a0inputs and\u00a0desired outputs, then you should be able to run repeated simulations in which members of the computing space are randomly selected to receive various levels of precision within some constraint of total computation. Run it a few hundred or thousand times, apply some statistical analysis, and soon you (or at least your computer) may understand where precision is required, and where it is not,\u00a0in your\u00a0problem model.<\/p>\n<p>This all does make me wish I worked in this field. Sounds quite challenging.<\/p>\n<p>And remember my &#8220;brain inexactitude&#8221; moment, above?<\/p>\n<blockquote><p>But there is a huge discrepancy in power consumption between the brain and a supercomputer, says Palmer <a title=\"\" href=\"https:\/\/d1o50x50snmhul.cloudfront.net\/wp-content\/uploads\/2016\/03\/mg30672501.jpg\" data-rel=\"lightbox-0\">(see \u201cPower hungry\u201d)<\/a>. \u201cA supercomputer needs megawatts of power, yet a human brain runs on the power of a light bulb.\u201d What could account for this?<\/p>\n<p>Palmer and colleagues at the University of Sussex in Brighton, UK, are exploring whether random electrical fluctuations might provide probabilistic signals in the brain. 
His theory is that this is what lets it do so much with so little power. Indeed, the brain could be the perfect example of inexact computing, shaped by pressure to keep energy consumption down.<\/p><\/blockquote>\n<p>I&#8217;m somewhat suspicious of attempts to rate the brain in any measure we use for computers, be it <a href=\"http:\/\/en.wikipedia.org\/wiki\/FLOPS\">flops<\/a> or instructions\/second. However, the overall point holds true: how do we solve the problems we solve while consuming power equivalent to that used by a light bulb? (And, yes, I understand the allusion to the old visual joke.)<\/p>\n<p>And, finally, even scientists can be drama queens &#8211; which doesn&#8217;t mean they are wrong.<\/p>\n<blockquote><p>What\u2019s clear is that to make computers better, we need to make them worse. Palmer is convinced that partly abandoning Turing\u2019s concept of how a computer should work is the way forward if we are to discover the true risks we face from global warming. \u201c<strong>It could be the difference between climate change being a relatively manageable problem and one that will be an existential problem for humanity.<\/strong>\u201d<\/p><\/blockquote>\n<p>My <em><strong>bold<\/strong><\/em>.<\/p>\n<hr \/>\n<h5><sup>1<\/sup>We&#8217;ll just skip assembly and machine language.<\/h5>\n","protected":false},"excerpt":{"rendered":"<p>While related to the odd approaches to software bugs discussed here, NewScientist&#8217;s &#8220;Let\u2019s cut them some slack&#8221; (2 April 2016, paywall), by Paul Marks, makes the case that engaging with the harder computational problems requires us to be able to accept being generally correct while being precisely incorrect, and using \u2026 <a class=\"continue-reading-link\" href=\"https:\/\/huewhite.com\/umb\/2016\/04\/16\/engaging-hard-problems\/\"> Continue reading <span class=\"meta-nav\">&rarr; 
<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-3458","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/posts\/3458","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/comments?post=3458"}],"version-history":[{"count":4,"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/posts\/3458\/revisions"}],"predecessor-version":[{"id":3462,"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/posts\/3458\/revisions\/3462"}],"wp:attachment":[{"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/media?parent=3458"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/categories?post=3458"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/huewhite.com\/umb\/wp-json\/wp\/v2\/tags?post=3458"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}