We can use the above technique in any language, so what's special about Groovy? The answer is that in Groovy, memoization is already baked into the library as a function on closures. We don't have to create the hashmap and look values up ourselves; that's taken care of under the covers. To make use of this built-in facility, we have to transform the function fib into a closure. Let's transform the original non-memoized version.

  def fib
  fib = { position ->
    if (position < 2)
      BigInteger.ONE
    else
      fib(position - 1) + fib(position - 2)
  }.memoize()

We first defined a variable named fib. This initial definition is needed to bring the variable into the lexical scope of the closure, which enables the recursive call. Then we created a closure whose body is pretty much the body of the original function. Finally, on the closure we called the memoize function, which Groovy adds to closures. Under the covers, memoize creates a wrapper function that looks values up in a hashmap and does the memoization for us. Let's invoke this modified version with some position values, and let's be bold and increase the position size much further this time.

  timeit(5) { position -> fib(position) }
  timeit(5) { position -> fib(position) }
  timeit(40) { position -> fib(position) }
  timeit(42) { position -> fib(position) }
  timeit(500) { position -> fib(position) }

The larger position value is something we couldn't even contemplate with the poor performance of the overlapping recursion, but we're confident the memoization has turned this into linear time complexity, so it should be handled with reasonable speed. Let's run the code and look at the speed and the results.

  Fib(5): 8, time: 0.02 sec
  Fib(5): 8, time: 0.00 sec
  Fib(40): 165580141, time: 0.00 sec
  Fib(42): 433494437, time: 0.00 sec
  Fib(500): 225591516161936330872512695036072072046011324913758190588638866418474627738686883405015987052796968498626, time: 0.01 sec

The output confirms that the results of this implementation are consistent with the slow version, but the speed is simply remarkable. We hardly made a dent in the time, even for a large position.

We made use of the memoize function in Groovy. That means less effort on our part, since we did not have to create and maintain the hashmap. But that's just the beginning of the benefits. The underlying implementation of memoize is quite efficient. Recall that memoization is a time-space tradeoff: we use more memory, to store the values, in exchange for faster computation. If the input size is very large, we run the risk of pushing memory usage beyond reasonable levels. Groovy provides variations of the memoize method to address this concern.
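To make the "under the covers" description concrete, here is a minimal hand-rolled sketch of the idea, not Groovy's actual implementation; the names memoizeByHand, slowSquare, and fastSquare, and the choice of a plain map as the cache, are made up for illustration.

  // Wrap a closure so each distinct argument is computed once,
  // then served from a map on every repeat call.
  def memoizeByHand = { Closure target ->
    def cache = [:]
    return { arg ->
      if (!cache.containsKey(arg)) {
        cache[arg] = target(arg)   // compute on a cache miss
      }
      cache[arg]                   // serve repeats from the cache
    }
  }

  def slowSquare = { n -> Thread.sleep(100); n * n }
  def fastSquare = memoizeByHand(slowSquare)

  println fastSquare(12)   // pays the 100 ms cost once
  println fastSquare(12)   // answered from the cache

This sketch handles only single-argument closures and makes no attempt at thread safety; it is meant only to illustrate the lookup-then-compute idea. Note also why the technique works for our recursive fib: the recursive calls go through the fib variable, and that variable holds the memoized closure, so every level of the recursion benefits from the cache, not just the outermost call.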
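One such variation is memoizeAtMost, which caps how many results are kept in the cache. The following is only a quick sketch; the limit of 1000 entries is an arbitrary value picked for illustration, and fibBounded is just a name chosen to keep it distinct from the earlier fib.

  def fibBounded
  fibBounded = { position ->
    if (position < 2)
      BigInteger.ONE
    else
      fibBounded(position - 1) + fibBounded(position - 2)
  }.memoizeAtMost(1000)   // keep at most 1000 cached results

  println fibBounded(500)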