Optimizing runtime in recursive functions
Memoization! Another fancy technical term I encountered while climbing my way up the famous Mount Algorithm. It helps improve the runtime and time complexity of recursive functions. Once you get the hang of it, your recursions' runtime will shorten tremendously; you can thank me later. Before we dive into the implementation of memoization, let's touch base on some other terms to make sure we're on the same page.
The process in which a function calls itself directly or indirectly is called recursion, and the corresponding function is called a recursive function. Using a recursive algorithm, certain problems can be solved quite easily.
Functions like this will keep placing themselves on the call stack until they hit the mechanism that returns a value without making a further recursive call: the base case. Take this function for example,
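A minimal sketch of such a function (the name `countDown` is mine, for illustration):

```javascript
// A trivially recursive countdown: not useful on its own,
// but it shows exactly where the base case lives.
function countDown(n) {
  if (n === 0) return 0;   // base case: return without recursing
  return countDown(n - 1); // recursive call, one stack frame per step
}
```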
It's a rather useless function, but the point is that it will keep calling itself recursively until the base case is met, which is when the input integer equals 0. The call stack grows linearly with the size of its input.
Now that we've got that out of the way, let's look at another classic: finding a number in the Fibonacci sequence by its position. The Fibonacci sequence starts at 0 and 1, and each subsequent number is the sum of the two preceding ones.
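The classic recursive solution might look like this (a sketch, using the convention fib(0) = 0 and fib(1) = 1):

```javascript
// Naive recursive Fibonacci: returns the nth number in the sequence.
function fib(n) {
  if (n <= 1) return n;              // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);    // sum of the two preceding numbers
}
```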
The solution is rather simple: the function recursively calls itself with the input minus one and minus two, returning the sum of the two calls to its parent. However, when we need a number later in the sequence, such as the 50th, we have a problem. Because every parent calls the same function twice as the numbers decrement down to the base case, the call stack grows exponentially. The first level generates two child calls, those two children generate four more, then eight more, sixteen more, and so on. The big O of this solution is a solid O(2^n), which is not ideal. So how can we make it faster and less expensive? Memoization to the rescue!
In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.
Some of you might notice this sounds very similar to another concept: caching. Both rely on keeping the result of an expensive process in a quick-access spot for repeated requests. When you visit your favorite website a second time and it loads significantly faster than the first, it's not because your internet speed got a free upgrade; your browser has cached most of the page's major elements in local storage rather than downloading everything from the web again.
So how do we implement it in a recursive function? Let's first look at the call stack of the same Fibonacci solution as a tree graph.
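Here is a text sketch of that call tree for f(5), where each branch is a pair of recursive calls:

```
f(5)
├── f(4)
│   ├── f(3)
│   │   ├── f(2)
│   │   │   ├── f(1)
│   │   │   └── f(0)
│   │   └── f(1)
│   └── f(2)
│       ├── f(1)
│       └── f(0)
└── f(3)
    ├── f(2)
    │   ├── f(1)
    │   └── f(0)
    └── f(1)
```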
Each node represents a function call, and the leaves are where the function hits a base case and returns a value. If you look at the graph carefully, it's not hard to notice multiple instances where the function was called with the same input, such as f(3) and f(2). Let's cache those results somewhere we can quickly access them on repeated calls.
Here we inject another argument named memo, defaulting to an empty object; the child function calls inherit the memo object and populate it with their returned results. On the first line we return early if we find the key n in the memo, without running the rest of the code; otherwise we make the recursive calls on line 4, keep the result in the memo object, and return that same result at the end. This way we can reconstruct the call stack tree into something like this,
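With the memo in place, the f(5) tree collapses to roughly one real computation per input; repeated inputs become cache hits:

```
f(5)
├── f(4)
│   ├── f(3)
│   │   ├── f(2)
│   │   │   ├── f(1)
│   │   │   └── f(0)
│   │   └── f(1)
│   └── f(2)  ← memo hit
└── f(3)      ← memo hit
```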
Now a repeated input doesn't invoke more recursive calls for its result but reads from the memo object instead, which turns the time complexity from O(2^n) to O(n).
Memoization is a technique of caching function results within the function's scope so the function remembers the return value for a given input. We can memoize any pure function, but it's most beneficial when applied to functions that do heavy computation through recursive calls.
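As a closing sketch, here is a generic wrapper (my own helper, not from the discussion above) that memoizes any pure single-argument function whose argument can serve as an object key:

```javascript
// Generic memoization wrapper for pure single-argument functions.
// The argument itself is used as the cache key, so it should be a
// primitive (number, string, etc.).
function memoize(fn) {
  const cache = {};
  return function (arg) {
    if (arg in cache) return cache[arg]; // cache hit: skip the work
    cache[arg] = fn(arg);                // cache miss: compute and store
    return cache[arg];
  };
}

// Usage: wrap an expensive pure function once, then call it freely.
const slowSquare = (n) => {
  for (let i = 0; i < 1e6; i++); // simulate heavy computation
  return n * n;
};
const fastSquare = memoize(slowSquare);
```

The second call to `fastSquare` with the same argument skips the loop entirely and reads straight from the cache.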