Recursion is the idea of designing a function so that it calls itself.
Suppose we want to write factorial, where (factorial n) is the product of integers from 1 to n, inclusive. This non-recursive version updates product on each pass of a loop and then returns its value at the end:
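A sketch of such a loop-based version (the exact form is an assumption), using a for loop and set! to update product:

(define (factorial n)
  (define product 1)
  (for ([i (in-range 1 (add1 n))]) ; i runs from 1 to n, inclusive
    (set! product (* product i)))  ; mutate product on each pass
  product)

(factorial 5) ; 120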
We might notice that for n greater than 1, (factorial n) is always n multiplied by (factorial (- n 1)). Thus, we can rewrite factorial as a simpler recursive function:
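A sketch of that recursive version:

(define (factorial n)
  (if (= n 1)
      1                           ; terminating condition
      (* n (factorial (- n 1))))) ; n multiplied by (factorial (- n 1))

(factorial 5) ; 120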
Every recursive function, including factorial, has two essential ingredients:
The task to be done by the function is broken down into smaller versions of the same task. Here, (factorial n) for any n greater than 1 is described in terms of a smaller value, (factorial (- n 1)).
The function reaches a terminating condition. Here, once n reaches 1, the recursion stops, and the function simply returns 1. Without this condition, the function would recurse indefinitely.
Another common way of writing recursive functions is with a named let. The function below converts a natural number to a string, using a radix between 2 and 16. In this case, the inner let is named loop, which can be recursively invoked just like a function. (Under the hood, let is implemented as a macro that expands into lambda, so the kinship with functions is not accidental.)
(define (n->s num [radix 10])
  (define digits (list->vector (string->list "0123456789abcdef")))
  (let loop ([num num] [acc empty])
    (cond
      [(zero? num) (if (empty? acc) "0" (list->string acc))]
      [else
       (define-values (q r) (quotient/remainder num radix))
       (loop q (cons (vector-ref digits r) acc))])))

(n->s 36 2) ; "100100"
(n->s 36) ; "36"
(n->s 36 16) ; "24"
The idea of recursion also surfaces with data structures that are defined partly in terms of themselves. For instance, in Racket a list is precisely defined as either the empty list null, or a pair whose first element is any value and whose rest is itself a list.
Just like a recursive function, this recursive data structure is described in terms of smaller versions of itself (a list is made of nested lists) until it reaches a terminating condition (the value null). We can also think of this as starting with null and building upward, which is why recursive structures are also called inductively defined structures.
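For instance, the list '(1 2 3) is a chain of pairs that terminates in null, so building it explicitly with cons produces the same value as list:

(cons 1 (cons 2 (cons 3 null))) ; '(1 2 3)
(list 1 2 3)                    ; '(1 2 3)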
Recursion is a favored technique in functional programming because it helps avoid mutation of values. The first version of our factorial function mutated the variable product on each pass. The second did not.
Furthermore, because so many data structures in functional languages are defined recursively—like lists—recursive functions end up being the most natural way to work with them. For instance, simplified versions of map and filter can be defined compactly as recursive functions that follow the recursive definition of a list:
(define (my-map proc xs)
  (if (null? xs)
      null
      (cons (proc (first xs))
            (my-map proc (rest xs)))))

(my-map abs (range -3 3)) ; '(3 2 1 0 1 2)

(define (my-filter pred xs)
  (if (null? xs)
      null
      (if (pred (first xs))
          (cons (first xs) (my-filter pred (rest xs)))
          (my-filter pred (rest xs)))))

(my-filter even? (range 10)) ; '(0 2 4 6 8)
Here, we use first and rest to break the task into smaller versions of the same task. Then we test the input for null? as our terminating condition.
Tail recursion refers to the special case where the return value of the recursive branch is nothing more than the value of the next recursive call (also known as a tail call).
For instance, our factorial function was recursive—because it called itself—but not tail recursive, because the last line of the recursive branch had to multiply n by the result of the next recursive call to factorial:
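For reference, the recursive version again, with the multiplication wrapped around the recursive call:

(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1))))) ; not tail recursive: * runs after the recursive call returns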
By contrast, the tail-factorial function below uses an accumulator to pass each intermediate product as an argument to the next recursive call. By using an accumulator, we can move the multiplication inside the recursive call, thereby making this version tail recursive:
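A sketch of tail-factorial, assuming the accumulator is an optional argument that defaults to 1:

(define (tail-factorial n [acc 1])
  (if (= n 1)
      acc                                   ; terminating condition returns the accumulator
      (tail-factorial (- n 1) (* n acc))))  ; the tail call is the entire return value

(tail-factorial 5) ; 120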
Making the function tail recursive causes the integers to be multiplied in a different order. Because multiplication is associative and commutative, this doesn’t change the result.
Tail recursion has special status in Racket because the compiler notices tail calls and optimizes them.
Ordinarily, each call to a function, including a recursive call, causes another set of arguments to be saved in a block of memory called the call stack. As one function calls another, more arguments pile up on the call stack. Later, as the return values are calculated, arguments are removed from the call stack and replaced with the return values.
For instance, let’s make a recursive sum function that’s like factorial, but adds the integers from 1 to n rather than multiplying them. sum pushes an argument onto the stack on each pass, so calculating (sum n) requires a maximum of n units of space on the stack (a process visualized below):
(define (sum n)
  (if (zero? n)
      0
      (+ n (sum (- n 1))))) ; not tail recursive

(sum 5)
(+ 5 (sum 4))
(+ 5 (+ 4 (sum 3)))
(+ 5 (+ 4 (+ 3 (sum 2))))
(+ 5 (+ 4 (+ 3 (+ 2 (sum 1)))))
(+ 5 (+ 4 (+ 3 (+ 2 (+ 1 (sum 0))))))
(+ 5 (+ 4 (+ 3 (+ 2 (+ 1 0)))))
(+ 5 (+ 4 (+ 3 (+ 2 1))))
(+ 5 (+ 4 (+ 3 3)))
(+ 5 (+ 4 6))
(+ 5 10)
15
But there’s a lurking problem. For large enough values of n, we could use up all the memory available for the call stack (aka the dreaded stack overflow). In most programming languages, we’d be stuck. (Unlike most languages, Racket’s call stack isn’t restricted to a block of memory separate from the program memory, so stack overflows are less likely, though still possible.)
But in Racket, we can rely on tail-call optimization. Recall that in a tail-recursive function, the return value doesn’t depend on the current arguments, only on the result of the next call to the function. So when Racket sees a tail call, it simply discards the current arguments on the call stack and replaces them with the arguments of the tail call. The effect is to leave the call stack in the same state as if the current invocation of the function had never happened. Some people describe this as a type of goto; arguably, it’s more like time travel into the past. This means that if we instead use a tail-recursive function called tail-sum, it will never grow the stack:
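A sketch of tail-sum in the same style as tail-factorial, followed by its trace, which never grows:

(define (tail-sum n [acc 0])
  (if (zero? n)
      acc
      (tail-sum (- n 1) (+ n acc)))) ; tail recursive

(tail-sum 5)
(tail-sum 4 5)
(tail-sum 3 9)
(tail-sum 2 12)
(tail-sum 1 14)
(tail-sum 0 15)
15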
In practical terms, what tail-call optimization means is that we can use a tail-recursive function to avoid using space on the stack, and thereby recurse to any depth.
For instance, in DrRacket let’s go to Racket → Limit Memory and change the limit to 8 MB. Then let’s try using our original sum function to add the positive integers up to 123,456,789:
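That is, something along these lines:

(sum 123456789) ; exhausts the 8 MB limit before finishing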
When we try to run this, we’ll get an Interactions disabled message that signals DrRacket has run out of memory. Why? sum needs to store over 123 million integers, but we only have about 8 million bytes of memory.
But if we use tail-sum, DrRacket can calculate an answer, because the tail call is optimized so it uses no extra space on the call stack:
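Assuming the tail-sum sketched above, the result is n × (n + 1) / 2:

(tail-sum 123456789) ; 7620789436823655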
(When you’re done, don’t forget to go back to Racket → Limit Memory in DrRacket and set the memory limit back to a reasonable level.)
Tail recursion in the Racket Guide
Lists, iteration, and recursion in the Racket Guide