**To understand recursion, you must understand recursion**. I will show you 13 different ways to traverse a tree to compare recursive and iterative implementations. This way, we will kill two birds with one stone: **recursion, and data structures and algorithms**.

Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

Recursion, iteration and how to traverse a tree are useful skills to have and common in interview questions. **Do not risk failing your next job interview because you didn’t take the time to read one article**.

I assume you have **basic coding skills** and are familiar with stacks, queues, recursion, and loops. If this seems too advanced for you, check this article where I have listed some resources to get you started.

For every problem, I will also provide a link to Leetcode so that you can play around with my solution or write your own in your preferred programming language.

My code examples will be in C++. If you are not familiar with the APIs or syntax, that is fine. I am sure you will understand the ideas. That is the goal of this article.

## What Is a Tree?

Trees are data structures that organize data hierarchically. They are defined **recursively** from a **root node** that contains the **data** to be stored and **pointers to its children**.

There are many types of trees well-suited for specific applications:

- Binary search trees, for efficient searches on sorted data.
- B-trees to store database indexes.
- Heaps to implement priority queues.
- General trees to implement filesystems.
- Etc.

**There is no point in storing information if you do not know how to access it**.

For simplicity, I will focus on binary trees. You will take the code and generalize it to trees with up to N children per node.

I will present you with 13 ways to traverse a tree and discuss them in detail. As usual, there will be challenges for you to cement your knowledge. I strongly recommend solving each problem on your own before reading my solution.

**You are going to learn infinitely more by doing than by reading or watching.**

## 13 Ways To Traverse a Tree

Most of the problems in this section will be on binary trees. We will define a node as follows:

Using a `struct` makes the fields public by default, which is enough for our purposes.
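A minimal node definition along those lines might look like this (the field names follow Leetcode's convention; the constructor is just a convenience):

```cpp
// Binary tree node. With a struct, members are public by default.
struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};
```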

For the complexity analysis, we will assume that we will traverse a tree of height **H** that contains **N** nodes.

### 1. Pre-order Traversal – Recursive

Given the root of a binary tree, return *the preorder traversal of its nodes’ values*.

You can play around with this problem here.

#### Solution

The pre-order, in-order, and post-order traversal of a tree are defined by the order in which we visit the root, left, and right children of a tree. The pre in pre-order refers to the root. In a preorder traversal of a tree, we first visit the root node, then the left node, and finally the right node.

You can translate this into recursive code very easily. Your function will receive a node, starting with the root of the tree. You need to define:

- **Base condition**: if the node is null, there is nothing to do.
- **Recursive relationship**: otherwise, you process it (for instance, print its value) and then call the same function on the left and right children.

For the base condition, you have two alternatives. I have called them A and B in my code. Have a look at the code and try to figure out the pros and cons of each before you look at my comments.
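Since the original snippet is not shown here, this is a sketch of both alternatives (function names are my own):

```cpp
#include <cassert>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Option A: every call checks whether the node it received is null.
void preorderA(TreeNode* node, std::vector<int>& out) {
    if (!node) return;           // base condition
    out.push_back(node->val);    // visit the root first...
    preorderA(node->left, out);  // ...then the left subtree...
    preorderA(node->right, out); // ...and finally the right subtree
}

// Option B: the caller guarantees the node is not null, so we only
// recurse into children that exist: more ifs, fewer calls.
void preorderB(TreeNode* node, std::vector<int>& out) {
    out.push_back(node->val);
    if (node->left)  preorderB(node->left, out);
    if (node->right) preorderB(node->right, out);
}

std::vector<int> preorderTraversal(TreeNode* root) {
    std::vector<int> out;
    preorderA(root, out);        // with option B, check root != nullptr here
    return out;
}
```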

#### Alternatives

- **Option A**. Every function call will test whether the node it receives is null or not. Then, it will trigger two recursive calls. This option will result in a larger stack.
- **Option B**. The function assumes that the node is not null. It will generate a recursive call only if the left/right child is not null. The function will execute more if statements but make fewer recursive calls.

**There is no right or wrong solution here without more context**. In an interview setting, discuss both options. It will show that you care about your code and think of the implications of what you write.

Your priority should be writing readable and maintainable working code. Only **focus on this level of detail when you have profiled your code** and have proof that these lines of code make a big impact on the overall performance. Don’t guess. Remember:

Premature optimization is the root of all evil.

#### Complexity

This algorithm visits every node once and adds an element to the end of a vector, which is constant work – amortized. Therefore, the time complexity is O(N).

Can you try to figure out the space complexity? Try to visualize the algorithm on the tree above. How many nodes will be stored in the stack in the “worst-case”?

I am sure you found the correct answer. The maximum number of nodes that you need to store will be equal to the height of the tree, H. Consequently, the space complexity is O(H).

In the next problem, I will show you a non-recursive implementation of the same algorithm to traverse a tree. But before that, I have a little challenge for you.

#### Challenge

Given an n-ary tree, return the *preorder* traversal of its nodes’ values.

You can code your solution here.

##### To make you think

A natural follow-up to this problem is to generalize it to trees with N children. Before opening the link to Leetcode, try to first **model the nodes yourself**. Some interesting questions are:

- What data structures are suitable to store **at most N children**?
- If you **don’t know the maximum number of children** a node can have, can you still use this data structure? What other options do you have?
- What if you want to make a generic tree that works for **different data types**? The answer to this question will depend on your programming language: templates for C++, generics for Java, etc.

For now, focus only on the recursive solution. Even if “it is trivial”, we have seen that you can still ask questions and discuss different alternatives.

### 2. Pre-order Traversal – Iterative

Traverse a tree using pre-order traversal. No recursion allowed.

Give it a try here.

#### Solution

Now we will see that recursion is not the only way to traverse a tree.

We need to mimic everything that happens under the hood when we use recursive functions. We know that:

- First, we visit the root of the tree – add it to the solution.
- Then we process the left child, and finally the right child.
- We will need to create and handle the stack ourselves instead of relying on recursive calls.
- A stack is a Last-In-First-Out data structure. Therefore, we have to **push the items in the reverse order we want to extract them**.

We can turn this description into the following algorithm:

- The iterative algorithm begins by adding the root of the tree to a stack **s**.
- As long as **s** is not empty, we **pop** elements from it. We **add them to the** data structure that represents the **solution** because we want to process the parent node before its children.
- We push the children to **s**: **first the right child and then the left child**. This way, we will pop the left one before the right one, which is exactly the order in which we want to process them.
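The steps above can be sketched as follows (the function name is my own):

```cpp
#include <cassert>
#include <stack>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<int> preorderIterative(TreeNode* root) {
    std::vector<int> out;
    if (!root) return out;
    std::stack<TreeNode*> s;
    s.push(root);                          // kick off with the root
    while (!s.empty()) {
        TreeNode* node = s.top();
        s.pop();
        out.push_back(node->val);          // parent before its children
        // Push the right child first so the left child is popped first.
        if (node->right) s.push(node->right);
        if (node->left)  s.push(node->left);
    }
    return out;
}
```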

#### Complexity

The complexity analysis does not change with respect to the recursive version. We still need to visit the N nodes and do constant work per node. Therefore the time complexity is O(N).

Regarding the space complexity, the stack will have to store at most a number of nodes proportional to the height of the tree, O(H).

#### Challenge

It is time for you now to generalize this solution to work with trees whose nodes can have up to N children. You can test your solution here.

### 3. In-Order Traversal – Recursive

Given the root of a binary tree, return *the inorder traversal of its nodes’ values*.

Give it a try here.

#### Solution

In-order traversal is a way to traverse a tree where you first visit the left child, then the root and then the right child. In a binary search tree (like the one above), it will visit the nodes in increasing order.

The recursive solution is straightforward. The only difference with the pre-order recursive solution is the order in which we visit the root and the children. It is a minor change. Go ahead and implement it yourself.

Everything we discussed about the extra if statements applies to this problem, as well as the discussion about stack vs RAM. Here is a solution with the same options A and B as before.
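A sketch of that solution, with the same options A and B: it is the pre-order code with the visit moved between the two recursive calls.

```cpp
#include <cassert>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Option A: check for null on entry.
void inorderA(TreeNode* node, std::vector<int>& out) {
    if (!node) return;
    inorderA(node->left, out);
    out.push_back(node->val);    // the root goes between its children
    inorderA(node->right, out);
}

// Option B: only recurse into children that exist (node must not be null).
void inorderB(TreeNode* node, std::vector<int>& out) {
    if (node->left)  inorderB(node->left, out);
    out.push_back(node->val);
    if (node->right) inorderB(node->right, out);
}
```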

#### Complexity

The complexity analysis is the same as the previous recursive example. We are doing exactly the same, just in a different order.

### 4. In-Order Traversal – Iterative

Traverse a tree using *in-order* traversal. No recursion allowed.

Test your solution here.

#### Solution

I know what you are thinking. You are right. We need a stack.

Let’s visualize the in-order traversal of the previous tree.

- We call the helper function on the root.
- This calls itself on the left child of the root.
- This calls itself on the left child of the left child of the root.
- This process of moving to the left child repeats until we get to a node **k** that has no left child. Only then do we process **k** and continue by exploring its right child.
- The implicit stack has stored all the nodes between the root and **k**. Once we are done with **k**’s right child, the next node to process will be extracted from the implicit stack.

We need to mimic this with our stack. Let’s call it **s**.

- To kick off the tree traversal, we add the root of the tree to **s**.
- If the root does **not** have a left child, or we have finished exploring the root’s left subtree, we process the root and move to its right child.
- Otherwise, this is the case where we keep moving left:
  - We add the root to the stack, to process it later.
  - We move to the left.

From this algorithm, it looks like we need a **stack and a pointer n** to mark the node we are currently at. If n becomes null, it means we are done exploring a subtree and need to query the stack for the next node to process. If the stack is empty, we have processed all nodes.
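Putting the stack and the pointer together, one possible implementation looks like this (the function name is my own):

```cpp
#include <cassert>
#include <stack>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<int> inorderIterative(TreeNode* root) {
    std::vector<int> out;
    std::stack<TreeNode*> s;
    TreeNode* n = root;              // the node we are currently at
    while (n || !s.empty()) {
        while (n) {                  // keep moving left...
            s.push(n);               // ...storing nodes to process later
            n = n->left;
        }
        n = s.top();                 // leftmost unprocessed node
        s.pop();
        out.push_back(n->val);
        n = n->right;                // now explore its right subtree
    }
    return out;
}
```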

#### Complexity

The complexity analysis is exactly the same as before: O(N) time complexity and O(H) space complexity.

### 5. Post-Order Traversal – Recursive

Given the root of a binary tree, return *the postorder traversal of its nodes’ values*.

Check your implementation here.

#### Solution

In a post-order traversal, we first visit the left child, then the right child, and finally the root of the tree, hence the name **post**-order. This algorithm requires a minimal change with respect to the pre-order and in-order traversals that we have just seen.
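In code, that minimal change is moving the visit after both recursive calls (a sketch, using option A for the null check):

```cpp
#include <cassert>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

void postorder(TreeNode* node, std::vector<int>& out) {
    if (!node) return;
    postorder(node->left, out);   // left subtree first...
    postorder(node->right, out);  // ...then the right subtree...
    out.push_back(node->val);     // ...and the root last, hence "post"
}
```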

#### Complexity

Same as pre-order’s and in-order’s.

#### Challenge

Try to generalize this to work with N nodes. Code your solution here.

### 6. Post-Order Traversal – Iterative

Traverse a binary tree using *post-order* traversal. No recursion allowed.

If you are thinking that we need a stack again, you are right.

There are relatively complex solutions based on a single stack. Instead, I will show you a simpler approach that uses two stacks – or one stack and one vector. The space complexity is still O(N).

If you remember correctly, for iterative pre-order traversal we added the root to the solution and then pushed the right child followed by the left child. I know this comes out of nowhere, but let’s see what happens if we swap that order: first push the left child and then the right child.

You can verify that we obtain [18, 21, 22, 20, 16], which is the same as reversing the pre-order traversal of this tree: [16, 20, 22, 21, 18]. We only need to add an extra step at the end of the algorithm to reverse the solution — or use two stacks.
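This is a sketch of that idea, using a vector and a final reversal instead of a second stack (the function name is my own):

```cpp
#include <algorithm>
#include <cassert>
#include <stack>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<int> postorderIterative(TreeNode* root) {
    std::vector<int> out;
    if (!root) return out;
    std::stack<TreeNode*> s;
    s.push(root);
    while (!s.empty()) {
        TreeNode* node = s.top();
        s.pop();
        out.push_back(node->val);
        // Pushing left before right produces root, right, left...
        if (node->left)  s.push(node->left);
        if (node->right) s.push(node->right);
    }
    // ...which, reversed, is left, right, root: the post-order traversal.
    std::reverse(out.begin(), out.end());
    return out;
}
```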

I don’t have an intuition for this algorithm. I must admit I had to look up this solution on the internet because I did not want to present more complex alternatives.

Above all, it is worth noting that **sometimes it is useful to play with similar problems to try to get a solution to the problem you are trying to solve**.

#### Complexity

The complexity is linear both in time and space:

- **Time**: we are doing constant work per element. Reversing a vector is also a linear-time operation.
- **Space**: the space complexity is linear, whether we use one or two stacks.

### 7. Morris Traversal – How to Traverse a Tree Using Constant Space

Traverse a tree using pre-order and in-order traversal. Your space complexity must be O(1) — you cannot use recursion or use any data structures to store information about the nodes.

#### Solution

You are right: you still need to store some information about the nodes to visit them. This algorithm modifies the nodes to traverse the tree without explicit data structures. At the end of the algorithm, the nodes are back to their original state.

Let’s focus on the in-order traversal. Pre-order just needs a minor modification.

We will have two pointers: *current* and *explorer*. We will use *explorer* to move around the tree and build links from some leaves back to *current*. **Current can only move left if we have previously built a link from its predecessor back to current**. **This is the only thing I have “memorized” about Morris**. From this, I can draw a tree and rebuild the algorithm every time.

This image has all the links that we will need to build. However, we do not need to keep any of them: **as soon as we are done using a bridge, we destroy it**.

The algorithm goes as follows:

1. First, we set *current* to root.
2. If *current* **does not** have a left child, we add *current* to the solution and move to its right child.
3. Otherwise:
   - We find *current*’s predecessor, using *explorer*.
   - If there is already a link between *explorer* and *current*, we break it, add *current* to the solution, and move to *current*’s right child.
   - Otherwise, we build a link between *explorer* and *current* and move *current* to its left child.
4. Go back to step 2 until *current* becomes null.

#### Example

Let’s run the algorithm step by step on the tree above:

This is my C++ implementation of this algorithm.
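Since the original snippet is not shown here, this is a sketch of the in-order version following the steps above; `current` and `explorer` match the names in the text:

```cpp
#include <cassert>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<int> morrisInorder(TreeNode* root) {
    std::vector<int> out;
    TreeNode* current = root;
    while (current) {
        if (!current->left) {
            out.push_back(current->val);   // no left subtree: visit, go right
            current = current->right;
        } else {
            // Find current's in-order predecessor with the explorer pointer.
            TreeNode* explorer = current->left;
            while (explorer->right && explorer->right != current)
                explorer = explorer->right;
            if (!explorer->right) {
                explorer->right = current; // build the link back to current
                current = current->left;
            } else {
                explorer->right = nullptr; // we came back: destroy the link
                out.push_back(current->val);
                current = current->right;
            }
        }
    }
    return out;
}
```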

The difference between in-order and pre-order is in the last block, where we check if there is an existing link or not. We just need to move around the line in which we add current to the solution.

For pre-order:
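A sketch of the pre-order variant: the only difference from the in-order version is that we add *current* to the solution when we build the link, before descending into the left subtree.

```cpp
#include <cassert>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<int> morrisPreorder(TreeNode* root) {
    std::vector<int> out;
    TreeNode* current = root;
    while (current) {
        if (!current->left) {
            out.push_back(current->val);
            current = current->right;
        } else {
            TreeNode* explorer = current->left;
            while (explorer->right && explorer->right != current)
                explorer = explorer->right;
            if (!explorer->right) {
                out.push_back(current->val); // moved: root before its left subtree
                explorer->right = current;
                current = current->left;
            } else {
                explorer->right = nullptr;   // the in-order version visits here
                current = current->right;
            }
        }
    }
    return out;
}
```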

Morris post-order traversal is a bit trickier, so I will leave it out of this section.

#### Complexity

The time complexity is an exercise for you.

The space complexity is O(1). We are implicitly using more since we are modifying the tree, but technically we do not need extra space to execute this algorithm.

#### Challenge

Can you find some potential issues with this algorithm? For example, if multiple threads had to perform this algorithm on the same tree.

### 8. Level Order Traversal – Iterative

Given a binary tree, return the *level order* traversal of its nodes’ values. (ie, from left to right, level by level).

For this tree, the solution looks like: [[15], [10, 21], [5, 20, 23], [6]].

Try to solve it here.

#### Solution

This time I will start with the iterative solution because I find it simpler to code. We can solve this problem by applying BFS. The only trick is how to know when to start a new level. We can do this in two ways:

- After we push all the nodes in a level, we introduce a special type of node that indicates that we need a new level – for example, a nullptr.
- Before we start processing the queue, we take note of its size **s** – the number of elements at that level. We create a new level after we have processed **s** elements.

Here I have taken the second approach.
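The second approach can be sketched like this (the function name is my own):

```cpp
#include <cassert>
#include <cstddef>
#include <queue>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<std::vector<int>> levelOrder(TreeNode* root) {
    std::vector<std::vector<int>> levels;
    if (!root) return levels;
    std::queue<TreeNode*> q;
    q.push(root);
    while (!q.empty()) {
        std::size_t s = q.size();            // nodes at the current level
        std::vector<int> level;
        for (std::size_t i = 0; i < s; ++i) {
            TreeNode* node = q.front();
            q.pop();
            level.push_back(node->val);
            if (node->left)  q.push(node->left);
            if (node->right) q.push(node->right);
        }
        levels.push_back(level);
    }
    return levels;
}
```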

#### Complexity

We visit N nodes and do constant work per node. In other words, the time complexity of this approach is O(N).

The space complexity is the maximum size of the queue. Since we store elements by level, it is a function of the width of the tree.

In the case of a **full tree** (every node has two children except for the leaves) of height H, the maximum number of nodes in a level occurs at the last level (the leaves): 2^H nodes. For this type of tree, H = log2(N), so the last level holds roughly N/2 nodes.

In conclusion, the space complexity is O(N) too.

#### Challenge

Try to solve this variant.

### 9. Level-Order Traversal – Recursive

Given a binary tree, return the *level order* traversal of its nodes’ values. (ie, from left to right, level by level).

For this tree, the solution looks like: [[15], [10, 21], [5, 20, 23], [6]].

Try to solve the problem here.

#### Solution

Always keep in mind the three basic ways to traverse a tree: pre-order, in-order and post-order. This problem is a **pre-order tree traversal in disguise**.

The only extra is that you want **to put nodes that are at the same depth together**. This can be easily achieved using a variable to keep track of the current level you are exploring.

Using a hash table, you can store every node you process at the right depth. Since we are exploring the left child first, the relative order of the elements at each level will be preserved.
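One way to sketch this idea (the helper and function names are my own; depths in a tree are contiguous from 0, so we can read the hash table back in order):

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Pre-order traversal that records each value at its depth. Exploring the
// left child first preserves the left-to-right order within each level.
void collectByDepth(TreeNode* node, int depth,
                    std::unordered_map<int, std::vector<int>>& byDepth) {
    if (!node) return;
    byDepth[depth].push_back(node->val);
    collectByDepth(node->left, depth + 1, byDepth);
    collectByDepth(node->right, depth + 1, byDepth);
}

std::vector<std::vector<int>> levelOrderRecursive(TreeNode* root) {
    std::unordered_map<int, std::vector<int>> byDepth;
    collectByDepth(root, 0, byDepth);
    std::vector<std::vector<int>> levels;
    for (int d = 0; byDepth.count(d) > 0; ++d)
        levels.push_back(byDepth[d]);
    return levels;
}
```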

#### Complexity

We visit N nodes doing constant work per node (adding an element to a hash table has constant amortized time). Therefore, the time complexity is O(N).

This implementation stores all the nodes in a hash table. Consequently, the space complexity is O(N).

### 10. Level-Order Zigzag Traversal

Given a binary tree, return the *zigzag level order* traversal of its nodes’ values. (ie, from left to right, then right to left for the next level and alternate between).

For this tree, the solution looks like [[8], [14, 4], [2, 5, 12, 19]].

Give it a try here.

#### Solution

This problem might seem challenging at first, but you have the tools to solve it. One of the problems above is very similar to this one. I will give you a minute to find it.

You are right. This is a level-order traversal. The only extra is that we have to process **some levels from left to right and others from right to left**. We can achieve this easily with two stacks, s1 and s2.

Imagine we are processing a level from left to right: its nodes are stored in the stack s1, with the leftmost node on top.

- Firstly, we create a new partial solution – the solution for the new level.
- We pop the stack s1 and add this element to the partial solution.
- The next level will be explored from right to left. Therefore, we first push the left child and then the right child to the other stack s2: the last element we push will be the rightmost child, which we will pop first.
- We keep processing elements from s1 until it is empty.
- In the next iteration, we will process the elements from s2 to complete the next level from right to left.
- When we process nodes from s2, we first push their right child and then the left child, so that the level after that comes out from left to right again.
- We do this until both stacks are empty.
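The two-stack dance above can be sketched like this (the function name is my own):

```cpp
#include <cassert>
#include <stack>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<std::vector<int>> zigzagLevelOrder(TreeNode* root) {
    std::vector<std::vector<int>> levels;
    if (!root) return levels;
    std::stack<TreeNode*> s1, s2;   // s1: left-to-right level; s2: right-to-left
    s1.push(root);
    while (!s1.empty() || !s2.empty()) {
        std::vector<int> level;
        while (!s1.empty()) {
            TreeNode* node = s1.top();
            s1.pop();
            level.push_back(node->val);
            // Next level is read right to left: push left first.
            if (node->left)  s2.push(node->left);
            if (node->right) s2.push(node->right);
        }
        if (!level.empty()) levels.push_back(level);
        level.clear();
        while (!s2.empty()) {
            TreeNode* node = s2.top();
            s2.pop();
            level.push_back(node->val);
            // Next level is read left to right: push right first.
            if (node->right) s1.push(node->right);
            if (node->left)  s1.push(node->left);
        }
        if (!level.empty()) levels.push_back(level);
    }
    return levels;
}
```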

#### Complexity

Both the time and space complexity are linear. Can you see why?

#### Challenges

Here I have several variants of this problem for you to keep practicing:

### 11. Side View of a Tree

Given a binary tree, imagine yourself standing on the *right* side of it, return the values of the nodes you can see ordered from top to bottom.

The solution for this tree is [16, 19, 18, 13]. Looking from the right side, you cannot see the other nodes because the nodes in the solution “block their view”.

Try coding it here.

#### Solution

This may look like a challenging problem, but you have the tools you need to solve it. You need to print the rightmost node of each level. Some of the previous problems look very similar to this one.

You can solve this problem by modifying the level-order traversal and printing only the last node at each level.

This is one possible implementation.
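As one possible sketch (the function name is my own), we run the level-order traversal and keep only the last node of each level:

```cpp
#include <cassert>
#include <cstddef>
#include <queue>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::vector<int> rightSideView(TreeNode* root) {
    std::vector<int> view;
    if (!root) return view;
    std::queue<TreeNode*> q;
    q.push(root);
    while (!q.empty()) {
        std::size_t s = q.size();
        for (std::size_t i = 0; i < s; ++i) {
            TreeNode* node = q.front();
            q.pop();
            if (i == s - 1) view.push_back(node->val); // last node of the level
            if (node->left)  q.push(node->left);
            if (node->right) q.push(node->right);
        }
    }
    return view;
}
```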

#### Complexity

The analysis is the same as the level-order traversal’s.

#### Challenges

Here are a few variants you can try:

- Solve this problem recursively
- Print the left side instead of the right side
- Print the leaves

### 12. Vertical-Order Traversal – Recursive

Given a binary tree, return the *vertical order* traversal of its nodes’ values.

For each node at position `(X, Y)`, its left and right children respectively will be at positions `(X-1, Y-1)` and `(X+1, Y-1)`.

Running a vertical line from `X = -infinity` to `X = +infinity`, whenever the vertical line touches some nodes, we report the values of the nodes in order from top to bottom (decreasing `Y` coordinates).

If two nodes have the same position, then the value of the node that is reported first is the value that is smaller.

Return a list of non-empty reports in order of `X` coordinate. Every report will have a list of values of nodes.

The solution for this example is [[4], [6], [12], [21, 14, 23], [20], [24], [25]].

Code your solution here.

#### Solution

I bet you are already seeing a solution.

Every time you move left or right, you can keep track of your X position. This is easy to implement recursively. Every time you move to the left node, the X position decreases by 1. If you move to the right, the X position increases by 1.

Using a hash table, you can store all the nodes at a certain X position to build the solution.
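A sketch of this idea (names are my own; I use an ordered `std::map` instead of a hash table so the X coordinates come out sorted for free, and sort each column by decreasing Y, breaking ties with the smaller value):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <utility>
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Record a (Y, value) pair for every node, grouped by its X coordinate.
void collectByX(TreeNode* node, int x, int y,
                std::map<int, std::vector<std::pair<int, int>>>& byX) {
    if (!node) return;
    byX[x].push_back({y, node->val});
    collectByX(node->left,  x - 1, y - 1, byX);
    collectByX(node->right, x + 1, y - 1, byX);
}

std::vector<std::vector<int>> verticalTraversal(TreeNode* root) {
    std::map<int, std::vector<std::pair<int, int>>> byX;
    collectByX(root, 0, 0, byX);
    std::vector<std::vector<int>> reports;
    for (auto& entry : byX) {
        auto& nodes = entry.second;
        // Top to bottom: decreasing Y; ties broken by the smaller value.
        std::sort(nodes.begin(), nodes.end(),
                  [](const std::pair<int, int>& a, const std::pair<int, int>& b) {
                      if (a.first != b.first) return a.first > b.first;
                      return a.second < b.second;
                  });
        std::vector<int> report;
        for (auto& p : nodes) report.push_back(p.second);
        reports.push_back(report);
    }
    return reports;
}
```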

#### Complexity

We visit every node once and do constant work per node. However, at the last step we need to order the nodes at each vertical line, which brings the complexity up. It is not trivial to calculate the exact complexity, but it is clearly bounded by O(N log N).

Since we store all nodes in a hash table, the space complexity is O(N).

### 13. Vertical-Order Traversal – Iterative

Given a binary tree, return the *vertical order* traversal of its nodes’ values.

For each node at position `(X, Y)`, its left and right children respectively will be at positions `(X-1, Y-1)` and `(X+1, Y-1)`.

Running a vertical line from `X = -infinity` to `X = +infinity`, whenever the vertical line touches some nodes, we report the values of the nodes in order from top to bottom (decreasing `Y` coordinates).

If two nodes have the same position, then the value of the node that is reported first is the value that is smaller.

Return a list of non-empty reports in order of `X` coordinate. Every report will have a list of values of nodes.

The solution for this example is [[4], [6], [12], [21, 14, 23], [20], [24], [25]].

Test your solution here.

#### Solution

Based on my explanation for the previous problem and your understanding of BFS, you will be able to write the iterative version of the algorithm that solves the problem.

#### Complexity

I will leave this as an exercise for you. If you need hints, check the previous problems.

## Conclusions

This was supposed to be an article on how to traverse a tree. It ended up being an excuse to work on recursive and iterative solutions to the same problem. We can extract a few learnings from this:

- In general, it is not trivial to translate a recursive algorithm into an iterative one. However, this is trivial for tail-recursive functions.
- Recursive solutions can lead to stack overflow. Always consider iterative solutions (limited by RAM size).
- Some problems have straightforward recursive solutions and complicated iterative implementations.
- Recursive solutions tend to be slower because of the function-call overhead. However, your focus should be on writing working code that is maintainable and readable. Only optimize after using a profiler and finding the real bottlenecks.

As I have said before, it is not about solving a million problems or spending a million hours. **What is important is what you get out of each hour**. We have also seen that from a seemingly simple problem we have been able to extract a lot.

“No problem is too small or too trivial if we can really do something about it.”

The aim of this entry is to **make you think about each problem**. We have seen 13 ways to traverse a tree. We have worked on recursive and iterative implementations. **You have become smarter by going through this article**. As the next logical step, I invite you to read my article on dynamic programming.

**If you found this article helpful, please share it. Chances are more people will find it useful too. You can help someone pass their exams, a job interview, or get them unblocked at work**.

Feel free to have a look at my blog www.yourdevopsguy.com and connect with me on Twitter.
