Inorder traversal in constant space – Morris algorithm explained

The well-known recursive binary tree inorder traversal needs O(H) space, where H is the height of the tree.

static void inorder(Node root, List<Integer> elements) {
    if (root == null) {
        return;
    }
    inorder(root.left, elements);
    elements.add(root.value);
    inorder(root.right, elements);
}

It is not hard to derive an iterative implementation by simulating the backtracking behavior with an explicit stack, which still needs O(H) space.
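For reference, here is a sketch of that stack-based iterative version in Java. The Node class is an assumption (with value, left, and right fields), since the article's snippets use but never define it:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class IterativeInorder {
    // Assumed node shape; the article's snippets imply these fields.
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    // Simulate the recursion with an explicit stack; still O(H) space.
    static List<Integer> inorder(Node root) {
        List<Integer> elements = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        Node node = root;
        while (node != null || !stack.isEmpty()) {
            // Push the whole left spine, then visit and turn right.
            while (node != null) {
                stack.push(node);
                node = node.left;
            }
            node = stack.pop();
            elements.add(node.value);
            node = node.right;
        }
        return elements;
    }

    public static void main(String[] args) {
        //     2
        //    / \
        //   1   3
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        System.out.println(inorder(root)); // prints [1, 2, 3]
    }
}
```

The stack holds one node per level of the current left spine, which is where the O(H) bound comes from.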

If parent pointers are present, they naturally form a stack structure, which makes backtracking feasible in O(1) space.
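A sketch of what that looks like in Java, assuming a hypothetical Node class extended with a parent field: start at the leftmost node and repeatedly compute the inorder successor by climbing parents, using no auxiliary stack.

```java
import java.util.ArrayList;
import java.util.List;

class ParentPointerInorder {
    // Assumed node shape with an extra parent pointer.
    static class Node {
        int value;
        Node left, right, parent;
        Node(int value) { this.value = value; }
    }

    static Node leftmost(Node node) {
        while (node.left != null) node = node.left;
        return node;
    }

    // Inorder successor via parent pointers: O(1) extra space.
    static Node successor(Node node) {
        if (node.right != null) return leftmost(node.right);
        // Climb until we arrive from a left child; that parent is next.
        Node p = node.parent;
        while (p != null && node == p.right) {
            node = p;
            p = p.parent;
        }
        return p;
    }

    static List<Integer> inorder(Node root) {
        List<Integer> elements = new ArrayList<>();
        if (root == null) return elements;
        for (Node node = leftmost(root); node != null; node = successor(node)) {
            elements.add(node.value);
        }
        return elements;
    }

    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        b.left = a; a.parent = b;
        b.right = c; c.parent = b;
        System.out.println(inorder(b)); // prints [1, 2, 3]
    }
}
```

The parent chain plays the role of the stack: each "pop" is one step up the tree.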

I recently learned the Morris algorithm, which does inorder traversal in O(1) extra space without using parent pointers. The source code follows.

static void inorder(Node node, List<Integer> elements) {
    while (node != null) {
        if (node.left != null) {
            // find the inorder predecessor of node in its left subtree
            Node t = node.left;
            while (t.right != null && t.right != node) {
                t = t.right;
            }
            if (t.right == node) {
                // visiting back: the left subtree is done, revert the mutation
                t.right = null;
                elements.add(node.value);
                node = node.right;
            } else {
                // first visit: thread the predecessor's right pointer to node
                t.right = node;
                node = node.left;
            }
        } else {
            elements.add(node.value);
            node = node.right;
        }
    }
}

Why mutate pointers?

The Morris algorithm uses some null pointers (only null right pointers) in the tree as locations to store temporary data. All mutations are guaranteed to be reverted by the time the traversal finishes.

It is kind of cheating to claim it uses O(1) extra space, since the tree itself temporarily stores the bookkeeping, but it is still an impressive algorithm.
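To make the "all mutations are reverted" claim concrete, here is a small self-check: it runs the Morris traversal on a hand-built tree and serializes the structure before and after to confirm nothing changed. The Node and shape helpers are illustrative assumptions, not part of the original post:

```java
import java.util.ArrayList;
import java.util.List;

class MorrisRevertCheck {
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    // Morris inorder traversal, as described in the article.
    static List<Integer> inorder(Node node) {
        List<Integer> elements = new ArrayList<>();
        while (node != null) {
            if (node.left != null) {
                Node t = node.left;
                while (t.right != null && t.right != node) t = t.right;
                if (t.right == node) {
                    t.right = null;          // undo the temporary thread
                    elements.add(node.value);
                    node = node.right;
                } else {
                    t.right = node;          // store the "return address"
                    node = node.left;
                }
            } else {
                elements.add(node.value);
                node = node.right;
            }
        }
        return elements;
    }

    // Serialize the structure so we can compare before and after.
    static String shape(Node node) {
        if (node == null) return ".";
        return "(" + node.value + " " + shape(node.left) + " " + shape(node.right) + ")";
    }

    public static void main(String[] args) {
        Node root = new Node(4,
                new Node(2, new Node(1, null, null), new Node(3, null, null)),
                new Node(5, null, null));
        String before = shape(root);
        List<Integer> out = inorder(root);
        System.out.println(out);                        // prints [1, 2, 3, 4, 5]
        System.out.println(before.equals(shape(root))); // prints true
    }
}
```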

Why does it work?

The contract of the algorithm is intuitive: it takes in a binary tree and performs inorder traversal on it, with all mutations reverted.

If the left subtree is empty, the case is trivial: visit the root and start over on the right subtree.

By contract, the algorithm will correctly inorder-traverse the right subtree.

Let’s focus on the case when the left subtree is not empty.

Find the predecessor (the red node) of the root and set its right pointer to the root. Then start over on the left subtree.

Now, from the left subtree’s perspective, the tree looks like the following.

Let’s ignore the weird upward pointer for now. Some things we know for sure:

  • the blue node will be visited after the red node when traversing the tree above.
  • if we try to find the predecessor of the blue node, we will hit the blue node itself again in the process.

These facts give us the opportunity to revert the mutation after traversing the left subtree:

When finding the predecessor of the root in the previous step, if we ever find that some node’s right pointer points at the root, set that right pointer to null. Now the tree looks the same as at the beginning.

Visit root and start over for the right subtree.

The same algorithm will deal with the right subtree for you.

Note that the mutation in red is reverted in the green step. Given that the left and right subtrees are handled by the same algorithm, which reverts all mutations by contract, all mutations are guaranteed to be reverted after the algorithm finishes.

When an iteration starts, it might either apply or undo a mutation. The way to tell which is to look at the right pointer of the maximum node in the left subtree.

The key idea is that the algorithm performs a specific mutation that allows “backtracking” to exactly the right node after it finishes traversing a subtree.

A full example follows.


Why O(n) time?

It is clear that everything except the inner while loop (finding the last node in the left subtree) sums to O(n).

How long does it take to perform “find the last node in the left subtree”, totaled over all nodes?

It is not hard to observe that each edge in the tree is followed at most twice in this process (once when the thread is created, and once again when it is removed), so O(n) is the answer.

Therefore, overall time complexity is still O(n).
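This bound can be checked empirically. The sketch below instruments the predecessor search with a step counter and confirms that on a randomly built binary search tree the total number of inner-loop steps stays within 2n. The Node and insert helpers are assumptions made for the experiment:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class MorrisStepCount {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Plain BST insertion, used only to build a test tree.
    static Node insert(Node root, int value) {
        if (root == null) return new Node(value);
        if (value < root.value) root.left = insert(root.left, value);
        else root.right = insert(root.right, value);
        return root;
    }

    // Morris traversal instrumented to count predecessor-search edge steps.
    static long countInnerSteps(Node node) {
        long steps = 0;
        while (node != null) {
            if (node.left != null) {
                Node t = node.left;
                steps++;  // following the edge into the left subtree
                while (t.right != null && t.right != node) {
                    t = t.right;
                    steps++;  // following one right edge
                }
                if (t.right == node) {
                    t.right = null;
                    node = node.right;
                } else {
                    t.right = node;
                    node = node.left;
                }
            } else {
                node = node.right;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        int n = 10_000;
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < n; i++) values.add(i);
        Collections.shuffle(values, new Random(42));
        Node root = null;
        for (int v : values) root = insert(root, v);
        long steps = countInnerSteps(root);
        // Each edge on a predecessor path is followed at most twice:
        // once when the thread is created, once when it is removed.
        System.out.println(steps <= 2L * n); // prints true
    }
}
```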

