I won’t explain what backtracking is; the Wikipedia article does it well enough.

A backtracking solver is usually implemented as a recursive function that traverses a tree of potential candidates, discovering the tree as it goes.

Now, that function can either return early when it finds the first solution, or traverse the whole tree and return all solutions at once.

But what if I want to find the first solution and then maybe ask for the next one later?

As far as I know, there is no feature in common programming languages that would allow a function to return and at the same time make it possible to resume its execution later.

I guess it could be quite easily emulated with threads:

- The solving function is run on a separate thread.
- When it finds a solution, it stores it in some external variable and pauses the thread (the thread is basically used to store the call stack).
- When the user needs the next solution, they simply resume the thread.
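The steps above can be sketched in Python with two `threading.Event`s coordinating the solver thread and the caller. Everything here is illustrative: `SolverThread`, `next_solution` and the toy “divisible by 3” predicate stand in for a real backtracking solver.

```python
import threading

class SolverThread:
    def __init__(self, candidates):
        self.candidates = candidates
        self.solution = None                 # the "external variable"
        self.ready = threading.Event()       # set when a solution is published
        self.resume = threading.Event()      # set when the caller wants the next one
        self.done = False
        self.thread = threading.Thread(target=self._solve, daemon=True)
        self.thread.start()

    def _solve(self):
        for c in self.candidates:            # stand-in for the recursive search
            if c % 3 == 0:                   # stand-in for "c is a solution"
                self.solution = c
                self.ready.set()             # publish the solution...
                self.resume.wait()           # ...and pause, keeping the call stack alive
                self.resume.clear()
        self.done = True                     # search exhausted
        self.solution = None
        self.ready.set()

    def next_solution(self):
        self.ready.wait()                    # block until a solution (or the end) is published
        self.ready.clear()
        result = self.solution
        if not self.done:
            self.resume.set()                # let the solver run on to the next solution
        return result
```

The events make the two threads strictly alternate, so the “paused” thread really is just a stored call stack.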

Sadly, not every environment is multi-threaded. What can I do when threads are not available? Do I have to modify my function to explicitly store a copy of the call stack, which is then returned with the solution, and to manually recreate the recursion tree when the function is called for the next solution?

1

Fortunately, there is a much simpler approach than your multi-threading alternative: implement your backtracking using queues or stacks, instead of recursion.

In this case, when the state at the top is a solution, you can return it. And you can resume backtracking to find the next solution by processing the next state in the queue/on the stack.
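A minimal Python sketch of this idea, with a toy subset-sum problem standing in for a real backtracking problem (the class and names are illustrative):

```python
class SubsetSumSolver:
    """Find subsets of `nums` (positive integers) summing to `target`,
    one solution per call, by keeping the search frontier in an explicit stack."""

    def __init__(self, nums, target):
        self.nums = nums
        self.target = target
        # Each state is (next index to decide, elements chosen so far).
        self.stack = [(0, [])]

    def next_solution(self):
        while self.stack:
            i, chosen = self.stack.pop()
            if sum(chosen) == self.target:
                return chosen                 # the stack keeps the rest of the search
            if i < len(self.nums):
                # Push both branches: skip nums[i], or take it.
                # (Extensions of a found solution are never pushed; with
                # positive numbers they could only overshoot the target.)
                self.stack.append((i + 1, chosen))
                self.stack.append((i + 1, chosen + [self.nums[i]]))
        return None                           # search exhausted
```

Because the frontier lives in `self.stack` rather than in the call stack, each call to `next_solution` simply resumes where the previous one stopped.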

This article could interest you: *“Backtracking”*. It describes both the recursive and the non-recursive version.

2

You could use a semi-coroutine, or *generator*. They do not exist in all languages and do not always directly allow recursion. But that capability, and the availability of semi-coroutines in general, does not depend on the availability of threads (granted, they’re much easier to implement with threads or *fibers*, so they’re probably more readily available on some platforms than on others).

This is for scenarios where the solutions are explored iteratively and in no particular hierarchy. In this case you yield one solution after another, and if the caller wants the *next* solution, it simply calls the function again, which will “resume” searching.
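Python is one language where generators do directly allow recursion, via `yield from`: each `yield` suspends the whole recursive search with its call stack intact, and `next()` resumes it. A sketch with n-queens as the example problem:

```python
def queens(n, placed=()):
    """Yield complete n-queens placements, one column index per row."""
    row = len(placed)
    if row == n:
        yield placed                              # a full placement is a solution
        return
    for col in range(n):
        # The new queen must not share a column or diagonal with any placed one.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            yield from queens(n, placed + (col,)) # the recursion suspends too

solver = queens(6)
first = next(solver)    # the search runs only until the first solution
second = next(solver)   # resumes exactly where it stopped
```

No solver state has to be managed by hand; the suspended generator frames *are* the saved call stack.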

In most cases you have a slightly different scenario, in which the solution-seeking algorithm explores a (more or less large) part of the phase space and ranks possible solutions, returning the **best** according to some metric. By calling the function again you don’t want **another** solution, you want the **second best** (and so on). In that case, except in specific cases where the problem structure might help you, the function must maintain a “stack” of the solutions found so far in the current choice branch (instead of simply returning the best of any given choice branch), which complicates memory requirements somewhat.

Also, the function will have to return an array. You might have to specify the depth of the returned stack, say twenty, and accept being able to call the semi-coroutine at most nineteen more times before getting an error (or getting all twenty solutions in one fell swoop, but no more). The routine will also have to merge and sort the stacks of its call children (e.g. in a tree, the left recursion will return twenty solutions, the right recursion another fifteen, and the caller will merge-sort them into a new stack of twenty, then decide whether to yield one or pass the twenty backwards).

You might then end up getting all twenty solutions in 10x the time (instead of getting solution 1 in time T, solution 2 in a further 0.2T, and so on), i.e. 10x is also the time until the first solution gets returned.

The above solution can be implemented more easily starting from a recursive function – instead of calling

```
var a := findSolution(phaseSpace)
```

you would have

```
var a[] := findSolutions(phaseSpace, 20)
```
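The merge-and-sort of the children’s stacks described above can be sketched in Python with `heapq.merge` (which merges already-sorted inputs). The function signature, the toy bit-string problem in the usage example, and all names are illustrative, not a fixed API:

```python
import heapq
import itertools

def find_solutions(state, k, extend, is_complete, score):
    """Return up to k complete states reachable from `state`,
    best (lowest score) first.

    extend(state)      -> child states (the choice branches)
    is_complete(state) -> True when state is a full solution
    score(state)       -> metric to rank solutions by
    """
    if is_complete(state):
        return [state]
    # Each child branch returns its own sorted stack of (at most) k best
    # solutions; merge-sort them and keep only the k best overall.
    merged = heapq.merge(*(find_solutions(child, k, extend, is_complete, score)
                           for child in extend(state)),
                         key=score)
    return list(itertools.islice(merged, k))

# Toy usage: length-3 bit strings ranked by number of 1 bits.
best = find_solutions((), 3,
                      extend=lambda s: [s + (0,), s + (1,)],
                      is_complete=lambda s: len(s) == 3,
                      score=sum)
```

Each recursive call returns a sorted list, so the parent can merge its children cheaply; only the final `islice` caps memory at k entries per level.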

Finally, in some scenarios the recursion *converges* on the best solution, and the others are “left behind” in such a way that it’s not easy to recover them; nor can you be expected to save the whole stack state of the function for all possible branches. To tackle this case you would need to pass the recursive function a list of the solutions already returned, which are to be considered invalidated, and the function will then have to *start from scratch each time*, potentially exploring different choice branches on each call. And you get 20 solutions in, say, 30x the time (but the time to the first solution is still 1T).
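The restart-with-exclusions pattern can be sketched as follows; the flat loop in `best_solution` stands in for a real converging recursive search, and all names are illustrative:

```python
def best_solution(candidates, score, excluded):
    """Full search that skips solutions already handed out."""
    best = None
    for c in candidates:                      # stand-in for the recursive search
        if c in excluded:
            continue                          # invalidated: already returned
        if best is None or score(c) < score(best):
            best = c
    return best

def solutions_in_order(candidates, score):
    """Yield solutions best-first, re-running the whole search each time."""
    excluded = set()
    while True:
        s = best_solution(candidates, score, excluded)
        if s is None:
            return
        excluded.add(s)                       # each yield costs a full re-search
        yield s
```

Nothing about the search state is saved between calls; only the set of invalidated solutions persists, which is why every further solution costs roughly another full search.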