The Way To Good Heuristics

In the UC Berkeley CS188 Pacman projects, we need to supply heuristics to make search efficient. Heuristics are easy to find. For example, two trivial heuristics are the constant zero and the true minimum cost to the goal. The former is trivial to compute but does not speed up the search at all. The latter gives the best search efficiency, but it is very hard to compute (in fact, computing it is exactly our original search problem). A good heuristic is therefore a trade-off between the time to compute it and the accuracy of its approximation.

Here is my way to find a good heuristic:

  • Relax the problem so that the heuristic is admissible and efficient to compute.
  • Check that it is consistent.

In the relaxing step, we relax the problem so that it can be solved efficiently, for example by a greedy algorithm or by dynamic programming. Relaxing means ignoring some restrictions of the original problem. But be careful: we should relax the problem only slightly, so that the approximation stays accurate. Relaxation guarantees admissibility, since the optimal cost of the relaxed problem can never exceed the optimal cost of the original.
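As a concrete sketch of the relaxing step (the `Point` type and function names here are my own illustration, not from the project code): if we drop the walls from a grid search problem, the relaxed problem is solvable in constant time, and its exact optimal cost is the Manhattan distance.

```go
package main

import "fmt"

// Point is a hypothetical grid coordinate used for illustration.
type Point struct{ X, Y int }

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

// ManhattanHeuristic solves a relaxed grid problem: walls are ignored,
// so the optimal cost is just the Manhattan distance to the goal.
// Since relaxing can only lower the optimal cost, this is admissible.
func ManhattanHeuristic(u, goal Point) int {
	return abs(u.X-goal.X) + abs(u.Y-goal.Y)
}

func main() {
	fmt.Println(ManhattanHeuristic(Point{1, 2}, Point{4, 6})) // 3 + 4 = 7
}
```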

In the checking step, we should make sure our heuristic is consistent. Consistency is defined as heuristic "arc" cost <= actual arc cost for each arc, formally:

$h(u) - h(v) \leq cost(u, v)$ for every arc $(u, v)$.

For the grid search problem, every arc cost is $1$, so we should check whether $h(u)-h(v)\leq 1$ for every grid point $u$ and each neighbour $v$. If we use the Manhattan distance as the heuristic, it is consistent, since the Manhattan distance cannot decrease by more than $1$ in a single move. It does no harm to consistency if the heuristic increases along a move.
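The check above can be brute-forced over a small grid. This is a sketch under my own assumptions (a `width` x `height` grid with unit arc costs and no walls; the function names are hypothetical):

```go
package main

import "fmt"

// Consistent checks h(u) - h(v) <= 1 for the four unit-cost neighbours
// v of every grid point u. It returns false if the heuristic ever drops
// faster than the arc cost, i.e. violates consistency.
func Consistent(h func(x, y int) int, width, height int) bool {
	dirs := [][2]int{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}
	for x := 0; x < width; x++ {
		for y := 0; y < height; y++ {
			for _, d := range dirs {
				nx, ny := x+d[0], y+d[1]
				if nx < 0 || ny < 0 || nx >= width || ny >= height {
					continue // neighbour outside the grid
				}
				if h(x, y)-h(nx, ny) > 1 {
					return false
				}
			}
		}
	}
	return true
}

func main() {
	// Manhattan distance to goal (3, 3): drops by at most 1 per move.
	manhattan := func(x, y int) int {
		dx, dy := 3-x, 3-y
		if dx < 0 {
			dx = -dx
		}
		if dy < 0 {
			dy = -dy
		}
		return dx + dy
	}
	fmt.Println(Consistent(manhattan, 5, 5)) // true
}
```

Doubling a consistent heuristic usually breaks this check: twice the Manhattan distance can drop by $2$ in one move.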

Log Sum Exp Issue

We often do calculations in log space to avoid overflowing float64. Multiplication becomes addition in log space, and addition becomes the LogSumExp calculation. Suppose we want to calculate $\alpha \times \beta$ and $\alpha + \beta$. In log space, let $a=\log(\alpha)$, $b=\log(\beta)$; the multiplication is then simply

$\log(\alpha \times \beta) = a + b$

However, when we calculate the addition $\log(\alpha + \beta)$, if $a$ is greater than about $710$, then $\exp(a)$ overflows to +Inf in float64, even if the final result is not that big. Assuming $a \geq b$, we can do the algebra:

$\log(\alpha + \beta) = \log(e^a + e^b) = \log\left(e^a (1 + e^{b-a})\right) = a + \log(1 + e^{b-a})$

That's why the calculation $\log(1+x)$ is so important here: since $b \leq a$, the term $e^{b-a}$ can be tiny, and computing $\log(1+x)$ naively loses precision for small $x$. Most languages provide a function Log1p(x) that computes $\log(1+x)$ accurately in that regime. Here is my LogSumExp function in Go.

import "math"

// LogSumExp computes log(exp(a) + exp(b)) without overflowing float64.
func LogSumExp(a, b float64) float64 {
	if a < b {
		a, b = b, a // ensure a >= b, so exp(b-a) <= 1
	}
	if math.IsInf(a, 0) {
		return a // +Inf stays +Inf; both -Inf (i.e. log 0) stays -Inf
	}
	return a + math.Log1p(math.Exp(b-a))
}

Hello World

new Hello $ax^2+bx+c=0$ World!

package main

import "fmt"

func main() {
	fmt.Println("Hello World!")
}