Logic Programming Is Amazing

Ordinary programming              Logic programming
------------------------------    ------------------------------
Identify problem                  Identify problem
Assemble information              Assemble information
Figure out solution               Coffee break :)
Encode solution                   Encode info in KB
Encode problem instance as data   Encode problem instance as data
Apply program to data             Ask queries

Wow, logic programming is really amazing.

But reality is reality. In practice, the process goes more like this:

Ordinary programming      Logic programming
---------------------     -----------------
Apply program to data     Ask queries
Start the program         Start the program
Program finished          Running
Graduate                  Running
Global travel             Running

Most Constrained Variables And Least Constraining Value

There are two important ordering methods for solving CSPs with backtracking search:

  • Minimum Remaining Values (MRV)
  • Least Constraining Value (LCV)

The former orders variables: it says we should consider the variable with the minimum number of remaining values first, which is why it is also known as the most constrained variable method. The latter orders values: once we have decided which variable to assign, there are many values to try, and it says we should try the least constraining value first.

Most people believe that MRV makes the search fail fast, while LCV makes it succeed fast. Choose the fail-fast variable but the succeed-fast value; choose the most constrained variable but the least constraining value. The two ordering methods seem inconsistent, right?

Actually, this question confused me a lot. After struggling with it, I believe I finally understand. Both methods want the search to end fast. MRV makes the search not only fail fast but also succeed fast, while LCV makes the search succeed fast but not fail fast. When choosing a variable, picking the most constrained one keeps the search tree narrow, so the search ends fast either way. When choosing a value, there are two cases. Suppose we have $n$ values to consider, which means we currently face $n$ sub search trees. If all $n$ sub trees fail in the end, the order of the values does not matter: we have to explore them all. If one of the sub trees succeeds, it is better to explore that one first. We believe the least constraining value has the highest probability of succeeding, so we try it first, and that is why LCV makes the search succeed fast.
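To make the two orderings concrete, here is a minimal backtracking sketch with MRV variable selection and LCV value ordering. The CSP representation (a domains dict, a neighbors dict, and a binary consistent predicate) is my own simplification for illustration, not any course code, and there is no forward checking.

def backtrack(assignment, domains, neighbors, consistent):
    # Solved: every variable has a value.
    if len(assignment) == len(domains):
        return dict(assignment)

    def remaining(v):
        # Values of v still compatible with every assigned neighbour.
        return [w for w in domains[v]
                if all(consistent(v, w, n, assignment[n])
                       for n in neighbors[v] if n in assignment)]

    # MRV: choose the unassigned variable with the fewest remaining values.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(remaining(v)))

    def ruled_out(value):
        # LCV score: how many options this value removes from
        # unassigned neighbours (fewer removals is better).
        return sum(1 for n in neighbors[var] if n not in assignment
                   for w in domains[n] if not consistent(var, value, n, w))

    for value in sorted(remaining(var), key=ruled_out):
        assignment[var] = value
        result = backtrack(assignment, domains, neighbors, consistent)
        if result is not None:
            return result
        del assignment[var]
    return None  # every value failed: backtrack

# Toy usage: 3-colour three mutually adjacent regions.
domains = {v: ['r', 'g', 'b'] for v in 'ABC'}
neighbors = {'A': 'BC', 'B': 'AC', 'C': 'AB'}
print(backtrack({}, domains, neighbors, lambda x, vx, y, vy: vx != vy))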

Pacman Finally Died

[screencast: pacman]

After fighting for an hour, our little ghost finally killed the evil Pacman. :P

Actually, it is a task in the UCB CS188 course's Pacman project. In this task, we write a method that controls Pacman to earn a higher score. The agent in the screencast above runs a depth-3 expectimax search with a tricky evaluation function. It is hard for me to score any higher.
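For reference, here is a minimal sketch of the depth-limited expectimax such an agent runs. It is written against the CS188 GameState interface (getLegalActions, generateSuccessor, getNumAgents, isWin, isLose); evaluate stands in for whatever evaluation function you plug in, and ghosts are assumed to move uniformly at random.

def expectimax(state, depth, agent, evaluate):
    # Depth-limited expectimax: Pacman (agent 0) maximizes, ghosts are
    # chance nodes that move uniformly at random.
    if depth == 0 or state.isWin() or state.isLose():
        return evaluate(state)

    actions = state.getLegalActions(agent)
    if not actions:  # no legal moves: treat as a leaf
        return evaluate(state)

    next_agent = (agent + 1) % state.getNumAgents()
    # One ply of depth is consumed only after every agent has moved.
    next_depth = depth - 1 if next_agent == 0 else depth

    values = [expectimax(state.generateSuccessor(agent, a),
                         next_depth, next_agent, evaluate)
              for a in actions]

    if agent == 0:
        return max(values)                # max node
    return sum(values) / len(values)      # chance node: expected value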

My Favorite Alpha Beta Pruning Bug

The following $\alpha \beta$ pruning algorithm looks clever; however, it has a tricky bug. Can you find where the bug is?

def alpha_beta(state, alpha, beta):
    # Negamax formulation: the score is from the point of view of the
    # player to move, so child values are negated and the window flipped.
    if state.finished():
        return state.calcScore()
    for next_state in state.next():
        alpha = max(alpha, -alpha_beta(next_state, -beta, -alpha))
        if alpha >= beta:
            break  # cut off the remaining siblings
    return alpha

Hint: this code can still find the correct solution, but it may be more time-consuming than a correct implementation.

The Way To Good Heuristics

In the UCB CS188 course's Pacman project, we need to supply heuristics so that the search runs efficiently. Heuristics themselves are easy to find. For example, two trivial heuristics are the constant zero and the true minimum cost to the goal. The former is easy to compute but does not speed up the search at all; the latter gives the best search efficiency but is very hard to compute (it is, in fact, our original search problem). Thus, a good heuristic is a trade-off between the heuristic's computing time and its approximation accuracy.

Here is my way of finding a good heuristic:

  • Relax the problem to ensure admissibility and make the heuristic efficient to compute.
  • Check the consistency.

In the relaxing step, we relax the problem so that it becomes efficient to solve, for example by a greedy algorithm, by dynamic programming, and so on. Relaxing means ignoring some restrictions of the original problem. But be careful: we should relax the problem only slightly, so that the approximation stays accurate. Relaxation guarantees admissibility, because the optimal cost of a relaxed problem can never exceed the true cost.
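As a concrete illustration (my own toy example, not from the project code): for moving on a grid, dropping the walls relaxes the problem so far that it can be solved in closed form, and the relaxed optimum is exactly the Manhattan distance.

def manhattan_heuristic(position, goal):
    # Relaxed problem: ignore walls, so the agent can move straight
    # toward the goal. The relaxed optimum is the Manhattan distance,
    # which never exceeds the true cost, hence it is admissible.
    (x1, y1), (x2, y2) = position, goal
    return abs(x1 - x2) + abs(y1 - y2)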

In the checking step, we should make sure our heuristic is consistent. Consistency requires that the heuristic “arc” cost is at most the actual arc cost for every arc, formally:

\[\forall (u,v) \in E, h(u)-h(v)\leq cost(u, v)\]

For the grid search problem, every arc cost is $1$, so we should check whether $h(u)-h(v)\leq 1$ holds for every grid point $u$ and each of its neighbours $v$. If we use the Manhattan distance as the heuristic, it is consistent, since one move can decrease the Manhattan distance by at most $1$. Note that a move along which the heuristic increases never hurts consistency, because then $h(u)-h(v)\leq 0\leq cost(u,v)$.
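Consistency can also be checked empirically by sweeping every arc. Here is a minimal sketch for a unit-cost 4-connected grid; the function signature and the walls set are my own assumptions for illustration.

def is_consistent(h, width, height, goal, walls=frozenset()):
    # Empirically check h(u) - h(v) <= 1 over every unit-cost arc
    # of a 4-connected grid (walls excluded).
    cells = [(x, y) for x in range(width) for y in range(height)
             if (x, y) not in walls]
    for (x, y) in cells:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (x + dx, y + dy)
            if 0 <= v[0] < width and 0 <= v[1] < height and v not in walls:
                if h((x, y), goal) - h(v, goal) > 1:
                    return False
    return True

# Manhattan distance passes: one move changes it by at most 1.
manhattan = lambda u, g: abs(u[0] - g[0]) + abs(u[1] - g[1])
assert is_consistent(manhattan, 5, 5, goal=(4, 4))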