We consider a two-player zero-sum game in a bounded open domain Ω
described as follows: at a point x ∈ Ω, Players I and II
play an ε-step tug-of-war game with probability α, and
with probability β (α + β = 1), a
random point in the ball of radius ε centered at x is
chosen. Once the game position reaches the boundary, Player II pays Player I the amount
given by a fixed payoff function F. We give a detailed proof of the fact
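As a minimal illustration of the step dynamics, the following Python sketch simulates one ε-step of the game; the fair-coin rule for the tug-of-war round and the names game_step, strategy_I, strategy_II are assumptions for illustration, since the abstract does not spell out the rules of that round.

    import numpy as np

    def game_step(x, eps, alpha, strategy_I, strategy_II, rng):
        """One ε-step of the tug-of-war game with noise, starting from x.

        With probability alpha a fair coin decides which player moves the
        token to a point of their choice in the ball B_eps(x); with the
        remaining probability beta = 1 - alpha the next position is drawn
        uniformly from B_eps(x).
        """
        if rng.random() < alpha:
            # Tug-of-war round: fair coin toss, the winner picks the next position.
            mover = strategy_I if rng.random() < 0.5 else strategy_II
            return mover(x, eps)
        # Noise round: uniform sample from the ball of radius eps around x
        # (rejection sampling from the surrounding cube).
        while True:
            y = x + eps * rng.uniform(-1.0, 1.0, size=x.shape)
            if np.linalg.norm(y - x) < eps:
                return y

A full simulation would iterate game_step until the position leaves Ω and then evaluate the payoff F at the exit point.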
that the value functions of this game satisfy the Dynamic Programming Principle
for x ∈ Ω, with
u(y) = F(y) when
y ∉ Ω. This principle implies the existence of
quasioptimal Markovian strategies.
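In its standard form for this game, the Dynamic Programming Principle reads

    u(x) = (α/2) [ sup_{B_ε(x)} u + inf_{B_ε(x)} u ] + β ⨍_{B_ε(x)} u(y) dy   for x ∈ Ω,

where B_ε(x) denotes the ball of radius ε centered at x and ⨍ is the mean-value integral over that ball; the first term corresponds to the tug-of-war round and the second to the uniformly random move.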