We study the problem of a trader who seeks to maximize the expected reward from liquidating a given stock position. We model the stock price dynamics as a geometric pure jump process whose local characteristics are driven by an unobservable finite-state Markov chain and by the liquidation rate. This setting captures uncertainty about the state of the market as well as feedback effects from trading. Using stochastic filtering, we reduce the optimization problem under partial information to an equivalent one under complete information, which leads to a control problem for piecewise deterministic Markov processes (PDMPs for short). Applying control theory for PDMPs, we derive the optimality equation for the value function and characterize the value function as the unique viscosity solution of the associated dynamic programming equation. The paper concludes with a detailed analysis of specific examples, and we present numerical results illustrating the impact of partial information and of feedback effects on the value function and on the optimal liquidation rate.
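For orientation only, the display below sketches one way such a geometric pure jump price model with modulated local characteristics can be written; the symbols $\mu^S$, $\lambda$, $\nu$, $Y$, and $\alpha$ are illustrative notation introduced here and are not taken from the paper.

% Illustrative sketch, assumed notation (not the paper's): mu^S jump measure, lambda intensity, nu jump-size law
\[
  \frac{dS_t}{S_{t-}} \;=\; \int_{\mathbb{R}} \bigl(e^{z}-1\bigr)\, \mu^S(dt,dz),
  \qquad
  \mu^S(dt,dz) \text{ with compensator } \lambda\bigl(Y_{t-},\alpha_t\bigr)\,\nu\bigl(Y_{t-},\alpha_t; dz\bigr)\,dt,
\]
where $Y$ denotes the unobservable finite-state Markov chain and $\alpha_t$ the liquidation rate. Under this reading, filtering replaces $Y$ by its conditional distribution given the observed prices, which evolves deterministically between price jumps and is updated at jump times, giving the piecewise deterministic Markov structure referred to in the abstract.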