Partially observable Markov decision processes with uniformly distributed signal processes

Authors

  • Ιωάννης Γκουλιώνης

Keywords:

Maintenance, dynamic programming, P.O.M.D.P.

Abstract

A partially observed Markov decision process (P.O.M.D.P.) is a sequential decision problem in which information about the parameters of interest is incomplete and the available actions include sampling, surveying, or otherwise collecting additional information. Such problems can in principle be solved as dynamic programs, but the relevant state space is infinite, which inhibits algorithmic solution. We formulate a P.O.M.D.P. with a continuous signal space and present a method for converting it into a problem with uniformly distributed signal processes. We discuss how to solve P.O.M.D.P. problems with continuous signal processes; however, in order to obtain a value function close to the optimal value function, we may need to construct a step function with a large number of signals.
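
To make the discretisation idea concrete, the following is a minimal Python sketch of a Bayes belief update for a hypothetical two-state machine-maintenance P.O.M.D.P. in which the continuous signal is replaced by K signal bins, a step-function approximation in the spirit described in the abstract. The transition matrix, likelihood shapes, and all numerical values are illustrative assumptions, not the paper's model.

    import numpy as np

    # Hypothetical two-state maintenance model (state 0 = "good", state 1 = "worn").
    P = np.array([[0.9, 0.1],   # transition probabilities under "continue operating"
                  [0.0, 1.0]])

    K = 8                                    # number of signal bins in the step-function approximation
    bin_edges = np.linspace(0.0, 1.0, K + 1)

    def signal_likelihood(state):
        """P(signal bin | state): a step function over the K bins (illustrative shapes)."""
        if state == 0:                       # good state: low readings more likely
            density = np.exp(-5.0 * bin_edges[:-1])
        else:                                # worn state: high readings more likely
            density = np.exp(-5.0 * (1.0 - bin_edges[:-1]))
        return density / density.sum()

    def belief_update(belief, signal_bin):
        """Bayes update of the belief state after a transition and an observed signal bin."""
        predicted = belief @ P                                   # prior after the state transition
        likelihood = np.array([signal_likelihood(s)[signal_bin] for s in range(2)])
        posterior = likelihood * predicted
        return posterior / posterior.sum()

    belief = np.array([1.0, 0.0])            # start certain the machine is in the good state
    for obs in [1, 3, 6, 7]:                 # observed signal bins over successive periods
        belief = belief_update(belief, obs)
        print(belief)

With a coarser or finer choice of K, the same update runs over a cheaper or more accurate step-function approximation of the continuous signal density, which is the trade-off the abstract notes between computational effort and closeness to the optimal value function.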



Published

09-09-2005

How to Cite

Γκουλιώνης Ι. (2005). Partially observable Markov decision processes with uniformly distributed signal processes. SPOUDAI Journal of Economics and Business, 55(3), 55–75. Retrieved from http://spoudai.org/index.php/journal/article/view/400