Crossref Citations
This article has been cited by the following publications. This list is generated based on data provided by Crossref.
Hosaka, Masanori and Kurano, Masami (1999). Non-discounted optimal policies in controlled Markov set-chains. Journal of the Operations Research Society of Japan, Vol. 42, Issue 3, p. 256.
Hosaka, Masanori, Horiguchi, Masayuki and Kurano, Masami (2001). Controlled Markov set-chains under average criteria. Applied Mathematics and Computation, Vol. 120, Issue 1-3, p. 195.
Kurano, Masami, Yasuda, Masami and Nakagami, Jun-ichi (2002). Markov Processes and Controlled Markov Chains. p. 223.
Chang, Hyeong Soo (2005). Error bounds for finite step approximations for solving infinite horizon controlled Markov set-chains. IEEE Transactions on Automatic Control, Vol. 50, Issue 9, p. 1413.
Chang, Hyeong Soo and Chong, E. K. P. (2005). On Solving Controlled Markov Set-Chains via Multi-Policy Improvement. p. 8058.
Kurano, Masami, Yasuda, Masami, Nakagami, Jun-ichi and Yoshida, Yuji (2005). Modeling Decisions for Artificial Intelligence. Vol. 3558, p. 283.
Chang, Hyeong Soo (2006). Perfect information two-person zero-sum Markov games with imprecise transition probabilities. Mathematical Methods of Operations Research, Vol. 64, Issue 2, p. 335.
Kurano, M., Yasuda, M., Nakagami, J. and Yoshida, Y. (2006). A fuzzy approach to Markov decision processes with uncertain transition probabilities. Fuzzy Sets and Systems, Vol. 157, Issue 19, p. 2674.
Chang, Hyeong Soo and Chong, Edwin K. P. (2007). Solving Controlled Markov Set-Chains With Discounting via Multipolicy Improvement. IEEE Transactions on Automatic Control, Vol. 52, Issue 3, p. 564.
Kurano, M., Yasuda, M., Nakagami, J. and Yoshida, Y. (2007). Fuzzy optimality relation for perceptive MDPs—the average case. Fuzzy Sets and Systems, Vol. 158, Issue 17, p. 1905.
González-Hernández, Juan, López-Martínez, Raquiel R. and Pérez-Hernández, J. Rubén (2007). Markov control processes with randomized discounted cost. Mathematical Methods of Operations Research, Vol. 65, Issue 1, p. 27.
Li, Baohua and Si, Jennie (2008). Robust Optimality for Discounted Infinite-Horizon Markov Decision Processes With Uncertain Transition Matrices. IEEE Transactions on Automatic Control, Vol. 53, Issue 9, p. 2112.
Chang, Hyeong Soo (2008). Finite-Step Approximation Error Bounds for Solving Average-Reward-Controlled Markov Set-Chains. IEEE Transactions on Automatic Control, Vol. 53, Issue 1, p. 350.
Li, Baohua and Si, Jennie (2010). Approximate Robust Policy Iteration Using Multilayer Perceptron Neural Networks for Discounted Infinite-Horizon Markov Decision Processes With Uncertain Correlated Transition Matrices. IEEE Transactions on Neural Networks, Vol. 21, Issue 8, p. 1270.
Li, Baohua and Si, Jennie (2011). Belief function model for reliable optimal set estimation of transition matrices in discounted infinite-horizon Markov decision processes. p. 1214.
Ribes, Jorge Luis B., Dimuro, Gracaliz Pereira and Aguiar, Marilton Sanchotene de (2011). On Vector and Matrices of Fuzzy Numbers. p. 39.
Mastin, Andrew and Jaillet, Patrick (2012). Loss bounds for uncertain transition probabilities in Markov decision processes. p. 6708.
Hu, XiangTao, Huang, YongAn, Yin, ZhouPing and Xiong, YouLun (2012). Driving force planning in shield tunneling based on Markov decision processes. Science China Technological Sciences, Vol. 55, Issue 4, p. 1022.
Chang, Hyeong Soo, Hu, Jiaqiao, Fu, Michael C. and Marcus, Steven I. (2013). Simulation-Based Algorithms for Markov Decision Processes. p. 1.
Eckstein, Stephan (2019). Extended Laplace principle for empirical measures of a Markov chain. Advances in Applied Probability, Vol. 51, Issue 1, p. 136.