
Markov Decision Programming for Process Control in Batch Production

Published online by Cambridge University Press:  27 July 2009

David D. Yao
Affiliation:
IEOR Department, Columbia University, New York, New York 10027
Shaohui Zheng
Affiliation:
School of Business and Management, Hong Kong University of Science and Technology, Clearwater Bay, Hong Kong

Abstract

A machining process produces a batch of n units every time period. At the end of the period, the units are inspected, provided inspection is cost free. Based on the inspection results, the quality data, and the system history, a decision is made as to whether the process is in control. If not, the process is "revised" through machine recalibration, maintenance, or repair. When inspection is costly, there is the additional decision of whether to inspect the produced batch at all. We formulate the problem as a Markov decision program, under both discounted- and average-cost criteria. We prove the optimality of certain threshold policies and characterize the monotone behavior of the optimal thresholds.
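The threshold structure described in the abstract can be illustrated with a toy model (not the paper's actual formulation): a process deteriorates through a handful of quality states, each period we either continue producing or pay a fixed cost to revise the process back to the in-control state, and discounted-cost value iteration recovers the optimal policy. All states, costs, and transition probabilities below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the continue-vs-revise trade-off. States 0..S-1 order
# deterioration (0 = in control); per-period defect cost grows with the
# state. CONTINUE: pay the state's defect cost, and the process may
# deteriorate one state. REVISE: pay a fixed repair cost, reset to
# state 0, then produce at state 0's cost. Discounted-cost value
# iteration; all numbers are hypothetical.

S = 6                  # number of deterioration states
beta = 0.95            # discount factor
revise_cost = 10.0
defect_cost = np.array([0.0, 1.0, 2.5, 4.5, 7.0, 10.0])
p_worse = 0.3          # chance of deteriorating one state per period

def value_iteration(tol=1e-9):
    V = np.zeros(S)
    while True:
        # expected next-period value if we continue (last state is absorbing)
        V_next = (1 - p_worse) * V + p_worse * np.append(V[1:], V[-1])
        cont = defect_cost + beta * V_next
        rev = (revise_cost + defect_cost[0]
               + beta * ((1 - p_worse) * V[0] + p_worse * V[1]))
        V_new = np.minimum(cont, rev)
        if np.max(np.abs(V_new - V)) < tol:
            policy = np.where(cont <= rev, "continue", "revise")
            return V_new, policy
        V = V_new

V, policy = value_iteration()
print(policy)
```

Because the continuation cost is increasing in the state while the revision cost is constant, the computed policy is a threshold: "continue" in good states, "revise" once the process has deteriorated far enough. This is the monotone structure the paper establishes rigorously for its batch-production model.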

Research Article

Copyright © Cambridge University Press 1998

