
25 - Learning to Play Stackelberg Security Games

Published online by Cambridge University Press: 13 December 2017

Ali E. Abbas, Milind Tambe, and Detlof von Winterfeldt (University of Southern California)

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2017


