
Improving trust and reputation assessment with dynamic behaviour

Published online by Cambridge University Press:  17 June 2020

Caroline Player
Affiliation:
Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK. E-mails: [email protected], [email protected]
Nathan Griffiths
Affiliation:
Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK. E-mails: [email protected], [email protected]

Abstract

Trust between agents in multi-agent systems (MASs) is critical to encourage high levels of cooperation. Existing methods to assess trust and reputation use direct and indirect past experiences of an agent to estimate its future performance; however, such experiences will not always be representative if agents change their behaviour over time.

Real-world distributed networks such as online marketplaces, P2P networks, pervasive computing and the Smart Grid can be viewed as MASs. Dynamic agent behaviour in such MASs can arise from seasonal changes, cheaters, supply chain faults, network traffic and many other causes. However, existing trust and reputation models use limited techniques, such as forgetting factors and sliding windows, to account for dynamic behaviour.
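As a minimal sketch of the two techniques mentioned above, the Python below illustrates a forgetting factor and a sliding window applied to interaction outcomes. The class names, parameter values and the encoding of outcomes as values in [0, 1] are our own illustrative assumptions, not drawn from any particular model.

```python
# Illustrative sketch of two common techniques for handling dynamic behaviour:
# a forgetting factor and a sliding window. Names, parameters and the [0, 1]
# outcome encoding are assumptions made for illustration.
from collections import deque


class ForgettingFactorTrust:
    """Exponentially discounts older interaction outcomes."""

    def __init__(self, decay=0.9):
        self.decay = decay            # weight retained by past evidence
        self.weighted_sum = 0.0
        self.weight_total = 0.0

    def update(self, outcome):
        self.weighted_sum = self.decay * self.weighted_sum + outcome
        self.weight_total = self.decay * self.weight_total + 1.0

    def trust(self, default=0.5):
        return self.weighted_sum / self.weight_total if self.weight_total else default


class SlidingWindowTrust:
    """Averages only the most recent `window` interaction outcomes."""

    def __init__(self, window=20):
        self.outcomes = deque(maxlen=window)

    def update(self, outcome):
        self.outcomes.append(outcome)

    def trust(self, default=0.5):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else default
```

Both techniques discard or discount old evidence at a fixed rate regardless of whether behaviour has actually changed, which is the limitation that motivates the approach proposed here.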

In this paper, we propose Reacting and Predicting in Trust and Reputation (RaPTaR), a method that extends existing trust and reputation models by giving agents the ability to monitor the outcomes of interactions with a group of agents over time, identify likely changes in behaviour and adapt accordingly. Additionally, RaPTaR can provide an a priori estimate of trust when there is little or no interaction data (either because an agent is new or because a detected behaviour change suggests that recent past experiences are no longer representative). Our results show that RaPTaR outperforms existing trust and reputation methods when dynamic behaviour causes the ranking of the best agents to interact with to change.
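As a rough illustration of the kind of monitoring described above (not the RaPTaR algorithm itself), the sketch below compares recent interaction outcomes against older ones with a two-sample Kolmogorov-Smirnov test, discards stale history when a change is detected, and falls back to a prior estimate when too little representative data remains. The window size, significance level and prior are assumptions made for illustration.

```python
# Hypothetical sketch of behaviour-change monitoring over interaction
# outcomes; this is not the RaPTaR algorithm from the paper.
from scipy.stats import ks_2samp


class BehaviourMonitor:
    def __init__(self, window=30, alpha=0.05, prior=0.5, min_obs=5):
        self.window = window      # number of recent outcomes to compare
        self.alpha = alpha        # significance level for the KS test
        self.prior = prior        # a priori trust when history is unusable
        self.min_obs = min_obs
        self.history = []         # interaction outcomes in [0, 1]

    def record(self, outcome):
        self.history.append(outcome)
        recent = self.history[-self.window:]
        older = self.history[:-self.window]
        if len(recent) >= self.min_obs and len(older) >= self.min_obs:
            result = ks_2samp(older, recent)
            if result.pvalue < self.alpha:   # behaviour appears to have changed
                self.history = recent        # keep only post-change evidence

    def estimate(self):
        if len(self.history) < self.min_obs:
            return self.prior                # little or no representative data
        return sum(self.history) / len(self.history)
```

In use, an agent would call record() after each interaction with a group and estimate() before choosing an interaction partner, so that trust assessments react to detected changes rather than decaying at a fixed rate.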

Type: Research Article
Copyright: © Cambridge University Press, 2020

