Book contents
- Frontmatter
- Contents
- Preface
- Part I Secure Multiparty Computation
- 1 Introduction
- 2 Preliminaries
- 3 MPC Protocols with Passive Security
- 4 Models
- 5 Information-Theoretic Robust MPC Protocols
- 6 MPC from General Linear Secret-Sharing Schemes
- 7 Cryptographic MPC Protocols
- 8 Some Techniques for Efficiency Improvements
- 9 Applications of MPC
- Part II Secret Sharing
- List of Algorithms
- List of Exercises
- References
- Index
4 - Models
from Part I - Secure Multiparty Computation
Published online by Cambridge University Press: 05 August 2015
Introduction
The protocol we described in Chapter 3 is perfectly correct and perfectly private, but it cannot tolerate any party deviating from the protocol. In later chapters we will present protocols that are correct and private even if some parties do not follow the protocol. Such protocols are called robust. Before we can prove that these protocols are robust, we need a precise definition of what we mean by robustness.
In this section we will describe a security model for cryptographic protocols known as the UC model. Here UC stands for universally composable. The name was adopted because a protocol proven secure in this model remains secure regardless of the context in which it is used. In other words, it can be “universally” composed with any set of other protocols. Our formulation of the UC model differs in several respects from the way it was originally formalized in the literature. We will hint at these differences as we go; some additional comments can be found in the Notes section of this chapter.
Before we look at the formal details, let us start by discussing the basic ideas behind defining privacy and robustness of a protocol and, last but not least, why these two closely entangled concepts should be captured by one common definition.
Defining Privacy
It is convenient to first look back on how we defined privacy in Chapter 3. Informally, we defined a protocol to be private (against corruptions of size t) as follows: first, pick an input (x1, …, xn) for the protocol, and make a run of the protocol on this input. Then pick some C ⊂ {P1, …, Pn} with |C| ≤ t and consider the values {viewj}Pj∈C, where viewj is the view of party Pj in the execution. The values {viewj}Pj∈C constitute exactly the information leaked to the corrupted parties, C, during an execution. In the following we therefore call the values {viewj}Pj∈C the leaked values.
We then want to say that the leaked values do not allow the corrupted parties to learn anything that they should not learn. Clearly, the corrupted parties must learn their own inputs and outputs; in fact, delivering the outputs is the entire purpose of the protocol. We therefore call the values {xj, yj}Pj∈C the allowed values.
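The intuition that the leaked values reveal nothing beyond the allowed values can be made concrete with a small sketch. The code below (our own illustration, not from the book; the prime modulus and function names are assumptions) uses additive secret sharing, as in the passively secure protocol of Chapter 3: the shares seen by any set of at most n − 1 corrupted parties are uniformly random, so a "simulator" that knows only the allowed values — here, nothing about the secret at all — can sample values with exactly the same distribution as the real leaked values.

```python
import secrets

P = 2**61 - 1  # an illustrative prime modulus

def share(x, n):
    """Additively share x among n parties: n random-looking shares
    that sum to x modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def leaked_values(shares, corrupted):
    """The view of the corrupted set C: just the shares held by
    the parties in C."""
    return [shares[j] for j in corrupted]

def simulate(corrupted):
    """Simulator for the corrupted set C: without knowing the secret,
    output uniformly random field elements. For |C| <= n - 1 this has
    the same distribution as the real leaked values."""
    return [secrets.randbelow(P) for _ in corrupted]

# Reconstruction sanity check: all n shares together reveal the secret.
n = 5
shares = share(42, n)
print(sum(shares) % P)  # prints 42
```

The point of the sketch is the definitional idea developed in this chapter: privacy holds when whatever the corrupted parties see (the leaked values) could have been produced by a simulator given only the allowed values.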
- Secure Multiparty Computation and Secret Sharing, pp. 51–103. Publisher: Cambridge University Press. Print publication year: 2015.