# Algebraic Algorithms: Introductory Number Theory and Abstract Algebra for Computer Science Applications

## 1. Introduction, Background, and Motivation

When many real-world problems are addressed or solved mathematically and computationally, the details of those problems are abstracted away until they can be represented directly as idealized mathematical structures (e.g., numbers, sets, trees, graphs, matrices, and so on). In this course, we will study a collection of such idealized mathematical objects: integers, residues, groups, isomorphisms, and several others. We will see how these structures and their properties can be used for implementing useful computational solutions to problems such as random number generation, prime number generation, error correction, trusted and distributed storage and computation, secure communication, and others.
In covering the material for this course, we will use the standard language and conventions for discussing these mathematical structures that have been developed by the community of mathematicians over the course of history. You will need to become familiar with these conventions in order to find, identify, and use the structures and techniques that have already been developed for representing and solving certain computational problems. At the same time, we will also learn how modern programming languages and programming paradigms can be used to implement these structures and algorithms both accessibly and efficiently.
The development and application of mathematics involves abstraction. A problem can be viewed at multiple levels of abstraction, and in developing mathematics humans have adopted a variety of techniques that allow them to successfully employ abstraction to study natural phenomena and solve problems.

| symbolic | abstract meaning | concrete meaning in application domain |
|---|---|---|
| 2 + 3 | 5 | five objects |
| {(1, 2), (1, 3)} | acyclic graph | file system |
| {(1, 2), (2, 3), (3, 1)} | graph with cycle | network |
| {(0,1), (1,2), (2,0)} | permutation | random number sequence |

The above illustrates the different levels of abstraction that may exist for a given problem. We employ a language of symbols to denote certain abstract structures, which may correspond to actual structures in the world. A string of symbols corresponds to a particular abstract object. Notice that the actual object being modeled and the abstract structure behave the same way, and that this behavior implies certain rules about how we can manipulate the symbols without changing the object that they name. For example, we can represent the same graph using the two strings of symbols "{(1,2), (2,3), (3,1)}" and "{(2,3), (3,1), (1,2)}", or the same number of objects using "2 + 3", "3 + 2", "1 + 4", and so on.
In this course, we will begin by reviewing the terminology and concepts of logic, integer arithmetic, and set theory, which we will use throughout the course. We will then show that the algebraic properties of the integers also apply to congruence classes of integers (i.e., the properties of modular arithmetic operations), and we will derive and utilize theorems that have useful computer science applications (such as for generating random numbers and creating cryptographic protocols). We will then go further and show that some of the algebraic properties that hold in integer and modular arithmetic can also apply to any data structure, and we will study how to recognize and take advantage of these properties.

### 1.2. Informal motivating example: random number generation

Let us informally consider the problem of generating a sequence of random positive integers. Random number generators are needed in many situations and applications, including:
• generating unique identifiers for database records, objects, etc.;
• generating a one-time pad for a simple encryption scheme;
• generating public and private keys for more sophisticated encryption and signature schemes;
• simulation and approximation methods that employ random sampling (Monte-Carlo, and so on).
Different applications will impose different requirements on what is and is not a sufficiently "random" sequence of numbers. Suppose we adopt the following method:
• n0 = a number in the range (inclusive) 0 to 5;
• ni = (2 ⋅ ni-1 + 1) mod 6.
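A minimal sketch of this first method in Python (the function name and the seed are our own choices for illustration):

```python
def sequence(seed, count):
    """Generate `count` values using the rule n_i = (2 * n_{i-1} + 1) mod 6."""
    ns = [seed]
    for _ in range(count - 1):
        ns.append((2 * ns[-1] + 1) % 6)
    return ns

# Starting from the seed 3, the sequence repeats almost immediately:
# sequence(3, 6) → [3, 1, 3, 1, 3, 1]
```

Even for this tiny modulus, the method cycles between just two values for this seed, which previews the questions about repetition considered in this section.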
We can consider another method:
• n0 = an initial seed integer in the range 10³ ≤ n0 < 10⁴;
• ni = the last four digits of ni-1² (that is, ni = ni-1² mod 10⁴).
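This second method is a variant of the classic middle-square technique; here is a sketch in Python (the function name and the seed are arbitrary choices of ours):

```python
def middle_square_like(seed, count):
    """n_0 = seed (with 10**3 <= seed < 10**4); n_i = last four digits of n_{i-1} squared."""
    ns = [seed]
    for _ in range(count - 1):
        ns.append((ns[-1] ** 2) % 10**4)   # keep only the last four digits
    return ns

# middle_square_like(1234, 4) → [1234, 2756, 5536, 7296]
```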
Frequent repetition of a sequence may or may not be allowed in our given application. Does the above method produce repeating numbers? How often? For how many initial seeds? How do we choose a good seed? We can measure a physical process or component (a clock, a keyboard), but even under these circumstances we need a way to reason about the range of random values the measurement produces, and the range of random values the application requires. How do we begin to approach and formally characterize these aspects of the problem so that we are certain we are meeting the requirements imposed by the application?
One way to model a random number generation process is to view it as a permutation. In fact, there is more than one way to view the process as a permutation. We could simply count up from 0 to m and apply the same permutation to each 0 ≤ n ≤ m in order to produce the nth random number in the sequence. Is there an efficient way (i.e., using no more memory than O(log m)) to compute a random number from each n such that a number never repeats?
In this course we will learn about a variety of mathematical structures and their properties that will allow us to precisely specify the above problem and others like it, to identify what solutions are appropriate for such a problem, and to implement these solutions correctly and, where necessary, efficiently.

## 2. Review of Logic with Sets, Relations, and Operators

In this section, we will review several abstract structures and associated properties (and the symbolic language used to represent them) that you should have already encountered in past courses. Simultaneously, we will review one way in which these structures can be implemented and manipulated within the modern programming language Python. As with most human languages that have developed organically over time, mathematics has a rich and often redundant vocabulary. We introduce many terms in this section that we will use consistently in this course. However, keep in mind that there are often other synonyms within mathematics and computer science for these structures.

### 2.1. Formulas without quantifiers

Definition: A logical formula or formula is a string of symbols that follows a certain syntax. If the formula is written using a correct syntax, we can ask about its meaning (i.e., whether the formula is true or false). The symbols or, and, not, implies, and iff are logical operators.
The basic building blocks (a.k.a., base cases) for formulas are true, false, and predicates. When a formula consists of only one of these (and no operators), it is an atomic formula. Like any formula, each atomic formula has a particular meaning (it is either true or it is false). Atomic formulas can be combined using logical operators to build up larger formulas. The table below provides a way to determine the meaning of a formula by breaking it down into its constituent parts.

| formula | meaning | example of one possible Python representation |
|---|---|---|
| true | always true | `True` |
| false | always false | `False` |
| f1 and f2 | only true if both f1 and f2 are true | `True and False` |
| f1 or f2 | true if f1 or f2 (or both) are true | `True or (False and True)` |
| f1 implies f2 | if f1 is true, then f2 must be true; f1 is false, or f2 is true; f1 is "less than or equal to" f2 (if false is 0 and true is 1) | `False <= True` |
| f1 iff f2 | f1 is true if and only if f2 is true; f1 and f2 are either both true or both false | `True == False` |
| ¬ f | true if f is false | `not (True or (False and True))` |
| ( f ) | true if f is true | `(True and (not (False)))` |
| predicate | depends on the definition of the predicate | `isPrime(7)` |

A predicate can have zero or more arguments. Whether a given atomic formula consisting of a predicate that takes at least one argument is true or false depends on the arguments supplied to it. For example, we see above for the predicate isPrime(?) that takes one argument, the meaning of isPrime(7) should be true but the meaning of isPrime(4) should be false.
The following table may help with gaining a good intuition for the meaning of the implies operator.

| meaning of left-hand side (premise) | meaning of right-hand side (conclusion) | meaning of entire formula | comments |
|---|---|---|---|
| true | true | true | if the premise is true and the conclusion is true, the claim of implication is true; thus, the whole formula is true |
| true | false | false | if the premise is true but the conclusion is false, the conclusion is not implied by the premise, so the claim of implication is false; thus, the formula is false |
| false | true | true | if the conclusion is true on its own, it doesn't matter that the premise is false, because anything implies an independently true conclusion; thus, the claim of implication is true, and so is the entire formula |
| false | false | true | if we assume that a false premise is true, then "false" itself is "true"; in other words, false implies itself, so the formula is true |

Example: Suppose we have the following formula involving two predicates the sun is visible and it is daytime:
 the sun is visible ⇒ it is daytime
This formula might describe a property of our real-world experience of a person that is in a particular fixed location on the surface of the Earth. We could state that the above formula is always true (i.e., it is always an accurate description of the system it describes). For every possible assignment of values to each variable, the above formula is indeed accurate, in that it is true exactly in those situations that might occur on Earth, and false in any situation that cannot occur:

| the sun is visible | it is daytime | meaning | interpretation |
|---|---|---|---|
| true | true | true | a sunny day |
| true | false | false | (impossible) |
| false | true | true | a cloudy day |
| false | false | true | nighttime |

In particular, only one set of values causes the formula to be false: if the sun is in the sky, but it is not daytime. This is indeed impossible; all the others are possible (it may be day or night, or it may be cloudy during the day). The contrapositive of the formula is true if the formula is true:
 ¬(it is daytime) ⇒ ¬(the sun is visible)
Notice that the contrapositive follows directly from the original formula: if the sun is visible ⇒ it is daytime must be true, we can discard the rows of the truth table in which that formula is false, and in the only remaining row where it is daytime is false, the sun is visible is false as well.
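We can confirm this equivalence exhaustively in Python, representing implication with the `<=` operator on booleans:

```python
# Check that (p implies q) and ((not q) implies (not p)) have identical
# truth tables, encoding "implies" as <= on booleans.
for p in (False, True):
    for q in (False, True):
        assert (p <= q) == ((not q) <= (not p))
```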

### 2.2. Terms: integers and term operators that take integer inputs

Definition: A term is a string of symbols that represents some kind of mathematical structure. In our case, terms will initially represent integers or sets of integers. Terms may contain term operators. We can view these as functions that take terms as input and return terms as output. The term operators for terms that represent integers with which we will be working are +, -, ⋅, and mod.
 term what it represents example of one possiblePython representation 0 0 0 1 1 1 z1 + z2 the integer sum of z1 and z2 3 + 4 z1 − z2 the integer difference of z1 and z2 (1 + 2) - 4 z1 ⋅ z2 the integer product of z1 and z2 3 * 5 z1 mod z2 the remainder of the integer quotient z1 / z2z1 - ⌊ z1/z2 ⌋ ⋅ z2 17 % 5 z1z2 product of z2 instances of z1 2**3pow(2,3)

### 2.3. Formulas: relational operators and predicates dealing with integers

Definition: A term can only appear in a formula if it is an argument to a predicate. A few common predicates involving integers are represented using relational operators (e.g., ≤, ≥).
 formula what it represents example of one possiblePython representation z1 = z2 true if z1 and z2have the same meaning;false otherwise 1 == 2 z1 < z2 true if z1 is less than z2;false otherwise 4 < 3 z1 > z2 true if z1 is greater than z2;false otherwise 4 > 3 z1 ≤ z2 true if z1 is less than or equal to z2;false otherwise 4 <= 3 z1 ≥ z2 true if z1 is greater than or equal to z2;false otherwise 4 >= 3 z1 ≠ z2 true if z1 is not equal to z2;false otherwise 4 != 3
Example: We can define our own predicates as well. Notice that one way we can represent these in Python is by defining a function that returns a boolean result.

| predicate definition | example of one possible Python representation |
|---|---|
| P(x)     iff     x > 0 and x < 2 | `def P(x): return x > 0 and x < 2` |
| Q(x)     iff     x > 3 | `Q = lambda x: x > 3` |

| formula | what it represents | example of one possible Python representation |
|---|---|---|
| P(1) | true | `P(1)` |
| P(1) or P(2) | true | `P(1) or P(2)` |
| Q(1) and Q(2) | false | `Q(1) and Q(2)` |

We will use the following predicates throughout the course.
Definition: For any x, y ∈ ℤ, x | y    iff   y/x ∈ ℤ. If x | y, we then say that x is a factor of y.
Definition: For any y ∈ ℤ where y > 1, y is prime    iff    for any integer x where 2 ≤ x < y, it is not true that x | y. In other words, y is prime if its only factors are 1 and y (itself).

| formula | what it represents |
|---|---|
| x \| y | y / x ∈ ℤ; x divides y; y is divisible by x; y is an integer multiple of x; y mod x = 0; x is a factor of y |
| y is prime | y > 1 and (x \| y implies x = 1 or x = y); y > 1 and y is divisible only by 1 and itself |

Example: We can define the divisibility and primality predicates in Python in the following way:

def divides(x, y):
    return y % x == 0   # The remainder of y/x is 0.

def prime(y):
    for x in range(2, y):
        if divides(x, y):
            return False
    return True

Example: We can gradually generalize the iteration in our primality predicate from the previous example so that it works with any predicate. Note that we restate the property slightly: a number is prime if no smaller number (greater than 1) divides it evenly, so if we ever find a number that violates this property, we immediately return False. This is effectively an implementation of a quantifier, which we introduce further below.

def doesNotDivide(x, y):
    return y % x != 0   # The remainder of y/x is nonzero.

def prime(y):
    for x in range(2, y):
        if not doesNotDivide(x, y):
            return False
    return True

def checkAll(S, P):
    for x in S:
        if not P(x):
            return False
    return True

Given the above, it is now possible to obtain the same behavior provided by prime(y) by supplying appropriate arguments (for a given integer y):

>>> checkAll(set(range(2,y)), lambda x: doesNotDivide(x,y))
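For instance, binding y to concrete values (13 and 12, chosen arbitrarily; the definitions are repeated so the fragment is self-contained):

```python
def doesNotDivide(x, y):   # as defined above
    return y % x != 0

def checkAll(S, P):        # as defined above
    for x in S:
        if not P(x):
            return False
    return True

assert checkAll(set(range(2, 13)), lambda x: doesNotDivide(x, 13))       # 13 is prime
assert not checkAll(set(range(2, 12)), lambda x: doesNotDivide(x, 12))   # 12 is not
```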

Definition: For any x, y ∈ ℤ, x is a proper factor of y    iff    y/x ∈ ℤ and x < y.

### 2.4. Terms: finite sets of integers, term operators that take set inputs, and set comprehensions

Definition: A finite set of integers is an unordered, finite collection of zero or more integers with no duplicates. The following are examples of terms the meaning of which is a finite set of integers (with the exception of the set size terms, the meaning of which is a positive integer).
 term what it represents example of one possiblePython representation ∅ a set with no elements in it set() {1,2,3} {1,2,3} {1,2,3} {2,..,5} {2,3,4,5} set(range(2,6)) { x | x ∈ {1,2,3,4,5,6}, x > 3 } {4,5,6} {x for x in {1,2,3,4,5,6} if x > 3} |{1,2,3,4}| 4 len({1,2,3,4})
The following are term operators on terms the meaning of which is a finite set of integers.
 term what it represents example of one possiblePython representation S1 ∪ S2 {z | z ∈ ℤ, z ∈ S1 or z ∈ S2} {1,2,3}.union({4,5}){1,2,3} | {4,5} S1 ∩ S2 {z | z ∈ ℤ, z ∈ S1 and z ∈ S2} {1,2,3}.intersection({2,3,5}){1,2,3} & {2,3,5} |S| the number of elements in S len({1,2,3})
While the terms below do not represent finite sets of integers, we introduce the following two set terms in order to reference them throughout the notes.
Definition: Let ℤ be the set of all integers, and let ℕ be the set of all non-negative integers (i.e., positive integers and 0).

| term | what it represents |
|---|---|
| ℕ | {0, 1, 2, ...} |
| ℤ | {..., -2, -1, 0, 1, 2, ...} |


### 2.5. Formulas: quantifiers over finite sets of integers

Definition: Suppose we define the following two Python functions that take predicates (or, more specifically, functions that represent predicates) as input.

def forall(S, P):
    for x in S:
        if not P(x):
            return False
    return True

def exists(S, P):
    for x in S:
        if P(x):
            return True
    return False

We could redefine the above using comprehensions. We will also introduce a subset() operation on sets.

def forall(X, P):
    S = {x for x in X if P(x)}
    return len(S) == len(X)

def exists(X, P):
    S = {x for x in X if P(x)}
    return len(S) > 0

def subset(X, Y):
    return forall(X, lambda x: x in Y)

Then we can introduce the following definitions and corresponding Python examples.
 formula what it represents example of one possiblePython representation 1 ∈ {1,2,3} true 1 in {1,2,3} 4 ∈ {1,2,3} false 4 in {1,2,3} ∀ x ∈ {1,2,3}, x > 0 and x < 4 true forall({1,2,3}, lambda x: x > 0 and x < 4) ∃ x ∈ {1,2,3}, x < 1 and x > 3 false exists({1,2,3}, lambda x: x < 1 or x > 3) ∀ x ∈ ∅, f true ∃ x ∈ ∅, f false
Notice that when we quantify over an empty set with a universal quantifier ∀, the formula is always true. When we quantify over an empty set with an existential quantifier, the formula is always false (since no element satisfying any formula could exist if no elements exist at all). We can see that the Python functions for these quantifiers are consistent with this interpretation.
Fact: Let X = {x1 , ..., xn} be a finite set and let P be a predicate that applies to a single integer argument. Then we have the following correspondences between quantifiers and logical operators:
 ∀ x ∈ X, P(x)    iff    P(x1) and P(x2) and P(x3) and ... and P(xn)

 ∃ x ∈ X, P(x)    iff    P(x1) or P(x2) or P(x3) or ... or P(xn)
Notice that if X is empty, the "base case" for ∀ must be true (since that is the identity of the and logical operator), while the "base case" for ∃ must be false (since that is the identity of the or logical operator).
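These base cases can be checked directly; here we use equivalent one-line variants of forall and exists built on Python's built-in all and any (which have the same empty-set behavior as the loop-based definitions above):

```python
def forall(S, P):           # equivalent to the loop-based version above
    return all(P(x) for x in S)

def exists(S, P):           # equivalent to the loop-based version above
    return any(P(x) for x in S)

assert forall(set(), lambda x: False)        # vacuously true over the empty set
assert not exists(set(), lambda x: True)     # no witness can exist in the empty set
```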
Exercise: Implement Python functions that correspond to formulas which can be used to define each of the following statements about a set X and a predicate P.
• All the elements of a set X satisfy the predicate P.

# We provide two equivalent implementations.

def all(X, P):          # note: this shadows Python's built-in all()
    return forall(X, P)

def all(X, P):
    S = {x for x in X if P(x)}
    return len(S) == len(X)

• None of the elements of a set X satisfy the predicate P.

# We provide two equivalent implementations.

def none(X, P):
    return forall(X, lambda x: not P(x))

def none(X, P):
    S = {x for x in X if P(x)}
    return len(S) == 0

• At most one of the elements of a set X satisfies the predicate P.

def atMostOne(X, P):
    S = {x for x in X if P(x)}
    return len(S) <= 1

• At least one of the elements of a set X satisfies the predicate P.

# We provide two equivalent implementations.

def atLeastOne(X, P):
    return exists(X, P)

def atLeastOne(X, P):
    S = {x for x in X if P(x)}
    return len(S) >= 1

Exercise: Use quantifiers to implement a Python function corresponding to the predicate p is prime for any integer p.

def prime(p):
    return p > 1 and forall(set(range(2, p)), lambda n: p % n != 0)
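A quick sanity check of this definition (forall is repeated so the fragment is self-contained):

```python
def forall(S, P):   # as defined above
    for x in S:
        if not P(x):
            return False
    return True

def prime(p):
    return p > 1 and forall(set(range(2, p)), lambda n: p % n != 0)

# The primes below 20:
assert [p for p in range(2, 20) if prime(p)] == [2, 3, 5, 7, 11, 13, 17, 19]
```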


### 2.6. Formulas: predicates dealing with finite sets of integers

Definition: The following are examples of formulas that contain relational operators dealing with finite sets of integers.
 formula what it represents example of one possiblePython representation 3 ∈ {1,2,3} true 3 in {1,2,3} {1,2} ⊂ {1,2,3} true subset({1,2}, {1,2,3}) {4,5} ⊂ {1,2,3} false subset({4,5}, {1,2,3})
Below are the general forms of formulas containing relational operators dealing with finite sets of integers.

| formula | what it represents |
|---|---|
| z ∈ S | true if z is an element of S; false otherwise |
| S1 ⊂ S2 | ∀ z ∈ S1, z ∈ S2 |
| S1 = S2 | S1 ⊂ S2 and S2 ⊂ S1 |

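Following the last row of the table, set equality can be implemented as mutual inclusion; the helper name setEq is our own choice for illustration:

```python
def subset(X, Y):          # as defined above, written with a generator expression
    return all(x in Y for x in X)

def setEq(S1, S2):
    # S1 = S2 iff S1 ⊂ S2 and S2 ⊂ S1.
    return subset(S1, S2) and subset(S2, S1)

assert setEq({1, 2, 3}, {3, 2, 1})
assert not setEq({1, 2}, {1, 2, 3})
```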

### 2.7. Terms: set products and binary relations

Definition: The product of two sets X and Y is denoted X × Y and is defined to be the set of ordered pairs (x,y) for every possible combination of x ∈ X and y ∈ Y.
Example:
 term what it represents example of one possiblePython representation {1,2} × {5,6,7} {(1,5),(1,6),(1,7),(2,5),(2,6),(2,7)} { (x,y) for x in {1,2} for y in {4,5,6,7} }
Definition: A set R is a relation between the sets X and Y if R ⊂ X × Y. We also say that a set R is a relation on a set X if R ⊂ X × X.
Example: Suppose we have the sets X = {a, b, c} and Y = {D, E, F}. Then one possible relation between X and Y is {(a, D), (c, E)}. One possible relation on X is {(a, a), (a, b), (a, c), (b, b), (c, a)}.

### 2.8. Formulas: predicates dealing with relations

There are several common properties that relations may possess.

| predicate | definition | example relation (on {'a','b','c'} and, where applicable, {'x','y','z'}) |
|---|---|---|
| X × Y is the set product of X and Y | X × Y = { (x,y) \| x ∈ X, y ∈ Y } | {('a','x'),('a','y'),('a','z'),('b','x'),('b','y'),('b','z'),('c','x'),('c','y'),('c','z')} |
| R is a relation between X and Y | R ⊂ X × Y | {('a','x'),('b','x'),('b','z'),('c','z')} |
| R is a function from X to Y; R is a (many-to-one) map from X to Y | R is a relation between X and Y and ∀ x ∈ X, there is at most one y ∈ Y s.t. (x,y) ∈ R | {('a','x'),('b','x'),('c','z')} |
| R is an injection from X to Y | R is a function from X to Y and ∀ y ∈ Y, there is at most one x ∈ X s.t. (x,y) ∈ R | {('a','x'),('b','y')} |
| R is a surjection from X to Y | R is a function from X to Y and ∀ y ∈ Y, there is at least one x ∈ X s.t. (x,y) ∈ R | {('a','x'),('c','y'),('d','z')} |
| R is a bijection between X and Y | R is an injection from X to Y and R is a surjection from X to Y | {('a','y'),('b','z'),('c','x')} |
| R is a permutation on X | R ⊂ X × X and R is a bijection between X and X | {('a','b'),('b','c'),('c','a')} |
| R is a reflexive relation on X | R ⊂ X × X and ∀ x ∈ X, (x,x) ∈ R | {('a','a'),('b','b'),('c','c')} |
| R is a symmetric relation on X | R ⊂ X × X and ∀ x ∈ X, ∀ y ∈ X, (x,y) ∈ R implies (y,x) ∈ R | {('a','b'),('b','a'),('c','c')} |
| R is a transitive relation on X | R ⊂ X × X and ∀ x ∈ X, ∀ y ∈ X, ∀ z ∈ X, ((x,y) ∈ R and (y,z) ∈ R) implies (x,z) ∈ R | {('a','b'),('b','c'),('a','c')} |
| R is an equivalence relation on X; R is a congruence relation on X | R ⊂ X × X and R is a reflexive relation on X and R is a symmetric relation on X and R is a transitive relation on X | {('a','a'),('b','b'),('a','b'),('b','a'),('c','c')} |

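Sketches of Python predicates for the reflexive, symmetric, and transitive properties, following the definitions in the table (the function names are our own):

```python
def isReflexive(X, R):
    return all((x, x) in R for x in X)

def isSymmetric(X, R):
    # X is unused here; kept only for a uniform signature.
    return all((y, x) in R for (x, y) in R)

def isTransitive(X, R):
    # For every chained pair (x,y), (y,z) in R, require (x,z) in R.
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def isEquivalence(X, R):
    return isReflexive(X, R) and isSymmetric(X, R) and isTransitive(X, R)

# The equivalence-relation example from the table:
X = {'a', 'b', 'c'}
R = {('a','a'), ('b','b'), ('a','b'), ('b','a'), ('c','c')}
assert isEquivalence(X, R)
```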
Exercise: Define the set of all even numbers between 0 and 100 (inclusive). There are at least two ways we can do this:

evens = { 2 * x for x in set(range(0,51)) }
evens = { x for x in set(range(0,101)) if x % 2 == 0 }

Exercise: Implement a Python function that computes the set product of two sets X and Y.

def product(X, Y):
    return { (x,y) for x in X for y in Y }

Exercise: Implement a Python function that takes a finite set of integers and builds the relation on that set corresponding to the relational operator ≤.

def leq(S):
    return { (x, y) for x in S for y in S if x <= y }

Exercise: Implement a Python function that determines whether a relation R is a relation over a set X.

# Using our definition of subset().
def relation(R, X):
    return subset(R, product(X, X))

# Using the built-in set implementation.
def relation(R, X):
    return R.issubset(product(X, X))

Exercise: One property of relations that is studied in other subject areas within computer science and mathematics is asymmetry. We say that R is an asymmetric relation on a set X if:
 ∀ x ∈ X, ∀ y ∈ X, (x,y) ∈ R implies ¬((y,x) ∈ R)
One example of an asymmetric relation is the "less than" relation on integers, usually represented using the < relational operator. How can we write a Python function that takes as its input a relation R and a set X and determines whether that relation is asymmetric? Recall that we can represent the implication logical operator using the Python operator <=.

def isAsymmetric(X, R):
    return relation(R, X) and forall(X, lambda a: forall(X, lambda b: ((a, b) in R) <= (not ((b, a) in R))))


### 2.9. Terms: set quotients and quotient maps

Given an equivalence relation on a set, we can partition that set into a collection of distinct subsets, called equivalence classes, such that all the elements of each subset are equivalent to one another.
Definition: For any set X and equivalence relation R on X, let the quotient set of X with respect to R, denoted X/R, be defined as:
 X/R    =    {{y | y ∈ X, (x,y) ∈ R} | x ∈ X}
Exercise: Implement a Python function that takes two inputs (a set X and an equivalence relation R on that set), and outputs the quotient set X/R.

def quotient(X, R):
    return {frozenset({y for y in X if (x, y) in R}) for x in X}

Below, we evaluate the above function on an example input.

>>> quotient({1,2,3,4}, {(1,1),(2,2),(3,3),(2,3),(3,2),(4,4)})

{frozenset({4}), frozenset({2, 3}), frozenset({1})}

Definition: For a set X and an equivalence relation R on X, the relation that relates each x ∈ X to its equivalence class in X under R is called the quotient map. The function is typically denoted using [ ... ]. That is, [x] is the equivalence class of x under R.
Exercise: Implement a Python function that takes two inputs (a set X and an equivalence relation R on that set), and outputs the quotient map taking each element x ∈ X to its corresponding equivalence class [x] ∈ X/R.

def quotientMap(X, R):
    return {(x, frozenset({y for y in X if (x, y) in R})) for x in X}

Exercise: Determine whether {(x,y) | x ∈ ℤ, y ∈ ℤ, (x + y) mod 2 = 0} is an equivalence relation.
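Although the relation is over all of ℤ, we can at least check the three defining properties on a finite sample (−5 through 5); this suggests, but does not prove, that the relation is an equivalence relation:

```python
# The parity relation (x + y) mod 2 = 0, restricted to a finite sample of ℤ.
X = set(range(-5, 6))
R = {(x, y) for x in X for y in X if (x + y) % 2 == 0}

assert all((x, x) in R for x in X)                                   # reflexive
assert all((y, x) in R for (x, y) in R)                              # symmetric
assert all((x, z) in R for (x, y) in R for (w, z) in R if y == w)    # transitive
```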
Example: Let X be a set of humans. Let R be the following relation R ⊂ X × X:
 R    =    { (x, y) | x ∈ X, y ∈ X, x is y or x is a relative of y }
Then R is an equivalence relation (we assume everyone is related to themselves, and that if two people are both related to the same person, then they are themselves related). Furthermore, the quotient set X/R is a separation of the humans in X into families of relatives. No one in any equivalence class (a.k.a., a family) in X/R is related to anyone in any other equivalence class, and everyone in each equivalence class in X/R is related to everyone else in that equivalence class. Thus, |X/R| is the number of distinct families of humans in the set X.
More generally, we can view the quotient set X/R as the separation of X into as many groups as possible such that no two relatives are separated into separate groups. We can illustrate this with a Python example. Suppose we have the following relation on the set {'Alice', 'Bob', 'Carl', 'Dan', 'Eve'}:

R = {
    ('Alice', 'Alice'), ('Bob', 'Bob'), ('Carl', 'Carl'), ('Dan', 'Dan'), ('Eve', 'Eve'),
    ('Alice', 'Carl'), ('Carl', 'Alice'), ('Dan', 'Eve'), ('Eve', 'Dan')
}

We can then compute the set of families:

families = quotient({'Alice', 'Bob', 'Carl', 'Dan', 'Eve'}, R)

A visualization might look as follows:
#graph({'Alice','Carl','Bob','Dan','Eve'}, {('Alice','Alice'),('Bob','Bob'),('Carl','Carl'),('Dan','Dan'),('Eve','Eve'),('Alice','Carl'),('Carl','Alice'),('Dan', 'Eve'), ('Eve', 'Dan')})

## 3. Modular Arithmetic

Modular arithmetic can be viewed as a variant of integer arithmetic in which we introduce a congruence (or equivalence) relation on the integers and redefine the integer term operators so that they are defined on these congruence (or equivalence) classes.

### 3.1. Terms: congruence classes in ℤ/mℤ, term operators, and relations

Definition: For any m ∈ ℤ, define:

 mℤ    =    {x ⋅ m | x ∈ ℤ}
Definition: For any k ∈ ℤ and m ∈ ℤ, we define the congruence class k + mℤ below:

 k + mℤ    =    {k + (x ⋅ m) | x ∈ ℤ}
The word congruence is a synonym for equivalence; in the next definition below, each congruence class k + mℤ is an equivalence class in a particular quotient set in which numbers are grouped by the remainder k that they have when divided by the chosen modulus m.
Exercise: Show that the relation R = {(x,y) | x ∈ ℤ, y ∈ ℤ, x mod 17 = y mod 17} is an equivalence relation.
Definition: For any given m ∈ ℤ, we define the set of all congruence classes modulo m:

 ℤ/mℤ    =    ℤ/{(x,y) | x ∈ ℤ, y ∈ ℤ, x mod m = y mod m}
In English, we can read the notation ℤ/mℤ as "ℤ modulo m" or simply "ℤ mod m".
Example: How do we determine whether 7 ∈ 2 + 5ℤ is a true formula? We can expand the notation 2 + 5ℤ into its definition:

 7 ∈ 2 + 5ℤ
 7 ∈ { 2 + 5 ⋅ z | z ∈ ℤ }

Thus, if 7 is in the set of elements of the form 2 + 5 ⋅ z, then we must be able to solve the following equation over the integers for z:

 7 = 2 + 5 ⋅ z
 5 = 5 ⋅ z
 1 = z

Since we can solve for z (namely, z = 1), it is true that 7 ∈ 2 + 5ℤ.
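In Python, membership in a congruence class k + mℤ amounts to checking a remainder:

```python
# 7 ∈ 2 + 5ℤ because 7 has remainder 2 when divided by 5.
assert 7 % 5 == 2
assert (7 - 2) % 5 == 0   # equivalently, 5 divides 7 − 2
```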
Informally and intuitively, we could think of the structure of the above set as a logical consequence of letting all multiples of m be equivalent to 0. That is, if 0 = m = 2m = ..., then 1 = m + 1 = 2m + 1 = ..., and so on.

| term | what it represents |
|---|---|
| z,  z mod m,  z + mℤ | {z + (a ⋅ m) \| a ∈ ℤ} |
| c1 + c2 | {(x + y) \| x ∈ c1, y ∈ c2} |
| c1 − c2 | {(x − y) \| x ∈ c1, y ∈ c2} |
| c1 ⋅ c2 | {(x ⋅ y) \| x ∈ c1, y ∈ c2} |
| c^z | c ⋅ ... ⋅ c (z instances of c) |
| c! | c ⋅ (c−1) ⋅ (c−2) ⋅ ... ⋅ 1 |


| formula | what it represents |
|---|---|
| c1 ≡ c2 | true only if c1 ⊂ c2 and c2 ⊂ c1 (i.e., set equality applied to the congruence classes c1 and c2); false otherwise |


### 3.2. Algebra of congruence classes

We use the familiar symbols +, -, ⋅, and 0, 1, 2, 3, 4, ... to represent operations on congruence classes. When these symbols are used to represent operations on integers, they have certain algebraic properties. This allows us, for example, to solve equations involving integers and variables, such as in the example below (in which we add the same integer to both sides, use associativity of + and commutativity of ⋅, and cancel 2 on both sides of the equation):
 2 ⋅ x − 3 = 1
 (2 ⋅ x − 3) + 3 = 1 + 3
 2 ⋅ x = 4
 2 ⋅ x = 2 ⋅ 2
 x ⋅ 2 = 2 ⋅ 2
 x = 2
Do the operations on congruence classes, represented by the operators +, -, and ⋅, also share the familiar algebraic properties of the corresponding operations on integers? In many cases they do, but in some cases these properties only apply under specific circumstances.
Example: Suppose we write the formula 3 + 4 ≡ 2, where 2, 3, and 4 denote congruence classes in ℤ/5ℤ. What is the meaning of this formula? First, note the following equivalence.
 { x + y | x ∈ ℤ, y ∈ ℤ} = {z | z ∈ ℤ }
Now, we expand the definitions of congruence classes and the operation + on congruence classes below.
 3 + 4
 ≡ (3 + 5ℤ) + (4 + 5ℤ)
 = {3 + a ⋅ 5 | a ∈ ℤ} + {4 + b ⋅ 5 | b ∈ ℤ}
 = {(x + y)   |   x ∈ {3 + a ⋅ 5 | a ∈ ℤ}, y ∈ {4 + b ⋅ 5 | b ∈ ℤ}}
 = {(3 + a ⋅ 5) + (4 + b ⋅ 5)   |   a ∈ ℤ, b ∈ ℤ}
 = {(3 + 4) + (a ⋅ 5) + (b ⋅ 5)   |   a ∈ ℤ, b ∈ ℤ}
 = {2 + 5 + (a ⋅ 5) + (b ⋅ 5)   |   a ∈ ℤ, b ∈ ℤ}
 = {2 + (1 + a + b) ⋅ 5   |   a ∈ ℤ, b ∈ ℤ}
 = {2 + c ⋅ 5 | c ∈ ℤ}
 = 2 + 5ℤ
 ≡ 2
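When each congruence class in ℤ/5ℤ is represented by its canonical representative in {0,...,4}, this derivation corresponds to adding and then reducing mod 5:

```python
m = 5
# 3 + 4 ≡ 2 (mod 5): add the representatives, then reduce modulo m.
assert (3 + 4) % m == 2
```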
Fact: The set ℤ/mℤ is closed under the operation represented by +.
Fact: It is the case that ℤ/mℤ = {0,...,m-1} where 0,...,m-1 are congruence classes, and thus, |ℤ/mℤ| = m.
Fact: The addition operation on congruence classes in ℤ/mℤ represented by + is commutative, associative, and has the additive identity 0 + mℤ (a.k.a., mℤ, or simply 0).
Fact: The multiplication operation on congruence classes in ℤ/mℤ represented by ⋅ is commutative, associative, and has the multiplicative identity 1 + mℤ (a.k.a., 1).

| property | definition |
|---|---|
| ℤ/mℤ is closed under + | ∀ x,y ∈ ℤ/mℤ,  x + y ∈ ℤ/mℤ |
| + is commutative on ℤ/mℤ | ∀ x,y ∈ ℤ/mℤ,  x + y ≡ y + x |
| + is associative on ℤ/mℤ | ∀ x,y,z ∈ ℤ/mℤ,  (x + y) + z ≡ x + (y + z) |
| + has a (left and right) identity 0 in ℤ/mℤ | ∀ x ∈ ℤ/mℤ,  0 + x ≡ x and x + 0 ≡ x |
| ℤ/mℤ has inverses with respect to + | ∀ x ∈ ℤ/mℤ,  (m − x) + x ≡ 0 |
| ℤ/mℤ is closed under ⋅ | ∀ x,y ∈ ℤ/mℤ,  x ⋅ y ∈ ℤ/mℤ |
| ⋅ is commutative on ℤ/mℤ | ∀ x,y ∈ ℤ/mℤ,  x ⋅ y ≡ y ⋅ x |
| ⋅ is associative on ℤ/mℤ | ∀ x,y,z ∈ ℤ/mℤ,  (x ⋅ y) ⋅ z ≡ x ⋅ (y ⋅ z) |
| ⋅ has a (left and right) identity 1 in ℤ/mℤ | ∀ x ∈ ℤ/mℤ,  1 ⋅ x ≡ x and x ⋅ 1 ≡ x |
| ⋅ distributes across + in ℤ/mℤ | ∀ x,y,z ∈ ℤ/mℤ,  x ⋅ (y + z) ≡ (x ⋅ y) + (x ⋅ z) |

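Since ℤ/mℤ is finite, the properties in the table can be verified exhaustively for any particular m; here is a sketch for m = 6, representing each class by its canonical representative in {0, ..., m−1}:

```python
m = 6
Zm = set(range(m))
add = lambda x, y: (x + y) % m   # addition of congruence classes
mul = lambda x, y: (x * y) % m   # multiplication of congruence classes

assert all(add(x, y) in Zm for x in Zm for y in Zm)            # closed under +
assert all(add(x, y) == add(y, x) for x in Zm for y in Zm)     # + commutative
assert all(add(add(x, y), z) == add(x, add(y, z))
           for x in Zm for y in Zm for z in Zm)                # + associative
assert all(add(0, x) == x for x in Zm)                         # identity 0
assert all(add((m - x) % m, x) == 0 for x in Zm)               # additive inverses
assert all(mul(1, x) == x for x in Zm)                         # identity 1
assert all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
           for x in Zm for y in Zm for z in Zm)                # distributivity
```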
In the rest of this subsection, we derive some familiar algebraic properties for congruence classes. We derive some of these properties from the properties of the divisibility predicate (i.e., for any x, y ∈ ℤ, x | y iff y/x ∈ ℤ). These properties will allow us to use algebra to solve equations involving congruence classes in ℤ/mℤ.
It is worth considering why we choose to work with the set of congruence classes ℤ/mℤ = {0 + mℤ, 1 + mℤ, 2 + mℤ, ..., (m-1) + mℤ} and operations over it rather than simply working with equations involving integer variables and the modulus operator. Modular arithmetic textbooks can be written (and such textbooks exist) in which the techniques covered in these notes are used to solve integer equations of the form f(x) mod m = g(x) mod m for some functions f and g. Some of the reasons for using the set of congruence classes ℤ/mℤ include:
• it is often possible to find the unique solution to an equation over ℤ/mℤ, while equations over ℤ involving the modulus operation may have infinitely many solutions;
• the set ℤ/mℤ is finite, so there is always a finite number of possible solutions to test, even if this is very inefficient, while equations over the integers involving modulus have an infinite range of possible solutions to test;
• the set ℤ/mℤ is a group and is a prototypical example of an algebraic structure, and gaining experience with algebraic structures is one of the purposes of this course, as algebraic structures are ubiquitous in computer science and its areas of application.
Fact: Given an equation involving congruence classes, we are allowed to add the same value to both sides. In other words, for any congruence classes a, b, c ∈ ℤ/mℤ, a ≡ b implies a + c ≡ b + c:

 a ≡ b (mod m)
⟹ a + c ≡ b + c (mod m)
To see that this is true, we can simply appeal to algebraic facts about integers:

 a = b
⟹ a + c = b + c
⟹ (a + c) mod m = (b + c) mod m
⟹ a + c ≡ b + c (mod m)

Thus, the two congruence classes contain the same elements, so they are equivalent.
Fact: For any congruence classes a, b, c ∈ ℤ/mℤ, a ≡ b implies a - c ≡ b - c. We can adjust the argument for + in the following way:

 a = b
⟹ a - c = b - c
⟹ (a - c) mod m = (b - c) mod m
⟹ a - c ≡ b - c (mod m)
We saw that we can add and subtract from both sides of an equation involving congruence classes. Can we also "divide" both sides by the same factor (or "cancel" that factor) in such an equation?
Example: Consider the following sequence of equations within ℤ/2ℤ:

 4 ≡ 6 (mod 2)
 2 ⋅ 2 ≡ 2 ⋅ 3 (mod 2)
 2 ≡ 3 (mod 2)
Clearly, 2 ≢ 3 (mod 2) since the left-hand side is even and the right-hand side is odd. Thus, cancelling 2 on both sides of the equation in the above case is not correct. On the other hand, we have the following:
 10 ≡ 6 (mod 2)
 2 ⋅ 5 ≡ 2 ⋅ 3 (mod 2)
 5 ≡ 3 (mod 2)
In the above case, cancelling 2 on both sides led to a true equation.
It seems that we cannot always "divide" by the same factor on both sides, but we can do so under certain conditions. In order to characterize at least some of the cases in which this is possible, we need a few preliminary facts.
Fact: For any a, m ∈ ℤ, a mod m = 0 iff we have that m | a. Thus, the following are all equivalent (i.e., if any one of them holds, they all hold):

 m | a
 a mod m = 0
 a ≡ 0 (mod m)
 0 ≡ a (mod m)
We can derive the fact that m | a iff a mod m = 0 as follows. If a mod m = 0 then by definition of mod we have:

 a - ⌊ a/m ⌋ ⋅ m = 0
⟹ a = ⌊ a/m ⌋ ⋅ m
⟹ a / m = ⌊ a/m ⌋
⟹ a / m ∈ ℤ
⟹ m | a
If m | a then by definition of m | a we have:

 m | a
⟹ a / m ∈ ℤ
⟹ a / m = ⌊ a/m ⌋
⟹ a = ⌊ a/m ⌋ ⋅ m
⟹ a - ⌊ a/m ⌋ ⋅ m = 0
⟹ a mod m = 0
Fact: For any a, b, c ∈ ℕ, if c | a then c | (a ⋅ b).
Because c | a, it must be that a/c ∈ ℤ. But then we have that:

 (a ⋅ b) / c = (a / c) ⋅ b

Since (a / c) ∈ ℤ and b ∈ ℤ, we have (a / c) ⋅ b ∈ ℤ, and thus (a ⋅ b) / c ∈ ℤ. Therefore, c | (a ⋅ b).
Fact: In ℤ/mℤ, multiplying by the 0 + mℤ congruence class yields the 0 + mℤ congruence class. That is, for any a, b, m ∈ ℕ, if a ≡ 0 (mod m) then a ⋅ b ≡ 0 (mod m).
We can show this as follows:
 a ≡ 0 (mod m)
⟹ m | a
⟹ m | (a ⋅ b)
⟹ a ⋅ b ≡ 0 (mod m)
Thus, 0 ∈ ℤ/mℤ behaves with respect to multiplication over ℤ/mℤ much the same way that 0 ∈ ℤ behaves with respect to multiplication over ℤ.
Example: For any a, b, m ∈ ℕ, it is not necessarily the case that just because a ⋅ b ≡ 0 (mod m), either a or b must be the congruence class 0 ∈ ℤ/mℤ. For example, let m = 6, a = 4, and b = 9. Then we have:

 4 ⋅ 9 ≡ 0 (mod 6)
 (2 ⋅ 2) ⋅ (3 ⋅ 3) ≡ 0 (mod 6)
 2 ⋅ (2 ⋅ 3) ⋅ 3 ≡ 0 (mod 6)
 2 ⋅ 6 ⋅ 3 ≡ 0 (mod 6)
However, we have that:

 6 ∤ 4, so 4 ≢ 0 (mod 6)
 6 ∤ 9, so 9 ≢ 0 (mod 6)
Thus, the congruence class 0 ∈ ℤ/mℤ does not always behave the same way that 0 ∈ ℤ behaves.
Fact (Euclid's lemma): For any a, b, p ∈ ℤ, if p is prime and p | (a ⋅ b), then it must be that p | a or p | b (or both).
Fact: For any prime p and any congruence classes a, b, c ∈ ℤ/pℤ, if c is not divisible by p then a ⋅ c ≡ b ⋅ c implies a ≡ b.
We can derive the above fact by using the following steps:
 a ⋅ c ≡ b ⋅ c (mod p)
⟹ (a ⋅ c) - (b ⋅ c) ≡ 0 (mod p)
⟹ ((a ⋅ c) - (b ⋅ c)) mod p = 0
⟹ ((a - b) ⋅ c) mod p = 0
⟹ p | ((a - b) ⋅ c)
By Euclid's lemma, since c is not divisible by p, it must be that a − b is divisible by p. Thus:
 p | (a - b)
⟹ (a - b) mod p = 0
⟹ a - b ≡ 0 (mod p)
⟹ a ≡ b (mod p)
Example: Solve the following equation for all possible congruence classes x ∈ ℤ/3ℤ:

 6 ⋅ x ≡ 0 (mod 3)

Since 6 ≡ 0 (mod 3), we can rewrite the equation as follows:

 0 ⋅ x ≡ 0 (mod 3)

Thus, any congruence class x ∈ {0 + 3ℤ, 1 + 3ℤ, 2 + 3ℤ} is a solution to the equation.
Example: Solve the following equation for all possible congruence classes x ∈ ℤ/5ℤ:

 2 ⋅ x ≡ 0 (mod 5)

We know that 2 ⋅ 0 ≡ 0 (mod 5), so we can rewrite the above by substituting the right-hand side of the equation:

 2 ⋅ x ≡ 2 ⋅ 0 (mod 5)

We can now cancel 2 on both sides of the equation using the cancellation law because 5 is prime and 2 ≢ 0 (mod 5):

 x ≡ 0 (mod 5)

Thus, the only solution is the single congruence class x = 0 + 5ℤ.
Example: Solve the following equation for all possible congruence classes x ∈ ℤ/11ℤ:

 3 ⋅ x + 5 ≡ 6 (mod 11)

We can begin by subtracting 5 from both sides:

 3 ⋅ x ≡ 1 (mod 11)

We can then see that 12 ≡ 1 (mod 11), so we can substitute 1 with 12 on the right-hand side:

 3 ⋅ x ≡ 12 (mod 11)

We can then rewrite 12 as 3 ⋅ 4:

 3 ⋅ x ≡ 3 ⋅ 4 (mod 11)

Since 11 is prime and 3 ≢ 0 (mod 11), we can cancel the 3 on both sides to solve the problem:

 x ≡ 4 (mod 11)

Thus, the only solution is the single congruence class x = 4 + 11ℤ.
Example: Let a ∈ ℤ be any integer. Solve the following equation for all possible congruence classes x ∈ ℤ/7ℤ:

 a + 3 ⋅ x ≡ 6 - 6 ⋅ a (mod 7)

We can begin by adding 6 ⋅ a to both sides:

 (a + 6 ⋅ a) + 3 ⋅ x ≡ 6 (mod 7)

Now we can add a + 6 ⋅ a to obtain 7 ⋅ a:

 7 ⋅ a + 3 ⋅ x ≡ 6 (mod 7)

We know that 7 ≡ 0 (mod 7), so for any a ∈ ℤ, 7 ⋅ a ≡ 0 (mod 7). Thus, we can substitute the term 7 ⋅ a with 0:

 0 + 3 ⋅ x ≡ 6 (mod 7)
 3 ⋅ x ≡ 6 (mod 7)
 3 ⋅ x ≡ 3 ⋅ 2 (mod 7)

Since 7 is prime and 3 ≢ 0 (mod 7), we can cancel 3 on both sides:

 x ≡ 2 (mod 7)

Thus, the only solution is the single congruence class x = 2 + 7ℤ.
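Because ℤ/mℤ is finite, any of the equations in these examples can also be solved by brute force, testing every congruence class (as noted earlier, this is always possible, even if inefficient). A minimal sketch (the function name solve is our own):

```python
def solve(f, g, m):
    """Return all x in {0, ..., m-1} with f(x) ≡ g(x) (mod m)."""
    return [x for x in range(m) if f(x) % m == g(x) % m]

# 3 ⋅ x + 5 ≡ 6 (mod 11) has the single solution x = 4 + 11Z.
assert solve(lambda x: 3 * x + 5, lambda x: 6, 11) == [4]

# 6 ⋅ x ≡ 0 (mod 3) is solved by every class in Z/3Z.
assert solve(lambda x: 6 * x, lambda x: 0, 3) == [0, 1, 2]
```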
Exercise: Solve the following equation for all possible congruence classes x ∈ ℤ/13ℤ:

 4 ⋅ x − 2 ≡ 10 (mod 13)

Exercise: Let a ∈ ℤ be any integer. Solve the following equation for all possible congruence classes x ∈ ℤ/19ℤ. Hint: notice that 17 + 19 = 36.

 6 ⋅ x − 11 ≡ 6 (mod 19)

### [link]  3.3. Multiplication by a congruence class as a permutation

We have seen that in certain situations, it is possible to cancel on both sides of an equation involving congruence classes. While Euclid's lemma made this possible, we might be interested in finding other ways to understand why cancelling is possible in this particular situation. In fact, the alternative explanation is useful in its own right because it can be applied to the practical problem of generating random numbers.
Let us consider the situations in which we can cancel on both sides of an equation involving integers. Suppose a, b, c ∈ ℤ, and:

 a ⋅ c = b ⋅ c
It is possible to cancel in the above equation exactly when the operation of multiplication by c is invertible. In particular, if c is 0, then a ⋅ c = 0, and all information about a is lost (likewise for b ⋅ c = 0). So, if c = 0, the operation of multiplication by c is not invertible (i.e., multiple inputs map to the same output, namely 0, so multiplication by c = 0 is not a bijection), and it is not possible to cancel c on both sides. In all other situations where c ≠ 0, the operation is invertible (we can simply perform integer division by c on a ⋅ c and b ⋅ c). This raises a natural question: does the ability to cancel a congruence class on both sides of a congruence class equation likewise imply that multiplication by that congruence class is an invertible operation? The answer is "yes".
For a prime p, multiplication by a congruence class in ℤ/pℤ corresponds to an invertible relation, also known as a bijection or a permutation.
Fact: For any prime p ∈ ℕ and any a ∈ {1,...,p-1}, the following relation R is a permutation (a bijection from {0, 1,...,p-1} to ℤ/pℤ):

 R = { (0, (0 ⋅ a) mod p), (1, (1 ⋅ a) mod p), (2, (2 ⋅ a) mod p), ..., (p-1, ((p-1) ⋅ a) mod p) }
   = { (i, (i ⋅ a) mod p) | i ∈ {0,...,p-1} }
Recall that R is a permutation if R is a bijection. In order to be a bijection, R must be both an injection and a surjection.
To show that R is an injection, suppose that it is not. We will derive a contradiction from this assumption, which will tell us that the assumption must be false.
If it is not injective, then there exist distinct i ∈ {0,...,p-1} and j ∈ {0,...,p-1}, where without loss of generality j < i, such that:

 i ≠ j
 (i ⋅ a) mod p = (j ⋅ a) mod p
But the above implies the following:

 (i ⋅ a) mod p = (j ⋅ a) mod p
⟹ ((i ⋅ a) - (j ⋅ a)) mod p = 0
⟹ ((i - j) ⋅ a) mod p = 0
⟹ p | ((i - j) ⋅ a)
By Euclid's lemma, the above implies that p must divide either (i − j) or a. But we also know that:
• because 1 ≤ a < p, p does not divide a;
• because 0 < i - j < p, p cannot divide (i − j).
Alternatively, notice that in (i ⋅ a) mod p = (j ⋅ a) mod p, we can cancel a on both sides (since p is prime and p does not divide a), which yields i ≡ j (mod p) and contradicts our initial assumption that i ≠ j.
Since assuming that distinct i and j map to the same element when they are multiplied by a leads to a contradiction, it must be that this is not possible. Thus, no two distinct i and j map to the same result, so R is an injection from {0,...,p-1} to ℤ/pℤ, and we have that:

 |{0,...,p-1}| = |ℤ/pℤ| = p

Thus, since R maps the p elements of its domain to p distinct elements, and ℤ/pℤ has exactly p elements, R must map to every element of ℤ/pℤ by the pigeonhole principle, so it is also a surjection.
Since R is both an injection and a surjection from {0,...,p-1} to ℤ/pℤ, it must be a bijection, and thus a permutation.
Example: Consider 2 ∈ ℤ/5ℤ. We can write out the results of multiplying all the congruence classes in ℤ/5ℤ by the congruence class 2:

 2 ⋅ 0 ≡ 0 (mod 5)
 2 ⋅ 1 ≡ 2 (mod 5)
 2 ⋅ 2 ≡ 4 (mod 5)
 2 ⋅ 3 ≡ 1 (mod 5)
 2 ⋅ 4 ≡ 3 (mod 5)
Notice that each congruence class in ℤ/5ℤ appears exactly once as a result.
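The permutation property is easy to check computationally. The sketch below reproduces the example for 2 ∈ ℤ/5ℤ and contrasts it with a composite modulus, where multiplication by a factor of the modulus collapses distinct classes:

```python
# Multiplying every class in Z/5Z by 2 permutes the classes.
p, a = 5, 2
assert sorted((i * a) % p for i in range(p)) == list(range(p))

# Contrast: multiplication by 2 in Z/6Z is not a permutation,
# since distinct classes collapse to the same result.
m, b = 6, 2
assert sorted((i * b) % m for i in range(m)) != list(range(m))
```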

### [link]  3.4. Generating random numbers

Suppose we wish to automatically generate a sequence of "random" numbers using an algorithm. Before we can implement an algorithm and determine whether it solves our problem, we must first determine what constitutes an acceptable "random" sequence.
Example: Suppose we want to find a way to generate a "random" sequence v of integers. Assume we have only one requirement.
Requirement 1: The sequence v consists of m distinct integers between 0 and m-1, where vi is the ith element in the sequence.
In this case, a relation R ⊂ ℕ × ℤ/mℤ that is a permutation would be sufficient. One such relation is:

 v0 = 0 mod m
 vi = (vi-1 + 1) mod m
 R0 = {(i, vi) | i ∈ {0,...,m-1}}
Notice that the second term in (x, x mod m) is in this case the congruence class modulo m that corresponds to x. The relation R0 is indeed a permutation, but it does not satisfy our intuitive notion of a random sequence because it simply counts from 0 to m − 1, so we impose another requirement.
Requirement 2: The sequence v must not be the trivial sequence (0,...,m-1).
Suppose we propose the following relation:

 v0 = 0
 vi = (vi-1 + 2) mod m
 R1 = {(i, vi) | i ∈ {0,...,m-1}}

Notice that we can redefine R1 above more concisely:

 R1 = {(i, (0 + 2 ⋅ i) mod m) | i ∈ {0,...,m-1}}

Does R1 always satisfy both requirements? Suppose that m is even. Then there exists j ∈ {0,...,m-1} such that 2 ⋅ j = m. But this means that 2 ⋅ j ≡ 0 (mod m), so 2 ⋅ (j+1) ≡ 2 ⋅ j + 2 ⋅ 1 ≡ 2 ⋅ 1 ≡ 2, and so on: the sequence repeats before covering all of {0,...,m-1}. This means that R1 is not injective, so the first requirement is not met when m is even. Suppose we define R2 to be a variant of R1 parameterized by some b ∈ {0,...,m-1}:

 R2 = {(i, (0 + b ⋅ i) mod m) | i ∈ {0,...,m-1}}
What conditions can we impose on b and m so that they satisfy both requirements?
After examining the permutation that we obtain by multiplying all the congruence classes in some ℤ/pℤ by a particular a ∈ ℤ/pℤ, we might wonder whether we can use this fact to implement a random number generator. One immediate benefit is that this approach would satisfy several conditions that we might associate with a "good" algorithm for generating random numbers:
• the "state" of the algorithm is easy to store: it consists of a single congruence class in ℤ/pℤ, which can be represented using an integer;
• it is possible to compute the ith random number in the sequence efficiently (i.e., with a single multiplication followed by a single modulus operation);
• the sequence that is generated will contain exactly one instance of all the numbers in the chosen range {0,...,p-1};
• the sequence that is generated can, at least in some cases, be a non-trivial sequence that might appear "random".
Fact: If m is prime and b ∈ {2,...,m-1}, then R2 satisfies both requirements.
We know this is true because in this case, R2 is a permutation, so it satisfies Requirement 1. Furthermore, the element v1 = b ≥ 2, so v is never the trivial sequence (0,...,m-1). Thus, Requirement 2 is satisfied.
Algorithm: The following is one possible implementation of a simple random number generation algorithm.
1. inputs: upper bound (prime) p ∈ ℕ, seed a ∈ {1,...,p-1}, index i ∈ {0,...,p-1}
1. return (a ⋅ i) mod p
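A minimal Python sketch of this algorithm (the function name is our own):

```python
def random_number(p, a, i):
    """ith element of the sequence determined by the prime upper
    bound p and the seed a, following the algorithm above."""
    return (a * i) % p

# With p = 11 and seed a = 7, indices 0..10 cover {0, ..., 10}
# exactly once, but not in the trivial order (0, 1, ..., 10).
sequence = [random_number(11, 7, i) for i in range(11)]
assert sorted(sequence) == list(range(11))
assert sequence != list(range(11))
```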
Exercise: What are some drawbacks (or unresolved issues) with building random sequences by choosing a prime m and some a ∈ {2,...,m-1}?

### [link]  3.5. Greatest common divisor and related facts

It is actually possible to generalize Euclid's lemma so that it does not rely on prime numbers existing at all. In order to do so, however, we must first introduce concepts that make it possible to reason about a particular relationship between numbers that is similar to the property of primality, but is less restrictive.
Definition: For any two x, y ∈ ℤ, we define the greatest common divisor, denoted gcd(x,y), as the greatest integer z ∈ ℤ such that z | x and z | y. Equivalently, we can define it as the maximum of a set:

 gcd(x,y) = max{z | z ∈ ℤ, z | x, z | y}

We can also define it recursively (note that z | 0 for all z ∈ ℤ because 0/z ∈ ℤ):

 gcd(x,0) = x
 gcd(x,y) = gcd(y, x mod y)
To see why the recursive definition of gcd works, consider two cases. If x < y, then x mod y = x, so the two inputs are simply reversed. This ensures that the first input x is eventually larger than the second input y. If x ≥ y and they share a greatest common divisor a, then we have for n = ⌊ x/y ⌋ that:

 y = y' ⋅ a
 x = x' ⋅ a
 x mod y = x - (n ⋅ y)
         = (x' ⋅ a) - (n ⋅ y)
         = (x' ⋅ a) - ((n ⋅ y') ⋅ a)
         = (x' - n ⋅ y') ⋅ a

Notice that (x' - n ⋅ y') ⋅ a < x' ⋅ a, but that the new smaller value is still a multiple of a, so the greatest common divisor of this value and y is still a.
Example: Consider the numbers 8 and 9. The factors of 8 are 1, 2, 4, and 8, while the factors of 9 are 1, 3, and 9. The maximum of the numbers in the intersection {1,2,4,8} ∩ {1,3,9} = {1} is 1, so we have that gcd(8, 9) = 1.
Example: We can implement the inefficient algorithm for the greatest common divisor using Python in the following way (note that the range must start at 1 to avoid division by zero, and must include min(x,y) itself):

def gcd(x, y):
    return max({z for z in range(1, min(x, y) + 1) if x % z == 0 and y % z == 0})
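The recursive definition above translates directly into the familiar, much more efficient Euclidean algorithm; a sketch (named gcd_euclid here to distinguish it from the inefficient version):

```python
def gcd_euclid(x, y):
    # gcd(x, 0) = x; otherwise recurse on (y, x mod y).  When x < y,
    # x mod y = x, so the first recursive call swaps the arguments.
    return x if y == 0 else gcd_euclid(y, x % y)

assert gcd_euclid(8, 9) == 1
assert gcd_euclid(18, 42) == 6
```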

Exercise: Consider the following relation:
 { (x, y) | gcd(x,y) ≠ 1 }
Is this an equivalence relation?
Fact: For any x ∈ ℤ and y ∈ ℤ, x | y iff gcd(x,y) = x.
Definition: For any x ∈ ℤ and y ∈ ℤ, x and y are said to be relatively prime (equivalently, coprime) iff gcd(x,y) = 1.
Fact (Euclid's lemma generalization): For any a, b, c ∈ ℕ, if a | (b ⋅ c) and a and b are relatively prime, then it must be that a | c.
Fact: Suppose m, a ∈ ℕ and x, y ∈ ℤ/mℤ where gcd(a, m) = 1 (i.e., a and m are coprime), and suppose we have that:

 x ≡ y (mod m)

Then it must be that:

 a ⋅ x ≡ a ⋅ y (mod m)

Notice that we can prove the above by contradiction. If we instead suppose that a ⋅ x ≢ a ⋅ y (mod m), then m does not divide a ⋅ (x − y). But x ≡ y (mod m) means that m | (x − y), and therefore m | (a ⋅ (x − y)), which is a contradiction. (The converse direction, cancelling a on both sides of a ⋅ x ≡ a ⋅ y, is where the generalization of Euclid's lemma and the coprimality of a and m are needed.)
Exercise: Solve the following problems using the algebraic facts you know about the gcd operation.
• Find gcd(18,42).
• Find gcd(21000, 2100).
• For a positive even integer a ∈ ℤ, find gcd(a/2, a - 1).
• Suppose that for some a ∈ ℤ/mℤ, the set {i ⋅ a mod m | i ∈ {1,...,m-1}} contains every number in the set {1,...,m-1}. What is gcd(a, m)?
Exercise: Solve the following equation for all possible congruence classes x ∈ ℤ/16ℤ:

 9 ⋅ x + 2 ≡ 4 (mod 16)

Exercise: Solve the following equation for all possible congruence classes x ∈ ℤ/15ℤ:

 30 ⋅ x ≡ 14 (mod 15)
Fact: For any a ∈ ℕ and m ∈ ℕ, if gcd(a,m) = 1, then {(i, (i ⋅ a) mod m) | i ∈ {0,...,m-1}} is a permutation.
The proof is the same as for the earlier fact in which the modulus p was required to be prime: the primality of p was never used in isolation; only the coprimality of p and a was required, so every use of Euclid's lemma can be replaced with its generalization.
Using the generalization of Euclid's lemma, it is now possible to address the drawback we observed in our initial random number generating algorithm. We can now accept any upper bound m, not just a prime upper bound, and there is no need for either the algorithm or the user to find a prime p before generating random numbers. However, we have a new problem: how do we obtain a non-trivial coprime for any given m?
Fact: For any m ∈ ℤ where m ≥ 2, gcd(m, m+1) = 1.
We can prove this fact by contradiction. Suppose there exists a common factor z > 1 of m and m+1 (in other words, suppose gcd(m,m+1) > 1). Then for some a, b ∈ ℤ we have that:

 z ⋅ a = m
 z ⋅ b = m + 1
⟹ (z ⋅ b) - (z ⋅ a) = m + 1 − m
⟹ z ⋅ (b − a) = 1
⟹ b − a = 1 / z

If z > 1 then 1/z ∉ ℤ, so (b − a) ∉ ℤ. Since b − a ∈ ℤ, this is a contradiction, so it must be that gcd(m, m+1) = 1.
Fact: Suppose that m ∈ ℕ is an odd positive integer. Then for any k ∈ ℕ, gcd(m, 2^k) = 1. This is because if m had any factors of 2, it would be even.
Fact: Suppose that m ∈ ℕ is of the form 2^k ⋅ m' for some odd m' and some k ≥ 1. Then m' ⋅ 2 and m have exactly the same prime factors, which means (m' ⋅ 2) − 1 and m share no factors, so gcd(m, (m' ⋅ 2) − 1) = 1.
Algorithm: The following algorithm uses this fact to generate a new coprime. In the worst case, it runs in a linear amount of time in the length of the bit representation of m, and it may in some cases return m − 1 as a result. Note that the operations below (e.g., multiplication ⋅ and subtraction −) are on congruence classes in ℤ/mℤ and not on integers.
1. inputs: positive integer m
1. p := any number in {3,...,m-1}
2. while p − 1 and m are not coprime
1. p := p ⋅ gcd(p − 1, m)
3. return p − 1
Suppose that during the ith iteration of the algorithm, the variable has the value pi. This algorithm works by "moving" the factors shared by m and that iteration's quantity pi − 1 into pi+1, thus ensuring that m and pi+1 in the subsequent iteration do share that factor (and thus, by this fact, that m and pi+1 − 1 in the next iteration do not share it).
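A sketch of this algorithm in Python (the function name and the specific starting values are our own; the starting values below were chosen so that the loop terminates after at most a few iterations):

```python
from math import gcd

def find_coprime(m, p):
    """Follow the algorithm above, starting from p in {3, ..., m-1};
    all arithmetic on p is modulo m.  Returns a value coprime with m
    (possibly m - 1)."""
    while gcd((p - 1) % m, m) != 1:
        # Move the shared factor gcd(p - 1, m) into p.
        p = (p * gcd((p - 1) % m, m)) % m
    return (p - 1) % m

# A few composite upper bounds with hand-picked starting points.
for m, p in [(9, 4), (10, 5), (12, 4), (15, 6), (16, 3)]:
    c = find_coprime(m, p)
    assert gcd(c, m) == 1 and c >= 1
```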
Fact: Suppose that we have some m ∈ ℕ, and that we choose some b ∈ {2,...,m-1} such that b > m/2. Then it is guaranteed that:

 gcd(m, b) < b

To see why, consider that if gcd(m, b) = b, this would mean that there exists some k ≥ 2 such that b ⋅ k = m, and this would mean that:

 b ⋅ 2 ≤ m
 b ≤ m/2

This contradicts our assumption that b > m/2, so it must be that gcd(m, b) < b. We can then further conclude that:

 b / gcd(m, b) > 1
Thus, if gcd(m, (b / gcd(m, b))) = 1, this provides a way to find a number that is greater than 1 and coprime with m. However, this is not guaranteed to work every time because it may still be that gcd(m, (b / gcd(m, b))) > 1. Under those conditions, the options would be to try a different b, or to use a different technique.
At this point, we can define an improved random number generation algorithm that works for any upper bound.
Algorithm: The following is another variant of a simple random number generation algorithm.
1. inputs: upper bound m ∈ ℕ, index i ∈ {0,...,m-1}
1. a := number in {2,...,m-1} s.t. a and m are coprime (always the same a for a given m)
2. return (a ⋅ i) mod m
This algorithm has a more subtle flaw: poor choices of a (e.g., very small values such as 2) result in a very predictable "random" sequence. It is preferable to choose an a that is coprime with the upper bound m, and that falls somewhere between the middle and the upper quarter of the range {0,...,m-1} (i.e., between 0.5 ⋅ m and 0.75 ⋅ m).
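A sketch that incorporates this preference (the function name and the strategy of scanning upward from 0.5 ⋅ m for a coprime are our own choices):

```python
from math import gcd

def random_number(m, i):
    """ith element of a permutation of {0, ..., m-1}, using a
    multiplier coprime with m taken from the middle of the range."""
    a = next(c for c in range(m // 2, m) if gcd(c, m) == 1)
    return (a * i) % m

# For m = 12 the multiplier found is 7, and the sequence is a
# permutation of {0, ..., 11}.
assert sorted(random_number(12, i) for i in range(12)) == list(range(12))
```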
Algorithm: In this variant, the algorithm attempts to find a coprime that is as close as possible to (4/7) ⋅ m. The value 4/7 is chosen in an ad hoc manner in this example. Other values in the range between 1/2 and 3/4 might also produce "nice"-looking results.
1. inputs: upper bound m ∈ ℕ, index i ∈ {0,...,m-1}
1. b := number in {2,...,m-1} s.t. b and m are coprime
2. for possible powers k in the range 1 to the bit length of m
1. a := the power b^k of b that is as close as possible to (4/7) ⋅ m
3. return (a ⋅ i) mod m
However, the above algorithm is not ideal, and common random number generators found in standard libraries (such as the linear congruential generator) use a slightly different fact about permutations that results in sequences that appear yet more "random". So far, we have learned enough to build a simplified version of a linear congruential generator in which the "multiplier" is one modulo the modulus (it happens to be a very simple extension of our existing permutation-based random number generator).
Fact: For any m ∈ ℕ and any s ∈ ℤ/mℤ, the relation {(i, i + s) | i ∈ ℤ/mℤ} is a permutation. Note that this permutation merely "shifts" all the congruence classes up by s (wrapping around through m ≡ 0 any values that would exceed m − 1).
Algorithm (simplified linear congruential generator): Below is a simplified version of a linear congruential generator (in which the "multiplier" is one modulo the modulus).
1. inputs: upper bound m ∈ ℕ, index i ∈ {0,...,m-1}
1. a := number in {2,...,m-1} s.t. a and m are coprime (with the same additional preferences found in this algorithm)
2. s := number in {1,...,m-1} (ideally, this number is different for each input m)
3. return (a ⋅ i + s) mod m
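A sketch of this simplified generator (names are our own; a and s are taken as explicit parameters rather than being derived from m):

```python
from math import gcd

def simple_lcg(m, i, a, s):
    """Simplified linear congruential generator: multiply the index
    by a multiplier a coprime with m, then shift by s."""
    assert gcd(a, m) == 1
    return (a * i + s) % m

# With m = 10, a = 7, s = 3 the output is still a permutation of
# {0, ..., 9}, merely "shifted" relative to (a * i) mod m.
seq = [simple_lcg(10, i, 7, 3) for i in range(10)]
assert sorted(seq) == list(range(10))
```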
The fully generalized linear congruential generator has a few drawbacks if we want to construct an implementation that satisfies our desired criteria (i.e., full coverage of the domain ℤ/mℤ, no repetition, and the ability to compute the ith random number in the sequence with a small, constant number of arithmetic operations). In particular, we would need to find an additional "multiplier" k ∈ {2,...,m − 1} such that k − 1 shares all prime factors and factors of 4 with m (this is difficult to guarantee without the prime factorization of m).
Algorithm (linear congruential generator): Below is the implementation of a linear congruential generator. Note that it makes a recursive call to itself.
1. inputs to LCG: upper bound m ∈ ℕ, index i ∈ {0,...,m-1}
1. a := number in {2,...,m-1} s.t. a and m are coprime
2. s := number in {1,...,m-1} s.t. s and m are coprime
3. k := number in {1,...,m-1} s.t. k − 1 shares all prime factors and factors of 4 with m
4. if i = 0 then return s
5. else return (k ⋅ LCG(m, i − 1) + a) mod m
Under circumstances in which the modulus satisfies predetermined criteria (e.g., it is of the form m = 2^t for some t ∈ ℕ), it is easier to obtain an appropriate k. However, to determine the ith number in the sequence we would also need to either (1) maintain a counter in memory to keep track of the index of the current random number in the sequence, (2) compute a fairly large sum, or (3) perform an iterative loop or recursive chain of calls (as in the above implementation).
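A sketch of the recursive definition (parameters are explicit arguments here; the example values satisfy the stated conditions for m = 16 = 2^4, with k − 1 = 4 sharing both the prime factor 2 and the factor 4 with m):

```python
def lcg(m, i, a, s, k):
    """Linear congruential generator following the recursive
    definition above: LCG(m, 0) = s and
    LCG(m, i) = (k * LCG(m, i - 1) + a) mod m."""
    if i == 0:
        return s
    return (k * lcg(m, i - 1, a, s, k) + a) % m

# m = 16, a = 7, s = 3, k = 5: the sequence has full period 16.
seq = [lcg(16, i, 7, 3, 5) for i in range(16)]
assert sorted(seq) == list(range(16))
```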

### [link]  3.6. Generating prime numbers

Many applications require the generation of new primes. We have already seen a simple example in which generating new random sequences required prime numbers. Another important class of applications with this requirement are cryptographic schemes and protocols. In this section, we consider the problem of generating prime numbers, and in particular, random prime numbers in a particular range.
Algorithm: There exists a simple algorithm that is guaranteed to generate a new prime number distinct from any of its inputs, but it is not efficient.
1. inputs: set of primes {p1, ..., pn}
1. N := p1 ⋅ ... ⋅ pn + 1
2. F := prime factors of N
3. return any element in F
The above algorithm must return a new prime distinct from any of the primes p1, ..., pn. To see why, consider the following:

 P = p1 ⋅ ... ⋅ pn
 gcd(P, P + 1) = 1
There are two possibilities: P+1 is prime, or P+1 is not prime.
• If P+1 is prime, then P+1 > pi for all i ∈ {1,...,n}, so P+1 is a new prime.
• If P+1 is not prime, it cannot share any factors with P since gcd(P, P + 1) = 1, so no factors of P+1 are in the set {p1 ,... , pn}. But it must have factors, so any of these factors will be different from the primes in the input set {p1 ,... , pn}.
Thus, the algorithm is a guaranteed method for generating new primes. It also constitutes a proof that there are infinitely many primes. Unfortunately, this algorithm is impractical because the new primes produced by it grow exponentially as the set of primes {p1 ,... , pn} is extended with new primes returned by the algorithm.
In practice, most algorithms that need to generate large primes for commercial applications simply choose from a range of numbers (e.g., at random) and filter out non-primes using some efficient algorithm that does not provide an absolute guarantee that the numbers that remain are all prime. As long as it is not too likely that the generated number is not a prime, this may be sufficient.
Example: Suppose we want to generate a d-digit prime number (in decimal representation). The prime number theorem states that for a given N, the number of primes in the range {2,...,N} is approximately N/ln(N). Since the numbers with exactly d digits are those in the range {10^(d-1), ..., 10^d - 1}, we can roughly estimate the number of primes with d-digit decimal representations using the following formula:

 (10^d / ln(10^d)) - (10^(d-1) / ln(10^(d-1)))

For d = 8, this value is about 4,780,406, so we can roughly say that the chances that an 8-digit number chosen at random (here we are ignoring the details of what distribution is used) is prime are about:

 4,780,406 / (10^8 - 10^7) ≈ 5.3/100
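The estimate can be computed directly (the function name is our own):

```python
from math import log

def estimated_d_digit_primes(d):
    """Rough count of primes with exactly d decimal digits, using
    the prime number theorem approximation N / ln(N)."""
    hi, lo = 10**d, 10**(d - 1)
    return hi / log(hi) - lo / log(lo)

# For d = 8 the estimate is roughly 4.8 million, about 5% of the
# 9 * 10**7 eight-digit numbers.
chance = estimated_d_digit_primes(8) / (10**8 - 10**7)
assert 0.05 < chance < 0.06
```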
We can use our ability to generate random numbers in a specific range to build a generic algorithm template for generating prime numbers (without specifying exactly how we check that each candidate number we consider is prime).
Algorithm: Suppose we defined the following algorithm for generating a prime with a d-digit representation.
1. inputs: d
1. do
1. n := a random number from {10^(d-1), ..., 10^d - 1}
while n is not prime
2. return n
Assuming we were choosing numbers "well" with respect to their distribution (we are being imprecise here), we could optimistically hope that for d = 8, the above algorithm would only need to check for primality about 20 times (since roughly 1 out of every 20 numbers it tries should be a prime).
It remains to define an algorithm for checking whether an arbitrary input m ∈ ℕ is prime. We could check every number k between 2 and ⌊√m⌋ to see if it is a factor of m. However, ⌊√m⌋ still grows exponentially in the representation size of m. For example, for an n-bit input, an integer m in {0,...,2^n - 1} which must have a representation size of at least n bits, we have the following exponential running time:

 √m = √(2^n) = 2^(n/2) = (2^(1/2))^n ≈ 1.42^n
If we only consider primes and not any of their multiples (i.e., we apply the Sieve of Eratosthenes to the set {2,...,⌊ √(m) ⌋}), we can decrease the number of times we check the divisibility of m. However, we would need to do a lot of extra work to filter out the multiples of primes. Modern algorithms such as ECPP run in polynomial time, but in practice it is currently difficult to implement a version of these algorithms that runs quickly enough for certain applications (or doesn't consume too much power, such as when it runs on mobile devices).
Algorithm: Given the above considerations, we introduce a modified algorithm.
1. inputs: d
1. do
1. n := a random number from {10^(d-1), ..., 10^d - 1}
while n is not probably prime
2. return n
It remains to define a subroutine for checking whether a number is probably prime (for some appropriate definition of "probably") that is very efficient.

### [link]  3.7. Detecting probable prime numbers

In this subsection, we consider the problem of defining a very efficient algorithm to check whether a positive integer m ∈ ℕ is prime. In fact, the algorithms we consider will be detectors of some, but not all, composite numbers.
Fact: For any n ∈ ℕ, n is composite iff n > 1 and it is not the case that n is prime.
That is, the algorithms we consider recognize prime numbers but with false positives. They only guarantee that there are no false negatives (i.e., if the algorithm outputs that its input is composite, then it is indeed composite; otherwise, the number may or may not be prime, and we call it probably prime because we were not able to detect that it is composite). First, consider how an algorithm for checking primality that never has a "false" output behaves:
| algorithm input | algorithm output | meaning | description |
| --- | --- | --- | --- |
| actually a composite number (this is not known at time of input) | composite | the input is composite | true negative |
| actually a prime number (this is not known at time of input) | prime | the input is prime | true positive |
Compare the above table to the following table describing three possible conditions (and one forbidden condition) for an algorithm that detects probable primes.
| algorithm input | algorithm output | meaning | description |
| --- | --- | --- | --- |
| actually a composite number (this is not known at time of input) | composite | the input is definitely composite | true negative |
| actually a composite number (this is not known at time of input) | probably prime | the input is either composite or prime | false positive |
| actually a prime number (this is not known at time of input) | probably prime | the input is either composite or prime | true positive |
| actually a prime number (this is not known at time of input) | composite | impossible | false negative (we will not consider algorithms that return such outputs) |
Below is a comparison of the outputs of four possible probable prime algorithms on inputs in the range {2,...,10} ⊂ ℕ.
| input number | perfect algorithm | perfect probable prime algorithm | less accurate probable prime algorithm | very inaccurate probable prime algorithm |
| --- | --- | --- | --- | --- |
| 2 | prime | probably prime | probably prime | probably prime |
| 3 | prime | probably prime | probably prime | probably prime |
| 4 | composite | composite | probably prime | probably prime |
| 5 | prime | probably prime | probably prime | probably prime |
| 6 | composite | composite | composite | probably prime |
| 7 | prime | probably prime | probably prime | probably prime |
| 8 | composite | composite | probably prime | probably prime |
| 9 | composite | composite | composite | probably prime |
| 10 | composite | composite | probably prime | probably prime |
Algorithm: We now define our first algorithm for testing whether a number is probably prime.
1. inputs: m ∈ ℕ, number of trials k ∈ ℕ
1. repeat k times:
1. a := a random number from {2,...,m-1}
2. if a | m then return composite
2. return probably prime
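A minimal sketch of this algorithm (the function name and string outputs are our own). A verdict of "composite" is always correct, while "probably prime" may be a false positive:

```python
import random

def probably_prime(m, k):
    """Randomized trial division: test k random candidate divisors."""
    for _ in range(k):
        a = random.randrange(2, m)   # random element of {2, ..., m-1}
        if m % a == 0:
            return "composite"
    return "probably prime"

# 97 is prime, so no candidate divisor can succeed.
assert probably_prime(97, 50) == "probably prime"
```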
Notice that the above algorithm will never say that a prime number is actually composite. If it does not find a factor of m because it did not run for sufficiently many iterations, then it will indicate that m is probably prime. Thus, it will have no false negatives (i.e., an incorrect output indicating a prime number is composite).
Algorithm: We now define another algorithm for testing whether a number is probably prime.
1. inputs: m ∈ ℕ, k ∈ ℕ
1. repeat k times:
1. a := a random number from {2,...,m-1}
2. if a | m then return composite
3. if gcd(a,m) ≠ 1 then return composite
2. return probably prime
The above algorithm is interesting because by using the gcd operation, we get more value out of each random number we try. The gcd operation runs in polynomial time, yet it tells us whether the two sets of factors (the factors of a and the factors of m) share any element greater than 1. Checking this intersection using the naive approach (factoring both numbers) would take exponential time.
The above algorithm is somewhat problematic if we want to have a good idea of how to set k given our desired level of confidence in the output. For example, how high should k be so that the probability that we detect a composite is more than 1/2? If we require that k ≈ √(m) to be sufficiently confident in the output, we might as well use the brute force method of checking every a {2,..., ⌊ √(m) ⌋}.
To define a more predictable testing approach for our algorithm, we derive a theorem that is frequently used in applications of modular arithmetic (indeed, it underlies the prime number generators found in many software applications).
Fact (Fermat's little theorem): For any p ∈ ℕ and any a ∈ {1,...,p-1}, if p is prime then it is true that:

a^(p-1) ≡ 1 (mod p)
We have already shown that if p is a prime then R defined as below is a permutation:
 R
=
 { (1, (1 ⋅ a) mod p), (2, (2 ⋅ a) mod p), ..., (p-1, ((p-1) ⋅ a) mod p) }
=
 { (i, (i ⋅ a) mod p) | i ∈ {1,...,p-1} }
Next, to make our notation more concise, note that:

1 ⋅ 2 ⋅ ... ⋅ (p-1) = (p - 1)!

(1 ⋅ a) ⋅ (2 ⋅ a) ⋅ ... ⋅ ((p-1) ⋅ a) = a^(p-1) ⋅ (p - 1)!

Because R is a permutation, the residues (i ⋅ a) mod p for i ∈ {1,...,p-1} are exactly the numbers 1,...,p-1 in some order, so:

a^(p-1) ⋅ (p - 1)! ≡ 1 ⋅ (p - 1)! (mod p)

Recall that p is prime, so p does not divide (p - 1)!. Thus, we can cancel (p - 1)! on both sides of the congruence above:

a^(p-1) ≡ 1 (mod p)

We have now derived the statement of Fermat's little theorem.
Fact: A number p ∈ ℕ is prime iff p > 1 and for all a ∈ {1,...,p-1}, a^(p-1) mod p = 1.
If we negate the statement above, we can define when a number is composite (i.e., when it is not prime) in a way that suggests a straightforward algorithm.
Definition: A number m ∈ ℕ is composite iff m > 1 and there exists a ∈ {1,...,m-1} such that a^(m-1) mod m ≠ 1. In this case, a is a Fermat witness to the compositeness of m.
Definition: If for composite m ∈ ℕ and a ∈ {1,...,m-1} we have that a^(m-1) mod m = 1, then a is a Fermat liar and m is a pseudoprime with respect to a.
Algorithm (Fermat primality test): We now extend our algorithm. The following algorithm can be used to test whether a number is probably prime.
1. inputs: m ∈ ℕ, k ∈ ℕ
1. repeat k times:
1. a := a random number from {2,...,m-1}
2. if a | m then return composite
3. if gcd(a,m) ≠ 1 then return composite
4. if a^(m-1) mod m ≠ 1 then return composite
2. return probably prime
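Here is a sketch of the full test in Python; `pow(a, m - 1, m)` computes a^(m-1) mod m efficiently, and `math.gcd` plays the role of our greatest common divisor algorithm (the function name is our own):

```python
import math
import random

def fermat_test(m, k):
    """Fermat primality test: returns 'composite' or 'probably prime' for m >= 4."""
    for _ in range(k):
        a = random.randint(2, m - 1)   # a random number from {2,...,m-1}
        if m % a == 0:                 # a is a factor of m
            return "composite"
        if math.gcd(a, m) != 1:        # a shares a nontrivial factor with m
            return "composite"
        if pow(a, m - 1, m) != 1:      # a is a Fermat witness
            return "composite"
    return "probably prime"

print(fermat_test(101, 20))  # prints "probably prime"
```

For small m such as 4 or 6, every candidate in {2,...,m-1} fails one of the three checks, so the test detects those composites on the very first iteration.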
If m is a prime, the above algorithm will always return probably prime.
For any given candidate a in the above algorithm, if a | m or gcd(a,m) ≠ 1, then m has a nontrivial factor and is composite. Thus, in the worst case, gcd(a,m) = 1 for all k candidates a that we consider. How many of these k candidates must pass the third test before we are confident that m is prime? In fact, for most composite numbers m, k can be very low.
Fact: If for a composite m ∈ ℕ there is at least one Fermat witness a ∈ {2,...,m-1} such that gcd(a,m) = 1, then at least half of all a such that gcd(a,m) = 1 are Fermat witnesses.
Suppose that a is a Fermat witness and a1,...,an are distinct Fermat liars. Then for every Fermat liar ai we have that:

(a ⋅ ai)^(m-1) ≡ a^(m-1) ⋅ ai^(m-1) ≡ a^(m-1) (mod m)

But a is a Fermat witness, so a^(m-1) mod m ≠ 1. Thus, (a ⋅ ai)^(m-1) mod m ≠ 1, so a ⋅ ai is also a Fermat witness. Furthermore, for any two distinct Fermat liars ai and aj we have by the generalized Euclid's lemma and the fact that ai, aj, and a are all coprime with m:

ai ≢ aj (mod m) implies a ⋅ ai ≢ a ⋅ aj (mod m)

Since multiplication by a maps distinct liars to distinct witnesses, there are at least as many witnesses as liars, so at least half the values are witnesses. How many numbers m have at least one Fermat witness? Equivalently, how many numbers have no Fermat witnesses?
Definition: A composite number m ∈ ℕ that has no coprime Fermat witnesses is a Carmichael number, also known as an absolute Fermat pseudoprime.
The distribution of Carmichael numbers is high enough that the Fermat primality test is usually passed over in favor of slightly more complex tests for probable primes. However, those tests follow a similar principle, and the Fermat primality test is used in some deployed software applications (such as PGP).
| for the chosen a we have... | what it means | probability of this occurring if m is a non-Carmichael composite |
| --- | --- | --- |
| a \| m | a is a non-trivial factor of m, so m is composite | (# integers in {2,...,m-1} that are factors of m) / (m-2) |
| gcd(a,m) ≠ 1 | m and a have a non-trivial common factor, so m is composite | (# integers in {2,...,m-1} that share factors with m) / (m-2) |
| a^(m-1) mod m ≠ 1 | a is a Fermat witness that m is composite | at least 1/2 |
We can consider a particular example input for the primality test to see how each successive check in the algorithm can extract valuable information about whether the input is composite. The following table is for m = 15.
| m = 15 and a = ... | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| a \| m | PP | C | PP | C | PP | PP | PP | PP | PP | PP | PP | PP | PP |
| gcd(a,m) ≠ 1 | PP | C | PP | C | C | PP | PP | C | C | PP | C | PP | PP |
| a^(m-1) mod m = ... | 4 | 9 | 1 | 10 | 6 | 4 | 4 | 6 | 10 | 1 | 9 | 4 | 1 |
We can now summarize all the facts and algorithms we have introduced and how their relationships allow us to construct a prime number generator.
[Diagram: the generalized Euclid's lemma shows that multiples of a coprime a in ℤ/mℤ form a permutation, which yields Fermat's little theorem; together with the greatest common divisor algorithm, a random number generator, and a coprime generator (using gcd(m, m+1) = 1), these combine into the Fermat primality test, which gives a probable prime detector and, finally, a probable prime generator.]

To better understand multiplicative inverses, we first review the definition of an additive inverse.
Fact: For any m ∈ ℕ, every element in the set ℤ/mℤ has an inverse with respect to addition defined over ℤ/mℤ (i.e., an additive inverse). Consider any x ∈ ℤ/mℤ. Then m − x ∈ ℤ/mℤ and

x + (m − x) ≡ m ≡ 0 (mod m)

We denote by −x the additive inverse of x.
Example: What is the additive inverse of 2 ∈ ℤ/5ℤ?
The additive inverse is 5 − 2 = 3, since (2 + 3) mod 5 = 0.
There is more than one way to compute multiplicative inverses; in this subsection, we will present facts that will help us build algorithms for computing multiplicative inverses.
Definition: Given a positive integer m ∈ ℕ and a congruence class x ∈ ℤ/mℤ, suppose there exists a congruence class y ∈ ℤ/mℤ such that:

x ⋅ y ≡ 1 (mod m)

Then we say that y is the multiplicative inverse of x in ℤ/mℤ. We usually denote the multiplicative inverse of x as x^(-1) (as is often done for multiplicative inverses over the rationals, e.g., 2^(-1) = 1/2).
Fact: Let p ∈ ℕ be a prime number, and let a ∈ ℤ/pℤ with a ≠ 0. Then we know by Fermat's little theorem that:

a^(p-1) ≡ 1 (mod p)

But we can factor the left-hand side to get:

a ⋅ a^(p-2) ≡ 1 (mod p)

Thus, the multiplicative inverse of a ∈ ℤ/pℤ is:

a^(-1) ≡ a^(p-2) (mod p)

Note that:

a ⋅ a^(-1) ≡ a ⋅ a^(p-2) ≡ a^(p-1) ≡ 1 (mod p)
Example: What is the multiplicative inverse of 2 ∈ ℤ/5ℤ? We can compute it as follows:

2^(-1) ≡ 2^(5-2) ≡ 2^3 ≡ 8 ≡ 3 (mod 5)

We can check to confirm that this is true:

2 ⋅ 3 ≡ 6 ≡ 1 (mod 5)
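This computation is one line in Python, since the built-in `pow` accepts a modulus (and, in Python 3.8+, `pow(a, -1, p)` computes the inverse directly):

```python
def inverse_mod_prime(a, p):
    """Multiplicative inverse of a in Z/pZ, for p prime and a nonzero,
    computed as a^(p-2) mod p via Fermat's little theorem."""
    return pow(a, p - 2, p)

print(inverse_mod_prime(2, 5))  # 3, since 2 * 3 = 6 ≡ 1 (mod 5)
```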

### [link]  3.9. Chinese remainder theorem (CRT) and applications

In previous sections we presented facts that allowed us to solve certain individual equations with solution spaces corresponding to sets of congruence classes such as ℤ/mℤ. It is also possible to solve systems of equations over sets of congruence classes.
Theorem (Chinese remainder theorem): The Chinese remainder theorem (CRT) states that given primes p1,...,pk ∈ ℕ, for any a1,...,ak ∈ ℤ there exists a solution x ∈ ℤ to the system of equations:
 x mod p1
=
 a1
 x mod pk
=
 ak
We can also state the theorem in terms of congruences. Given primes p1,...,pk ∈ ℕ, for any a1 ∈ ℤ/p1ℤ, ..., ak ∈ ℤ/pkℤ there exists a unique solution x ∈ ℤ/(p1 ⋅ ... ⋅ pk)ℤ to the system of equations:
 x
 a1 (mod p1)
 x
 ak (mod pk)
In other words, all the solutions to the first system above are from the same congruence class of ℤ/(p1 ⋅ ... ⋅ pk)ℤ. The theorem applies even if p1,...,pk are not prime but merely pairwise coprime (i.e., relatively prime).
Example: Solve the following system of equations for the unique solution x ∈ ℤ/10ℤ:
 x
 3 (mod 5)
 x
 0 (mod 2)
We can list the integers corresponding to each congruence class and find the unique integer in {0, ..., 2 ⋅ 5 - 1} that is in both lists:
 3 + 5ℤ
=
 {..., 3, 8, 13, 18, 23, 28, ...}
 0 + 2ℤ
=
 {..., 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, ...}
We can compute the intersection, which should contain all the integers that satisfy both equations:
 (3 + 5ℤ) ∩ (0 + 2ℤ)
=
 {..., 8, 18, 28, ...}
This appears to be the congruence class 8 + 10ℤ. Thus, we have the unique solution:
 x
 8 (mod 10)
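For moduli this small we can find the solution mechanically; the following brute-force search (our own helper, not an efficient CRT solver) mirrors the intersection computed above by scanning ℤ/(m1 ⋅ ... ⋅ mk)ℤ for the unique residue satisfying every congruence:

```python
from math import prod

def crt_brute_force(congruences):
    """Given pairs (a, m), return the unique x in Z/(product of moduli)Z
    with x ≡ a (mod m) for every pair, by exhaustive search."""
    M = prod(m for _, m in congruences)
    for x in range(M):
        if all(x % m == a for a, m in congruences):
            return x
    return None  # no solution (moduli were not pairwise coprime)

print(crt_brute_force([(3, 5), (0, 2)]))  # 8
```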
The Chinese remainder theorem has many applications in a variety of contexts. In this section we present the following algorithms, which all rely on the ability to solve systems of equations involving congruence classes.
[Diagram: the Chinese remainder theorem yields a CRT solver, which enables efficient modular arithmetic, range ambiguity resolution, and the Shamir secret sharing protocol.]
Fact: Given a, b ∈ ℤ and m ∈ ℕ, it is always true that

(a mod m) + (b mod m) ≡ a + b (mod m)

Likewise, it is always true that

(a mod m) ⋅ (b mod m) ≡ a ⋅ b (mod m)

In particular, if we know that the exact result a + b (or a ⋅ b) lies in {0,...,m-1}, we can recover it exactly from the reduced operands.
Example (efficient modular arithmetic): Suppose we want to perform a large number of arithmetic operations in sequence. The operations could be specified as a program that operates on a single variable and performs a sequence of variable updates that correspond to arithmetic operations, such as the example below.
 x
:=
 3
 x
:=
 x + 6
 x
:=
 x ⋅ 2
 x
:=
 x − 17
 x
:=
 x + 1
Suppose that over the course of the computation, x might become very large (e.g., 0 ≤ x ≤ 2^1000000000). However, we have an additional piece of information: once the sequence of operations ends, we know that 0 ≤ x < 1024.
Given our additional information about the range of the final output, we do not need to store 1000000000-bit numbers in order to perform the computation and get a correct result. It is sufficient to instead perform all the operations in ℤ/1024ℤ:
 x
:=
 3 mod 1024
 x
:=
 (x + 6) mod 1024
 x
:=
 (x ⋅ 2) mod 1024
 x
:=
 (x − 17) mod 1024
 x
:=
 (x + 1) mod 1024
The above will produce the same result in ℤ/1024ℤ, and since 1024 = 2^10, we will only need about 10 bits at any single point in the computation to store each intermediate result.
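A quick way to convince ourselves: run the example both ways in Python and compare the results modulo 1024 (the list of operations is just the sequence above):

```python
m = 1024
ops = [lambda v: v + 6, lambda v: v * 2, lambda v: v - 17, lambda v: v + 1]

x_exact = 3          # exact arithmetic over Z
x_reduced = 3 % m    # arithmetic in Z/1024Z, reducing after every step
for op in ops:
    x_exact = op(x_exact)
    x_reduced = op(x_reduced) % m

print(x_exact, x_reduced)  # 2 2 -- the reduced computation agrees
```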
Example (efficient distributed modular arithmetic): Suppose that, as in the previous example, we want to perform a large number of arithmetic operations in sequence on large integers (e.g., in ℤ/2^70ℤ). However, our resources may be limited. For example, we may only have a collection of processors that can each perform arithmetic on relatively small integers (e.g., in ℤ/2^8ℤ). Is it possible for us to perform this computation using these processors, and is it possible for us to speed up the computation by running the processors in parallel? We may assume that a single arithmetic computation in ℤ/2^nℤ running on a single processor takes n time steps to perform.
Suppose we have ten processors that can perform arithmetic computations in ℤ/2^8ℤ or any smaller space (such as ℤ/2^7ℤ). We can approach this problem by choosing a collection of primes p1,...,p10 such that 2^7 < pi < 2^8, which implies that:

p1 ⋅ ... ⋅ p10 > 2^7 ⋅ ... ⋅ 2^7 = 2^70
We can then perform the sequence of computations modulo each of the primes pi to obtain ten results a1, ..., a10. Once we obtain the results, we can apply the Chinese remainder theorem to obtain x from the system:

x ≡ a1 (mod p1)

...

x ≡ a10 (mod p10)

Since the product of the primes is greater than 2^70, the unique solution x to the above system of equations will be the correct result of the computation. Since the processors were running in parallel, the computation was about 10 times faster than it would have been if we had performed it in ℤ/2^70ℤ (or in sequence using a single processor that can perform computations in ℤ/2^8ℤ).
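The following sketch simulates this scheme at a small scale: the same operation sequence is run independently modulo three coprime moduli (each residue computation could run on its own small processor), and the final result is recombined by solving the CRT system, here with a brute-force search since the moduli are tiny (all names are ours):

```python
from math import prod

def run_ops(x, m):
    """Run an example operation sequence with all arithmetic in Z/mZ."""
    for op in [lambda v: v + 6, lambda v: v * 2, lambda v: v - 17, lambda v: v + 1]:
        x = op(x) % m
    return x

moduli = [7, 11, 13]                            # pairwise coprime; product 1001 bounds the result
residues = [run_ops(3 % m, m) for m in moduli]  # each entry could be computed in parallel

# recombine by finding the unique x < prod(moduli) matching every residue
M = prod(moduli)
x = next(v for v in range(M)
         if all(v % m == r for m, r in zip(moduli, residues)))
print(x)  # 2, the same result as running the sequence exactly
```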
Example (variant of range ambiguity resolution): Suppose we want to build a radar or other sensing device that sends signals out and listens for reflections of those signals in order to detect the distances of obstacles in the environment. The device has a clock that counts up from 0, one integer per second. If the device sends a single signal out that travels at 1 km per second at time 0 and receives a response in 12 seconds at time 12, it knows that the distance to the object and back is 12 km.
However, what if we cannot wait 12 seconds or more? For example, the obstacle may be moving quickly and we want to constantly update our best guess of the distance to that object. We would need to send signals more frequently (for example, every 5 seconds). But then if an object is 12 seconds away, once we are running in a steady state we would have no way to tell which of the signals we sent corresponds to the response we just received.
However, we can obtain some information in this scenario. Suppose we send a signal every 5 seconds, only when the clock's timer is at a multiple of 5. Equivalently, imagine the clock counts up modulo 5 (i.e., 0,1,2,3,4,0,1,2,3,4,0,...) and we only send signals when the clock is at 0. What information can we learn about the object's distance in this scenario? If the distance to the object and back is d, then we would learn d mod 5, because we would get the signal back when the clock is at 0, 1, 2, 3, or 4.
We can use multiple instances of the above device (each device using its own distinct frequency for sending signals) to build a device that can check for obstacles more frequently while not giving up too much accuracy. Pick a collection of primes p1,..., pn such that their product is greater than the distance to any possible obstacle (e.g., if this is a ship or plane, we could derive this by considering the line of sight and the Earth's curvature). Take n instances of the above devices, each with their own clock that counts in cycles through ℤ/piℤ and sends out a signal when the clock is at 0. Running in a steady state, if at any point in time the known offsets are a1,...,an, we would know the following about the distance d to an obstacle:
 d
 a1 (mod p1)
 d
 an (mod pn)
We can then use the Chinese remainder theorem to derive the actual distance d < p1 ⋅ ... ⋅ pn.
Protocol (Shamir secret sharing): Suppose there are N participants and we want to divide some secret information among them into N parts so that any k or more participants can reconstruct the secret information, but no subset of fewer than k participants can reconstruct it. Let s ∈ ℤ be the secret information. Collect a set of randomly chosen pairwise coprime integers M = {m1,...,mN} such that:
• the product of any collection of at least k integers in M is greater than s;
• the product of any collection of k-1 integers in M is less than s.
Give each participant i ∈ {1,...,N} the value s mod mi. Now, any n ≥ k participants can use the Chinese remainder theorem to solve for s.
Note: There are many alternative ways to implement Shamir secret sharing. Consider the following example using curve-fitting. We choose some large m ∈ ℤ (typically prime, so that the divisions needed later are possible), and then randomly select integers c1,...,c(k-1) ∈ ℤ/mℤ. We then use these integers as coefficients in a polynomial of degree k-1:

f(x) = s + c1 x + c2 x^2 + ... + c(k-1) x^(k-1)

Each participant i ∈ {1,...,N} is given f(i). Any k participants can now use curve-fitting techniques or techniques for solving collections of equations (e.g., computing the reduced row echelon form of a matrix) to determine all the coefficients of f and, thus, solve for s.
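Here is a small sketch of the polynomial variant over ℤ/pℤ for a prime p (all names are ours; `pow(d, -1, p)` computes a modular inverse, available in Python 3.8+). Shares are points on a random polynomial of degree k-1 with constant term s, and any k shares determine s by Lagrange interpolation at x = 0:

```python
import random

def make_shares(s, k, N, p):
    """Split secret s into N shares, any k of which recover s; p is a prime > max(s, N)."""
    coeffs = [s] + [random.randrange(p) for _ in range(k - 1)]  # degree k-1 polynomial
    f = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, N + 1)]

def recover(shares, p):
    """Lagrange interpolation at x = 0 over Z/pZ recovers the constant term."""
    s = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % p
                den = (den * (xi - xj)) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

shares = make_shares(42, 3, 5, 101)  # 5 shares, threshold 3
print(recover(shares[:3], 101))      # 42
```

Any subset of 3 of the 5 shares recovers the secret; 2 shares leave it information-theoretically hidden.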

### [link]  3.10. Solving systems of equations with CRT solutions using multiplicative inverses

The Chinese remainder theorem guarantees that a unique solution exists to particular systems of equations involving congruence classes. But can these solutions be computed automatically and efficiently? In fact, they can. However, computing such solutions requires the ability to compute multiplicative inverses in ℤ/mℤ.
[Diagram: Bézout's identity yields the extended Euclidean algorithm (which builds on the greatest common divisor algorithm), giving an algorithm for finding multiplicative inverses (related also to Fermat's little theorem, Euler's theorem, and Euler's totient function φ); combined with the Chinese remainder theorem this gives a CRT solver for two equations, and by induction a CRT solver for n equations.]
How is computing multiplicative inverses related to solving systems of equations that have solutions according to CRT? Consider the following example.
Example: Suppose we want to solve the following system of equations:
 x
 1 (mod 5)
 x
 0 (mod 4)
The above two equations are constraints on the integers that can be in the congruence class x. One way to state these constraints in English is: "x must be a multiple of 4, and x must be in 1 + 5ℤ". But then we can rewrite the above as a single equation:
 4 ⋅ y
 1 (mod 5)
Then, we only need to solve for y, and let x = 4 ⋅ y. What is y? The above equation implies that y is the multiplicative inverse of 4 in ℤ/5ℤ. Thus, we can compute:
y ≡ 4^(-1) ≡ 4^(5-2) ≡ 4^3 ≡ 64 ≡ 4 (mod 5)
Thus, the multiplicative inverse of 4 in ℤ/5ℤ is itself. Plugging y into x = 4 ⋅ y gives us 16. Thus, we know by CRT that we have our unique solution in ℤ/(4 ⋅ 5)ℤ = ℤ/20ℤ:
 x
 16 (mod 20)
The above example suggests that we can solve certain pairs of equations with CRT solutions if one of the congruence classes is 0 and the other 1. What if the other congruence class is not 1?
Example: Suppose we want to solve the following system of equations for x ℤ/15ℤ:
 x
 4 (mod 5)
 x
 0 (mod 3)
We can observe that we want some x that is a multiple of 3 and is in 4 + 5ℤ. We can set x = 3 ⋅ y for some y, and then we want to solve the following for y ℤ/5ℤ:
 3 ⋅ y
 4 (mod 5)
Using Fermat's little theorem, we can compute the multiplicative inverse of 3 ∈ ℤ/5ℤ:

3^(5-1) ≡ 1 (mod 5)

3^(5-1) ⋅ 3^(-1) ≡ 1 ⋅ 3^(-1) (mod 5)

3^((5-1)-1) ≡ 3^(-1) (mod 5)

3^(-1) ≡ 3^(5-2) ≡ 3^3 ≡ 27 ≡ 2 (mod 5)

Thus, we know that the multiplicative inverse of 3 ∈ ℤ/5ℤ is 2, and we have that 2 ⋅ 3 ≡ 1 (mod 5). Notice that 4 ≡ 4 ⋅ 1 (mod 5):
 3 ⋅ y
 4 (mod 5)
 3 ⋅ y
 4 ⋅ 1 (mod 5)
Since 1 ≡ 2 ⋅ 3 (mod 5), we can substitute:
 3 ⋅ y
 4 ⋅ (3 ⋅ 2) (mod 5)
We can cancel 3 on both sides using Euclid's lemma (since 5 is prime) or Euclid's generalized lemma (since 3 and 5 are coprime):
 y
 4 ⋅ 2 (mod 5)
 y
 8
 y
 3
Since we originally set x = 3 ⋅ y, we can now substitute and solve for x ∈ ℤ/15ℤ:

x ≡ 3 ⋅ y ≡ 3 ⋅ 3 ≡ 9 (mod 15)
Thus, x ≡ 9 (mod 15) is a solution to our equation. We can confirm this:
 9
 4 (mod 5)
 9
 0 (mod 3)
Example: Note that what we actually did in the previous example when we cancelled 3 ℤ/5ℤ on both sides is that we multiplied both sides by the multiplicative inverse of 3 ℤ/5ℤ. Suppose we knew that the multiplicative inverse of 3 ℤ/5ℤ is 2. We can use this information to help us solve the following equation:
 3 ⋅ x
 2 (mod 5)
We multiply both sides by 3-1 ≡ 2 (mod 5):
 3-1 ⋅ 3 ⋅ x
 3-1 ⋅ 2 (mod 5)
 x
 2 ⋅ 2
 x
 4
Notice that we have now reduced the problem of solving an equation with a single coefficient before x into the problem of finding the multiplicative inverse of the coefficient.
Example: Suppose we want to solve the following system of equations:
 x
 0 (mod 11)
 x
 4 (mod 7)
The above equations require that x ∈ ℤ/77ℤ be divisible by 11, and that x ∈ 4 + 7ℤ. Since x is divisible by 11, it is a multiple of 11, so we want to find x = 11 ⋅ y where:
 11 ⋅ y
 4 (mod 7)
To solve the above, it is sufficient to multiply both sides of the equation by 11-1 (mod 7). Since 11 ≡ 4 (mod 7), it is sufficient to find 4-1 (mod 7).
11^(-1) ≡ 4^(-1) ≡ 4^(7-2) ≡ 4^5 ≡ 1024 ≡ 2 (mod 7)
Thus, we can multiply both sides to obtain:
 11 ⋅ y
 4 (mod 7)
 11-1 ⋅ 11 ⋅ y
 11-1 ⋅ 4
 y
 2 ⋅ 4
 y
 8
 y
 1
Thus, we have:
 x
 11 ⋅ y (mod 77)
 x
 11 ⋅ 1
 x
 11 (mod 77)
Fact: Suppose we are given two unequal prime numbers p, q ℕ, and the following two equations:
 x
 1 (mod p)
 x
 0 (mod q)
This implies x must be a multiple of q, so rewrite x = qy. Then we have:
 q ⋅ y
 1 (mod p)
Thus, we can solve for q^(-1) by computing:

q^(-1) ≡ q^(p-2) (mod p)
Then we have:
 q-1 ⋅ q ⋅ y
 q-1 ⋅ 1 (mod p)
 y
 q-1 (mod p)
 x
 q ⋅ y (mod (p ⋅ q))
Notice that qy is indeed a solution to the original system because:
 q ⋅ y
 1 (mod p)

 because y ≡ q-1 (mod p);
 q ⋅ y
 0 (mod q)

 because q ⋅ y is a multiple of q.
Fact: Suppose we are given two unequal prime numbers p, q ℕ, and the following two equations where a ℤ/pℤ:
 x
 a (mod p)
 x
 0 (mod q)
This implies x must be a multiple of q, so we look for a solution of the form x = a ⋅ q ⋅ y. Then we have:
 a ⋅ q ⋅ y
 a (mod p)
As in the previous fact, the above works if y = q-1 (mod p), so compute:
 y
=
 q-1 (mod p)
 x
 a ⋅ q ⋅ y (mod (p ⋅ q))
Notice that aqy is indeed a solution to the original system because:
 a ⋅ q ⋅ y
 a (mod p)

 because y ≡ q-1 (mod p);
 a ⋅ q ⋅ y
 0 (mod q)

 because a ⋅ q ⋅ y is a multiple of q.
Fact: Suppose we are given two unequal prime numbers p, q ℕ, and the following two equations where a ℤ/pℤ and b ℤ/qℤ:
 x
 a (mod p)
 x
 b (mod q)
Suppose we instead solve the following two systems:
 x1
 a (mod p)
 x1
 0 (mod q)
 x2
 0 (mod p)
 x2
 b (mod q)
Notice that x1 + x2 is a solution to the original system because:
 x1 + x2
 x1 + 0 (mod p)
 a + 0 (mod p)
 a (mod p)
 x1 + x2
 0 + x2 (mod q)
 0 + b (mod q)
 b (mod q)
We know how to solve the above two systems separately:

x1 ≡ a ⋅ q ⋅ (q^(-1) mod p) (mod (p ⋅ q))

x2 ≡ b ⋅ p ⋅ (p^(-1) mod q) (mod (p ⋅ q))
Thus, we have the solution to the original system:
 x
 x1 + x2 (mod (p ⋅ q))
We have shown that we can solve a system of equations with a solution according to CRT if the moduli in the equations are both prime. What if the moduli are merely coprime? So far, we only needed a way to compute multiplicative inverses of numbers modulo a prime, and Fermat's little theorem was sufficient for this purpose. However, if the moduli are not prime, we need some other method to compute multiplicative inverses.
Fact: For any m ℕ, an x ℤ/mℤ has an inverse with respect to multiplication defined over ℤ/mℤ (i.e., a multiplicative inverse) iff gcd(x,m) = 1.
Fact (Bézout's identity): For any two integers x ∈ ℤ, y ∈ ℤ where x ≠ 0 or y ≠ 0, let z = gcd(x,y). Then there exist a ∈ ℤ and b ∈ ℤ such that:
 a ⋅ x + b ⋅ y
=
 z
Fact: For any two integers x ℤ, y ℤ where x ≠ 0 or y ≠ 0, and gcd(x,y) = 1, there exist a ℤ and b ℤ such that:
 a ⋅ x + b ⋅ y
=
 1
This fact is a special case of Bézout's identity (i.e., the case in which gcd(x,y) = 1).
Example: Suppose we have s,t ℤ such that:
 5 ⋅ s + 3 ⋅ t
=
 1
We can then do the following:
 − 5 ⋅ t + (5 ⋅ s + 3 ⋅ t) + 5 ⋅ t
=
 1
 (5 ⋅ s − 5 ⋅ t) + (3 ⋅ t + 5 ⋅ t)
=
 1
 5 ⋅ (s − t) + 8 ⋅ t
=
 1
Thus, we have converted an instance of Bézout's identity for 5 and 3 into an instance of Bézout's identity for 5 and 8.
We can repeat the above as many times as we want. Suppose we instead want Bézout's identity for 5 and 13. We can do the following:
 − 5 ⋅ 2 ⋅ t + (5 ⋅ s + 3 ⋅ t) + 5 ⋅ 2 ⋅ t
=
 1
 (5 ⋅ s − 5 ⋅ 2 ⋅ t) + (3 ⋅ t + 5 ⋅ 2 ⋅ t)
=
 1
 5 ⋅ (s − 2 ⋅ t) + 13 ⋅ t
=
 1
Fact: For any two integers a, b, s, t ℤ, suppose we have that:
 a ⋅ s + b ⋅ t
=
 1
Let us assume that a > b and that a mod b = r (in other words, for some k ∈ ℤ, a = b ⋅ k + r).
Then we have that:
 a ⋅ s + b ⋅ t
=
 1
 − b ⋅ k ⋅ s + (a ⋅ s + b ⋅ t) + b ⋅ k ⋅ s
=
 1
 (a ⋅ s − b ⋅ k ⋅ s) + (b ⋅ (t + k ⋅ s))
=
 1
 (a − b ⋅ k) ⋅ s + b ⋅ (t + k ⋅ s)
=
 1
 r ⋅ s + b ⋅ (t + k ⋅ s)
=
 1
 (a mod b) ⋅ s + b ⋅ (t + k ⋅ s)
=
 1
Thus, for any instance of Bézout's identity for a and b and a > b, there must exist an instance of Bézout's identity for a mod b and b.
The above fact suggests that if we want to find the s and t coefficients for an equation a ⋅ s + b ⋅ t = 1 given a > b, we can instead find Bézout's identity for b and a mod b. But notice that:

a mod b < b

This implies that the problem of finding the coefficients for an instance of Bézout's identity can be reduced to a strictly smaller instance of the same problem: Bézout's identity for b and a mod b, which can in turn be reduced to Bézout's identity for a mod b and b mod (a mod b), and so on. At each step the arguments get smaller:

a mod b < b < a

b mod (a mod b) < a mod b < b
Thus, we can use recursion; the recursive algorithm that solves this problem is called the extended Euclidean algorithm, and is a modification of the recursive algorithm that computes the gcd of two numbers.
Algorithm (extended Euclidean algorithm): The collection of equations considered in the Chinese remainder theorem can be solved constructively (i.e., in a way that provides a concrete solution and not just a proof that a solution exists) by applying an extended version of the greatest common divisor algorithm. We provide the definition of the algorithm below.
1. extended Euclidean algorithm: x ∈ ℤ, y ∈ ℤ
1. if y = 0
1. (s,t) := (1, 0)
2. return (s,t)
2. otherwise
1. (s,t) := extended Euclidean algorithm(y, x mod y)
2. return (t, s - (⌊ x/y ⌋ ⋅ t) )
Given two inputs x ℤ, y ℤ, the extended Euclidean algorithm returns two integers u, v such that
 u ⋅ x + v ⋅ y
=
 gcd(x,y)
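The recursive definition above can be transcribed into Python almost verbatim (function name ours):

```python
def egcd(x, y):
    """Extended Euclidean algorithm: return (s, t) with s*x + t*y == gcd(x, y)."""
    if y == 0:
        return (1, 0)                 # gcd(x, 0) = x = 1*x + 0*y
    s, t = egcd(y, x % y)             # s*y + t*(x mod y) == gcd(x, y)
    return (t, s - (x // y) * t)      # rearrange into coefficients for x and y

s, t = egcd(100, 49)
print(s, t)  # -24 49, giving (-24)*100 + 49*49 = 1
```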
The extended Euclidean algorithm also gives a direct way to solve a pair of CRT equations x ≡ a (mod m), x ≡ b (mod n) for coprime m and n: run the algorithm on m and n to obtain u and v with u ⋅ m + v ⋅ n = 1, and let x = (u ⋅ m) ⋅ b + (v ⋅ n) ⋅ a. We can check that this is indeed a solution to x ≡ a (mod m). Consider the following:
 u ⋅ m + v ⋅ n
=
 1
 v ⋅ n
=
 1 - u ⋅ m
 v ⋅ n
 1 (mod m)
Furthermore, we have that:
 ((u ⋅ m) ⋅ b) mod m
=
 0
Then, we can conclude:
 ((u ⋅ m) ⋅ b + (v ⋅ n) ⋅ a) mod m
=
 0 + ((v ⋅ n) ⋅ a) mod m
=
 0 + (1 ⋅ a) mod m
=
 a mod m
Using a similar argument, we can show that the solution is also equivalent to b (mod m).
Example: Suppose we want to find the multiplicative inverse of 49 in ℤ/100ℤ and the multiplicative inverse of 100 in ℤ/49ℤ. We run the extended Euclidean algorithm on the inputs 49 and 100 to obtain the following instance of Bézout's identity:
 (-24) ⋅ 100 + 49 ⋅ 49
=
 1
We can use the above to find the multiplicative inverse of 49 in ℤ/100ℤ:
 (-24) ⋅ 100 + 49 ⋅ 49
=
 1
 (-24) ⋅ 100 + 49 ⋅ 49
 1 (mod 100)
 49 ⋅ 49
 1 (mod 100)
Thus, 49-1 = 49 in ℤ/100ℤ. We can also find the multiplicative inverse of 100 in ℤ/49ℤ (also known as 2 ℤ/49ℤ):
 (-24) ⋅ 100 + 49 ⋅ 49
=
 1
 (-24) ⋅ 100 + 49 ⋅ 49
 1 (mod 49)
 -24 ⋅ 100
 1 (mod 49)
 25 ⋅ 100
 1 (mod 49)
Thus, 100-1 = 25 in ℤ/49ℤ.
Example: Suppose we want to solve the following system:
 x
 23 (mod 100)
 x
 31 (mod 49)
We use the extended Euclidean algorithm to find that:
 (-24) ⋅ 100 + 49 ⋅ 49
=
 1
This tells us that −24 is the inverse of 100 in ℤ/49ℤ and that 49 is the inverse of 49 in ℤ/100ℤ. Thus, to build 31 in ℤ/49ℤ, we need:
 31
 1 ⋅ 31 (mod 49)
 (100 ⋅ 100-1) ⋅ 31
 (100 ⋅ -24) ⋅ 31
To build 23 in ℤ/100ℤ, we need:
 23
 1 ⋅ 23 (mod 100)
 (49 ⋅ 49-1) ⋅ 23
 (49 ⋅ 49) ⋅ 23
Then the solutions to the system are in the congruence class:
 x
 (100 ⋅ -24) ⋅ 31 + (49 ⋅ 49) ⋅ 23 (mod (100 ⋅ 49))
 -19177 mod 4900
 423
Algorithm: Suppose we are given a collection of equations of the following form such that m1,...,mk are all pairwise coprime.
 x
 a1 (mod m1)
 x
 ak (mod mk)
Let C be the set of these equations, where Ci is the ith equation. The following algorithm can be used to find a solution for this system of equations.
1. solve system of equations: C is a set of constraints xai mod mi
1. while |C| > 1
1. remove two equations Ci and Cj from C and solve them to obtain a new equation xc (mod mimj)
2. add the new equation to C
2. return the one equation left in C
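Putting the pieces together, this algorithm can be sketched in Python: the extended Euclidean algorithm supplies the Bézout coefficients for each pair of coprime moduli, and pairs of congruences are merged until one remains (function names ours):

```python
def egcd(x, y):
    # returns (s, t) with s*x + t*y == gcd(x, y)
    if y == 0:
        return (1, 0)
    s, t = egcd(y, x % y)
    return (t, s - (x // y) * t)

def crt_pair(a1, m1, a2, m2):
    """Merge x ≡ a1 (mod m1) and x ≡ a2 (mod m2) into one congruence (m1, m2 coprime)."""
    u, v = egcd(m1, m2)                        # u*m1 + v*m2 == 1
    x = (a1 * v * m2 + a2 * u * m1) % (m1 * m2)
    return (x, m1 * m2)

def crt_solve(congruences):
    """Reduce a list of (a_i, m_i) pairs to a single equivalent congruence."""
    a, m = congruences[0]
    for a2, m2 in congruences[1:]:
        a, m = crt_pair(a, m, a2, m2)
    return (a, m)

print(crt_solve([(23, 100), (31, 49)]))  # (423, 4900)
```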

### [link]  3.11. More practice with CRT

Example: Solve the following equation for x ℤ/5ℤ by multiplying both sides by the appropriate multiplicative inverse:
 3 ⋅ x
 2 (mod 5)
Example: Solve the following system of equations for x ℤ/35ℤ by finding multiplicative inverses of 5 ℤ/7ℤ and 7 ℤ/5ℤ:
 x
 4 (mod 5)
 x
 2 (mod 7)
Example: Suppose you know that 7-1 ≡ 3 (mod 10). Solve the following system of equations:
 x
 0 (mod 2)
 x
 1 (mod 5)
 x
 3 (mod 7)
Example: Suppose we have a single processor that can perform arithmetic operations (addition, subtraction, multiplication, and modulus) on integers that can be represented with at most 11 bits (2^11 = 2048). On this processor, a single arithmetic operation can be performed in 11 time steps. We also have three other processors that can perform arithmetic on integers that can be represented with at most 4 bits (2^4 = 16). Each of these processors can perform an arithmetic operation on 4-bit integers in 4 time steps.
For example, suppose we want to perform 1000 arithmetic operations on 11-bit integers. Using a single processor, this would require:
 1000 ⋅ 11 = 11,000 time steps
If we use three coprime numbers 13, 14, and 15, and we use each of the three 4-bit processors to perform these operations modulo 13, 14, and 15 in parallel, 1000 operations would require:
 1000 ⋅ 4 = 4,000 time steps
Note that 13 ⋅ 14 ⋅ 15 = 2730, and that 2730 > 2048, so:

13 ⋅ 14 ⋅ 15 > 2^11
Suppose it takes 1400 time steps to solve a system of three congruence equations of the following form:
 x
 a (mod 13)
 x
 b (mod 14)
 x
 c (mod 15)
If we want to perform the computations as quickly as possible and we can use either the 11-bit processor or the three 4-bit processors, how many operations k would we need to perform before it pays off to switch from the 11-bit processor to the 4-bit processors?
Example: Suppose we are using echolocation to measure the distance to a wall that is at most 15 distance units away. We have two devices that emit sounds at two different frequencies. One device emits sound every 3 seconds, while the other device emits a sound every 11 seconds. Suppose we hear the following:
• the device that emits a sound every 3 seconds hears a response 2 seconds after each time it emits a sound;
• the device that emits a sound every 11 seconds hears a response 4 seconds after each time it emits a sound.
If sound travels one distance unit per second, how far away is the wall?
Example: Suppose Alice, Bob, and Eve are using the Shamir secret sharing protocol to store a combination for a lock; all three participants would need to work together to retrieve the secret lock combination in ℤ/60ℤ. They are each given the following equations:
 Alice:

 x
 1 (mod 3)
 Bob:

 x
 3 (mod 4)
 Eve:

 x
 2 (mod 5)
• What is the lock combination?
• The lock only permits anyone to try two incorrect combinations before locking down completely and becoming inaccessible. Suppose Eve has a chance to steal either Bob's secret information or Alice's secret information, but she can only choose one. Whose information should she steal in order to unlock the lock?
Example: Suppose we want to store a number n between 0 and 500,000 on a collection of 5-bit memory regions. However, we want to make sure that if any one of the memory regions is turned off, we can still recover the number exactly, without any missing information or errors. How many memory regions will we need to use? Note that 32^3 = 32,768 and 32^4 = 1,048,576.
Example: Suppose we make the following simplifications: for every t years,
• when the Earth revolves around the sun, it travels a circumference of 1 unit, at a rate of 1 ⋅ t (once per year);
• when the asteroid Ceres revolves around the sun, it travels a circumference of 5 units;
• when the planet Jupiter revolves around the sun, it travels a circumference of 11 units.
Suppose that on June 21st, 2000, the Earth, Ceres, and Jupiter all align (i.e., one can draw a straight line through all three). Next, suppose that it is June 21st of some year between 2000 and 2055. At this time, there is no alignment. However, Jupiter aligned with Earth on June 21st two years ago, and Ceres aligned with Earth on June 21st three years ago. What year is it?

### [link]  3.12. Euler's totient function, Euler's theorem, and applications

Definition: For any input m ℕ, define Euler's totient function φ by:
 φ(m)
=
 |{k | k ∈ {1,...,m}, gcd(k,m) = 1}|
Example: Compute φ(15).
 φ(15)
=
 |{k | k ∈ {1,...,15}, gcd(k,15) = 1}|
=
 |{1,2,4,7,8,11,13,14}|
=
 8
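The definition translates directly into (inefficient but clear) Python:

```python
from math import gcd

def phi(m):
    """Euler's totient: the number of k in {1,...,m} with gcd(k, m) = 1."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

print(phi(15))  # 8
```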
Example: Suppose p ℕ is a prime number. What is φ(p)?
 φ(p)
=
 |{k | k ∈ {1,...,p}, gcd(k,p) = 1}|
=
 |{1,2,3,...,p-1}|
=
 p-1
Example: What is φ(15)?
 φ(15) = |{k | k ∈ {1,...,15}, gcd(k,15) = 1}|
 = 15 - |{k | k ∈ {1,...,15}, gcd(k,15) ≠ 1}|
 = 15 - |{3,6,9,12,15} ∪ {5,10,15}|
 = 15 - |{3,6,9,12}| - |{5,10}| - |{15}|
 = 15 - (5-1) - (3-1) - 1
 = 15 - 5 - 3 + 1
 = (3 ⋅ 5) - 5 - 3 + 1
 = (3-1) ⋅ (5-1)
 = 2 ⋅ 4
 = 8
Fact: For any x ∈ ℕ and y ∈ ℕ, if gcd(x,y) = 1 then:
 φ(x) ⋅ φ(y) = φ(x ⋅ y)
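The totient function and this multiplicativity fact can be checked directly in Python (a brute-force sketch for small inputs; the function name `phi` is our own):

```python
from math import gcd

def phi(m):
    # Brute force: count k in {1, ..., m} with gcd(k, m) = 1.
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

assert phi(15) == 8
assert phi(3) * phi(5) == phi(15)   # gcd(3, 5) = 1
assert phi(4) * phi(9) == phi(36)   # gcd(4, 9) = 1
```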
Example: Suppose p ∈ ℕ and q ∈ ℕ are distinct prime numbers. What is φ(p ⋅ q)?
 φ(p ⋅ q) = φ(p) ⋅ φ(q) = (p-1) ⋅ (q-1)
Fact: For any prime p ∈ ℕ and any positive integer k ∈ ℕ, we have that:
 φ(p^k) = p^k - p^(k-1)
Fact: For any a ∈ ℕ and m ∈ ℕ, if a^(m-1) mod m = 1 then a and m are coprime. To see why, note that a^(m-1) mod m = 1 means that for some k ∈ ℕ:
 a^(m-1) = 1 + k ⋅ m
Since any common divisor of 1 + k ⋅ m and k ⋅ m must divide 1, we have:
 1 = gcd(1 + k ⋅ m, k ⋅ m) = gcd(a^(m-1), k ⋅ m)
Any common divisor of a and k ⋅ m also divides a^(m-1), so gcd(a, k ⋅ m) = 1. Finally, since m divides k ⋅ m, it must be that gcd(a, m) = 1. Thus, a and m are coprime.
Example: Suppose m ℕ is a Carmichael number. At most how many Fermat liars does m have?
Fact: We can use φ to provide a formula for the probability that the Fermat primality test will detect that a Carmichael number m ℕ is actually composite. It is approximately:
 (m - φ(m)) / m
To be more precise (since we do not check 0 or 1 in our actual implementation), it is:
 ((m - 3) - φ(m)) / (m - 3)
Unfortunately, Euler's totient function does not in general have a better upper bound than f(m) = m, so this probability of detection can be very small.
Example: How many elements of ℤ/mℤ have a multiplicative inverse in ℤ/mℤ? An x ∈ ℤ/mℤ has an inverse iff gcd(x,m) = 1. Thus, the set of such x is exactly the set {x | x ∈ {1,...,m}, gcd(x,m) = 1}. But the size of this set is the definition of φ(m). Thus, there are φ(m) elements in ℤ/mℤ that have a multiplicative inverse.
Theorem (Euler's theorem): For any m ∈ ℕ and a ∈ ℤ/mℤ, if gcd(m,a) = 1 then we have that:
 a^φ(m) mod m = 1
Notice that if m is a prime number, then φ(m) = m-1. Then for any non-zero a ∈ ℤ/mℤ, gcd(a,m) = 1 and a^(m-1) mod m = 1. This is exactly the statement of Fermat's little theorem. Thus, Euler's theorem is a generalization of Fermat's little theorem.
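Euler's theorem can be verified numerically for a small modulus using Python's built-in three-argument `pow` (the brute-force totient helper `phi` is our own):

```python
from math import gcd

def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

m = 15
for a in range(1, m):
    if gcd(a, m) == 1:
        # a**phi(m) is congruent to 1 modulo m.
        assert pow(a, phi(m), m) == 1
```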
Fact: For any m ∈ ℕ and a ∈ ℤ/mℤ, if gcd(m,a) = 1 then for any i ∈ ℤ such that i ≡ 0 (mod φ(m)) we have that
 a^i mod m = 1
This is because i ≡ 0 (mod φ(m)) means that i = k ⋅ φ(m) for some k, so:
 a^(φ(m) ⋅ k) mod m = (a^φ(m))^k mod m = 1^k mod m = 1
Fact: For any prime p ∈ ℕ and any non-zero a ∈ ℤ/pℤ, for any k ∈ ℤ we have that:
 a^k mod p = a^(k mod (p-1)) mod p
Fact: For any m ∈ ℕ and a ∈ ℤ/mℤ, if gcd(m,a) = 1 then for any k ∈ ℤ we have that:
 a^k mod m = a^(k mod φ(m)) mod m
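This exponent-reduction fact is easy to check in Python, for instance with m = 15, a = 4, and φ(15) = 8 (values taken from the examples in this section):

```python
from math import gcd

m, a, phi_m = 15, 4, 8            # gcd(4, 15) == 1 and phi(15) == 8
assert gcd(a, m) == 1
for k in range(1, 200):
    # Reducing the exponent modulo phi(m) does not change the result.
    assert pow(a, k, m) == pow(a, k % phi_m, m)
```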
Example: We can compute the integer value 2^38 mod 7 as follows because 7 is prime:
 2^38 ≡ 2^(38 mod (7-1)) ≡ 2^(38 mod 6) ≡ 2^2 ≡ 4 (mod 7)
Since the final operation in the integer term is a modulus operation, the integer result of the term is exactly the representative 4 of the congruence class.
Example: We can compute 4^210000000 mod 5 as follows because 5 is prime:
 4^210000000 ≡ 4^(210000000 mod (5-1)) ≡ 4^(210000000 mod 4) ≡ 4^0 ≡ 1 (mod 5)
Example: We can compute 4^(8^100 + 3) mod 15 as follows because gcd(4,15) = 1:
 4^(8^100 + 3) ≡ 4^((8^100 + 3) mod φ(15)) (mod 15)
 ≡ 4^((8^100 + 3) mod ((5-1) ⋅ (3-1)))
 ≡ 4^((8^100 + 3) mod 8)
 ≡ 4^3
 ≡ 64
 ≡ 4 (mod 15)
Example: Compute 5^6603 mod 7.
Fact: For any m ∈ ℕ and a ∈ ℤ/mℤ where gcd(m,a) = 1, we can use Euler's theorem to find the inverse of a. Notice that:
 a^φ(m) mod m = 1
 (a^(φ(m)-1) ⋅ a) mod m = 1
Thus, a^(φ(m)-1) mod m is the multiplicative inverse of a in ℤ/mℤ.
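This gives a direct way to compute inverses in Python (a sketch for small moduli; the brute-force `phi` and the name `inverse` are our own):

```python
from math import gcd

def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def inverse(a, m):
    # Valid only when gcd(a, m) == 1; computes a**(phi(m) - 1) mod m.
    return pow(a, phi(m) - 1, m)

assert (3 * inverse(3, 22)) % 22 == 1
```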
Example: Find the multiplicative inverse of 5^2 in ℤ/7ℤ.
It is sufficient to notice that 5^6 ≡ 1 (mod 7), so 5^2 ⋅ 5^4 ≡ 1, so 5^4 is the inverse of 5^2 in ℤ/7ℤ.
Example: We can find the multiplicative inverse of 3 in ℤ/22ℤ using the following steps. We first compute φ(22) = 10.
 φ(22) = φ(11 ⋅ 2) = φ(11) ⋅ φ(2) = (11 − 1) ⋅ (2 − 1) = 10 ⋅ 1 = 10
Next, we compute the inverse using Euler's theorem.
 3^(-1) ≡ 3^(φ(22) − 1) ≡ 3^(10 − 1) ≡ 3^9 ≡ 3^3 ⋅ 3^3 ⋅ 3^3 ≡ 5 ⋅ 5 ⋅ 5 ≡ 25 ⋅ 5 ≡ 3 ⋅ 5 ≡ 15 (mod 22)
Definition: For m ∈ ℕ, we define (ℤ/mℤ)* to be the following subset of ℤ/mℤ:
 (ℤ/mℤ)* = { a | a ∈ ℤ/mℤ, a has an inverse in ℤ/mℤ }
Example: Does 11 have an inverse in ℤ/22ℤ (i.e., is it true that 11 ∈ (ℤ/22ℤ)*)?
Example: Compute |(ℤ/35ℤ)*|.
 |(ℤ/35ℤ)*| = |{ a | a ∈ ℤ/35ℤ, a has an inverse in ℤ/35ℤ }|
 = |{ a | a ∈ ℤ/35ℤ, gcd(a,35) = 1 }|
 = φ(35)
 = φ(5 ⋅ 7)
 = φ(5) ⋅ φ(7)
 = 4 ⋅ 6
 = 24
Fact: For any m ∈ ℕ, (ℤ/mℤ)* is closed under multiplication modulo m. That is, for any a ∈ ℤ/mℤ and b ∈ ℤ/mℤ, if the inverses a^(-1) ∈ ℤ/mℤ and b^(-1) ∈ ℤ/mℤ exist then (a ⋅ b) has the inverse (a^(-1) ⋅ b^(-1)). We can use the commutativity and associativity of multiplication to show this:
 (a ⋅ b) ⋅ (a^(-1) ⋅ b^(-1)) ≡ (a ⋅ a^(-1)) ⋅ (b ⋅ b^(-1)) ≡ 1 ⋅ 1 ≡ 1 (mod m)
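Closure of (ℤ/mℤ)* under multiplication can be confirmed exhaustively for a small modulus (a brute-force sketch):

```python
from math import gcd

m = 22
units = [a for a in range(m) if gcd(a, m) == 1]   # the set (Z/22Z)*
assert len(units) == 10                           # phi(22) == 10
for a in units:
    for b in units:
        # The product of two invertible elements is again invertible.
        assert gcd((a * b) % m, m) == 1
```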

## [link]  Review 1. Properties, Algorithms, and Applications of Modular Arithmetic

This section contains a comprehensive collection of review problems going over the course material covered until this point. Many of these problems are an accurate representation of the kinds of problems you may see on an exam.
Exercise: For some a ∈ ℕ, suppose that a^(-1) exists in ℤ/21ℤ and in ℤ/10ℤ (that is, a has an inverse in ℤ/21ℤ, and it also has an inverse in ℤ/10ℤ). Determine whether or not a has an inverse in ℤ/210ℤ. Explain why or why not. Hint: use gcd.
Exercise: Bob is trying to implement a random number generator. However, he's distracted and keeps making mistakes while building his implementation.
1. Bob begins his algorithm by generating two coprime numbers a and m such that gcd(a,m) = 1. However, he mixes them up and defines the following computation:
 [      (i ⋅ m) mod a      |      i ∈ {1,...,a-1}      ]
Is Bob going to get a permutation? Why or why not?
2. Bob notices part of his mistake and tries to fix his algorithm; he ends up with the following:
 [      (i ⋅ m) mod m      |      i ∈ {1,...,m-1}      ]
How many distinct elements does the list he gets in his output contain?
3. Bob notices his algorithm isn't returning a permutation, but he mixes up a few theorems and attempts the following fix:
 [      (i ⋅ a^(m-1)) mod m      |      i ∈ {1,...,m-1}      ]
Bob tests his algorithm on some m values that are prime numbers. How many elements does the set he gets in his output contain?
4. Bob doesn't like the fact that his permutation doesn't look very random, so he moves the i term into the exponent:
 [      a^(i ⋅ (m-1)) mod m      |      i ∈ {1,...,m-1}      ]
Bob tests his algorithm on some m values that are prime numbers. How many elements does the set he gets in his output contain?
Exercise: Suppose you have the following instance of Bézout's identity: 2 ⋅ 3 + (−1) ⋅ 5 = 1. Solve the following system of equations:
 x ≡ 2 (mod 3)
 x ≡ 3 (mod 5)
Exercise: Solve the following system of equations:
 x ≡ 2 (mod 7)
 x ≡ 3 (mod 5)
Exercise: Determine the size of the following set:
 {x | x ∈ ℤ/(11 ⋅ 13)ℤ,      x ≡ 5 mod 11,      x ≡ 7 mod 13}
Exercise: For a given y ∈ ℤ/(p ⋅ q)ℤ where p and q are distinct primes, how many solutions does the following system of equations have:
 x ≡ y^2 (mod p)
 x ≡ y^2 (mod q)
Exercise: Determine the size of the following set:
 {x | x ∈ ℤ/(11 ⋅ 13)ℤ,      s ∈ ℤ/11ℤ,      t ∈ ℤ/13ℤ,      x ≡ s mod 11,      x ≡ t mod 13 }
Exercise: Suppose that n ℕ is even and n/2 − 1 is odd. Determine the size of the following set:
 {i ⋅ (n/2 - 1) mod n | i ∈ {0,...,n-1} }
Exercise: For any n ∈ ℕ, let a ∈ ℤ/nℤ have an inverse a^(-1) ∈ ℤ/nℤ. Determine the size of the following set:
 { (a ⋅ i) mod n | i ∈ ℤ/nℤ }
Exercise: Let p be a prime number. Compute the set size |ℤ/pℤ - (ℤ/pℤ)*|.
Exercise: In a game, you win if you can guess correctly whether a large number n is prime in under a minute (if you are wrong, you win nothing and you lose nothing). You are given a handheld calculator that can only perform addition, subtraction, multiplication, division, exponentiation, and modulus (the calculator can represent arbitrarily large numbers, and can provide quotients to any precision). Describe one strategy you can use to give yourself a high probability of winning.
Exercise: Suppose that n ∈ ℕ. Compute the following:
 5^(34 ⋅ n + 1) mod 11
Exercise: Suppose we make the following simplifications:
• the Earth revolves around the sun once per year;
• the asteroid Ceres revolves around the sun every 5 years;
• the planet Jupiter revolves around the sun every 11 years.
Suppose that on June 21st, 2000, the Earth, Ceres, and Jupiter all align (i.e., one can draw a straight line through all three).
1. Which two of these objects will align again on June 21st, and in which year?
2. How many years will pass before all three align again?
3. Suppose that it is June 21st of some year between 2000 and 2055. At this time, there is no alignment. However, Jupiter aligned with Earth on June 21st four years ago, and Ceres aligned with Earth on June 21st one year ago. What year is it?
Exercise: Suppose there exist two devices, where one can either produce or consume exactly 2 units of power and another can either produce or consume exactly 7 units of power:
• device A: +/− 2 units
• device B: −/+ 7 units
Suppose we want to produce exactly 1 unit of power using a combination of some number of A devices and B devices. Is this possible?

## [link]  4. Computational Complexity of Modular Arithmetic Algorithms

### [link]  4.1. Definition of computational problems and their complexity

Below, we review a small set of definitions and facts from complexity theory. We will only use these facts as they relate to problems in modular arithmetic and abstract algebra. A course on computational complexity theory would go into more detail.
Definition: Informally, for some formula f, we call a statement of the following form a problem:
• "Given x, find y such that f(x, y) is true."
In the above, x can be viewed as the input describing the problem, and y can be viewed as the solution to the problem.
Definition: The computational complexity of a problem refers to the running time of the most efficient algorithm that can solve the problem.

### [link]  4.2. Complexity of algorithms for solving tractable problems

In this subsection we consider the running time of efficient algorithms for performing common arithmetic operations (addition, subtraction, multiplication, exponentiation, and division). We consider the complexity of these arithmetic operations on each of the following domains:
• unbounded positive integers;
• integers modulo 2^k;
• integers modulo n for some n ∈ ℕ.
All of our arithmetic algorithms will operate on bit string representations of positive integers. A bit string representation such as
 a_(k-1) ... a_0
is defined to represent the integer
 2^(k-1) ⋅ a_(k-1) + ... + 2^0 ⋅ a_0
Note that this means that representing an integer a requires k = ⌊log2(a)⌋ + 1 bits. Below are some specific examples:
 111 = 2^2 ⋅ 1 + 2^1 ⋅ 1 + 2^0 ⋅ 1
 1101 = 2^3 ⋅ 1 + 2^2 ⋅ 1 + 2^1 ⋅ 0 + 2^0 ⋅ 1
 10 = 2^1 ⋅ 1 + 2^0 ⋅ 0
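The positional interpretation above can be sketched in Python (the helper `to_int` and its most-significant-bit-first string input are our own illustration):

```python
def to_int(bits):
    # bits is a string such as "1101", most significant bit first;
    # bit i (counting from the right) contributes 2**i.
    return sum((2 ** i) * int(b) for i, b in enumerate(reversed(bits)))

assert to_int("111") == 7
assert to_int("1101") == 13
assert to_int("10") == 2
```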
Since the operations we consider usually take two arguments, we will follow the following conventions:
• the first (left-hand side) input is x, a k-bit integer;
• the second (right-hand side) input is y, an l-bit integer.
Thus, x ≤ 2^k - 1 and y ≤ 2^l - 1.
Algorithm: There exists an algorithm that can compute the sum of a k-bit integer x and an l-bit integer y in time O(max(k,l)+1). The size of the output is O(max(k,l)+1).
1. addition of unbounded positive integers: k-bit integer x, l-bit integer y
1. r := 0 (a bit vector to store the result)
2. c := 0 (the carry bit)
3. for i from 0 to max(k,l) − 1
1. r[i] := (x[i] xor y[i]) xor c
2. c := (x[i] and y[i]) or (x[i] and c) or (y[i] and c)
4. r[max(k,l)] := c
5. return r
Example: Below is a Python implementation of the bitwise addition algorithm.

from bitlist import bitlist

def add(x, y):
    k = len(x)
    l = len(y)
    r = bitlist(0)
    c = 0
    for i in range(0, max(k, l)): # Upper bound is not inclusive.
        r[i] = (x[i] ^ y[i]) ^ c
        c = (x[i] & y[i]) | (x[i] & c) | (y[i] & c)
    r[max(k, l)] = c
    return r

How can we use the addition algorithm to implement multiplication? One approach for multiplying two positive integers x, y ∈ ℕ is to do repeated addition of y (repeating the addition operations x times). However, if x is a k-bit integer, this would require up to 2^k - 1 addition operations, which would take time exponential in the representation size of the input x.
A more efficient approach is to use the representation of x as a sum of powers of 2, and to apply the distributive property. Suppose x is represented as the binary bit string a_(k-1) ... a_0. Then we have:
 x ⋅ y = (a_(k-1) ⋅ 2^(k-1) + ... + a_2 ⋅ 2^2 + a_1 ⋅ 2^1 + a_0 ⋅ 2^0) ⋅ y
 = (a_(k-1) ⋅ 2^(k-1) ⋅ y) + ... + (a_2 ⋅ 2^2 ⋅ y) + (a_1 ⋅ 2^1 ⋅ y) + (a_0 ⋅ 2^0 ⋅ y)
Notice that we have now rewritten multiplication as k − 1 addition operations. The only other problem is how to multiply y by powers of 2. We can do so simply by appending a 0 to the bit string representation of y. Suppose y is represented as the binary bit string b_(k-1) ... b_0. Then we have:
 2 ⋅ y = 2 ⋅ b_(k-1) ... b_1 b_0
 = 2 ⋅ (b_(k-1) ⋅ 2^(k-1) + ... + b_1 ⋅ 2^1 + b_0 ⋅ 2^0)
 = b_(k-1) ⋅ 2^k + ... + b_1 ⋅ 2^2 + b_0 ⋅ 2^1
 = (b_(k-1) ⋅ 2^k + ... + b_1 ⋅ 2^2 + b_0 ⋅ 2^1) + 0 ⋅ 2^0
 = b_(k-1) ... b_1 b_0 0
Thus, our algorithm only needs to depend on addition, and on shifting bit strings left by one (i.e., appending a 0 to the bit string at the position of the least significant bit).
Algorithm: There exists an algorithm that can compute the product of a k-bit integer x and an l-bit integer y in time O(k ⋅ (max(k,l)+1+k)), or O(max(k,l)^2). The size of the output is O(k+l) (the l-bit integer is shifted left at most k-1 times, and there may still be a carry bit from the last addition operation that is performed).
1. multiplication of unbounded positive integers: k-bit integer x, l-bit integer y
1. r := 0 (a bit vector to store the result)
2. for i from 0 to k − 1
1. if x[i] is 1
1. r := r + y (using unbounded integer addition)
2. shift the bits of y left by one bit (i.e., multiply y by 2)
3. return r
Example: Below is a Python implementation of the bitwise multiplication algorithm. Note that it relies on the bitwise addition algorithm implementation.

def mult(x, y):
    k = len(x)
    r = bitlist(0)
    for i in range(0, k): # Upper bound is not inclusive.
        if x[i] == 1:
            r = add(r, y) # Add y to the result using bitwise addition.
        y = y << 1 # Multiply y by 2 using a left shift.
    return r

Algorithm: There exists an algorithm that can compute the exponentiation x^y of a k-bit integer x and an l-bit integer y in time O(k ⋅ 2^l). The size of the output is O(k ⋅ 2^l). Notice that this means that for unbounded integer outputs, the algorithm runs in exponential time because it must build an output whose size is exponentially large in the size of the input.
1. exponentiation of unbounded positive integers: k-bit integer x, l-bit integer y
1. r := 1 (a bit vector to store the result)
2. for i from 0 to l − 1
1. if y[i] is 1
1. r := r ⋅ x (using unbounded integer multiplication)
2. x := x ⋅ x (using unbounded integer multiplication)
3. return r
Example: Below is a Python implementation of the bitwise exponentiation algorithm. Note that it relies on the bitwise multiplication algorithm implementation.

def exp(x, y):
    l = len(y)
    r = bitlist(1)
    for i in range(0, l): # Upper bound is not inclusive.
        if y[i] == 1:
            r = mult(r, x)
        x = mult(x, x)
    return r

Algorithm: There exists an algorithm that can compute the integer quotient ⌊ x / y ⌋ of a k-bit integer x and an l-bit integer y in time O((k ⋅ k) + (k ⋅ (2 ⋅ k))), or O(k^2).
1. integer division of unbounded positive integers: k-bit integer x, l-bit integer y
1. if y > x
1. return 0
2. for i from 0 to k − 1
1. shift y left by one bit
3. t := 0 (a bit vector to store ⌊ x / y ⌋ ⋅ y)
4. q := 0 (a bit vector to store the integer quotient)
5. p := 2^k (to keep track of the current power of 2)
6. for i from 0 to k
1. if t + y ≤ x
1. t := t + y (using unbounded integer addition)
2. q := q + p (using unbounded integer addition)
2. shift y right by one bit
3. shift p right by one bit
7. return q
Example: Below is a Python implementation of the bitwise integer division algorithm. Note that it relies on the bitwise addition algorithm implementation.

def div(x, y):
    k = len(x)
    if y > x:
        return bitlist(0)
    for i in range(0, k): # Upper bound is not inclusive.
        y = y << 1
    t = bitlist(0)
    q = bitlist(0)
    p = bitlist(2**k)
    for i in range(0, k+1): # Upper bound is not inclusive.
        if add(t, y) <= x:
            t = add(t, y)
            q = add(q, p)
        y = y >> 1
        p = p >> 1
    return q

Algorithm: There exists an algorithm that can compute x mod y of a k-bit integer x and an l-bit integer y in time O(k^2). This is accomplished by first performing an integer division, then an integer multiplication, and then a subtraction. This corresponds to the formula for the modulus operation:
 x mod y = x - ⌊ x/y ⌋ ⋅ y
When we consider the operations above as operating on integers modulo 2^k (with results also modulo 2^k), this corresponds to simply dropping any bits beyond the k least-significant bits when performing the computation.
Fact: There exists an algorithm that can compute the sum of two k-bit integers x and y in time O(k). The size of the output is O(k).
Fact: There exists an algorithm that can compute the product of two k-bit integers x and y in time O(k2). The size of the output is O(k).
Fact: There exists an algorithm that can compute x^y for two k-bit integers x and y in time O(k^3). The size of the output is O(k).
Fact: The recursive algorithm for gcd (and the extended Euclidean algorithm) makes O(log(max(x,y))) recursive calls on integer inputs x ∈ ℕ and y ∈ ℕ. Notice that this means that the number of recursive calls is linear, i.e., O(max(k,l)), for inputs consisting of a k-bit integer x and an l-bit integer y.
To see the above, consider the following fact: for any a ∈ ℕ, b ∈ ℕ, if b ≤ a then a mod b < (1/2) ⋅ a. Consider the two possibilities for b:
• if b ≤ (1/2) ⋅ a, then since a mod b < b, we have:
 (a mod b) < b ≤ (1/2) ⋅ a
• if b > (1/2) ⋅ a, then ⌊ a / b ⌋ = 1, so:
 a mod b = a − ⌊ a/b ⌋ ⋅ b = a − 1 ⋅ b = a − b < a − ((1/2) ⋅ a) = (1/2) ⋅ a
Thus, every time a mod b is computed in the algorithms, the result is less than half of the first parameter. Since every other invocation swaps the two parameters, both parameters are halved at least once every two invocations. Thus, the number of invocations or iterations for an input m is O(log(m)).
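The halving argument predicts that the number of modulus operations grows like the bit length of the inputs; a quick Python check (the step counter `gcd_steps` is our own):

```python
import math

def gcd_steps(a, b):
    # Count how many times "a mod b" is computed by Euclid's algorithm.
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Both parameters are halved at least once every two iterations, so the
# step count is at most about 2 * log2(max(a, b)).
for a, b in [(144, 89), (2**20 + 1, 2**19 + 3), (10**9, 998244353)]:
    assert gcd_steps(a, b) <= 2 * math.log2(max(a, b)) + 2
```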
Fact: The recursive algorithm for the extended Euclidean algorithm on inputs consisting of a k-bit integer x and an l-bit integer y runs in time O(max(k,l) ⋅ (2 ⋅ max(k,l)^2 + max(k,l))), or O(max(k,l)^3). The number of recursive calls is about max(k,l), and each recursive call involves an integer division, a multiplication, and a subtraction.
If all inputs and outputs are integers that can be represented with at most k bits, the running time is then O(k3).
Fact: The following problem can be solved in polynomial time: given x ∈ (ℤ/nℤ)*, compute x^(-1). This can be reduced to running the extended Euclidean algorithm, which has a polynomial running time.
If all inputs and outputs are integers that can be represented with at most k bits, the running time is then O(k3).
Fact: There exists an O(max(k,l)^3 + (k+l)^2) algorithm that can solve the following system of two equations (for k-bit integers x, x' and l-bit integers y, y') using the Chinese remainder theorem:
 s ≡ x' (mod x)
 s ≡ y' (mod y)
This algorithm calls the extended Euclidean algorithm on x and y, and then performs four multiplications modulo (xy). If all inputs and outputs are integers that can be represented with at most k bits, the running time is then O(k3).
Exercise: Multiply the following two numbers (represented in binary) using the multiplication algorithm presented in lecture: 1101101.

### [link]  4.3. Complexity of (probably) intractable problems

In the previous section we saw that addition, subtraction, multiplication, exponentiation, and division (integer division, modulus, and multiplication by multiplicative inverses) can all be computed efficiently (i.e., in polynomial time), both over integers and over congruence classes. It is also possible to efficiently compute roots and logarithms of integers (we omit proofs of this fact in this course). However, no efficient algorithms are known for computing roots and logarithms of congruence classes.
Definition: A problem can be solved in polynomial time iff there exists an algorithm that, for some constant c, solves every instance of the problem of size n in time O(n^c). The set of all problems that can be solved in polynomial time is called P, and if a problem can be solved in polynomial time, we say that the problem is in P.
Definition: A problem can be solved in exponential time iff there exists an algorithm that solves all instances of the problem of size n in time O(2^n).
Definition: There exists a polynomial-time reduction from a problem A to a problem B iff there exists a polynomial-time algorithm that can convert any instance of problem A into an instance of problem B (i.e., convert an input for A into an input for B, and convert the output from B into an output from A).
A polynomial-time reduction from one problem to another can be viewed as two separate polynomial-time algorithms: a conversion algorithm that takes inputs to problem A and invokes a solver for problem B some polynomial number of times, and a conversion algorithm that takes all the outputs obtained from the solver for problem B and assembles and/or converts them into outputs for problem A.
 input for A → (conversion from input for A to input(s) for B) → solver for problem B → (conversion from output(s) from B to output for A) → output for A
We can summarize the above diagram by simply saying that problem A reduces to problem B.
 problem B ⇐ problem A
We have already seen examples of such reductions. For example, a CRT solver for two equations makes a single call to the extended Euclidean algorithm. Thus, there exists a polynomial-time reduction from the problem of solving a two-equation system using CRT to the problem of computing multiplicative inverses.
 finding multiplicative inverses ⇐ solving two-equation systems using CRT
Fact: If there exists a polynomial-time reduction from problem A to problem B, and problem A is not in P (i.e., there exists no polynomial-time algorithm to solve A), then problem B must not be in P, either.
To see why B cannot be in P, we can present a proof by contradiction. Suppose that there does exist a polynomial-time algorithm to solve problem B. Then the polynomial-time reduction from A to B can invoke this polynomial-time algorithm for B. But then the reduction and the algorithm for B working together constitute a polynomial-time algorithm to solve A, so A must be in P. But this contradicts the fact that A is not in P, so no such polynomial-time algorithm for B can exist.
The above fact allows us to make conclusions about the computational complexity of certain problems based on their relationships (in terms of implementation) to other problems.
 problem B (premise: B ∈ P, can be solved in polynomial time) ⇐ problem A (conclusion: A ∈ P, can be solved in polynomial time)
 problem B (conclusion: B ∉ P, cannot be solved in polynomial time) ⇐ problem A (premise: A ∉ P, cannot be solved in polynomial time)
Intuitively, we can imagine that if problem A is "attached to" (i.e., depends on) problem B, an "easy" problem B will "pull" A down into the set of easily solvable problems P, while a "difficult" problem A will "pull" problem B into the set of hard-to-solve problems.
Conjecture (factoring): The following problem is not in P: given any integer n ∈ ℕ where n = p ⋅ q and p and q are prime, find p and q.
Fact: Suppose that n = p ⋅ q for two primes p ∈ ℕ and q ∈ ℕ. Given only n and φ(n), it is possible to compute p and q. Consider the following:
 φ(n) = (p − 1) ⋅ (q − 1)
 φ(n) = p ⋅ q − p − q + 1
 φ(n) = n − p − q + 1
 φ(n) − n = − p − q + 1
 φ(n) − n − 1 = − p − q
Thus, it is sufficient to solve the following system of equations for p and q:
 n = p ⋅ q
 φ(n) − n − 1 = − p − q
Example: Suppose that n = 15 and φ(n) = 8. Factor n.
We can plug n and φ(n) into the system of equations derived in the applicable fact:
 15 = p ⋅ q
 8 − 15 − 1 = − p − q
With two equations and two unknowns, we can now solve for p and q:
 p = 15 − 8 + 1 − q = 8 − q
 15 = (8 − q) ⋅ q
 0 = − q^2 + 8 ⋅ q − 15
 0 = q^2 − 8 ⋅ q + 15
At this point, we use the quadratic formula:
 q = 1/2 ⋅ (8 ± √(64 − 4 ⋅ 1 ⋅ 15))
 q = 1/2 ⋅ (8 ± √4)
 q = 1/2 ⋅ (8 ± 2)
 q ∈ {3, 5}
 {p, q} = {3, 5}
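This derivation amounts to solving a quadratic equation, which can be packaged as a small Python function (a sketch; the name `factor_from_phi` is our own):

```python
from math import isqrt

def factor_from_phi(n, phi_n):
    # p and q are the two roots of t**2 - s*t + n, where
    # s = p + q = n - phi_n + 1.
    s = n - phi_n + 1
    d = isqrt(s * s - 4 * n)   # the discriminant is a perfect square
    return ((s - d) // 2, (s + d) // 2)

assert factor_from_phi(15, 8) == (3, 5)
assert factor_from_phi(7 * 11, 6 * 10) == (7, 11)
```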
Conjecture (computing φ): The following problem is not in P: given any integer n ∈ ℕ where n = p ⋅ q and p and q are prime, find φ(n).
If we can compute φ(n), then we can compute p and q. If computing φ(n) were any easier than factoring n (e.g., if we had a polynomial-time algorithm for computing φ(n)), then this would contradict our conjecture about the hardness of factoring n. In other words, factoring n can be reduced to computing φ(n).
 computing φ(n) (conclusion: computing φ(n) ∉ P, cannot be solved in polynomial time) ⇐ factoring n (conjecture: factoring ∉ P, cannot be solved in polynomial time)
The above fact (i.e., that if factoring n is not in P, then neither is computing φ(n)) holds for arbitrary n, not just a product of two primes. However, the proofs in those cases are more sophisticated [Shoup].
Suppose we are given the following equation:
 x^y = z
There are three computational questions we could ask about the above equation:
• given x and y, compute z (this is the exponentiation operation);
• given x and z, compute y (this is the logarithm operation, since we have log_x(z) = y in an equivalent notation);
• given y and z, compute x (this is the yth root operation, since we have y√(z) = x in an equivalent notation).
We have efficient algorithms for computing all three of the above if x, y, and z are all integers or real numbers. Suppose we instead consider the following equation for some n ∈ ℕ:
 x^y ≡ z (mod n)
In other words, we can interpret the equation as a congruence of equivalence classes in ℤ/nℤ. In this case, we already know that the first operation (exponentiation) has an efficient implementation because exponentiation and modulus are both efficient operations. However, we believe that the other two operations (computation of logarithms and roots of congruence classes) are computationally difficult (no polynomial-time algorithm exists to compute solutions).
Conjecture (RSA problem): The following problem is not in P: compute m given only the following:
 n = p ⋅ q for two primes p and q in ℕ
 e ∈ ℤ/φ(n)ℤ where e ≥ 3
 c = m^e mod n for an unknown m ∈ ℤ/nℤ
Notice that the RSA problem is analogous to computing the eth root of c in ℤ/nℤ:
 e√(c) ≡ e√(m^e) ≡ m (mod n)
Note also that this can be accomplished by first finding φ(n) and then computing the inverse of e modulo φ(n), but finding φ(n) is as difficult as factoring n, which we assume is not in P. Is there another way to compute m? We do not know, but we assume that there is no faster (i.e., polynomial-time) way to do so.
Conjecture (discrete logarithm assumption): The following problem is not in P: compute e given only the following:
 n ∈ ℕ
 m ∈ {1,...,n − 1}
 c = m^e mod n for an unknown e ∈ ℕ
Notice that this is analogous to computing the logarithm of a value c in ℤ/nℤ with respect to a known base m:
 log_m(c) = log_m(m^e) = e
Note that the RSA problem requires that e ≥ 3, so it does not include the problem of computing square roots. The last intractable problem we will consider in this section is the problem of computing square roots of congruence classes within ℤ/nℤ for some n ℕ. We examine this problem separately from the RSA problem because it is possible to reduce factoring directly to the problem of computing square roots (while there is currently no known deterministic polynomial-time reduction from the factoring problem to the RSA problem).
Before we formally define the problem of computing square roots in ℤ/nℤ, we must first introduce some concepts and facts. This is because the problem of computing square roots in ℤ/nℤ is different from the problem of computing square roots of integers in ℕ, and this difference is likely what makes it computationally more difficult.
Definition: Given some n ∈ ℕ and some y ∈ ℤ/nℤ, we say that y is a quadratic residue in ℤ/nℤ if there exists x ∈ ℤ/nℤ such that x^2 ≡ y.
Example: Let us find the quadratic residues in ℤ/7ℤ:
 0^2 ≡ 0 (mod 7)
 1^2 ≡ 1 (mod 7)
 2^2 ≡ 4 (mod 7)
 3^2 ≡ 2 (mod 7)
 4^2 ≡ 2 (mod 7)
 5^2 ≡ 4 (mod 7)
 6^2 ≡ 1 (mod 7)
The quadratic residues in ℤ/7ℤ are 0, 1, 2, and 4. Notice that 3, 5, and 6 are not quadratic residues in ℤ/7ℤ. Thus, the equations x^2 ≡ 3, x^2 ≡ 5, and x^2 ≡ 6 have no solution in ℤ/7ℤ.
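The table of squares above can be reproduced by brute force in Python (the helper `quadratic_residues` is our own):

```python
def quadratic_residues(n):
    # The set of values x**2 mod n as x ranges over Z/nZ.
    return sorted({(x * x) % n for x in range(n)})

assert quadratic_residues(7) == [0, 1, 2, 4]
```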
Example: Consider the set of congruence classes ℤ/5ℤ. We have that:
 0^2 ≡ 0 (mod 5)
 1^2 ≡ 1 (mod 5)
 2^2 ≡ 4 (mod 5)
 3^2 ≡ 4 (mod 5)
 4^2 ≡ 1 (mod 5)
Notice that 2 and 3 are not quadratic residues in ℤ/5ℤ. Thus, neither x^2 ≡ 2 nor x^2 ≡ 3 have solutions in ℤ/5ℤ.
Fact: Given some n ∈ ℕ and some y ∈ ℤ/nℤ, if y and n are coprime and y is a non-zero quadratic residue in ℤ/nℤ then there exist at least two a, b ∈ ℤ/nℤ such that a ≠ b, a^2 ≡ y, and b^2 ≡ y.
Note that this is analogous to square roots in ℤ (since √z ∈ ℤ and −√z ∈ ℤ are both square roots of z ∈ ℤ if they exist).
We can prove this fact in the following way: suppose that y is a quadratic residue. Then there exists at least one x ∈ ℤ/nℤ such that:
 x^2 mod n = y
But this means that (n − x) ∈ ℤ/nℤ is such that:
 ((n − x)^2) mod n = (n^2 − (2 ⋅ n ⋅ x) + x^2) mod n = x^2 mod n = y
Thus, x and (n − x) are both roots of y.
Example: It is the case that 4 ∈ ℤ/5ℤ is a quadratic residue in ℤ/5ℤ, with two roots 2 and 3:
 2^2 mod 5 = 4
 3^2 mod 5 = 9 mod 5 = 4
Example: Consider 0 ∈ ℤ/3ℤ. We have that:
 0^2 ≡ 0 (mod 3)
 1^2 ≡ 1 (mod 3)
 2^2 ≡ 1 (mod 3)
Thus, x^2 ≡ 0 has exactly one solution in ℤ/3ℤ.
Fact: Let p ∈ ℕ be a prime such that p mod 4 = 3, and suppose that y ∈ ℤ/pℤ. Then y has either 0, 1, or 2 roots in ℤ/pℤ.
Example: Suppose we want to solve the following equation for x ∈ ℤ/7ℤ (note that 2 is a quadratic residue in ℤ/7ℤ):
 x^2 ≡ 2 (mod 7)
Suppose we start by squaring both sides:
 x^4 ≡ 2^2 (mod 7)
We can then use Euler's theorem to add φ(7) to the exponent on the right-hand side:
 x^4 ≡ 2^2 ⋅ 1 ≡ 2^2 ⋅ 2^φ(7) ≡ 2^(2 + φ(7)) (mod 7)
Since 7 is prime, φ(7) must be even, so 2 + φ(7) is also even. Thus, we can divide the exponent by 2 on both sides:
 x^2 ≡ 2^((2 + φ(7))/2) (mod 7)
Furthermore, since 7 ≡ 3 (mod 4), we know that 2 + φ(7) is a multiple of 4. Thus, we can actually divide both exponents by 4:
 x ≡ 2^((2 + φ(7))/4) (mod 7)
Thus, we have found x as a power of the original quadratic residue 2.
Fact: Let p ∈ ℕ be a prime such that p mod 4 = 3, and suppose that y ∈ ℤ/pℤ is a quadratic residue with two roots in ℤ/pℤ. Then we can compute the roots using the following formula:
 x ≡ ± y^((p+1)/4) (mod p)
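For p ≡ 3 (mod 4) this formula is one line of Python; the helper below (our own sketch, named `sqrt_mod`) returns both roots when y is a quadratic residue and None otherwise:

```python
def sqrt_mod(y, p):
    # Assumes p is a prime with p % 4 == 3.
    x = pow(y, (p + 1) // 4, p)
    if (x * x) % p == y % p:
        return (x, (-x) % p)
    return None   # y is not a quadratic residue in Z/pZ

assert sqrt_mod(2, 7) == (4, 3)    # 4**2 ≡ 3**2 ≡ 2 (mod 7)
assert sqrt_mod(3, 7) is None      # 3 is not a quadratic residue mod 7
```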
In fact, if the modulus n is not prime, there may exist more than two roots of a value in ℤ/nℤ.
Example: It is the case that 1, -1, 6, -6 ∈ ℤ/35ℤ are all square roots of 1 ∈ ℤ/35ℤ:
 1^2 mod 35 = 1
 (-1)^2 mod 35 = 34^2 mod 35 = 1156 mod 35 = ((33 ⋅ 35) + 1) mod 35 = 1
 6^2 mod 35 = 36 mod 35 = 1
 (-6)^2 mod 35 = 29^2 mod 35 = 841 mod 35 = ((24 ⋅ 35) + 1) mod 35 = 1
Example: Suppose we are given an instance of the congruent squares problem where y = 2 and n = 15. We want to find x ∈ ℤ/15ℤ such that x ≢ ± y but x^2 ≡ y^2 ≡ 2^2 ≡ 4. Notice that we have that:
 y ≡ 2 (mod 3)
 y^2 ≡ 2^2 ≡ 1 (mod 3)
 (3-y)^2 ≡ 1^2 ≡ 1 (mod 3)
Notice also that we have that:
 y ≡ 2 (mod 5)
 y^2 ≡ 2^2 ≡ 4 (mod 5)
 (5-y)^2 ≡ 3^2 ≡ 4 (mod 5)
Thus, the square roots of 4 in ℤ/3ℤ are 1 and 2, and the square roots of 4 in ℤ/5ℤ are 2 and 3. We can then apply the Chinese remainder theorem to every pair of combinations:
 r1 ≡ 1 (mod 3) and r1 ≡ 2 (mod 5), so r1 ≡ 7 (mod 15)
 r2 ≡ 2 (mod 3) and r2 ≡ 2 (mod 5), so r2 ≡ 2 (mod 15)
 r3 ≡ 1 (mod 3) and r3 ≡ 3 (mod 5), so r3 ≡ 13 (mod 15)
 r4 ≡ 2 (mod 3) and r4 ≡ 3 (mod 5), so r4 ≡ 8 (mod 15)
Thus, x = 8 and x = 7 are solutions to x ≢ ± 2 and x^2 ≡ 4.
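A nontrivial pair of congruent squares immediately reveals a factor of n via gcd; a sketch with the values from this example:

```python
from math import gcd

n, x, y = 15, 8, 2
assert (x * x) % n == (y * y) % n          # x**2 ≡ y**2 (mod n)
assert x % n not in (y % n, (-y) % n)      # but x is not ±y (mod n)
# n divides (x - y) * (x + y) without dividing either factor,
# so gcd(x - y, n) is a nontrivial factor of n.
p = gcd(x - y, n)
assert 1 < p < n and n % p == 0
```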
Fact (Hensel's lemma): Let p ∈ ℕ be a prime number greater than 2, and let k ∈ ℕ be any positive integer (i.e., k ≥ 1). Suppose that x and p are coprime, and that x ∈ ℤ/p^kℤ can be squared to obtain some quadratic residue r ∈ ℤ/p^kℤ:
 x^2 ≡ r (mod p^k)
We can compute y ∈ ℤ/p^(k+1)ℤ such that:
 y^2 ≡ r (mod p^(k+1))
We compute it as follows. First, we compute c using the following formula:
 c ≡ x^(-1) ⋅ 2^(-1) ⋅ ((r - x^2) / p^k) (mod p)
Then, we have that:
 y = x + c ⋅ p^k
To see why Hensel's lemma is true, suppose that we have that:
 x2
 r (mod pk)
Notice that if it is possible to "lift" x to a root of r in ℤ/pk+1ℤ, the only possibility is that this new root y has an additional multiple of pk. Thus, it must be that for some integer multiple c, we have:
 y
=
 x + c ⋅ pk
We can then substitute:
 y2
 r (mod pk+1)
 (x + (c ⋅ pk))2
 r (mod pk+1)
But we can simplify the above equation:
 (x + (c ⋅ p^k))² ≡ r (mod p^(k+1))
 x² + (2 ⋅ x ⋅ c ⋅ p^k) + (c² ⋅ p^(2k)) ≡ r (mod p^(k+1))
But notice that since 2k ≥ k + 1 (because k ≥ 1), the third term on the left-hand side in the above equation is equivalent to the congruence class 0 + p^(k+1)ℤ:
 c² ⋅ p^(2k) ≡ 0 (mod p^(k+1))
Thus, we have:
 x² + (2 ⋅ x ⋅ c ⋅ p^k) ≡ r (mod p^(k+1))
 (x² − r) + (2 ⋅ x ⋅ c ⋅ p^k) ≡ 0 (mod p^(k+1))
The above can be rewritten using the divisibility predicate as:
 p^(k+1) | (x² − r) + (2 ⋅ x ⋅ c ⋅ p^k)
Since x² ≡ r (mod p^k), the quantity x² − r is divisible by p^k, so we can divide the above relationship by p^k to obtain:
 p | ((x² − r)/p^k) + (2 ⋅ x ⋅ c)
We can then rewrite the above as an equation of congruence classes:
 ((x² − r)/p^k) + (2 ⋅ x ⋅ c) ≡ 0 (mod p)
 2 ⋅ x ⋅ c ≡ − (x² − r)/p^k (mod p)
 c ≡ x⁻¹ ⋅ 2⁻¹ ⋅ (− (x² − r)/p^k) (mod p)
 c ≡ x⁻¹ ⋅ 2⁻¹ ⋅ ((r − x²)/p^k) (mod p)
Thus, we have derived the formula in Hensel's lemma.
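The lifting step can be sketched as a small function; `hensel_lift` is our own name for it, and it assumes an odd prime p and an x coprime with p:

```python
def hensel_lift(x, r, p, k):
    """Lift a root x of x² ≡ r (mod p^k) to a root modulo p^(k+1),
    using the formula c ≡ x⁻¹ ⋅ 2⁻¹ ⋅ ((r − x²)/p^k) (mod p)."""
    # (2x)⁻¹ mod p combines the two inverses x⁻¹ ⋅ 2⁻¹; the division by
    # p^k is exact because x² ≡ r (mod p^k).
    c = (pow(2 * x, -1, p) * ((r - x * x) // p**k)) % p
    return (x + c * p**k) % p**(k + 1)

# Lifting 4, a square root of 2 modulo 7, to a square root of 2 modulo 49.
print(hensel_lift(4, 2, 7, 1))  # → 39, and 39² mod 49 == 2
```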
Example: To better understand Hensel's lemma, we can derive the lemma for a particular example. Let us start with the following equation:
 4² ≡ 2 (mod 7)
Suppose we want to find y ∈ ℤ/(7²)ℤ such that:
 y² ≡ 2 (mod 49)
We know that the difference between 4 and y must be a multiple of 7, so we write:
 y = 4 + 7 ⋅ c
Then we proceed:
 y² ≡ 2 (mod 49)
 (4 + 7 ⋅ c)² ≡ 2 (mod 49)
 4² + (2 ⋅ 7 ⋅ c ⋅ 4) + (49 ⋅ c²) ≡ 2 (mod 49)
 4² + (2 ⋅ 7 ⋅ c ⋅ 4) ≡ 2 (mod 49)
We simplify further to compute c:
 (4² − 2) + (2 ⋅ 7 ⋅ c ⋅ 4) ≡ 0 (mod 49)
 14 + (2 ⋅ 7 ⋅ c ⋅ 4) ≡ 0 (mod 49)
The above can be rewritten using the divisibility predicate as:
 49 | 14 + (2 ⋅ 7 ⋅ c ⋅ 4)
 7 | 2 + (2 ⋅ c ⋅ 4)
We can again rewrite the above as an equation of congruence classes:
 2 + (2 ⋅ c ⋅ 4) ≡ 0 (mod 7)
 2 ⋅ c ⋅ 4 ≡ − 2 (mod 7)
 2 ⋅ c ⋅ 4 ≡ 5 (mod 7)
 c ≡ 2⁻¹ ⋅ 4⁻¹ ⋅ 5 (mod 7)
 c ≡ 4 ⋅ 2 ⋅ 5 (mod 7)
 c ≡ 5 (mod 7)
Thus, we have:
 y ≡ 4 + 7 ⋅ 5 (mod 49)
 y ≡ 39 (mod 49)
Since 49 − 39 = 10 (that is, 39 ≡ − 10 (mod 49)), we have:
 y ≡ ± 10 (mod 49)
 y² ≡ 2 (mod 49)
Example: We want to find both solutions y ∈ ℤ/121ℤ to:
 y² ≡ 5 (mod 121)
Since 121 = 11², we have p = 11, k = 1, and r = 5. We begin by finding x ∈ ℤ/11ℤ such that:
 x² ≡ 5 (mod 11)
Since 11 ∈ 3 + 4ℤ, we can use an explicit formula:
 x ≡ ± 5^((11+1)/4) (mod 11)
 ≡ ± 5³
 ≡ ± 3 ⋅ 5
 ≡ ± 4
Thus, it is sufficient to lift the solution 4 ∈ ℤ/11ℤ to a solution in ℤ/121ℤ using Hensel's lemma. We compute c:
 c ≡ x⁻¹ ⋅ 2⁻¹ ⋅ ((r − x²)/p^k) (mod p)
 ≡ 4⁻¹ ⋅ 2⁻¹ ⋅ ((5 − 16)/11) (mod 11)
 ≡ 4⁻¹ ⋅ 2⁻¹ ⋅ (− 1)
 ≡ 3 ⋅ 6 ⋅ (− 1)
 ≡ − 18
 ≡ 4
Thus, we have:
 y ≡ 4 + 4 ⋅ 11 (mod 121)
 ≡ 4 + 44
 ≡ 48
Thus, we have the two solutions:
 y ≡ ± 48 (mod 121)
Example: The distance traveled by an object that is at rest at time t = 0 and then immediately begins accelerating at 4 meters/second² (i.e., the speed of the object increases by 4 meters/second every second) can be defined in terms of the time t in seconds as:
 d = 1/2 ⋅ 4 ⋅ t²
We might expect an object to behave this way if it is being pulled by gravity, or if it is using a stable propulsion engine (e.g., a rocket).
Suppose we are using a range ambiguity resolution technique to track the distance the object has traveled. If, at a particular moment, we know that the distance the object has traveled is in the congruence class 10 + 11ℤ, what can we say about the amount of time t that has elapsed since the object started moving?
Since the distance is in 10 + 11ℤ, we can say:
 d ≡ 10 (mod 11)
 1/2 ⋅ 4 ⋅ t² ≡ 10
 2 ⋅ t² ≡ 10
We know that 2⁻¹ ≡ 6 (mod 11), so we multiply both sides of the above equation by 6 to obtain:
 t² ≡ 60 (mod 11)
 t² ≡ 5 (mod 11)
Thus, we can compute:
 t ≡ 5^((11+1)/4) (mod 11)
 ≡ 5³
 ≡ 3 ⋅ 5
 ≡ 4 (mod 11)
Thus, we can say that the amount of time that has elapsed is in 4 + 11ℤ.
Example: Solve the following system of equations for x ∈ ℤ/21ℤ (find all solutions):
 x² ≡ 1 (mod 3)
 x² ≡ 1 (mod 7)
We know that there is exactly one solution y ∈ ℤ/21ℤ to the following system:
 y ≡ 1 (mod 3)
 y ≡ 1 (mod 7)
The solution is simply y = 1, and since there is only one solution, this is the only possibility. Thus, we are looking for all the solutions to the following equation:
 x² ≡ 1 (mod 21)
Since 3 mod 4 = 7 mod 4 = 3, we know that there are two solutions to each of the following equations:
 x² ≡ 1 (mod 3)
 x² ≡ 1 (mod 7)
The solutions are as follows:
 x ≡ 1 (mod 3)
 x ≡ 2 (mod 3)
 x ≡ 1 (mod 7)
 x ≡ 6 (mod 7)
Taking every pair of combinations with one solution from ℤ/3ℤ and one solution from ℤ/7ℤ, we get:
 x1 ≡ 1 (mod 3) and x1 ≡ 1 (mod 7), so x1 ≡ 1 (mod 21)
 x2 ≡ 2 (mod 3) and x2 ≡ 1 (mod 7), so x2 ≡ 8 (mod 21)
 x3 ≡ 1 (mod 3) and x3 ≡ 6 (mod 7), so x3 ≡ 13 (mod 21)
 x4 ≡ 2 (mod 3) and x4 ≡ 6 (mod 7), so x4 ≡ 20 (mod 21)
Example: How many solutions x ∈ ℤ/(33 ⋅ 35)ℤ does the following system of equations have?
 x² ≡ 4 (mod 33)
 x² ≡ 4 (mod 35)
We know that each of the following equations has two solutions (2 and -2 in the respective sets); notice that 4 mod 3 = 1:
 x² ≡ 1 (mod 3)
 x² ≡ 4 (mod 11)
 x² ≡ 4 (mod 5)
 x² ≡ 4 (mod 7)
Thus, there are two possible choices for each of the variables r1 ∈ {-2,2}, r2 ∈ {-2,2}, r3 ∈ {-2,2}, r4 ∈ {-2,2}, so there are 2 ⋅ 2 ⋅ 2 ⋅ 2 = 2⁴ = 16 possible systems of the form:
 x ≡ r1 (mod 3)
 x ≡ r2 (mod 11)
 x ≡ r3 (mod 5)
 x ≡ r4 (mod 7)
Each system has a unique solution because the tuple (r1, r2, r3, r4) is unique, so there are 16 solutions for x in ℤ/(33 ⋅ 35)ℤ. Alternatively, we could break the problem down into two subproblems. First, we solve the following equation:
 x² ≡ 4 (mod 33)
We obtain four distinct solutions (r1, r2, r3, r4) in ℤ/33ℤ. Next, we solve the following equation:
 x² ≡ 4 (mod 35)
We then have four distinct solutions (s1, s2, s3, s4) in ℤ/35ℤ. Since gcd(33, 35) = 1, we can then take any combination of solutions ri and si and set up the system:
 x ≡ ri (mod 33)
 x ≡ si (mod 35)
There will be exactly one solution to each of the above systems. There are 4² = 16 distinct systems, so there will be 16 distinct solutions.
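The count can be confirmed by a brute-force scan over the full modulus:

```python
# Count the solutions to x² ≡ 4 over ℤ/(33 ⋅ 35)ℤ = ℤ/1155ℤ directly.
n = 33 * 35
count = sum(1 for x in range(n) if (x * x) % n == 4)
print(count)  # → 16
```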
We can summarize everything we know about computing square roots of congruence classes as follows. Suppose we want to find all solutions to the equation x² ≡ a (mod n) for some n ∈ ℕ and some a ∈ ℤ/nℤ.
• If n is a prime p, the possibilities are:
• a is not a quadratic residue in ℤ/pℤ, so there are no solutions to the equation,
• a ≡ 0, in which case x ≡ 0 is the one and only solution to the equation,
• a is a quadratic residue in ℤ/pℤ, so there are exactly two solutions to the equation, ± x ∈ ℤ/pℤ.
• If n is a prime power p^(k+1) and a is coprime with p, the possibilities are:
• a is not a quadratic residue in ℤ/(p^k)ℤ, so it is not a quadratic residue in ℤ/(p^(k+1))ℤ;
• a is a quadratic residue in ℤ/(p^k)ℤ, and both square roots of a in ℤ/(p^k)ℤ can be "lifted" to ℤ/(p^(k+1))ℤ using Hensel's lemma.
• If n is a product of two coprime numbers k and m, then there is a solution in ℤ/nℤ for every possible combination of y and z such that:
 y² ≡ a (mod k)
 z² ≡ a (mod m)
Each combination corresponds to a solution x ∈ ℤ/nℤ defined using CRT as:
 x ≡ y (mod k)
 x ≡ z (mod m)
Let us consider the problem of finding all of the square roots of a member y of ℤ/nℤ; that is, computing:
 √(y) = ± x
The main difference is that the number of square roots may be greater than 2. This problem is believed to be computationally difficult (i.e., no algorithm in P exists that can solve the problem). In fact, even finding just one additional square root is believed to be computationally difficult.
Conjecture (congruent squares): The following problem is not in P: given n = p ⋅ q for two primes p and q in ℕ and y ∈ ℤ/nℤ, find an x ∈ ℤ/nℤ such that x² ≡ y² but x ≢ ± y.
Factoring can be reduced to finding congruent squares. Suppose we want to factor n. We find x and y such that:
 x² mod n = y² mod n
 0 mod n = (x² − y²) mod n = ((x + y) ⋅ (x − y)) mod n
 n | (x + y) ⋅ (x − y)
Since n cannot divide (x + y) (because x ≢ − y, so x + y ≢ 0 (mod n)), and n cannot divide (x − y) (because x ≢ y, so x − y ≢ 0 (mod n)), but n does divide their product, it must be that n shares a non-trivial factor with each of (x + y) and (x − y). Thus, either gcd(n, x + y) or gcd(n, x − y) is a non-trivial factor of n, and a gcd can be computed efficiently.
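The reduction above can be sketched as a small function; `factor_from_squares` is our own name for it:

```python
from math import gcd

# Sketch: recover a factor of n from congruent squares x² ≡ y² (mod n)
# with x ≢ ± y, following the argument above.
def factor_from_squares(x, y, n):
    f = gcd(x + y, n)
    return f if 1 < f < n else gcd(x - y, n)

# 6² ≡ 1² ≡ 1 (mod 35) and 6 ≢ ± 1 (mod 35), so we recover a factor of 35:
print(factor_from_squares(6, 1, 35))  # → 7
```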
The following diagram summarizes the relationships between the problems that are conjectured to be intractable (i.e., not in P). Each directed edge represents that there exists a polynomial-time reduction from the source problem to the destination problem. All of the nodes in the graph are conjectured to be not in P.
 congruent squares (square roots of congruence classes) ⇕ computing φ(n) for n = p ⋅ q ⇔ factoring n = p ⋅ q; RSA problem (e-th roots of congruence classes) ⇒ factoring; discrete logarithm (logarithms of congruence classes) ⇒ factoring

### [link]  4.4. Applications of intractability

The computational intractability of certain problems in modular arithmetic makes it possible to address some practical security issues associated with implementing communication protocols. In particular, it helps address two common problems:
• parties must communicate over a public communications channel, so everything they send is visible both to their receiver and to anyone that may be eavesdropping;
• parties trying to communicate cannot physically meet to agree on shared secret information before communicating.
Protocol (hard-to-forge identification with meeting): Suppose Alice and Bob know that Alice will need to send Bob a single message at some point in the future. However, it is possible that Eve might try to impersonate Alice and send a message to Bob while pretending to be Alice.
In order to help Bob confirm that a message is truly from Alice (or to determine which message is from Alice given multiple messages), Alice and Bob meet in person and agree on a secret identifier s. When Alice decides to send a message m to Bob, she will send (m, s). Bob can then compare s to his own copy of s and confirm the message is from Alice.
Eve's only attack strategy is to try to guess s. As long as Alice and Bob choose s from a very large range of integers, the probability that Eve can guess s correctly is small.
A major flaw in the above identification protocol is that Alice and Bob must first meet in person to agree on a secret. Can Alice and Bob agree on a secret without meeting in person?
Protocol (hard-to-forge identification without meeting): Suppose Alice and Bob know that Alice will need to send Bob a single message at some point in the future. Alice prepares for this by doing the following:
• choose two large primes p and q at random;
• compute n = pq;
• send the public identifier n to Bob over a public/non-secure communication channel.
When Alice is ready to send her message m to Bob, Alice will send (m, (p, q)), where (p, q) is the private identifier. Bob can confirm that pq = n, at which point he will know Alice was the one who sent the message.
Conjecture (forging identification): The following problem is not in P: in the previously defined protocol, given a public identifier n, compute the private identifier (p, q).
If it were possible to quickly (i.e., in polynomial time) compute the private identifier, then it would be easy to forge the identity of a sender by recording their public identifier n. However, any such forging computation could be used as a subprocedure in a factoring algorithm; in other words, factoring n can be reduced to forging a private identifier. If forging were in P, factoring would also be in P, contradicting our conjecture. Thus, the problem of forging a private identifier must not be in P (i.e., under the factoring conjecture, there is no polynomial-time algorithm for forging an identity).
 forging private identifier for n (conclusion: cannot be solved in polynomial time, i.e., forging ∉ P) ⇐ factoring n (conjecture: cannot be solved in polynomial time, i.e., factoring ∉ P)
Note that the reduction also works in the other direction: an algorithm for factoring can be used for forging an identity. However, this proves nothing about the difficulty of forging! Just because forging can be reduced to an inefficient algorithm for factoring does not mean that there does not exist some other algorithm for forging that does not rely on factoring.
 alternative efficient algorithm ⇐ forging private identifier for n ⇒ factoring n
The above identification protocol improves over the previous protocol because it allows Alice and Bob to agree on an identifier for Alice without meeting in person. Its security relies on the fact that it is unlikely that Eve is capable of forging identifiers because her ability to forge would solve a problem that we believe is very difficult to solve.
However, the protocol still has many other flaws. For example, Eve could preempt Alice and send her own identifier n before Alice has a chance to send her identifier to Bob. A more thorough examination of such protocols is considered in computer science courses focusing explicitly on the subject of cryptography.
Protocol (Diffie-Hellman key exchange): We introduce the Diffie-Hellman key exchange protocol. This protocol is useful if two parties who cannot meet physically want to agree on a secret value that only they know.
• Public key generation (performed by one party):
1. Randomly choose a public large prime number p ∈ ℕ and an element g ∈ ℤ/pℤ.
• Private key generation (performed by both parties):
1. Party A randomly chooses a secret a ∈ ℤ/φ(p)ℤ.
2. Party B randomly chooses a secret b ∈ ℤ/φ(p)ℤ.
• Protocol:
1. Party A computes (g^a mod p) and sends this public value to party B.
2. Party B computes (g^b mod p) and sends this public value to party A.
3. Party A computes (g^b mod p)^a mod p.
4. Party B computes (g^a mod p)^b mod p.
5. Since multiplication over ℤ/φ(p)ℤ is commutative, both parties now share a secret g^(a ⋅ b) mod p.
This protocol's security only relies on the discrete logarithm assumption.
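A toy run of the protocol with illustrative small values (real deployments use far larger primes; p, g, a, and b below are our own example choices):

```python
p, g = 23, 5                 # public prime and element, example values
a, b = 6, 15                 # the two parties' secret exponents
A = pow(g, a, p)             # party A's public value g^a mod p
B = pow(g, b, p)             # party B's public value g^b mod p
secret_a = pow(B, a, p)      # party A computes (g^b)^a mod p
secret_b = pow(A, b, p)      # party B computes (g^a)^b mod p
assert secret_a == secret_b  # both obtain g^(a ⋅ b) mod p
print(secret_a)  # → 2
```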
It is not known whether the discrete logarithm problem is related to the factoring problem. Factoring can be reduced using a probabilistic approach to the discrete logarithm problem modulo pq.
Protocol (RSA protocol): We introduce the RSA public-key cryptographic protocol. This protocol is useful in many scenarios, such as the following:
• a sender wants to send the receiver a secret message over a public channel;
• a receiver wants to allow any number of senders to send him messages over a public channel, and the receiver does not yet know who the senders will be.
This protocol can also be used to prove the identity of the receiver.
• Key generation (performed by the receiver):
1. Randomly choose two secret prime numbers p ∈ ℕ and q ∈ ℕ of similar size.
2. Compute a public key value n = pq.
3. Compute the secret value φ(n) = (p-1) ⋅ (q-1).
4. Choose a public key value e ∈ {2,...,φ(n)-1} such that gcd(e, φ(n)) = 1.
5. Compute the secret private key d = e⁻¹ mod φ(n).
• Protocol (encryption and decryption): There are two participants: the sender and the receiver.
1. The sender wants to send a message m ∈ {0,...,n-1} where gcd(m, n) = 1 to the receiver.
2. The receiver reveals the public key (n,e) to the sender.
3. The sender computes the ciphertext (encrypted message) c = m^e mod n.
4. The sender sends c to the receiver.
5. The receiver can recover the original message by computing m = c^d mod n.
The above encryption-decryption process works because e ⋅ d ≡ 1 (mod φ(n)) implies that e ⋅ d = 1 + φ(n) ⋅ k for some k ∈ ℤ, and because m^φ(n) ≡ 1 (mod n) (since gcd(m, n) = 1):
 (m^e)^d mod n
= (m^(1 + φ(n) ⋅ k)) mod n
= (m ⋅ (m^φ(n))^k) mod n
= (m ⋅ 1^k) mod n
= m mod n
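The full round trip can be sketched with small illustrative primes (the values below are our own example choices, far too small for real use):

```python
# A toy RSA round trip following the protocol steps above.
p, q = 61, 53
n = p * q                 # public modulus, 3233
phi = (p - 1) * (q - 1)   # secret value φ(n) = 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # secret private key d = e⁻¹ mod φ(n)
m = 65                    # message, coprime with n
c = pow(m, e, n)          # encryption: c = m^e mod n
assert pow(c, d, n) == m  # decryption: c^d mod n recovers m
print(c, pow(c, d, n))
```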
Besides the message m, there are three pieces of secret information that an eavesdropper cannot know in order for the encryption to provide any privacy:
• p and q
• φ(n)
• d = e⁻¹ mod φ(n)
Notice that if an eavesdropper knows p and q where n = pq, the eavesdropper can easily compute φ(n) (which was supposed to be private). If the eavesdropper can compute φ(n), then they can use the extended Euclidean algorithm to compute the inverse d = e-1 of the public key value e. They can then use d to decrypt messages.
Suppose the eavesdropper only knows φ(n). Then the eavesdropper can compute d and decrypt any message. In fact, the eavesdropper can also recover p and q.
Protocol (Rabin cryptosystem): We introduce the Rabin cryptosystem protocol. It is similar to the RSA scheme, but it does not rely on the difficulty of the RSA problem.
• Key generation (performed by the receiver):
1. Randomly choose two secret prime numbers p ∈ ℕ and q ∈ ℕ of similar size.
2. Compute a public key value n = pq.
• Protocol (encryption and decryption): There are two participants: the sender and the receiver.
1. The sender wants to send a message m ∈ {0,...,n-1} to the receiver.
2. The receiver reveals the public key n to the sender.
3. The sender computes the ciphertext (encrypted message) c = m² mod n.
4. The sender sends c to the receiver.
5. The receiver can recover the original message by computing √(c) in ℤ/pℤ and ℤ/qℤ, and then finding the four solutions to the following system by using the Chinese remainder theorem:
 m ≡ √(c) (mod p)
 m ≡ √(c) (mod q)
Notice that the receiver must guess which of the square roots corresponds to the original message. Also notice that it is not a good idea to encrypt messages in the ranges {0, ..., √(n)} and {n − √(n), ..., n − 1}, because it is easy to decrypt such messages by computing the integer square root √(c) of c and then confirming that √(c)² ≡ c (mod n).
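Decryption can be sketched for the common case p ≡ q ≡ 3 (mod 4), combining the explicit square-root formula with the Chinese remainder theorem; `rabin_decrypt` is our own name, and the small primes are illustrative only:

```python
def rabin_decrypt(c, p, q):
    """Return the four candidate messages for a ciphertext c = m² mod p ⋅ q,
    assuming p ≡ q ≡ 3 (mod 4)."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)          # a square root of c modulo p
    mq = pow(c, (q + 1) // 4, q)          # a square root of c modulo q
    inv_p = pow(p, -1, q)
    candidates = set()
    for rp in (mp, (p - mp) % p):
        for rq in (mq, (q - mq) % q):
            t = (inv_p * (rq - rp)) % q   # CRT combination
            candidates.add((rp + p * t) % n)
    return sorted(candidates)

c = (20 * 20) % 77                        # sender encrypts m = 20 with n = 7 ⋅ 11
print(rabin_decrypt(c, 7, 11))            # → [13, 20, 57, 64]; one root is m
```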
The following diagram summarizes the relationships between algorithms that might break each of the protocols presented in this section, and existing problems that are believed not to be in P. Thus, our conjectures imply that all of the problems below are not in P.
 breaking Rabin encryption ⇐ congruent squares (square roots of congruence classes) ⇕ computing φ(n) for n = p ⋅ q ⇔ factoring n = p ⋅ q; finding RSA secret key ⇐ computing φ(n) for n = p ⋅ q; decrypting individual RSA messages ⇐ RSA problem (e-th roots of congruence classes) ⇒ factoring; breaking Diffie-Hellman ⇐ discrete logarithm (logarithms of congruence classes) ⇒ factoring
Fact: Suppose we have some modulus n ∈ ℕ and some a ∈ (ℤ/nℤ)*. Let r ∈ ℤ/φ(n)ℤ be the smallest positive r such that a^r ≡ 1 (mod n). Then it must be that r | φ(n).
To see why, suppose that φ(n) is not divisible by r. Then there must be some c with 0 < c < r and some integer k such that:
 φ(n) = c + k ⋅ r
But if that's true, we have:
 a^φ(n) ≡ a^(c + k ⋅ r) ≡ a^c ⋅ a^(k ⋅ r) ≡ a^c ⋅ (a^r)^k ≡ a^c
The above implies a^c ≡ a^φ(n) ≡ 1. But since 0 < c < r, this contradicts our assumption that r is the smallest such exponent. Thus, it must be that r | φ(n).
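A quick check of the fact above for the example modulus n = 15 (so φ(n) = 8); this is a brute-force sketch, not a proof:

```python
from math import gcd

# For n = 15, compute the multiplicative order of each a in (ℤ/15ℤ)* and
# confirm that every order divides φ(15) = 8.
n, phi = 15, 8
orders = {}
for a in range(1, n):
    if gcd(a, n) == 1:
        orders[a] = next(r for r in range(1, phi + 1) if pow(a, r, n) == 1)
assert all(phi % r == 0 for r in orders.values())
print(orders)
```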
Recall the algorithm we learned before for generating random numbers based on using the multiples of a congruence class. In a setting with an adversary, we could imagine the adversary may want to predict the next random number in a sequence given some partial information. If we are using the original algorithm, an adversary can do this efficiently.
Suppose the adversary knows the modulus n, and knows that some number r is the ith random number in the sequence. The adversary then knows the following equation must hold:
 a ⋅ i ≡ r (mod n)
If it happens to be the case that gcd(i, n) = 1, the adversary can then compute the original "seed" a by doing a single inversion followed by a single multiplication:
 i⁻¹ ⋅ a ⋅ i ≡ i⁻¹ ⋅ r (mod n)
 a ≡ i⁻¹ ⋅ r
This would then allow the adversary to predict any random number in the sequence.
Suppose the adversary knows the modulus n, and knows that the "seed" for a random sequence is a. Then given a random number in the sequence r, the adversary can determine which number in the sequence it must be by using the same equation as above and computing a-1 (mod n):
 a ⋅ i ≡ r (mod n)
 a⁻¹ ⋅ a ⋅ i ≡ a⁻¹ ⋅ r
 i ≡ a⁻¹ ⋅ r
We can use what we have learned about intractable problems to create an algorithm for generating random numbers that is slightly more robust against the two attacks described above.
Algorithm: The following is another possible implementation of a random number generation algorithm.
1. inputs: upper bound n ∈ ℕ, seed a ∈ (ℤ/nℤ)*, index i ∈ {2,...,φ(n) − 1}
2. return (a^i) mod n
One downside of this algorithm is that it will never produce a permutation of ℤ/nℤ. Even if n is prime and greater than 2, φ(n) must be composite (since it is even), which means that the smallest exponent i that solves the identity a^i ≡ 1 could be some proper factor of φ(n). Even if φ(n) happens to be prime, it cannot be close to n (since n cannot then also be prime, which means φ(n) ≠ n − 1). However, this algorithm does have a few benefits if an adversary is involved.
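The generator above, with a small demonstration of this downside (n = 11 and a = 3 are our own example values):

```python
# The exponentiation-based generator, and a demonstration that its outputs
# never form a permutation of ℤ/nℤ: only powers of a modulo n can appear.
def rand(n, a, i):
    return pow(a, i, n)

outputs = sorted({rand(11, 3, i) for i in range(2, 11)})
print(outputs)  # → [1, 3, 4, 5, 9]: only 5 of the 11 possible values
```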
Given some partial information, an adversary would have a more difficult time making predictions about the random sequence. Suppose the adversary knows the modulus n, and knows that some number r is the ith random number in the sequence. The adversary then knows the following equation must hold:
 a^i ≡ r (mod n)
However, in order to compute a, the adversary would now need to compute the ith root of the congruence class r modulo n. Effectively, this means the adversary could solve the RSA problem if i > 2 (or the congruent squares problem if i = 2). If we believe these problems cannot be solved efficiently, such an adversary cannot exist.
Alternatively, suppose the adversary knows the modulus n, and knows that the "seed" for a random sequence is a. Then given a random number in the sequence r, the adversary could try to determine which number in the sequence it must be by solving the following equation for i:
 a^i ≡ r (mod n)
If the adversary can do so, then the adversary can solve the discrete logarithm problem, so it is unlikely that such an adversary exists.
Example: Bob decides to create his own online currency BobCoin. Bob knows that in order for BobCoin to be successful, it needs to be possible to make more BobCoins as more and more people start using them. However, he also does not want rapid inflation to occur. Thus, Bob issues BobCoins according to the following protocol:
• every day, Bob chooses two new random primes p and q;
• Bob computes n = pq, and then discards p and q;
• Bob posts n online for everyone to see;
• at any time on that day, anyone can submit a factor f of n;
• if f is a factor of n, Bob issues that person a BobCoin, invalidates n permanently so that no one else can use it, and generates a new n.
1. Why is it okay for Bob to discard p and q?
2. Suppose that Bob always posts numbers n that have 100 digits, and it takes the fastest computer one year to factor a 100-digit number through trial and error. If Alice wants to earn a BobCoin in one day, how many computers will Alice need to run in parallel to earn a BobCoin?
3. Suppose Bob wants to issue a complimentary BobCoin coupon to a group of 100 people. However, he wants to make sure that they can use their BobCoin coupon only if at least 20 out of those 100 people agree that the coupon should be redeemed for a BobCoin. How can Bob accomplish this?

In the previous sections, we studied a specific algebraic structure, ℤ/nℤ, as well as its operations (e.g., addition, multiplication), and its properties (commutativity, associativity, and so on). There exist many other algebraic structures that share some of the properties of ℤ/nℤ. In fact, we can create a hierarchy, or even a web, of algebraic structures by picking which properties of ℤ/nℤ we keep and which we throw away.
Showing that some new algebraic structure is similar or equivalent to another, more familiar structure allows us to make inferences about that new structure based on everything we already know about the familiar structure. In computer science, the ability to compare algebraic structures using their properties is especially relevant because every time a programmer defines a new data structure and operations on that data structure, they are defining an algebraic structure. Which properties that algebraic structure possesses determines what operations can be performed on it, in what order they can be performed, how efficiently they can be performed, and how they can be broken down and reassembled.

Recall that a permutation on a set X is a bijective relation between X and X (i.e., a subset of the set product X × X). Since a permutation is a bijective map (i.e., a function), we can reason about composition of permutations (it is just the composition of functions). Thus, we can study sets of permutations as algebraic structures under the composition operation o.
Notice that for any set X of finite size n, we can relabel the elements of X to be {0,...,n-1} (that is, we can define a bijection between X and {0,...,n-1}). Thus, we can study permutations on {0,...,n-1} without loss of generality. We will adopt the following notation for permutations:
 [a1,...,an]
Where a1,...,an is some rearrangement of the integers from 0 to n-1. For example, the identity permutation on n elements would be:
 [0,1,2,3,4,5,...,n-1]
Definition: Any permutation that swaps exactly two elements is called a swap. Examples of swaps are [0,3,2,1], [1,0], and [0,6,2,3,4,5,1].
Definition: Any permutation that swaps exactly two adjacent elements is called an adjacent swap. Examples of adjacent swaps are [0,1,3,2], [1,0,2,3,4], and [0,1,3,2,4,5,6].
Definition: Define Sn to be the set of all permutations of the set {0,...,n-1}.
Example: The set of permutations of {0,1} is S2 = {[0,1], [1,0]}.
Example: The set of permutations of {0,1,2} is S3 = {[0,1,2], [0,2,1], [1,0,2], [1,2,0], [2,0,1], [2,1,0]}.
Fact: The set Sn contains n! permutations.
Suppose we want to construct a permutation [a1,...,an] using the elements in {0,...,n-1}, where we are only allowed to take each element in the set once and assign it to an unassigned entry ai. Then for the first slot, we have n possibilities; for the second, we have n-1 possibilities. For the third, we have n-2 possibilities, and so on until we have only one possibility left. Thus, the number of possible permutations we can make is:
 n! = n ⋅ (n-1) ⋅ (n-2) ⋅ ... ⋅ 2 ⋅ 1
Definition: Define the set Cn to be the set of all cyclic permutations on n elements. Any permutation that performs a circular shift on elements is a cyclic permutation (also known as a cyclic shift permutation, a circular shift permutation, or just a shift permutation). Examples of shifts are [6,7,0,1,2,3,4,5], [2,3,4,0,1], and [4,0,1,2,3].
Definition: Define the set Mn to be the set of all multiplication-induced permutations on n elements. Any permutation on n elements that corresponds to multiplication by some coprime a < n is called a multiplication-induced permutation. Examples of such permutations include [0,2,4,1,3] (corresponding to multiples 2 ⋅ i for ascending i in ℤ/5ℤ).
Example: The set of multiplication-induced permutations on 6 elements (i.e., permutations of {0,1,2,3,4,5}) is the collection of permutations of the form [a ⋅ 0 mod 6, a ⋅ 1 mod 6, a ⋅ 2 mod 6, a ⋅ 3 mod 6, a ⋅ 4 mod 6, a ⋅ 5 mod 6] for each a that is coprime with 6. Thus, a {1,5}, and so we have:
 M6 = {[0,1,2,3,4,5], [0,5,4,3,2,1]}.
Example: The set of multiplication-induced permutations on 7 elements (i.e., permutations of {0,1,2,3,4,5,6}) is:
 M7 = {[0,1,2,3,4,5,6], [0,2,4,6,1,3,5], [0,3,6,2,5,1,4], [0,4,1,5,2,6,3], [0,5,3,1,6,4,2], [0,6,5,4,3,2,1]}.
Note that |M7| = φ(7) = 6 because there are 6 possible a ∈ ℤ/7ℤ that are coprime with 7.
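The construction of Mn can be sketched directly; `mult_induced_perms` is our own name for the helper:

```python
from math import gcd

# Compute M_n as defined above: the permutation
# [a ⋅ 0 mod n, ..., a ⋅ (n-1) mod n] for each a coprime with n.
def mult_induced_perms(n):
    return [[(a * i) % n for i in range(n)]
            for a in range(1, n) if gcd(a, n) == 1]

print(mult_induced_perms(6))  # → [[0, 1, 2, 3, 4, 5], [0, 5, 4, 3, 2, 1]]
```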

### [link]  5.2. Isomorphisms: Equivalence of Algebraic Structures

An algebraic structure is a set together with a binary operator over that set. All algebraic structures are closed under their binary operation.
Definition: Let S be a set, and let ⊕ be a binary operator. Let closure(S,⊕) be the closure of S under ⊕. We can define the set closure(S,⊕) in the following way:
 closure(S, ⊕) = { x1 ⊕ x2 | x1, x2 ∈ S } ∪ { x1 ⊕ (x2 ⊕ x3) | x1, x2, x3 ∈ S } ∪ { (x1 ⊕ x2) ⊕ x3 | x1, x2, x3 ∈ S } ∪ ...
Alternatively, we could define it in the following way using recursion:
 closure0(S, ⊕) = S
 closuren(S, ⊕) = { x ⊕ y | x, y ∈ closuren-1(S, ⊕) ∪ ... ∪ closure0(S, ⊕) }
 closure(S, ⊕) = closure0(S, ⊕) ∪ closure1(S, ⊕) ∪ closure2(S, ⊕) ∪ ...
Notice that if a set S is finite, there is a natural way to algorithmically list all elements in closure(S, ⊕) by starting with the elements in S and "building up" all the elements in each of the closurei(S, ⊕) subsets.
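The iterative construction can be sketched for a finite set as a loop that adds new elements until a fixed point is reached; `closure` below is our own sketch:

```python
# Sketch of the iterative closure construction for a finite set S under a
# binary operator op: keep applying op to all pairs until nothing new appears.
def closure(S, op):
    result = set(S)
    while True:
        new = {op(x, y) for x in result for y in result} - result
        if not new:
            return result
        result |= new

# Example: the closure of {2} under addition modulo 5 is all of ℤ/5ℤ.
print(sorted(closure({2}, lambda x, y: (x + y) % 5)))  # → [0, 1, 2, 3, 4]
```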
The concept of an isomorphism between two algebraic structures captures the fact that two structures are not only the same size, but that the two structures have the same internal "structure" with respect to their respective operations. Isomorphisms are useful because they allow us to learn more about a structure by studying the structure isomorphic to it. They can also be useful because if two structures are isomorphic, we can perform computations in one structure instead of another structure (e.g., because it is more secure, more efficient, and so on) while obtaining the same final result.
Definition: Let A be an algebraic structure with operator ⊕ and let B be an algebraic structure with operator ⊗. We say that A is isomorphic to B, which we denote as (A,⊕) ≅ (B,⊗) or simply A ≅ B, if the following conditions hold:
• there exists a bijection (i.e., a bijective relation) between A and B;
• for all a, a' ∈ A and b, b' ∈ B, if a corresponds to b and a' corresponds to b', then a ⊕ a' corresponds to b ⊗ b'.
Another way to state the definition is to write it in terms of a bijective map m between A and B:
• there exists a bijective map m between A and B;
• for all a, a' ∈ A,      m(a ⊕ a') = m(a) ⊗ m(a').
In other words, an isomorphism is a bijection that preserves (or respects) the binary operations on the two sets: if any true equation involving elements from A and the operator ⊕ is transformed by replacing each element a of A with its corresponding element m(a) ∈ B and by replacing all instances of ⊕ with ⊗, the resulting equation is still true.
Example: Consider the set of permutations on two elements S2 and the set of congruence classes ℤ/2ℤ. It is true that (S2,o) ≅ (ℤ/2ℤ,+), where o is composition of permutations and where + is addition of congruence classes in ℤ/2ℤ. The following table demonstrates the bijection:
 S2      ℤ/2ℤ
 [0,1]   0
 [1,0]   1
The following table demonstrates that the bijection above respects the two operations.
 S2                      ℤ/2ℤ
 [0,1] o [0,1] = [0,1]   0 + 0 = 0
 [0,1] o [1,0] = [1,0]   0 + 1 = 1
 [1,0] o [0,1] = [1,0]   1 + 0 = 1
 [1,0] o [1,0] = [0,1]   1 + 1 = 0
Another way to demonstrate the above is to show that the "multiplication tables" (though the operation need not be multiplication) for the two operators are exactly the same (i.e., the entries in the multiplication table all correspond according to the bijection).
 + / o       0 ~ [0,1]   1 ~ [1,0]
 0 ~ [0,1]   0 ~ [0,1]   1 ~ [1,0]
 1 ~ [1,0]   1 ~ [1,0]   0 ~ [0,1]
Fact: For any positive integer n ℕ, (ℤ/nℤ,+) ≅ (Cn, o).
Example: Consider the set of cyclic permutations on three elements C3 and the set of congruence classes ℤ/3ℤ. It is true that (C3,o) ≅ (ℤ/3ℤ,+).
 C3                            ℤ/3ℤ
 [0,1,2] o [0,1,2] = [0,1,2]   0 + 0 = 0
 [0,1,2] o [1,2,0] = [1,2,0]   0 + 1 = 1
 [0,1,2] o [2,0,1] = [2,0,1]   0 + 2 = 2
 [1,2,0] o [0,1,2] = [1,2,0]   1 + 0 = 1
 [1,2,0] o [1,2,0] = [2,0,1]   1 + 1 = 2
 [1,2,0] o [2,0,1] = [0,1,2]   1 + 2 = 0
 [2,0,1] o [0,1,2] = [2,0,1]   2 + 0 = 2
 [2,0,1] o [1,2,0] = [0,1,2]   2 + 1 = 0
 [2,0,1] o [2,0,1] = [1,2,0]   2 + 2 = 1
Example: To compute the composition of two permutations [45,46,...,49,0,1,2,...,44] o [3,4,5,...,49,0,1,2], it is sufficient to recognize that [45,46,...,49,0,1,2,...,44] corresponds to 45 ℤ/50ℤ, and [3,4,5,...,49,0,1,2] corresponds to 3 ℤ/50ℤ. Thus, since 45 + 3 = 48, the result of the composition must be [48,49,0,1,2,...,47].
 45 + 3 = 48
 [45,46,...,49,0,1,2,...,44] o [3,4,5,...,49,0,1,2] = [48,49,0,1,2,...,47]
Fact: For any prime p ℕ, (ℤ/φ(p)ℤ,+) ≅ ((ℤ/pℤ)*, ⋅).
Note that |ℤ/φ(p)ℤ| = φ(p) = |(ℤ/pℤ)*|.
Example: Consider the set ℤ/2ℤ with the addition operation + modulo 2, and the set (ℤ/3ℤ)* together with the multiplication operation ⋅ modulo 3. It is true that (ℤ/2ℤ, +) ≅ ((ℤ/3ℤ)*,⋅).
 (ℤ/2ℤ, +)   ((ℤ/3ℤ)*, ⋅)
 0 + 0 = 0   1 ⋅ 1 = 1
 0 + 1 = 1   1 ⋅ 2 = 2
 1 + 0 = 1   2 ⋅ 1 = 2
 1 + 1 = 0   2 ⋅ 2 = 1
Example: Consider the set ℤ/2ℤ with the addition operation + modulo 2, and the set (ℤ/6ℤ)* together with the multiplication operation ⋅ modulo 6. It is true that (ℤ/2ℤ, +) ≅ ((ℤ/6ℤ)*,⋅). Note that (ℤ/6ℤ)* = {1,5}, because only 1 and 5 in the range {0,...,5} are coprime with 6.
 (ℤ/2ℤ, +)   ((ℤ/6ℤ)*, ⋅)
 0 + 0 = 0   1 ⋅ 1 = 1
 0 + 1 = 1   1 ⋅ 5 = 5
 1 + 0 = 1   5 ⋅ 1 = 5
 1 + 1 = 0   5 ⋅ 5 = 1
Isomorphisms need not be defined between different sets. It is possible to define an isomorphism between a set and itself that has non-trivial, interesting, and even useful characteristics.
Fact: For any n ∈ ℕ and any a ∈ ℤ/nℤ where a is coprime with n, (ℤ/nℤ, +) ≅ (ℤ/nℤ, +) under the bijection that relates x ∈ ℤ/nℤ with a ⋅ x ∈ ℤ/nℤ. This is because for any x, y ∈ ℤ/nℤ, we have:
 a ⋅ (x + y) ≡ a ⋅ x + a ⋅ y (mod n)
Fact: For any n ∈ ℕ and any e ∈ ℤ/φ(n)ℤ where e is coprime with φ(n), ((ℤ/nℤ)*, ⋅) ≅ ((ℤ/nℤ)*, ⋅) under the bijection that relates x ∈ (ℤ/nℤ)* with x^e ∈ (ℤ/nℤ)*. This is because for any x, y ∈ (ℤ/nℤ)*, we have:
 (x ⋅ y)^e ≡ x^e ⋅ y^e (mod n)
Example: Consider the set ℤ/3ℤ with the addition operation + modulo 3, and another instance of the set ℤ/3ℤ with the addition operation + modulo 3. The following is a bijection between (ℤ/3ℤ, +) and (ℤ/3ℤ, +):
| ℤ/3ℤ | ℤ/3ℤ |
|---|---|
| 0 | 0 |
| 1 | 2 |
| 2 | 1 |
Note that the above bijection corresponds to multiplication by 2 modulo 3, since 0 ⋅ 2 ≡ 0, 1 ⋅ 2 ≡ 2, and 2 ⋅ 2 ≡ 1 (mod 3). This bijection is an isomorphism (an instance of the fact above about multiplication bijections):
| ℤ/3ℤ | ℤ/3ℤ |
|---|---|
| 0 + 0 = 0 | 0 + 0 = 0 |
| 0 + 1 = 1 | 0 + 2 = 2 |
| 0 + 2 = 2 | 0 + 1 = 1 |
| 1 + 0 = 1 | 2 + 0 = 2 |
| 1 + 1 = 2 | 2 + 2 = 1 |
| 1 + 2 = 0 | 2 + 1 = 0 |
| 2 + 0 = 2 | 1 + 0 = 1 |
| 2 + 1 = 0 | 1 + 2 = 0 |
| 2 + 2 = 1 | 1 + 1 = 2 |
If we only consider algebraic structures with particular algebraic properties, we can actually show that there is only one algebraic structure of a particular size (i.e., there is only one "isomorphism class" of algebraic structures having that size).
Fact: Suppose we have an algebraic structure (A, ⊕) with two elements in which the elements in the set must have inverses, one of them must be an identity, and ⊕ is associative. Without loss of generality, let's label the two elements a and b, and let a be the label of the identity element. Because a is the identity, we must have:
 a ⊕ a = a
 a ⊕ b = b
 b ⊕ a = b
The identity is its own inverse, so a⁻¹ = a. The only question that remains is to determine what b ⊕ b must be. If we had b ⊕ b = b, we must ask what the inverse of b could be: it cannot be b itself (since then we would have b ⊕ b = a), so the only option is a. But that would mean b ⊕ a should be a (since b combined with its inverse should yield the identity element), which contradicts the equations we already derived above. So it must be that b is its own inverse:
 b ⊕ b = a
Thus, there can be only one distinct algebraic structure (in terms of its "multiplication table") having two elements; it's the algebraic structure isomorphic to (A, ⊕), as well as all the other algebraic structures isomorphic to it: (S2, o), (C2, o), (ℤ/2ℤ, +), ((ℤ/3ℤ)*, ⋅), ((ℤ/6ℤ)*, ⋅), and so on.
Example (partial homomorphic encryption supporting addition): Suppose that for some n ∈ ℕ, Alice wants to store a large number of congruence classes b1, ..., bk ∈ ℤ/nℤ in Eve's database (perhaps Alice will generate or collect these over a long period of time). Alice does not have enough memory to store the congruence classes herself, but she does not want to reveal the congruence classes to Eve.
What Alice can do before she stores anything in Eve's database is to pick some secret a ∈ ℤ/nℤ that is coprime with n. Then, every time Alice needs to store some b in Eve's database, Alice will instead send Eve the obfuscated value (a ⋅ b) mod n. Since a is coprime with n, there exists a⁻¹ ∈ ℤ/nℤ. Thus, if Alice retrieves some obfuscated data entry c from Eve's database, she can always recover the original value by computing (a⁻¹ ⋅ c) mod n, because:
 a⁻¹ ⋅ (a ⋅ b) ≡ b (mod n)
Furthermore, Alice can ask Eve to compute the sum (modulo n) of all the entries in the database (or any subset of them). Suppose that Alice has stored obfuscated versions of b1, ..., bk ∈ ℤ/nℤ in Eve's database. Then if Eve computes the sum of all the obfuscated entries stored in her database, she will get:
 a ⋅ b1 + ... + a ⋅ bk ≡ a ⋅ (b1 + ... + bk) (mod n)
Thus, if Alice asks Eve for the sum of all the obfuscated entries in the database, Alice can recover the actual sum of the original entries that she stored in the database because:
 a⁻¹ ⋅ (a ⋅ (b1 + ... + bk)) ≡ b1 + ... + bk (mod n)
In this way, Alice has avoided having to store and add all the database entries herself, while preventing Eve from finding out the actual entries or their sum.
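The additive scheme above can be sketched in a few lines of Python (the modulus and secret below are arbitrary illustrative values; `pow(a, -1, n)` computes a modular inverse and requires Python 3.8+):

```python
# Additive obfuscation sketch: Alice multiplies each entry by a secret a
# coprime with n before storing it, and Eve's sums remain recoverable.

n = 1000003            # assumed public modulus (illustrative)
a = 12345              # Alice's secret, coprime with n
a_inv = pow(a, -1, n)  # a^-1 mod n

entries = [17, 42, 99]                   # Alice's original values
stored = [(a * b) % n for b in entries]  # what Eve actually sees

# Eve computes the sum of the obfuscated entries...
eve_sum = sum(stored) % n
# ...and Alice recovers the true sum by multiplying by a^-1.
recovered = (a_inv * eve_sum) % n
assert recovered == sum(entries) % n     # 17 + 42 + 99 = 158
```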
Example (partial homomorphic encryption supporting multiplication): Suppose that for some n ∈ ℕ, Alice wants to store a large number of congruence classes b1, ..., bk ∈ (ℤ/nℤ)* in Eve's database. Alice does not have enough memory to store the congruence classes herself, but she does not want to reveal the congruence classes to Eve.
We assume that Alice knows or can easily compute φ(n), while Eve does not know it and cannot compute it (perhaps Alice generated n using a method similar to the one in the RSA encryption protocol).
What Alice can do before she stores anything in Eve's database is to pick some secret e ∈ ℤ/φ(n)ℤ that is coprime with φ(n). Since Alice knows e and φ(n), she can compute e⁻¹ (mod φ(n)) using the extended Euclidean algorithm.
Then, every time Alice needs to store some b in Eve's database, Alice will instead send Eve the encrypted value b^e mod n. If Alice retrieves some encrypted data entry c from Eve's database, she can always recover the original value by computing c^(e⁻¹) mod n, because by Euler's theorem:
 (b^e)^(e⁻¹) ≡ b^(e ⋅ e⁻¹) ≡ b (mod n)
Furthermore, Alice can ask Eve to compute the product (modulo n) of all the entries in the database (or any subset of them). Suppose that Alice has stored encrypted versions of b1, ..., bk ∈ (ℤ/nℤ)* in Eve's database. Then if Eve computes the product of all the encrypted entries stored in her database, she will get:
 b1^e ⋅ ... ⋅ bk^e ≡ (b1 ⋅ ... ⋅ bk)^e (mod n)
Thus, if Alice asks Eve for the product of all the encrypted entries in the database, Alice can recover the actual product of the original entries that she stored in the database because:
 ((b1 ⋅ ... ⋅ bk)^e)^(e⁻¹) ≡ b1 ⋅ ... ⋅ bk (mod n)
In this way, Alice has avoided having to store and multiply all the database entries herself, while preventing Eve from finding out the actual entries or their product. Furthermore, because it is believed that factoring n, computing φ(n), and solving the RSA problem are computationally difficult, it is highly unlikely that Eve can decrypt the database entries or the result.
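The multiplicative scheme can be sketched similarly; the tiny textbook-RSA parameters below are purely illustrative and offer no real security:

```python
# Multiplicative (textbook-RSA-style) sketch with small illustrative primes.

p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # coprime with phi
d = pow(e, -1, phi)       # e^-1 mod phi, via the extended Euclidean algorithm

entries = [5, 7, 11]
stored = [pow(b, e, n) for b in entries]   # encrypted values sent to Eve

# Eve multiplies the encrypted entries; Alice decrypts the product.
eve_product = 1
for c in stored:
    eve_product = (eve_product * c) % n
recovered = pow(eve_product, d, n)
assert recovered == (5 * 7 * 11) % n       # 385
```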
We know by Euler's theorem and the algebraic properties of exponents that for any n ∈ ℕ, any b ∈ (ℤ/nℤ)*, and any x, y ∈ ℤ/φ(n)ℤ, the following identity must hold:
 b^x ⋅ b^y ≡ b^(x + y) ≡ b^((x + y) mod φ(n)) (mod n)
We might naturally ask whether there might be an isomorphism between (ℤ/φ(n)ℤ, +) and ((ℤ/nℤ)*, ⋅). In fact, sometimes there is (although more often it is an isomorphism between a subset of (ℤ/nℤ)* and ℤ/kℤ for k | φ(n)).
Given the above, we might ask whether it might be possible to create a homomorphic encryption protocol using an isomorphism of the form (ℤ/φ(n)ℤ, +) ≅ ((ℤ/nℤ)*, ⋅) in which Alice can encrypt her data x and y by computing b^x (mod n) and b^y (mod n). This should be secure because it is believed that no efficient algorithms for computing discrete logarithms exist. Then, in order to have Eve compute a sum of the data values x and y, Alice can ask Eve to compute the product b^x ⋅ b^y on her end, which is equivalent to b^(x + y).
However, there is a flaw in this protocol: Alice has no way to retrieve x + y from b^(x + y), because that would also require computing a discrete logarithm. Thus, an isomorphism of the form (ℤ/φ(n)ℤ, +) ≅ ((ℤ/nℤ)*, ⋅) would not necessarily give us a practical homomorphic encryption protocol.
Fact: Given a set of possible data values ℤ/nℤ (e.g., integers within a certain range), any compression algorithm for elements in ℤ/nℤ must be a bijection and a permutation (since it must be invertible in order for decompression to be possible). As a result, it must necessarily expand the representation size of some elements.
Example: Suppose that we are working with elements in ℤ/11ℤ = {0,1,2,3,4,5,6,7,8,9,10}. Suppose we define an algorithm that compresses 10 ∈ ℤ/11ℤ into an element of ℤ/11ℤ with a smaller representation size. One example of such an element is 1, since:
 10 ⋅ 10 ≡ 1 (mod 11)
Thus, one possible implementation of a compression algorithm is a function that takes any x ∈ ℤ/11ℤ and returns (10 ⋅ x) mod 11. Since 10 has an inverse in ℤ/11ℤ, this function is invertible, so decompression is possible (simply multiply by 10 again). However, this will necessarily expand the representation of at least one value, namely 1 ∈ ℤ/11ℤ:
 10 ⋅ 1 ≡ 10 (mod 11)
Note that this cannot be avoided because multiplying all the elements in ℤ/11ℤ by 10 amounts to a permutation, so at least one compressed element must be 10.
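This compression permutation is small enough to check exhaustively; a quick illustrative sketch:

```python
# Checks that multiplying by 10 modulo 11 is an invertible permutation of
# Z/11Z that shrinks 10 but necessarily expands 1.

compress = lambda x: (10 * x) % 11
assert sorted(compress(x) for x in range(11)) == list(range(11))  # a permutation
assert compress(10) == 1   # 10 compresses to a shorter representation
assert compress(1) == 10   # ...at the cost of expanding 1
assert all(compress(compress(x)) == x for x in range(11))  # self-inverse
```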

### [link]  5.3. Generators of Algebraic Structures

Because an algebraic structure (A,⊕) often consists of a set of objects that can be "built up" using the binary operator ⊕ from a smaller, possibly finite, collection of generators GA, it is often easier to reason about an algebraic structure by first reasoning about its generators, and then applying structural induction.
Fact: Let W be the set of swap permutations on n elements. Then W is a set of generators for the set of permutations Sn, which can be defined as:
 Sn = closure(W, o)
Fact: Let A be the set of adjacent swap permutations on n elements. Then A is a set of generators for the set of permutations Sn, which can be defined as:
 Sn = closure(A, o)
Fact: The set ℤ/nℤ has a single generator 1 ∈ ℤ/nℤ with respect to addition + modulo n:
 ℤ/nℤ = closure({1}, +)
Fact: If a and n are coprime, then a is a generator for ℤ/nℤ with respect to addition + modulo n:
 ℤ/nℤ = closure({a}, +)
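A closure of a generating set under addition modulo n can be computed directly; the following sketch (the function name `closure_add` is ours) confirms that coprime elements generate all of ℤ/nℤ:

```python
# Computes the closure of a set of generators under addition modulo n.

def closure_add(gens, n):
    """Close a set of generators under addition modulo n."""
    result = set(gens)
    frontier = set(gens)
    while frontier:
        new = {(x + g) % n for x in result for g in gens} - result
        result |= new
        frontier = new
    return result

assert closure_add({1}, 6) == set(range(6))   # 1 generates Z/6Z
assert closure_add({5}, 6) == set(range(6))   # gcd(5, 6) = 1, so 5 does too
assert closure_add({4}, 6) == {0, 2, 4}       # gcd(4, 6) = 2
```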
Fact: Suppose we have two algebraic structures A and B where for some operator ⊕:
 A = closure({a}, ⊕)
 B = closure({b}, ⊕)
If the generator a can be expressed in terms of b, i.e., a = b ⊕ ... ⊕ b, then it must be that:
 A ⊆ B
Furthermore, if in addition to the above, the generator b can be expressed in terms of a, i.e., b = a ⊕ ... ⊕ a, then it must be that:
 B ⊆ A
This would then imply (by basic set theory):
 A = B

### [link]  5.4. Isomorphisms and Linear Equations of Congruence Classes

Fact: Let n be a positive integer. Then if + represents addition modulo n, we have:
 ℤ/nℤ = closure({1}, +)
In other words, 1 is a generator for ℤ/nℤ with respect to +.
Fact: Let a and n be coprime positive integers. Then if + represents addition modulo n, we have:
 ℤ/nℤ = closure({a}, +)
In other words, a can be a single generator for ℤ/nℤ with respect to +. This is equivalent to a fact we have already seen.
Fact: Let a and n be any two positive integers. Then if + represents addition modulo n, we have:
 ℤ/(n/gcd(n,a))ℤ ≅ closure({a}, +) = closure({gcd(n,a)}, +)
Example: Consider n = 6 and a = 4. Then we have g = gcd(4,6) = 2. We have:
 closure({4}, +) = {4 ⋅ 0, 4 ⋅ 1, 4 ⋅ 2} = {0, 4, 2} = closure({2}, +)
Note that:
 4 ≡ 2 + 2 (mod 6)
 2 ≡ 4 + 4 (mod 6)
Thus, 2 can be expressed using the generator 4, and 4 can be expressed using the generator 2.
Fact (linear congruence theorem): Suppose that for a positive integer n and two congruence classes a ∈ ℤ/nℤ and b ∈ ℤ/nℤ where g = gcd(a, n), we are given the following equation:
 a ⋅ x ≡ b (mod n)
If a and n are not coprime, we cannot solve the equation simply by multiplying both sides by a⁻¹ (mod n), since that inverse does not exist. Furthermore, if b ∉ closure({a}, +), we know the equation cannot be solved. Since closure({a}, +) = closure({g}, +), the equation can only be solved if b ∈ closure({g}, +). In other words, the equation can only be solved if g | b.
Note that if g = gcd(a, n) and b ∈ closure({g}, +), we can write:
 n = n' ⋅ g
 a = a' ⋅ g
 b = b' ⋅ g
so the equation becomes:
 (a' ⋅ g) ⋅ x ≡ (b' ⋅ g) (mod (n' ⋅ g))
We can then rewrite the above as an equation over the integers, for some k ∈ ℤ:
 (a' ⋅ g) ⋅ x = (b' ⋅ g) + k ⋅ (n' ⋅ g)
We can divide both sides of the above equation by g:
 a' ⋅ x = b' + k ⋅ n'
We can convert the above equation back into an equation of congruence classes:
 a' ⋅ x ≡ b' (mod n')
At this point, a' and n' are coprime, so we can compute a'⁻¹ (mod n') and multiply both sides by it to find our solution x:
 a'⁻¹ ⋅ a' ⋅ x ≡ a'⁻¹ ⋅ b' (mod n')
 x ≡ a'⁻¹ ⋅ b' (mod n')
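The derivation above translates directly into a small solver; this is an illustrative sketch (the function name is ours), not code from the text:

```python
# Linear congruence theorem as code: solve a*x ≡ b (mod n) by reducing
# to the coprime case a', n'.

from math import gcd

def solve_linear_congruence(a, b, n):
    """Return (x, n') with a*x ≡ b (mod n) and x unique modulo n' = n/g,
    or None when g = gcd(a, n) does not divide b."""
    g = gcd(a, n)
    if b % g != 0:
        return None                     # no solution exists
    a_, b_, n_ = a // g, b // g, n // g
    x = (pow(a_, -1, n_) * b_) % n_     # a' is coprime with n' (Python 3.8+)
    return x, n_

assert solve_linear_congruence(2, 3, 8) is None      # gcd(2,8) = 2 does not divide 3
assert solve_linear_congruence(9, 6, 15) == (4, 5)   # 9*4 = 36 ≡ 6 (mod 15)
assert solve_linear_congruence(2, 4, 6) == (2, 3)    # x ≡ 2 (mod 3)
```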
Example: Solve the following equation for x ℤ/8ℤ, or explain why no solution exists:
 2 ⋅ x ≡ 3 (mod 8)
Example: Solve the following equation for x ℤ/24ℤ, or explain why no solution exists:
 16 ⋅ x ≡ 7 (mod 24)
Example: Solve the following equation for x ℤ/15ℤ, or explain why no solution exists:
 9 ⋅ x ≡ 6 (mod 15)
Example: Suppose we want to find all the solutions in ℤ/6ℤ to the following equation:
 2 ⋅ x ≡ 4 (mod 6)
Using the linear congruence theorem, we can find the unique solution modulo 3:
 x ≡ 2 (mod 3)
The solutions modulo 6 will be those congruence classes in ℤ/6ℤ whose integer representatives are members of 2 + 3ℤ = {..., 2, 5, 8, 11, 14, ...}. These are 2 + 6ℤ and 5 + 6ℤ, since 2 ≡ 2 (mod 3) and 5 ≡ 2 (mod 3). Note that:
 2 + 3ℤ = (2 + 6ℤ) ∪ (5 + 6ℤ)
 {..., 2, 5, 8, 11, 14, 17, 20, ...} = {..., 2, 8, 14, 20, ...} ∪ {..., 5, 11, 17, ...}

### [link]  5.5. Isomorphisms and the Chinese Remainder Theorem

Fact (Chinese remainder theorem isomorphism): Let n and m be coprime positive integers. Let ℤ/nℤ × ℤ/mℤ be the set product of ℤ/nℤ and ℤ/mℤ, and let ⊕ be an operation on ℤ/nℤ × ℤ/mℤ defined as follows:
 (a, b) ⊕ (c, d) = ((a + c) mod n, (b + d) mod m)
Then it is true that (ℤ/nℤ × ℤ/mℤ, ⊕) ≅ (ℤ/(mn)ℤ, +). The bijective relationship in this isomorphism is as follows:
 (a mod n, b mod m) ∈ ℤ/nℤ × ℤ/mℤ   corresponds to   (a ⋅ (m ⋅ m⁻¹) + b ⋅ (n ⋅ n⁻¹)) mod (m ⋅ n) ∈ ℤ/(m ⋅ n)ℤ
where m⁻¹ is the inverse of m modulo n, and n⁻¹ is the inverse of n modulo m.
In other words, given (a, b), we can map it to a ⋅ (m ⋅ m⁻¹) + b ⋅ (n ⋅ n⁻¹), and given some c = a ⋅ (m ⋅ m⁻¹) + b ⋅ (n ⋅ n⁻¹) from ℤ/(n ⋅ m)ℤ, we can map it back to ℤ/nℤ × ℤ/mℤ using (c mod n, c mod m).
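The bijection and its inverse can be sketched as follows (n = 5 and m = 7 are arbitrary coprime choices; `combine` and `split` are our names):

```python
# CRT bijection for coprime n and m: map (a, b) to a single residue
# modulo n*m and back.

n, m = 5, 7
m_inv = pow(m, -1, n)   # inverse of m modulo n
n_inv = pow(n, -1, m)   # inverse of n modulo m

def combine(a, b):
    """Map (a mod n, b mod m) to the corresponding element of Z/(n*m)Z."""
    return (a * (m * m_inv) + b * (n * n_inv)) % (n * m)

def split(c):
    """Map c in Z/(n*m)Z back to (c mod n, c mod m)."""
    return (c % n, c % m)

# Round-trip check over all pairs: the map is a bijection.
assert all(split(combine(a, b)) == (a, b) for a in range(n) for b in range(m))
# It is also an isomorphism: addition of pairs matches addition mod n*m.
assert combine((2 + 4) % n, (3 + 6) % m) == (combine(2, 3) + combine(4, 6)) % (n * m)
```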
Example (partial homomorphic encryption with validation): Suppose that for some p ∈ ℕ, Alice wants to store a large number of congruence classes b1, ..., bk ∈ (ℤ/pℤ)* in Eve's database. Alice does not have enough memory to store the congruence classes herself, but she does not want to reveal the congruence classes to Eve. Alice also wants Eve to compute the product of the congruence classes for her as in this example. However, because Alice does not actually have the numbers Eve is multiplying, Alice has no way to know that the product Eve returns to her corresponds to the actual product; perhaps Eve is saving money and cheating by returning a random number to Alice.
To address this, Alice first chooses a new prime q (distinct from p). She then computes n = p ⋅ q and φ(n) = (p − 1) ⋅ (q − 1), finds e ∈ (ℤ/φ(n)ℤ)*, and computes d ≡ e⁻¹ (mod φ(n)). This will allow Alice to encrypt things before storing them in Eve's database. However, at this point Alice will not simply encrypt her congruence classes b1, ..., bk.
Instead, Alice will first choose a single random value r ∈ ℤ/qℤ. Then, Alice will map each of her pairs (bi, r) ∈ ℤ/pℤ × ℤ/qℤ via the CRT isomorphism to some value ci ∈ ℤ/nℤ. Alice will then encrypt each value ci by computing ci^e (mod n), and will submit these values to Eve's database. Now, Alice can ask Eve to compute the following product:
 c1^e ⋅ ... ⋅ ck^e ≡ (c1 ⋅ ... ⋅ ck)^e (mod n)
If Eve returns the product (c1 ⋅ ... ⋅ ck)^e to Alice, Alice can decrypt it by computing:
 ((c1 ⋅ ... ⋅ ck)^e)^d ≡ c1 ⋅ ... ⋅ ck (mod n)
Next, Alice can compute (c1 ⋅ ... ⋅ ck) mod p to retrieve the actual product in ℤ/pℤ. However, Alice also wants to make sure that Eve actually multiplied all the entries. Alice can do so by computing:
 (c1 ⋅ ... ⋅ ck) ≡ r ⋅ ... ⋅ r ≡ r^k (mod q)
Alice can quickly compute r^k (mod q) and compare it to (c1 ⋅ ... ⋅ ck) mod q. This gives Alice some confidence (but not total confidence) that Eve actually computed the product, because it ensures that Eve really did multiply k distinct values provided by Alice.
What is one way that Eve can still cheat and save money under these circumstances (if she knows that Alice is using this validation method)? Is there any way Alice can counter this (hint: what if Alice chooses different values r for each entry)?
Fact: Let n and m be positive integers, and let g = gcd(n, m). Let ℤ/(n/g)ℤ × ℤ/(m/g)ℤ × ℤ/gℤ be a set product, and let ⊕ be an operation on ℤ/(n/g)ℤ × ℤ/(m/g)ℤ × ℤ/gℤ defined as follows:
 (a, b, c) ⊕ (x, y, z) = (a + x, b + y, c + z)
Then it is true that (ℤ/(n/g)ℤ × ℤ/(m/g)ℤ × ℤ/gℤ, ⊕) ≅ (ℤ/((mn)/g)ℤ, +).
Example: Consider the set ℤ/2ℤ × ℤ/3ℤ with the operation ⊕, and the set ℤ/6ℤ together with the operation +. It is true that (ℤ/2ℤ × ℤ/3ℤ, ⊕) ≅ (ℤ/6ℤ, +). The bijection is specified below.
| ℤ/2ℤ × ℤ/3ℤ | ℤ/6ℤ |
|---|---|
| (0,0) | 0 |
| (0,1) | 4 |
| (0,2) | 2 |
| (1,0) | 3 |
| (1,1) | 1 |
| (1,2) | 5 |
The isomorphism is demonstrated below.
| ℤ/2ℤ × ℤ/3ℤ | ℤ/6ℤ |
|---|---|
| (0,0) ⊕ (0,0) = (0,0) | 0 + 0 = 0 |
| (0,0) ⊕ (0,1) = (0,1) | 0 + 4 = 4 |
| (0,0) ⊕ (0,2) = (0,2) | 0 + 2 = 2 |
| (0,0) ⊕ (1,0) = (1,0) | 0 + 3 = 3 |
| (0,0) ⊕ (1,1) = (1,1) | 0 + 1 = 1 |
| (0,0) ⊕ (1,2) = (1,2) | 0 + 5 = 5 |
| (0,1) ⊕ (0,0) = (0,1) | 4 + 0 = 4 |
| (0,1) ⊕ (0,1) = (0,2) | 4 + 4 = 2 |
| (0,1) ⊕ (0,2) = (0,0) | 4 + 2 = 0 |
| (0,1) ⊕ (1,0) = (1,1) | 4 + 3 = 1 |
| (0,1) ⊕ (1,1) = (1,2) | 4 + 1 = 5 |
| (0,1) ⊕ (1,2) = (1,0) | 4 + 5 = 3 |
| (0,2) ⊕ (0,0) = (0,2) | 2 + 0 = 2 |
| (0,2) ⊕ (0,1) = (0,0) | 2 + 4 = 0 |
| (0,2) ⊕ (0,2) = (0,1) | 2 + 2 = 4 |
| (0,2) ⊕ (1,0) = (1,2) | 2 + 3 = 5 |
| (0,2) ⊕ (1,1) = (1,0) | 2 + 1 = 3 |
| (0,2) ⊕ (1,2) = (1,1) | 2 + 5 = 1 |
| (1,0) ⊕ (0,0) = (1,0) | 3 + 0 = 3 |
| (1,0) ⊕ (0,1) = (1,1) | 3 + 4 = 1 |
| (1,0) ⊕ (0,2) = (1,2) | 3 + 2 = 5 |
| (1,0) ⊕ (1,0) = (0,0) | 3 + 3 = 0 |
| (1,0) ⊕ (1,1) = (0,1) | 3 + 1 = 4 |
| (1,0) ⊕ (1,2) = (0,2) | 3 + 5 = 2 |
| (1,1) ⊕ (0,0) = (1,1) | 1 + 0 = 1 |
| (1,1) ⊕ (0,1) = (1,2) | 1 + 4 = 5 |
| (1,1) ⊕ (0,2) = (1,0) | 1 + 2 = 3 |
| (1,1) ⊕ (1,0) = (0,1) | 1 + 3 = 4 |
| (1,1) ⊕ (1,1) = (0,2) | 1 + 1 = 2 |
| (1,1) ⊕ (1,2) = (0,0) | 1 + 5 = 0 |
| (1,2) ⊕ (0,0) = (1,2) | 5 + 0 = 5 |
| (1,2) ⊕ (0,1) = (1,0) | 5 + 4 = 3 |
| (1,2) ⊕ (0,2) = (1,1) | 5 + 2 = 1 |
| (1,2) ⊕ (1,0) = (0,2) | 5 + 3 = 2 |
| (1,2) ⊕ (1,1) = (0,0) | 5 + 1 = 0 |
| (1,2) ⊕ (1,2) = (0,1) | 5 + 5 = 4 |
Since 1 and 5 are generators for ℤ/6ℤ with respect to +, the corresponding elements (1,1) and (1,2) are generators for ℤ/2ℤ × ℤ/3ℤ.
Fact: Suppose that for two positive integers n and m where g = gcd(n, m), and two congruence classes a ∈ ℤ/nℤ and b ∈ ℤ/mℤ, we are given the following system of equations:
 x ≡ a (mod n)
 x ≡ b (mod m)
We then know that:
 n = n' ⋅ g
 m = m' ⋅ g
But this means that:
 x ≡ a (mod (n' ⋅ g))
 x ≡ b (mod (m' ⋅ g))
The above equations can be converted into facts about divisibility, for some integers k and l:
 x = a + k ⋅ (n' ⋅ g)
 x = b + l ⋅ (m' ⋅ g)
But note that:
 x = a + (k ⋅ n') ⋅ g
 x = b + (l ⋅ m') ⋅ g
The above implies:
 x ≡ a (mod g)
 x ≡ b (mod g)
Since x ≡ x (mod g), it must be that:
 a ≡ b (mod g)
Thus, a solution x exists for the system of equations only if a ≡ b (mod gcd(n, m)).
Fact: Suppose that for two positive integers n and m where g = gcd(n, m), and two congruence classes a ∈ ℤ/nℤ and b ∈ ℤ/mℤ, we are given the following system of equations:
 x ≡ a (mod n)
 x ≡ b (mod m)
Note that because g = gcd(n, m), we have:
 n = n' ⋅ g
 m = m' ⋅ g
To find a solution, first check that a ≡ b (mod g), and compute r = a mod g. Then set:
 x = y + r
We can now solve the following system for y:
 y + r ≡ a (mod n)
 y + r ≡ b (mod m)
We subtract r from both sides in both equations. We now have g | (a − r) and g | (b − r), since r is the remainder when dividing a and b by g:
 y ≡ (a − r) (mod n)
 y ≡ (b − r) (mod m)
Now set:
 y = g ⋅ z
We can now solve the following system for z:
 g ⋅ z ≡ (a − r) (mod (n' ⋅ g))
 g ⋅ z ≡ (b − r) (mod (m' ⋅ g))
Using the linear congruence theorem on both equations, we get:
 z ≡ (a − r)/g (mod n')
 z ≡ (b − r)/g (mod m')
We know that n' and m' are coprime, so we can solve for z using the usual method for solving systems of equations with coprime moduli. Once we find z, we can compute a solution x to the original system of equations:
 x ≡ g ⋅ z + r (mod ((n ⋅ m) / g))
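The whole procedure can be assembled into a sketch of a general two-equation solver (the function name `crt_general` is ours):

```python
# General two-equation CRT solver: solve x ≡ a (mod n), x ≡ b (mod m)
# even when n and m are not coprime, following the derivation above.

from math import gcd

def crt_general(a, n, b, m):
    """Return (x, lcm(n, m)) solving the system, or None if a ≢ b (mod g)."""
    g = gcd(n, m)
    if a % g != b % g:
        return None                      # no solution exists
    n_, m_ = n // g, m // g              # n' and m' are coprime
    r = a % g
    # Reduced system: z ≡ (a-r)/g (mod n'), z ≡ (b-r)/g (mod m')
    a_, b_ = (a - r) // g % n_, (b - r) // g % m_
    # Solve the coprime system for z, then map back to x = g*z + r.
    z = (a_ * m_ * pow(m_, -1, n_) + b_ * n_ * pow(n_, -1, m_)) % (n_ * m_)
    lcm = n * m // g
    return (g * z + r) % lcm, lcm

assert crt_general(1, 6, 3, 8) == (19, 24)   # the worked example below
assert crt_general(2, 6, 3, 8) is None       # 2 ≢ 3 (mod 2)
```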
Example: Suppose we want to solve the following system of equations:
 x ≡ 1 (mod 6)
 x ≡ 3 (mod 8)
First, we compute gcd(6,8) = 2. Then we check that 1 ≡ 3 (mod 2). Since this is true, we know we can find a solution. We proceed by subtracting 1 from both sides of both equations:
 x − 1 ≡ 0 (mod 6)
 x − 1 ≡ 2 (mod 8)
We can now apply the linear congruence theorem to both equations:
 (x − 1)/2 ≡ 0 (mod 3)
 (x − 1)/2 ≡ 1 (mod 4)
We can now solve the above system of equations using the usual CRT solution computation because the moduli are now coprime:
 (x − 1)/2 ≡ 0 ⋅ (4 ⋅ 4⁻¹) + 1 ⋅ (3 ⋅ 3⁻¹) ≡ 0 + 1 ⋅ (3 ⋅ 3) ≡ 9 (mod 12)
We now compute x:
 (x − 1)/2 = 9
 x − 1 = 18
 x = 19
Since the range of unique solutions for such a system is ℤ/((6 ⋅ 8)/gcd(6,8))ℤ = ℤ/24ℤ, the congruence class solution is:
 x ≡ 19 (mod 24)
By putting together all the theorems and algorithms we have seen so far, we can now define a general-purpose solver for linear systems of equations involving congruence classes.
The dependency structure of these results can be summarized as follows:
• the greatest common divisor algorithm underlies Bézout's identity and the extended Euclidean algorithm, which yields the algorithm for finding multiplicative inverses;
• Fermat's little theorem, Euler's theorem, and Euler's totient function φ, together with the algorithm for finding multiplicative inverses, underlie the Chinese remainder theorem and the CRT solver for two equations;
• the linear congruence theorem extends this to the general CRT solver for two equations, and induction extends that to the general CRT solver for n equations.
We can also assemble a general-purpose algorithm for computing square roots of congruence classes modulo composite numbers (assuming we have the factorization of the modulus).
• the formula for primes in 3 + 4ℤ gives square roots modulo p;
• Hensel's lemma extends this to square roots modulo p^k;
• the CRT solver for two equations extends this to square roots modulo n ⋅ m.
Example: Suppose we want to solve the following equation for any congruence classes in ℤ/6ℤ that solve it:
 4 ⋅ x + 3 ⋅ x² ≡ 2 (mod 6)
One approach is to use the Chinese remainder theorem and split the problem into two equations by factoring 6:
 4 ⋅ x + 3 ⋅ x² ≡ 2 (mod 2)
 4 ⋅ x + 3 ⋅ x² ≡ 2 (mod 3)
We can now simplify each of the above:
 3 ⋅ x² ≡ 0 (mod 2)
 4 ⋅ x ≡ 2 (mod 3)
We can simplify each further:
 x² ≡ 0 (mod 2)
 x ≡ 2 (mod 3)
We know that the only solution to the first is x ≡ 0 (mod 2), so we have now obtained the following system of equations; we can use CRT to find the unique solution modulo 6:
 x ≡ 0 (mod 2)
 x ≡ 2 (mod 3)
 x ≡ 2 (mod 6)
We can check that, indeed, 4 ⋅ 2 + 3 ⋅ 2² ≡ 20 ≡ 2 (mod 6).
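A brute-force cross-check of this example is easy, since ℤ/6ℤ is small:

```python
# Find all x in Z/6Z with 4x + 3x^2 ≡ 2 (mod 6) by exhaustive search.

solutions = [x for x in range(6) if (4 * x + 3 * x * x) % 6 == 2]
assert solutions == [2]   # matches the unique solution found via CRT
```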
Example: Solve the following equation for all congruence classes in ℤ/7ℤ that satisfy it:
 x² − 3 ⋅ x ≡ 0 (mod 7)

### [link]  5.6. Groups, Subgroups, and Direct Products

Definition: We call an algebraic structure (A, ⊕) a group under ⊕ if:
• A is closed under ⊕;
• ⊕ is associative on A;
• A contains an identity;
• A has inverses with respect to ⊕: for every x ∈ A, there exists x⁻¹ ∈ A such that x ⊕ x⁻¹ = I and x⁻¹ ⊕ x = I, where I is the identity element.
If (A, ⊕) possesses no other algebraic properties, we say it is strictly a group.
Example: One example of a group is (ℤ, +), because the integers are closed under addition, addition is associative, 0 is an identity, and every integer x ∈ ℤ has an additive inverse −x ∈ ℤ.
Example: Any vector space V with vector addition is a group.
Example: The set ℤ^(2×2) (the set of all 2 × 2 matrices with integer entries) together with matrix addition is a group.
Fact: For any n ∈ ℕ, the algebraic structure (ℤ/nℤ, +), where + is addition modulo n, is a group.
Fact: For any n ∈ ℕ, the algebraic structure ((ℤ/nℤ)*, ⋅), where ⋅ is multiplication modulo n, is a group.
If we look at subsets of the elements in a group, we might find that certain subsets are closed under the operator for that group. These subsets are called subgroups, and the concept of a subgroup can be very useful when studying and using groups.
Definition: Let A be a group under the operator ⊕. We say that B is a subgroup of A if B ⊆ A, B is closed under ⊕, and B is a group.
Example: The following are all the subgroups of ℤ/4ℤ under addition + modulo 4:
• {0}, because all terms of the form 0, 0+0, 0+0+0, and so on are equivalent to 0;
• {0,2}, since closure({0,2}, +) = {0,2};
• {0,1,2,3} = ℤ/4ℤ.
The following are all the subgroups of ℤ/6ℤ under addition + modulo 6:
• {0}, because all terms of the form 0, 0+0, 0+0+0, and so on are equivalent to 0;
• {0,2,4}, since closure({2}, +) = closure({0,2,4}, +) = {0,2,4};
• {0,3}, since closure({3}, +) = closure({0,3}, +) = {0,3};
• {0,1,2,3,4,5} = closure({1}, +) = ℤ/6ℤ.
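These subgroup lists can be reproduced by closing each single generator, as in the following sketch (function names ours; for cyclic groups such as ℤ/nℤ, every subgroup arises from a single generator):

```python
# Enumerate the subgroups of (Z/nZ, +) by closing each single generator.

def subgroup(a, n):
    """The cyclic subgroup of (Z/nZ, +) generated by a."""
    elems, x = {0}, a % n
    while x not in elems:
        elems.add(x)
        x = (x + a) % n
    return frozenset(elems)

def subgroups(n):
    return sorted(sorted(s) for s in {subgroup(a, n) for a in range(n)})

assert subgroups(4) == [[0], [0, 1, 2, 3], [0, 2]]
assert subgroups(6) == [[0], [0, 1, 2, 3, 4, 5], [0, 2, 4], [0, 3]]
```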
Fact: Given some n ∈ ℕ and some factor f ∈ ℕ such that f | n, (closure({f}, +), +) is a subgroup of (ℤ/nℤ, +), and it is isomorphic to the group (ℤ/(n/f)ℤ, +).
Example: The following are all the non-trivial subgroups of ℤ/6ℤ under addition + modulo 6, together with their corresponding isomorphic group:
• ({0,2,4}, +) ≅ (ℤ/(6/2)ℤ, +) = (ℤ/3ℤ, +);
• ({0,3}, +) ≅ (ℤ/(6/3)ℤ, +) = (ℤ/2ℤ, +).
The notion of a subgroup allows us to introduce an alternative definition for prime numbers.
Definition: Given an integer p ℕ where p > 1, we say that p is prime if the only subgroups of (ℤ/pℤ, +) are the trivial subgroups ({0}, +) and (ℤ/pℤ, +).
Conjecture (hidden subgroup problem): The following problem is not in P: given a group (A, ⊕), find a non-trivial subgroup of A (non-trivial means not the subgroup that contains only the identity, ({I}, ⊕), and not the subgroup consisting of the entire group, (A, ⊕)).
Often, we are interested in more restricted versions of this problem, which are also not believed to be in P:
• finding a non-trivial subgroup of (ℤ/nℤ, +);
• finding a non-trivial subgroup of ((ℤ/nℤ)*, ⋅);
• finding the size of the subgroup closure({a}, ⋅) of the group (ℤ/nℤ)*, where a (ℤ/nℤ)*.
Example: Suppose that for some n ∈ ℕ, we are given a ∈ ℤ/nℤ such that gcd(a, n) = 1. We know from Euler's theorem that a^φ(n) ≡ 1 (mod n). However, φ(n) is not necessarily the smallest exponent of a that will yield 1.
For example, consider 3 ∈ ℤ/8ℤ. Even though φ(8) = 2³ − 2² = 4 and 3⁴ ≡ 1 (mod 8), it is also true that 3² ≡ 9 ≡ 1 (mod 8).
Thus, given some a ∈ ℤ/nℤ such that gcd(a, n) = 1, the problem of determining the smallest r such that a^r ≡ 1 (mod n) amounts to finding the smallest subgroup of ((ℤ/nℤ)*, ⋅) that contains a. Equivalently, this amounts to finding |closure({a}, ⋅)|.
Algorithm (Shor's algorithm): Shor's algorithm relies on the ability of a quantum computer to find the smallest r > 0 such that a^r ≡ 1 (mod n). It takes an arbitrary integer n as its input and finds a non-trivial factor of n with high probability.
Shor's algorithm, on input n:
1. Repeat the following until r is even and a^(r/2) ≢ −1 (mod n):
 a. choose a random a ∈ ℤ/nℤ;
 b. if gcd(a, n) > 1, return gcd(a, n);
 c. otherwise, find the smallest r > 0 such that a^r ≡ 1 (mod n) (the only step that requires a quantum computer).
2. Since we have exited the loop, r is even and a^(r/2) ≢ −1 (mod n).
3. Thus, a^(r/2) is a non-trivial square root of 1 modulo n.
4. Return gcd(a^(r/2) ± 1, n), which is a non-trivial factor by this fact.
Note that if n = p ⋅ q for distinct odd primes, then 1 has exactly four square roots modulo n. Since r is the smallest positive exponent for which a^r ≡ 1 (mod n), a^(r/2) cannot be 1. This leaves −1 and two other possibilities.
Example: Let us consider the example where n = 15. If we do not know the factors of 15, we could start with a = 2, since 2 ∈ (ℤ/15ℤ)*. We then find that:
 r = |closure({2}, ⋅)| = |{2, 4, 8, 1}| = 4
Thus, a^r ≡ 2⁴ ≡ 1 (mod 15). In this case, r = 4 is even and a^(r/2) ≡ 2² ≡ 4 (mod 15), where 4 ≢ −1. But this means we have found a root 4 of 1 where 4 ≢ ±1:
 4 ⋅ 4 ≡ 1 ⋅ 1 (mod 15)
Thus, we can now use the reduction from factoring to the congruent squares problem to find a factor of n:
 gcd(4 + 1, 15) = 5
 gcd(4 − 1, 15) = 3
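The non-quantum parts of this example can be reproduced classically; the brute-force `order` function below (a name of ours) stands in for the quantum order-finding step and is exponentially slower:

```python
# Classical stand-in for the quantum step of Shor's algorithm: find the
# order r of a modulo n by brute force, then try to split n.

from math import gcd

def order(a, n):
    """Smallest r > 0 with a^r ≡ 1 (mod n); requires gcd(a, n) = 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 2
assert gcd(a, n) == 1
r = order(a, n)                    # r = 4 for a = 2, n = 15
assert r % 2 == 0 and pow(a, r // 2, n) != n - 1
root = pow(a, r // 2, n)           # a non-trivial square root of 1 (here, 4)
factors = {gcd(root - 1, n), gcd(root + 1, n)}
assert factors == {3, 5}           # the non-trivial factors of 15
```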
We now consider a few other examples illustrating how subgroups and isomorphisms between groups and subgroups can be applied.
Example (arithmetic with unbounded error and bounded unreliability): Suppose you need to perform a sequence of k addition operations in ℤ/15ℤ, but all the addition operators ⊕ modulo n available to you are error-prone. To add two numbers a and b modulo n accurately, you must perform the computation a ⊕ b at least n times (because up to ⌈n/2⌉ of those attempts will result in an arbitrarily large error).
This means that to perform k addition operations modulo 15, it will be necessary to perform every operation 15 times, for a total of k ⋅ 15 operations modulo 15. If each addition operation modulo n takes about log₂ n steps (log₂ 15 ≈ 4), this would mean that k operations would take:
 k ⋅ 15 ⋅ 4 steps
Assuming that performing CRT to find a solution in ℤ/15ℤ takes 10,000 steps, determine how you can use CRT to speed up the computation of these k addition operations, and for what minimum k this would be advantageous.
Definition: Given two groups (A, ⊕) and (B, ⊗), we define the direct product of these two groups to be the group (A × B, ◊) where the operator ◊ is defined over A × B as follows:
 (a, b) ◊ (a', b') = (a ⊕ a', b ⊗ b')
Example: The direct product (ℤ/2ℤ × ℤ/2ℤ, +), where + is component-wise addition, is a group, but it is not isomorphic to (ℤ/4ℤ, +). To see why, consider that ℤ/4ℤ has a generator 1 ∈ ℤ/4ℤ. Are any of the four elements (0,0), (0,1), (1,0), or (1,1) a generator for all the elements in ℤ/2ℤ × ℤ/2ℤ?
Example: The direct product (ℤ/2ℤ × ℤ/3ℤ, +) where + is component-wise addition is a group, and it is isomorphic to ℤ/6ℤ. Can you find a single generator in ℤ/2ℤ × ℤ/3ℤ that can be used to generate every element in ℤ/2ℤ × ℤ/3ℤ? What is the name of the theorem that states that there is an isomorphism between ℤ/2ℤ × ℤ/3ℤ and ℤ/6ℤ?
Example: Suppose we consider the set of all possible polynomials of the form ak ⋅ x^k + ak−1 ⋅ x^(k−1) + ... + a2 ⋅ x² + a1 ⋅ x + a0, where every coefficient ai is from ℤ/2ℤ, and where + represents addition modulo 2. Then we can observe the following:
 0² = 0
 1² = 1
In other words, for any x ∈ ℤ/2ℤ, x² = x. That means any term of the form x^k can be simplified into x:
 x² = x
 x³ = x² ⋅ x = x ⋅ x = x
 x⁴ = x³ ⋅ x = x ⋅ x = x
This, together with the fact that every coefficient ai can be simplified to either 0 or 1 modulo 2, shows us that there are only four distinct polynomials modulo 2:
 {0 ⋅ x + 0,  0 ⋅ x + 1,  1 ⋅ x + 0,  1 ⋅ x + 1} = {0, 1, x, x + 1}
One notation for this set of polynomials is ℤ/2ℤ[x], which can be read as "ℤ/2ℤ extended with x" or "polynomials in x over ℤ/2ℤ". The set {0, 1, x, x + 1} together with addition modulo 2 is an algebraic structure, and it is isomorphic to ℤ/2ℤ × ℤ/2ℤ, where the two coefficients of the polynomial a1 ⋅ x + a0 correspond to the two components of an element (a1, a0) in ℤ/2ℤ × ℤ/2ℤ.
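This collapse to four polynomial functions can be verified exhaustively; an illustrative sketch:

```python
# Checks that, as functions on Z/2Z, every polynomial with coefficients
# mod 2 collapses to one of the four functions 0, 1, x, x + 1.

from itertools import product

def as_function(coeffs):
    """Tabulate a polynomial (coeffs[i] is the coefficient of x^i) on Z/2Z."""
    return tuple(sum(c * x**i for i, c in enumerate(coeffs)) % 2 for x in (0, 1))

# All polynomials of degree < 4 over Z/2Z, viewed as functions on Z/2Z:
tables = {as_function(c) for c in product((0, 1), repeat=4)}
assert len(tables) == 4   # only 0, 1, x, and x + 1 remain
```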

## [link]  Review 2. Algebraic Structures and their Properties

This section contains a comprehensive collection of review problems going over all the course material. Many of these problems are an accurate representation of the kinds of problems you may see on an exam.
Exercise: Suppose we have the following polynomial in the integers (i.e., all operations are arithmetic operations):
 6 ⋅ x^1001 + 2 ⋅ x^600 + 1
Prove that the arithmetic expression above is always divisible by 3 if gcd(x,3) = 1.
Exercise: Suppose you have n ∈ ℕ and a congruence class a ∈ ℤ/nℤ such that gcd(a, n) = 1. Compute, in terms of n (and only n), the congruence class corresponding to the following term:
 0 ⋅ a + 1 ⋅ a + 2 ⋅ a + ... + (n − 1) ⋅ a
Exercise: Find all x ∈ ℤ/29ℤ that satisfy the following:
 y² ≡ 16 (mod 29)
 x² ≡ y (mod 29)
Exercise: Solve the following problems.
1. Consider the following two circular shift permutations:
 [8,9,0,1,2,3,4,5,6,7]
 [5,6,7,8,9,0,1,2,3,4]
How many of each would you need to compose to obtain the permutation [9,0,1,2,3,4,5,6,7,8]?
2. Rewrite the permutation [3,4,0,1,2] as a composition of adjacent swap permutations.
Exercise: Suppose we want to perform k exponentiation operations modulo 21 (e.g., if k = 4, we want to compute (((a^b)^c)^d)^e modulo 21). Assume the following:
• a single exponentiation operation modulo 21 takes 21³ = 9261 steps;
• a single exponentiation operation modulo 3 takes 3³ = 27 steps;
• a single exponentiation operation modulo 7 takes 7³ = 343 steps;
• an exponentiation operation modulo 3 and an exponentiation operation modulo 7 together take 343 + 27 = 370 steps;
• solving a two-equation system for two values, a modulo 3 and b modulo 7, takes 8000 steps using CRT;
• we can either compute the exponentiation sequence directly modulo 21, or we can split it into two sequences of computations (one modulo 3, the other modulo 7) and then recombine using CRT at the end.
1. What is the number of steps needed to perform k exponentiations modulo 21?
2. What is the number of steps needed to perform k exponentiations modulo 3 and k exponentiations modulo 7, then to recombine using CRT?
Exercise: Find solutions to the following problems.
1. Explain why the following polynomial has no integer solutions (Hint: you only need to evaluate the polynomial for two possible values of x):
 x^4 + x^2 + 3 = 0
2. Find at least one solution x ∈ ℤ/10ℤ to the following system of equations (you must use Bézout's identity):
 6 ⋅ y + 5 ⋅ x - 1 ≡ 0 (mod 10)
 x^2 ≡ y (mod 10)
Exercise: Find solutions to the following problems.
1. Suppose you want to send some s ℤ/nℤ to Alice and Bob, but you want to ensure that the only way Alice and Bob can retrieve s is if they work together. What two distinct pairs (s1, p1) and (s2, p2) would you send to Alice and Bob, respectively, so that they would need to work together to recover s?
2. Suppose Bob is generating a public RSA key; he chooses a very large prime p, and then he chooses q = 2. Why is this not secure?
3. Suppose Alice and Bob use Shamir secret sharing to share a password s to a lock that is not protected from brute force attacks (i.e., anyone can keep trying different passwords until they unlock it). Alice holds s mod p and Bob holds s mod q, where s < pq. However, suppose that Bob happens to be using q = 2, and Alice knows this. What can Alice do to quickly break the lock?
Exercise: Suppose that Alice, Bob, Carl, and Dan are sharing a secret s using Shamir secret sharing, where each participant is assigned a distinct modulus n that is coprime to everyone else's modulus. Each participant is holding a part of the secret s mod n, and the secret can be recovered by any two participants. However, Eve has sabotaged the value stored by one of the participants. Below are the values currently stored by everyone; one of them is corrupted.
• Alice: nAlice = 3 and (s mod 3) = 2
• Bob: nBob = 4 and (s mod 4) = 3
• Carl: nCarl = 5 and (s mod 5) = 2
• Dan: nDan = 7 and (s mod 7) = 4
1. Which participant's stored value s mod n has Eve sabotaged?
2. What is the correct secret value s?
3. What is the number of different shared secret values these four participants can store (assuming they use the same moduli and require that any two members can recover the secret)?
4. Suppose you want to store an n-bit number s. You want to store it in a way that makes it possible to recover s even if one of the bits is corrupted. How can you accomplish this using at most approximately 2 ⋅ n bits?
Exercise: Suppose you want to make a two-player game in which each player gets a different element (call them a and b) from the algebraic structure (A, ⊕). Then, they would be given a third element c, and they must work together to use their two elements a and b to create the target element c. It must be impossible for an individual player to make the target element c on their own. Explain why each of the following algebraic structures would or would not work for this game.
• ℤ/3ℤ
• ℤ/4ℤ
• ℤ/2ℤ × ℤ/2ℤ
• ℤ/6ℤ
• ℤ/2ℤ × ℤ/3ℤ
• S3

## [link]  Appendix A. Python Reference

The Python programming language will be among the languages we use in this course. This language supports the object-oriented, imperative, and functional programming paradigms, has automatic memory management, and natively supports common high-level data structures such as lists and sets. Python is often used as an interpreted language, but it can also be compiled.

The latest version of Python 3 can be downloaded at: https://www.python.org/downloads/. In this course, we will require the use of Python 3, which has been installed on all the CS Department's undergraduate computing lab machines, as well as on csa2/csa3.

### [link]  A.2. Assembling a Python module

The simplest Python program is a single file (called a module) with the file extension .py. For example, suppose the following is contained within a file called example.py:

# This is a comment in "example.py".
# Below is a Python statement.
print("Hello, world.")

Assuming Python is installed on your system, to run the above program from the command line you can use the following (you may need to use python3, python3.2, python3.3, etc. depending on the Python installation you're using). Note that in the examples below %> represents a terminal prompt, which may look different on your system.

%> python example.py
Hello, world.

If you run Python without an argument on the command line, you will enter Python's interactive prompt. You can then evaluate expressions and execute individual statements using this prompt; you can also load and execute a Python module file:

%> python
Python 3.2 ...
>>> exec(open("example.py").read())
Hello, world.
>>> x = "Hello." # Execute an assignment statement.
>>> print(x)     # Execute a "print" statement.
Hello.
>>> x            # Evaluate a string expression.
'Hello.'
>>> 1 + 2        # Evaluate a numerical expression.
3


### [link]  A.3. Common data structures (i.e., Python expressions)

Python provides native support for several data structures that we will use throughout this course: integers, strings, lists, tuples, sets, and dictionaries (also known as finite maps). In this subsection, we present how instances of these data structures are represented in Python, as well as the most common operations and functions that can be applied to these data structure instances.
• Booleans consist of two constants: True and False.
• The usual logical operations are available using the operators and, or, and not.

>>> True                                      # A boolean constant.
True
>>> False                                     # A boolean constant.
False
>>> True and False or True and (not False)    # A boolean expression.
True

• Integers are written as in most other programming languages (i.e., as a sequence of digits).
• The usual arithmetic operations are available using the operators +, *, -, and /. The infix operator // represents integer division, and the infix operator ** represents exponentiation. Negative integers are prefixed with the negation operator -.
• The usual relational operators ==, !=, <, >, <=, >= are available.
• The int() function can convert a string that looks like an integer into an integer.

>>> 123                                       # An integer constant.
123
>>> 1 * (2 + 3) // 4 - 5                      # An integer expression.
-4
>>> 4 * 5 >= 19                               # A boolean expression involving integers.
True
>>> int("123")                                # A string being converted into an integer
123

• Strings are delimited by either ' or " characters. Strings can be treated as lists of single-character strings. Another way to look at this is that there is no distinction between a character and a string: all characters are just strings of length 1. Multiline strings can be delimited using """ or ''' (i.e., three quotation mark characters at the beginning and end of the string literal).
• The empty string is denoted using '' or "".
• Two strings can be concatenated using +.
• The function len() returns the length of a string.
• Individual characters in a string can be accessed using the bracketed index notation (e.g., s[i]). These characters are also strings themselves.

>>> 'Example.'                                # A string.
'Example.'
>>> "Example."                                # Alternate notation for a string.
'Example.'
>>> len("ABCD")                               # String length.
4
>>> "ABCD" + "EFG"                            # String concatenation.
'ABCDEFG'
>>> "ABCD"[2]                                 # Third character in the string.
'C'

• Lists are similar to arrays: they are ordered sequences of objects and/or values. The entries of a list can be a mixture of different types, and lists containing one or more objects are delimited using [ and ], with the individual list entries separated by commas. Lists cannot be members of sets.
• The empty list is denoted using [].
• Two lists can be concatenated using +.
• The function len() returns the length of a list.
• Individual entries in a list can be accessed using the bracketed index notation (e.g., a[i]).
• To check if a value is in a list, use the in relational operator.

>>> [1,2,"A","B"]                             # A list.
[1, 2, 'A', 'B']
>>> [1, 2] + ['A','B']                        # Concatenating lists.
[1, 2, 'A', 'B']
>>> len([1,2,"A","B"] )                       # List length.
4
>>> [1,2,"A","B"][0]                          # First entry in the list.
1
>>> 1 in [1, 2]                               # List containment check.
True

• Tuples are similar to lists (they are ordered, and can contain objects of different types), except they are delimited by parentheses ( and ), with entries separated by commas. The main distinction between lists and tuples is that tuples are hashable (i.e., they can be members of sets).
• The empty tuple is denoted using ().
• A tuple containing a single object x is denoted using (x, ).
• Two tuples can be concatenated using +.
• A tuple can be turned into a list using the list() function.
• A list can be turned into a tuple using the tuple() function.
• The function len() returns the length of a tuple.
• Individual entries in a tuple can be accessed using the bracketed index notation (e.g., t[i]).
• To check if a value is in a tuple, use the in relational operator.

>>> (1,2,"A","B")                             # A tuple.
(1, 2, 'A', 'B')
>>> (1,)                                      # Another tuple.
(1,)
>>> (1, 2) + ('A','B')                        # Concatenating tuples.
(1, 2, 'A', 'B')
>>> list((1, 2, 'A','B'))                     # A tuple being converted into a list.
[1, 2, 'A', 'B']
>>> tuple([1, 2, 'A','B'])                    # A list being converted into a tuple.
(1, 2, 'A', 'B')
>>> len((1,2,"A","B"))                        # Tuple length.
4
>>> (1,2,"A","B")[0]                          # First entry in the tuple.
1
>>> 1 in (1, 2)                               # Tuple containment check.
True

• Sets are unordered collections that cannot contain duplicates. They are a close approximation of mathematical sets. Sets cannot be members of sets.
• The empty set is denoted using set().
• The methods .union() and .intersection() correspond to the standard set operations.
• A list or tuple can be turned into a set using the set() function.
• A set can be turned into a list or tuple using the list() or tuple() function, respectively.
• The function len() returns the size of a set.
• To access individual entries in a set, it is necessary to turn the set into a list or tuple.
• To check if a value is in a set, use the in relational operator.

>>> {1,2,"A","B"}                             # A set.
{1, 2, 'A', 'B'}
>>> ({1,2}.union({3,4})).intersection({4,5})  # Set operations.
{4}
>>> set([1, 2]).union(set(('A','B')))         # Converting a list and a tuple to sets.
{'A', 1, 2, 'B'}
>>> len({1,2,"A","B"})                        # Set size.
4
>>> 1 in {1,2,"A","B"}                        # Set containment check.
True

• Frozen sets are like sets, except they can be members of other sets. A set can be turned into a frozen set using the frozenset() function.

>>> frozenset({1,2,3})                        # A frozen set.
frozenset({1, 2, 3})
>>> {frozenset({1,2}), frozenset({3,4})}      # Set of frozen sets.
{frozenset({3, 4}), frozenset({1, 2})}

In Python, values that are members of a set data structure must be hashable so that it is easy to deduplicate elements in an unordered set (e.g., {1,1,2,2} == {1,2} should be True, and this would take O(n^2) steps to compute in the worst case because sets are not ordered). However, values of type set are not hashable because they can change (e.g., it is possible to insert an element into a set and also to remove an element from a set), which means their hashes could change, as well. Values of type frozenset cannot be changed; once their hash value is computed at the time of creation, it never changes. Thus, it is okay to include values of type frozenset inside instances of set without worrying about incurring a quadratic running time when deduplicating (or, e.g., computing a set union).
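The hashability rules above can be observed directly at the interactive prompt; the following is a small sketch (standard Python 3) showing that a set cannot be a member of a set, while a frozenset can.

```python
s = {1, 2}

try:
    nested = {s}          # Fails: a set is mutable, hence unhashable.
except TypeError as e:
    print("unhashable:", e)

fs = frozenset(s)         # Immutable, so its hash is fixed at creation.
nested = {fs, frozenset({3, 4})}
print(len(nested))        # → 2
```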
• Dictionaries are unordered collections of associations between some set of keys and some set of values. Dictionaries are also known as finite maps.
• The empty dictionary is denoted using {}.
• The list of keys that the dictionary associates with values can be obtained using list(d.keys()).
• The list of values that the dictionary contains can be obtained using list(d.values()).
• The function len() returns the number of entries in the dictionary.
• Individual entries in a dictionary can be accessed using the bracketed index notation (e.g., d[key]).

>>> {"A":1, "B":2}                            # A dictionary.
{'A': 1, 'B': 2}
>>> list({"A":1, "B":2}.keys())               # Dictionary keys.
['A', 'B']
>>> list({"A":1, "B":2}.values())             # Dictionary values.
[1, 2]
>>> len({"A":1, "B":2})                       # Dictionary size.
2
>>> {"A":1, "B":2}["A"]                       # Obtain a dictionary value using a key.
1
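As with lists, tuples, and sets, the in relational operator also works for dictionaries; note that it checks membership among the keys, not the values. A brief sketch:

```python
d = {"A": 1, "B": 2}
print("A" in d)   # → True  (membership check is over the keys).
print(1 in d)     # → False (values are not consulted).
```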


### [link]  A.4. Function, procedure, and method invocations

Python provides a variety of ways to supply parameter arguments when invoking functions, procedures, and methods.
• Function calls and method/procedure invocations consist of the function, procedure, or method name followed by a parenthesized, comma-delimited list of arguments. For example, suppose a function or procedure example() is defined as follows:

def example(x, y, z):
    print("Invoked.")
    return x + y + z

To invoke the above definition, we can use one of the following techniques.
• Passing arguments directly involves listing the comma-delimited arguments directly between parentheses.

>>> example(1,2,3)
Invoked.
6

• The argument unpacking operator (also known as the *-operator, the scatter operator, or the splat operator) involves providing a list to the function, preceded by the * symbol; the arguments will be drawn from the elements in the list.

>>> args = [1,2,3]
>>> example(*args)
Invoked.
6

• The keyword argument unpacking operator (also known as the **-operator) involves providing a dictionary to the function, preceded by the ** symbol; each named parameter in the function definition will be looked up in the dictionary, and the value associated with that dictionary key will be used as the argument passed to that parameter.

>>> args = {'z':3, 'x':1, 'y':2}
>>> example(**args)
Invoked.
6
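The two unpacking operators can also be mixed with directly passed arguments in a single call; the following sketch reuses the same example() definition (the print statement is omitted here for brevity):

```python
def example(x, y, z):
    return x + y + z

# x is passed directly, y comes from the unpacked list,
# and z comes from the unpacked dictionary.
print(example(1, *[2], **{"z": 3}))   # → 6
```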

• Default parameter values can be specified in any definition. Suppose the following definition is provided.

def example(x = 1, y = 2, z = 3):
    return x + y + z

The behavior is then as follows: if an argument corresponding to a parameter is not supplied, the default value found in the definition is used. If an argument is supplied, the supplied argument value is used.

>>> example(0, 0)
3
>>> example(0)
5
>>> example()
6
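Default values also interact with explicitly named (keyword) arguments: passing an argument by name overrides only that parameter's default, and named arguments may appear in any order. A brief sketch, reusing the same example() definition:

```python
def example(x = 1, y = 2, z = 3):
    return x + y + z

print(example(z = 0))           # x and y keep their defaults: 1 + 2 + 0 = 3.
print(example(y = 10, x = 0))   # Order of named arguments does not matter: 0 + 10 + 3 = 13.
```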


### [link]  A.5. Comprehensions

Python provides concise notations for defining data structures and performing logical computations. In particular, it supports a comprehension notation that can be used to build lists, tuples, sets, and dictionaries.
• List comprehensions make it possible to construct a list by iterating over one or more other data structure instances (such as a list, tuple, set, or dictionary) and performing some operation on each element or combination of elements. The resulting list will contain the result of evaluating the body for every combination.

>>> [ x for x in [1,2,3] ]
[1, 2, 3]
>>> [ 2 * x for x in {1,2,3} ]
[2, 4, 6]
>>> [ x + y for x in {1,2,3} for y in (1,2,3) ]
[2, 3, 4, 3, 4, 5, 4, 5, 6]

It is also possible to add conditions anywhere after the first for clause. This will filter which combinations are actually used to add a value to the resulting list.

>>> [ x for x in {1,2,3} if x < 3 ]
[1, 2]
>>> [ x + y for x in {1,2,3} for y in (1,2,3) if x > 2 and y > 1 ]
[5, 6]

• Set comprehensions make it possible to construct a set by iterating over one or more other data structure instances (such as a list, tuple, set, or dictionary) and performing some operation on each element or combination of elements. The resulting set will contain the result of evaluating the body for every combination. Notice that the result will contain no duplicates because the result is a set.

>>> { x for x in [1,2,3,1,2,3] }
{1, 2, 3}

• Dictionary comprehensions make it possible to construct a dictionary by iterating over one or more other data structure instances (such as a list, tuple, set, or dictionary) and performing some operation on each element or combination of elements. The resulting dictionary will contain the result of evaluating the body for every combination.

>>> { key : 2 for key in ["A","B","C"] }
{'A': 2, 'C': 2, 'B': 2}
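As with list comprehensions, an if clause can be added to a dictionary comprehension to filter which combinations produce entries; a minimal sketch:

```python
# Map each even x in the range to its square; odd values are filtered out.
squares = { x : x * x for x in [0, 1, 2, 3, 4] if x % 2 == 0 }
print(squares)   # → {0: 0, 2: 4, 4: 16}
```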


### [link]  A.6. Other useful built-in functions

The built-in function type() can be used to determine the type of a value. Below, we provide examples of how to check whether a given expression has one of the common Python types:

>>> type(True) == bool
True
>>> type(123) == int
True
>>> type("ABC") == str
True
>>> type([1,2,3]) == list
True
>>> type(("A",1,{1,2})) == tuple
True
>>> type({1,2,3}) == set
True
>>> type({"A":1, "B":2}) == dict
True


### [link]  A.7. Common Python definition and control constructs (i.e., Python statements)

A Python program is a sequence of Python statements. Each statement is either a function definition, a variable assignment, a conditional statement (i.e., if, else, and/or elif), an iteration construct (i.e., a for or while loop), a return statement, or a break or continue statement.
• Variable assignments make it possible to assign a value or object to a variable.

x = 10

It is also possible to assign a tuple (or any computation that produces a tuple) to another tuple:

(x, y) = (1, 2)

• Function and procedure definitions consist of the def keyword, followed by the name of the function or procedure, and then by one or more arguments (delimited by parentheses and separated by commas).

def example(a, b, c):
    return a + b + c

• Conditional statements consist of one or more branches, each with its own boolean expression as the condition (with the exception of else). The body of each branch is an indented sequence of statements.

def fibonacci(n):
    # Computes the nth Fibonacci number.
    if n <= 0:
        return 0
    elif n <= 2:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

• Iteration constructs make it possible to repeat a sequence of statements over and over. The body of an iteration construct is an indented sequence of statements.
• The while construct has a boolean expression as its condition (much like if). The body is executed over and over until the expression in the condition evaluates to False, or a break statement is encountered.

def example1(n):
    # Takes an integer n and returns the sum of
    # the integers from 1 to n-1.
    i = 0
    sum = 0
    while i < n:
        sum = sum + i
        i = i + 1
    return sum

def example2(n):
    # Takes an integer n and returns the sum of
    # the integers from 1 to n-1.
    i = 0
    sum = 0
    while True:
        sum = sum + i
        i = i + 1
        if i == n:
            break
    return sum

• The for construct makes it possible to repeat a sequence of statements once for every object in a list, tuple, or set, or once for every key in a dictionary.

def example3(n):
    # Takes an integer n and returns the sum of
    # the integers from 1 to n-1.
    sum = 0
    for i in range(0, n):
        sum = sum + i
    return sum

def example4(d):
    # Takes a dictionary d that maps keys to
    # integers and returns the sum of the integers.
    sum = 0
    for key in d:
        sum = sum + d[key]
    return sum
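The continue statement listed among the control constructs above skips the remainder of the current loop body and proceeds to the next iteration; the following sketch (the name example5 is our own) uses it to skip the odd integers:

```python
def example5(n):
    # Takes an integer n and returns the sum of the even
    # integers from 0 to n-1, skipping odd values via continue.
    total = 0
    for i in range(0, n):
        if i % 2 == 1:
            continue
        total = total + i
    return total

print(example5(10))   # → 20 (that is, 0 + 2 + 4 + 6 + 8)
```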