A note on assignment games with the same nucleolus

We show that the family of assignment matrices that give rise to the same nucleolus forms a compact join-semilattice with one maximal element. The family is in general not a convex set, but it is path-connected.


Introduction
The assignment game (Shapley and Shubik, 1972) is the cooperative viewpoint of a two-sided market. There are two sides of the market, i.e. two disjoint sets of agents, buyers and sellers, who can trade. The profits are collected in a matrix, the assignment matrix. The allocation of the optimal profit should be such that no coalition has an incentive to depart from the grand coalition and act on its own. Accordingly, a first game-theoretical analysis of cooperation focuses on the core of the game. Shapley and Shubik show that the core of any assignment game is always nonempty: it coincides with the set of solutions of the linear program dual to the classical optimal assignment problem. A recent survey on assignment games is Núñez and Rafels (2015).
Among other solutions, the nucleolus (Schmeidler, 1969) is a "fair" solution in the general context of cooperative games. It is the unique core selection that lexicographically minimizes the excesses arranged in a non-decreasing way. The standard procedure for computing the nucleolus proceeds by solving a finite (but large) number of related linear programs. As a solution concept, the nucleolus has been analyzed and computed in many cooperative games. Solymosi and Raghavan (1994) give an algorithm that computes the nucleolus of the assignment game in polynomial time. Recently, Martínez-de-Albéniz et al. (2013b) provide a new procedure to compute the nucleolus of the assignment game. An interesting survey on the nucleolus and its computational complexity is given in Greco et al. (2015). The description of the 2 × 2 case is discussed in Martínez-de-Albéniz et al. (2013a).
In this paper we focus on the structure of the family of assignment matrices that give rise to the same nucleolus. The main contributions of the paper are the following:
- The family of matrices with the same nucleolus forms a join-semilattice, i.e. it is closed under entry-wise maximum. The family has a unique maximum element, which is always a valuation matrix.
- We show that the above family is a path-connected set, and we give a precise path connecting any matrix of the family with its maximum element.

Preliminaries on the assignment game
An assignment market (M, M′, A) consists of two disjoint finite sets, M, the set of buyers, and M′, the set of sellers, together with a nonnegative matrix A = (a_ij)_{i∈M, j∈M′} that represents the profit obtained by each mixed pair (i, j) ∈ M × M′. To distinguish the j-th seller from the j-th buyer we write the former as j′ when needed. The assignment market is called square whenever |M| = |M′|. Usually we write m = |M| and m′ = |M′|. M+_m denotes the set of nonnegative square matrices of order m, and M+_{m×m′} the set of nonnegative matrices with m rows and m′ columns.
Recall that M+_{m×m′} forms a lattice with the usual ordering ≤ between matrices. The maximum C = A ∨ B of two matrices A, B ∈ M+_{m×m′} is defined entry-wise, i.e. c_ij = max{a_ij, b_ij} for all (i, j) ∈ M × M′. Given an ordered subset of matrices (F, ≤), F ⊆ M+_{m×m′}, we say matrix C ∈ F is a maximal (minimal) element of (F, ≤) if whenever there is a matrix D ∈ F with D ≥ C (D ≤ C), then D = C. Matrix C ∈ F is a maximum (minimum) element of (F, ≤) if C ≥ D (C ≤ D) for all D ∈ F. A matching µ from M to M′ is a subset of M × M′ such that each agent belongs to at most one pair. If for some buyer i ∈ M there is no seller j ∈ M′ satisfying (i, j) ∈ µ, we say buyer i is unmatched by µ, and similarly for sellers. The set of all matchings from M to M′ is denoted by M(M, M′), and M*_A(M, M′) denotes the set of all optimal matchings, those attaining the maximum total profit. Shapley and Shubik (1972) associate with any assignment market a game in coalitional form (M ∪ M′, w_A), called the assignment game, in which the worth of a coalition S ∪ T ⊆ M ∪ M′, with S ⊆ M and T ⊆ M′, is w_A(S ∪ T) = max_{µ∈M(S,T)} Σ_{(i,j)∈µ} a_ij, and any coalition formed only by buyers or only by sellers has a worth of zero.
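For small markets, the worth w_A(S ∪ T) above can be evaluated by brute force over all matchings between the two sides of the coalition. The following sketch is illustrative only (the function name and the sample matrix are ours):

```python
from itertools import permutations

def coalition_worth(A, S, T):
    """Worth w_A(S ∪ T): best total profit over all matchings between
    buyers in S and sellers in T (brute force; fine for small markets).
    A is a list of rows; S and T are sets of row/column indices."""
    S, T = list(S), list(T)
    if not S or not T:          # one-sided coalitions have worth zero
        return 0
    k = min(len(S), len(T))     # with nonnegative profits, match as many pairs as possible
    return max(sum(A[i][j] for i, j in zip(buyers, sellers))
               for buyers in permutations(S, k)
               for sellers in permutations(T, k))

A = [[5, 8], [3, 6]]
print(coalition_worth(A, {0, 1}, {0, 1}))  # → 11, attained by both full matchings
```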
The main goal is to allocate the total worth among the agents, and a prominent solution concept for cooperative games is the core. Shapley and Shubik (1972) prove that the core of the assignment game is always nonempty. Given an optimal matching µ ∈ M*_A(M, M′), the core of the assignment game, C(w_A), can be easily described as the set of nonnegative payoff vectors (x, y) ∈ R^M_+ × R^{M′}_+ satisfying x_i + y_j ≥ a_ij for all (i, j) ∈ M × M′, with equality for all (i, j) ∈ µ, and such that all agents unmatched by µ get a null payoff. Now we define the nucleolus (Schmeidler, 1969) of an assignment game, taking into account that its core is always nonempty. The excess of a coalition ∅ ≠ R ⊆ M ∪ M′ with respect to an allocation in the core, (x, y) ∈ C(w_A), is defined as e(R, (x, y)) := w_A(R) − Σ_{i∈R∩M} x_i − Σ_{j∈R∩M′} y_j. By the bilateral nature of the market, it is known that the only coalitions that matter are the individual and mixed-pair ones (Núñez, 2004). Given an allocation (x, y) ∈ C(w_A), define the excess vector θ(x, y) = (θ_k)_{k=1,...,r} as the vector of individual and mixed-pair coalition excesses arranged in a non-increasing order, i.e. θ_1 ≥ θ_2 ≥ ... ≥ θ_r. Then the nucleolus of the game (M ∪ M′, w_A) is the unique core allocation ν(w_A) ∈ C(w_A) that minimizes θ(x, y) with respect to the lexicographic order over the whole set of core allocations. For ease of notation we write ν(A) instead of ν(w_A). We use the characterization of the nucleolus of a square assignment game of Llerena and Núñez (2011), see also Llerena et al. (2015). To introduce this characterization we define the maximum transfer δ^A_{S,T}(x, y) from a coalition S to a coalition T, for any core allocation (x, y) ∈ C(w_A). Llerena and Núñez (2011) give a geometric characterization of the nucleolus of a square assignment game. They prove that the nucleolus of a square assignment game is the unique core allocation (x, y) ∈ C(w_A) such that

δ^A_{S,T}(x, y) = δ^A_{T,S}(x, y)    (3)

for any ∅ ≠ S ⊆ M and ∅ ≠ T ⊆ M′ with |S| = |T|. In certain cases, the number of equalities can be reduced.
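The lexicographic minimization of the excess vector can be made concrete on a toy market; a minimal sketch, restricted (as the text justifies) to individual and mixed-pair coalitions, with a made-up 1 × 1 market:

```python
def theta(A, x, y):
    """Excess vector over individual and mixed-pair coalitions,
    arranged non-increasingly; individual coalitions have worth 0,
    so their excess is just minus the agent's payoff."""
    m, mp = len(x), len(y)
    excesses = [-xi for xi in x] + [-yj for yj in y]
    excesses += [A[i][j] - x[i] - y[j] for i in range(m) for j in range(mp)]
    return sorted(excesses, reverse=True)

# 1 x 1 market with a_11 = 4: core allocations split 4 between the two agents.
# Among them, the equal split (2; 2) is lexicographically best.
A = [[4]]
print(theta(A, [2], [2]) < theta(A, [3], [1]))  # → True (Python lists compare lexicographically)
```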
Indeed, for this characterization we only have to check (3) for the cases T = µ(S), for some optimal matching µ ∈ M*_A(M, M′), and any ∅ ≠ S ⊆ M. To analyze the non-square case we can use two different approaches, and we will apply both of them.
The first and classical approach consists in adding null rows or columns to make the initial matrix square. The added rows or columns correspond to dummy agents, who receive a null payoff at any core allocation and hence also at the nucleolus. To this extended square assignment matrix we apply the previous geometric characterization. Notice that the number of coalitions to be checked grows quickly with each added agent.
To fix our first approach we introduce some notation. Given an arbitrary assignment matrix A ∈ M+_{m×m′}, with m < m′ and where µ = {(1, 1), (2, 2), . . . , (m, m)} is an optimal matching for A, we define the square matrix A^0 ∈ M+_{m′} obtained from the original matrix A by adding m′ − m zero rows, that is, m′ − m dummy players, see (5).
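The construction of A^0 is a simple zero-padding; a minimal sketch, using the list-of-rows representation of these examples (the function name is ours):

```python
def make_square(A):
    """Pad an m x m' matrix, m < m', with m' - m zero rows: the added
    rows are dummy players who earn zero at every core allocation."""
    m, mp = len(A), len(A[0])
    return [row[:] for row in A] + [[0] * mp for _ in range(mp - m)]

A0 = make_square([[3, 1, 2]])
print(A0)  # → [[3, 1, 2], [0, 0, 0], [0, 0, 0]]
```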
We know that the matching µ^0, obtained from µ by matching each dummy player to one of the remaining sellers, is optimal for A^0. The second approach is an adaptation of the consistency property of the nucleolus, see Llerena et al. (2015), and it can be found in Martínez-de-Albéniz et al. (2015). It keeps the dimension of the problem as low as possible, and it has an interest of its own. Basically it consists in reducing the assignment problem to an appropriate square matrix, dropping those agents unassigned by an optimal matching and reassessing the matrix entries. Apart from the dimension issue, the main feature of this approach is that we need not care about added zero rows or columns when we deal with the matrix.
To introduce the second approach we need some notation.
Given A ∈ M+_{m×m′} with m < m′ and an optimal matching µ ∈ M*_A(M, M′), we associate with A a vector a^µ ∈ R^M_+, see (6), and define the square matrix A^µ ∈ M+_m, see (7). Then the relationship between their nucleoli is the following one: writing ν(A^µ) = (x^µ, y^µ),

ν_i(A) = x^µ_i + a^µ_i for all i ∈ M,    (8)
ν_j(A) = y^µ_j for all j ∈ µ(M), and ν_j(A) = 0 for all j ∈ M′ \ µ(M).    (9)

Moreover, the fixed matching µ is also optimal for matrix A^µ. An example of the application of this second approach is the following. Consider a matrix A ∈ M+_{2×5} whose optimal matching, denoted in boldface, is µ = {(1, 1), (2, 2)}. Now the vector is a^µ = (7, 5), and the matrix A^µ ∈ M+_2 is given by (7). The nucleolus of the game w_{A^µ} is ν(A^µ) = (0.5, 6.25; 0.5, 0.75), and then the nucleolus of the game w_A is ν(A) = (7.5, 11.25; 0.5, 0.75, 0, 0, 0).
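Using the numbers of this example, the recomposition of ν(A) from ν(A^µ) can be checked directly; the helper below is a sketch of that relation (the function name and signature are ours):

```python
def recompose(nuc_mu, a_mu, n_sellers):
    """Recover ν(A) from ν(A^µ): each buyer adds back a^µ_i, sellers
    matched by µ keep their payoff, and sellers unassigned by µ
    receive zero."""
    x_mu, y_mu = nuc_mu
    x = [xi + ai for xi, ai in zip(x_mu, a_mu)]
    y = list(y_mu) + [0.0] * (n_sellers - len(y_mu))
    return x, y

# ν(A^µ) = (0.5, 6.25; 0.5, 0.75) and a^µ = (7, 5), as in the example:
x, y = recompose(([0.5, 6.25], [0.5, 0.75]), [7, 5], 5)
print(x, y)  # → [7.5, 11.25] [0.5, 0.75, 0.0, 0.0, 0.0]
```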

Assignment games with the same nucleolus
We introduce the family of matrices with a given nucleolus. To this end, for an arbitrary assignment matrix A ∈ M+_{m×m′} we denote by [A]_ν = {B ∈ M+_{m×m′} | ν(B) = ν(A)} the family of matrices that share the same nucleolus as A.
It is clear that matrices with the same nucleolus must have the same worth for the grand coalition, even if they do not have any optimal matching in common; consider e.g. matrices [1 0; 0 1] and [0 1; 1 0].
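The equal-worth claim can be checked by brute force on the two matrices above; a small sketch enumerating all full matchings of a square matrix (the function name is ours):

```python
from itertools import permutations

def grand_worth(A):
    """w_A(M ∪ M') for a square matrix: best total over all full matchings."""
    n = len(A)
    return max(sum(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

I = [[1, 0], [0, 1]]   # unique optimal matching: the diagonal
J = [[0, 1], [1, 0]]   # unique optimal matching: the anti-diagonal
print(grand_worth(I), grand_worth(J))  # → 2 2: same worth, no common optimal matching
```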
For any assignment game there exists a unique matrix, its buyer-seller exact representative, among those leading to the same core, such that no matrix entry can be raised without modifying the core (Núñez and Rafels, 2002). From Núñez (2004), assignment games with the same core have the same nucleolus. In particular, for each matrix in [A]_ν its corresponding buyer-seller exact representative also belongs to the family. Nevertheless, as we will see, assignment matrices with different cores may also share the same nucleolus.
We focus now on the structure of the family [A]_ν: first, it is a nonempty compact join-semilattice with a unique maximal element. Secondly, we characterize this maximum and show that it is a specific type of assignment matrix, a valuation matrix.
Theorem 1 Let A ∈ M+_{m×m′} be an assignment matrix. The family [A]_ν forms a compact join-semilattice with a unique maximal element.
Proof First we prove that this family is a join-semilattice. Let B, B′ ∈ [A]_ν. If m ≠ m′, we add zero rows or columns to make the matrices square, recall (5). It is known that these rows or columns correspond to dummy players, who obtain a zero payoff at any core allocation and hence also at the nucleolus. Therefore we can assume from now on that the matrices are square. We have B, B′ ≤ B ∨ B′, and also C(w_B) ∩ C(w_{B′}) ≠ ∅, since both games share the nucleolus. We claim C(w_B) ∩ C(w_{B′}) = C(w_{B∨B′}).
To see it, take any (x, y) ∈ C(w_B) ∩ C(w_{B′}). It is clear that x_i + y_j ≥ max{b_ij, b′_ij} for all (i, j) ∈ M × M′. Then for any optimal matching µ of matrix B ∨ B′ we have

w_{B∨B′}(M ∪ M′) = Σ_{(i,j)∈µ} max{b_ij, b′_ij} ≤ Σ_{(i,j)∈µ} (x_i + y_j) ≤ Σ_{i∈M} x_i + Σ_{j∈M′} y_j = w_B(M ∪ M′) ≤ w_{B∨B′}(M ∪ M′),

so all inequalities are tight and (x, y) ∈ C(w_{B∨B′}); the reverse inclusion follows since x_i + y_j ≥ max{b_ij, b′_ij} implies the core inequalities of both w_B and w_{B′}, and the grand coalitions have the same worth. As a consequence, since (x, y) is the nucleolus of w_B and w_{B′}, we obtain the equality δ^{B∨B′}_{S,T}(x, y) = δ^{B∨B′}_{T,S}(x, y), proving that B ∨ B′ ∈ [A]_ν. Now we show that this family is a compact set, and therefore it has a unique maximal element. We show that it is bounded and closed. It is bounded since 0 ≤ b_ij ≤ x_i + y_j for all (i, j) ∈ M × M′ and all B ∈ [A]_ν, where (x, y) = ν(A); closedness follows from the continuity of the nucleolus with respect to the matrix entries. □

In contrast with the previous result, the entry-wise minimum of two matrices with the same nucleolus may not have the same nucleolus, see the matrices in (10). Now we introduce a kind of assignment matrix useful for our purposes. A matrix A ∈ M+_{m×m′} is a valuation matrix [5] if for any i1, i2 ∈ {1, . . . , m} and j1, j2 ∈ {1, . . . , m′} we have a_{i1 j1} + a_{i2 j2} = a_{i1 j2} + a_{i2 j1}. Clearly this definition is equivalent to requiring that any 2 × 2 submatrix has two optimal matchings.
Obviously, any fully-optimal [6] square matrix is a valuation matrix, and for square matrices the converse also holds. This characterization fails for non-square matrices, as the matrix in (11) shows: it is a valuation matrix, but clearly not all of its matchings are optimal.
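The defining identity of a valuation matrix can be verified over all 2 × 2 submatrices; a minimal sketch (the function name and sample data are ours; note that any matrix of the additive form a_ij = u_i + v_j satisfies the identity):

```python
from itertools import combinations

def is_valuation(A):
    """Check a_{i1 j1} + a_{i2 j2} == a_{i1 j2} + a_{i2 j1} on every 2x2 submatrix."""
    return all(A[i1][j1] + A[i2][j2] == A[i1][j2] + A[i2][j1]
               for i1, i2 in combinations(range(len(A)), 2)
               for j1, j2 in combinations(range(len(A[0])), 2))

# Additive matrices a_ij = u_i + v_j are valuation matrices:
u, v = [3, 1, 4], [2, 0, 5, 1]
V = [[ui + vj for vj in v] for ui in u]
print(is_valuation(V), is_valuation([[1, 0], [0, 1]]))  # → True False
```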
Finally, we point out two general properties of non-square valuation matrices. Let A ∈ M+_{m×m′} be a non-square valuation matrix with m < m′, and let µ ∈ M*_A(M, M′) be any optimal matching. Then: (i) The square submatrix A_{M×µ(M)} is fully-optimal, and its worth is w_A(M ∪ M′).
(ii) The entries of matrix A satisfy a_{i j1} ≥ a_{i j2} for all i ∈ M, j1 ∈ µ(M) and j2 ∈ M′ \ µ(M).

Define now the matrix Â ∈ M+_{m×m′} by â_ij = x_i + y_j if j ∈ µ(M) and â_ij = x_i if j ∈ M′ \ µ(M), where (x, y) = ν(A), see (12). We claim: Â is the maximum of the family [A]_ν, and it is clearly a valuation matrix.

To prove claim (i), let A^0, M^0 and µ^0 be the notation introduced in (5) to make the initial matrix A square. We denote by (x^0, y^0) ∈ R^{M^0}_+ × R^{M′}_+ the vector defined by x^0_k = x_k if k ∈ M, x^0_k = 0 if k ∈ M^0 \ M, and y^0_k = y_k if k ∈ M′. We know ν(A^0) = (x^0, y^0), and then δ^{A^0}_{M,µ^0(M)}(x^0, y^0) = δ^{A^0}_{µ^0(M),M}(x^0, y^0), where we have used that y^0_j = y_j = 0 for j ∈ M′ \ µ(M). From the above equality we easily deduce the required lower bound on each x_i. Now take B ∈ [A]_ν; clearly â_ij = x_i + y_j ≥ b_ij for 1 ≤ i, j ≤ m.

[5] Following Topkis (1998), a function is a valuation if it is submodular and supermodular.
[6] A ∈ M+_{m×m′} is a fully-optimal matrix if all its matchings are optimal.
If m = m′ we are done, and B ≤ Â. Otherwise m < m′. Consider the matrix B^0, see (5). We know ν(B^0) = (x^0, y^0). Proceeding as before with the characterization of the nucleolus of B^0, we obtain b_ij ≤ x_i for all i ∈ M and j ∈ M′ \ µ(M), or equivalently b_ij ≤ â_ij also on the unmatched columns. This ends our third claim and proves the maximality of matrix Â, since we have seen that in the non-square case B ≤ Â. The fact that Â is a valuation matrix is left to the reader. □
From the proof of Theorem 2 we expect several valuation matrices when the initial assignment matrix is not square. In (11) we have introduced the matrix D ∈ M+_{3×5}, which is an example of such a situation. By (4), (8) and (9) we obtain the maximum element of [D]_ν, which is strictly greater than the valuation matrix D. Both valuation matrices share the same nucleolus.
In the proof of Theorem 2 we have found the expression of the maximum element of the family [A]_ν, with ν(A) = (x, y). It is the matrix Â ∈ M+_{m×m′} given by

â_ij = x_i + y_j if j ∈ µ(M), and â_ij = x_i if j ∈ M′ \ µ(M),    (12)

where µ ∈ M*_A(M, M′) is an optimal matching. A close look at (12) could raise expectations of different maximum matrices Â depending on the chosen optimal matching µ, but this is not the case, as is easy to check.
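As a quick illustration, the maximum element can be assembled directly from the nucleolus payoffs. The sketch below assumes the additive reading of (12), â_ij = x_i + y_j, with y_j = 0 for sellers unmatched by µ (so their columns reduce to x_i); it reuses the nucleolus of the earlier 2 × 5 example. Being an additive sum u_i + v_j, the result is automatically a valuation matrix.

```python
def max_element(x, y):
    """Candidate maximum matrix of [A]_ν built from ν(A) = (x, y):
    entry (i, j) is x_i + y_j, where unmatched sellers carry y_j = 0."""
    return [[xi + yj for yj in y] for xi in x]

# ν(A) = (7.5, 11.25; 0.5, 0.75, 0, 0, 0) from the second-approach example:
A_hat = max_element([7.5, 11.25], [0.5, 0.75, 0.0, 0.0, 0.0])
print(A_hat[0])  # → [8.0, 8.25, 7.5, 7.5, 7.5]
```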
The family [A]_ν is not, in general, a convex set. To see it, consider the matrices A and A′ of (10).

Theorem 3 Let A ∈ M+_{m×m′} be an assignment matrix. The family [A]_ν is a path-connected set: any matrix of the family can be joined to the maximum element Â by an increasing piecewise linear path inside [A]_ν.

Proof Consider first the square case, and let B ∈ [A]_ν with ν(A) = (x, y). Let ∆(B) denote the set of values δ^B_{S,T}(x, y) appearing in the characterization (3) of the nucleolus; these elements correspond to minima of certain numbers. The elements of ∆(B) can be ordered increasingly, δ^B_0 < δ^B_1 < . . . < δ^B_{r*}, and then ∆(B) = {δ^B_0, δ^B_1, . . . , δ^B_{r*}}.

From these parameters we can define a new matrix B^0 with the same nucleolus. We raise the worth of each entry b_ij to b^0_ij in such a way that x_i + y_j − b^0_ij is the largest element of ∆(B) not exceeding x_i + y_j − b_ij. It is clear that matrix B^0 has the same nucleolus as matrix B, since the equalities of the geometric characterization of the nucleolus have not changed, and therefore B^0 ∈ [A]_ν. Moreover ∆(B) = ∆(B^0). We may choose increasing linear paths from B to B^0, one for each entry to raise. Notice that, since we move up only entries that do not determine the distances of ∆(B), all matrices on these paths preserve the original nucleolus. If r* = 0 we are done. Otherwise, r* > 0. In this case, for all (i, j) ∈ M × M′ such that x_i + y_j − b^0_ij = δ^{B^0}_{r*}, raise linearly and simultaneously b^0_ij to b^1_ij defined by the equality x_i + y_j − b^1_ij = δ^{B^0}_{r*−1}. We obtain a new matrix B^1 ∈ [A]_ν. It is easy to see that ∆(B^1) ⊆ ∆(B^0) and ∆(B^1) ≠ ∆(B^0): we have reduced the set of distances related to the nucleolus. Once again, by (3) it is easy to see that ν(B^1) = (x, y), or equivalently B^1 ∈ [A]_ν. Now, in a finite number of steps, proceed sequentially raising all entries until x_i + y_j − b^{r*}_ij = 0 for all (i, j) ∈ M × M′. That is, matrix B^{r*} coincides with matrix Â in the square case, and in it all matchings are optimal.
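The raising procedure of the square case can be sketched computationally. This is an illustration only: here the distance set is simplified to the per-entry gaps x_i + y_j − b_ij, whereas the paper's ∆(B) is built from the transfers δ_{S,T}; the helper name and the sample data are ours.

```python
def raise_to_maximum(B, x, y, tol=1e-9):
    """Simplified sketch of the increasing path: repeatedly raise every
    entry sitting at the largest gap level x_i + y_j - b_ij up to the
    next smaller level, until all gaps vanish and B equals (x_i + y_j)."""
    B = [row[:] for row in B]
    steps = 0
    while True:
        gaps = sorted({round(x[i] + y[j] - B[i][j], 9)
                       for i in range(len(x)) for j in range(len(y))})
        if gaps[-1] <= tol:          # every entry is tight: B = (x_i + y_j)
            return B, steps
        top = gaps[-1]
        nxt = gaps[-2] if len(gaps) > 1 else 0.0  # matched pairs give gap 0
        for i in range(len(x)):
            for j in range(len(y)):
                if abs(x[i] + y[j] - B[i][j] - top) <= tol:
                    B[i][j] = x[i] + y[j] - nxt
        steps += 1

# Made-up data: allocation (2, 2; 2, 1), tight on the diagonal matching.
B_end, steps = raise_to_maximum([[4, 1], [4, 3]], [2, 2], [2, 1])
print(B_end, steps)  # → [[4, 3], [4, 3]] 1
```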
For the non-square case, we assume |M| < |M′|. Let B ∈ [A]_ν, and let µ ∈ M*_B(M, M′) be an optimal matching.
Notice first that matrix B can be modified without changing its nucleolus in the way described in (13). This new matrix, denoted by B̃, has the same nucleolus, and then B̃ ∈ [A]_ν. Indeed, matrix B̃ also has µ as an optimal matching, and then by definition it has the same square reduced matrix B^µ ∈ M+_m, i.e. (B̃)^µ = B^µ, see (7). It is easy to see that the relationship between matrices B̃ and (B̃)^µ is given by (8) and (9), with x^µ_i = x_i − b^µ_i for i ∈ M, and y^µ_j = y_j for j ∈ µ(M).
We can apply the previous procedure for square matrices to obtain an increasing piecewise linear path from (B̃)^µ to the maximum matrix of [(B̃)^µ]_ν. This path, applied to the submatrix B̃_{M×µ(M)}, see (13), induces by (8) and (9) the desired path in [A]_ν. This ends the proof. □
As a direct consequence of the above theorem, there is a continuum of elements in any family [A]_ν, A ∈ M+_m, except for the null matrices and certain 2 × 2 assignment matrices such as [k k; 0 0]. Notice that if matrix A is not the maximum element Â of [A]_ν, Theorem 3 makes this obvious. If m ≥ 3, A = Â and A is different from the null matrix, we can lower one nonzero entry and keep the same nucleolus, see (12). If m = 2 and A = Â, there are two optimal matchings; then, if one of the matchings has both entries positive, we can lower them equally by a small amount and keep the same nucleolus.