The role of adjacency matrix degeneracy in maximum entropy weighted network models

Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: a proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact, stressing how an accurate counting of the configurations compatible with given constraints is fundamental to building good null models for networks with integer-valued adjacency matrices constructed by aggregating one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctly different statistics, even when the same observables are matched to those of real data. We illustrate our findings by applying the formalism to three datasets using an open-source software package accompanying the present work, and demonstrate how such differences become clearly visible when measuring network observables.


I. INTRODUCTION
Network science [1] is a prime example of the multiple uses that many tools and methodologies extracted from traditional physics can have when applied to a variety of transdisciplinary problems.
The advent of the so-called Big Data era, driven by the explosion of ICT technologies, is providing researchers with unprecedentedly large datasets in a myriad of different fields, ranging from biology [2] to urban studies [3], including bibliometrics [4], chemical sciences [5] or even history [6], to cite a few. The current challenge is to extract knowledge and useful information from such enormous datasets. A standard approach is based on representing them as a graph. Network representation of data is especially useful due to its relative simplicity in terms of computational effort, visualization [7] and analysis. However, it presents some serious limitations, which forces researchers to look for innovative methodological tools.
One way to go beyond simple network representation is to generate appropriate null models. They must be flexible and reliable enough to compare our original data to in the search for statistically relevant patterns. In general, this is not a simple task: data processing is tricky and subtle in many situations, and it may lead to wrong conclusions based on a poor understanding of the problem under study. A clever strategy to find efficient null models consists in generating randomized instances of a given network while keeping some quantities constant [7,8]. This can be done by algorithmic randomization of graphs [9], but such a procedure can be costly in terms of computational time (especially for dense datasets) and programming difficulty. Most importantly, "rewiring techniques" do not always generate an unbiased sampling of networks [10].
A different approach to this problem has its roots in the analogy of networks with classical statistical mechanics systems [11-15], though it was originally proposed by sociologists and also by urban planners [16] under the name of exponential random graphs [17]. It is based on the idea of constructing an ensemble of networks with different probabilities of appearance, which on average fulfil the considered constraints. The advantage of these methods is that they consider the possibility of having fluctuations (as usually happens in real data) and their derivation is completely analytical. Furthermore, such methods provide an easy way of rapidly simulating (and averaging) network instances belonging to a given ensemble. So far, this kind of methodology has been successfully developed for different types of monolayered networks [13,18-20], directed [14] and bipartite structures [21], stochastic block models [22], and some multiplex weighted networks [4].
Recently, there has been growing interest in studying more complex mathematical structures [23,24] to account for the inherent multi-layered character of some network systems. This calls for the development of maximum entropy ensembles with a multi-layered perspective [4], which will help in the analysis of real-world datasets. This is the main goal of this work. In this paper, we complement previous work on maximum entropy weighted networks by considering systems of aggregated multiplexes, where we have information about the layered structure of the system and the nature of its events but, as usually happens for real data, we only have access to its accumulated structure (for each pair of nodes, the sum over layers of the weights connecting the two nodes). We show how event and layer degeneracy induce important differences in the statistics obtained for each case, recovering the previously studied mono-layered cases when the number of layers is set to unity. We further show that, despite the statistics being different, all the cases considered are examples of maximum likelihood networks of the dual problem [25], yet they yield different expectations for network quantities, highlighting the need to choose an appropriate null model for each case study based on the weighted character of the networks.
In Section II we present the mathematical framework and the calculation of degeneracy for maximum entropy networks with arbitrary constraints. Section III extends the calculation to obtain explicit statistics for a very general class of constraints in each of the cases considered. Finally, Section IV presents an application of the model, for the particular case of fixed strengths, to the analysis of three real-world datasets. Extended mathematical calculations are provided in the Appendix; details on the datasets used, the measured quantities and the numerical methods are given in the Supplementary Material (SM) accompanying this work [26], which also includes a package for public use to apply the proposed models [27].

II. MAXIMUM ENTROPY CONSTRAINED GRAND-CANONICAL NETWORK ENSEMBLES WITH INTEGER WEIGHTS
We consider a representation of a network of N nodes based on an adjacency matrix T composed of positive integer-valued entries t_{ij} ∈ N, which we call occupation numbers. Each of these entries accounts for the intensity of the interaction between a given pair of nodes i and j in the network, measured in terms of discrete events (which may be trips between locations in a mobility network or messages between users in a social network, for instance). We study the case of directed networks with self-loops, although the undirected case follows from the derivation. Our objective is to fully determine the grand canonical ensemble of networks [11,20,28] which fulfil on average some Q+1 given constraints {T ≡ \sum_{ij} t_{ij}, C_q(T)}. The total number of events T = \sum_{ij} t_{ij} determines the sampling of the network, and is the minimal required constraint to consider any ensemble under the present framework [20]. In this paper, we examine the problem where constraints take the form of linear combinations of functions of the individual occupation numbers t_{ij},

C_q(T) = \sum_{ij} f_q^{ij}(t_{ij}).    (1)

To completely determine an ensemble, it is not enough to specify the quantities we wish to fix (the constraints given by the original data); we must also define the statistical nature of the events allocated to the occupation numbers t_{ij}. In other words, we have to count all the network instances which give rise to the same particular configuration of the adjacency matrix T. This degeneracy term D(T) depends solely on the specifics of the system one represents, and counts the number of equivalent (micro)states that a particular unique realization of the adjacency matrix T can describe.
Once the grand canonical ensemble is fully constructed, the probability to obtain a particular configuration of occupation numbers T reads

P(T) = \frac{D(T) e^{H(\{\theta_q\}, T)}}{Z},    (2)

where H({θ_q}, T) = θ_T T + \sum_q θ_q C_q(T) and (θ_T, {θ_q}) are Lagrange multipliers. The so-called grand canonical partition function Z = \sum_{\{T\}} D(T) e^{H(\{\theta_q\},T)} must be summed over all the possible configurations of the adjacency matrix T, regardless of whether the proposed constraints are met. Such a probability with exponential form [17] is obtained by maximizing the Shannon entropy S = -\sum_{\{T\}} P(T) \ln P(T) associated to the ensemble while preserving the Q + 1 constraints on average.
Using Equation (1) we reach

Z = \sum_{\{T\}} D(T) \prod_{ij} e^{\theta_T t_{ij} + \sum_q \theta_q f_q^{ij}(t_{ij})}.    (3)

Let us notice that if the degeneracy term factorizes, i.e., D(T) ∝ \prod_{ij} D_{ij}(t_{ij}), the partition function can be re-expressed as

Z = \prod_{ij} Z_{ij}, \qquad Z_{ij} = \sum_{t_{ij}=0}^{\infty} D_{ij}(t_{ij}) e^{\theta_T t_{ij} + \sum_q \theta_q f_q^{ij}(t_{ij})},    (4)

where the statistics of T are given by a set of independent random variables corresponding to the occupation numbers {t_{ij}}. Whenever one defines the degeneracy term and is able to sum the individual partition functions Z_{ij}, one obtains the explicit statistics of the occupation numbers. The values of the Lagrange multipliers (θ_T, {θ_q}) associated to the Q + 1 constraints (which can also be understood as a posteriori hidden variables [29,30]) are obtained by solving the so-called saddle point equations

\langle C_q \rangle = \frac{\partial \ln Z}{\partial \theta_q} = \hat{C}_q, \qquad \langle T \rangle = \frac{\partial \ln Z}{\partial \theta_T} = \hat{T},    (5)

where {Ĉ_q} are the values of the quantities one wants to keep fixed (on average) in the ensemble. The degeneracy terms are in general subtle to compute and, to the best of our knowledge, are seldom considered in the literature. In order to calculate them, however, we need to make assumptions about the type of networks under study and their elements. In this work, we consider systems composed of events that are either distinguishable or indistinguishable. Additionally, we study the general representation of an overlay multiplex network, which is obtained by aggregating several layers of a system into a single (integer-weighted) adjacency matrix [23,24]. Examples of such networks range from aggregations of transportation layers [31], networks generated by accumulating information over a certain time span such as Origin-Destination matrices [3], email communications [32] or human contacts [33], and even aggregations of trading activities in different sectors such as the World Trade Network [34].
Thus the system under consideration is an aggregation of M network layers containing the same type of events: they can be either a group of layers composed of distinguishable (Multi-Edge, ME) or indistinguishable (Weighted, W) events, or even an aggregation of Binary (B) networks. The occupation numbers corresponding to layer m are denoted t^m_{ij}, but we only have access to information about their accumulated value across all the layers, i.e., the aggregated occupation numbers t_{ij} = \sum_m t^m_{ij}. Finally, the degeneracy term is the product of the multiplicity induced by the nature of the events times that induced by the nature of the layers (which in any realistic scenario are always distinguishable), D(T) = D(T)_{Events} D(T)_{Layers}. This last term is computed (for each pair of nodes, or state, ij) by counting the number of different groupings one can construct by splitting the t_{ij} = \sum_m t^m_{ij} (distinguishable or indistinguishable) aggregated events into M different layers while respecting the occupation limitation of the considered events: either at most one event per layer (Binary networks) or an unrestricted number (Weighted and Multi-Edge networks).

TABLE I. Degeneracy terms for each network type (see Appendix D).

Network Type      D(T)_{Events}                    D(T)_{Layers}
Multi-Edge (ME)   T! / \prod_{ij} t_{ij}!          \prod_{ij} M^{t_{ij}}
Weighted (W)      1                                \prod_{ij} \binom{t_{ij}+M-1}{M-1}
Binary (B)        1                                \prod_{ij} \binom{M}{t_{ij}}

The resulting degeneracy terms are shown in Table I (see details in Appendix D), from which one can see that in principle the event degeneracy term does not factorize for the distinguishable cases, due to the presence of the global term T! = (\sum_{ij} t_{ij})!. One can nevertheless obtain an effective degeneracy term by substituting it by T̂! (a constant), as shown in Appendix D, where a complete discussion of the implications of this substitution for the different cases is provided. This leads to results fully equivalent to those obtained by performing the exact calculation for the Multi-Edge case with constraints of the form (6). In doing so, two preliminary conclusions can be drawn. Firstly, both the distinguishable and indistinguishable binary cases lead to the same statistics, since their event degeneracy term is constant (hence in the remainder of the paper we omit the case BD). Secondly, in all the cases the complete degeneracy terms factorize into independent terms per state ij, which means that the aggregated occupation numbers are statistically independent across states (Equation (4)).
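The counting rules behind Table I can be checked numerically. The following sketch (hypothetical helper names, assuming the urn-model counting rules detailed in Appendix D) computes D(T) for a toy aggregated matrix under each model:

```python
from math import comb, factorial, prod

def degeneracy(T, M, kind):
    """Degeneracy D(T) = D_events * D_layers for an integer adjacency
    matrix T (list of rows) aggregated over M distinguishable layers,
    following the counting rules of Appendix D / Table I."""
    flat = [t for row in T for t in row]
    total = sum(flat)
    if kind == "ME":    # distinguishable events, unrestricted occupancy
        d_events = factorial(total) // prod(factorial(t) for t in flat)
        d_layers = prod(M ** t for t in flat)
    elif kind == "W":   # indistinguishable events: stars and bars per pair
        d_events = 1
        d_layers = prod(comb(t + M - 1, M - 1) for t in flat)
    elif kind == "B":   # at most one event per layer: choose t of M layers
        d_events = 1
        d_layers = prod(comb(M, t) for t in flat)
    return d_events * d_layers

T = [[2, 0], [1, 3]]
print(degeneracy(T, 2, "ME"))  # 3840 = (6!/(2!1!3!)) * 2^6
print(degeneracy(T, 2, "W"))   # 24
print(degeneracy(T, 2, "B"))   # 0 (infeasible: t_ij = 3 > M = 2)
```

Note how the B degeneracy vanishes whenever some t_{ij} exceeds M, anticipating the feasibility condition discussed in Section IV.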

III. LINEAR CONSTRAINTS ON AGGREGATED OCCUPATION NUMBERS
To proceed further in our derivation, we now consider the case where the constraints are linear functions of the aggregated occupation numbers,

C_q(T) = \sum_{ij} c_q^{ij} t_{ij}.    (6)

Such a case is very generic and includes networks with local constraints on nodes [35], community constraints [18] and generalized cost constraints such as distances [36]. The case where the constraints depend on both the binary projection of the occupation numbers and their values, f_q^{ij}(t_{ij}) = c_q^{ij} t_{ij} + \bar{c}_q^{ij} \Theta(t_{ij}), can be derived from the methodology developed here and is analyzed in Appendix A.
The individual partition functions can be summed, where we have redefined z_{ij} ≡ e^{\theta_T} \prod_q^Q e^{\theta_q c_q^{ij}} to ease notation:

Z_{ij}^{ME} = e^{M z_{ij}}, \qquad Z_{ij}^{W} = (1 - z_{ij})^{-M}, \qquad Z_{ij}^{B} = (1 + z_{ij})^{M}.    (7)

This leads to

p_{ij}^{ME}(t) = \frac{(M z_{ij})^t}{t!} e^{-M z_{ij}}, \quad p_{ij}^{W}(t) = \binom{t+M-1}{M-1} z_{ij}^t (1-z_{ij})^M, \quad p_{ij}^{B}(t) = \binom{M}{t} \left(\frac{z_{ij}}{1+z_{ij}}\right)^t \left(\frac{1}{1+z_{ij}}\right)^{M-t}.    (8)

And we recover well-known probability distributions: the Poisson distribution for the Multi-Edge case [20] (independent of the number of layers M), the Negative Binomial for the Weighted case (with the Geometric distribution [19] as the special case M = 1) and the Binomial distribution for the aggregated Binary case (with the Bernoulli distribution [11] as the special case M = 1).
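As an illustration, the three aggregated statistics can be evaluated numerically. The sketch below (hypothetical function names; the mean parametrizations ⟨t⟩ = Mz, Mz/(1−z) and Mz/(1+z) are our reading of the statistics above, stated as assumptions rather than taken verbatim from Table II) computes the probability mass functions of Equation (8) and their means:

```python
from math import comb, exp, factorial

def pmf(t, z, M, kind):
    """Occupation-number statistics of Eq. (8) for a node pair with
    a-posteriori intensity z, aggregated over M layers (assumed
    parametrization)."""
    if kind == "ME":   # Poisson with mean M*z (layer factor absorbed)
        mu = M * z
        return mu ** t * exp(-mu) / factorial(t)
    if kind == "W":    # Negative Binomial; Geometric when M = 1
        return comb(t + M - 1, M - 1) * z ** t * (1 - z) ** M
    if kind == "B":    # Binomial; Bernoulli when M = 1
        p = z / (1 + z)
        return comb(M, t) * p ** t * (1 - p) ** (M - t)

def mean(z, M, kind, tmax=100):
    upper = M + 1 if kind == "B" else tmax  # B support ends at t = M
    return sum(t * pmf(t, z, M, kind) for t in range(upper))

# The three ensembles allocate the same intensity z very differently:
z, M = 0.4, 5
print(mean(z, M, "ME"))  # ~ M z = 2.0
print(mean(z, M, "W"))   # ~ M z / (1 - z) = 3.33...
print(mean(z, M, "B"))   # ~ M z / (1 + z) = 1.43...
```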
The resulting statistics show some important features. On the one hand, one sees that although the degeneracy term for Multi-Edge networks differs between the monolayer and multilayer cases, the form of the obtained statistics does not. This means that it is not possible to distinguish a Multi-Edge monolayered network from an aggregation of multiple Multi-Edge layers belonging to an ensemble with the same constraints. On the other hand, the situation for the other cases changes: for multiplexes the resulting occupation numbers have different statistics from the monoplex case. This implies that one could in principle discern the aggregated nature of a network by inspection of its accumulated edge statistics {t_{ij}}, provided that one has access to enough realizations of the system and that it belongs to the same ensemble (i.e., the system evolves according to some given, even if unknown, linear constraints [3] of the form of Equation (6)). Another important implication of the obtained statistics is the very different interpretations encoded in the values z_{ij}. This collection of values is related to the constraints originally imposed on the network ensemble through the set of Lagrange multipliers (θ_T, {θ_q}) (Equations (3) and (5)) and can be understood as a posteriori measures related to the intensity of each node pair ij. These measures encode the correlations between nodes imposed by the constrained topology (note that for local constraints at the level of nodes only, we obtain the factorization ⟨t_{ij}⟩ = M x_i y_j). Table II reports the first two moments of each distribution. For the Multi-Edge case, z_{ij} is directly mapped both to the average occupation of the considered link ij, ⟨t_{ij}⟩, and to its (relative) importance in the network [20]. In all the other cases, however, z_{ij} relates to the probability that an event emerging from a given node is allocated to the link ij.
Obviously, as ⟨t_{ij}⟩ grows, z_{ij} grows in all cases, but not in the same linear way (in the W case, for instance, z_{ij} is bounded by a maximum value of 1). This means that while in all cases z_{ij} is related to the importance of a given link with respect to the others, the dependency in all non-ME cases is highly non-linear. Finally, we can see that for a large number of layers M ≫ T̂/N², the ensembles become equivalent to the ME one, as the degeneracy term on (distinguishable) layers dominates the configuration space of the ensembles.
The different obtained statistics are highly relevant, as their marked differences point to a (regularly overlooked) problem: different maximum entropy ensembles yield very different statistics for the same considered constraints, and hence each dataset needs to be analyzed carefully, since the process behind the formation of each network dictates its degeneracy term. Furthermore, all the obtained statistics are derived from a maximum entropy methodology, and hence the values z_{ij} obtained from (5) are in all cases maximum likelihood estimates for the probability of T belonging to the set of models described by Equation (8) (see Appendix B). Thus, any of the presented models will be a correct ensemble in a maximum likelihood sense [25] for some given constraints, and the appropriate choice for each network representation depends on the system under study, in contrast to the interpretation given in [35].
This means that if one wants to assess the effects a given constraint has on a network constructed from real data, one needs to choose very carefully the appropriate null model to compare the data to. It is also worth pointing out that most of these ensembles are not equivalent to a manual rewiring of the network [37] (albeit one expects small differences, see Appendix C). However, maximum entropy models allow for an analytical treatment of the problem and simplify the generation of network samples when the considered constraints become increasingly complicated (both at the coding level and at the computational one). This has many implications, including the possibility of computing p-values, information-theoretic quantities such as ensemble entropies [13-15,18,22] or model likelihoods, as well as efficient weighted network pruning algorithms [38,39]. Moreover, this procedure helps in the fast and simple generation of samples of networks with prescribed constraints.
The main difficulty of the soft-constrained maximum entropy framework presented here for null model generation is the problem of solving the saddle point equations (5) associated to each ensemble. With the exception of some particular cases [37], these equations do not have an analytical solution and must be solved numerically. In such a case, the best approach is to maximize the log-likelihood of each model given a set of observations (constraints), yet the difficulty of each problem increases with the number of constraints, since each fixed quantity has an associated variable to be solved for. Considering the different statistics obtained in this paper, the most difficult case by far is the Weighted one (W), since the condition 0 ≤ z_{ij} < 1 imposes a non-convex constraint on the domain of the log-likelihood function to maximize, while the other cases are in general easily solved using iterative balancing algorithms (see the SM for an extended discussion and details on the numerical methods used).

IV. APPLICATION TO REAL DATA: THE CASE OF FIXED STRENGTHS
To highlight the importance of considering an appropriate null model for the assessment of real data features, in this final section we consider the case of networks with a fixed strength sequence. Real networks usually display highly skewed node strength distributions, which have important effects on their observables. Hence, to correctly assess whether some observed feature in a dataset can be explained solely by the strength distribution, it is crucial to choose an appropriate null model to compare the data to. This situation is especially important, for instance, with regard to community analysis through modularity maximization for weighted networks, because the modularity function to be optimized [18] needs as input a prediction from a null model with fixed strengths, Q ∝ \sum_{i,j} (t_{ij} - \langle t_{ij} \rangle) \delta_{c_i,c_j}, where {c} are the community node labels associated to the optimal network partition. For a directed model with fixed strengths, the constraints in Equation (1) take the form of the out- and in-strength sequences,

s_i^{out} = \sum_j t_{ij}, \qquad s_j^{in} = \sum_i t_{ij},

so the resulting saddle point equations (5) are

\langle s_i^{out} \rangle = \hat{s}_i^{out}, \qquad \langle s_j^{in} \rangle = \hat{s}_j^{in},

where ŝ denotes the numerical value found in the data for the random variable s. Writing z_{ij} = x_i y_j, these equations particularize to each case as

ME: \sum_j M x_i y_j = \hat{s}_i^{out}; \quad W: \sum_j \frac{M x_i y_j}{1 - x_i y_j} = \hat{s}_i^{out}; \quad B: \sum_j \frac{M x_i y_j}{1 + x_i y_j} = \hat{s}_i^{out},

and analogously for the in-strengths. The ME case has an analytical solution, ⟨t_{ij}⟩ = ŝ_i^{out} ŝ_j^{in} / T̂ [37], while the others must be solved computationally. The Supplementary Material (SM) provides extended details about the network quantities computed, the simulations, averaging and computational methods, and the algorithms used in this section, which are available in the freely provided, open-source ODME package [27].
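For the ME case, the saddle point equations above reduce to a bipartite balancing problem whose fixed point coincides with the analytical solution. A minimal sketch of such an iterative balancing scheme follows (hypothetical helper names; not the actual ODME implementation [27]):

```python
def balance(s_out, s_in, iters=1000):
    """Iterative balancing (Sinkhorn-style) sketch for the ME saddle
    point equations with fixed strengths: find x_i, y_j such that
    <t_ij> = x_i * y_j matches both strength sequences on average
    (self-loops allowed; the layer factor M is absorbed into x, y)."""
    n = len(s_out)
    x, y = [1.0] * n, [1.0] * n
    for _ in range(iters):
        sy = sum(y)
        x = [s / sy for s in s_out]   # enforce out-strengths
        sx = sum(x)
        y = [s / sx for s in s_in]    # enforce in-strengths
    return x, y

s_out, s_in = [10.0, 4.0, 2.0], [7.0, 6.0, 3.0]   # same total T = 16
x, y = balance(s_out, s_in)
t = [[xi * yj for yj in y] for xi in x]
T = sum(s_out)
# The fixed point reproduces the analytical ME solution
# <t_ij> = s_out_i * s_in_j / T [37]:
print(abs(t[0][0] - s_out[0] * s_in[0] / T) < 1e-9)  # True
```

For the W and B cases the per-pair terms are non-linear in x_i y_j, so the same alternating scheme needs an inner one-dimensional solve per node, but the overall structure is unchanged.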
As real-world datasets we use a snapshot of the World Trade Network (WTN), the OD matrix generated by taxi trips in Manhattan in 2011 [3,40], and the multiplex European Airline Transportation network [31]. The WTN has been extensively studied in the literature and has recently been represented as an aggregated system of M ∼ 100 layers [41] representing different types of traded commodities. In this network, nodes represent countries and weights represent the amount of trade between them, measured in millions of US dollars. In the OD Taxi dataset, which we construct as the aggregation of M = 365 daily layer snapshots, each node represents an intersection and each weight the number of trips recorded between them [42]. Finally, in the airline network each node is an airport and weights correspond to the number of airlines providing direct connections between them, so the network is an aggregation of M = 37 binary layers (one for each airline).
In all cases we consider directed networks, and throughout this paper we only show results in the outgoing direction, as the results in the incoming direction are qualitatively equal. Note that the aggregated binary case cannot always be applied, since the maximum number of events allocated per node pair cannot exceed the number of layers; for the WTN dataset this condition (max({ŝ_i}) ≤ s_max = N M) is violated for some nodes.
To analyze the differences between models, we compute ensemble expectations for different edge- and node-related properties, suitably rescaled (fixing the original strength distribution of each dataset), and then compare the obtained results with the observed features of the real data.
The airline dataset is very sparse and the differences between models are not wide, which points to the need for adequate sampling for a successful analysis of weighted networks. In any case, since it is by construction an aggregation of binary layers, the B case displays the closest resemblance to the data, both qualitatively and quantitatively (see SM).
The WTN and Taxi datasets, in contrast, contain enough sampling for the wide differences between models to emerge. All cases have the same number of events T̂ on average, but these are not distributed among connections between nodes in the same way for the different models. Since zero is the most probable value for the Geometric distribution, for the W case with a single layer the connection probability initially grows distinctly faster than in all the other cases, leading to a larger number of binary connections between low-strength nodes (Figure 1-A,B). Yet the higher relative fluctuations of the Geometric statistics also generate extremely large maximum weights in the tail of the existing occupation number distribution (Figure 2), which are concentrated in connections between high-strength nodes (Figure 1-C,D). Since the total number of events incoming to and outgoing from each node is fixed, this means that the W case has comparatively the lowest degrees for the most weighted nodes, despite displaying the largest number of binary connections E = \sum_{ij} \Theta(t_{ij}) = \sum_i k_i^{out} = \sum_j k_j^{in}, as can be seen in Figure 3-A,B.
These anomalies for low- and high-strength nodes in the W case produce strong asymmetries in the allocation of weights per node, which can be studied by measuring the disparity Y_2(i) = \sum_j t_{ij}^2 / (\sum_j t_{ij})^2 (Figure 3-C,D), which quantifies how homogeneously the weights emerging from each node are distributed: it displays a U-shaped form, with both low- and high-strength nodes tending to concentrate their weights very strongly on a few connections. This non-monotonic behaviour is in strong contrast with the one observed for the real data and usually in other datasets [4]. Concerning second-order node correlations, the outgoing weighted average neighbor strength s_{nn}^{w}(i) = \sum_j t_{ij} s_j^{in} / s_i^{out} (Figure 4) again displays a large range of variation for the W case (with either one or more than one layer), in contrast with the slightly assortative profile of the real data, the uncorrelated profile of the ME case and the slightly disassortative trend of the B case. This last case is caused by the combination of two factors: the limitation on the maximum weight of the edges cannot compensate (with large weights connecting the nodes with the largest strengths) for the tendency of large nodes to be connected to a macroscopic fraction of the network, which is dominated by low-strength nodes.
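For reference, the two node observables used here can be computed directly from an integer adjacency matrix. A minimal sketch (hypothetical helper names):

```python
def disparity(T, i):
    """Y2(i) = sum_j t_ij^2 / (sum_j t_ij)^2: equals 1/k when the weights
    are evenly spread over k neighbours, and 1 when one link dominates."""
    s = sum(T[i])
    return sum(t * t for t in T[i]) / (s * s) if s else 0.0

def avg_nn_strength(T, i):
    """Outgoing weighted average neighbour in-strength:
    s^w_nn(i) = sum_j t_ij * s_in(j) / s_out(i)."""
    n = len(T)
    s_in = [sum(T[k][j] for k in range(n)) for j in range(n)]
    s = sum(T[i])
    return sum(T[i][j] * s_in[j] for j in range(n)) / s if s else 0.0

T = [[0, 4, 4],
     [1, 0, 1],
     [6, 0, 0]]
print(disparity(T, 0))        # 0.5: weight evenly split over 2 links
print(disparity(T, 2))        # 1.0: all weight on a single link
print(avg_nn_strength(T, 0))  # 4.5
```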
Obviously, none of the null models used reproduces the real data; however, the goal in model construction is rather to assess the structural impact that a given constraint (in this case a strength distribution) has on the network observables. In this sense, we show that different models provide very different insights about such impacts. In particular, since the Airline dataset is by construction an aggregated binary network and the Taxi dataset a Multi-Edge one (people riding taxis are clearly distinguishable), the fact that the B and ME cases, respectively, lie closer to the real data comes as no surprise. The WTN case, however, is unclear: the modelling of trade transactions does not have a clearly defined nature, but if one assumes the WTN to be a multilayered network, its aggregated analysis should be performed using either the W (with M > 1 layers) or the ME case, which again are closer both in functional form and in qualitative values to the real case (in contrast with [41], where the W model with a single layer is used).

V. CONCLUSIONS
In this work we have shown the importance of considering the nature of the events one wishes to model when using an integer-valued network representation of a system. We have developed and solved a maximum entropy framework for model generation applied to networks generated by aggregation [23,24] of multiplexes. We have shown how different considerations about the nature of the events generating the elements of the multiplex give rise to distinctly different node pair statistics. For the case where one wants to fix properties expressed as linear functions of the individual weights (and optionally their binary projection) in the network, we recovered well-known statistics such as the Poisson, Binomial, Negative Binomial, Geometric and Bernoulli distributions for each case.
We have further provided a practical example by focusing on the case of fixed strengths and applying the models to assess relevant features of three different real-world datasets containing different types of weights, showing how adjacency matrix degeneracy plays a crucial role in model construction. To this end, we have discussed the statistical nature of the obtained models as well as the weaknesses and strengths derived from their practical application in real cases. Finally, we provide the open-source software package ODME [27] for practitioners to apply the proposed models to other datasets.
The insights derived from this paper can open the door to the objective identification of truly multiplex structures (except for the Multi-Edge case, for which this has been shown to be impossible) by inspection of the statistics of their edges, provided that several instances of a network belonging to the same ensemble are available.
The take-home message of this work is that in order to perform a meaningful analysis of a given network, a practitioner needs to select an appropriate null model, which depends not only on the endogenous constraints one considers but also on the very nature of the process one is modelling. Our work provides researchers with a range of maximum entropy (and maximum likelihood) models to choose from, covering a wide spectrum of possibilities for the case of weighted networks. None of these models is wrong, or even right, in a general case, despite yielding very different predictions for the same sets of constraints; each is just more or less appropriate depending on the problem at hand.
Appendix A: Binary and linear constraints on occupation numbers

We develop here the general case where the constraints have the form f_q^{ij}(t_{ij}) = \bar{c}_q^{ij} \Theta(t_{ij}) + c_q^{ij} t_{ij}. The general derivation remains essentially unchanged from the linear case (main text), with a slight modification in the calculation of the explicit partition function, where the {z_{ij}} have been redefined and the constraint on the total number of events T = \sum_{ij} t_{ij} is introduced in their redefinition. This yields the zero-inflated (ZI) versions of the statistics recovered in the case f_q^{ij} = c_q^{ij} t_{ij}, that is, asymmetric statistics where the probability of the first occurrence is different from the rest. Note that in this very general case one can always set {c_q^{ij} = 0 ∀ij} to include the case where only binary constraints are considered. Explicitly, for the different statistics, we obtain the ME (ZIP, Zero-Inflated Poisson) case and its W and B counterparts. We can then compute the binary connection statistics; note how the binary projection in all cases corresponds to Bernoulli statistics. Regarding the occupation number statistics, we observe a clear non-trivial relation between binary statistics and weights, which leads to important correlations between degrees and strengths in networks belonging to these ensembles [20], correlations which are also present in real data [43].

Appendix B: Maximum likelihood distributions
The probability distributions derived in this paper for networks belonging to the different described ensembles fulfil the maximum likelihood principle for networks [25]. Indeed, the saddle point equations in (5) can be understood as those resulting from the inverse problem of finding the set of values of the Lagrange multipliers (θ_T, {θ_q}) that maximize the likelihood of the observed adjacency matrix T̂ under each of the described models. In other words, defining the log-likelihood function of a network as

\mathcal{L}(\theta_T, \{\theta_q\}) = \ln P(\hat{T}),

and maximizing this expression with respect to (θ_T, {θ_q}), one is led to Equation (5). Explicitly,

\frac{\partial \mathcal{L}}{\partial \theta_q} = \hat{C}_q - \langle C_q \rangle \equiv \Delta C_q,

which at the critical points leads to ΔC_q = 0 ∀q, and the condition of a maximum with respect to all the variables is fulfilled (we also note that the problem in this form is concave). We thus see that the initial statement of the paper is confirmed: it is not enough to specify the constraints to fully define a maximum entropy ensemble; one needs also to state the nature of its elements, since any maximum entropy ensemble will lead to a maximum likelihood description of a dataset. There is not a "correct" ensemble for fixing a given constraint, but rather one that better describes the system being represented.
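The mechanism can be illustrated in the simplest possible setting: a single occupation number with Poisson (ME) statistics, constrained only by its observed value. The log-likelihood is maximized exactly where ⟨t⟩ matches the observation (toy sketch, hypothetical names):

```python
from math import exp, factorial, log

def loglik(theta, t_obs):
    """Log-likelihood of one observed count under the single-pair ME
    model p(t) = z^t e^{-z} / t! with z = e^theta (toy example)."""
    z = exp(theta)
    return t_obs * log(z) - z - log(factorial(t_obs))

t_obs = 7
# Scan around theta = ln(t_obs): the maximum sits where <t> = z = t_obs,
# i.e. where the saddle point equation (5) is satisfied.
thetas = [log(t_obs) + d for d in (-0.2, -0.1, 0.0, 0.1, 0.2)]
vals = [loglik(th, t_obs) for th in thetas]
print(max(range(5), key=lambda k: vals[k]))  # 2, i.e. theta = ln(7)
```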

Appendix C: Ensemble fluctuations
If the maximum entropy description provided here is to be successful, the fluctuations of the obtained networks need to be bounded and have well-defined statistics. In particular, we require that the associated entropy of the distribution and the statistics of the constraints possess finite first and second moments, and that their relative fluctuations around the average values be small in the limit of large sampling T̂. Explicitly, the relative fluctuations of the constraints take a common form parametrized by a constant a, with a = 0 for the ME case, a = 1 for the W case and a = −1 for the B case. We thus see that the fluctuations only disappear for large sampling in the linear case given by Equation (6) for the ME description. By construction, the constraints are extensive in the occupation numbers t_{ij}; thus when the number of events T grows, their average value ⟨C_q⟩ must also grow [20], yet only for the ME case do we have ⟨t_{ij}⟩ ∝ T̂, and thus only in this case do relative fluctuations decay as T̂^{-1}. Otherwise, the maximally random allocation of events will be made as homogeneous as possible among the states while preserving the constraints; hence α_q will in general be a large number (the denominator in the sum has L terms while the numerator has L(L − 1), L being the number of available node pairs for the allocation) and relative fluctuations will be bounded and O(M^{-1}). For similar reasons, one expects the first term to vanish for large sampling. For a very large number of layers, the ensembles become equivalent to the ME case, and fluctuations vanish in the large sampling limit [37]. For the case where a binary constraint is additionally imposed (Appendix A), the relative fluctuations of the binary structure dominate the statistics in the large sampling limit and, despite being bounded, never vanish [20].
Concerning the associated Gibbs-Shannon entropy of the ensembles: since the occupation number statistics are independent, the random variable $\ln P(\mathbf{T}) = \ln \prod_{ij} p_{ij}(t_{ij}) = \sum_{ij} \ln p_{ij}(t_{ij})$ (associated with a given network instance) is a sum of independent contributions which, in all cases studied (Poisson, Negative Binomial and Binomial), have well defined first and second moments when averaged over the ensemble. Hence the statistics of $\ln P(\mathbf{T})$ will be Gaussian, and no extreme outliers are expected. This indicates that the total average number of possible network instances compatible with a given set of constraints is a well defined quantity, and one can define a typical network structure representing the ensemble (unlike other cases studied in the literature [44]).
Appendix D: Calculation of degeneracy terms

1. Layer degeneracy

For each state $ij$ out of the $L(N) = N^2$ possible node pairs ($N(N-1)$ if self-loops are not accepted) one needs to consider the process of allocating $t_{ij}$ events among $M$ distinguishable layers. For the W case this corresponds to the urn problem of placing $t_{ij}$ identical balls in $M$ distinguishable urns. For the B case one faces the problem of selecting groups of $t_{ij} \leq M$ urns out of a set of $M$ urns, and finally for the ME case one must count the ways to place $t_{ij}$ distinguishable balls in $M$ distinguishable urns. These problems are well known, and their solutions lead to the second column of Table I, with the product over $ij$ representing the fact that the allocation among the layers is independent for each node pair.
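The three urn problems above have standard closed-form solutions (the entries of the second column of Table I). As an illustrative check, a minimal sketch in Python; the function name is ours, not part of the ODME package:

```python
from math import comb

def layer_degeneracy(t, M, case):
    """Number of ways to allocate t events of one node pair among M layers.

    W : t identical balls into M distinguishable urns  -> C(t+M-1, M-1)
    B : choose which t of the M layers carry one event -> C(M, t)  (zero if t > M)
    ME: t distinguishable balls into M urns            -> M**t
    """
    if case == "W":
        return comb(t + M - 1, M - 1)
    if case == "B":
        return comb(M, t)  # math.comb returns 0 when t > M, as required
    if case == "ME":
        return M ** t
    raise ValueError(case)

# e.g. t = 3 events, M = 2 layers
print(layer_degeneracy(3, 2, "W"))   # 4
print(layer_degeneracy(3, 2, "B"))   # 0 (3 events cannot fit in 2 binary layers)
print(layer_degeneracy(3, 2, "ME"))  # 8
```

The full degeneracy of an adjacency matrix is the product of these factors over all node pairs, reflecting the independence noted above.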

Event degeneracy:
For this calculation one only needs to take into account the distinguishable case (otherwise there is no degeneracy). Such a case, however, is controversial to analyze. The correct counting of configurations in a Grand-Canonical ensemble is an issue spanning more than a century (see [45] and references therein for details and extended discussion), ever since Gibbs used it to establish the relation in classical statistical mechanics between the Canonical and Grand-Canonical ensembles of an ideal gas.
Grand-Canonical ensembles of networks can be constructed in many ways. The usual view is to imagine a collection of $\mathcal{N}$ copies of the system among which $F$ events are distributed, in such a way that there are $\bar{T} = F/\mathcal{N}$ events on average in each copy [46]. In this framework, one writes down the probability to obtain a particular copy with $T = \sum_{ij} t_{ij}$ events and a set of constraints $\{C_q(\mathbf{T})\}$ (expression (D1)). That expression, however, is related to the probability to obtain a given configuration $\mathbf{T}$, regardless of whether it is unique or not (several configurations can give rise to the same $\mathbf{T}$). For the case of distinguishable events, there are $\binom{F}{T}$ different ways of obtaining the same number of events $T$ among the set of copies and $T!/\prod_{ij} t_{ij}!$ ways of distributing them to obtain a given adjacency matrix $\mathbf{T}$; hence one must consider an additional term in expression (D1). For the case of linear constraints of the form in equation (6), the system partition function $Z$ can then be computed, with $z_{ij} \equiv e^{\theta_T} e^{\sum_q \theta_q c^q_{ij}}$. If we add the strong condition that the ensemble average number of events has to equal $\bar{T}$, a scaling of $M\sum_{ij} z_{ij}$ with the total number of events $F$ distributed among the $\mathcal{N}$ copies is made apparent. Wrapping together the previous expressions and considering that the number of copies is arbitrary, one can take the limit in which it goes to infinity keeping $\bar{T}$ constant. This amounts to considering $F$ infinitely large too, which leads to a factorizable partition function of the form in equation (4) that does not depend on the number of copies of the system, and neither does its associated probability. We have thus recovered the same independent Poisson probabilities as those obtained by taking an effective factorizable degeneracy term $\bar{T}!\prod_{ij}(M^{t_{ij}}/t_{ij}!)$ (see equation (8)).
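The limit taken above can be illustrated numerically for a single state: if $F$ events each land on a given state with probability $\lambda/F$, the occupation number is binomially distributed and converges to a Poisson of mean $\lambda$ as $F$ grows. This self-contained check (the mapping to the copies construction is for illustration only, not the paper's derivation) uses only the standard library:

```python
from math import comb, exp

def binom_pmf(k, n, p):
    # probability of k successes in n independent trials of probability p
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    # e^{-lam} lam^k / k!, accumulated stably term by term
    out = exp(-lam)
    for i in range(1, k + 1):
        out *= lam / i
    return out

lam = 2.5  # target average occupation of the state
for F in (10, 100, 10000):
    err = max(abs(binom_pmf(k, F, lam / F) - poisson_pmf(k, lam))
              for k in range(8))
    print(F, err)  # the discrepancy shrinks as F grows
```

The pointwise error decays roughly as $1/F$, consistent with the factorizable Poisson form being exact only in the infinite-copies limit.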
These results are in accordance with previous works [20] where the complete equivalence between Canonical ensembles (ensembles with soft linear constraints as in equation (6) but with $T = \bar{T}$ fixed for every network in the ensemble) and Micro-canonical ensembles (ensembles where all constraints are exactly fulfilled) of Multi-Edge networks was proven. The equivalence between Micro-canonical ensembles and Grand-canonical ensembles in Poisson form has also been validated by simulations for the case where strengths are fixed [37].
For non-linear cases (such as the case with binary constraints, or the B case with distinguishable events), the effective degeneracy term is an approximation, since the complete calculation using partial sums where $T$ is exactly fixed cannot be performed. Approximating $T!$ by $\bar{T}!$ amounts to considering that the possible fluctuations of the macroscopic variable $T$ are caused by the state-independent fluctuations of the microscopic structure given by $\{t_{ij}\}$. This nevertheless leads to the same statistics emerging from the leading order terms in $\bar{T}$ of the system partition function computed using a Micro-canonical formalism (see [20]).

Appendix E: Datasets

The data for the aggregated multiplex of flight connections between European cities have been obtained from [31]. It has $N = 417$ nodes representing airports, and weights count the number of airlines offering a connection between two airports. The layers are thus the $M = 37$ different airlines present in the dataset, and the adjacency matrix has been obtained by aggregating all the binary layers, $\hat{t}_{ij} = \sum_m \Theta(\hat{t}^m_{ij})$. The network is undirected but is represented as directed ($\hat{t}_{ij} = \hat{t}_{ji}\ \forall i,j$) and contains neither self-loops nor nodes of zero degree.
The strength distribution for all the datasets is shown in figure 5, and network quantities are summarized in Table III.

TABLE III. Network details of the datasets used. The symbol $\bar{x} = N^{-1}\sum_i x_i$ indicates the graph average of quantity $x$; $L = N^2$ for the self-loops allowed case and $L = N(N-1)$ otherwise. See Appendix F for details on each quantity.

Appendix F: Measured quantities
In the paper, we compute the differences between models for different first and second order rescaled network metrics, detailed in the following (we only present the outgoing version of each quantity):
• Scaled binary degrees: Represents the percentage of the network connected by at least a single event to a given node.
• Non-zero weight distribution: Distribution of occupation numbers over existing links.
• Scaled graph-average weight and graph-average binary connection probability of edges as a function of the product of incoming and outgoing strengths: We use this metric to analyze the correlations between the binary connection probability and the average weight of links, as a function of the "importance" of each link as measured by the product of the nodes' strengths.
• Disparity: This bounded quantity $Y_{2,i} \in (k_i^{-1}, 1]$ indicates the homogeneity of the distribution of events among the edges emerging from a given node. The higher this value, the more concentrated the events are on a predominant link.
• Scaled average neighbor weighted strength: This quantity indicates the average strength of the neighbors of a node, weighted by the events of each connection. In this version, we have rescaled the quantity.
• Scaled average neighbor degree: This quantity represents node degree correlations at the binary connection level.
• Sørensen common part of commuters index: This indicator is used to compute similarity between weighted matrices [48]; we use the version (less prone to sampling effects) appearing in [3].
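To make the metric definitions concrete, here is a minimal sketch of the disparity $Y_{2,i}$ (outgoing version) computed from an integer-valued adjacency matrix; the function name is ours and is not part of the ODME API:

```python
import numpy as np

def disparity(T):
    """Y_{2,i} = sum_j (t_ij / s_i)^2 for each node i (outgoing version).

    Bounded in (1/k_i, 1]; a value of 1 means all of node i's events
    sit on a single predominant link."""
    s = T.sum(axis=1).astype(float)  # outgoing strengths
    with np.errstate(divide="ignore", invalid="ignore"):
        Y2 = ((T / s[:, None]) ** 2).sum(axis=1)
    return Y2

T = np.array([[0, 4, 0],
              [1, 0, 1],
              [0, 3, 1]])
print(disparity(T))  # [1.0, 0.5, 0.625]
```

Node 0 concentrates all events on one link ($Y_2 = 1$), node 1 spreads them evenly over two links ($Y_2 = 1/2$), and node 2 lies in between.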
Some of these quantities suffer from the issue of low-strength nodes which do not appear in every simulation ($s = 0$ or $k = 0$), leading to undefined values. Hence, to average over an ensemble in these cases, we only compute the mean conditioned on existing cases.

FIG. 6. Node and edge related metrics for the Airlines dataset over $r = 10^4$ realizations; see Appendix F for details on the measured quantities. In this case, due to poor sampling, differences between ensembles are not clearly appreciable, although the W case is clearly distinct from the others and from the real data, and the B case is the closest to the empirical data.
Recovering the expressions in Appendix 2 of the main text, we see that the concavity of the function to be solved is assured in all cases, yet the method used to solve each system of equations depends on the specifics of the problem at hand. In general, the difficulty of the problem depends on the number of constraints and varies with the particular restrictions of the considered case.
In the following we provide the details of the computational methods used to solve the saddle point equations, which for the particular case of fixed strengths discussed in the paper are implemented in the freely available, open source package ODME [27]. For the case of fixed strengths, one approaches the problem by maximizing the likelihood associated to each model given the observed strength sequence $\{\hat{s}^{out}, \hat{s}^{in}\}$; hence we need to solve for the $N$ associated values of $\{x, y\}$, with $z_{ij} = x_i y_j$.

Multi Edge case
This scenario is fully analytical [37] for the case where self-loops are accepted. Otherwise, since the statistics are Poisson, $\langle t_{ij}\rangle = \sigma^2_{t_{ij}} = x_i y_j$, with the only restriction $x_i \geq 0,\ y_i \geq 0\ \forall i$, the saddle point equations can be safely solved using algorithm 1.

Algorithm 1: Balancing algorithm to solve the saddle point equations for the ME and B cases.
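A minimal sketch of such a balancing iteration for the ME case with self-loops allowed, where $\langle t_{ij}\rangle = x_i y_j$ and the imposed strengths are matched by alternating updates. The function and parameter names here are illustrative, not the ODME implementation:

```python
import numpy as np

def balance_ME(s_out, s_in, tol=1e-12, max_iter=10000):
    """Alternately rescale x and y until the model strengths
    <s_i^out> = x_i * sum_j y_j and <s_j^in> = y_j * sum_i x_i
    match the imposed ones (self-loops allowed)."""
    s_out = np.asarray(s_out, float)
    s_in = np.asarray(s_in, float)
    x = np.ones_like(s_out)
    y = np.ones_like(s_in)
    for _ in range(max_iter):
        x_new = s_out / y.sum()
        y_new = s_in / x_new.sum()
        done = max(np.abs(x_new - x).max(), np.abs(y_new - y).max()) < tol
        x, y = x_new, y_new
        if done:
            break
    return x, y

s_out = [5, 3, 2]
s_in = [4, 4, 2]
x, y = balance_ME(s_out, s_in)
# the ensemble-average strengths reproduce the imposed ones
print(np.outer(x, y).sum(axis=1))  # ≈ [5, 3, 2]
print(np.outer(x, y).sum(axis=0))  # ≈ [4, 4, 2]
```

For the B case the same alternating scheme applies, with the Poisson relation replaced by the corresponding binomial one.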

Binary case
In this case the restriction is analogous to the previous one. Although the problem cannot be shown to be strictly concave in this form, the saddle point equations can again be solved using algorithm 1 with the corresponding relation for the B case. Some convergence problems can be encountered using algorithm 1 if the largest values of the strengths approach the limit $\max\{\hat{s}^{out}, \hat{s}^{in}\} = M$, but for this case a simple, positively bounded, unconstrained gradient descent method has been implemented to solve the problem.

Weighted case
The weighted case is considerably more complicated than the previous ones, since it includes the restriction $0 \leq x_i y_j < 1\ \forall i,j$, and hence the maximization is performed on a non-convex domain. The balancing approach (algorithm 1) is then not satisfactory, since nothing explicitly enforces the values $\{x, y\}$ to remain in the domain of the log-likelihood function one wants to maximize. The scalar function under consideration is subject to the conditions
$$0 \leq x_i y_j < 1 \quad \forall i,j \in \{1\dots N\}. \tag{H9}$$
In principle the problem is concave, and thus finding a solution of the saddle point equations gives the global maximum. Unfortunately, for real cases this concavity is lost as soon as the explicit domain constraint $0 \leq x_i y_j < 1$ is included. Hence there is no general algorithm that can be applied with assured results, and obtaining a solution of the saddle point equations is not guaranteed in all cases (especially for large $N$ or a very skewed distribution of strengths). We thus deal with a large scale, non-concave (non-convex), bounded and constrained maximization (minimization) problem.

a. Preconditioning
Basically, the difficulty of the problem derives from the form of the strength sequence $\{\hat{s}^{out}, \hat{s}^{in}\}$: for a given $N$, the more skewed this distribution, the harder the problem is to solve. For easy problems, a good way to precondition is to first solve the simpler, bounded, unconstrained problem. The drawback of this method is that it does not cover all the available phase space (see figure 7): the solution lies in the hyper-volume delimited by the axes and $y_{max}(x_{max}) = x_{max}^{-1}$, which is larger than the volume delimited by the axes and $x_{max} \leq \alpha,\ y_{max} \leq \alpha^{-1}$. Usually, a good choice is $\alpha = 1$. If the distribution of strengths is very skewed, the optimal solution most likely lies outside the second region, but the suboptimal solution within it serves as a preconditioning for the complete maximization problem thanks to the convexity of the function (when the domain constraint is not considered).
We have implemented this preconditioning procedure using a truncated Newton (TNC) method [49] from the SciPy suite [50].

FIG. 7. The preconditioning method looks for solutions inside the area delimited by the green rectangle; the orange area represents the domain of the $\mathcal{L}_{W_s}$ function, which is clearly non-convex.
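A sketch of this box-bounded preconditioning step with SciPy's TNC solver. The geometric-statistics form of the W-case log-likelihood used here, $\mathcal{L} = \sum_i \hat{s}^{out}_i \ln x_i + \sum_j \hat{s}^{in}_j \ln y_j + \sum_{ij} \ln(1 - x_i y_j)$, is our assumption for illustration, and the toy strength sequence is invented; this is not the ODME implementation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglike(v, s_out, s_in):
    # Assumed W-case (geometric) log-likelihood, negated for minimization.
    n = len(s_out)
    x, y = v[:n], v[n:]
    xy = np.outer(x, y)
    return -(s_out @ np.log(x) + s_in @ np.log(y) + np.log1p(-xy).sum())

s_out = np.array([3.0, 2.0, 1.0])   # toy strength sequences (illustrative)
s_in = np.array([2.0, 2.0, 2.0])

# Box: x_i <= alpha, y_j <= 1/alpha, so x_i*y_j < 1 everywhere in the box.
alpha, eps = 1.0, 1e-9
bounds = [(eps, alpha - 1e-6)] * 3 + [(eps, 1 / alpha - 1e-6)] * 3

res = minimize(neg_loglike, x0=np.full(6, 0.5), args=(s_out, s_in),
               method="TNC", bounds=bounds)
x, y = res.x[:3], res.x[3:]
print(np.all(np.outer(x, y) < 1))  # solution stays inside the W-case domain
```

The box guarantees feasibility by construction; the resulting point is then used as the starting guess for the full constrained problem of the next subsection.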

b. Constrained problem
Since the log-likelihood function is not defined outside of its domain, we use an interior point method, adding $L$ non-linear inequalities of the form $0 \leq x_i y_j < 1$ ($L = N^2$ for the case with self-loops and $L = N(N-1)$ otherwise). The implementation is done in CVXOPT [51], but it has an obvious limitation in memory usage, which grows very fast with the number of nodes of the network. Additionally, as mentioned earlier, convergence of the algorithm is not assured due to the non-convexity of the complete problem, yet we obtained very satisfactory results for the different cases analyzed.

Precision
For all the cases considered, we analyze the precision of our solving approach by computing the Euclidean norm of the absolute error and the maximum absolute relative error among the nodes. The resulting values for each example are reported in Table V.
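These two error measures can be sketched as follows (function and variable names are ours, not the ODME API):

```python
import numpy as np

def precision_metrics(s_hat, s_model):
    """Euclidean norm of the absolute error, and maximum absolute
    relative error across nodes, between imposed and model strengths."""
    s_hat = np.asarray(s_hat, float)
    s_model = np.asarray(s_model, float)
    abs_err = np.linalg.norm(s_hat - s_model)          # ||s_hat - s||_2
    rel_err = np.max(np.abs(s_hat - s_model) / s_hat)  # max_i |ds_i| / s_hat_i
    return abs_err, rel_err

a, r = precision_metrics([5, 3, 2], [5.0, 3.03, 1.96])
print(round(a, 4), round(r, 4))  # 0.05 0.02
```

Both measures are reported per dataset and per ensemble, so that a solver run can be rejected if either exceeds a chosen tolerance.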