The Barnes Update Applied in the Gauss-Newton Method: an Improved Algorithm to Locate Bond Breaking Points

A mechanochemical reaction is a reaction induced by mechanical energy. A generally accepted model for this type of reaction consists of a first-order perturbation of the potential energy surface (PES) of the unperturbed molecular system due to mechanical stress or a pulling force. Within this theoretical framework, the so-called optimal barrier breakdown points or optimal bond breaking points (BBPs) are critical points of the unperturbed PES where the Hessian matrix has a zero eigenvector that coincides with the gradient vector. Optimal BBPs are 'catastrophe points' that are particularly important because their associated gradient indicates how to optimally harness tensile forces to induce reactions by transforming a chemical reaction into a barrierless process. Building on a previous method based on a nonlinear least-squares minimization to locate BBPs (Bofill et al., J. Chem. Phys. 2017, 147, 152710), we propose a new algorithm to locate BBPs of any molecular system based on the Gauss-Newton method combined with the Barnes update for the nonsymmetric Jacobian matrix, which is shown to be more appropriate than the Broyden update. The efficiency of the new method is demonstrated for a multidimensional model PES and a medium size molecular system of interest in enzymatic catalysis.


Introduction
One of the main problems in theoretical chemistry is to study the mechanisms of chemical transformations. An important achievement in the development of models to understand the mechanisms associated with any chemical transformation was the introduction of the concepts of potential energy surface (PES) and reaction path (RP), as a way to describe the evolution of a molecular system from reactants to products in geometrical terms, together with the stationary points of the PES with respect to the coordinates, namely, minima and transition states (TS) or first-order saddle points (SP). In recent years, another type of point has gained importance in explaining some features of chemical reactivity, namely, the valley-ridge inflection points (VRI) related to the bifurcations of the RP. The VRIs are points of the PES with mathematical characteristics different from those associated with stationary points. Finally, yet another type of point of the PES has recently turned out to be crucial in the theory that rationalizes chemical transformations under mechanical forces, the so-called mechanochemistry. [1][2][3][4][5][6][7][8][9][10][11][12]

From a conceptual point of view, the phenomenon of mechanical activation can be understood from the fact that the PES of a given reactive system changes when this system is subjected to tensile stress. As a result of these force-induced changes of the PES, the barriers between the minima and the saddle points change. 13 When a mechanical force is applied to a molecular system, the reactivity is either enhanced or suppressed. 14 This is explained within the PES model by the modification of the barrier that separates two minima, due to the magnitude and direction of the mechanical force. The simplest case is when the direction of the applied force, f, is constant.
The modified PES or force-transformed PES, 15 labeled V_f(x), is given by

V_f(x) = V(x) − f^T(x − x_0),   (1)

where V(x) is the original potential, T means transposed, (x − x_0) is the displacement vector, and x_0 is an anchor or reference point. The dimension of the vectors is N. If we expand the original potential up to second order in a Taylor expansion around x_0,

V(x) ≈ V(x_0) + g^T(x_0)(x − x_0) + (1/2)(x − x_0)^T H(x_0)(x − x_0),

where g(x_0) and H(x_0) are the gradient and the Hessian of V(x) at x_0, respectively, and ||x − x_0|| = [(x − x_0)^T(x − x_0)]^{1/2}, then substituting this expansion in Eq. 1 shows that in this model the perturbation due to the external constant force only affects the expansion to first order. The relevant curves in this context are the Newton trajectories (NTs), solutions of the Branin equation,

dx/dt = ±A(x)g(x),   (2)

where A(x) = Det(H(x))H^{-1}(x) is the adjoint matrix of the Hessian and t is the parameter that characterizes the curve. The Branin expression, Eq. 2, is a way to construct the Force Displacement Stationary Points (FDSP). 13,14 To see this we compute how the force-transformed PES, Eq. 1, changes along the NT,

dV_f(x)/dt = (g(x) − f)^T dx/dt,   (3)

where in its derivation, Eqs. 1 and 2, the directional derivative has been used. Regarding Eq. 3, dV_f(x)/dt = 0 holds at a point x in the following situations: (1) when the magnitude of the applied force, F, is equal to the gradient norm, ||g(x)||, of the original potential V(x); (2) when A(x)g(x) = 0; or (3) when dx/dt = 0. The second situation occurs if the point x belongs to the valley-ridge-inflection manifold of the original PES. 17,18 When this condition is fulfilled, the magnitude of the applied external force is irrelevant for making dV_f(x)/dt = 0, since the condition depends only on the original PES. An NT that goes through a VRI point in its evolution is a singular NT, and this type of NT plays an important role in mechanochemistry. 12 The third condition is an artefact introduced by the inserted dx/dt, which is zero at the stationary points of the original PES; on the effective PES the stationary points are displaced from their original positions.
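The Branin construction can be illustrated numerically. The sketch below integrates Eq. 2 with plain Euler steps on a simple two-dimensional cubic model potential (the potential, the starting point, and the step size are conveniences of this sketch, not taken from the original work) and checks the defining NT property that the normalized gradient keeps a constant direction along the curve, since dg/dt = Det(H(x))g.

```python
import numpy as np

# Illustrative 2D potential V(x, y) = x^3/3 - x + y^2 (an assumption of
# this sketch, chosen because its Hessian is trivial to write down).
def grad(x):
    return np.array([x[0]**2 - 1.0, 2.0 * x[1]])

def hess(x):
    return np.array([[2.0 * x[0], 0.0], [0.0, 2.0]])

def adjoint(H):
    # Adjoint (adjugate) of a 2x2 matrix: A = Det(H) H^{-1}, without inverting H.
    return np.array([[H[1, 1], -H[0, 1]], [-H[1, 0], H[0, 0]]])

# Euler integration of the Branin equation, Eq. 2: dx/dt = +A(x) g(x).
x = np.array([0.4, 0.2])
l0 = grad(x) / np.linalg.norm(grad(x))     # initial gradient direction
for _ in range(200):
    x = x + 5e-4 * adjoint(hess(x)) @ grad(x)

# Along an NT the normalized gradient stays (anti)parallel to l0.
alignment = abs(l0 @ grad(x)) / np.linalg.norm(grad(x))
print(alignment)   # close to 1
```

The same computation also illustrates the slope relation discussed next: since dg/dt = Det(H)g along the curve, the gradient norm grows or shrinks according to the sign of the Hessian determinant.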
According to the previous requirement (1), the magnitude of the external force to be applied depends on the behavior of ||g(x)|| along the NT curve. For this reason it is important to analyze this norm by considering its variation or slope along the corresponding NT. Using the directional derivative concept and the tangent given by Eq. 2, one obtains

d||g(x(t))||/dt = ±Det(H(x(t)))||g(x(t))||,   (4)

where x(t_0) = x_0. Note that d||g(x(t))||/dt|_{t=t_0} = 0 according to Eq. 4 when Det(H(x_0)) = 0.
Using Eq. (B4) of reference 19 and taking as the tangent vector the Branin expression, Eq. 2, one obtains the second derivative of the gradient norm at a point of the Det(H(x)) = 0 manifold (Eq. 6), an expression involving µ_j and the contraction e_j^T F(x_0)_j e_j, where e_j is the normalized eigenvector of the matrices A(x_0) and H(x_0) with eigenvalues µ_j and 0, respectively. The eigenvalue µ_j is the unique eigenvalue of A(x_0) different from zero. The matrix F(x_0)_j results from the contraction of the third derivative tensor of the potential energy V(x) with respect to the coordinates with the eigenvector e_j. Assuming that we find the first manifold of points satisfying Det(H(x)) = 0 for all the NTs that emerge from the same minimum and go to the same SP, it holds that e_j^T F(x)_j e_j < 0, and the NT with e_j = l is the one that presents the largest decrease of ||g(x)|| after the manifold is traversed. At this point the NT coincides with a Gradient Extremal (GE). 13 The proof is trivial: since e_j = l and l is the normalized gradient, the gradient is an eigenvector of the A(x) matrix and in turn an eigenvector of the corresponding Hessian matrix, which is precisely the requirement for a point to belong to a GE. This fact shows the importance of this manifold in the mechanochemistry theory based on Eq. 1. At the points belonging to the manifold Det(H(x)) = 0 the bond breaking points (BBPs) emerge. 7,13 At the latter type of points, the gradient norm along an NT reaches a turning point, and the effective PES, V_f(x), presents a shoulder on the FDSP path. Within the manifold of BBPs of a given PES, there is an optimal BBP and also the NT that goes through this point. 13 This BBP defines the lowest force in magnitude, and the corresponding pulling direction, that should be applied to a molecular system in order to mechanically promote a given chemical transformation by making it barrierless. The optimal BBP satisfies the equation 13

H(x)g(x)||g(x)||^{-1} = 0,   g(x) ≠ 0,   (7)

which tells us that the optimal BBP is also a point of a GE where the corresponding eigenvalue of the Hessian matrix is zero; it is the point where Eq. 6 takes its value for e_j = l.

The problem of locating optimal BBPs is therefore equivalent to finding the point x such that the eigenvalue problem H(x)g(x)||g(x)||^{-1} = λg(x)||g(x)||^{-1} has at this point the eigenvalue zero, g(x)^T H(x)g(x) = 0. For this purpose we define the function

σ(x) = s^T(x)s(x),   s(x) = H(x)g(x)||g(x)||^{-1};   (8)

then locating the optimal BBPs merely consists in finding the point where this function takes its minimum value, which is zero, since it is a sum of nonlinear least-squares functions. In other words, the σ(x)-function is by definition the scalar product of the vector s(x) = H(x)g(x)||g(x)||^{-1} with itself, and it reaches its minimum value, zero, at the point where s(x) = H(x)g(x)||g(x)||^{-1} = 0, which is just the condition for an optimal BBP given in Eq. 7. We recall that g(x)||g(x)||^{-1} ≠ 0 on the slopes of the PES; thus the σ(x)-function can only vanish outside a stationary point if at this point the corresponding Hessian matrix has at least one zero eigenvalue. Thus the original problem has been transformed into finding the point where σ(x) = 0. Since the σ(x)-function is continuous and differentiable with respect to x, we can expand it around x up to second order in a Taylor expansion,

σ(x + ∆x) ≈ σ(x) + ∆x^T ∇_x σ(x) + (1/2)∆x^T ∇_x∇_x^T σ(x) ∆x.   (9)

According to Eq. 8 the derivatives that appear in Eq. 9 have the form ∇_xσ(x) = 2J(x)s(x) and ∇_x∇_x^Tσ(x) = 2[J(x)J^T(x) + Σ_{i=1}^{N} s_i(x)∇_x∇_x^T s_i(x)], where J(x) = ∇_x s^T(x) is the N × N Jacobian matrix. Since s(x) approaches the zero vector at the desired point, where the σ(x)-function is close to zero, the components s_i(x) for i = 1, ..., N are normally small. This suggests that a good approximation to ∇_x∇_x^Tσ(x) might be obtained by neglecting the final terms of this matrix, resulting in ∇_x∇_x^Tσ(x) ≈ 2J(x)J^T(x). This is equivalent to making a linear approximation to each component of the s(x)-vector.
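A small numerical sketch of this Gauss-Newton scheme may help. The 2D model potential below (V = x^3/3 - x + y^2, an assumption of this sketch) has its optimal BBP at the origin, where H(x)g(x) = 0 while g(x) ≠ 0; the finite-difference Jacobian and the crude step cap are conveniences of the sketch, not the paper's restricted-step machinery.

```python
import numpy as np

# Assumed illustrative model: V(x, y) = x^3/3 - x + y^2.
def grad(x):
    return np.array([x[0]**2 - 1.0, 2.0 * x[1]])

def hess(x):
    return np.array([[2.0 * x[0], 0.0], [0.0, 2.0]])

def s(x):
    g = grad(x)
    return hess(x) @ g / np.linalg.norm(g)

def jac(x, h=1e-6):
    # Finite-difference Jacobian J_ij = ds_i/dx_j (nonsymmetric in general).
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (s(x + e) - s(x - e)) / (2.0 * h)
    return J

x = np.array([0.5, 0.3])
for _ in range(50):
    dx = np.linalg.lstsq(jac(x), -s(x), rcond=None)[0]   # Gauss-Newton step
    if np.linalg.norm(dx) > 0.2:                          # crude step cap
        dx *= 0.2 / np.linalg.norm(dx)
    x = x + dx

sigma_val = s(x) @ s(x)
print(x, sigma_val)   # x near the origin, sigma near zero
```

The iteration drives σ(x) = s^T(x)s(x) to zero at a point where the gradient itself does not vanish, which is the BBP condition of Eq. 7.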
In this manner, the two elements necessary to compute the gradient ∇_xσ(x), namely s(x) and J(x), are also enough to evaluate the approximate Hessian matrix of this function. The expression for the Jacobian matrix J(x) has been reported in Ref. 19. When the second derivative is approximated in this way, the basic Newton method becomes the Gauss-Newton method or generalized least squares method. 20 Note that the Gauss-Newton method can fail or can converge slowly. The convergence of the Gauss-Newton algorithm is largely improved if at each iteration the restricted step technique is used.
In the previous algorithm to locate optimal BBPs the restricted step was evaluated employing the rational function optimization technique. [21][22][23][24] However, the main problem of the rational function optimization technique used to evaluate the restricted step is that the computed displacement vector, ∆x, violates the imposed restriction on its length: the length of the displacement vector is sometimes larger or shorter than the required length. For this reason the displacement vector has to be scaled, and this scaled vector is not the optimal one, since the restricted step technique consists in finding the stationary point of minimal energy in the subspace generated by the intersection of the PES expanded up to second order in ∆x with a sphere of a given radius centered at the point x. This radius is known as the trust radius. 20 The problem has been treated several times, and here we propose to solve it by diagonalization of a nonsymmetric matrix. The restricted step technique in the present case can be formulated as follows: given the σ-function centered at the point x and expanded up to second order in ∆x, find the stationary points located in the intersection of this function with a sphere of radius r centered at the point x. This problem is solved through the Lagrange multipliers method, which now reads

L(∆x, λ) = ∆σ^(2)(∆x) − λ(∆x^T∆x − r²),   (10)

where ∆σ^(2)(∆x) = 2s^T(x)J^T(x)∆x + ∆x^TJ(x)J^T(x)∆x is the second order variation of the σ(x)-function, Eq. 9, within the Gauss-Newton approximation. Differentiating L(∆x, λ) with respect to ∆x and λ yields the equations

[J(x)J^T(x) − λI]∆x = −J(x)s(x),   (11.a)
∆x^T∆x = r².   (11.b)

Note that λ is a function of x; however, this dependence will be omitted to shorten the notation. It can be shown that, for the set of tuples {(λ_k, ∆x_k)}_{k=min}^{max} of solutions of Eqs. 11, if λ_max > λ_i > λ_j > λ_min then ∆σ^(2)(∆x_max) > ∆σ^(2)(∆x_i) > ∆σ^(2)(∆x_j) > ∆σ^(2)(∆x_min). The proof of these inequalities is given in Appendix A. The smallest λ, labeled λ_min, is the one needed in order to minimize the value of the quadratic approximation of the σ(x)-function around the point x. From Eqs. 10 and 11 we have that at a solution

∆σ^(2)(∆x_k) = s^T(x)J^T(x)∆x_k + λ_k r².   (12)

So in place of the original minimization we can solve the Lagrange Eqs. 11 for λ_min. The Lagrange equations 11 can be reduced to a quadratic eigenvalue problem. 25,26 For the derivation let us assume that λ does not belong to the set of eigenvalues of the matrix J(x)J^T(x); thus from Eq. 11.a,

∆x = −[J(x)J^T(x) − λI]^{-1}J(x)s(x).   (13)

Taking into account the normalization condition for ∆x, Eq. 11.b, we get the secular function

f(λ) = s^T(x)J^T(x)[J(x)J^T(x) − λI]^{-2}J(x)s(x) − r²,   (14)

whose zeros are to be computed. Now, if we define the vector t(x) = [J(x)J^T(x) − λI]^{-2}J(x)s(x), then instead of solving the secular function, Eq. 14, we have to solve the system

[J(x)J^T(x) − λI]²t(x) = J(x)s(x),   (15.a)
s^T(x)J^T(x)t(x) = r².   (15.b)

Eq. 15.b can be formulated as r^{-2}s^T(x)J^T(x)t(x) = 1. Using this factor of 1 in Eq. 15.a, we get the quadratic eigenvalue problem

[J(x)J^T(x) − λI]²t(x) = r^{-2}J(x)s(x)s^T(x)J^T(x)t(x).   (16)

Note that the restriction that λ does not belong to the set of eigenvalues of the matrix J(x)J^T(x) is no longer necessary. Of course, we must face the fact that the set of solutions for λ has been extended by these manipulations, because two equations cannot be merged into a single one without consequences. These consequences are exposed in some detail in Appendix B and are taken into account in the proposed algorithm.
The quadratic eigenvalue problem, Eq. 16, can be reduced to an ordinary eigenvalue problem by properly chosen transformations. With the definition p(x) = [J(x)J^T(x) − λI]t(x), the following equations can be established:

J(x)J^T(x)t(x) − p(x) = λt(x),   (17.a)
J(x)J^T(x)p(x) − r^{-2}J(x)s(x)s^T(x)J^T(x)t(x) = λp(x).   (17.b)

In matrix terms this leads to

[ J(x)J^T(x)                    −I          ] [ t(x) ]     [ t(x) ]
[ −r^{-2}J(x)s(x)s^T(x)J^T(x)   J(x)J^T(x) ] [ p(x) ] = λ [ p(x) ].   (18)

Thus we have transformed the original quadratic eigenvalue problem into an equivalent linear one that can be solved with traditional techniques. Let us denote the real solutions of Eq. 18 by the set of triples {(λ_i, t_i^T(x), p_i^T(x))}, with the λ_i given in increasing order. This set of triples contains the full set of solutions of Eq. 14, and λ_1 = λ_min is the desired solution of Eq. 11. Now, substituting the real triple (λ_min, t_min^T(x), p_min^T(x)) in Eq. 17.b, multiplying this equation from the left by [J(x)J^T(x) − λ_minI]^{-1}, and using Eq. 17.a, we obtain

∆x_min = −r² p_min(x)[s^T(x)J^T(x)t_min(x)]^{-1}.   (19)

Multiplying the latter equation from the left by s^T(x)J^T(x) and taking into account Eq. 13, we have ∆x_min^T∆x_min = r², as expected. Note that this result can be applied to any real triple, and it is Eq. 19, which transforms the vectors t_i^T(x) and p_i^T(x) of the i-th triple into the ∆x_i vector belonging to the (λ_i, ∆x_i)-tuple solution of Eq. 11, that is particularly important for the algorithm. Now we treat the case where λ_i coincides with an eigenvalue of the matrix J(x)J^T(x). Let us assume that t_i(x) is the corresponding eigenvector; then from Eq. 17.a we obtain p_i(x) = 0, and Eq. 17.b implies s^T(x)J^T(x)t_i(x) = 0, so the application of Eq. 19 to this triple gives an undetermined ∆x vector. For this reason the triples formed by λ's that belong to the eigenvalue spectrum of the J(x)J^T(x) matrix are not solutions of Eq. 11; see also Appendix B. Now we study the origin of the complex solutions. Let λ_i^J and λ_{i+1}^J be two consecutive eigenvalues of the matrix J(x)J^T(x), with λ_i^J < λ_{i+1}^J, that are not themselves solutions of Eq. 18. If in the interval (λ_i^J, λ_{i+1}^J) the secular function f(λ), given in Eq. 14, is greater than zero everywhere, then there exists a complex tuple, and the corresponding conjugate, which are both solutions of Eq. 18.
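The reduction can be checked numerically. The sketch below uses random data and one possible companion linearization of the quadratic eigenvalue problem [J J^T − λI]²t = r^{-2}(Js)(Js)^T t (the block arrangement is an assumed reconstruction, not necessarily the paper's exact Eq. 18); the lowest real eigenvalue of the 2N × 2N matrix reproduces the restricted step of norm r.

```python
import numpy as np

rng = np.random.default_rng(7)
N, r = 3, 0.5
J = rng.normal(size=(N, N))
s = rng.normal(size=N)
A, b = J @ J.T, J @ s              # A = J J^T, b = J s

# Companion linearization (assumed form) of (A - lam I)^2 t = r^{-2} b b^T t,
# obtained with the auxiliary vector p = (A - lam I) t.
M = np.block([
    [A, -np.eye(N)],
    [-np.outer(b, b) / r**2, A],
])
evals = np.linalg.eigvals(M)
real = np.sort(evals[np.abs(evals.imag) < 1e-7].real)
lam_min = real[0]                  # lowest real solution

# Recover the restricted step as in Eq. 13 and check the constraint, Eq. 11.b.
dx = -np.linalg.solve(A - lam_min * np.eye(N), b)
step_norm = np.linalg.norm(dx)
print(lam_min, step_norm)          # step_norm close to r
```

The smallest real eigenvalue lies below the lowest eigenvalue of J J^T, in agreement with the discussion that follows.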
The nonsymmetric eigenvalue problem of Eq. 18 always has at least two real tuples, whose λ's correspond to the lowest, λ_min, and the highest, λ_max, values.
The proof of this assertion is as follows. As shown above, any real tuple solution of Eq. 18 is a solution of Eq. 14, except if some λ_i = λ_i^J, an eigenvalue of the J(x)J^T(x) matrix, is a solution of Eq. 14. Now let us take a value of r² and assume that J(x)s(x) ≠ 0. The function f(λ) increases monotonically from −r² to ∞ as λ increases from −∞ to λ_1^J, the lowest eigenvalue of the J(x)J^T(x) matrix. Thus there is a unique value of λ in this interval, λ_min < λ_1^J, for which f(λ_min) = 0. In a similar manner, f(λ) decreases monotonically from ∞ to −r² as λ increases from λ_N^J, the highest eigenvalue of J(x)J^T(x), to ∞. Thus there is a unique value of λ in this interval, λ_max > λ_N^J, for which f(λ_max) = 0. This concludes the proof.
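The existence argument translates directly into a robust way of computing λ_min. The sketch below (random data; the secular function f(λ) = s^TJ^T[JJ^T − λI]^{-2}Js − r² follows from the displacement of Eq. 13 and the constraint of Eq. 11.b) brackets the unique root below the lowest eigenvalue of J(x)J^T(x) and refines it by bisection:

```python
import numpy as np

rng = np.random.default_rng(1)
N, r = 4, 0.3
J = rng.normal(size=(N, N))
s = rng.normal(size=N)
A, b = J @ J.T, J @ s

def f(lam):
    # Secular function: ||(A - lam I)^{-1} b||^2 - r^2.
    y = np.linalg.solve(A - lam * np.eye(N), b)
    return y @ y - r**2

lam1 = np.linalg.eigvalsh(A)[0]       # lowest eigenvalue of J J^T

# f grows monotonically from -r^2 to +inf on (-inf, lam1): bracket, then bisect.
lo = lam1 - 1.0
while f(lo) > 0.0:
    lo = lam1 - 2.0 * (lam1 - lo)     # move the left end further out
hi = lam1 - 1e-10
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid
lam_min = 0.5 * (lo + hi)

dx = -np.linalg.solve(A - lam_min * np.eye(N), b)
print(lam_min, np.linalg.norm(dx))    # the recovered step has norm close to r
```

In practice this kind of safeguarded one-dimensional search is a simple alternative whenever only λ_min is needed rather than the full spectrum of the 2N × 2N problem.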
Finally, if the solution of Eq. 18 has 2N real tuples, then from the above results it is easy to prove the corollary that the following interlacing holds: λ_min = λ_1 < λ_1^J ≤ λ_2 ≤ λ_3 ≤ λ_2^J ≤ λ_4 ≤ λ_5 ≤ λ_3^J ≤ · · · ≤ λ_N^J < λ_2N = λ_max.

Restricted Step Characterization and Trust Radius Update
Many optimization algorithms are based on truncating the Taylor series of the objective function to be minimized, but the truncated function may not have a minimizing point. In other words, σ(x + ∆x) ≈ σ(x) + ∆σ^(2)(∆x), and even within the region about the central point x in which the quadratic expansion is adequate, the function σ(x) + ∆σ^(2)(∆x) may not have a minimum point. A way to solve this problem is to assume a region R_x centered at the point x in which the function σ(x) + ∆σ^(2)(∆x) agrees with σ(x + ∆x) in some sense.
Then one chooses x' = x + ∆x such that ∆x minimizes ∆σ^(2)(∆x) over all x' = x + ∆x in the R_x-region. This is the basis of the restricted step method. 20 The trust radius is the parameter that characterizes the R_x-region centered at the point x.
The region R_x is normally limited by a sphere, so one seeks the solution ∆x of the problem

minimize ∆σ^(2)(∆x) subject to ∆x^T∆x ≤ r²,   (20)

where the equality case is nothing more than Eq. 10, solved through Eqs. 18 and 19, while the inequality case is solved by the standard Gauss-Newton step whenever the resulting ∆x satisfies ∆x^T∆x < r².
Now the question emerges of how the radius r of the sphere R_x shall be chosen. The radius r should take into account a certain measure of agreement at each step between ∆σ^(2)(∆x) and the actual change of the function. The quotient

ratio = [σ(x + ∆x) − σ(x)]/∆σ^(2)(∆x)   (21)

gives a measure of the accuracy with which ∆σ^(2)(∆x) approximates σ(x + ∆x) − σ(x); when ratio is unity or near to it, the accuracy is best. Following Fletcher, 20 an algorithm can be stated that changes r adaptively and attempts to maintain a certain degree of agreement as measured by ratio, whilst keeping r as large as possible. The algorithm used in the present context has the following form at each iteration: (1) given x and r, compute J(x) and s(x); (2) solve Eq. 20, first by the standard Gauss-Newton step, and if ∆x^T∆x > r² then solve Eq. 18 and through Eq. 19 obtain the adequate ∆x; (3) compute σ(x + ∆x) and hence ratio from Eq. 21 using Eq. 12; (4) if ratio < 0.25 the trust radius is reduced, whereas if ratio > 0.75 and ||∆x|| = r the new r takes the value 2r; otherwise the new r keeps the value of the current step; (5) if ratio ≤ 0.0 the step is rejected and x is retained, otherwise the new point x + ∆x is accepted. The constant numbers used, namely, 0.0, 0.25, 0.75, and 2.0, are to some extent arbitrary, and the algorithm is quite insensitive to their change. 20
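The bookkeeping of steps 4 and 5 can be sketched compactly. In the fragment below the quadratic model and the step solver are abstracted away; the halving rule used for shrinking the radius is an assumption of this sketch, since the source fixes only the constants 0.0, 0.25, 0.75, and 2.0.

```python
# Minimal Fletcher-style trust-radius bookkeeping (illustrative sketch).
def update_trust_radius(ratio, step_norm, r):
    """Return (new_radius, step_accepted) from the agreement 'ratio'."""
    accept = ratio > 0.0                 # reject steps that worsen sigma
    if ratio < 0.25:
        new_r = step_norm / 2.0          # poor model agreement: shrink
    elif ratio > 0.75 and abs(step_norm - r) < 1e-12 * max(r, 1.0):
        new_r = 2.0 * r                  # good agreement on the boundary: expand
    else:
        new_r = r                        # mediocre agreement: keep the radius
    return new_r, accept
```

A step with ratio near one taken on the sphere boundary doubles the radius, while a step with ratio below 0.25 shrinks it regardless of whether the step itself is accepted.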

The Barnes Update Formula for the Jacobian Matrix
The Jacobian matrix J(x) is expensive to compute, so in the present algorithm it is updated at each iteration rather than recomputed. As will be shown below, the Broyden formula is a simplification of the Barnes formula, 28 and the latter has some properties that the former does not have; these differences can be important to improve the algorithm.
In this and the next subsection we use a d-vector rather than ∆x to shorten the notation. Let us define the pair of differences

d^(k) = x^(k+1) − x^(k),   y^(k) = s^(k+1) − s^(k)

for k = 1, ..., n, where s^(k) = s(x^(k)) = H(x^(k))g(x^(k))||g(x^(k))||^{-1} and n indicates iteration n. Then a matrix J^(k+1) can be calculated, with J^(n+1) ≈ J(x^(n+1)), which satisfies the hereditary condition

J^(k+1)T d^(j) = y^(j),   j = 1, ..., k.   (22)

Since J^(k) does not relate d^(k) and y^(k) correctly, J^(k+1) is chosen so that it does correctly relate this pair of differences, J^(k+1)T d^(k) = y^(k) being the quasi-Newton condition. One possibility is to take

J^(k+1) = J^(k) + C^(k),   (23)

where C^(k) is a correction matrix which, to avoid carrying over too much information from iteration to iteration, is calculated from the quantities d^(k), y^(k), and J^(k). One can view the updating technique as a perturbation of the current J^(k) matrix by C^(k) so as to satisfy both the current quasi-Newton condition and the hereditary condition, Eq. 22. Taking into account this premise, to obtain an explicit form for C^(k) we substitute Eq. 23 into the quasi-Newton condition, giving

C^(k)T d^(k) = y^(k) − J^(k)T d^(k).   (24)

One simple solution of this expression is readily obtained from rank-one matrices built with two non-zero arbitrary vectors, v^(k) and w^(k), neither of which should be orthogonal to d^(k) (Eq. 25). The numerators in Eq. 25 are rank-one matrices; thus C^(k) has rank one or two depending upon the choice of v^(k) and w^(k). However, there is evidently still a considerable amount of flexibility in the choice of C^(k). Using the hereditary condition, Eq. 22, we have that C^(k)T d^(j) = 0 for j = 1, ..., k − 1 (Eq. 26). With C^(k) given by Eq. 25 it therefore follows that this is achieved if w^(k)T d^(j) = 0 for j = 1, ..., k − 1 (Eq. 27). Thus choosing J^(k+1) so as not to change the information in J^(k) relevant to these directions ensures that the updated matrix satisfies both the quasi-Newton condition and the hereditary property. The resulting update formula is

J^(k+1) = J^(k) + w^(k)(y^(k) − J^(k)T d^(k))^T/(w^(k)T d^(k)).   (28)

This formula was proposed for the first time by Barnes. 28 Note that the vector w^(k) is the component of d^(k) orthogonal to the previous displacement vectors, obtained as described in the next subsection; the Broyden update is recovered by simply taking w^(k) = d^(k).
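A compact numerical sketch of Eq. 28 follows. Here w^(k) is computed as the component of d^(k) orthogonal to the previously stored displacements via a least-squares projection (random data, used only to illustrate the hereditary property; replacing w by d would give the Broyden update instead).

```python
import numpy as np

def barnes_update(J, d, y, prev_d):
    """Barnes rank-one update, Eq. 28: J + w (y - J^T d)^T / (w^T d).

    w is the component of d orthogonal to the previously stored displacement
    vectors, so earlier quasi-Newton relations are preserved (hereditary
    condition, Eq. 22). With w = d this reduces to the Broyden update.
    """
    if prev_d:
        D = np.stack(prev_d, axis=1)
        w = d - D @ np.linalg.lstsq(D, d, rcond=None)[0]
    else:
        w = d.copy()
    return J + np.outer(w, y - J.T @ d) / (w @ d)

rng = np.random.default_rng(3)
n = 6
J = rng.normal(size=(n, n))
pairs = [(rng.normal(size=n), rng.normal(size=n)) for _ in range(4)]

stored = []
for d, y in pairs:
    J = barnes_update(J, d, y, stored)
    stored.append(d)

# All stored pairs now satisfy the quasi-Newton relation J^T d^(j) = y^(j).
hereditary_error = max(np.linalg.norm(J.T @ d - y) for d, y in pairs)
print(hereditary_error)   # at round-off level
```

With the plain Broyden choice w = d, only the newest pair would satisfy its quasi-Newton relation exactly after the sweep, which is the practical difference the text emphasizes.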

Orthonormalization of the Displacement Vectors
Eq. 28 represents a modification of the matrix J^(k) by a rank-one update. A useful representation that helps to understand this update is based on the definition of the matrix D^(k) = [d^(1) · · · d^(k)], of dimension N × k, whose columns are the stored displacement vectors, together with its generalized inverse D^{+(k)}. Thus, to use Eq. 28 one only needs a way to update the D^{+(k−1)} matrix to D^{+(k)}, and in this computation the w^(k) vector is obtained. This is done using a procedure proposed by Fletcher 29 that is now briefly described for the purpose of being applied in the present context. Another procedure is due to Ben-Israel. 30 It is also an iterative procedure, but we will not discuss it here.
We define a projection matrix P^(k−1) as

P^(k−1) = I − D^(k−1)D^{+(k−1)},

where I is the N × N identity matrix and D^{+(k−1)} has dimension (k − 1) × N. The orthogonalization of a vector, say v, with respect to the set of vectors {d^(j)}_{j=1}^{k−1} can be achieved by computing P^(k−1)v. The expression for the orthogonalized displacement is

w^(k) = P^(k−1)d^(k),   (29)

which satisfies w^(k)T d^(j) = 0 for j = 1, ..., k − 1, where 0 is understood componentwise for the zero vector of dimension N. This w^(k) vector is the one that appears in the Barnes update formula, Eq. 28.
In fact, the w^(k)T/(w^(k)T d^(k)) vector of Eq. 28 is the k-th row of the D^{+(k)} matrix, the added row that has transformed the D^{+(k−1)} matrix into the D^{+(k)} matrix. This row is also labeled d^{+(k)T}. The norm ||w^(k)||² is a measure of the degree of linear dependence of the set of vectors {d^(j)}_{j=1}^{k}: if it is close to zero, the set of vectors is nearly linearly dependent.
Now we consider the case where D^{+(k)}, D^(k), and d^(k) are given, and D^{+(k−1)} and D^(k−1) are required. This consideration is important when k > N, since, as explained previously, the oldest vector of the set of stored displacements should then be substituted by the newest one. The matrix D^(k−1) is obtained from D^(k) by deleting the corresponding column. Using Eq. 29, the k-th row of the D^{+(k)} matrix, that is, d^{+(k)T}, is just w^(k)T/(w^(k)T d^(k)), as noted above, and the relation between D^{+(k)} and D^{+(k−1)} can be rearranged accordingly. Now the rows of D^{+(k−1)} are linear combinations of the columns of D^(k−1), and d^{+(k)} is orthogonal to all of these. Hence, multiplying the rearranged relation by d^{+(k)} on the right determines the remaining unknown, so that D^{+(k−1)} is completely determined and the formula for contracting the basis is established. These formulae enable the basis set {d^(j)}_{j=1}^{k} to be varied at will.
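The expansion step of this bookkeeping can be checked against a dense pseudoinverse. The sketch below implements a standard Greville-type column append (given here as an illustration of the idea, not as the paper's exact formulae): the new row is w^T/(w^T d), with w the component of d orthogonal to the range of D, and the old rows are corrected accordingly.

```python
import numpy as np

def append_column_pinv(D, Dp, d):
    """Given D (N x k) of full column rank and its pseudoinverse Dp (k x N),
    return the pseudoinverse of [D, d] for a linearly independent column d."""
    w = d - D @ (Dp @ d)               # component of d orthogonal to range(D)
    row = w / (w @ d)                  # new (k+1)-th row; note w^T d = w^T w
    top = Dp - np.outer(Dp @ d, row)   # correct the old rows
    return np.vstack([top, row])

rng = np.random.default_rng(5)
N = 7
cols = [rng.normal(size=N) for _ in range(4)]

D = cols[0].reshape(N, 1)
Dp = np.linalg.pinv(D)
for d in cols[1:]:
    Dp = append_column_pinv(D, Dp, d)
    D = np.column_stack([D, d])

err = np.linalg.norm(Dp - np.linalg.pinv(D))
print(err)   # at round-off level
```

Running the append repeatedly and comparing against `np.linalg.pinv` confirms that the incremental update reproduces the full generalized inverse, which is all the Barnes update needs from this machinery.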
In summary, using the Barnes update formula, Eq. 28, if the iteration number, say k, is lower than the maximum number of stored displacement vectors, the new pair of differences is simply added to the set; otherwise the oldest displacement vector is replaced by the newest one through the expansion and contraction formulae just described.

Examples and Performance of the Algorithm
In this section we present the results of the algorithm applied to a toy model (the Frenkel-Kontorova model) and to two different and relevant medium size molecular systems. These two molecular systems are the ring opening of a benzocyclobutene derivative and the chorismate mutase rearrangement.

The Frenkel-Kontorova model
We use the Frenkel-Kontorova model as the first example. [31][32][33][34][35] It is a simple, yet non-trivial, model. In solid-state physics one often divides a set of particles into a one-dimensional subsystem of interacting elements and a remaining part treated as a substrate; the latter acts through a potential on the extracted subsystem. In chemistry, the Frenkel-Kontorova chain is to a certain degree similar to the elongation of a linear chain-like alkane. 36 N equal atoms or ions may be described by the vector of coordinates x = (x_1, ..., x_N).
The positions x_i are on an axis. The coordinates of all particles satisfy x_i < x_{i+1}; they are sorted in a fixed order. We search for a movement of the full chain along the x-axis. The potential energy surface (PES) of the model is

V(x) = Σ_{i=1}^{N} v[1 − cos(2πx_i/a_s)] + Σ_{i=1}^{N−1} (k/2)(x_{i+1} − x_i − a_o)².

We calculate the case N = 10. For simplicity we use here the ratio a_o/a_s = 1, thus excluding a misfit between the chain and the substrate. 33 The parameter a_o is the average spacing of the chain, while a_s is the periodicity of the substrate potential. We put the prefactor of the sinusoidal potential at v = 1. Then the spring constant, k, is also the ratio of the strength
of the sinusoidal potential to that of the spring potential. Here we also set k = 1 throughout. All quantities referred to are dimensionless. In Table 1 we also compare the energy of the optimal BBP with the first BBPs of other NTs.
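The model is easy to reproduce. The sketch below assumes the standard Frenkel-Kontorova functional form with the parameters quoted above (k = v = 1, a_o = a_s = 1; the exact displayed form of the PES is an assumption of this sketch) and checks the analytic gradient, the basic ingredient of s(x), against finite differences.

```python
import numpy as np

# Frenkel-Kontorova chain: harmonic springs between neighbours plus a
# sinusoidal substrate potential (assumed standard form, k = v = 1,
# a_o = a_s = 1, as in the text).
N, k, v, a_o, a_s = 10, 1.0, 1.0, 1.0, 1.0

def energy(x):
    springs = 0.5 * k * np.sum((np.diff(x) - a_o) ** 2)
    substrate = v * np.sum(1.0 - np.cos(2.0 * np.pi * x / a_s))
    return springs + substrate

def gradient(x):
    g = (2.0 * np.pi * v / a_s) * np.sin(2.0 * np.pi * x / a_s)
    stretch = k * (np.diff(x) - a_o)
    g[:-1] -= stretch          # left end of each spring
    g[1:] += stretch           # right end of each spring
    return g

x = np.arange(N, dtype=float) + 0.1    # slightly displaced chain
num = np.zeros(N)
h = 1e-6
for i in range(N):
    e = np.zeros(N); e[i] = h
    num[i] = (energy(x + e) - energy(x - e)) / (2.0 * h)

err = np.max(np.abs(gradient(x) - num))
print(err)   # finite-difference agreement
```

With the gradient (and a corresponding Hessian) in hand, the σ(x)-function of Eq. 8 can be evaluated for this chain exactly as in the molecular examples.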

The Benzocyclobutene ring opening
The first real chemical example is devoted to the conrotatory electrocyclic ring opening of cis-1,2-dimethyl-benzocyclobutene to yield an E,Z-diene. This transformation was studied by us in a previous paper on the so-called gentlest-ascent-dynamics-conjugate-directions (GAD-CD) transition state search algorithm 38 and will be named bcb in the following. The computational details for the bcb reaction are analogous to those presented below for the chorismate to prephenate reaction. The BBP of the bcb reaction will be located using the algorithm presented herein and also the algorithm introduced in Ref. 19, which was based on a Broyden update. This will allow us to compare the performance of the two algorithms.
The stationary points (namely, reactant, product, and TS) of the bcb reaction were first determined, and their relative energies were computed, taking reactants as the energy zero.
The structures of the stationary points are depicted in Fig. 3. Also, three geometrical parameters are reported: the distance between the two carbon atoms involved in the ring-opening process, and the two dihedral angles corresponding to these carbon atoms, the methyl substituents, and two of the carbon atoms on the benzene ring. For a precise definition of these dihedrals, we refer the reader to Fig. 6 of Ref. 38. The results are presented in Table 2, which also shows the energies and geometric parameters for the located BBP. The BBP was determined setting the MNVAR variable to different values to ascertain the efficiency of the algorithm in these different situations, the rest of the computational parameters being identical. The MNVAR = 0 calculation corresponds to the update method of Ref. 19, and values of MNVAR of 5, 10, 20, and 30 were chosen for the Barnes update algorithm. As can be seen in Table 3, all calculations converge in a relatively small number of iterations and yield the BBP and the force vector at the BBP. In this particular case the optimal value is MNVAR = 10. It is observed that the BBP of the bcb transformation is located in between the transition state and the product, the E,Z-diene, but much closer, both energetically and geometrically, to the transition state. Thus, it can be concluded that the Barnes update is significantly more efficient than the algorithm employed in Ref. 19. The molecular structure of the BBP and the force vector superimposed on this structure are depicted in Fig. 4. The force vector is equal to the gradient at the BBP and represents the optimal pulling direction to carry out the conversion from cis-1,2-dimethyl-benzocyclobutene to the E,Z-diene in the gas phase.

Figure 4: The molecular structure of the converged BBP for the bcb reaction and the force vector superimposed on it are shown.

The Chorismate-Mutase rearrangement
The second real chemical example that we have studied is the intramolecular Claisen rearrangement of chorismate to prephenate. This reaction is catalyzed by the chorismate mutase enzyme and is part of the synthetic pathway to phenylalanine and tyrosine in bacteria, fungi, and higher plants. 39 Significant controversy has arisen about the enzymatic mechanism of this reaction, one of the few examples of an enzyme-catalyzed pericyclic reaction. Thus, it was discussed whether transition state stabilization by electrostatic interactions in the enzyme active center, or a near attack conformation effect, was the main factor accounting for the enzymatic enhancement of the reaction rate constant. 40 More recently, entropic effects have been proposed to be also part of the picture. 41 In the present work, we do not aim at a thorough study of the chorismate to prephenate reaction, but rather we provide a first study of the reaction from the point of view of mechanochemistry. In particular, we have computed the BBP for prephenate production by interfacing the BBP-finding algorithm with the TURBOMOLE 42 program package. The energies and main geometrical parameters of the stationary points are reported in Table 4. Chorismate has different conformers in the gas phase, and the conformer determined here is called 'OH-out' in the literature. 46 In this conformer, the OH group points towards one of the oxygens of the CO2 group, thereby forming a hydrogen bond. Another conformer, called 'OH-in', has the OH group pointing in the opposite direction with respect to the CO2 group, and originates a different reaction path. Note the evolution of the rearrangement from a short C-O distance and a long C-C distance in chorismate, to intermediate distances in the TS, and a long C-O distance and a short C-C distance in prephenate. The relative energies reported in Table 4 can be compared to those reported in the literature at the B3LYP/6-31+G(d,p) level.
47 Thus, the barrier energy, ∆E^‡, is 50.8 kcal·mol^{-1} at the M06-2X/def2-SVP level, and it is 37.5 kcal·mol^{-1} at the B3LYP/6-31+G(d,p) level; the respective reaction energies, ∆E_R, are 5.2 and −10.5 kcal·mol^{-1}. Despite these differences, note that the aim of the present study is to determine the mechanochemical behaviour of the 24-atom chorismate-prephenate system rather than to obtain accurate results for the reaction.
The number of iterations required to converge to the BBP, the energies, and the force vector norm at the BBP are presented in Table 5. Note that for all calculations it is always fulfilled that the square root of the quotient between the σ-function and the number of degrees of freedom at the BBP is less than 1.0 × 10^{-3}, although the norm of the σ-function is always slightly larger than 1.0 × 10^{-3} (around 3.0 × 10^{-3}); more precisely, the value of the σ-function is about 7.0 × 10^{-5} at the BBP. As can be seen, the number of iterations needed to converge to the BBP varies significantly with MNVAR, and the optimal value is MNVAR = 30. This value is slightly less than half the number of degrees of freedom of the system. A BBP search was also carried out with the algorithm presented in Ref. 19, but it failed to converge within the maximum limit of 1000 iterations. The energy, the C-O and C-C bond distances, and the energy relative to chorismate for the BBP are shown in Table 4. Comparing with the stationary points, one can observe that the BBP is located in between the TS and prephenate, but much closer, both energetically and geometrically, to the TS. The molecular structure of the BBP and the force vector superimposed on this structure are depicted in Fig. 6. The force vector is equal to the gradient at the BBP and represents the optimal pulling direction to carry out the conversion from chorismate to prephenate in the gas phase.

Conclusion
We have presented a Gauss-Newton method combined with a Barnes update of the Jacobian matrix that enables the location of points on the PES where the Hessian matrix has a zero eigenvalue and the corresponding zero eigenvector coincides with the gradient. The algorithm has been demonstrated to work efficiently for the Frenkel-Kontorova model PES, the benzocyclobutene ring opening, and the chorismate mutase rearrangement. The critical points located by the algorithm presented herein are particularly important in the context of mechanochemistry because they coincide with the optimal BBPs of the system. The gradient at these points indicates how a pulling force should be applied to a molecular system to render a given chemical transformation barrierless using the smallest possible force. Therefore, our algorithm will be useful for identifying efficient ways of employing mechanical stress to enforce chemical reactions.

Appendix A: Consequences of the Increasing Order of the Lagrangian Multipliers
For any tuple, say the $i$-th one, Eq. 11.a can be multiplied from the left by $\Delta x_i^{\mathrm{T}}$; taking the expression of Eq. 12, this yields $\Delta\sigma^{(2)}(\Delta x_i)$. We now want to prove that if $\lambda_j > \lambda_i$ then $\Delta\sigma^{(2)}(\Delta x_j) > \Delta\sigma^{(2)}(\Delta x_i)$. For this purpose we first multiply Eq. 11.a for the $i$-tuple from the left by $\Delta x_j^{\mathrm{T}}$, and analogously for the $j$-tuple by $\Delta x_i^{\mathrm{T}}$. Since $J(x)J^{\mathrm{T}}(x)$ is symmetric, the difference between the two resulting equations gives

$$(\lambda_j - \lambda_i)\,\Delta x_i^{\mathrm{T}}\Delta x_j = s^{\mathrm{T}}(x)\,J^{\mathrm{T}}(x)\,(\Delta x_j - \Delta x_i)\,.$$

Any $\Delta x_k$ vector, $k = 1, \ldots, N$, can be written as $\Delta x_k = r\,u_k$ with $u_k^{\mathrm{T}} u_k = 1$, so that

$$(\lambda_j - \lambda_i)\, r^2 \cos\theta_{j,i} = s^{\mathrm{T}}(x)\,J^{\mathrm{T}}(x)\,(\Delta x_j - \Delta x_i)\,,$$

$\theta_{j,i}$ being the angle formed between the unit vectors $u_i$ and $u_j$. We conclude that

$$\Delta\sigma^{(2)}(\Delta x_j) - \Delta\sigma^{(2)}(\Delta x_i) = \frac{r^2}{2}\,(\lambda_j - \lambda_i)\left(1 - \cos\theta_{j,i}\right) \geq 0\,. \tag{33}$$

Note that the zero case corresponds to the situation in which the $i$-tuple and the $j$-tuple are the same. A consequence of this relation is the following: the difference $\Delta\sigma^{(2)}(\Delta x_{\max}) - \Delta\sigma^{(2)}(\Delta x_{\min})$ can be expressed as the telescoping sum

$$\left(\Delta\sigma^{(2)}(\Delta x_{\max}) - \Delta\sigma^{(2)}(\Delta x_{\max-1})\right) + \left(\Delta\sigma^{(2)}(\Delta x_{\max-1}) - \Delta\sigma^{(2)}(\Delta x_{\max-2})\right) + \cdots + \left(\Delta\sigma^{(2)}(\Delta x_{\min+1}) - \Delta\sigma^{(2)}(\Delta x_{\min})\right).$$

Substituting Eq. 33 into this equality and rearranging, we obtain an interesting expression relating the angles $\theta_{j,i}$:

$$\cos\theta_{\max,\min} = 1 - \frac{\sum_{k=\min}^{\max-1}\left(\lambda_{k+1} - \lambda_k\right)\left(1 - \cos\theta_{k+1,k}\right)}{\lambda_{\max} - \lambda_{\min}}\,.$$

Finally, we prove that the set of tuple-solutions of Eqs. 11 contains no degeneracy. For this purpose assume $\lambda_i = \lambda_j$ with $i \neq j$. Subtracting the two corresponding copies of Eq. 11.a shows that $\Delta x_i - \Delta x_j$ lies in the null space of $J(x)J^{\mathrm{T}}(x) - \lambda_i I$; whenever $\lambda_i$ is not an eigenvalue of $J(x)J^{\mathrm{T}}(x)$ this null space is trivial, so $\Delta x_i = \Delta x_j$ and the two tuples coincide. Hence distinct tuples carry distinct multipliers.

Appendix B: Extension of the Set of Solutions for λ in Eq. 16 with Respect to Eqs. 11

In this appendix we show that the set of solutions for $\lambda$ in Eq. 16 is extended with respect to Eqs. 11. 25,26 As will be seen, this fact has no consequences for the proposed algorithm; these cases are already taken into account and discussed in Section 2.1.

1. Let us start by assuming that $\lambda$ and $\Delta x$ are a solution of Eqs. 11; then Eq. 16 has a solution for this $\lambda$. Two cases must be considered: the first is that $\lambda$ is an eigenvalue of $J(x)J^{\mathrm{T}}(x)$, and the second is that it is not.

(b) We now consider the second case, where $\lambda$ is not an eigenvalue of the matrix $J(x)J^{\mathrm{T}}(x)$. From the definition of the vector $t(x)$ given after Eq. 14, $t(x)$ can be written in terms of $\Delta x$; using the definition of $t(x)$, Eq. 13, and the fact that $\Delta x$ satisfies Eq. 11.b, Eq. 16 is again satisfied. Consequently, from the pair $\Delta x$ and the solution $\lambda$ of Eqs. 11 we always obtain an eigenvalue $\lambda$ with a corresponding vector $t(x)$ solving Eq. 16.
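The cross-multiplication step used above (multiplying Eq. 11.a for the $i$-tuple by $\Delta x_j^{\mathrm{T}}$ and vice versa, then subtracting) yields an identity that holds for any pair of multipliers and can be checked numerically. The sketch below uses a random matrix J and vector s as placeholders for a real Jacobian and residual, assuming Eq. 11.a has the form $[J(x)J^{\mathrm{T}}(x) - \lambda I]\Delta x = J(x)s(x)$ as written explicitly later in the text.

```python
import numpy as np

# Numerical check of the cross-multiplication identity
#   (lambda_j - lambda_i) dx_i^T dx_j = s^T J^T (dx_j - dx_i),
# which follows from Eq. 11.a alone because A = J J^T is symmetric.
# Random J, s and the two multiplier values are illustrative placeholders.
rng = np.random.default_rng(0)
n = 6
J = rng.standard_normal((n, n))
s = rng.standard_normal(n)
A = J @ J.T                               # symmetric positive semidefinite
b = J @ s                                 # right-hand side of Eq. 11.a

lam_i, lam_j = -1.0, -2.5                 # any values below the spectrum of A
dx_i = np.linalg.solve(A - lam_i * np.eye(n), b)
dx_j = np.linalg.solve(A - lam_j * np.eye(n), b)

lhs = (lam_j - lam_i) * (dx_i @ dx_j)
rhs = b @ (dx_j - dx_i)                   # b = J s, so this is s^T J^T (dx_j - dx_i)
```

Note that this intermediate identity needs no normalization: only when both steps share the norm $r$ does it specialize to the relation involving $\cos\theta_{j,i}$.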
2. Now we treat the reverse situation: $\lambda$ and $t(x)$ are solutions of Eq. 16, and we try to find a solution of Eqs. 11. Two cases must be considered: when $\lambda$ is not an eigenvalue of the matrix $J(x)J^{\mathrm{T}}(x)$, and when it is.
(a) In the first case, since $\lambda$ is not an eigenvalue of $J(x)J^{\mathrm{T}}(x)$, the inverse appearing in Eq. 13 always exists; it yields $\Delta x$, and thus Eq. 11.a is satisfied. Multiplying Eq. 16 from the left and using that the eigenvector $t(x) \neq 0$, the resulting equality implies that $s^{\mathrm{T}}(x)J^{\mathrm{T}}(x)t(x) \neq 0$, so $\Delta x$ can be written in terms of $t(x)$. Multiplying this equality from the left by $s^{\mathrm{T}}(x)J^{\mathrm{T}}(x)$ and using Eq. 13, we obtain $\Delta x^{\mathrm{T}}\Delta x = r^2$; thus Eq. 11.b is satisfied too. This result is also given in Section 2.1.

(b) In the second case $\lambda$ is an eigenvalue of $J(x)J^{\mathrm{T}}(x)$. Let $\overline{\Delta x}$ denote the particular solution of Eq. 11.a for this $\lambda$; three subcases arise. (i) In the first subcase $\overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} > r^2$, and Eqs. 11 have no solution for this $\lambda$ because the normalization condition $r^2$ cannot be satisfied. (ii) The second subcase occurs when $\overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} = r^2$; then $\Delta x = \overline{\Delta x}$ is the unique solution of Eqs. 11 for this $\lambda$.
(iii) In the third subcase $\overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} < r^2$, and Eqs. 11 have several solutions for this $\lambda$. Let us assume that the eigenvalue $\lambda$ is $l$-degenerate, $\{l^{(i)}\}_{i=1}^{l}$ being the corresponding set of orthonormal eigenvectors. Notice that $\overline{\Delta x}$ is orthogonal to this set. With this notation, every

$$\Delta x = \overline{\Delta x} + c_1 l^{(1)} + \cdots + c_l l^{(l)}, \qquad r^2 = \Delta x^{\mathrm{T}}\Delta x = \overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} + c_1^2 + \cdots + c_l^2,$$

is a solution of Eqs. 11. This set of solutions constitutes a manifold of dimension $l - 1$.
In summary, solvability of Eqs. 11 requires the smallest multiplier $\lambda_{\min}$ to satisfy $\lambda_{\min} \leq \lambda_1^J$, $\lambda_1^J$ being the smallest eigenvalue of the matrix $J(x)J^{\mathrm{T}}(x)$. Assuming that $\lambda_{\min}$ is the smallest eigenvalue of Eq. 16, the above results lead to the following three cases. 1) If $\lambda_{\min} < \lambda_1^J$, then $\Delta x = [J(x)J^{\mathrm{T}}(x) - \lambda_{\min} I]^{-1}J(x)s(x)$ exists, satisfies $\Delta x^{\mathrm{T}}\Delta x = r^2$, and is the unique solution of Eqs. 11. 2) If $\lambda_{\min} = \lambda_1^J$ and the particular solution $\overline{\Delta x}$ of $[J(x)J^{\mathrm{T}}(x) - \lambda_{\min} I]\overline{\Delta x} = J(x)s(x)$ satisfies $\overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} = r^2$, then $\Delta x = \overline{\Delta x}$ is again the unique solution. 3) If $\lambda_{\min} = \lambda_1^J$ and $\overline{\Delta x}$ satisfies $[J(x)J^{\mathrm{T}}(x) - \lambda_{\min} I]\overline{\Delta x} = J(x)s(x)$ but $\overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} < r^2$, then we must find a vector $\bar{l}$, a linear combination of the eigenvectors $\{l^{(i)}\}_{i=1}^{l}$ corresponding to the $l$-degenerate eigenvalue $\lambda_1^J$ of $J(x)J^{\mathrm{T}}(x)$, such that $r^2 = \overline{\Delta x}^{\mathrm{T}}\overline{\Delta x} + \bar{l}^{\mathrm{T}}\bar{l}$. Then $\Delta x = \overline{\Delta x} + \bar{l} = \overline{\Delta x} + c_1 l^{(1)} + \cdots + c_l l^{(l)}$ represents one of the many solutions of Eqs. 11.
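In the generic, non-degenerate situation $\lambda_{\min} < \lambda_1^J$, the multiplier can be located from the secular equation $\|\Delta x(\lambda)\| = r$, because $\|\Delta x(\lambda)\|$ increases monotonically on $(-\infty, \lambda_1^J)$. The following sketch illustrates this numerically; the random J and s, the trust radius r, and the bisection bracket are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Numerical illustration of the solvability condition lambda_min <= lambda_1^J
# for [J J^T - lambda I] dx = J s with ||dx|| = r (generic case: the
# right-hand side has a component along every eigenvector of J J^T).
rng = np.random.default_rng(1)
n = 5
J = rng.standard_normal((n, n))
s = rng.standard_normal(n)
r = 0.5                                   # trust radius of Eq. 11.b

A = J @ J.T                               # symmetric positive semidefinite
b = J @ s                                 # right-hand side of Eq. 11.a
lamJ1 = np.linalg.eigvalsh(A).min()       # smallest eigenvalue of J J^T

def step_norm(lam):
    """Norm of dx(lambda) = [J J^T - lambda I]^{-1} J s."""
    return np.linalg.norm(np.linalg.solve(A - lam * np.eye(n), b))

# For lambda < lamJ1 the norm grows monotonically from 0 (lambda -> -inf)
# towards +inf (lambda -> lamJ1), so ||dx(lambda)|| = r has a unique root
# lambda_min < lamJ1 that can be bracketed and bisected.
lo, hi = lamJ1 - 1.0e6, lamJ1 - 1.0e-10
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if step_norm(mid) < r:
        lo = mid                          # step too short: raise lambda
    else:
        hi = mid                          # step too long: lower lambda
lam_min = 0.5 * (lo + hi)
dx = np.linalg.solve(A - lam_min * np.eye(n), b)
```

The recovered $\lambda_{\min}$ lies strictly below $\lambda_1^J$ and reproduces the constraint $\|\Delta x\| = r$; the degenerate situation of case 3) would instead require adding a null-space component $\bar{l}$ to reach the radius $r$.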

Supporting Information Available
The following files are available free of charge.
• Supporting-Info.pdf: Cartesian coordinates of the initial point and of the BBP for the conversion to prephenate.
An animation file, FKm10DanimateBarnesBBP.gif, is also attached.