TWO NEW SERIES OF PRINCIPLES IN THE INTERPRETABILITY LOGIC OF ALL REASONABLE ARITHMETICAL THEORIES

Abstract. The provability logic of a theory T captures the structural behavior of formalized provability in T as provable in T itself. Just as with provability, one can formalize the notion of relative interpretability, giving rise to interpretability logics. Whereas provability logics are the same for all moderately sound theories of some minimal strength, interpretability logics do show variations. The logic IL(All) is defined as the collection of modal principles that are provable in any moderately sound theory of some minimal strength. In this article we raise the previously known lower bound of IL(All) by exhibiting two series of principles which are shown to be provable in any such theory. Moreover, we compute the collection of frame conditions for both series.


Introduction
Relative interpretations in the sense of Tarski, Mostowski and Robinson [12] are widely used in mathematics and in mathematical logic to interpret one theory into another. Roughly speaking, such an interpretation between two theories is a translation from the language of one theory to the language of the other so that the translation preserves logical structure and theoremhood.
We shall write U ⊲ V to denote that a theory U interprets a theory V. Once we know that U ⊲ V, this provides us with much information; for example, the consistency of U implies the consistency of V, and various definability results carry over from one theory to the other. Famous examples of interpretations are abundant: the theory of the natural numbers into the theory of the integers, set theory plus the continuum hypothesis into ordinary set theory (in fact, set theory plus the negation of the continuum hypothesis can also be interpreted into ordinary set theory, though this fact is less well known), non-Euclidean geometry into Euclidean geometry, etc.
Interpretability, being a syntactical notion, allows for formalization very much as one can formalize the notion of provability. As such, we can consider interpretability logics, which actually extend the well-known provability logic GL named after Gödel and Löb. We shall see that the interpretability logic of a theory is the collection of all structural properties of interpretability that it can prove.
Where all modestly correct theories of some minimal strength (let us call them reasonable theories in this paper) have the same provability logic GL, the situation is different in the case of interpretability, and different theories have different logics. It is an open question to determine the logic of interpretability principles provable in any reasonable theory. This paper reports substantial progress on this open question by increasing the previously known lower bound.

Preliminaries
Let U and V denote theories with languages L_U and L_V respectively. A relative interpretation j from V into a theory U (we will write j : U ⊲ V) is a pair ⟨δ(x), t⟩ where δ(x) is a formula of L_U that specifies the domain in which V will be interpreted and t is a translation, mapping symbols of L_V to formulas of L_U providing a definition in U of these symbols.
For further details, we refer the reader to [16] and just mention some particularities here. For example, we restrict to languages with only constant and relation symbols and treat function symbols as functional relations. Moreover, we restrict ourselves to one-dimensional interpretations where each object in the one theory is represented as one object in the other theory, as opposed to a sequence of objects, as in the case of higher-dimensional interpretations as studied, for example, in [10].
Further, we do not allow for extra parameters in our interpretation. Thus, an n-ary relation symbol R in the language of V will be mapped by t to a formula R t in the language of U with exactly n free variables. If the language of V contains equality, we do not require that t maps equality to equality.
The translation t is extended to a translation j of formulas in the usual way by having j commute with the connectives and relativize the quantifiers to the domain specifier δ(x) as follows: (∀x ϕ(x))^j := ∀x (δ(x) → ϕ^j(x)). We will not go too much into details, but the main point is that interpretations are primarily syntactical notions (especially for finite languages) and as such allow for an arithmetization/formalization very much as formal proofs do.
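For the reader's convenience, the remaining clauses of the extension of t can be spelled out as follows; this is the standard definition and uses no notation beyond that introduced above:

```latex
\begin{align*}
(R(x_1,\ldots,x_n))^j &:= R^t(x_1,\ldots,x_n) \quad \text{for $R$ a relation symbol of } \mathcal{L}_V,\\
(\neg\varphi)^j &:= \neg\,\varphi^j, \qquad (\varphi \circ \psi)^j := \varphi^j \circ \psi^j \quad \text{for } \circ \in \{\wedge,\vee,\rightarrow\},\\
(\forall x\, \varphi(x))^j &:= \forall x\, \big(\delta(x) \rightarrow \varphi^j(x)\big),\\
(\exists x\, \varphi(x))^j &:= \exists x\, \big(\delta(x) \wedge \varphi^j(x)\big).
\end{align*}
```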

Arithmetic
In order to formalize the notion of interpretability within some base theory T one needs to require some minimal strength conditions on T. In particular, we shall require that T can speak of numbers with which to code syntax, and we shall assume¹ that the language of T contains the language of arithmetic {+, ×, S, 0, 1, <, =}.
We will need that the main properties of the basic syntactical operations like substitution are provable within T. For reasonable coding protocols this implies that we need to require the totality of a function of growth rate ω_1 : x ↦ 2^{|x|²}, where |x| denotes the integral part of the binary logarithm of x.
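As a concrete reference point for this growth rate, here is a small illustrative sketch (not part of the paper's formal development), where `length` plays the role of |x|:

```python
def length(x: int) -> int:
    """|x|: the integral part of the binary logarithm of x (for x >= 1)."""
    return x.bit_length() - 1

def omega1(x: int) -> int:
    """The growth rate omega_1: x -> 2^(|x|^2)."""
    return 2 ** (length(x) ** 2)
```

Note that ω_1(x) coincides with x#x for Buss's smash function x#y = 2^{|x|·|y|}; in particular ω_1 eventually dominates every polynomial but is dominated by full exponentiation.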
Further, to perform basic arguments we need a minimal amount of induction, and actually a surprisingly small amount of induction suffices. Buss's theory S^1_2 has just the needed amount of induction and proves the totality of ω_1, and this shall be our base theory (formulated in the standard language of arithmetic).
Alternatively, we could have taken as base theory I∆ 0 + Ω 1 which consists of Robinson's arithmetic Q together with induction for bounded formulas with parameters and the axiom Ω 1 stating that the graph of ω 1 defines a total function. We refer the reader to [5] and [2] for further details.
A sharply bounded quantifier is one of the form ∀ x<|y| where |y| denotes the integer value of the binary logarithm of y. The class ∆ b 0 contains exactly the formulas where each quantifier is sharply bounded. The class Σ b 1 arises by allowing bounded existential quantifiers and sharply bounded universal quantifiers to occur over ∆ b 0 formulas. It is essential that the ω 1 function may occur in the bounding terms. By ∃Σ b 1 we denote those formulas that arise by allowing a single unbounded existential quantifier over a Σ b 1 formula. The complexity classes Π n , Σ n and ∆ n refer to the usual quantifier alternations hierarchies in the standard language of arithmetic.
In this paper we shall only be concerned with first-order theories containing the language of arithmetic, having a poly-time recognizable set of axioms and extending S^1_2; we shall often refrain from repeating (some of) these conditions. We shall write □_T φ for the ∃Σ^b_1 formalization of φ being provable in the theory T, and refrain from distinguishing formulas from their Gödel numbers or even the numerals thereof. When I is a formula with one free variable we shall denote by □^I_T φ the formalization of φ being provable in the theory T by a proof that satisfies I. It is well known that we can express provable Σ_1 completeness using formalized provability.

Lemma 2.1. For any theory T extending S^1_2 and any ∃Σ^b_1 sentence σ we have T ⊢ σ → □_T σ.

¹ One can consider a slightly more general setting where theories T do not directly speak of the natural numbers but where it is assumed that T is decent in some sense and comes with an interpretation N of the natural numbers (see e.g. [17]). One then only has to require that N satisfies sufficiently many axioms of number theory so that the arithmetization of syntax can be performed. For example, ZFC set theory does not contain the language of arithmetic, but we can easily perceive the numbers as 'living' inside set theory; that is, there is a natural interpretation of the numbers in ZFC.
We will use U ⊲ V to denote the formalization of "the theory V is interpretable in the theory U". If we abbreviate by ∃int j the existential quantifier over numbers that code a pair ⟨δ(x), t⟩ defining an interpretation, we can write

U ⊲ V := ∃int j (j : U ⊲ V). (1)

An interpretation j : U ⊲ V can be used as a uniform way to obtain a model of V inside any model of U. If U satisfies full induction, then we see that the defined model of V is actually an end extension of the model of U: we define f(0) := 0^j and f(x+1) := f(x) +^j 1^j and see by induction that ∀x ∃y f(x) = y. As such, we see that any Σ_1 consequence of U must necessarily also hold in V. Since □_T φ is a Σ_1 formula, the insight on end extensions is reflected in what is called Montagna's principle:

A ⊲ B → (A ∧ □C) ⊲ (B ∧ □C). (2)

In case U does not have full induction, we can still define the graph F(x, y) of the function f from above, but we can no longer prove that the function is total. However, we can prove that ∃y F(x, y) is progressive; that is, we can prove ∃y F(0, y) and ∀x (∃y F(x, y) → ∃y F(x+1, y)). In particular, the formula ∀x′≤x ∃y F(x′, y) defines an initial segment within U. A common trick in weak arithmetics is to use this initial segment as our natural numbers instead of applying induction (which is not necessarily available). By Solovay's techniques on shortening initial segments we may assume that such initial segments obey certain closure properties, giving rise to what is called a definable cut. A formula J is called a T-cut whenever T proves all of

J(0), ∀x (J(x) → J(x+1)), and ∀x ∀y (y ≤ x ∧ J(x) → J(y)).

Let Cut(J) denote the conjunction of these three requirements. Sometimes we want to quantify over cuts within T, so that these cuts can then of course be non-standard. We shall use ∀Cut J ψ and ∃Cut J ψ to denote ∀J (□_T Cut(J) → ψ) and ∃J (□_T Cut(J) ∧ ψ) respectively. Sometimes we shall write x ∈ J instead of J(x). Sometimes we will need to find a cut J inside another cut I. In such cases we will not just require that the Gödel number of the formula J is in I, but moreover we shall require that the proof that J is a cut can also be found within I.
Thus, for example, ∃Cut J∈I ψ will be short for ∃J ∈ I (□^I_T Cut(J) ∧ ψ). We note that if ψ(v) ∈ ∃Σ^b_1, then ∃Cut J ψ(J) is again provably equivalent to an ∃Σ^b_1 formula. Let us get back to the role of induction in Montagna's principle. If j : U ⊲ V and U does not prove full induction, then j need not define an end extension of any model of U. However, it is easy to see that j does define, using the progressive formula ∀x′≤x ∃y F(x′, y), a definable cut in U on which f is an isomorphism. This is reflected in a weakening of Montagna's principle also referred to as Pudlák's principle.
Lemma 2.2 (Pudlák's principle). Let T be a theory containing S^1_2 and let U and V be theories. Then T proves that if U ⊲ V, then there is a cut J such that (U + σ^J) ⊲ (V + σ) for every Σ_1 sentence σ (where σ^J denotes σ with its quantifiers relativized to J).

The interpretability logic of a theory
Interpretability logics are designed to capture the structural behavior of formalized interpretability just as provability logic captures the structural behavior of formalized provability. To this end we consider a propositional modal language with a unary modal operator □ to model formalized provability and a binary modal operator ⊲ to model formalized interpretability of sentential extensions of some base theory. Let us make this more precise.
Thus, let us fix an arithmetical theory T. By * we will denote a realization, that is, any mapping from the set of propositional variables to sentences of T. The map * is extended to the set of all modal formulas of interpretability logics as follows. Note that we use the same symbol ⊲ for the binary modal operator as for the sentence in the language of arithmetic as defined in (1). We can now define the interpretability logic of a theory as those modal principles which are provable under any realization. With some liberal notation this is captured in the following.
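The extension of * and the resulting definitions are standard in the literature and can be stated as follows:

```latex
\begin{align*}
(\neg A)^* &:= \neg A^*, \qquad (A \circ B)^* := A^* \circ B^* \quad (\circ \in \{\wedge,\vee,\rightarrow\}),\\
(\Box A)^* &:= \Box_T\, A^*,\\
(A \rhd B)^* &:= (T \cup \{A^*\}) \rhd (T \cup \{B^*\}).
\end{align*}
```

With these clauses one sets IL(T) := {A : T ⊢ A* for all realizations *}, and IL(All) is the intersection of the IL(T) over all reasonable theories T.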

Small witnesses
As a direct corollary to (2) (Montagna's principle) we can conclude that M : A ⊲ B → (A ∧ □C) ⊲ (B ∧ □C) belongs to IL(T) whenever T proves full induction. However, there is no direct reflection of Pudlák's principle on the level of interpretability logics, since Pudlák's principle would translate to A ⊲ B → (A ∧ □^J C) ⊲ (B ∧ □C) for the particular cut J corresponding to j : A ⊲ B, and this cannot be expressed in our modal language. In a sense, □^J C corresponds to finding a small witness of the provability of C. As we shall see, there are various occasions where we can conclude that such small witnesses exist. The main ingredient in obtaining such small witnesses is expressed by the so-called Outside big, inside small lemma.
To formulate the lemma we should first provide a means to speak, under the scope of a provability predicate, about numbers that are given externally. As usual this is done via the notion of numerals. A numeral is a syntactical term that uniquely denotes a number. Since unary numerals grow too big, we will resort to dyadic numerals. Dyadic numerals ñ are defined recursively by 0̃ := 0, (2n)~ := SS0 × ñ and (2n+1)~ := S(SS0 × ñ). Clearly, dyadic numerals are exponentially shorter than unary numerals.

Lemma 2.4 (Outside big, inside small). For T, U any theories extending S^1_2, we have that, T-provably, for any U-cut J and any number x (possibly outside J), the statement x̃ ∈ J is provable in U.

Proof. Given J and given x, not necessarily in J, we can construct a proof object to the extent that x̃ ∈ J in the obvious way. A proof of x̃ ∈ J will follow the build-up of x̃, using the standard proofs of lemmas to the effect that ∀x (x ∈ J → (SS0 × x) ∈ J) and ∀x (x ∈ J → S(SS0 × x) ∈ J). We refer to e.g. [2, 7] for details.
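To make the length claim concrete, here is a small illustrative sketch (not from the paper) that builds dyadic and unary numerals as term strings:

```python
# Dyadic numerals as syntactic terms, following the recursive definition
#   0~ := 0,  (2n)~ := SS0 * n~,  (2n+1)~ := S(SS0 * n~).
def dyadic(n: int) -> str:
    """Return the dyadic numeral of n as a term string."""
    if n == 0:
        return "0"
    if n % 2 == 0:
        return f"(SS0*{dyadic(n // 2)})"
    return f"S(SS0*{dyadic(n // 2)})"

def unary(n: int) -> str:
    """Return the unary numeral S...S0 of n."""
    return "S" * n + "0"
```

The dyadic numeral of n has length proportional to log n, while the unary numeral has length n + 1; this is the sense in which dyadic numerals are exponentially shorter.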
The next lemma tells us that small witnesses suit the purpose of inner model constructions.
Lemma 2.5 (Formalized Henkin construction). For theories T, U and V all extending S^1_2, T proves that for any cut J, the theory U ∪ {Con^J(V)} interprets V.

Proof. (Sketch) The theory T can verify that the usual Henkin construction can be formalized in U without many problems, where J plays the role of the natural numbers. Instead of applying induction to obtain a maximal consistent set M_V as a consistent branch of infinite length in Lindenbaum's lemma, we can now only conclude that the length of the branch is within some cut I which is a shortening of J, thereby yielding a set M^I_V which is contradiction-free on I. The set M^I_V can be used to obtain a term model, and we define an interpretation j : (U ∪ {Con^J(V)}) ⊲ V from the term model as usual, so that provably φ^j ↔ φ ∈ M^I_V. Note that since the interpretation of identity can be any equivalence relation, there is no need to move to equivalence classes in the construction of our term model. By the outside big, inside small principle and the formalized deduction theorem we now conclude the required interpretability. We refer to [15], where one can see that the necessary induction for this argument is available in S^1_2 and that moving from φ ∈ M^I_V to φ^j by applying the commutation clauses can be done in p-time.
Using these lemmas we can infer on various occasions the existence of small witnesses to provability. Given two sentences α and β, it is common practice to denote T ∪ {α} ⊲ T ∪ {β} by α ⊲_T β. Moreover, it is common practice to omit theory subscripts in both the interpretability predicate ⊲_T and in the provability predicate □_T, and we will do so too. As such, both the arithmetical predicate and the modal operator are denoted by the same symbol, but the context will always clearly indicate which reading to employ.
Proof. Reason in arbitrary T by contraposition and apply the Henkin construction on a cut.
This lemma has a direct corollary on the level of interpretability. It is an open problem to classify the modal principles that hold in any theory extending S^1_2; this paper raises the previously known lower bound.
We formulate some other direct corollaries of the outside big, inside small principle in the following useful lemma.
Lemma 2.7. Let T be any theory containing S^1_2.

One ingredient in proving interpretability principles arithmetically sound is to find small witnesses. Another ingredient tells us how we can keep these witnesses small. A simple generalization of Pudlák's lemma, first proved in [6], tells us how to do so.

Modal interpretability logics
When working in interpretability logic, we shall adopt a reading convention that allows us to omit many brackets. Thus, we say that the strongest binding 'connectives' are ¬, □ and ♦, which all bind equally strongly. Next come ∧ and ∨, followed by ⊲, and the weakest connective is →. Thus, for example, A ⊲ B → A ∧ □C ⊲ B ∧ □C is short for (A ⊲ B) → ((A ∧ □C) ⊲ (B ∧ □C)). If we do not disambiguate a formula of nested conditionals (→ or ⊲), then this should be read as a conjunction. For example, A ⊲ B ⊲ C should be read as (A ⊲ B) ∧ (B ⊲ C), and likewise for implications.
We first define the core logic IL, which shall be present in any other interpretability logic. As before, we work in a propositional signature where, apart from the classical connectives, we have a unary modal operator □ and a binary modal operator ⊲. Definition 2.9 (IL). The logic IL contains, apart from all propositional logical tautologies, all instantiations of the following axiom schemes.
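The axiom schemes of IL are standard in the literature; for the reader's convenience they read as follows (with ♦A := ¬□¬A):

```latex
\begin{align*}
\mathrm{K} &: \Box(A \to B) \to (\Box A \to \Box B)\\
\mathrm{L} &: \Box(\Box A \to A) \to \Box A\\
\mathrm{J1} &: \Box(A \to B) \to A \rhd B\\
\mathrm{J2} &: (A \rhd B) \wedge (B \rhd C) \to A \rhd C\\
\mathrm{J3} &: (A \rhd C) \wedge (B \rhd C) \to (A \vee B) \rhd C\\
\mathrm{J4} &: A \rhd B \to (\Diamond A \to \Diamond B)\\
\mathrm{J5} &: \Diamond A \rhd A
\end{align*}
```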
The rules of the logic are Modus Ponens (from A → B and A, conclude B) and Necessitation (from A, conclude □A).
It is not hard to see that IL ⊆ IL(All). By ILM we denote the logic that arises by adding Montagna's axiom scheme M : A ⊲ B → (A ∧ □C) ⊲ (B ∧ □C) to IL. It follows from our earlier observations that ILM ⊆ IL(T) whenever T proves full induction, and the other inclusion can be proven too. A theory T is called Σ^0_1-sound if it proves no false Σ^0_1-sentences.
The logic ILP arises by adding the axiom scheme P : A ⊲ B → □(A ⊲ B) to the basic logic IL. If T is finitely axiomatizable, it is easy to see that (1) is provably equivalent to a Σ_1 formula, so that by provable Σ_1 completeness we see that ILP ⊆ IL(T) for any finitely axiomatized theory T that proves that exponentiation is a total function. If T can moreover prove the totality of superexponentiation supexp, then the inclusion can be reversed too. Here, supexp(x) is defined as 2^x_x, where 2^n_0 := n and 2^n_{m+1} := 2^{(2^n_m)}.
It follows that IL ⊆ IL(All) ⊆ (ILP ∩ ILM). In this paper we shall focus on these bounds.

Relational semantics
We can equip interpretability logics with a natural relational semantics often referred to as Veltman semantics.
A Veltman frame is a triple ⟨W, R, {S_x}_{x∈W}⟩ with W a non-empty set of possible worlds and R a binary relation on W so that R^{-1} is transitive and well-founded. Here, each S_x is a binary relation on x↑ (where x↑ := {y | xRy}). The requirements are that the S_x are reflexive and transitive and that the restriction of R to x↑ is contained in S_x, that is, R ∩ (x↑ × x↑) ⊆ S_x.
A Veltman model consists of a Veltman frame together with a valuation V : Prop → P(W) that assigns to each propositional variable p ∈ Prop a set of worlds V(p) ⊆ W where p is stipulated to be true. This valuation defines a forcing relation ⊩ ⊆ W × Form telling us which formulas are true at which particular world. The logic IL is sound and complete with respect to all Veltman models ([3]). Often one is interested in considering all models that can be defined over a frame. Thus, given a frame F and a valuation V on F we shall denote the corresponding model by ⟨F, V⟩. A frame condition for a modal formula schema A is a first- or higher-order formula F in the language {R, {S_x}_{x∈W}} so that a frame satisfies F (as a relational structure) if and only if ⟨F, V⟩ |= A for every valuation V.
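The inductive clauses of the forcing relation are the usual ones; for the modal operators they read:

```latex
\begin{align*}
w \Vdash p &\iff w \in V(p),\\
w \Vdash \Box A &\iff \forall v\,(wRv \Rightarrow v \Vdash A),\\
w \Vdash A \rhd B &\iff \forall u\,\big(wRu \wedge u \Vdash A \;\Rightarrow\; \exists v\,(u S_w v \wedge v \Vdash B)\big),
\end{align*}
```

with the Boolean connectives handled classically.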
It is easy to establish that the frame condition for P is xRyRzS_x u → zS_y u, where xRyRzS_x u is short for xRy ∧ yRz ∧ zS_x u. Likewise, it is elementary to see that the frame condition for M is given by yS_x zRu → yRu. In this paper we shall compute the frame conditions for two new series of principles in IL(All).
Often we shall denote a valuation V directly by the induced forcing relation ⊩. Given a Veltman model F, we define a C-assuring successor, denoted by R^C, as follows.
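The definition, in the form consistent with its use in the correspondence proofs below, reads:

```latex
x R^{C} y \;:\iff\; xRy \;\wedge\; \forall z\,\big(y S_x z \Rightarrow z \Vdash C\big).
```

Since S_x is reflexive, xR^C y implies y ⊩ C; moreover, x ⊩ ¬(A ⊲ ¬C) precisely when x has an R^C-successor forcing A.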

A slim hierarchy of principles
In this section we present a hierarchy of interpretability principles in IL(All) of growing strength. For a well-behaved sub-hierarchy we shall compute the frame conditions and prove arithmetical soundness. There is no particular 'slimness' inherent to the hierarchy presented here. The main reason for our name is that we tend to depict the frame conditions (see Figure 1) in a slim way, as opposed to the frame conditions depicted for the series of principles that we refer to as the broad series (see Figure 2).
Here and in the next section we shall refrain from denoting arithmetical sentences by Greek lower-case letters and modal formulas by Latin upper-case letters. We will use the latter for both, and the context will clearly tell which reading to use.

A slim hierarchy
Let a_i, b_i, c_i and e_i denote distinct propositional variables for i ∈ ω. Inductively, we define a series of principles as follows.
To illustrate how these substitutions work we shall calculate the first five principle schemes.
It is easy to see that the hierarchy defines a series of principles of increasing strength as expressed by the following lemma.
Proof. By an easy case distinction.

Thus, to understand the hierarchy well, it suffices to study a well-behaved co-final subsequence of it. To this end we define the following hierarchy.
For any n ≥ 0 we define schemata X n , Y n and Z n as follows.
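In the notation of this section, the schemata can be taken as follows; this presentation matches how X_n, Y_n and Z_n are unfolded in the proofs below, with the convention (used in Lemma 3.3) that A_{-1} ≡ Z_{-1} ≡ X_{-1} ≡ Y_{-1} ≡ ⊤:

```latex
\begin{align*}
X_n &:= A_n \rhd (B_n \wedge \Box X_{n-1}),\\
Y_n &:= \neg(A_n \rhd \neg C_n) \wedge (E_n \rhd Y_{n-1}),\\
Z_n &:= B_n \wedge \Box X_{n-1} \wedge \Box C_n \wedge (E_n \rhd A_{n-1}) \wedge (E_n \rhd Z_{n-1}).
\end{align*}
```

In particular X_0 = A_0 ⊲ B_0, Y_0 is equivalent to ¬(A_0 ⊲ ¬C_0) and Z_0 to B_0 ∧ □C_0, so that X_0 → (Y_0 ⊲ Z_0) is the principle R from [4].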
For any n ≥ 0 we define R_n := X_n → (Y_n ⊲ Z_n). To see how this proceeds, let us evaluate the first couple of schematic instances. It is clear that the R_n hierarchy is directly related to the original series of principles.

Proof. By visual inspection we see that it holds for k = 0, 1. It is proven in full generality by an easy induction. To prove the lemma, it is best to consider the place-holders like A_i etc. as propositional variables, since otherwise, in principle, A_i could for example contain E_i as a subformula.
For the remainder of this section, we shall focus on the R k hierarchy and begin by computing a collection of frame conditions.

Frame conditions
For any n ≥ 0 we define a ternary relation G_n(x, y, z) on Veltman frames as follows. For every n ≥ 0 we define the first-order frame condition F_n as follows.
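One convenient way to state the recursion for G_n and the condition F_n, agreeing with the unfoldings used in Lemmas 3.3 and 3.5 below, is:

```latex
\begin{align*}
G_0(x,y,z) &:\iff \forall u\,\big(zRu \Rightarrow y S_x u\big),\\
G_{n+1}(x,y,z) &:\iff \forall u\,\Big(zRu \Rightarrow y S_x u \wedge \forall v\,\big(u S_x v \Rightarrow G_n(z,u,v)\big)\Big),\\
F_n &:\iff \forall w,x,y,z\,\big(wRxRyS_wz \Rightarrow G_n(x,y,z)\big).
\end{align*}
```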
The main result of this subsection is that F_{2n} is the frame correspondence of R_n. For n = 0 this has been established in [4]. It is easy to see that G_{n+1}(x, y, z) implies G_n(x, y, z), so that F_{n+1} also implies F_n. The frame conditions F_k are depicted in Figure 1 for the first three values of k.
In what follows we let F = ⟨W, R, S⟩ be an arbitrary Veltman frame. With a forcing relation we will always mean a forcing relation on F. For our convenience we introduce some shorthand notation. Before we can prove a frame correspondence we first need a technical lemma.
Lemma 3.3. For all k ≥ 0 and all x, y, z ∈ W: if G_{2k}(x, y, z), then for any forcing relation ⊩ for which x ⊩ Y_k, xR^{C_k} y and z ⊩ □X_{k-1}, we have z ⊩ □C_k ∧ (E_k ⊲ A_{k-1}) ∧ (E_k ⊲ Z_{k-1}).

Figure 1: From left to right we have depicted F_0 to F_2. Since F_{k+1} implies F_k, we have only depicted the content of F_{k+1} which is new w.r.t. F_k. As such we should read the pictures as: "if all un-dashed relations are as in the picture, then also the dashed relation should be present".

Proof. We prove the claim by induction on k. With the convention that A_{-1} ≡ Z_{-1} ≡ ⊤, the lemma is trivial for k = 0. So assume k > 0. Let ⊩ be a forcing relation and take x, y and z as in the statement of the lemma. Take an arbitrary u ∈ W with zRu. By (7) we have yS_x u and thus by (5) we have u ⊩ C_k. This shows z ⊩ □C_k. To show that also the other two conjuncts hold at z, assume that u ⊩ E_k. By (4) we find some v with uS_x v and v ⊩ Y_{k-1}.
In order to show z ⊩ E_k ⊲ A_{k-1} we have to find some a with uS_z a ⊩ A_{k-1}. We note that Y_{k-1} implies ♦A_{k-1}, thus there exists some a with vRa ⊩ A_{k-1}. By (7) we have G_{2k-1}(z, u, v) and thus uS_z a.
In order to show that also z ⊩ E_k ⊲ Z_{k-1} we have to find some b with uS_z b ⊩ Z_{k-1}. We just used that Y_{k-1} implies ♦A_{k-1}, but we note that Y_{k-1} implies the stronger statement ¬(A_{k-1} ⊲ ¬C_{k-1}). Thus there exists some a with a ⊩ A_{k-1} and vR^{C_{k-1}} a.
As above, by (7) we have G_{2k-1}(z, u, v) and thus uS_z a and zRa. By (6) there is some b with aS_z b. Since uS_z a, whence also uS_z b, holds, we will be done if we show that b ⊩ Z_{k-1}.
To show that the remaining conjuncts of Z_{k-1} hold at b, we note that we have G_{2(k-1)}(v, a, b) and use (8), (9) and (10) to invoke the (IH) on v, a and b.
Corollary 3.4. If a Veltman frame validates F_{2k}, then it validates R_k.

Proof. Fix a forcing relation ⊩ and let w, x ∈ W be such that w ⊩ X_k and wRx ⊩ Y_k. Then for some y we have xR^{C_k} y ⊩ A_k. Thus there exists z with yS_w z and z ⊩ B_k ∧ □X_{k-1} (11). By Lemma 3.3 we moreover have z ⊩ □C_k ∧ (E_k ⊲ A_{k-1}) ∧ (E_k ⊲ Z_{k-1}) (12). Combining (11) and (12) gives z ⊩ Z_k.
The reversal of this corollary is again preceded by a technical lemma. We shall denote by a_k, b_k, c_k and e_k propositional variables that shall play the role of the A_k, B_k, C_k and E_k respectively in the principles R_n. Likewise, by X_k we shall denote the formula that arises by substituting a_j for A_j and b_j for B_j in X_k. The formulas Y_k and Z_k are defined similarly.

Lemma 3.5. For any k ≥ 0 and all x, y, z ∈ W: if for all forcing relations ⊩ for which x ⊩ Y_k, xR^{c_k} y and z ⊩ □X_{k-1} we have z ⊩ □c_k ∧ (e_k ⊲ a_{k-1}) ∧ (e_k ⊲ Z_{k-1}), then G_{2k}(x, y, z).

Proof. Induction on k. Let x, y, z ∈ W and assume the conditions of the lemma. Unfolding the definition of G_{2k}(x, y, z) shows us that we have to show that

1. for all u with zRu we have yS_x u (k ≥ 0);
2. and for all v and a with uS_x v and vRa we have uS_z a (k > 0);
3. and for all b with aS_z b we have G_{2(k-1)}(v, a, b) (k > 0).
We will show 1 and 2 'by hand' and invoke the (IH) for 3. In each of the three cases we will choose similar but different forcing relations ⊩. We first show 1. So let zRu. Define w ⊩ c_k :⇔ yS_x w and w ⊩ a_k :⇔ w = y.
And let all the other variables be false everywhere. Then xR^{c_k} y and x ⊩ ¬(a_k ⊲ ¬c_k). Since none of the e_i nor a_j with j ≠ k holds anywhere in the model, we trivially have x ⊩ Y_k and z ⊩ □X_{k-1}, and thus, according to the conditions of the lemma, in particular z ⊩ □c_k. By definition of ⊩ we thus have yS_x u, which proves 1. Note that for k = 0 we only have to look after 1, hence we have now dealt with the base case of our induction. Now we continue to show 2, assuming k > 0. Choose any v and a with uS_x v and vRa. As above, define w ⊩ c_k :⇔ yS_x w and w ⊩ a_k :⇔ w = y.
We now also define w ⊩ e_k :⇔ w = u and w ⊩ a_{k-1} :⇔ w = a. Let all the other propositional variables be false everywhere. Now v ⊩ Y_{k-1} and thus x ⊩ Y_k. It is not hard to see that we also have z ⊩ □X_{k-1}, and thus according to the condition of the lemma we have in particular z ⊩ e_k ⊲ a_{k-1}.
Since zRu ⊩ e_k, there must be an a′ with uS_z a′ ⊩ a_{k-1}. Since a is the only world that forces a_{k-1}, we must have uS_z a.
To finish and show 3, choose b such that aS_z b. We want to show that G_{2(k-1)}(v, a, b). Invoking the (IH), it is enough to show that for any forcing relation ⊩ for which v ⊩ Y_{k-1}, vR^{c_{k-1}} a and b ⊩ □X_{k-2}, we have b ⊩ □c_{k-1} ∧ (e_{k-1} ⊲ a_{k-2}) ∧ (e_{k-1} ⊲ Z_{k-2}). (14)

Our strategy in proving this is as follows. We slightly tweak ⊩ to obtain ⊩′. This ⊩′ is similar to ⊩ in that (13) still holds. However, it is (possibly) different in that we now know that x ⊩′ Y_k, xR^{c_k} y w.r.t. ⊩′, and z ⊩′ □X_{k-1}, so that we may apply the main assumption of the lemma to ⊩′, concluding z ⊩′ □c_k ∧ (e_k ⊲ a_{k-1}) ∧ (e_k ⊲ Z_{k-1}). The latter will help us conclude (14).
Thus we consider an arbitrary forcing relation ⊩ that satisfies (13). We modify ⊩ to obtain ⊩′; apart from the modifications, ⊩′ will coincide with ⊩. It is a straightforward check to see that we have (13) for ⊩′ and that moreover (15) holds. In addition, by the definition of ⊩′ we now also have x ⊩′ Y_k, xR^{c_k} y w.r.t. ⊩′, and z ⊩′ □X_{k-1}.
Thus, we see that ⊩′ satisfies the antecedent of the condition of the lemma. Consequently, we have z ⊩′ e_k ⊲ Z_{k-1}. Since zRu ⊩′ e_k, there must exist some b′ with uS_z b′ ⊩′ Z_{k-1}. By (15) the same holds for ⊩, and we are done.
Putting this all together gives us the frame correspondence for R k .
Theorem 3.6. For any Veltman frame F and any natural number k we have F |= F_{2k} ⇔ F |= R_k, and F |= R_k is in turn equivalent to the validity on F of the corresponding principle of the original series.

Proof. The second equivalence is a direct consequence of Lemma 3.2, so we focus on the first equivalence. The ⇒ direction is just Corollary 3.4. For the other direction, fix some k, assume that F |= R_k and let wRxRyS_w z. We have to show G_{2k}(x, y, z). Now consider any forcing relation ⊩ that satisfies xR^{c_k} y, x ⊩ Y_k and z ⊩ □X_{k-1}. By Lemma 3.5 it is enough to show that z ⊩ □c_k ∧ (e_k ⊲ a_{k-1}) ∧ (e_k ⊲ Z_{k-1}). (17)

Now consider a forcing relation ⊩′ which is like ⊩ except that a_k is forced only at y and b_k only at z. Notice that xR^{c_k} y w.r.t. ⊩′ and thus also x ⊩′ Y_k. But now we have w ⊩′ X_k as well, and thus w ⊩′ Y_k ⊲ Z_k. Thus there must be some z′ with xS_w z′ ⊩′ Z_k. Since b_k is a conjunct of Z_k and z is the only world where b_k is forced, we must have z′ = z, whence z ⊩′ Z_k. Since □c_k ∧ (e_k ⊲ a_{k-1}) ∧ (e_k ⊲ Z_{k-1}) does not involve a_k nor b_k, we have (17).

Arithmetical soundness
Via a series of lemmata we shall prove Theorem 3.7, to the effect that the hierarchy {R_i}_{i∈ω} is arithmetically sound in any reasonable arithmetical theory. It is sufficient to prove that each of the R_{2m} is arithmetically sound in any reasonable arithmetical theory, whence we shall focus on the principles R_i. We shall first exhibit a soundness proof of R_1 and then indicate how this is generalized to the rest of the hierarchy. Before proving the arithmetical soundness of R_1 we first need to prove some auxiliary lemmas.
Lemma 3.8. Let T be any theory extending S^1_2. We have, for any arithmetical sentences E_1, A_0, B_0 and C_0, that

Proof. Reason in T and assume
as was to be shown.

Lemma 3.9. Let T be any theory extending S^1_2. We have, for any arithmetical sentences E_1, A_0, B_0 and C_0, that

Proof. Reason in T. From the assumption we get in particular that

Lemma 3.10. Let T be any theory extending S^1_2. Then, for any arithmetical sentences E_1, A_0, B_0 and C_0, we have that

Proof. Reasoning in T, we get from

With these technical lemmas we can prove the soundness of R_1.
Lemma 3.11. Let T be any theory extending S^1_2. We have, for any arithmetical sentences E_1, A_1, B_1, A_0, B_0 and C_0, that

Proof. We reason in T. To begin, we observe that by Pudlák's Lemma we have a particular cut K for which (18) holds for any σ ∈ Σ_1. Next, using our new technical lemma and Lemma 2.6 we get

The last step is due to the principle of outside big, inside small (Lemma 2.4) and allows us to conclude

We can combine this with the particular cut K from (18). (Note that the formula in question is equivalent to an ∃Σ^b_1 sentence relativized to K.) Our technical Lemmas 3.9 and 3.10 tell us that

and we are done.
The soundness proofs for the R_k are not essentially different. We shall indicate where the soundness proof for R_1 needs to be modified, and begin with modifications of the technical lemmas.
However, first we must inductively define a series of important formulas. In our definition we work with more variables than actually needed. However, we have chosen to do so since our variables can be interpreted as numbers or as formulas and we wish to avoid expressions like ∀ Cut J ∃J∈J φ.
It is easy to see that for each k > 0 the formula H_k is an ∃Σ^b_1 formula. The next lemmas show us that the H_{k+1} are ∃Σ^b_1 consequences of the Σ_3 statements E_{k+1} ⊲ Y_k, which contain all the essential information for proving soundness. First we prove a simple modification of Lemma 3.8.
Lemma 3.12. Let T be any theory extending S^1_2. For any arithmetical sentences E_1, A_0, B_0 and C_0 and for any ∃Σ^b_1 formula σ we have

Proof. We repeat the proof of Lemma 3.8. Note that, by our reading conventions, the antecedent is to be read as a conjunction. We reason in T and see that

As before, this final equation implies

With the following lemma we see that the H_{k+1} are an ∃Σ^b_1 encoding of information present in E_{k+1} ⊲ Y_k.

Lemma 3.13. Let T be a theory containing S^1_2 and let the formulas E_i, A_i and C_i be arbitrary. For any number k we have that

Proof. By an external induction on k. For k = 0 this is simply Lemma 3.8. For the inductive case we reason in T and see that

By the inductive hypothesis we have that E_{k+1} ⊲ Y_k → H_{k+1}, so that

Since H_{k+1} is equivalent to an ∃Σ^b_1 formula, by Lemma 3.12 we see that

as was to be shown.
Moreover, the H k+1 formulas contain all the information to get the induction going as shown by the following lemma.
Lemma 3.14. Let T be a theory containing S^1_2 and let the formulas E_i, A_i, B_i and C_i be arbitrary. For any number k we have that

Proof. By induction on k, where the case k = 0 is just Lemma 3.10. For the inductive case, we reason in T and assume □(X_{k+1}) ∧ H_{k+2}.
From the definition of H_{k+2} we get (19). From X_{k+1} (which is by definition equal to A_{k+1} ⊲ (B_{k+1} ∧ □X_k)) we find via Pudlák's lemma, our Lemma 2.2, a specific cut K_{k+2} such that for any formula σ in Σ_1 we obtain A_{k+1} ∧ σ^{K_{k+2}} ⊲ B_{k+1} ∧ □X_k ∧ σ. We can plug this cut K_{k+2} into (19) to obtain, via transitivity of ⊲, that E_{k+2} ⊲ B_{k+1} ∧ □X_k ∧ □C_{k+1} ∧ H_{k+1}. We are almost done, but B_{k+1} ∧ □X_k ∧ □C_{k+1} ∧ H_{k+1} is not quite equal to Z_{k+1} as was needed. The missing conjuncts are E_{k+1} ⊲ A_k and E_{k+1} ⊲ Z_k. The first is easily seen to follow from H_{k+1}, and the second follows from the inductive hypothesis applied to □X_k ∧ H_{k+1}.
We are now ready to prove Theorem 3.7 that the whole hierarchy is arithmetically sound.
Theorem 3.15. Let T be a theory containing S 1 2 and let A i , B i , C i and E i be arbitrary arithmetical formulas. We have for each number k that

Proof. By an external induction on k, where the base case is the soundness of R 0 which has been proven in [4]. Thus, we reason in T assuming A k+1 ⊲ B k+1 ∧ (X k ). We need to conclude that Y k+1 ⊲ Z k+1 . But Y k+1 is nothing but ¬(A k+1 ⊲ ¬C k+1 ) ∧ (E k+1 ⊲ Y k ). By Lemma 3.13 we know that (E k+1 ⊲ Y k ) → H k+1 . Using this and reasoning as before we obtain

This can be combined with Pudlák's Lemma on
It is easy to see that H k+1 implies E k+1 ⊲ A k . Moreover, Lemma 3.14 tells us that (X k ) ∧ H k+1 → E k+1 ⊲ Z k , so that we may conclude

as was to be shown.

A broad series of principles
In this section we present a different series of principles. We refer to this series as the broad series since its frame conditions (see Figure 2) are typically represented over a broader area than those of the slim hierarchy discussed above.

A broad series
In order to define the second series we first define a series of auxiliary formulas. For any n ≥ 1 we define the schemata U n as follows:

U 1 := ♦¬(D 1 ⊲ ¬C),
U n+1 := ♦((D n ⊲ D n+1 ) ∧ U n ).

Now, for n ≥ 0 we define the schemata R n as follows: R 0 is the principle R from [4], and for n ≥ 1,

R n := A ⊲ B → (U n ∧ (D n ⊲ A)) ⊲ B ∧ □C.
As an illustration we shall calculate the first four principles.
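To see how the schemata unfold, the following small sketch generates the U n and R n as formula strings, following the recursion U 1 = ♦¬(D 1 ⊲ ¬C), U n+1 = ♦((D n ⊲ D n+1 ) ∧ U n ) used in the soundness proofs below; the ASCII rendering (`|>` for ⊲, `<>` for ♦, `[]` for □, `~` for ¬) is our own convention and not part of the paper's notation.

```python
# Unfold the auxiliary schemata U_n and the principles R_n as strings.
# ASCII stand-ins: |> for the interpretability modality, <> for diamond,
# [] for box, ~ for negation, & for conjunction.

def U(n: int) -> str:
    """U_1 = <>~(D1 |> ~C);  U_{n+1} = <>((D_n |> D_{n+1}) & U_n)."""
    if n == 1:
        return "<>~(D1 |> ~C)"
    return f"<>((D{n-1} |> D{n}) & {U(n-1)})"

def R(n: int) -> str:
    """R_n = A |> B -> (U_n & (D_n |> A)) |> (B & []C), for n >= 1."""
    return f"A |> B -> ({U(n)} & (D{n} |> A)) |> (B & []C)"

# Print the first four principles of the broad series.
for n in range(1, 5):
    print(R(n))
```

For example, R(1) comes out as `A |> B -> (<>~(D1 |> ~C) & (D1 |> A)) |> (B & []C)`, and each R(n+1) nests one more ♦-conjunct inside the antecedent of the second ⊲.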
While the slim series did define a hierarchy, in that R i+1 ⊢ R i , we shall see that no such relation holds for the broad series defined in this section.

Frame conditions
It is not hard to determine the frame condition for the first couple of principles in this series, and in Figure 2 we have depicted the first three frame conditions. In this section we shall prove that the correspondence proceeds as expected. Informally, the frame condition for R n shall be the universal closure of (20). In order to make this frame condition precise and prove it, we shall first recast it in a recursive fashion. In writing (20) recursively we shall use variables that emphasize the relation with (20). Of course, free variables can be renamed at the reader's liking.
First, we start by introducing a relation B n that captures the antecedent of (20). Note that this antecedent says that first there is a chain of points x i related by R, followed by a chain of points y i related by different S relations.
B 0 (x 1 , x 0 , y 0 , y 1 ) := x 1 Rx 0 Ry 0 S x1 y 1 ,
B n+1 (x n+2 , x 0 , y 0 , y n+2 ) := ∃x n+1 ∃y n+1 (x n+2 Rx n+1 ∧ B n (x n+1 , x 0 , y 0 , y n+1 ) ∧ y n+1 S xn+2 y n+2 ).

For every n ≥ 0 we can now define the first order frame condition F n as follows:

F n := ∀ x n+1 , x 0 , y 0 , y n+1 , u (B n (x n+1 , x 0 , y 0 , y n+1 ) ∧ y n+1 Ru → y 0 S x0 u).
Sometimes we shall write x n+1 B n [x 0 , y 0 ] y n+1 , conceiving of the quaternary relation B n as a binary relation indexed by the pair x 0 , y 0 . In what follows we let F = ⟨W, R, S⟩ be an arbitrary Veltman frame. The next lemma follows from an easy induction on n. To prove that F |= F n implies F |= R n we first need a technical lemma.
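To make the recursive definition of B n and the condition F n concrete, here is a brute-force sketch for finite frames. The encoding of a frame as a triple (W, R, S), with S a dictionary assigning to each world w the set of pairs of its relation S w , is our own illustrative choice.

```python
from itertools import product

# Brute-force check of B_n and F_n on a finite frame, following the recursion
#   B_0(x1, x0, y0, y1)     iff  x1 R x0 R y0 and y0 S_{x1} y1,
#   B_{n+1}(x', x0, y0, y') iff  there are x1, y1 with x' R x1,
#                                B_n(x1, x0, y0, y1) and y1 S_{x'} y'.
# A frame is (W, R, S): W a set of worlds, R a set of pairs,
# S a dict mapping a world w to the set of pairs of S_w.

def B(n, frame, x_top, x0, y0, y_top):
    W, R, S = frame
    if n == 0:
        return (x_top, x0) in R and (x0, y0) in R and (y0, y_top) in S.get(x_top, set())
    return any((x_top, x1) in R
               and B(n - 1, frame, x1, x0, y0, y1)
               and (y1, y_top) in S.get(x_top, set())
               for x1, y1 in product(W, repeat=2))

def F(n, frame):
    # F_n: whenever B_n(xt, x0, y0, yt) and yt R u, demand y0 S_{x0} u.
    W, R, S = frame
    return all(not (B(n, frame, xt, x0, y0, yt) and (yt, u) in R)
               or (y0, u) in S.get(x0, set())
               for xt, x0, y0, yt, u in product(W, repeat=5))

W = {0, 1, 2, 3, 4}
R = {(3, 1), (1, 0)}    # the chain 3 R 1 R 0
S = {3: {(0, 2)}}       # 0 S_3 2, so B_0(3, 1, 0, 2) holds
print(F(0, (W, R, S)))                               # True: 2 has no R-successor
print(F(0, (W, R | {(2, 4)}, S)))                    # False: 0 S_1 4 is missing
print(F(0, (W, R | {(2, 4)}, {**S, 1: {(0, 4)}})))   # True again
```

The last two lines illustrate the content of F 0 : once y 1 (here the world 2) acquires an R-successor u, the condition forces y 0 S x0 u, which in the toy frame must be added to S 1 by hand.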
Lemma 4.2. Let w ∈ W and let ⊩ be a forcing relation on F . If

then there exist x 0 , y 0 and y k+1 such that B k (x k+1 , x 0 , y 0 , y k+1 ), x 0 R C y 0 and y k+1 ⊩ A.
Proof. Induction on k. If k = 0 then U k+1 = ♦¬(D 1 ⊲ ¬C) and the statement is easily checked. For the inductive case, we assume

Recall that U k+2 := ♦((D k+1 ⊲ D k+2 ) ∧ U k+1 ). Thus, there exists some x k+1 with x k+2 Rx k+1 and

Applying the (IH) (with D k+2 substituted for A) we find x 0 , y 0 and y k+1 with B k (x k+1 , x 0 , y 0 , y k+1 ), x 0 R C y 0 and y k+1 ⊩ D k+2 . As B k (x k+1 , x 0 , y 0 , y k+1 ), we get x k+1 Ry k+1 (Lemma 4.1). Since we had x k+2 Rx k+1 we see that x k+2 Ry k+1 ⊩ D k+2 , and since x k+2 ⊩ D k+2 ⊲ A, we find some y k+2 with y k+1 S xk+2 y k+2 and y k+2 ⊩ A. By definition of B k+1 we have B k+1 (x k+2 , x 0 , y 0 , y k+2 ).

Corollary 4.3. If F |= F n then F |= R n .

Proof. Induction on n. For n = 0 this is known (see [4]), so we assume n > 0. Let ⊩ be a forcing relation, let x n+1 , x n ∈ W and assume x n+1 ⊩ A ⊲ B, x n+1 Rx n and x n ⊩ U n ∧ (D n ⊲ A). By Lemma 4.2 we find x 0 , y 0 and y n such that B n−1 (x n , x 0 , y 0 , y n ), with x 0 R C y 0 and y n ⊩ A. We have that B n−1 (x n , x 0 , y 0 , y n ) implies x n Ry n (Lemma 4.1) and thus, since x n+1 Rx n , we also have x n+1 Ry n ⊩ A. By assumption x n+1 ⊩ A ⊲ B, so that for some y n+1 we have y n S xn+1 y n+1 ⊩ B. Clearly, we also have x n S xn+1 y n+1 , so that we are done once we have shown that y n+1 ⊩ □C. To this purpose, we choose some u with y n+1 Ru. Since we have that B n (x n+1 , x 0 , y 0 , y n+1 ), by F n we also have y 0 S x0 u. But x 0 R C y 0 and thus we have u ⊩ C, as required.
To prove the converse implication, we start again with a technical lemma. As before, we shall denote by a, b, c, and d k propositional variables that play the role of the A, B, C and D k respectively in the principles R n . Let U k denote the formula that arises by simultaneously substituting c for C and d k for D k in U k .

Lemma 4.4. Suppose B k (x k+1 , x 0 , y 0 , y k+1 ). Then there is a forcing relation ⊩ on F such that
1. x k+1 ⊩ U k+1 ∧ (d k+1 ⊲ a);
2. x ⊩ c iff y 0 S x0 x;
3. x ⊩ a ⇔ x = y k+1 ;
4. x ⊮ p for any p ∉ {d 1 , . . . , d k+1 , c, a}.
Proof. The idea is very simple, using the informal description of B k as the antecedent of (20). We define a valuation so that d i+1 is only true at y i and a is only true at y k+1 . Moreover, we define x ⊩ c iff y 0 S x0 x, and x ⊮ p for any p ∉ {d 1 , . . . , d k+1 , c, a}. It is not hard to see that x k+1 ⊩ U k+1 ∧ (d k+1 ⊲ a) for this valuation ⊩.
To make the argument precise, we proceed by induction on k. If k = 0 then B k (x 1 , x 0 , y 0 , y 1 ) simply means x 1 Rx 0 Ry 0 S x1 y 1 and we define

The lemma is easily checked if we further define x ⊮ p for any p ∉ {d 1 , c, a}. For the inductive case we consider k > 0. Then B k (x k+1 , x 0 , y 0 , y k+1 ) implies that there are x k and y k such that

The (IH) (with d k+1 substituted for a) gives a forcing relation ⊩ such that

in other words, x k+1 ⊩ U k+1 . We now define ⊩′ as follows: x ⊩′ a ⇔ x = y k+1 and x ⊩′ p ⇔ x ⊩ p for p ≠ a.
Clearly, the properties x k ⊩ U k ∧ (d k ⊲ d k+1 ), aR c b, and x ⊩ d k+1 ⇔ x = y k simply carry over to ⊩′ , and likewise we have that x ⊮′ p for any p ∉ {d 1 , . . . , d k+1 , c, a}. Moreover, we now have x k+1 ⊩′ d k+1 ⊲ a as well.
As a corollary to this lemma, we can now obtain the full frame conditions for the principles R n .

Theorem 4.5. For each number n we have F |= F n iff F |= R n .
Proof. The ⇒ direction is just Corollary 4.3, so we focus on the other direction. Thus, we suppose that F |= R n , consider any x n+1 , x 0 , y 0 , y n+1 ∈ W with B n (x n+1 , x 0 , y 0 , y n+1 ), and set out to show that for any u with y n+1 Ru we have y 0 S x0 u. We now apply Lemma 4.4 and simultaneously substitute a for d n+1 and b for a to see that there exists a forcing relation ⊩ such that

Since n = 0 is known, we assume n > 0. Thus, we find x n with x n+1 Rx n and x n ⊩ U n−1 ∧ (d n ⊲ a) (note that U n−1 [d n+1 /a] = U n−1 ). Using F |= R n we see that there must exist some x with x ⊩ b ∧ □c. But y n+1 is the only world that forces b, thus necessarily y n+1 ⊩ □c. By the choice of ⊩ we thus have that if y n+1 Ru then y 0 S x0 u.
Using the frame condition we readily see that the broad series of principles does not define a hierarchy.

Proof. For each m ≠ n it is easy to exhibit a frame F so that F |= F n but F ⊭ F m .

Arithmetical soundness
We will now see that all the principles R n are arithmetically sound and begin with a simple lemma.
Lemma 4.7. For any theory T extending S 1 2 and any natural number n > 0, we have that

Proof. We proceed by induction on n and first consider n = 1. Thus, we reason in T and assume U 1 , that is, ♦¬(D 1 ⊲ ¬C). We conclude by Lemma 2.6 that

as was to be shown. Next, we consider the inductive case, again reasoning in T and assuming U n+1 , which is ♦((D n ⊲ D n+1 ) ∧ U n ). Reasoning inside the ♦, by the (IH) we conclude from U n that ∀ Cut J ♦ (D n ∧ J C).
By Lemma 2.8 we obtain from D n ⊲ D n+1 that

Combining (21) and (22) under a ♦, we conclude that

as was to be shown.
With this lemma, we can now prove the soundness of the series R n .
Theorem 4.8. For each natural number n we have that R n is arithmetically sound in any theory T extending S 1 2 .
Proof. Since we already know that R 0 is sound, we consider n > 0. We reason in T , assume A ⊲ B and set out to prove (U n ∧ (D n ⊲ A)) ⊲ B ∧ □C. By Pudlák's Lemma we get

On the other hand, by the generalization of Pudlák's Lemma (Lemma 2.8) applied to D n ⊲ A we obtain that ∀ Cut J ∃ Cut K D n ∧ K C ⊲ A ∧ J C, so that

By Lemma 4.7 we see that U n → ∀ Cut K ♦(A ∧ K C). Combining these last two observations, we see that

Combining this with (23) yields (U n ∧ (D n ⊲ A)) ⊲ B ∧ □C, as was to be shown.

On the core interpretability logic IL(All)
Apart from the principles mentioned earlier in this paper, the literature has considered various other principles too. Some of those are

In [13], IL(All) was conjectured to be ILW. In [15] this conjecture was falsified and replaced by a stronger one, namely that ILW * , which is a proper extension of ILW, is IL(All). In [8] it was proven that the logic ILW * P 0 is a proper extension of ILW * , and that ILW * P 0 is a subsystem of IL(All) (we write ILW * P 0 instead of IL{W * , P 0 }). This falsified the conjecture from [15]. In [8] it is also conjectured that ILW * P 0 is not the same as IL(All).
In [7] it is conjectured that ILW * P 0 = IL(All), and this conjecture was refuted in [4] by proving that the logic ILRW is a subsystem of IL(All) and a proper extension of ILW * P 0 .
It is easy to see that A ⊲ ♦B → □(A ⊲ ♦B) ∈ ILP ∩ ILM. In [16] it was shown however that A ⊲ ♦B → □(A ⊲ ♦B) ∉ IL(All), thereby lowering the upper bound IL(All) ⊆ ILP ∩ ILM. Since A ⊲ ♦B → □(A ⊲ ♦B) is reminiscent of the modally incomplete principle P 0 , we remark here that the principle A ⊲ B → ¬(A ⊲ ♦C) ⊲ B ∧ □¬C implies A ⊲ ♦B → □(A ⊲ ♦B), so that it cannot be in IL(All) either.
The current paper raises the previously known lower bound of IL(All). However, it seems unlikely that this will be the end of the story, and the two series presented here seem amenable to interactions. Just by mere inspection of the frame conditions we observe that for the slim hierarchy F n = ∀w, x, y, z (B 0 (w, x, y, z) ⇒ G n (x, y, z)), while for the broad series F n = ∀w, x, y, z (B n (w, x, y, z) ⇒ G 0 (x, y, z)),
suggesting possible interactions. For example, a combination of R 1 from the slim hierarchy and R 1 from the broad series could yield

We note that the two series presented in this paper only spoke of S relations that were imposed by the frame conditions. This suggests that a new conjecture can be formulated.
In words the conjecture is expressed as follows. First we single out the second order frame conditions that are inherent to provability and interpretability. These are the converse well-foundedness of the R relation as expressed by Löb's axiom □(□A → A) → □A, and the converse well-foundedness of R • S x as expressed by W. Over these frames, we will further impose the existence of all S x relations that are forced to be there in virtue of both the ILP and the ILM frame conditions. The logic of those frames is put forward as the new conjecture for IL(All).
To make this conjecture mathematically precise, we will introduce some notation. Let F be a class of IL-frames. By IL[F] we shall denote the interpretability logic corresponding to this class. That is,

Let F (x, y, z) denote any sentence (first or higher order) in the language with a binary relation R and infinitely many indexed binary relations S u . We now define the following class of conditions

C ILP∩S ILM := {F (x, y, z) → yS x z | ILP |= F (x, y, z) → yS x z ∧ ILM |= F (x, y, z) → yS x z}.
We wrote ILP |= F (x, y, z) → yS x z to denote that for any Veltman frame F for which F |= ILP we also have F |= F (x, y, z) → yS x z. Likewise, we speak of ILM |= F (x, y, z) → yS x z. Of course, in this context the condition F (x, y, z) → yS x z is equivalent to its universal closure. The class C ILP∩S ILM should thus capture all the S x relations that are imposed both because of ILM and of ILP frame conditions. We now define

It is easy to formulate the conjecture where the antecedent F (x, y, z) is replaced by a set of sentences rather than a single sentence, yet it seems hard to imagine that this is needed. Note that the conjecture only speaks of principles related to imposed S relations. For example, this will leave out a principle like A ⊲ B → (♦A ∧ C ⊲ B ∧ C) as formulated in [7] and whose frame condition is xRyRzS x uRv → ∃w yRwRv.
As the referee remarked, with the current paper out it seems unlikely that IL(All) will have a nice axiomatization. It may be, however, that resorting to a richer language significantly simplifies the answer. In [9] the second author and Visser looked at such a richer language, where constants for particular definable cuts were available. In the light of the current paper, we expect that studies as in [9] will gain importance.