IF logic has many global properties in common with first-order logic (section 3.2): compactness, the downward and upward Löwenheim-Skolem properties, and some variants of the interpolation theorem and of Beth’s definability theorem. In section 3.3, it is shown that IF logic and especially extended IF logic are
sufficient to express many important concepts of modern mathematics that are
beyond the grasp of classical first-order logic. It is also shown that extended IF
logic provides categorical theories of the natural numbers and the real numbers; and a reduction of some problems of extended IF logic to problems of IF logic is provided.
In section 3.4 it is shown that IF theories with sufficient arithmetical content may express their own truth predicates, thus violating Tarski’s theorem; this fact suggested the possibility of an inner development of part of model theory, of a semantical theory that need not refer to theories of sets. Then, the
possibility of a reduction of higher-order logics to extended IF logic is discussed
(section 3.5). Other themes are briefly summarized in section 3.6; among them,
the debate on the first-order nature of IF logic. Chapter Three also features
demonstrations of the non-axiomatizability of IF and extended IF logics (in
section 3.3). Section 3.1 follows Ehrenfeucht’s proof of the non-axiomatizability of the logic of branching quantifiers; while writing on that matter, it became clear that such a method cannot be applied to IF logic; our notes remain there as a memento. However, we wish to point out that the non-axiomatizability of IF logic is not a consequence of the second Lindström theorem (a wrong opinion that seems to show up here and there on the World Wide Web); since IF logic does not respect the law of the excluded middle, it is not subject to Lindström’s theorems (and indeed, it contradicts the first Lindström theorem: it is an extension of first-order logic that enjoys both the compactness and the downward Löwenheim-Skolem properties).
Chapter One presents some of the main formulations of IF semantics. Sec-
tion 1.1 introduces the syntax of IF logic together with the older semantics,
inspired by Henkin’s semantics for branching quantifiers: IF sentences are given a Σ¹₁ translation, whose truth value is established by standard second-order semantics. The section also features a survey of some of the variants of IF logic which occur in the literature. Section 1.2 presents Hintikka’s game semantics, and the equivalence of the two approaches is proved; the equivalence of IF logic and branching-quantifier logic is also clarified, and it is shown that IF logic is not just a fragment of Σ¹₁ logic but has, instead, the same expressive power. Section 1.3 presents a semantics due to Wilfrid Hodges, which assigns a meaning
also to open IF formulas and which gives a negative answer to one of Hintikka’s
claims: the assertion that no compositional semantics was definable for IF logic.
Hodges’ semantics, though quite complex, throws some light on the concept of
“independence”. The semantics was presented for a generalized IF language, and indirectly related to IF logic; a variant of Hodges’ semantics that applies to IF languages could probably be developed, but we preferred to follow the canonical path; as Hodges himself explained, an IF variant of his semantics would likely present some awkward and confusing features.
Chapter Two is devoted to the problematic and bizarre features of IF logic.
In section 2.1 an explicit example of the failure of the law of the excluded middle is shown, and it is suggested that the class of indeterminate formulas that exhibit this feature is quite vast. Then the discussion shifts to the effects of the failure of the tertium non datur on the notions of equivalence and implication. Section
2.2 introduces extended IF logic and some of its variants. Section 2.3 refers
to some papers by Theo M. V. Janssen and Francien Dechesne which pointed out some very strange features of IF logic, and which criticized both the role of IF logic as a foundational basis and its correctness in expressing notions of independence. Some of their arguments are presented and, in some cases, criticized; not all of them seem correct to us. The wrong arguments show, however, that some seemingly irrelevant changes in the semantics of IF logic may result in the loss of very important properties; for example, the resulting logic may no longer be a conservative extension of first-order logic.
The present work does not exhaust the subject of IF logic; for example, the relationships between IF logic and generalized IF logics (and other variants) still need clarification, and other semantics should be compared with those presented here, in order to classify the possible interpretations of the notions of dependence and independence. As regards the discussion of the role of IF logic in the foundations of mathematics, from reading recent works on IF logic we got the impression that, nowadays, independence-friendly logic is considered a sort of “theory of dependence” rather than a foundational logic; whatever the nice properties of IF logic may be, its deductive incompleteness is felt to be too serious a loss. Winds may change, and it is difficult to make predictions about a field of studies that is still so young.
Chapter 1
Semantics
1.1 IF Logic: meaning and some history
Independence-Friendly (IF) logic was first introduced to a wide audience in 1996, in Jaakko Hintikka’s Principles of Mathematics Revisited ([8]). It is an extension of classical first-order logic and a reformulation of Henkin’s branching quantification. From Hintikka’s viewpoint, it is first-order logic freed from some unnecessary restrictions on its expressive power; more precisely, on its capability to express relationships of dependence and independence among
variables.
Hintikka’s starting point was the simple observation that the strength of
predicate logic is not determined by the expressiveness of single quantifiers
alone, but, rather, by the possibility of nesting quantifiers in order to define
dependences among quantified variables. For example, a formula of the form
∀x∃yP[x, y]    (1.1)
states implicitly that it is possible to associate to each object assigned to x an
object y so as to satisfy P [x, y]; thus, we may intuitively assert that (1.1) entails
the second-order formula
∃f∀xP[x, f(x)]    (1.2)
and vice versa. In general, all dependences of existentially quantified variables
on universally quantified variables may be translated into a Σ¹₁ second-order formula, i.e. a first-order formula preceded by existential second-order quantifiers.
We shall investigate later the nature of this second-order translation; however,
a comparison between first-order formulas and these second-order translations
suggests that not all the functional dependencies that are expressible by means
of Σ¹₁ formulas can be asserted by means of first-order formulas. For example,
the formula
∃f∃g∀x∀yP[x, y, f(x), g(y)]    (1.3)
is not the translation of any first-order formula. This is because in first-order logic existentially quantified variables are meant to depend on all the universal quantifiers whose scope they lie in; this implies that, if (1.3) were the translation of some first-order formula, the sets of dependence variables of the functions f and g would have to be linearly ordered by inclusion. That is, we should have {x} ⊆ {y} or {y} ⊆ {x}, which is not the case. For more clarity, let us see some other formulas and their translations:
formulas and their translations:
∀x∃z∀y∃wP[x, y, z, w] ↪→ ∃f∃g∀x∀yP[x, y, f(x), g(x, y)]
∀y∃w∀x∃zP[x, y, z, w] ↪→ ∃f∃g∀x∀yP[x, y, f(x, y), g(y)]
∀x∀y∃z∃wP[x, y, z, w] ↪→ ∃f∃g∀x∀yP[x, y, f(x, y), g(x, y)]
As we can see, none of these formulas is equivalent to (1.3); and whatever
permutation we may apply to the quantifier prefix, we will not obtain that
formula (this is a consequence of theorem 3.3.2). Notice that, while all the
previous formulas express something weaker than (1.3), this is not always the
case. For example, the ordinary first-order formula
∀x∃z∀y∃wP[x, y, z, w] ∧ ∀x∀x′∀z∀z′∀y∀w∀w′((P[x, y, z, w] ∧ P[x′, y, z′, w′]) → w = w′)    (1.4)

says something stronger than (1.3). It asserts what (1.3) itself asserts, and furthermore it states that we must choose the value of w in accordance with a certain function (which has the same role as the function g mentioned in (1.3)); (1.3) only states the existence of an adequate function, not its uniqueness.
Hintikka ascribes these limitations in the expressiveness of ordinary first-
order languages to what he calls “Frege’s fallacy”: the inability to distinguish
between two different meanings of the scope of a quantifier. One is the priority
scope: when evaluating the truth or falsity of a sentence, this kind of scope
is what determines the order in which semantic rules are applied. The second
may instead be called the binding scope, and it determines the patterns of
dependences between variables. In Frege’s and modern notation there are no
means to differentiate the two. As we have seen, patterns of priority scope
are limited by some restrictions: being related to evaluations of truth, priority
scopes must occur nested one within another (and this is why we can express this kind of scope using parentheses). There is no reason, instead, to limit the
range of application of binding scopes. We should be able to allow disconnected
binding scopes, and it should be possible for a quantifier to impose dependences
on variables lying outside its priority scope. Henkin, Hintikka and others have
developed notational devices that should allow such greater freedom; however,
the great variety of solutions proposed may be puzzling. It is not always clear whether all the different systems have the same expressive power. In the following
paragraphs, I will try to provide an overview of the main variants.
Hintikka’s temporary notations To show which parts of a formula are
influenced by the binding scope of a quantifier we may enclose such syntagms
in indexed parentheses; for example, formula (1.3) may be rewritten as
∀x (∃y ∀z)_{∀x} ((∃w) (P[x, y, z, w])_{∀x})_{∀z}.    (1.5)
It is immediately clear that this notation is heavy and hardly readable.
Furthermore, it may be a bit ambiguous. Hintikka never explains how we should
read the priority scope from such formulas; probably, unindexed parentheses are
needed to handle formulas such as ∀x((P[x, …])_{∀x} ∨ R[x, …]). Also, observe the syntagm ∀x (∃y ∀z)_{∀x}: it states that ∀x binds y and (perhaps) z; what do we mean when we write (P[x, y, z, w])_{∀x}? If P[x, y, z, w] is an atomic formula,
the subscripts are probably superfluous. Instead, if P [x, y, z, w] has the form
∃vQ[x, y, z, w, v] we are supposed to think that the quantifier ∃v is bound by
∀x; next, we should say what effect the quantifier ∀x has on Q[x, y, z, w, v],
and so on. Otherwise, if P [x, y, z, w] is a disjunction or a conjunction, we
should assert that ∨ (resp. ∧) is dependent on x; that is, that we need to
know the value of x to make out which disjunct (resp. conjunct) we can verify
(resp. falsify). This will be clearer after we discuss the semantics of IF logic.
In conclusion, the expression (P[x, y, z, w])_{∀x} states that all the quantifiers and binary connectives in P[x, y, z, w] are bound by ∀x; when we write down
explicitly the formula P [x, y, z, w] we could simply enclose in parentheses its
quantifiers and connectives, since binding scope has no effect on atomic formulas.
This leads us naturally to the simplified notations adopted by Hintikka in the
Principles.
Hintikka’s IF notation In his Principles of Mathematics Revisited, Hintikka
adopted the following language: that of first-order logic (based on connectives
∨, ∧ and ∼) plus the existential quantifiers (∃y/∀x₁, …, ∀xₙ), “there is a y independent of ∀x₁, …, ∀xₙ”, and disjunctions of the form (∨/∀x₁, …, ∀xₙ). More formally:
Def 1.1.1 A string is an IF sentence if and only if it is a sentence of first-
order logic (using only the binary connectives ∨ and ∧, and the negation ∼) in
negation normal form (that is, negations occur only in front of atomic formulas)
or if it is obtained from a negation normal first-order sentence by substitutions
of the following kinds:
1) (∃x/∀y₁, …, ∀yₙ) replacing ∃x, under the condition that ∃x occurs within the scope of ∀y₁, …, ∀yₙ;
2) (∨/∀y₁, …, ∀yₙ) replacing ∨, under the condition that ∨ occurs within the scope of ∀y₁, …, ∀yₙ.
An IF formula is defined analogously from first-order formulas, without the restricting conditions “occurs within the scope of ∀y₁, …, ∀yₙ”.
The substitutions are limited by the condition “… occurs within the scope of ∀y₁, …, ∀yₙ” just for technical reasons; if we allowed unrestricted formulas, it would not be possible to express the semantics of the new symbols smoothly.
In the present work, I will adopt this notion of formula, with some slight
variations. In the slashed tokens I will often write only the variables, and not the universal quantifiers, as has been done in the literature:

(∃x/y₁, …, yₙ)    (∨/y₁, …, yₙ)
This does not seem to limit the expressiveness of our language. Furthermore, I
will omit some parentheses if they are an obstacle to readability; in some cases,
slashed variables will be written as subscripts. Adopting this language, we can
now rewrite (1.3) as an IF formula:
(∀x)(∃z)(∀y)(∃w/∀x)P[x, y, z, w]    (1.6)

or, shortly,

∀x∃z∀y∃w/x P[x, y, z, w].    (1.7)

The expression P[x, y, z, w] is, of course, an informal writing which denotes a formula whose free variables are among x, y, z, w; in the following, we may also write P(x, y, z, w) when it does not harm readability. In theoretical
discussions, I shall often write
∃x/V
adopting metalinguistic variables such as V and W to abbreviate lists of indi-
vidual variables. The intended meaning of the new quantifiers and connectives is
the following: supposing that subformulas such as (∃x/y₁, …, yₘ)P[x, z₁, …, zₙ], resp. Q[z₁, …, zₙ](∨/y₁, …, yₘ)R[z₁, …, zₙ], occur within the scope of quantifiers ∀z₁, …, ∀zₙ (which include ∀y₁, …, ∀yₘ), we assert that, in order to translate the whole formula into a Σ¹₁ formula, we can make the following substitutions:

(∃x/y₁, …, yₘ)P[x, z₁, …, zₙ] ↪→ ∃f∀w₁, …, ∀wₛ P∗[f(w₁, …, wₛ), z₁, …, zₙ]

Q[z₁, …, zₙ](∨/y₁, …, yₘ)R[z₁, …, zₙ] ↪→ ∃f∀w₁, …, ∀wₛ((f(w₁, …, wₛ) = 0 ∧ Q∗[z₁, …, zₙ]) ∨ (f(w₁, …, wₛ) ≠ 0 ∧ R∗[z₁, …, zₙ]))

where {w₁, …, wₛ} = {z₁, …, zₙ} \ {y₁, …, yₘ}, 0 is some fixed individual constant, and P∗, Q∗, R∗ are the Σ¹₁ translations of P, Q, R.
(As we can see, to explain these meanings we had to refer to the context in
which a subformula occurs).
Thus, the disjunction Q[z₁, …, zₙ](∨/y₁, …, yₘ)R[z₁, …, zₙ] (occurring in a context like that just described) asserts that the choice between Q and R is made according to the values of a function f defined on {z₁, …, zₙ} \ {y₁, …, yₘ}. In
the next section, a game-theoretic semantics will be introduced; in that context,
we will be able to identify this function f with a strategy function of a certain
semantic game.
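As a quick illustration of how the substitution rules operate (a sketch using only the rules just stated): in (1.7), the unslashed ∃z lies within the scope of ∀x alone, so it becomes a function of x, while (∃w/x) lies within the scope of ∀x and ∀y and the slash removes x, leaving a function of y alone:

∀x∃z∀y∃w/x P[x, y, z, w] ↪→ ∃f∃g∀x∀y P[x, y, f(x), g(y)],

which is exactly (1.3).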
We say that an IF sentence is true if its Σ¹₁ translation is true according to standard second-order semantics, and non-true if its Σ¹₁ translation is false according to standard second-order semantics. The game-theoretical semantics will allow the definition of falsity, a concept distinct from non-truth.
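To make this truth definition concrete, here is a minimal sketch in Python (not from the text: the three-element domain and the particular interpretation of P are invented for the example). It decides the truth of (1.7) on a finite structure by brute-forcing the Skolem functions f and g of the Σ¹₁ translation (1.3):

from itertools import product

# Truth of the IF sentence  ∀x∃z∀y(∃w/x)P[x, y, z, w]  on a finite
# structure, via its translation  ∃f∃g∀x∀y P[x, y, f(x), g(y)]:
# we search over all candidate Skolem functions f and g.

DOMAIN = range(3)  # a toy three-element domain (an assumption of the example)

def P(x, y, z, w):
    # A hypothetical interpretation of the predicate P.
    return (x + z) % 3 == (y + w) % 3

def is_true():
    # A function DOMAIN -> DOMAIN is represented by the tuple of its values.
    for f in product(DOMAIN, repeat=len(DOMAIN)):
        for g in product(DOMAIN, repeat=len(DOMAIN)):
            if all(P(x, y, f[x], g[y]) for x in DOMAIN for y in DOMAIN):
                return True   # witnesses f, g found: the sentence is true
    return False              # no Skolem functions exist: the sentence is non-true

print(is_true())  # prints True: take f(x) = -x mod 3 and g(y) = -y mod 3

Falsity, in the game-theoretical sense introduced in the next section, would be checked analogously, by searching for strategy functions of the falsifying player.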
The connective (∨/y₁, …, yₘ) can be eliminated from IF languages without any loss of expressiveness. In fact, the formula (∃x/y₁, …, yₘ)((x = 0 ∧ Q[z₁, …, zₙ]) ∨ (x ≠ 0 ∧ R[z₁, …, zₙ])) has exactly the same second-order translation as Q[z₁, …, zₙ](∨/y₁, …, yₘ)R[z₁, …, zₙ]. Furthermore, the variables that occur slashed after a disjunction have no influence on the falsity of the formula (see next section); so, slashed disjunctions may be eliminated also when we consider IF logic in the light of game-theoretical semantics.
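A small worked case of this elimination (a sketch; u is a fresh variable): in the context ∀x∀y, the formula ∀x∀y(P[x] (∨/x) Q[y]) becomes ∀x∀y(∃u/x)((u = 0 ∧ P[x]) ∨ (u ≠ 0 ∧ Q[y])), and by the substitution rules above both receive the same Σ¹₁ translation:

∃f∀x∀y((f(y) = 0 ∧ P[x]) ∨ (f(y) ≠ 0 ∧ Q[y])).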
Other independent quantifiers; slashes and backslashes Now that we
have comfortable notations, we will see some examples. The reader may still wonder what exactly the notion of dependence expresses, and whether the new formulas say anything interesting from a mathematical point of view. As a matter of fact, well-known mathematical notions such as uniform convergence may not be defined by simple first-order formulas, but can be expressed by our new kind of formulas. To express notions of uniform convergence, we say things like: “for every x there is a y, and for every ε > 0 there is a δ > 0, whose choice is independent of x, such that…”. In symbols, our defining formula will have a prefix of the form ∀x∃y∀ε(∃δ/x) to express the fact that the choice of δ depends on the choice of ε but not on the value chosen for x. By what means were such notions expressed in the literature,
value chosen for x. By what means were such notions expressed in literature,
when no explicit notational device was available to assert independence? As a
matter of fact, independence was often expressed informally in words; otherwise,
it could be expressed through functional notation. We must not forget that
IF logic, at least in the form exposed in [8], is just a fragment of second-order
logic, and so it can’t express anything that is unutterable for the mathematician;
he/she/it may simply use the far stronger set-theoretic language to state the
same things. But Hintikka’s work was precisely aimed at eliminating any appeal
to set theory in the foundations of mathematics; the possibility of expressing
the same concepts by means of IF languages, that are felt by him to be nothing
more than first-order, is a progress in this direction.
Hintikka suggests, in his Principles, that some recent areas of scientific
thought are strongly imbued with Independence-Friendly notions and problems;
and that this unacknowledged fact may account for the particular difficulties
involved in those fields of research. In particular, he points to quantum mechanics and Ramsey theory, the latter being a study of combinatorial
problems related to scatterings of points in geometrical spaces. We shall not
investigate such complex themes; rather, I will present a toy example, meant
only to illustrate how an IF formula should be interpreted. I hope the reader is
not annoyed by its lack of mathematical interest. Consider this simple situation
in the Euclidean plane:
[Figure: half-lines OA, OB, OC issuing from the origin O; points x and z lie on half-lines OA and OB, and points y and w lie in the regions between them.]
Following the picture, we write OA(p) to express: “point p belongs to half-line OA (and is different from O)”; analogously for OB(p). By ∠AOC(p) we mean that point p belongs to the interior of the quarter of plane delimited by half-lines OA and OC, and so on. Qd(a, b, c, d) shall mean that the points a, b, c, d are the vertices of a quadrilateral, in clockwise order. We may then assert that:

(∀x)(∀z)((OA(x) ∧ OB(z)) → (∃y/z)(∃w/x)(∠AOC(y) ∧ ∠BOC(w) ∧ Qd(x, y, w, z)))

(In this example, “(OA(x) ∧ OB(z)) → …” is just an abbreviation for “(∼OA(x) ∨ ∼OB(z)) ∨ …”. This is a correct formula, since ∼ occurs only in front of atomic formulas.) This formula states that:
1. Given two points, x and z, belonging respectively to half-lines OA and OB, we can choose two points y and w, belonging to ∠AOC and ∠BOC respectively, such that xywz is a quadrilateral.
2. Points y and w could be chosen by two different persons, one of whom
knows which point is x but not which point is z, and the other person
knows which point is z but not which point is x.
Point 2 is what we cannot express by means of ordinary first-order logic. How-
ever, if we confine our treatment of dependences to Hintikka’s IF notation, there
are some facts that are true in our example yet seem not to be expressible in a
straightforward manner. It is clear that y could be chosen by someone who does
not know which w has been chosen, and vice versa. How could we assert this
fact? Is not an (∃y/∃w) operator needed? The IF interpretation of a formula
∀x∀z(∃y/∀z)(∃w/∀x)P[x, y, z, w]    (1.8)

is

∃f∃g∀x∀z P[x, f(x), z, g(z)].    (1.9)
If we observe (1.9) with greater care, that formula seems to assert that the choice of y is made independently of that of w, and vice versa. Otherwise, we would have written

∃f∃g∀x∀z P[x, f(x, g(z)), z, g(z)]    (1.10)

or

∃f∃g∀x∀z P[x, f(x), z, g(z, f(x))].    (1.11)

Here, the reader may object that a dependence of the form f(x, g(z)) is nothing else than a function h(x, z); hence, if we assert (1.10), we are contradicting our hypothesis that y does not depend on z. Thus, the dependency of y on ∃w implies the dependency of y on ∀z, and ∀x∀z(∃y/∀z)(∃w/∀x, ∃y)P[x, y, z, w] is intuitively equivalent to (1.8). However, look at this simpler assertion concerning our geometric example:
(∃w)(∃y/∃w)(∠AOC(y) ∧ ∠BOC(w) ∧ y ≠ w)    (1.12)
It is not clear, at first sight, if there is an equivalent to this formula in Hintikka’s
IF notation. Throughout the Principles of Mathematics Revisited, Hintikka
assumed conventionally that existential quantifiers never bind any variables,
but he never gave a precise explanation for this choice. Instead, in a more
recent article ([9], 2002) he tried to confront the problem. His answer was that IF languages, such as those presented in the previous paragraph, are sufficient
to express all analogue quantifiers such as
(∃y/∃x), (∀y/∃x), (∀y/∀x), (∀y/∃x, ∀z)
etc. That is, all formulas containing such quantifiers are equivalent (in the
context of second-order semantics) to some IF formulas; this translation to IF
language is performed by means of substitutions of equivalent formulas (sub-
stitution of equivalent subformulas is justified: see section 1.3). For example,
formula (1.12) is equivalent to
∀z(∃w/∀z)∃y(z ≠ w ∨ (∠AOC(y) ∧ ∠BOC(w) ∧ y ≠ w)).
However, even if they were correct, Hintikka’s equivalences don’t seem to be
sufficient to carry out the translation; he didn’t provide anything to eliminate
quantifiers with multiple declarations of independence, such as (∀z/∃x,∃y) and,
furthermore, his substitutions establish only weak equivalences (see section 2.1).
Quantifiers like (∀x/∀y) and (∃x/∃y) are generalizations of IF quantifiers whose meaning is quite clear in the light of the game-theoretical semantics that
will be presented in the next section; however, the fact that their Skolemizations
reduce to Skolemizations of sentences which contain only quantifiers of the form
(∃y/∀x₁, …, ∀xₙ) may be a clue that an explicit elimination is always possible.
An interesting feature of paper [9] was the introduction of new notations in
order to state explicitly both dependence and independence from quantifiers. As
before, independence is expressed by slashes (/); dependence is now expressed with a backslash (\); mixed expressions are allowed. For example, (∃z/∃y\∀z)
is a quantifier that says that there exists a z whose choice is independent of
the quantifier ∃y, but is dependent on quantifier ∀z. Using these notations it
is easy to write down sentences that assert more dependencies than what could
be done by means of ordinary first-order logic; for example, we may state that
(∃x\∃y)(∃y\∃x)P[x, y]    (1.13)
(its meaning may be that x and y have to be chosen simultaneously, each depending on the other by some law). If Hintikka were right, and all of these
expressions were expressible by means of IF notation, that would mean that IF
languages, which were thought up to express independence, have a far stronger
expressive power; this would bring us closer to the descriptive power of the tem-
porary “indexed parentheses” notation. However, in the present work we shall
always limit ourselves to IF languages, and we shall never suppose that they
have the same expressiveness as the slash-and-backslash notation.
Double-slash notation This is just a slight variant of IF languages that has sometimes occurred in the literature. The idea is that instead of stating which
variables do not bind the considered quantifier, we could write down the quan-
tifiers that are not bound by the considered quantifier. For example, instead of
(∀x)(∃y/∀x) we may write (∀x//∃y)(∃y). Expressions like (∀x//∨₁) should also be allowed; notice that, if we do so, we should give names (i.e. indices) to the disjunctions, to distinguish among them. Anyway, binding of connectives may be eliminated, analogously to what we did for IF languages. For example, formula (∀x//∨)(P ∨ Q) is equivalent to (∀x//∃y)(∃y)((y = 0 ∧ P) ∨ (y ≠ 0 ∧ Q)).
This system is equivalent to IF languages if we impose some limitations on the usage of bound variables; we should not allow the same bound variable to be adopted for different quantifiers occurring in the same formula. Otherwise, formulas such as (∀x//∃y)((∃yP) ∧ (∃yQ)) would be ambiguous: which one of the two quantifiers “∃y” is not bound by ∀x? If we do not want to restrict our usage of variables, we could for example assign a name to each quantifier (we would then have (∃y)₁ and (∃y)₂); or we could establish that, in formulas like (∀x//∃y)((∃yP) ∧ (∃yQ)), the declaration affects all the occurrences of each quantifier it mentions, and write different variable names if we want to specify that some occurrences should not be bound. However, these problems are quite inessential.
If we adopt the third and last of the conventions I suggested, the double-
slash notation gains a slight advantage over IF notation: it is, in general, more
compact, because we only have to state each independence relation once. This
is not the case with IF language: formula (∀x//∃y)((∃yP ) ∧ (∃yQ)) becomes
(∀x)((∃y/∀x)P ∧ (∃y/∀x)Q), and independence from x is asserted twice.
Dependence on connectives It has been suggested that we should also
consider dependences on connectives, that is, expressions containing quantifiers
and connectives such as (∃x/∨₁), (∨/∧₁), etc. Their intended meaning will be clearer after reading the next section on game-theoretic semantics.
If we write a quantifier like (∃x/∨₁), we mean that there is a “Verifier” that has to choose a value for x without knowing which of the two disjuncts of ∨₁ has
been chosen. As W. Hodges observed in [11], it is quite unlikely that the players
of a semantical game may not know what stage of the game has been reached
(that is, which subformula is being considered); from this, and the knowledge of
the formula in its entirety, all previous choices of conjuncts and disjuncts may
be reconstructed. More precisely, this is true if we never use the same bound variable twice. Otherwise, some subformulas could be identical, and the stage of the game could not be read off from the subformula; we should then adopt some tricks like those suggested in the previous paragraph (e.g. assign a name to each quantifier). Under these assumptions, we may agree with Hodges that
independence from connectives is a vacuous concept.
Branching quantifiers As we said earlier, most of the ideas behind IF logic are not totally new, and can be traced back to a 1961 article by Leon Henkin, Some remarks on infinitely long formulas ([6]). His name is often associated only with the so-called Henkin quantifier
{ ∀x  ∃z
  ∀y  ∃w }  P[x, y, z, w]

which has the same meaning as our formula (1.6): ∃f∃g∀x∀yP[x, y, f(x), g(y)].
In reality, Henkin’s original definition of branching quantifiers (or partially-
ordered quantifiers) was much more general; it was intended to give a meaning
to a vast class of infinitary quantifiers. Quoting from [6],
(...) let m, n be cardinal numbers and d a function such that to each ordinal ν < n the function d assigns a set d(ν) of ordinals < m. Then we associate with m, n, d a quantifier Q_{m,n,d} such that, given any two disjoint sequences (v_{i_0}, …, v_{i_μ}, …)_{μ<m} and (v_{j_0}, …, v_{j_ν}, …)_{ν<n} of distinct individual variables, and any formula φ (which in general contains these variables freely), the formula

(∗)  (Q_{m,n,d} v_{i_0}, …, v_{i_μ}, …; v_{j_0}, …, v_{j_ν}, …) φ(v_{i_0}, …, v_{i_μ}, …; v_{j_0}, …, v_{j_ν}, …)

has the following intuitive meaning: Given any values of the variables v_{i_0}, …, v_{i_μ}, …, there exist values of the variables v_{j_0}, …, v_{j_ν}, … (the value of v_{j_ν} being independent of the values of the variables v_{i_μ} except for μ ∈ d(ν)) such that φ is true. More precisely, we explain the meaning of the first-order formula (∗) by asserting its equivalence with the second-order formula

(∃g_0, …, g_ν, …)_{ν<n} (∀v_{i_0}, …, v_{i_μ}, …) φ(v_{i_0}, …, v_{i_μ}, …; g_0(v_{i_{0_0}}, v_{i_{0_1}}, …), …, g_ν(v_{i_{ν_0}}, v_{i_{ν_1}}, …))

where ν_0, ν_1, … are the elements of d(ν). A quantifier Q_{m,n,d} of this kind we call a dependent quantifier.
The Henkin quantifier is just the quantifier Q_{2,2,id_{{0,1}}}. However, it has notable
characteristics:
• It is not definable in terms of classical quantifiers.
• It is the minimal dependent quantifier, i.e. it is definable in terms of any
other nontrivial dependent quantifier (see [1]).
• Ehrenfeucht proved that the logic obtained by adding the Henkin quanti-
fier to ordinary first-order logic (that is, by allowing it to occur in front of
a first-order formula) is not recursively axiomatizable (part of the proof
can be found in [6]). As we shall see, this is the case also for IF logic.
• The compactness property does not hold for this logic.
• The quantifier “there exist infinitely many” is definable in terms of the Henkin quantifier (a sketch is given after this list).
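As for the last point, here is a sketch of the usual definition (the exact formula varies across presentations, and the equivalence of “infinite” with “Dedekind-infinite” uses a weak form of choice): φ holds of infinitely many elements if and only if

∃u(φ(u) ∧ { ∀x  ∃z
            ∀y  ∃w } ((x = y ↔ z = w) ∧ (φ(x) → (φ(z) ∧ z ≠ u))))

Reading the Henkin quantifier as ∃f∃g∀x∀y with z = f(x) and w = g(y), the clause x = y ↔ z = w forces f = g and f to be injective, so f embeds the set defined by φ into itself while avoiding the element u; such a map can exist only if that set is infinite.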
The term “Henkin quantifier” (or branching quantifier) often refers instead to the finitary case of Henkin’s definition, and in particular to quantifiers of the form
∀x¹₁, …, x¹_{n₁}   ∃z¹₁, …, z¹_{m₁}
∀x²₁, …, x²_{n₂}   ∃z²₁, …, z²_{m₂}
⋮
∀xʳ₁, …, xʳ_{nᵣ}   ∃zʳ₁, …, zʳ_{mᵣ}
whose intended meaning is that each existential quantifier depends only on the universal quantifiers which occur on the same line. This last usage of the term “branching quantifier” may cause confusion, but it is justified by the results of Wilbur John Walkoe jr. ([20]): all finitary branching quantifiers are definable in terms of quantifiers of the form just shown. The kind of definability he referred to was this: a quantifier Q₂ containing variables η₁, …, ηₙ is definable in terms of a quantifier Q₁ if for each n-ary predicate P[η₁, …, ηₙ] there is a formula φ such that |= Q₂P[η₁, …, ηₙ] ↔ Q₁φ.
His results were even stronger: the branching quantifiers are all definable in
terms of quantifiers in the normal form
∀x¹₁, …, x¹_{n₁}   ∃z¹
∀x²₁, …, x²_{n₂}   ∃z²
⋮
∀xʳ₁, …, xʳ_{nᵣ}   ∃zʳ
or also in terms of quantifiers in another normal form:
{ ∀x¹₁, …, x¹_{n₁}   ∃z¹₁, …, z¹_{m₁}
  ∀x²₁, …, x²_{n₂}   ∃z²₁, …, z²_{m₂} }
Michael Krynicki, in his 1993 paper [15], refined the result one step further: all
branching quantifiers are definable in terms of quantifiers of the form
{ ∀x¹₁, …, x¹_{n₁}   ∃z¹
  ∀x²₁, …, x²_{n₂}   ∃z² }
Under some extra conditions, i.e. when a pairing function is definable, these quantifiers are definable in terms of Q_{2,2,id_{{0,1}}}; this too was observed in Walkoe’s paper [20] (by the way, Hintikka erroneously attributes this discovery to Krynicki). But it can be shown that such a reduction does not hold in general. A proof of this follows from the fact that, as was just said, the logic obtained by adding Q_{2,2,id_{{0,1}}} to first-order logic does not, in general, respect the compactness property, while the logic that allows all branching quantifiers does (this can be inferred from theorem 3.2.1 together with corollary 1.2.15). We shall refer to the logic obtained by allowing any branching quantifier as a prefix for a first-order formula as Lbq.
The reader may wonder whether the logic that employs all of these quantifiers coincides with our IF logic. It does, at least on the semantical level (corollary 1.2.15): both IF logic and Lbq logic may be translated sentence-by-sentence into Σ¹₁ formulas and vice versa. From the results of section 1.2 the reader may deduce how it is possible to translate an IF sentence into an equivalent Lbq sentence, and vice versa. However, such a conversion is not as obvious as it may seem from very simple examples; in general, not all IF sentences have an intuitive translation into branching quantifiers in normal form. As an
example of the difficulties that may arise, notice that branching quantifiers in
normal form have a more or less explicit restriction: different lines of a branching
quantifier may not contain occurrences of the same variable. A quantification
of the form

{ ∀x  ∃z
  ∀y  ∃z }  P[x, y, z]
does not make sense: it tells us that z is only a function of x and also that z is only a function of y. Translating to a Σ¹₁ sentence, what should we substitute for z? Should it be f(x) or g(y)? (The only situation compatible with both dependences is the limit case in which a fixed constant can always be chosen for z. Should we assign this trivial meaning to these quantifiers?) Yet, it seems that IF logic can express sensible sentences that, literally rearranged into branching quantifiers, would show such a nonsensical feature. For example, formula
∀a∀x∀y(∃w/x)(∃b/a)P[x, y, w, a, b]    (1.14)

may be meaningfully translated into

∃g∃h∀a∀x∀yP[x, y, g(a, y), a, h(x, y)].    (1.15)
If we try to translate (1.14) into the language of normal-form branching quantifiers, we obtain

{ ∀x  ∃b
  ∀y  ∃w, b
  ∀a  ∃w }  P[x, y, w, a, b]
which is meaningless. In general terms, the problem seems to be that branching quantifiers in normal form are suitable to describe binding scopes that are not nested, but only under the restriction that such scopes do not overlap. The solution to this riddle, as the reader has probably already understood, is that a formula like (1.14) should be translated into an Lbq sentence whose quantifier prefix is not in normal form; only in a second step may such an Lbq sentence be translated into a normal-form expression.
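Concretely (a sketch, in the terms of Henkin’s general definition quoted above): the dependence pattern of (1.14) is the partial order in which ∃w is preceded by ∀a and ∀y only, and ∃b by ∀x and ∀y only,

∀a, ∀y   ∃w
∀x, ∀y   ∃b

and displaying this pattern row by row forces ∀y to occur on two rows, which is exactly what the normal form forbids.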
The history of connectives like (∨/y₁, …, yₙ) dates back to 1992; they were introduced by Gabriel Sandu and Jouko Väänänen in an article bearing the title Partially ordered connectives ([17]). There, they appear in the form of
“branching connectives” such as
{ ∀x  ∨_{i∈{0,1}}
  ∀z  ∨_{j∈{0,1}} }

or

{ ∀x  ∃z
  ∀y  ∨ }.
In the context of IF logic, these connectives can be eliminated, and thus we shall
not discuss their properties. The interested reader may refer to [17] and [15].
Väänänen’s logics In his recent monograph ([19], 2007) Jouko Väänänen introduced a new formalism for expressing dependences; he calls his creation Dependence Logic. The syntax of Dependence Logic does not involve special quantifiers or connectives; instead, it allows a special kind of atomic formula, denoted =(t₁, …, tₙ), for any n. Such a formula expresses the fact that the term tₙ is completely determined by t₁, …, tₙ₋₁. This logic, which is given a peculiar compositional semantics, is equivalent to IF logic in the sense that each IF sentence may be translated into a sentence of Dependence Logic which has the same models, and vice versa; obviously, the two logics are quite different on the syntactical level. Team Logic is obtained by adding a “classical negation” to Dependence Logic, and we may suppose that it has the expressive power of what we here call “fully extended IF logic”.
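For a flavour of the correspondence (a sketch; exactly which variables must be listed in the atom depends on the precise formulation of the semantics), the IF sentence (1.7) may be rendered as

∀x∃z∀y∃w(=(y, w) ∧ P[x, y, z, w]),

where the atom =(y, w) declares that the value of w is completely determined by that of y; both sentences have the Σ¹₁ meaning (1.3).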
1.2 Game Theoretical Semantics (GTS)
In the preceding section we already presented a semantics for IF logic: an IF formula is true if and only if its Σ¹₁ translation is true under classical second-order semantics. We might say that, otherwise, a formula is false; this, however, was not Hintikka’s first choice. While trying to discover an autonomous definition of truth for IF logic, he recognized that this logic lacks, at least at first glance, a quality that is necessary for the introduction of a Tarski-type definition of truth: so-called compositionality. The meaning of this word from linguistics is that the truth value of a sentence should be determined by the truth values (resp., in the case of open formulas, by the interpretations) of its subformulas;