Fitch diagram helper

Though this be madness…

Fitch diagrams are a way of constructing formal logic proofs in sentential logic or in predicate logic. The script on this page (open script in separate tab) allows users to construct these proofs and check their validity automatically.

The script allows the use of the logical connectives listed in the table below. In addition, the script allows for the use of predicates, functions, variables, and constants. The only restrictions on these are that they must start with a letter and consist only of letters, numbers, and the underscore (_) character. In particular, the typical restrictions that predicates must start with uppercase letters, while functions and variables must start with lowercase letters, are not enforced.

Logical connective      Logic symbol  Script symbol
Equality                =             =
Conjunction             ∧             &, ^
Disjunction             ∨             |
Negation                ¬             ~, !
Absurdity               ⊥             #
Implication             →             >
Biconditional           ↔             <
Universal quantifier    ∀             @
Existential quantifier  ∃             ?


Each statement in a proof has to be justified by a rule. In some cases, the rule alone justifies the use of a certain statement. In general, however, a rule refers to previous statements and/or subproofs. Single statements are referred to by their line number, while subproofs are referred to by the first and last line of the subproof, separated by a dash (e.g. 4-6). If a rule refers to more than one statement and/or subproof, these references are separated by a comma (e.g. 3, 4-5, 7). The script on this page will automatically check whether rules are applied correctly. Below, we look at each of the supported rules in more detail.


Premise

Premises are the statements that are assumed to be true, and from which we want to derive the goal or conclusion. As a result, premise rules are always allowed, except that they must appear as a single block at the start of the proof. That is, you cannot slip in an extra premise halfway through the proof: it must appear at the top.


Assumption

Assumptions, as the name suggests, are assumed to be true. However, unlike premises, assumptions are temporary. In essence, an assumption starts a subproof that has a single premise (the assumption). While a subproof can refer to any statement in proofs that contain it, there are only a few ways in which the main proof can interact with subproofs. We will visit these rules below.


Flag

A flag is like an assumption, except that a flag assumes that some constant exists rather than that some statement is true. Note that this must be a new constant: it may not already appear in any statement before the flag. The flag rule is only needed to introduce a universal quantifier (∀).


Tautology

A tautology rule introduces a statement that is logically true (e.g. P ∨ ¬ P). In general, you should avoid introducing tautologies this way, and instead use Fitch to derive the tautologies you need. After all, if you can derive Q from premises P1, …, Pn, then (P1 ∧ … ∧ Pn) → Q is a tautology. Using this tautology to prove Q, however, would miss the point of making a proof that Q does indeed follow from premises P1, …, Pn.

When you do introduce a tautology this way, the script will check whether the formula you provided actually is a tautology. However, this check may fail for formulas that contain certain combinations of universal and existential quantifiers. If this happens, the script will trust your judgment on whether or not the formula is a tautology.
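For quantifier-free formulas, the tautology check described above amounts to trying every truth assignment. A minimal sketch of that idea in Python, assuming the formula is supplied as a Python function of boolean arguments (the script's own checker works on parsed formulas; this sketch covers only the propositional case):

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Return True if the formula holds under every truth assignment."""
    return all(formula(*values)
               for values in product([True, False], repeat=n_vars))

# P ∨ ¬P is a tautology; P → Q (i.e. ¬P ∨ Q) is not.
print(is_tautology(lambda p: p or not p, 1))       # True
print(is_tautology(lambda p, q: (not p) or q, 2))  # False
```

The number of assignments doubles with every variable, which is why this brute-force approach only works for small propositional formulas, and not for formulas with quantifiers.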


Reiteration

Reiteration repeats a previously derived statement. You can only reiterate statements that are in the same subproof, or in a subproof containing the reiteration. Note that you never need to use reiteration; it is simply there to make the proof more readable.

Equality (=) rules

The equality symbol (=) indicates that the left-hand side is the same as the right-hand side. In terms of Fitch proofs, however, equality is not automatically symmetric like mathematical equality: from a statement a = b, it does not immediately follow that b = a (although it can be derived).

= Introduction

k a = a = Introduction

Any constant or variable equals itself.

= Elimination

k P(a)
m a = b
n P(b) = Elimination: k, m

An equality a=b allows any occurrence of the left-hand side (a) of the equation to be replaced by the right-hand side (b).

Conjunction (∧) rules

The conjunction symbol (∧) indicates that both the left-hand side and the right-hand side of a statement are true.

∧ Introduction

k P
m Q
n P ∧ Q ∧ Introduction: k, m

For any two derived formulas P and Q, their conjunction P ∧ Q can also be derived.

∧ Elimination

k P ∧ Q
m P ∧ Elimination: k

Whenever a conjunction P ∧ Q is derived, both the left-hand side (P) and the right-hand side (Q) can also be derived.

Disjunction (∨) rules

The disjunction symbol (∨) indicates that at least one of the left-hand side and the right-hand side of a statement is true (and possibly both). Subproofs are needed to eliminate a disjunction: to show that R follows from P ∨ Q, you must show that R follows from assuming P, and that R also follows from assuming Q.

∨ Introduction

k P
m P ∨ Q ∨ Introduction: k

Any disjunction in which either the left-hand side or the right-hand side is a derived formula can also be derived.

∨ Elimination

g P ∨ Q
h P
j R
k Q
m R
n R ∨ Elimination: g, h-j, k-m

Formula R can be derived from a disjunction P ∨ Q if R follows from assuming the left-hand side (P) of a disjunction, but also from assuming the right-hand side (Q) of the same disjunction.

Negation (¬) rules

The negation symbol (¬) indicates that the underlying statement is not true. Introducing a negation relies on a subproof: if assuming P results in inconsistencies, then ¬P can be derived.

¬ Introduction

k P
m ⊥
n ¬ P ¬ Introduction: k-m

Whenever absurdity can be derived in a subproof, the negation of that subproof's assumption can be derived.

¬ Elimination

k ¬ ¬ P
m P ¬ Elimination: k

A double negation can be eliminated.

Absurdity (⊥) rules

Absurdity (⊥) is the result of two contradictory statements. From absurdity, any statement can be derived.

⊥ Introduction

k P
m ¬ P
n ⊥ ⊥ Introduction: k, m

Absurdity follows when both a formula and its negation can be derived.

⊥ Elimination

k ⊥
m P ⊥ Elimination: k

Any formula follows from absurdity.

Implication (→) rules

The implication symbol (→) indicates that whenever the left-hand side can be derived, the right-hand side must also be true.

→ Introduction

k P
m Q
n P → Q → Introduction: k-m

If formula Q can be derived from assuming P, then the corresponding conditional can be derived.

→ Elimination

k P → Q
m P
n Q → Elimination: k, m

If Q follows from P, and P is derived, then Q can be derived.

Biconditional implication (↔) rules

The biconditional implication symbol (↔) indicates an implication that works both ways. That is, a biconditional implication P ↔ Q means that when P is true, Q can be derived, but also that if Q is true, P can be derived.

↔ Introduction

g P
h Q
k Q
m P
n P ↔ Q ↔ Introduction: g-h, k-m

If formula Q follows from assuming P, and P follows from assuming Q, then the corresponding biconditional can be derived.

↔ Elimination

k P ↔ Q
m Q
n P ↔ Elimination: k, m

If P can be derived if and only if Q can be derived, then P follows from Q, and Q follows from P.

Universal quantifier (∀) rules

The universal quantifier (∀) indicates that some statement holds for all possible constants. For example, if we know that ∀x P(x) is true and that c is some constant, then P(c) is true. But be careful: a universal quantifier does not imply that a constant actually exists.

∀ Introduction

k a
m P(a)
n ∀x P(x) ∀ Introduction: k-m

If a formula can be derived for an arbitrarily chosen (flagged) constant, it holds universally.

∀ Elimination

k ∀x P(x)
m P(a) ∀ Elimination: k

If a formula holds universally, it holds for any (existing) constant.

Existential quantifier (∃) rules

The existential quantifier (∃) indicates that a statement holds for at least one constant, though it gives no information about the identity of that constant. For example, if you know that ∃x P(x), this tells you that there is some constant c so that P(c) holds, but it gives no information at all about whether or not for a given constant b, P(b) holds.

∃ Introduction

k P(a)
m ∃x P(x) ∃ Introduction: k

If a formula holds for some constant (a), there exists a constant for which the formula holds.

∃ Elimination

k ∃x P(x)
m P(a) ∃ Elimination: k

If a formula holds for some constant, that constant can be given a name that has not previously been used.

Search algorithms

Seek, and ye shall find

To find a solution, you usually need to search for it. For artificial agents, this is no different. On this page, we explore several ways in which agents can search, and list some of the advantages and disadvantages. The JavaScript on this page makes use of HTML5.

We represent a search problem as a graph, where each node is a point in the solution space. For example, this could be a road map, where different nodes are different cities. But nodes can also represent different actions in a plan. For example, an agent that wants to make a cup of tea can search through its repertoire of actions and find that it should pass through the nodes of “putting on the kettle” and “pouring the hot water” to get to the goal of “enjoying a nice cup of tea”.

In our representation, agents always try to find a path from the start S to the goal G. Every arrow represents a path from one node to another, and can only be travelled in one direction. The number next to each edge shows the cost of taking that path (e.g. the time, effort, or energy consumed by taking the path). When there is no edge connecting two nodes, it is not possible to go from one node to the other directly. However, it may still be possible to go there indirectly.
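As an illustration, such a directed, weighted graph can be stored as an adjacency dict in Python. This is a minimal sketch; the node names and edge costs below are made up and do not correspond to the page's example network:

```python
# A tiny directed, weighted graph as an adjacency dict: each node maps to a
# list of (neighbor, cost) pairs. Nodes and costs are illustrative only.
graph = {
    "S": [("A", 2), ("B", 5)],  # from the start S we can reach A (cost 2) or B (cost 5)
    "A": [("G", 4)],            # A connects directly to the goal G
    "B": [("G", 1)],
    "G": [],                    # no outgoing edges from the goal
}

def neighbors(node):
    """Return the (successor, cost) pairs reachable from a node in one step."""
    return graph[node]
```

Because every cost is attached to a directed edge, an edge can only be travelled in the direction it points, just like the arrows in the example network.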

Below, we describe a number of search algorithms, and show an example of the way each algorithm works in practice. This list of algorithms includes depth first search, depth limited search, iterative deepening search, breadth first search, uniform cost search, greedy best first search, and A* search.

Each example consists of the example network and the stack. The stack shows what options the agent can still explore. Whenever a node is put onto the stack (the agent recognizes the possibility of going to that node), the corresponding node in the network turns green. When the node is removed from the stack (the agent has finished considering the option), it turns blue. As soon as the goal node G turns blue, the agent has found a solution and stops. In some cases, it may happen that an edge turns red. This indicates that the agent has chosen not to investigate that path at all.

To compare these search algorithms more directly, you can also view the search algorithm script page, which provides a simultaneous view of the network, the stack, the search tree, and the algorithm.

Depth first search

An agent that uses depth first search is always moving forward. Every time the agent reaches a node, it looks at all the options available from that point and chooses to continue searching in the last direction that the agent has observed. Backtracking to a previous point only happens when the agent reaches a dead end, and is therefore forced to turn back.

One of the main advantages of depth first search is that it requires very little memory. But because of that, depth first search may lead agents to go around in circles. Fortunately, there are no circular paths in the example above. In general, to prevent going around in circles, agents can skip visited nodes. Whenever they see a node for the second time, they ignore the possibility of going there.
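The behaviour described above can be sketched in Python. This is a minimal illustration, assuming a graph given as an adjacency dict mapping each node to (neighbor, cost) pairs; it uses a visited set to skip nodes seen before, as the paragraph suggests:

```python
def depth_first_search(graph, start, goal):
    """Sketch of depth first search over a graph given as an adjacency dict
    mapping each node to a list of (neighbor, cost) pairs."""
    stack = [[start]]        # stack of partial paths: last in, first out
    visited = set()
    while stack:
        path = stack.pop()   # always continue in the most recently seen direction
        node = path[-1]
        if node == goal:
            return path
        if node in visited:  # skip visited nodes to avoid going in circles
            continue
        visited.add(node)
        for neighbor, _cost in graph[node]:
            stack.append(path + [neighbor])
    return None              # no path from start to goal exists
```

The only memory the agent needs is the stack of unexplored paths and the visited set, which is what makes depth first search so cheap in memory.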

Depth limited search

As mentioned above, the disadvantage of depth first search is that you may get stuck in a circular path. One way to overcome this is to stop searching after a given number of steps. For example, the agent may restrict itself to paths that visit no more than three nodes. This way, the agent is sure not to get stuck in a circular path for all eternity. The downside, of course, is that if the path from start to goal visits at least four nodes, the agent cannot find a solution even though one exists.

Iterative deepening search

The disadvantage of depth first search is that you may get stuck in a circular path. Depth limited search solves this partially by putting a limit on the length of the path. Unfortunately, this may leave the agent unable to find the solution, because the path to the goal takes more steps than the limit allows.

To solve this problem, the agent could simply increase its search limit and try again. This is exactly what iterative deepening search does. Whenever the agent is unable to find a path from start to goal, the agent increases its search depth limit and restarts the search. But the demonstration shows that this solution has its own problems: the agent may be doing the same search over and over again, forgetting what it has learned every time.
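Both ideas can be sketched together, again assuming the adjacency-dict graph representation: depth limited search gives up once a path visits the limit number of nodes, and iterative deepening restarts it with an ever larger limit:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """Depth first search that abandons any path visiting more than `limit` nodes."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if len(path) >= limit:       # path already visits `limit` nodes: give up here
        return None
    for neighbor, _cost in graph[node]:
        result = depth_limited_search(graph, neighbor, goal, limit, path)
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_limit=20):
    """Retry depth limited search with an ever larger limit until a path is found."""
    for limit in range(1, max_limit + 1):
        # each restart forgets everything learned in the previous round
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None
```

The repeated restarts are exactly the inefficiency the demonstration shows: the shallow part of the search is redone on every round.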

Breadth first search

If depth first search is about always moving forward, breadth first search is about not straying too far. An agent that follows breadth first search explores its options in the order it observed these options. Rather than staying on one chosen path, the agent will jump back and forth between options until it finds the solution.

Breadth first search has a huge advantage over depth first search. Because breadth first search is a meticulous search for a path to the goal, it is guaranteed to find such a path if one exists. It will even find the path that takes the fewest steps, though this may not be the path with the lowest cost. This advantage comes at the cost of the memory needed to remember all paths that are not yet explored.
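A minimal Python sketch, again assuming the adjacency-dict representation; note that what the page calls the stack behaves as a first-in, first-out queue in breadth first search:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Sketch of breadth first search: explore options in the order they were seen."""
    queue = deque([[start]])    # FIFO queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()  # oldest option first: never stray too far ahead
        node = path[-1]
        if node == goal:
            return path         # fewest steps, though not necessarily lowest cost
        for neighbor, _cost in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None
```

The queue of unexplored paths is exactly the memory cost mentioned above: it can grow very large on wide graphs.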

Uniform cost search

Uniform cost search is a refinement of breadth first search. Whereas breadth first search always considers the path with the fewest steps first, uniform cost search prefers to consider paths with lower costs. An agent that follows uniform cost search orders its stack of paths to consider based on the cost of the path so far. In the stack view of the example above, this cost is listed below the node. Since the agent always looks at the cheapest path first, as soon as the agent considers the goal node G, it has found the lowest-cost path from start to goal.
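This ordering can be sketched with a priority queue, assuming the adjacency-dict representation with per-edge costs:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Sketch of uniform cost search: always expand the cheapest known path first."""
    frontier = [(0, [start])]     # priority queue ordered by path cost so far
    explored = set()
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path     # the first path that reaches G is the cheapest
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph[node]:
            heapq.heappush(frontier, (cost + step_cost, path + [neighbor]))
    return None
```

Because the frontier is ordered by cost rather than by number of steps, a long path of cheap edges can beat a short path of expensive ones.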

Greedy best first search

Unlike the previous search algorithms, which were all uninformed search, greedy best first search is an informed search algorithm. That is, greedy best first search makes use of information that is not part of the problem description in the form of a heuristic. This heuristic gives an estimate for the cost of reaching the goal from any node. For example, a heuristic on a road map could be the straight-line distance between two cities. This is easier to calculate than the road distance, but still gives the agent some indication of whether it is looking in the right direction. This is the main idea of the heuristic: it should be easy to calculate, but still give information that is relevant to the search.

An agent that follows greedy best first search orders its stack based on the heuristic value, and therefore tries nodes with a low heuristic value first. That is, it first tries those nodes that are expected to be very close to the goal already, and ignores those paths that go in the wrong direction. However, like depth first search, this can cause the agent to go around in circles. Another disadvantage of greedy best first search is that it needs a heuristic, which may not be easy to obtain.
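A sketch of this ordering, assuming the adjacency-dict graph and a heuristic supplied as a dict mapping each node to its estimated distance to the goal:

```python
import heapq

def greedy_best_first_search(graph, start, goal, heuristic):
    """Sketch of greedy best first search: order the frontier by heuristic alone."""
    frontier = [(heuristic[start], [start])]  # lowest estimate explored first
    visited = {start}                         # visited set guards against circles
    while frontier:
        _estimate, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for neighbor, _cost in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (heuristic[neighbor], path + [neighbor]))
    return None
```

Note that the edge costs are never consulted: the agent is guided purely by the heuristic, which is why the path it finds need not be the cheapest one.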

A* search

Like greedy best first search, A* search is an informed search algorithm that makes use of a heuristic in addition to the problem description. But instead of relying only on the heuristic, A* search also looks at how long the path already is (similar to uniform cost search). That is, A* search makes an estimate of the total length of the path rather than the remaining distance, and orders the stack based on this estimate.

Like uniform cost search, A* search will find the shortest length path, but only if the heuristic follows a simple rule: the heuristic cannot overestimate the cost of actually reaching the goal. That is, the heuristic value for a node cannot be more than the actual cost of reaching the goal from that node. It can, however, be less. In fact, for the special case that the heuristic value is zero for all nodes, A* search is the same as uniform cost search. But for a more informative heuristic, A* search could be a lot more efficient in finding the path with the lowest cost.
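A sketch, again assuming the adjacency-dict graph and a heuristic dict; the frontier is ordered by the cost so far plus the heuristic estimate of the remaining cost:

```python
import heapq

def a_star_search(graph, start, goal, heuristic):
    """Sketch of A*: order the frontier by path cost plus heuristic estimate."""
    frontier = [(heuristic[start], 0, [start])]  # (estimated total, cost so far, path)
    explored = set()
    while frontier:
        _estimate, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path  # cheapest path, provided the heuristic never overestimates
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph[node]:
            new_cost = cost + step_cost
            heapq.heappush(frontier,
                           (new_cost + heuristic[neighbor], new_cost, path + [neighbor]))
    return None
```

With a heuristic that is zero for every node, this behaves exactly like uniform cost search, as the paragraph above notes.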