The foundations of arithmetic are often expressed by starting from one number, zero, and one function, the successor function. We denote those by 0 and s. (The successor function of a given number represents the following number, so s(x) is what you would normally call x+1.) From those, we can easily define things like the other natural numbers:
1 : s(0)
2 : s(1)
3 : s(2)
...
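This encoding translates directly into code. As an illustration (the representation here is my own choice, not part of the text): zero can be the empty tuple, and the successor function wraps a number in another tuple.

```python
# A minimal sketch of the 0/s encoding (the representation is my own choice).
zero = ()

def s(x):
    """Successor: the number following x."""
    return (x,)

# The other naturals are built exactly as in the text:
one = s(zero)      # 1 : s(0)
two = s(one)       # 2 : s(1)
three = s(two)     # 3 : s(2)

def to_int(x):
    """Convert back to an ordinary Python int, for inspection."""
    n = 0
    while x != ():
        x = x[0]
        n += 1
    return n
```

Any representation with a distinguished zero and an injective successor would do just as well; tuples are merely convenient to compare and unwrap.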
We can also define the basic operators. For example, addition can be described like so:
x + 0 : x
x + s(y) : s(x) + y
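For instance, these two rules reduce 1 + 2 step by step, moving successors from the right operand to the left until the right-hand side reaches 0:

s(0) + s(s(0))      (that is, 1 + 2)
s(s(0)) + s(0)      (by the second rule)
s(s(s(0))) + 0      (by the second rule again)
s(s(s(0)))          (by the first rule; that is, 3)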
Just to horrify mathematicians, Bengt decides to include infinity, denoted ∞, in the natural numbers. Just like 0 and s, it is not defined as anything in particular. We then need a couple more lines to extend the definition of addition:
x + ∞ : ∞
∞ + x : ∞
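The four addition rules can be sketched as one recursive function. This is my own illustration, using the same assumed encoding as before: zero is the empty tuple, a successor is a one-element tuple, and infinity is a sentinel object that, fittingly, is not defined as anything in particular.

```python
# A sketch of the four addition rules (the encoding is my own, not the text's).
zero = ()            # 0
inf = object()       # ∞: an opaque sentinel

def s(x):
    """Successor of x."""
    return (x,)

def add(x, y):
    if x is inf or y is inf:    # ∞ + x : ∞   and   x + ∞ : ∞
        return inf
    if y == zero:               # x + 0 : x
        return x
    return add(s(x), y[0])      # x + s(y) : s(x) + y

one = s(zero)
two = s(one)
```

Note that the recursion mirrors the rules exactly: each step strips one successor off the right operand and adds it to the left, so the second argument shrinks until the `x + 0 : x` rule applies.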
To include things like the equals operator, which is essentially a function from two numbers to a truth value, we also need to define true and false. But Bengt decides to cheat a little by reusing the numbers, like in some programming languages. Let's be a little bit original and say that 0 denotes true, and ∞ denotes false.
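Under the same assumed encoding, the truth values are just new names for numbers we already have (again my own illustration):

```python
# Truth values reuse the numbers, as the text suggests (encoding is my own).
zero = ()            # 0, which also denotes true
inf = object()       # ∞, which also denotes false

true = zero
false = inf

def is_true(b):
    """Interpret a result of a predicate such as equality as a Python bool."""
    return b is not inf
```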
(a) Define multiplication.
(b) Define subtraction. Since we're only dealing with natural numbers, we'll use a slightly unusual definition of subtraction: the smaller number is always subtracted from the larger one. That is, what we want is actually the absolute value of the difference. Infinity minus infinity should be zero.
(c) Define equality. Remember that you can use the things you have defined in (a) and (b) - or haven't, if you skipped those.
(d) Define the => operator, that is, equal-or-greater-than.
(e) Define the other => operator, that is, logical implication.
(f) What about logical negation? Will that be interesting somehow?
(g) Finally, put those operators to use by proving that 1+1=2.