3.1: Binary Representations - Mathematics

Suppose \(\left\{a_n\right\}_{n=1}^{\infty}\) is a sequence such that, for each \(n = 1, 2, 3, \ldots\), either \(a_n = 0\) or \(a_n = 1\) and, for any integer \(N\), there exists an integer \(n > N\) such that \(a_n = 0\). Then

\[0 \leq \frac{a_n}{2^n} \leq \frac{1}{2^n}\]

for \(n = 1, 2, 3, \ldots\), so the infinite series

\[\sum_{n=1}^{\infty} \frac{a_n}{2^n}\]

converges to some real number \(x\) by the comparison test. Moreover,

\[0 \leq x < 1.\]

We call the sequence \(\left\{a_n\right\}_{n=1}^{\infty}\) the binary representation for \(x\), and write

\[x = .a_1 a_2 a_3 a_4 \ldots\]

Exercise \(\PageIndex{1}\)

Suppose \(\left\{a_n\right\}_{n=1}^{\infty}\) and \(\left\{b_n\right\}_{n=1}^{\infty}\) are both binary representations for \(x\). Show that \(a_n = b_n\) for \(n = 1, 2, 3, \ldots\).

Now suppose \(x \in \mathbb{R}\) with \(0 \leq x < 1\). Construct a sequence \(\left\{a_n\right\}_{n=1}^{\infty}\) as follows: if \(0 \leq x < \frac{1}{2}\), let \(a_1 = 0\); otherwise, let \(a_1 = 1\). For \(n = 1, 2, 3, \ldots\), let

\[s_n = \sum_{i=1}^{n} \frac{a_i}{2^i}\]

and set \(a_{n+1} = 1\) if

\[s_n + \frac{1}{2^{n+1}} \leq x\]

and \(a_{n+1} = 0\) otherwise.
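The construction just described is a greedy algorithm: at each step, take the next digit to be 1 exactly when doing so does not overshoot \(x\). As an illustration (not part of the text), here is a Python sketch of that rule:

```python
def binary_digits(x, n):
    """Greedy construction of the first n binary digits of x in [0, 1):
    a_{k+1} = 1 exactly when s_k + 1/2^(k+1) <= x."""
    digits = []
    s = 0.0
    for k in range(1, n + 1):
        if s + 1.0 / 2 ** k <= x:
            digits.append(1)
            s += 1.0 / 2 ** k
        else:
            digits.append(0)
    return digits

# 13/16 = .1101 in binary
print(binary_digits(13 / 16, 6))  # → [1, 1, 0, 1, 0, 0]
```

The example value 13/16 is exactly representable in floating point, so the comparisons are exact here; for arbitrary reals one would work symbolically.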

Lemma \(\PageIndex{1}\)

With the notation as above,

\[s_n \leq x < s_n + \frac{1}{2^n}\]

for \(n = 1, 2, 3, \ldots\).



Proof. Since

\[s_1 = \left\{\begin{array}{ll} 0, & \text{if } 0 \leq x < \frac{1}{2}, \\ \frac{1}{2}, & \text{if } \frac{1}{2} \leq x < 1, \end{array}\right.\]

it is clear that \(s_1 \leq x < s_1 + \frac{1}{2}\). So suppose \(n > 1\) and \(s_{n-1} \leq x < s_{n-1} + \frac{1}{2^{n-1}}\). If \(s_{n-1} + \frac{1}{2^n} \leq x\), then \(a_n = 1\) and

\[s_n = s_{n-1} + \frac{1}{2^n} \leq x < s_{n-1} + \frac{1}{2^{n-1}} = s_n + \frac{1}{2^n}.\]

If \(x < s_{n-1} + \frac{1}{2^n}\), then \(a_n = 0\) and

\[s_n = s_{n-1} \leq x < s_{n-1} + \frac{1}{2^n} = s_n + \frac{1}{2^n}. \quad \text{Q.E.D.}\]

Proposition \(\PageIndex{2}\)

With the notation as above,

\[x = \sum_{n=1}^{\infty} \frac{a_n}{2^n}.\]


Proof. Given \(\epsilon > 0\), choose an integer \(N\) such that \(\frac{1}{2^N} < \epsilon\). Then, for any \(n > N\), it follows from the lemma that

\[\left|s_n - x\right| < \frac{1}{2^n} < \frac{1}{2^N} < \epsilon.\]

Hence

\[x = \lim_{n \rightarrow \infty} s_n = \sum_{n=1}^{\infty} \frac{a_n}{2^n}. \quad \text{Q.E.D.}\]


Lemma \(\PageIndex{3}\)

With the notation as above, given any integer \(N\) there exists an integer \(n > N\) such that \(a_n = 0\).


Proof. If \(a_n = 1\) for \(n = 1, 2, 3, \ldots\), then

\[x = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1,\]

contradicting the assumption that \(0 \leq x < 1\). Now suppose there exists an integer \(N\) such that \(a_N = 0\) but \(a_n = 1\) for every \(n > N\). Then

\[x = s_N + \sum_{n=N+1}^{\infty} \frac{1}{2^n} = s_{N-1} + \sum_{n=N+1}^{\infty} \frac{1}{2^n} = s_{N-1} + \frac{1}{2^N},\]

implying that \(a_N = 1\), and thus contradicting the assumption that \(a_N = 0\). \(\quad\) Q.E.D.

Combining the previous lemma with the previous proposition yields the following result.

Proposition \(\PageIndex{4}\)

With the notation as above, \(x = .a_1 a_2 a_3 a_4 \ldots\).

The next theorem now follows from Exercise 3.1.1 and the previous proposition.

Theorem \(\PageIndex{5}\)

Every real number \(0 \leq x < 1\) has a unique binary representation.

It is just like counting in decimal except we reach 10 much sooner.

Well, how do we count in decimal?

0      Start at 0
       Count 1, 2, 3, 4, 5, 6, 7, 8, and then...
9      This is the last digit in decimal
10     So we start back at 0 again, but add 1 on the left

The same thing is done in binary:

0             Start at 0
•      1      Then 1
••     10     Now start back at 0 again, but add 1 on the left
•••    11     1 more
••••          But NOW what?

What happens in decimal?

99     When we run out of digits, we...
100    ...start back at 0 again, but add 1 on the left

And that is what we do in binary:

0                 Start at 0
•         1       Then 1
••        10      Start back at 0 again, but add 1 on the left
•••       11
••••      100     Start back at 0 again, and add one to the number on the left...
                  ...but that number is already at 1, so it also goes back to 0...
                  ...and 1 is added to the next position on the left
•••••     101
••••••    110
•••••••   111
••••••••  1000    Start back at 0 again (for all 3 digits), add 1 on the left
••••••••• 1001    And so on!


Chapter 2. Binary and Number Representation

Binary is a base-2 number system that uses two mutually exclusive states to represent information. A binary number is made up of elements called bits where each bit can be in one of the two possible states. Generally, we represent them with the numerals 1 and 0 . We also talk about them being true and false. Electrically, the two states might be represented by high and low voltages or some form of switch turned on or off.

We build binary numbers the same way we build numbers in our traditional base 10 system. However, instead of a one's column, a 10's column, a 100's column (and so on) we have a one's column, a two's column, a four's column, an eight's column, and so on, as illustrated below.

For example, to represent the number 203 in base 10, we place a 3 in the 1's column, a 0 in the 10's column and a 2 in the 100's column; that is,

2 × 10^2 + 0 × 10^1 + 3 × 10^0 = 200 + 0 + 3 = 203.

To represent the same number in binary, we place 1's in the 128's, 64's, 8's, 2's and 1's columns, giving 11001011. That equates to 2^7 + 2^6 + 2^3 + 2^1 + 2^0 = 128 + 64 + 8 + 2 + 1 = 203.
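The place-value idea works the same way in any base. As an illustrative sketch (not from the original chapter), this Python helper breaks a number into its digit/place-value pairs:

```python
def place_values(n, base):
    """Return (digit, base**position) pairs, most significant first."""
    digits = []
    pos = 0
    while n > 0:
        n, d = divmod(n, base)
        digits.append((d, base ** pos))
        pos += 1
    return digits[::-1]

print(place_values(203, 10))  # → [(2, 100), (0, 10), (3, 1)]
print(place_values(203, 2))   # the bits of 11001011 against 128, 64, ..., 1
```

Summing `digit * place` over either list recovers 203, which is exactly the expansion written out above.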

The basis of computing

You may be wondering how a simple number is the basis of all the amazing things a computer can do. Believe it or not, it is! The processor in your computer has a complex but ultimately limited set of instructions it can perform on values, such as addition, multiplication, etc. Essentially, each of these instructions is assigned a number so that an entire program (add this to that, multiply by that, divide by this and so on) can be represented by just a stream of numbers. For example, if the processor knows operation 2 is addition, then 252 could mean "add 5 and 2 and store the output somewhere". The reality is of course much more complicated (see Chapter 3, Computer Architecture) but, in a nutshell, this is what a computer is.

In the days of punch-cards, one could see with the naked eye the ones and zeros that make up the program stream by looking at the holes present on the card. Of course this quickly moved to storage via the polarity of small magnetic particles (tapes, disks), and on to the point today that we can carry unimaginable amounts of data in our pocket.

Translating these numbers to something useful to humans is what makes a computer so useful. For example, screens are made up of millions of discrete pixels, each too small for the human eye to distinguish but combining to make a complete image. Generally each pixel has a certain red, green and blue component that makes up its display color. Of course, these values can be represented by numbers, which of course can be represented by binary! Thus any image can be broken up into millions of individual dots, each dot represented by a tuple of three values for the red, green and blue components of the pixel. Given a long string of such numbers, formatted correctly, the video hardware in your computer can convert those numbers to electrical signals to turn on and off individual pixels and hence display an image.

As you read on, we will build up the entire modern computing environment from this basic building block from the bottom-up if you will!

Bits and Bytes

As discussed above, we can essentially choose to represent anything by a number, which can be converted to binary and operated on by the computer. For example, to represent all the letters of the alphabet we would need at least enough different combinations to represent all the lower case letters, the upper case letters, numbers and punctuation, plus a few extras. Adding this up means we need probably around 80 different combinations.

If we have two bits, we can represent four possible unique combinations (00 01 10 11). If we have three bits, we can represent 8 different combinations. In general, with n bits we can represent 2^n unique combinations.

8 bits gives us 2^8 = 256 unique representations, more than enough for our alphabet combinations. We call a group of 8 bits a byte. Guess how big a C char variable is? One byte.


Given that a byte can represent any of the values 0 through 255, anyone could arbitrarily make up a mapping between characters and numbers. For example, a video card manufacturer could decide that 1 represents A , so when value 1 is sent to the video card it displays a capital 'A' on the screen. A printer manufacturer might decide for some obscure reason that 1 represented a lower-case 'z', meaning that complex conversions would be required to display and print the same thing.

To avoid this happening, the American Standard Code for Information Interchange or ASCII was invented. This is a 7-bit code, meaning there are 2^7 or 128 available codes.
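Python exposes this agreed mapping directly through `ord` and `chr`; a quick sketch:

```python
# Every character has a fixed ASCII code; all of them fit in 7 bits (< 128).
for ch in "Az0":
    print(ch, ord(ch), format(ord(ch), '07b'))

print(chr(65))  # → A  (the mapping works both ways)
```

Running this shows, for example, that 'A' is code 65, so sending the value 65 to any ASCII-speaking device means a capital 'A'.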

The range of codes is divided up into two major parts: the non-printable and the printable. Printable characters are things like letters (upper and lower case), numbers and punctuation. Non-printable codes are for control, and do things like make a carriage-return, ring the terminal bell or represent the special NULL code which means nothing at all.

128 unique codes are sufficient for American English, but become very restrictive when one wants to represent characters common in other languages, especially Asian languages which can have many thousands of unique characters.

To alleviate this, modern systems are moving away from ASCII to Unicode , which can use up to 4 bytes to represent a character, giving much more room!


ASCII, being only a 7-bit code, leaves one bit of the byte spare. This can be used to implement parity which is a simple form of error checking. Consider a computer using punch-cards for input, where a hole represents 1 and no hole represents 0. Any inadvertent covering of a hole will cause an incorrect value to be read, causing undefined behaviour.

Parity allows a simple check of the bits of a byte to ensure they were read correctly. We can implement either odd or even parity by using the extra bit as a parity bit .

In odd parity, the parity bit is chosen so that the total number of 1's, including the parity bit itself, is odd. Even parity is the opposite: the parity bit is chosen so that the total number of 1's is even.

In this way, the flipping of one bit will cause a parity error, which can be detected.
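As an illustrative sketch (not from the original chapter), here is odd/even parity over 7 data bits, and a demonstration that a single flipped bit is detected:

```python
def parity_bit(data7, odd=True):
    """Choose the spare 8th bit so the total count of 1's,
    including the parity bit itself, is odd (or even)."""
    ones = bin(data7 & 0x7F).count("1")
    if odd:
        return 0 if ones % 2 == 1 else 1
    return 1 if ones % 2 == 1 else 0   # even parity

data = 0b0000111                  # three 1's: already odd
assert parity_bit(data) == 0      # so odd parity adds a 0

corrupted = data ^ 0b0000100      # a punched hole gets covered: one bit flips
total = bin(corrupted).count("1") + parity_bit(data)
assert total % 2 == 0             # total is no longer odd: error detected
```

Note that parity only detects an odd number of flipped bits; two flips cancel out, which is why stronger error-correcting codes exist.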


16, 32 and 64 bit computers

Numbers do not have to fit into single bytes; hopefully your bank balance in dollars needs more range than can fit into one byte! Modern architectures are at least 32-bit computers. This means they work with 4 bytes at a time when processing and reading or writing to memory. We refer to 4 bytes as a word; this is analogous to language, where letters (bits) make up words in a sentence, except in computing every word has the same size! The size of a C int variable is typically 32 bits. Modern architectures are 64 bits, which doubles the size the processor works with, to 8 bytes.

Kilo, Mega and Giga Bytes

Computers deal with a lot of bytes; that's what makes them so powerful! We need a way to talk about large numbers of bytes, and a natural way is to use the "International System of Units" (SI) prefixes as used in most other scientific areas. For example, kilo refers to 10^3 or 1000 units, as in a kilogram has 1000 grams.

1000 is a nice round number in base 10, but in binary it is 1111101000, which is not a particularly "round" number. However, 1024 (or 2^10) is a round number in binary (10000000000) and happens to be quite close to the base 10 value of "kilo" (1024 as opposed to 1000). Thus 1024 bytes naturally became known as a kilobyte. The next SI unit is "mega" for 10^6, and the prefixes continue upwards by factors of 10^3 (corresponding to the usual grouping of three digits when writing large numbers). As it happens, 2^20 is again close to the SI base 10 definition for mega (1,048,576 as opposed to 1,000,000). Increasing the base 2 units by factors of 2^10 remains functionally close to the corresponding SI base 10 values, although each increasing factor diverges slightly further from the base SI meaning. Thus the SI base 10 units are "close enough" and have become the terms commonly used for base 2 values as well.

Name         Base 2 Factor   Bytes                       Close Base 10 Factor   Base 10 bytes
1 Kilobyte   2^10            1,024                       10^3                   1,000
1 Megabyte   2^20            1,048,576                   10^6                   1,000,000
1 Gigabyte   2^30            1,073,741,824               10^9                   1,000,000,000
1 Terabyte   2^40            1,099,511,627,776           10^12                  1,000,000,000,000
1 Petabyte   2^50            1,125,899,906,842,624       10^15                  1,000,000,000,000,000
1 Exabyte    2^60            1,152,921,504,606,846,976   10^18                  1,000,000,000,000,000,000

SI units compared in base 2 and base 10

It can be very useful to commit the base 2 factors to memory as an aid to quickly correlate the relationship between number-of-bits and "human" sizes. For example, we can quickly calculate that a 32-bit computer can address up to four gigabytes of memory by noting that 2^32 = 2^2 × 2^30, that is, 4 × 2^30 bytes. A 64-bit value could similarly address up to 16 exabytes (2^64 = 2^4 × 2^60); you might be interested in working out just how big a number this is. To get a feel for how big that number is, calculate how long it would take to count to 2^64 if you incremented once per second.
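A quick sketch checking that exponent arithmetic, and answering the counting question posed above:

```python
# Splitting the exponent: 2^32 = 2^2 * 2^30 (4 gigabytes),
# 2^64 = 2^4 * 2^60 (16 exabytes).
assert 2 ** 32 == 4 * 2 ** 30
assert 2 ** 64 == 16 * 2 ** 60

# Counting to 2^64 at one increment per second:
seconds_per_year = 60 * 60 * 24 * 365
print(2 ** 64 // seconds_per_year, "years")   # roughly 5.8 * 10^11 years
```

That is hundreds of billions of years, which gives some feel for the size of a 64-bit address space.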

Kilo, Mega and Giga Bits

Apart from the confusion related to the overloading of SI units between binary and base 10, capacities will often be quoted in terms of bits rather than bytes. Generally this happens when talking about networking or storage devices; you may have noticed that your ADSL connection is described as something like 1500 kilobits/second. The calculation is simple: multiply by 1000 (for the kilo), divide by 8 to get bytes and then by 1024 to get kilobytes (so 1500 kilobits/s = 183 kilobytes per second).
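The conversion described above, as a small sketch (the function name is our own):

```python
def line_rate_to_kilobytes(kilobits_per_s):
    """kilobits/s -> kilobytes/s: x1000 for the SI kilo,
    /8 for bits to bytes, /1024 for binary kilobytes."""
    return kilobits_per_s * 1000 / 8 / 1024

print(round(line_rate_to_kilobytes(1500)))  # → 183
```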

The SI standardisation body has recognised these dual uses and has specified unique prefixes for binary usage. Under the standard, 1024 bytes is a kibibyte, short for kilo binary byte (shortened to KiB). The other binary prefixes are formed similarly (mebibyte, MiB, for example). Tradition largely prevents use of these terms, but you may see them in some literature.


The easiest way to convert between bases is to use a computer; after all, that's what they're good at! However, it is often useful to know how to do conversions by hand.

The easiest hand method to convert between bases is repeated division. Repeatedly divide the number (and then each successive quotient) by the target base, until the quotient is zero, making note of the remainder at each step. Then write the remainders in reverse, starting at the bottom and appending to the right each time. An example should illustrate; since we are converting to binary, we use a base of 2.

             Quotient   Remainder
203 ÷ 2 =    101        1
101 ÷ 2 =    50         1
50 ÷ 2 =     25         0
25 ÷ 2 =     12         1
12 ÷ 2 =     6          0
6 ÷ 2 =      3          0
3 ÷ 2 =      1          1
1 ÷ 2 =      0          1
Reading from the bottom and appending to the right each time gives 11001011 , which we saw from the previous example was 203.
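The repeated-division procedure translates directly into code; this sketch mirrors the table above:

```python
def to_binary(n):
    """Convert a nonnegative integer to a binary string by repeated
    division, reading the remainders in reverse."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder at each step
        remainders.append(str(r))
    return "".join(reversed(remainders))

print(to_binary(203))  # → 11001011
```

Replacing the 2 with another base (up to 10, after which letter digits are needed) gives the general algorithm.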

Boolean Operations

George Boole was a mathematician who discovered a whole area of mathematics called Boolean Algebra. Whilst he made his discoveries in the mid 1800's, his mathematics forms the foundation of all computer science. Boolean algebra is a wide-ranging topic; we present here only the bare minimum to get you started.

Boolean operations simply take a particular input and produce a particular output following a rule. For example, the simplest boolean operation, not, simply inverts the value of its input operand. Other operations usually take two inputs and produce a single output.

The fundamental Boolean operations used in computer science are easy to remember and listed below. We represent them with truth tables, which simply show all possible inputs and outputs. The term true simply reflects 1 in binary.

Usually represented by !, not simply inverts the value, so 0 becomes 1 and 1 becomes 0.
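The chapter's remaining truth tables did not survive extraction; as a sketch, here they are generated with Python's bitwise operators, covering not together with the other usual fundamental operations (and, or, xor):

```python
# Single-bit truth tables (1 = true, 0 = false).
print("a | not a")
for a in (0, 1):
    print(a, "|", a ^ 1)

print("a b | and or xor")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", a & b, " ", a | b, " ", a ^ b)
```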

Infinite Words

Exercise 7

Presburger arithmetic is the set of first-order formulas which are true in the structure (ℕ, +) formed by the nonnegative integers with addition.

Formulas of WMF2(S) can be interpreted as first-order formulas on ℕ by interpreting a second order variable X as the set of positions of the 1's in the binary representation of an integer x.

Find a formula φ (X, Y, Z) ∈ WMF2(S) to express the fact that the integers x, y, z associated with the variables X, Y, Z satisfy the relation x + y = z.

Deduce that every formula of Presburger arithmetic can be translated into a formula of MF2(<).

Deduce that Presburger arithmetic is decidable.


Binary is a base-2 number system that uses two states, 0 and 1, to represent a number; we can also think of these as the true and false states. A binary number is built the same way as we build a normal decimal number.

For example, the decimal number 45 can be represented as 4 × 10^1 + 5 × 10^0 = 40 + 5 = 45.

In binary, 45 is represented as 101101. Just as there are powers of 10 in a decimal number, there are powers of 2 in a binary number, with the digits read from left to right. Hence 45, which is 101101 in binary, can be represented as:

1 × 2^5 + 0 × 2^4 + 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 32 + 8 + 4 + 1 = 45.

Sign and Magnitude representation –
There are many ways of representing negative integers. One of them is sign-magnitude, which uses one bit to indicate the sign. Mathematical numbers are generally made up of a sign and a value: the sign indicates whether the number is positive (+) or negative (–), while the value indicates the size of the number.

For example 13, +256 or -574. Presenting numbers in this way is called sign-magnitude representation, since the leftmost digit can be used to indicate the sign and the remaining digits the magnitude or value of the number.

Sign-magnitude notation is the simplest and one of the most common methods of representing positive and negative numbers. Negative numbers are obtained simply by changing the sign of the corresponding positive number, for example +2 and -2, +10 and -10, etc. Similarly, in binary, prefixing a number with a 1 makes it negative while a 0 makes it positive.

For example, with 6 magnitude bits and the leftmost digit as the sign, 0101101 represents +45 and 1101101 represents -45.

But a problem with the sign-magnitude method is that two different bit patterns can have the same value. For example, as signed 4-bit binary numbers, +0 and -0 would be 0000 and 1000 respectively. So with this method there are two representations for zero, a positive zero 0000 and also a negative zero 1000, which can cause big complications for computers and digital systems.
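An illustrative sketch of sign-magnitude encoding (the helper is our own), showing both the ±45 example and the double-zero problem:

```python
def sign_magnitude(magnitude, negative, bits=8):
    """Encode one sign bit followed by (bits - 1) magnitude bits."""
    assert 0 <= magnitude < 2 ** (bits - 1)
    return ((1 if negative else 0) << (bits - 1)) | magnitude

print(format(sign_magnitude(45, False, 7), '07b'))  # → 0101101  (+45)
print(format(sign_magnitude(45, True, 7), '07b'))   # → 1101101  (-45)

# The double-zero problem: two distinct patterns, both meaning zero.
print(format(sign_magnitude(0, False, 4), '04b'))   # → 0000  (+0)
print(format(sign_magnitude(0, True, 4), '04b'))    # → 1000  (-0)
```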

One’s complement –
One’s complement is a method which can be used to represent negative binary numbers in a signed binary number system. In one’s complement, positive numbers remain unchanged.

Negative numbers, however, are represented by taking the one’s complement of the unsigned positive number. Since positive numbers always start with a 0, the complement will always start with a 1 to indicate a negative number.

The one’s complement of a negative binary number is the complement of its positive counterpart. To take the one’s complement of a binary number, subtract it from a number of the same length made entirely of 1’s; equivalently, simply invert each digit of the number. Thus the one’s complement of 1 is 0 and vice versa.

For example, the one’s complement of 1010100 is 0101011.

Two’s complement –
In two’s complement representation, a negative number is the 2’s complement of its positive counterpart. If the subtraction of two numbers is X – Y, it can be computed as X + (2’s complement of Y).

The two’s complement of a number is its one’s complement plus 1.

The main advantage of two’s complement over one’s complement is that there is no double-zero problem, and it is a lot easier to generate the two’s complement of a signed binary number. Arithmetic operations are also relatively easy to perform when numbers are represented in two’s complement format.

For example, to represent -27:
27 in binary is 00011011; its one’s complement is 11100100; adding 1 gives 11100101, which is -27 in two’s complement.
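The invert-then-add-1 recipe as a sketch (the helper is our own); note it agrees with arithmetic modulo 2^8:

```python
def twos_complement(value, bits=8):
    """Positive values are unchanged; negative values are the one's
    complement of the magnitude, plus 1."""
    mask = (1 << bits) - 1
    if value >= 0:
        return value & mask
    ones = ~(-value) & mask       # invert every bit of the magnitude
    return (ones + 1) & mask

print(format(twos_complement(27), '08b'))    # → 00011011
print(format(twos_complement(-27), '08b'))   # → 11100101
assert twos_complement(-27) == (-27) % 256   # same as value mod 2^8
```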

In floating point representation (32-bit IEEE 754), a number is encoded in three fields:

  1. The plus/minus sign is represented by one bit, the highest-weighted bit (furthest to the left).
  2. The exponent is encoded using the 8 bits (11 bits in 64-bit representation) immediately after the sign.
  3. The mantissa (the bits after the decimal point) fills the remaining 23 bits (52 bits in 64-bit representation).



Before going through this section, make sure you understand the representation of numbers in binary. You can read the page on numeric representation to review.

This document will introduce you to the methods for adding and multiplying binary numbers. In each section, the topic is developed by first considering the binary representation of unsigned numbers (which are the easiest to understand), followed by signed numbers and finishing with fractions (the hardest to understand). For the most part we will deal with 4-bit numbers, though the ideas carry over to any width.

Adding unsigned numbers

Adding unsigned numbers in binary is quite easy. Recall that with 4 bit numbers we can represent numbers from 0 to 15. Addition is done exactly like adding decimal numbers, except that you have only two digits (0 and 1). The only number facts to remember are that

0+0 = 0, with no carry,
1+0 = 1, with no carry,
0+1 = 1, with no carry,
1+1 = 0, and you carry a 1.

so to add the numbers 6₁₀ = 0110₂ and 7₁₀ = 0111₂ (answer = 13₁₀ = 1101₂) we can write out the calculation (the result of any carry is shown along the top row, in italics).

The only difficulty adding unsigned numbers occurs when you add numbers that are too large. Consider 13+5.

The result is a 5-bit number. So the carry bit from adding the two most significant bits represents a result that overflows (because the sum is too big to be represented with the same number of bits as the two addends).
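A sketch of 4-bit unsigned addition (the helper is our own), showing both the 6 + 7 example and the 13 + 5 overflow:

```python
def add_unsigned(a, b, bits=4):
    """Add two unsigned numbers; a carry out of the top bit signals overflow."""
    total = a + b
    mask = (1 << bits) - 1
    return total & mask, total > mask   # (truncated result, overflowed?)

print(add_unsigned(6, 7))    # → (13, False)   0110 + 0111 = 1101
print(add_unsigned(13, 5))   # → (2, True)     the 5th bit of 10010 is lost
```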

Adding signed numbers

Adding signed numbers is not significantly different from adding unsigned numbers. Recall that signed 4 bit numbers (2's complement) can represent numbers between -8 and 7. To see how this addition works, consider three examples.

In this case the extra carry from the most significant bit has no meaning. With signed numbers there are two ways to get an overflow -- if the result is greater than 7, or less than -8. Let's consider these occurrences now.

Obviously both of these results are incorrect; in this case the overflow is harder to detect. But you can see that if two numbers with the same sign (either positive or negative) are added and the result has the opposite sign, an overflow has occurred.

Typically DSPs, including the 320C5x, can deal with this problem by using saturation arithmetic, in which results that overflow are replaced by either the most positive number (in this case 7) if the overflow is in the positive direction, or by the most negative number (-8) for overflows in the negative direction.
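A sketch of a saturating 4-bit signed add (the helper is our own; the range check used here is arithmetically equivalent to the same-sign/opposite-sign overflow rule described above):

```python
def add_signed_saturating(a, b, bits=4):
    """Two's complement add that clamps to the most positive or most
    negative representable value instead of wrapping around."""
    hi = 2 ** (bits - 1) - 1    # 7 for 4 bits
    lo = -2 ** (bits - 1)       # -8 for 4 bits
    total = a + b
    if total > hi:
        return hi               # positive overflow: saturate at 7
    if total < lo:
        return lo               # negative overflow: saturate at -8
    return total

print(add_signed_saturating(5, 4))     # 9 overflows → 7
print(add_signed_saturating(-6, -5))   # -11 overflows → -8
print(add_signed_saturating(3, -7))    # → -4, no overflow
```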

There is no further difficulty in adding two signed fractions; only the interpretation of the results differs. For instance, consider the addition of two Q3 numbers shown (compare to the example with two 4-bit signed numbers, above).

If you look carefully at these examples, you'll see that the binary representation and calculations are the same as before, only the decimal representation has changed. This is very useful because it means we can use the same circuitry for addition, regardless of the interpretation of the results.

Even the generation of overflows resulting in error conditions remains unchanged (again compare with above)

Multiplying unsigned numbers

Multiplying unsigned numbers in binary is quite easy. Recall that with 4-bit numbers we can represent numbers from 0 to 15. Multiplication can be performed exactly as with decimal numbers, except that you have only two digits (0 and 1). The only number facts to remember are that 0*1=0, and 1*1=1 (this is the same as a logical "and").

Multiplication is different from addition in that multiplication of an n-bit number by an m-bit number results in an (n+m)-bit number. Let's take a look at an example where n=m=4 and the result is 8 bits.

In this case the result was 7 bits, which can be extended to 8 bits by adding a 0 at the left. When multiplying larger numbers, the result will be 8 bits, with the leftmost bit set to 1, as shown.

As long as there are n+m bits for the result, there is no chance of overflow. For two 4-bit multiplicands, the largest possible product is 15*15=225, which can be represented in 8 bits.
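A quick sketch verifying the width claim for the worst case:

```python
# The largest n-bit by m-bit product always fits in n + m bits,
# since (2**n - 1) * (2**m - 1) < 2**(n + m).
n = m = 4
largest = (2 ** n - 1) * (2 ** m - 1)     # 15 * 15 = 225
print(largest, format(largest, '08b'))    # 225 is 11100001: 8 bits suffice
assert largest < 2 ** (n + m)
```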

Multiplying signed numbers

There are many methods to multiply 2's complement numbers. The easiest is to simply find the magnitude of the two multiplicands, multiply these together, and then use the original sign bits to determine the sign of the result. If the multiplicands had the same sign, the result must be positive; if they had different signs, the result is negative. Multiplication by zero is a special case (the result is always zero, with no sign bit).

Multiplying fractions

As you might expect, the multiplication of fractions can be done in the same way as the multiplication of signed numbers. The magnitudes of the two multiplicands are multiplied, and the sign of the result is determined by the signs of the two multiplicands.

There are a couple of complications involved in using fractions. Although it is almost impossible to get an overflow (since the multiplicands and results usually have magnitude less than one), it is possible to get an overflow when multiplying -1 × -1, since the result of this is +1, which cannot be represented by fixed point numbers.

The other difficulty is that multiplying two Q3 numbers obviously results in a Q6 number, but we have 8 bits in our result (since we are multiplying two 4-bit numbers). This means that we end up with two bits to the left of the decimal point. These are sign extended, so that for positive numbers they are both zero, and for negative numbers they are both one. Consider the case of multiplying -1/2 by 1/2 (using the method from the textbook):

This obviously presents a difficulty if we wanted to store the number in a Q3 result: if we took just the 4 leftmost bits, we would end up with two sign bits. So what we'd like to do is shift the number to the left by one and then take the 4 leftmost bits. This leaves us with 1110, which is equal to -1/4, as expected.

On a 16-bit DSP, two Q15 numbers are multiplied to get a Q30 number with two sign bits. On the 320C50 there are two ways to accomplish the shift: use the p-scaler immediately after the multiplier, or the postscaler after the accumulator, to shift the result to the left by one.

2’s Complement to Decimal

To convert a 2’s complement representation into a readable number format, follow these steps:

  1. If the number is positive (the most significant bit is 0), convert the number into decimal format normally.
  2. If the number is negative, flip all the digits in the number. In other words, change all the 1’s to 0’s and all the 0’s to 1’s.
  3. Add 1 to the flipped number.
  4. Convert the result to decimal as in regular binary representation, and attach a minus sign.

For example, let’s convert 10110110.

  1. Since the most significant bit is 1, we need to flip the digits in the number.
  2. 10110110 → 01001001.
  3. Add 1 to the number. 01001001+1=01001010.
  4. 01001010 (binary) = 74 (decimal).

So, 10110110 (2’s complement)=-74 (decimal).
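The four steps above, as a sketch (the helper is our own):

```python
def from_twos_complement(pattern, bits=8):
    """Decode a two's complement bit pattern by the flip-add-1 recipe."""
    if pattern & (1 << (bits - 1)) == 0:      # step 1: sign bit clear
        return pattern                        # plain binary
    flipped = ~pattern & ((1 << bits) - 1)    # step 2: invert every bit
    return -(flipped + 1)                     # steps 3-4: add 1, negate

print(from_twos_complement(0b10110110))  # → -74
print(from_twos_complement(0b01001010))  # → 74
```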

Mathematically Unique?

The number 42 has a range of interesting mathematical properties. Here are some of them:

The number is the sum of the first three odd powers of two, that is, 2^1 + 2^3 + 2^5 = 42. It is an element in the sequence a(n), which is the sum of n odd powers of 2 for n > 0. The sequence corresponds to entry A020988 in The On-Line Encyclopedia of Integer Sequences (OEIS), created by mathematician Neil Sloane. In base 2, the nth element may be specified by repeating 10 n times (1010...10). The formula for this sequence is a(n) = (2/3)(4^n – 1). As n increases, the density of these numbers tends toward zero, which means that the numbers belonging to this list, including 42, are exceptionally rare.

The number 42 is the sum of the first two nonzero integer powers of six, that is, 6^1 + 6^2 = 42. The sequence b(n), which is the sum of the powers of six, corresponds to entry A105281 in OEIS. It is defined by the formulas b(0) = 0, b(n) = 6b(n – 1) + 6. The density of these numbers also tends toward zero at infinity.

Forty-two is a Catalan number. These numbers are extremely rare, much more so than prime numbers: only 14 of the former are lower than one billion. Catalan numbers were first mentioned, under another name, by Swiss mathematician Leonhard Euler, who wanted to know how many different ways an n-sided convex polygon could be cut into triangles by connecting vertices with line segments. The beginning of the sequence (A000108 in OEIS) is 1, 1, 2, 5, 14, 42, 132. The nth element of the sequence is given by the formula c(n) = (2n)! / (n!(n + 1)!). And like the two preceding sequences, the density of these numbers is null at infinity.

Catalan numbers are named after Franco-Belgian mathematician Eugène Charles Catalan (1814–1894), who discovered that c(n) is the number of ways to arrange n pairs of parentheses according to the usual rules for writing them: a parenthesis is never closed before it has been opened, and one can only close it when all the parentheses that were subsequently opened are themselves closed.

For example, c(3) = 5 because the possible arrangements of three pairs of parentheses are ((())), (()()), (())(), ()(()) and ()()().

Forty-two is also a "practical" number, which means that any integer between 1 and 42 is the sum of a subset of its distinct divisors. The first practical numbers are 1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40, 42, 48, 54, 56, 60, 64, 66 and 72 (sequence A005153 in OEIS). No simple known formula provides the nth element of this sequence.
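The formulas quoted above are easy to check directly; a small sketch (not part of the article):

```python
from math import factorial

def catalan(n):
    """Catalan numbers via the closed form c(n) = (2n)! / (n!(n+1)!)."""
    return factorial(2 * n) // (factorial(n) * factorial(n + 1))

print([catalan(n) for n in range(7)])  # → [1, 1, 2, 5, 14, 42, 132]

# 42 as sums of powers:
assert 2 ** 1 + 2 ** 3 + 2 ** 5 == 42       # first three odd powers of two
assert 6 ** 1 + 6 ** 2 == 42                # first two nonzero powers of six
assert (2 * (4 ** 3 - 1)) // 3 == 42        # a(3) = (2/3)(4^3 - 1)
```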

All this is amusing, but it would be wrong to say that 42 is really anything special mathematically. The numbers 41 and 43, for example, are also elements of many sequences. You can explore the properties of various numbers on Wikipedia.

What makes a number particularly interesting or uninteresting is a question that mathematician and psychologist Nicolas Gauvrit, computational natural scientist Hector Zenil and I have studied, starting with an analysis of the sequences in the OEIS. Aside from a theoretical connection to Kolmogorov complexity (which defines the complexity of a number by the length of its minimal description), we have shown that the numbers contained in Sloane's encyclopedia point to a shared mathematical culture and, consequently, that OEIS is based as much on human preferences as pure mathematical objectivity.

The SU(5) grand unified theory appeared in a 1974 paper by Howard Georgi and Sheldon Glashow [12]. It was the first grand unified theory, and is still considered the prototypical example. As such, there are many accounts of it in the physics literature. The textbooks by Ross [31] and Mohapatra [21] both devote an entire chapter to the SU(5) theory, and a lucid summary can be found in a review article by Witten [39], which also discusses the supersymmetric generalization of this theory.

In this section, we will limit our attention to the nonsupersymmetric version of the SU(5) theory, which is how it was originally proposed. Unfortunately, this theory has since been ruled out by experiment: it predicts that protons will decay faster than the current lower bound on proton lifetime [26]. Nevertheless, because of its prototypical status and intrinsic interest, we simply must talk about the SU(5) theory.

  • Is the particle isospin up?
  • Is it isospin down?
  • Is it red?
  • Is it green?
  • Is it blue?

We can flesh out this scheme by demanding that the operation of taking antiparticles correspond to switching 0's for 1's in the code: the antiparticle of any particle carries the complementary 5-bit string. This is cute: it means that being antidown is the same as being up, while being antired is the same as being both green and blue.

Furthermore, in this scheme all antileptons are `black' (the particles with no color, ending in 000), while leptons are `white' (the particles with every color, ending in 111). Quarks have exactly one color, and antiquarks have exactly two.

We are slowly working our way to the SU(5) theory. Next let us bring Hilbert spaces into the game. We can take the basic properties of being up, down, red, green or blue, and treat them as basis vectors for C^5. Let us call these vectors u, d, r, g and b. The exterior algebra ΛC^5 has a basis given by wedge products of these 5 vectors. This exterior algebra is 2^5 = 32-dimensional, and it has a basis labelled by 5-bit strings. For example, the bit string 01101 corresponds to the basis vector d ∧ r ∧ b, while the bit string 00000 corresponds to the unit 1 ∈ Λ^0 C^5.
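To make this bookkeeping concrete, here is a small Python sketch; the labels u, d, r, g, b and the caret notation for wedge products are illustrative choices of ours. It enumerates the 32 basis elements of the exterior algebra as 5-bit strings and groups them by grade:

```python
from itertools import product
from collections import Counter

LABELS = ["u", "d", "r", "g", "b"]  # isospin up, isospin down, red, green, blue

def wedge_basis():
    """All 32 basis elements of the exterior algebra of C^5, as
    (5-bit code, wedge product, grade) triples."""
    out = []
    for bits in product("01", repeat=5):
        code = "".join(bits)
        factors = [l for l, b in zip(LABELS, bits) if b == "1"]
        out.append((code, "^".join(factors) or "1", code.count("1")))
    return out

basis = wedge_basis()
assert len(basis) == 32  # dim of the exterior algebra of C^5 is 2^5

# Grade p has C(5, p) basis elements: 1, 5, 10, 10, 5, 1.
grades = Counter(grade for _, _, grade in basis)
assert grades == Counter({0: 1, 1: 5, 2: 10, 3: 10, 4: 5, 5: 1})
```

The grade of a basis element is just the number of 1's in its code, which is why each exterior power picks out the codes with a fixed number of 1's.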

Next we bring in representation theory. The group SU(5) has an obvious representation on C^5. And since the operation of taking exterior algebras is functorial, this group also has a representation on ΛC^5. In the SU(5) grand unified theory, this is the representation we use to describe a single generation of fermions and their antiparticles.

Just by our wording, though, we are picking out a splitting of C^5 into C^2 ⊕ C^3: the isospin and color parts, respectively. Choosing such a splitting of C^5 picks out a subgroup of SU(5), the set of all group elements that preserve this splitting. This subgroup consists of block diagonal matrices with a 2 × 2 block and a 3 × 3 block, both unitary, such that the determinant of the whole matrix is 1. Let us denote this subgroup as S(U(2) × U(3)).

Now for the miracle: the subgroup S(U(2) × U(3)) is isomorphic to the Standard Model gauge group (at least modulo a finite subgroup). And, when we restrict the representation of SU(5) on ΛC^5 to S(U(2) × U(3)), we get the Standard Model representation!

There are two great things about this. The first is that it gives a concise and mathematically elegant description of the Standard Model representation. The second is that the seemingly ad hoc hypercharges in the Standard Model must be exactly what they are for this description to work. So, physicists say the SU(5) theory explains the fractional charges of quarks: the fact that quark charges come in units one-third the size of the electron charge pops right out of this theory.

With this foretaste of the fruits the SU(5) theory will bear, let us get to work and sow the seeds. Our work will have two parts. First we need to check that

G_SM / N ≅ S(U(2) × U(3)),

where G_SM is the Standard Model gauge group and N is some finite normal subgroup that acts trivially on the Standard Model representation F ⊕ F*. Then we need to check that indeed

ΛC^5 ≅ F ⊕ F*

as representations of S(U(2) × U(3)).

First, the group isomorphism. Naively, one might seek to build the SU(5) theory by including G_SM as a subgroup of SU(5). Can this be done? Clearly, we can include SU(2) × SU(3) as block diagonal matrices in SU(5):

(g, h) ↦ [ g, 0 ; 0, h ],

but this is not enough, because G_SM also has that pesky factor of U(1), related to the hypercharge. How can we fit that in?

The first clue is that elements of U(1) must commute with the elements of SU(2) × SU(3). But the only elements of SU(5) that commute with everybody in the subgroup SU(2) × SU(3) are diagonal, since they must separately commute with SU(2) ⊆ SU(5) and SU(3) ⊆ SU(5), and the only matrices doing so are diagonal. Moreover, they must be scalars on each block. So, they have to look like this:

[ α·1_2, 0 ; 0, β·1_3 ],

where α·1_2 stands for the 2 × 2 identity matrix times the complex number α, and similarly for β·1_3 in the 3 × 3 block. For the above matrix to lie in SU(5), it must have determinant 1, so α^2 β^3 = 1. This condition cuts the group of such matrices from U(1) × U(1) down to U(1). In fact, all such matrices are of the form

[ α^3·1_2, 0 ; 0, α^{-2}·1_3 ],

where α runs over U(1).

So if we throw in elements of this form, do we get S(U(2) × U(3))? More precisely, does this map:

φ: G_SM → SU(5), φ(α, g, h) = [ α^3 g, 0 ; 0, α^{-2} h ],

give an isomorphism between G_SM and S(U(2) × U(3))? It is clearly a homomorphism. It clearly maps G_SM into the subgroup S(U(2) × U(3)). And it is easy to check that it maps onto this subgroup. But is it one-to-one?

The answer is no: the map φ has a kernel. The kernel is the set of all elements of the form

(α, α^{-3}·1, α^2·1),

and this is Z_6, because the scalar matrices α^{-3}·1 and α^2·1 live in SU(2) and SU(3), respectively, if and only if α is a sixth root of unity. So, all we get is

G_SM / Z_6 ≅ S(U(2) × U(3)).
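The kernel computation can be double-checked numerically. In the sketch below (plain Python; the helper name `in_kernel` is ours), the two block factors of φ cancel automatically for these scalar triples, so the only real condition is that the scalar matrices have determinant 1, i.e. lie in SU(2) and SU(3):

```python
import cmath

def in_kernel(alpha):
    """Does (alpha, alpha^-3 * 1, alpha^2 * 1) lie in the kernel of
    phi(alpha, g, h) = diag(alpha^3 g, alpha^-2 h)?  The blocks of phi
    are automatically the identity for these scalars, so the condition
    is just that the scalar matrices lie in SU(2) and SU(3)."""
    det_su2 = alpha ** -6   # det(alpha^-3 * I_2) = alpha^-6
    det_su3 = alpha ** 6    # det(alpha^2  * I_3) = alpha^6
    return abs(det_su2 - 1) < 1e-9 and abs(det_su3 - 1) < 1e-9

# Every sixth root of unity lies in the kernel...
assert all(in_kernel(cmath.exp(2j * cmath.pi * k / 6)) for k in range(6))

# ...and a generic phase does not, so the kernel is exactly Z_6.
assert not in_kernel(cmath.exp(2j * cmath.pi / 5))
```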

This sets up a nerve-racking test that the SU(5) theory must pass for it to have any chance of success. After all, not all representations of G_SM factor through G_SM/Z_6, but all those coming from representations of S(U(2) × U(3)) must do so. A representation of G_SM will factor through G_SM/Z_6 only if the subgroup Z_6 acts trivially.

In short: the SU(5) GUT is doomed unless Z_6 acts trivially on every fermion. (And antifermion, but that amounts to the same thing.) For this to be true, some nontrivial relations between hypercharge, isospin and color must hold.

For example, consider the left-handed electron, which lives in the representation C_{-1} ⊗ C^2 ⊗ C, where C_Y denotes the one-dimensional representation of U(1) on which α acts as multiplication by α^{3Y}. For any sixth root of unity α, we need the kernel element (α, α^{-3}·1, α^2·1) to act trivially on this representation. Let us see how it acts:

  • α acts on C_{-1} as multiplication by α^{-3}
  • α^{-3}·1 acts on C^2 as multiplication by α^{-3}
  • α^2·1 acts trivially on C.

The total action is multiplication by α^{-3}·α^{-3} = α^{-6} = 1: indeed trivial, precisely because α is a sixth root of unity.

Or, consider the right-handed down quark, which lives in the representation C_{-2/3} ⊗ C ⊗ C^3:

  • α acts on C_{-2/3} as multiplication by α^{-2}
  • α^{-3}·1 acts trivially on the trivial representation C
  • α^2·1 acts on C^3 as multiplication by α^2.

Again, the total action, multiplication by α^{-2}·α^2 = 1, is trivial.

For SU(5) to work, though, Z_6 has to act trivially on every fermion. There are 16 cases to check, and it is an awful lot to demand that hypercharge, the most erratic part of the Standard Model representation, satisfy 16 relations.

Or is it? In general, for a fermion with hypercharge Y, there are four distinct possibilities:

Hypercharge relations
Case | Representation | Relation
Nontrivial SU(2), nontrivial SU(3) | C_Y ⊗ C^2 ⊗ C^3 | α^{3Y}·α^{-3}·α^2 = 1
Nontrivial SU(2), trivial SU(3) | C_Y ⊗ C^2 ⊗ C | α^{3Y}·α^{-3} = 1
Trivial SU(2), nontrivial SU(3) | C_Y ⊗ C ⊗ C^3 | α^{3Y}·α^2 = 1
Trivial SU(2), trivial SU(3) | C_Y ⊗ C ⊗ C | α^{3Y} = 1

Or, in terms of the kinds of particles concerned:

Hypercharge relations
Case | Representation | Relation
Left-handed quark | C_Y ⊗ C^2 ⊗ C^3 | α^{3Y - 1} = 1
Left-handed lepton | C_Y ⊗ C^2 ⊗ C | α^{3Y - 3} = 1
Right-handed quark | C_Y ⊗ C ⊗ C^3 | α^{3Y + 2} = 1
Right-handed lepton | C_Y ⊗ C ⊗ C | α^{3Y} = 1

But α is a sixth root of unity, so all this really says is that those exponents are multiples of six:

Hypercharge relations
Case | Relation
Left-handed quark | 3Y - 1 ≡ 0 mod 6
Left-handed lepton | 3Y - 3 ≡ 0 mod 6
Right-handed quark | 3Y + 2 ≡ 0 mod 6
Right-handed lepton | 3Y ≡ 0 mod 6

Dividing by 3, these conditions say the hypercharges must be:

Hypercharge relations
Case | Hypercharge
Left-handed quark | Y = even integer + 1/3
Left-handed lepton | Y = odd integer
Right-handed quark | Y = odd integer + 1/3
Right-handed lepton | Y = even integer
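As a sanity check on these parity relations, here is a Python sketch that runs over one generation of Standard Model fermions with their textbook hypercharges (the two booleans record whether SU(2) and SU(3) act nontrivially) and verifies that the exponent by which a kernel element acts is always a multiple of six:

```python
from fractions import Fraction as F

# (hypercharge Y, nontrivial SU(2)?, nontrivial SU(3)?)
fermions = {
    "left-handed quark":     (F(1, 3),  True,  True),
    "left-handed lepton":    (F(-1),    True,  False),
    "right-handed up":       (F(4, 3),  False, True),
    "right-handed down":     (F(-2, 3), False, True),
    "right-handed electron": (F(-2),    False, False),
    "right-handed neutrino": (F(0),     False, False),
}

for name, (Y, isospin, color) in fermions.items():
    # The kernel element (alpha, alpha^-3, alpha^2) multiplies this fermion
    # by alpha raised to the following exponent; triviality of the action
    # means the exponent is a multiple of six.
    exponent = 3 * Y - (3 if isospin else 0) + (2 if color else 0)
    assert exponent % 6 == 0, name
```

Antifermions carry the opposite exponent, so they pass the test automatically once the fermions do.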

Now it is easy to check that this indeed holds for every fermion in the Standard Model. SU(5) passes the test, not despite the bizarre pattern followed by the hypercharges, but because of it!

By this analysis, we have shown that Z_6 acts trivially on the Standard Model rep, so it is contained in the kernel of this rep. It is better than just a containment, though: Z_6 is the entire kernel. Because of this, we could say that G_SM/Z_6 is the `true' gauge group of the Standard Model. And because we now know that

G_SM / Z_6 ≅ S(U(2) × U(3)) ⊆ SU(5),

it is almost as though this Z_6 kernel, lurking inside G_SM this whole time, was a cryptic hint to try the SU(5) theory.

Of course, we still need to find a representation of SU(5) that extends the Standard Model representation. Luckily, there is a very beautiful choice that works: the exterior algebra ΛC^5. Since SU(5) acts on C^5, it has a representation on ΛC^5. Our next goal is to check that pulling back this representation from SU(5) to G_SM using φ, we obtain the Standard Model representation F ⊕ F*.

As we do this, we will see another fruit of the SU(5) theory ripen. The triviality of Z_6 already imposed some structure on the hypercharges, as outlined in Table 3 above. As we fit the fermions into ΛC^5, we will see this is no accident: the hypercharges have to be exactly what they are for the SU(5) theory to work.

To get started, our strategy will be to use the fact that, being representations of compact Lie groups, both the fermions and the exterior algebra are completely reducible, so they can be written as a direct sum of irreps. We will then match up these irreps one at a time.

The fermions are already written as a direct sum of irreps, so we need to work on ΛC^5. Now, any element g ∈ SU(5) acts as an automorphism of the exterior algebra ΛC^5:

g(v ∧ w) = gv ∧ gw,

where v, w ∈ ΛC^5. Since we know how SU(5) acts on the vectors in C^5, and these generate ΛC^5, this rule is enough to tell us how SU(5) acts on all of ΛC^5. This action respects grades in

ΛC^5 = Λ^0 C^5 ⊕ Λ^1 C^5 ⊕ Λ^2 C^5 ⊕ Λ^3 C^5 ⊕ Λ^4 C^5 ⊕ Λ^5 C^5,

so each exterior power Λ^p C^5 is a subrepresentation. In fact, these are all irreducible, so this is how ΛC^5 breaks up into irreps of SU(5). Upon restriction to G_SM, some of these summands break apart further into irreps of G_SM.

Let us see how this works, starting with the easiest cases. Λ^0 C^5 and Λ^5 C^5 are both trivial irreps of G_SM. There are two trivial irreps in the Standard Model representation, namely ⟨ν_R⟩ and its dual ⟨ν̄_L⟩, where we use angle brackets to stand for the Hilbert space spanned by a vector or collection of vectors. So, we could select Λ^0 C^5 ≅ ⟨ν_R⟩ and Λ^5 C^5 ≅ ⟨ν̄_L⟩, or vice versa. At this juncture, we have no reason to prefer one choice to the other.

Next let us chew on the next piece: the first exterior power, Λ^1 C^5. We have

Λ^1 C^5 ≅ C^5

as vector spaces, and as representations of SU(5). But what is C^5 as a representation of G_SM? The Standard Model gauge group acts on C^5 via the map

φ(α, g, h) = [ α^3 g, 0 ; 0, α^{-2} h ].

Clearly, this action preserves the splitting of C^5 into the `isospin part' C^2 and the `color part' C^3:

C^5 ≅ C^2 ⊕ C^3.

  • The C^2 part transforms in the hypercharge 1 rep of U(1): that is, α acts as multiplication by α^3. It transforms according to the fundamental representation of SU(2), and the trivial representation of SU(3). This seems to describe a left-handed lepton with hypercharge 1.
  • The C^3 part transforms in the hypercharge -2/3 rep of U(1), since α acts as multiplication by α^{-2}. It transforms trivially under SU(2), and according to the fundamental representation of SU(3). This seems to describe a right-handed quark with hypercharge -2/3.

In short, as a rep of G_SM, we have

C^5 ≅ (C_1 ⊗ C^2 ⊗ C) ⊕ (C_{-2/3} ⊗ C ⊗ C^3),

and we have already guessed which particles these correspond to. The first summand looks like a left-handed lepton with hypercharge 1, while the second is a right-handed quark with hypercharge -2/3.

Now this is problematic, because another glance at Table 1 reveals that there is no left-handed lepton with hypercharge 1. The only particles with hypercharge 1 are the right-handed antileptons, which span the representation

C_1 ⊗ (C^2)* ⊗ C.

But wait! SU(2) is unique among the SU(n)'s in that its fundamental rep is self-dual:

C^2 ≅ (C^2)*.

This saves the day. As a rep of G_SM, C^5 becomes

(C_1 ⊗ (C^2)* ⊗ C) ⊕ (C_{-2/3} ⊗ C ⊗ C^3),

so it describes the right-handed antileptons with hypercharge 1 and the right-handed quarks with hypercharge -2/3. In other words:

Λ^1 C^5 ≅ ⟨e⁺_R, ν̄_R⟩ ⊕ ⟨d_R⟩,

where we have omitted the color label on d_R to save space. Take heed of this: ⟨d_R⟩ is short for the vector space ⟨d_R^r, d_R^g, d_R^b⟩, and it is three-dimensional.

Now we can use our knowledge of the first exterior power to compute the second exterior power, by applying the formula

Λ^2(V ⊕ W) ≅ Λ^2 V ⊕ (V ⊗ W) ⊕ Λ^2 W.

So, let us calculate! As reps of G_SM we have

Λ^2 C^5 ≅ Λ^2 C^2 ⊕ (C^2 ⊗ C^3) ⊕ Λ^2 C^3.

Consider the first summand, Λ^2 C^2. As a rep of SU(2) this space is just C, the one-dimensional trivial rep. As a rep of SU(3) it is also trivial. But as a rep of U(1), it is nontrivial. Inside it we are juxtaposing two particles with hypercharge 1. Hypercharges add, just like charges, so the composite particle, which consists of one particle and the other, has hypercharge 2. So, as a representation of the Standard Model gauge group we have

Λ^2 C^2 ≅ C_2 ⊗ C ⊗ C.

Glancing at Table 1 we see this matches the left-handed positron, ⟨e⁺_L⟩. Note that the hypercharges are becoming useful now, since they uniquely identify all the fermion and antifermion representations, except for neutrinos.

Next consider the second summand:

C^2 ⊗ C^3,

where the C^2 part carries hypercharge 1 and the C^3 part carries hypercharge -2/3. Again, we can add hypercharges, so this representation of G_SM is isomorphic to

C_{1/3} ⊗ C^2 ⊗ C^3.

This is the rep for left-handed quarks of hypercharge 1/3, which from Table 1 is:

⟨u_L, d_L⟩,

where once again we have suppressed the label for colors.

Finally, the third summand in Λ^2 C^5 is

Λ^2 C^3.

This has hypercharge -4/3 and no isospin, so by Table 1 it had better correspond to the left-handed antiup antiquark, which lives in the representation

C_{-4/3} ⊗ C ⊗ (C^3)*.

Let us check. The rep Λ^2 C^3 is trivial under SU(2). As a rep of U(1) it is the same as C_{-4/3}. But because SU(3) preserves the volume form on C^3, taking Hodge duals gives an isomorphism

Λ^2 C^3 ≅ (C^3)*,

which is just what we need to show

Λ^2 C^3 ≅ C_{-4/3} ⊗ C ⊗ (C^3)* ≅ ⟨ū_L⟩.

In summary, the following pieces of the Standard Model rep sit inside ΛC^5:

Λ^0 C^5 ≅ C (a trivial rep: ⟨ν_R⟩ or ⟨ν̄_L⟩)
Λ^1 C^5 ≅ ⟨e⁺_R, ν̄_R⟩ ⊕ ⟨d_R⟩
Λ^2 C^5 ≅ ⟨e⁺_L⟩ ⊕ ⟨u_L, d_L⟩ ⊕ ⟨ū_L⟩

We are almost done. Because SU(5) preserves the canonical volume form on C^5, taking Hodge duals gives an isomorphism

Λ^p C^5 ≅ (Λ^{5-p} C^5)*

as representations of SU(5). Thus, given our results so far for grades 0, 1 and 2, we automatically get the antiparticles of these particles upon taking Hodge duals: the grade 3, 4 and 5 pieces are the duals of the grade 2, 1 and 0 pieces.

So ΛC^5 ≅ F ⊕ F*, as desired.

How does all this look in terms of the promised binary code? Remember, a 5-bit code is short for a wedge product of the basis vectors u, d, r, g, b. For example, 01101 corresponds to d ∧ r ∧ b. And now that we have found an isomorphism ΛC^5 ≅ F ⊕ F*, each of these wedge products corresponds to a fermion or antifermion. How does this correspondence go, exactly?

First consider the grade-one part Λ^1 C^5 ≅ C^5. This has basis vectors called u, d, r, g and b. We have seen that the subspace ⟨u, d⟩, spanned by u and d, corresponds to

⟨e⁺_R, ν̄_R⟩.

The top particle here has isospin up, while the bottom one has isospin down, so we must have e⁺_R = u and ν̄_R = d. Likewise, the subspace ⟨r, g, b⟩, spanned by r, g and b, corresponds to

⟨d_R⟩.

Thus we must have d_R^c = c, where c runs over the colors r, g, b.

Next consider the grade-two part:

Λ^2 C^5 ≅ Λ^2 C^2 ⊕ (C^2 ⊗ C^3) ⊕ Λ^2 C^3.

Here e⁺_L lives in the one-dimensional Λ^2 C^2 rep of G_SM, which is spanned by the vector u ∧ d. Thus, e⁺_L = u ∧ d. The left-handed quarks live in the C^2 ⊗ C^3 rep of G_SM, which is spanned by vectors that consist of one isospin and one color. We must have u_L^c = u ∧ c and d_L^c = d ∧ c, where again c runs over all the colors r, g, b. And now for the tricky part: the ū_L quarks live in the Λ^2 C^3 rep of G_SM, but this is isomorphic to the fundamental representation of SU(3) on (C^3)*, which is spanned by antired, antigreen and antiblue:

r̄ = g ∧ b,  ḡ = b ∧ r,  b̄ = r ∧ g.

These vectors form the basis of Λ^2 C^3 that is dual to r, g and b under Hodge duality in C^3. So we must have

ū_L^c̄ = c̄,

where c̄ can be any anticolor. Take heed of the fact that c̄ is grade 2, even though it may look like grade 1.

To work out the other grades, note that Hodge duality in C^5 corresponds to switching 0's and 1's in our binary code. For instance, the dual of 01101 is 10010: or, written in terms of basis vectors, the dual of d ∧ r ∧ b is u ∧ g. Thus, given the binary codes for the first few exterior powers, Hodge duality gives us the codes for the rest:

Table 4: The Binary Code for SU(5)
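The translation between 5-bit codes and wedge products, and the bit-flip rule for Hodge duality, can be sketched in a few lines of Python (the labels and the helper names `to_wedge` and `hodge_dual` are ours):

```python
LABELS = ["u", "d", "r", "g", "b"]

def to_wedge(code):
    """Translate a 5-bit code into the corresponding wedge product."""
    return "^".join(l for l, b in zip(LABELS, code) if b == "1") or "1"

def hodge_dual(code):
    """Hodge duality in C^5: flip every bit of the code."""
    return "".join("1" if b == "0" else "0" for b in code)

# The example from the text: the dual of 01101 is 10010,
# i.e. the dual of d^r^b is u^g.
assert to_wedge("01101") == "d^r^b"
assert hodge_dual("01101") == "10010"
assert to_wedge(hodge_dual("01101")) == "u^g"

# Taking antiparticles is the same bit flip, exchanging for example
# the 'white' leptons (codes ending in 111) with the 'black'
# antileptons (codes ending in 000).
```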

Now we can see a good, though not decisive, reason to choose Λ^0 C^5 ≅ ⟨ν̄_L⟩ rather than ⟨ν_R⟩: with this choice, and not the other, we get left-handed particles in the even grades and right-handed particles in the odd grades. We choose to have this pattern now, because we will need it later.

Table 4 defines a linear isomorphism f: F ⊕ F* → ΛC^5 in terms of the basis vectors, so the equations in this table are a bit of an exaggeration. When we write, say, e⁺_L = u ∧ d, we really mean f(e⁺_L) = u ∧ d. This map f is an isomorphism between representations of G_SM. It tells us how these representations are the `same'.

More precisely, we mean these representations are the same when we identify U(F ⊕ F*) with U(ΛC^5) using the isomorphism induced by f. In general, we can think of a unitary representation as a Lie group homomorphism

ρ: G → U(V),

where V is a finite-dimensional Hilbert space and U(V) is the Lie group of unitary operators on V. In this section we have been comparing two unitary representations: an ugly, complicated representation of G_SM:

G_SM → U(F ⊕ F*),

and a nice, beautiful representation of SU(5):

SU(5) → U(ΛC^5).

Since we also have the homomorphism φ: G_SM → SU(5), it is natural to wonder if there is a fourth homomorphism

U(F ⊕ F*) → U(ΛC^5)

completing a commutative square. Indeed, we just showed this! We have seen there exists a unitary operator from the Standard Model rep to ΛC^5, say

f: F ⊕ F* → ΛC^5,

such that the induced isomorphism of the unitary groups,

U(f): U(F ⊕ F*) → U(ΛC^5),

makes the above square commute. So, let us summarize this result as a theorem:

Theorem. The following square commutes:

      G_SM ────φ────→ SU(5)
        │               │
        ↓               ↓
  U(F ⊕ F*) ──U(f)──→ U(ΛC^5)

where the left vertical arrow is the Standard Model representation and the right one is the natural representation of SU(5) on the exterior algebra of C^5.

We can use the happy face of five-divisibility!

Start in state $0$. Follow the appropriate arrows as you read the digits of your binary number from left to right. If you end up in state $0$ again, your number is divisible by $5$ (and if not, the state number gives you the remainder).

How does it work? Well, if we're in state $k$, it means the digits we have read so far form a number $n$ with $n \equiv k \pmod 5$. If we then read another digit $b$, we effectively move to the new number $n' = 2n + b$. Thus we need to move to state $(2k + b) \bmod 5$, which is exactly what the arrows in the diagram do. So if we end up in state $0$ at the end, we know there is no remainder, and the number we read is divisible by 5.

The state diagram above is just this logic displayed graphically. You could also write it as a table:

\begin{array}{cc|cc}
k & b & 2k + b & (2k + b) \bmod 5 \\
\hline
0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 0 & 2 & 2 \\
1 & 1 & 3 & 3 \\
2 & 0 & 4 & 4 \\
2 & 1 & 5 & 0 \\
3 & 0 & 6 & 1 \\
3 & 1 & 7 & 2 \\
4 & 0 & 8 & 3 \\
4 & 1 & 9 & 4 \\
\hline
\end{array}

This also makes for a nice mental rule. You start with the number $0$ in your head and look at the digits from left to right. For each digit you multiply the number in your head by 2 and add the digit you just read; if the result reaches five or more, you subtract five. If you end up with $0$, the number is divisible by 5.
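The whole automaton fits comfortably in a few lines of Python; this sketch tracks the state exactly as the mental rule describes:

```python
def remainder_mod5(binary):
    """Run the divisibility automaton on a string of binary digits:
    state k means the digits read so far form a number n with
    n = k (mod 5); reading digit b moves state k to (2k + b) mod 5."""
    state = 0
    for digit in binary:
        state = (2 * state + int(digit)) % 5
    return state

def divisible_by_5(binary):
    return remainder_mod5(binary) == 0

assert divisible_by_5(bin(35)[2:])            # 35 is divisible by 5
assert remainder_mod5(bin(37)[2:]) == 37 % 5  # the final state is the remainder
```

The same recipe gives a divisibility automaton for any modulus m: keep the update rule (2k + b) mod m and accept in state 0.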
