These topics are at the very foundation of computer engineering, programming, basic logic, and the mathematics of expressions. They even go by the highfalutin name of "Propositional Calculus," although it has nothing to do with the usual meaning of the term "Calculus," other than being spelled the same.

I will start with some definitions of terms. You have probably heard a lot of references to "digital" and its converse, "analog." We will start there. "Digital" literally means "numbers" as in digits. That is, you could count it on your fingers or "digits."

One implication of this is that there is no "half-way." You are either holding up three fingers or four. No "three and one-half."

Analog, on the other hand, is a continuous value. A 1.5 volt battery may be 1.50 or 1.40 volts. Or it may be 1.45 volts, half way between. Or 1.4725 volts, or even 1.400001 volts. Although I'm using numbers to indicate the analog value, it is obvious I can use as many digits as I like.

Perhaps a clearer example is the difference between the common speedometer and the odometer. The speedometer shows the car's speed, and you can observe literally any speed that is possible. You could drive exactly 50 mph or exactly 45 mph or any value in between.

The odometer, on the other hand, shows specific digits. Your mileage may be 62,000.4 miles or it may be 62,000.5 miles. In old-fashioned cars the odometer was really analog, and you could see a mileage where the last digit was half way between these two, the final wheel sitting between the 4 and the 5. But modern cars use a display that just shows the number.

Another example is meters that measure things like voltage. You can get such a meter with a dial and read it more like a clock, an analog clock. There are also digital voltmeters, and they are read like a digital clock. Now that I think about it, the two kinds of clocks are good examples too.

One example of digital circuits is "binary." While digital can mean any number of digits, and ten is a common choice, binary means "two." The "bi" implies two digits just like the "bi" in "bicycle." You can count in decimal, going from zero to nine. You then carry one to the next place and start over with zero. In binary, there are only two digits: zero and one. So you count from zero to one, and then carry one to the next column and start over with zero.

## Binary Numbers

Decimal | Binary |
---|---|
0 | 0000 |
1 | 0001 |
2 | 0010 |
3 | 0011 |
4 | 0100 |
5 | 0101 |
6 | 0110 |
7 | 0111 |
8 | 1000 |
9 | 1001 |
10 | 1010 |

You notice that four binary digits are required to match the decimal numbers. There are a few more four-digit binary values left over, however. You can count from zero to fifteen with four binary digits, and a number system with sixteen digit values is called "hexadecimal," which just means sixteen.
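The counting in the table above is easy to check in Python. This is just an illustrative sketch using the built-in `format` function with the `"04b"` specifier, which produces a binary string padded to four digits.

```python
# Print the decimal numbers 0 through 10 alongside their
# four-digit binary representations, as in the table above.
for n in range(11):
    print(n, format(n, "04b"))  # "04b" pads the binary string to 4 digits
```

Running this reproduces both columns of the table, and extending the range to 16 shows the remaining four-digit values up to 1111.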

I mentioned the term "Boolean" earlier. That is named after George Boole, a mathematician, philosopher, and logician who lived in the nineteenth century. He had the quaint idea that he could develop a mathematics of logic and eliminate political debate. You would write the question in his mathematical terms and then simply solve for the answer. That would eliminate the need for political parties and voting.

Although the concept is very appealing, it just didn't work very well in practice, and Boole and his algebra didn't go far. Fast forward about one hundred years and you have engineers starting to design digital computers. They quickly adopted Boole's and others' work to describe and analyze the special circuits involved.

Boolean math is also binary. That is, there are two allowed values: TRUE and FALSE. Obviously you could easily map these two values into a binary number system. It doesn't really matter how you map it. For consistency, I'll claim that FALSE = 0 and TRUE = 1.

These facts are really at the heart of all modern computer circuits. Some early computers didn't use binary circuits, but all modern computers have adopted binary because of its simplicity and reliability. Although the actual voltages used in the computers will vary somewhat (they are analog voltages), we simply assign one range of voltages to equal zero and another range to equal one.

For example, we might decide in a circuit that uses 5 volts, that 0 - 2.5 volts is a binary zero and 2.5 volts to 5.0 volts is binary one. If we keep the "low" voltage under one volt and the "high" voltage over 4 volts, it will be simple to make all the circuits operate as expected, and there will be no confusing the two expected values.

In fact, that's a new set of terms. In computer circuits, we may speak of low and high. Again, I'll assume that low = 0 and high = 1 or, equivalently, low = FALSE and high = TRUE. Some electronic circuit designs map the voltages opposite of this, but we don't need to worry about those details and we'll keep it simple.

Now we're ready to perform some logic. We will just examine Boolean Algebra using TRUE and FALSE. I'll use the letters "T" and "F" for the obvious purpose and we'll let the electronics engineers translate it to voltages, but now you see how that could be easily done and why this is essential in computer circuit design.

Boole created various logical operations. These operations combine logical expressions and evaluate things as TRUE or FALSE. These operations form the basic building blocks of computer and other logic circuits that are in everything from your computer to your TV to your coffee pot to your nuclear power plant—assuming you have your own nuclear power plant.

# AND

Let's start with the "and" function. This function combines two or more inputs that are either TRUE or FALSE and produces a single output of TRUE/FALSE. The basic rule of an AND gate or circuit or function is that the output is TRUE if and only if all the inputs are TRUE.

One of the best methods to describe this type of digital logic is something called a
"Truth Table." This is a table or list of all possible combinations of inputs that shows the output for
each combination. The number of rows in a Truth Table depends on the number of inputs. With
two inputs, there will be four rows. The formula is 2^{n}, where n equals the
number of inputs. With two inputs, n=2, and the equation is 2^{2}=4.
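The 2^{n} rule is easy to see by generating every combination of inputs programmatically. Here is a sketch using Python's standard `itertools` library:

```python
from itertools import product

# Every combination of n TRUE/FALSE inputs; there are 2**n of them.
for n in (2, 3):
    rows = list(product([False, True], repeat=n))
    print(n, "inputs ->", len(rows), "rows")
```

Each `rows` list is exactly the left-hand side of a Truth Table: every possible assignment of TRUE and FALSE to the inputs.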

## Truth Table for AND

A | B | AND |
---|---|---|
F | F | F |
F | T | F |
T | F | F |
T | T | T |

If there were three inputs, then the formula for the number of rows is 2^{3}=8.
But, no matter how many inputs, the output is FALSE unless all of the inputs are TRUE.
When I first started designing digital circuits at IBM, they called the "AND gate" an
"ALL gate," since all the inputs had to be one (or high) for the output to be a one
(or high). The symbol used on IBM circuit drawings was a square box with a capital
letter "A" in it. You could interpret the letter as meaning "AND" or "ALL." As was often
the case back in those days, IBM did things a little different than everyone else.

There is a standard logic symbol for AND used today, but I'll skip that for now.

It is possible to compare the logical "AND" gate to an arithmetic operation. You just change your definition slightly. Zero is still FALSE, but TRUE becomes "any number that is not zero." You use the standard mathematical operation of multiplication. To keep from confusing the multiplication symbol with the letter "x," we use a dot, "•," to indicate multiplication.

So the AND function of two variables, "A" and "B," can be expressed as A • B. You will notice that if they are both zero, then the multiplication produces zero, or FALSE. Further, if one of the inputs is zero and the other is one, you still get zero, or FALSE. Only if all the inputs are one will the multiplication produce one, or TRUE.

In most programming languages, when you work with numbers they are called integers or real or float or some other name to imply the computer should treat them as numbers. When the numbers stand for TRUE or FALSE, they are called "Booleans" after George Boole. A rule such as zero = FALSE and NOT zero = TRUE is built into the "Boolean functions."
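Python is a good example of this: `bool` is its own type, and the zero = FALSE, nonzero = TRUE rule is built right into the `bool()` conversion.

```python
# Python's bool() applies the zero = FALSE, nonzero = TRUE rule.
print(bool(0))      # False
print(bool(7))      # True
print(bool(-3.5))   # True: any nonzero value counts as "truthy"
```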

# OR

OR is similar to AND except the output is TRUE if ANY of the inputs is TRUE. That includes both or "all" of the inputs being TRUE.

In IBM logic, just as the AND gate or circuit was marked "A" for "all" (or "and"), the OR gate was marked with "O" for "one" (or "or"). That is, it was TRUE if one (or more) inputs were TRUE. This is the Truth Table.

## Truth Table for OR

A | B | OR |
---|---|---|
F | F | F |
F | T | T |
T | F | T |
T | T | T |

You may notice that the two Truth Tables are similar. The table for AND shows only one row that is TRUE, while the Truth Table for OR shows only one row that is FALSE.

As I stated before, AND and OR can be extended to three or four or even a dozen inputs. The rules remain the same. In all cases, AND will be TRUE in only one row, the row that has all TRUE inputs, and OR will be FALSE in only one row, the one with all FALSE inputs. That apparent symmetry will become significant in my discussion below.

In Boolean Algebra, the symbol for OR is the addition symbol or "+." Boolean OR acts like addition, but remember the definition of FALSE and TRUE. FALSE = zero and TRUE equals NOT zero.

So 0 + 0 = 0 or FALSE, but 0 + 1 = 1 + 0 = 1 or TRUE. Also, 1 + 1 = 2 or 10 in Binary. Since two is not equal to zero, two also equals TRUE.
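That addition trick can be sketched in Python too. The function name is my own; it treats any nonzero sum as TRUE, which handles the 1 + 1 = 2 case correctly.

```python
def or_via_add(a: int, b: int) -> bool:
    """OR via addition: any nonzero sum counts as TRUE."""
    return (a + b) != 0

# Reproduces the OR truth table with 0 = FALSE, 1 = TRUE.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, or_via_add(a, b))
```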

Now we can test out some regular rules from normal Algebra and see if they work on Boolean Algebra. Remember, in Arithmetic and Algebra "multiplication distributes over addition." Here is an example in Algebra:

A • ( B + C ) = ( A • B ) + ( A • C )

It works the same way in Boolean Algebra. Notice I used the mathematical symbols for AND and OR, but I can just use the terms too. AND distributes over OR.

A AND ( B OR C ) = ( A AND B ) OR ( A AND C )

Interestingly, this is where Boolean Algebra parts ways with regular Algebra. In ordinary arithmetic, addition does not distribute over multiplication, but in Boolean Algebra OR does distribute over AND: A OR ( B AND C ) = ( A OR B ) AND ( A OR C ). You can verify that with a Truth Table.
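The distributive law of AND over OR can actually be proven by brute force, since there are only eight combinations of three inputs. A short Python sketch:

```python
from itertools import product

# Check A AND (B OR C) == (A AND B) OR (A AND C) for all 8 input rows.
for a, b, c in product([False, True], repeat=3):
    assert (a and (b or c)) == ((a and b) or (a and c))
print("AND distributes over OR: verified for all 8 combinations")
```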

You can write and solve a lot more complex equations in Boolean Algebra, and digital circuit designers do just that. They also use special tables called Karnaugh maps as well as Truth Tables. The life of a digital circuit designer is just full of tricks!

# NOT

There is one more important function to add to AND and OR. It is the NOT function. We also call it "negation" and "inversion." Unlike the two functions we've discussed so far which have two or more inputs, the NOT function has only one input. It reverses the input. If you put TRUE into the NOT function you get out FALSE and if you put FALSE into the NOT function you get out TRUE.

NOT is implemented in an electronic circuit as an "inverter." It inverts the input voltage. If you feed in a "low" voltage, you get out a "high," and if you feed in a "high," you get out a "low." It is a very simple circuit indeed and can be built with a single transistor.

This is the Truth Table of NOT.

## Truth Table for NOT

A | NOT |
---|---|
F | T |
T | F |

The mathematical symbol for NOT is either a line drawn over the top of the variable or expression, or the NOT symbol placed in front of the variable or expression. The NOT symbol is "¬," kind of a MINUS sign with a hook, and it is written in front of an expression much the way a MINUS sign is in arithmetic. On an EBCDIC (Extended Binary Coded Decimal Interchange Code) computer keyboard, the kind used on IBM mainframes, the NOT symbol is located at the top left of the keyboard. On ASCII (American Standard Code for Information Interchange) keyboards, like those on an IBM PC or a Mac, that position holds the tilde, "~."

Mathematically, in Boolean Algebra, the NOT is a little tricky. It isn't "minus" the quantity or "change the quantity sign." It is "invert" the quantity. If it is TRUE, make it FALSE, and if it is FALSE, then make it TRUE.

# NAND and NOR

Adding the NOT function to the output of AND and OR changes all the TRUE outputs to FALSE and vice versa. NOT AND, or NAND, is TRUE unless all the inputs are TRUE. NOT OR, or NOR, is TRUE if all the inputs are FALSE and goes FALSE if any input is TRUE. Here are the Truth Tables for each.

## Truth Table for NAND

A | B | NAND |
---|---|---|
F | F | T |
F | T | T |
T | F | T |
T | T | F |

## Truth Table for NOR

A | B | NOR |
---|---|---|
F | F | T |
F | T | F |
T | F | F |
T | T | F |

NAND and NOR can be built by taking an AND or OR gate, respectively, and adding an inverter or NOT to the output. In digital circuit design, NAND and NOR are more common than the simpler AND and OR because they are actually more flexible in design usage. It is simpler to build complex digital logic circuits with these negated-output gates than with regular gates. For one thing, you can use a NOR gate as an inverter if you just tie all the inputs together. Then if the input is TRUE, all the inputs are TRUE, so the output is FALSE. The opposite occurs if the input is FALSE.
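The gate constructions above, NAND and NOR as negated AND and OR, and a NOR gate with its inputs tied together acting as an inverter, can be sketched in Python (the function names are mine, chosen for clarity):

```python
def nand(a: bool, b: bool) -> bool:
    """NAND: an AND gate followed by an inverter."""
    return not (a and b)

def nor(a: bool, b: bool) -> bool:
    """NOR: an OR gate followed by an inverter."""
    return not (a or b)

def inverter(a: bool) -> bool:
    """Tie both NOR inputs together to get a NOT gate."""
    return nor(a, a)

print(nand(True, True))   # only the all-TRUE row gives FALSE
print(inverter(True))     # behaves exactly like NOT
```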

# De Morgan's Law

That brings us to De Morgan's Law. Augustus De Morgan was a contemporary of George Boole and he added this important relationship to Boolean Algebra. Recall I stated earlier that there was a sort of symmetry between AND and OR, but AND had only one row that was TRUE and OR had only one row that was FALSE. Well, now we know that NOT changes these two around. De Morgan noted that, with addition of NOT in the right place, you can change AND to OR and vice versa.

Before going on, I'm going to introduce two more symbols. The use of the multiplication dot and adding plus for AND and OR is common in Boolean Algebra, but recall Boole originally intended the mathematics for use in logic and discussion.

When these techniques are used for understanding complex logic, we often use these alternative mathematical symbols for AND and OR. The math symbol for AND is "∧" and the symbol for OR is "∨." I remember them by thinking of the "∧" like the letter "A," and the other symbol is the "opposite" of AND, which I think of as OR.

Although it is tempting to use the Ampersand symbol (&) for AND, it is not very common. In fact, about the only time I've seen it done is in the table I copied off the Internet to use at the top of this blog article.

The "¬" is actually the math symbol. In Boolean Algebra the NOT function is usually shown as a line drawn over the symbols to be inverted.

So many ways to say the same thing. Each is like a different language. But at least there aren't many words in this language.

Using the mathematical notation, De Morgan's laws show that these expressions are identical.

¬( P ∨ Q ) = ¬P ∧ ¬Q

Alternatively

¬( P ∧ Q ) = ¬P ∨ ¬Q

That's pretty slick. However, any mathematical theorem, which is what a "law" really is, must be proven. There are about a dozen ways to prove it using Boolean Algebra or set theory or other methods. Normally you can't prove a theorem by example, but in this case you can. Since Truth Tables represent every possible combination, you can actually demonstrate and prove De Morgan's Laws using Truth Tables.

I'll leave that proof as an exercise to the reader. Try it. It is very simple and you'll gain valuable skills using Truth Tables.
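If you would rather let the computer grind through the rows, the same truth-table proof can be done by brute force in Python, one assertion per law:

```python
from itertools import product

# De Morgan's laws checked over every combination of P and Q.
for p, q in product([False, True], repeat=2):
    assert (not (p or q)) == ((not p) and (not q))  # NOT(P OR Q) = NOT P AND NOT Q
    assert (not (p and q)) == ((not p) or (not q))  # NOT(P AND Q) = NOT P OR NOT Q
print("Both of De Morgan's laws hold for all four rows")
```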

# XOR

This is not the end of the story. There is one more important function to discuss. It is not an elemental function since you can construct it out of AND, OR, and NOT. But it is so useful that integrated circuits were created specifically to perform this function.

Recall in the definition of OR, if either (or both) inputs are TRUE the output is TRUE. That includes if both inputs are TRUE. This is called the "inclusive or."

XOR is the "exclusive or" and it is TRUE if either of the inputs is TRUE, **but not
both**. Here is the Truth Table.

## Truth Table for XOR

A | B | XOR |
---|---|---|
F | F | F |
F | T | T |
T | F | T |
T | T | F |

The concept can be extended to more than two inputs. In that case the output is TRUE if an odd number of inputs are TRUE. That alone makes XOR a very useful function in designing digital circuits and writing programs. XOR is a powerful tool for testing whether a set of bits has an even or odd number of ones, its "parity," and XOR is used in a lot of cryptographic functions to change digital values in most interesting ways.

To demonstrate this even/odd behavior of XOR, here's a Truth Table for a three-input XOR function or gate.

## Truth Table for three-input XOR

A | B | C | XOR |
---|---|---|---|
F | F | F | F |
F | F | T | T |
F | T | F | T |
F | T | T | F |
T | F | F | T |
T | F | T | F |
T | T | F | F |
T | T | T | T |

Of course, to finish the set, there is an XNOR function with an output exactly the opposite of the above Truth Table. It is TRUE when there is an even number of TRUE inputs. (We count the all-FALSE row, with zero TRUE inputs, as an even number of TRUE inputs, since zero is an even number.)
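A multi-input XOR is just a running parity calculation, and XNOR is its negation. Here is a sketch (the function names are mine) that chains two-input XORs together using `reduce` from Python's standard library:

```python
from functools import reduce

def xor_many(*inputs: bool) -> bool:
    """TRUE when an odd number of inputs are TRUE (chained two-input XORs)."""
    return reduce(lambda a, b: a != b, inputs, False)

def xnor_many(*inputs: bool) -> bool:
    """TRUE when an even number of inputs are TRUE; zero counts as even."""
    return not xor_many(*inputs)

print(xor_many(True, True, True))   # three TRUEs, odd -> True
print(xnor_many(False, False))      # zero TRUEs, even -> True
```

For two boolean values, `a != b` is exactly XOR, so the `reduce` walks the inputs and keeps a running "odd so far" flag.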

If you think about all the ways you could process two Boolean inputs, and you assume that the two inputs, A and B, are equivalent or interchangeable, then there are eight possible Truth Tables you can construct. We've now seen six of them: AND, OR, and XOR, plus their logical complements, the negated-output versions. The last two are not very interesting. They would be a Truth Table that always produces TRUE no matter what the inputs, and its negation, where the output is always FALSE. These are of no use since their outputs don't depend on the inputs at all.

However, it may not be TRUE that A and B are interchangeable. In logic and the mathematics that George Boole developed, it wasn't just a simple combination of inputs, but a method to evaluate logical expressions.

# Logical Expressions

Boole's methods included AND, OR, and NOT, and possibly even XOR, but his purpose was to evaluate logical statements. This is similar to what we do in programming. So we will leave the realm of simple digital logic circuits and talk about higher logic implemented in programming languages.

Start with a statement such as "All dogs have tails." We won't worry if that is TRUE or not. As with much math, the basic assumptions are just accepted, and our goal is to determine the implications of those assumptions.

Based on the premise that "all dogs have tails" we can form an implication. In Boolean Algebra and Logical Expressions (or Propositional Calculus), the IMPLICATION symbol is an arrow. In this case there is a difference between "A" and "B." They can't necessarily be interchanged without changing the meaning.

We say the "A" implies "B." For example, based on "all dogs have tails" we have the implication that "if this is a dog" implies "he (or she) has a tail."

We will switch from using "A" and "B" to using "P" and "Q." This implies that now they are different. "P → Q" means "P IMPLIES Q." If P is TRUE, then Q is TRUE.

It is important to note that the opposite is not necessarily TRUE. We didn't say that Q IMPLIES P. For example, if it is TRUE that an animal has a tail, it does not imply it is a dog. It might be a monkey. We see that in the Truth Table for IMPLIES.

## Truth Table for IMPLIES

P | Q | → |
---|---|---|
F | F | T |
F | T | T |
T | F | F |
T | T | T |

When P is FALSE, we know nothing about the value of Q, so the implication is counted as TRUE in both of those rows. Only when P is TRUE and Q is FALSE is the implication FALSE. You could say it is "WRONG."

That's a little confusing, but that's the Truth Table for IMPLIES. Note that IMPLIES is a lot like IF ... THEN, which is a common programming construct: IF P is TRUE, THEN Q is TRUE.
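Most programming languages, Python included, don't have an IMPLIES operator, but P → Q is logically the same as NOT P OR Q. A sketch (the function name is mine) that reproduces the table above:

```python
def implies(p: bool, q: bool) -> bool:
    """P -> Q is FALSE only when P is TRUE and Q is FALSE."""
    return (not p) or q

# Reproduces the IMPLIES truth table row by row.
for p in (False, True):
    for q in (False, True):
        print(p, q, implies(p, q))
```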

But what if it does work both ways? What if P IMPLIES Q **and** Q IMPLIES P? That's also called an "if and only if" relationship, which some mathematicians abbreviate as "iff."

I might say that the meat is done if and only if it is at 350 degrees F. The meat being done IMPLIES it is at 350 degrees, and the meat being at 350 degrees IMPLIES it is done.

The symbol for this is a double headed arrow, "↔." This is often called "equivalence" to mean one is equivalent to the other.

## Truth Table for EQUIVALENT

P | Q | ↔ |
---|---|---|
F | F | T |
F | T | F |
T | F | F |
T | T | T |

I will conclude this discussion of Truth Tables, Boolean Expressions, AND and OR gates, and Propositional Calculus with a final comment on logic. This doesn't come from the work of George Boole, but from philosophers and logicians in ancient Greece such as Plato and Socrates.

As I stated with IMPLICATION, P → Q, if P is TRUE, then we expect Q to be TRUE. But if P is FALSE, we don't know anything about Q. Assuming ¬P → ¬Q anyway is called the "INVERSE," and it is not necessarily TRUE. Neither is the reversal, or "CONVERSE": P → Q is not the same thing as Q → P unless P and Q are EQUIVALENT.

However, if Q is FALSE, then we expect P to be FALSE too, since if P was TRUE, then Q should be TRUE. This is called the "CONTRAPOSITIVE."

Ancient philosophers figured out all these comparisons, and it is an error in either computer logic or normal logic to assume that P → Q means that ¬P → ¬Q (INVERSE) or that Q → P (CONVERSE). These just don't necessarily follow from the implication. However, the CONTRAPOSITIVE does follow: ¬Q → ¬P.

Name | Form | Description |
---|---|---|
Implication | if P then Q | first statement implies truth of second |
Inverse | if NOT P then NOT Q | negation of both statements |
Converse | if Q then P | reversal of both statements |
Contrapositive | if NOT Q then NOT P | reversal and negation of both statements |
Negation | P and NOT Q | contradicts the implication |
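These relationships can be checked mechanically too. Over all four P/Q rows, the contrapositive always matches the implication, while the converse and inverse do not. A Python sketch (again using NOT P OR Q for implication):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

rows = list(product([False, True], repeat=2))

# Contrapositive: NOT Q -> NOT P. Always agrees with P -> Q.
same_as_contrapositive = all(implies(p, q) == implies(not q, not p) for p, q in rows)

# Converse: Q -> P. Inverse: NOT P -> NOT Q. Neither always agrees.
same_as_converse = all(implies(p, q) == implies(q, p) for p, q in rows)
same_as_inverse = all(implies(p, q) == implies(not p, not q) for p, q in rows)

print("contrapositive equivalent:", same_as_contrapositive)  # True
print("converse equivalent:", same_as_converse)              # False
print("inverse equivalent:", same_as_inverse)                # False
```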

Pretty confusing, eh? No wonder there's so much arguing in politics. If only we could solve it all with a Truth Table. Then we wouldn't need Congress at all—would we?
