Saturday, January 12, 2013

Memories are Made of This


Computer technology has changed a lot since the very first computers built in the late 40s. Most of these changes have given us increases in performance. Processor and calculating speeds have gotten faster and faster, and memory and storage sizes have increased steadily. All this time, costs have been dropping.

In what seems like a lifetime ago … and it was nearly 30 years ago, so a significant part of a lifetime … I was teaching Programming Fundamentals (PF). This was the beginning of a complete curriculum of programmer retraining. We taught IBM employees the new job skill of “programming.” Prior to the class there were a couple of weeks of basic training on computer mathematics and how to use a PC. The first week of PF taught an introduction to computer architecture, what constitutes a computer program, compilers and linkers, and whatnot. As a person with a background in electronics engineering and digital design, I typically taught that first week. We had a textbook that was used for only those first five days and covered basic computer architecture.

The students learned about the central processing unit, arithmetic logic unit, input and output, and “storage.” Each of these basic elements had lots of variations and complications, but it was storage that was, perhaps, the most sophisticated and interesting subject, even in a basic presentation.

From a very high level view of computer architecture, you needed some method of storing both programs and data. A basic, theoretical computer design developed by Alan Turing and named the Turing Machine was a useful model here. It had the simplest storage possible, just a tape that could move forward or backward and be read or written to. The storage was assumed to be infinite … something modern storage nearly duplicates. This simple architecture was used to solve certain mathematical problems regarding computability: what can be computed. The practical stored program design that followed, with programs and data held in a common storage, is credited to John Von Neumann and called the Von Neumann architecture.

But most practical computers … actually nearly all practical computers … had less than infinite storage. And, more importantly, they usually had a hierarchy of storage with two layers or levels. One is often called internal storage. Although IBM didn’t like the term because they rejected any terminology that described computers as “electronic brains,” most people called this internal storage “memory” or “computer memory.” IBM finally adopted the term, but only after ignoring it for over thirty years.

The second level was called “external storage.” There were several differences between internal storage and external storage. Internal was usually some form of electronic memory. In the very early days of computers it might be relays, vacuum tubes, magnetic core memory, or even exotic technology like acoustic delay lines or oscilloscope memory, but by the 70s it was a form of transistor circuit called “solid state memory” or, sometimes, "RAM," which stands for Random Access Memory.

External storage was typically magnetic recording tape or magnetic disks, either “floppy” or “hard.” There were several distinctions between these two types of storage. Internal memory was almost always faster to read and write than external. Often the difference in speed was one hundred or even one thousand times. However, internal memory was usually much more limited in storage capacity, again by several orders of magnitude. In fact, magnetic tape storage was basically unlimited in size if you had enough tapes.

Therefore, the cost per bit or byte stored was almost always higher for internal memory … it sort of went along with the smaller size.

But the biggest difference of all between the two … even though this wasn’t true for some of the early forms of memory like mag-core … was that internal memory was volatile. With solid state memory, in those days, if you shut off the machine and removed power, the internal memory went blank. It forgot everything and had to be reloaded when the computer was powered back on, a process often called “booting.”

External storage, on the other hand, was non-volatile. The data remained even with no power. External memory, such as tape or disk, could actually be put in the mail and sent to a new site … an early form of networking.

Computer program design was based on these basic specifications of storage and this is why there were two layers or types. Internal storage was fast, but limited in size and volatile. External storage was slower, but typically much larger and non-volatile.

There was another complication. In general, programs had to be in internal storage in order to be executed by the central processing unit. Even input and output often involved storing in memory. In fact, one form of input was to read from a “file” on external storage into memory, and one form of output was to write data from memory to external storage “files.” You could also read from devices like keyboards and write to devices like displays and printers, but “read” and “write” when used in computer programs usually meant to or from memory and from or to a file on tape, disk, or even punch cards … actually another form of external memory.
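Just to make that concrete, here is a tiny sketch in C (only an illustration; the file names are placeholders I made up). It reads bytes from a file on external storage into a buffer in internal memory, then writes them back out to another file.

    #include <stdio.h>

    int main(void) {
        /* "Read": copy data from external storage (a file) into internal memory. */
        FILE *in = fopen("input.dat", "rb");    /* placeholder input file name */
        if (in == NULL) { perror("input.dat"); return 1; }

        char buffer[1024];                      /* internal memory: fast, small, volatile */
        size_t n = fread(buffer, 1, sizeof buffer, in);
        fclose(in);

        /* "Write": copy data from internal memory back out to external storage. */
        FILE *out = fopen("output.dat", "wb");  /* placeholder output file name */
        if (out == NULL) { perror("output.dat"); return 1; }
        fwrite(buffer, 1, n, out);
        fclose(out);

        return 0;
    }

The names "input.dat" and "output.dat" are just stand-ins; the point is that “read” and “write” are always relative to memory.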

Let’s put this all together. Most computers have internal memory which must be loaded up when the computer boots. This is accomplished by reading data from external storage into memory. Further, that is done by a special computer program that is part of the operating system. It is called the “bootstrap loader,” and that is where the term “boot” comes from. Since a computer must read programs into memory to operate, the idea that at start-up it must read some initial programs is sort of like lifting yourself up by your bootstraps. What is required is a small program, stored permanently in non-volatile memory, with the instructions to read from disk.
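The bootstrap idea is easier to see in a toy simulation than in real firmware, so here is a purely illustrative C sketch (not any real BIOS; all the names are mine). A tiny routine playing the part of the ROM program copies a "boot sector" from a pretend disk into empty RAM and then hands control to whatever it just loaded.

    #include <stdio.h>
    #include <string.h>

    /* Pretend external storage: the first "sector" holds the loader for the OS. */
    static const char disk[512] = "OS LOADER IMAGE";

    /* Pretend internal memory: blank (volatile) at power-on. */
    static char ram[512];

    /* Stand-in for jumping to the code that was just loaded into RAM. */
    static void run_loaded_program(const char *image) {
        printf("Now running from RAM: %s\n", image);
    }

    int main(void) {
        /* This plays the role of the boot program wired into ROM:        */
        /* read a fixed spot on disk into memory, then transfer control.  */
        memcpy(ram, disk, sizeof ram);
        run_loaded_program(ram);
        return 0;
    }

A real bootstrap loader does the same two steps, just in machine code against real disk hardware instead of a memcpy.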

This boot program is simple and short, but it does require a non-volatile memory area to be useful. So computers had simple programs called boot loaders literally wired into their circuits. By the 70s this was accomplished using a form of non-volatile memory called ROM, or Read Only Memory. Now it wasn’t really “read only.” After all, if all you could do was “read,” how would the program get stored in the first place? It was more like “write one time” and then “read many.” The original program was “burned” into the ROM or PROM (Programmable Read Only Memory) by a special machine called a PROM burner.

Notice we use that same terminology today for CD and DVD burners. After all, in most cases, these are “write once / read many” forms of external storage. So how did PROMs work? Well, they typically contained thousands of little fuses: little objects that, if intact, represented a zero, but that, if burned out by the “burner,” represented a one. (Or it could be the opposite. Doesn’t really matter.)

Each little fuse represented a “bit.” A bit can be a zero or a one. Groups of eight bits are called “bytes.” Bytes are just a convenient method of organizing bits, sort of like a dozen eggs is a convenient way to package eggs. Eight bits was enough to store a character such as “a” or “B” or “9” or “$”. In those days, a common PROM was one thousand bytes or eight thousand little fuses. Although I suppose you could use a burner a second time to burn out more fuses, there was no way to undo or erase the burning process and start over. So PROMs were only written to once.
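You can mimic the fuse-per-bit idea in a few lines of C. This is just an illustration of the principle (the names are mine): the PROM is an array of bytes that starts with every fuse intact, here treated as all ones, and "burning" a bit can only clear it, never set it back, which is exactly why the real parts were write-once.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PROM_BYTES 1000   /* one thousand bytes, or eight thousand little fuses */

    static uint8_t prom[PROM_BYTES];

    /* Burning a fuse can only clear a bit; nothing here can ever set it again. */
    static void burn_bit(int byte_index, int bit_index) {
        prom[byte_index] &= (uint8_t)~(1u << bit_index);
    }

    int main(void) {
        /* Fresh from the factory: every fuse intact (intact = 1, burned = 0). */
        memset(prom, 0xFF, sizeof prom);

        /* To store the character 'A' (binary 01000001) in byte 0, the burner   */
        /* burns out every fuse that must become a 0 and leaves the rest alone. */
        char value = 'A';
        for (int bit = 0; bit < 8; bit++) {
            if (!((value >> bit) & 1))
                burn_bit(0, bit);
        }

        printf("Byte 0 now reads: %c\n", prom[0]);   /* prints A */
        return 0;
    }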

Early PCs had a built-in boot program burned into the PROM. That worked well for manufacturers who built thousands of computers which all used the exact same program burned into ROM. But what about engineers like me who built special projects and needed to be able to change things … especially if we didn’t get it right the first time?

So a special kind of PROM was invented that could actually be erased and reprogrammed. These were called EPROMs, for “Erasable PROM,” and they used ultraviolet light to erase the stored data. Imagine instead of little fuses, you had little areas that could be burned into a second state, thereby making a zero a one (or vice-versa), but you could reset the area with UV light back to its original value. These chips had a little window, and you would expose them to a strong UV light for 30 minutes and you had a brand new “clean slate” to write on. They usually could be erased about 1,000 times, so they were very useful in laboratories and could be used over and over again.

The term “flash” came a bit later; the usual story is that erasing a whole block of this kind of memory at once reminded its inventors of a camera flash, and the name stuck for both the process of erasing and even the process of writing.

I have a few tales to tell about my experiences programming small computers with EPROMs, but I’ll save those for another note in the interest of brevity.

As time and technology passed, engineers developed solid state memory that would retain the information even after power was removed. They used methods of storing charge that made the data in memory non-volatile, yet the data could be erased and rewritten easily with an electrical signal, eliminating the need for the UV light. There were still some limits. These types of non-volatile memory would only support a limited number of erasures and re-writes, but the number quickly grew to several thousand and eventually even a million.
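Here is a hedged little sketch in C of what "a limited number of erasures" means in practice (the block size and the endurance limit are numbers I made up for illustration). It wraps a pretend flash block with a wear counter and simply refuses to erase once the limit is reached; real flash controllers do something far cleverer, spreading the wear across many blocks, but the counting idea is the same.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE      4096      /* illustrative block size, not any real part's */
    #define ENDURANCE_LIMIT 100000    /* made-up erase-cycle limit for this sketch    */

    struct flash_block {
        unsigned char data[BLOCK_SIZE];
        unsigned long erase_count;
    };

    /* Erasing sets every byte back to 0xFF, but only while the block has life left. */
    static bool flash_erase(struct flash_block *blk) {
        if (blk->erase_count >= ENDURANCE_LIMIT)
            return false;                      /* block is worn out */
        memset(blk->data, 0xFF, BLOCK_SIZE);
        blk->erase_count++;
        return true;
    }

    int main(void) {
        struct flash_block blk = { .erase_count = 0 };
        if (flash_erase(&blk))
            printf("Erase #%lu succeeded\n", blk.erase_count);
        return 0;
    }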

These non-volatile memory chips found use in digital cameras and other digital devices like audio recorders and answering machines. The difference between internal and external storage began to blur. Now internal storage could be non-volatile.

These new chips were called, among other things, “NAND flash.” “Flash” comes from that block-erase image, and “NAND” stands for Not-AND, the actual digital circuit used in the memory cells.

(All combinational digital circuits are built from AND, OR, and XOR gates, each possibly combined with “NOT” to give NAND, NOR, and XNOR. That’s a total of six basic circuits used to make all of the computer’s logic. Sort of like how basic bricks can be used to build a house or a school or an office building.)
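The brick analogy is literal for NAND: that one gate is enough to build all the others. Here is a small illustrative C sketch (my own functions, nothing standard) that builds NOT, AND, OR, and XOR out of nothing but NAND and prints their truth tables.

    #include <stdio.h>

    /* NAND is "Not AND": the output is 0 only when both inputs are 1. */
    static int nand(int a, int b) { return !(a && b); }

    /* Every other basic gate can be assembled from NAND alone. */
    static int not_(int a)        { return nand(a, a); }
    static int and_(int a, int b) { return not_(nand(a, b)); }
    static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
    static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

    int main(void) {
        printf("a b  NAND AND OR XOR\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%d %d   %d    %d   %d  %d\n",
                       a, b, nand(a, b), and_(a, b), or_(a, b), xor_(a, b));
        return 0;
    }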

As NAND flash technology improved, it increased in storage size. That soon put it in competition with external storage. It had always had the advantage of higher speed, and now it was chipping away at external storage’s advantage in size. (“Chipping away”!! Get it???) External storage remained cheaper per bit, but what was becoming known as Solid State Drives, or SSDs, were catching up.

That brings us to today’s technology. SSDs can now be had in 64, 128, and even 256 Gigabyte sizes that would be adequate for a small computer such as a laptop. They are already providing the storage in what are really little computers: smartphones and tablets or iPads. Oh yes, you guessed it, they are in iPods too.

The technology that we saw in camera flash cards and USB drives or "memory keys" is now being packaged for internal computer use, and it mimics, in many ways, the mechanical hard drives it replaces. That makes it easy for computer designers to replace magnetic hard drive storage with solid state external storage. In some very simple designs, there is no longer a distinction between memory and external storage, but, in the majority of computers, there is still a difference and a separation of the two storage methods, if for no other reason than compatibility with existing programs and operating systems.

Hard disks are large; even the smallest hard disk is over an inch wide and fairly thick. They are mechanical and suffer from heat and from damage if dropped. Finally, they are power hungry, which is a problem with battery operated devices. So that’s why SSD found its use in cameras and audio recorders and smart phones and tablets and … the list goes on.

Now they are starting to appear in laptops. The Apple MacBook Air was an early example of SSD use. Intel’s specification for an “Ultrabook,” basically a MacBook Air copy, is typically implemented with SSD. My new Samsung Chromebook, like most Chromebooks, uses SSD. The advantages? As I said, low power use and low heat generation; much greater resistance to drops and vibration; smaller size is even a factor. The only disadvantage? Well, more than one. Cost for one and storage capacity for another. But capacity is rising even as cost per bit drops. So this may, ultimately, spell the end of hard disks. Notice that floppies are already an endangered species.

One big advantage of SSD comes from its speed. Boot times are now just a few seconds. Of course, fast reads and writes help speed up all programs, but it is that long wait for a computer to boot up that is most frustrating to personal computer users. We are getting close to “instant on” computers. In fact, the iPad is pretty much an instant on computer. It does use SSD storage, but the "instant on" comes more from the fact that it is never really off … just slumbering.

The CD and DVD will be around a while longer, although the advantage goes to the DVD and its bigger brothers, Blu-ray and whatever will follow that. CDs are being replaced by downloads and iTunes and may disappear soon … although the car stereo might be their refuge.

Of course, you may choose to store everything in the “Cloud” and not worry about storage. I won’t get into the fact that the Cloud is just some computers in a data center somewhere and the data is stored on disk and tape. Whether you use the Cloud or not, you still have to load programs into local storage (memory), and read and write times over the Internet may be the slowest link in the chain.

So, when you're not connected to the Internet, or even when you are, you may want a nice SSD to save your pictures, and songs, and videos, and even this blog.
