Saturday, December 25, 2010

Writing about Writing

It has been a year now that I've been actively "blogging." I've written both in Facebook "Notes" and on my blog on BlogSpot. I've really enjoyed the writing and reminiscing, since most of what I write about are things that have happened to me or things I'm thinking about. It is great fun to put the thoughts to "paper" and hopefully share them with some friends and family and even strangers. Sometimes I try to be funny. Sometimes I try to be thoughtful. Sometimes I try to educate or excite interest. Sometimes I just want to talk about my life.

I used to get a lot more feedback on my writing. I can't tell if people are just not reading it anymore, or if the content is singularly unremarkable, or if I'm just being shunned for being a giant geek, nerd, pain in the tookus … which I admit I am. I tell myself that it is not important whether anyone reads my words. I'm just writing for my own personal satisfaction. But is that enough?

I follow many postings and blogs. Some just note the events of their day, and that is cool. I'm interested in what my friends and family are doing. Some post articles and links. That's cool too. I enjoy them and am often educated or entertained by them. I do those things too. But I don't find these things all that creative. I looked at a blog the other day that had interesting artwork and graphics, but when I started to read the content, I found this person just posts what other people say. It was full of quotations and links. I didn't find that very creative at all. This person is a living mimeograph machine. Is that creative? Maybe, I don't know. Before computers I was impressed by people who could quote poems and prose and famous sayings, but in the age of copy and paste it is not so impressive.

I consider my notes (and my photographs too) my creative outlet, a little more creative than just commenting on my daily activities or tagging the restaurant I'm currently imbibing at (watch out — dangling preposition alert). So let me say this about writing on the anniversary of these efforts.

I think that one of the more remarkable aspects of writing and publishing is that no two readers ever read the same book, even when they both have the same edition. I mean that the mental pictures and assumptions will be different for each reader. There is a sense in which reading is an essential part of writing. The reader brings so much to the encounter that you cannot focus on the author's creation alone; you must include the reader's view too. Let me explain.

We will all feel differently about a movie or a play or a painting or a song, but we have all undeniably seen or heard the same movie, play, painting or song. They are physical entities. A painting by Velázquez is purely and simply itself, as is "Blue" by Joni Mitchell. If you walk into the appropriate gallery in the Prado Museum, or if someone puts a Joni Mitchell disc on, you will see the painting or hear the music. You have no choice.

But writing does not exist without an active, consenting reader. (Possible exception is roadside signs, and Burma Shave might be acceptable as creative literature.) Oh, certainly writing has a physical presence. That is the whole point of writing, preserving thoughts for posterity — right? (Consider a book in a long-lost language that no one understands. Is it even writing if no one can ever read it and think about it? Just something else to ponder.)

What I mean is that writing requires a different level of participation. Words on paper are abstractions, and everyone who reads words on paper combines them with a different set of associations and images. I have vivid mental pictures of Don Quixote, Anna Karenina and Huckleberry Finn, but I feel confident they are not identical to the images carried in the mind of anyone else. You could argue this is true of a movie too, but I don’t think it is at anywhere near the same level. I see the actors and scenery in the movie and how the roles are interpreted, and I suspect the person in the theater next to me has about the same concept of the plot and characters as me.

But if we both read a book, how different our interpretations will be. I know when I first read “Dune” by Frank Herbert I didn’t really comprehend the sandworms very well. Even the artwork on the cover didn’t help a lot. Then I saw the movie. I thought, “Now I get it.” “I know just what the worm looks like.” "Wow, they're big!" Now I think that was a failure of my imagination. I should have pictured the worms in my mind better from the book. Now I just share the common view of anyone seeing the movie (actually either movie). Now I see them as worms, with rounded front ends, yet with the great teeth that the book describes. Recall that the Fremen made knives from the teeth.

As an aside, no Dune movie has done justice to the ornithopters. The word is from the Greek for bird and wing. Obviously, they had flapping wings. The ornithopters in David Lynch's film really were unimaginative. I did enjoy Sting's performance on the other hand. What about you?

Hopefully I will get a little more feedback on these thoughts. Christmas is a busy time, and people’s thoughts are not focused on Facebook Notes or blogs — or are they? In any case, if you do chance to read these words, I would love to have some feedback and even conversation about it. What do you do that is creative? What do you think about writing?

How do you compare movies and books? Did you read "True Grit" by Charles Portis? Have you seen both movies? How do you compare the two movies, the characters, the motivations, etc.?

Do you keep a journal? Do you just post what other people say: links and videos, or do you have your own thoughts and ideas and passions? Tell me, I’d love to hear.

Friday, December 24, 2010

Thoughts on the Recording Art

I made my first recordings on a full-track tape deck at KXLO studios. Since AM radio is mono, and this was the early 60's, they had no need for stereo recorders. At that time, most mono tape recorders were what is called "half track." That means the recording was done on half of the tape, call it the "top half." Then, when you got to the end of the tape, you would flip the tape over and record on the other half. This had the positive effect of doubling the length of time the whole reel of tape could record, as well as finishing up with the tape nicely rewound on the original reel.

The more professional mono recorders at KXLO were full track. They recorded on the entire width of the magnetic tape. That means that, if you flipped the tape over, and then played it back, the audio would come out in reverse. Also the machine had "infinite" speed control, so you could slow things down and speed them up. We did some fun recording with that machine.

While in Navy 'A' School I bought my first tape recorder, a Sony reel-to-reel, and started recording music off the radio and from records. Later I recorded a lot of jam sessions and even made some serious "studio" recordings. I have those to this day. But, since they were just "ambient" or "live" recordings, I can't really do any post production. I would love to turn up the vocals or the bass guitar and turn down the lead guitar, but what you got is what you get.

Later, in the mid-seventies, I started recording my friend Casey Anderson. I upgraded to a nice Teac stereo reel-to-reel. Like the Sony, it was a quarter-track machine with two heads: record stereo, flip the tape, and record stereo on the rewind. Then I added a Teac four-track reel-to-reel. It had a typical quarter-track design but with four heads, and it recorded four tracks on the tape in one direction. Like the old mono recorder at KXLO, you could only record in one direction, but I was able to record Casey's small combo with dedicated tracks for each instrument and voice. Later, recording four- and five-piece groups and even large bluegrass bands, I would partially mix down to four tracks and then do the final production to stereo in post.

I always wanted more tracks, 8 or 16 or even more, to give me more control. Then the whole recording process would be to simply capture each instrument and voice as best you can and leave the actual mixing to post production. I finally achieved that goal when I purchased a Fostex digital hard disk recorder which allowed 16-track simultaneous recording. My first job with that new equipment was to record a band's reunion. They had electronic drums, so it was very simple to capture all the instruments and voices, even though they did quartet singing and had violin and keyboards in addition to two guitars and bass. I now have a home studio that is also 16 tracks and could be 32 or even 64 if I just had the audio card inputs on my computer. It is really more an issue of my computer running out of slots for sound cards than any limit of the audio recording software.
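
To make the "capture everything, mix later" idea concrete, here is a minimal sketch in Python (using NumPy, with made-up gains and pans, and not the software my studio actually runs) of how several mono tracks can be combined into a stereo pair in post production:

```python
import numpy as np

def mix_to_stereo(tracks, gains, pans):
    """Mix mono tracks (equal-length float arrays) down to one stereo pair.

    gains: per-track linear gain (e.g. 0.8)
    pans:  per-track pan position, -1.0 = hard left, +1.0 = hard right
    """
    left = np.zeros(len(tracks[0]))
    right = np.zeros(len(tracks[0]))
    for track, gain, pan in zip(tracks, gains, pans):
        # Constant-power panning keeps the apparent loudness steady
        # as a track moves across the stereo field.
        angle = (pan + 1.0) * np.pi / 4.0
        left += gain * np.cos(angle) * track
        right += gain * np.sin(angle) * track
    return np.column_stack([left, right])

# Toy session: four "instruments" as sine waves at different pitches.
t = np.linspace(0, 1, 44100)
tracks = [np.sin(2 * np.pi * f * t) for f in (110, 220, 330, 440)]
stereo = mix_to_stereo(tracks,
                       gains=[0.9, 0.6, 0.5, 0.4],
                       pans=[-0.5, 0.5, -0.2, 0.2])
```

The point is that every balance and pan decision happens after the performance is safely on its own track, which is exactly the control I never had with those old live recordings.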

That is the background to my reading this in Keith Richards's autobiography:

The thing about eight-track was it was punch in and go. And it was a perfect format for the Stones. You walk into that studio and you know where the drums are going to be and what they sound like. Soon after that, there were sixteen and then twenty-four tracks, and everyone was scrambling around these huge desks. It made it much more difficult to make records. The canvas becomes enormous, and it becomes much harder to focus. Eight-track is my preferable means of recording a four-, five-, six-piece band.

Add that to the fact that the Beatles and the Beach Boys all through the 60's and early 70's focused on simple mono recordings and left the stereo mixes to the engineers (and who leaves the music production to the engineers???). Now think what we have in the studio. We have electronic reverb (the Stones recorded in a basement to get the echo), parametric equalization, limiting/expansion/compression, and even automatic pitch correction. (Heard Cher on her last disc?) When will it end? I have synchro envelope control, hiss and hum removal, and I can even clean up tapes that are almost beyond listening. All through the tricky application of Fourier transforms and digital data storage.
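
Since I brought up Fourier transforms, here is a rough sketch of how hum removal can work in principle: move the audio into the frequency domain, knock out a narrow band around 60 Hz and its harmonics, and transform back. Real noise-reduction tools are far more sophisticated than this; the numbers are only illustrative.

```python
import numpy as np

def remove_hum(signal, sample_rate, hum_freq=60.0, width=1.5):
    """Crude hum removal: zero a narrow band at the hum frequency and its
    first few harmonics in the frequency domain, then transform back."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for harmonic in (hum_freq, 2 * hum_freq, 3 * hum_freq):
        spectrum[np.abs(freqs - harmonic) < width] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Toy example: a 440 Hz tone buried under 60 Hz hum.
rate = 8000
t = np.arange(rate) / rate
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
cleaned = remove_hum(noisy, rate)
```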

Yet people are returning to vinyl (and tube amplifiers).

So is the palette now just too complicated? Is digital music just too "cold?" Are transistors not "warm" enough? Have we lost control to all the high-tech stuff? What would Muddy Waters think, or even Eric Clapton?

Thoughts to ponder. Is Santa bringing my musician friends another digital toy? Or is there just a Hohner harmonica in your stocking? Well, and maybe a nice "green bullet" microphone to record that blues harp?!?

Tuesday, December 14, 2010

At some point in the near future ...

OK, the New Year is fast approaching, and it is time for the prognosticators to roll out the crystal ball and gaze into the future. Well, I’m as prognostic as anyone around here, so here goes.

First a comment about the “news business.” You know, the newspapers, network television news, and those radio updates on the 20’s. Well, as anyone will tell you, they’re DEAD. Better toss in the flowers and start filling the grave. Free news from the Wall Street Journal and NY Times? Not much longer. You have to pay to play. Besides, look at all the changes in the news business over the last few decades: USA Today with its four brightly colored sections, then — gasp — the Wall Street Journal getting color, then the NY Times charging for on-line subscriptions (coming soon), then the news on your Kindle — Nook — Aluratek — Kobo — Apple Bookshelf… What, the advertising model isn’t working?

And what about that advertising model? Anybody notice how a nice set of messenger, email, and other Windows apps were taken out of Win 7 and are now downloaded under the banner “Live,” which really means “Google isn’t the only one that can sell advertising with their free content”?

The news business has changed a lot in the computer age. “Reporter,” “columnist” and “letter to the editor” now sound strangely quaint. They’ve been quickly — and sometimes too eagerly — supplanted by the technocratic “content generator,” “blogger” and “online message board.”

New-media marketing wizards advance the notion of “communities” engaged in “crowd sourcing” through Twitter, Facebook and other “social media.” The mass media, invented by our grandparents’ generation, have fallen out of fashion; in a world of customized information, there are no mass markets.

Significantly, anonymity, in the form of online “handles” (a CB radio term that’s been revived by the online “citizens’ band”), is the norm for the new media. With this trend, regrettably, the news business has shelved the traditional journalistic practice of squeezing a full name, title and company affiliation out of anyone seeking to post a comment in an online community.

And what about Fox and MSNBC? Why don’t they follow the example of James Carville and Mary Matalin and just get a room!!

So, what are the trends to follow (other than 3D TV for the masses)? Here it comes folks. My predictions for 2011 (or shortly thereafter).

((All warranties, implied and explicit, are hereafter denied and if you or your impossible mission team are captured, then the secretary will deny any existence of the hereafter. This warranty will self destruct in 10 seconds.))

1. I’ve already written about the Microsoft Kinect and how I think it will revolutionize the computer / human interface. Watch for “gesture recognition” coming to a computer near you soon. The little webcam will be replaced by a sophisticated vision system, and you won’t need a mouse, or trackball, or touch pad (or little red eraser on the IBM laptop), or even my favorite: the joystick. No, I see in the crystal ball we’ll just be waving our hands and the computer will move the cursor. Add speech recognition to replace the keyboard, and the newest computer won’t have any buttons at all. You just clap-on and clap-off.

2. The Google Power Meter. With this app, you can monitor all your personal energy use and calculate your carbon footprint to the millimeter. I predict a new class of zombies who just watch the meter spin on their home power and wonder who is watching the TV, and why is that light still on in the bathroom.


3. Apple has a great big bull’s eye painted on its back. Expect heavy competition in areas such as smart phones and touch pads, à la the iPhone and iPad. Now that doesn’t even require a crystal ball. Apple will move to a new network provider in 2011 (hear that, Verizon?), but just as with the PC, it will have to settle for a solid 20% of the market as open competition and lower prices move market share to the clones. Not that Apple is altogether upset about that. They will still be rolling the money in faster than Steve Jobs can change turtlenecks.

4. I speak as someone who has some experience requesting medical records be sent from one doctor to another using, get ready, here it comes, a FAX!!! No, no, no! Medical records will all be digitized and you’ll carry them around in your smart phone. It may not happen in 2011, but when it does, you’ll remember I predicted it first. (And I predict a booming market for PGP as people want those records encrypted!) P.S. Your credit card too, and your little doggy, he he he.

5. Automotive radar in your Hyundai. That’s right folks. You people who can’t afford a car as expensive as the one I drive, you will soon have radar too. And you’ll still follow too close on the freeway, but I can’t do anything about that. The little red light on the dash will clamor that you’re too close, AND you don’t have your seatbelt on. The smart car is coming. With over 15 million lines of code in a modern automobile, I can’t wait for the second Tuesday of the month when Ford sends out its security updates. Oh no, my car has a virus and it won’t stop sneezing!

6. Geotagging the real world. I want all dogs, cats, and babies to have GPS installed. And while we’re at it, I want my phone, my TV remote, my car keys, my purse, and my reading glasses tagged too. I’m tired of looking for them and want an app to find them once and for all. Oh yes, and the teenagers. We must get them tagged first.

7. Energy storage. Come on folks, surely we can come up with a battery that can power a laptop for over 8 hours, a phone for over a week, and a car for over 35 miles. I’m not making any predictions, but I do suggest that whoever comes up with a practical energy storage device will own the world. Don’t expect it in 2011, but certainly by 2111 we’ll have solved that issue. (2525 is too far out to predict, and I only see images of long black tubes.)

8. And for my last prediction: we’ve got to get 3D TV on our smart phones. (Also those magnifying glasses that surgeons wear so we can see full detail, HD, Form 1040, progressive scanned, 3D shows on my smart phone’s 4” screen.) I may never have to suffer reality again. Infinite reruns of Homer Simpson, and I’ll just veg away here in my seat.

So put on your lampshade and get ready to party like it's 1999. Here comes the little red corvette, and the purple rain, and 20Ten … wait … that is so last year.

And now my bonus prediction. This just in off the wire: silicon chips will be able to communicate via pulses of light. IBM trumpets silicon nanophotonics as the enabler for envisioned exascale processors which would perform a million trillion operations per second. Watch for these exciting new chips at a supermarket near you soon. Now I won't have to wait so long for my YouTube downloads. What, you say the problem is my network capacity? Never mind.

We are living in the future
I'll tell you how I know
I read it in the paper
Fifteen years ago

We're all driving rocket ships
And talking with our minds
And wearing turquoise jewelry
And standing in soup lines
We are standing in soup lines

Machine Vision and Microsoft’s New Kinect

My first experience with Machine Vision was in 1983. Let me define terms. “Machine Vision” is the process of using a computer to analyze visual data in the same way that the human brain analyzes the information fed to it by the eyes. I was working in magnetic recording head manufacturing. At that time magnetic recording heads were “glass sandwiches.” The process involved building up the pieces one step at a time. At each step, new glass was applied, using a formula of glass that melted at a slightly lower temperature than the previous layer. So you would take glass, add metal, melt, add more glass, melt the new glass, add more metal and glass, melt the latest layer. Thus were built up these miniature electronic devices that contained the magnetic recording and reading elements embedded in glass.

This process was not only time consuming and expensive, but the heads were too large for the next generation of miniaturized disk drives. So IBM pioneered (as it had so often done in extending the art of computer engineering) something called “thin film heads.” These state-of-the-art (for the 1980’s) recording heads were made with a process more like the one used to make transistors and integrated circuits. And they shared a problem with integrated circuits: yield. That is, not all the heads were good. Now that is OK. If you can cheaply, and at one time, make over 100 heads, it doesn’t matter if only 60 are good. But it does create the problem of determining which are good.

Of course, you could install the heads in a drive and test them that way, but that was way too expensive and wasteful. We did test these heads 100% once they were in a drive. In fact, that was my job, and my creation, the ESTAR (Eight Station Test and Repair) machine, did just that. I was a test engineer and testing was my game.

But what we wanted to do was visually inspect the heads and eliminate as many of the bad heads before assembly as possible. Ideally we wanted to inspect the heads while they were still all attached to the same substrate called the “wafer.” That was done with a microscope viewed on a TV screen and a motorized stage that moved the heads so they could each be inspected one by one. The inspector could press a button while viewing a head and a dot of ink was dropped on the bad head. Press another button and the stage automatically repositioned to the next head for inspection. After inspection was completed, a machine would cut the wafer into individual heads and sort them based on the drop of ink: bad heads in the trash, good heads move down the line for assembly.

Now it was a very boring and time consuming job to inspect the heads manually, so — as the test engineer with an electronics engineering degree, design experience, knowledge of programming, and advanced math skills — naturally I got the job of creating a computerized head inspection tool.

This was my first experience with “Artificial Intelligence” or AI. That is the area of computer science interested in making computers “think” like humans do, learn like humans do, and in general duplicate human thought processes. AI had been an important part of robotic designs and other interesting areas of research since the 50’s. So I took one of the inspection tools, fed the video into a computer, and added computer control for the table and the ink button. I programmed the computer using state-of-the-art AI software and proceeded to “teach” it good and bad heads. I would feed in the video for a good head and program the computer to view it as “good,” and do the same with bad heads programmed as “bad.” After running a few thousand heads through the “learning” circuits, I thought the computer would be able to distinguish good from bad. No, it didn’t do very well, giving me both “false positives” and “false negatives.” In other words, it marked good heads as bad, and bad heads were not “inked.”
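
That 1983 software is long gone, but a toy sketch can show the flavor of "teaching" a classifier with good and bad examples. The features here (defect area and reflectivity) and the nearest-centroid method are purely illustrative, not what we actually ran:

```python
import numpy as np

def train_centroids(features, labels):
    """Teach the classifier: average the feature vectors of known-good
    and known-bad heads into two prototype (centroid) vectors."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    return {label: features[labels == label].mean(axis=0)
            for label in ("good", "bad")}

def classify(feature_vec, centroids):
    """Label an unseen head by whichever prototype it sits closer to."""
    distances = {label: np.linalg.norm(feature_vec - c)
                 for label, c in centroids.items()}
    return min(distances, key=distances.get)

# Made-up features per head: [defect area, mean reflectivity].
train_x = [[0.10, 0.90], [0.20, 0.80], [0.90, 0.30], [0.80, 0.40]]
train_y = ["good", "good", "bad", "bad"]
centroids = train_centroids(train_x, train_y)

# A "false positive" inks a good head; a "false negative" passes a bad one.
print(classify(np.array([0.15, 0.85]), centroids))  # expect "good"
print(classify(np.array([0.85, 0.35]), centroids))  # expect "bad"
```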

So I modified the vision system, adding a second camera: one in visible light and one with a blue light filter (since blue light seemed to show the imperfections better). That helped, but we still did not achieve the accuracy we needed. I worked with a Ph.D. math intern on algorithms, and the best we could get after working all summer was for the computer to correctly spot bad heads about 80% of the time and to avoid rejecting good heads just over 90% of the time. But that was not good enough. Human operators ran near 98% in both categories.

We kept increasing the precision of the algorithms, but pretty soon the system ran too slowly. We needed this machine to inspect over 5,000 heads a day, and it just couldn’t do that in an 8-hour shift. Humans were required to insert the wafers, so we couldn’t run around the clock, and finally the project was canceled. I did inherit a $10,000 Zeiss optical system in the process, but later gave it to a friend working in Austin, TX, who used it to inspect processor chips. He had better luck than I did.

The problem was that certain imperfections on the recording head surface were not detrimental to operation, while other imperfections were. My computer algorithms just could not sort out the slight imperfections and decide which were damaging and which were not. I measured the size of the imperfections, their reflection indices, and even color (in a limited sense), but I could not get the level of discernment of the human eye and thinking brain. The computer just could not tell the difference between good and bad heads with a reliability that matched humans.

Now my lack of success was typical of AI at that point in time. Everyone looked for the holy grail of “faster processors.” Even better would have been parallel processors, since the algorithms I was running did tree searches and you can easily parallelize those algorithms.
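
To show what I mean about tree searches parallelizing naturally, here is a toy sketch that hands each top-level branch of a search tree to a separate worker process and then combines the results. It is only a stand-in for the kind of algorithms we were running, not the original code:

```python
from multiprocessing import Pool

def best_in_subtree(node):
    """Walk one subtree depth-first and return its best score."""
    best = node["score"]
    for child in node.get("children", []):
        best = max(best, best_in_subtree(child))
    return best

def parallel_best(root, workers=4):
    """Search each top-level branch in its own process, then combine."""
    branches = root.get("children", [])
    if not branches:
        return root["score"]
    with Pool(workers) as pool:
        results = pool.map(best_in_subtree, branches)
    return max([root["score"]] + results)

if __name__ == "__main__":
    tree = {"score": 1, "children": [
        {"score": 5, "children": [{"score": 9}]},
        {"score": 3, "children": [{"score": 7}, {"score": 2}]},
    ]}
    print(parallel_best(tree))  # 9
```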

Move the clock ahead 30 years. Enter game systems like the Wii and the Xbox 360. Now, as many of you know, the Wii has been doing very well in the market, partially because of the lower cost of that system compared to the comparable Sony and Microsoft offerings, but also because of the interesting controllers.  These are motion and position sensitive, wireless, hand held devices that let Wii game writers create interesting games like bowling, tennis, and exercise software.

So Microsoft did them one better with the completely controller-less Kinect box. This is a device that employs machine vision (and machine hearing) to monitor the game players and detect movement of their hands, arms, legs, body, and head. The AI research that went into this box is fascinating and must have cost millions of dollars of research time. That, coupled with an interesting, low-cost interface, makes for a remarkable product. In fact, these boxes are being bought by hackers who are quickly modifying them to work directly with computers. MS doesn’t know exactly what to think about this. They want to sell the box, but they also want to sell Xbox 360 units. At the same time, they appreciate the interest and the good press. Now, in my opinion, this is just the start of something. I forecast laptop and desktop PCs with machine vision eliminating the need for a mouse or a touch screen and maybe even a keyboard. Imagine this interface integrated in a smart phone. Maybe we will enter text using the international sign language for the deaf. Interesting thoughts.

As an aside, Microsoft seems to be good at producing hardware. Their mouse was often the best on the market. It was designed and engineered at Microsoft in Ft. Collins, and I know many of the engineers who worked there. Sadly, that engineering work has been sent to China, and my friends were laid off. As I’ve said before, I don’t think the manufacturing moved offshore will ever return. I’m much more concerned about the engineering moving offshore. And now back to your regularly scheduled program.

With the assistance of my friends at UBM TechInsights in Austin, TX, here is a breakdown of the Kinect hardware. Fabless semiconductor company PrimeSense, a Tel-Aviv, Israel company, enabled the technological feat via its PrimeSensor reference design, which it says lets a computer “perceive the world in three dimensions and translate these sections into a synchronized image.”

Another aside: the Israelis continue to build computer chips while the Palestinians produce “potato chips.” Add to the list of articles in my queue my opinions on the Israeli and Palestinian situation. I’m sure I could shed some light on that … sure! OK, back to Kinect.

In the MS approach, the room and its occupants are peppered with a pattern of dots, unseen by the users and generated by a near-infrared laser; the use of a Class I laser device provides focus at a distance without hazard to the players. I’m sure some of you have seen how Hollywood does some special effects, photographing actors in full skin suits covered with white balls. Same idea, the computer needs these fixed reference points to establish location and movement.

A CMOS image sensor in the Kinect detects reflected segments of the infrared dot pattern and maps the intensity of each segment to a corresponding distance from the sensor, with resolution of the depth dimension (z axis) down to 1 centimeter. Spatial resolution (x and y axes) is on the order of millimeters (which for those of you not good at metric measurements, are even smaller than centimeters), and RGB input from a second CMOS image sensor is pixel-aligned to add color to the acquired data.
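
To give a feel for what that depth data is, here is a toy sketch that turns a depth frame (one distance value per pixel) into x, y, z points using a simple pinhole-camera model. The real Kinect calibration and the PrimeSense pipeline are considerably more involved; the field-of-view number here is only an assumption:

```python
import numpy as np

def depth_to_points(depth_cm, fov_deg=57.0):
    """Convert a depth image (distance in cm per pixel) into 3D points.

    Uses a plain pinhole-camera model; fov_deg is an assumed horizontal
    field of view, not a measured Kinect value.
    """
    height, width = depth_cm.shape
    # Focal length in pixels, derived from the horizontal field of view.
    focal = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    cols, rows = np.meshgrid(np.arange(width), np.arange(height))
    z = depth_cm
    x = (cols - width / 2.0) * z / focal
    y = (rows - height / 2.0) * z / focal
    return np.dstack([x, y, z])  # shape: (height, width, 3)

# Toy frame: a flat "wall" 200 cm from the sensor.
frame = np.full((480, 640), 200.0)
points = depth_to_points(frame)
```

A second color image, lined up pixel for pixel, is what lets the software paint each of those points with the player's actual colors.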

The Kinect uses the three-dimensional position and movement data to produce corresponding on-screen movements by each player’s avatar. A motorized gear assembly keeps the image sensors aimed at the action. As players move, the Kinect follows. Four microphones are used to cancel echoes and background noise while helping determine which player has issued a voice command. It’s not too hard to think of other applications for this technology, but for now, it’s available as a video game interface.

Microsoft expects to sell a few million Kinect units by the end of the year, so it comes as no surprise that several of the commodity components have second and third sources. The 64-Mbyte DDR2 SDRAM socket may contain parts from Samsung, Elpida, or Hynix. Also, the 1-Mbyte NOR flash may be from Silicon Storage Technology or STMicroelectronics. The Kinect contains plenty of op amps and other small components that are easy to source from multiple vendors.

The “eyes” of the Kinect are a pair of cameras, both of which incorporate CMOS image sensors from Aptina Imaging. The unit uses a PS1080 for communications via USB 2.0 with the application processor, a Marvell PXA168 — a low-power, low-cost, gigahertz-plus screamer that should have tech-frenzied gamers swooning. A pair of Wolfson Microelectronics WM8737L stereo A/D converters with built-in microphone preamps accommodates the array of microphones.

Kinect also houses a MEMS accelerometer to support the unit’s limited range of motion, provided by stepper and DC motor drivers. A USB hub controller from NEC and a pair of Texas Instruments USB audio streaming controllers and eight-channel A/D converters round out the processing power.

What is really amazing is that all this algorithmic power is available for $150 retail. Just think of the possibilities beyond the Kinect: a TV with no remote; a computer with no mouse, no track pad, and no touch screen; affordable advances in home security; and any number of aids for the elderly and disabled. Whether you run off and attach your Kinect to a homemade robot, limit its use to the intended gaming purpose, or do neither, you’ll be seeing this technology again.

I remember IBM demonstrations at Disney World where keyboards were projected onto any surface with a red laser and then a camera would observe the “typist” using the keyboard. Obviously, MS has taken these ideas several steps farther.

Tell me again why I’m retiring now … the future is so bright I’ll have to wear shades — with laser-dot, infrared technology built in, methinks!