Welcome to DJ's Junk Drawer.

I will unofficially update this website on random dates within any random time interval.

Monday, December 31, 2012

Stick a fork in netbooks, they’re done

Stick a fork in netbooks, they’re done:
After beginning in late 2007, the age of netbooks is coming to a close. Acer and Asus, the two remaining top-tier manufacturers of the small laptops, are ceasing netbook production today, reports The Guardian’s Charles Arthur. For a computing market that appeared to have unstoppable growth early on, the rise and fall of netbooks happened quickly. It should remind us that disruptive new technologies can quickly erode a product’s market share, and even the viability of a product class itself.
An example of this change can be seen in one of my most-read posts ever here on GigaOM. Out of more than 7,500 posts I’ve written, one of my most viewed is “A quick guide to netbooks” from September 2008. No matter what news was hitting the tech cycle, this post on netbooks kept finding its way in front of readers who searched for netbook information on the web. Even a year after publication, the post was appearing on a daily basis near the top of our stats. Then 2010 arrived, and with it, the first credible consumer tablet in Apple’s iPad.
Charles Arthur provides four reasons for the netbook’s demise, but the traffic stats of my netbook guide post suggest that the revamped tablet market was the beginning of the end for netbooks. True, these are completely different products in terms of form factor, design, operating systems and supported applications. But they share an important trait: both are relatively inexpensive mobile computing devices.
Let’s face it: There are only a few reasons that netbooks even became a “thing.” You could get one for between $200 and $400, you could run the apps you wanted to, and you could take them everywhere. The idea of a small, cheap laptop that ran all the same software your larger notebook or desktop could run was appealing at a time when the global economy began a huge downturn. The timing of netbooks was simply right.
I know because I bought the very first one available in 2007 and used it to cover the Consumer Electronics Show in 2008: All of my posts were written on a small Asus Eee PC. I later upgraded to an MSI Wind machine and then a $399 Toshiba model in 2009. For half the cost of a full-sized laptop, I had something more portable that lasted longer on a single battery charge.
The idea of a netbook then morphed into a smartbook: A small laptop that ran not on Intel chips, but on the ARM chips used in smartphones. The concept was great, but with Apple’s iPad introduction in 2010, I immediately suggested that smartbooks were DOA, a point that Qualcomm confirmed nine months later.
Some current netbook owners will continue to cling to their devices, mainly because these machines meet their need for Microsoft Windows applications in a small laptop, and that’s fine: One should always use the best tool for the task at hand.
Our tasks, in terms of computing needs, however, have changed. Legacy application suites are getting replaced by a seemingly never-ending stream of smartphone and tablet applications. Cloud services for productivity and storage are the new Microsoft Office and hard drive. Touch computing is becoming the norm, not the exception, and mobile operating systems are optimized for it. Simply put: Netbooks are just another example of old-school computing, and the world is moving on. Farewell, netbooks; it was fun while it lasted.
I’d be remiss if I didn’t mention Google’s Chromebook initiative as it can appear on the surface that the company is continuing to offer a netbook experience: Low-cost, small laptops that run for hours at a time. There’s one key difference, however: The entire interface is a modern desktop browser that works as a jack-of-all-trades for creating and consuming web content. Best of all, the simplicity of the software brings all the benefits of the web without the distractions, upkeep or power-consuming features brought by a legacy desktop environment.
That doesn’t mean I think Chromebooks will take over the world as netbooks were expected to do, but the different software approach and deep integration with Google services give Chromebooks a chance to survive beyond the age of netbooks.

Neuristors: The future of brain-like computer chips

Neuristors: The future of brain-like computer chips:
Hewitt Crane was a practical-minded kind of guy. To help the world get a better feel for just how much oil it used in a year, he came up with a unit he called the cubic mile of oil (CMO), to considerable acclaim. Crane was also one of the pioneers of computing: an early developer of magnetic core RAM, eye-tracking devices, and pen input devices, and the inventor of the first all-magnetic computers, which still find extensive use in fail-safe military systems. Today, another kind of device he presciently envisioned back in 1960 is starting to attract attention: the neuristor.
A neuristor is the simplest possible device that can capture the essential property of a neuron: the ability to generate a spike, or impulse of activity, when some threshold is exceeded. A neuristor can be thought of as a slightly leaky balloon that receives inputs in the form of puffs of air; when its limit is reached, it pops. The only major difference is that more complex neuristors can repeat the process again and again, as long as spikes occur no faster than a certain recharge time known as the refractory period. A neuristor uses a relatively simple electronic circuit to generate spikes. Incoming signals charge a capacitor that is placed in parallel with a device called a memristor. The memristor behaves like a resistor, except that once the small currents passing through it start to heat it up, its resistance rapidly drops off. When that happens, the charge built up on the capacitor by incoming spikes discharges, and there you have it: a spiking neuron built from just two elementary circuit elements.
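To make the capacitor-plus-memristor story concrete, here is a minimal, dimensionless simulation sketch of that two-element circuit. Every constant below is an assumption picked so the dynamics are easy to see; none are parameters of HP’s actual NbO2 device.
```python
# Toy neuristor: a capacitor in parallel with a memristor whose resistance
# collapses once current heats it past a threshold. All values are
# normalized and illustrative, not measured device constants.

def simulate_neuristor(i_in=1.2, dt=0.01, steps=4000,
                       r_cold=1.0, r_hot=0.01, heat_threshold=1.0, tau=1.0):
    v = 0.0      # voltage on the capacitor (capacitance normalized to 1)
    heat = 0.0   # memristor temperature above ambient (normalized)
    hot = False
    spikes = []
    for step in range(steps):
        r = r_hot if heat > heat_threshold else r_cold
        if r == r_hot and not hot:
            spikes.append(step * dt)           # resistance just collapsed: a spike
        hot = r == r_hot
        i_mem = v / r                          # current leaking through the memristor
        v += dt * (i_in - i_mem)               # capacitor integrates the difference
        heat += dt * (i_mem * v - heat / tau)  # Joule heating vs. cooling to ambient
    return spikes

times = simulate_neuristor()
intervals = [round(b - a, 2) for a, b in zip(times, times[1:])]
print(len(times), "spikes; the roughly equal gaps are the refractory period:", intervals[:5])
```
The spiking is a relaxation oscillation: the capacitor charges until the heated memristor’s resistance collapses and dumps the charge, and the cooling time between discharges plays the role of the refractory period.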
The details of trying to build neuristors have been largely unglamorous. In fact, most people trying to create artificial brains with neuristor-like elements have been content to simulate them on regular old digital computers. The Spaun computer, comprising 2.5 million simulated neurons, is a recent example. A group at HP Labs has now come up with an improved method of fabricating a neuristor made from niobium dioxide (NbO2). For now, their prototype devices are too inefficient to be used in large numbers on a single chip, but the group is developing ways to make them more compatible with current fabrication methods. This new study does not dismiss the neuromorphic community’s long-standing efforts to build analog neurons into chips; it is just the next step toward getting people on board with neuristor hardware instead of neuristor simulation.

When spikes beat bits

In a good electronics course, the first and simplest computer you usually build is an analog device that uses just a handful of simple amplifier chips to map and solve a rudimentary equation. The fact that digital computers waste a lot of energy just to do a small amount of computation was not lost on the energy-conscious Crane. Taking a cue from the brain, he formulated ideas for building devices that would exchange spikes rather than bits. In principle, for spiking networks, the only thing that absolutely requires energy is the spike. A spike doesn’t need any ordered reference structure the way a bit does; it can come to represent whatever is desired.
A group of fireflies exchanging flashes, crows cawing, or locusts emitting seasonal calls are all examples of spiking networks. Each individual integrates these spikes of information, emitted in the local currency by its peers, and feeds spikes back to the system as the behavior of the group evolves. What the spikes eventually come to represent is in many ways indefinably unique to the particular unit that generates them.
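As a toy illustration of that firefly-style spiking network, here is a small pulse-coupled simulation. Each unit charges toward a firing threshold at its own pace, and every flash it hears from a peer nudges it closer to firing. The numbers are invented for illustration; the point is that with even modest coupling, scattered flashes tend to pull together into synchronized bursts.
```python
# Pulse-coupled "fireflies": units integrate toward a threshold, flash,
# reset, and nudge their peers. Parameters are arbitrary illustrations.

import random

def firefly_network(n=20, coupling=0.02, dt=0.01, steps=3000):
    state = [random.random() for _ in range(n)]              # random starting phases
    rate = [1.0 + 0.05 * random.random() for _ in range(n)]  # slightly mismatched clocks
    flash_counts = []
    for _ in range(steps):
        for i in range(n):
            state[i] += rate[i] * dt                         # internal charge builds
        fired = [i for i in range(n) if state[i] >= 1.0]
        for i in fired:
            state[i] = 0.0                                   # flash and reset
        if fired:
            # every flash a unit hears nudges it toward its own threshold
            for j in range(n):
                if j not in fired:
                    state[j] = min(state[j] + coupling * len(fired), 1.0)
        flash_counts.append(len(fired))
    return flash_counts

counts = firefly_network()
print("largest simultaneous flash, early:", max(counts[:300]))
print("largest simultaneous flash, late: ", max(counts[-300:]))
```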
While this unit-specific meaning would make machines a bit more difficult to interpret or reproduce, it makes them infinitely easier to program: the hardware is the software. If it becomes apparent that the system needs to internalize more information about, say, a visual scene, it just tosses in some more neuristors and lets them competitively integrate or die according to the properties of the network.
[Figure: analog vs. digital vs. neuron spiking signals]
When a newborn first opens their eyes to the world, those first rays of light modify the founding circuitry of the retina and brain that has been roughed out according to the genes. Initially random spiking activity begins to acquire meaning relative to the external world. In this way, spikes can be considered a baggage-free way to represent sensory input or motor output, among other things. It is certainly possible to represent bits with spikes, and spikes with bits, but the conversion is usually imperfect. For example, if we put an obstacle in the way of a fly in an experimental setup, the fly can adjust its flight path in about 30ms. A single neuron in the fly brain detects the visual stimulus captured by photoreceptors contacting its input neurites, and signals the flight muscle contacted directly by its output neurites. In this context we might then ask: how much information, in bits per second, is transmitted by this neuron?
If, for the sake of argument, we suppose that this particular neuron can spike at a maximum of 500Hz, it follows that in the 30ms behavioral window there would be 15 intervals, each 2ms long, where a spike may or may not occur. There are then 2^15, or 32,768, possible spike “words” that the neuron could in theory transmit. The bit rate of this neuron, at least in the given window, would therefore be 15 bits per 30ms, or more conveniently, 500 bits per second. That number is just what the neuron is transmitting to the flight muscle in its termination zone. In fact, each of the neuron’s 100 or more input neurites may be said to be integrating information at a roughly similar bit rate; we just cannot map a real-world function to them as easily at that level of detail.
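Here is that back-of-the-envelope arithmetic spelled out, using the same illustrative numbers as above: a 30ms behavioral window and a 500Hz maximum firing rate, giving 2ms bins that either contain a spike or don’t.
```python
# Counting the spike "words" a single neuron can transmit in one window.

window = 0.030                 # behavioral window, seconds
max_rate = 500                 # maximum firing rate, spikes per second
bins = int(window * max_rate)  # 15 independent 2ms intervals
words = 2 ** bins              # distinct on/off patterns the window can hold
rate_bits = bins / window      # 15 bits per 30ms

print(bins, "bins ->", words, "possible spike words")  # 15 -> 32768
print("upper bound:", rate_bits, "bits per second")    # 500.0
```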
In principle, the requirement that spikes fall within fixed windows is artificial. Bats can reportedly detect Doppler shifts in echoes down to the nanosecond level using spikes. Spikes with that kind of temporal precision, particularly when considered in the context of a network, could have much higher corresponding bit rates. Brains, or in particular neurons, use spikes in entirely different ways depending on what kind of job the neuron does and what kind of network it is in. Neurons controlling the eye muscles need to fire quickly and often to maintain eye position. On the other hand, neurons that control the movement of food through the gut need to spike rhythmically on cue and put out a constant volley of spikes at each peak in the rhythm to drive the gut muscle properly; these neurons might be likened to people doing the “wave” at a football game.
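The same counting argument shows why finer timing matters: shrinking the bin width multiplies the number of distinguishable spike words in the same window. This is purely a capacity bound, not a claim about what any real neuron achieves.
```python
# Channel capacity of the 30ms window as spike timing resolution improves.

window = 0.030
for bin_width in (2e-3, 1e-6, 1e-9):  # 2ms, 1 microsecond, 1 nanosecond bins
    bins = window / bin_width
    print(f"{bin_width:.0e}s bins -> capacity of {bins / window:,.0f} bits per second")
```
At nanosecond resolution, the nominal capacity of the window jumps from 500 bits per second to a billion, which is the sense in which bat-grade timing precision could correspond to much higher bit rates.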

Welcome to Galaxy NGC 922... mind the black holes!

Welcome to Galaxy NGC 922... mind the black holes!:
Located some 150 million light-years from Earth, NGC 922 is admittedly a little far for a quick New Year's Eve getaway. But as this Hubble image reveals, the views are great, just so long as you avoid the black holes.