Stick a fork in netbooks, they’re done
After beginning in late 2007, the age of netbooks is coming to a close. Acer and Asus, the two remaining top-tier manufacturers of the small laptops, are ceasing netbook production today, reports The Guardian’s Charles Arthur. For a computing market that appeared to have unstoppable growth early on, the rise and fall of netbooks happened quickly. It should remind us that disruptive new technologies can quickly erode a product’s market share, and even the viability of a product class itself.
An example of this change can be seen in one of my most-read posts ever here on GigaOM. Out of more than 7,500 posts I’ve written, one of my most viewed is “A quick guide to netbooks” from September 2008. No matter what news was hitting the tech cycle, this post on netbooks kept finding its way in front of readers who searched for netbook information on the web. Even a year after publication, the post was appearing on a daily basis near the top of our stats. Then 2010 arrived, and with it, the first credible consumer tablet in Apple’s iPad.
Charles Arthur provides four reasons for the netbook’s demise, but by analyzing the stats of my netbook guide post, I suggest that the revamped tablet market was the beginning of the end for netbooks. True, these are completely different products in terms of form factor, design, operating systems and supported applications. But both share an important commonality: they are relatively inexpensive mobile computing devices.
Let’s face it: There are only a few reasons that netbooks even became a “thing.” You could get one for between $200 and $400, you could run the apps you wanted to, and you could take it everywhere. The idea of a small, cheap laptop that ran all the same software your larger notebook or desktop could run was appealing at a time when the global economy began a huge downturn. The timing of netbooks was simply right.
I know because I bought the very first one available in 2007 and used it to cover the Consumer Electronics Show in 2008: All of my posts were written on a small Asus Eee PC. I later upgraded to an MSI Wind machine and then a $399 Toshiba model in 2009. For half the cost of a full-sized laptop, I had something more portable that lasted longer on a single battery charge.
The idea of a netbook then morphed into a smartbook: A small laptop that ran not on Intel chips but on the ARM chips used in smartphones. The concept was great, but with Apple’s iPad introduction in 2010, I immediately suggested that smartbooks were DOA, a point that Qualcomm confirmed nine months later.
Some current netbook owners will continue to cling to their devices, mainly because they meet their need for Microsoft Windows applications in a small laptop, and that’s fine: One should always use the best tool for the task at hand.
Our tasks, in terms of computing needs, however, have changed. Legacy application suites are getting replaced by a seemingly never-ending stream of smartphone and tablet applications. Cloud services for productivity and storage are the new Microsoft Office and hard drive. Touch computing is becoming the norm, not the exception, and mobile operating systems are optimized for it. Simply put: Netbooks are just another example of old-school computing, and the world is moving on. Farewell, netbooks; it was fun while it lasted.
I’d be remiss if I didn’t mention Google’s Chromebook initiative, since on the surface the company can appear to be continuing the netbook experience: Low-cost, small laptops that run for hours at a time. There’s one key difference, however: The entire interface is a modern desktop browser that works as a jack-of-all-trades for creating and consuming web content. Best of all, the simplicity of the software brings all the benefits of the web without the distractions, upkeep or power-consuming features of a legacy desktop environment.
That doesn’t mean I think Chromebooks will take over the world as netbooks were expected to do, but the different software approach and deep integration with Google services give Chromebooks a chance to survive beyond the age of netbooks.
Welcome to DJ's Junk Drawer.
I will unofficially update this website on random dates within any random time interval.
Monday, December 31, 2012
Neuristors: The future of brain-like computer chips
Hewitt Crane was a practical-minded kind of guy. To help the world get a better feel for just how much oil it used in a year, he came up with a unit he called the cubic mile of oil (CMO), to considerable acclaim. Crane was actually one of the pioneers of computing. He was an early developer of magnetic core RAM, eye-tracking devices, and pen input devices, and he invented the first all-magnetic computers, which still find extensive use in fail-safe military systems. Today, another kind of device he presciently envisioned back in 1960 is starting to attract attention — the neuristor.
A neuristor is the simplest possible device that can capture the essential property of a neuron — that is, the ability to generate a spike or impulse of activity when some threshold is exceeded. A neuristor can be thought of as a slightly leaky balloon that receives inputs in the form of puffs of air. When its limit is reached, it pops. The only major difference is that more complex neuristors can repeat the process again and again, as long as spikes occur no faster than a certain recharge period known as the refractory period. A neuristor uses a relatively simple electronic circuit to generate spikes. Incoming signals charge a capacitor that is placed in parallel with a device called a memristor. The memristor behaves like a resistor except that once the small currents passing through it start to heat it up, its resistance rapidly drops off. When that happens, the charge built up on the capacitor by incoming spikes discharges, and there you have it — a spiking neuron comprised of just two elementary circuit elements.
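To make the charge-and-fire dynamic concrete, here is a minimal simulation sketch of that two-element circuit in Python. All constants, and the shortcut of treating the hot-memristor discharge as instantaneous, are illustrative assumptions rather than parameters of any real device:

```python
# Minimal sketch of a neuristor: a capacitor in parallel with a memristor.
# While the memristor is cool its resistance is high, so input current
# charges the capacitor; past a threshold the resistance collapses and the
# capacitor dumps its charge as a spike. All values are illustrative.

DT = 1e-6          # time step (s)
C = 1e-9           # capacitance (F)
R_COLD = 1e6       # memristor resistance while cool (ohms)
V_THRESHOLD = 1.0  # voltage at which the resistance collapse "fires" (V)
REFRACTORY = 200   # steps of enforced quiet after a spike (recharge period)

def simulate(input_current, steps=5000):
    """Integrate the capacitor voltage and return the spike times (s)."""
    v, quiet, spikes = 0.0, 0, []
    for step in range(steps):
        if quiet > 0:                  # still recharging after a spike
            quiet -= 1
            continue
        # RC charging with leak through the cool memristor.
        v += DT * (input_current - v / R_COLD) / C
        if v >= V_THRESHOLD:
            # Approximate the hot, low-resistance discharge as instantaneous.
            spikes.append(step * DT)
            v = 0.0
            quiet = REFRACTORY
    return spikes

print(simulate(input_current=2e-6))    # regular spiking, roughly every 0.9 ms
```

Driven with a constant input current above its leak level, the sketch produces exactly the behavior described above: regular spiking gated by a refractory period.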
The details of trying to build neuristors have been largely unglamorous. In fact, most people trying to create artificial brains with neuristor-like elements have been content to do so by simulating them on regular old digital computers. The Spaun computer, with its 2.5 million simulated neurons, is a recent example. A group at HP Labs has come up with an improved method of fabricating a neuristor made from niobium dioxide (NbO2). For now, their prototype devices are too inefficient to be used in large numbers on a single chip, but the group is developing ways to make them more compatible with current fabrication methods. This new study does not dismiss the long-standing efforts of the neuromorphic community to build analog neurons into chips; it is just the next step towards getting people on board with neuristor hardware instead of neuristor simulation.
When spikes beat bits
In a good electronics course, the first and simplest computer you usually build is an analog device that uses just a handful of simple amplifier chips to map and solve a rudimentary equation. The fact that digital computers waste a lot of energy just to do a small amount of computation was not lost on the energy-conscious mind of Crane. Taking a cue from the brain, he formulated ideas for building devices that would exchange spikes rather than bits. In principle, for spiking networks, the only thing that absolutely requires energy is the spike. A spike doesn’t need any ordered reference structure like a bit — it can become whatever is desired.

A group of fireflies exchanging flashes, crows cawing, or locusts emitting seasonal calls are all examples of spiking networks. The individual integrates these spikes of information, emitted in the local currency by its peers, and feeds spikes back to the system as the behavior of the group evolves. What the spikes eventually come to represent is in many ways indefinably unique to the particular unit that generates them.
While this would make such machines a bit more difficult to interpret or reproduce, it makes them infinitely easier to program — the hardware is the software. If it becomes apparent that the system needs to internalize more information about, say, a visual scene, it just tosses in some more neuristors and lets them competitively integrate or die according to the properties of the network.
When a newborn first opens their eyes to the world, those first rays of light modify the founding circuitry of the retina and brain that has been roughed out according to the genes. Initially random spiking activity begins to acquire meaning relative to the external world. In this way, spikes can be considered a baggage-free way to represent sensory input or motor output, among other things. It is certainly possible to represent bits with spikes, and spikes with bits, but the conversion is usually imperfect. For example, if we put an obstacle in the way of a fly in an experimental setup, the fly can make an adjustment to its flight path in about 30ms. A single neuron in the fly brain detects the visual stimulus captured by photoreceptors contacting its input neurites, and signals the flight muscle contacted directly by its output neurites. In this context we might then ask: how much information, in bits per second, is transmitted by this neuron?
If, for the sake of argument, we suppose that this particular neuron can spike at a maximum of 500Hz, it follows that in the 30ms behavioral window there would be 15 intervals, each 2ms long, in which a spike may or may not occur. There are then 2^15, or 32,768, possible spike “words” that the neuron could in theory transmit. The bit rate of this neuron, at least in the given window, would therefore be 15 bits per 30ms or, more conveniently, 500 bits per second. That number is just what the neuron is transmitting to the flight muscle in its termination zone. In fact, each of the neuron’s 100 or more input neurites may be said to be integrating information at a roughly similar bit rate; we just cannot map a real-world function to them as easily at that level of detail.
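A few lines of Python reproduce that back-of-the-envelope arithmetic; the 500Hz ceiling and the 30ms window are the only inputs, both taken from the text above:

```python
# Bit-rate estimate for the fly neuron described above.
max_rate_hz = 500                 # assumed maximum spike rate
window_s = 0.030                  # behavioral window (30 ms)

bin_s = 1 / max_rate_hz           # 2 ms: smallest usable spike interval
n_bins = round(window_s / bin_s)  # 15 spike-or-no-spike slots per window
n_words = 2 ** n_bins             # 32,768 distinct spike "words"

bits_per_second = n_bins / window_s
print(n_bins, n_words, bits_per_second)   # 15 32768 500.0
```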
In principle, the requirement that spikes fall within windows is artificial. Bats can reportedly detect Doppler shifts in echoes down to the nanosecond level using spikes. Spikes with that kind of temporal precision, particularly when considered in the context of a network, could have much higher corresponding bit rates. Brains, or in particular neurons, use spikes in entirely different ways depending on what kind of job the neuron does and what kind of network it is in. Neurons controlling the eye muscles need to fire quickly and often to maintain eye position. On the other hand, neurons that control the movements of food through the gut need to spike rhythmically on cue and be able to put out a constant volley of spikes at each peak in the rhythm to drive the gut muscle properly — these neurons might be likened to performing the “wave” at a football game.
Welcome to Galaxy NGC 922... mind the black holes!
Located some 150 million light-years from Earth, NGC 922 is admittedly a little far for a quick New Year's Eve getaway. But as this Hubble image reveals, the views are great, just so long as you avoid the black holes.
Tuesday, May 29, 2012
Google Apps For Business Gets ISO 27001 Certification
Google just announced that its Google Apps for Business service has earned ISO 27001 certification. This certifies that Google is following the standard ISO information security management protocols and best practices “for the systems, technology, processes and data centers serving Google Apps for Business.”
If you’re a startup or individual user, chances are you don’t care too much about whether a company you are working with is following any of the ISO’s over 19,000 standards. This certification, however, will likely give larger and more highly regulated businesses (and the executives who sign off on these deals) the necessary reassurances that moving to Google’s cloud solutions is safe.
This ISO 27001 certification, issued by Ernst & Young CertifyPoint, an ISO certification body accredited by the Dutch Accreditation Council, follows Google’s previous FISMA certification for its Google Apps for Government product. Google also regularly submits itself to third-party audits according to the SSAE 16 / ISAE 3402 standard, which is quite comparable to the ISO 27001 standard.
It’s important to note, though, that this certification is not a security “seal of approval,” as security consultant Alec Muffett told Computer World UK’s Anh Nguyen, and does not “guarantee that the applications are 100% secure.” Instead, says Muffett, companies that apply for ISO 27001 certification get “to design their own high-jump bar, document how tall it is and what it is made of, how they intend to jump over it and then they jump over it.”
Google’s Eran Feigenbaum, the company’s director of security for its Google Enterprise group, believes that “businesses are beginning to realize that companies like Google can invest in security at a scale that’s difficult for many businesses to achieve on their own.” While most of Google’s competitors focus on getting their data centers certified, Google argues that its certification is broader, covering its networking infrastructure and applications as well.
Given that all of this isn’t the most exciting news in the world – especially not on a national holiday in the U.S. – here is a video of Feigenbaum, who is also a mentalist, playing Russian roulette with a nail gun:
Saturday, May 26, 2012
The Supernovae That Burn at the Center and the Edges of Everything [Space Porn]
From Earth, the Pinwheel Galaxy looks to us like just a pinprick of light in the Big Dipper. But it's an enormous galaxy that's twice the size of our own. And in this new image of the galaxy from NASA, you can see that the Pinwheel is bursting with supernovae — the purple regions in this image highlight areas of extreme heat, where stars are exploding. What's interesting is that these gigantic explosions are happening both in the center and at the extreme edges of the galaxy's arms.
If you are elderly and poor, prison is better than a retirement home [Video]
If you are elderly, with no family and little money, your options are slim if you need living assistance.
Friday, May 25, 2012
Revisiting why incompetents think they're awesome
In 1999 a pair of researchers published a paper called "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." David Dunning and Justin Kruger (both at Cornell University's Department of Psychology at the time) conducted a series of four studies showing that, in certain cases, people who are very bad at something think they are actually pretty good. They showed that to assess your own expertise at something, you need to have a certain amount of expertise already.
Remember the 2008 election campaign? The financial markets were going crazy, and banks that were "too big to fail" were bailed out by the government. Smug EU officials proclaimed that all was well within the EU—even while they were bailing out a number of financial institutions. Fast forward to 2012, and the EU is looking at hard times. Greece can't pay its debt. Italy can, but the markets don't trust it to be able to. Spain and Portugal are teetering around like toddlers just waiting for the market to give them one good push. Members of the public are behaving like teenagers, screaming "F**k you," while flipping the bird. The markets are reacting like drunk parents, and the resulting bruises are going to take a while to heal.
In all of this, uninformed idiots blame the Greeks for being lazy, the Germans for being too strict, and everyone but themselves. Newspapers, blogs, and television are filled with wise commentary hailing the return of the gold standard, the breakup of the Euro, or any number of sensible and not-so-sensible ideas. How are we to parse all this information? Do any of these people know what they are talking about? And if anyone does, how can we know which ones to listen to? The research of Dunning and Kruger may well tell us there is no way to figure out the answers to any of these questions. That is kind of scary.
Apple CEO Tim Cook's Stock Rises With Choice to Turn Down $75 Million Dividend
Apple CEO Tim Cook is proving himself as much a master of employee and investor relations as he is of operational efficiency. His decisions to create a charitable matching program for Apple employees and to grant a long-pined-for dividend to company shareholders have won him a lot of favor among both groups, while putting his own stamp on Apple. And now Cook has made another move for which he’s likely to win accolades.
Cook is forgoing $75 million in dividends to which he’s entitled.
In a Thursday SEC filing, Apple announced plans to award a $2.65-a-share quarterly dividend on restricted stock units held by its employees. It’s a nice — and unusual — perk to offer (and one certain to cement employee loyalty in a very competitive talent arena), but Cook is passing it up.
From Apple’s 8-K:
“At Mr. Cook’s request, none of his restricted stock units will participate in dividend equivalents. Assuming a quarterly dividend of $2.65 per share over the vesting periods of his 1.125 million outstanding restricted stock units, Mr. Cook will forego approximately $75 million in dividend equivalent value.”

That’s a lot of money to turn down. True, Cook is very well compensated — deservedly so, considering Apple’s performance — so he can obviously afford to forgo it. But, as best I can tell, he didn’t have to.
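The quoted figure is easy to sanity-check with a quick sketch, assuming a vesting horizon of roughly 25 quarters (the excerpt does not state the horizon; the value is inferred here so the total lands near $75 million):

```python
# Rough check of the dividend-equivalent value Cook is forgoing.
rsus = 1_125_000    # outstanding restricted stock units (from the 8-K)
dividend = 2.65     # quarterly dividend per share, dollars (from the 8-K)
quarters = 25       # assumed vesting horizon; not stated in the excerpt

per_quarter = rsus * dividend
total = per_quarter * quarters
print(f"${per_quarter:,.0f} per quarter, ${total:,.0f} total")
# -> $2,981,250 per quarter, $74,531,250 total: in line with the ~$75M figure
```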
So Cook truly did just walk away from $75 million. Which is remarkable for an executive of his standing in an era when entitlement, greed and arrogance are so often part of the job description. Which is not to say that he’s not reaping some benefits here. There’s a lot of mileage for Apple in a symbolic gesture like this, and Cook profits when Apple’s overall value increases.
Say what you will, but this was a classy gesture up and down. When was the last time you saw headlines about a successful CEO of a wildly successful company walking away from millions of dollars that he could have just as easily pocketed?
Clearly, Cook is focused on more important and interesting things than having the biggest yacht in the harbor.
Wednesday, May 23, 2012
This foamy snake demonstrates why blood bubbles under disinfectant [Video]
Ever cut yourself, have someone put a dab of hydrogen peroxide on the cut, watch the entire area fizz like something that would turn you into the Joker, and wonder, "How is this good for me?" A simple chemistry demonstration that you can do at home explains it all. Take a look at foam snakes.