Sunday, July 31, 2011

Existence: Where did my consciousness come from?


THINK for a moment about a time before you were born. Where were you? Now think ahead to a time after your death. Where will you be? The brutal answer is: nowhere. Your life is a brief foray on Earth that started one day for no reason and will inevitably end.

But what a foray. Like the whole universe, your consciousness popped into existence out of nothingness and has evolved into a rich and complex entity full of wonder and mystery.

Contemplating this leads to a host of mind-boggling questions. What are the odds of my consciousness existing at all? How can such a thing emerge from nothingness? Is there any possibility of it surviving my death? And what is consciousness anyway?

Answering these questions is incredibly difficult. Philosopher Thomas Nagel once asked, "What is it like to be a bat?" Your response might be to imagine flying around in the dark, seeing the world in the echoes of high-frequency sounds. But that isn't the answer Nagel was looking for. He wanted to emphasise that there is no way of knowing what it is like for a bat to feel like a bat. That, in essence, is the conundrum of consciousness.

Neuroscientists and philosophers fall into two broad camps. One thinks that consciousness is an emergent property of the brain and that once we fully understand the intricate workings of neuronal activity, consciousness will be laid bare. The other doubts it will be that simple. They agree that consciousness emerges from the brain, but argue that Nagel's question will always remain unanswered: knowing every detail of a bat's brain cannot tell us what it is like to be a bat. This is often called the "hard problem" of consciousness, and seems scientifically intractable - for now.
Meanwhile, "there are way too many so-called easy problems to worry about", says Anil Seth of the University of Sussex in Brighton, UK.

One is to look for signatures of consciousness in brain activity, in the hope that this takes us closer to understanding what it is. Various brain areas have been found to be active when we are conscious of something and quiet when we are not. For example, Stanislas Dehaene of the French National Institute of Health and Medical Research in Gif-sur-Yvette and colleagues have identified such regions in our frontal and parietal lobes (Nature Neuroscience, vol 8, p 1391).

Consciousness explained

This is consistent with a theory of consciousness proposed by Bernard Baars of the Neuroscience Institute in San Diego, California. He posited that most non-conscious experiences are processed in specialised local regions of the brain such as the visual cortex. We only become conscious of this activity when the information is broadcast to a network of neurons called the global workspace - perhaps the regions pinpointed by Dehaene.

But others believe the theory is not telling the whole story. "Does global workspace theory really explain consciousness, or just the ability to report about consciousness?" asks Seth.

Even so, the idea that consciousness seems to be an emergent property of the brain can take us somewhere. For example, it makes the odds of your own consciousness existing the same as the odds of you being born at all, which is to say, very small. Just think of that next time you suffer angst about your impending return to nothingness.

As for whether individual consciousness can continue after death, "it is extremely unlikely that there would be any form of self-consciousness after the physical brain decays", says philosopher Thomas Metzinger of the Johannes Gutenberg University in Mainz, Germany.
Extremely unlikely, but not impossible. Giulio Tononi of the University of Wisconsin-Madison argues that consciousness is the outcome of how complex matter, including the brain, integrates information.

"According to Tononi's theory, if one could build a device or a system that integrated information exactly the same way as a living brain, it would generate the same conscious experiences," says Seth. Such a machine might allow your consciousness to survive death. But it would still not know what it is like to be a bat.

Existence: Why is the universe just right for us?


IT HAS been called the Goldilocks paradox. If the strong nuclear force, which glues atomic nuclei together, were only a few per cent stronger than it is, stars like the sun would exhaust their hydrogen fuel in less than a second. Our sun would have exploded long ago and there would be no life on Earth. If the weak nuclear force were a few per cent weaker, the heavy elements that make up most of our world wouldn't be here, and neither would you.

If gravity were a little weaker than it is, it would never have been able to crush the core of the sun sufficiently to ignite the nuclear reactions that create sunlight; a little stronger and, again, the sun would have burned all of its fuel billions of years ago. Once again, we could never have arisen.
Such instances of the fine-tuning of the laws of physics seem to abound. Many of the essential parameters of nature - the strengths of fundamental forces and the masses of fundamental particles - seem fixed at values that are "just right" for life to emerge. A whisker either way and we would not be here. It is as if the universe was made for us.

What are we to make of this? One possibility is that the universe was fine-tuned by a supreme being - God. Although many people like this explanation, scientists see no evidence that a supernatural entity is orchestrating the cosmos. The known laws of physics can explain the existence of the universe that we observe. To paraphrase astronomer Pierre-Simon Laplace when asked by Napoleon why his book Mécanique Céleste did not mention the creator: we have no need of that hypothesis.
Another possibility is that it simply couldn't be any other way. We find ourselves in a universe ruled by laws compatible with life because, well, how could we not?

This could seem to imply that our existence is an incredible slice of luck - of all the universes that could have existed, we got one capable of supporting intelligent life. But most physicists don't see it that way.

The most likely explanation for fine-tuning is possibly even more mind-expanding: that our universe is merely one of a vast ensemble of universes, each with different laws of physics. We find ourselves in one with laws suitable for life because, again, how could it be any other way?
The multiverse idea is not without theoretical backing. String theory, our best attempt yet at a theory of everything, predicts at least 10^500 universes, each with different laws of physics. To put that number into perspective, there are an estimated 10^25 grains of sand in the Sahara desert.

Fine-tuned fallacy

Another possibility is that there is nothing to explain. Some argue that the whole idea of fine-tuning is wrong. One vocal critic is Victor Stenger of the University of Colorado in Boulder, author of The Fallacy of Fine-tuning. His exhibit A concerns one of the pre-eminent examples of fine-tuning, the unlikeliness of the existence of anything other than hydrogen, helium and lithium.

All the heavy elements in your body, including carbon, nitrogen, oxygen and iron, were forged inside distant stars. In 1952, cosmologist Fred Hoyle argued that the existence of these elements depends on a huge cosmic coincidence. One of the key steps to their formation is the "triple alpha" process in which three helium nuclei fuse together to form a carbon-12 nucleus. For this reaction to occur, Hoyle proposed that the energy of the carbon-12 nucleus must be precisely equal to the combined energy of three helium nuclei at the typical temperature inside a red giant star. And so it is.
However, Stenger points out that in 1989 a team at the Technion-Israel Institute of Technology in Haifa showed that, actually, the carbon-12 energy level could have been significantly different and still resulted in the heavy elements required for life.

There are other problems with the fine-tuning argument. One is the fact that examples of fine-tuning are found by taking a single parameter - a force of nature, say, or a subatomic particle mass - and varying it while keeping everything else constant. This seems very unrealistic. The theory of everything, which alas we do not yet possess, is likely to show intimate connections between physical parameters. The effect of varying one may very well be compensated for by variations in another.
Then there is the fact that we only have one example of life to go on, so how can we be so sure that different laws could not give rise to some other living system capable of pondering its own existence?

One example of fine-tuning, however, remains difficult to dismiss: the accelerating expansion of the universe by dark energy. Quantum theory predicts that the strength of this mysterious force should be about 10^120 times larger than the value we observe.
This discrepancy seems extraordinarily fortuitous. According to Nobel prizewinner Steven Weinberg, if dark energy were not so tiny, galaxies could never have formed and we would not be here. The explanation Weinberg grudgingly accepts is that we must live in a universe with a "just right" value for dark energy. "The dark energy is still the only quantity that appears to require a multiverse explanation," admits Weinberg. "I don't see much evidence of fine-tuning of any other physical constants."


Friday, July 29, 2011

Electric dolphins: cetaceans with a seventh sense

One extra sense isn't quite enough for Guiana dolphins. In addition to echolocation, they can sense the electric fields of their prey – the first time electroreception has been seen in placental mammals.
Wolf Hanke at the University of Rostock in Germany and colleagues were intrigued by thermal images showing intense physiological activity in the pits on the upper jaw of the dolphins, Sotalia guianensis. Fish, some amphibians and primitive egg-laying mammals such as the duck-billed platypus use similar pits to pick up electric fields generated by nearby animals.
By examining the structures in a dead dolphin, and training a live one to respond to an electric field comparable to that generated by a fish, the team showed that dolphins also have electro-sensory perception.
"Electroreception is good for sensing prey over short distances, where echolocation isn't so effective," says Hanke. Other species of dolphin, and even whales, may be similarly gifted, he says. "Most people don't realise that whales also feed on the floor of the ocean, so it is possible that they also use electrosensing."
Hanke points out that the electro-sensory organs are derived from whiskers in ancestral animals. These mechanoreceptor organs, like the hair cells in the human ear, mechanically transmit the stimulus of touch or sound waves. The adaptation in Guiana dolphins is fairly new, Hanke says, and he suspects that "it is relatively easy to evolve, to change mechanoreceptor organs into electroreceptors".

'Fluid cloak' to help submarines leave no wake

SUPER-STEALTHY submarines may one day glide through the water without creating a wake, if a plan to channel fluid intelligently around objects can be made to work.
A vehicle moving through a fluid normally disturbs the medium in two ways. First, some of the fluid gets dragged along with the vehicle, sapping its energy and slowing it down. Second, a turbulent wake forms behind it where fluid rushes in to fill the vacant space. The churning fluid in the wake in turn creates noise that reveals the vehicle's presence.
But channelling the fluid around the object in just the right way could solve both problems at once.

To do this, Yaroslav Urzhumov and David Smith of Duke University in Durham, North Carolina, propose encasing the object in a mesh shell.
Crucially, the permeability of this mesh casing should vary from place to place to alter the speed of fluid flowing through it. This means that the shell and the object it contains would leave no lasting impression in the fluid - the fluid would exit the shell at exactly the same speed and in the same direction as it entered.
They modelled the pattern of permeability needed to make a sphere undetectable in fluid. The pattern was complex, with some spots having to accelerate the fluid flowing through them. To do that, the researchers propose embedding tiny pumps in the material to boost the flow rate. Pumps that are mere millimetres across already exist for biomedical devices.
The overall effect of their pattern is to initially accelerate the incoming fluid near the front of the shell, then to let it slow back down to its original speed at the back of the shell before it exits.
Since there is no net change to the motion of the fluid when the vehicle passes through it, there is no drag and no turbulent wake. The fluid closes seamlessly around the vehicle, as if it had never been there. "It's possible to have this structure glide through the fluid without disturbing it at all," says Urzhumov.
For the pattern in the mesh to work, there is a trade-off between the sphere's size and its speed. Steven Ceccio of the University of Michigan in Ann Arbor cautions that the "fluid cloaking" is only complete for small and slow-moving objects. For example, a vehicle 1 centimetre across could only stay drag and wake-free at speeds of less than 1 centimetre per second, he says: "If the object gets bigger, the [limiting] speed goes down even more."
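The quoted trade-off behaves like a fixed-Reynolds-number constraint. As a rough sketch - our reading, not the paper's - suppose cloaking only holds below a Reynolds number Re = vL/ν of about 100 in water; the limiting speed then falls inversely with size:

```python
NU_WATER = 1.0e-6   # kinematic viscosity of water, m^2/s
RE_MAX = 100.0      # assumed limiting Reynolds number (our illustration)

def max_speed(length_m):
    """Fastest speed (m/s) keeping Re = v * L / nu below the assumed limit."""
    return RE_MAX * NU_WATER / length_m

print(max_speed(0.01))  # ~0.01 m/s: a 1 cm object at about 1 cm/s, as quoted
print(max_speed(1.0))   # ~0.0001 m/s: a 1 m object would have to crawl
```

This matches the pattern Ceccio describes: a tenfold increase in size costs a tenfold drop in wake-free speed.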
But Urzhumov says it might be possible to develop mesh patterns that will work for larger objects or different shapes. And he argues that the fluid-cloaking pattern in this study could still reduce drag and weaken the wakes of larger and faster vehicles, even if it does not completely eliminate them.

Stegobot steals passwords from your Facebook photos

THINK twice before uploading your holiday pictures to Facebook - you could be helping someone to steal information from your computer. A botnet called Stegobot was created to show how easy it would be for a crook to hijack Facebook photos to create a secret communication channel that is very difficult to detect.
Like most botnets, Stegobot gains control of computers by tricking users into opening infected email attachments or visiting suspect websites. But rather than contacting the botmasters directly, it piggybacks on the infected user's normal social network activity. "If one of your friends is a friend of a friend of the botmaster, the information transfers hop by hop within the social network, finally reaching the botmasters," says Amir Houmansadr, a computer scientist at the University of Illinois at Urbana-Champaign who worked on the botnet.
Stegobot takes advantage of a technique called steganography to hide information in picture files without changing their appearance. It is possible to store around 50 kilobytes of data in a 720 by 720 pixel image - enough to transmit any passwords or credit card numbers that Stegobot might find on your hard drive.
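The idea can be made concrete with a toy least-significant-bit (LSB) embedder - our own illustrative sketch, not Stegobot's actual scheme, which is more sophisticated and harder to detect. Here the image is treated as a flat list of 0-255 channel values:

```python
def lsb_embed(pixels, payload):
    """Hide payload bytes in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this image")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the low bit, then set it
    return out

def lsb_extract(pixels, n_bytes):
    """Read the hidden bytes back out of the low bits."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return bytes(data)

# A 720 x 720 RGB image has 720*720*3 = 1,555,200 channel values, so one
# bit per value gives ~190 KB of raw capacity; practical schemes embed
# much less than that to stay statistically undetectable.
cover = [200] * 256            # toy stand-in for image pixel data
secret = b"hunter2"            # stand-in for harvested credentials
stego = lsb_embed(cover, secret)
assert lsb_extract(stego, len(secret)) == secret
```

The altered image differs from the original by at most one intensity level per channel, which is invisible to the eye.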
The botnet inserts this information into any photo you upload to Facebook, and then waits for one of your friends to look at your profile. They don't even have to click on the photo, as Facebook helpfully downloads files in the background. If your friend is also infected with the botnet - quite likely, since any email you send them will pass it on - any photo they upload will also pass on the stolen data.
From there, the data will eventually make its way to the account of someone who is also friends with the botmaster, allowing them to extract details on your identity. The botmasters can also send commands to the botnet through the reverse process - uploading a photo with hidden instructions that make their way to infected computers.
"It's scary because it's virtually undetectable," says Shishir Nagaraja of the Indraprastha Institute of Information Technology, New Delhi, India, who led the project.
Marco Cova, a computer scientist at the University of Birmingham, UK, says that criminals could employ a system like Stegobot, as it is hard to detect, but other methods allow them to steal much larger quantities of data. "It's not the most efficient or convenient way," he says.

Read about Stegobot in the following pdf: 
http://www.hatswitch.org/~sn275/papers/stegobot.pdf

Will Li-Fi be the new Wi-Fi?

FLICKERING lights are annoying but they may have an upside. Visible light communication (VLC) uses rapid pulses of light to transmit information wirelessly. Now it may be ready to compete with conventional Wi-Fi.
"At the heart of this technology is a new generation of high-brightness light-emitting diodes," says Harald Haas from the University of Edinburgh, UK. "Very simply, if the LED is on, you transmit a digital 1, if it's off you transmit a 0," Haas says. "They can be switched on and off very quickly, which gives nice opportunities for transmitting data."
It is possible to encode data in the light by varying the rate at which the LEDs flicker on and off to give different strings of 1s and 0s. The LED intensity is modulated so rapidly that human eyes cannot notice, so the output appears constant.
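As a minimal sketch of this on-off keying (our illustration, not Haas's implementation), each byte becomes eight light pulses:

```python
def ook_modulate(data):
    """Turn bytes into a sequence of light states: 1 = LED on, 0 = LED off."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def ook_demodulate(levels):
    """Recover bytes from sampled light levels (1 = bright, 0 = dark)."""
    out = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for bit in levels[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

pulses = ook_modulate(b"Li-Fi")
assert ook_demodulate(pulses) == b"Li-Fi"
# At, say, 10 million pulses per second the flicker is far too fast for
# the eye, so the lamp looks steadily lit while carrying ~10 Mb/s.
```

A real receiver would sample a photodiode and recover the clock; this sketch assumes perfectly timed samples.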
More sophisticated techniques could dramatically increase VLC data rates. Teams at the University of Oxford and the University of Edinburgh are focusing on parallel data transmission using arrays of LEDs, where each LED transmits a different data stream. Other groups are using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel.
Li-Fi, as it has been dubbed, has already achieved blisteringly high speeds in the lab. Researchers at the Heinrich Hertz Institute in Berlin, Germany, have reached data rates of over 500 megabits per second using a standard white-light LED. Haas has set up a spin-off firm to sell a consumer VLC transmitter that is due for launch next year. It is capable of transmitting data at 100 Mb/s - faster than most UK broadband connections.
Once established, VLC could solve some major communication problems. In 2009, the US Federal Communications Commission warned of a looming spectrum crisis: because our mobile devices are so data-hungry we will soon run out of radio-frequency bandwidth. Li-Fi could free up bandwidth, especially as much of the infrastructure is already in place.
"There are around 14 billion light bulbs worldwide, they just need to be replaced with LED ones that transmit data," says Haas. "We reckon VLC is a factor of ten cheaper than Wi-Fi." Because it uses light rather than radio-frequency signals, VLC could be used safely in aircraft, integrated into medical devices and hospitals where Wi-Fi is banned, or even underwater, where Wi-Fi doesn't work at all.
"The time is right for VLC, I strongly believe that," says Haas, who presented his work at TED Global in Edinburgh last week.
But some sound a cautious note about VLC's prospects. It only works in direct line of sight, for example, although this also makes it harder to intercept than Wi-Fi. "There has been a lot of early hype, and there are some very good applications," says Mark Leeson from the University of Warwick, UK. "But I'm doubtful it's a panacea. This isn't technology without a point, but I don't think it sweeps all before it, either."

Wednesday, July 27, 2011

Existence special: Cosmic mysteries, human questions


It’s lucky you’re here.

13.7 billion years ago, the universe was born in a cosmic fireball. Roughly 10 billion years later, the planet we call Earth gave birth to life, which eventually led to you. The probability of that sequence of events is absolutely minuscule, and yet it still happened.

Take a step back from the unlikeliness of your own personal existence and things get even more mind-boggling. Why does the universe exist at all? Why is it fine-tuned to human life? Why does it seem to be telling us that there are other universes out there, even other yous?

In these articles, we confront these mysteries of existence and others, from the possibility that the universe is a hologram to the near-certainty that you are a zombie.

Existence: Where did we come from?

 

WHY are we here? Where did we come from? According to the Boshongo people of central Africa, before us there was only darkness, water and the great god Bumba. One day Bumba, in pain from a stomach ache, vomited up the sun. The sun evaporated some of the water, leaving land. Still in discomfort, Bumba vomited up the moon, the stars and then the leopard, the crocodile, the turtle, and finally, humans.
This creation myth, like many others, wrestles with the kinds of questions that we all still ask today. Fortunately, as will become clear from this special issue of New Scientist, we now have a tool to provide the answers: science.
When it comes to these mysteries of existence, the first scientific evidence arrived about 80 years ago, when Edwin Hubble began to make observations in the 1920s with the 100-inch telescope on Mount Wilson in Los Angeles County.
To his surprise, Hubble found that nearly all the galaxies were moving away from us. Moreover, the more distant the galaxies, the faster they were moving away. The expansion of the universe was one of the most important intellectual discoveries of all time.
This finding transformed the debate about whether the universe had a beginning. If galaxies are moving apart now, they must therefore have been closer together in the past. If their speed had been constant, they would all have been on top of one another billions of years ago. Was this how the universe began? At that time many scientists were unhappy with the universe having a beginning because it seemed to imply that physics had broken down.
One would have to invoke an outside agency, which for convenience one can call God, to determine how the universe began. They therefore advanced theories in which the universe was expanding at the present time, but didn't have a beginning. Perhaps the best known was proposed in 1948, and called the steady state theory.
According to this theory, the universe would have existed for ever and would have looked the same at all times. This last property had the great virtue of being a prediction that could be tested, a critical ingredient of the scientific method. And it was found lacking.
Observational evidence to confirm the idea that the universe had a very dense beginning came in October 1965, with the discovery of a faint background of microwaves throughout space. The only reasonable interpretation is that this background is radiation left over from an early hot and dense state. As the universe expanded, the radiation would have cooled until it became the faint remnant we see today.
Theory backed this idea too. With Roger Penrose I showed that if Einstein's general theory of relativity is correct, there would be a singularity, a point of infinite density and space-time curvature, where time has a beginning.
The universe started off in the big bang, expanding faster and faster - a process called inflation. Inflation in the early cosmos was extraordinarily rapid: the universe doubled in size many times in a tiny fraction of a second.
Inflation made the universe very large and very smooth and flat. However, it was not completely smooth: there were tiny variations from place to place. These variations caused minute differences in the temperature of the early universe, which we can see in the cosmic microwave background.
The variations mean that some regions expanded slightly more slowly than others. These slower regions eventually stopped expanding and collapsed again to form galaxies and stars - and, in turn, solar systems.
We owe our existence to these variations. If the early universe had been completely smooth, there would be no stars and so life could not have developed. We are the product of primordial quantum fluctuations.
As will become clear (see "Existence special: Cosmic mysteries, human questions"), many huge mysteries remain. Still, we are steadily edging closer to answering the age-old questions. Where did we come from? And are we the only beings in the universe who can ask these questions?

Existence: Why is there a universe?

 
AS DOUGLAS ADAMS once wrote: "The universe is big. Really big." And yet if our theory of the big bang is right, the universe was once a lot smaller. Indeed, at one point it was non-existent. Around 13.7 billion years ago, time and space spontaneously sprang from the void. How did that happen?
Or to put it another way: why does anything exist at all? It's a big question, perhaps the biggest. The idea that the universe simply appeared out of nothing is difficult enough; trying to conceive of nothingness is perhaps even harder.
It is also a very reasonable question to ask from a scientific perspective. After all, some basic physics suggests that you and the rest of the universe are overwhelmingly unlikely to exist. The second law of thermodynamics, that most existentially resonant of physical laws, says that disorder, or entropy, always tends to increase. Entropy measures the number of ways you can rearrange a system's components without changing its overall appearance. The molecules in a hot gas, for example, can be arranged in many different ways to create the same overall temperature and pressure, making the gas a high-entropy system. In contrast, you can't rearrange the molecules of a living thing very much without turning it into a non-living thing, so you are a low-entropy system.
By the same logic, nothingness is the highest entropy state around - you can shuffle it around all you want and it still looks like nothing.
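The counting definition of entropy can be illustrated with a toy two-state system - our example, not anything from the article itself. For N "molecules" with k in an excited state, the multiplicity W is a binomial coefficient and the dimensionless entropy is S = ln W:

```python
from math import comb, log

def entropy(N, k):
    """Dimensionless entropy S = ln W for N two-state particles, k excited."""
    return log(comb(N, k))

# A "disordered" half-and-half gas has vastly more indistinguishable
# rearrangements than a highly ordered configuration of the same 100
# particles, so its entropy is much higher:
print(entropy(100, 50))  # ~66.8 (high entropy: many rearrangements)
print(entropy(100, 1))   # ~4.6  (low entropy: few rearrangements)
```

In the same spirit, nothingness tolerates every "rearrangement" and so sits at the top of the entropy scale.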
Given this law, it is hard to see how nothing could ever be turned into something, let alone something as big as a universe. But entropy is only part of the story. The other consideration is symmetry - a quality that appears to exert profound influence on the physical universe wherever it crops up. Nothingness is very symmetrical indeed. "There's no telling one part from another, so it has total symmetry," says physicist Frank Wilczek of the Massachusetts Institute of Technology.
And as physicists have learned over the past few decades, symmetries are made to be broken. Wilczek's own speciality is quantum chromodynamics, the theory that describes how quarks behave deep within atomic nuclei. It tells us that nothingness is a precarious state of affairs. "You can form a state that has no quarks and antiquarks in it, and it's totally unstable," says Wilczek. "It spontaneously starts producing quark-antiquark pairs." The perfect symmetry of nothingness is broken. That leads to an unexpected conclusion, says Victor Stenger, a physicist at the University of Colorado in Boulder: despite entropy, "something is the more natural state than nothing".
"According to quantum theory, there is no state of 'emptiness'," agrees Frank Close of the University of Oxford. Emptiness would have precisely zero energy, far too exacting a requirement for the uncertain quantum world. Instead, a vacuum is actually filled with a roiling broth of particles that pop in and out of existence. In that sense this magazine, you, me, the moon and everything else in our universe are just excitations of the quantum vacuum.

Before the big bang

Might something similar account for the origin of the universe itself? Quite plausibly, says Wilczek. "There is no barrier between nothing and a rich universe full of matter," he says. Perhaps the big bang was just nothingness doing what comes naturally.
This, of course, raises the question of what came before the big bang, and how long it lasted. Unfortunately at this point basic ideas begin to fail us; the concept "before" becomes meaningless. In the words of Stephen Hawking, it's like asking what is north of the north pole.
Even so, there is an even more mind-blowing consequence of the idea that something can come from nothing: perhaps nothingness itself cannot exist.
Here's why. Quantum uncertainty allows a trade-off between time and energy, so something that lasts a long time must have little energy. To explain how our universe has lasted for the billions of years that it has taken galaxies to form, solar systems to coalesce and life to evolve into bipeds who ask how something came from nothing, its total energy must be extraordinarily low.
That fits with the generally accepted view of the universe's early moments, which sees space-time undergoing a brief burst of expansion immediately after the big bang. This heady period, known as inflation, flooded the universe with energy. But according to Einstein's general theory of relativity, more space-time also means more gravity. Gravity's attractive pull represents negative energy that can cancel out inflation's positive energy - essentially constructing a cosmos for nothing. "I like to say that the universe is the ultimate free lunch," says Alan Guth, a cosmologist at MIT who came up with the inflation theory 30 years ago.

Physicists used to worry that creating something from nothing would violate all sorts of physical laws such as the conservation of energy. But if there is zero overall energy to conserve, the problem evaporates - and a universe that simply popped out of nothing becomes not just plausible, but probable. "Maybe a better way of saying it is that something is nothing," says Guth.
None of this really gets us off the hook, however. Our understanding of creation relies on the validity of the laws of physics, particularly quantum uncertainty. But that implies that the laws of physics were somehow encoded into the fabric of our universe before it existed. How can physical laws exist outside of space and time and without a cause of their own? Or, to put it another way, why is there something rather than nothing?

Existence: Are we alone in the universe?

 

HAVE you ever looked up at the night sky and wondered if somebody, or something, is looking back? If perhaps somewhere out there, the mysterious spark we call life has flickered into existence?
Intuitively, it feels as if we can't be alone. For every one of the 2000 stars you can see with your naked eye, there are another 50 million in our galaxy, which is one of 100 billion galaxies. In other words, the star we orbit is just one of 10,000 billion billion in the cosmos. Surely there is another blue dot out there - a home to intelligent life like us? The simple fact is, we don't know.
One way to estimate the number of intelligent civilisations was devised by astronomer Frank Drake. His equation takes into account the rate of star formation, the fraction of those stars with planets and the likelihood that life, intelligent life, and intelligent creatures capable of communicating with us, will arise.
It is now possible to put numbers on some of those factors. We know that about 20 stars are born in the Milky Way every year and we have spotted more than 560 planets around stars other than the sun. About a quarter of stars harbour a planet similar in mass to Earth (Science, vol 330, p 653).
But estimating the biological factors is little more than guesswork. We know that life is incredibly adaptable once it emerges, but not how good it is at getting started in the first place.
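To see how the Drake equation combines these factors, here is a sketch using the observed values quoted above and pure guesses - clearly labelled as such - for the biological terms:

```python
# Drake equation: N = R* . f_p . n_e . f_l . f_i . f_c . L
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilisations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=20,    # stars formed per year in the Milky Way (observed)
    f_p=0.25,     # simplistic reading of the quarter-of-stars figure above
    n_e=1,        # habitable planets per such system (guess)
    f_l=0.1,      # fraction on which life arises (guess)
    f_i=0.01,     # fraction that evolve intelligence (guess)
    f_c=0.1,      # fraction that develop detectable technology (guess)
    L=10_000,     # years a civilisation remains detectable (guess)
)
print(N)  # roughly 5 civilisations, with these particular guesses
```

Nudge any of the guessed factors by an order of magnitude and the answer swings from "the galaxy is crowded" to "we are alone", which is exactly the problem.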

Unique planet

Some astronomers believe life is almost inevitable on any habitable planet. Others suspect simple life is common, but intelligent life exceedingly rare. A few believe that our planet is unique. "Life may or may not form easily," says physicist Paul Davies of Arizona State University in Tempe. "We're completely in the dark."
So much for equations. What about evidence? Finding life on Mars probably won't help, as it would very likely share its origin with Earthlings. "Impacts have undoubtedly conveyed microorganisms back and forth," says Davies. "Mars and Earth are not independent ecosystems."
Discovering life on Titan would be more revealing. Titan is the only other place in the solar system with liquid on its surface - albeit lakes of ethane. "We are starting to think that if there is life on Titan it would have a separate origin," says Dirk Schulze-Makuch at Washington State University in Pullman. "If we can find a separate origin we can say 'OK, there's a lot of life in the universe'."
Discovering alien microbes in our solar system would be some sort of proof that we are not alone, but what we really want to know is whether there is another intelligence out there. For 50 years astronomers have swept the skies with radio telescopes for any hint of a message. So far, nothing.
But that doesn't mean ET isn't there. It just might not know we're here. The only evidence of our existence that reaches beyond the solar system is radio signals and light from our cities. "We've only been broadcasting powerful radio signals since the second world war," says Seth Shostak of the SETI Institute in Mountain View, California. So our calling card has leaked just 70 light years into space, a drop in the ocean. If the Milky Way was the size of London and Earth was at the base of Nelson's Column, our radio signals would still not have left Trafalgar Square.
"It's probably safe to say that even if the local galaxy is chock-a-block with aliens, none of them know that Homo sapiens is here," says Shostak. That also works in reverse. Given the size of the universe and the speed of light, most stars and planets are simply out of range.
It is also possible that intelligent life is separated from us by time. After all, human intelligence has only existed for a minuscule fraction of Earth's history and may just be a fleeting phase. It may be too much of a stretch to hope that a nearby planet not only harbours intelligent life, but that it does so right now.
But let's say we did make contact with aliens. How would we react? NASA has plans, and most religions claim they would be able to absorb the idea, but the bottom line is we won't know until it happens.
Most likely we'll never find out. Even if Earth is not the only planet with intelligent life, we appear destined to live out our entire existence as if it were - but with a nagging feeling that it can't be. How's that for existential uncertainty?

Existence: Am I a hologram?

TAKE a look around you. The walls, the chair you're sitting in, your own body - they all seem real and solid. Yet there is a possibility that everything we see in the universe - including you and me - may be nothing more than a hologram.
It sounds preposterous, yet there is already some evidence that it may be true, and we could know for sure within a couple of years. If it does turn out to be the case, it would turn our common-sense conception of reality inside out.
The idea has a long history, stemming from an apparent paradox posed by Stephen Hawking's work in the 1970s. He discovered that black holes slowly radiate their mass away. This Hawking radiation appears to carry no information, however, raising the question of what happens to the information that described the original star once the black hole evaporates. It is a cornerstone of physics that information cannot be destroyed.
In 1972 Jacob Bekenstein at the Hebrew University of Jerusalem, Israel, showed that the information content of a black hole is proportional to the two-dimensional surface area of its event horizon - the point-of-no-return for in-falling light or matter. Later, string theorists managed to show how the original star's information could be encoded in tiny lumps and bumps on the event horizon, which would then imprint it on the Hawking radiation departing the black hole.
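Bekenstein's area scaling can be made concrete with a back-of-envelope calculation: the information a horizon can hold is its area divided by four Planck areas (times ln 2, to convert to bits). The constants below are standard; the solar-mass example is just an illustration:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
hbar = 1.055e-34       # reduced Planck constant, J s
l_p2 = hbar * G / c**3 # Planck length squared, ~2.6e-70 m^2

def horizon_bits(mass_kg):
    """Information capacity in bits of a Schwarzschild horizon of given mass."""
    r_s = 2 * G * mass_kg / c**2   # Schwarzschild radius
    area = 4 * math.pi * r_s**2    # horizon area
    return area / (4 * l_p2 * math.log(2))

print(f"{horizon_bits(1.989e30):.1e}")  # solar-mass black hole: ~1.5e77 bits
```

A horizon a few kilometres across already holds around 10^77 bits - the surface, not the volume, sets the limit, which is the seed of the holographic idea.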
This solved the paradox, but theoretical physicists Leonard Susskind and Gerard 't Hooft decided to take the idea a step further: if a three-dimensional star could be encoded on a black hole's 2D event horizon, maybe the same could be true of the whole universe. The universe does, after all, have a horizon 42 billion light years away, beyond which light has not had time to reach us since the big bang. Susskind and 't Hooft suggested that this 2D "surface" may encode the entire 3D universe that we experience - much like the 3D hologram that is projected from your credit card.
It sounds crazy, but we have already seen a sign that it may be true. Theoretical physicists have long suspected that space-time is pixelated, or grainy. Since a 2D surface cannot store sufficient information to render a 3D object perfectly, these pixels would be bigger in a hologram. "Being in the [holographic] universe is like being in a 3D movie," says Craig Hogan of Fermilab in Batavia, Illinois. "On a large scale, it looks smooth and three-dimensional, but if you get close to the screen, you can tell that it is flat and pixelated."

Quantum fluctuation

Hogan recently looked at readings from an exquisitely sensitive motion-detector in Hanover, Germany, which was built to detect gravitational waves - ripples in the fabric of space-time. The GEO600 experiment has yet to find one, but in 2008 an unexpected jitter left the team scratching their heads, until Hogan suggested that it might arise from "quantum fluctuations" due to the graininess of space-time. By rights, these should be far too small to detect, so the fact that they are big enough to show up on GEO600's readings is tentative supporting evidence that the universe really is a hologram, he says.
Bekenstein is cautious: "The holographic idea is only a hypothesis, supported by some special cases." Better evidence may come from a dedicated instrument being built at Fermilab, which Hogan expects to be up and running within a couple of years.
A positive result would challenge every assumption we have about the world we live in. It would show that everything is a projection of something occurring on a flat surface billions of light years away from where we perceive ourselves to be. As yet we have no idea what that "something" might be, or how it could manifest itself as a world in which we can do the school run or catch a movie at the cinema. Maybe it would make no difference to the way we live our lives, but somehow I doubt it.

Existence: Where did my consciousness come from?

THINK for a moment about a time before you were born. Where were you? Now think ahead to a time after your death. Where will you be? The brutal answer is: nowhere. Your life is a brief foray on Earth that started one day for no reason and will inevitably end.
But what a foray. Like the whole universe, your consciousness popped into existence out of nothingness and has evolved into a rich and complex entity full of wonder and mystery.
Contemplating this leads to a host of mind-boggling questions. What are the odds of my consciousness existing at all? How can such a thing emerge from nothingness? Is there any possibility of it surviving my death? And what is consciousness anyway?
Answering these questions is incredibly difficult. Philosopher Thomas Nagel once asked, "What is it like to be a bat?" Your response might be to imagine flying around in the dark, seeing the world in the echoes of high-frequency sounds. But that isn't the answer Nagel was looking for. He wanted to emphasise that there is no way of knowing what it is like for a bat to feel like a bat. That, in essence, is the conundrum of consciousness.
Neuroscientists and philosophers fall into two broad camps. One thinks that consciousness is an emergent property of the brain and that once we fully understand the intricate workings of neuronal activity, consciousness will be laid bare. The other doubts it will be that simple. They agree that consciousness emerges from the brain, but argue that Nagel's question will always remain unanswered: knowing every detail of a bat's brain cannot tell us what it is like to be a bat. This is often called the "hard problem" of consciousness, and seems scientifically intractable - for now.
Meanwhile, "there are way too many so-called easy problems to worry about", says Anil Seth of the University of Sussex in Brighton, UK.
One is to look for signatures of consciousness in brain activity, in the hope that this takes us closer to understanding what it is. Various brain areas have been found to be active when we are conscious of something and quiet when we are not. For example, Stanislas Dehaene of the French National Institute of Health and Medical Research in Gif sur Yvette and colleagues have identified such regions in our frontal and parietal lobes (Nature Neuroscience, vol 8, p 1391).

Consciousness explained

This is consistent with a theory of consciousness proposed by Bernard Baars of the Neuroscience Institute in San Diego, California. He posited that most non-conscious experiences are processed in specialised local regions of the brain such as the visual cortex. We only become conscious of this activity when the information is broadcast to a network of neurons called the global workspace - perhaps the regions pinpointed by Dehaene.
But others believe the theory is not telling the whole story. "Does global workspace theory really explain consciousness, or just the ability to report about consciousness?" asks Seth.
Even so, the idea that consciousness seems to be an emergent property of the brain can take us somewhere. For example, it makes the odds of your own consciousness existing the same as the odds of you being born at all, which is to say, very small. Just think of that next time you suffer angst about your impending return to nothingness.
As for whether individual consciousness can continue after death, "it is extremely unlikely that there would be any form of self-consciousness after the physical brain decays", says philosopher Thomas Metzinger of the Johannes Gutenberg University in Mainz, Germany.
Extremely unlikely, but not impossible. Giulio Tononi of the University of Wisconsin-Madison argues that consciousness is the outcome of how complex matter, including the brain, integrates information. "According to Tononi's theory, if one could build a device or a system that integrated information exactly the same way as a living brain, it would generate the same conscious experiences," says Seth. Such a machine might allow your consciousness to survive death. But it would still not know what it is like to be a bat.

One example of fine-tuning, however, remains difficult to dismiss: the accelerating expansion of the universe by dark energy. Quantum theory predicts that the strength of this mysterious force should be about 10^120 times larger than the value we observe.
This discrepancy seems extraordinarily fortuitous. According to Nobel prizewinner Steven Weinberg, if dark energy were not so tiny, galaxies could never have formed and we would not be here. The explanation Weinberg grudgingly accepts is that we must live in a universe with a "just right" value for dark energy. "The dark energy is still the only quantity that appears to require a multiverse explanation," admits Weinberg. "I don't see much evidence of fine-tuning of any other physical constants."

Existence: Am I a zombie?

IN A nutshell, you don't know.
Philosopher René Descartes hit the nail on the head when he wrote "cogito ergo sum". The only evidence you have that you exist as a self-aware being is your conscious experience of thinking about your existence. Beyond that you're on your own. You cannot access anyone else's conscious thoughts, so you will never know if they are self-aware.
That was in 1644 and little progress has been made since. If anything, we are even less sure about the reality of our own existence.
It is not so long ago that computers became powerful enough to let us create alternative worlds. We have countless games and simulations that are, effectively, worlds within our world. As technology improves, these simulated worlds will become ever more sophisticated. The "original" universe will eventually be populated by a near-infinite number of advanced, virtual civilisations. It is hard to imagine that they will not contain autonomous, conscious beings. Beings like you and me.
According to Nick Bostrom, a philosopher at the University of Oxford who first made this argument, this simple fact makes it entirely plausible that our reality is in fact a simulation run by entities from a more advanced civilisation.
How would we know? Bostrom points out that the only way we could be sure is if a message popped up in front of our eyes saying: "You are living in a computer simulation." Or, he says, if the operators transported you to their reality (which, of course, may itself be a simulation).
Although we are unlikely to get proof, we might find some hints about our reality. "I think it might be feasible to get evidence that would at least give weak clues," says Bostrom.
Economist Robin Hanson of George Mason University in Fairfax, Virginia, is not so sure. If we did find anything out, the operators could just rewind everything back to a point where the clue could be erased. "We won't ever notice if they don't want us to," Hanson says. Anyway, seeking the truth might even be asking for trouble. We could be accused of ruining our creators' fun and cause them to pull the plug.

Zombie invasion

Hanson has a slightly different take on the argument. "Small simulations should be far more numerous than large ones," he says. That's why he thinks it is far more likely that he lives in a simulation where he is the only conscious, interesting being. In other words, everyone else is an extra: a zombie, if you will. However, he would have no way of knowing, which brings us back to Descartes.
Of course, we do have access to a technology that would have looked like sorcery in Descartes's day: the ability to peer inside someone's head and read their thoughts. Unfortunately, that doesn't take us any nearer to knowing whether they are sentient. "Even if you measure brainwaves, you can never know exactly what experience they represent," says psychologist Bruce Hood at the University of Bristol, UK.
If anything, brain scanning has undermined Descartes's maxim. You, too, might be a zombie. "I happen to be one myself," says Stanford University philosopher Paul Skokowski. "And so, even if you don't realise it, are you."
Skokowski's assertion is based on the belief, particularly common among neuroscientists who study brain scans, that we do not have free will. There is no ghost in the machine; our actions are driven by brain states that lie entirely beyond our control. "I think, therefore I am" might be an illusion.
So, it may well be that you live in a computer simulation in which you are the only self-aware creature. I could well be a zombie and so could you. Have an interesting day.

Saturday, July 23, 2011

DVD alloys help make computers that think like us.

THE material that lets us record on DVDs has a far more tantalising property: it can mimic the nerve cells of the brain and the junctions between them. The discovery could lead to the development of brain-like computers that, crucially, operate at ultra-low power levels.
A brain-like computer is one that can learn and adapt without external programming. Such an ability would allow machines to become far better at tasks like face and speech recognition. They could also process and store data in the same location - just as nerve cells do. Conventional computing loses efficiency by keeping these functions separate.
Now two research groups have built artificial nerve cells, or neurons, and synapses - the junctions between them - using an alloy known as GST, an acronym of the symbols for its components: germanium, antimony and tellurium.
In the UK, David Wright and colleagues at the University of Exeter have created a GST neuron (Advanced Materials, DOI: 10.1002/adma.201101060), while at Stanford University in California, Philip Wong's group have created a nanoscale electronic synapse. The junction even mimics the way synapses can change their connection strength (Nano Letters, DOI: 10.1021/nl201040y).
GST is known as a "phase-change" alloy, because of its ability to change its molecular structure from a crystalline to a disordered amorphous "phase" when heated. In DVDs, this allows binary 0s and 1s to be recorded and then read by a laser.
But GST can do more than store two states. Different areas within a tiny spot of GST can be crystalline or amorphous to differing degrees, which means it can store information across a much wider range of values than simply 0 or 1. This is important because it is a build-up of input signals that makes a real neuron "fire" when it reaches a certain threshold.
Wright's neuron is able to mimic this threshold firing because GST's electrical resistance drops suddenly when it moves from its amorphous phase to the crystalline. So incoming signals in the form of pulses of current are applied to the artificial neuron - and it is deemed to have fired when its resistance plummets.
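The integrate-and-fire behaviour just described can be sketched as a toy model: each current pulse nudges the GST spot a little further toward its crystalline phase, and the "neuron" fires when the accumulated change crosses a threshold. All the numbers here are illustrative, not device parameters:

```python
class PhaseChangeNeuron:
    def __init__(self, fire_threshold=1.0, step=0.25):
        self.crystallinity = 0.0            # 0 = fully amorphous, 1 = crystalline
        self.fire_threshold = fire_threshold
        self.step = step                    # crystallisation added per pulse

    def pulse(self):
        """Apply one input current pulse; return True if the neuron fires."""
        self.crystallinity += self.step
        if self.crystallinity >= self.fire_threshold:
            self.crystallinity = 0.0        # "melt-quench" reset to amorphous
            return True                     # resistance has plummeted: a spike
        return False

neuron = PhaseChangeNeuron()
spikes = [neuron.pulse() for _ in range(8)]
print(spikes)  # fires on every 4th pulse
```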
GST's talents don't end there. When a real neuron fires, the signal's importance to the next neuron it arrives at is set by the strength of the synapse connecting them. In nature, this strength is adjusted in a process called spike-timing-dependent plasticity (STDP): if the first neuron repeatedly fires before the second, the synapse's strength increases, but if the second fires first, its strength decreases.
Duygu Kuzum, a member of the Stanford team, says GST's ability to change its resistance has allowed them to program it to dynamically modify the strength of the nanoscale artificial synapses they have built - just like STDP. This lets them prioritise which neural signals are most important to any given task.
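The STDP rule described above has a standard textbook form: the weight change decays exponentially with the gap between pre- and post-synaptic spikes, positive when the pre-synaptic neuron fires first and negative otherwise. The exponential shape and constants below are the usual idealisation, not the Stanford device's measured curve:

```python
import math

def stdp(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for pre-before-post (dt > 0) or post-before-pre (dt < 0)."""
    if dt_ms > 0:      # pre fired first: strengthen the synapse
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:    # post fired first: weaken it
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

print(round(stdp(10), 4))    # pre leads by 10 ms:  +0.0607
print(round(stdp(-10), 4))   # post leads by 10 ms: -0.0728
```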
At just 75 nanometres across, the artificial synapse may offer the low power sought for brain-like computers, says Kuzum. The team's calculations suggest a system with 10^10 synapses would consume just 10 watts - compared with the 1.4 megawatts needed by a supercomputer to simulate just 5 seconds of brain activity.
"Phase-change devices may indeed capture the right essence of the behaviour of the brain," says Steve Furber of the University of Manchester, UK, who is building a brain-like computer from conventional microprocessors. "But it has a very long way to go. I'll be interested when they can make 100 million of them on a chip for next to nothing."

Race to build an artificial brain

Phase-change materials are competing with at least three other approaches to brain-like computing.
In the Blue Brain Project, Henry Markram's group at the Swiss Federal Institute of Technology in Lausanne aims to create a software model of brain biochemistry on a supercomputer.
And this week, the University of Manchester in the UK began building SpiNNaker - a 1-billion-neuron computer - using smartphone microprocessors that model 18,000 neurons each. The connection strength between neurons is stored using on-chip memory.
"Memristors", too, can have their resistance "set" by applying a voltage across them, making them strong contenders for neurons or synapses.

Autopiloted glider knows where to fly for a free ride

HAWKS and albatrosses soar for hours or even days without having to land. Soon robotic gliders could go one better, soaring on winds and thermals indefinitely. Cheap remote sensing for search and rescue would be possible with this technology, or it could be used to draw up detailed maps of a battlefield.
Glider pilots are old hands at using rising columns of heated air to gain altitude. In 2005 researchers at NASA's Dryden Flight Research Center in Edwards, California, flew a glider fitted with a custom autopilot unit 60 minutes longer than normal, just by catching and riding thermals. And in 2009 Dan Edwards, who now works at the US Naval Research Laboratory in Washington DC, kept a glider soaring autonomously for 5.3 hours this way.
Both projects relied on the glider to sense when it was in a thermal and then react to stay in the updraft. But thermals can be capricious, and tend to die out at night, making flights that last several days impossible, says Salah Sukkarieh of the Australian Centre for Field Robotics in Sydney. He is designing an autopilot system that maps and plans a glider's route so it can use a technique known as dynamic soaring when thermals are scarce. The glider first flies in a high-speed air current to gain momentum, then it turns into a region of slower winds, where the newly gained energy can be converted to lift. By cycling back and forth this way, the glider can gain either speed or altitude.
"Theoretically you can stay aloft indefinitely, just by hopping around and catching the winds," says Sukkarieh, who presented his research at a robotics conference in Shanghai, China, last month.
Inspired by albatrosses and frigate birds, the operators of radio-controlled gliders have used dynamic soaring to reach speeds of more than 600 kilometres per hour by flying between two regions of differing wind speeds.
To plan a path for dynamic soaring you need a detailed map of the different winds around the glider. So Sukkarieh is working on ways to accurately measure and predict these winds. He recently tested his autopilot on a real glider, which made detailed wind-speed estimates as it flew.
The system has on-board sensors, including an accelerometer and altimeter, which measure changes in the aircraft's velocity and altitude to work out how the winds will affect the glider. From its built-in knowledge of how wind currents move, the system was able to work out the location, speed, and direction of nearby winds to create a local wind map.
By mapping wind and thermal energy sources this way and using a path-planning program, the glider autopilot should be able to calculate the most energy-efficient routes between any two points. The system would be able to plot a path up to a few kilometres away when the wind is calm but only over a few metres when turbulent, as the winds change so quickly, says Sukkarieh.
He says that the amount of energy available to a glider is usually enough to keep it aloft for as long as it can survive the structural wear and tear. He plans to test the mapping and route-planning systems more extensively in simulations, to be followed by actual soaring experiments.
"I think we have some examples from nature that mean this should be possible," says Edwards, who is not involved in Sukkarieh's research. "We're just taking our first baby steps into doing it autonomously."

Make like a hawk

Hawks and vultures are masters of spiralling upwards in rising thermals. But flying around in search of a free lift is not terribly efficient so Salah Sukkarieh of the Australian Centre for Field Robotics in Sydney thinks these birds have learned to recognise visual cues for thermals, such as towering cumulus clouds surrounded by blue sky. He's working on software that would allow a robotic glider to recognise useful cloud formations. By looking for wispy, or "smeared" clouds, the glider can find the horizontal winds that are good for dynamic soaring. At the same time, radar could measure the movement of airborne dust particles, giving an indication of wind speed and direction.

Computers understand hand-waving descriptions

DESCRIBING objects is so much easier when you use your hands, the classic being "the fish was this big".
For humans, it's easy to understand what is meant, but computers struggle, and existing gesture-based interfaces only use set movements that translate into particular instructions. Now a system called Data Miming can recognise objects from gestures without the user having to memorise a "vocabulary" of specific movements.
"Starting from the observation that humans can effortlessly understand which objects are being described when hand motions are used, we asked why computers can't do the same thing," says Christian Holz of the Hasso Plattner Institute in Potsdam, Germany, who developed the system with Andy Wilson at Microsoft Research in Redmond, Washington.
Holz observed how volunteers described objects like tables or chairs using gestures, by tracing important components repeatedly with their hands and maintaining relative proportions throughout their mime.
Data Miming uses a Microsoft Kinect motion-capture camera to create a 3D representation of a user's hand movements. Voxels, or pixels in three dimensions, are activated when users pass their hands through the space represented by each voxel. And when a user traces a circle with their fingers to indicate a table leg, say, the system can also identify that all of the enclosed space should be included in the representation. It then compares user-generated representations with a database of objects in voxel form and selects the closest match.
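The matching step can be sketched simply: reduce both the user's gesture trace and each database object to sets of occupied voxels, then pick the object with the greatest overlap. This illustration uses Jaccard similarity and tiny invented "objects"; the actual Data Miming metric and voxel resolution may well differ:

```python
def jaccard(voxels_a, voxels_b):
    """Overlap between two voxel sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(voxels_a & voxels_b) / len(voxels_a | voxels_b)

def best_match(gesture_voxels, database):
    """Return the name of the database object most similar to the gesture."""
    return max(database, key=lambda name: jaccard(gesture_voxels, database[name]))

# Hypothetical voxel-coordinate "objects", for illustration only.
database = {
    "table": {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)},
    "stool": {(0, 0, 0), (0, 0, 1), (0, 0, 2)},
}
gesture = {(0, 0, 0), (1, 0, 0), (0, 1, 0)}
print(best_match(gesture, database))  # prints "table"
```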
In tests the system correctly recognised three-quarters of descriptions, and the intended item was in the top three matches from its database 98 per cent of the time. Holz presented his findings at the CHI 2011 meeting in Vancouver, Canada, in May.
The system could be incorporated into online shopping so users could gesture to describe the type of product they want and have the system make a suggestion. Or, says Holz: "Imagine you want a funky breakfast-bar stool. Instead of wandering around and searching Ikea for half an hour, you walk up to an in-store kiosk and describe the stool using gestures, which takes seconds. The computer responds immediately, saying you probably want the Funkomatic Breakfast Stool-o-rama, and it lives in row 7a."

Best ever measurement of Earth's radioactivity.

Ghostly subatomic particles streaming from Earth's interior have enabled the most precise measurement yet of our planet's radioactivity.
These particles, called antineutrinos, suggest that about half of Earth's heat comes from the radioactive decay of uranium and thorium – and give clues to the location of geological stashes of these elements.
Heat is needed to drive the convection currents in Earth's outer core that create its magnetic field. But exactly how much of this heat comes from radioactive decay wasn't known until now.
In 2005, researchers from the international KamLAND collaboration used a detector buried in Japan to measure antineutrinos that are produced when elements decay, allowing a rough estimate.

Chemical window

Now they have enough data – 111 geological antineutrinos to be precise – to refine their measurement, suggesting that about 20 terawatts of heat come from radioactive decay. Earth's total heat production is about 40 terawatts.
The researchers also had enough antineutrinos to confirm that some must be coming from places other than the crust, something that wasn't possible before. "The uncertainty is small enough that some contribution must be from the mantle," says Giorgio Gratta, a physicist at Stanford University in California who is part of the KamLAND collaboration.
The ability to determine the location of the radioactive elements could permit better models of the Earth's interior, says Gratta. Seismic waves tell us about the elasticity of the crust and mantle: now we have a small window into their chemistry, which should allow their behaviour to be better modelled. The presence of radioactive elements in the mantle, for example, could affect its flow.
There's still some uncertainty in the new measurement, because detections of antineutrinos are so infrequent. Larger detectors would help improve the measurements and might even be used to monitor undeclared nuclear facilities from afar, says Gratta.

Australia is first nation to put a price on carbon

Australia's 500 biggest polluters will pay A$23 (US$24.60) per tonne of carbon emitted into the atmosphere from July next year.
The country has one of the highest rates of greenhouse gas emissions per head of population in the developed world. The population of 22.6 million is responsible for around 1.3 per cent of the world's carbon dioxide emissions.
By 2020, the new carbon tax plan should cut Australia's carbon emissions by 5 per cent, relative to 2000 levels. This adds up to around 159 million tonnes of carbon pollution – equivalent to removing 45 million cars from the road, according to the Australian government. And by 2050 the government is promising to reduce its carbon emissions by 80 per cent, relative to 2000 levels.
The scheme was announced by Australia's prime minister Julia Gillard in an address to the nation on Sunday. "We are going to create a clean energy future," she said.
The price of carbon, initially fixed at A$23 per tonne, will rise by 2.5 per cent each year in real terms until July 2015. After that date an emissions trading scheme will be introduced.
A$10 billion of the expected revenue from the package will go towards funding low pollution measures, energy efficiency initiatives and renewable energy technologies including solar, wind and geothermal power. Other revenue will go towards improving energy efficiency in the manufacturing sector, training support to move people from jobs in polluting industries, and tax cuts so the cost of living doesn't rise for most Australians.
"The package is not perfect," says Don Henry, executive director of the Australian Conservation Foundation. The starting price of carbon is "less than ACF called for", he says, but it is a "foundation on which we can build a low-carbon economy".

Polymer sandwich harvests electricity from waste heat

IN 314 BC the Greek philosopher Theophrastus noticed something unusual: when he heated a black crystalline rock called tourmaline, it would suddenly attract ash and bits of straw. He had observed what we now call pyroelectricity - the ability of certain crystals to produce a voltage briefly when heated or cooled. Now the same phenomenon is being used to convert waste heat into electricity.
Nearly 55 per cent of all the energy generated in the US in 2009 was lost as waste heat, according to research by the Lawrence Livermore National Laboratory in California. There have been many attempts at using this waste heat to generate electricity, so far with only limited success.
Pyroelectricity could be the key, say Scott Hunter and colleagues at Oak Ridge National Laboratory in Tennessee. They have built an energy harvester that sandwiches a layer of pyroelectric polymer between two electrodes made from different metals. Just a few millimetres long, the device is deployed by wedging it between a hot surface and a cold surface - between a computer chip and a fan inside a laptop, for example. Crucially, the device is anchored to the hot surface alone and so acts as a cantilever - a beam supported at one end.
As the device warms, the polymer expands more than the electrode close to the cold surface, and the whole device bends like the bimetallic strip in a thermostat. It droops toward the cold surface, where it cools and then springs back toward the hot surface, warming up again. Soon the cantilever is thrumming between the hot and cold surfaces like the hammer of a wind-up alarm clock. Each time it is heated, the polymer generates a small amount of electricity which is stored in a capacitor (Proceedings of SPIE, DOI: 10.1117/12.882125).
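The cantilever's cycle is, in effect, a thermal relaxation oscillator, which a toy simulation makes clear: the tip warms toward the hot surface's temperature, flips to the cold side when it passes an upper trip point, cools, and springs back below a lower one. Every parameter here is invented for illustration:

```python
def simulate(t_hot=80.0, t_cold=20.0, trip_hi=60.0, trip_lo=40.0,
             rate=0.1, steps=200):
    """Count full heat-cool cycles of the cantilever over the run."""
    temp, on_hot_side, cycles = 50.0, True, 0
    for _ in range(steps):
        target = t_hot if on_hot_side else t_cold
        temp += rate * (target - temp)   # Newtonian heating/cooling
        if on_hot_side and temp >= trip_hi:
            on_hot_side = False          # bends over to the cold surface
        elif not on_hot_side and temp <= trip_lo:
            on_hot_side = True           # springs back to the hot surface
            cycles += 1                  # one full thermal cycle complete
    return cycles

print(simulate())  # oscillates steadily for as long as the gradient lasts
```

Each heating stroke of a real device would generate a pulse of pyroelectric charge to be banked in the capacitor.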
Previous attempts at using pyroelectric materials to recycle waste heat have only managed to turn 2 per cent of the heat into electricity. Hunter believes his device could achieve an efficiency of between 10 and 30 per cent.
Hunter says the device can also convert heat in exhaust gases into electricity. It might even be used to capture the energy that solar cells lose as heat, he says. Energy generation aside, he adds that the devices could soak up enough heat to play a significant role in cooling laptops and data centres.
Laurent Pilon of the University of California, Los Angeles, who also studies pyroelectric energy harvesting, says he likes the compactness of the device and its relative simplicity, but has some doubts about the potential efficiency. "I think some of their expectations are a little exaggerated," he says. "They are relying on conduction to heat the device, which is a slow process." He and other groups have used fluids to heat or chill a pyroelectric material. This is much quicker, though the need to pump the fluid around does consume some of the energy generated.

So much going to waste.

Quantum World

If successful scientific theories can be thought of as cures for stubborn problems, quantum physics was the wonder drug of the 20th century. It successfully explained phenomena such as radioactivity and antimatter, and no other theory can match its description of how light and particles behave on small scales.
But it can also be mind-bending. Quantum objects can exist in multiple states and places at the same time, and can only be described in terms of probabilities. Rife with uncertainty and riddled with paradoxes, the theory has been criticised for casting doubt on the notion of an objective reality - a concept many physicists, including Albert Einstein, found hard to swallow.
Today, scientists are grappling with these philosophical conundrums, trying to harness quantum's bizarre properties to advance technology, and struggling to weave quantum physics and general relativity into a seamless theory of quantum gravity.

The birth of an idea

Quantum theory began to take shape in the early 20th century, when classical ideas failed to explain some observations. Previous theories allowed atoms to vibrate at any frequency, leading to incorrect predictions that they could radiate infinite amounts of energy - a problem known as the ultraviolet catastrophe.
In 1900, Max Planck solved this problem by assuming atoms can vibrate only at specific, or quantised, frequencies. Then, in 1905, Einstein cracked the mystery of the photoelectric effect, whereby light falling on metal releases electrons of specific energies. The existing theory of light as waves failed to explain the effect, but Einstein provided a neat solution by suggesting light came in discrete packages of energy called photons - a brain wave that won him the Nobel Prize for Physics in 1921.
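Einstein's relation makes the photoelectric effect easy to compute: a photon carries energy E = hf = hc/λ, and an ejected electron keeps whatever exceeds the metal's work function. A quick sketch (the wavelength and work-function values here are illustrative assumptions, not from the text):

```python
# Photoelectric effect: electron kinetic energy = photon energy - work function
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

wavelength = 300e-9        # ultraviolet light, 300 nm (assumed)
work_function = 2.3 * eV   # roughly sodium's work function (assumed)

photon_energy = h * c / wavelength          # about 4.13 eV per photon
ke = photon_energy - work_function          # energy of the ejected electron
print(ke / eV)                              # ~1.83 eV
```

Light dimmer than the work-function threshold ejects no electrons at all, however intense - the observation wave theory could not explain.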

Quantum weirdness

In fact, light's chameleon-like ability to behave as either a particle or a wave, depending on the experimental setup, has long stymied scientists. Danish physicist Niels Bohr explained this wave-particle duality by doing away with the concept of a reality separate from one's observations. In his "Copenhagen interpretation", Bohr argued that the very act of measurement affects what we observe.
One controversial experiment recently challenged this either/or scenario of light by apparently detecting evidence of both wave- and particle-like behaviour simultaneously. The work suggests there may be no such thing as photons - light appears quantised only because of the way it interacts with matter.
Other interpretations of quantum theory - of which there are at least half a dozen - deal with the measurement problem by suggesting even more far-fetched concepts than a universe dependent on measurement. The popular many worlds interpretation suggests that every possible outcome of a quantum measurement actually occurs, each in its own parallel universe.

Uncertainty rules

For about 70 years, this wave-particle duality was explained by another unsettling tenet of quantum theory - the Heisenberg uncertainty principle. Formulated by Werner Heisenberg in 1927 and recently made more precise, the principle puts a fundamental limit on knowledge. It says one can never simultaneously know both the position and momentum of a quantum object with perfect precision - the more accurately one is measured, the less accurately the other can be known.
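In symbols the principle reads Δx·Δp ≥ ħ/2, and plugging in numbers shows why it only matters at tiny scales. A sketch for an electron confined to roughly an atom's width (the confinement distance is an illustrative assumption):

```python
# Heisenberg uncertainty: delta_x * delta_p >= hbar / 2
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg

delta_x = 1e-10                  # confined to ~0.1 nm, an atomic radius
delta_p = hbar / (2 * delta_x)   # minimum momentum uncertainty, kg*m/s
delta_v = delta_p / m_e          # ~6e5 m/s - far from negligible

# A 1 kg object confined to the same 0.1 nm would have a velocity
# uncertainty around 5e-25 m/s - utterly unobservable, which is why
# everyday objects seem to escape the principle.
```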
Bohr defeated Einstein in a series of thought experiments in the 1920s and 1930s using this principle, but more recent work suggests the underlying cause of the duality seen in experiments is a phenomenon called entanglement.
Entanglement is the idea that in the quantum world, objects are not independent if they have interacted with each other or come into being through the same process. They become linked, or entangled, such that changing one invariably affects the other, no matter how far apart they are - something Einstein called "spooky action at a distance".
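The correlation can be seen in a toy simulation (a generic illustration, not a model of any experiment mentioned here): prepare two qubits in the Bell state (|00> + |11>)/sqrt(2), then sample measurement outcomes. The two bits always agree, however far apart the qubits are taken.

```python
import random

# Two entangled qubits in the Bell state (|00> + |11>)/sqrt(2).
amp = 2 ** -0.5
bell = {"00": amp, "01": 0.0, "10": 0.0, "11": amp}  # basis amplitudes

def measure(state):
    """Sample one measurement outcome with probability |amplitude|^2."""
    outcomes = list(state)
    weights = [abs(a) ** 2 for a in state.values()]
    return random.choices(outcomes, weights)[0]

samples = [measure(bell) for _ in range(1000)]
# Only "00" and "11" ever appear: the two bits are perfectly correlated.
assert set(samples) <= {"00", "11"}
```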
This may be involved in superconductivity and may even explain why objects have mass. It also holds promise for "teleporting" particles across vast distances - assuming everyone agrees on a reference frame. The first teleportation of a quantum state occurred in 1998, and scientists have been gradually entangling more and more particles, different kinds of particles, and large particles.

Secure networks

Entanglement may also provide a nearly uncrackable method of communication. Quantum cryptographers can send "keys" to decode encrypted information using quantum particles. Any attempt to intercept the particles will disturb their quantum state - an interference that could then be detected.
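The bookkeeping behind this can be sketched with a toy version of one well-known key-exchange scheme, the BB84 protocol (the text does not name a specific protocol). Real systems send polarised photons; here only the basis-matching logic is modelled:

```python
import random

def bb84_sift(n_bits, seed=0):
    """Toy BB84 sifting: keep only bits where sender and receiver
    happened to choose the same measurement basis."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]  # two bases
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]
    # When bases differ, Bob's result is random, so those positions
    # are publicly discarded; matching positions become the shared key.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(100)
# On average about half the positions survive the basis comparison.
```

An eavesdropper measuring the photons in transit would disturb their states and corrupt a detectable fraction of the sifted key.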
In April 2004, Austrian financial institutions performed the first money transfer encrypted by quantum keys, and in June, the first encrypted computer network with more than two nodes was set up across 10 kilometres in Cambridge, Massachusetts, US.
But keeping quantum particles entangled is a tricky business. Researchers are working on how to maximise the particles' signal and distance travelled. Using a sensitive photon detector, researchers in the UK recently sent encrypted photons down the length of a 100-kilometre fibre optic cable. Researchers in the US devised a scheme to entangle successive clouds of atoms in the hopes of one day making a quantum link between the US cities of Washington, DC, and New York.
In this computer artwork, the funnels represent several different universes being created at the same time. Each parallel universe may have different physical laws to our universe (central funnel), with differing levels of stability and expansion. Some universes may even be devoid of any matter (empty funnel, right) (Image: Mark Garlick / SPL)

Lightning-fast computers

Quantum computers are another long-term goal. Because quantum particles can exist in multiple states at the same time, they could be used to carry out many calculations at once, factoring a 300-digit number in just seconds compared to the years required by conventional computers.
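The source of the speed-up is easy to state in linear-algebra terms: n qubits put into equal superposition occupy 2^n basis states at once, and a classical machine simulating them must track every amplitude explicitly. A minimal sketch of that bookkeeping:

```python
# Why quantum registers are hard to simulate classically: the state
# vector doubles in size with every qubit added.
def uniform_superposition(n_qubits):
    """State vector after a Hadamard gate on each of n qubits."""
    dim = 2 ** n_qubits
    amp = dim ** -0.5          # equal amplitude on every basis state
    return [amp] * dim

state = uniform_superposition(10)
print(len(state))                    # 1024 amplitudes for just 10 qubits
print(sum(a * a for a in state))     # probabilities still sum to 1.0
```

At 300 qubits the vector would have more entries than there are atoms in the observable universe, which is why no classical shortcut is known.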
But to maintain their multi-state nature, particles must remain isolated long enough to carry out the calculations - a very challenging condition. Nonetheless, some progress has been made in this area. A trio of electrons, the building blocks of classical computers, was entangled in a semiconductor in 2003, and the first quantum calculation was made with a single calcium ion in 2002. In October 2004, the first quantum memory component was built from a string of caesium atoms.
But particles of matter interact so easily with others that their quantum states are preserved for very short times - just billionths of a second. Photons, on the other hand, maintain their states about a million times longer because they are less prone to interact with each other. But they are also hard to store, as they travel, literally, at the speed of light.
In 2001, scientists managed to stop light in its tracks, overcoming one practical hurdle. And the first quantum logic gate - the brains behind quantum computers - was created with light in 2003.

Quantum gravity

While three of the four fundamental forces of nature - those operating on very small scales - are well accounted for by quantum theory, gravity is its Achilles heel. This force works on a much larger scale and quantum theory has been powerless so far to explain it.
A number of bizarre theories have been proposed to bridge this gap, many of which suggest that the very fabric of space-time bubbles up with random quantum fluctuations - a foam of wormholes and infinitesimal black holes.
Such a foam is thought to have filled the universe during the big bang, dimpling space-time so that structures such as stars and galaxies could later take shape.
The most popular quantum gravity theory says that particles and forces arise from the vibrations of tiny loops - or strings - just 10⁻³⁵ metres long. Another says that space and time are discrete at the smallest scales, emerging from abstractions called "spin networks".
One recent theory, called "doubly special relativity", tweaks Einstein's idea of one cosmic invariant - the speed of light - and adds another at a very small scale. The controversial theory accounts for gravity, inflation, and dark energy. Physicists are now devising observations and experiments that could test the competing theories.

Economies of scale

Quantum physics is usually thought to act on light and particles smaller than molecules. Some researchers believe there must be some cut-off point where classical physics takes over, such as the point where the weak pull of gravity overwhelms other forces (in fact, gravity's effect on neutrons was recently measured). But macroscopic objects can obey quantum rules if they are kept from becoming entangled with their environment.
Certainly, harnessing troops of atoms or photons that follow quantum laws holds great technological promise. Recent work cooling atoms to near absolute zero has produced new forms of matter called Bose-Einstein and fermionic condensates. These have been used to create laser beams made of atoms that etch precise patterns on surfaces, and might one day lead to superconductors that work at room temperature.
All of these hopes suggest that, as queasy as quantum can be, it remains likely to be the most powerful scientific cure-all for years to come.