Monday, August 29, 2011

Sun Headed Into Hibernation, Solar Studies Predict

Sunspots may disappear altogether in next cycle.

What a quiet sun looks like: Very few active regions are visible in this 2009 satellite picture.
 
Enjoy our stormy sun while it lasts. When our star drops out of its latest sunspot activity cycle, it is most likely headed into hibernation, scientists announced today.
Three independent studies of the sun's interior, surface, and upper atmosphere all predict that the next solar cycle will be significantly delayed – if it happens at all. Normally, the next cycle would be expected to start around 2020.
The combined data indicate that we may soon be headed into what's known as a grand minimum, a period of unusually low solar activity.
The predicted solar "sleep" is being compared to the last grand minimum on record, which occurred between 1645 and 1715.
Known as the Maunder Minimum, the roughly 70-year period coincided with the coldest spell of the Little Ice Age, when European canals regularly froze solid and Alpine glaciers encroached on mountain villages.
(See "Sun Oddly Quiet—Hints at Next 'Little Ice Age?'")
"We have some interesting hints that solar activity is associated with climate, but we don't understand the association," said Dean Pesnell, project scientist for NASA's Solar Dynamics Observatory (SDO).
Also, even if there is a climate link, Pesnell doesn't think another grand minimum is likely to trigger a cold snap.
"With what's happening in current times—we've added considerable amounts of carbon dioxide and methane and other greenhouse gases to the atmosphere," said Pesnell, who wasn't involved in the suite of new sun studies.
"I don't think you'd see the same cooling effects today if the sun went into another Maunder Minimum-type behavior."
Sunspots Losing Strength
Sunspots are cool, dark blemishes visible on the sun's surface that indicate regions of intense magnetic activity.
For centuries scientists have been using sunspots—some of which can be wider than Earth—to track the sun's magnetic highs and lows.
(See the sharpest pictures yet of sunspots snapped in visible light.)
For instance, 17th-century astronomers Galileo Galilei and Giovanni Cassini separately tracked sunspots and noticed a lack of activity during the Maunder Minimum.
In the 1800s scientists recognized that sunspots come and go on a regular cycle that lasts about 11 years. We're now in Solar Cycle 24, heading for a maximum in the sun's activity sometime in 2013.
Recently, the National Solar Observatory's Matt Penn and colleagues analyzed more than 13 years of sunspot data collected at the McMath-Pierce Telescope at Kitt Peak, Arizona.
They noticed a long-term trend of sunspot weakening, and if the trend continues, the sun's magnetic field won't be strong enough to produce sunspots during Solar Cycle 25, Penn and colleagues predict.
"The dark spots are getting brighter," Penn said today during a press briefing. Based on their data, the team predicts that, by the time it's over, the current solar cycle will have been "half as strong as Cycle 23, and the next cycle may have no sunspots at all."
(Related: "Sunspot Cycles—Deciphering the Butterfly Pattern.")
Sun's "Jet Streams," Coronal Rush Also Sluggish
Separately, the National Solar Observatory's Frank Hill and colleagues have been monitoring solar cycles via a technique called helioseismology. This method uses surface vibrations caused by acoustic waves inside the star to map interior structure.
Specifically, Hill and colleagues have been tracking buried "jet streams" encircling the sun called torsional oscillations. These bands of flowing material first appear near the sun's poles and migrate toward the equator. The bands are thought to play a role in generating the sun's magnetic field.
(Related: "Sunspot Delay Due to Sluggish Solar 'Jet Stream?'")
Sunspots tend to occur along the pathways of these subsurface bands, and the sun generally becomes more active as the bands near its equator, so they act as good indicators for the timing of solar cycles.
"The torsional oscillation ... pattern for Solar Cycle 24 first appeared in 1997," Hill said today during the press briefing. "That means the flow for Cycle 25 should have appeared in 2008 or 2009, but it has not shown up yet."
According to Hill, their data suggest that the start of Solar Cycle 25 may be delayed until 2022—about two years late—or the cycle may simply not happen.
Adding to the evidence, Richard Altrock, manager of the U.S. Air Force's coronal research program for the National Solar Observatory (NSO), has observed telltale changes in a magnetic phenomenon in the sun's corona—its faint upper atmosphere.
Known as the rush to the poles, the rapid poleward movement of magnetic features in the corona has been linked to an increase in sunspot activity, with a solar cycle hitting its maximum around the time the features reach about 76 degrees latitude north and south of the sun's equator.
The rush to the poles is also linked to the sun "sweeping away" the magnetic field associated with a given solar cycle, making way for a new magnetic field and a new round of sunspot activity.
This time, however, the rush to the poles is more of a crawl, which means we could be headed toward a very weak solar maximum in 2013—and it may delay or even prevent the start of the next solar cycle.
Quiet Sun Exciting for Science
Taken together, the three lines of evidence strongly hint that Solar Cycle 25 may be a bust, the scientists said today during a meeting of the American Astronomical Society in Las Cruces, New Mexico.
But a solar lull is no cause for alarm, NSO's Hill said: "It's happened before, and life seems to go on. I'm not concerned but excited."
In many ways a lack of magnetic activity is a boon for science. Strong solar storms can emit blasts of charged particles that interfere with radio communications, disrupt power grids, and can even put excess drag on orbiting satellites.
"Drag is important for people like me at NASA," SDO's Pesnell said, "because we like to keep our satellites in space."
What's more, a decrease in sunspots doesn't necessarily mean a drop in other solar features such as prominences, which can produce aurora-triggering coronal mass ejections. In fact, records show that auroras continued to appear on a regular basis even during the Maunder Minimum, Pesnell said.
(See "Solar Flare Sparks Biggest Eruption Ever Seen on Sun.")
Instead, he said, the unusual changes to the sun's activity cycles offer an unprecedented opportunity for scientists to test theories about how the sun makes and destroys its magnetic field.
"Right now we have so many sun-watching satellites and advanced ground-based observatories ready to spring into action," Pesnell said. "If the sun is going to do something different, this is a great time for it to happen."

A bid for the world's first successful womb transplant.

Eva Ottosson, 56, has agreed to take part in a groundbreaking new medical procedure, which if successful could see her donate her uterus to her 25-year-old daughter Sara.
Doctors hope if the transplant is successful Sara, who was born without reproductive organs, could become pregnant and carry a child in the same womb from which she herself was born.
It is hoped the complex transplant operation could take place as early as next spring in Sweden, where doctors in Gothenburg have been assessing suitable patients for the revolutionary procedure.
Mrs Ottosson, who runs a lighting business in Nottingham, said: “My daughter and I are both very rational people and we both think ‘it’s just a womb’.
“She needs the womb and if I’m the best donor for her … well, go on. She needs it more than me. I’ve had two daughters so it’s served me well.”

The only previous womb transplant took place in Saudi Arabia in 2000 when a 26-year-old woman, who had lost her uterus due to haemorrhage, received a donated womb from a 46-year-old.
However the recipient developed problems and the womb had to be removed after 99 days.
Since then medical knowledge of the surgical procedure has improved and a team in Gothenburg, Sweden, believe they are at the stage where they can perform a successful transplant.
Sara, who lives and works in Stockholm, has a condition called Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome, which affects around 1 in 5,000 women and means she was born without a uterus and parts of the vagina.
The cause is unknown but like many women with the condition Sara only realised she was missing her reproductive organs when she was a teenager and failed to begin menstruating.
If the procedure works, Sara will have her own eggs fertilised using her boyfriend’s sperm and then implanted into her donated womb.
Sara said she was unconcerned about the implications of receiving the womb that she herself was carried in.
She said: “I haven’t really thought about that. I’m a biology teacher and it’s just an organ like any other organ. But my mum did ask me about this. She said ‘isn’t it weird?’ And my answer is no. I’m more worried that my mum is going to have a big operation.”
She added: “It would mean the world to me for this to work and to have children. At the moment I am trying not to get my hopes up so that I am not disappointed. But we have also been thinking about adoption for a long time and if the transplant fails then we will try to adopt.”
Dr Mats Brannstrom, who is leading the medical team, said a womb transplant remained one of the most complex operations known to medical science.
He said: “Technically it is a lot more difficult than transplanting a kidney, liver or heart. The difficulty with it is avoiding haemorrhage and making sure you have long enough blood vessels to connect the womb.
“You are also working deep down in the pelvis area and it is like working in a funnel. It is not like working with a kidney, which is really accessible.”
Mrs Ottosson said she hoped by talking about the operation it would help bring attention to an otherwise rarely publicised condition.
She said: “The girls who have MRKH are a silent group who don’t like to talk about it. So we hope that this will help those girls and that by talking about the condition we can encourage medical science to pinpoint what causes it.”
Sara and her mother are among a small group chosen to take part in the programme.
They have undergone tests and are now waiting to hear if they will be the first to undergo the groundbreaking procedure.

A Hat to Help You Sleep: A Personal Trainer and Some Exercise Will Help Too.

Cooling down a whirring brain can, quite literally, help you sleep. An American scientist has developed a cap that chills the frontal cortex, slowing its activity and helping to promote restful sleep. Dr Eric Nofzinger, of the University of Pittsburgh, tried his water-filled cap on 24 people, half of whom had insomnia. He found that the insomniacs who were treated with the maximum cooling intensity took about 13 minutes to fall asleep and slept for 89% of the time they were in bed – much the same as the participants who slept normally, who took an average of 16 minutes to fall asleep and also slept for 89% of the time. Insomnia is associated with increased metabolism in the frontal cortex, and reduced metabolism seems to promote good sleep. The cap could offer those who struggle to sleep a safer, healthier alternative to sleeping pills, and it is hoped that a large-scale trial of the cap will begin soon.
Structured exercise has also proven very effective in curing or easing insomnia. If you can't manage to get it done on your own, you might want to think about some sessions with one of the Diets Don't Work personal trainers in Windsor and Berkshire. They can help with more than exercise: all block bookings of personal training sessions include a nutritional assessment and advice. Both exercise and good healthy eating will really help counteract the stresses common to modern life and get you the good night's sleep that you need!

Gas clouds may have created biggest cosmic explosions

THEY would make supernovae look like firecrackers. Giant gas clouds in the early universe could have powered the most energetic eruptions since the big bang.
Evidence for supermassive black holes - weighing millions or billions of suns - has been found in the early universe, but no one knows how they grew so big so fast. Tiny black holes, weighing as much as a star, simply didn't have enough time to clump together into the behemoths.
One theory suggests huge gas clouds around at the time collapsed into middleweight "seed" black holes. These could then have attracted more matter and become supermassive.
Now Pedro Montero of the Max Planck Institute for Astrophysics in Garching, Germany, and colleagues have calculated how these gas clouds, weighing a million suns, might have evolved into seed black holes. They also found that clouds don't always form black holes, but either way they would have created powerful explosions.
The clouds are so massive that they begin to contract under their own weight, eventually becoming dense enough to trigger nuclear reactions. These provide an outward pressure that counteracts the clouds' collapse.
What happens next depends on the clouds' chemical composition. Heavy elements such as oxygen and nitrogen - spat out by dying stars - boost the rate of nuclear reactions. If a giant gas cloud had at least 10 per cent of the sun's proportion of these elements, they would set off enough reactions to overwhelm gravity's inward pull. This would blow a cloud apart in an explosion with 100 times the electromagnetic energy of any supernova today, the team says (arxiv.org/abs/1108.3090).
Such gigantic, bright bursts might be detected with future observatories that could search for fleeting events, says Mitch Begelman of the University of Colorado, Boulder.
If the cloud contained fewer heavy elements providing outward pressure, gravity would win out and the cloud would collapse into a seed black hole. But that would release even more energy than in the detonation scenario because high pressures and temperatures in the cloud's core would lead energetic photons there to turn into pairs of electrons and their antimatter counterparts. These would annihilate, releasing about 10,000 times the electromagnetic energy of the brightest supernovae in the form of neutrinos. These particles, which rarely interact with normal matter, are invisible, making the bursts ultra-powerful but not unusually bright.
Begelman says future models could be made more realistic if they allowed different parts of the cloud to rotate at different rates.

Friday, August 26, 2011

Extraterrestrial dust reveals asteroid's past and future



Talk about seeing a world in a grain of sand. A sprinkling of asteroid dust that slipped into Japan's Hayabusa probe when it touched down on the asteroid Itokawa six years ago has revealed surprising details about the space rock's past and its likely future.
Hayabusa was meant to land on the 500-metre-wide asteroid in 2005 and fire projectiles into its surface, scooping up the resulting debris for later return to Earth. But the projectiles never fired, and team members had to wait five long years for the probe, which suffered numerous equipment failures, to limp back to Earth (see a picture of it after it touched down in Australia).
They hoped that some asteroid dust had managed to find its way into Hayabusa's collection chamber when the probe touched Itokawa, but even when they saw dust there they were sceptical. "It was me who opened the sample catcher," says Tomoki Nakamura of Tohoku University in Sendai. "I could not believe they were real Itokawa samples."
Now, he and dozens of other researchers are reporting their analyses of the samples, which comprise more than 1500 rocky particles from Itokawa, all smaller than 0.2 millimetres across.

Larger parent

The studies suggest that Itokawa was once part of a much larger asteroid. The conclusion is based on the fact that the particles show a range of minerals that must have reached 800 °C to form (see picture). The decay of radioactive aluminium isotopes could have created that much heat if the parent body was at least 20 kilometres across – 40 times its current size, Nakamura and his collaborators say.
"The body needs to be big; otherwise, it would lose the temperature too quickly for these processes to occur," says Trevor Ireland of the Australian National University in Canberra, who analysed some of the samples.
Indeed, data from Hayabusa itself had already determined that Itokawa has had its share of knocks in the past. The asteroid's gravitational pull on the probe revealed that Itokawa has a very low density, suggesting it is a rocky rubble pile that probably coalesced after its parent body was smashed in an impact.

Solar wind

The samples also hint at a bleak future for Itokawa. Keisuke Nagao of the University of Tokyo and colleagues studied noble gases, such as neon, in three grains. These gases can be implanted by charged particles streaming in from the solar wind and even from beyond the solar system.
The results suggest that the grains have been exposed to these charged particles for no more than 8 million years. This implies either that Itokawa coalesced 8 million years ago, or that it loses tens of centimetres of material from its surface every million years, exposing new layers of rubble in the process.
This could occur if dust grains lifted off the surface after micrometeoroid impacts simply float away from the lightweight asteroid. If Itokawa is losing its surface at that rate, it may completely disappear in less than a billion years.
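The lifetime estimate follows from simple arithmetic on the figures quoted above. A quick sketch, assuming a loss rate of roughly 30 centimetres per million years (the article only says "tens of centimetres", so the exact number is an assumption):

```python
# Rough lifetime estimate for Itokawa from the article's figures:
# the asteroid is ~500 m wide and may lose "tens of centimetres"
# of surface material every million years.

radius_m = 250.0        # half of the ~500 m width
loss_per_myr_m = 0.3    # assumed: ~30 cm eroded per million years

lifetime_myr = radius_m / loss_per_myr_m
print(f"Estimated survival time: {lifetime_myr:.0f} million years")
# A few hundred million years -- comfortably "less than a billion",
# as the article states.
```

Varying the assumed rate across the "tens of centimetres" range (10 to 90 cm per million years) still keeps the answer below a billion years.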

Disappearing space rock

"It's a bit sad to think it will eventually disappear," says Scott Sandford of NASA's Ames Research Center in California, who analysed some of the samples.
But he adds: "The 'disappearance' of Itokawa has its compensations. For one thing, I'd rather Itokawa be whittled away than to have it run into the Earth as a single object, which would cause some serious issues for the Earth if it happened. Also, one should remember that it is the process of whittling away at asteroids like Itokawa that produces the smaller meteoroids that ultimately land on the Earth as meteorites."
This particular finding about its future may actually have been helped by Hayabusa's failure to fire projectiles into the asteroid. "Solar wind penetrates only 100 nanometres or so into rock," says Ireland. "The gun firing would have meant that the projectile penetrated into the asteroid, pushing out chips and fragments." That would have made it "very hard to isolate surface pieces, which is where the solar wind resides", he adds.

Link confirmed

The samples' composition matches that of the most common type of meteorite found on Earth, called ordinary chondrites. Since Itokawa is classified as a stony S-type asteroid – the most common kind in the inner asteroid belt – the analyses confirm that ordinary chondrites come from S-type asteroids.
The studies also highlight the importance of returning samples from extraterrestrial bodies to Earth for study. "The analysis apparatus is too heavy to carry to an asteroid," Nakamura told New Scientist.
Alexander Krot at the University of Hawaii at Manoa, who was not part of the team, says two other sample-return missions – Japan's Hayabusa-2 and NASA's OSIRIS-Rex – will blast off in 2014 and 2016 to collect samples from asteroids rich in minerals that formed in water.
These could reveal clues about "one of the most outstanding questions in planetary science – the origin of Earth's water", he says in a commentary in Science. Studies of hydrogen isotopes in the water-formed minerals in the target asteroids could help determine if Earth's water was delivered by asteroids or comets or simply came from the dust from which the planet formed.

Image searches 'poisoned' by cybercriminals

ALL Pedro Bueno did was run a regular Google search for "iPhone with antenna" while trying to fix the Wi-Fi on his wife's cellphone. Moments later he was yet another victim of "search engine poisoning" - the latest battleground in the ongoing war between cybercriminals and Google.
The Google results page offered Bueno several image hits as well as the regular results. "I decided to see one of the pictures and clicked on it. It then started to load and suddenly I was redirected to another page," he wrote in a posting on the Internet Storm Center website, a volunteer group that monitors computer crime.
Claiming to be an antivirus program from the non-existent "Apple Security Center", this web page displayed a list of files that were supposedly trojans, spyware and other malware hidden on his computer. In fact, he had been sent to a fake antivirus website. At this point, the user may be tricked into paying for unnecessary antivirus protection or a virus is downloaded onto the unwitting user's computer. If you're unlucky and unwary, both.
Search engine poisoning is booming. Internet security firm Trend Micro estimates that in May 2011, more than 113 million users were redirected to malicious pages due to search engine poisoning. Hijacking image searches rather than text-based web searches is the fraudsters' latest twist on a popular scam.
"It's an arms race," says Christian Platzer of the cybersecurity lab at the Technical University of Vienna, Austria. Hackers write code to fool search engines into giving bogus results, and search-engine companies fight back by writing code to block their scams.
These scams are "pretty much automated," says Bojan Zdrnja, a computer security specialist in Croatia. It works like this: hackers gain access to legitimate websites and install programs that monitor Google Trends for hot keywords - words relating to any major news story, for example. The program then searches for content - including images - related to the hot topics and uses that material to automatically generate new web content of its own. Often the hackers will compromise a legitimate site that Google's software bots rate as credible and simply add their own content. This is not normally visible on the site, nor does the owner know about it.
As Google's bots crawl through the web, the malicious program identifies them and feeds them the automatically generated content from these faked web pages. Because everything on the page is specifically chosen to relate to that topic - be it Amy Winehouse's death or the shootings in Norway - the fake web page and the "poisoned" image quickly appear near the top of the relevant search results.
Next, the user clicks on the thumbnail of the photo they want, and their browser requests the page the image originated from. The attacker's program redirects them to a fake antivirus website – putting them at risk.
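The trick at the heart of this scheme is "cloaking": the compromised site answers differently depending on who is asking. The toy sketch below illustrates only that branching logic; the page contents, URL and user-agent check are invented for illustration, and real attacks are considerably more elaborate.

```python
# Toy illustration of search-poisoning "cloaking": the same URL serves
# keyword-stuffed bait to a search engine's crawler, but redirects a
# human visitor to a scareware page. Illustrative only.

GENERATED_KEYWORD_PAGE = "<html>auto-generated page stuffed with trending keywords</html>"
REDIRECT_TO_FAKE_AV = "HTTP/1.1 302 Found\r\nLocation: http://fake-antivirus.example/\r\n"

def handle_request(user_agent: str) -> str:
    if "Googlebot" in user_agent:
        # Feed the crawler the auto-generated, topical content so the
        # poisoned image ranks highly in search results.
        return GENERATED_KEYWORD_PAGE
    # A human arriving from the results page gets bounced to the
    # fake antivirus site instead.
    return REDIRECT_TO_FAKE_AV

print(handle_request("Mozilla/5.0 (compatible; Googlebot/2.1)"))
```

Because the crawler never sees the redirect, the scam is invisible to naive indexing, which is why search engines must actively probe for this behaviour.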
"Google has done a pretty good job with standard searches," says Zdrnja, by detecting malware and warning users of potentially harmful pages. Blocking poisoned images from searches is the next challenge. A Google spokeswoman said: "We have cut down on the bad Image Search links by over 90 per cent since their peak at the start of May."

Thursday, August 25, 2011

Did quake or tsunami cause Fukushima meltdown?

Japan's nuclear safety agency today rejected a claim in British newspaper The Independent that the earthquake itself, not the subsequent tsunami, destroyed cooling systems leading to meltdowns at the Fukushima Daiichi nuclear plant.
"It is not correct," a spokesman for Japan's nuclear safety watchdog, the Nuclear and Industrial Safety Agency (NISA), told New Scientist.
The claim made in The Independent contradicts public reassurances from the Tokyo Electric Power Company (TEPCO), the company that owns the plant, that its facility stood up to the quake as it should, but was overwhelmed by the tsunami. If the quake did cause the damage, it could call into question the resilience of TEPCO's other nuclear installations in Japan. TEPCO and Japan's nuclear industry as a whole have been criticised for attempting to cover up accidents in the past.
The paper reported that workers said they had seen cooling-water pipes bursting as they were evacuating from the nuclear plant following the quake at 2.52 pm on 11 March – before the tsunami struck about 45 minutes later.
It also quoted nuclear engineers who concluded from data released by TEPCO that coolant systems must have failed shortly after the quake.

Meltdown inevitable

"There was already so much damage to the cooling system that a meltdown was inevitable," Mitsuhiko Tanaka, a former nuclear plant designer, is quoted as saying.
Tanaka said that according to TEPCO's own data, emergency water-circulation equipment started up automatically shortly after the quake. "This only happens when there is a loss of coolant," he told The Independent. Likewise, between 3.04 pm and 3.11 pm, water sprayers in the containment vessel of reactor unit 1 were activated; Tanaka says this is a failsafe for when all other cooling systems have failed.
So by the time the tsunami struck at 3.37 pm, "the plant was already on its way to melting down", says the newspaper.
The Independent also quotes the results of a NISA visit to Fukushima nine days before the quake. It says that NISA warned TEPCO about its failure to inspect critical machinery at the plant, including recirculation pumps.

No damage

NISA's spokesman said that the agency's press release about its visit on 2 March may have been misunderstood. "There was no damaged piping in the Fukushima Daiichi nuclear power plant, as claimed in the article," he said.
What the press release actually said was that some of TEPCO's periodic equipment checks were behind schedule, said the spokesman.
NISA also rejected the central claim of the article: that the quake, not the tsunami, caused the critical damage leading to meltdown. "It is not correct," said the spokesman. "Before the tsunami hit, the cooling system was operated by diesel generators in the plant [to compensate for] a loss of external power sources after the earthquake."
So not until the tsunami swept away the diesel generators did the cooling system fail, ultimately causing meltdowns.

Viennese backup

NISA's version of events was backed up yesterday by the International Atomic Energy Agency in Vienna, Austria, which sent a fact-finding mission to Fukushima in May.
An IAEA spokesman said that a report from the mission – led by Mike Weightman, the UK's chief inspector of nuclear installations – contains detailed accounts of the failure of cooling systems in the early hours of the disaster which challenge the idea that the quake caused the damage, as claimed in The Independent.
Meanwhile, TEPCO said on Wednesday that overall radiation released from the three damaged Fukushima reactors is now a 10-millionth of peak levels recorded on 15 March, just after the accident.
Wednesday also saw reactor 3 of the Tomari nuclear plant in Hokkaido become the first of Japan's nuclear installations since the disaster to resume full commercial operation.

IBM unveils microchip based on the human brain

How to replicate the squishy sophistication of the human brain in hard metal and silicon? IBM thinks it's found a way, and to prove it has built and tested two new "cognitive computing" microchips whose design is inspired by the human brain.
In the mammalian brain, neurons send chemical signals to each other across tiny gaps called synapses. A neuron's long "tail", the axon, sends the signals from its multiple terminals; the receptive parts of other neurons – the dendrites – collect them.
Each of IBM's brain-mimicking silicon chips is a few square millimetres in size and holds a grid of 256 parallel wires that represent dendrites of computational "neurons" crossed at right angles by other wires standing in for axons. The "synapses" are 45-nanometre transistors connecting the criss-crossing wires and act as the chips' memory; one chip has 262,144 of them and the other 65,536. Each electrical signal crossing a synapse consumes just 45 picojoules – a thousandth of what typical computer chips use.
Because the neurons and synapses are so close together, the pieces of hardware responsible for computation and memory are also much closer than in ordinary computer chips. Conventionally, the memory sits to the side of the processor, but in the new chips the memory – the synapses – and the processors – the neurons – are on top of each other, so they don't need to use as much energy sending electrons back and forth. That means the chips can perform parallel processing far more efficiently than conventional computers.
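The crossbar layout described above can be sketched in software: axon wires crossing dendrite wires, with a synaptic weight stored at every crossing. The integrate-and-fire rule and the binary weights below are generic illustrative assumptions, not IBM's actual circuit design.

```python
# Minimal software model of a 256 x 256 neurosynaptic crossbar:
# each of 256 axon wires crosses 256 dendrite (neuron) wires, with a
# synapse at every crossing -- 256 * 256 = 65,536 synapses, matching
# the smaller of the two chips. The firing rule is illustrative only.

import random

N = 256
random.seed(0)
# synapse[i][j]: weight connecting axon i to neuron j. On the chip this
# memory sits in the grid itself, right next to the computation.
synapse = [[random.choice([0, 1]) for _ in range(N)] for _ in range(N)]

def step(spiking_axons, threshold=16):
    """One tick: active axons deposit charge through their synapses;
    neurons whose accumulated input reaches the threshold fire."""
    potential = [0] * N
    for i in spiking_axons:
        for j in range(N):
            potential[j] += synapse[i][j]
    return [j for j in range(N) if potential[j] >= threshold]

fired = step(spiking_axons=range(32))
print(f"{len(fired)} of {N} neurons fired")
```

Note that in this model every synapse is consulted only when its axon actually spikes, which mirrors the chips' event-driven energy budget: no spikes, no work.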
In preliminary tests, the chips were able to play a game of Pong, control a virtual car on a racecourse and identify an image or digit drawn on a screen. These are all tasks computers have accomplished before, but the new chips managed to complete them without needing a specialised program for each task. The chips can also "learn" how to complete each task if trained.

Fewer watts than Watson

Eventually, by connecting many such chips, Dharmendra Modha of IBM Research – Almaden, in San Jose, California, hopes to build a shoebox-sized supercomputer with 10 billion neurons and 100 trillion synapses that consumes just 1 kilowatt of power. That may still sound like a lot – a standard PC uses only a few hundred watts – but a supercomputer like IBM's Watson uses hundreds of kilowatts. By contrast, the ultra-efficient human brain is estimated to have 100 billion neurons and at least 100 trillion synapses but consumes no more than 20 watts.
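The power figures quoted above can be recast as a rough per-synapse comparison; the arithmetic below simply restates the article's numbers.

```python
# Per-synapse power budget implied by the article's figures.
goal_synapses = 100e12      # 100 trillion synapses in the planned machine
goal_power_w = 1000.0       # 1 kilowatt target
brain_synapses = 100e12     # "at least 100 trillion" in the human brain
brain_power_w = 20.0        # "no more than 20 watts"

print(f"IBM target: {goal_power_w / goal_synapses:.1e} W per synapse")
print(f"Human brain: {brain_power_w / brain_synapses:.1e} W per synapse")
# The brain's per-synapse budget comes out about 50 times smaller,
# which is why a 1 kW machine is ambitious but still not brain-like.
```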
Kwabena Boahen of Stanford University, California, says scale is one of the key issues. Until the chips contain as many synapses as the human brain, it will be difficult to distinguish their accomplishments from those of other computers.
The chips are sponsored by a US Defense Advanced Research Projects Agency (DARPA) project to create computers whose abilities rival those of the human brain.

Friday, August 19, 2011

Ancient Egyptians believed in coiffure after death




Ancient Egyptians wouldn't be caught dead without hair gel. Style in the afterlife was just as important as it was during life on Earth – and coiffure was key.
To this end, men and women alike would have their tresses styled with a fat-based "gel" when they were embalmed. The evidence of their vanity has been found in a community cemetery dating back 3000 years.
Tomb paintings depict people with cone-shaped objects sitting on their heads, thought to be lumps of scented animal fat. "Once we started looking [for these], we found interesting hairstyles," says Natalie McCreesh of the University of Manchester, UK. "The hair was styled and perfectly curled."
She and her colleagues examined hair samples from 15 mummies from the Kellis 1 cemetery in Dakhla oasis, Egypt, and a further three samples from mummies housed in museum collections in the US, the UK and Ireland. The mummies were of both sexes, between 4 and 58 years old when they died, and dated from 3500 years to 2300 years ago.
When the team examined the samples with light and electron microscopes, it became clear that the hairs of most mummies were coated with a fatty substance, though a few had been coiffed with something resinous.

Because they're worth it

The team used a solvent to separate the coatings from the hairs and determined the coatings' chemical composition. They found that the substances were different to those commonly used to embalm bodies. By contrast, two mummies whose heads had been shaved carried the same embalming materials on their heads as on the bandages around the body.
It seems, says McCreesh, that when a body was being coated in resinous materials, the hair would be covered and protected, or washed and restyled, in order to preserve the dead person's identity.
Maria Perla Colombini of the University of Pisa, Italy, points out that Egyptians were not the only ancient society to worry about mummified hair care. In South America, bodies were preserved with resin and pitch, and the hair coloured with powder, she says.
"People presume the ancient Egyptians shaved their heads. The priests and priestesses did, but not everyone. They did take pride in their appearance," says McCreesh.
"The whole point of mummification was to preserve the body as in life. I guess they wanted to look their best. You'd be dressed in your fancy party outfit. You'd want to look beautiful in preparation for the next life".

Beyond space-time: Welcome to phase space

A theory of reality beyond Einstein's universe is taking shape – and a mysterious cosmic signal could soon fill in the blanks
IT WASN'T so long ago we thought space and time were the absolute and unchanging scaffolding of the universe. Then along came Albert Einstein, who showed that different observers can disagree about the length of objects and the timing of events. His theory of relativity unified space and time into a single entity - space-time. It meant the way we thought about the fabric of reality would never be the same again. "Henceforth space by itself, and time by itself, are doomed to fade into mere shadows," declared mathematician Hermann Minkowski. "Only a kind of union of the two will preserve an independent reality."
But did Einstein's revolution go far enough? Physicist Lee Smolin at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, doesn't think so. He and a trio of colleagues are aiming to take relativity to a whole new level, and they have space-time in their sights. They say we need to forget about the home Einstein invented for us: we live instead in a place called phase space.
If this radical claim is true, it could solve a troubling paradox about black holes that has stumped physicists for decades. What's more, it could set them on the path towards their heart's desire: a "theory of everything" that will finally unite general relativity and quantum mechanics.
So what is phase space? It is a curious eight-dimensional world that merges our familiar four dimensions of space and time and a four-dimensional world called momentum space.
Momentum space isn't as alien as it first sounds. When you look at the world around you, says Smolin, you don't ever observe space or time - instead you see energy and momentum. When you look at your watch, for example, photons bounce off a surface and land on your retina. By detecting the energy and momentum of the photons, your brain reconstructs events in space and time.
The same is true of physics experiments. Inside particle smashers, physicists measure the energy and momentum of particles as they speed toward one another and collide, and the energy and momentum of the debris that comes flying out. Likewise, telescopes measure the energy and momentum of photons streaming in from the far reaches of the universe. "If you go by what we observe, we don't live in space-time," Smolin says. "We live in momentum space."
And just as space-time can be pictured as a coordinate system with time on one axis and space - its three dimensions condensed to one - on the other axis, the same is true of momentum space. In this case energy is on one axis and momentum - which, like space, has three components - is on the other (see diagram).
Simple mathematical transformations exist to translate measurements in this momentum space into measurements in space-time, and the common wisdom is that momentum space is a mere mathematical tool. After all, Einstein showed that space-time is reality's true arena, in which the dramas of the cosmos are played out.
Smolin and his colleagues aren't the first to wonder whether that is the full story. As far back as 1938, the German physicist Max Born noticed that several pivotal equations in quantum mechanics remain the same whether expressed in space-time coordinates or in momentum space coordinates. He wondered whether it might be possible to use this connection to unite the seemingly incompatible theories of general relativity, which deals with space-time, and quantum mechanics, whose particles have momentum and energy. Maybe it could provide the key to the long-sought theory of quantum gravity.
Born's idea that space-time and momentum space should be interchangeable - a theory now known as "Born reciprocity" - had a remarkable consequence: if space-time can be curved by the masses of stars and galaxies, as Einstein's theory showed, then it should be possible to curve momentum space too.
At the time it was not clear what kind of physical entity might curve momentum space, and the mathematics necessary to make such an idea work hadn't even been invented. So Born never fulfilled his dream of putting space-time and momentum space on an equal footing.
That is where Smolin and his colleagues enter the story. Together with Laurent Freidel, also at the Perimeter Institute, Jerzy Kowalski-Glikman at the University of Wroclaw, Poland, and Giovanni Amelino-Camelia at Sapienza University of Rome in Italy, Smolin has been investigating the effects of a curvature of momentum space.
The quartet took the standard mathematical rules for translating between momentum space and space-time and applied them to a curved momentum space. What they discovered is shocking: observers living in a curved momentum space will no longer agree on measurements made in a unified space-time. That goes entirely against the grain of Einstein's relativity. He had shown that while space and time were relative, space-time was the same for everyone. For observers in a curved momentum space, however, even space-time is relative (see diagram).


This mismatch between one observer's space-time measurements and another's grows with distance or over time, which means that while space-time in your immediate vicinity will always be sharply defined, objects and events in the far distance become fuzzier. "The further away you are and the more energy is involved, the larger the event seems to spread out in space-time," says Smolin.
For instance, if you are 10 billion light years from a supernova and the energy of its light is about 10 gigaelectronvolts, then your measurement of its location in space-time would differ from a local observer's by a light second. That may not sound like much, but it amounts to 300,000 kilometres. Neither of you would be wrong - it's just that locations in space-time are relative, a phenomenon the researchers have dubbed "relative locality".
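The quoted figure can be sanity-checked with a rough back-of-the-envelope calculation. The scaling used below, in which the space-time mismatch grows as distance times photon energy divided by the Planck energy, is an assumption made for illustration, not a formula given in the article:

```python
# Rough order-of-magnitude check of the "light second" figure above.
# Assumed scaling (illustrative only): mismatch ~ distance * E / E_Planck.

LIGHT_YEAR_M = 9.461e15   # metres in one light year
E_PLANCK_GEV = 1.22e19    # Planck energy in gigaelectronvolts
C_M_PER_S = 2.998e8       # speed of light, metres per second

distance_m = 10e9 * LIGHT_YEAR_M   # supernova 10 billion light years away
energy_gev = 10.0                  # 10 GeV photons

mismatch_m = distance_m * (energy_gev / E_PLANCK_GEV)
mismatch_light_seconds = mismatch_m / C_M_PER_S

print(f"{mismatch_m:.1e} m ~ {mismatch_light_seconds:.2f} light seconds")
```

With these numbers the mismatch comes out at tens of millions of metres, roughly a quarter of a light second, consistent with the order of magnitude quoted in the text.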
Relative locality would deal a huge blow to our picture of reality. If space-time is no longer an invariant backdrop of the universe on which all observers can agree, in what sense can it be considered the true fabric of reality?
That is a question still to be wrestled with, but relative locality has its benefits, too. For one thing, it could shed light on a stubborn puzzle known as the black hole information-loss paradox. In the 1970s, Stephen Hawking discovered that black holes radiate away their mass, eventually evaporating and disappearing altogether. That posed an intriguing question: what happens to all the stuff that fell into the black hole in the first place?
Relativity prevents anything that falls into a black hole from escaping, because it would have to travel faster than light to do so - a cosmic speed limit that is strictly enforced. But quantum mechanics enforces its own strict law: things, or more precisely the information that they contain, cannot simply vanish from reality. Black hole evaporation put physicists between a rock and a hard place.
According to Smolin, relative locality saves the day. Let's say you were patient enough to wait around while a black hole evaporated, a process that could take billions of years. Once it had vanished, you could ask what happened to, say, an elephant that once succumbed to its gravitational grip. But as you look back to the time at which you thought the elephant had fallen in, you would find that locations in space-time had grown so fuzzy and uncertain that there would be no way to tell whether the elephant actually fell into the black hole or narrowly missed it. The information-loss paradox dissolves.
Big questions still remain. For instance, how can we know if momentum space is really curved? To find the answer, the team has proposed several experiments.
One idea is to look at light arriving at the Earth from distant gamma-ray bursts. If momentum space is curved in a particular way that mathematicians refer to as "non-metric", then a high-energy photon in the gamma-ray burst should arrive at our telescope a little later than a lower-energy photon from the same burst, despite the two being emitted at the same time.
Just that phenomenon has already been seen, starting with some unusual observations made by a telescope in the Canary Islands in 2005 (New Scientist, 15 August 2009, p 29). The effect has since been confirmed by NASA's Fermi gamma-ray space telescope, which has been collecting light from cosmic explosions since it launched in 2008. "The Fermi data show that it is an undeniable experimental fact that there is a correlation between arrival time and energy - high-energy photons arrive later than low-energy photons," says Amelino-Camelia.
Still, he is not popping the champagne just yet. It is not clear whether the observed delays are true signatures of curved momentum space, or whether they are down to "unknown properties of the explosions themselves", as Amelino-Camelia puts it. Calculations of gamma-ray bursts idealise the explosions as instantaneous, but in reality they last for several seconds. While there is no obvious reason to think so, it is possible that the bursts occur in such a way that they emit lower-energy photons a second or two before higher-energy photons, which would account for the observed delays.
In order to disentangle the properties of the explosions from properties of relative locality, we need a large sample of gamma-ray bursts taking place at various known distances (arxiv.org/abs/1103.5626). If the delay is a property of the explosion, its length will not depend on how far away the burst is from our telescope; if it is a sign of relative locality, it will. Amelino-Camelia and the rest of Smolin's team are now anxiously awaiting more data from Fermi.
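The distinguishing test described above can be sketched numerically. All the numbers below are invented for illustration; the point is only the shape of the two predictions: an intrinsic source lag is flat with distance, while a propagation effect such as relative locality grows with it.

```python
# Toy comparison of the two hypotheses for the observed photon delays.
# All values are hypothetical; only the distance dependence matters.

distances_gly = [1.0, 3.0, 5.0, 10.0]   # burst distances, billions of light years

INTRINSIC_LAG_S = 1.5    # hypothetical lag built into every explosion
LAG_PER_GLY_S = 0.2      # hypothetical propagation lag per billion light years

source_model = [INTRINSIC_LAG_S for _ in distances_gly]          # flat
propagation_model = [LAG_PER_GLY_S * d for d in distances_gly]   # linear in distance

for d, s, p in zip(distances_gly, source_model, propagation_model):
    print(f"{d:5.1f} Gly: source model {s:.1f} s, propagation model {p:.1f} s")
```

A sample of bursts at known distances would let observers fit the measured delays against both curves and see which one the data follow.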
The questions don't end there, however. Even if Fermi's observations confirm that momentum space is curved, they still won't tell us what is doing the curving. In general relativity, it is momentum and energy in the form of mass that warp space-time. In a world in which momentum space is fundamental, could space and time somehow be responsible for curving momentum space?

Work by Shahn Majid, a mathematical physicist at Queen Mary University of London, might hold some clues. In the 1990s, he showed that curved momentum space is equivalent to what's known as a noncommutative space-time. In familiar space-time, coordinates commute - that is, if we want to reach the point with coordinates (x,y), it doesn't matter whether we take x steps to the right and then y steps forward, or if we travel y steps forward followed by x steps to the right. But mathematicians can construct space-times in which this order no longer holds, leaving space-time with an inherent fuzziness.
In a sense, such fuzziness is exactly what you might expect once quantum effects take hold. What makes quantum mechanics different from ordinary mechanics is Heisenberg's uncertainty principle: when you fix a particle's momentum - by measuring it, for example - then its position becomes completely uncertain, and vice versa. The order in which you measure position and momentum determines their values; in other words, these properties do not commute. This, Majid says, implies that curved momentum space is just quantum space-time in another guise.
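Non-commutativity of this kind is easy to demonstrate with matrices, which is how quantum mechanics represents measurements. The two matrices below are purely illustrative stand-ins, not the actual position and momentum operators:

```python
# Minimal demonstration that the order of operations can matter:
# for these 2x2 matrices, X.P differs from P.X (they do not commute).

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]    # illustrative stand-in for "measure position"
P = [[0, -1], [1, 0]]   # illustrative stand-in for "measure momentum"

print(matmul(X, P))   # [[1, 0], [0, -1]]
print(matmul(P, X))   # [[-1, 0], [0, 1]]  -- a different result
```

Swapping the order of the two operations gives a different answer, which is the algebraic content of "these properties do not commute".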
What's more, Majid suspects that this relationship between curvature and quantum uncertainty works two ways: the curvature of space-time - a manifestation of gravity in Einstein's relativity - implies that momentum space is also quantum. Smolin and colleagues' model does not yet include gravity, but once it does, Majid says, observers will not agree on measurements in momentum space either. So if both space-time and momentum space are relative, where does objective reality lie? What is the true fabric of reality?
Smolin's hunch is that we will find ourselves in a place where space-time and momentum space meet: an eight-dimensional phase space that represents all possible values of position, time, energy and momentum. In relativity, what one observer views as space, another views as time and vice versa, because ultimately they are two sides of a single coin - a unified space-time. Likewise, in Smolin's picture of quantum gravity, what one observer sees as space-time another sees as momentum space, and the two are unified in a higher-dimensional phase space that is absolute and invariant to all observers. With relativity bumped up another level, it will be goodbye to both space-time and momentum space, and hello phase space.
"It has been obvious for a long time that the separation between space-time and energy-momentum is misleading when dealing with quantum gravity," says physicist João Magueijo of Imperial College London. In ordinary physics, it is easy enough to treat space-time and momentum space as separate things, he explains, "but quantum gravity may require their complete entanglement". Once we figure out how the puzzle pieces of space-time and momentum space fit together, Born's dream will finally be realised and the true scaffolding of reality will be revealed.

Sunday, August 14, 2011

Mars rover reaches rim of vast, ancient crater

THREE years of trundling across treacherous dunes has brought NASA's Opportunity rover to its most significant target yet - a huge crater called Endeavour that was once soaked with water and could hold clues as to whether there was ever life on Mars.
Orbital observations suggest the rocks on Endeavour's rim are more than 3.5 billion years old and so date from the earliest, wettest phase of Martian history, when water carved out vast drainage channels across the planet. Until now, neither Opportunity nor its now-defunct sister Spirit (see "The rovers at a glance") had examined rocks that clearly date from this period.
"This is potentially the most exciting scientific opportunity for the rover mission yet," says John Callas, mission manager at NASA's Jet Propulsion Laboratory in Pasadena, California. That's because mineralogical studies from orbit suggest these ancient rocks formed in a cosy environment for life.
The rovers have previously studied rocks that were once immersed in acidic, salty water (see "Blueberry bonanza"). The 20-kilometre Endeavour, by contrast, seems to have harboured water friendlier to life, since the crater contains clay minerals that require a relatively neutral pH to form. What's more, orbital measurements do not indicate that the ancient water was salty - though salty water may be flowing on Mars today (see "Dark streaks point to salty flows").
Opportunity's arrival at Endeavour marks a huge milestone for the mission. The goal seemed "almost unbelievably audacious" when it started heading there, says James Wray of the Georgia Institute of Technology in Atlanta.
The rover was only designed to last three months and in 2008, when it set out from a smaller crater called Victoria, it had already been on Mars for more than four years (see its route here). "I have gained a wife, lost a grandfather and moved twice [since then]," Wray says. "From that perspective, it does feel like a lot of time has passed."
The rover might reveal what form the water at Endeavour took. If it finds rocks bearing the imprint of ripples, that would suggest that water pooled on the surface, while if it spots rocks threaded with veins of clay minerals, that would point to water percolating underground, Wray says.
Opportunity entered Victoria crater but is likely to spend all its time at Endeavour on the rim. Endeavour's interior is less enticing because sediment from a later, drier period of Martian history has buried the old rocks there.
If it is still functioning a few years from now, the rover could set off for another, smaller crater called Iazu, with rocks that are just as old. "But holy smoly, that's like 15 kilometres away," nearly as far as the three-year trek to Endeavour, says Ray Arvidson of Washington University in St Louis, Missouri. He is content to see Opportunity live out the rest of its days scrutinising rocks and capturing eye-popping vistas on Endeavour's rim. "That's a spectacular way to end the mission," he says.

Blueberry bonanza

Almost immediately after it landed in 2004 in a region of Mars called Meridiani Planum, Opportunity made a watershed discovery: rocks at its landing site had formed in ancient lakes.
The evidence came in part from tiny "blueberries" (see image) made of haematite, which almost always forms in water. Curved lines of sediment pointed to the sweeping motion of a water current, while sulphate salts and the mineral jarosite, which forms in dilute sulphuric acid on Earth, suggested that the water was briny and acidic.

Dark streaks point to salty flows

Mars's image as a dust bowl may need a makeover. Dark streaks seen forming in summer and fading in winter might be signs of water flowing just beneath the surface (see image).
The appearance of streaks on sloping ground, including light streaks seen by NASA's Mars Global Surveyor spacecraft, has been attributed to present-day liquid water. But the link is not watertight - avalanches of dust could also be to blame.
Now, NASA's Mars Reconnaissance Orbiter (MRO) has revealed a previously unknown group of seasonal dark streaks in Mars's southern hemisphere that may be caused by flowing water. Alfred McEwen of the University of Arizona, Tucson, and colleagues found slopes where dark streaks appear every spring and disappear each winter (Science, DOI: 10.1126/science.1204816).
The seasonal streaks, which the team call recurring slope lineae, show no preference for dusty areas, where dust avalanches would be more likely. They are, however, found where radar observations show evidence for underground glaciers.
One possibility is that they result from meltwater that drains down slopes when ice thaws in the spring. But the researchers believe any flowing water lies below the surface - if it were above, MRO probably would have spotted its spectral signature, they say.
Some of the streaks form at -23 °C, well below the freezing point of pure water. Salty water, however, can remain liquid at such temperatures, and if it is flowing just beneath the surface, it might shift dust grains above, causing the dark streaks. "The best explanation we have for these observations so far is flow of briny water, although this study does not prove that," says McEwen.
The discovery of what might be liquid water on present-day Mars raises the possibility that life may have a toehold there. "It is our first chance to see an environment on Mars that might allow for the expression of an active biological process," says Lisa Pratt of Indiana University in Bloomington.

Monday, August 8, 2011

Ultimate logic: To infinity and beyond

The mysteries of infinity could lead us to a fantastic structure above and beyond mathematics as we know it

WHEN David Hilbert left the podium at the Sorbonne in Paris, France, on 8 August 1900, few of the assembled delegates seemed overly impressed. According to one contemporary report, the discussion following his address to the second International Congress of Mathematicians was "rather desultory". Passions seem to have been more inflamed by a subsequent debate on whether Esperanto should be adopted as mathematics' working language.

Yet Hilbert's address set the mathematical agenda for the 20th century. It crystallised into a list of 23 crucial unanswered questions, including how to pack spheres to make best use of the available space, and whether the Riemann hypothesis, which concerns how the prime numbers are distributed, is true.

Today many of these problems have been resolved, sphere-packing among them. Others, such as the Riemann hypothesis, have seen little or no progress. But the first item on Hilbert's list stands out for the sheer oddness of the answer supplied by generations of mathematicians since: that mathematics is simply not equipped to provide an answer.

This curiously intractable riddle is known as the continuum hypothesis, and it concerns that most enigmatic quantity, infinity. Now, 140 years after the problem was formulated, a respected US mathematician believes he has cracked it. What's more, he claims to have arrived at the solution not by using mathematics as we know it, but by building a new, radically stronger logical structure: a structure he dubs "ultimate L".

The journey to this point began in the early 1870s, when the German Georg Cantor was laying the foundations of set theory. Set theory deals with the counting and manipulation of collections of objects, and provides the crucial logical underpinnings of mathematics: because numbers can be associated with the size of sets, the rules for manipulating sets also determine the logic of arithmetic and everything that builds on it.

These dry, slightly insipid logical considerations gained a new tang when Cantor asked a critical question: how big can sets get? The obvious answer - infinitely big - turned out to have a shocking twist: infinity is not one entity, but comes in many levels.

How so? You can get a flavour of why by counting up the set of whole numbers: 1, 2, 3, 4, 5... How far can you go? Why, infinitely far, of course - there is no biggest whole number. This is one sort of infinity, the smallest, "countable" level, where the action of arithmetic takes place.

Now consider the question "how many points are there on a line?" A line is perfectly straight and smooth, with no holes or gaps; it contains infinitely many points. But this is not the countable infinity of the whole numbers, where you bound upwards in a series of defined, well-separated steps. This is a smooth, continuous infinity that describes geometrical objects. It is characterised not by the whole numbers, but by the real numbers: the whole numbers plus all the numbers in between that have as many decimal places as you please - 0.1, 0.01, √2, π and so on.

Cantor showed that this "continuum" infinity is in fact infinitely bigger than the countable, whole-number variety. What's more, it is merely a step in a staircase leading to ever-higher levels of infinities stretching up as far as, well, infinity.
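Cantor's proof that the continuum outstrips the countable infinity is his famous diagonal argument, which can be sketched in a few lines. Given any purported enumeration of decimal expansions (here a finite stand-in), one constructs an expansion that differs from the n-th entry at its n-th digit, so it appears nowhere in the list:

```python
# Diagonal argument sketch: build a digit string that cannot equal any
# row in the (finite, illustrative) enumeration below, because it differs
# from row n at position n.

listed = [
    "1415926535",
    "7182818284",
    "4142135623",
    "0000000000",
]

# take the diagonal digits and shift each one; (d + 1) % 10 always differs from d
new_digits = "".join(str((int(row[i]) + 1) % 10) for i, row in enumerate(listed))

print(new_digits)   # "2251": differs from every row at some position
for i, row in enumerate(listed):
    assert new_digits[i] != row[i]
```

No matter how the list is chosen, the same construction produces a number missing from it, so no countable list can exhaust the reals.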

While the precise structure of these higher infinities remained nebulous, a more immediate question frustrated Cantor. Was there an intermediate level between the countable infinity and the continuum? He suspected not, but was unable to prove it. His hunch about the non-existence of this mathematical mezzanine became known as the continuum hypothesis.

Attempts to prove or disprove the continuum hypothesis depend on analysing all possible infinite subsets of the real numbers. If every one is either countable or has the same size as the full continuum, then it is correct. Conversely, even one subset of intermediate size would render it false.

A similar technique using subsets of the whole numbers shows that there is no level of infinity below the countable. Tempting as it might be to think that there are half as many even numbers as there are whole numbers in total, the two collections can in fact be paired off exactly. Indeed, every set of whole numbers is either finite or countably infinite.
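The exact pairing mentioned above can be made concrete: matching every whole number n with the even number 2n pairs the two collections off one-to-one.

```python
# The even numbers pair off exactly with the whole numbers via n <-> 2n,
# so both sets are countably infinite, despite the tempting "half as many".

def to_even(n):
    return 2 * n        # whole number -> even number

def from_even(m):
    return m // 2       # even number -> whole number

sample = list(range(1, 11))
evens = [to_even(n) for n in sample]

print(evens)   # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
assert [from_even(m) for m in evens] == sample   # the pairing is exact
```

Because the map runs both ways without gaps or collisions, neither set can be "smaller" than the other.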

Applied to the real numbers, though, this approach bore little fruit, for reasons that soon became clear. In 1885, the Swedish mathematician Gösta Mittag-Leffler had blocked publication of one of Cantor's papers on the basis that it was "about 100 years too soon". And as the British mathematician and philosopher Bertrand Russell showed in 1901, Cantor had indeed jumped the gun. Although his conclusions about infinity were sound, the logical basis of his set theory was flawed, resting on an informal and ultimately paradoxical conception of what sets are.


It was not until 1922 that two German mathematicians, Ernst Zermelo and Abraham Fraenkel, devised a series of rules for manipulating sets that was seemingly robust enough to support Cantor's tower of infinities and stabilise the foundations of mathematics. Unfortunately, though, these rules delivered no clear answer to the continuum hypothesis. In fact, they strongly suggested that there might not even be an answer.

Agony of choice

The immediate stumbling block was a rule known as the "axiom of choice". It was not part of Zermelo and Fraenkel's original rules, but was soon bolted on when it became clear that some essential mathematics, such as the ability to compare different sizes of infinity, would be impossible without it.

The axiom of choice states that if you have a collection of sets, you can always form a new set by choosing one object from each of them. That sounds anodyne, but it comes with a sting: you can dream up some twisted initial sets that produce even stranger sets when you choose one element from each. The Polish mathematicians Stefan Banach and Alfred Tarski soon showed how the axiom could be used to divide the set of points defining a spherical ball into six subsets which could then be slid around to produce two balls of the same size as the original. That was a symptom of a fundamental problem: the axiom allowed peculiarly perverse sets of real numbers to exist whose properties could never be determined. This was a grim portent for ever proving the continuum hypothesis.

This news came at a time when the concept of "unprovability" was just coming into vogue. In 1931, the Austrian logician Kurt Gödel proved his notorious "incompleteness theorem". It shows that even with the most tightly knit basic rules, there will always be statements about sets or numbers that mathematics can neither verify nor disprove.

At the same time, though, Gödel had a crazy-sounding hunch about how you might fill in most of these cracks in mathematics' underlying logical structure: you simply build more levels of infinity on top of it. That goes against anything we might think of as a sound building code, yet Gödel's guess turned out to be inspired. He proved his point in 1938. By starting from a simple conception of sets compatible with Zermelo and Fraenkel's rules and then carefully tailoring its infinite superstructure, he created a mathematical environment in which both the axiom of choice and the continuum hypothesis are simultaneously true. He dubbed his new world the "constructible universe" - or simply "L".

L was an attractive environment in which to do mathematics, but there were soon reasons to doubt it was the "right" one. For a start, its infinite staircase did not extend high enough to fill in all the gaps known to exist in the underlying structure. In 1963 Paul Cohen of Stanford University in California put things into context when he developed a method for producing a multitude of mathematical universes to order, all of them compatible with Zermelo and Fraenkel's rules.

This was the beginning of a construction boom. "Over the past half-century, set theorists have discovered a vast diversity of models of set theory, a chaotic jumble of set-theoretic possibilities," says Joel Hamkins at the City University of New York. Some are "L-type worlds" with superstructures like Gödel's L, differing only in the range of extra levels of infinity they contain; others have wildly varying architectural styles with completely different levels and infinite staircases leading in all sorts of directions.

For most purposes, life within these structures is the same: most everyday mathematics does not differ between them, and nor do the laws of physics. But the existence of this mathematical "multiverse" also seemed to dash any notion of ever getting to grips with the continuum hypothesis. As Cohen was able to show, in some logically possible worlds the hypothesis is true and there is no intermediate level of infinity between the countable and the continuum; in others, there is one; in still others, there are infinitely many. With mathematical logic as we know it, there is simply no way of finding out which sort of world we occupy.

That's where Hugh Woodin of the University of California, Berkeley, has a suggestion. The answer, he says, can be found by stepping outside our conventional mathematical world and moving on to a higher plane.

Woodin is no "turn on, tune in" guru. A highly respected set theorist, he has already achieved his subject's ultimate accolade: a level on the infinite staircase named after him. This level, which lies far higher than anything envisaged in Gödel's L, is inhabited by gigantic entities known as Woodin cardinals.

Woodin cardinals illustrate how adding penthouse suites to the structure of mathematics can solve problems on less rarefied levels below. In 1988 the American mathematicians Donald Martin and John Steel showed that if Woodin cardinals exist, then all "projective" subsets of the real numbers have a measurable size. Almost all ordinary geometrical objects can be described in terms of this particular type of set, so this was just the buttress needed to keep uncomfortable apparitions such as Banach and Tarski's ball out of mainstream mathematics.

Such successes left Woodin unsatisfied, however. "What sense is there in a conception of the universe of sets in which very large sets exist, if you can't even figure out basic properties of small sets?" he asks. Even 90 years after Zermelo and Fraenkel had supposedly fixed the foundations of mathematics, cracks were rife. "Set theory is riddled with unsolvability. Almost any question you want to ask is unsolvable," says Woodin. And right at the heart of that lay the continuum hypothesis.

Ultimate L

Woodin and others spotted the germ of a new, more radical approach while investigating particular patterns of real numbers that pop up in various L-type worlds. The patterns, known as universally Baire sets, subtly changed the geometry possible in each of the worlds and seemed to act as a kind of identifying code for it. And the more Woodin looked, the more it became clear that relationships existed between the patterns in seemingly disparate worlds. By patching the patterns together, the boundaries that had seemed to exist between the worlds began to dissolve, and a map of a single mathematical superuniverse was slowly revealed. In tribute to Gödel's original invention, Woodin dubbed this gigantic logical structure "ultimate L".

Among other things, ultimate L provides for the first time a definitive account of the spectrum of subsets of the real numbers: for every forking point between worlds that Cohen's methods open up, only one possible route is compatible with Woodin's map. In particular it implies that Cantor's hypothesis is true, ruling out anything between countable infinity and the continuum. That would mark not only the end of a 140-year-old conundrum, but a personal turnaround for Woodin: 10 years ago, he was arguing that the continuum hypothesis should be considered false.

Ultimate L does not rest there. Its wide, airy space allows extra steps to be bolted to the top of the infinite staircase as necessary to fill in gaps below, making good on Gödel's hunch about rooting out the unsolvability that riddles mathematics. Gödel's incompleteness theorem would not be dead, but you could chase it as far as you pleased up the staircase into the infinite attic of mathematics.

The prospect of finally removing the logical incompleteness that has bedevilled even basic areas such as number theory is enough to get many mathematicians salivating. There is just one question. Is ultimate L ultimately true?

Andrés Caicedo, a logician at Boise State University in Idaho, is cautiously optimistic. "It would be reasonable to say that this is the 'correct' way of going about completing the rules of set theory," he says. "But there are still several technical issues to be clarified before saying confidently that it will succeed."

Others are less convinced. Hamkins, who is a former student of Woodin's, holds that all the legitimate logical constructions for mathematics found so far are on an equal footing. He thinks mathematicians should learn to embrace the diversity of the mathematical multiverse, with spaces where the continuum hypothesis is true and others where it is false. The choice of which space to work in would then be a matter of personal taste and convenience. "The answer consists of our detailed understanding of how the continuum hypothesis both holds and fails throughout the multiverse," he says.

Woodin's ideas need not put paid to this choice entirely, though: aspects of many of these diverse universes will survive inside ultimate L. "One goal is to show that any universe attainable by means we can currently foresee can be obtained from the theory," says Caicedo. "If so, then ultimate L is all we need."

In 2010, Woodin presented his ideas to the same forum that Hilbert had addressed over a century earlier, the International Congress of Mathematicians, this time in Hyderabad, India. Hilbert famously once defended set theory by proclaiming that "no one shall expel us from the paradise that Cantor has created". But we have been stumbling around that paradise with no clear idea of where we are. Perhaps now a guide is within our grasp - one that will take us through this century and beyond.

Richard Elwes is a teaching fellow at the University of Leeds in the UK and the author of Maths 1001: Absolutely Everything That Matters in Mathematics (Quercus, 2010) and How to Build a Brain (Quercus, 2011)

Homemade drone to help phone and Wi-Fi hackers

Invisible to radar, a drone flies over a city, while a hacker uses it to attack the cellphone network, spy on the ground and monitor Wi-Fi networks. But this is no stolen military vehicle. It is a homemade drone built for just a few thousand dollars using parts legally bought on the internet.

This is the future of network hacking, as envisioned by security consultants Mike Tassey and Richard Perkins. They have now built such a drone to prove how easy it is.

Using commercially available parts, they built a plane called WASP that can be a moving base station for cellphone networks, a flying camera and a Wi-Fi "packet sniffer" – all at the same time. Everything was bought legally and building it did not require much engineering know-how, they say.

The drone's frame was bought for less than $300 on the internet. A GSM radio turned it into a mobile version of a cellphone tower, a video camera let it monitor the ground, and internet connectivity came from a USB dongle that can be bought in any electronics shop. The total cost of their drone was about $3800, says Tassey. The pair presented their work at the Black Hat Conference in Las Vegas last week.

Flying fake

Using the drone to attack a cellphone network would be as easy as flying while broadcasting the same signal as an ordinary cell tower, the pair say. Most cellphones are designed to latch on to the strongest available signal. If the local 3G or 4G network has a weaker signal than the one broadcast by the drone, the handset will default to GSM and can be tricked into latching onto the drone's antenna, using it as a base station.
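The downgrade trick hinges on one simple rule: the handset prefers whichever tower it hears loudest. A minimal sketch of that selection logic (not the authors' code — the tower names and signal figures below are invented for illustration):

```python
# Illustrative sketch of strongest-signal cell selection, the handset
# behaviour the downgrade attack described above exploits.
from dataclasses import dataclass

@dataclass
class Tower:
    name: str
    tech: str      # "4G", "3G" or "GSM"
    rssi_dbm: int  # received signal strength; higher (less negative) = stronger

def select_tower(towers):
    """Latch onto the strongest available signal, as most handsets do."""
    return max(towers, key=lambda t: t.rssi_dbm)

# A drone hovering overhead can out-broadcast the legitimate towers.
towers = [
    Tower("legit-4G",  "4G",  -95),
    Tower("legit-GSM", "GSM", -90),
    Tower("drone-GSM", "GSM", -60),  # rogue base station on the drone
]

chosen = select_tower(towers)
print(chosen.name)  # → drone-GSM: the handset latches onto the drone
```

Because the rogue GSM signal is strongest, the handset falls back to the drone's base station even though faster, legitimate networks are in range — at which point its traffic passes through the attacker's hardware.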

In tests, Tassey and Perkins showed that the drone could then listen in, record phone calls and transmit the data over the internet.

But it's not all bad news. The pair say that a cheap drone like theirs could also be used to run search patterns for lost hikers at a fraction of the cost of using a helicopter, for example.