Thursday, December 03, 2009

I'm Going to Open a Church: Starblast Revealed Itself to Me.

03/12/2009

The first miracle of heliocentrism


Hélio Schwartsman, for Folha de Sampa


How do we protect ourselves from the power of the State? The question, one of the central themes of political science, has already consumed a great deal of ink and felled several acres of forest. The most classic account is Thomas Hobbes's, for whom the State is a ferocious monster, a Leviathan, which, despite committing every kind of abuse, is justified because it protects individuals from the war of all against all that defines the state of nature. As long as the public power, that is, the sovereign, guarantees the lives of its subjects, we owe it total obedience, which includes yielding to its slightest whims and tolerating the worst injustices. Only when the sovereign condemns us to death, that is, when it ceases to assure our existence, do we gain the right to rebel against its authority.

OK, I admit that is not a very idyllic scenario. But neither was England during the civil war of the 17th century. Since then things have improved considerably, at least in this little corner of the world we call the democratic West. Although the power of the State is still something to fear, we now enjoy a set of fundamental rights and guarantees that are generally observed. When they are not, we can shout and kick. At worst, we no longer need to be condemned to death to earn the right to revolt.

More than that, in certain circumstances the State can be seen as an ally, one that actively promotes well-being through institutions such as social security and public education and health services.

I have written this long introduction, which in journalism we would call a "nariz de cera," a meandering lead, in order to propose a discussion I consider important: at what level should these fundamental guarantees apply? Do they belong to individuals or to groups? Is it legitimate to grant benefits to specific sectors?

I raise these questions apropos of the tax exemption for churches, the subject of a report of mine published in the Sunday edition of Folha de S.Paulo (readers with access to the digital edition can also see the infographic, which is not available through the link). For those who subscribe to nothing, or who lack the patience to wade through hyperlinks, here is a quick summary of the story.

Claudio Angelo, the Folha's Science editor, Rafael Garcia, a reporter at the paper, and I decided to open a church. With technical assistance from the Folha's legal department and from the law firm Rodrigues Barbosa, Mac Dowell de Figueiredo Gasparian Advogados, we did it. All it took was R$ 418.42 in fees and charges and five (non-consecutive) business days. It is all very simple. There are no theological or doctrinal requirements for founding a religious denomination, and no minimum number of faithful is required.

With the registration of the Igreja Heliocêntrica do Sagrado EvangÉlio (roughly, the Heliocentric Church of the Holy Gospel) and its CNPJ, Brazil's corporate taxpayer number, we were able to open a bank account and make financial investments exempt from income tax (IR) and the financial transactions tax (IOF). But these are not the venture's only tax benefits. Under article 150 of the Constitution, temples of any faith are immune from all taxes on property, income or services related to their essential purposes, and those purposes are defined by the founders themselves. In other words, had we taken the thing further, we could have freed ourselves from IPVA, IPTU, ISS, ITR and various other taxes on assets placed in the church's name.

There are also advantages beyond taxation. Temples are free to organize themselves however they see fit, which includes choosing their own ministers. Once ordained, ministers acquire privileges such as exemption from compulsory military service (I have already consecrated my sons Ian and David as ministers of religion) and the right to special prison quarters.

The relevant public debate here is whether it makes sense to grant so many privileges to religious groups. There is no doubt that freedom of worship is a right to be defended vigorously. It is, after all, an extension of freedom of thought and of expression; without those, we cannot even speak of democracy.

In principle, tax immunity for churches looks like a reinforcement of that religious freedom. The assumption is that it would be relatively easy for a ruler to use taxes to crush a faith he disliked.

That reasoning works better on paper than in reality. Of course the power to tax without limit can destroy not only religions but any activity. In that case, it is worth asking: why protect only religions and not all persons and associations? Well, to some extent the Constitution has already done so, by creating protective mechanisms that apply to everyone, such as the principles of anteriority and non-cumulativity, or the prohibition of taxes with a confiscatory character.

Do temples really need additional protection? I would grant that they did in earlier eras, when it was not implausible that the State might ally itself with the official religion of the day to asphyxiate rival faiths economically. I believe, however, that this reasoning no longer applies, since Brazil no longer has an official religion and it would be constitutionally impossible to tax one temple while leaving another free of the levy.

Moreover, even if we considered tax immunity for churches essential, in its present form it is quite imperfect, since it shields them only from taxes proper, not from fees and contributions. And, precisely to avoid sharing revenue with states and municipalities, the federal government's most recent moves have taken the form of contributions. My sense is that tax immunity has become a kind of dispensable relic.

There you have the first miracle of heliocentrism: it is not every day that a church sacrifices itself in this way, advocating the abolition of advantages from which it benefits.

I know I am preaching in the desert, but Brazil urgently needs to rid itself of certain bad habits, whose origins can be traced back to feudalism and to fascism, and finally become a Republic of equals, in which people hold rights because they are citizens, not because they belong to this or that professional category or were born in a splendid cradle. The same should apply to associations. If only for arithmetical reasons, every time a fiscal privilege is granted to one group, everyone outside that club is immediately burdened. It is worth remembering that the principle of tax solidarity is also one of the foundations of the Republic.


Sunday, November 29, 2009

NYT - Climate Change in Japan

Op-Ed Contributor

In Japan, Concerns Blossom


Published: November 28, 2009

Tokyo

Before the Climate Conference, a Weather Report

President Obama and other world leaders will gather in Copenhagen next week to discuss global warming. The Op-Ed editors asked writers from four different continents to give their own report on the climate changes they've experienced close to home.


IT'S autumn, and the people on the Chuo Line are all bundled up, just as they are in the spring. When I was a student, a friend from Hokkaido, in the north, told me she couldn't stand the winter cold in Tokyo. Although the temperature is lower in northern Japan, in Tokyo there is no moisture in the winter air; the dry winds bounce off the buildings, picking up speed until they seem to cut into your skin, making the cold intolerable.

When I was in elementary school in the mid-1960s, there were still paddy fields and vegetable patches on the outskirts of Tokyo. On frosty winter mornings spears of frozen grass crunched under my shoes as I walked to school, and it often snowed. Winters were harsher than they are now, but the face of spring was more clearly defined, boldly announcing its arrival. Summers were so hot and humid that even if I sat perfectly still the sweat rolled down my forehead, and when I walked through the rank grass on my way to the air-conditioned library, bugs used to jump up from the weeds around my feet.

I liked summer back then. But since the 1980s, the trees and grass have disappeared. The earth is now covered with asphalt and buildings, and the smell of parking lots mingled with oppressively hot gusts of air blown out from apartment air-conditioners hangs over the city; it seems this depressing heat will never go away. The ginkgo trees don't turn yellow until December. In place of the snow that used to fall in winter, the dry, cold blasts of wind come back, followed almost immediately by the unbearable heat of summer.

It's said that the suicide rate rises as the number of trees decreases. For some reason, only cherry trees seem to increase year by year. Many are of the type called somei-yoshino. A while ago I read that somei-yoshino is a cultivar that was artificially bred about a century ago and has since spread throughout the country.

If the conditions are the same, all the flowers on trees of this type bloom at once, and several days later, with no regrets for the brevity of their lives, the blossoms all fall together; thus embodying nationalistic ideology, they came to be regarded as a symbol of Japan even though they don't appear in ancient literary works or paintings. The flowers bloom at the same time because the trees are clones, bred from cuttings.

From March through May, the progress of the "cherry blossom front" is reported nightly on the weather report as it makes its way north through the archipelago. The TV meteorologist, who usually looks worried as she explains the lines that show the ominous movements of high and low pressure areas, becomes oddly cheerful when the topic switches to the "cherry blossom front," and she announces enthusiastically, "In just two weeks the cherries in the Kanto area will be in full bloom!"

Because of climate change, the weather always betrays our expectations, making us wonder if the earth isn't in its last days. Yet the "cherry blossom front" always follows the same course from south to north, which gives us a sense of relief. There are scores of varieties of cherry trees; if types other than somei-yoshino were planted, the "cherry blossom front" wouldn't be so predictable, and the weather report would cause more anxiety, I thought one day last spring as I left the train station and walked down the street lined with cherry trees. Beneath the trees people were sitting, eating box lunches and drinking sake or beer.

When I looked up, the somei-yoshino cherries were in full bloom, blanketing the sky; in the chill air, they looked like snow. Perhaps these white blossoms are the ghosts of snowflakes that no longer fall.

Yoko Tawada is the author of "The Naked Eye" and "Facing the Bridge." This essay was translated by Margaret Mitsutani from the Japanese.

Wednesday, November 18, 2009

Robert Smithson - Art and the Elements

How to Conserve Art That Lives in a Lake?


Published: November 17, 2009

In 1972, a year before his death in a plane crash at 35, the artist Robert Smithson wrote, "I am for an art that takes into account the direct effect of the elements as they exist from day to day." And with the creation of his greatest work — "Spiral Jetty," the huge counterclockwise curlicue of black basalt rock that juts into the Great Salt Lake in rural Utah — he certainly put that conviction to the test.

Photo: An aerial view of Robert Smithson's "Spiral Jetty" in Utah, taken by a camera attached to a latex weather balloon from about 800 feet in the air.

Photo: Rand Eppich of the Getty Conservation Institute surveying the site. The institute is helping Dia to document "Spiral Jetty."

After the piece was constructed in 1970, it spent decades underwater as the lake rose. It has re-emerged in the last few years because of drought, but its appearance has changed markedly, whitened by salt crystals and the buildup of silt. Mr. Smithson, who was fascinated by the concept of entropy, might have welcomed this transformation. But it is less clear what he would have thought about changes wrought by visitors to the remote site, who have, at times, carried off some of the rocks as art souvenirs. Or moved them to construct their own tiny spiral jetties nearby. Or, in one case, used them to spell out what they were undoubtedly drinking at the time — "BEER" — in the pink-hued sand next to the earthwork.

Issues like this recently prompted the Dia Art Foundation, which owns the work, to begin exploring the idea of systematically documenting the site, photographing it from year to year to give curators and conservators a better idea of how it is changing and a better basis for making decisions — always tricky in the world of land art — about whether to intervene.

"In my field we're trained to make condition reports," said Francesca Esmay, Dia's conservator, but she added of Smithson's work, composed of more than 6,000 tons of rock and soil: "Its scale is such that I can't just go out with a camera and pencil and clipboard by myself and describe it." So several months ago she turned to the Getty Conservation Institute, an arm of the J. Paul Getty Trust, which has organized and assisted in conservation and monitoring of art and historic sites from Central America to Africa to the Middle East.

After considering nearly every possible way to document "Spiral Jetty" from above — Rent a weather satellite? An airplane? A helicopter? Use a kite? — the institute, which often works in countries where conservation projects are carried out on shoestring budgets, came up with a remarkably simple solution: a $50 disposable latex weather balloon, easily bought online.

Along with a little helium, some fishing line, a slightly hacked Canon PowerShot G9 point-and-shoot digital camera, an improvised plywood and metal cradle for the camera and some plastic zip ties (to keep the cradle attached and the neck of the balloon cinched), a floating land-art documentation machine was improvised, MacGyver-like.

"I'm not supposed to use the word cheap — it's inexpensive," Rand Eppich, a senior project manager with the Getty institute, said. Mr. Eppich, who conceived the balloon plan, made the two-and-a-half-hour drive from Salt Lake City last May with a Getty assistant, Aurora Tang, and Ms. Esmay, to put the system in use for the first time.

And despite a couple of balloons that popped in the Utah heat ("Thankfully, we didn't have cameras on them," Mr. Eppich said), the three managed to get some spectacular and highly useful shots of the jetty from heights ranging from 800 to 1,600 feet, as they unreeled the fishing line tied to the balloon, allowing it to rise.
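For a rough sense of what those altitudes buy, the ground coverage of each frame can be estimated from the camera's angle of view. The Python sketch below is a back-of-the-envelope illustration only; the roughly 60-degree horizontal field of view assumed for the Canon PowerShot G9 at its widest zoom is an assumption for illustration, not a figure from the article.

    import math

    def ground_footprint(altitude_ft: float, fov_deg: float = 60.0) -> float:
        """Approximate width of ground covered by one straight-down photo, in feet."""
        half_angle = math.radians(fov_deg / 2.0)
        return 2.0 * altitude_ft * math.tan(half_angle)

    # The Getty/Dia team flew the camera at roughly 800 to 1,600 feet.
    for altitude in (800, 1600):
        width = ground_footprint(altitude)
        print(f"{altitude} ft altitude -> about {width:.0f} ft of ground per frame")

Under that assumed field of view, a frame shot from 1,600 feet spans roughly 1,800 feet of ground, comfortably more than the jetty's roughly 1,500-foot-long coil, so the higher shots can take in the whole earthwork at once.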

"You don't need to be skilled conservators to do this part — it's literally like remembering back to childhood birthday parties," said Ms. Esmay, who joined Dia three years ago as its first full-time conservator. She is also responsible for the condition of sites like Walter De Maria's "Lightning Field" in western New Mexico and for works by artists like Donald Judd, Dan Flavin and Louise Bourgeois at Dia:Beacon in Beacon, N.Y.

Mr. Eppich said the Getty's goal was to create a system that Dia could use annually at little cost and one simple enough that Ms. Esmay could operate it herself. "We want to help people do something that's repeatable and sustainable after we're gone," he said.

Preservation concerns about "Spiral Jetty" have arisen lately not only because of the work's re-emergence from the water but also because of plans announced in the last two and a half years by companies to initiate industrial projects near the site. One is a large expansion of a field of solar evaporation ponds used to extract potassium sulfate from the water for fertilizer. Another is a plan for exploratory oil drilling that Dia officials argued would disrupt the way the work would be viewed and potentially harm it physically. As a result of the drilling proposal — currently in limbo — Dia and Utah officials have begun exploring the creation of a buffer zone around the sculpture that would help protect it while still allowing the lake area to be used for other purposes.

But in addition to industrial threats to the work, there are also natural ones, like silt, which has begun to accumulate between the outermost band of the spiral and the next one in, as the lake's level has dropped. The lake is so low it is now possible to walk a quarter-mile into it with the water reaching only knee-high.

"In my personal opinion alone," Ms. Esmay said of the silt, "I think it's to such a degree now that it's foreign to the piece. But in 10 years it could be gone or in one year it could be gone. Or it could be worse. You have no way of knowing, and that's just inherent to the work itself."

She emphasized that the documentation project was not a prelude to any active plans to rebuild or even touch up the jetty. "Something like that might not happen for 20 years, if it ever happens at all," she said, "but at least we'll have 20 years of data that will show the patterns of change."

And if any conservation plans were to go forward, then the really complicated work would begin: trying to figure out what Mr. Smithson would have thought about it.

"Nature does not proceed in a straight line," he wrote. "It is rather a sprawling development. Nature is never finished."

Tuesday, October 20, 2009

NYT - Conceptual Art

Op-Ed Contributor

Has Conceptual Art Jumped the Shark Tank?

Published: October 15, 2009

Christchurch, New Zealand


ART's link with money is not new, though it does continue to generate surprises. On Friday night, Christie's in London plans to auction another of Damien Hirst's medicine cabinets: literally a small, sliding-glass medicine cabinet containing a few dozen bottles or tubes of standard pharmaceuticals: nasal spray, penicillin tablets, vitamins and so forth. This work is not as grand as a Hirst shark, floating eerily in a giant vat of formaldehyde, one of which sold for more than $12 million a few years ago. Still, the estimate of up to $239,000 for the medicine cabinet is impressive — rather more impressive than the work itself.

No disputing tastes, of course, if yours lean toward the aesthetic contemplation of an orderly medicine cabinet. Buy it, and you acquire a work of art by the world's richest and — by that criterion — most successful living artist. Still, neither this piece nor Mr. Hirst's dissected calves and embalmed horses are quite "by" the artist in a conventional sense. Mr. Hirst's name rightfully goes on them because they were his conceptions. However, he did not reproduce any of the medicine bottles or boxes in his cabinet (in the way that Warhol actually recreated Brillo boxes), nor did he catch a shark or do the taxidermy.

In this respect, the pricey medicine cabinet belongs to a tradition of conceptual art: works we admire not for skillful hands-on execution by the artist, but for the artist's creative concept. Mr. Hirst has a talent for coming up with concepts that capture the attention of the art market, putting him in the company of other big names who have now and again moved away from making art with their own hands: Jeff Koons, for example, who has put vacuum cleaners into Plexiglas cases and commissioned an Italian porcelain manufacturer to make a cheesy gold and white sculpture of Michael Jackson and his pet chimp. Mr. Koons need not touch the art his contractors produce; the ideas are his, and that's enough.

Sophisticated gallery owners or curators normally respond with withering condescension to worries about the lack of craftsmanship in contemporary art. Art has moved on, I've heard it argued, since Victorian times, when "she'd painted every hair" was ordinary aesthetic praise. What is important today is not technical skill, but skill in playing inventively with ideas.

Since the endearingly witty Marcel Duchamp invented conceptual art 90 years ago by offering his "ready-mades" — a urinal or a snow shovel, for instance — for gallery shows, the genre has degenerated. Duchamp, an authentic artistic genius, was in 1917 making sport of the art establishment and its stuffy values. By the time we get to 2009, Mr. Hirst and Mr. Koons are the establishment.

Does this mean that conceptual art is here to stay? That is not at all certain, and it is not just auction results that are relevant to the issue. To see why works of conceptual art have an inherent investment risk, we must look back at the whole history of art, including art's most ancient prehistory.

It is widely assumed that the earliest human art works are the stupendously skillful cave paintings of Lascaux and Chauvet, the latter perhaps 32,000 years old, along with a few small realistic sculptures of women and of animals from the same period. But artistic and decorative behavior emerged in a far more distant past. Shell necklaces that look like something you would see at a tourist resort, as well as evidence of ochre body paint, have been found from more than 100,000 years ago. But the most intriguing prehistoric artifacts are much older even than that. I have in mind the so-called Acheulian hand axes.

The earliest stone tools are choppers and blades found in Olduvai Gorge in East Africa, from 2.5 million years ago. These unadorned tools remained unchanged for thousands of centuries, until around 1.4 million years ago when Homo ergaster, Homo erectus and other human ancestral groups started doing something new and remarkable. They began shaping single, thin stone blades, sometimes rounded ovals, but often in what to our eyes are arresting symmetrical pointed leaf or teardrop forms. Acheulian hand axes (after St.-Acheul in France, a site of 19th-century finds) have been unearthed in their thousands, scattered across Asia, Europe and Africa, wherever Homo erectus roamed.

The sheer numbers of hand axes indicate a rate of manufacture beyond needs for butchering animals. Even more curious, unlike other prehistoric stone tools, hand axes often exhibit no evidence of wear on their delicate blade edges, and some are in any case too big for practical use. They are occasionally hewn from colorful stone materials (even with decoratively embedded fossils). Their symmetry, materials and above all meticulous workmanship make them quite simply beautiful to our eyes. What were these ancient yet somehow familiar artifacts for?

The best available explanation is that they are literally the earliest known works of art — practical tools transformed into captivating aesthetic objects, contemplated both for their elegant shape and virtuoso craftsmanship. Hand axes mark an evolutionary advance in human prehistory, tools attractively fashioned to function as what Darwinians call "fitness signals" — displays like the glorious peacock's tail, which functions to show peahens the strength and vitality of the males who display it.

Hand axes, however, were not grown, but consciously, cleverly made. They were therefore able to indicate desirable personal qualities: intelligence, fine motor control, planning ability and conscientiousness. Such skills gained for those who displayed them status and a reproductive advantage over the less capable. Across many thousands of generations this translated into both an increase in intelligence and an evolved sense that the symmetry and craftsmanship of hand axes is "beautiful."

Aesthetically pleasing hand axes constitute an unbroken Stone-Age tradition that stretches over a million years, ending 100,000 to 150,000 years ago, about the time that their makers' African descendants, now called Homo sapiens, started to become articulate speakers of language. These humans were probably finding new ways to amuse and amaze one another with — who knows? — jokes, dramatic storytelling, dancing or hairstyling. Alas, geological layers do not record these other, more perishable aspects of prehistoric life. For us moderns, the arts have come to depict imaginary worlds and express intense emotions with music, painting, dance and fiction.

However, one trait of the ancestral personality persists in our aesthetic cravings: the pleasure we take in admiring skilled performances. From Lascaux to the Louvre to Carnegie Hall — where now and again the Homo erectus hairs stand up on the backs of our necks — human beings have a permanent, innate taste for virtuoso displays in the arts.

We ought, then, to stop kidding ourselves that painstakingly developed artistic technique is passé, a value left over from our grandparents' culture. Evidence is all around us. Even when we have lost contact with the social or religious ideas behind the arts of bygone civilizations, we are still able, as with the great bronzes or temples of Greece or ancient China, to respond directly to craftsmanship. The direct response to skill is what makes it possible to find beauty in many tribal arts even though we often know nothing about the beliefs of the people who created them. There is no place on earth where superlative technique in music and dance is not regarded as beautiful.

The appreciation of contemporary conceptual art, on the other hand, depends not on immediately recognizable skill, but on how the work is situated in today's intellectual zeitgeist. That's why looking through the history of conceptual art after Duchamp reminds me of paging through old New Yorker cartoons. Jokes about Cadillac tailfins and early fax machines were once amusing, and the same can be said of conceptual works like Piero Manzoni's 1962 declaration that Earth was his art work, Joseph Kosuth's 1965 "One and Three Chairs" (a chair, a photo of the chair and a definition of "chair") or Mr. Hirst's medicine cabinets. Future generations, no longer engaged by our art "concepts" and unable to divine any special skill or emotional expression in the work, may lose interest in it as a medium for financial speculation and relegate it to the realm of historical curiosity.

In this respect, I can't help regarding medicine cabinets, vacuum cleaners and dead sharks as reckless investments. Somewhere out there in collectorland is the unlucky guy who will be the last one holding the vacuum cleaner, and wondering why.

But that doesn't mean we need to worry about the future of art. There are plenty of prodigious artists at work in every medium, ready to wow us with surprising skills. And yes, now and again I walk past a jewelry shop window and stop, transfixed by a sparkling, teardrop-shaped precious stone. Our distant ancestors loved that shape, and found beauty in the skill needed to make it — even before they could put their love into words.

Denis Dutton is a professor of the philosophy of art at the University of Canterbury in New Zealand and the author of "The Art Instinct: Beauty, Pleasure and Human Evolution."

Tuesday, October 06, 2009

How Nonsense Sharpens the Intellect

The New York Times - Mind
Published: October 5, 2009

In addition to assorted bad breaks and pleasant surprises, opportunities and insults, life serves up the occasional pink unicorn. The three-dollar bill; the nun with a beard; the sentence, to borrow from the Lewis Carroll poem, that gyres and gimbles in the wabe.


An experience, in short, that violates all logic and expectation. The philosopher Soren Kierkegaard wrote that such anomalies produced a profound "sensation of the absurd," and he wasn't the only one who took them seriously. Freud, in an essay called "The Uncanny," traced the sensation to a fear of death, of castration or of "something that ought to have remained hidden but has come to light."

At best, the feeling is disorienting. At worst, it's creepy.

Now a study suggests that, paradoxically, this same sensation may prime the brain to sense patterns it would otherwise miss — in mathematical equations, in language, in the world at large.

"We're so motivated to get rid of that feeling that we look for meaning and coherence elsewhere," said Travis Proulx, a postdoctoral researcher at the University of California, Santa Barbara, and lead author of the paper appearing in the journal Psychological Science. "We channel the feeling into some other project, and it appears to improve some kinds of learning."

Researchers have long known that people cling to their personal biases more tightly when feeling threatened. After thinking about their own inevitable death, they become more patriotic, more religious and less tolerant of outsiders, studies find. When insulted, they profess more loyalty to friends — and when told they've done poorly on a trivia test, they even identify more strongly with their school's winning teams.

In a series of new papers, Dr. Proulx and Steven J. Heine, a professor of psychology at the University of British Columbia, argue that these findings are variations on the same process: maintaining meaning, or coherence. The brain evolved to predict, and it does so by identifying patterns.

When those patterns break down — as when a hiker stumbles across an easy chair sitting deep in the woods, as if dropped from the sky — the brain gropes for something, anything that makes sense. It may retreat to a familiar ritual, like checking equipment. But it may also turn its attention outward, the researchers argue, and notice, say, a pattern in animal tracks that was previously hidden. The urge to find a coherent pattern makes it more likely that the brain will find one.

"There's more research to be done on the theory," said Michael Inzlicht, an assistant professor of psychology at the University of Toronto, because it may be that nervousness, not a search for meaning, leads to heightened vigilance. But he added that the new theory was "plausible, and it certainly affirms my own meaning system; I think they're onto something."

In the most recent paper, published last month, Dr. Proulx and Dr. Heine described having 20 college students read an absurd short story based on "The Country Doctor," by Franz Kafka. The doctor of the title has to make a house call on a boy with a terrible toothache. He makes the journey and finds that the boy has no teeth at all. The horses who have pulled his carriage begin to act up; the boy's family becomes annoyed; then the doctor discovers the boy has teeth after all. And so on. The story is urgent, vivid and nonsensical — Kafkaesque.

After the story, the students studied a series of 45 strings of 6 to 9 letters, like "X, M, X, R, T, V." They later took a test on the letter strings, choosing those they thought they had seen before from a list of 60 such strings. In fact the letters were related, in a very subtle way, with some more likely to appear before or after others.
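Letter strings in studies of this kind are usually generated by a hidden finite-state "artificial grammar," so that certain letters lawfully follow others even though the strings look random. The exact grammar Proulx and Heine used is not described here; the Python sketch below uses an invented grammar of the same general shape, purely to illustrate how subtle the embedded pattern is.

    import random

    # A hypothetical finite-state grammar: each state maps to (letter, next_state)
    # transitions. This rule set is invented for illustration; it is not the
    # grammar used in the published experiment.
    GRAMMAR = {
        0: [("X", 1), ("V", 2)],
        1: [("M", 1), ("X", 3)],
        2: [("T", 2), ("V", 3)],
        3: [("R", 4), ("M", 2)],
        4: [("T", 5), ("V", 5)],
        5: [],  # terminal state
    }

    def generate_string(min_len: int = 6, max_len: int = 9) -> str:
        """Walk the grammar from state 0, emitting letters until a terminal state."""
        while True:
            state, letters = 0, []
            while GRAMMAR[state] and len(letters) < max_len:
                letter, state = random.choice(GRAMMAR[state])
                letters.append(letter)
            if len(letters) >= min_len:
                return ", ".join(letters)

    for _ in range(5):
        print(generate_string())

Strings produced this way look arbitrary, yet the transition rules constrain which letters can follow which; recognizing rule-conforming strings above chance is taken as evidence that those rules were absorbed implicitly.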

The test is a standard measure of what researchers call implicit learning: knowledge gained without awareness. The students had no idea what patterns their brain was sensing or how well they were performing.

But perform they did. They chose about 30 percent more of the letter strings than a comparison group of 20 students who had read a different, coherent short story, and they were almost twice as accurate in their choices.

"The fact that the group who read the absurd story identified more letter strings suggests that they were more motivated to look for patterns than the others," Dr. Heine said. "And the fact that they were more accurate means, we think, that they're forming new patterns they wouldn't be able to form otherwise."

Brain-imaging studies of people evaluating anomalies, or working out unsettling dilemmas, show that activity in an area called the anterior cingulate cortex spikes significantly. The more activation is recorded, the greater the motivation or ability to seek and correct errors in the real world, a recent study suggests. "The idea that we may be able to increase that motivation," said Dr. Inzlicht, a co-author, "is very much worth investigating."

Researchers familiar with the new work say it would be premature to incorporate film shorts by David Lynch, say, or compositions by John Cage into school curriculums. For one thing, no one knows whether exposure to the absurd can help people with explicit learning, like memorizing French. For another, studies have found that people in the grip of the uncanny tend to see patterns where none exist — becoming more prone to conspiracy theories, for example. The urge for order satisfies itself, it seems, regardless of the quality of the evidence.

Still, the new research supports what many experimental artists, habitual travelers and other novel seekers have always insisted: at least some of the time, disorientation begets creative thinking.

Friday, September 18, 2009

Colored entanglement

18/9/2009

By Fábio de Castro

Agência FAPESP – A group of Brazilian scientists has, for the first time, generated quantum entanglement among three beams of light of different colors. The feat should help clarify the properties of entanglement, which scientists regard as the basis for future technologies such as quantum computing, quantum cryptography and quantum teleportation.

An intrinsically quantum-mechanical phenomenon, entanglement allows two or more particles to share their properties even without any physical link between them.

According to the authors of the study, published on Thursday (September 17) on Science Express, the website of the journal Science, the ability to shift entanglement among different frequencies of light could prove useful for advanced quantum-information protocols.

The group, which brings together researchers from the Universidade de São Paulo (USP) and Brazilians working at the Max Planck Institute and the University of Erlangen-Nuremberg, in Germany, was supported by FAPESP through a Regular Research Grant. The scientists are also members of the Instituto Nacional de Ciência e Tecnologia de Informação Quântica (INCT-IQ).

The discovery grew out of the doctoral thesis of Alessandro de Sousa Villar, which won the 2008 Capes Thesis Prize in Physics as well as the Professor José Leite Lopes prize awarded by the Brazilian Physical Society. Villar, who held a FAPESP doctoral fellowship, is now a researcher at the Max Planck Institute and the University of Erlangen-Nuremberg, both in Germany.

According to the study's lead author, Paulo Nussenzveig, of USP's Physics Institute, the possibility of entangling three different beams of light had been predicted by the same team three years earlier, but had not yet been demonstrated experimentally. Of the three beams, only one lies in the visible part of the spectrum; the other two are in the infrared.

"In 2005 we measured entanglement between two beams for the first time, confirming a theoretical prediction made by other groups in 1988. From there we realized that the information present in the system was more complex than we had imagined, and in 2006 we wrote a theoretical paper predicting the entanglement of three beams, which we have now managed to demonstrate," Nussenzveig told Agência FAPESP.

The scientist explains that, to carry out the study, the group used a device known as an optical parametric oscillator (OPO), which consists of a special crystal placed between two mirrors and pumped by a light source.

"What makes this crystal special is its response to light, which is nonlinear. With it, we can send green light into the system and get infrared light out, for example," he explained. According to him, the continuous-wave OPOs employed in the study have been in use since the 1980s.

Facing a series of difficulties and surprises as they dealt with previously unknown phenomena, the scientists managed to "tame" the system and observe entanglement among three beams of different wavelengths. During the experiment they also uncovered an important effect: the so-called sudden death of entanglement occurred in this case as well.

According to Nussenzveig, a study led by Luiz Davidovich, of the Universidade Federal do Rio de Janeiro (UFRJ) and published in Science in 2007, showed that quantum entanglement can vanish abruptly, "dissolving" the quantum link between the particles, something that could compromise the phenomenon's use in the future development of quantum computers.

The effect, dubbed entanglement sudden death, had been predicted earlier by theoretical physicists and was first observed by the UFRJ group in discrete systems, that is, systems with a finite set of possible outcomes.

"For macroscopic continuous-variable systems there are relatively few studies and theoretical predictions, and there was no experimental work at all. We observed for the first time something that had not been predicted: sudden death in continuous variables. This means it is a collective, global effect," he said.

The heart of quantum physics

According to another author of the study, Marcelo Martinelli, also a professor at USP's Physics Institute, quantum entanglement is the property that distinguishes quantum situations from those in which events obey the laws of classical physics.

"This property shows up in correlations that are different from those of classical physics. When we toss a coin onto the floor, in classical physics, if tails is facing up then heads is facing down. In the quantum world, the outcome involves different degrees of freedom and correlation angles," he explained.

According to Martinelli, entanglement had already been verified many times in discrete systems, and between two or more systems in the continuous-variable domain. Whenever there were three or more subsystems, however, the entanglement generated had always involved beams of light of the same color.

"This is interesting because it opens the way for us to take a system that interacts with one frequency of the electromagnetic spectrum and transfer its quantum properties to another system, which is the so-called quantum teleportation," said the scientist, who coordinates the FAPESP-funded Regular Grant project "Teletransporte de informação quântica entre diferentes cores" (Teleportation of quantum information between different colors).

According to Martinelli, this could be done by using entangled beams as the vehicle that carries the information. "But if we can only handle variables of the same color, the quantum information of the first system can pass to a second and a third system only if they all operate at the same frequency. Our scheme would allow quantum information to be transferred between different bands of the electromagnetic spectrum," he explained.

By observing entanglement sudden death in a continuous-variable system for the first time, the group obtained new information about the nature of the phenomenon.

Martinelli explains that every system that interacts with its surroundings gradually suffers losses. A kettle in contact with the environment cools continuously until it reaches thermal equilibrium with the outside temperature. But that process is exponential and would be complete only after an infinite amount of time. In practice, the kettle is always a little warmer than its surroundings.

"In the case of entanglement, however, the interaction with the environment does not always follow this exponential decay. Sometimes the entanglement disappears in a finite time, which is what characterizes the so-called sudden death. We saw that this also happens for continuous variables and, by adjusting the operating parameters of our OPO, we were able to control this sudden death," he said.

According to him, this discovery matters for the day when quantum information is transported over real channels. "If we send that information through optical fiber, for example, we cannot afford to lose the entanglement in the system through propagation losses. If quantum information comes to play a central role in information technology, understanding the dynamics of sudden death and of entanglement will be even more fundamental," he said.

Besides Villar, Nussenzveig and Martinelli, the study's authors include Antônio Sales Oliveira Coelho and Felippe Alexandre Silva Barbosa, both graduate students at USP's Physics Institute, and Katiúscia Cassemiro, of the Max Planck Institute in Germany.

The article "Three-Color Entanglement," by Paulo Nussenzveig and others, is available to Science subscribers at www.scienceexpress.org.

Thursday, September 03, 2009

Galaxies caught in the act

Science news

3/9/2009

Agência FAPESP – A snapshot of cosmic proportions has just been captured by an international group of astronomers. The images reveal the connection between the Andromeda and Triangulum galaxies.

As with any affair between movie stars, there had long been suspicions about the relationship, but no proof until now. In a paper published in Thursday's (September 3) issue of the journal Nature, the scientists present evidence of the connection and describe how larger galaxies grow even bigger by incorporating stars from smaller neighboring galaxies.

This model of galactic evolution, known as hierarchical, predicts that large galaxies such as Andromeda, which can even be seen with the naked eye from the Northern Hemisphere, should be surrounded by the "leftovers" of smaller galaxies.

For the first time, astronomers have images that confirm the hierarchical model. The work, which involved researchers from Australia, France, Germany and the United Kingdom, was led by Alan McConnachie, of the Herzberg Institute of Astrophysics, part of Canada's National Research Council.

"The Andromeda galaxy is our giant neighbor, located more than 2.5 million light-years from the Milky Way. Our study covered an area nearly 1 million light-years across, centered on Andromeda. It is the most extensive and deepest image ever made of a galaxy," said Geraint Lewis, of the University of Sydney, in Australia, another author of the study.

"We mapped the unexplored outskirts of Andromeda for the first time and found stars and large structures that are remnants of smaller galaxies, incorporated by Andromeda as part of its continuing growth," he explained.

The biggest surprise for the group was discovering that Andromeda is interacting with its neighbor, the Triangulum galaxy, which can be seen from the Northern Hemisphere with a small telescope. "Millions of stars from the Triangulum galaxy have already been pulled away by Andromeda as a result of this relationship," Lewis said.

Like paparazzi permanently staked out at the homes of film and television stars, the group intends to keep watching the outcome of the interaction between the galaxies, expecting that it may end in a far more solid union. "The two may merge entirely," Lewis said.

The study also indicates that galaxies are much larger than previously thought, with their gravitational influence extending far beyond the stars closest to their centers.

"Since Andromeda is considered a typical galaxy, it was surprising to see how vast it is. We found stars at distances of up to 100 times the radius of the galaxy's central disk," Lewis said. For the study the astronomers used the Canada-France-Hawaii Telescope, on Mauna Kea, in Hawaii.

The article "The remnants of galaxy formation from a panoramic survey of the region around M31," by Alan McConnachie and others, is available to Nature subscribers at www.nature.com.

Wednesday, September 02, 2009

A molecule against diabetes and obesity

Science news

2/9/2009

Agência FAPESP – More than 180 million people worldwide have type 2 diabetes, the most common form of the disease. The total keeps growing at an alarming rate, which has led research centers in several countries to look for new ways of attacking the problem, whose main risk factors include obesity.

An international group of researchers has just presented a potential candidate: the protein TGR5. The scientists found that activating it can reduce weight gain and treat diabetes. The study was published on Wednesday (September 2) in the journal Cell Metabolism.

Earlier work by the same group showed that bile acids (produced in the liver, they break down fats), by activating TGR5 in muscle and brown adipose tissue, were able to increase energy expenditure and to prevent, or even reverse, induced obesity in mice.

In the new study, the group led by professors Kristina Schoonjans and Johan Auwerx, of the École Polytechnique Fédérale de Lausanne, in Switzerland, examined the role of TGR5 in the intestine, where the protein is expressed in cells specialized in hormone production.

The researchers observed that these cells, TGR5-expressing enteroendocrine cells, control the secretion of the hormone GLP-1, which plays a critical role in controlling pancreatic function and regulating blood sugar levels.

Schoonjans and Auwerx worked together with Roberto Pellicciari, of the University of Perugia, in Italy, who developed a TGR5 activator, called INT-777, in collaboration with the American company Intercept Pharmaceuticals.

The group showed that, in laboratory tests in mice, activating TGR5 can effectively treat diabetes and reduce body mass. The authors also showed that these effects were linked to increases both in GLP-1 secretion and in energy expenditure.

According to the researchers, the results point to a new approach to treating type 2 diabetes and obesity, based on boosting GLP-1 secretion through administration of the TGR5 activator.

The article by Kristina Schoonjans and others is available to Cell Metabolism subscribers at www.cell.com/cell-metabolism.
 


Tuesday, September 01, 2009

After the Transistor, a Leap Into the Microcosm

Published: August 31, 2009

YORKTOWN HEIGHTS, N.Y. — Gaze into the electron microscope display in Frances Ross's laboratory here and it is possible to persuade yourself that Dr. Ross, a 21st-century materials scientist, is actually a farmer in some Lilliputian silicon world.

Photo: Frances Ross, a scientist at I.B.M. Research in Yorktown Heights, N.Y., operating an electron microscope, which allows her to study nanowires, about one one-thousandth the width of a human hair, as they grow.

Dr. Ross, an I.B.M. researcher, is growing a crop of mushroom-shaped silicon nanowires that may one day become a basic building block for a new kind of electronics. Nanowires are just one example, although one of the most promising, of a transformation now taking place in the material sciences as researchers push to create the next generation of switching devices smaller, faster and more powerful than today's transistors.

The reason that many computer scientists are pursuing this goal is that the shrinking of the transistor has approached fundamental physical limits. Increasingly, transistor manufacturers grapple with subatomic effects, like the tendency for electrons to "leak" across material boundaries. The leaking electrons make it more difficult to know when a transistor is in an on or off state, the information that makes electronic computing possible. They have also led to excess heat, the bane of the fastest computer chips.

The transistor is not just another element of the electronic world. It is the invention that made the computer revolution possible. In essence it is an on-off switch controlled by the flow of electricity. For the purposes of computing, when the switch is on it represents a one. When it is off it represents a zero. These zeros and ones are the most basic language of computers.

For more than half a century, transistors have gotten smaller and cheaper, following something called Moore's Law, which states that circuit density doubles roughly every two years. The trend was anticipated by the computer scientist Douglas Engelbart in 1959 and then described by Gordon Moore, the co-founder of Intel, in a now-legendary 1965 article in Electronics magazine that gave the law its name.

Today's transistors are used by the billions to form microprocessors and memory chips. Often called planar transistors, they are built on the surface (or plane) of a silicon wafer by using a manufacturing process that precisely deposits and then etches away different insulating, conducting and semiconducting materials with such precision that the industry is now approaching the ability to place individual molecules.

A typical high-end Intel microprocessor is today based on roughly one billion transistors or more, each capable of switching on and off about 300 billion times a second and packed densely enough that two million transistors would fit comfortably in the period at the end of this sentence.

In fact, this year, the chip industry is preparing to begin the transition from a generation of microprocessor chips based on a minimum feature size of 45 nanometers (a human hair is roughly 80,000 nanometers in width) to one of 32 nanometers — the next step down into the microcosm. But the end of this particular staircase may be near.
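The step from 45 nanometers to 32 nanometers is one rung of that staircase: linear dimensions shrink by roughly 0.7, so the same circuit takes about half the area and density roughly doubles, which is the cadence Moore's Law describes. A small sketch of the arithmetic (treating each node step as an exact geometric scaling is a simplification; real processes deviate from it):

    # Successive process nodes mentioned in the article, in nanometers.
    nodes_nm = [90, 65, 45, 32, 22]

    for older, newer in zip(nodes_nm, nodes_nm[1:]):
        linear_shrink = newer / older        # about 0.7x per generation
        density_gain = (older / newer) ** 2  # area scales with the square of the shrink
        print(f"{older} nm -> {newer} nm: linear x{linear_shrink:.2f}, density x{density_gain:.2f}")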

"Fundamentally the planar transistor is running out of steam," said John E. Kelly III, I.B.M.'s senior vice president and director of research.

"We're at an inflection point, you better believe it, and most of the world is in denial about it," said Mark Horowitz, a Stanford University electrical engineer who spoke last week at a chip design conference in Palo Alto, Calif. "The physics constraints are getting more and more serious."

Many computer scientists have been warning for years that this time would come, that Moore's Law would cease to be valid because of increasing technical difficulties and the expense of overcoming them. Last week at Stanford University, during a panel on the future of scaling (of which the shrinking of transistors is one example), several panelists said the end was near.

"We're done scaling. We've been playing tricks since 90 nanometers," said Brad McCredie, an I.B.M. fellow and one of the company's leading chip designers, in a reference to the increasingly arcane techniques the industry has been using to make circuits smaller.

For example, for the past three technology generations Intel has used a material known as "strained silicon" in which a layer of silicon atoms is stretched beyond the atoms' normal spacing by depositing them on top of another material like silicon germanium. This results in lower energy consumption and faster switching speeds.

Other researchers and business executives believe the shrinking of the transistor can continue, at least for a while, that the current industry standard Mosfet (for Metal-Oxide-Silicon Field-Effect-Transistor) can be effectively harnessed for several more technology generations.

Technology executives at the Intel Corporation, the world's largest chipmaker, say they believe that by coupling more advanced photolithographic techniques with new kinds of materials and by changing the design of the transistor, it will be possible to continue to scale down to sizes as small as five nanometers — effectively taking the industry forward until the end of the next decade.

"Silicon will probably continue longer than we expect," said Michael C. Mayberry, an Intel vice president and the director of the company's component research program.

Both Intel and I.B.M. are publicly committed to a new class of transistors known as FinFETs that may be used as early as the 22-nanometer technology generation beginning in 2011 or 2012. Named for a portion of the switch that resembles a fish fin, these transistors have the dual advantage of offering greater density because they are tipped vertically out of the plane of the silicon wafer, as well as better insulating properties, making it easier to control the switching from a 1 to a 0 state.

But sooner or later, new materials and new manufacturing processes will be necessary to keep making computer technology ever cheaper. In the long term, new switches might be based on magnetic, quantum or even nanomechanical switching principles. One possibility would be to use changes in the spin of an individual electron to represent a 1 or a 0.

"If you look out into the future, there is a branching tree and there are many possible paths we might take," Dr. Mayberry said.

In Dr. Ross's laboratory at I.B.M., researchers are concentrating on more near-term technology. They are exploring the idea of constructing FinFET switches in a radical new process that breaks away from photo etching. It is a kind of nanofarming. Dr. Ross sprinkles gold particles as small as 10 nanometers in diameter on a substrate and then suffuses them in a silicon gas at a temperature of about 1,100 degrees Fahrenheit. This causes the particles to become "supersaturated" with silicon from the gas, which will then precipitate into a solid, forming a wire that grows vertically.

I.B.M. is pressing aggressively to develop this technology, which could be available commercially by 2012, she said. At the same time she acknowledged that significant challenges remain in perfecting nanowire technology. The mushroom-shaped wires in her laboratory now look a little bit like bonsai trees. To offer the kind of switching performances chipmakers require, the researchers must learn to make them so that their surfaces are perfectly regular. Moreover, techniques must be developed to make them behave like semiconductors.

I.B.M. is also exploring higher-risk ideas like "DNA origami," a process developed by Paul W. K. Rothemund, a computer scientist at the California Institute of Technology.

The technique involves creating arbitrary two- and three-dimensional shapes by controlling the folding of a long single strand of viral DNA with multiple smaller "staple" strands. It is possible to form everything from nanometer-scale triangles and squares to more elaborate shapes like smiley faces and a rough map of North America. That could one day lead to an application in which such DNA shapes could be used to create a scaffolding just as wooden molds are now used to create concrete structures. The DNA shapes, for example, could be used to more precisely locate the gold nanoparticles that would then be used to grow nanowires. The DNA would be used only to align the circuits and would be destroyed by the high temperatures used by the chip-making processes.

At Intel there is great interest in building FinFET switches but also in finding ways to integrate promising III-V materials on top of silicon as well as exploring materials like graphene and carbon nanotubes, from which the company has now made prototype switches as small as 1.5 nanometers in diameter, according to Dr. Mayberry. The new materials have properties like increased electron mobility that might make transistors that are smaller and faster than those that can be made with silicon.

"At that very small dimension you have the problem of how do you make the connection into the tube in the first place," he said. "It's not just how well does this nanotube itself work, but how do you integrate it into a system."

Given all the challenges that each new chip-making technology faces, as well as the industry's sharp decline in investment, it is tempting to suggest that the smaller, faster, cheaper trend may indeed be on the brink of slowing if not halting.

Then again, as Dr. Mayberry suggests, the industry has a way of surprising its skeptics.

A One-Way Ticket to Mars

Op-Ed Contributor

Published: August 31, 2009

Tempe, Ariz.

NOW that the hype surrounding the 40th anniversary of the Moon landings has come and gone, we are faced with the grim reality that if we want to send humans back to the Moon the investment is likely to run in excess of $150 billion. The cost to get to Mars could easily be two to four times that, if it is possible at all.

This is the issue being wrestled with by a NASA panel, convened this year and led by Norman Augustine, a former chief executive of Lockheed Martin, that will in the coming weeks present President Obama with options for the near-term future of human spaceflight. It is quickly becoming clear that going to the Moon or Mars in the next decade or two will be impossible without a much bigger budget than has so far been allocated. Is it worth it?

The most challenging impediment to human travel to Mars does not seem to involve the complicated launching, propulsion, guidance or landing technologies but something far more mundane: the radiation from solar flares and cosmic rays. The shielding necessary to ensure the astronauts do not get a lethal dose of this radiation on a round trip to Mars may very well make the spacecraft so heavy that the amount of fuel needed becomes prohibitive.

There is, however, a way to surmount this problem while reducing the cost and technical requirements, but it demands that we ask this vexing question: Why are we so interested in bringing the Mars astronauts home again?

While the idea of sending astronauts aloft never to return is jarring upon first hearing, the rationale for one-way trips into space has both historical and practical roots. Colonists and pilgrims seldom set off for the New World with the expectation of a return trip, usually because the places they were leaving were pretty intolerable anyway. Give us a century or two and we may turn the whole planet into a place from which many people might be happy to depart.

Moreover, one of the reasons that is sometimes given for sending humans into space is that we need to move beyond Earth if we are to improve our species' chances of survival should something terrible happen back home. This requires people to leave, and stay away.

There are more immediate and pragmatic reasons to consider one-way human space exploration missions.

First, money. Much of the cost of a voyage to Mars will be spent on coming home again. If the fuel for the return is carried on the ship, this greatly increases the mass of the ship, which in turn requires even more fuel.
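
The compounding effect can be made explicit with the standard Tsiolkovsky rocket equation (my addition for illustration; the op-ed does not work through the math):

\Delta v = v_e \ln\!\frac{m_0}{m_f}
\quad\Longrightarrow\quad
\frac{m_0}{m_f} = e^{\Delta v / v_e},
\qquad
\left.\frac{m_0}{m_f}\right|_{\text{out and back}}
  = e^{\Delta v_{\text{out}}/v_e}\cdot e^{\Delta v_{\text{back}}/v_e}

The launch mass grows exponentially with the total velocity change, and the mass ratios for the outbound and return legs multiply, so every kilogram of return propellant must itself be lifted by still more propellant on the way out.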

The president of the Mars Society, Robert Zubrin, has offered one possible solution: two ships, sent separately. The first would be sent unmanned and, once there, combine onboard hydrogen with carbon dioxide from the Martian atmosphere to generate the fuel for the return trip; the second would take the astronauts there, and then be left behind. But once arrival is decoupled from return, one should ask whether the return trip is really necessary.
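
The chemistry behind making return fuel on Mars is the scheme Zubrin has long described for his Mars Direct proposal (the specific reactions are not spelled out in this op-ed): a small stock of hydrogen brought from Earth is reacted with carbon dioxide drawn from the Martian atmosphere.

\mathrm{CO_2 + 4\,H_2 \;\rightarrow\; CH_4 + 2\,H_2O} \qquad \text{(Sabatier reaction)}
\mathrm{2\,H_2O \;\rightarrow\; 2\,H_2 + O_2} \qquad \text{(electrolysis)}

The result is methane/oxygen propellant manufactured mostly from local resources, which is what makes an unmanned fuel-making ship plausible in the first place.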

Surely if the point of sending astronauts is to be able to carry out scientific experiments that robots cannot do (something I am highly skeptical of and one of the reasons I don't believe we should use science to attempt to justify human space exploration), then the longer they spend on the planet the more experiments they can do.

Moreover, if the radiation problems cannot be adequately resolved then the longevity of astronauts signing up for a Mars round trip would be severely compromised in any case. As cruel as it may sound, the astronauts would probably best use their remaining time living and working on Mars rather than dying at home.

If it sounds unrealistic to suggest that astronauts would be willing to leave home never to return alive, then consider the results of several informal surveys I and several colleagues have conducted recently. One of my peers in Arizona recently accompanied a group of scientists and engineers from the Jet Propulsion Laboratory on a geological field trip. During the day, he asked how many would be willing to go on a one-way mission into space. Every member of the group raised his hand. The lure of space travel remains intoxicating for a generation brought up on "Star Trek" and "Star Wars."

We might want to restrict the voyage to older astronauts, whose longevity is limited in any case. Here again, I have found a significant fraction of scientists older than 65 who would be willing to live out their remaining years on the red planet or elsewhere. With older scientists, there would be additional health complications, to be sure, but the necessary medical personnel and equipment would still probably be cheaper than designing a return mission.

Delivering food and supplies to these new pioneers — along with the tools to grow and build whatever they need, for however long they live on the red planet — is likewise more reasonable and may be less expensive than designing a ticket home. Certainly, as in the Zubrin proposal, unmanned spacecraft could provide the crucial supply lines.

The largest stumbling block to a consideration of one-way missions is probably political. NASA and Congress are unlikely to do something that could be perceived as signing the death warrants of astronauts.

Nevertheless, human space travel is so expensive and so dangerous that we are going to need novel, even extreme solutions if we really want to expand the range of human civilization beyond our own planet. To boldly go where no one has gone before does not require coming home again.

Lawrence M. Krauss, the director of the Origins Initiative at Arizona State University, is the author of "The Physics of 'Star Trek.'"

Friday, August 28, 2009

Nitrous Oxide - Dying with a smile

Science News

The ozone layer's new and biggest enemy

28/8/2009

Agência FAPESP – Nitrous oxide (N2O) is known as laughing gas because of its ability to trigger involuntary muscle contractions in the face. But the latest news about this gas is far from a laughing matter.

According to research by scientists at the National Oceanic and Atmospheric Administration (NOAA) in the United States, nitrous oxide has become, among all substances emitted by human activities, the one that causes the most damage to the ozone layer.

The study, published in this Friday's (Aug. 28) issue of the journal Science, says that grim lead will persist throughout the century.

Nitrous oxide has overtaken the chlorofluorocarbons (CFCs), whose atmospheric emissions have fallen steadily because of international agreements negotiated for that purpose. Today, according to the study, N2O emissions are already twice as large as those of CFCs.

Nitrous oxide is emitted by natural sources (soil and ocean bacteria, for example) and as a byproduct of agricultural fertilization, combustion, sewage treatment and various industrial processes. Currently, one third of the gas's emissions comes from human activities.

By calculating the effect of these emissions on the ozone layer today and estimating it for the near future, the authors found that the damage to the ozone layer is large and will remain high for many decades if nothing is done to reduce emissions.

"The big reduction in CFCs over the past 20 years is an environmental success story. However, man-made nitrous oxide is now the elephant in the room among the substances that destroy atmospheric ozone," said Akkihebbal Ravishankara, director of the Chemical Sciences Division at NOAA's Earth System Research Laboratory and lead author of the study.

The ozone layer protects plants, animals and people from excess ultraviolet radiation from the Sun. Thinning of the layer allows more of that radiation to reach the Earth's surface, harming life on the planet.

Although nitrous oxide's role in destroying ozone has been known for decades, the new study is the first to calculate its importance using methods similar to those applied to CFCs and other human-caused emissions.

Unlike CFCs and other such gases, nitrous oxide emissions are not regulated by the Montreal Protocol on Substances that Deplete the Ozone Layer, adopted in 1987 by 46 countries.

According to the researchers, since nitrous oxide is also a greenhouse gas, reducing its emissions from human activities would be good both for the ozone layer and for the climate.

The article "Nitrous oxide (N2O): The dominant ozone depleting substance emitted in the 21st century," by A.R. Ravishankara and others, can be read by Science subscribers at www.sciencemag.org.

Thursday, August 27, 2009

Cyberwar NYT

Cyberwar

Defying Experts, Rogue Computer Code Still Lurks

Published: August 26, 2009

Zombie Networks: Computers, indispensable in peace, are becoming ever more important in political conflicts and open warfare. This article is the seventh in a series examining the growing use of computer power as a weapon.

It is still out there.

Like a ghost ship, a rogue software program that glided onto the Internet last November has confounded the efforts of top security experts to eradicate the program and trace its origins and purpose, exposing serious weaknesses in the world's digital infrastructure.

The program, known as Conficker, uses flaws in Windows software to co-opt machines and link them into a virtual computer that can be commanded remotely by its authors. With more than five million of these zombies now under its control — government, business and home computers in more than 200 countries — this shadowy computer has power that dwarfs that of the world's largest data centers.
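
A back-of-envelope calculation gives a sense of that scale; the per-machine throughput and the supercomputer figure below are my own rough 2009-era assumptions, not numbers from the article:

# Back-of-envelope sketch only. The article says "more than five million"
# infected machines; the per-machine and supercomputer figures are assumed
# values chosen to be plausible for 2009 hardware.
infected_machines = 5_000_000      # lower bound reported in the article
flops_per_machine = 10e9           # assume ~10 gigaflops for a typical desktop CPU
aggregate_flops = infected_machines * flops_per_machine

top_supercomputer_flops = 1.1e15   # roughly 1 petaflop, the fastest machine of the era
print(f"aggregate: {aggregate_flops:.1e} FLOPS, "
      f"about {aggregate_flops / top_supercomputer_flops:.0f}x a top supercomputer")

In raw arithmetic the aggregate dwarfs any single facility, although, as the article notes further down, a loosely coupled botnet cannot be used like a tightly coupled supercomputer.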

Alarmed by the program's quick spread after its debut in November, computer security experts from industry, academia and government joined forces in a highly unusual collaboration. They decoded the program and developed antivirus software that erased it from millions of the computers. But Conficker's persistence and sophistication have squelched the belief of many experts that such global computer infections are a thing of the past.

"It's using the best current practices and state of the art to communicate and to protect itself," Rodney Joffe, director of the Conficker Working Group, said of the malicious program. "We have not found the trick to take control back from the malware in any way."

Researchers speculate that the computer could be employed to generate vast amounts of spam; it could steal information like passwords and logins by capturing keystrokes on infected computers; it could deliver fake antivirus warnings to trick naïve users into believing their computers are infected and persuade them to pay by credit card to have the infection removed.

There is also a different possibility that concerns the researchers: That the program was not designed by a criminal gang, but instead by an intelligence agency or the military of some country to monitor or disable an enemy's computers. Networks of infected computers, or botnets, were used widely as weapons in conflicts in Estonia in 2007 and in Georgia last year, and in more recent attacks against South Korean and United States government agencies. Recent attacks that temporarily crippled Twitter and Facebook were believed to have had political overtones.

Yet for the most part Conficker has done little more than to extend its reach to more and more computers. Though there had been speculation that the computer might be activated to do something malicious on April 1, the date passed without incident, and some security experts wonder if the program has been abandoned.

The experts have only tiny clues about the location of the program's authors. The first version included software that stopped the program if it infected a machine with a Ukrainian language keyboard. There may have been two initial infections — in Buenos Aires and in Kiev.

Wherever the authors are, the experts say, they are clearly professionals using the most advanced technology available. The program is protected by internal defense mechanisms that make it hard to erase, and even kills or hides from programs designed to look for botnets.

A member of the security team said that the Federal Bureau of Investigation had suspects, but was moving slowly because it needed to build a relationship with "noncorrupt" law enforcement agencies in the countries where the suspects are located.

An F.B.I. spokesman in Washington declined to comment, saying that the Conficker investigation was an open case.

The first infections, last Nov. 20, set off an intense battle between the hidden authors and the volunteer group that formed to counter them. The group, which first called itself the "Conficker Cabal," changed its name when Microsoft, Symantec and several other companies objected to the unprofessional connotation.

Eventually, university researchers and law enforcement officials joined forces with computer experts at more than two dozen Internet, software and computer security firms.

The group won some battles, but lost others. The Conficker authors kept distributing new, more intricate versions of the program, at one point using code that had been devised in academia only months before. At another point, a single technical slip by the working group allowed the program's authors to convert a huge number of the infected machines to an advanced peer-to-peer communications scheme that the industry group has not been able to defeat. Where before all the infected computers would have to phone home to a single source for instructions, the authors could now use any infected computer to instruct all the others.
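
To illustrate why that switch mattered, here is a toy Python sketch of the two command-and-control styles. It is a conceptual illustration only, not Conficker's actual protocol, and every name in it is invented:

# Conceptual contrast only. Before: every bot pulls orders from one rendezvous
# point, which defenders can pre-register or seize. After: any bot can inject
# an order and peers flood it to one another, so there is no single point to cut.
from collections import deque

def centralized_update(bots, command_server):
    """Every bot asks the single server for its next command."""
    return {bot: command_server["command"] for bot in bots}

def p2p_update(start_bot, peers, command):
    """Flood a command through the peer graph, starting from any one bot."""
    received = {start_bot: command}
    queue = deque([start_bot])
    while queue:
        bot = queue.popleft()
        for neighbor in peers.get(bot, []):
            if neighbor not in received:
                received[neighbor] = command
                queue.append(neighbor)
    return received

peers = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(p2p_update("a", peers, "update-v3"))   # reaches b, c and d with no server at all

With the centralized style, taking over the rendezvous point decapitates the network; with the flooding style, any surviving infected machine can reintroduce new instructions, which is why the working group could no longer cut the botnet off.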

In early April, Patrick Peterson, a research fellow at Cisco Systems in San Jose, Calif., gained some intelligence about the authors' interests. He studies nasty computer programs by keeping a set of quarantined computers that capture and observe them — his "digital zoo."

He discovered that the Conficker authors had begun distributing software that tricks Internet users into buying fake antivirus software with their credit cards. "We turned off the lights in the zoo one day and came back the next day," Mr. Peterson said, noting that in the "cage" reserved for Conficker, the infection had been joined by a program distributing an antivirus software scam.

It was the most recent sign of life from the program, and its silence has set off a debate among computer security experts. Some researchers think Conficker is an empty shell, or that the authors of the program were scared away in the spring. Others argue that they are simply biding their time.

If the misbegotten computer were reactivated, it would not have the problem-solving ability of supercomputers used to design nuclear weapons or simulate climate change. But because it has commandeered so many machines, it could draw on an amount of computing power greater than that from any single computing facility run by governments or Google. It is a dark reflection of the "cloud computing" sweeping the commercial Internet, in which data is stored on the Internet rather than on a personal computer.

The industry group continues to try to find ways to kill Conficker, meeting as recently as Tuesday. Mr. Joffe said he, for one, was not prepared to declare victory. But he said that the group's work proved that government and private industry could cooperate to counter cyberthreats.

"Even if we lose against Conficker," he said, "there are things we've learned that will benefit us in the future."

Saturday, August 22, 2009

Ingredient for life detected in comet dust

It is the first time an amino acid has turned up in comet material, bolstering the idea that the building blocks of biology are 'ubiquitous' in space.

Wild 2 comet

The Wild 2 comet orbits between Mars and Jupiter. The image was taken by NASA's Stardust spacecraft, which passed close enough to harvest particles from the comet's tail. (Don Brownlee / NASA / June 17, 2004)


Showing that the ingredients for life in the universe may be distributed far more widely than previously thought, scientists have found traces of a key building block of biology in dust snatched from the tail of a comet.

Scientists at the Goddard Space Flight Center in Greenbelt, Md., have uncovered glycine, the simplest amino acid and a vital compound necessary for life, in a sample from the comet Wild 2. The sample was captured by NASA's Stardust spacecraft, which dropped it into the Utah desert in 2006.

"By detecting glycine, we now know that comets could have delivered amino acids to the early Earth, contributing to the ingredients that life originated from," said Jamie Elsila, a research scientist at Goddard and coauthor of a paper outlining the discovery in the journal Meteoritics and Planetary Science.

The idea that the ingredients for life were delivered to Earth from space, rather than developing out of Earth's original chemical soup, has been around for years. Amino acids previously have been discovered in meteorites. But this is the first time an amino acid has turned up in comet material.

"This is yet another piece of evidence that the ingredients for life are ubiquitous. These building blocks of life are everywhere," said Carl Pilcher, director of NASA's Astrobiology Institute, which helped fund the research. Pilcher said the discovery strengthens the argument that life in the universe may be common, rather than rare.

The Stardust spacecraft, managed jointly by the Jet Propulsion Laboratory in La Cañada Flintridge and Lockheed Martin Space Systems in Denver, was launched in 1999 on a 2.9-billion-mile journey that made two loops around the sun before meeting up five years later with Wild 2, which orbits between Mars and Jupiter.

Flying as close as 147 miles to the hamburger-shaped comet, Stardust passed through its tail of dust and gas.

At its closest approach, the craft deployed a tennis-racket-shaped collector packed with a substance called aerogel, which harvested comet particles. The spacecraft then returned to Earth's orbit and jettisoned a capsule containing the sample. The capsule made what NASA called a "bulls-eye" landing in Utah on the morning of Jan. 15, 2006.

Jason Dworkin, a coauthor of the research paper, said glycine was first detected a few months after the sample landed. The next two years, he said, were spent verifying the result.

Don Brownlee, a University of Washington astronomer who served as chief scientist on the Stardust mission, called the work "a real tour de force technologically to make these measurements in such small samples."

Brownlee said the result is exciting because it represents a second, very large source of life-giving material. He estimated that there are as many as a trillion comets in and around the solar system, many of them in the chilly Kuiper Belt beyond Pluto, or in the Oort Cloud even farther out.

"There has been a huge question of where the prebiotic compounds came from on Earth," Brownlee said. "Did they come from space? Or were they made here? Or maybe they came from both places."

Just having the right materials is no guarantee that life will begin, of course, any more than leaving a hammer, nails and planks lying around will cause a barn to rise. Brownlee pointed out that many of the 30,000 or so meteorites that have been found on Earth bear traces of organic compounds, and there also is evidence that they were once warm and wet, all necessary conditions for life. Yet none of the meteorites has shown any evidence of life forms.

"They are all failed places where life could have arisen," Brownlee said.

john.johnson@latimes.com


The complete article can be viewed at:
http://www.latimes.com/news/nationworld/nation/la-sci-comet18-2009aug18,0,7605775.story


Monday, August 17, 2009

Spasers

Science News

Enter the spaser

17/8/2009

Agência FAPESP – A laser light source is essential for nanophotonic circuits, which have great potential to underpin future technologies and computers. But today's lasers cannot be made small enough to be integrated into electronic chips.

American researchers have just overcome that obstacle by using, in place of the photons that make up light, electron clouds known as "surface plasmons" to create tiny devices dubbed spasers.

The finding was detailed in a paper published on Sunday (Aug. 16) in the online edition of the journal Nature. The work was carried out by scientists at Purdue University, Norfolk State University and Cornell University in the United States.

According to the researchers, nanophotonics could enable a series of radical advances, including powerful "hyperlenses" – yielding sensors and microscopes ten times more powerful than today's and able to observe objects as small as DNA – as well as more efficient solar collectors and computers and electronics that use light instead of electronic signals to process information.

"In the paper, we demonstrate the feasibility of the most critical component – the nanolaser – essential for nanophotonics to become a practical technology," said one of the authors, Vladimir Shalaev, a professor of electrical and computer engineering at Purdue University.

The spaser-based nanolasers created by the researchers consist of spheres 44 nanometers (billionths of a meter) in diameter. More than a million of them could fit inside a single red blood cell.
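
A rough volume check of that claim, taking a typical red blood cell volume of about 90 cubic micrometers (a figure that is my assumption, not from the article):

V_{\text{spaser}} = \tfrac{4}{3}\pi\,(22\,\mathrm{nm})^3 \approx 4.5\times10^{4}\,\mathrm{nm}^3,
\qquad
V_{\text{red blood cell}} \approx 90\,\mu\mathrm{m}^3 = 9\times10^{10}\,\mathrm{nm}^3,
\qquad
\frac{V_{\text{red blood cell}}}{V_{\text{spaser}}} \approx 2\times10^{6}

so "more than a million" is, if anything, conservative.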

The spheres were fabricated at Cornell University, while Norfolk State and Purdue carried out the optical characterization needed to determine whether the devices behaved as lasers.

The finding confirms the work of physicists David Bergman of Tel Aviv University (Israel) and Mark Stockman of Georgia State University (United States), who proposed the spaser concept in 2003.

"This work represents an important milestone that may turn out to be the start of a revolution in nanophotonics, with applications in imaging and sensing at scales far smaller than the wavelength of visible light," said Timothy Sands, director of the Birck Nanotechnology Center in Purdue's Discovery Park.

Evolution of the laser

The spasers contain a gold core surrounded by a glass-like shell filled with a green dye. When light is shone on the spheres, plasmons generated by the gold core are amplified by the dye. The plasmons are then converted into photons of visible light, which are emitted as laser light.

Spaser is an acronym for "surface plasmon amplification by stimulated emission of radiation." To act as lasers, the devices require a feedback system that makes the surface plasmons oscillate back and forth so that they gain strength and can be emitted as light.

Conventional lasers are limited in how small they can be made because, for photons, this feedback component, called the optical resonator, must be at least half the size of the wavelength of the laser light.

The researchers, however, overcame this obstacle by using surface plasmons instead of photons, which allowed them to create a resonator 44 nanometers in diameter, less than one tenth of the 530-nanometer wavelength emitted by the spaser.
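
Putting numbers on that comparison (a restatement of the figures above, using the half-wavelength rule quoted in the article):

L_{\text{conventional}} \;\gtrsim\; \frac{\lambda}{2} \;=\; \frac{530\,\mathrm{nm}}{2} \;=\; 265\,\mathrm{nm},
\qquad
L_{\text{spaser}} \;=\; 44\,\mathrm{nm} \;\approx\; \frac{\lambda}{12}

so the spaser resonator is roughly six times smaller than the conventional half-wavelength floor.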

"No momento em que vamos comemorar os 50 anos da invenção do laser, talvez tenhamos conseguido um avanço radical para as tecnologias laser", disse Shalev. O primeiro trabalho sobre laser foi publicado em 1960.

De acordo com os cientistas, trabalhos futuros poderão envolver a criação de um nanolaser com base em spasers que utiliza uma fonte elétrica em vez de uma fonte luminosa – o que iria tornar o dispositivo mais prático para aplicações em computadores e na indústria eletrônica.

O artigo Demonstration of Spaser-based nanolaser, de Mikhail Noginov e outros, pode ser lido por assinantes da Nature em www.nature.com.

Friday, August 14, 2009

New osteoporosis drug
shown to reduce
spinal fractures

The drug, called denosumab, blocks production of cells that break down bones. In two studies, spinal fractures were reduced by two-thirds in women ages 60-90 and in men getting prostate cancer therapy.




The first member of a new class of osteoporosis drugs reduced spinal fractures by about two-thirds in post-menopausal women and in men undergoing hormone-deprivation therapy for prostate cancer, according to two studies released online Tuesday by the New England Journal of Medicine.

The drug, called denosumab, blocks production of cells called osteoclasts that break down bones, and physicians have high hopes for it because of its efficacy, ease of administration and apparent lack of severe side effects. But it's a biological agent rather than a chemical, meaning it's difficult to produce, and it is likely to be the highest-priced osteoporosis drug in an already-crowded marketplace.

The most well-known osteoporosis drug, Fosamax, is in a class known as bisphosphonates. Those drugs actually kill osteoclasts but carry the risk of stomach and esophageal irritation and have been linked to some cases of jaw necrosis.

Amgen, the manufacturer of denosumab, has not said how much the drug will cost, but analysts expect it to be at least $2,000 a year -- and potentially much higher -- and predict yearly sales of $2 billion to $3 billion.

Already, many insurance companies are pushing physicians toward the generic version of Fosamax, alendronate, which costs about $100 a year.

Some medical experts think a high price would discourage the use of denosumab.

"If it is going to be quite a bit higher than the next-most-expensive drug, I don't see that it is going to be used so widely," said Dr. Frederick R. Singer, director of the endocrine/bone disease program at John Wayne Cancer Institute in Santa Monica, who was not involved in the research.

An advisory committee of the Food and Drug Administration will meet Thursday to consider Amgen's application for approval of the drug, to be called Prolia, for treating osteoporosis in women and in men being treated for prostate cancer. If approved, it would be the first drug specifically approved for treating such men.

As many as half of women and 30% of men will suffer an osteoporosis-related fracture during their lifetime, according to the International Osteoporosis Foundation. About a third of the 2 million American men with prostate cancer undergo hormone-deprivation therapy to prevent release of the hormones that fuel the tumors, which sharply increases their risk of osteoporosis.

The two new trials were designed and funded by Amgen, and most of the researchers were Amgen employees or recipients of funds from the company. Nonetheless, osteoporosis experts were impressed.

"From a scientific standpoint, these are outstanding publications," Singer said.

The first study included 7,686 women ages 60 to 90. Half were given an injection of denosumab every six months for three years, and half received a placebo. Overall, 2.3% of women receiving the drug had a spinal fracture and 0.7% had a hip fracture, compared with 7.2% and 1.2% in the placebo group.
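
The "two-thirds" headline figure follows directly from those percentages (simple relative-risk arithmetic, not spelled out in the article):

\frac{7.2\% - 2.3\%}{7.2\%} \approx 68\% \ \text{(spinal fractures)},
\qquad
\frac{1.2\% - 0.7\%}{1.2\%} \approx 42\% \ \text{(hip fractures)}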

That is similar to or slightly better than results with bisphosphonates, although the drugs have not been compared head to head.

The drug "does everything you would want a drug to do in women to prevent fractures," said Dr. John S. Adams, an orthopedic surgeon at UCLA's Geffen School of Medicine.

The second study involved 1,468 prostate cancer patients receiving hormone-deprivation therapy. They underwent the same protocol as the women. Overall, 1.5% of men receiving the drug had a spinal fracture, compared with 3.9% of those in the placebo group. Men receiving the drug also had a 5.6% increase in bone mineral density, while those receiving placebo had a 1% decline.

There was no decline in non-spinal fractures.

Many of the patients in both studies reported soreness at the injection site and transient bone pain similar to arthritis. The drug caused eczema, an inflammation of the epidermis, in a few patients, and about 12 of the women got a serious skin infection called cellulitis.

Some earlier, small studies showed an apparent small increase in tumors in treated patients, but that was not observed in either of the new studies. Such potentially severe side effects will be a focus of the FDA panel.

"This appears to be the most potent of the osteoporosis drugs," Singer said, "but it will require very careful monitoring to look for rare side effects," which did not show up for other drugs until large numbers of people took them.

thomas.maugh@latimes.com