Tuesday, March 02, 2010

João Musa | Armando Bagolin

João Luiz Musa - Instituto Tomie Ohtake

Two series of works by João Luiz Musa, one in color and the other in black and white, will occupy two large rooms at the Instituto Tomie Ohtake. The set of 105 photographs expresses his singular talent for drawing the highest quality out of photography, working from scenes, architecture and landscapes and combining technical rigor with poetry. It is a vocation that began early, when he was a first-year student at the Escola Politécnica of the Universidade de São Paulo and discovered the photography laboratory of the student union. Since then he has developed his work in a very personal way, from the first glance to the final print.

The black-and-white series, Um Inverno (A Winter), brings together 50 images captured by the photographer in Paris, London, Rome, Lisbon, Madrid, Bruges, Hamburg, Geneva, Oslo, Stockholm and other European cities during a trip in the winter of December 1973 to January 1974. According to Agnaldo Farias's text for the exhibition, "man, city and nature" meet here, and the photographer shows the subtle way in which they neighbor and accommodate one another. "There is silence and calm in these images, which is not to say there is no pain and violence as well; rather, against the current of a photography understood as an exercise in suspending the now, the artist discovered that the instant flows from the past, inscrutable and necessarily long-lived."

Some photographs from this trip were shown at MASP in 1974, and others that, according to Musa, dialogued with passages from Rainer Maria Rilke's The Notebooks of Malte Laurids Brigge were exhibited in 1993 at MIS. For the present show at the Instituto Tomie Ohtake, the editing process, the negatives, contact sheets and prints were revisited, and new images were incorporated into the essay.

The color series, Fotografias 2005 – 2009, gathers his 55 most recent works, photographs taken by Musa in Paris and Avignon in France, in New York and in Brazilian cities: São Paulo, Rio de Janeiro, Parati, Cubatão and Itanhaém. Shot with digital cameras and printed in inkjet on cotton paper, the images were treated with interpretive adjustments to light and color, made possible by the new file formats and editing programs, resources that conventional analog photography did not allow. "Today each color channel can be treated separately for its tonal quality, its saturation and its luminosity," the artist explains.

In the color series João Luiz Musa presents the result of his last five years of work, which, according to Luiz Armando Bagolin's text for the exhibition, demonstrates intense experimentation with light and color allied to rigorous editing. "Confronted with today's photographer, that young traveling photographer who years ago roamed a territory laid waste by the Cold War, without great prospects but confident in the praxis of the 'decisive moment,' reappears to him as something strange, while the recent production contemplates the state of the art of places, ways of life, the culture of people, of things whose luminosity is their own sign," Bagolin concludes.

The two series are being turned into books to be launched at the close of the exhibition. Fotografias 2005 – 2009 (156 pages, 23 x 29 cm, text by Luiz Armando Bagolin) is published by Imprensa Oficial in partnership with the Instituto Tomie Ohtake, while Um Inverno (144 pages, 23 x 29 cm, text by Agnaldo Farias) is a joint edition of Imprensa Oficial and EDUSP. Both books were developed in minute detail by the artist himself, an innovative process that has the enthusiastic backing of the president of Imprensa Oficial, Hubert Alquéres.

Wednesday, January 13, 2010

Wet Computer

Chemical computer that mimics neurons to be created

By Jason Palmer
Science and technology reporter, BBC News

Artist's impression of 'wet' computing cells (G Jones)

A promising push toward a novel, biologically-inspired "chemical computer" has begun as part of an international collaboration.

The "wet computer" incorporates several recently discovered properties of chemical systems that can be hijacked to engineer computing power.

The team's approach mimics some of the actions of neurons in the brain.

The 1.8m-euro (£1.6m) project will run for three years, funded by an EU emerging technologies programme.

The programme has identified biologically-inspired computing as particularly important, having recently funded several such projects.

What distinguishes the current project is that it will make use of stable "cells" featuring a coating that forms spontaneously, similar to the walls of our own cells, and that it will use chemistry to accomplish signal processing similar to that of our own neurons.

The goal is not to make a better computer than conventional ones, said project collaborator Klaus-Peter Zauner of the University of Southampton, but rather to be able to compute in new environments.

"The type of wet information technology we are working towards will not find its near-term application in running business software," Dr Zauner told BBC News.

"But it will open up application domains where current IT does not offer any solutions - controlling molecular robots, fine-grained control of chemical assembly, and intelligent drugs that process the chemical signals of the human body and act according to the local biochemical state of the cell."

Lipids and liquids

The group's approach hinges on two critical ideas.

First, individual "cells" are surrounded by a wall made up of so-called lipids that spontaneously encapsulate the liquid innards of the cell.

Recent work has shown that when two such lipid layers encounter each other as the cells come into contact, a protein can form a passage between them, allowing chemical signalling molecules to pass.

Second, the cells' interiors will play host to what is known as a Belousov-Zhabotinsky or B-Z chemical reaction. Simply put, reactions of this type can be initiated by changing the concentration of the element bromine by a certain threshold amount.

The reactions are unusual for a number of reasons.

But for the computing application, what is important is that after the arrival of a chemical signal to start it, the cell enters a "refractory period" during which further chemical signals do not influence the reaction.

That keeps a signal from propagating unchecked through any connected cells.

Such self-contained systems that react under their own chemical power to a stimulus above a threshold have an analogue in nature: neurons.

Each neuron in our brains can be viewed as a chemical computer

"Every neuron is like a molecular computer; ours is a very crude abstraction of what neurons do," said Dr Zauner.

"But the essence of neurons is the capability to get 'excited'; it can re-form an input signal and has its own energy supply so it can fire out a new signal."

This propagation of a chemical signal - along with the "refractory period" that keeps it contained within a given cell - means the cells can form networks that function like the brain.
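
To make the threshold-plus-refractory idea concrete, here is a minimal toy sketch in Python, not the project's actual chemistry: a one-dimensional chain of excitable "cells" in which a firing cell excites its resting neighbour and then sits out a refractory period, so the signal travels forward without echoing back or propagating unchecked.

```python
# Toy excitable-cell chain (illustrative only, not the B-Z chemistry itself).
# States: resting, excited (firing), refractory (ignores incoming signals).
REST, EXCITED, REFRACTORY = 0, 1, 2
REFRACTORY_STEPS = 2  # assumed duration; real B-Z timing depends on the chemistry

def step(cells, timers):
    new = cells[:]
    for i, state in enumerate(cells):
        if state == REST:
            # A resting cell fires only if a neighbour is currently firing.
            neighbours = [cells[j] for j in (i - 1, i + 1) if 0 <= j < len(cells)]
            if EXCITED in neighbours:
                new[i] = EXCITED
        elif state == EXCITED:
            # After firing, the cell becomes refractory for a while.
            new[i], timers[i] = REFRACTORY, REFRACTORY_STEPS
        elif state == REFRACTORY:
            timers[i] -= 1
            if timers[i] == 0:
                new[i] = REST
    return new

cells = [EXCITED] + [REST] * 9   # stimulate the leftmost cell
timers = [0] * len(cells)
for _ in range(12):
    print("".join(".*o"[s] for s in cells))  # . rest, * firing, o refractory
    cells = step(cells, timers)
```

Running it shows a single wave marching down the chain: by the time a cell returns to rest, its upstream neighbour is refractory, so the signal never reflects.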

'Real chance'

Frantisek Stepanek, a chemical computing researcher at the Institute of Chemical Technology Prague in the Czech Republic, said the pairing of the two ideas was promising.

"If one day we want to construct computers of similar power and complexity to the human brain, my bet would be on some form of chemical or molecular computing," he told BBC News.

"I think this project stands a real chance of bringing chemical computing from the concept stage to a practical demonstration of a functional prototype."

For its part, the team is already hard at work proving the idea will work.

"Officially the project doesn't start until the first of February," said Dr Zauner, "but we were so curious about it we already sent some lipids to our collaborators in Poland - they've already shown the lipid layers are stable."

Self-Assembling Solar Cells

Solar cells made through oil-and-water 'self-assembly'

By Jason Palmer
Science and technology reporter, BBC News

Microscope image of self-assembled solar cell
The approach made a device of 64,000 parts in three minutes


Researchers have demonstrated a simple, cheap way to create self-assembling electronic devices using a property crucial to salad dressings.

It uses the fact that oil- and water-based liquids do not mix, forming devices from components that align along the boundary between the two.

The idea joins a raft of approaches toward self-assembly, but lends itself particularly well to small components.

The work is reported in Proceedings of the National Academy of Sciences.

Crucially, it could allow the large-scale assembly of high-quality electronic components on materials of just about any type, in contrast to "inkjet printed" electronics or some previous self-assembly techniques.

Specific gravity

Such efforts have until now exploited the effect of gravity, assembling devices through so-called "sedimentation".

In this approach, "blank" devices are etched with depressions to match precisely-shaped components. Simply dumped into a liquid, the components should settle down into the blank device like sand onto a riverbed, in just the right places.

"That's what we tried for at least two years and we were never able to assemble these components with high yield - gravity wasn't working," said Heiko Jacobs of the University of Minnesota, who led the research.

SELF-ASSEMBLY EXPLAINED
The oil/water mix contains a number of individual solar cell elements
Each is coated with a "water-loving" molecule on the bottom and a "water-hating" one on top
The elements align neatly at the oil/water boundary in a two-dimensional sheet
The "blank" solar cell has pre-cut places for the elements and is dipped through the boundary
As it is slowly drawn upwards, the elements pop into place

"Then we thought if we could concentrate them into a two-dimensional sheet and then have some kind of conveyor belt-like system we could assemble them with high yields and high speed," he told BBC News.

To do that, the team borrowed an idea familiar to fans of vinaigrette: they built their two-dimensional sheets at the border between oil and water.

They first built a device blank as before, with depressions lined with low-temperature solder, designed for individual solar cell elements.

They then prepared the elements - each a silicon and gold stack a few tens of millionths of a metre across - and put different coatings on each side.

On the silicon side, they put a hydrophobic molecule, one that has a strong tendency to evade contact with water. On the gold side, they put a hydrophilic molecule, which has the converse tendency to seek out water.

By getting the densities of the oil- and water-based parts of the experiment just right, a "sheet" of the elements could be made to "float" between the two, pointing in the right direction thanks to their coatings.

The conveyor belt process is to simply dunk the device blank through the boundary and draw it back slowly; the sheet of elements rides up along behind it, each one popping neatly into place as the solder attracts its gold contact.

The team made a working device comprising 64,000 elements in just three minutes.
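
A back-of-the-envelope check on that throughput claim; the serial pick-and-place rate below is purely an assumed figure for contrast, not from the paper.

```python
# Throughput of the reported self-assembly demonstration.
elements = 64_000
minutes = 3
parallel_rate = elements / (minutes * 60)  # elements placed per second
print(f"self-assembly: ~{parallel_rate:.0f} elements/s")

# Hypothetical serial machine placing 1 element per second (assumed figure):
serial_hours = elements / 1 / 3600
print(f"serial placement: ~{serial_hours:.1f} hours for the same device")
```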

Bendy future

Having proved that the concept works, the team is now investigating just how small they can go in terms of individual elements, or how large they can go in finished devices.

The approach should also work for almost any material, stiff or flexible, plastic, metal or semiconductor - a promising fact for future display and imaging applications.

Babak Parviz, a nano-engineering professor at the University of Washington in Seattle, said the technique is a "clear demonstration that self-assembly is applicable across size scales".

"Self-assembly is probably the best method for integrating high-performance materials onto unconventional substrates," he told BBC News.

The method tackles what Dr Parviz said is the most challenging problem - the proper alignment of thousands of parts, each thinner than a human hair. But it also works with the highest-performance materials, he said.

"For example, this method allows one to use single-crystal silicon, which is far superior to other types of silicon for making solar cells."

Tuesday, January 12, 2010

Galaxies only 600 million years old

With Updated Hubble Telescope, Reaching Farther Back in Time

NASA, ESA
THROUGH THE AGES
Using old and new cameras, the Hubble Space Telescope recorded thousands of galaxies, some dating back more than 12 billion years.
Published: January 11, 2010

Astronaut repairmen had hardly finished tightening the last stubborn bolts on the Hubble Space Telescope last summer when astronomers set the controls on the refurbished telescope to the dim and distant past.

The result was a new long-distance observing record. Astronomers announced in a series of papers over the fall and in a news conference last week that Hubble had recorded images of the earliest and most distant galaxies ever seen, blurry specks of light that burned brightly only 600 million to 800 million years after the Big Bang.

The specks are clouds only one-twentieth the size of the Milky Way galaxy and only 1 percent of its mass, and seem to show the lingering effects of the first generation of stars to form in the universe in that they get bluer the farther back you go in time.

The new galaxies, along with other recent discoveries like the violent supernova explosion of a star only 620 million years after the Big Bang, take astronomers deep into a period of cosmic history known as the dark ages, which has been little explored. It was then that stars and galaxies were starting to light up vigorously in larger and larger numbers, and that a fog of hydrogen, which had enveloped space after the Big Bang fires had cooled, mysteriously dissipated.

"These are the seeds of the great galaxies of today," said Garth Illingworth of the University of California, Santa Cruz, who discussed the new galaxies last week at a meeting of the American Astronomical Society in Washington. "We are pushing Hubble to the limit to find these objects."

Richard Ellis of the California Institute of Technology, one of many astronomers who have been working with the observations, said, "We're reaching the beginning where galaxies formed for the first time."

Dr. Illingworth and his colleague Richard Bouwens led a team that used Hubble's new Wide Field Camera 3, which was installed by the astronauts in May, to stare at a small patch of the southern sky over 62 orbits in what they call the Hubble Ultra Deep Field. The patch, known as the southern GOODS field, for Great Observatories Origins Deep Survey, has been observed by a variety of telescopes and satellites, including Hubble in 2004.

The release of Dr. Illingworth's observations in the fall led to a kind of gold rush in astronomy. In the last three months, several teams, using different ways to analyze the data, have produced 15 papers and articles about the new galaxies. Dr. Illingworth said in an interview that his team had identified 21 galaxies from 600 million to 800 million years after the Big Bang, and that other groups had found similar numbers.

The most distant, he said, was about 600 million years after the Big Bang. The universe is about 13.7 billion years old, cosmologists agree, meaning that the light from these galaxies has been on its way to us for 13 billion years.

In addition, some of the groups say they have identified possible galaxies as far back as 480 million years after the Big Bang, but they disagree on how many and which ones they are.

The new wide-field camera has an infrared capability, which makes it well suited for probing the early universe. As the universe expands, objects farther away from us recede faster, shifting their light to longer, redder wavelengths. The most distant galaxies appear to be emitting almost all of their light at even longer wavelengths, as invisible infrared, or heat, radiation. Indeed, the James Webb Space Telescope, being built for a 2014 launch to explore the very earliest years of creation, will be an entirely infrared telescope.
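
A rough worked example of why an infrared camera is needed: cosmic expansion stretches wavelengths by a factor of (1 + z), and galaxies seen 600 to 800 million years after the Big Bang sit at redshifts of roughly z = 7 to 8, a ballpark assumption rather than a figure from the article. Ultraviolet light emitted then arrives as infrared.

```python
# Observed wavelength after cosmological redshift: lambda_obs = (1 + z) * lambda_rest.
def observed_nm(rest_nm, z):
    return rest_nm * (1 + z)

lyman_alpha = 121.6  # nm, rest-frame ultraviolet hydrogen line
for z in (7, 8):
    obs = observed_nm(lyman_alpha, z)
    print(f"z={z}: {lyman_alpha} nm UV arrives at {obs:.0f} nm "
          f"({obs / 1000:.2f} microns, infrared)")
```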

The galaxies are too far away and faint to be studied spectroscopically by even the largest telescopes on Earth, but by comparing their brightnesses in different infrared wavelength bands with optical images recorded by Hubble in 2004, astronomers could estimate how reddened the galaxies were. Some that showed up in the infrared images did not even appear in visible light.

Unlike the graceful spirals and grandly round ellipticals that populate today's universe, these baby galaxies are dumpy and irregular. Another clue that astronomers are getting close to the start of time is the blueness of the new Hubble galaxies when the effects of cosmic expansion are taken into account.

According to theoretical models, the first stars were born about 200 million years after the Big Bang, and consisted solely of hydrogen and helium. Lacking the elements to make dust, which reddens starlight, these stars would be bluer than those today. The colors of these galaxies, Dr. Illingworth said, suggested the presence of stars born only 300 million years after the Big Bang.

The new galaxies continue a recent trend in which the farther into the past astronomers look, the fewer and fainter and smaller galaxies they find, suggesting that the first billion years of history was a time in which galaxies and stars were rapidly increasing in number. The universe reached a peak in the birth rate of stars about 10 billion years ago, when it was a third of its present age.

Astronomers still do not know, however, if they will find enough galaxies and stars in that epoch when the universe was only half a billion years old to have burned off the hydrogen fog. That process is technically known as reionization, in which electrons are stripped from the hydrogen nuclei, making intergalactic space transparent.

More evidence that galaxies and massive stars were already going strong a few hundred million years after the Big Bang came last spring when NASA's Swift satellite detected gamma rays from an exploding star that was traced to a galaxy only 625 million years from the Big Bang.

Nial Tanvir of the University of Leicester and his colleagues called that blast "a glimpse of the end of the dark ages," suggesting that similar gamma ray bursts from that era could be used to measure the rate of star formation back then and figure out if stars were enough to reionize the universe.

Dr. Ellis said, "It does look as if galaxies could do the trick of causing reionization." It could be that the new Hubble galaxies were just the tip of the iceberg and that many more galaxies are lurking just below the threshold of detection. "The new camera," he said, "has revealed a bunch of little glowworms. The James Webb telescope will see the sky blazing with them."

Thursday, December 03, 2009

I'm Going to Open a Church. Starblast Revealed Itself to Me.

03/12/2009

The first miracle of heliocentrism


Hélio Schwartsman, for Folha de S.Paulo


How can one be protected from the power of the State? This question, one of the central themes of political science, has already consumed a great deal of ink and felled several acres of forest. The most classic account is Thomas Hobbes's, for whom the State is a ferocious monster, a Leviathan, which, despite committing every sort of abuse, is justified because it protects individuals from the war of all against all that characterizes the state of nature. As long as the public power, that is, the sovereign, guarantees the lives of its subjects, we owe it total obedience, which includes acceding to its smallest whims and tolerating the worst injustices. Only when the sovereign condemns us to death, that is, when it ceases to guarantee our existence, do we have the right to rebel against its authority.

OK, I admit this is not a very idyllic scenario. But neither was England during the civil war of the 17th century. Since then, things have improved considerably, at least in this little corner of the world we call the democratic West. Although the power of the State is still something to fear, we now have a set of fundamental rights and guarantees that are generally observed. When they are not, we can shout and kick. At worst, we no longer need to be condemned to death to earn the right to revolt.

More than that, in certain circumstances the State can be considered an ally, one that actively promotes well-being through institutions such as Social Security and education and health services.

I have written this long introduction, which in journalism we would call a "nariz de cera" (a long-winded lede), to propose a discussion I consider important: at what level should these fundamental guarantees materialize? Do they apply to individuals or to groups? Is it acceptable to grant benefits to specific sectors?

I raise these questions in connection with the tax exemption for churches, the subject of a report I wrote, published in the Sunday edition of Folha de S.Paulo (those with access to the digital edition can also see the artwork, which is not available through the link). For those who subscribe to nothing, or who lack the patience to navigate hyperlinks, here is a quick summary of the piece.

Claudio Angelo, the Folha's Science editor, Rafael Garcia, a reporter at the paper, and I decided to open a church. With technical assistance from the Folha's legal department and from the law firm Rodrigues Barbosa, Mac Dowell de Figueiredo Gasparian Advogados, we did just that. All it took was R$ 418.42 in fees and charges and five (non-consecutive) business days. Everything is very simple. There are no theological or doctrinal requirements for creating a religious cult, nor is any minimum number of faithful required.

With the registration of the Igreja Heliocêntrica do Sagrado Evangélio (Heliocentric Church of the Sacred Gospel) and its corporate taxpayer number (CNPJ), we were able to open a bank account in which we made financial investments exempt from income tax (IR) and the IOF financial-transactions tax. But these are not the venture's only tax benefits. Under Article 150 of the Constitution, temples of any cult are immune from all taxes on property, income or services related to their essential purposes, and those purposes are defined by the founders themselves. In other words, if we took the thing further, we could free ourselves from IPVA, IPTU, ISS, ITR and several other taxes on assets placed in the church's name.

There are also advantages beyond taxation. Temples are free to organize themselves however they see fit, which includes choosing their clergy. Once anointed, ministers acquire privileges such as exemption from compulsory military service (I have already consecrated my sons Ian and David as ministers of religion) and the right to special prison quarters.

The relevant public discussion here is whether it makes sense to grant so many privileges to religious groups. There is no doubt that freedom of worship is a right to be defended vigorously. It is, after all, an extension of freedom of thought and expression. Without these, we cannot even speak of democracy.

In principle, tax immunity for churches appears as a reinforcement of this religious freedom. The assumption is that it would be relatively easy for a ruler to crush, with taxes, any cult he disliked.

This reasoning works better on paper than in reality. Of course the power to tax without limit can destroy not only religions but any activity. In that case, one must ask: why protect only religions and not all people and associations? Well, to some extent the Constitution has already done so, by creating protective mechanisms that apply to everyone, such as the principles of anteriority and non-cumulativity, and the prohibition of taxes of a confiscatory character.

Do temples really need additional protections? I would grant that they did in bygone eras, when it was not implausible for the State to ally itself with the then official religion in order to asphyxiate rival cults economically. I believe, however, that this reasoning no longer applies, since Brazil no longer has an official religion and it would be constitutionally impossible to tax one temple while leaving another free of the levy.

Moreover, even if we considered tax immunity for churches essential, in its present form it is quite imperfect, since it protects them only from taxes (impostos), not from fees (taxas) and contributions. Indeed, partly to avoid sharing revenue with states and municipalities, the federal government's most recent offensives have taken precisely the form of contributions. My feeling is that tax immunity has become a kind of dispensable relic.

There you have the first miracle of heliocentrism: it is not every day that a church sacrifices itself in this way, arguing for the abolition of advantages from which it benefits.

I know I am preaching in the desert, but Brazil urgently needs to rid itself of certain bad habits, whose origins can be traced to feudalism and fascism, and finally become a Republic of equals, in which people hold rights because they are citizens, not because they belong to this or that professional category or were born in a splendid cradle. The same should apply to associations. If only for arithmetical reasons, whenever a fiscal prebend is granted to a given group, everyone outside that club is immediately burdened. It is worth remembering that the principle of tax solidarity is also one of the foundations of the Republic.


Sunday, November 29, 2009

NYT - Climate Change in Japan

Op-Ed Contributor

In Japan, Concerns Blossom


Published: November 28, 2009

Tokyo

Before the Climate Conference, a Weather Report

President Obama and other world leaders will gather in Copenhagen next week to discuss global warming. The Op-Ed editors asked writers from four different continents to give their own report on the climate changes they've experienced close to home.


IT'S autumn, and the people on the Chuo Line are all bundled up, just as they are in the spring. When I was a student, a friend from Hokkaido, in the north, told me she couldn't stand the winter cold in Tokyo. Although the temperature is lower in northern Japan, in Tokyo there is no moisture in the winter air; the dry winds bounce off the buildings, picking up speed until they seem to cut into your skin, making the cold intolerable.

When I was in elementary school in the mid-1960s, there were still paddy fields and vegetable patches on the outskirts of Tokyo. On frosty winter mornings spears of frozen grass crunched under my shoes as I walked to school, and it often snowed. Winters were harsher than they are now, but the face of spring was more clearly defined, boldly announcing its arrival. Summers were so hot and humid that even if I sat perfectly still the sweat rolled down my forehead, and when I walked through the rank grass on my way to the air-conditioned library, bugs used to jump up from the weeds around my feet.

I liked summer back then. But since the 1980s, the trees and grass have disappeared. The earth is now covered with asphalt and buildings, and the smell of parking lots mingled with oppressively hot gusts of air blown out from apartment air-conditioners hangs over the city; it seems this depressing heat will never go away. The ginkgo trees don't turn yellow until December. In place of the snow that used to fall in winter, the dry, cold blasts of wind come back, followed almost immediately by the unbearable heat of summer.

It's said that the suicide rate rises as the number of trees decreases. For some reason, only cherry trees seem to increase year by year. Many are of the type called somei-yoshino. A while ago I read that somei-yoshino is a cultivar that was artificially bred about a century ago and has since spread throughout the country.

If the conditions are the same, all the flowers on trees of this type bloom at once, and several days later, with no regrets for the brevity of their lives, the blossoms all fall together; thus embodying nationalistic ideology, they came to be regarded as a symbol of Japan even though they don't appear in ancient literary works or paintings. The flowers bloom at the same time because the trees are clones, bred from cuttings.

From March through May, the progress of the "cherry blossom front" is reported nightly on the weather report as it makes its way north through the archipelago. The TV meteorologist, who usually looks worried as she explains the lines that show the ominous movements of high and low pressure areas, becomes oddly cheerful when the topic switches to the "cherry blossom front," and she announces enthusiastically, "In just two weeks the cherries in the Kanto area will be in full bloom!"

Because of climate change, the weather always betrays our expectations, making us wonder if the earth isn't in its last days. Yet the "cherry blossom front" always follows the same course from south to north, which gives us a sense of relief. There are scores of varieties of cherry trees; if types other than somei-yoshino were planted, the "cherry blossom front" wouldn't be so predictable, and the weather report would cause more anxiety, I thought one day last spring as I left the train station and walked down the street lined with cherry trees. Beneath the trees people were sitting, eating box lunches and drinking sake or beer.

When I looked up, the somei-yoshino cherries were in full bloom, blanketing the sky; in the chill air, they looked like snow. Perhaps these white blossoms are the ghosts of snowflakes that no longer fall.

Yoko Tawada is the author of "The Naked Eye" and "Facing the Bridge." This essay was translated by Margaret Mitsutani from the Japanese.

Wednesday, November 18, 2009

Robert Smithson - Arte e os Elementos

How to Conserve Art That Lives in a Lake?


Published: November 17, 2009

In 1972, a year before his death in a plane crash at 35, the artist Robert Smithson wrote, "I am for an art that takes into account the direct effect of the elements as they exist from day to day." And with the creation of his greatest work — "Spiral Jetty," the huge counterclockwise curlicue of black basalt rock that juts into the Great Salt Lake in rural Utah — he certainly put that conviction to the test.

Eppich, Esmay and Tang: Collection of Dia Art Foundation
An aerial view of Robert Smithson's "Spiral Jetty" in Utah, taken by a camera attached to a latex weather balloon from about 800 feet in the air.

Tang/Collection of Dia Art Foundation
Rand Eppich of the Getty Conservation Institute surveying the site. The institute is helping Dia to document "Spiral Jetty."

After the piece was constructed in 1970, it spent decades underwater as the lake rose. It has re-emerged in the last few years because of drought, but its appearance has changed markedly, whitened by salt crystals and the buildup of silt. Mr. Smithson, who was fascinated by the concept of entropy, might have welcomed this transformation. But it is less clear what he would have thought about changes wrought by visitors to the remote site, who have, at times, carried off some of the rocks as art souvenirs. Or moved them to construct their own tiny spiral jetties nearby. Or, in one case, used them to spell out what they were undoubtedly drinking at the time — "BEER" — in the pink-hued sand next to the earthwork.

Issues like this recently prompted the Dia Art Foundation, which owns the work, to begin exploring the idea of systematically documenting the site, photographing it from year to year to give curators and conservators a better idea of how it is changing and a better basis for making decisions — always tricky in the world of land art — about whether to intervene.

"In my field we're trained to make condition reports," said Francesca Esmay, Dia's conservator, but she added of Smithson's work, composed of more than 6,000 tons of rock and soil: "Its scale is such that I can't just go out with a camera and pencil and clipboard by myself and describe it." So several months ago she turned to the Getty Conservation Institute, an arm of the J. Paul Getty Trust, which has organized and assisted in conservation and monitoring of art and historic sites from Central America to Africa to the Middle East.

After considering nearly every possible way to document "Spiral Jetty" from above — Rent a weather satellite? An airplane? A helicopter? Use a kite? — the institute, which often works in countries where conservation projects are carried out on shoestring budgets, came up with a remarkably simple solution: a $50 disposable latex weather balloon, easily bought online.

Along with a little helium, some fishing line, a slightly hacked Canon PowerShot G9 point-and-shoot digital camera, an improvised plywood and metal cradle for the camera and some plastic zip ties (to keep the cradle attached and the neck of the balloon cinched), a floating land-art documentation machine was improvised, MacGyver-like.
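
For a sense of scale, here is a rough lift budget for such a rig; all masses below are assumed for illustration, not figures reported in the article. Helium provides about one kilogram of lift per cubic metre at sea level, so a sub-kilogram camera package needs roughly half a cubic metre of gas.

```python
import math

# Net buoyancy of helium in air near sea level (approximate densities:
# air ~1.20 kg/m^3, helium ~0.18 kg/m^3).
NET_LIFT_PER_M3 = 1.20 - 0.18  # kg of payload per cubic metre of helium

# Assumed payload: compact camera + plywood cradle + fishing line.
payload_kg = 0.32 + 0.20 + 0.05
volume_needed = payload_kg / NET_LIFT_PER_M3
print(f"~{volume_needed:.2f} m^3 of helium for neutral buoyancy")

# Radius of a sphere holding that volume: r = (3V / (4*pi))^(1/3)
radius = (3 * volume_needed / (4 * math.pi)) ** (1 / 3)
print(f"balloon radius: ~{radius:.2f} m")
```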

"I'm not supposed to use the word cheap — it's inexpensive," Rand Eppich, a senior project manager with the Getty institute, said. Mr. Eppich, who conceived the balloon plan, made the two-and-a-half-hour drive from Salt Lake City last May with a Getty assistant, Aurora Tang, and Ms. Esmay, to put the system in use for the first time.

And despite a couple of balloons that popped in the Utah heat ("Thankfully, we didn't have cameras on them," Mr. Eppich said), the three managed to get some spectacular and highly useful shots of the jetty from heights ranging from 800 to 1,600 feet, as they unreeled the fishing line tied to the balloon, allowing it to rise.

"You don't need to be skilled conservators to do this part — it's literally like remembering back to childhood birthday parties," said Ms. Esmay, who joined Dia three years ago as its first full-time conservator. She is also responsible for the condition of sites like Walter De Maria's "Lightning Field" in western New Mexico and for works by artists like Donald Judd, Dan Flavin and Louise Bourgeois at Dia:Beacon in Beacon, N.Y.

Mr. Eppich said the Getty's goal was to create a system that Dia could use annually at little cost and one simple enough that Ms. Esmay could operate it herself. "We want to help people do something that's repeatable and sustainable after we're gone," he said.

Preservation concerns about "Spiral Jetty" have arisen lately not only because of the work's re-emergence from the water but also because of plans announced in the last two and a half years by companies to initiate industrial projects near the site. One is a large expansion of a field of solar evaporation ponds used to extract potassium sulfate from the water for fertilizer. Another is a plan for exploratory oil drilling that Dia officials argued would disrupt the way the work would be viewed and potentially harm it physically. As a result of the drilling proposal — currently in limbo — Dia and Utah officials have begun exploring the creation of a buffer zone around the sculpture that would help protect it while still allowing the lake area to be used for other purposes.

But in addition to industrial threats to the work, there are also natural ones, like silt, which has begun to accumulate between the outermost band of the spiral and the next one in, as the lake's level has dropped. The lake is so low it is now possible to walk a quarter-mile into it with the water reaching only knee-high.

"In my personal opinion alone," Ms. Esmay said of the silt, "I think it's to such a degree now that it's foreign to the piece. But in 10 years it could be gone or in one year it could be gone. Or it could be worse. You have no way of knowing, and that's just inherent to the work itself."

She emphasized that the documentation project was not a prelude to any active plans to rebuild or even touch up the jetty. "Something like that might not happen for 20 years, if it ever happens at all," she said, "but at least we'll have 20 years of data that will show the patterns of change."

And if any conservation plans were to go forward, then the really complicated work would begin: trying to figure out what Mr. Smithson would have thought about it.

"Nature does not proceed in a straight line," he wrote. "It is rather a sprawling development. Nature is never finished."

Tuesday, October 20, 2009

NYT - Arte Conceitual

Op-Ed Contributor

Has Conceptual Art Jumped the Shark Tank?

Published: October 15, 2009

Christchurch, New Zealand


ART's link with money is not new, though it does continue to generate surprises. On Friday night, Christie's in London plans to auction another of Damien Hirst's medicine cabinets: literally a small, sliding-glass medicine cabinet containing a few dozen bottles or tubes of standard pharmaceuticals: nasal spray, penicillin tablets, vitamins and so forth. This work is not as grand as a Hirst shark, floating eerily in a giant vat of formaldehyde, one of which sold for more than $12 million a few years ago. Still, the estimate of up to $239,000 for the medicine cabinet is impressive — rather more impressive than the work itself.

No disputing tastes, of course, if yours lean toward the aesthetic contemplation of an orderly medicine cabinet. Buy it, and you acquire a work of art by the world's richest and — by that criterion — most successful living artist. Still, neither this piece nor Mr. Hirst's dissected calves and embalmed horses are quite "by" the artist in a conventional sense. Mr. Hirst's name rightfully goes on them because they were his conceptions. However, he did not reproduce any of the medicine bottles or boxes in his cabinet (in the way that Warhol actually recreated Brillo boxes), nor did he catch a shark or do the taxidermy.

In this respect, the pricey medicine cabinet belongs to a tradition of conceptual art: works we admire not for skillful hands-on execution by the artist, but for the artist's creative concept. Mr. Hirst has a talent for coming up with concepts that capture the attention of the art market, putting him in the company of other big names who have now and again moved away from making art with their own hands: Jeff Koons, for example, who has put vacuum cleaners into Plexiglas cases and commissioned an Italian porcelain manufacturer to make a cheesy gold and white sculpture of Michael Jackson and his pet chimp. Mr. Koons need not touch the art his contractors produce; the ideas are his, and that's enough.

Sophisticated gallery owners or curators normally respond with withering condescension to worries about the lack of craftsmanship in contemporary art. Art has moved on, I've heard it argued, since Victorian times, when "she'd painted every hair" was ordinary aesthetic praise. What is important today is not technical skill, but skill in playing inventively with ideas.

Since the endearingly witty Marcel Duchamp invented conceptual art 90 years ago by offering his "ready-mades" — a urinal or a snow shovel, for instance — for gallery shows, the genre has degenerated. Duchamp, an authentic artistic genius, was in 1917 making sport of the art establishment and its stuffy values. By the time we get to 2009, Mr. Hirst and Mr. Koons are the establishment.

Does this mean that conceptual art is here to stay? That is not at all certain, and it is not just auction results that are relevant to the issue. To see why works of conceptual art have an inherent investment risk, we must look back at the whole history of art, including art's most ancient prehistory.

It is widely assumed that the earliest human art works are the stupendously skillful cave paintings of Lascaux and Chauvet, the latter perhaps 32,000 years old, along with a few small realistic sculptures of women and of animals from the same period. But artistic and decorative behavior emerged in a far more distant past. Shell necklaces that look like something you would see at a tourist resort, as well as evidence of ochre body paint, have been found from more than 100,000 years ago. But the most intriguing prehistoric artifacts are much older even than that. I have in mind the so-called Acheulian hand axes.

The earliest stone tools are choppers and blades found in Olduvai Gorge in East Africa, from 2.5 million years ago. These unadorned tools remained unchanged for thousands of centuries, until around 1.4 million years ago when Homo ergaster, Homo erectus and other human ancestral groups started doing something new and remarkable. They began shaping single, thin stone blades, sometimes rounded ovals, but often in what to our eyes are arresting symmetrical pointed leaf or teardrop forms. Acheulian hand axes (after St.-Acheul in France, a site of 19th-century finds) have been unearthed in their thousands, scattered across Asia, Europe and Africa, wherever Homo erectus roamed.

The sheer numbers of hand axes indicate a rate of manufacture beyond needs for butchering animals. Even more curious, unlike other prehistoric stone tools, hand axes often exhibit no evidence of wear on their delicate blade edges, and some are in any case too big for practical use. They are occasionally hewn from colorful stone materials (even with decoratively embedded fossils). Their symmetry, materials and above all meticulous workmanship make them quite simply beautiful to our eyes. What were these ancient yet somehow familiar artifacts for?

The best available explanation is that they are literally the earliest known works of art — practical tools transformed into captivating aesthetic objects, contemplated both for their elegant shape and virtuoso craftsmanship. Hand axes mark an evolutionary advance in human prehistory, tools attractively fashioned to function as what Darwinians call "fitness signals" — displays like the glorious peacock's tail, which functions to show peahens the strength and vitality of the males who display it.

Hand axes, however, were not grown, but consciously, cleverly made. They were therefore able to indicate desirable personal qualities: intelligence, fine motor control, planning ability and conscientiousness. Such skills gained for those who displayed them status and a reproductive advantage over the less capable. Across many thousands of generations this translated into both an increase in intelligence and an evolved sense that the symmetry and craftsmanship of hand axes is "beautiful."

Aesthetically pleasing hand axes constitute an unbroken Stone-Age tradition that stretches over a million years, ending 100,000 to 150,000 years ago, about the time that their makers' African descendants, now called Homo sapiens, started to become articulate speakers of language. These humans were probably finding new ways to amuse and amaze one another with — who knows? — jokes, dramatic storytelling, dancing or hairstyling. Alas, geological layers do not record these other, more perishable aspects of prehistoric life. For us moderns, the arts have come to depict imaginary worlds and express intense emotions with music, painting, dance and fiction.

However, one trait of the ancestral personality persists in our aesthetic cravings: the pleasure we take in admiring skilled performances. From Lascaux to the Louvre to Carnegie Hall — where now and again the Homo erectus hairs stand up on the backs of our necks — human beings have a permanent, innate taste for virtuoso displays in the arts.

We ought, then, to stop kidding ourselves that painstakingly developed artistic technique is passé, a value left over from our grandparents' culture. Evidence is all around us. Even when we have lost contact with the social or religious ideas behind the arts of bygone civilizations, we are still able, as with the great bronzes or temples of Greece or ancient China, to respond directly to craftsmanship. The direct response to skill is what makes it possible to find beauty in many tribal arts even though we often know nothing about the beliefs of the people who created them. There is no place on earth where superlative technique in music and dance is not regarded as beautiful.

The appreciation of contemporary conceptual art, on the other hand, depends not on immediately recognizable skill, but on how the work is situated in today's intellectual zeitgeist. That's why looking through the history of conceptual art after Duchamp reminds me of paging through old New Yorker cartoons. Jokes about Cadillac tailfins and early fax machines were once amusing, and the same can be said of conceptual works like Piero Manzoni's 1962 declaration that Earth was his art work, Joseph Kosuth's 1965 "One and Three Chairs" (a chair, a photo of the chair and a definition of "chair") or Mr. Hirst's medicine cabinets. Future generations, no longer engaged by our art "concepts" and unable to divine any special skill or emotional expression in the work, may lose interest in it as a medium for financial speculation and relegate it to the realm of historical curiosity.

In this respect, I can't help regarding medicine cabinets, vacuum cleaners and dead sharks as reckless investments. Somewhere out there in collectorland is the unlucky guy who will be the last one holding the vacuum cleaner, and wondering why.

But that doesn't mean we need to worry about the future of art. There are plenty of prodigious artists at work in every medium, ready to wow us with surprising skills. And yes, now and again I walk past a jewelry shop window and stop, transfixed by a sparkling, teardrop-shaped precious stone. Our distant ancestors loved that shape, and found beauty in the skill needed to make it — even before they could put their love into words.

Denis Dutton is a professor of the philosophy of art at the University of Canterbury in New Zealand and the author of "The Art Instinct: Beauty, Pleasure and Human Evolution."

Tuesday, October 06, 2009

How Nonsense Sharpens the Intellect

The New York Times - Mind
Published: October 5, 2009

In addition to assorted bad breaks and pleasant surprises, opportunities and insults, life serves up the occasional pink unicorn. The three-dollar bill; the nun with a beard; the sentence, to borrow from the Lewis Carroll poem, that gyres and gimbles in the wabe.


An experience, in short, that violates all logic and expectation. The philosopher Soren Kierkegaard wrote that such anomalies produced a profound "sensation of the absurd," and he wasn't the only one who took them seriously. Freud, in an essay called "The Uncanny," traced the sensation to a fear of death, of castration or of "something that ought to have remained hidden but has come to light."

At best, the feeling is disorienting. At worst, it's creepy.

Now a study suggests that, paradoxically, this same sensation may prime the brain to sense patterns it would otherwise miss — in mathematical equations, in language, in the world at large.

"We're so motivated to get rid of that feeling that we look for meaning and coherence elsewhere," said Travis Proulx, a postdoctoral researcher at the University of California, Santa Barbara, and lead author of the paper appearing in the journal Psychological Science. "We channel the feeling into some other project, and it appears to improve some kinds of learning."

Researchers have long known that people cling to their personal biases more tightly when feeling threatened. After thinking about their own inevitable death, they become more patriotic, more religious and less tolerant of outsiders, studies find. When insulted, they profess more loyalty to friends — and when told they've done poorly on a trivia test, they even identify more strongly with their school's winning teams.

In a series of new papers, Dr. Proulx and Steven J. Heine, a professor of psychology at the University of British Columbia, argue that these findings are variations on the same process: maintaining meaning, or coherence. The brain evolved to predict, and it does so by identifying patterns.

When those patterns break down — as when a hiker stumbles across an easy chair sitting deep in the woods, as if dropped from the sky — the brain gropes for something, anything that makes sense. It may retreat to a familiar ritual, like checking equipment. But it may also turn its attention outward, the researchers argue, and notice, say, a pattern in animal tracks that was previously hidden. The urge to find a coherent pattern makes it more likely that the brain will find one.

"There's more research to be done on the theory," said Michael Inzlicht, an assistant professor of psychology at the University of Toronto, because it may be that nervousness, not a search for meaning, leads to heightened vigilance. But he added that the new theory was "plausible, and it certainly affirms my own meaning system; I think they're onto something."

In the most recent paper, published last month, Dr. Proulx and Dr. Heine described having 20 college students read an absurd short story based on "The Country Doctor," by Franz Kafka. The doctor of the title has to make a house call on a boy with a terrible toothache. He makes the journey and finds that the boy has no teeth at all. The horses who have pulled his carriage begin to act up; the boy's family becomes annoyed; then the doctor discovers the boy has teeth after all. And so on. The story is urgent, vivid and nonsensical — Kafkaesque.

After the story, the students studied a series of 45 strings of 6 to 9 letters, like "X, M, X, R, T, V." They later took a test on the letter strings, choosing those they thought they had seen before from a list of 60 such strings. In fact the letters were related, in a very subtle way, with some more likely to appear before or after others.

The test is a standard measure of what researchers call implicit learning: knowledge gained without awareness. The students had no idea what patterns their brain was sensing or how well they were performing.
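
The study's actual letter grammar is not spelled out here, but a hypothetical sketch shows how such strings can be built: a transition table makes some letters more likely to follow others, producing sequences that look random yet carry a learnable pattern.

```python
import random

# Hypothetical transition table (a stand-in, not the grammar from the study):
# each letter may only be followed by certain others, so the strings carry
# a subtle statistical structure while looking arbitrary.
FOLLOWERS = {
    "X": "MRT", "M": "XV", "R": "TVX", "T": "VXM", "V": "RXM",
}

def make_string(length):
    s = [random.choice(list(FOLLOWERS))]
    while len(s) < length:
        s.append(random.choice(FOLLOWERS[s[-1]]))
    return ", ".join(s)

random.seed(1)
for _ in range(5):
    print(make_string(random.randint(6, 9)))  # strings of 6 to 9 letters
```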

But perform they did. They chose about 30 percent more of the letter strings than did a comparison group of 20 students who had read a different, coherent short story, and they were almost twice as accurate in their choices.

"The fact that the group who read the absurd story identified more letter strings suggests that they were more motivated to look for patterns than the others," Dr. Heine said. "And the fact that they were more accurate means, we think, that they're forming new patterns they wouldn't be able to form otherwise."

Brain-imaging studies of people evaluating anomalies, or working out unsettling dilemmas, show that activity in an area called the anterior cingulate cortex spikes significantly. The more activation is recorded, the greater the motivation or ability to seek and correct errors in the real world, a recent study suggests. "The idea that we may be able to increase that motivation," said Dr. Inzlicht, a co-author, "is very much worth investigating."

Researchers familiar with the new work say it would be premature to incorporate film shorts by David Lynch, say, or compositions by John Cage into school curriculums. For one thing, no one knows whether exposure to the absurd can help people with explicit learning, like memorizing French. For another, studies have found that people in the grip of the uncanny tend to see patterns where none exist — becoming more prone to conspiracy theories, for example. The urge for order satisfies itself, it seems, regardless of the quality of the evidence.

Still, the new research supports what many experimental artists, habitual travelers and other novel seekers have always insisted: at least some of the time, disorientation begets creative thinking.

Friday, September 18, 2009

Colored Entanglement

18/9/2009

By Fábio de Castro

Agência FAPESP – A group of Brazilian scientists has succeeded, for the first time, in generating quantum entanglement among three beams of light of different colors. The feat should help clarify the characteristics of entanglement, regarded by scientists as the basis for future technologies such as quantum computing, quantum cryptography and quantum teleportation.

An intrinsic phenomenon of quantum mechanics, entanglement allows two or more particles to share their properties even without any physical link between them.

According to the authors of the study, published on Thursday (September 17) on Science Express, the website of the journal Science, the ability to switch entanglement among different frequencies of light could be useful for advanced quantum-information protocols.

The group, which brings together researchers from the Universidade de São Paulo (USP) and Brazilian scientists at the Max Planck Institute and the University of Erlangen-Nuremberg, in Germany, was supported by FAPESP through a Regular Research Grant. The scientists are also members of the National Institute of Science and Technology for Quantum Information (INCT-IQ).

The discovery was part of Alessandro de Sousa Villar's doctoral thesis, for which he won the 2008 Capes Thesis Prize in Physics as well as the Professor José Leite Lopes prize awarded by the Brazilian Physical Society. Villar, who was supported by a FAPESP doctoral fellowship, is now a researcher at the Max Planck Institute and the University of Erlangen-Nuremberg, both in Germany.

According to the study's lead author, Paulo Nussenzveig, of USP's Physics Institute, the possibility of generating entanglement among three different light beams had been predicted by the same team three years earlier but had not yet been demonstrated experimentally. Of the three beams, only one was in the visible portion of the spectrum; the other two were in the infrared.

"In 2005 we measured entanglement between two beams for the first time, confirming a theoretical prediction made by other groups in 1988. From there we realized that the information present in the system was more complex than we had imagined and, in 2006, we wrote a theoretical paper predicting the entanglement of three beams, which we have now managed to demonstrate," Nussenzveig told Agência FAPESP.

The scientist explains that, to carry out the study, the group used an apparatus known as an optical parametric oscillator (OPO), which consists of a special crystal placed between two mirrors and pumped with a light source.

"What makes this crystal special is its response to light, which is nonlinear. With it, we can send green light into the system and get infrared light out, for example," he explained. According to him, the continuous-wave OPOs employed in the study have been in use since the 1980s.

Facing a number of difficulties and surprises while dealing with previously unknown phenomena, the scientists managed to "tame" the system and observe entanglement among three beams of different wavelengths. During the experiment they also discovered an important effect: the so-called sudden death of entanglement occurred in the case under study as well.

According to Nussenzveig, a study led by Luiz Davidovich, of the Universidade Federal do Rio de Janeiro (UFRJ), published in Science in 2007, showed that quantum entanglement can disappear abruptly, "dissolving" the quantum link between the particles, something that could compromise the use of the phenomenon in the future development of quantum computers.

The effect, dubbed entanglement sudden death, had been predicted earlier by theoretical physicists and was first observed by the UFRJ group in discrete systems, that is, systems with a finite set of possible outcomes.

"For macroscopic continuous-variable systems there are relatively few studies and theoretical predictions, and there was no experimental work at all. We observed for the first time something that had not been predicted: sudden death in continuous variables. This means it is a global, collective effect," he said.

The heart of quantum physics

De acordo com outro autor do estudo, Marcelo Martinelli, também professor do Instituto de Física da USP, o emaranhamento quântico é a propriedade que distingue as situações quânticas das situações nas quais os eventos obedecem às leis da física clássica.

"Essa propriedade é verificada por meio de correlações que são diferentes das que ocorrem no mundo da física clássica. Quando jogamos uma moeda no chão, na física clássica, se temos a coroa voltada para cima, temos a cara voltada para baixo. No mundo quântico, esse resultado tem diferentes graus de liberdade e ângulos de correlação", explicou.

Segundo Martinelli, o emaranhamento já havia sido muitas vezes verificado em sistemas discretos, ou entre dois ou mais sistemas no domínio de variáveis continuas. Mas, quando havia três ou mais sub-sistemas, o emaranhamento gerado era sempre de feixes de luz da mesma cor.

"Isso é interessante, porque abre caminho para que possamos, a partir de um sistema que interage com uma certa frequência do espectro eletromagnético, transferir suas propriedades quânticas para outro sistema – seria o chamado teletransporte quântico", disse o cientista, que coordena o projeto de Auxílio Regular "Teletransporte de informação quântica entre diferentes cores", apoiado pela FAPESP.

According to Martinelli, this could be done by using entangled beams as the vehicle that carries the information. "But if we can only handle variables of the same color, the quantum information in the first system will only pass to a second and a third system if they all operate at the same frequency. Our scheme would make it possible to transfer quantum information between different bands of the electromagnetic spectrum," he explained.

By observing entanglement sudden death in a continuous-variable system for the first time, the group obtained new information about the nature of the phenomenon.

Martinelli explains that every system that interacts with its surroundings gradually suffers losses. A kettle in contact with its environment cools continuously until it reaches thermal equilibrium with the outside temperature. But this process is exponential and would only be complete after an infinite amount of time; in practice, the kettle is always slightly warmer than its surroundings.
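
The contrast can be made concrete with a toy model (a hypothetical illustration, not the group's actual equations): an exponentially decaying quantity stays positive at every finite time, while "sudden death" means the quantity reaches exactly zero at a finite time. A minimal sketch in Python:

    import math

    def exponential_decay(x0, rate, t):
        # Like the kettle: approaches zero but stays positive for every finite t.
        return x0 * math.exp(-rate * t)

    def sudden_death(x0, rate, t):
        # Reaches exactly zero once t >= x0 / rate.
        return max(0.0, x0 - rate * t)

    for t in [0.0, 2.0, 5.0, 10.0, 20.0]:
        print(f"t={t:5.1f}  exponential: {exponential_decay(1.0, 0.5, t):.6f}  "
              f"sudden death: {sudden_death(1.0, 0.1, t):.2f}")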

"No entanto, no caso do emaranhamento, a sua interação com o ambiente nem sempre segue esse decaimento exponencial. Eventualmente ele desaparece em um tempo finito – o que caracteriza a chamada morte súbita. Vimos que isso também ocorre para variáveis contínuas e, ajustando os parâmetros de operação do nosso OPO, conseguimos controlar essa morte súbita", disse.

According to him, this discovery matters for the prospect of one day transporting quantum information. "If we send this information through an optical fiber, for example, we cannot afford to lose the entanglement in the system through propagation losses. If quantum information comes to play a central role in information technology, understanding the dynamics of sudden death and of entanglement will be all the more fundamental," he said.

Besides Villar, Nussenzveig and Martinelli, the study's authors include Antônio Sales Oliveira Coelho and Felippe Alexandre Silva Barbosa, both graduate students at USP's Physics Institute, and Katiúscia Cassemiro, of the Max Planck Institute, in Germany.

The article Three-Color Entanglement, by Paulo Nussenzveig and others, is available to Science subscribers at www.scienceexpress.org.

Thursday, September 03, 2009

A galactic candid shot

Science News

September 3, 2009

Agência FAPESP – A candid shot of cosmic proportions has just been captured by an international group of astronomers. The images reveal the connection between the Andromeda and Triangulum galaxies.

As with any affair between movie stars, there had been suspicions of a relationship, but no proof until now. In an article published in this Thursday's (September 3) issue of the journal Nature, the scientists present evidence of the connection and describe how larger galaxies grow even bigger by incorporating stars from smaller neighboring galaxies.

This model of galactic evolution, known as hierarchical, holds that large galaxies such as Andromeda, which can even be seen with the naked eye from the Northern Hemisphere, should be surrounded by the "leftovers" of smaller galaxies.

For the first time, astronomers have images confirming the hierarchical model. The discovery, which involved researchers from Australia, France, Germany and the United Kingdom, was led by Alan McConnachie, of the Herzberg Institute of Astrophysics, part of Canada's National Research Council.

"A galáxia de Andrômeda é nossa vizinha gigante, localizada a mais de 2,5 milhões de anos-luz da Via Láctea. Nosso estudo incluiu uma área com diâmetro de quase 1 milhão de anos-luz, centrada em torno de Andrômeda. Trata-se da mais extensa e mais profunda imagem já feita de uma galáxia", disse Geraint Lewis, da Universidade de Sydney, na Austrália, outro autor do estudo.

"Nós mapeamos os extremos inexplorados de Andrômeda pela primeira vez e encontramos estrelas e estruturas de grande porte que são remanescentes de galáxias menores e que foram incorporadas por Andrômeda como parte de seu contínuo crescimento", explicou.

The group's biggest surprise was discovering that Andromeda is interacting with its neighbor, the Triangulum galaxy, which is visible from the Northern Hemisphere with a small telescope. "Millions of stars from the Triangulum galaxy have already been 'pulled away' by Andromeda as a result of this relationship," said Lewis.

Like paparazzi permanently staked out at the homes of film and television stars, the group intends to keep watching the outcome of the interaction between the galaxies, which they estimate could end in a far more solid union. "The two may merge entirely," said Lewis.

The study also indicates that galaxies are much larger than previously estimated, with their gravitational influence extending well beyond the stars closest to their centers.

"Como Andrômeda é considerada uma galáxia típica, foi surpreendente ver como ela é vasta. Encontramos estrelas a distâncias de até 100 vezes o raio do disco central da galáxia", contou Lewis. Os astrônomos usaram para o estudo o telescópio Canadense-Francês-Havaiano, localizado no monte Mauna Kea, no Havaí.

The article The remnants of galaxy formation from a panoramic survey of the region around M31, by Alan McConnachie and others, is available to Nature subscribers at www.nature.com.

Wednesday, September 02, 2009

A molecule against diabetes and obesity

Science News

September 2, 2009

Agência FAPESP – More than 180 million people worldwide have type 2 diabetes, the most common form of the disease. And the total keeps growing at an alarming rate, which has led research centers in several countries to look for new ways of fighting the problem, whose major risk factors include obesity.

An international group of researchers has just put forward a potential candidate: the protein TGR5. The scientists discovered that activating it can reduce weight gain and treat diabetes. The study was published this Wednesday (September 2) in the journal Cell Metabolism.

Earlier work by the same group showed that bile acids (produced in the liver to break down fats), by activating TGR5 in muscle and brown adipose tissue, were able to increase energy expenditure and to prevent, or even reverse, induced obesity in mice.

In the new study, the group led by professors Kristina Schoonjans and Johan Auwerx, of the École Polytechnique Fédérale de Lausanne, in Switzerland, examined the role of TGR5 in the intestine, where the protein is expressed in cells specialized in hormone production.

The researchers observed that these cells, TGR5-expressing enteroendocrine cells, control the secretion of the hormone GLP-1, which plays a critical role in controlling pancreatic function and regulating blood sugar levels.

Schoonjans and Auwerx worked together with Roberto Pellicciari, of the University of Perugia, in Italy, who developed a TGR5 activator called INT-777 in collaboration with Intercept Pharmaceuticals, a United States company.

The group demonstrated that, in laboratory tests in mice, activating TGR5 can effectively treat diabetes and reduce body mass. The authors also showed that these effects were linked to increases both in GLP-1 secretion and in energy expenditure.

According to the researchers, the results point to a new approach to treating type 2 diabetes and obesity, based on boosting GLP-1 secretion through administration of the TGR5 activator.

The article by Kristina Schoonjans and others is available to Cell Metabolism subscribers at www.cell.com/cell-metabolism.
 


Tuesday, September 01, 2009

After the Transistor, a Leap Into the Microcosm

Published: August 31, 2009

YORKTOWN HEIGHTS, N.Y. — Gaze into the electron microscope display in Frances Ross's laboratory here and it is possible to persuade yourself that Dr. Ross, a 21st-century materials scientist, is actually a farmer in some Lilliputian silicon world.

Photo: Chris Ramirez for The New York Times. Frances Ross, a scientist at I.B.M. Research in Yorktown Heights, N.Y., operating an electron microscope, which allows her to study nanowires, about one one-thousandth the width of a human hair, as they grow.

Dr. Ross, an I.B.M. researcher, is growing a crop of mushroom-shaped silicon nanowires that may one day become a basic building block for a new kind of electronics. Nanowires are just one example, although one of the most promising, of a transformation now taking place in materials science as researchers push to create the next generation of switching devices, smaller, faster and more powerful than today's transistors.

The reason that many computer scientists are pursuing this goal is that the shrinking of the transistor has approached fundamental physical limits. Increasingly, transistor manufacturers grapple with subatomic effects, like the tendency for electrons to "leak" across material boundaries. The leaking electrons make it more difficult to know when a transistor is in an on or off state, the information that makes electronic computing possible. They have also led to excess heat, the bane of the fastest computer chips.

The transistor is not just another element of the electronic world. It is the invention that made the computer revolution possible. In essence it is an on-off switch controlled by the flow of electricity. For the purposes of computing, when the switch is on it represents a one. When it is off it represents a zero. These zeros and ones are the most basic language of computers.

For more than half a century, transistors have gotten smaller and cheaper, following Moore's Law, which states that circuit density doubles roughly every two years. The trend was predicted by the computer scientist Douglas Engelbart in 1959, and then described by Gordon Moore, the co-founder of Intel, in a now-legendary 1965 article in Electronics, from which the law takes its name.

Today's transistors are used by the billions to form microprocessors and memory chips. Often called planar transistors, they are built on the surface (or plane) of a silicon wafer by a manufacturing process that deposits and then etches away different insulating, conducting and semiconducting materials with such precision that the industry is now approaching the ability to place individual molecules.

A typical high-end Intel microprocessor today contains roughly a billion transistors, each capable of switching on and off about 300 billion times a second and packed densely enough that two million of them would fit comfortably in the period at the end of this sentence.

In fact, this year, the chip industry is preparing to begin the transition from a generation of microprocessor chips based on a minimum feature size of 45 nanometers (a human hair is roughly 80,000 nanometers in width) to one of 32 nanometers — the next step down into the microcosm. But the end of this particular staircase may be near.
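
The arithmetic behind each such step: shrinking the linear feature size by a factor of roughly 0.7 halves the area a transistor occupies, which is how density doubles with each generation. A quick sanity check in Python:

    # Each process generation shrinks linear features by about 0.7x,
    # so transistor area shrinks by about 0.7^2 = 0.5x and density doubles.
    old_nm, new_nm = 45, 32
    linear_shrink = new_nm / old_nm
    area_shrink = linear_shrink ** 2
    print(f"linear: {linear_shrink:.2f}x, area: {area_shrink:.2f}x")
    # linear: 0.71x, area: 0.51x -> roughly twice as many transistors per chip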

"Fundamentally the planar transistor is running out of steam," said John E. Kelly III, I.B.M.'s senior vice president and director of research.

"We're at an inflection point, you better believe it, and most of the world is in denial about it," said Mark Horowitz, a Stanford University electrical engineer who spoke last week at a chip design conference in Palo Alto, Calif. "The physics constraints are getting more and more serious."

Many computer scientists have been warning for years that this time would come, that Moore's Law would cease to be valid because of increasing technical difficulties and the expense of overcoming them. Last week at Stanford University, during a panel on the future of scaling (of which the shrinking of transistors is one example), several panelists said the end was near.

"We're done scaling. We've been playing tricks since 90 nanometers," said Brad McCredie, an I.B.M. fellow and one of the company's leading chip designers, in a reference to the increasingly arcane techniques the industry has been using to make circuits smaller.

For example, for the past three technology generations Intel has used a material known as "strained silicon," in which a layer of silicon atoms is stretched beyond its normal atomic spacing by depositing it on top of another material, such as silicon germanium. This results in lower energy consumption and faster switching speeds.

Other researchers and business executives believe the shrinking of the transistor can continue, at least for a while, and that the current industry standard Mosfet (metal-oxide-semiconductor field-effect transistor) can be effectively harnessed for several more technology generations.

Technology executives at the Intel Corporation, the world's largest chipmaker, say they believe that by coupling more advanced photolithographic techniques with new kinds of materials and by changing the design of the transistor, it will be possible to continue to scale down to sizes as small as five nanometers — effectively taking the industry forward until the end of the next decade.

"Silicon will probably continue longer than we expect," said Michael C. Mayberry, an Intel vice president and the director of the company's component research program.

Both Intel and I.B.M. are publicly committed to a new class of transistors known as FinFETs that may be used as early as the 22-nanometer technology generation beginning in 2011 or 2012. Named for a portion of the switch that resembles a fish fin, these transistors have the dual advantage of offering greater density because they are tipped vertically out of the plane of the silicon wafer, as well as better insulating properties, making it easier to control the switching from a 1 to a 0 state.

But sooner or later, new materials and new manufacturing processes will be necessary to keep making computer technology ever cheaper. In the long term, new switches might be based on magnetic, quantum or even nanomechanical switching principles. One possibility would be to use changes in the spin of an individual electron to represent a 1 or a 0.

"If you look out into the future, there is a branching tree and there are many possible paths we might take," Dr. Mayberry said.

In Dr. Ross's laboratory at I.B.M., researchers are concentrating on more near-term technology. They are exploring the idea of constructing FinFET switches in a radical new process that breaks away from photo etching. It is a kind of nanofarming. Dr. Ross sprinkles gold particles as small as 10 nanometers in diameter on a substrate and then suffuses them in a silicon gas at a temperature of about 1,100 degrees Fahrenheit. This causes the particles to become "supersaturated" with silicon from the gas, which will then precipitate into a solid, forming a wire that grows vertically.

I.B.M. is pressing aggressively to develop this technology, which could be available commercially by 2012, she said. At the same time she acknowledged that significant challenges remain in perfecting nanowire technology. The mushroom-shaped wires in her laboratory now look a little bit like bonsai trees. To offer the kind of switching performances chipmakers require, the researchers must learn to make them so that their surfaces are perfectly regular. Moreover, techniques must be developed to make them behave like semiconductors.

I.B.M. is also exploring higher-risk ideas like "DNA origami," a process developed by Paul W. K. Rothemund, a computer scientist at the California Institute of Technology.

The technique involves creating arbitrary two- and three-dimensional shapes by controlling the folding of a long single strand of viral DNA with multiple smaller "staple" strands. It is possible to form everything from nanometer-scale triangles and squares to more elaborate shapes like smiley faces and a rough map of North America. That could one day lead to an application in which such DNA shapes could be used to create a scaffolding just as wooden molds are now used to create concrete structures. The DNA shapes, for example, could be used to more precisely locate the gold nanoparticles that would then be used to grow nanowires. The DNA would be used only to align the circuits and would be destroyed by the high temperatures used by the chip-making processes.

At Intel there is great interest in building FinFET switches but also in finding ways to integrate promising III-V materials on top of silicon as well as exploring materials like graphene and carbon nanotubes, from which the company has now made prototype switches as small as 1.5 nanometers in diameter, according to Dr. Mayberry. The new materials have properties like increased electron mobility that might make transistors that are smaller and faster than those that can be made with silicon.

"At that very small dimension you have the problem of how do you make the connection into the tube in the first place," he said. "It's not just how well does this nanotube itself work, but how do you integrate it into a system."

Given all the challenges that each new chip-making technology faces, as well as the industry's sharp decline in investment, it is tempting to suggest that the smaller, faster, cheaper trend may indeed be on the brink of slowing if not halting.

Then again, as Dr. Mayberry suggests, the industry has a way of surprising its skeptics.

A One-Way Ticket to Mars

Op-Ed Contributor

Published: August 31, 2009

Tempe, Ariz.

NOW that the hype surrounding the 40th anniversary of the Moon landings has come and gone, we are faced with the grim reality that if we want to send humans back to the Moon the investment is likely to run in excess of $150 billion. The cost to get to Mars could easily be two to four times that, if it is possible at all.

This is the issue being wrestled with by a NASA panel, convened this year and led by Norman Augustine, a former chief executive of Lockheed Martin, that will in the coming weeks present President Obama with options for the near-term future of human spaceflight. It is quickly becoming clear that going to the Moon or Mars in the next decade or two will be impossible without a much bigger budget than has so far been allocated. Is it worth it?

The most challenging impediment to human travel to Mars does not seem to involve the complicated launching, propulsion, guidance or landing technologies but something far more mundane: the radiation coming from the Sun and from cosmic rays. The shielding necessary to ensure that the astronauts do not get a lethal dose of this radiation on a round trip to Mars may very well make the spacecraft so heavy that the amount of fuel needed becomes prohibitive.

There is, however, a way to surmount this problem while reducing the cost and technical requirements, but it demands that we ask this vexing question: Why are we so interested in bringing the Mars astronauts home again?

While the idea of sending astronauts aloft never to return is jarring upon first hearing, the rationale for one-way trips into space has both historical and practical roots. Colonists and pilgrims seldom set off for the New World with the expectation of a return trip, usually because the places they were leaving were pretty intolerable anyway. Give us a century or two and we may turn the whole planet into a place from which many people might be happy to depart.

Moreover, one of the reasons that is sometimes given for sending humans into space is that we need to move beyond Earth if we are to improve our species' chances of survival should something terrible happen back home. This requires people to leave, and stay away.

There are more immediate and pragmatic reasons to consider one-way human space exploration missions.

First, money. Much of the cost of a voyage to Mars will be spent on coming home again. If the fuel for the return is carried on the ship, this greatly increases the mass of the ship, which in turn requires even more fuel.
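
The Tsiolkovsky rocket equation makes this compounding explicit: the required ratio of fueled mass to dry mass grows exponentially with the total velocity change, so budgeting for a return trip squares the ratio rather than merely doubling it. A back-of-envelope sketch with illustrative numbers (not actual mission figures):

    import math

    # Tsiolkovsky rocket equation: m_initial / m_final = exp(delta_v / v_exhaust).
    v_exhaust = 4500.0     # m/s, roughly a hydrogen-oxygen engine (illustrative)
    dv_one_way = 10000.0   # m/s, notional one-way delta-v budget (illustrative)

    one_way = math.exp(dv_one_way / v_exhaust)
    round_trip = math.exp(2 * dv_one_way / v_exhaust)  # equals one_way ** 2

    print(f"one-way mass ratio:    {one_way:.1f}")     # about 9
    print(f"round-trip mass ratio: {round_trip:.1f}")  # about 85: exponential growth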

The president of the Mars Society, Robert Zubrin, has offered one possible solution: two ships, sent separately. The first would be sent unmanned and, once there, combine onboard hydrogen with carbon dioxide from the Martian atmosphere to generate the fuel for the return trip; the second would take the astronauts there, and then be left behind. But once arrival is decoupled from return, one should ask whether the return trip is really necessary.

Surely if the point of sending astronauts is to be able to carry out scientific experiments that robots cannot do (something I am highly skeptical of and one of the reasons I don't believe we should use science to attempt to justify human space exploration), then the longer they spend on the planet the more experiments they can do.

Moreover, if the radiation problems cannot be adequately resolved then the longevity of astronauts signing up for a Mars round trip would be severely compromised in any case. As cruel as it may sound, the astronauts would probably best use their remaining time living and working on Mars rather than dying at home.

If it sounds unrealistic to suggest that astronauts would be willing to leave home never to return alive, then consider the results of several informal surveys I and several colleagues have conducted recently. One of my peers in Arizona recently accompanied a group of scientists and engineers from the Jet Propulsion Laboratory on a geological field trip. During the day, he asked how many would be willing to go on a one-way mission into space. Every member of the group raised his hand. The lure of space travel remains intoxicating for a generation brought up on "Star Trek" and "Star Wars."

We might want to restrict the voyage to older astronauts, whose longevity is limited in any case. Here again, I have found a significant fraction of scientists older than 65 who would be willing to live out their remaining years on the red planet or elsewhere. With older scientists, there would be additional health complications, to be sure, but the necessary medical personnel and equipment would still probably be cheaper than designing a return mission.

Delivering food and supplies to these new pioneers — along with the tools to grow and build whatever they need, for however long they live on the red planet — is likewise more reasonable and may be less expensive than designing a ticket home. Certainly, as in the Zubrin proposal, unmanned spacecraft could provide the crucial supply lines.

The largest stumbling block to a consideration of one-way missions is probably political. NASA and Congress are unlikely to do something that could be perceived as signing the death warrants of astronauts.

Nevertheless, human space travel is so expensive and so dangerous that we are going to need novel, even extreme solutions if we really want to expand the range of human civilization beyond our own planet. To boldly go where no one has gone before does not require coming home again.

Lawrence M. Krauss, the director of the Origins Initiative at Arizona State University, is the author of "The Physics of 'Star Trek.'"