Archive for the ‘Science’ Category

 

NATURE NANOTECHNOLOGY | LETTER

Nature Nanotechnology (2012). doi:10.1038/nnano.2012.34

Received 30 December 2011 | Accepted 16 February 2012 | Published online 25 March 2012

Abstract

 

The observation of interference patterns in double-slit experiments with massive particles is generally regarded as the ultimate demonstration of the quantum nature of these objects. Such matter–wave interference has been observed for electrons [1], neutrons [2], atoms [3,4] and molecules [5–7] and, in contrast to classical physics, quantum interference can be observed when single particles arrive at the detector one by one. The build-up of such patterns in experiments with electrons has been described as the “most beautiful experiment in physics” [8–11]. Here, we show how a combination of nanofabrication and nano-imaging allows us to record the full two-dimensional build-up of quantum interference patterns in real time for phthalocyanine molecules and for derivatives of phthalocyanine molecules, which have masses of 514 AMU and 1,298 AMU respectively. A laser-controlled micro-evaporation source was used to produce a beam of molecules with the required intensity and coherence, and the gratings were machined in 10-nm-thick silicon nitride membranes to reduce the effect of van der Waals forces. Wide-field fluorescence microscopy detected the position of each molecule with an accuracy of 10 nm and revealed the build-up of a deterministic ensemble interference pattern from single molecules that arrived stochastically at the detector. In addition to providing this particularly clear demonstration of wave–particle duality, our approach could also be used to study larger molecules and explore the boundary between quantum and classical physics.


Main

When Richard Feynman described the double-slit experiment with electrons as being ‘at the heart of quantum physics’ [12], he was emphasizing how the fundamentally non-classical nature of the superposition principle allows the quantum wavefunction associated with a massive object to be widely delocalized, while the object itself is always observed as a well-localized particle. Recent experiments have focused this discussion by demonstrating the stochastic build-up of interference patterns [11,13], by implementing double-slit diffraction in the time domain [14,15] (including down to the attosecond level [16]), and by identifying a single molecule as the smallest double-slit for electron interference [17,18] that enables fundamental studies of decoherence [19]. The extension of this work [20] to large molecules requires a sufficiently intense and coherent beam of slow and neutral molecules, a nanoscale diffraction grating, and a detector that offers a spatial accuracy of a few nanometres and a molecule-specific detection efficiency of close to 100%. We achieve that in this work with a combination of micro-evaporation, nanofabrication and nano-imaging.

Our experimental set-up comprises three sections: beam preparation, coherent manipulation and detection (Fig. 1). The molecules need to be prepared such that each one interferes with itself, and all lead to similar interference patterns on the screen. Because the transverse and longitudinal coherence functions are determined by the Fourier transforms of the source's spatial extension and velocity distribution [21], we require good collimation and velocity selection. Under ‘far-field’ conditions we can approximate the molecular wavefunctions as plane waves, and the angle θn of the nth order diffraction peak is given by the equation sin θn = nΛ/d, where Λ = h/mv is the de Broglie wavelength, h is Planck’s constant, m is the particle mass, v is the velocity, and d is the period of the diffraction grating. Massive particles therefore need to be slow to achieve sizable diffraction angles. Although deceleration techniques have been advanced for molecules even as complex as benzonitrile [22], effusive beams (Fig. 1b) are still well suited for preparing slow beams of particles a hundred times more massive than that [23,24].
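As a quick plausibility check on these numbers, the sketch below evaluates the quoted formula for PcH2, using only values given in the text and figure captions (m = 514 AMU, v ≈ 150 m s−1, d = 100 nm, grating-to-screen distance L2 ≈ 0.564 m). It is an illustrative back-of-the-envelope calculation, not the authors' analysis code.

```python
# Back-of-the-envelope far-field diffraction estimate for PcH2
# (small-angle approximation; all inputs taken from the text).
h = 6.62607015e-34      # Planck constant, J s
amu = 1.66053907e-27    # atomic mass unit, kg

m = 514 * amu           # PcH2 mass, kg
v = 150.0               # typical longitudinal velocity, m/s
d = 100e-9              # grating period, m
L2 = 0.564              # grating-to-screen distance (Fig. 3 caption), m

lam = h / (m * v)       # de Broglie wavelength, Lambda = h / (m v)
theta1 = lam / d        # first diffraction order, sin(theta) ~ theta
spacing = theta1 * L2   # fringe separation on the detection window

print(f"de Broglie wavelength: {lam * 1e12:.1f} pm")    # ~5.2 pm
print(f"first-order angle:     {theta1 * 1e6:.0f} urad")
print(f"fringe spacing on W2:  {spacing * 1e6:.0f} um")
```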

Figure 1: Set-up for laser-evaporation, diffraction and nano-imaging of complex molecules.

a, Thermolabile molecules are ejected by laser micro-evaporation. A blue diode laser (445 nm, 50 mW) is focused onto window W1 to evaporate the molecules coated on its inner surface. A CMOS camera and a quartz balance (QB) monitor the evaporation area and the molecular flux. b, Stable molecules can be evaporated in a Knudsen cell. The collimation slit S defines the beam coherence. The molecular beam divergence is further narrowed by the width of the diffraction grating G. c, Electron micrograph showing that the grating is nanomachined into a 10-nm-thin SiNx membrane with a period of d = 100 nm. The vacuum system is evacuated to 1 × 10−8 mbar. Molecules on quartz window W2 are excited by a red diode laser (661 nm). High-resolution optics collects, filters and images the light onto an EMCCD camera. d,e, The molecules used in this study: phthalocyanine PcH2 (C32H18N8, mass m = 514 AMU, number of atoms N = 58; d) and its derivative F24PcH2 (C48H26F24N8O8, m = 1,298 AMU, N = 114; e). The mass, number of atoms and internal complexity of F24PcH2 are approximately twice those of PcH2.

For thermolabile organic molecules, which may decompose when heated to their evaporation temperature, we use a laser micro-source (Fig. 1a), which reduces the heat load to a minimum. A blue diode laser is focused onto a thin layer of molecules deposited on the inside of the entrance vacuum window W1, which can be moved by a motorized translation stage. Although high temperatures can be reached locally, this affects only the particles within the focal area. In comparison to a Knudsen cell, the heat load on the sample is therefore reduced by two to three orders of magnitude (to a few tens of milliwatts). Spectral coherence is achieved by sorting the arriving molecules according to their longitudinal velocity and their respective free-fall height in the Earth’s gravitational field [25].

The collimation slit S defines the spatial coherence of the molecular beam. The slit and the grating width further downstream narrow the beam divergence to less than the diffraction angle. The grating is machined into a thin SiNx membrane and has a period of d = 100 nm. To minimize the dispersive van der Waals interaction between the molecules and the grating wall we reduce the grating thickness from 160 nm (as in earlier diffraction experiments [5,20]) to as little as 10 nm in our present set-up. This is important for the manipulation of complex molecules, which may exhibit high polarizabilities, permanent and even thermally induced electric dipole moments [26,27]. Each individually diffracted molecule finally arrives at the 170-µm-thin quartz plate W2, which seals the detector vacuum chamber against ambient air. The gradual emergence of the quantum interference pattern is then observed by means of wide-field fluorescence microscopy of W2.

Imaging of single molecules in the condensed phase began about two decades ago [28], and various methods for subwavelength optical imaging have been developed since [29]. Here, we make use of a scheme that is similar to single-molecule high-resolution imaging with photo-bleaching (SHRIMP) [30]. Even though the point-spread function of an optical emitter is bound by Abbe's diffraction limit, it is still possible to determine its barycentre with nanometre accuracy, provided the signal-to-noise ratio is high enough and the point-spread functions of neighbouring molecules do not overlap.

The phthalocyanine molecule PcH2 (Fig. 1d) and its derivative F24PcH2 (Fig. 1e) were selected because they are stable molecules and efficient dyes, even in vacuum. The molecular sample on W2 was illuminated at a shallow angle so that the excitation laser did not enter the imaging optics. Fluorescence was collected by a microscope objective, filtered, and imaged onto the single-photon-sensitive electron-multiplying charge-coupled device (EMCCD) camera. (See Supplementary Table S3 for full details of the imaging optics.)

Figure 2 shows a typical fluorescence image of surface-deposited phthalocyanine molecules. We detect ~1 × 10⁵ fluorescence photons per molecule before abrupt bleaching or desorption is observed from one frame to the next, in support of the claim that we monitor single molecules and not aggregates. By fitting a two-dimensional Gaussian to each molecular image we can determine its position with an accuracy of 10 nm. This would even fulfil the detector requirements of matter–wave near-field interferometry [31].
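The localization step is conceptually simple: each molecule's diffraction-limited spot is fitted with a two-dimensional Gaussian, and the fitted centre is taken as the molecule's position. Below is a minimal sketch of such a fit (Python with NumPy/SciPy); the pixel size, spot width and photon numbers are illustrative assumptions, not the parameters of the actual imaging system.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sub-pixel localization of a single-molecule spot by fitting
# a 2D Gaussian to its point-spread function (all values assumed).
rng = np.random.default_rng(0)
pixel_nm = 100.0                       # assumed effective pixel size, nm
x = y = np.arange(15)
X, Y = np.meshgrid(x, y)

def gauss2d(coords, amp, x0, y0, sigma, offset):
    X, Y = coords
    return (amp * np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2))
            + offset).ravel()

# Simulated spot: ~1e4 detected photons, sigma ~ 1.3 px, plus background.
true_x0, true_y0 = 7.3, 6.8
model = gauss2d((X, Y), 800, true_x0, true_y0, 1.3, 5).reshape(X.shape)
image = rng.poisson(model)

p0 = (image.max(), 7, 7, 1.5, np.median(image))
popt, pcov = curve_fit(gauss2d, (X, Y), image.ravel(), p0=p0)
err_nm = np.hypot(popt[1] - true_x0, popt[2] - true_y0) * pixel_nm
print(f"fitted centre: ({popt[1]:.2f}, {popt[2]:.2f}) px, error ~{err_nm:.0f} nm")
```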

Figure 2: Single-molecule imaging of PcH2 with subwavelength accuracy.

Plots show photon numbers at various positions at six different time points (starting at the back), as extracted from the frames of a movie recorded with an EMCCD camera. Two molecules are localized on the quartz surface next to one another. After frame IV, molecule 2 either bleaches or desorbs again. Our experiments indicate that bleaching typically occurs after the detection of ~1 × 10⁵ photons. We detect each molecule with a signal-to-noise ratio of ~20, which enables us to determine the barycentre of its point-spread function with an accuracy of ~10 nm. Most molecules remain immobilized on the nanoscale and the interference pattern persists even over days.

The high detection efficiency exceeds that of electron-impact quadrupole mass spectrometry by more than a factor of 10⁴. This large gain allows us, for the first time, to optically visualize the real-time build-up of a two-dimensional quantum interference pattern from stochastically arriving single molecules, as shown in Fig. 3. This series was recorded with an effusive source (Fig. 1b) heated to 750 K. A typical velocity of 150 m s−1 then corresponds to a de Broglie wavelength of ΛdB = 5.2 pm. The actual velocity distribution is reconstructed from the molecular height distribution on the detection screen and turns out to be slightly faster and narrower than thermal. The pictures represent a balance of the continuous accumulation of molecules and intermittent bleaching by the imaging laser (3 s per frame for Fig. 3a–d). Figure 3 shows the influence of the van der Waals force quite clearly. The high fringe visibility up to the fourth interference order can only be explained by an effective slit narrowing [32] by a factor of about two due to the molecule–wall interaction, even for gratings as thin as 100 atomic monolayers. The relative importance of the molecule–wall interaction is discussed in the Methods and Supplementary Fig. S2.
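The height-to-velocity reconstruction rests on a simple idea: during its flight a molecule falls under gravity by roughly h ≈ gL²/(2v²), so slower molecules land lower on the screen. The sketch below evaluates this relation for the approximate flight path of Fig. 3 (L1 + L2 ≈ 1.27 m); it neglects the source height and collimator constraints that the full reconstruction described in the text takes into account, so it is only a rough guide.

```python
import math

# Crude gravitational velocity selection estimate: a molecule flying a
# horizontal distance L at speed v falls by h = g * L**2 / (2 * v**2).
# This ignores the source height and collimator geometry used in the
# actual reconstruction, so the numbers are indicative only.
g = 9.81                       # m/s^2

def fall_height(v, L):
    """Free-fall drop (m) after a horizontal flight of length L (m) at speed v (m/s)."""
    return g * L**2 / (2 * v**2)

def velocity_from_height(h, L):
    """Invert the relation: forward velocity (m/s) for an observed drop h (m)."""
    return L * math.sqrt(g / (2 * h))

L = 0.702 + 0.564              # approximate flight path for Fig. 3, m
for v in (100, 150, 200):
    print(f"v = {v:3d} m/s  ->  drop ~ {fall_height(v, L) * 1e6:.0f} um")
print(f"a 350 um drop corresponds to v ~ {velocity_from_height(350e-6, L):.0f} m/s")
```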

Figure 3: Build-up of quantum interference.

a–e, Selected frames from a false-colour movie recorded with an EMCCD camera showing the build-up of the quantum interference pattern for PcH2 molecules. Images were recorded before deposition of the molecules (a) and 2 min (b), 20 min (c), 40 min (d) and 90 min (e) after deposition. Scale bars, 20 µm (a–e). The colour bar ranges from −5 to 120 photons in a–d and from −20 to 650 photons in e. a–d are taken from Supplementary Movie 1, and the wide-field view in e is taken with the same objective as Supplementary Movie 2. The movie frame rate was 0.1 Hz for the first 20 min (a–c). Thereafter it was reduced to 0.05 Hz to allow for another dynamic equilibrium of bleaching and the arrival of fresh molecules. Collimation slit S (Fig. 1) was cut into a 50-nm-thick SiN membrane and had dimensions of 1 µm (width) and 100 µm (height). Diffraction grating G was cut into a 10-nm-thick SiN membrane (width, 5 µm; height, 100 µm), with period d = 100 nm. Width of the individual slits: s = 50 nm. L1 = 702 mm, L2 = 564 mm. The arrow pointing downwards indicates the direction of the gravitational acceleration g.

The high sensitivity of fluorescence detection now also allows us to extend far-field diffraction to more complex molecules. In Fig. 4 we compare specifically the interference pattern of the fluoroalkylated phthalocyanine F24PcH2 with that of PcH2, both starting from the new laser micro-evaporation source (Fig. 1a), which allows us to record the interference patterns with a material consumption 100 times smaller than when using a Knudsen cell.

Figure 4: Comparison of interference patterns for PcH2 and F24PcH2.

a,b, False-colour fluorescence images of the quantum interference patterns of PcH2 (a) and F24PcH2 (b). We can deduce both the mass and the velocity of the molecules from these images, because diffraction spreads out the molecular beam in the horizontal direction, and the effects of gravity mean that the height h on the screen (left axes) depends on the velocity v of the molecule (right axes). The colour bar ranges from −20 to 400 photons in a, and from −20 to 600 photons in b. The true fluorescence of both molecules starts at wavelengths above 700 nm. c,d, One-dimensional diffraction curves obtained by integrating the fluorescence images in a and b between h = −160 µm and h = −240 µm (dashed yellow lines in a and b), which corresponds to a velocity spread of Δv/v = 0.27. All imaging settings are specified in Supplementary Table S1. Collimation slit S (Fig. 1) was 3 µm wide (defined by a pair of steel razor blades with 300 nm edge width). Diffraction grating G was cut into a 10-nm-thick SiN membrane and had dimensions of 3 µm (width) and 100 µm (height), with period d = 100 nm (width of individual slits s = 75 nm). L1 = 566 mm, L2 = 564 mm.

To account for the higher polarizability of F24PcH2, this experiment was performed with wider slits (75 nm) than those used for Fig. 3 (s = 50 nm). Again, we see clear quantum interference. We retrieve one-dimensional projections from the two-dimensional diffraction patterns by vertically integrating over a part of the velocity distribution (Fig. 4c,d). The solid lines in these diagrams represent the textbook-like diffraction of plane waves at a grating. They also include an incoherent average over the known source extension as well as over the detected velocity range. We find agreement between the numerical model and our experiment if we fit a van der Waals constant of C3 = 16 meV nm³ for PcH2 and C3 = 98 meV nm³ for F24PcH2 in the interaction with the SiNx walls. Details of the modelling are available in the Methods and the Supplementary Information.

The uncertainty in this fit is estimated to be ~50%. Precision measurements of C3 will become possible in the future with a more accurate determination of the open slit width across the entire grating, better velocity selection, systematic variation of the grating thickness, and a more rigorous theoretical description of the molecule–wall interaction. To obtain the numerical fit of Fig. 4d it was necessary to convolute the calculated interference pattern with a Gaussian with a standard deviation of 3 µm. This smearing may be attributed either to surface diffusion or to a small contribution of fragmented molecules within the molecular beam. Diffusion would in fact be consistent with the design specifications of this molecule: F24PcH2 is fluorinated to reduce its binding to the surroundings and thus facilitate evaporation. Note that, in contrast to Fig. 3, the patterns of Fig. 4 show high contrast only up to the first diffraction order. This is related to the larger grating slit width used in this experiment.

Compared to our previous molecular far-field experiments [6], we have improved the source economy by a factor of 1,000, reduced the grating thickness (and the corresponding van der Waals phase shift) by a factor of 16, and increased the detection efficiency to the level of single molecules. Fluorescence imaging with nanometre accuracy is orders of magnitude more sensitive than the ionization methods used in previous work, and it should be possible to detect many natural and functionalized organic molecules, and also quantum dots, with this method. Scanning tunnelling microscopy has been used for single-molecule interference imaging [33], but our approach offers recording speeds that can be up to 1,000 times faster over an imaging area that is 10⁵ times larger. Although the effects of the van der Waals force are still evident for membranes as thin as 10 nm, it should be possible to reduce or even eliminate these effects in future experiments by using gratings made of double-layer graphene or made of light [34].

The diffraction of single molecules at a grating is an unambiguous demonstration of the wave–particle duality of quantum physics [35,36]. It is only explicable in quantum terms, independent of the absolute value of the interference contrast. In contrast to photons and electrons, which are irretrievably lost in the detection process, fluorescent molecules stay in place to provide clear and tangible evidence of the quantum behaviour of large molecules.

Methods

Molecular synthesis and sample preparation

The three-step synthesis of F24PcH2 comprises the formation of a fluorinated phthalonitrile followed by the assembly of a fluorous zinc phthalocyanine derivative and a final demetallation step [37] (see Supplementary Methods for synthetic protocols and analytical data). A solution (PcH2) or suspension (F24PcH2) of the molecules in acetone was smeared onto the BK7 entrance window to form a thin layer. Inhomogeneities in the sample thickness may occur but do not influence the final diffraction pattern. We observed some liquefaction during the micro-evaporation, which also homogenized the sample.

Fabrication of the nanogratings

The SiNx gratings were produced by focused ion beam (FIB) milling into a 10-nm-thin membrane (from TEMWINDOWS.com). The FIB milling was carried out in the ionLiNE system (from RAITH GmbH) using gallium ions at E = 35 keV and with currents ranging from 1 pA to 7 pA. The gratings have a period of 100 nm and opening widths of 50 nm (Fig. 3) and 75 nm (Fig. 4).

Cleaning the quartz window

In situ plasma cleaning was used to clean the surface of the quartz window. The outside of the window was exposed to air at atmospheric pressure and the inside to nitrogen at 1 mbar. The discharge was driven by an a.c. voltage of 1.5 kV at 10 kHz applied to a 0.5-mm electrode positioned 0.5 mm from the window. The grounded vacuum chamber served as the counter-electrode.

Numerical modelling of the diffraction images

We fit our data using diffraction integrals in the paraxial (eikonal) approximation. In the last step, this involved an incoherent sum over all coherent diffraction patterns associated with molecules starting from different source points, which limits the spatial coherence, and with different velocities, which limits the spectral coherence. To identify the contributing velocities we fitted the molecular distribution on the screen to a Maxwell–Boltzmann velocity distribution and took into account all vertical constraints in our set-up. We could thus assign the forward velocity of a molecule according to its vertical position on the screen. The van der Waals interaction between the polarizable molecule and the dielectric grating wall was taken into account in the phase of the grating transmission function. The phase term ψ = exp(iC3D[1/Δx³ + 1/(s − Δx)³]/ħv) was multiplied by the binary transmission function given by the period and opening fraction of the grating. C3 denotes the van der Waals constant, which we determined from a numerical fit of the expected diffraction curve to the observed interference pattern. The distance Δx of a particular molecule to its nearest grating bar, as well as its longitudinal velocity v and the grating thickness D, determine the effective momentum kick during passage through the grating. We assumed Δx to be constant for each molecule during the transit time through the grating. We neglected all fringe effects, such as attraction outside the grating slit. To illustrate the high significance of molecule–wall interactions even for a grating as thin as 10 nm and a molecular transit time of only 100 ps, we compare our experimental data with different theoretical assumptions in Supplementary Fig. S2.
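A minimal numerical sketch of this kind of transmission-function model is given below (Python/NumPy). It builds the binary grating with the van der Waals phase term quoted above, Fourier-transforms it to obtain the far-field pattern, and sums incoherently over a spread of velocities. The grating parameters come from the text; the number of illuminated slits, the velocity distribution and the sampling are assumptions, and the coarse sampling near the slit walls makes this qualitative only; it is not the authors' fitting code.

```python
import numpy as np

# Sketch of the grating transmission model from the Methods: a binary slit
# function multiplied by the van der Waals phase
#   psi(dx) = exp(i * C3 * D * (1/dx**3 + 1/(s - dx)**3) / (hbar * v)),
# followed by a Fraunhofer (far-field) Fourier transform and an incoherent
# sum over velocities.  Qualitative only; sampling near the walls is coarse.
hbar = 1.054571817e-34          # J s
amu = 1.66053907e-27            # kg
eV = 1.602176634e-19            # J

m = 514 * amu                   # PcH2 mass
d = 100e-9                      # grating period, m
s = 50e-9                       # slit opening, m (Fig. 3 geometry)
D = 10e-9                       # grating thickness, m
C3 = 16e-3 * eV * 1e-27         # 16 meV nm^3 in J m^3

n_slits = 30                    # assumed number of coherently illuminated slits
nx = 2 ** 15
x = (np.arange(nx) - nx / 2) * (n_slits * d / nx)    # transverse coordinate

def transmission(v):
    """Complex grating transmission for longitudinal velocity v (m/s)."""
    dx = np.mod(x, d)                          # position within each period
    open_region = (dx > 0) & (dx < s)          # binary slit function
    t = np.zeros(nx, dtype=complex)
    phase = C3 * D * (1 / dx[open_region] ** 3
                      + 1 / (s - dx[open_region]) ** 3) / (hbar * v)
    t[open_region] = np.exp(1j * phase)
    return t

def far_field(v):
    """Far-field intensity versus diffraction angle for one velocity class."""
    lam = 2 * np.pi * hbar / (m * v)           # de Broglie wavelength
    amp = np.fft.fftshift(np.fft.fft(transmission(v)))
    theta = np.fft.fftshift(np.fft.fftfreq(nx, d=x[1] - x[0])) * lam
    return theta, np.abs(amp) ** 2

# Incoherent sum over an assumed velocity spread around 150 m/s.
theta_grid = np.linspace(-3e-4, 3e-4, 1201)
intensity = np.zeros_like(theta_grid)
for v in np.linspace(120, 180, 13):
    th, inten = far_field(v)
    intensity += np.interp(theta_grid, th, inten)

i_first = np.argmax(np.where(theta_grid > 2e-5, intensity, 0))
print(f"first-order peak near {theta_grid[i_first] * 1e6:.0f} urad")
```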

Author information

Affiliations

  1. Vienna Center of Quantum Science and Technology, Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria

    • Thomas Juffmann,
    • Adriana Milic,
    • Michael Müllneritsch,
    • Peter Asenbaum &
    • Markus Arndt
  2. The Center for Nanoscience and Nanotechnology, Tel Aviv University, 69978 Tel Aviv, Israel

    • Alexander Tsukernik &
    • Ori Cheshnovsky
  3. Department of Chemistry, University of Basel, St. Johannsring 19, 4056 Basel, Switzerland

    • Jens Tüxen &
    • Marcel Mayor
  4. Karlsruhe Institute of Technology, Institute for Nanotechnology, PO Box 3640, 76021 Karlsruhe, Germany

    • Marcel Mayor
  5. School of Chemistry, The Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, 69978 Tel Aviv, Israel

    • Ori Cheshnovsky

Contributions

T.J. and M.A. conceived the experiments. T.J., A.M., M.Mu. and O.C. worked on the set-up of the experiment. T.J. performed the diffraction experiments. J.T. and M.Ma. designed and synthesized the F24PcH2 molecules. A.T. and O.C. fabricated the 10 nm diffraction gratings. P.A. developed the basis for the micro-evaporation source. M.A. and T.J. wrote the paper, with comments by all authors.

Competing financial interests

The authors declare no competing financial interests.



Schumann Resonance

Posted: March 28, 2011 in 2012, Ciber, Science, Universe


The Schumann resonances are a set of peaks in the extremely low frequency (ELF) band of the Earth's radio spectrum.

This is because the space between the Earth's surface and the ionosphere acts as a waveguide. The finite dimensions of the Earth cause this waveguide to behave as a resonant cavity for electromagnetic waves in the ELF band. The cavity is excited naturally by lightning and is also influenced by the power-transmission grids of territories that use alternating current at 60 Hz, since its seventh overtone lies at approximately that frequency.

The lowest-frequency, and at the same time highest-intensity, Schumann resonance lies at approximately 7.83 Hz. Detectable overtones extend up to the kilohertz range.
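For orientation, an idealized lossless spherical-cavity model gives mode frequencies f_n = (c / 2πa)·√(n(n+1)), with a the Earth's radius; these come out somewhat above the measured values, and the observed fundamental sits near 7.83 Hz because the real Earth–ionosphere cavity is lossy. The short sketch below simply evaluates that textbook formula and is purely illustrative.

```python
import math

# Idealized (lossless) Schumann resonance frequencies for a thin spherical
# shell cavity: f_n = c / (2 * pi * a) * sqrt(n * (n + 1)).
# The observed fundamental (~7.83 Hz) is lower because the real
# Earth-ionosphere cavity is lossy and the ionosphere has finite height.
c = 299_792_458.0      # speed of light, m/s
a = 6.371e6            # mean Earth radius, m

for n in range(1, 8):
    f = c / (2 * math.pi * a) * math.sqrt(n * (n + 1))
    print(f"mode n={n}: ideal {f:5.1f} Hz")
```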

The phenomenon is named after Winfried Otto Schumann, who predicted its existence mathematically in 1952,[1] although it was first observed by Nikola Tesla and formed the basis of his scheme for wireless transmission of energy and communications.[2] The first spectral representation of the phenomenon was prepared by Balser and Wagner in 1960.[1]


Pseudoscience

Some internet sites and books[3][4][5] make pseudoscientific claims associating these waves with alpha brain waves and attributing to them a role in biological processes.

Among the errors in these publications are the following:

  • They attribute to the Schumann waves an exact, invariable frequency of 7.8 Hz,[3][4] when in fact it is approximate and variable,[6][1] and the resonances are not even constantly present.[1]
  • They attribute to alpha waves an exact, invariable frequency, also of 7.8 Hz,[3][4] when in fact they vary between 8 and 12 Hz.[7] They are not even common in children,[8] which rules out their being indispensable.
  • They treat alpha waves as synchronizers,[3][4] when in reality they are considered a product of the synchronization of neurons.[9] That is, these pseudoscientific publications invert cause and effect.
  • They contain no citations or references to peer-reviewed scientific articles or to conclusive experiments.
  • They offer no explanation of the supposed mechanism, nor falsifiable experiments demonstrating it, relying instead on the logical fallacy cum hoc ergo propter hoc.

References

  1. "What is the Schumann resonance?" at nasa.gov (in English).
  2. Nikola Tesla, "The Transmission of Electrical Energy Without Wires As A Means Of Furthering World Peace", Electrical World and Engineer, pp. 21–24 (7 January 1905).
  3. "Las Ondas Shumann" (sic) at BibliotecaPleyades.net.
  4. "Las ondas schumann" at radiesteciaargentina.netfirms.com.
  5. Harper, John Jay; Lipton, Bruce H.; Krill, O. H.: Tranceformers: Shamans of the 21st Century.
  6. "Magnetic Activity and Schumann Resonance", a study of magnetic activity and the Schumann resonance by the University of California (in English).
  7. Buela-Casal, Gualberto; Navarro Humanes, José Francisco: Avances en la investigación del sueño y sus trastornos.
  8. López-Navidad, A.; Kulisevsky, J.; Caballero, F.: El donante de órganos y tejidos: Evaluación y manejo.
  9. Universidad de Texas: Revista latinoamericana de psicología, vols. 33–34.


SECRETS OF A MIND-GAMER

Posted: February 16, 2011 in Ciber, Science

http://www.nytimes.com/interactive/2011/02/20/magazine/mind-secrets.html?hp


How I trained my brain and became a world-class memory athlete.

By Joshua Foer

 

Joshua Foer. Photograph: Marco Grob for The New York Times.

Dom DeLuise, the comedian (and five of clubs), was implicated in the following unseemly acts in my mind’s eye: He hocked a fat globule of spittle (nine of clubs) on Albert Einstein’s thick white mane (three of diamonds) and delivered a devastating karate kick (five of spades) to the groin of Pope Benedict XVI (six of diamonds). Michael Jackson (king of hearts) engaged in behavior bizarre even for him. He defecated (two of clubs) on a salmon burger (king of clubs) and captured his flatulence (queen of clubs) in a balloon (six of spades). This tawdry tableau, which I’m not proud to commit to the page, goes a long way toward explaining the unexpected spot in which I found myself in the spring of 2006. Sitting to my left was Ram Kolli, an unshaven 25-year-old business consultant from Richmond, Va., who was also the defending United States memory champion. To my right was the lens of a television camera from a national cable network. Spread out behind me, where I couldn’t see them and they couldn’t disturb me, were about 100 spectators and a pair of TV commentators offering play-by-play analysis. One was a blow-dried mixed martial arts announcer named Kenny Rice, whose gravelly, bedtime voice couldn’t conceal the fact that he seemed bewildered by this jamboree of nerds. The other was the Pelé of U.S. memory sport, a bearded 43-year-old chemical engineer and four-time national champion from Fayetteville, N.C., named Scott Hagwood. In the corner of the room sat the object of my affection: a kitschy, two-tiered trophy of a silver hand with gold nail polish brandishing a royal flush. It was almost as tall as my 2-year-old niece (if lighter than most of her stuffed animals).

The audience was asked not to take any flash photographs and to maintain total silence. Not that Kolli or I could possibly have heard them. Both of us were wearing earplugs. I also had on a pair of industrial-strength earmuffs that looked as if they belonged to an aircraft-carrier deckhand (in the heat of a memory competition, there is no such thing as deaf enough). My eyes were closed. On a table in front of me, lying face down between my hands, were two shuffled decks of playing cards. In a moment, the chief arbiter would click a stopwatch, and I would have five minutes to memorize the order of both decks.

The unlikely story of how I ended up in the finals of the U.S.A. Memory Championship, stock-still and sweating profusely, began a year earlier in the same auditorium, on the 19th floor of the Con Edison building near Union Square in Manhattan. I was there to write a short article about what I imagined would be the Super Bowl of savants.

The scene I stumbled upon, however, was something less than a clash of titans: a bunch of guys (and a few women), varying widely in age and personal grooming habits, poring over pages of random numbers and long lists of words. They referred to themselves as mental athletes, or M.A.’s for short. The best among them could memorize the first and last names of dozens of strangers in just a few minutes, thousands of random digits in under an hour and — to impress those with a more humanistic bent — any poem you handed them.

 

Multimedia: interactive game “Test Your Memory Skills”; PDFs “Memorizing Numbers” and “Memorizing Names”.

I asked Ed Cooke, a competitor from England — he was 24 at the time and was attending the U.S. event to train for that summer’s World Memory Championships — when he first realized he was a savant.

“Oh, I’m not a savant,” he said, chuckling.

“Photographic memory?” I asked.

He chuckled again. “Photographic memory is a detestable myth. Doesn’t exist. In fact, my memory is quite average. All of us here have average memories.”

That seemed hard to square with the fact that he knew huge chunks of “Paradise Lost” by heart. Earlier I watched him recite a list of 252 random digits as effortlessly as if it were his telephone number.

“What you have to understand is that even average memories are remarkably powerful if used properly,” Cooke said. He explained to me that mnemonic competitors saw themselves as “participants in an amateur research program” whose aim is to rescue a long-lost tradition of memory training.


Today we have books, photographs, computers and an entire superstructure of external devices to help us store our memories outside our brains, but it wasn’t so long ago that culture depended on individual memories. A trained memory was not just a handy tool but also a fundamental facet of any worldly mind. It was considered a form of character-building, a way of developing the cardinal virtue of prudence and, by extension, ethics. Only through memorizing, the thinking went, could ideas be incorporated into your psyche and their values absorbed.

Cooke was wearing a suit with a loosened tie, his curly brown hair cut in a shoulder-length mop, and, incongruously, a pair of flip-flops emblazoned with the Union Jack. He was a founding member of a secret society of memorizers called the KL7 and was at that time pursuing a Ph.D. in cognitive science at the University of Paris. He was also working on inventing a new color — “not just a new color, but a whole new way of seeing color.”

Cooke and all the other mental athletes I met kept insisting that anyone could do what they do. It was simply a matter of learning to “think in more memorable ways,” using a set of mnemonic techniques almost all of which were invented in ancient Greece. These techniques existed not to memorize useless information like decks of playing cards but to etch into the brain foundational texts and ideas.

It was an attractive fantasy. If only I could learn to remember like Cooke, I figured, I would be able to commit reams of poetry to heart and really absorb it. I imagined being one of those admirable (if sometimes insufferable) individuals who always has an apposite quotation to drop into conversation. How many worthwhile ideas have gone unthought and connections unmade because of my memory’s shortcomings?

At the time, I didn’t quite believe Cooke’s bold claims about the latent mnemonic potential in all of us. But they seemed worth investigating. Cooke offered to serve as my coach and trainer. Memorizing would become a part of my daily routine. Like flossing. Except that I would actually remember to do it.

In 2003, the journal Nature reported on eight people who finished near the top of the World Memory Championships. The study looked at whether the memorizers’ brains were structurally different from the rest of ours or whether they were just making better use of the memorizing abilities we all possess.

Researchers put the mental athletes and a group of control subjects into f.M.R.I. scanners and asked them to memorize three-digit numbers, black-and-white photographs of people’s faces and magnified images of snowflakes as their brains were being scanned. What they found was surprising: not only did the brains of the mental athletes appear anatomically indistinguishable from those of the control subjects, but on every test of general cognitive ability, the mental athletes’ scores came back well within the normal range. When Cooke told me he was an average guy with an average memory, it wasn’t just modesty speaking.

There was, however, one telling difference between the brains of the mental athletes and those of the control subjects. When the researchers looked at the parts of the brain that were engaged when the subjects memorized, they found that the mental athletes were relying more heavily on regions known to be involved in spatial memory. At first glance, this didn’t seem to make sense. Why would mental athletes be navigating spaces in their minds while trying to learn three-digit numbers?

The answer lies in a discovery supposedly made by the poet Simonides of Ceos in the fifth century B.C. After a tragic banquet-hall collapse, of which he was the sole survivor, Simonides was asked to give an account of who was buried in the debris.


When the poet closed his eyes and reconstructed the crumbled building in his imagination, he had an extraordinary realization: he remembered where each of the guests at the ill-fated dinner had been sitting. Even though he made no conscious effort to memorize the layout of the room, it nonetheless left a durable impression. From that simple observation, Simonides reportedly invented a technique that would form the basis of what came to be known as the art of memory. He realized that if there hadn’t been guests sitting at a banquet table but, say, every great Greek dramatist seated in order of birth — or each of the words of one of his poems or every item he needed to accomplish that day — he would have remembered that instead. He reasoned that just about anything could be imprinted upon our memories, and kept in good order, simply by constructing a building in the imagination and filling it with imagery of what needed to be recalled. This imagined edifice could then be walked through at any time in the future. Such a building would later come to be called a memory palace.

Virtually all the details we have about classical memory training — indeed, nearly all the memory tricks in the competitive mnemonist’s arsenal — can be traced to a short Latin rhetoric textbook called “Rhetorica ad Herennium,” written sometime between 86 and 82 B.C. It is the only comprehensive discussion of the memory techniques attributed to Simonides to have survived into the Middle Ages. The techniques described in this book were widely practiced in the ancient and medieval worlds. Memory training was considered a centerpiece of classical education in the language arts, on par with grammar, logic and rhetoric. Students were taught not just what to remember but how to remember it. In a world with few books, memory was sacrosanct.


Living as we do amid a deluge of printed words — would you believe more than a million new books were published last year? — it’s hard to imagine what it must have been like to read in the age before Gutenberg, when a book was a rare and costly handwritten object that could take a scribe months of labor to produce. Today we write things down precisely so we don’t have to remember them, but through the late Middle Ages, books were thought of not just as replacements for memory but also as aides-mémoire. Even as late as the 14th century, there might be just several dozen copies of any given text in existence, and those copies might well be chained to a desk or a lectern in some library, which, if it contained a hundred other books, would have been considered particularly well stocked. If you were a scholar, you knew that there was a reasonable likelihood you would never see a particular text again, so a high premium was placed on remembering what you read.

To our memory-bound predecessors, the goal of training your memory was not to become a “living book” but rather a “living concordance,” writes the historian Mary Carruthers, a walking index of everything read or learned that was considered worthwhile. And this required building an organizational scheme for accessing that information. When the point of reading is remembering, you approach a text very differently from the way most of us do today. You can’t read as fast as you’re probably reading this article and expect to remember what you’ve read for any considerable length of time. If something is going to be made memorable, it has to be dwelled upon, repeated.

In his essay “First Steps Toward a History of Reading,” Robert Darnton describes a switch from “intensive” to “extensive” reading that occurred as printed books began to proliferate. Until relatively recently, people read “intensively,” Darnton says. “They had only a few books — the Bible, an almanac, a devotional work or two — and they read them over and over again, usually aloud and in groups, so that a narrow range of traditional literature became deeply impressed on their consciousness.” Today we read books “extensively,” often without sustained focus, and with rare exceptions we read each book only once. We value quantity of reading over quality of reading. We have no choice, if we want to keep up with the broader culture. I always find looking up at my shelves, at the books that have drained so many of my waking hours, to be a dispiriting experience. There are books up there that I can’t even remember whether I’ve read or not.

Attention, of course, is a prerequisite to remembering. Part of the reason that techniques like visual imagery and the memory palace work so well is that they enforce a degree of mindfulness that is normally lacking. If you want to use a memory palace for permanent storage, you have to take periodic time-consuming mental strolls through it to keep your images from fading. Mostly, nobody bothers. In fact, mnemonists deliberately empty their palaces after competitions, so they can reuse them again and again.

“Rhetorica ad Herennium” underscores the importance of purposeful attention by making a distinction between natural memory and artificial memory: “The natural memory is that memory which is embedded in our minds, born simultaneously with thought. The artificial memory is that memory which is strengthened by a kind of training and system of discipline.” In other words, natural memory is the hardware you’re born with. Artificial memory is the software you run on it.

The principle underlying most memory techniques is that our brains don’t remember every type of information equally well. Like every other one of our biological faculties, our memories evolved through a process of natural selection in an environment that was quite different from the one we live in today. And much as our taste for sugar and fat may have served us well in a world of scarce nutrition but is maladaptive in a world of ubiquitous fast-food joints, our memories aren’t perfectly suited for our contemporary information age. Our hunter-gatherer ancestors didn’t need to recall phone numbers or word-for-word instructions from their bosses or the Advanced Placement U.S. history curriculum or (because they lived in relatively small, stable groups) the names of dozens of strangers at a cocktail party. What they did need to remember was where to find food and resources and the route home and which plants were edible and which were poisonous. Those are the sorts of vital memory skills that they depended on, which probably helps explain why we are comparatively good at remembering visually and spatially.

In a famous experiment carried out in the 1970s, researchers asked subjects to look at 10,000 images just once and for just five seconds each. (It took five days to perform the test.) Afterward, when they showed the subjects pairs of pictures — one they looked at before and one they hadn’t — they found that people were able to remember more than 80 percent of what they had seen. For all of our griping over the everyday failings of our memories — the misplaced keys, the forgotten name, the factoid stuck on the tip of the tongue — our biggest failing may be that we forget how rarely we forget. The point of the memory techniques described in “Rhetorica ad Herennium” is to take the kinds of memories our brains aren’t that good at holding onto and transform them into the kinds of memories our brains were built for. It advises creating memorable images for your palaces: the funnier, lewder and more bizarre, the better. “When we see in everyday life things that are petty, ordinary and banal, we generally fail to remember them. . . . But if we see or hear something exceptionally base, dishonorable, extraordinary, great, unbelievable or laughable, that we are likely to remember for a long time.”

What distinguishes a great mnemonist, I learned, is the ability to create lavish images on the fly, to paint in the mind a scene so unlike any other it cannot be forgotten. And to do it quickly. Many competitive mnemonists argue that their skills are less a feat of memory than of creativity. For example, one of the most popular techniques used to memorize playing cards involves associating every card with an image of a celebrity performing some sort of a ludicrous — and therefore memorable — action on a mundane object. When it comes time to remember the order of a series of cards, those memorized images are shuffled and recombined to form new and unforgettable scenes in the mind’s eye. Using this technique, Ed Cooke showed me how an entire deck can be quickly transformed into a comically surreal, and unforgettable, memory palace.

But mental athletes don’t merely embrace the practice of the ancients. The sport of competitive memory is driven by an arms race of sorts. Each year someone — usually a competitor who is temporarily underemployed or a student on summer vacation — comes up with a more elaborate technique for remembering more stuff more quickly, forcing the rest of the field to play catch-up. In order to remember digits, for example, Cooke recently invented a code that allows him to convert every number from 0 to 999,999,999 into a unique image that he can then deposit in a memory palace.

Memory palaces don’t have to be palatial — or even actual buildings. They can be routes through a town or signs of the zodiac or even mythical creatures. They can be big or small, indoors or outdoors, real or imaginary, so long as they are intimately familiar. The four-time U.S. memory champion Scott Hagwood uses luxury homes featured in Architectural Digest to store his memories. Dr. Yip Swee Chooi, the effervescent Malaysian memory champ, used his own body parts to help him memorize the entire 57,000-word Oxford English-Chinese dictionary. In the 15th century, an Italian jurist named Peter of Ravenna is said to have used thousands of memory palaces to store quotations on every important subject, classified alphabetically. When he wished to expound on a given topic, he simply reached into the relevant chamber and pulled out the source. “When I left my country to visit as a pilgrim the cities of Italy, I can truly say I carried everything I owned with me,” he wrote.

When I first set out to train my memory, the prospect of learning these elaborate techniques seemed preposterously daunting. One of my first steps was to dive into the scientific literature for help. One name kept popping up: K. Anders Ericsson, a psychology professor at Florida State University and the author of an article titled “Exceptional Memorizers: Made, Not Born.”

Ericsson laid the foundation for what’s known as Skilled Memory Theory, which explains how and why our memories can be improved, within limits. In 1978, he and a fellow psychologist named Bill Chase conducted what became a classic experiment on a Carnegie Mellon undergraduate student, who was immortalized as S.F. in the literature. Chase and Ericsson paid S.F. to spend several hours a week in their lab taking a simple memory test again and again. S.F. sat in a chair and tried to remember as many numbers as possible as they were read off at the rate of one per second. At the outset, he could hold only about seven digits at a time in his head. When the experiment wrapped up — two years and 250 mind-numbing hours later — S.F. had increased his ability to remember numbers by a factor of 10.

When I called Ericsson and told him that I was trying to train my memory, he said he wanted to make me his research subject. We struck a deal. I would give him the records of my training, which might prove useful for his research. In return, he and his graduate students would analyze the data in search of how I might perform better. Ericsson encouraged me to think of enhancing my memory in the same way I would think about improving any other skill, like learning to play an instrument. My first assignment was to begin collecting architecture. Before I could embark on any serious degree of memory training, I first needed a stockpile of palaces at my disposal. I revisited the homes of old friends and took walks through famous museums, and I built entirely new, fantastical structures in my imagination. And then I carved each building up into cubbyholes for my memories.

Cooke kept me on a strict training regimen. Each morning, after drinking coffee but before reading the newspaper or showering or getting dressed, I sat at my desk for 10 to 15 minutes to work through a poem or memorize the names in an old yearbook. Rather than take a magazine or book along with me on the subway, I would whip out a page of random numbers or a deck of playing cards and try to commit it to memory. Strolls around the neighborhood became an excuse to memorize license plates. I began to pay a creepy amount of attention to name tags. I memorized my shopping lists. Whenever someone gave me a phone number, I installed it in a special memory palace. Over the next several months, while I built a veritable metropolis of memory palaces and stocked them with strange and colorful images, Ericsson kept tabs on my development. When I got stuck, I would call him for advice, and he would inevitably send me scurrying for some journal article that he promised would help me understand my shortcomings. At one point, not long after I started training, my memory stopped improving. No matter how much I practiced, I couldn’t memorize playing cards any faster than 1 every 10 seconds. I was stuck in a rut, and I couldn’t figure out why. “My card times have hit a plateau,” I lamented.


“I would recommend you check out the literature on speed typing,” he replied.

When people first learn to use a keyboard, they improve very quickly from sloppy single-finger pecking to careful two-handed typing, until eventually the fingers move effortlessly and the whole process becomes unconscious. At this point, most people’s typing skills stop progressing. They reach a plateau. If you think about it, it’s strange. We’ve always been told that practice makes perfect, and yet many people sit behind a keyboard for hours a day. So why don’t they just keep getting better and better?

In the 1960s, the psychologists Paul Fitts and Michael Posner tried to answer this question by describing the three stages of acquiring a new skill. During the first phase, known as the cognitive phase, we intellectualize the task and discover new strategies to accomplish it more proficiently. During the second, the associative phase, we concentrate less, making fewer major errors, and become more efficient. Finally we reach what Fitts and Posner called the autonomous phase, when we’re as good as we need to be at the task and we basically run on autopilot. Most of the time that’s a good thing. The less we have to focus on the repetitive tasks of everyday life, the more we can concentrate on the stuff that really matters. You can actually see this phase shift take place in f.M.R.I.’s of subjects as they learn new tasks: the parts of the brain involved in conscious reasoning become less active, and other parts of the brain take over. You could call it the O.K. plateau.

Psychologists used to think that O.K. plateaus marked the upper bounds of innate ability. In his 1869 book “Hereditary Genius,” Sir Francis Galton argued that a person could improve at mental and physical activities until he hit a wall, which “he cannot by any education or exertion overpass.” In other words, the best we can do is simply the best we can do. But Ericsson and his colleagues have found over and over again that with the right kind of effort, that’s rarely the case. They believe that Galton’s wall often has much less to do with our innate limits than with what we consider an acceptable level of performance. They’ve found that top achievers typically follow the same general pattern. They develop strategies for keeping out of the autonomous stage by doing three things: focusing on their technique, staying goal-oriented and getting immediate feedback on their performance. Amateur musicians, for example, tend to spend their practice time playing music, whereas pros tend to work through tedious exercises or focus on difficult parts of pieces. Similarly, the best ice skaters spend more of their practice time trying jumps that they land less often, while lesser skaters work more on jumps they’ve already mastered. In other words, regular practice simply isn’t enough.


To improve, we have to be constantly pushing ourselves beyond where we think our limits lie and then pay attention to how and why we fail. That’s what I needed to do if I was going to improve my memory.

With typing, it’s relatively easy to get past the O.K. plateau. Psychologists have discovered that the most efficient method is to force yourself to type 10 to 20 percent faster than your comfort pace and to allow yourself to make mistakes. Only by watching yourself mistype at that faster speed can you figure out the obstacles that are slowing you down and overcome them. Ericsson suggested that I try the same thing with cards. He told me to find a metronome and to try to memorize a card every time it clicked. Once I figured out my limits, he instructed me to set the metronome 10 to 20 percent faster and keep trying at the quicker pace until I stopped making mistakes. Whenever I came across a card that was particularly troublesome, I was supposed to make a note of it and see if I could figure out why it was giving me cognitive hiccups. The technique worked, and within a couple days I was off the O.K. plateau, and my card times began falling again at a steady clip. Before long, I was committing entire decks to memory in just a few minutes.

More than anything, what differentiates top memorizers from the second tier is that they approach memorization like a science. They develop hypotheses about their limitations; they conduct experiments and track data. “It’s like you’re developing a piece of technology or working on a scientific theory,” the three-time world champ Andi Bell once told me. “You have to analyze what you’re doing.”

To have a chance at catapulting myself to the top tier of the competitive memorization circuit, my practice would have to be focused and deliberate. That meant I needed to collect data and analyze it for ways to tweak the images in my memory palaces and make them stickier.

Cooke, who took to referring to me as “son,” “young man” and “Herr Foer,” insisted that if I really wanted to ratchet up my training, I would need an equipment upgrade. All serious mnemonists wear earmuffs. A few of the most intense competitors wear blinders to constrict their field of view and shut out peripheral distractions.


 

Illustrations by David Sparshott

 

“I find them ridiculous, but in your case, they may be a sound investment,” Cooke said on one of our twice-weekly phone check-ins. That afternoon, I went to the hardware store and bought a pair of industrial-grade earmuffs and a pair of plastic laboratory safety goggles. I spray-painted them black and drilled a small eyehole through each lens. Henceforth I would always wear them to practice.

What began as an exercise in participatory journalism became an obsession. True, what I hoped for before I started hadn’t come to pass: these techniques didn’t improve my underlying memory (the “hardware” of “Rhetorica ad Herennium”). I still lost my car keys. And I was hardly a fount of poetry. Even once I was able to squirrel away more than 30 digits a minute in memory palaces, I seldom memorized the phone numbers of people I actually wanted to call. It was easier to punch them into my cellphone. The techniques worked; I just didn’t always use them. Why bother when there’s paper, a computer or a cellphone to remember for you?

Yet, as the next U.S.A. Memory Championship approached, I began to suspect that I might actually have a chance of doing pretty well in it. In every event except the poem and speed numbers (which tests how many random digits you can memorize in five minutes) my best practice scores were approaching the top marks of previous U.S. champions. Cooke told me not to make too much of that fact. “You always do at least 20 percent worse under the lights,” he said, and he warned me about the “lackadaisical character” of my training.

“Lackadaisical” wasn’t the word I would have chosen. Now that I had put the O.K. plateau behind me, my scores were improving on an almost daily basis. The sheets of random numbers that I memorized were piling up in the drawer of my desk. The dog-eared pages of verse I learned by heart were accumulating in my “Norton Anthology of Modern Poetry.” To buoy my spirits, Cooke sent me a quotation from the venerable martial artist Bruce Lee: “There are no limits. There are plateaus, and you must not stay there; you must go beyond them. If it kills you, it kills you.” I copied that thought onto a Post-it note and stuck it on my wall. Then I tore it down and memorized it.

Most national memory contests, held in places like Bangkok, Melbourne and Hamburg, bill themselves as mental decathlons. Ten grueling events test the competitors’ memories, each in a slightly different way. Contestants have to memorize an unpublished poem spanning several pages, pages of random words (record: 280 in 15 minutes), lists of binary digits (record: 4,140 in 30 minutes), shuffled decks of playing cards, a list of historical dates and the names and faces of as many strangers as possible. Some disciplines, called speed events, test how much the contestants can memorize in five minutes (record: 480 digits). Two marathon disciplines test how many decks of cards and random digits they can memorize in an hour (records: 2,080 digits and 28 decks). In the most exciting event of the contest, speed cards, competitors race to commit a single pack of playing cards to memory as fast as possible.

When I showed up at the following year’s U.S.A. Memory Championship, I brought along my black spray-painted memory goggles for speed cards. Until the moment a freshly shuffled deck was placed on the desk in front of me, I was still weighing whether to put them on. I hadn’t practiced without my goggles in weeks, and the Con Edison auditorium was full of distractions. But there were also three television cameras circulating in the room. As one of them zoomed in for a close-up of my face, I thought of all the people I knew who might end up watching the broadcast: high-school classmates I hadn’t seen in years, friends who had no idea about my new memory obsession, my girlfriend’s parents. What would they think if they turned on their televisions and saw me wearing huge black safety goggles and earmuffs, thumbing through a deck of playing cards? In the end, my fear of public embarrassment trumped my competitive instincts.

From the front of the room, the chief arbiter, a former Army drill sergeant, shouted, “Go!” A judge sitting opposite me clicked her stopwatch, and I began peeling through the pack as fast as I could, flicking three cards at a time off the top of the deck and into my right hand. I was storing the images in the memory palace I knew better than any other, one based on the house in Washington in which I grew up. Inside the front door, the Incredible Hulk rode a stationary bike while a pair of oversize, loopy earrings weighed down his earlobes (three of clubs, seven of diamonds, jack of spades). Next to the mirror at the bottom of the stairs, Terry Bradshaw balanced on a wheelchair (seven of hearts, nine of diamonds, eight of hearts), and just behind him, a midget jockey in a sombrero parachuted from an airplane with an umbrella (seven of spades, eight of diamonds, four of clubs). I saw Jerry Seinfeld sprawled out bleeding on the hood of a Lamborghini in the hallway (five of hearts, ace of diamonds, jack of hearts), and at the foot of my parents’ bedroom door, I saw myself moonwalking with Einstein (four of spades, king of hearts, three of diamonds).

The art of speed cards lies in finding the perfect balance between moving quickly and forming detailed images. You want a large enough glimpse of your images to be able to reconstruct them later, without wasting precious time conjuring any more color than necessary. When I put my palms back down on the table to stop the clock, I knew that I’d hit a sweet spot in that balance. But I didn’t yet know how sweet.
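For readers curious about the mechanics, here is a minimal sketch, in Python, of the kind of triplet-to-image encoding described above: each group of three cards is collapsed into one composite scene and parked at the next stop along a rehearsed route. The lookup tables and loci below are invented purely for illustration; a competitor's real system covers all 52 cards in each role and is entirely personal.

# Illustrative sketch of a triplet-to-image ("person-action-object") encoding.
# The mappings here are made up; they are not the author's actual system.

PERSON = {"AS": "an astronaut", "KD": "a drummer", "3C": "a clown"}
ACTION = {"2H": "juggling", "7D": "moonwalking", "9S": "riding a unicycle"}
OBJECT = {"JS": "a grand piano", "4C": "an umbrella", "QH": "a birthday cake"}

# Stops along a well-rehearsed route, e.g. a childhood home.
LOCI = ["inside the front door", "at the bottom of the stairs", "in the hallway"]

def encode_deck(cards, loci):
    """Collapse each consecutive triplet of cards into one composite image
    and bind it to the next locus along the route."""
    usable = len(cards) - len(cards) % 3          # drop any incomplete triplet
    triplets = [cards[i:i + 3] for i in range(0, usable, 3)]
    scenes = []
    for locus, (p, a, o) in zip(loci, triplets):
        scene = f"{PERSON.get(p, p)} {ACTION.get(a, a)} with {OBJECT.get(o, o)}"
        scenes.append((locus, scene))
    return scenes

if __name__ == "__main__":
    hand = ["AS", "2H", "JS", "KD", "7D", "4C", "3C", "9S", "QH"]
    for locus, scene in encode_deck(hand, LOCI):
        print(f"{locus}: {scene}")

Running the sketch prints one vivid scene per locus (for example, "inside the front door: an astronaut juggling with a grand piano"), which is the balance the passage describes: just enough detail to reconstruct the cards later, and no more.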

The judge, who was sitting opposite me, flashed me the time on her stopwatch: 1 minute 40 seconds. I immediately recognized not only that it was better than anything I’d ever done in practice but also that it would shatter the United States record of 1 minute 55 seconds. I closed my eyes, put my head down on the table, whispered an expletive to myself and took a second to dwell on the fact that I had possibly just done something — however geeky, however trivial — better than it had ever been done by anyone in the entire country.

(By the standards of the international memory circuit, where 21.9 seconds is the best time, my 1:40 would have been considered middling — the equivalent of a 5-minute mile for the best Germans, British and Chinese.)

As word of my time traveled across the room, cameras and spectators began to assemble around my desk. The judge pulled out a second unshuffled deck of playing cards and pushed them across the table. My task now was to rearrange that pack to match the one I had just memorized.

I fanned the cards out, took a deep breath and walked through my palace one more time. I could see all the images perched exactly where I left them, except for one. It should have been in the shower, dripping wet, but all I could spy were blank beige tiles.

“I can’t see it,” I whispered to myself frantically. “I can’t see it.” I ran through every single one of my images as fast as I could. Had I forgotten the fop wearing an ascot? Pamela Anderson’s rack? The Lucky Charms leprechaun? An army of turbaned Sikhs? No, no, no, no.

I began sliding the cards around the table with my index finger. Near the top of the desk, I put the Hulk on his bike. Next to that, I placed Terry Bradshaw with his wheelchair. As the clock ran down on my five minutes of recall time, I was left with three cards. They were the three that had disappeared from the shower: the king of diamonds, four of hearts and seven of clubs. Bill Clinton copulating with a basketball. How could I have possibly missed it?

I quickly neatened up the stack of cards into a square pile, shoved them back across the table to the judge and removed my earmuffs and earplugs.

One of the television cameras circled around for a better angle. The judge began flipping the cards over one by one, while, for dramatic effect, I did the same with the deck I’d memorized.

Two of hearts, two of hearts. Two of diamonds, two of diamonds. Three of hearts, three of hearts. Card by card, each one matched. When we got to the end of the decks, I threw the last card down on the table and pumped my fist. I was the new U.S. record holder in speed cards. A 12-year-old boy stepped forward, handed me a pen and asked for my autograph.

Joshua Foer is the author of “Moonwalking With Einstein: The Art and Science of Remembering Everything,” from which this article is adapted, to be published by Penguin Press next month.

 

When Science Goes Psychic

Posted: January 9, 2011 in Science

Introduction


A respected psychology journal has agreed to publish a paper presenting what its author describes as strong evidence for extrasensory perception, the ability to sense future events. Though the paper was peer reviewed, there are many critics who say the research is nonsense. A group of psychologists published a rebuttal paper in the same journal.

How does the peer review process ensure good quality research? Are there factors that the standard process cannot take account of? Or is ESP simply a claim that should not be entertained as a subject of scientific inquiry?

A study that embraces ESP raises questions about what scholarly journals should publish.

No Sacred Mantle

Updated January 7, 2011, 06:32 PM

Lawrence M. Krauss, the Foundations Professor in the physics department at Arizona State University, is the author of “Quantum Man,” which is being published in March. He is also director of the Origins Initiative at A.S.U.

Part of the problem here is the assumption that when research is published via the peer review process, it is therefore correct. This is a fallacy. Lots of garbage ends up in peer-reviewed journals. All that successfully getting published means is that you have survived some sort of peer review. This is, by necessity, somewhat random, highly variable and arbitrary.


The quality of the peer reviewers depends upon the knowledge of the editor, and who is available. Since statistics is clearly an essential part of this work, it is of some concern that none of the reviewers were statisticians, but once again that may reflect the quality and character of the journal, and the group of reviewers to whom the editor has access.

But “publication” is not some sacred mantle, and the public should know that. Scientists already do. When I scan the scientific literature I find lots of results that I am reasonably sure are garbage and ignore them. The public should be skeptical of all such results, as should scientists, and most of us are trained to be skeptical in this way.

Here is the way I often categorize the scientific process. One has an idea. One then does research, which is supposed to test the idea, i.e., to try to push it forward while also trying to prove it wrong. One then submits for peer review, and inevitably one finds oneself dealing with peer reviewers who misunderstand the paper or who haven’t thought about it carefully. And then sometimes one can convince them to accept it for publication.

But big deal! The proof of the pudding is not publication, but rather if the idea catches the interest of others, who then do more research to test it and push it forward. In this way, the good research survives, and the bad research gets happily buried in the dustbin of history, which is what I expect will happen in this case.

http://www.nytimes.com/roomfordebate/2011/01/06/the-esp-study-when-science-goes-psychic/publication-is-not-a-sacred-mantle

Topics: Science, psychology

New State of Matter Seen in Clay

Posted: January 9, 2011 in Science

by Edwin Cartlidge on 21 December 2010, 5:59 PM


Feat of clay. When clay is suspended in water and makes up between 1% and 2% of the suspension by weight, it forms a very stable “equilibrium gel” that could be useful in medicine and nanotechnology.
Credit: B. Ruzicka et al., Nature Materials, Advance Online Publication (2010)

Researchers have observed a new kind of extremely light and stable gel in a suspension of clay at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. The so-called equilibrium gel, predicted 4 years ago by physicists, could lead to improved drug-delivery systems and other novel microscopic devices.

A gel is a liquid that is rendered solid by a more or less rigid but disordered network of microscopic particles dispersed throughout its volume. These jellylike materials are extremely common and are used in everything from foods and pharmaceuticals to paints and cosmetics. However, many gels are made by “phase separating” a liquid suspension, which means cooling the liquid down until it splits into two distinct components, the more dense of which is the gel. Unfortunately, this is an unstable process that makes it difficult to control certain properties of the gel, including its density.

In the latest research, carried out over 7 years, physicist Barbara Ruzicka of the University of Rome “La Sapienza” and colleagues have shown how an existing material—the synthetic clay Laponite, which is used as a thickener in many household products—can form a stable gel. The researchers suspended Laponite in water and used the powerful x-ray beams of ESRF to study how the structure of the suspension changes over time and how this evolution depends on the amount of clay present.

At concentrations of up to 1% Laponite by weight, the initial fluid transformed into a gel after a few months, the researchers found. Then about 3 years later, it separated into two phases: one clay-rich and the other clay-poor. However, no such phase separation occurred at concentrations above 1%. Unlike at the lower concentrations, at which the arrangement of the clay particles was continually in flux, at concentrations above 1% the structure eventually stopped changing, indicating that the particles had locked into a stable structure: the equilibrium gel.

According to Ruzicka and co-workers, the clay particles reach an equilibrium because of the way they interact with one another. Typical particles dispersed in a liquid have charges distributed symmetrically across their surfaces and will interact with all of their nearest neighbors when they form a gel. The relatively high density of particles needed to do this does not generally exist in the liquid state, but it can be reached if the liquid undergoes phase separation.

Clay particles, in contrast, are disc-shaped and have an asymmetric charge distribution—a net negative charge on their faces and a net positive charge along their edges. So they do not interact with all of their nearest neighbors, allowing them to lock together at lower densities. As such, say the researchers, the material will be able to form a gel without the help of a phase transition. Ruzicka explains that the suspension will change reversibly and continuously from the liquid state into the gel state, a process confirmed by computer simulations developed by the group.
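As a purely illustrative aside (and not the group's actual simulation code), the sign structure of that interaction can be captured in a few lines of Python: patches of like charge repel, while a negatively charged face meeting a positively charged rim attracts, which is what lets the discs bond edge-to-face into an open, low-density network.

# Toy sketch of the anisotropic charge pattern described above; the values
# and units are arbitrary, and this is not the model used by Ruzicka et al.
FACE = -1.0   # net negative charge on a disc's flat faces
RIM = +1.0    # net positive charge along a disc's edge

def contact_energy(patch_a, patch_b, strength=1.0):
    """Coulomb-like sign rule: like charges give positive (repulsive) energy,
    opposite charges give negative (attractive) energy."""
    return strength * patch_a * patch_b

for name_a, a in (("face", FACE), ("rim", RIM)):
    for name_b, b in (("face", FACE), ("rim", RIM)):
        kind = "repels" if contact_energy(a, b) > 0 else "attracts"
        print(f"{name_a}-{name_b}: {kind}")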

This finding has lots of potential applications, says Ruzicka. One is batteries containing a gel electrolyte, which would produce a relatively high power for a given weight of battery and which could be incorporated into microscopic devices if the gel could be made at a low enough density. Alternatively, equilibrium gels could be used as coatings to deliver drugs into the body. These coatings are needed to protect against the body’s immune system and dissolve when the drug reaches its target, so making the coatings lighter would reduce the amount of material that ultimately ends up in the body.

Tom McLeish, a soft condensed matter physicist at Durham University in the United Kingdom, who was not involved with the research, says that the work is important because it provides an experimental demonstration of a new state of matter. And he agrees that the work could also have “applications aplenty.” He argues that the scope for applications could be enhanced enormously by fabricating equilibrium gels artificially—in other words, by making gels that contain particles with specific charge distributions rather than using preexisting materials, as was the case in the current work.

Source: http://news.sciencemag.org/sciencenow/2010/12/new-state-of-matter-seen-in-clay.html?rss=1