I went to DC last week to give a talk for the Smithsonian Associates lecture program, based on my most recent book, Breakfast with Einstein. One of the examples I used for how quantum physics shows up in an ordinary morning was my alarm clock, which ultimately traces its time back to the definition of one second as 9,192,631,770 oscillations of light emitted as a cesium atom moves between two particular energy states (discussed in more detail in this old post).
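To get a feel for that number, here's a quick back-of-the-envelope sketch (the frequency and speed of light are exact by definition; everything else is derived arithmetic):

```python
# Back-of-the-envelope numbers for the cesium definition of the second.
# The 9,192,631,770 Hz figure is exact by definition.

F_CS = 9_192_631_770  # Hz, Cs-133 hyperfine transition frequency (exact)
C = 299_792_458       # m/s, speed of light (exact)

period = 1 / F_CS      # duration of a single oscillation, in seconds
wavelength = C / F_CS  # wavelength of the emitted microwaves, in meters

print(f"One oscillation lasts about {period * 1e12:.2f} ps")
print(f"Wavelength: about {wavelength * 100:.2f} cm")
```

So a cesium clock is counting oscillations that each last about a tenth of a nanosecond, of microwaves with a wavelength of a few centimeters.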
During the question-and-answer period afterwards, somebody asked a really good question: Why cesium? That is, of all the atoms in the periodic table, why pick cesium, specifically, as the basis for our definition of time?
There are a bunch of features of cesium you can point to as attractive for making an atomic clock: it's a heavy atom, so at any given temperature it moves relatively slowly, reducing the Doppler shifts you have to contend with; and the particular pair of states chosen has a bigger energy splitting than the analogous states in any of the other alkali metals, and all else being equal, a bigger energy difference, and thus a higher frequency, gets you better accuracy.
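The "heavy atoms move slowly" point is easy to check with the ideal-gas formula for RMS thermal speed; a rough sketch with room-temperature numbers and isotope masses rounded to the nearest atomic mass unit:

```python
import math

K_B = 1.380649e-23    # J/K, Boltzmann constant
AMU = 1.66053907e-27  # kg, atomic mass unit
C = 299_792_458       # m/s, speed of light

def rms_speed(mass_amu, temp_k):
    """RMS thermal speed of an ideal-gas atom: sqrt(3 k T / m)."""
    return math.sqrt(3 * K_B * temp_k / (mass_amu * AMU))

# Room-temperature comparison across a few alkali metals.
for name, mass in [("Li-7", 7), ("Na-23", 23), ("Cs-133", 133)]:
    v = rms_speed(mass, 300)
    print(f"{name}: v_rms ~ {v:4.0f} m/s, first-order Doppler ~ {v / C:.1e}")
```

Cesium atoms at room temperature lumber along at a couple hundred meters per second, compared to about a kilometer per second for lithium, and the fractional Doppler shift scales down accordingly.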
Those are all basically post-hoc arguments, though. The real reason for the choice of cesium is that it was easy to work with in the 1960s, when the choice was made. It's got some convenient experimental properties—a low melting point that makes it easy to produce an atomic beam, and a single stable isotope, so you don't have to worry about spurious atoms in the beam—but the most important factor is that the transition frequency of 9.19 GHz is in the microwave region of the spectrum. By 1960, there was already a highly developed technological infrastructure for generating and measuring microwave frequencies, thanks to the radar development projects that started in WWII. Cesium is also relatively "nice" in a theoretical sense, in that it's an alkali metal atom with only a single unpaired electron in the outermost shell, giving it a relatively simple energy level structure that made its properties and interactions easy to understand with the theoretical tools of the 1960s.
As technology has improved, it turns out that cesium has some properties that make it a less than ideal choice for a time standard. Collisions between cesium atoms produce a frequency shift that’s bigger than that for other alkalis, which limits cesium clocks to working with relatively low-density samples. In a lot of respects, despite its lower transition frequency, rubidium would be a better choice for building a stable and practical clock, which is why the clocks in the photo above from the US Naval Observatory use rubidium rather than cesium.
And there are still other atoms that would be even better. The title of "best atomic frequency standard in the world" has passed back and forth between clocks based on neutral atoms in an optical lattice and those based on single ions held in a trap. (Here's a news story about a recent advance by the ion clocks, for example, and one from last year about lattice clocks.) These use pairs of states with transition frequencies in the visible or ultraviolet regions of the spectrum, tens of thousands of times higher than the cesium transition frequency.
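For a rough sense of the frequency gap, here's a sketch using approximate wavelengths for two well-known optical clock transitions (the strontium lattice clock near 698 nm and the aluminum-ion clock near 267 nm; exact values differ slightly):

```python
C = 299_792_458       # m/s, speed of light
F_CS = 9_192_631_770  # Hz, cesium hyperfine transition frequency

# Approximate clock-transition wavelengths, in nanometers.
ratios = {}
for name, wl_nm in [("Sr", 698), ("Al+", 267)]:
    f = C / (wl_nm * 1e-9)  # convert wavelength to frequency
    ratios[name] = f / F_CS
    print(f"{name} (~{wl_nm} nm): {f / 1e12:.0f} THz, "
          f"about {ratios[name]:,.0f}x the cesium frequency")
```

Hundreds of terahertz versus about nine gigahertz: a factor of fifty thousand or so for the lattice clocks, and over a hundred thousand for the ultraviolet ion transitions.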
But, of course, those advantages and disadvantages are also contingent on technology. You can’t use optical or ultraviolet frequencies to make a clock unless you have some way to convert those to lower frequencies that can be used to drive electronic clocks, something that was supremely inconvenient before the invention of the optical frequency comb (which won a Nobel Prize for John L. Hall and Theodor Hänsch in 2005). And the cesium collisional shift only becomes a disadvantage when you start dealing with laser-cooled samples of atoms held in atom traps, rather than atomic beams.
This is a specific example of a very general phenomenon, where the specific physics effects and applications we choose to study, and the solutions we find, are often determined in large part by ancillary technologies. This isn't a problem, exactly, just an inevitable fact of life in science: in a universe with a nearly infinite variety of things you might study, it's natural to start with things that are relatively easy and don't require too much ancillary technology development before moving on to more complicated situations. In terms of the old joke about lost keys, you start out looking where the light is, and only after you've exhausted that do you start making new lampposts.
These effects of choices driven by ancillary technology can carry on for a really long time, though, following a phenomenon out of the basic research lab and into practical applications. The fifty-odd years of cesium clocks are one example, and medical technology is another.
Back in April, the APS meeting in Denver included a session on commercialization of physics results, where Ron Walsworth from Harvard gave a talk about various research projects turned products. He mentioned in passing something about a portable MRI scanner being developed by some of his commercial collaborators, which sounded interesting.
"MRI," of course, stands for "(Nuclear) Magnetic Resonance Imaging," with the "N" dropped from the acronym to avoid scaring patients. It's a technique that exploits the fact that atomic nuclei can act like single spins, with distinct "up" and "down" states in a magnetic field. You can manipulate these spins with radio-frequency light, causing them to flip back and forth, and by playing some clever tricks to measure the frequency of the RF absorbed by nuclei at particular positions, you can make a map of where in the body you find the right sort of atoms. This works particularly well with hydrogen, making it a great technique for distinguishing between different types of soft tissue based on their water content, and thus an incredibly useful tool for medicine.
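The RF frequency involved scales linearly with the applied magnetic field; for hydrogen nuclei the standard figure is about 42.6 MHz per tesla. A quick sketch (3 T and 1.5 T are typical clinical fields; the 0.064 T entry is just an illustrative portable-scale value, not the spec of any particular product):

```python
# Proton NMR resonance (Larmor) frequency: f = gamma_bar * B, where
# gamma_bar ~ 42.577 MHz/T is the proton gyromagnetic ratio over 2 pi.

GAMMA_BAR_H = 42.577e6  # Hz per tesla, for hydrogen nuclei

def larmor_mhz(b_tesla):
    """RF frequency (in MHz) needed to flip proton spins in field B."""
    return GAMMA_BAR_H * b_tesla / 1e6

for b in [3.0, 1.5, 0.064]:
    print(f"B = {b:5.3f} T -> drive the spins at {larmor_mhz(b):6.2f} MHz")
```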
A typical commercial MRI unit uses whopping huge magnetic fields (technical term) generated by great big superconducting magnets, which are heavy and expensive and kind of claustrophobic for lots of patients. They also can't be used in the presence of significant amounts of metal; many hospitals won't even run MRI scans on people with extensive tattoos, because of the metal in some inks. The standard explanation for this is rooted in the basic physics: the whopping huge field causes more of the spins to line up with the field, getting you a stronger signal out, and thus better resolution.
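The "more spins line up" part can be made quantitative with the standard high-temperature estimate for proton spin polarization. A sketch at body temperature (the 0.05 T value is just an illustrative low field, not any specific machine):

```python
# Equilibrium proton spin polarization in a field B at temperature T,
# in the high-temperature limit: P ~ gamma * hbar * B / (2 k T).

GAMMA_H = 2.675e8   # rad/s per tesla, proton gyromagnetic ratio
HBAR = 1.0546e-34   # J*s, reduced Planck constant
K_B = 1.380649e-23  # J/K, Boltzmann constant

def polarization(b_tesla, temp_k=310):
    """Fractional excess of spins aligned with the field (body temp default)."""
    return GAMMA_H * HBAR * b_tesla / (2 * K_B * temp_k)

print(f"1.5 T clinical magnet:  P ~ {polarization(1.5):.1e}")
print(f"0.05 T low-field magnet: P ~ {polarization(0.05):.1e}")
```

Even at 1.5 T, only a few spins per million contribute net signal, and dropping to a twentieth-of-a-tesla-scale field cuts that by another factor of thirty, which is why weaker magnets were long assumed to be a non-starter.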
A portable system would necessarily involve smaller fields, which would seem to create problems with getting a signal out, so I asked Walsworth what the trick was, expecting to hear about some cool basic physics process for generating spin alignments that would give you bigger signals with smaller magnets. But that's not what's going on at all, he said: there's really nothing new in the physics, just in the ancillary technology used to generate the images.
As he explained it, the biggest limitation on making MRI images when these systems were first being developed was data processing: interpreting the signals picked up from the machine to generate the maps of different types of tissue. Whopping huge magnetic fields make this a lot more tractable with 1980s computing technology, because with a very large field you can make a lot of simplifying assumptions that leave the signals coming out pretty straightforward to interpret. Going to a lower field violates a lot of those nice simplifying assumptions, so while you can still pick up a signal from your patient, interpreting what it means is much more complicated, and wasn't really feasible with the limited computing resources available to the early machines.
The trick to making a portable scanner, then, is not a revision of the physics of the core NMR process, but a recognition that high-power computing has gotten cheap. Modern computer processors (and signal-processing algorithms) make it possible to get useful information out of the weaker and more complicated signals you get from using lower magnetic fields in the MRI system. I couldn’t find the specific product he mentioned, but this video with Carl Zimmer gives you some idea. Being smarter about the computing side of the imaging process lets you reduce the requirements for the magnetic field, opening the door to smaller, lighter, and most importantly cheaper systems.
So, in both metrology and medicine, we see the ways that ancillary technology constrains what we do in and with physics. Particularly on the experimental side, we’re very much a tool-driven field, and which pieces of our nearly infinite universe we study depends on what we’ve got on hand to study them with.