Progress in particle physics, and the resulting gradual discovery of the fundamental properties of the universe, has been driven by progress in particle accelerators. That progress is measured by the ever higher “resolution” or “magnification” that comes with accelerating particle beams to higher and higher kinetic energies.
The first accelerators in the early 1930s utilized direct voltage to accelerate ions to energies of a few hundred keV (thousand electron-volts), resulting in the first induced nuclear disintegration in 1932. High-voltage sparking limited these first accelerators to less than 1 MeV, and new ideas were needed to push past the 1 MeV barrier. The concept of resonant acceleration provided this impetus in the 1930s through the application of radio frequency (RF) electric fields oscillating in resonance with the particles passing through a series of accelerating gaps. This led from the linear accelerator to the cyclotron, where another seemingly impassable energy barrier was reached at approximately 25 MeV.
Recall that an “electron-volt” is the kinetic energy gained by an electron accelerated through a potential difference of one volt. You can give the electron (or any other charged particle) all its kinetic energy with one large potential difference across a single gap, or you can give it multiple kicks from multiple gaps. This is like pushing a child on a swing: with each push you add more kinetic energy, instead of delivering it all in one big shove.
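The swing analogy can be put into numbers. Here is a minimal sketch (the gap count and voltage are illustrative values, not those of any real machine) showing that many small RF “pushes” deliver the same energy as one enormous “shove,” without any single electrode having to hold off the full voltage:

```python
def kinetic_energy_ev(n_gaps, gap_voltage):
    """Energy in eV gained by a singly charged particle that crosses
    n_gaps accelerating gaps, each timed to apply gap_voltage volts."""
    return n_gaps * gap_voltage

# One big shove: a single 1,000,000-volt gap.
single_shove = kinetic_energy_ev(1, 1_000_000)

# Many small pushes: forty gaps of 25,000 volts each.
many_pushes = kinetic_energy_ev(40, 25_000)

print(single_shove, many_pushes)  # 1000000 1000000 -- both are 1 MeV
```

This is exactly why sparking stopped mattering: no electrode ever sees more than 25 kV, yet the particle still ends up with a full MeV.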
Then came the principle of phase stability, which allowed the invention of the synchrocyclotron and synchrotron, and the energy barrier was pushed up to 2 GeV by the early 1950s. (Think of “phase stability” as pushing on the child’s swing at just the top of the arc, thereby maximizing the energy transferred to the swing.) In the 1950s came alternating-gradient focusing, allowing a dramatic reduction in magnet size in large accelerators, and the barrier moved again, to 400 GeV. Then came the concept of colliding beams in the 1960s, and the energy frontier moved dramatically forward. Superconducting magnets and computer control are the current state of the art. We are limited in the 21st century only by the prohibitive cost of building new accelerators, and the question of where to build them.
Earlier I described Ernest Rutherford’s experiment firing α-particles at a thin sheet of gold foil and discovering, to his amazement, that some of the particles (which are helium nuclei containing two protons and two neutrons) bounced back. This showed that atoms had a very dense, yet tiny central core or nucleus.
A decade later he used α-particles of about 5 million electron volts (MeV) to produce radioactive isotopes and to disintegrate nitrogen nuclei. He then challenged the scientific community to develop devices that would accelerate charged particles to energies greater than those occurring in natural α-decay.
This led to the development of an accelerator by John Cockcroft and Ernest Walton at the Cavendish Laboratory in Cambridge, England, around 1930. They achieved energies in the 500-800 kV range and, in 1932, made new discoveries when they smashed protons into lithium nuclei, splitting them into α-particles.
In 1930, Robert J. Van de Graaff developed the generator named after him. It was a simple affair consisting of large metal globes fed electric charge by moving rubber belts. I’m sure all my readers have seen such sparking globes in old Frankenstein movies. This simple device created a potential difference of about 1.5 MV, but sparking continued to limit greater energy levels.
Modern Van de Graaff generators can produce as much as 15 MV, but the next big advancement came from the use of multiple accelerating gaps, each at a lower voltage, arranged in stages so that a particle’s energy builds with every gap it crosses. These gaps are fed radio frequency alternating voltages, and the result is the linear accelerator, or linac.
A research team led by Ernest Lawrence at the University of California at Berkeley bent this form of acceleration into a circle, creating the first cyclotron, which spun the particles around and around inside a round chamber. That way the particles get a push on each revolution, again like the child on the swing. These devices could accelerate heavy particles such as protons and helium nuclei to several million eV. Many technical problems were solved along the way, and by the 1960s there were over a hundred cyclotrons in laboratories all around the world.
The betatron, a different circular machine that accelerated electrons using a changing magnetic field, was also subject to design improvements. These led to the concept of phase stability and the use of synchronized radio frequency electric fields and ramping magnetic fields to hold particles in a well-defined orbit, producing the synchrotron, with energies in the several-hundred-MeV range. Further advances in magnet design led to the Cosmotron, so named since it could match the energy level of cosmic rays. The next advance was the ability to focus the beam of particles, and energies moved into the GeV range.
Two types of accelerators were being utilized. One used speeding electrons: since electrons are much smaller and less massive than protons, they allowed fine, subtle measurements of atomic structure. The other designs, using protons and atomic nuclei, were much better at “smashing,” but the larger particles could mask subtle effects.
In 1966 Stanford University completed a linear accelerator called SLAC, for Stanford Linear Accelerator Center. It was two miles long, buried 25 feet underground. A number of new particles were discovered at SLAC, leading to several Nobel Prizes. The device has been repeatedly upgraded and improved, but it has been eclipsed by machines with greater power.
I’ve mentioned CERN often in this quantum history. The name CERN is derived from the acronym for the French Conseil Européen pour la Recherche Nucléaire, a provisional body founded in 1952 with the mandate of establishing a world-class fundamental physics research organization in Europe. This research facility is funded by several European nations.
In 1969, CERN started construction on two large, interconnected rings straddling the border between Switzerland and France. When the facility was being built, they dug tunnels large enough to add a more powerful accelerator at some point in the future. This machine, the Intersecting Storage Rings (ISR), was the world’s first proton-proton collider and added much to our knowledge of subatomic structure.
In 1983 the Tevatron was completed at Fermilab in Illinois, pioneering large-scale superconducting magnets and other engineering practices that CERN would follow in its future construction of the Large Hadron Collider. The Tevatron moved energies into the teraelectronvolt (TeV) range.
The 1960s had produced a fascinating theory predicting a new particle, eventually called the Higgs boson. Since that particle would complete the jigsaw puzzle of messenger particles, finishing what is called the “standard model,” there was a major effort to build an even larger collider to explore the energy levels at which the Higgs boson would appear.
The Superconducting Super Collider (SSC) was planned for Texas and construction began in the early 1990s. However, in 1993 Congress cut its budget and left it incomplete. Eventually CERN gained the backing of the important European nations, and the Large Hadron Collider, the world’s most powerful accelerator, was built in the tunnel originally dug for its predecessor, the Large Electron-Positron Collider, and went operational in 2008. After an initial failure that took over a year to repair, it is now functioning, producing collision energies in the range of 7 TeV in its 17-mile ring.
Inside the accelerator, two high-energy particle beams travel at close to the speed of light before they are made to collide. The beams travel in opposite directions in separate beam pipes: two tubes kept at ultrahigh vacuum. Obtaining a collision is actually very difficult. The two beams that meet head-on are like a fog, and actual particle collisions are very rare. CERN compares it to firing two needles at each other from 10 km away and having them hit head-on. It is like two shotgun blasts aimed at each other: some pellets, some of the time, will hit head-on. The collisions do occur because there are enormous numbers of particles in each beam, and they circle the ring in a fraction of a millisecond for another opportunity to collide. So even though a single collision is rare, there are billions and billions of opportunities; impacts occur on a steady basis, and the results of those impacts are studied. At four places on the ring the beams are compressed and aimed at each other.
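The arithmetic behind “billions and billions of opportunities” is easy to sketch. The figures below are rounded numbers from the LHC design documentation (2808 proton bunches per beam, roughly 11,245 ring revolutions per second, and an average of about 25 proton-proton collisions per bunch crossing at design luminosity), used here purely for illustration:

```python
# Rounded LHC design figures (assumptions for illustration only).
bunches_per_beam = 2808          # proton bunches circulating in each beam
revolutions_per_second = 11_245  # trips around the 27 km ring each second
collisions_per_crossing = 25     # average pp collisions per bunch crossing

# At one intersection point, every bunch meets an oncoming bunch
# once per revolution.
crossings_per_second = bunches_per_beam * revolutions_per_second
collisions_per_second = crossings_per_second * collisions_per_crossing

print(f"{crossings_per_second:,} bunch crossings per second")   # ~3.2e7
print(f"{collisions_per_second:,} pp collisions per second")    # ~8e8
```

Tens of millions of crossings and hundreds of millions of collisions every second: rare per proton, yet constant in aggregate.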
Some 1,232 dipole magnets keep the beams on their circular path, while an additional 392 quadrupole magnets are used to keep the beams focused, in order to maximize the chances of interaction at the four intersection points where the two beams cross. Approximately 96 tons of liquid helium is needed to keep the magnets, made of copper-clad niobium-titanium, at their operating temperature of 1.9 K (−271.25 °C), making the LHC the largest cryogenic facility in the world at liquid helium temperature.
Six detectors have been constructed at the LHC, located underground in large caverns excavated at the ring’s intersection points. They are built around the collision points where the particle beams meet head-on, and they are designed to track the motion and measure the energy and charge of the new particles thrown out in all directions from the collisions. The LHC detectors are very large; ATLAS, for example, is the size of a five-story building. Their great size is necessary, first, to trap high-energy particles traveling near the speed of light, and second, to allow the tracks of charged particles to be detectably curved by the detector magnets.
Detectors are typically made up of layers, like an onion, with each layer designed to detect different properties of the particles as they travel through the detector. The layers nearest to the collision point are designed to very precisely track the movement of particles, especially the short-lived particles that are both the most difficult to detect and the most interesting to the researchers.
Subsequent layers track, slow down, and finally stop the longer-lived and more energetic particles. As these particles are slowed they release energy, which is measured by calorimeters in those layers.
Detectors usually include a powerful magnet; its field bends the paths of charged particles produced in collisions, and from the degree of bending researchers can measure the charge and momentum of each particle. From measurements of momentum and energy, mass can be deduced.
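As a rough sketch of how that deduction works: for a particle of unit charge, the bending radius of its track in a magnetic field gives its momentum through the standard rule of thumb p [GeV/c] ≈ 0.3 · B [T] · r [m], and the energies and opening angle of two massless decay products (two photons, say) give the mass of whatever particle produced them. The numbers below are illustrative, not real detector data:

```python
import math

def momentum_gev(b_tesla, radius_m):
    """Momentum (GeV/c) of a unit-charge particle whose track bends
    with radius radius_m in a field of b_tesla: p ~ 0.3 * B * r."""
    return 0.3 * b_tesla * radius_m

def invariant_mass_gev(e1, e2, opening_angle):
    """Invariant mass (GeV/c^2) of two massless decay products with
    energies e1, e2 (GeV) separated by opening_angle (radians)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# A 4 T field bending a track into a 1 m radius implies ~1.2 GeV/c.
print(momentum_gev(4.0, 1.0))

# Two 62.5 GeV photons emitted back to back reconstruct to 125 GeV/c^2,
# which happens to be the mass eventually measured for the Higgs boson.
print(invariant_mass_gev(62.5, 62.5, math.pi))
```

This invariant-mass trick is exactly why detecting a short-lived particle reduces to measuring the fragments it leaves behind.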
Over the last few years, the power of the Large Hadron Collider has been gradually increased, and it will be a few more years before the atom smasher is running at full power. In the meantime, a lot of data is being produced that has to be analyzed and compared. Some of the sought-after events are rare, and some particles only “live” for a very, very short time before they disintegrate into other particles, so the data analysis often amounts to finding the secondary or tertiary products of the original particle. It isn’t just looking for a needle in a haystack, but for the bent piece of straw that the needle left behind.
The data collected in the detectors is sent to researchers all over the world, where, of course, it is processed on powerful computers. To transmit and analyze the detector data with maximum efficiency, the worldwide network provided by the Internet is used. In the early days of CERN operation, before the LHC was built, this problem of data sharing led to a unique solution.
In order to deliver this data to research labs and universities all over the world, in 1989 a researcher at CERN, Tim Berners-Lee, proposed and developed a new hypertext system for the Internet, including the markup language HTML, which he named the “World Wide Web.”
The first Web server was a NeXT Cube, built by the company Steve Jobs founded after he was forced out of Apple in the 1980s. Berners-Lee got the machine in September 1990, and in December of that year the Web was established between just a couple of CERN computers. Berners-Lee also used the NeXT computer to develop and run a multimedia browser and Web editor.
So, that’s the story of particle accelerators, brought to you by this spin-off technology called the WWW. Now, what about the Large Hadron Collider? Did it find the Higgs boson? And why is the Higgs particle so important that the Nobel laureate Leon Lederman, of the Illinois Institute of Technology, wrote about it in a book called “The God Particle: If the Universe Is the Answer, What Is the Question?” Hyperbole for certain, but still a very good question.
Well, keep your eye on the world wide web, because the answer to those questions will be the subject of the next chapter, brought to you via that very same WWW.
This, then, is the process of quantum theory advancement. Sometimes new discoveries are made in the experimental laboratories that lead to new theories to explain the unexpected results. Other times theory is ahead of experiment and predicts results that take years to verify in the laboratory. In 1964 the Higgs boson was predicted, but it took almost 50 years for experimental devices to reach the energy range needed to produce the theoretical particle and verify its actual existence. Next is the story of that particle.