
Probabilities and Particle Collisions

I was impressed when my husband, who participates in one of the experiments at the Large Hadron Collider in Geneva, first recounted the discovery of the top quark at the Tevatron particle accelerator in Batavia, Illinois.  He told me about it shortly after we met, in the summer of 1998.  I had already read some popular books about physics, but none of them had given me a look at what physicists actually mean when they say, with confidence, that they’ve seen the particle they were hunting down.  Seeing the top quark required, among other things, a consensus among hundreds of experimentalists about how to sort out the electronic buzz from a highly sensitive, four-story-tall detector.  The buzz is created by millions of particle collisions per second.  Software and hardware systems have been designed to permanently record the electronic effects of 200 of those collisions per second.  In fact, the sighting of an elusive particle relies on this enormous accumulation of data because, in the end, it is an analysis of various probabilities that determines whether or not something has been observed.  Distributions built from the data can best be compared to expectations when the numbers are high (a few million flips of a fair coin will land heads very nearly 50% of the time).  Then there are the collaborations and disputes over how to analyze the data, how to identify the presence of a particular particle, and how to draw a conclusion from what is known about the probabilities.
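To make that coin-flip remark concrete, here is a minimal sketch in Python (my own illustration, with arbitrary flip counts, not anything from the experiments) of how the observed fraction of heads settles toward 50% as the number of flips grows:

```python
import random

# With a handful of flips the observed fraction of heads wanders widely;
# with a million flips it sits very close to 0.5.
random.seed(0)

for n_flips in (10, 1_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    print(f"{n_flips:>9} flips: fraction of heads = {heads / n_flips:.4f}")
```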

There are probabilities everywhere.  They begin, perhaps, with the fact that in quantum mechanics the very appearance of a particle is probabilistic.  The quantum world is not deterministic.  There is no unique outcome to a given set of initial conditions.  Particle interactions can only happen probabilistically.  But there are also the probabilities that the data will conform to a particular distribution, or that the energy recorded is just noise from the detector itself and not from the collision of particles.  Data plots are the way measurements are communicated.  Simulated collisions (aptly called Monte Carlos) are also created to establish probabilities.
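To give a feel for what establishing a probability with simulated collisions might look like, here is a toy sketch.  It is emphatically not the collaborations' actual analysis, and every number in it is invented for illustration: it simply asks how often background alone would fluctuate up to a count at least as large as the one observed.

```python
import numpy as np

# Toy pseudo-experiments ("Monte Carlos"): assume background processes alone
# are expected to give 100 events in some selection, while 130 are observed.
# How often does background alone look at least that signal-like?
rng = np.random.default_rng(seed=1)

expected_background = 100.0   # assumed mean background count (invented)
observed = 130                # assumed observed count (invented)
n_toys = 1_000_000            # number of simulated background-only experiments

toys = rng.poisson(lam=expected_background, size=n_toys)

# The fraction of toys at or above the observed count is a Monte Carlo
# estimate of the probability that the excess is just a fluctuation.
p_value = (toys >= observed).mean()
print(f"Background alone gives >= {observed} events in {p_value:.5f} of the toys")
```

The smaller that fraction, the harder it is to explain the data as noise, which is the sense in which an observation rests on an analysis of probabilities.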

There can be disagreement over a myriad of considerations: how quickly a particular analysis can produce a publishable result, whether the method itself will introduce errors or whether the errors are mostly statistical, and how reliable the simulations are.
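One piece of that last distinction can be shown in a few lines: a statistical error shrinks as more collisions are recorded, while an error built into the method (or into the simulations) does not.  This is only a schematic sketch, and the 1% bias below is an arbitrary assumption, not a number from any experiment.

```python
import math

relative_bias = 0.01  # hypothetical systematic offset from the method itself

for n_events in (100, 10_000, 1_000_000):
    statistical = 1.0 / math.sqrt(n_events)   # relative Poisson uncertainty
    combined = math.sqrt(statistical**2 + relative_bias**2)
    print(f"{n_events:>9} events: statistical {statistical:.4f}, "
          f"systematic {relative_bias:.4f}, combined {combined:.4f}")
```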

But, after all, we’re trying to get a glimpse of the moments after the Big Bang.  “What we’re trying to do is almost impossible,” my husband once said concisely.  And inside the little bit of space provided by that almost, experimentalists have managed to design an experiment that can somehow hold steady what is known in order to isolate and test what is not known, even when that happens to be the conditions in the early universe.  They manage it with extraordinary control over material, the material of the detector and the material of their computers, partnered with a powerful conceptual development: probability and statistics.

The Tevatron and the Large Hadron Collider (LHC) are now in the race to find the Higgs particle.  This is the big one.  It’s the particle that would account for the mass of truly fundamental particles.  It is the only Standard Model particle that has not been observed, and finding that it isn’t there would mean abandoning the model.  I plan to follow up with more on the Higgs and more on probabilities.
