A candidate Higgs event; see below.
CMS speaks first ("not in alphabetical order", I wonder if they played rock-paper-scissors), but no questions will be allowed until both have spoken.
In prefatory comments, he complains about pile-up; probably the same pile-up phenomenon that plagues AXP astronomers, where multiple particles get mistaken for a single higher-energy particle.
03:10 ET - "We are in a position to exclude the standard model." Is that what he said?
CMS uses a 3.8 T field, somewhat stronger than a typical hospital MRI but spread over a vast volume. Field strength alone never looks very impressive; the stored energy scales with the volume, and he mentions that it is huge. I wonder just how huge?
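To answer my own question with a back-of-envelope: the energy density of a magnetic field is B²/2μ₀, and multiplying by a rough bore volume gives the order of magnitude (the solenoid dimensions below are my own assumption, not numbers from the talk):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
B = 3.8                    # CMS solenoid field, T

# Energy density of a magnetic field: u = B^2 / (2 * mu_0), in J/m^3
u = B**2 / (2 * MU_0)

# Rough bore volume (assumed: ~6 m inner diameter, ~12.5 m length)
volume = math.pi * 3.0**2 * 12.5

energy = u * volume
print(f"energy density: {u / 1e6:.1f} MJ/m^3")
print(f"stored energy:  {energy / 1e9:.1f} GJ")
```

That comes out to a couple of gigajoules, which indeed qualifies as huge for something you have to be able to dump safely if the magnet quenches.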
Dealing with pileup drastically increased the computational burden.
Machine learning seems to have been key in identifying electron events.
As a check, they can verify that all the Standard Model (that is, non-Higgs) reactions happen at the levels predicted by theory for 8 TeV collisions; agreement with theory is excellent.
Looking for Higgs to two-photon decays: All 2012 analysis was blind, developed without ever looking in the "signal region" where Higgs effects are expected. It keeps everyone honest, he says, but also keeps the level of excitement manageable. Getting decent energy resolution requires locating the vertex to within a centimeter, but they can manage this.
"MVA", multivariate analysis (which I take to be machine-learning-based) is used for selection but not statistical analysis.
2011 data (7 TeV) is messy and inconclusive. 2012 data is also fairly messy-looking, but when you combine them, weighting by signal-to-noise, you get a very clear deviation from the Standard Model. Combining both years gives 4.1 sigma, and the two years separately give lower-significance bumps at the same energy, 125 GeV.
Looking for H to Z Zbar: most promising channel, but also the most challenging for selection. Again, analyzed and optimized blindly. They were able to test their detection by looking at low-probability Standard Model events with similar properties. And sure enough, they got a nice big peak consistent with a 126-GeV Higgs. Detections through this channel are about 3.2 sigma, which is about what you'd expect from this much data.
Combining these two channels, the results are consistent and give a combined significance of five sigma. Sustained applause follows. "It's the last month of running that did it!"
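(The real combination is a joint likelihood fit over both channels, but as a sanity check you can naively combine the two per-channel significances as independent Gaussian z-scores, Stouffer's method, and land in the same neighborhood:)

```python
import math

def stouffer(*z):
    """Naively combine independent z-scores: sum them and divide by sqrt(n)."""
    return sum(z) / math.sqrt(len(z))

# Per-channel significances quoted above: 4.1 sigma (two-photon), 3.2 sigma (Z Zbar)
combined = stouffer(4.1, 3.2)
print(f"naive combined significance: {combined:.1f} sigma")
```

This gives about 5.2 sigma, the same ballpark as the announced five; the likelihood fit accounts for correlations and mass-shape information that this crude average ignores.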
The situation gets more complicated when you add more Higgs reaction channels, mostly because you need much more complicated cuts to separate them from other reactions. Preliminary results are consistent with a 126 GeV Higgs, but "these are early days".
All together, they are able to exclude a Higgsless model; with this much data they would have expected about six sigma but they only get about five sigma (slightly less if you include the lesser channels). Best fit is 125.3 plus or minus 0.6 GeV. All the production modes for which they have decent data give consistent mass values. TL;DR: 4.9 sigma detection of the Higgs at 125.3 plus or minus 0.6 GeV.
"A small fraction of the CMS collaboration" fills a room; the full collaboration numbers 3300 scientists.
ATLAS speaks next. ATLAS presents some possible Higgs events, but sadly they don't look as impressive as the press visualizations.
Pileup is the bugbear for ATLAS as well, so they'll talk about only the two "high-resolution" channels, two-photon and Z Zbar.
Non-functioning channels are only "a few permil".
"It is not easy to speak second, because all the clever things have already been said."
The pileup they see is "30 per crossing" (that is, an average of 30 simultaneous proton-proton interactions per bunch crossing), above the experiment's design value of "23 per crossing".
Reconstructed events look like bewildering multicoloured fountains of lines and curves.
She talks about "in-time" and "out-of-time" pileup; not quite sure what she means.
The detectors are calorimeters, which would be a good reason for them to be subject to pileup.
Tracking is the basic way to get rid of pileup: knowing the paths the particles took should help figure out how many there were.
It took serious ingenuity to keep the trigger thresholds low without getting swamped with events. The trigger system contains 500 "items" (criteria?)! This is necessary to keep data on low-energy events. Coming up with pileup-independent trigger criteria is also a challenge.
She also thanks the computer people for the massive infrastructure needed to cope with the data; the most recent data analyzed in this report is less than two weeks old.
Electroweak and top cross-section measurements agree well with Standard Model predictions; this is important not just as a check on the system but because these events must be discriminated from Higgs events.
Expected 10-15% increase in sensitivity this year over last year.
Until today, ATLAS searches exclude everything up to about 440 GeV except for a little region around 125 GeV and a slight lower-energy window where a very mild excess was observed. You expect such an excess somewhere just because of the range of values you're looking at.
What's new today? More data, 8 TeV, minimization of pileup dependence, sensitivity improved using Monte Carlo. Monte Carlo was checked against 2012 data in regions where no signal was expected.
Higgs to two-photon: expect 170 signal events against a background of six thousand; they only manage to keep 40% of the signal events.
(Big question in my mind: do they have any specific events they can say with confidence are Higgs events?)
(Small question in my mind: is permil a standard way to measure things in Italian? She uses it a lot and I'd only ever heard of it in Unicode tables.)
Result of heroic calibration and selection: an excess at around 126 GeV. The p-value for the combined 2011 and 2012 data corresponds to 4.5 sigma, coming from 3.4- and 3.5-sigma results in the two years separately. This is considerably larger than would be expected for a Standard Model Higgs. (The combined significance is somewhat reduced if you include the "look-elsewhere" effect.)
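For reference, the translation between "sigma" and a one-sided p-value, plus a crude look-elsewhere correction, fits in a few lines. (The trials factor of 30 here is my own illustrative guess at the number of independent mass windows searched, not a number from the talk.)

```python
from statistics import NormalDist

_N = NormalDist()

def p_from_sigma(z):
    """One-sided tail probability of a z-sigma excess."""
    return 1.0 - _N.cdf(z)

def sigma_from_p(p):
    """Inverse: significance in sigma for a one-sided p-value."""
    return _N.inv_cdf(1.0 - p)

p_local = p_from_sigma(4.5)
print(f"local p-value for 4.5 sigma: {p_local:.1e}")

# Crude look-elsewhere correction: multiply the local p-value by a
# trials factor (assumed number of independent mass windows searched)
trials = 30
p_global = min(1.0, trials * p_local)
print(f"global significance with that guess: {sigma_from_p(p_global):.1f} sigma")
```

So a local 4.5 sigma corresponds to a few-in-a-million chance fluctuation, and the look-elsewhere penalty knocks the global figure down by most of a sigma, which is why the speakers keep mentioning it.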
Higgs to Z Zbar: what you actually see is a shower of leptons, so you need very good lepton acceptance. Worse, theory isn't very good at predicting these showers, so you need to use controls rather than Monte Carlo. Reconstruction is messy because detector materials affect lepton paths.
Observations show an excess of events; there's a small anomaly at high masses the theorists need to work on, but it doesn't affect the data ranges relevant to Higgs detection. In the Higgs mass range, they see 13 events but expect 5 from background plus 5 from a 126-GeV Higgs.
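A quick back-of-envelope on those counts: the chance that a background expectation of 5 events fluctuates up to the 13 observed is a simple Poisson tail (counting only; the real fit also uses the mass-shape information, which is how the channel does better than this crude number):

```python
import math

def poisson_tail(n_obs, mu):
    """P(N >= n_obs) for a Poisson distribution with mean mu."""
    # 1 - CDF(n_obs - 1), summed directly since n_obs is small
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(n_obs))

# Chance the background alone (mean 5) gives the 13 observed events
p = poisson_tail(13, 5.0)
print(f"P(>=13 | background mean 5) = {p:.1e}")
```

That's about two in a thousand, roughly a 2.9-sigma counting fluctuation, so the quoted channel significance is plausible once the shape information is added.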
She has three "beautiful" candidate Higgs events, which I've posted below in screenshot-o-vision. I trust better versions will appear on the Net shortly.
Combined significance of this channel is 3.4 sigma; slightly higher than a standard model Higgs would predict.
The grand combination, both channels and both years: excludes anything except a narrow mass window, 122.6-129.7 GeV (95%). The significance as a function of mass gives a beautiful consistency everywhere except the above region, where you get a combined significance of 5.0 sigma. (Prolonged applause. "I'm not done yet! There is more to come!")
She plots the significance as a function of time, as it went up and down and finally up to today's five sigma detection, then pleads for more data. In particular, she wants to know whether the too-high significance relative to Standard Model Higgs is a deviation from theory.
Conclusion: looked up to 600 GeV and excluded anything but a narrow region around 126 GeV.
She adds a personal remark that a light Higgs mass is great because it lets them examine the Higgs in many states, "so, thanks Nature."
Moderator: "As a layman I would say: I think we have it. Would you agree?" Nods and applause.
Question period: no real questions, but a lot of theorists saying "wow". One charmingly said "it's great to go to a physics conference that gets applause like you do with a football game".
Press conference will follow, for what it's worth.