RE: Hunting for leptoquarks at CERN's Large Hadron Collider

in StemSocial · last year

Making hypotheses is useful for building a dictionary of possible signatures of new phenomena. Next, for each signature, we check whether the data is consistent with the Standard Model expectation (i.e. the background). The game is of course to make sure our catalogue of signatures is complete. In other words, are we looking at all possible options?

I will probably write a post soon on leptoquarks and dark matter. The main finding of this paper is that some experimental searches were not performed, even though those searches were the best ones to constrain the model considered. We have a unique machine, and it is important to exploit it as well as we can. This is where theoretical works like this one are important: they make sure there is no loophole in the search program.

Now, in terms of the hypotheses that are studied, there is always an unknown about the masses of the new particles and how strongly they are coupled. This impacts the rates of the anomalies that could be seen in the data. From a null result (no observation of anything anomalous), we constrain those rates, and thus restrict the possibilities for the properties of the new states.

I have however only talked about restrictions. It is actually very hard (probably impossible) to fully exclude a hypothesis. For instance, we can constrain the maximal strength at which a hypothetical particle could be coupled. Anything below that would be fine, as the particle would be so feebly coupled that it wouldn't produce enough signal to be visible. In that regime there is nothing we can really do, except trying to design more clever analyses dedicated to rarer signals.
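As a toy illustration of how a null result gets turned into such a constraint, here is a minimal sketch of a Poisson counting experiment (all numbers are hypothetical, and the assumption that the signal yield scales as the coupling squared is a simplification chosen for the example):

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson-distributed count with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, b, cl=0.95, s_hi=100.0):
    """Upper limit on a signal yield s at confidence level cl:
    the value of s for which P(N <= n_obs | b + s) = 1 - cl,
    found by bisection."""
    lo, hi = 0.0, s_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, b + mid) > 1 - cl:
            lo = mid  # s too small: data still compatible, push the limit up
        else:
            hi = mid
    return hi

# Hypothetical search: 3 events observed, 2.7 expected from background alone.
s_max = upper_limit(3, 2.7)

# If the signal yield scales as g**2 (an assumption for this sketch),
# a reference point (g_ref, s_ref) translates the yield limit into a
# maximal allowed coupling:
g_ref, s_ref = 1.0, 20.0  # hypothetical reference coupling and predicted yield
g_max = g_ref * math.sqrt(s_max / s_ref)
```

Any coupling below `g_max` predicts fewer signal events than the experiment can distinguish from the background fluctuation, which is exactly the "feebly coupled" loophole mentioned above.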

I hope this clarifies things. Otherwise, feel free to come back to me ;)

PS: thanks a lot! :)