The discovery of the 125 GeV Higgs boson marked the completion of the Standard Model (SM), but no new particles have been observed since. However, multi-lepton anomalies at the LHC hint at a new scalar between 145–155 GeV, decaying mainly to WW, with no corresponding ZZ signal, suggesting a neutral $SU(2)_L$ triplet. Recasting ATLAS Run-2 di-photon data reveals a 4σ excess near 152 GeV and a...
In a supersymmetric theory, large mass hierarchies can lead to large uncertainties in fixed-order calculations of the SM-like Higgs mass. A reliable prediction is then obtained by performing the calculation in an effective field theory (EFT) framework, involving the matching to the full supersymmetric theory at the high scale to include contributions from the heavy particles, and a subsequent...
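The EFT strategy sketched above can be illustrated by the tree-level matching condition for the SM Higgs quartic coupling at the SUSY scale $M_S$ (a schematic MSSM example, not the talk's specific setup; loop contributions are collected in $\Delta\lambda$):
$$
\lambda(M_S) = \frac{1}{4}\left[g^2(M_S) + g'^2(M_S)\right]\cos^2 2\beta + \Delta\lambda ,
$$
after which $\lambda$ is evolved with SM renormalization-group equations down to the electroweak scale, where $M_h^2 \simeq 2\lambda(m_t)\, v^2$. In this way the large logarithms of $M_S/m_t$ are resummed instead of appearing explicitly in a fixed-order result.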
It is common practice to study the phenomenology of extended Higgs sectors ad hoc. This approach becomes much more convincing when the extension is required to address fundamental puzzles of the SM. We show that, in the framework of minimal renormalizable SO(10), if CP violation originates from spontaneous symmetry breaking, an additional light Higgs doublet below about 500 GeV is needed. Its...
The virtual corrections for $gg \to HH$ at NLO QCD have been efficiently approximated using a Taylor expansion in the limit of forward kinematics. The same method has recently been applied to the calculation of a subset of the NNLO corrections, which are desirable given the significant impact, at NLO, of the uncertainty due to the choice of the top-mass renormalization scheme. In this talk,...
We present a fully analytic computation of the two-loop electroweak corrections to double Higgs production in gluon fusion, mediated by light quarks. The calculation is performed using the method of differential equations, employing a large mass expansion to generate boundary functions. We implement the results in the Powheg-Box-V2 framework for phenomenological studies. The corrections to the...
We extend the Nested Soft-Collinear Subtraction Scheme to final states with any number of gluons and (anti)quarks and derive general analytical formulas for the infrared-finite NNLO QCD corrections to a generic hadron collider process with a single flavour of quarks (nf=1). We also discuss some ideas to move towards an extension with arbitrary nf.
After the discovery of the Higgs boson in 2012, the measurement of the Higgs self-coupling remains a challenge for current and future experiments in particle physics. Higgs-boson pair production via gluon fusion is a loop-induced process. In order to increase the accuracy of the theoretical predictions for this process, higher-order corrections are necessary to reduce theoretical...
In recent years the perturbative approach to the short flow-time expansion (SFTX) of the gradient flow has been used in a variety of applications, such as meson mixing, for comparison to data from lattice field theory. These computations have usually utilised the method of projectors, which necessitates vanishing quark masses. However, it has been suggested by Hiromasa et al. that the full...
I will show preliminary results for the complete NLO corrections to the $pp \to t\bar{t}+X$ process in the lepton+jets decay channel at the Run III energy of $\sqrt{s} = 13.6\,\text{TeV}$ at the LHC. The calculation includes all resonant and non-resonant Feynman diagrams, interference effects, and Breit-Wigner propagators as well as all higher-order QCD and EW effects. The integrated and...
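For illustration, the Breit-Wigner propagators mentioned here take, schematically (fixed-width scheme, with the top quark as an example), the form
$$
\frac{i\left(\slashed{p} + m_t\right)}{p^2 - m_t^2 + i\, m_t \Gamma_t},
$$
which regulates the resonant regions of phase space and allows both resonant and non-resonant contributions to be integrated consistently.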
We report on a new framework for multiloop calculations that makes use of FORM and Mathematica. While FORM is employed for computationally heavy tasks such as amplitude evaluation or the insertion of reduction tables, less performance-critical tasks such as topology identification and minimization are done with FeynCalc. The interfaces to IBP-reduction tools such as FIRE or Kira are handled via the...
Despite being an elegant mechanism to explain Dark Matter (DM) production, freeze-in introduces challenges: if DM interacts via non-renormalizable operators, the predictions are highly sensitive to initial conditions, such as the reheating temperature of the universe. These issues are particularly relevant in models in which the universe deviates from radiation domination and the entropy of...
A natural dark matter candidate in many theories of strongly-interacting dark sectors is the dark pion $\pi_D$, which is a composite particle that is expected to have a mass close to or below the GeV scale. In many cases, these theories also contain a light vector meson, $\rho_D$, that can be produced together with dark pions through dark showers created in particle collisions. Cosmological...
Light-cone distribution amplitudes (LCDAs) for the $\Lambda_b$ baryon enter as universal hadronic matrix elements in QCD factorization approaches for energetic decays. Observables (e.g. form factors) can then be expressed as a convolution of the LCDA with a hard-scattering kernel to the desired order in the strong coupling. The LCDAs are genuinely non-perturbative quantities that describe the...
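Schematically (with conventions assumed for illustration), such a factorized observable reads
$$
F(Q,\mu) = \int_0^\infty \mathrm{d}\omega\; T(\omega, Q, \mu)\, \phi_{\Lambda_b}(\omega, \mu) \;+\; \mathcal{O}\!\left(\Lambda_{\rm QCD}/Q\right),
$$
with the hard-scattering kernel $T$ computed perturbatively and the LCDA $\phi_{\Lambda_b}$ taken as non-perturbative input.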
Essential ingredients in the calculation of heavy meson lifetimes are non-perturbative matrix elements of four-quark operators. The gradient flow formalism provides a way to calculate them in lattice gauge theory. This requires knowledge of the coefficients of the short-flow-time expansion of the corresponding operators, which can be calculated in perturbation theory. In this talk,...
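The short-flow-time expansion underlying this approach can be written schematically (operator basis and coefficients $\zeta_{ij}$ assumed for illustration) as
$$
\tilde{\mathcal{O}}_i(t) \;\xrightarrow{\;t \to 0\;}\; \sum_j \zeta_{ij}(t;\mu)\, \mathcal{O}_j^{\overline{\rm MS}}(\mu) \;+\; O(t),
$$
so that flowed lattice matrix elements can be converted to $\overline{\rm MS}$-renormalized ones once the coefficients $\zeta_{ij}$ are known perturbatively.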
In this talk, I will present the Multiscale-Improved NLO (MiNLO) method as an alternative to the conventional fixed-order NLO approach, which depends on an arbitrary choice of renormalization and factorization scales. The MiNLO framework dynamically determines these scales based on the most probable branching histories and incorporates Sudakov form factors to resum large double logarithms that...
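A generic Sudakov form factor of the type resummed here reads (schematic normalization; $A$ and $B$ denote the usual perturbative coefficient functions)
$$
\Delta(Q, q_T) = \exp\left\{-\int_{q_T^2}^{Q^2}\frac{\mathrm{d}q^2}{q^2}\left[A\!\left(\alpha_s(q^2)\right)\ln\frac{Q^2}{q^2} + B\!\left(\alpha_s(q^2)\right)\right]\right\},
$$
which suppresses the low-$q_T$ region where the fixed-order result develops large double logarithms.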
Weakly supervised anomaly detection has been shown to find new physics with a high significance at low injected signal cross sections. If the right features and a robust classifier architecture are chosen, these methods are sensitive to a very broad class of signal models. However, choosing the right features and classification architecture in a model-agnostic way is a difficult task as the...
Weakly supervised anomaly detection has been shown to be a sensitive and robust tool for Large Hadron Collider (LHC) analysis. The effectiveness of these methods relies heavily on the input features of the classifier, influencing both model coverage and the detection of low signal cross sections. In this talk, we demonstrate that improvements in both areas can be achieved by using energy flow...
Jets are ubiquitous observables in collider experiments, composed of complex collections of particles that require classification. Over the past decade, machine learning-based classifiers have significantly enhanced our jet tagging capabilities, with increasingly sophisticated models leading to further improvements. This raises a fundamental question: How close are we to the theoretical limit...
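The notion of a theoretical limit can be made concrete with the Neyman-Pearson lemma: when the densities are known, the likelihood ratio is the optimal discriminant, and any learned tagger can at best match its ROC curve. A toy one-dimensional sketch (Gaussian densities assumed purely for illustration):

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Toy 1D "jet observable": signal and background with known densities
mu_s, mu_b, sigma = 1.0, 0.0, 1.0
x_s = rng.normal(mu_s, sigma, 5000)
x_b = rng.normal(mu_b, sigma, 5000)
x = np.concatenate([x_s, x_b])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# Neyman-Pearson: the likelihood ratio p_s(x)/p_b(x) is the optimal
# discriminant; its ROC is an upper bound for any classifier on x
lr = norm.pdf(x, mu_s, sigma) / norm.pdf(x, mu_b, sigma)
auc_optimal = roc_auc_score(y, lr)

# Here the likelihood ratio is monotonic in x, so ranking by the raw
# feature already saturates the bound
auc_raw = roc_auc_score(y, x)
print(f"optimal AUC: {auc_optimal:.3f}, raw-feature AUC: {auc_raw:.3f}")
```

In realistic jet tagging the densities are unknown, which is precisely why the gap between a trained tagger and this bound is the quantity of interest.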
Foundation models are a very successful approach to linguistic tasks. Naturally, there is a desire to develop foundation models for physics data. Currently, existing networks are much smaller than publicly available Large Language Models (LLMs), which typically have billions of parameters. By applying pretrained LLMs in an unconventional way, we introduce large networks for cosmological data.