Parton distribution functions at the first phase of the LHC - Pedro Jimenez-Delgado

Parton distribution functions at the first phase of the LHC
DESY THEORY WORKSHOP 2012: Lessons from the first phase of the LHC
Pedro Jimenez-Delgado
PDFs at the first phase of the LHC
    Parton distributions, factorization, RGE, etc.
    Current global (NNLO) parton distribution groups
    Benchmark cross sections at LHC
    Data (to be) used in global PDF analysis
    Theory status and “globality” of the PDFs
    Treatments of heavy quarks
    Corrections to DIS cross sections
    Least squares estimation and correlations
    Propagation of experimental errors
    Parametrizations and the dynamical approach
    The role of the input scale
Parton distributions, factorization, RGE, etc.
Parton distribution functions enter most calculations for the LHC via the (collinear) factorization formula:

Input distributions are extracted from data using the renormalization
group (DGLAP) equations:

Light quark flavors and gluons only; heavy-quark contributions are generated perturbatively (intrinsic heavy-quark contributions irrelevant)
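The two formulas on this slide were images in the original and did not survive extraction; their standard forms (a reconstruction, notation mine) are:

```latex
% Collinear factorization for a hadronic cross section
\sigma_{pp\to X} \;=\; \sum_{i,j}\int_0^1 dx_1\, dx_2\;
  f_i(x_1,\mu_F^2)\, f_j(x_2,\mu_F^2)\;
  \hat{\sigma}_{ij\to X}\!\left(x_1 P_1,\, x_2 P_2;\, \mu_F^2,\mu_R^2\right)

% DGLAP (renormalization group) evolution equations
\frac{\partial f_i(x,\mu^2)}{\partial \ln\mu^2}
  \;=\; \sum_j \frac{\alpha_s(\mu^2)}{2\pi}
  \int_x^1 \frac{dz}{z}\; P_{ij}(z)\, f_j\!\left(\frac{x}{z},\mu^2\right)
```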
Current global (NNLO) PDF groups
   ABM: careful treatment of experimental correlations, nuclear and power corrections in DIS, FFNS
   MSTW: negative input gluons at small-x, rather “large” αs, GMVFNS
   HERAPDF: only HERA data, less negative gluons, GMVFNS
   NNPDF: neural-network parametrization, Monte Carlo approach for error propagation, GMVFNS
   CTEQ-TEA: parametrization with exponentials, substantially inflated uncertainties, GMVFNS
   JR: dynamical (and “standard”) approach, rather “small” αs, FFNS
(there are more groups and studies focused on particular aspects)
Benchmark cross sections at LHC
Benchmark cross sections should be well under control. There have been several efforts to understand them (PDF4LHC, LHC Higgs WG, etc.).
The spread in predictions is generally larger than the accuracy claimed by each group: theoretical uncertainties are not included in the errors!
Benchmark cross sections at LHC
     They have been measured during the first phase of the LHC
     (plots shown: measured benchmark cross sections, single-top production, and cross-section ratios)
Data (to be) used in global PDF analysis
   Pre-LHC data:
   Inclusive DIS structure functions (fixed-target and HERA)
   Drell-Yan (or CC DIS) needed for sea-quark flavor separation
   Neutrino dimuon data sensitive to strangeness
   W/Z production (Tevatron) provides additional (redundant) information
   Gluon only enters (at LO) in jet production (large-x) and semi-inclusive heavy-quark production in DIS (small-x)
Data (to be) used in global PDF analysis

        "O,!PM%$$%IQ!R:CS.9:!45648            "P,!TLM*($+()%)Q!R:CS.9:!45648

We will go from predicting LHC measurements to using them for
 !                                !
constraining the parton distributions (some groups have already started)
Theory status and “globality” of the PDFs
Most important ingredients (RGE, inclusive DIS and DY) known up to NNLO
Heavy-quark (semi-inclusive) contributions to DIS and jet production known only up to NLO (approximate NNLO at best)
What goes into the different NNLO fits?
Treatments of heavy quarks

Both schemes (FFNS and VFNS) are connected (effective heavy-quark PDFs):

The VFNS should not be used for DIS. GMVFNS interpolate:

However, it is model-dependent, unnecessary for present data (HERA), and suffers from problems at higher orders (e.g. diagrams with both charm and bottom lines)
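The scheme connection mentioned above was shown as an equation that is lost here; it is the usual radiative generation of an effective heavy-quark PDF from the gluon. A first-order sketch (notation mine):

```latex
h(x,\mu^2) \;\simeq\; \frac{\alpha_s(\mu^2)}{2\pi}\,
  \ln\!\frac{\mu^2}{m_h^2}
  \int_x^1 \frac{dz}{z}\; P_{qg}(z)\, g\!\left(\frac{x}{z},\mu^2\right),
\qquad
P_{qg}(z) = \tfrac{1}{2}\left[z^2+(1-z)^2\right]
```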
Corrections to DIS cross sections
Cross section (with all mass terms) constructed from corrected
structure functions:

Target-mass corrections have well-known expressions
Higher twist can be parametrized
Nuclear corrections from nuclear wave functions (models)
The usual approach of imposing cuts does not remove the need for power corrections; responsible for some differences between groups
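The corrected structure functions were shown as an equation that did not survive extraction; a common form of the power-correction parametrization (a sketch of the standard ansatz, not necessarily the exact one used in the talk) is:

```latex
F_2(x,Q^2) \;=\; F_2^{\mathrm{TMC}}(x,Q^2)\,
  \left(1 + \frac{h(x)}{Q^2}\right)
```

where F2^TMC includes the target-mass corrections and h(x) is a fitted higher-twist coefficient.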
Least squares estimation and correlations
 The need for an appropriate treatment of experimental correlations has been recognized in recent years (accuracy)
 A convenient method for doing it (equivalent to the standard
 correlation matrix approach) is by shifts between theory and data [CTEQ]:

The optimal shifts for a given theory can be determined analytically:

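The shift-based χ² and its analytic minimization can be sketched as follows (a minimal illustration of the method, not the actual CTEQ implementation; all names are mine, and `beta[i, a]` stands for the correlated systematic shift of data point `i` due to source `a`):

```python
import numpy as np

def chi2_with_shifts(D, T, sig, beta):
    """Chi^2 with correlated systematic shifts lam_a, minimized analytically.

    chi2(lam) = sum_i ((D_i - T_i - sum_a beta_ia*lam_a) / sig_i)^2 + sum_a lam_a^2
    """
    r = (D - T) / sig                    # residuals in units of uncorrelated errors
    b = beta / sig[:, None]              # systematics in the same units
    A = np.eye(beta.shape[1]) + b.T @ b  # (k x k) normal-equations matrix
    lam = np.linalg.solve(A, b.T @ r)    # optimal shifts, no numerical minimization
    chi2 = np.sum((r - b @ lam) ** 2) + lam @ lam
    return chi2, lam
```

Minimizing over the shifts analytically reproduces the standard correlation-matrix χ², (D−T)ᵀC⁻¹(D−T) with C = diag(σ²) + ββᵀ, which is what makes the two treatments equivalent.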
Least squares estimation and correlations
 Care is needed for multiplicative errors: scaling the relative errors with the measured data values leads to a bias towards smaller theoretical predictions (smaller errors and shifts for lower central values) [d'Agostini]

A solution is to scale the errors with the theory predictions instead, but iteratively! (otherwise bias towards larger predictions). For parameter scans a fixed theory must be used (so as not to reintroduce the bias)
This could be part of the reason for the …
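A toy numerical sketch of the bias (my own illustration, not from the talk): fit a constant to data carrying a purely multiplicative 10% error, once with errors scaled by the data and once with errors scaled by the (iterated, frozen) theory.

```python
import numpy as np

rng = np.random.default_rng(1)
true_val, delta, n = 1.0, 0.1, 2000
D = true_val * (1.0 + delta * rng.standard_normal(n))   # 10% multiplicative errors

# Biased: sigma_i = delta * D_i, so downward fluctuations get smaller errors
# and therefore larger weight -> the fit is pulled below true_val.
w = 1.0 / (delta * D) ** 2
t_biased = np.sum(w * D) / np.sum(w)

# d'Agostini-type fix: sigma_i = delta * t, with t held FIXED in the errors
# during each iteration (otherwise the bias flips towards larger predictions).
t = t_biased
for _ in range(5):
    sigma = delta * t                  # common error, frozen for this iteration
    w = np.full(n, 1.0 / sigma**2)     # uniform weights here -> plain mean
    t = np.sum(w * D) / np.sum(w)
t_fixed = t
```

With the data-scaled errors the estimate sits roughly 2δ² low; the theory-scaled, iterated version recovers the unbiased mean.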
Propagation of experimental errors
A convenient setup (equivalent to standard linear error propagation) in the PDF context is the Hessian method: a quadratic expansion around the minimum

T is the “tolerance parameter”; it defines the size of the errors.
One constructs “eigenvector PDFs”, which are used to calculate derivatives at appropriate points:

(ABM uses standard error propagation; NNPDF a standard Monte Carlo approach)
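The eigenvector sets enter the commonly used symmetric "master formula" for the uncertainty of an observable; a minimal sketch (function name mine), assuming arrays of predictions X(S_k±) evaluated with the plus/minus eigenvector PDF sets:

```python
import numpy as np

def hessian_pdf_error(X_plus, X_minus):
    """Symmetric Hessian PDF uncertainty on an observable X:
    Delta X = 1/2 * sqrt( sum_k (X(S_k+) - X(S_k-))^2 )."""
    X_plus = np.asarray(X_plus, float)
    X_minus = np.asarray(X_minus, float)
    return 0.5 * np.sqrt(np.sum((X_plus - X_minus) ** 2))
```

The same differences (X(S_k+) − X(S_k−))/2 serve as the finite-difference derivatives of X along the eigenvector directions mentioned above.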
Parametrizations and the dynamical approach
Parametrizations:
 “function” may be polynomial, contain exponentials, neural networks, ...
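The generic input parametrization (its equation is lost in the extraction) is conventionally of the form (a reconstruction, notation mine):

```latex
x\,f_i(x,\mu_0^2) \;=\; N_i\; x^{a_i}\,(1-x)^{b_i}\; F_i(x)
```

where F_i(x) is the “function” in question.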
Since we are free to (and have to) select an input scale for the RGE:
   At low enough scales only “valence” partons would be “resolved”
   → structure at higher scales appears radiatively (QCD dynamics)

 DYNAMICAL:                               “STANDARD”:
   input scale optimally determined         input scale arbitrarily fixed
   valence-like structure                   fine-tuning to particular data
   QCD “predictions” for small-x            extrapolations to unmeasured regions
   more predictive, smaller uncertainties   more adaptable, marginally smaller χ²

There are no extra constraints involved in the dynamical approach
Physical motivation for boundary conditions ≠ non-perturbative structure
Parametrizations and the dynamical approach
An illustration: GJR08 input

Parametrizations and the dynamical approach

Evolution from dynamical scales:
larger “evolution distance” + valence-like structure (of the input distributions)
→ smaller uncertainties and steeper gluons (correspondingly smaller αs)
Fine-tuning is marginal (e.g. for DIS in JR09, …)
The role of the input scale
Once an optimal solution is found using an input scale, equally good solutions exist at different scales:
Any dependence is due to shortcomings of the estimation: procedural bias
For example (but not exclusively) parametrization bias
Note, e.g., that backwards evolution to low scales leads to oscillating gluons (impossible to capture with finite precision)
Exercise: systematic study with progressively more flexible parametrizations

Allow also for negative input gluons:
The role of the input scale
 The variation of the fit with the input scale decreases with increasing flexibility

These variations can be used to estimate the (remaining) procedural bias!
(devise a measure: e.g. in (G)JR half the difference between dynamical and standard)
Allowing for negative gluons does not improve the description
The role of the input scale
By considering variations with the input scale one can estimate (a lower limit to) the procedural error → an additional uncertainty for each quantity

Results stabilize at NNLO but variations do not (substantially) decrease
Following our “recipe” we would estimate …; about the same size as the error from experimental uncertainties!
Outlook
Generally there is agreement between LHC measurements and predictions (with differences here and there)
LHC data will help to determine the structure of the nucleon

Differences between PDF groups are understood to some extent, but theoretical uncertainties are systematically disregarded:
                Total PDF error underestimated!

The dynamical approach has greater predictive power in the small-x region: more constrained without additional constraints

Procedural bias is significant for some quantities and can be estimated by input scale variations