Microsoft Exchange Server on VMware vSphere
Exchange Server 2019 / vSphere 6.7
BEST PRACTICES GUIDE

Table of Contents

1.     Introduction
       1.1    Purpose
       1.2    Target Audience
       1.3    Scope
       1.4    External References
2.     ESXi Host Best Practices for Exchange
       2.1    CPU Configuration Guidelines
              2.1.1    Physical and Virtual CPUs
              2.1.2    Architectural Limitations in Exchange Server
              2.1.3    vSphere Virtual Symmetric Multiprocessing
              2.1.4    CPU Reservations
              2.1.5    Virtual Cores and Virtual Sockets
              2.1.6    Hyper-threading
              2.1.7    “L1 Terminal Fault – VMM” and Hyper-threading
              2.1.8    Non-Uniform Memory Access
              2.1.9    vNUMA and CPU Hot Plug
       2.2    Memory Configuration Guidelines
              2.2.1    ESXi Memory Management Concepts
              2.2.2    Virtual Machine Memory Concepts
              2.2.3    Memory Tax for Idle Virtual Machines
              2.2.4    Allocating Memory to Exchange Virtual Machines
              2.2.5    Memory Hot Add, Over-subscription, and Dynamic Memory
       2.3    Storage Virtualization
              2.3.1    Raw Device Mapping
                       In-Guest iSCSI and Network-Attached Storage
              2.3.2    Virtual SCSI Adapters
              2.3.3    Virtual SCSI Queue Depth
              2.3.4    A Word on MetaCacheDatabase (MCDB)
              2.3.5    Exchange Server 2019 on All-Flash Storage Array
              2.3.6    Using VMware vSAN for Microsoft Exchange Server Workloads
                       2.3.6.1    Hybrid vs. All-Flash vSAN for Exchange Server
                       2.3.6.2    General vSAN for Exchange Server Recommendations
       2.4    Networking Configuration Guidelines
              2.4.1    Virtual Networking Concepts
              2.4.2    Virtual Networking Best Practices
              2.4.3    Sample Exchange Virtual Network Configuration
       2.5    Power Management
              2.5.1    Server Hardware BIOS Settings
              2.5.2    ESXi Host Power Settings
              2.5.3    Windows Guest Power Settings
3.     Using vSphere Technologies with Exchange Server 2019
       3.1    Overview of vSphere Technologies
              3.1.1    vSphere HA
              3.1.2    vSphere vMotion
       3.2    vSphere Distributed Resource Scheduler
              3.2.1    vMotion and DRS Together
              3.2.2    Enable DRS in Fully Automated Mode
              3.2.3    Use Anti-Affinity Rules for Exchange Virtual Machines
              3.2.4    DRS Groups and Group-Based Rules
       3.3    vSphere High Availability
              3.3.1    Admission Control
              3.3.3    Using vSphere HA with Database Availability Groups
4.     Exchange Performance on vSphere
       4.1    Key Performance Considerations
       4.2    Performance Testing
              4.2.1    Internal Performance Testing
              4.2.2    Partner Performance Testing
       4.3    Ongoing Performance Monitoring and Tuning
5.     VMware Enhancements for Deployment and Operations
       5.1    VMware NSX for vSphere
              5.1.1    VMware NSX Edge
              5.1.2    VMware NSX Distributed Firewall
       5.2    VMware vRealize Operations Manager
       5.3    Site Recovery Manager

List of Figures

Figure 1. Previous Virtual Machine CPU Allocation Recommendation
Figure 2. New Virtual Machine CPU Allocation Recommendation
Figure 3. NUMA Architecture Sizing Scenarios
Figure 4. Virtual Machine Memory Settings
Figure 5. VMware Storage Virtualization
Figure 6. Storage Multi-pathing Requirements for vSphere
Figure 7. Storage Distribution with Multiple vSCSI Adapters
Figure 8. Common Points of Storage IO Queues
Figure 9. Cost of Ownership Comparison
Figure 10. Data Reduction Ratio on XtremIO
Figure 11. VMware vSAN
Figure 12. vSphere Virtual Networking Overview
Figure 13. Sample Virtual Network Configuration
Figure 14. Default ESXi 6.x Power-Management Setting
Figure 15. Recommended ESXi Host Power-Management Setting
Figure 16. Windows CPU Core Parking
Figure 17. Recommended Windows Guest Power Scheme
Figure 18. vSphere Distributed Resource Scheduler Anti-Affinity Rule
Figure 19. HA Advanced Configuration Option for DRS Anti-Affinity Rules
Figure 20. Improved vSphere HA and DRS Interoperability in vSphere 6.7
Figure 21. Must Run On Rule Example
Figure 22. Should Run On Rule Example
Figure 23. Virtual Machine Perfmon Counters
Figure 24. Load-Balancing Exchange Server 2016 with NSX Edge
Figure 25. NSX Distributed Firewall Capability
Figure 26. vRealize Operations
Figure 27. VMware Site Recovery Manager – Logical Components
Figure 28. Challenges with Exchange Server DAG as a DR Solution
Figure 29. Faster Exchange Service Recovery with Site Recovery Manager Automated DR Workflows
Figure 30. Failover Scenarios with Site Recovery Manager

1. Introduction
Microsoft Exchange Server is the dominant enterprise-class electronic messaging and collaboration
application in the industry today. Given the multitude of technical and operational enhancements in the
latest released version of Microsoft Exchange Server (2019), customers are expected to continue using
Exchange Server, which should retain its leading position in the enterprise.
Concurrent usage of the Exchange Server native high availability feature (Database Availability Group or
DAG) with VMware vSphere® native high availability features has been fully and unconditionally
supported by Microsoft since Exchange Server 2016. Microsoft continues the trend by extending this
declarative statement of support for virtualization to the 2019 version of Exchange Server.
Because the vSphere hypervisor is part of the Microsoft Server Virtualization Validation Program (SVVP),
virtualizing an Exchange Server 2019 instance on vSphere is fully supported.
This document provides technical guidance for VMware customers who are considering virtualizing their
Exchange Server on the vSphere virtualization platform.
Enterprise communication and collaboration are now so integral to an organization’s operations that
applications such as Exchange Server are routinely classified as mission-critical. Organizations expect
measurable and optimal performance, scalability, reliability, and recoverability from this class of
applications. The main objective of this guide is to provide the information required to help a customer
satisfy the operational requirements of running Exchange Server 2019 on all currently shipping and
supported versions of VMware vSphere up to vSphere version 6.7.

1.1     Purpose
This guide provides best practice guidelines for deploying Exchange Server 2019 on vSphere. The
recommendations in this guide are not specific to any particular hardware, nor to the size and scope of
any particular Exchange implementation. The examples and considerations in this document provide
guidance but do not represent strict design requirements, as the flexibility of Exchange Server 2019 on
vSphere allows for a wide variety of valid configurations.

1.2     Target Audience
This guide assumes a basic knowledge and understanding of vSphere and Exchange Server 2019.
    •   Architectural staff can use this document to gain an understanding of how the system will work as
        a whole, as they design and implement various components.
    •   Engineers and administrators can use this document as a catalog of technical capabilities.
    •   Messaging staff can use this document to gain an understanding of how Exchange might fit into a
        virtual infrastructure.
    •   Management staff and process owners can use this document to help model business processes
        to take advantage of the savings and operational efficiencies achieved with virtualization.

1.3     Scope
The scope of this document is limited to the following topics:
    •   VMware ESXi™ Host Best Practices for Exchange – Best practice guidelines for preparing the
        vSphere platform for running Exchange Server 2019. Guidance is included for CPU, memory,
        storage, and networking.
    •   Using VMware vSphere vMotion®, VMware vSphere Distributed Resource Scheduler™ (DRS),
        and VMware vSphere High Availability (HA) with Exchange Server 2019 – Overview of vSphere
        vMotion, vSphere HA, and DRS, and guidance for usage of these vSphere features with
        Exchange Server 2019 virtual machines (VM).
    •   Exchange Performance on vSphere – Background information on Exchange Server performance
        in a VM. This section also provides information on official VMware partner testing and guidelines
        for conducting and measuring internal performance tests.
    •   VMware Enhancements for Deployment and Operations – A brief look at vSphere features and
        add-ons that enhance deployment and management of Exchange Server 2019.
The following topics are out of scope for this document.
    •   Design and Sizing Guidance – Historically, sizing an Exchange environment has been something of
        a guessing game, even after using the Exchange Server Role Requirements Calculator (also known
        as the Exchange Calculator) available from Microsoft. As of this writing, Microsoft has not updated
        the Exchange Calculator to include sizing considerations for Exchange Server 2019. This gap makes
        it especially critical for customers to be judicious in baselining their Exchange Server sizing exercise
        – to not only ensure that they allocate adequate resources to the Exchange Server workloads, but
        also to ensure they do not unnecessarily over-allocate such resources.
    •   Availability and Recovery Options – With the release of Microsoft Exchange Server 2013 and the
        introduction of the Database Availability Group (DAG) feature, VMware published a technical
        guide on the availability and recovery choices available for supporting a virtualized Microsoft
        Exchange Server environment in a vSphere infrastructure. Because these considerations have
        not changed in any significant way (from a purely workload virtualization point of view), customers
        are encouraged to consult the prescriptive guidance contained in the existing guide as they
        determine how to optimally protect their virtualized Exchange Server workloads on vSphere.
This and other guides are limited in focus to deploying Microsoft Exchange Server workloads on VMware
vSphere. Exchange deployments cover a wide subject area, and Exchange-specific design principles
should always follow Microsoft guidelines for best results.

1.4     External References
This document includes references to external links on third-party websites for the purposes of clarifying
statements where necessary. While these statements were accurate at the time of publishing, these third-
party websites are not under VMware control. This third-party content is subject to change without prior
notification.

2.      ESXi Host Best Practices for Exchange
A well-designed VMware vSphere hypervisor platform is crucial to the successful implementation of
virtualized enterprise applications such as Exchange Server. The following sections outline general best
practices for designing vSphere for Exchange Server 2019.

2.1     CPU Configuration Guidelines
The latest release of vSphere (vSphere 6.7) has dramatically increased the scalability of VMs, enabling
configurations of up to 128 virtual processors for a single VM. With this increase, one option to improve
performance is to simply create larger VMs. However, additional considerations are involved in deciding
how much processing power should be allocated to a VM. This section reviews features that are available
in vSphere with regard to virtualizing CPUs. Where relevant, this document discusses the impact of those
features to Exchange Server 2019 and the recommended practices for using those features.

2.1.1 Physical and Virtual CPUs
VMware uses the terms virtual CPU (vCPU) and physical CPU (pCPU) to distinguish between the
processors within the VM and the underlying physical processor cores. VMs with more than one vCPU
are also called symmetric multiprocessing (SMP) VMs. The virtual machine monitor (VMM) is responsible
for virtualizing the CPUs. When a VM begins running, control transfers to the VMM, which is responsible
for virtualizing guest operating system instructions.

2.1.2 Architectural Limitations in Exchange Server
Microsoft provides guidelines to calculate the required compute resources for a single instance of
Exchange Server (as an application) so that Exchange Servers do not experience unintended
performance degradation due to incorrect sizing. These maximums are the same whether the Exchange
Server is virtualized or installed on physical servers.
See the following table.
Table 1. Exchange Server Maximum Supported Compute Resources

  Configuration Item                              Maximum Supported
  Memory per Exchange Server instance             256 GB
  Number of CPUs per Exchange Server instance     2 sockets, 48 cores
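
These maximums are easy to validate before deployment. The following is a minimal Python sketch (a hypothetical helper, not a Microsoft or VMware tool) that flags a proposed Exchange Server 2019 VM configuration exceeding the limits in Table 1:

# Hypothetical helper: sanity-check a proposed Exchange Server 2019 VM configuration
# against the maximums documented in Table 1.
MAX_MEMORY_GB = 256   # maximum memory per Exchange Server instance
MAX_SOCKETS = 2       # maximum sockets per Exchange Server instance
MAX_CORES = 48        # maximum total cores per Exchange Server instance

def check_exchange_vm(sockets: int, cores_per_socket: int, memory_gb: int) -> list:
    """Return a list of warnings for settings that exceed the supported maximums."""
    warnings = []
    total_cores = sockets * cores_per_socket
    if sockets > MAX_SOCKETS:
        warnings.append(f"{sockets} virtual sockets exceeds the {MAX_SOCKETS}-socket maximum")
    if total_cores > MAX_CORES:
        warnings.append(f"{total_cores} vCPUs exceeds the {MAX_CORES}-core maximum")
    if memory_gb > MAX_MEMORY_GB:
        warnings.append(f"{memory_gb} GB exceeds the {MAX_MEMORY_GB} GB memory maximum")
    return warnings

# Example: 2 sockets x 24 cores and 192 GB stays within the supported maximums.
print(check_exchange_vm(sockets=2, cores_per_socket=24, memory_gb=192))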

2.1.3 vSphere Virtual Symmetric Multiprocessing
VMware virtual symmetric multiprocessing (vSMP) enhances VM performance by enabling a single VM to
use multiple physical processor cores simultaneously. The most recent version of vSphere (version 6.7 as
of the time of publishing) supports allocating up to 128 virtual CPUs per VM. The biggest advantage of an
SMP system is the ability to use multiple processors to execute multiple tasks concurrently, thereby
increasing throughput (e.g., the number of transactions per second). Only workloads that support
parallelization (including multiple processes or multiple threads that can run in parallel) can benefit from
SMP.
Be aware of the 2-socket, 48-core maximum supported configuration for Exchange Server 2019 when
making a sizing decision. The ability to allocate up to 128 vCPUs to a VM is of limited relevance in this context.

VMware strongly recommends allocating resources to a VM based on the actual needs of the applications
hosted on the VM.
The ESXi scheduler uses a mechanism called relaxed co-scheduling to schedule processors. Strict co-
scheduling requires all vCPUs to be scheduled on physical cores simultaneously, whereas relaxed co-
scheduling monitors time skew between vCPUs to make scheduling or co-stopping decisions. A leading
vCPU might decide to co-stop itself to allow for a lagging vCPU to catch up. Consider the following points
when using multiple vCPUs:
    •   VMs with multiple vCPUs perform well in the latest versions of vSphere, as compared with older
        versions where strict co-scheduling was used.
    •   Regardless of relaxed co-scheduling, the ESXi scheduler prefers to schedule vCPUs together,
        when possible, to keep them in sync. Deploying VMs with multiple vCPUs that are not used
        wastes resources and might result in reduced performance of other VMs.
    For detailed information regarding the CPU scheduler and considerations for optimal vCPU allocation,
    please see the section on ESXi CPU considerations in Performance Best Practices for VMware
    vSphere 6.7.
    •   VMware recommends allocating multiple vCPUs to a VM only if the anticipated Exchange
        workload can truly take advantage of all the vCPUs.
    •   Use the Microsoft-provided Exchange Server Role Requirements Calculator tool to aid in your
        sizing exercise.

        Note the following:
    o   The Exchange calculator is intentionally generous in its recommendations and limits. The
        recommendations might not be optimal for a virtualized workload.
    o   The calculator does not factor in the Non-Uniform Memory Access (NUMA) topology of a given
        hardware when making compute resource recommendations. While Exchange Server (as an
        application) is unaware of NUMA optimization, VMware still recommends sizing a VM with the
        physical NUMA topology in mind. See Section 2.1.8, Non-Uniform Memory Access.
    o   The calculator assumes a 10% hypervisor overhead in its computation. Although VMware testing
        indicates a variation of 3%-5% in a worst-case performance scenario, VMware recommends not
        changing this value in the calculator when modelling Exchange Server 2019 VMs for capacity.
        Given the relative age of Exchange Server 2019, the true impact of the hypervisor on Exchange
        Server 2019 is currently unknown. Leaving this value unchanged helps customers remain as
        compliant as possible with Microsoft requirements.
    •   If the exact workload is not known, size the VM with a smaller number of vCPUs initially and
        increase the number later, if necessary.
    •   Microsoft supports up to 2:1 virtual-to-physical CPU allocation for Exchange Server 2019 in a
        virtual environment. VMware recommends that, for the initial sizing of performance-critical
        Exchange VMs (production systems), the total number of vCPUs assigned to all the VMs be no
        more than the total number of physical cores on the ESXi host machine, not hyper-threaded
        cores. By following this guideline, you can gauge performance and utilization within the
        environment until you are able to identify potential excess capacity that could be used for
        additional workloads.
    •   Although larger VMs are possible in vSphere, VMware recommends reducing the number of
        virtual CPUs for a VM if monitoring of the actual workload shows that the Exchange application is
        not benefitting from the increased vCPUs.
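
The vCPU-to-physical-core guidance above (start at no more than 1:1 across all Exchange VMs on a host, with 2:1 as the Microsoft-supported ceiling) can be sanity-checked with a short script. The following is a minimal, illustrative Python sketch; the core count and per-VM vCPU values are assumptions to be replaced with your own inventory data.

# Illustrative values: physical cores on the ESXi host (not hyper-threads) and the
# vCPUs assigned to each Exchange VM placed on that host.
PHYSICAL_CORES = 48
EXCHANGE_VM_VCPUS = [16, 16, 24]

total_vcpus = sum(EXCHANGE_VM_VCPUS)
ratio = total_vcpus / PHYSICAL_CORES

if ratio <= 1.0:
    print(f"{total_vcpus} vCPUs on {PHYSICAL_CORES} cores ({ratio:.2f}:1) - within the initial sizing guideline")
elif ratio <= 2.0:
    print(f"{ratio:.2f}:1 over-commitment - supported, but monitor CPU ready time closely")
else:
    print(f"{ratio:.2f}:1 exceeds the 2:1 maximum supported for Exchange Server 2019")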

2.1.4 CPU Reservations
Setting a CPU reservation sets a guaranteed CPU allocation for the VM. This practice is generally not
recommended because reserved resources are not available to other VMs and flexibility to manage
changing workloads is restricted. However, SLAs and multitenancy may require a guaranteed amount of
compute resource to be available. In these cases, reservations can be used. VMware has conducted
tests on vCPU over-commitment with SAP and Microsoft SQL Server workloads, demonstrating that
performance degradation inside the VMs is linearly reciprocal to the over-commitment. Because the
performance degradation is graceful, any vCPU over-commitment can be effectively managed by using
DRS and vSphere vMotion to move VMs to other ESXi hosts to obtain more processing power.
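
Where a reservation is justified, it can be applied through the vSphere API as well as the vSphere Client. The following is a minimal sketch using the pyVmomi SDK; the vCenter address, credentials, VM name, and reservation value are illustrative assumptions, not recommendations.

# Minimal pyVmomi sketch: set a CPU reservation on an Exchange Server VM.
# All names and values below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)

# Locate the VM by name using a container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "exch-mbx-01")
view.DestroyView()

# Reserve CPU in MHz; use reservations only where SLAs or multitenancy require them.
spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=24000)   # e.g., 24 GHz
vm.ReconfigVM_Task(spec)

Disconnect(si)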

2.1.5 Virtual Cores and Virtual Sockets
vSphere now supports configuration of the number of virtual cores per virtual socket in the VMware
vSphere Web Client. This feature provides two functions:
    •   When used with virtual Non-Uniform Memory Access (vNUMA)-enabled VMs, this setting can be
        used to present specific NUMA topologies to the guest operating system.
    •   More commonly, this feature allows a guest operating system to utilize all of its assigned vCPUs
        in the case of an operating system that is limited to a certain number of CPUs.
On vSphere, vCPUs can be allocated to a VM by socket, or by number of cores per socket. Historically,
VMware had been recommending that, for performance considerations, the number of vCPUs allocated
to a VM should be allocated using the “Sockets” options in a vSphere environment. Customers were
encouraged to leave the “Cores per Socket” option unchanged from its default value of “1”. Controlling
vCPUs by number of cores was a configuration option intended to help customers overcome the
limitations in previous versions of the Windows operating system.
Figure 1. Previous Virtual Machine CPU Allocation Recommendation

Since Exchange Server 2019 requires a minimum OS version of Windows Server 2019, and the operating
system does not suffer from the same socket limitations of prior Windows versions, it is logical to assume
that previous guidance to “leave cores-per-socket at default” would persist. This is no longer valid,
however, due to the architectural changes and optimizations VMware has made to CPU scheduling
algorithms in newer versions of vSphere (since version 6.5).
VMware now recommends that, when presenting vCPUs to a VM, customers should allocate the vCPUs
in accordance with the PHYSICAL NUMA topology of the underlying ESXi Host. Customers should
consult their hardware vendors (or the appropriate documentation) to determine the number of sockets
and cores physically present in the server hardware and use that knowledge as operating guidance for
VM CPU allocation. The recommendation to present all vCPUs to a VM as “sockets” is no longer valid in
modern vSphere/ESXi versions.
The following is a high-level representation of the new vCPU allocation for VMs in a vSphere version 6.5
infrastructure and newer.
Figure 2. New Virtual Machine CPU Allocation Recommendation

VMs, including those running Exchange Server 2019, should be configured with multiple virtual sockets
and cores which, together, equal the number of vCPUs intended. This sockets-cores combination should
reflect the topology of the sockets-cores present on the motherboard.
Where the number of vCPUs intended for a VM is not greater than the number of cores present in one
physical socket, all of the vCPUs so allocated should come from one socket. Conversely, if a VM requires
more vCPUs than are physically available in one physical socket, the desired number of vCPUs should
be evenly divided between two sockets.
NOTE: The 2-socket prescription is based on Microsoft’s restated requirements for a single Exchange
Server.
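
One way to apply this guidance consistently is to derive the virtual socket and cores-per-socket values from the host's physical topology. The following is a minimal, illustrative Python sketch; the host topology values are assumptions to be replaced with your hardware vendor's documented socket and core counts.

# Derive a virtual sockets / cores-per-socket layout for a desired vCPU count,
# given the physical topology of the ESXi host (illustrative values).
def vcpu_layout(desired_vcpus: int, cores_per_physical_socket: int):
    """Return (virtual_sockets, cores_per_virtual_socket) aligned to the physical topology."""
    if desired_vcpus <= cores_per_physical_socket:
        return 1, desired_vcpus                  # fits within one physical socket
    if desired_vcpus <= 2 * cores_per_physical_socket and desired_vcpus % 2 == 0:
        return 2, desired_vcpus // 2             # divide evenly across two sockets
    raise ValueError("vCPU count cannot be divided evenly across 2 sockets on this host")

# Example: a host with 24 cores per physical socket.
print(vcpu_layout(16, 24))   # (1, 16) - one virtual socket
print(vcpu_layout(32, 24))   # (2, 16) - two virtual sockets of 16 cores each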
While NUMA-aware workloads benefit most from a virtual topology that mirrors the physical sockets and
cores, Exchange Server 2019 is not a NUMA-aware application, and performance tests have shown no
significant performance improvement from enabling vNUMA. Windows Server 2019, however, is
NUMA-aware, and Exchange Server 2019 (as an application) does not experience any performance,
reliability, or stability issues attributable to vNUMA.

2.1.6 Hyper-threading
Hyper-threading technology (recent versions are called symmetric multithreading, or SMT) allows a single
physical processor core to behave like two logical processors, so that two independent threads are able
to run simultaneously. Unlike doubling the number of physical processor cores, which can roughly double
performance, hyper-threading provides anywhere from a slight to a significant increase in system
performance by keeping the processor pipeline busier. For example, an ESXi host system enabled for
SMT on an 8-core server sees 16 threads that appear as 16 logical processors.
Previous guidance provided by Microsoft regarding Exchange sizing and the use of hyper-threading led to
some confusion among those looking at virtualizing Exchange Server. Microsoft has since updated all
applicable documents to clarify that statements relating to hyper-threading and Exchange Server do not
apply to virtualization platforms. Microsoft’s guidance in this respect is expected to complement
Microsoft’s “Preferred Architecture” design option, which does not incorporate any virtualization design
choices, options or considerations. See Ask The Perf Guy: What’s The Story With Hyper-threading and
Virtualization? for the most recent guidance from Microsoft.
vSphere uses hyper-threads to provide more scheduling choices for the hypervisor. Hyper-threads
provide additional targets for worlds, a schedulable CPU context that can include a vCPU or hypervisor
management process. For workloads that are not CPU-bound, scheduling multiple vCPUs onto a physical
core’s logical processors can provide increased throughput by increasing the work in the pipeline. The CPU
scheduler prefers to schedule a vCPU on a whole idle core rather than on a hyper-thread (partial core)
when CPU time is being lost to hyper-thread contention. Consequently, VMware recommends enabling
hyper-threading on the ESXi host if the underlying hardware supports the configuration.
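
Hyper-threading status is visible through the vSphere API, which makes it easy to confirm that it is enabled on every host running Exchange VMs. The following is a minimal pyVmomi sketch; it assumes a ServiceInstance connection (si) established as in the sketch in Section 2.1.4.

# Minimal pyVmomi sketch: report hyper-threading status for each ESXi host.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    ht = host.config.hyperThread          # HostHyperThreadScheduleInfo
    cpu = host.hardware.cpuInfo
    print(f"{host.name}: HT available={ht.available}, active={ht.active}, "
          f"{cpu.numCpuCores} physical cores / {cpu.numCpuThreads} logical processors")
view.DestroyView()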

2.1.7 “L1 Terminal Fault – VMM” and Hyper-threading
For customers concerned about the effects of the “L1 Terminal Fault – VMM” speculative-execution
vulnerability in Intel processors (CVE-2018-3646) on virtualized workloads, VMware has provided the
following resources documenting the recommended tools and mitigation steps:
•   VMware response to ‘L1 Terminal Fault - VMM’ (L1TF - VMM) Speculative-Execution vulnerability in
    Intel processors for vSphere
•   HTAware Mitigation Tool Overview and Usage
VMware recommends that customers do not disable hyper-threading as doing so precludes potential
vSphere scheduler enhancements and mitigations that will allow the use of both logical processors of a
hyper-thread-capable core. Disabling hyper-threading to mitigate the concurrent-context attack vector will
introduce unnecessary operational overhead, as hyper-threading may need to be re-enabled in
subsequent vSphere updates.

2.1.8 Non-Uniform Memory Access
In NUMA systems, a processor or set of processor cores has access to memory with very little latency.
The memory and its associated processor or processor cores are referred to as a NUMA node. NUMA-
aware operating systems and applications can make decisions as to where a process might run relative
to the NUMA architecture. This allows processes to access memory local to the NUMA node rather than
having to traverse an interconnect and incur additional latency. Exchange Server 2019 is not NUMA-
aware, but both ESXi and Windows Server 2019 are.
vSphere ESXi provides mechanisms for letting VMs take advantage of NUMA. The first mechanism is
transparently managed by ESXi while it schedules a VM’s virtual CPUs on NUMA nodes. By attempting to
keep all of a VM’s vCPUs scheduled on a single NUMA node, memory access can remain local. For this
to work effectively, size the VM to fit within a single NUMA node. This placement is not a guarantee,
however, as the scheduler can migrate a VM between NUMA nodes based on demand.
The second mechanism for providing VMs with NUMA capabilities is vNUMA. When enabled for vNUMA,
a VM is presented with the NUMA architecture of the underlying hardware. This allows NUMA-aware
operating systems and applications to make intelligent decisions based on the underlying host’s
capabilities. By default, vNUMA is automatically enabled for VMs with nine or more vCPUs on vSphere.
Because Exchange Server 2019 is not NUMA-aware, enabling vNUMA for an Exchange VM does not
provide any additional performance benefit, nor does doing so incur any performance degradation.
Consider sizing Exchange Server 2019 VMs to fit within the size of the physical NUMA node for best
performance. The following figure depicts an ESXi host with four NUMA nodes, each comprising 20
physical cores and 128GB of memory. The VM allocated with 20 vCPUs and 128 GB of memory can be
scheduled by ESXi onto a single NUMA node. Likewise, a VM with 40 vCPUs and 256 GB RAM can be
scheduled on 2 NUMA nodes.
Figure 3. NUMA Architecture Sizing Scenarios

A VM allocated with 24 vCPUs and 128 GB of memory must span NUMA nodes in order to accommodate
the four vCPUs that do not fit within a single node, which might cause the VM to incur some remote
memory access latency. The associated latency can be minimized or avoided through the appropriate
combination of vNUMA control options in the VM’s Advanced Configuration options. See Specifying NUMA
Control in the VMware vSphere Resource Management Guide.
While a VM allocated with 48 vCPUs and 256 GB of memory can be spread across multiple NUMA nodes
without incurring the memory access latency issues described earlier, such a configuration is neither
recommended nor supported, because the number of NUMA nodes (sockets) required to accommodate it
exceeds Microsoft’s 2-socket maximum.
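
The NUMA-fit arithmetic behind these scenarios is simple to script. The following minimal Python sketch uses the same assumed topology as Figure 3 (four nodes of 20 cores and 128 GB each) and reports how many NUMA nodes a proposed VM would span; the values are illustrative.

# How many NUMA nodes does a proposed VM span on a host with the Figure 3 topology?
import math

NODE_CORES = 20        # physical cores per NUMA node
NODE_MEMORY_GB = 128   # memory per NUMA node
HOST_NODES = 4         # NUMA nodes in the host

def numa_nodes_spanned(vcpus: int, memory_gb: int) -> int:
    nodes = max(math.ceil(vcpus / NODE_CORES), math.ceil(memory_gb / NODE_MEMORY_GB))
    if nodes > HOST_NODES:
        raise ValueError("VM does not fit on this host")
    return nodes

print(numa_nodes_spanned(20, 128))   # 1 - fits within a single NUMA node
print(numa_nodes_spanned(40, 256))   # 2 - spans two NUMA nodes
print(numa_nodes_spanned(24, 128))   # 2 - the extra four vCPUs force a second node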

For large environments, VMware strongly recommends that customers thoroughly test each configuration
scenario to determine whether additional latency associated with remote memory-addressing warrants
creating additional, smaller rather than larger VMs.
Verify that all ESXi hosts have NUMA enabled in the system BIOS. In some systems, NUMA is enabled
by disabling node interleaving.

2.1.9 vNUMA and CPU Hot Plug
Enabling CPU hot add for a VM on vSphere disables vNUMA for that VM. Because Exchange Server does
not benefit from either vNUMA or CPU hot add, VMware recommends not enabling CPU hot add for
Exchange Server 2019 VMs.
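
This setting can be verified and corrected programmatically. The following is a minimal pyVmomi sketch, assuming a vm object (vim.VirtualMachine) obtained as in Section 2.1.4; changing the hot add setting requires the VM to be powered off.

# Minimal pyVmomi sketch: ensure CPU hot add is disabled on an Exchange VM.
from pyVmomi import vim

if vm.config.cpuHotAddEnabled:
    spec = vim.vm.ConfigSpec(cpuHotAddEnabled=False)   # re-enables vNUMA eligibility
    vm.ReconfigVM_Task(spec)
    print(f"Disabled CPU hot add on {vm.name}")
else:
    print(f"CPU hot add is already disabled on {vm.name}")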

2.2     Memory Configuration Guidelines
This section provides guidelines for memory allocation to Exchange Server 2019 VMs. These guidelines
consider vSphere memory overhead and VM memory settings.

2.2.1 ESXi Memory Management Concepts
vSphere virtualizes guest physical memory by adding an extra level of address translation. Shadow page
tables make it possible to provide this additional translation with little or no overhead. Managing memory
in the hypervisor enables the following:
    •   Memory-sharing across VMs that have similar data – same guest operating systems
    •   Memory over-commitment – allocating more memory to VMs than is physically available on the
        ESXi host
    •   A memory-balloon technique – VMs that do not need all of their allocated memory give memory
        to VMs that require it
For a more detailed discussion of vSphere memory-management concepts, see the Memory Virtualization
Basics and Administering Memory Resources sections of the vSphere Resource Management Guide.

2.2.2 Virtual Machine Memory Concepts
The following figure illustrates the use of memory settings parameters in the VM.
Figure 4. Virtual Machine Memory Settings

The vSphere memory settings for a VM include the following parameters:
    •   Configured memory – memory size of VM assigned at creation
    •   Active Memory – the amount of physical memory that is being used by VMs in a vSphere
        infrastructure. vSphere allocates guest operating system memory on demand.

    •   Swappable – VM memory that can be reclaimed by the balloon driver or by vSphere swapping.
        Ballooning occurs before vSphere swapping. If this memory is in use by the VM (touched and in
        use), the balloon driver causes the guest operating system to swap. Also, this value is the size of
        the per-VM swap file that is created on the VMware vSphere Virtual Machine File System
        (VMFS).
    •   If the balloon driver is unable to reclaim memory quickly enough, or is disabled or not installed,
        vSphere forcibly reclaims memory from the VM using the VMkernel swapping mechanism.
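
As noted in the Swappable description above, the per-VM VMkernel swap file is sized from the difference between configured memory and the memory reservation. A minimal illustration of that relationship:

# Per-VM VMkernel swap (.vswp) file size = configured memory - memory reservation.
def vswp_size_gb(configured_gb: int, reservation_gb: int) -> int:
    return max(configured_gb - reservation_gb, 0)

print(vswp_size_gb(configured_gb=128, reservation_gb=0))     # 128 GB swap file
print(vswp_size_gb(configured_gb=128, reservation_gb=128))   # ~0 bytes (fully reserved)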

2.2.3 Memory Tax for Idle Virtual Machines
It is a common practice for Exchange Server administrators to allocate as many resources as possible to
an Exchange Server (subject to the supported maximum prescribed by Microsoft). Administrators typically
do so to prepare for a "worst-case demand" scenario, minimizing the administrative interventions required
in unexpected cases of increased loads and pressure on a production Exchange Server infrastructure. In
a physical Exchange Server environment, such preparation is usually advisable and not problematic.
However, VMware highly recommends against this practice for virtualized Exchange Servers running in a
vSphere infrastructure. This is due to the implications of unintended penalties associated with over-
allocating memory to VMs in a vSphere environment.
If a VM is not actively using all of its currently allocated memory, ESXi charges more for idle memory than
for memory that is in use. This is done to help prevent VMs from hoarding idle memory. The idle memory
tax is applied in a progressive fashion. The effective tax rate increases as the ratio of idle memory to
active memory for the VM rises. VMware strongly recommends that vSphere and Exchange Server
administrators allocate only as many resources as are required by Exchange Server workloads, rather
than allocating resources for worst-case scenarios. Using the Exchange Calculator and thoroughly testing
the recommended sizing in a controlled (pre-production) environment is the best way for administrators to
determine the proper amount of memory resources to allocate to a VM.
If necessary, customers can modify the idle memory tax rate by adjusting the Mem.IdleTax advanced
setting on the ESXi host. When combined with the Mem.SamplePeriod option, this allows customers to
control how vSphere determines target memory allocations for VMs in the infrastructure.
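
Where such tuning is genuinely required, the current values can be inspected through the host's OptionManager. The following is a minimal, read-only pyVmomi sketch, assuming a host object (vim.HostSystem) obtained via a container view as in Section 2.1.4; changed values can be written back with OptionManager.UpdateOptions(), but the defaults should normally be left in place.

# Minimal pyVmomi sketch: read the idle-memory-tax related advanced options on a host.
opt_mgr = host.configManager.advancedOption
for key in ("Mem.IdleTax", "Mem.SamplePeriod"):
    for opt in opt_mgr.QueryOptions(key):
        print(f"{host.name}: {opt.key} = {opt.value}")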

2.2.4 Allocating Memory to Exchange Virtual Machines
Microsoft has developed a thorough sizing methodology for Exchange Server that has matured with
recent versions. VMware recommends using the memory-sizing guidelines set by Microsoft. The amount
of memory required for Exchange Server 2019 is driven by the expected size of loads that will be
generated and the total number of Exchange Servers that will support the loads in the environment. Load
characteristics include (but are not limited to) the following:
    •   Number of mailbox users
    •   Profile of mailbox usage pattern (size and number of emails sent and received)
    •   Type and number of devices used for accessing emails
    •   Anti-virus and other messaging security and hygiene solutions deployed on the server
    •   Type and frequency of backup solution in use
    •   High availability and resilience requirements
Because Exchange Servers are memory-intensive and performance is often a key factor (for example, in
production environments), VMware recommends the following practices:
    •   Do not overcommit memory on ESXi hosts running Exchange workloads. If memory over-
        commitment cannot be avoided, use vSphere memory-allocation options to guarantee required
        memory size to the Exchange Server, or to limit memory access for other, non-essential VMs in
        the vSphere cluster.
    •   For production systems, it is possible to achieve this objective by setting a memory reservation to
        the configured size of the Exchange Server VM, as shown in the sketch at the end of this section.
    Note the following:
    o   Setting memory reservations might limit vSphere vMotion. A VM can be migrated only if the target
        ESXi host has free physical memory equal to or greater than the size of the reservation.
    o   Setting the memory reservation to the configured size of the VM results in a per-VM VMkernel
        swap file of near zero bytes that consumes less storage and eliminates ESXi host-level swapping.
        The guest operating system within the VM still requires its own page file.
    o   Reservations are recommended only when it is possible that memory might become
        overcommitted on hosts running Exchange VMs, when SLAs dictate that memory be guaranteed,
        or when there is a desire to reclaim space used by a VM swap file.
    o   There can be a slight performance benefit to setting a memory reservation, even if memory
        over-commitment in the vSphere cluster is not expected.
    •   It is important to right-size the configured memory of a VM. This might be difficult to determine in
        an Exchange environment because the Exchange JET cache is allocated based on memory
        present during service start-up. Understand the expected mailbox profile and recommended
        mailbox cache allocation to determine the best starting point for memory allocation.
    •   Do not disable the balloon driver (which is installed with VMware Tools ™) or any other ESXi
        memory-management mechanism.
    Note the following:
    o   Transparent Page Sharing (TPS) enables an ESXi host to utilize its available physical memory
        more efficiently and support more workloads. TPS is useful in scenarios where multiple VM
        siblings share the same characteristics (e.g., the same OS and applications). In this configuration,
        vSphere is able to avoid redundancy by sharing similar pages among the different Exchange
        Server VMs. This sharing is transparent to the applications and processes inside the VM.
For security reasons, inter-VM page-sharing is disabled by default on current versions of vSphere. While
a VM continues to benefit from TPS in this configuration (i.e., the VM is able to share pages internally
among its own processes and components), a greater benefit can be realized by enabling inter-VM page-
sharing. See Sharing Memory Across Virtual Machines in the vSphere Resource Management Guide.
Enable DRS to balance workloads in the ESXi host cluster. DRS and reservations can give critical
workloads the resources they require to operate optimally. More recommendations for using DRS with
Exchange Server 2019 are available in the Using vSphere Technologies with Exchange Server 2019
section below.
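
As recommended earlier in this section, a production Exchange Server VM can have its memory reservation set to its configured memory size. The following is a minimal pyVmomi sketch, assuming a vm object obtained as in Section 2.1.4; memoryReservationLockedToMax keeps the reservation in step with the configured memory if the VM is later resized.

# Minimal pyVmomi sketch: reserve all configured memory for an Exchange VM.
from pyVmomi import vim

configured_mb = vm.config.hardware.memoryMB

spec = vim.vm.ConfigSpec()
spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=configured_mb)
spec.memoryReservationLockedToMax = True   # "Reserve all guest memory" in the vSphere Client
vm.ReconfigVM_Task(spec)

print(f"Reserved {configured_mb} MB for {vm.name}; the per-VM swap file shrinks to near zero")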

2.2.5 Memory Hot Add, Over-subscription, and Dynamic Memory
vSphere exposes the ability to add memory to a VM while the VM is powered on. Modern operating
systems, including Windows, support this feature and are able to instantaneously detect and use the hot-
added memory.
vSphere enables an administrator to allocate virtual memory to VMs beyond the physically available
memory size of the ESXi host. This condition is called over-allocation, or over-subscription. Over-
subscription is possible and non-intrusive, and is an essential core benefit of virtualization, as repeated
testing has shown that VMs do not all fully utilize their allocated resources at the same time. If all VMs
request their resources at the same time, resource over-commitment on the ESXi host or cluster occurs.
Transient resource over-commitment is possible within a virtual environment. Frequent or sustained
occurrence of such incidents is problematic for critical applications such as Exchange Server.
Dynamic memory is a Microsoft Hyper-V construct that does not have a direct equivalence on vSphere.
Even in an over-commitment scenario, the VM on vSphere is never induced to believe that its allocated
memory has been physically reduced. vSphere uses other memory-management techniques for
arbitrating contentions during a resource over-commitment condition.
Microsoft Exchange Server’s JET cache is allocated based on the amount of memory available to the
operating system at the time of service start-up. After being allocated, the JET cache is distributed among
active and passive databases. With this model of memory pre-allocation for use by Exchange databases,
adding memory to a running Exchange VM provides no additional benefit unless the VM is rebooted or the
Exchange services are restarted. Consequently, memory hot add is neither usable by nor beneficial to an
Exchange Server VM and is therefore neither recommended nor supported. In contrast, removing memory
that JET has already allocated for database consumption impacts performance of the store worker and
indexing processes by increasing processing and storage I/O.
Microsoft support for the virtualization of Exchange Server 2019 states that the over-subscription and
dynamic allocation of memory for Exchange VMs is not supported. To help avoid confusion, refer to the
preceding paragraphs to understand why these requirements are not relevant to Exchange Servers
virtualized on the vSphere platform.
Over-subscription is different from over-commitment. Over-subscription is benign and does not impact
VMs. Over-commitment is the adverse extension of over-subscription and should be avoided in all cases.
However, if it’s expected that an ESXi cluster may occasionally experience resource contention as a
result of memory over-commitment, VMware recommends judiciously reserving all memory allocated to
Exchange Server VMs.
Because ESXi does not support hot-unplug (i.e., hot removal) of memory from a Windows VM, the only
way to reduce the amount of memory presented to a VM running Exchange Server 2019 is to power off
the VM and change the memory allocation. When the VM powers on again, the OS will see the new
memory size, and Exchange Server will reallocate the available memory to its worker processes. This is
not dynamic memory.
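
A practical way to confirm that a host running Exchange VMs is over-subscribed but not trending toward over-commitment is to compare the total configured VM memory against the host's physical memory. The following is a minimal pyVmomi sketch, assuming a host object (vim.HostSystem) obtained as in the earlier sketches.

# Minimal pyVmomi sketch: compare configured VM memory against host physical memory.
host_memory_gb = host.hardware.memorySize / (1024 ** 3)
configured_gb = sum(v.config.hardware.memoryMB for v in host.vm if v.config) / 1024

ratio = configured_gb / host_memory_gb
print(f"{host.name}: {configured_gb:.0f} GB allocated on {host_memory_gb:.0f} GB physical "
      f"({ratio:.2f}:1 over-subscription)")
if ratio > 1.0:
    print("Memory is over-subscribed; sustained active use beyond physical capacity would "
          "become over-commitment and should be avoided on hosts running Exchange VMs")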

2.3     Storage Virtualization
VMFS is a cluster file system that provides storage virtualization optimized for VMs. Each VM is
encapsulated in a small set of files and VMFS is the default storage system for these files on physical
SCSI disks and partitions. VMware supports Fibre Channel, iSCSI, and network-attached storage (NAS)
shared-storage protocols.
It is preferable to deploy VM files on shared storage to take advantage of vSphere vMotion, vSphere HA,
and DRS. This is considered a best practice for mission-critical Exchange Server deployments that are
often installed on third-party, shared-storage management solutions.
VMware storage virtualization can be categorized into three pillars of storage technology, as illustrated in
the following figure. The storage array is the physical storage pillar, comprising physical disks presented
as logical storage volumes in the next pillar. Storage volumes, presented from physical storage, are
formatted as VMFS datastores or with native file systems when mounted as raw device mappings. VMs
consist of virtual disks or raw device mappings that are presented to the guest operating system as SCSI
disks that can be partitioned and formatted using any supported file system.

Figure 5. VMware Storage Virtualization

Exchange Server has improved significantly in recent releases and Exchange Server 2019 continues
those improvements, making Exchange Server less storage I/O intensive than before. This reality informs
Microsoft’s preference for commodity-class direct-attached storage (DAS) for Exchange Server. While the
case for DAS and JBOD storage for Exchange appears reasonable from an I/O perspective, the
associated operational and administrative overhead for an enterprise-level production Exchange Server
infrastructure does not justify this guidance.
To overcome the increased failure rate and shorter lifespan of commodity storage, Microsoft routinely
recommends maintaining multiple copies of Exchange data across larger sets of storage and Exchange
Servers than operationally necessary.
While VMware supports the use of converged storage solutions for virtualizing Exchange Server on
vSphere, VMware recommends that customers using such solutions thoroughly benchmark and validate
the suitability of such solutions and engage directly with the applicable vendors for configuration, sizing,
performance and availability guidance.
Even with the reduced I/O requirements of an Exchange Server instance, storage access, availability, and
latency problems can still manifest in an Exchange Server infrastructure without careful planning.
VMware recommends setting up a minimum of four paths from an ESXi host to a storage array. To
accomplish this, the host requires at least two host bus adapter (HBA) ports.
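
Path counts per device can be confirmed from the host's multipathing information. The following is a minimal pyVmomi sketch, assuming a host object obtained as in the earlier sketches; it simply flags devices that fall below the recommended four paths.

# Minimal pyVmomi sketch: flag storage devices with fewer than four paths on a host.
storage = host.configManager.storageSystem.storageDeviceInfo
for lun in (storage.multipathInfo.lun if storage.multipathInfo else []):
    path_count = len(lun.path)
    status = "OK" if path_count >= 4 else "below the recommended minimum of four paths"
    print(f"{lun.id}: {path_count} path(s) - {status}")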

Figure 6. Storage Multi-pathing Requirements for vSphere

The terms used in the preceding figure are:
    •   HBA – a device that connects one or more peripheral units to a computer and manages data
        storage and I/O processing
    •   Fibre Channel (FC) – a gigabit-speed networking technology used to build storage area networks
        (SANs) and transmit data
    •   Storage Processor (SP) – a SAN component that processes HBA requests routed through an FC
        switch and handles the RAID/volume functionality of the disk array

2.3.1 Raw Device Mapping
VMFS also supports raw device mapping (RDM). RDM allows a VM to directly access a volume on the
physical storage subsystem and can be used only with Fibre Channel or iSCSI. RDM provides a symbolic
link or mount point from a VMFS volume to a raw volume. The mapping makes volumes appear as files in
a VMFS volume. The mapping file, rather than the raw volume, is referenced in the VM configuration.
Connectivity from the VM to the raw volume is direct and all data is stored using the native file system,
NTFS. In the case of a failure of the VMFS datastore holding the RDM mapping file, a new mapping file
can be created. Access to the raw volume and its data is restored, and no data loss occurs.
There is no performance difference between a VMFS datastore and an RDM. The following are the only
conditions that impose a requirement for RDM disks for VMs in current versions of vSphere:
    •   If the backup solution performs hardware-based VSS snapshots of VMs (or otherwise requires
        direct storage access)
    •   When the VM will be participating in a Windows Server Failover Clustering configuration that
        requires the clustered VMs to share the same disks


Because the Exchange Server clustering option does not require sharing disks among the nodes, the only
scenario that calls for RDM disks for a virtualized Exchange Server on vSphere is one in which the backup
solution vendor requires such a configuration.
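If you need to confirm whether an existing Exchange Server VM already uses RDMs, a short pyVmomi
sketch such as the following can report the backing type of each virtual disk. The VM name, vCenter
address, and credentials are hypothetical placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "exch-mbx-01")   # hypothetical VM name
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        backing = dev.backing
        if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            # RDM: the mapping file lives on a VMFS datastore, the data on the raw LUN.
            print(f"{dev.deviceInfo.label}: RDM ({backing.compatibilityMode}) -> {backing.deviceName}")
        else:
            # Regular virtual disk (VMDK) on a datastore.
            print(f"{dev.deviceInfo.label}: VMDK -> {backing.fileName}")
    view.DestroyView()
finally:
    Disconnect(si)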
The decision to use VMFS or RDM for Exchange data should be based on technical requirements. The
following table summarizes the considerations when making a decision between the two.
Table 2. VMFS and Raw Disk Mapping Considerations for Exchange Server 2019

VMFS
    •   Volume can contain many VM disk files, reducing management overhead
    •   Increases storage utilization and provides better flexibility and easier administration and
        management
    •   Supports existing and future vSphere storage virtualization features
    •   Fully supports VMware vCenter™ Site Recovery Manager™
    •   Supports the use of vSphere vMotion, vSphere HA, and DRS
    •   Supports VMFS volumes and virtual disks/VMDK files up to 62TB

RDM
    •   Ideal if disks must be dedicated to a single VM
    •   May be required to leverage array-level backup and replication tools (VSS) integrated with
        Exchange databases
    •   Facilitates data migration between physical servers and VMs using the LUN swing method
    •   Fully supports vCenter Site Recovery Manager
    •   Supports vSphere vMotion, vSphere HA, and DRS
    •   Supports presenting volumes of up to 64TB (physical compatibility) and 62TB (virtual
        compatibility) to the guest operating system


In-Guest iSCSI and Network-Attached Storage
Similar to RDM, in-guest iSCSI initiator-attached LUNs provide dedicated storage to a VM. Storage
presented using in-guest iSCSI is formatted natively using NTFS within the Windows guest operating
system and bypasses the storage management of the ESXi host. Presenting storage in this way requires
additional attention to the networking infrastructure and configuration at both the vSphere and the
physical levels.
Although VMware testing has shown that NAS-attached virtual disks perform well for Exchange
workloads, Microsoft does not currently support accessing Exchange data (mailbox databases, transport
queue, and logs) stored on network-attached storage. This includes accessing Exchange data using a
UNC path from within the guest operating system, as well as VMs with VMDK files located on NFS-
attached storage.
When using in-guest iSCSI to present storage to an Exchange Server VM, confirm that the iSCSI NIC is
exempted from Windows Server failover clustering and from any other non-storage-related processes or
components. In addition, VMware recommends that customers use jumbo frames and 10Gb (or faster)
Ethernet networks to support such a storage configuration option.
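Because in-guest iSCSI traffic flows through the VM’s network adapter, the standard vSwitch (and the
physical switch ports behind it) must allow jumbo frames end to end. The sketch below reads the MTU
configured on each standard vSwitch of every host; the connection details are placeholders, distributed
switches are not covered, and the guest NIC MTU still has to be verified inside Windows.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            mtu = vswitch.mtu or 1500          # mtu is unset when the default (1500) applies
            note = "" if mtu >= 9000 else "  <-- jumbo frames not enabled"
            print(f"{host.name} {vswitch.name}: MTU {mtu}{note}")
    view.DestroyView()
finally:
    Disconnect(si)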

2.3.2 Virtual SCSI Adapters
VMware provides two commonly used virtual SCSI adapters for Windows Server 2019: LSI Logic SAS
and VMware Paravirtual SCSI (PVSCSI). The default adapter when creating new VMs running Windows
on vSphere is LSI Logic SAS, and this adapter can satisfy the requirements of most workloads with no
additional drivers. A VM can be configured with up to four virtual SCSI adapters.
The LSI Logic SAS adapter can accommodate up to 15 storage devices (volumes) per controller, for a
total of 60 volumes per VM. The LSI Logic SAS controller is recommended for operating system volumes
and for workloads that do not require high IOPS.
Exchange Server 2019 continues to reduce the amount of I/O generated to access mailbox data. In
addition, Exchange Server 2019 now supports the use of larger volumes to accommodate multiple
Exchange Server databases per volume. This improvement enables customers to fully leverage vSphere’s
maximum datastore size (62TB) and to locate multiple Exchange Server mailbox database volumes
(VMDKs) in large datastores, instead of creating multiple smaller datastores to satisfy the constrained
volume size limits of previous Exchange Server versions.
The Paravirtual SCSI adapter is a high-performance vSCSI adapter developed by VMware to provide
optimal performance for virtualized business-critical applications. The advantage of the PVSCSI adapter
is that the added performance is delivered while minimizing the use of hypervisor CPU resources. This
leads to lower hypervisor overhead required to run storage I/O-intensive applications.
PVSCSI also supports a larger number of volumes per controller (64), enabling customers to create up to
256 volumes per VM. VMware recommends that customers limit the number of PVSCSI controllers
attached to a single VM to three if they intend to take advantage of this increased volumes-per-controller
feature, typically leaving the fourth virtual SCSI adapter slot for the LSI Logic SAS controller that hosts the
operating system volume. Based on this recommendation, customers should expect to be able to allocate
a maximum of 192 PVSCSI-connected volumes per VM.
In environments supporting thousands of users per mailbox server, VMware recommends that customers
use PVSCSI controllers for all Exchange Server mailbox database and transaction log volumes. To
ensure optimal utilization, VMware recommends that customers allocate as many PVSCSI controllers as
possible in these environments and ensure that all volumes attached to the VM are evenly distributed
across all allocated PVSCSI controllers.
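The following sketch illustrates one way to audit that distribution with pyVmomi: it lists each SCSI
controller on a VM and the disks attached to it, making an uneven spread easy to spot. The VM name and
connection details are placeholders.

import ssl
from collections import defaultdict
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "exch-mbx-01")   # hypothetical VM name
    controllers = {}                      # controller key -> human-readable label
    disks = defaultdict(list)             # controller key -> labels of attached disks
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualSCSIController):
            kind = ("PVSCSI" if isinstance(dev, vim.vm.device.ParaVirtualSCSIController)
                    else type(dev).__name__)
            controllers[dev.key] = f"{kind} (SCSI bus {dev.busNumber})"
        elif isinstance(dev, vim.vm.device.VirtualDisk):
            disks[dev.controllerKey].append(dev.deviceInfo.label)
    for key, label in controllers.items():
        attached = disks.get(key, [])
        print(f"{label}: {len(attached)} disk(s) -> {', '.join(attached) or 'none'}")
    view.DestroyView()
finally:
    Disconnect(si)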


Figure 7. Storage Distribution with Multiple vSCSI Adapters

2.3.3 Virtual SCSI Queue Depth
Queue depth denotes the number of I/Os that can pass through a storage path at one time. All other I/Os
beyond this number are queued until the path has more room to accommodate them. Because there are
multiple points through which an I/O will pass from within a guest operating system before it reaches the
physical storage device, customers should pay special attention to their storage configuration to avoid
unnecessary and undesirable I/O queuing for their Exchange Server VMs.
When presenting storage to a VM, the virtual disk is connected to one virtual SCSI controller. Each SCSI
controller has a finite (and configurable) queue depth, which varies among the virtual SCSI controller
types available in vSphere.
The PVSCSI controller is the optimal SCSI controller for an I/O-intensive application on vSphere. This
controller has a default queue depth of 64 per device and 254 per controller (double that of an LSI Logic
SAS controller). The PVSCSI controller’s per-device and per-controller queue depths can also be
increased to 254 and 1024 respectively, providing even greater I/O throughput for the virtualized
workload.
Because of these performance and throughput benefits, VMware recommends that customers choose
PVSCSI as the virtual SCSI controller for their Exchange Server data, database, and log volumes.
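Raising the PVSCSI queue depths is done inside the Windows guest through a driver registry parameter.
The sketch below (Python’s standard winreg module, run as Administrator in the guest) writes the key
path and value commonly cited in VMware’s knowledge base guidance on PVSCSI queue depths; treat
both as assumptions to verify against the current KB article, and reboot the guest afterwards.

import winreg

# Registry location and value for the PVSCSI driver as commonly documented by VMware;
# confirm against the current KB guidance before using in production.
KEY_PATH = r"SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device"
PARAMETER = "RequestRingPages=32,MaxQueueDepth=254"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DriverParameter", 0, winreg.REG_SZ, PARAMETER)

print("PVSCSI DriverParameter written; reboot the guest for the change to take effect.")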
Note the following:
    •   Using PVSCSI for Exchange Server and Exchange DAG is officially supported on vSphere 6.7,
        although such configurations have been in common use since vSphere 5.1
    •   The PVSCSI driver is not native to the Windows operating system. Therefore, customers using
        PVSCSI controllers must keep the VMware Tools instances on their VMs updated on a regular
        basis. VMware Tools is the vehicle for delivering the PVSCSI drivers to the OS (a sketch for
        auditing Tools status on PVSCSI-equipped VMs follows this list).
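A small pyVmomi sketch along these lines (placeholder connection details, pyVmomi assumed to be
installed) can periodically flag VMs that use PVSCSI controllers but whose VMware Tools are not
reported as current:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:
            continue                                  # skip VMs whose configuration is unavailable
        uses_pvscsi = any(isinstance(d, vim.vm.device.ParaVirtualSCSIController)
                          for d in vm.config.hardware.device)
        if uses_pvscsi and vm.guest.toolsVersionStatus2 != "guestToolsCurrent":
            print(f"{vm.name}: VMware Tools status is {vm.guest.toolsVersionStatus2}; "
                  "update Tools to keep the PVSCSI driver current")
    view.DestroyView()
finally:
    Disconnect(si)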
