The OpenStack Kilo release, extending upon efforts that commenced during the Juno cycle, includes a number of key enhancements aimed at improving guest performance. These enhancements allow OpenStack Compute (Nova) to have greater knowledge of compute host layout and, as a result, make smarter scheduling and placement decisions when launching instances. Administrators wishing to take advantage of these features can now create customized performance flavors to target specialized workloads, including Network Function Virtualization (NFV) and High Performance Computing (HPC).

Historically, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation, and was referred to as Uniform Memory Access (UMA).

In modern multi-socket x86 systems, system memory is divided into zones (called cells or nodes) and associated with particular CPUs. This type of division has been key to the increasing performance of modern systems as focus has shifted from increasing clock speeds to adding more CPU sockets, cores, and, where available, threads. An interconnect bus provides connections between nodes, so that all CPUs can still access all memory. While the memory bandwidth of the interconnect is typically faster than that of an individual node, it can still be overwhelmed by concurrent cross-node traffic from many nodes. The end result is that while NUMA facilitates faster memory access for CPUs local to the memory being accessed, memory access for remote CPUs is slower.

Newer motherboard chipsets expand on this concept by also providing NUMA-style division of PCIe I/O lanes between CPUs. On such systems, workloads receive a performance boost not only when their memory is local to the CPU on which they are running but also when the I/O devices they use are, and a (relative) degradation where this is not the case.

By way of example, by running `numactl --hardware` on a Red Hat Enterprise Linux 7 system I can examine the NUMA layout of its hardware. The output tells me that this system has two NUMA nodes, node 0 and node 1. Each node has 4 CPU cores and 8 GB of RAM associated with it. We'll be coming back to this topic in a later post in this series.
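To make the topology described above concrete, here is a small sketch that parses `numactl --hardware`-style output into a per-node summary. The sample text is illustrative only, hand-written to match a two-node system with 4 cores and roughly 8 GB of RAM per node like the one discussed; real output will differ by machine, and the `parse_numactl` helper is not part of any OpenStack or numactl API.

```python
import re

# Illustrative numactl --hardware output (not captured from a real host):
# two NUMA nodes, each with 4 CPU cores and ~8 GB of RAM.
SAMPLE = """\
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 6435 MB
node 1 cpus: 4 5 6 7
node 1 size: 8191 MB
node 1 free: 6502 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
"""

def parse_numactl(text):
    """Extract each node's CPU list and memory size from numactl-style output."""
    nodes = {}
    for line in text.splitlines():
        m = re.match(r"node (\d+) cpus: (.*)", line)
        if m:
            cpus = [int(c) for c in m.group(2).split()]
            nodes.setdefault(int(m.group(1)), {})["cpus"] = cpus
            continue
        m = re.match(r"node (\d+) size: (\d+) MB", line)
        if m:
            nodes.setdefault(int(m.group(1)), {})["size_mb"] = int(m.group(2))
    return nodes

topology = parse_numactl(SAMPLE)
for node_id, info in sorted(topology.items()):
    print(f"node {node_id}: {len(info['cpus'])} cores, {info['size_mb']} MB")
```

On the sample above this prints one line per node, showing 4 cores and 8191 MB for each; a scheduler that is NUMA-aware uses exactly this kind of per-node inventory when deciding where to place a guest's vCPUs and memory.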