Team False Sharing
Revision as of 15:19, 17 December 2017

Analyzing False Sharing and Ways to Eliminate False Sharing

Team Members

  1. Harika Naidu
  2. Mithilan Sivanesan

Introduction

What is False Sharing (aka cache line ping-ponging)?
False sharing is a sharing pattern that degrades performance when multiple threads share data. It arises when two or more threads modify or use data that happen to be close enough in memory to end up on the same cache line. False sharing occurs when the threads repeatedly update their respective data in a way that makes the cache line migrate back and forth between the threads' caches. In this article, we will look at some examples that demonstrate false sharing, tools to analyze it, and two coding techniques we can implement to eliminate it.

Cache Coherence

In Symmetric Multiprocessor (SMP) systems, each processor has a local cache. The local cache is a smaller, faster memory which stores copies of data from frequently used main memory locations. Cache lines are closer to the CPU than main memory and are intended to make memory access more efficient. In a shared memory multiprocessor system with a separate cache for each processor, it is possible to have many copies of shared data: one copy in main memory and one in the local cache of each processor that requested it. When one of the copies is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.

To ensure data consistency across multiple caches, multiprocessor-capable Intel® processors follow the MESI (Modified/Exclusive/Shared/Invalid) protocol. On first load of a cache line, the processor marks the cache line as ‘Exclusive’. As long as the cache line is marked exclusive, subsequent loads are free to use the existing data in cache. If the processor sees the same cache line loaded by another processor on the bus, it marks the cache line as ‘Shared’. If the processor stores to a cache line marked ‘S’, the cache line is marked ‘Modified’ and all other processors are sent an ‘Invalid’ cache line message. If the processor sees the same cache line, now marked ‘M’, being accessed by another processor, it stores the cache line back to memory and marks it as ‘Shared’. The other processor accessing the same cache line incurs a cache miss.

[Figure: Coherent.gif — cache coherence illustration]


Identifying False Sharing

False sharing is a well-known performance issue on SMP systems, where each processor has a local cache. It occurs when threads on different processors modify variables that reside on the same cache line. This invalidates the cache line and forces a memory update to maintain cache coherency, which hurts performance.
[Figure: CPUCacheline.png — threads on two CPUs modifying adjacent variables that share a cache line]
The frequent coordination required between processors when cache lines are marked ‘Invalid’ requires cache lines to be written to memory and subsequently loaded. False sharing increases this coordination and can significantly degrade application performance. In the figure above, threads 0 and 1 require variables that are adjacent in memory and reside on the same cache line. The cache line is loaded into the caches of CPU 0 and CPU 1. Even though the threads modify different variables, the cache line is invalidated, forcing a memory update to maintain cache coherency.

#include <iostream>
#include <omp.h>
#define NUM_THREADS 8
#define SIZE 1000000

int main(int argc, const char* argv[]) {
    int* a = new int[SIZE];
    int* b = new int[SIZE];
    int* sum_local = new int[NUM_THREADS];
    int sum = 0;
    int threadsUsed = 0;

    for (int i = 0; i < SIZE; i++) { // initialize arrays
        a[i] = 1;
        b[i] = 1;
    }

    omp_set_num_threads(NUM_THREADS);
    double start = omp_get_wtime();
#pragma omp parallel
{
    int threadNum = omp_get_thread_num();
    sum_local[threadNum] = 0;
#pragma omp single
    threadsUsed = omp_get_num_threads();
#pragma omp for
    for (int i = 0; i < SIZE; i++) { // calculate sum of products; adjacent
                                     // sum_local elements share a cache line
        sum_local[threadNum] += a[i] * b[i];
    }
#pragma omp atomic
    sum += sum_local[threadNum];
}
    double elapsed = omp_get_wtime() - start;

    std::cout << sum << std::endl;
    std::cout << "Threads Used: " << threadsUsed << std::endl;
    std::cout << "Time: " << elapsed << std::endl;

    delete[] a;
    delete[] b;
    delete[] sum_local;
    return 0;
}

[Figure: ExecVsThreadsFalse.png — execution time vs. number of threads for the false-sharing version]

Eliminating False Sharing

Thread Local Variables

Padding