GPU621/False Sharing

Analyzing False Sharing

Group Members

- Kevin Chou

Introduction

False sharing is a well-known problem that can occur when taking advantage of parallelization in modern multi-processor computers with shared memory. Although the results will be correct, frequent false sharing results in significant performance loss and reduction in scalability, effectively nullifying the benefits of parallel computing.


The Cache

What is a Cache?

Memory hierarchy.png

Before we can talk about false sharing, we need a brief overview of a cache. Like other storage mediums, a cache is a place used to store data and instructions that the CPU may use.

So why do we have so many types?

In an ideal world, the CPU would have lightning-fast access to all data on your machine at any given time to take full advantage of the power and speed of modern-day CPUs. In reality, there are limitations and trade-offs when balancing speed, storage size, location, and cost. As a compromise, a computer utilizes many types of memory. To name the important ones in the hierarchy, we have secondary memory like hard drives and SSDs, which can permanently hold vast quantities of data but are slow. Next is Dynamic Random Access Memory (DRAM), which is much smaller, faster, and volatile. Last, we have the cache, or Static Random Access Memory (SRAM), which is extremely fast, but even smaller and more expensive.

If the CPU relied solely on getting data from secondary memory, the slow access speed of the storage device would become a massive bottleneck in computation time. There would be huge gaps of time where the CPU sits around doing nothing, waiting for data. To the end user, the machine would appear extremely sluggish and unresponsive.

At the same time, only a small amount of data is needed at any given moment. Even if you brought everything in from memory, most of it would go unused. Utilizing the hierarchy of memory storage, the most relevant data can be pulled from secondary memory ahead of time and stored in the cache and RAM. When searching for data, the CPU first checks the cache; if the data is there, it is a cache hit. If it is not there, a cache miss, the CPU moves down the hierarchy until it finds it. Minimizing the number of cache misses ensures the CPU has a steady flow of data it can quickly retrieve and compute on.
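
The cost of cache misses is easy to observe. The following is a small illustrative sketch (not part of the original workshop code, and it assumes the matrix is much larger than the cache): the row-major traversal walks memory contiguously and reuses each cache line it brings in, while the column-major traversal jumps across cache lines on every access and misses far more often, even though both loops do the same amount of arithmetic.

#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    const int N = 4096;                 // assumed large enough that the matrix exceeds the cache
    std::vector<int> matrix(N * N, 1);
    long long sum = 0;

    using namespace std::chrono;

    // row-major traversal: consecutive accesses fall on the same cache line
    auto ts = steady_clock::now();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += matrix[i * N + j];
    auto te = steady_clock::now();
    std::cout << "row-major    : " << duration_cast<milliseconds>(te - ts).count() << " ms\n";

    // column-major traversal: each access jumps N * sizeof(int) bytes, landing on a different cache line
    ts = steady_clock::now();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += matrix[i * N + j];
    te = steady_clock::now();
    std::cout << "column-major : " << duration_cast<milliseconds>(te - ts).count() << " ms\n";

    std::cout << "checksum = " << sum << std::endl;   // keep sum live so the loops are not optimized away
}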


Cache Coherence and Cache Line

Cache coherence.png

Each processor has its own local cache. When data is needed, a fixed-size block of memory is transferred to the cache; this block is known as a cache line.

However, if processors each modify their own copy of the same data, how do we know which one is correct? We can’t store all the different versions or pick a random one to be the correct version; that would create havoc in our computer systems.

The answer is we need cache coherence. Cache coherence is defined as the uniformity of shared resource data that ends up stored in multiple local caches. In other words, we must keep all the local caches synchronized. The challenge of doing so is known as the Cache Coherence Problem. To solve this problem, multi-processor systems rely on cache coherence protocols to manage and maintain the cache.






False Sharing

What is False Sharing?

Now that we understand the basics of a cache, how does this relate to the concept of false sharing?

False sharing occurs when multiple processors modify data that resides on the same cache line. When this data is written back to memory, the shared cache lines are marked as invalid or dirty. Processors must then fetch an updated copy, introducing significant delays due to memory access speeds. As you increase the number of processors, this quickly spirals out of control, as there is an increasing chance that the cache line is invalid.

The key thing to note is that you do not need to modify the same piece of data. If the modified data happens to belong to the same cache line, the cache will be invalidated, forcing a memory update to maintain cache coherency.
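
To make this concrete, here is a minimal sketch (an illustrative addition, not taken from the workshop). Two OpenMP threads each increment only their own counter, yet because counters[0] and counters[1] are adjacent ints they almost certainly sit on the same cache line, so every write by one thread invalidates the copy held by the other thread's core.

#include <iostream>
#include <omp.h>

int main()
{
    int counters[2] = { 0, 0 }; // adjacent elements: almost certainly on the same cache line

    #pragma omp parallel num_threads(2)
    {
        int id = omp_get_thread_num();

        // each thread updates only its own element, yet every write
        // invalidates the other core's copy of the shared cache line
        for (int i = 0; i < 100000000; i++)
            counters[id]++;
    }

    std::cout << counters[0] << " " << counters[1] << std::endl;
}

Spacing the counters apart so that each sits on its own cache line, as described in the Solutions section below, removes this interference.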


Example

In workshop 2, we briefly encountered false sharing. However, we did not get a formal explanation in lecture, so this section will serve to provide more context to the problem.

We were asked to multi-thread a serial version of a simple algorithm that calculates pi by integrating 1/(1 + x^2). The serial version used a scalar sum variable to accumulate the calculations. For our naïve attempt, we identified a potential issue with using a scalar sum in a multi-threaded program: we have no direct control over the order in which threads finish their assigned work, and letting multiple threads update the same scalar results in a race condition. This is not good. To get around this, we came up with the idea of changing sum to an array. Now, each thread can index this array by its own thread id and store its calculations there.

#include <iostream>
#include <iomanip>
#include <cstdlib>
#include <chrono>
#include <omp.h>

#define NUM_THREADS 8 // number of threads to request

using namespace std::chrono;

// report system time
void reportTime(const char* msg, steady_clock::duration span)
{
    auto ms = duration_cast<milliseconds>(span);
    std::cout << msg << " - took - " <<
        ms.count() << " milliseconds" << std::endl;
}

int main(int argc, char** argv)
{
    if (argc != 2)
    {
        std::cerr << argv[0] << ": invalid number of arguments\n";
        std::cerr << "Usage: " << argv[0] << "  no_of_slices\n";
        return 1;
    }

    int n = std::atoi(argv[1]);
    steady_clock::time_point ts, te;

    // calculate pi by integrating the area under 1/(1 + x^2) in n steps
    ts = steady_clock::now();

    int actual_thread_count;
    double pi = 0.0f;
    double sum[NUM_THREADS] = { 0.0f };
    double step = 1.0 / (double)n;

    omp_set_num_threads(NUM_THREADS);
    #pragma omp parallel
    {
        int id, num_threads;
        double x;

        id = omp_get_thread_num();
        num_threads = omp_get_num_threads();

        // get master thread to return how many threads were actually created
        if (id == 0)
        {
            actual_thread_count = num_threads;
        }

        // each thread is responsible for calculating the area of a specific set of sections underneath the curve
        for (int i = id; i < n; i = i + num_threads)
        {
            x = ((double)i + 0.5f) * step;
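            // each thread writes only to its own sum[id], but neighbouring
            // elements of sum share a cache line, so these writes falsely share it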
            sum[id] += 1.0f / (1.0f + x * x);
        }
    }

    // sum up each calculation to get approximation of pi
    for (int i = 0; i < actual_thread_count; i++)
    {
        pi += 4 * sum[i] * step;
    }

    te = steady_clock::now();

    std::cout << "n = " << n <<
        std::fixed << std::setprecision(15) <<
        "\n pi(exact) = " << 3.141592653589793 <<
        "\n pi(calcd) = " << pi << std::endl;
    reportTime("Integration", te - ts);
}


Results

Execution time of the naive implementation with compiler optimizations disabled (Od)

The algorithm calculates the correct answer, but the performance is absolutely terrible. The reason is that the sum array is a contiguous block of memory, with each double taking up 8 bytes. Assuming a 64-byte cache line, the entire 8-element array spans at most two cache lines, so several threads' accumulators inevitably share a cache line, resulting in false sharing. Although there were cases where a higher thread count produced better results, there were many cases that performed worse than a single thread. This is due to the scheduling of thread execution, which is out of the programmer's hands. It is possible that the selected schedule managed to minimize the frequency of false sharing, giving better performance. However, this is extremely unreliable, so we need a better solution to false sharing.










Solutions to False Sharing

Padding

Assuming a 64-byte cache line, what if each thread's element in the array were the only piece of data in its cache line?

Array comparison between the naïve and padded implementations. Colored blocks represent the index handled by each thread. Grey blocks represent empty space.

This observation was the basis of our first solution to false sharing. The idea was to pad out each element in the array with enough space to act as a boundary or separator between cache lines. If we can force each thread's array element onto its own cache line, then each thread works on a distinct cache line, thus eliminating false sharing.

In the workshop, we accomplished this by making several modifications to the original code:


1. Define a padding size so that each thread's row of the array occupies at least one full cache line. With PAD set to 15, each row holds 16 doubles (128 bytes), so two threads' accumulators can never land on the same 64-byte cache line regardless of alignment.

#define PAD 15


2. Change sum to a 2D array.

double sum[NUM_THREADS][PAD + 1] = { 0.0f };


3. Modify loops to only store data in the first element of each row of the 2D array

for (int i = id; i < n; i = i + num_threads)
{
    x = ((double)i + 0.5f) * step;
    sum[id][0] += 1.0f / (1.0f + x * x);
}

for (int i = 0; i < actual_thread_count; i++)
{
    pi += 4 * sum[i][0] * step;
}

Drawbacks

While our idea works, there are several issues that prevent it from being an ideal solution:


- tons of wasted memory

For every 8 bytes storing a thread's calculation, the remaining 120 bytes of its row are empty padding. This wasted space grows rapidly as the number of threads increases.


- the cache line only contains one piece of data with the rest being completely empty

This is what we wanted, but it is incredibly inefficient. The whole point of the cache is to improve performance by holding relevant data to minimize cache misses. In practice, a processor will likely be juggling other threads with their own data that may or may not be related to this program. By padding out the cache line, we are forcing additional cache misses to occur.


- must know the cache line size

The cache line size can vary between machines, making hard-coded padding values not very portable.
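
One portable mitigation, sketched below (an illustrative addition, not part of the workshop code), is to query the suggested padding size at compile time with std::hardware_destructive_interference_size from <new> (C++17) instead of hard-coding 64. Compiler support for this constant varies, so the 64-byte fallback here is an assumption rather than a guarantee.

#include <cstddef>
#include <iostream>
#include <new>

// use the standard constant when the compiler provides it, otherwise fall back
// to an assumed 64-byte cache line
#ifdef __cpp_lib_hardware_interference_size
constexpr std::size_t cache_line_size = std::hardware_destructive_interference_size;
#else
constexpr std::size_t cache_line_size = 64;
#endif

// aligning each accumulator to the cache line size also pads sizeof(PaddedSum)
// up to that size, so no two threads' sums can share a cache line
struct alignas(cache_line_size) PaddedSum
{
    double value = 0.0;
};

int main()
{
    PaddedSum sum[8]; // one padded accumulator per thread, used as sum[id].value += ...
    std::cout << "cache_line_size = " << cache_line_size
              << ", sizeof(sum) = " << sizeof(sum) << std::endl;
}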

Synchronization

Other Alternatives

Conclusion
