Fall 2019 SPO600 Weekly Schedule

 
Revision as of 12:44, 1 November 2019

This is the schedule and main index page for the SPO600 Software Portability and Optimization course for Fall 2019.

It's Alive!
This SPO600 weekly schedule will be updated as the course proceeds - dates and content are subject to change. The cells in the summary table will be linked to relevant resources and labs as the course progresses.

Schedule Summary Table

This is a summary/index table. Please follow the links in each cell for additional detail which will be added below as the course proceeds -- especially for the Deliverables column.

Week | Week of... | Class I (Tuesday 1:30-3:15, Room B1024) | Class II (Friday 11:40-1:25, Room K1263) | Deliverables (Summary - click for details)
1 | Sep 2 | Introduction / How is code accepted into an open source project? (Lab 1) / Computer Architecture Overview | Compiled C Lab (Lab 2) | Set up accounts.
2 | Sep 9 | Intro to Assembler / Using Makefiles | Assembler Lab (Lab 3) | Blog your conclusion to Labs 1 and 2, blog about your initial work on Lab 3, and set up Slack access.
3 | Sep 16 | Sysadmin for Devs | Assembler Lab (Lab 3) Continued | Blog about Lab 3.
4 | Sep 23 | Binary Representation of Data / Data Type Selection / Algorithm Selection | Algorithm Selection (Lab 4) | Blog your Lab 4 results.
5 | Sep 30 | SIMD and Vectorization / Inline Assembler | SIMD Lab (Lab 5) | Blog your Lab 5 results.
6 | Oct 7 | Compiler Optimizations / Computer Resources and Performance / Benchmarking and Profiling | SIMD Lab (Continued) (Lab 5) | Blog your Lab 5 results.
7 | Oct 14 | Building Software / Projects! | Project selection | Catch up on any missed labs, blog about your project selection progress.
- | Oct 21 | Reading Week | |
8 | Oct 28 | Memory Ordering / Barriers / Acquire-Release Semantics | Project Hacking | Blog about your project.
9 | Nov 4 | Atomics | Project Hacking | Blog about your project.
10 | Nov 11 | Intrinsics | Project Hacking | Blog about your project.
11 | Nov 18 | ifunc | Project Hacking | Blog about your project.
12 | Nov 25 | Projects | Project Hacking | Blog about your project.
13 | Dec 2 | Projects | Wrap-up Discussion | Blog about your project.
Exam | Dec 9 | Exam Week - No exam in this course! | |

Evaluation

Category | Percentage | Evaluation Dates
Communication | 20% | September (Oct 2 - 5%), October (November 10 - 5%), November (5%), end of course (5%).
Quizzes | 10% | May be held during any class, usually at the start of class. A minimum of 5 one-page quizzes will be given. No make-up/retake option is offered if you miss a quiz. Lowest 3 scores will not be counted. Students with Test Centre accommodations may choose to write the quizzes in the class, or alternately write a monthly quiz in the Test Centre.
Labs | 10% | See deliverables column above. All labs must be submitted by the end of the course, but it is best if you stay on top of the labs and submit according to the table above.
Project work | 60% | 3 stages: 15% (Nov 8), 20% (Nov 29), 25% (Dec 11).

Week 1

Week 1 - Class I

Introduction to the Problems

Porting and Portability
  • Most software is written in a high-level language which can be compiled into machine code for a specific computer architecture. In many cases, this code can be compiled for multiple architectures. However, there is a lot of existing code that contains some architecture-specific code fragments written in architecture-specific high-level code or in Assembly Language.
  • Reasons that code is architecture-specific:
    • System assumptions that don't hold true on other platforms
    • Code that takes advantage of platform-specific features
  • Reasons for writing code in Assembly Language include:
    • Performance
    • Atomic Operations
    • Direct access to hardware features, e.g., CPUID registers
  • Most of the historical reasons for including assembler are no longer valid. Modern compilers can out-perform most hand-optimized assembly code, atomic operations can be handled by libraries or compiler intrinsics, and most hardware access should be performed through the operating system or appropriate libraries.
  • A new architecture has appeared: AArch64, which is part of ARMv8. This is the first new computer architecture to appear in several years (at least, the first mainstream computer architecture).
  • At this point, most key open source software (the software typically present in a Linux distribution such as Ubuntu or Fedora, for example) now runs on AArch64. However, it may not run as well as on older architectures (such as x86_64).
Benchmarking and Profiling

Benchmarking involves testing software performance under controlled conditions so that the performance can be compared to other software, the same software operating on other types of computers, or so that the impact of a change to the software can be gauged.

Profiling is the process of analyzing software performance on a finer scale, determining resource usage per program part (typically per function/method). This can identify software bottlenecks and potential targets for optimization.

Optimization

Optimization is the process of evaluating different ways that software can be written or built and selecting the option that has the best performance tradeoffs.

Optimization may involve substituting software algorithms, altering the sequence of operations, using architecture-specific code, or altering the build process. It is important to ensure that the optimized software produces correct results and does not cause an unacceptable performance regression for other use-cases, system configurations, operating systems, or architectures.

The definition of "performance" varies according to the target system and the operating goals. For example, in some contexts, low memory or storage usage is important; in other cases, fast operation; and in other cases, low CPU utilization or long battery life may be the most important factor. It is often possible to trade off performance in one area for another; using a lookup table, for example, can reduce CPU utilization and improve battery life in some algorithms, in return for increased memory consumption.

Most advanced compilers perform some level of optimization, and the options selected for compilation can have a significant effect on the trade-offs made by the compiler, affecting memory usage, execution speed, executable size, power consumption, and debuggability.

Build Process

Building software is a complex task that many developers gloss over. The simple act of compiling a program invokes a process with five or more stages, including pre-processing, compiling, optimizing, assembling, and linking. However, a complex software system will have hundreds or even thousands of source files, as well as dozens or hundreds of build configuration options, auto configuration scripts (cmake, autotools), build scripts (such as Makefiles) to coordinate the process, test suites, and more.

The build process varies significantly between software packages. Most software distribution projects (including Linux distributions such as Ubuntu and Fedora) use a packaging system that further wraps the build process in a standardized script format, so that different software packages can be built using a consistent process.

In order to get consistent and comparable benchmark results, you need to ensure that the software is being built in a consistent way. Altering the build process is one way of optimizing software.

Note that the build time for a complex package can range up to hours or even days!

General Course Information

  • Course resources are linked from the CDOT wiki, starting at https://wiki.cdot.senecacollege.ca/wiki/SPO600 (Quick find: This page will usually be Google's top result for a search on "SPO600").
  • Coursework is submitted by blogging.
  • Quizzes will be short (1 page) and will be held without announcement at any time, generally at the start of class. There is no opportunity to re-take a missed quiz, but your lowest three quiz scores will not be counted, so do not worry if you miss one or two.
    • Students with test accommodations: an alternate monthly quiz is available in the Test Centre. See the professor for details.
  • Course marks (see Weekly Schedule for dates):
    • 60% - Project Deliverables
    • 20% - Communication (Blog and Wiki writing)
    • 20% - Labs and Quizzes (10% labs - completed/not completed; 10% for quizzes - lowest 3 scores not counted)
  • All classes will be held in an Active Learning Classroom -- you are encouraged to bring your own laptop to class. If you do not have a laptop, consider signing one out of the Learning Commons for class, or using a smartphone with an HDMI adapter.
  • For more course information, refer to the SPO600 Weekly Schedule (this page), the Course Outline, and SPO600 Course Policies.

Course and Setup: Accounts, agreements, servers, and more

How open source communities work


Week 1 - Class II

  • Compiler Operation
    • Stages of Compilation
      1. Preprocessing
      2. Compiling
      3. Assembling
      4. Linking
  • Analyzing compiler output
    • Disassembly
  • Compiled C Lab (Lab 2)
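
The stages listed above can be observed by telling GCC to stop after each one. A minimal sketch (the file names are only an example; -E, -S, -c, and objdump -d are standard GCC/binutils options):

    /* hello.c - a trivial program used to look at each stage of compilation */
    #include <stdio.h>

    int main(void) {
        printf("Hello, SPO600!\n");
        return 0;
    }

    /*
     * gcc -E hello.c -o hello.i     stop after preprocessing (expanded source)
     * gcc -S hello.c -o hello.s     stop after compiling (assembly language)
     * gcc -c hello.c -o hello.o     stop after assembling (object file)
     * gcc    hello.c -o hello       run every stage, including linking
     * objdump -d hello              disassemble the executable for analysis
     */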

Week 1 Deliverables

  1. Course setup:
    1. Set up your SPO600 Communication Tools - in particular, set up a blog and add it to Planet CDOT (via the Planet CDOT Feed List).
    2. Add yourself to the Current SPO600 Participants page (leave the projects columns blank).
    3. Generate a pair of keys for SSH and email the public key to your professor, so that he can set up your access to the class servers.
    4. Optional (strongly recommended): Set up a personal Linux system.
    5. Optional: Purchase an AArch64 development board (such as a 96Boards HiKey, or a Raspberry Pi 3 or 4). If you use a Pi, install a 64-bit Linux operating system on it, not a 32-bit version.
  2. Complete Lab 1 and write it up on your blog.

Week 2

Week 2 - Class I

  • Make and Makefiles
  • Assembly Language
  • Assembler Lab (Lab 3)

Week 2 - Class II

Week 2 Deliverables

  • Blog your results and conclusion to the Code Review Lab (Lab 1) and the Compiled C Lab (Lab 2)
  • Blog about your initial work on the Assembler Lab (Lab 3)
  • Set up your account on the Seneca Open Source Slack Workspace

Week 3

Week 3 - Class I

  • Sysadmin for Devs
    • In-class discussion of tips and tricks for efficient work on a Linux server

Week 3 - Class II

  • Finish the Assembler Lab (Lab 3)

Week 3 - Deliverables

  • Finish and blog your detailed results for the Assembler Lab (Lab 3)

Week 4

Week 4 - Class I

  • Binary Representation of Data
    • Integers
      • Integers are the basic building block of binary numbers.
      • In an unsigned integer, the bits are numbered from right to left starting at 0, and the value of each bit is 2^bit. The value represented is the sum of each bit multiplied by its corresponding bit value. The range of an unsigned integer is 0 to 2^bits - 1, where bits is the number of bits in the unsigned integer.
      • Signed integers are generally stored in two's-complement format, where the highest bit is used as a sign bit. If that bit is set, the value represented is -(!value)-1, where ! is the NOT operation (each bit gets flipped from 0→1 and 1→0).
    • Fixed-point
      • A fixed-point value is encoded the same as an integer, except that some of the bits are fractional -- they're considered to be to the right of the "binary point" (binary version of "decimal point" - or more generically, the radix point). For example, binary 000001.00 is decimal 1.0, and 000001.11 is decimal 1.75.
      • An alternative to fixed-point values is integer values in a smaller unit of measurement. For example, some accounting software may use integer values representing cents. For input and display purposes, dollar and cent values are converted to/from cent values.
    • Floating-point
      • Floating point numbers have three parts: a sign bit (0 for positive, 1 for negative), a mantissa or significand, and an exponent. The value is interpreted as ±mantissa × 2^exponent, with the sign taken from the sign bit.
    • Sound
    • Graphics
    • Compression techniques
      • Huffman encoding / Adaptive arithmetic encoding
      • Repeated sequence encoding (1D, 2D, 3D)
      • Decomposition
      • Palettization
      • Psychoacoustic and psychovisual compression
  • Problem: Scaling Sound
    • Naive approach
    • Lookup table
    • Fixed-point multiply and shift
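
A rough sketch of the three approaches above, assuming signed 16-bit samples and a volume factor between 0.0 and 1.0 (the function names are placeholders, not a prescribed solution for Lab 4):

    #include <stddef.h>
    #include <stdint.h>

    /* 1. Naive approach: a floating-point multiply for every sample. */
    void scale_naive(int16_t *s, size_t n, float volume) {
        for (size_t i = 0; i < n; i++)
            s[i] = (int16_t)(s[i] * volume);
    }

    /* 2. Lookup table: precompute the scaled result for every possible
     *    sample value, then scaling becomes a single array index.
     *    (In real code the table would be rebuilt only when the volume changes.) */
    void scale_lookup(int16_t *s, size_t n, float volume) {
        static int16_t table[65536];
        for (int32_t v = -32768; v <= 32767; v++)
            table[(uint16_t)v] = (int16_t)(v * volume);
        for (size_t i = 0; i < n; i++)
            s[i] = table[(uint16_t)s[i]];
    }

    /* 3. Fixed-point: express the volume as a fraction of 256 (8 fractional
     *    bits), multiply in integer arithmetic, then shift right.
     *    (Assumes the usual arithmetic right shift for negative values.) */
    void scale_fixed(int16_t *s, size_t n, float volume) {
        int32_t factor = (int32_t)(volume * 256.0f);
        for (size_t i = 0; i < n; i++)
            s[i] = (int16_t)((s[i] * factor) >> 8);
    }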

Week 4 - Class II

  • Algorithm Selection Lab (Lab 4)

Week 4 Deliverables

  • Blog your results to Lab 4


Week 5

Week 5 - Class I

  • SIMD
    • SIMD is an acronym for "Single Instruction, Multiple Data", and refers to a class of instructions which perform the same operation on several separate pieces of data in parallel. SIMD instructions also include related instructions to set up data for SIMD processing, and to summarize results.
    • SIMD is based on very wide registers (128 bits to 2048 bits on implementations current as of 2019), and these wide registers can be treated as multiple "lanes" of similar data. These SIMD registers, also called vector registers, can therefore be thought of as small arrays of values.
    • A 128-bit SIMD register can be used as:
      • two 64-bit lanes
      • four 32-bit lanes
      • eight 16-bit lanes
      • sixteen 8-bit lanes
    • Each architecture has a different notation for SIMD registers. In AArch64 (which will be our focus):
      • Vector usage uses the notation vn.s where n is the register number and s is the shape of the lanes, expressed as the number of lanes and a letter indicating the width of the lanes: q for quad-word (128 bits), d for double-word (64 bits), s for single-word (32 bits), h for half-word (16 bits), and b for byte (8 bits). Therefore, v0.16b is vector register 0 used as 16 lanes of 8 bits (1 byte) each, while v8.4s is vector register 8 used as 4 lanes of 32 bits each. Most instructions permit either 64 or 128 bits of the register to be used.
      • Scalar usage uses the lane width letter followed by the vector register number. Therefore, q3 refers to vector register 3 used as a single 128-bit value, and s3 refers to the same register used as a single 32-bit register. Note that these are the same register referred to as v3 for vector usage. When using less than 128 bits, the remaining bits are either zero-filled (unsigned usage) or sign-extended (signed usage: the upper bits are filled with the sign bit, i.e., the same value as the high bit of the active part of the register).
    • Most SIMD operations work on corresponding lanes of the operand registers. For example, the AArch64 instruction add v0.8h, v1.8h, v2.8h will take the value in the first lane of register 1, add the value in the first lane of register 2, and place the result in the first lane of register 0. At the same time, the other lanes are processed in the same way, resulting in 8 simultaneous addition operations being performed.
    • A small number of SIMD operations work across lanes, e.g., to find the lowest or highest value in all of the lanes, to add the lanes together, or to duplicate a single value into all of the lanes of a register. These are usually used to set up or summarize the results of SIMD operations -- for example, a value of 0 might be duplicated into all of the lanes of a result register, then a loop applied to sum array data into the results register, and then a lane-summing operation performed to merge the results from all of the lanes.
  • SIMD capabilities can be used in a program in one of three different ways:
    1. The compiler's auto-vectorizer can be used to identify sections of code to which SIMD is applicable, and SIMD code will automatically be generated.
      • This works for the basic SIMD operations, but may not be applicable to advanced SIMD instructions, which don't clearly map to C statements.
      • The compiler will be very cautious about vectorizing code. See the Resources section below for insight into these challenges.
        • In order to vectorize a loop, among other things, the number of loop iterations needs to be known before the loop starts, memory layout must meet SIMD alignment requirements, and the memory regions accessed in the loop must not overlap in a way that is affected by vectorization.
        • The compiler will also calculate a cost for the vectorization: in the case of a small loop, the extra setup before the loop and processing after the loop may negate the benefits of vectorization.
      • Vectorization is applied by default only at the -O3 level in most compilers. In GCC:
        • The main individual feature flag to turn on vectorization is -ftree-vectorize (enabled by default at -O3, disabled at other levels).
        • You can see all of the vectorization decisions using -fopt-info-vec-all, or you can see just the missed vectorizations using -fopt-info-vec-missed (which is usually what you want to focus on, because it shows only the loops where vectorization was not enabled, and the reason that it was not). This approach is generally very portable.
    2. We can explicitly include SIMD instructions in a C program by using Inline Assembler. This is obviously architecture-specific, so it is important to use C preprocessor directives to include/exclude this code depending on the platform for which it is compiled, and to use a generic C implementation on any platform for which you are not providing an inline assembler version.
    3. C Intrinsics are function-like extensions to the C language. Although they look like functions, they are compiled inline, and they are used to provide access to features which are not provided by the C language itself. There is a group of intrinsics which provide access to SIMD instructions. However, the benefit of using these over inline assembler is debatable. SIMD intrinsics are not portable, and should be included with C preprocessor directives like inline assembler.
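
For example, a loop of the following shape is a good candidate for the auto-vectorizer described in approach 1 (the file and function names are only illustrative; the flags in the comment are the GCC options mentioned above):

    #include <stddef.h>
    #include <stdint.h>

    /* vol.c - build with: gcc -O3 -fopt-info-vec-missed -c vol.c
     * (or add -ftree-vectorize at a lower optimization level).
     * The iteration count is known when the loop starts, and restrict
     * promises that the two arrays do not overlap, so GCC can process
     * several lanes per iteration. */
    void scale_samples(int16_t *restrict out, const int16_t *restrict in,
                       size_t n, int16_t factor) {
        for (size_t i = 0; i < n; i++)
            out[i] = (int16_t)(in[i] * factor);
    }

A minimal sketch of approach 2, inline assembler, using the AArch64 vector-register notation described above; the #if guard and plain-C fallback follow the portability advice above (the function name is arbitrary):

    #include <stdint.h>

    /* Add two blocks of eight 16-bit values with a single SIMD instruction. */
    void add8_int16(int16_t *out, const int16_t *a, const int16_t *b) {
    #if defined(__aarch64__)
        __asm__ volatile(
            "ldr q0, [%[a]]          \n\t"  /* load 8 lanes of 16 bits from a  */
            "ldr q1, [%[b]]          \n\t"  /* load 8 lanes of 16 bits from b  */
            "add v0.8h, v0.8h, v1.8h \n\t"  /* 8 simultaneous 16-bit additions */
            "str q0, [%[out]]        \n\t"  /* store the 8 results             */
            :
            : [out] "r"(out), [a] "r"(a), [b] "r"(b)
            : "v0", "v1", "memory");
    #else
        for (int i = 0; i < 8; i++)         /* generic C version for other architectures */
            out[i] = a[i] + b[i];
    #endif
    }
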

Week 5 - Class II

  • SIMD Lab (Lab 5)

Week 5 Resources

Auto-vectorization

  • Auto-Vectorization in GCC (https://gcc.gnu.org/projects/tree-ssa/vectorization.html) - Main project page for the GCC auto-vectorizer.
  • Auto-vectorization with gcc 4.7 (http://locklessinc.com/articles/vectorize/) - An excellent discussion of the capabilities and limitations of the GCC auto-vectorizer, intrinsics for providing hints to GCC, and other code pattern changes that can improve results. Note that there has been some improvement in the auto-vectorizer since this article was written. This article is strongly recommended.
  • Intel (Auto)Vectorization Tutorial (https://software.intel.com/sites/default/files/8c/a9/CompilerAutovectorizationGuide.pdf) - this deals with the Intel compiler (ICC), but the general technical discussion is valid for other compilers such as gcc and llvm.

Inline Assembly Language

  • Inline Assembly Language (course wiki page)
  • ARM Developer Information Centre (http://developer.arm.com)
    • ARM Cortex-A Series Programmer's Guide for ARMv8-A (https://developer.arm.com/products/architecture/a-profile/docs/den0024/a)
  • The short guide to the ARMv8 instruction set: ARMv8 Instruction Set Overview ("ARM ISA Overview") (https://www.element14.com/community/servlet/JiveServlet/previewBody/41836-102-1-229511/ARM.Reference_Manual.pdf)
  • The long guide to the ARMv8 instruction set: ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile ("ARM ARM") (https://developer.arm.com/docs/ddi0487/latest/arm-architecture-reference-manual-armv8-for-armv8-a-architecture-profile)

C Intrinsics - AArch64 SIMD

  • ARM NEON Intrinsics Reference (https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics)
  • GCC ARM C Language Extensions (https://gcc.gnu.org/onlinedocs/gcc/ARM-C-Language-Extensions-_0028ACLE_0029.html)
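
A minimal sketch of approach 3, C intrinsics, assuming an AArch64 target where <arm_neon.h> is available (the function name is arbitrary); it performs the same 8-lane addition as the inline-assembler example, guarded so that other architectures fall back to plain C:

    #include <stdint.h>
    #if defined(__aarch64__)
    #include <arm_neon.h>
    #endif

    void add8_int16_intrin(int16_t *out, const int16_t *a, const int16_t *b) {
    #if defined(__aarch64__)
        int16x8_t va = vld1q_s16(a);          /* load 8 lanes from a         */
        int16x8_t vb = vld1q_s16(b);          /* load 8 lanes from b         */
        vst1q_s16(out, vaddq_s16(va, vb));    /* 8 parallel additions, store */
    #else
        for (int i = 0; i < 8; i++)           /* generic C version elsewhere */
            out[i] = a[i] + b[i];
    #endif
    }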


Week 5 Deliverables

  • Blog about the SIMD Lab


Week 6

Week 6 - Class I

  • Compiler Optimizations
  • Advanced Compiler Optimizations
    • Profile Guided Optimization
    • Link Time Optimization
  • Profiling
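
The build options involved can be sketched as follows (the program and file names are placeholders; -fprofile-generate, -fprofile-use, -flto, and -pg are standard GCC options):

    /* app.c - placeholder program used only to illustrate the build options */
    int work(int x) { return x * 3 + 1; }

    int main(void) {
        long t = 0;
        for (int i = 0; i < 1000000; i++)
            t += work(i);
        return (int)(t & 1);
    }

    /*
     * Profile Guided Optimization (PGO):
     *   gcc -O2 -fprofile-generate app.c -o app    build an instrumented binary
     *   ./app                                      run it on representative input
     *   gcc -O2 -fprofile-use app.c -o app         rebuild using the recorded profile
     *
     * Link Time Optimization (LTO):
     *   gcc -O2 -flto -c app.c
     *   gcc -O2 -flto app.o -o app                 optimize across object files at link time
     *
     * Profiling with gprof:
     *   gcc -O2 -pg app.c -o app && ./app && gprof app gmon.out
     */
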
Week 6 - Class II

  • Continue work on the SIMD Lab (Lab 5)

Week 6 Deliverables

  • Blog about your results to Lab 5


Week 7

Week 7 - Class I

Building software...

  • Configuration Systems
    • make-based systems
      • The GNU Build System: autotools, autoconf, automake
        • GNU autotools makes extensive use of the configuration name ("triplet") -- cpu-manufacturer-operatingSystem or cpu-manufacturer-kernel-operatingSystem (e.g., x86_64-pc-linux-gnu or aarch64-unknown-linux-gnu)
        • config.guess and config.sub
      • CMake
      • qmake
      • Meson
      • iMake and Others
    • Non-make-based systems
      • Apache Ant
      • Apache Maven
      • Qt Build System
  • Building in the Source Tree vs. Building in a Parallel Tree
    • Pros and Cons
    • GNU automake "vpath" builds (https://www.gnu.org/software/automake/manual/html_node/VPATH-Builds.html#VPATH-Builds)
  • Installing and Testing in non-system directories
    • Configuring installation to a non-standard directory
      • Running configure with --prefix
      • Running make install as a non-root user
      • DESTDIR variable for make install
    • Runtime environment variables:
      • PATH
      • LD_LIBRARY_PATH and LD_PRELOAD (see the ld.so manpage: http://man7.org/linux/man-pages/man8/ld.so.8.html)
    • Security when running software
      • Device access
        • Opening a TCP/IP or UDP/IP port below 1024
        • Accessing a /dev device entry
          • Root permission
          • Group permission
      • SELinux Type Enforcement
        • Enforcement mode
          • View enforcement mode: getenforce
          • Set enforcement mode: setenforce
        • Changing policy
          • audit2why
          • audit2allow
  • Build Dependencies
  • Packaging
  • General information about the SPO600 projects
    • Goal
    • Stages
    • Approaching the Project


Week 7 - Class II

  • Project Selection

Week 7 Deliverables

  • Catch up on any incomplete labs (and blog about them)
  • Blog about your project selection progress


Week 8

Week 8 - Class I

Overview/Review of Processor Operation

  • Fetch-decode-dispatch-execute cycle
  • Pipelining
  • Branch Prediction
  • In-order vs. Out-of-order execution
    • Micro-ops
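
A small illustration of why branch prediction matters on a pipelined processor (array contents and threshold are arbitrary): the same loop runs noticeably faster over sorted data, because the branch outcome becomes predictable.

    #include <stddef.h>

    /* Sum only the elements above a threshold. On random data the branch is
     * frequently mispredicted and the pipeline must be refilled; on sorted
     * data the predictor is almost always right. */
    long sum_above(const int *data, size_t n, int threshold) {
        long total = 0;
        for (size_t i = 0; i < n; i++) {
            if (data[i] > threshold)    /* the branch whose predictability matters */
                total += data[i];
        }
        return total;
    }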

Memory Basics

  • Organization of Memory
    • Process organization
      • Text, data
      • Stack
      • Heap
    • System organization
      • Kernel memory in process maps
      • Use of unallocated memory for buffers and cache
  • Memory Speeds
  • Cache
    • Cache lookup
    • Cache synchronization and invalidation
    • Cache line size
  • Prefetch
    • Prefetch hinting
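
A small illustration of why access order interacts with cache lines (N is arbitrary): both functions compute the same sum, but the row-major version touches consecutive addresses and reuses each cache line, while the column-major version strides through memory and misses far more often.

    #include <stddef.h>

    #define N 1024

    long sum_row_major(int m[N][N]) {
        long total = 0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                total += m[i][j];       /* consecutive addresses: cache friendly */
        return total;
    }

    long sum_column_major(int m[N][N]) {
        long total = 0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                total += m[i][j];       /* stride of N ints: far more cache misses */
        return total;
    }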

Memory Architecture

  • Virtual Memory and Memory Management Units (MMUs)
    • General principles of Virtual Memory and operation of MMUs
    • Memory protection
      • Unmapped Regions
      • Write Protection
      • Execute Protection
      • Privilege Levels
    • Swapping
    • Text sharing
    • Demand Loading
    • Data sharing
      • Shared memory for Inter-Process Communication
      • Copy-on-Write (CoW)
    • Memory mapped files
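
A minimal POSIX sketch of a memory-mapped file (the file name is a placeholder and error handling is trimmed to the essentials):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);          /* placeholder file name */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

        /* Map the whole file into the process address space; pages are
         * demand-loaded, and read-only pages of the same file can be shared. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];                              /* touching a page faults it in */
        printf("%ld bytes, checksum %ld\n", (long)st.st_size, sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }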

Memory Statistics

  • Resident Set Size (RSS) and Virtual Set Size (VSS)
  • Total memory consumption per process
  • Total system memory consumption
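
On Linux, these numbers can be read from /proc; a minimal sketch that prints the current process's VmSize (virtual set size) and VmRSS (resident set size):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/self/status", "r");    /* Linux-specific */
        if (!f) { perror("fopen"); return 1; }

        char line[256];
        while (fgets(line, sizeof line, f)) {
            /* VmSize: total virtual memory; VmRSS: pages actually resident in RAM */
            if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }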

Software Impact

  • Alignment checks
  • Page boundary crossing
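
A small sketch of both checks, assuming 4 KiB pages purely for illustration (a real program would ask the operating system for the page size):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define ASSUMED_PAGE_SIZE 4096u   /* illustration only; query the OS in real code */

    /* True if pointer p is aligned to 'alignment' bytes (a power of two). */
    bool is_aligned(const void *p, uintptr_t alignment) {
        return ((uintptr_t)p & (alignment - 1)) == 0;
    }

    /* True if the len-byte object starting at p crosses a page boundary,
     * which can cost an extra address translation and cache line fill. */
    bool crosses_page(const void *p, size_t len) {
        uintptr_t first = (uintptr_t)p / ASSUMED_PAGE_SIZE;
        uintptr_t last  = ((uintptr_t)p + len - 1) / ASSUMED_PAGE_SIZE;
        return first != last;
    }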

Week 8 - Class II

  • Project Discussion

Week 8 Deliverables

  • Blog about your project work