SYA810 RAID Benchmarking Lab

Revision as of 13:50, 11 February 2009

Task

The machine 'scotland' is equipped with six 1.5TB disk drives. Partition 9 on each drive is unused. We're going to use these partitions to benchmark various RAID levels.

  1. Create a RAID array of the six partitions (/dev/sd[a-f]9) using the appropriate mdadm command(s).
  2. Create an ext3 filesystem on that array.
  3. Benchmark the array using the Winter_2009_SYA810_Block_Device_Benchmark_Scripts. Repeat the benchmarks at least 3 times -- more if the results seem to vary substantially.
  4. Stop the RAID array.
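
The numbered steps above can be sketched as a single mdadm session. This is a minimal sketch assuming RAID 0 and a /mnt/raidtest mount point — both assumptions for illustration; substitute the RAID level you signed up for:

```shell
# Sketch of the lab steps (run as root; adjust --level for your RAID level).
mdadm --create /dev/md9 --level=0 --raid-devices=6 /dev/sd[a-f]9   # step 1: build the array
mkfs.ext3 /dev/md9                                                 # step 2: ext3 filesystem
mkdir -p /mnt/raidtest
mount /dev/md9 /mnt/raidtest
# step 3: run the benchmark scripts here, at least 3 times
umount /mnt/raidtest
mdadm --stop /dev/md9                                              # step 4: stop the array
```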

Important Notes

  • Do not use any devices other than /dev/sd[a-f]9 as members of the array.
  • Do not have any spare elements in your array.
  • Make sure the system is quiet (not doing any background processing) when you run your tests.

Who's Doing Which RAID Level

Write your name here according to the RAID level you chose in class.

RAID Level   Person
0            Milton Paiva Neto (Script)
1
4
5
6            Nestor (Scripts)
1+0          Kezhong Liang
RAID 5+LVM

Scheduling

Person          Date/time (YYYY-MM-DD HH:MM-HH:MM)     Tests
Chris Tyler     2009-02-02 08:00-17:00                 Multiseat testing
Chris Tyler     2009-02-04 08:00-17:00                 Multiseat testing
Nestor Chan     2009-02-06 ??:00-??:00                 RAID 5 with multi-size writing test (change my time if you need a slot)
Milton Paiva    2009-02-09 14:30-17:10                 RAID 0
Kezhong Liang   2009-02-10 23:30 - 2009-02-11 ??:??    RAID 1+0
Mohak Vyas      2009-02-11 16:00 - ??                  RAID 4

Results

Record your results here. A table would be nice!

  • Author: Milton Paiva
  • Raid Type: RAID0
  • Files: 10
  • Size: 10 GB


Scripts


#!/bin/bash
#
# Script first written by Nestor Chan - Bossanesta and modified by Milton Paiva Neto <milton.paiva@gmail.com>
# Create 10 files of 10 GB each, filled with zeros, and time the whole run.

time -p (
for ((x=1; x<=10; x++))
do
       dd if=/dev/zero of=fakefile$x bs=1G count=10
done
sync
)
#!/bin/bash
# This script is supposed to call all the small test scripts one by one, and to be run with nohup.
# Known bug: it calls each script in the background.

echo "Start Testing"
free
date
echo "==================================="

/home/bossanesta/test-10kb

free
date
/home/bossanesta/test-1mb

free
date
/home/bossanesta/test-100mb

free
date
echo "Finish Testing"


 

[root@scotland raid0]# sh +x speedtest.sh

TEST1


10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.497 s, 241 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 42.6988 s, 251 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.4546 s, 236 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 43.6104 s, 246 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.3155 s, 237 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.858 s, 239 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 46.2515 s, 232 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 47.0685 s, 228 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 43.8201 s, 245 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.8941 s, 239 MB/s

TEST2

10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 41.8107 s, 257 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.8173 s, 234 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.3462 s, 237 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.1653 s, 238 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 46.6794 s, 230 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.382 s, 242 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.6695 s, 240 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.7078 s, 240 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 43.8667 s, 245 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 46.4641 s, 231 MB/s
real 460.54
user 0.00
sys 323.48

Final Results


  • Create files
real 455.94
user 0.00
sys 324.02
  • Delete files
real	2m40.236s
user	0m0.000s
sys	0m7.715s
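
As a sanity check on the transcripts above, dd's reported rate is just bytes copied divided by elapsed seconds, expressed in decimal megabytes (1 MB = 10^6 bytes; dd prints roughly three significant digits, so this recomputation is approximate):

```shell
# Recompute the MB/s figure from the first TEST1 line:
# "10737418240 bytes (11 GB) copied, 44.497 s, 241 MB/s"
awk 'BEGIN { printf "%d MB/s\n", 10737418240 / 44.497 / 1000000 }'
```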

Kezhong Liang

  • Raid Type: RAID 1+0
  • Test RAID 1+0 disk performance steps:
mdadm -C /dev/md9 -l1 -n2 /dev/sd[a,b]9
mdadm -C /dev/md10 -l1 -n2 /dev/sd[c,d]9
mdadm -C /dev/md11 -l1 -n2 /dev/sd[e,f]9
cat /proc/mdstat
mdadm -C /dev/md12 -l0 -n3 /dev/md9 /dev/md10 /dev/md11
mkfs.ext3 /dev/md12
mkdir /media/raid10
mount /dev/md12 /media/raid10
    • Run the scripts and record the results. When finished, tear down the RAID 1+0 arrays I created:
umount /dev/md12
mdadm --fail /dev/md12 /dev/md9
mdadm --fail /dev/md12 /dev/md10
mdadm --fail /dev/md12 /dev/md11
mdadm --fail /dev/md9 /dev/sd[a,b]9
mdadm --fail /dev/md10 /dev/sd[c,d]9
mdadm --fail /dev/md11 /dev/sd[e,f]9
mdadm -S /dev/md12
mdadm -S /dev/md9
mdadm -S /dev/md10
mdadm -S /dev/md11
  • The result of using my Script
              First Time      Second Time     Third Time
Read Rate:    204 MB/sec      170 MB/sec      204 MB/sec
Write Rate:   102 MB/sec      102 MB/sec      93 MB/sec
  • The result of using Milton's Script
The First Time (ten 10 GB dd writes; second and third runs not yet recorded)
72.2998 s, 149 MB/s
71.5798 s, 150 MB/s
78.171 s, 137 MB/s
70.153 s, 153 MB/s
74.2309 s, 145 MB/s
73.7941 s, 146 MB/s
74.8228 s, 144 MB/s
72.4651 s, 148 MB/s
74.2831 s, 145 MB/s
72.935 s, 147 MB/s
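
A quick way to summarize the ten first-run times above — a hypothetical one-liner, not part of the original scripts:

```shell
# Average the ten write times from the first run of Milton's script.
printf '%s\n' 72.2998 71.5798 78.171 70.153 74.2309 \
              73.7941 74.8228 72.4651 74.2831 72.935 |
awk '{ sum += $1 } END { printf "average: %.2f s over %d runs\n", sum/NR, NR }'
```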