SYA810 RAID Benchmarking Lab


Task

The machine 'scotland' is equipped with six 1.5TB disk drives. Partition 9 on each drive is unused. We're going to use these partitions to benchmark various RAID levels.

  1. Create a RAID array of the six partitions (/dev/sd[a-f]9) using the appropriate mdadm command(s); a sketch of the full command sequence is shown after this list.
  2. Create an ext3 filesystem on that array.
  3. Benchmark the array using the Winter_2009_SYA810_Block_Device_Benchmark_Scripts. Repeat the benchmarks at least 3 times -- more if the results seem to vary substantially.
  4. Stop the RAID array.
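
The following is only a rough sketch of that sequence, assuming RAID 0, an array name of /dev/md9, and a mount point of /mnt/raidtest; substitute the level, device count, and paths for the combination you signed up for.

mdadm --create /dev/md9 --level=0 --raid-devices=6 /dev/sd[a-f]9   # substitute your RAID level
cat /proc/mdstat                     # wait for any initial build/resync to finish
mkfs.ext3 /dev/md9
mkdir -p /mnt/raidtest
mount /dev/md9 /mnt/raidtest
# ... run the benchmark scripts (at least three times) ...
umount /mnt/raidtest
mdadm --stop /dev/md9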

Important Notes

  • Do not use any devices other than /dev/sd[a-f]9 as members of the array.
  • Do not have any spare elements in your array.
  • Make sure the system is quiet (not doing any background processing) when you run your tests; a quick way to check is sketched below.
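
One simple way to confirm the machine is idle before a run (a suggestion only, not part of the required procedure):

uptime              # load averages should be close to zero
vmstat 2 5          # high "id" and low "wa" columns mean the CPU and disks are mostly idle
cat /proc/mdstat    # confirm no array is resyncing in the background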

Who's Doing Which RAID Level

Write your name here next to the RAID level you chose in class.

RAID Level    Person
0             Milton Paiva Neto (Script)
1
4
5
6             Nestor (Scripts)
1+0           Kezhong Liang
RAID 5+LVM

Scheduling

Person          Date/time (YYYY-MM-DD HH:MM-HH:MM)     Tests
Chris Tyler     2009-02-02 08:00-17:00                 Multiseat testing
Chris Tyler     2009-02-04 08:00-17:00                 Multiseat testing
Nestor Chan     2009-02-06 ??:00 - 2009-02-06 ??:00    RAID 5 with multi-size write test (change my time if you need a slot)
Milton Paiva    2009-02-09 14:30-17:10                 RAID 0
Kezhong Liang   2009-02-10 23:30 - 2009-02-11 15:30    RAID 1+0
Mohak Vyas      2009-02-11 16:00 - ??                  RAID 4

Results

Record your results here. A table would be nice!

  • Author: Milton Paiva
  • Raid Type: RAID0
  • Files: 10
  • Size: 10 GB each


Scripts


#!/bin/bash
#
# Script first written by Nestor Chan - Bossanesta and modified by Milton Paiva Neto <milton.paiva@gmail.com>
# Create 10 files of 10 GB each, filled with zeros, and time the whole run.

time -p (
    for ((x=1; x<=10; x++))
    do
        dd if=/dev/zero of=fakefile$x bs=1G count=10
    done
    # sync inside the timed subshell so buffered writes are flushed before the timer stops
    sync
)
#!/bin/bash
# This is supposed to call all of the small test scripts one by one, and is meant to be run with nohup.
# Known bug: it ends up calling each script in the background.

echo "Start Testing"
free
date
echo "==================================="

/home/bossanesta/test-10kb

free
date
/home/bossanesta/test-1mb

free
date
/home/bossanesta/test-100mb

free
date
echo "Finish Testing"


 

[root@scotland raid0]# sh +x speedtest.sh

TEST1


10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.497 s, 241 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 42.6988 s, 251 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.4546 s, 236 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 43.6104 s, 246 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.3155 s, 237 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.858 s, 239 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 46.2515 s, 232 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 47.0685 s, 228 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 43.8201 s, 245 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.8941 s, 239 MB/s

TEST2

10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 41.8107 s, 257 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.8173 s, 234 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.3462 s, 237 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 45.1653 s, 238 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 46.6794 s, 230 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.382 s, 242 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.6695 s, 240 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 44.7078 s, 240 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 43.8667 s, 245 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 46.4641 s, 231 MB/s
real 460.54
user 0.00
sys 323.48

Final Results


  • Create files
real 455.94
user 0.00
sys 324.02
  • Delete files
real	2m40.236s
user	0m0.000s
sys	0m7.715s
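
The page does not show how the delete timing was taken; assuming the fakefile names from the script above, something as simple as the following would produce output in that format:

time rm -f fakefile*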

Kezhong Liang

  • Raid Type: RAID 1+0
  • Test RAID 1+0 disk performance steps:
mdadm -C /dev/md9 -l1 -n2 /dev/sd[a,b]9
mdadm -C /dev/md10 -l1 -n2 /dev/sd[c,d]9
mdadm -C /dev/md11 -l1 -n2 /dev/sd[e,f]9
cat /proc/mdstat
mdadm -C /dev/md12 -l0 -n3 /dev/md9 /dev/md10 /dev/md11
mkfs.ext3 /dev/md12
mkdir /media/raid10
mount /dev/md12 /media/raid10
    • Run the scripts and record the results. When finished, delete the RAID 1+0 arrays that were created:
umount /dev/md12
mdadm --fail /dev/md12 /dev/md9
mdadm --fail /dev/md12 /dev/md10
mdadm --fail /dev/md12 /dev/md11
mdadm --fail /dev/md9 /dev/sd[a,b]9
mdadm --fail /dev/md10 /dev/sd[c,d]9
mdadm --fail /dev/md11 /dev/sd[e,f]9
mdadm -S /dev/md12
mdadm -S /dev/md9
mdadm -S /dev/md10
mdadm -S /dev/md11
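
Kezhong's own benchmark script is not reproduced on this page; a dd-based test along the following lines (the file name, size, and block size are placeholders) would give read/write rates of the kind shown below.

# Write test: stream zeros into a large file on the mounted array.
dd if=/dev/zero of=/media/raid10/testfile bs=1M count=10240
sync
# Read test: stream the file back to /dev/null (note that the page cache
# can inflate this figure if the file was just written).
dd if=/media/raid10/testfile of=/dev/null bs=1M
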
  • The result of using my Script
              First Time     Second Time    Third Time
Read Rate:    204 MB/sec     170 MB/sec     204 MB/sec
Write Rate:   102 MB/sec     102 MB/sec     93 MB/sec
  • The result of using Milton's Script
First Time              Second Time             Third Time
72.2998 s, 149 MB/s     69.5242 s, 154 MB/s     74.3364 s, 144 MB/s
71.5798 s, 150 MB/s     68.4565 s, 157 MB/s     76.9103 s, 140 MB/s
78.171 s, 137 MB/s      69.6763 s, 154 MB/s     81.2547 s, 132 MB/s
70.153 s, 153 MB/s      69.0485 s, 156 MB/s     74.9017 s, 143 MB/s
74.2309 s, 145 MB/s     69.2078 s, 155 MB/s     71.7726 s, 150 MB/s
73.7941 s, 146 MB/s     73.1318 s, 147 MB/s     69.8053 s, 154 MB/s
74.8228 s, 144 MB/s     69.0291 s, 156 MB/s     75.1002 s, 143 MB/s
72.4651 s, 148 MB/s     75.5166 s, 142 MB/s     70.8875 s, 151 MB/s
74.2831 s, 145 MB/s     74.2722 s, 145 MB/s     72.1222 s, 149 MB/s
72.935 s, 147 MB/s      70.74 s, 152 MB/s       71.1439 s, 151 MB/s
real 751.44             real 943.22             real 980.07
user 0.00               user 0.00               user 0.00
sys 386.13              sys 394.61              sys 400.46
  • Conclusion
I did this lab three times. The first attempt failed because sdf9 could not be used, so I did the lab with four disks instead, and it succeeded. After asking
Chris, I found the problem: sdf9 was held by another RAID array (md_d8, which was inactive), so I stopped that array and finished the lab with all six disks.
Comparing those results with Milton's and with my own second run, the RAID 1+0 array is slower than RAID 0 (as it should be), and the six-disk array is
faster than the four-disk one.
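
For reference, the stale-array problem described above can be spotted and cleared like this (a sketch based on the conclusion; md_d8 is the inactive array named there):

cat /proc/mdstat          # md_d8 shows up as inactive and is holding sdf9
mdadm --stop /dev/md_d8   # release sdf9 so it can be used in the new array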

Mohak Vyas

  • RAID type: RAID-4
# mdadm --create /dev/md8 --level=4 --raid-devices=4 /dev/sd[a,b,c,d]9
# cat /proc/mdstat
......... 
.........
.........
md8 : active raid4 sdd9[4] sdc9[2] sdb9[1] sda9[0]
     619353600 blocks level 4, 64k chunk, algorithm 0 [4/3] [UUU_]
     [>....................]  recovery =  0.7% (1505028/206451200) finish=49.9min speed=68410K/sec
# mkfs.ext3 /dev/md8
# mount /dev/md8 /mnt1
Run your performance script. 
My script creates 2000 files of 100 KB each. It took 1792.0 seconds to create those files (a sketch of that kind of loop is shown below).
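
That script is not shown on this page; a minimal loop matching the description (paths and names are illustrative) would be something like:

#!/bin/bash
# Hypothetical sketch: create 2000 files of 100 KB each on the mounted array and time it.
time -p (
    for ((x=1; x<=2000; x++))
    do
        dd if=/dev/zero of=/mnt1/file$x bs=100K count=1 2>/dev/null
    done
    sync
)
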
Testing with Kezhong's script:
The write disk performance: 68 MB/sec
The read disk performance: 204 MB/sec