Benchmark Hard Drive


Linux

DD

Q: What is the difference between the following?
 dd bs=1M count=128 if=/dev/zero of=test
 dd bs=1M count=128 if=/dev/zero of=test; sync
 dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
 dd bs=1M count=128 if=/dev/zero of=test oflag=dsync
A: The difference is in handling of the write cache in RAM:
dd bs=1M count=128 if=/dev/zero of=test

# The default behaviour of dd is not to "sync" (i.e. not to ask the OS to completely write the data to disk before dd exits). The command above just commits your 128 MB of data into a RAM buffer (the write cache) -- this is very fast and immediately shows you a hugely inflated benchmark result, while in the background the system is still busy writing the data out from the RAM cache to disk.
dd bs=1M count=128 if=/dev/zero of=test; sync

# Absolutely identical to the previous case: as anyone familiar with how a *nix shell works will know, appending ; sync does not affect the previous command in any way, because it runs separately, after the first command has already completed. So your (wrong) MB/sec value is already printed on screen before that sync even starts. (The timing sketch after this comparison shows how to measure the copy and the flush together.)
dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync

# This tells dd to perform one complete "sync" right before it exits. So it commits the whole 128 MB of data, then tells the operating system "OK, now ensure this is completely on disk", and only then measures the total time it took and calculates the benchmark result.
dd bs=1M count=128 if=/dev/zero of=test oflag=dsync

# Here dd asks for completely synchronous output to disk, i.e. it ensures that each write request does not even return until the submitted data is on disk. In the above example, that means syncing once per megabyte, or 128 times in total. This is typically the slowest mode, as the write cache is essentially not used at all.
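
To see the difference yourself, you can time the four variants back to back. This is only a minimal sketch, reusing the scratch file name test from above; wrapping the second variant in a single timed shell is what makes the "; sync" number meaningful.

 time dd bs=1M count=128 if=/dev/zero of=test                   # write cache only: inflated result
 time sh -c 'dd bs=1M count=128 if=/dev/zero of=test; sync'     # copy and flush timed together
 time dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync    # one sync at the end
 time dd bs=1M count=128 if=/dev/zero of=test oflag=dsync       # a sync per 1 MB write: slowest
 rm -f test                                                     # clean up the scratch file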
Which one do you suggest?
dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync 
# This behaviour is perhaps the closest to the way real-world tasks behave. If your server or VPS is really fast and the above test completes in a second or less, try increasing the count= number to 1024 or so, to get a more accurate averaged result.
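For example, a longer run of the recommended variant (roughly 1 GB written, so make sure the target filesystem has that much free space; test is just a scratch file):

 dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync   # ~1 GB, synced before dd reports its rate
 rm -f test                                                # remove the scratch file afterwards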


Bonnie++

There are two parts to the bonnie++ benchmark. The first part works with large files and the second part tests with small files. As the bonnie++ man page states, the first part simulates what happens to a filesystem when it is used as a database server, while the second part is representative of things like a mail cache or a cluster node hosting a large number of small output files that need processing.
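
A tuned run might look like the sketch below; the sizes are only illustrative assumptions, and -d, -s, -n, -m and -u are standard bonnie++ options (check your man page; -u is required when running as root). It uses the same ./foo scratch directory as the basic example further down.

 # -s: total size for the large-file phase, ideally at least twice your RAM so the
 #     page cache cannot hide the disk (value in MiB; 16384 = 16 GB here)
 # -n: number of small files for the second phase, in multiples of 1024
 bonnie++ -d ./foo -s 16384 -n 64 -m "$(hostname)" -u nobody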

The primary output of Bonnie++ is a human-readable plain-text report formatted to 80 columns, designed to paste well into email and display correctly in most terminals. The second output type is CSV (Comma Separated Values), which can easily be imported into any spreadsheet, database or graphing tool.

  • Sequential Output = writes
  • Sequential Input = reads


mkdir ./foo
bonnie++ -d ./foo/

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec  %CP K/sec  %CP K/sec %CP K/sec  %CP  /sec  %CP
hopstore     15392M   964  96 486390  35 211723  16  4513  95 537595  16 350.3   8
Latency              8350us     205ms     567ms   31617us   43317us   92862us

Version  1.96       ------Sequential Create------ --------Random Create--------
hopstore            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19712  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             16638us     270us     282us     113us       6us      48us

1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us


  • The last line in the output is the CSV. You can use 'bon_csv2txt' or 'bon_csv2html' to see the output.
echo 1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us | bon_csv2txt
Version      1.96   ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hopstore     15392M   964  96 486390  35 211723  16  4513  95 537595  16 350.3   8
Latency              8350us     205ms     567ms   31617us   43317us   92862us
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
hopstore         16 19712  16  +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             16638us     270us     282us     113us       6us      48us
echo 1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us | bon_csv2html > disktest.html
  • too lazy to fix the </td> issue -- the raw HTML does not paste cleanly here. Open disktest.html in a browser instead; it contains the same table as the bon_csv2txt output above.
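
Rather than copying the CSV line by hand, you can capture it directly. This is a minimal sketch that relies only on the fact, noted above, that the CSV is the last line of the bonnie++ output (bonnie.out is just a hypothetical file name):

 bonnie++ -d ./foo/ > bonnie.out                        # save the run; the CSV ends up as the last line
 tail -n 1 bonnie.out | bon_csv2txt                     # readable text table
 tail -n 1 bonnie.out | bon_csv2html > disktest.html    # same data as an HTML table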
