Benchmark Hard Drive

From RARForge

== Linux ==

=== dd ===

==== Q: What is the difference between the following? ====

<source lang=bash>

dd bs=1M count=128 if=/dev/zero of=test
dd bs=1M count=128 if=/dev/zero of=test; sync
dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
dd bs=1M count=128 if=/dev/zero of=test oflag=dsync

</source>

==== A: The difference is in the handling of the write cache in RAM: ====

<source lang=bash enclose="div">
dd bs=1M count=128 if=/dev/zero of=test
</source>

The default behaviour of dd is not to "sync" (i.e. not to ask the OS to write the data completely to disk before dd exits). The command above just commits your 128 MB of data to a RAM buffer (the write cache) -- this is really fast, and it shows you a hugely inflated benchmark result right away. In the background, however, the server is still busy, continuing to write the cached data out to disk.

<source lang=bash enclose="div">
dd bs=1M count=128 if=/dev/zero of=test; sync
</source>

Absolutely identical to the previous case: appending ; sync does not affect the previous command in any way, because the sync runs independently, after the first command has already completed. Your (wrong) MB/sec value is therefore already printed on screen before the sync even starts.

<source lang=bash enclose="div">
dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
</source>

This tells dd to perform one complete "sync" right before it exits. It commits the whole 128 MB of data, then tells the operating system: "OK, now make sure this is completely on disk", and only then measures the total time taken and calculates the benchmark result.

<source lang=bash enclose="div">
dd bs=1M count=128 if=/dev/zero of=test oflag=dsync
</source>

Here dd asks for completely synchronous output to disk, i.e. each write request does not even return until the submitted data is on disk. With the command above, that means syncing once per megabyte, or 128 times in total. This is usually the slowest mode, because the write cache is effectively not used at all.
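To see the difference for yourself, the variants above can be run back to back on the same file. This is a minimal sketch assuming GNU dd; the file name /tmp/ddtest and the small count=8 are arbitrary choices, so scale count up on real hardware:

```shell
#!/bin/sh
# Run the cached, fdatasync and dsync variants in sequence; GNU dd prints
# its throughput summary as the last line on stderr, which tail keeps.
for args in "" "conv=fdatasync" "oflag=dsync"; do
    echo "== dd bs=1M count=8 $args =="
    dd bs=1M count=8 if=/dev/zero of=/tmp/ddtest $args 2>&1 | tail -n 1
done
rm -f /tmp/ddtest
```

Expect the first (cached) figure to be much higher than the fdatasync one, and the dsync figure to be the lowest of the three.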

==== Which one do you suggest? ====

<source lang=bash highlight="1">
dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
</source>

This behaviour is perhaps the closest to the way real-world tasks behave. If your server or VPS is really fast and the test above completes in a second or less, increase the count= value to 1024 or so to get a more accurate averaged result.
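Put together, a larger run of the recommended variant might look like this sketch (GNU dd assumed; the file name /tmp/ddbench is an arbitrary choice):

```shell
#!/bin/sh
# Recommended benchmark with a larger count for fast disks. GNU dd writes
# its summary line (including the MB/s figure) to stderr, so redirect it
# and keep only that last line.
dd bs=1M count=1024 if=/dev/zero of=/tmp/ddbench conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddbench
```

Remember to remove the test file afterwards, as above, so a gigabyte of zeroes is not left lying around.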


=== Bonnie++ ===

There are two parts to the bonnie++ benchmark. The first part works with large files and the second part tests with small files. As the bonnie++ man page states, the first part simulates what happens to a filesystem used as a database server, while the second part is representative of things like a mail cache, or a cluster node hosting a large number of small output files that need processing.

The primary output of bonnie++ is human-readable plain text formatted in 80 columns, designed to paste well into email and display well in most terminals. The second output format is CSV (Comma Separated Values), which can easily be imported into any spreadsheet, database or graphing tool.

  • Sequential Output = writes
  • Sequential Input = reads


<source lang=bash enclose=div>
mkdir ./foo
bonnie++ -d ./foo/

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hopstore     15392M   964  96 486390  35 211723  16  4513  95 537595  16 350.3   8
Latency              8350us     205ms     567ms   31617us   43317us   92862us

Version  1.96       ------Sequential Create------ --------Random Create--------
hopstore            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19712  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             16638us     270us     282us     113us       6us      48us

1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us
</source>


  • The last line of the output is the CSV. Pipe it through 'bon_csv2txt' or 'bon_csv2html' to format it.

<source lang=bash enclose=div>
echo "1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us" | bon_csv2txt

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hopstore     15392M   964  96 486390  35 211723  16  4513  95 537595  16 350.3   8
Latency              8350us     205ms     567ms   31617us   43317us   92862us

                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
hopstore         16 19712  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             16638us     270us     282us     113us       6us      48us
</source>

<source lang=bash enclose=div>
echo "1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us" | bon_csv2html > disktest.html
</source>
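If you only want a couple of headline numbers without the formatting tools, the CSV line can be cut apart directly. This is a sketch assuming the 1.96 CSV layout shown above, where the sequential block write and block read K/sec figures sit in fields 10 and 16:

```shell
# Pull the sequential block write/read speeds out of the bonnie++ CSV line
# with awk; field numbering matches the 1.96 format used above.
csv='1.96,1.96,hopstore,1,1362613531,15392M,,964,96,486390,35,211723,16,4513,95,537595,16,350.3,8,16,,,,,19712,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,8350us,205ms,567ms,31617us,43317us,92862us,16638us,270us,282us,113us,6us,48us'
echo "$csv" | awk -F, '{ printf "block write: %s K/sec, block read: %s K/sec\n", $10, $16 }'
# prints: block write: 486390 K/sec, block read: 537595 K/sec
```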

  • bon_csv2html renders the same figures as an HTML table in disktest.html.