Promise and Areca Controller Performance

We’d been around the houses with two integrated RAID controllers, both of which support RAID 5, and weren’t particularly impressed with either of them, so now it was time to get serious. We installed a Promise SX4-M PCI RAID card, which uses a Promise PDC20621 controller chip. This card has four SATA ports and typically sells for £170 inc VAT at Dabs, so it costs more than a high-end motherboard with either an nForce4 or Intel ICH7R controller.

We started with a pair of drives in a striped array and saw that the HD Tach results suggested a fast read speed; however, our file transfer and PCMark05 tests showed that the Promise was very similar in performance to the integrated nForce4 controller.

Mirroring was a different story, as the Promise was much slower than both the nVidia and Intel chips. Not impressive when you consider its price.

And so we come to RAID 5. It didn’t seem fair, pitting a dedicated controller card against two ‘free’ RAID solutions, but once again things didn’t turn out as simply as we would have hoped. Although the Promise card was very fast in our read tests, the write tests were a disappointment and returned results similar to those of the Intel ICH7R controller.
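Part of the reason RAID 5 writes tend to disappoint is structural rather than anything specific to the Promise card. A RAID 5 stripe carries a parity block computed as the XOR of its data blocks, and a small write forces a read-modify-write cycle: read the old data and old parity, XOR in the change, then write both back. The sketch below is purely illustrative (not any controller's actual firmware) and shows the parity maths involved:

```python
# Illustrative sketch of RAID 5 parity, not any controller's real firmware.
# Each stripe stores one parity block: the XOR of all its data blocks.

def parity(blocks):
    """XOR every data block in a stripe to produce the parity block."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def update_parity(old_parity, old_block, new_block):
    """Recompute parity for a single-block update without re-reading the
    whole stripe: new_p = old_p XOR old_data XOR new_data.
    This is the read-modify-write step that makes small RAID 5 writes
    cost two reads plus two writes instead of one write."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))

stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = parity(stripe)

# Overwrite one block and patch the parity via the shortcut.
old = stripe[1]
stripe[1] = b"\x55\xaa"
p = update_parity(p, old, stripe[1])

assert p == parity(stripe)  # shortcut matches a full stripe recompute
```

The extra parity traffic is why even a fast dedicated card can post RAID 5 write figures closer to an integrated controller than its read results would suggest.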

Now it was time to wheel out the big guns. We installed an Areca ARC-1220 in the P4N Diamond motherboard. This eight-port card uses a PCI Express x4 interface, so you’ll need an SLI motherboard to support both it and a graphics card. The ARC-1220 costs £435 including VAT from Scan or TMC, supports RAID 6 as well as RAID 5, and has 128MB of RAM.

With two drives striped together the Areca returned very similar results to the nForce4 controller, but we can’t really imagine anyone using this controller in this manner.

Next we mirrored two drives and the Areca started to show its strength. Not only was the read speed good, but the write speed really pulled ahead of the nVidia chip. Take a look at the HD Tach graph and you’ll see that the write speed starts incredibly well and then drops off abruptly to a level similar to the other controllers.

With three drives in a RAID 5 array the Areca gets better and better. Speeds are very similar to those of a straightforward mirrored array, and comparisons with the integrated controllers become meaningless; read speeds remain close to those of the nForce4 controller.

As we had eight ports to work with on the Areca we decided to do one last experiment. We connected up five 400GB Seagate 7200.8 drives, one 500GB Hitachi 7K500, one 400GB WD4000KD and a 250GB Samsung SP2504C in a striped array, which totals 3,150GB, or 3.15 terabytes, of raw capacity. We’ve never seen an array this large before.

We were annoyed that the Areca only recognised the array as 2,000GB and once it was formatted Windows saw it as 1,863GB in size. Conventional wisdom says that you should always use drives of the same make, model and capacity in a RAID array but we were treating that more as a guideline than a rule, and just look at the results that we got. With a massive average read speed of 168.8MB/second and a write speed of 129.1MB/second the Areca looked promising, and in our file transfer test we read and wrote 4GB of files in less than two minutes.
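The numbers above are less mysterious than they look. A striped array built from mismatched drives is typically limited to the smallest member multiplied by the drive count, which here is 8 × 250GB, and Windows reports sizes in binary gigabytes (GiB) rather than the decimal gigabytes drive makers quote. A quick check of that arithmetic (the smallest-member rule is our assumption about the Areca's behaviour, not something its documentation confirmed to us):

```python
# Quick arithmetic behind the capacity figures. The smallest-member rule
# for RAID 0 is an assumption about the controller's behaviour; the
# GB-to-GiB conversion is simply how Windows reports disk sizes.

GIB = 1024 ** 3

drives_gb = [400] * 5 + [500, 400, 250]      # Seagate x5, Hitachi, WD, Samsung
raw_total = sum(drives_gb)                    # raw capacity of all eight drives
stripe_gb = len(drives_gb) * min(drives_gb)   # smallest member x drive count

print(raw_total)                              # 3150 -- the 3,150GB quoted above
print(stripe_gb)                              # 2000 -- what the Areca reported
print(round(stripe_gb * 10**9 / GIB))         # 1863 -- what Windows showed
```

In other words the “missing” capacity is the unused tail of every drive larger than the 250GB Samsung, and the 2,000GB-versus-1,863GB gap is just decimal-versus-binary accounting.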

This is striping taken to the max, where we have so many potential points of failure that it is just scary, yet the benefits are questionable. Apart from the physical problems of housing this many drives and then keeping the whole shooting match cool there are terrible risks to your data.
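To put a rough figure on “so many potential points of failure”: a striped array survives only if every member survives, so the odds of losing the lot climb with each drive added. The 3% annualised failure rate below is an assumed figure for illustration, not a measured one:

```python
# Rough sketch of RAID 0 risk. The 3% per-drive annual failure rate is an
# assumption for illustration only; real-world rates vary by model and age.

per_drive_afr = 0.03  # assumed annual failure rate of a single drive

for n in (1, 2, 4, 8):
    # The array is lost unless all n drives survive the year.
    array_afr = 1 - (1 - per_drive_afr) ** n
    print(f"{n} drives striped: {array_afr:.1%} chance of losing the array per year")
```

Under that assumption an eight-drive stripe is over seven times as likely to fail in a year as a single drive, with no redundancy to fall back on.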

For the record we also ran tests on an MSI K8N Diamond with nForce4 SLI, an AMD Athlon 64 FX-57 and 1GB of Corsair PC3200 RAM. This chipset doesn’t support RAID 5, so we ran tests on striped and mirrored arrays, and the results were very similar to those from the nForce4 on the Intel platform. Despite the physical differences between the two chipsets (one a single chip with no memory controller, the other a dual-chip design with a memory controller) it is clear that nVidia has hard drive control down to a fine art.
