VTRAK Performance

churnd:

I also have several older-style Apple RAIDs and a new Active Storage XRAID, which I have been using to compare to the VTRAK. The older Apple RAIDs are configured RAID5 with a 512k stripe, and the Active XRAID is dual RAID6 configured for general I/O (not sure about the stripe width). We recently purchased a VTRAK E610f 16TB SATA for additional storage.

I"ve configured the VTRAK as one large RAID6 LUN with a hot spare with a 64k stripe and using Apple's recommended settings for the controller options as per their HT1200 KB under "Manual Config". Settings for the LUN are Read Policy: Read Ahead... Write Policy: Write Back. This server is only a file server, using SMB and AFP. All testing was done locally on the server.

While I understand this config doesn't fall under Apple's recommended configurations for optimal performance, I'm getting throughput results that are very low, and I feel they could be better.
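For reference, here's the kind of quick dd sanity check I could run locally to cross-check Xbench (just a rough sketch; the mount point /Volumes/VTrak is a placeholder for wherever the LUN is mounted):

[code]
# Rough sequential write: 16 GB of zeros in 1 MB blocks, deliberately larger
# than the server's 12 GB of RAM so the read pass below isn't served from the
# OS cache. BSD dd prints bytes/sec when it finishes.
dd if=/dev/zero of=/Volumes/VTrak/ddtest bs=1m count=16384

# Rough sequential read of the same file.
dd if=/Volumes/VTrak/ddtest of=/dev/null bs=1m

# Clean up the test file.
rm /Volumes/VTrak/ddtest
[/code]

It's no substitute for Xbench's random I/O numbers, but it would be a useful cross-check on the sequential figures.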

I originally started off with the above configuration, wasn't happy with the results, and reconfigured using Apple's dual RAID5 with two hot spares config. My results were slightly better, but still nothing to get excited about.

Now, I suspect it's the 64k stripe width, but I notice that all of Apple's configuration scripts use that setting, and if you set up the LUN manually using the Express setting with the "File Server" option, it defaults to the 64k setting. I assume 64k is some kind of magic number with this particular hardware?
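Here's my back-of-the-envelope math on what the stripe width means for full-stripe writes, assuming all 15 non-spare drives are in the RAID6 set (13 data + 2 parity; I'd have to double-check the exact layout):

[code]
# Full-stripe write size = stripe unit x number of data disks.
# Writes smaller than this force a parity read-modify-write.
echo "64k stripe:  $((64 * 13)) KB per full stripe"    # 832 KB
echo "128k stripe: $((128 * 13)) KB per full stripe"   # 1664 KB
[/code]

Either way, writes smaller than a full stripe still pay the RAID6 parity read-modify-write penalty, which probably contributes to the poor small random write throughput I'm seeing.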

The only other info I've been able to find from searching around is that someone said they got better throughput from disabling "Adaptive Writeback Cache" (AWC). From what I understand, AWC is enabled to prevent data loss during a power outage IF the controller batteries are being reconditioned. Our DC has multiple UPS power backup systems, so losing power isn't really something I'm concerned about. Is it safe to leave this disabled? Should I change the LUN settings to WriteThru?

The only benchmark I've been using is Xbench to compare results. Several tests on the RAID6 LUN yielded similar results:

[b]Adaptive Writeback Cache Disabled[/b]
[code]Results 174.99
System Info
Xbench Version 1.3
System Version 10.5.8 (9L30)
Physical RAM 12288 MB
Model Xserve3,1
Drive Type Promise VTrak E610f
Disk Test 174.99
Sequential 217.72
Uncached Write 404.84 248.57 MB/sec [4K blocks]
Uncached Write 192.68 109.02 MB/sec [256K blocks]
Uncached Read 115.50 33.80 MB/sec [4K blocks]
Uncached Read 486.86 244.69 MB/sec [256K blocks]
Random 146.28
Uncached Write 49.64 5.25 MB/sec [4K blocks]
Uncached Write 158.82 50.84 MB/sec [256K blocks]
Uncached Read 4572.65 32.40 MB/sec [4K blocks]
Uncached Read 1463.73 271.61 MB/sec [256K blocks]
[/code]
[b]Adaptive Writeback Cache Enabled[/b]
[code]Results 35.49
System Info
Xbench Version 1.3
System Version 10.5.8 (9L30)
Physical RAM 12288 MB
Model Xserve3,1
Drive Type Promise VTrak E610f
Disk Test 35.49
Sequential 49.23
Uncached Write 153.71 94.38 MB/sec [4K blocks]
Uncached Write 15.91 9.00 MB/sec [256K blocks]
Uncached Read 100.26 29.34 MB/sec [4K blocks]
Uncached Read 522.69 262.70 MB/sec [256K blocks]
Random 27.74
Uncached Write 12.79 1.35 MB/sec [4K blocks]
Uncached Write 15.38 4.92 MB/sec [256K blocks]
Uncached Read 3855.73 27.32 MB/sec [4K blocks]
Uncached Read 1385.63 257.11 MB/sec [256K blocks]
[/code]
Here's a result from my [b]Active XRAID[/b]:
[code]Results 252.78
System Info
Xbench Version 1.3
System Version 10.4.11 (8S2169)
Physical RAM 2048 MB
Model Xserve1,1
Drive Type Active AC16SFC01
Disk Test 252.78
Sequential 167.65
Uncached Write 356.11 218.65 MB/sec [4K blocks]
Uncached Write 145.23 82.17 MB/sec [256K blocks]
Uncached Read 90.42 26.46 MB/sec [4K blocks]
Uncached Read 321.97 161.82 MB/sec [256K blocks]
Random 513.54
Uncached Write 406.11 42.99 MB/sec [4K blocks]
Uncached Write 254.57 81.50 MB/sec [256K blocks]
Uncached Read 3754.20 26.60 MB/sec [4K blocks]
Uncached Read 883.29 163.90 MB/sec [256K blocks]
[/code]

So, basically I'm looking for answers to these questions:

[list]
[*]Should I try using a larger stripe width?
[*]Any disadvantages of leaving AWC disabled?
[/list]

Thanks!

JesusAli:

Hmm... the situations are so different, I don't even know if comparing the numbers makes any sense, but here are my results, regardless:

[code]Results 286.88
System Info
Xbench Version 1.3
System Version 10.5.8 (9L34)
Physical RAM 4096 MB
Model Xserve2,1
Drive Type TBM_XsanVol_4F
Disk Test 286.88
Sequential 318.57
Uncached Write 201.82 123.92 MB/sec [4K blocks]
Uncached Write 163.98 92.78 MB/sec [256K blocks]
Uncached Read 796.10 232.98 MB/sec [4K blocks]
Uncached Read 4054.23 2037.63 MB/sec [256K blocks]
Random 260.92
Uncached Write 2141.84 226.74 MB/sec [4K blocks]
Uncached Write 155.21 49.69 MB/sec [256K blocks]
Uncached Read 243.55 1.73 MB/sec [4K blocks]
Uncached Read 231.76 43.00 MB/sec [256K blocks]
[/code]

Our detailed numbers are all over the place.

I am running a Promise E and J Unit, each with 16 750GB SATA disks, set up according to Promise and Apple's recommended installation scripts.

My MetaData LUN is hosted in the E Unit. I've heard that can slow down performance.
My main XSan Volume is comprised of 4 LUNs, each made of 6 disks RAID 5'ed together.

I am connecting all four 4Gb/s Fibre Channel connections to the FC switch, and the test was run from my MDC, which is connected with two 4Gb/s connections, so my bandwidth is definitely limited by the disks at this point.
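Rough link-speed math for context, assuming the usual ~400 MB/s of usable bandwidth per 4Gb/s Fibre Channel link after encoding overhead:

[code]
# Approximate usable bandwidth: 4 Gb/s FC ~= 400 MB/s per link after 8b/10b encoding.
echo "MDC host links:  $((2 * 400)) MB/s"   # two links from the MDC
echo "E-Unit uplinks:  $((4 * 400)) MB/s"   # four links into the switch
[/code]

So anything in the Xbench output much above 800 MB/s has to be coming out of cache rather than off the wire.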

churnd:

[quote=JesusAli]
Hmm... the situations are so different, I don't even know if comparing the numbers makes any sense, but here are my results, regardless:

[code]Results 286.88
System Info
Xbench Version 1.3
System Version 10.5.8 (9L34)
Physical RAM 4096 MB
Model Xserve2,1
Drive Type TBM_XsanVol_4F
Disk Test 286.88
Sequential 318.57
Uncached Write 201.82 123.92 MB/sec [4K blocks]
Uncached Write 163.98 92.78 MB/sec [256K blocks]
Uncached Read 796.10 232.98 MB/sec [4K blocks]
Uncached Read 4054.23 2037.63 MB/sec [256K blocks]
Random 260.92
Uncached Write 2141.84 226.74 MB/sec [4K blocks]
Uncached Write 155.21 49.69 MB/sec [256K blocks]
Uncached Read 243.55 1.73 MB/sec [4K blocks]
Uncached Read 231.76 43.00 MB/sec [256K blocks]
[/code]

Our detailed numbers are all over the place.

I am running a Promise E and J Unit, each with 16 750GB SATA disks, set up according to Promise and Apple's recommended installation scripts.

My MetaData LUN is hosted in the E Unit. I've heard that can slow down performance.
My main XSan Volume is comprised of 4 LUNs, each made of 6 disks RAID 5'ed together.

I am connecting all four 4Gb/s Fibre Channel connections to the FC switch, and the test was run from my MDC, which is connected with two 4Gb/s connections, so my bandwidth is definitely limited by the disks at this point.
[/quote]

Wow, Xsan makes a huge difference. :) Unfortunately, it's overkill for my setup. :(

I am happy with my current results with Adaptive Writeback Cache (AWC) disabled, but I'm slightly worried about whatever unknown consequences might come from leaving it off. Any advice?

Oh, I'd be even more interested in seeing your results on your Xsan with AWC off.

JesusAli:

churnd, how are you connecting your VTRAK E-Unit to your Client computers?

churnd:

The VTRAK is plugged directly into the Apple 4Gb quad-port FC card in my Xserve. The Xserve is a file server on my LAN.

I know it's not an Xsan setup, but I hoped I'd still get help from you guys anyway since you're experts in the field. :)

JesusAli:

[b]I haven't worked with the Promise as just a RAID unit, without Xsan... but I would guess the biggest gain would come from using BOTH controllers in the VTRAK E-Unit.[/b]

Spreading a file load across multiple physical disks at once (RAID 5 or 6) is what provides the speed, so doing that with two dedicated hardware controllers and two separate groups of disks sounds like the way to go.

So making one RAID 6 volume seems like a bottleneck, because only one controller (and consequently only its two Fibre ports) can be in charge of any one logical disk, right?

So it does seem that two RAID 5s, each made from 7 physical disks (leaving 2 global spares), would be a good idea.

But what I don't know is, can the Promise "offer up" those two RAID 5's as 1 Volume? ...or is that what you need Xsan to accomplish?

Is that what the Metadata Controllers do for Xsan, keep track of how all the LUNs are Striped together?

What about making a Disk Utility software RAID 0 out of the two RAID 5s offered by the Promise? Would the software RAID 0 be the bottleneck?
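If the Promise can't present them as one volume itself, something like this host-side stripe is what I had in mind (just a sketch: the disk identifiers are placeholders from diskutil list, and the exact verb depends on the OS release, older diskutil uses createRAID while newer versions use appleRAID create):

[code]
# Find the two RAID 5 LUNs as the host sees them (disk2/disk3 are placeholders).
diskutil list

# Stripe (RAID 0) the two LUNs into a single JHFS+ volume from the host side.
diskutil createRAID stripe "VTrakStripe" JHFS+ disk2 disk3
[/code]

The trade-off is that the stripe makes the two RAID 5 sets interdependent: lose either set and the whole volume is gone.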

Thawk9455:

Randomly came across this. The last poster is onto something, but there is more to it. The Promise controllers are also optimized for a certain number of disks, I believe 8 not counting parity disks in this case. Going over that will actually slow down the system, as it has to compute parity for more disks than it is optimized for.

Add to that the fact that you're only using one of the controllers, which also increases your bottleneck, and you're not going to see anywhere near the performance these systems should be able to provide.