LUN config for Promise Ex30 (non Apple)

BlackF1re's picture

Good Evening

I'm about to configure a Promise Ex30 (non-Apple) for an Apple Xsan environment. (Yes, Promise already confirmed it works with the hardware I have, such as a QLogic 5800, ATTO and LSI cards.)

I was wondering about the LUN Configuration.

The system is a 24-bay, dual-controller unit with 3 TB NL-SAS HDDs.

What configuration do you recommend for performance?
The workflow is mostly Full HD ProRes and 4K XAVC (so 30-40 MB/s per layer),
but I could also have to handle a layer of Sony RAW 4K from an F55 at 3840x2160 50p, which means about 500 MB/s.

2 LUNs of 11 disks in RAID 5 + 1 dedicated spare?
or
4 LUNs of 6 disks in RAID 5 without spares?

I have a dedicated 4 Gbit metadata array, because of the presence of multiple volumes.

Should ALUA be enabled for better performance? (Clients will be 10.9 / 10.8.5.)
What about the stripe size?

Thank you all for your time.

abstractrude's picture

With your 24-bay unit, I would do four 6-disk RAID 6 LUNs if you can afford to lose a little space. No hot spares, of course. You will see better read performance from RAID 6, and you will have peace of mind during those long rebuilds. ALUA should always be enabled for 10.7 or later; if you have 10.6 clients it will cause issues.
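
If it helps to see the space trade-off in numbers, here is a rough sketch (plain Python, assuming 3 TB raw per disk and ignoring formatting and filesystem overhead) comparing the layouts mentioned so far:

# Rough usable capacity for a 24-bay shelf of 3 TB disks.
DISK_TB = 3

# (number of LUNs, disks per LUN, parity disks per LUN, dedicated spares)
layouts = {
    "2 x 11-disk RAID 5 + 1 spare": (2, 11, 1, 1),
    "4 x 6-disk RAID 5, no spares": (4, 6, 1, 0),
    "4 x 6-disk RAID 6, no spares": (4, 6, 2, 0),
}

for name, (luns, per_lun, parity, spares) in layouts.items():
    usable_tb = luns * (per_lun - parity) * DISK_TB
    bays = luns * per_lun + spares
    print(f"{name}: {bays} bays used, {usable_tb} TB usable")

The RAID 6 layout gives up roughly 12 TB against either RAID 5 option, in exchange for each LUN surviving a second disk failure during a rebuild.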

The non-Apple unit works fine for most environments. Now that Promise has brought a lot of the Apple changes to their other products, the only thing the Apple unit still has over it is more RAM on the controllers. A lot more RAM, which you can probably upgrade. That RAM can be pretty important for video, though.

-Trevor Carlson
THUMBWAR

ChrisS's picture

There are a few important details that you don't include in your description:

1. Number of Xsan clients.
2. Your wiring from the x30 (how many of the 8 ports are wired).
3. Are you multi-pathing with two QLogic switches?
4. Is there a file server in your workflow? (This requires a certain data rate, typically up to 100 MB/s if it is Gig-E connected.)

Typically, for the codec styles you are using, you should estimate based on three to four streams per client (so multiply the data rate of the codecs by three or four).
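
To put rough numbers on that rule of thumb, here is a quick Python sketch; the codec rates come from the original post, while the client count and stream mix are just assumptions for illustration:

# Back-of-envelope aggregate bandwidth: clients x streams per client x codec rate.
prores_xavc_rate = 40      # MB/s per layer, upper end of the 30-40 MB/s range
sony_raw_rate = 500        # MB/s for the Sony RAW 4K 50p layer

clients = 5                # hypothetical number of editing clients
streams_per_client = 4     # three-to-four streams per client, worst case

prores_load = clients * streams_per_client * prores_xavc_rate
total_load = prores_load + sony_raw_rate   # assume one Sony RAW layer on top

print(f"ProRes/XAVC load: {prores_load} MB/s")
print(f"With one Sony RAW layer: {total_load} MB/s")

That total is what the data LUNs have to sustain in aggregate, which is why the stream count per client matters as much as the codec itself.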

In our installations we would create 2 RAID 6 LUNs with four global revertible spares (it's a little bit of waste). Check the best-practice page on Promise: http://goo.gl/ujTA7K

RAID 6 is used because of the size of physical disks these days. If you use RAID 5, a second drive failure during a rebuild could cause catastrophic data loss.
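
Another way to see why 3 TB disks push you toward RAID 6 is to estimate the chance of hitting an unrecoverable read error (URE) while rebuilding a degraded RAID 5 LUN, since every surviving disk has to be read in full. This is only a back-of-envelope Python sketch; the 1e-14 URE rate is a typical datasheet figure for nearline drives, not a measured value for this unit:

# P(at least one URE) during a RAID 5 rebuild = 1 - (1 - p)^bits_read
URE_PER_BIT = 1e-14        # typical datasheet spec for NL-SAS/SATA drives
DISK_TB = 3
BITS_PER_TB = 8e12

def rebuild_ure_probability(disks_in_lun):
    surviving = disks_in_lun - 1
    bits_to_read = surviving * DISK_TB * BITS_PER_TB
    return 1 - (1 - URE_PER_BIT) ** bits_to_read

for n in (6, 11):
    print(f"{n}-disk RAID 5 rebuild: ~{rebuild_ure_probability(n):.0%} chance of a URE")

With RAID 6 the second parity set lets the controller recover from that kind of error mid-rebuild instead of failing the LUN, which is the peace-of-mind point made above.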

abstractrude's picture

I really don't recommend going above 8 disks per LUN on a Promise unit. Performance seems to find a sweet spot around that size, and rebuild times get much longer with bigger LUNs. Also, I wouldn't use global spares in RAID 6 configs if you have easy access to the unit to install a replacement. But everyone has their own needs; for example, if you don't need the space, maybe a spare makes sense.

-Trevor Carlson
THUMBWAR

BlackF1re's picture

Good Morning guys.
In the end, I deployed the system.
I have two dedicated metadata controllers on 10.9.1
and 5 clients on 10.8.5 (planning to add 3 more of the new Mac Pros, latest edition).
The Ex30 is updated with the latest firmware, which is exactly the same as the Apple Vtrak x30's; Promise confirmed this, as well as compatibility with Apple systems and Xsan too. (No, the Xsan scripts are not there.)
It is connected with 2 fibres per controller to a QLogic 5800V, and there is an uplink of 2 x 10 Gbit cables to a QLogic 5600Q, also compatible.

Configured with 4 RAID 5 LUNs of 6 drives each, I managed to obtain 1400 MB/s from a workstation with an Atto FC84-EN.
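
As a rough sanity check on that figure (a sketch only, assuming the volume stripes evenly across all four LUNs in one data storage pool):

# 1400 MB/s aggregate over 4 LUNs vs. the heaviest single stream in the workflow
measured_aggregate = 1400   # MB/s, measured from one workstation
luns = 4
sony_raw_stream = 500       # MB/s, Sony RAW 4K 50p layer

per_lun = measured_aggregate / luns
print(f"~{per_lun:.0f} MB/s per LUN")
print(f"A single Sony RAW stream needs {sony_raw_stream} MB/s, "
      f"i.e. about {sony_raw_stream / per_lun:.1f} LUNs' worth of bandwidth")

So a single Sony RAW layer already needs more than one LUN can deliver on its own, which is why keeping the four LUNs striped together in one storage pool matters for that stream.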

The metadata partition is on another Fibre Channel array, specifically dedicated to it.

Everything went well for about 4 days, then I started experiencing random volume unmounting. (In the system I have 2 more volumes, built with old Vtraks and an AuroraLS, that are working without issues.)

It comes to mind that the Ex30 is the only one with ALUA enabled, and yes, the only known issue should be with Snow Leopard clients, but I think I'm going to disable it and see what happens.

The craziest thing is that the volume still shows as mounted on the clients (no, it's not the bug with a folder having the same name as the volume), and all the logs are absolutely clean.

Any idea?

I'll keep you posted