Sorry to spam the forums with new threads (my other one here), but does anyone know anything about Active Storage ActiveRAIDs and MPIO support? I'm curious if maybe my LUNs are configured wrong. I'm trying to get as much bandwidth through to one client as possible, but I'm hitting a limit somewhere. In the activeadmin command line tool, there's a multipath command that allows you to set the multipathing to "device" or "lun."
Device-level multipathing uses a Fibre Channel World Wide Node Name strategy of "identical", a Fibre Channel connection mode of "fabric", and per-controller LUN affinity. Device-level multipathing is required by Mac OS X.
LUN-level multipathing uses a Fibre Channel World Wide Node Name strategy of "distinct", a Fibre Channel connection mode of "fabric", and requires symmetric LUN mappings.
Is the bit about device-level multipathing being required by OS X still valid since Lion?
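For what it's worth, here's how I've been poking at what the client side actually sees (just a rough sketch; the WWNN strategy itself is probably easier to confirm from the ActiveRAID's own management interface):

  # Dump what the FC HBA(s) report: port/node WWNs, link speed, topology
  system_profiler SPFibreChannelDataType

  # Confirm the Xsan-labeled LUNs are visible from this client at all
  sudo cvlabel -l

Given that device-level mode uses the "identical" WWNN strategy and LUN-level uses "distinct" ones, checking whether the two controllers present the same node name seems like the quickest way to tell which mode is really in effect.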
We are having a major issue with our Xsan.
Here's what we're using: two Promise VTrak E-Class units and a Mac Pro that acts as the Xsan controller. The Mac Pro has Mountain Lion installed.
After a shutdown (for maintenance), the volume (which combines both RAIDs) will not start. There are error markers under the Volume, and under the LUN tab there are yellow "!" markers, but I cannot find any information about why.
Now, not sure if this is crazy or not, but a coworker thought it would be a good idea to reconfigure another Mac Pro as an Xsan controller and add the Promise RAIDs to it. (I would think this is not a good choice.)
The good news is that the volume works, but no data! :( So he did a data recovery scan and said that the "data" is there. But we know how well that goes, right?
After finding out this had been done, I reattached the RAIDs to the original Mac Pro. I then logged into the VTrak to look for any issues, and what I noticed is that the originally assigned LUN mapping was missing.
So what can I do to get the original controller to see the volumes without losing data?
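Before I touch anything else on the original controller, this is what I was planning to run, in case it tells us more (read-only checks as far as I know; "SanVol" is just standing in for our volume name):

  # Are the Xsan/StorNext LUN labels still visible from this controller?
  sudo cvlabel -l

  # Is the FSM for the volume registered and running?
  sudo cvadmin -e 'fsmlist'

  # Read-only (no-modify) check of the volume; nothing gets written
  sudo cvfsck -nv SanVol

My understanding (please correct me if I'm wrong) is that restoring the original LUN mapping/masking on the VTrak side only changes what the controllers present to which initiators, and doesn't touch the data already on the disks.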
Has anyone written a guide or blurb about this that they can share, to save me writing something up?
In essence: you have 2+ ports on an FC HBA (ATTO or Apple, etc.) in the initiators going into a QLogic switch, and then 2+ ports on the target controllers. What's best practice for gaining performance/redundancy via multipath zoning: all ports in the same zone, or split into an A-A zone and a B-B zone, etc.?
I saw pgsengstock's thread here and it jogged my memory about having to write some docs up and, worst of all, create the OmniGraffle diagrams :(
Sorry to rehash an old question here, but I'm panicking a bit.
I've done the math, and our SAN should be capable of pushing out 2000MB/s+, easy. I have a Mac Pro Server with an Apple quad-port Fibre Channel card installed. My QLogic switch ports are all set correctly per the recommended Xsan/QLogic settings from Apple. I'm barely breaking 500MB/s through this one server, though. I'm seeing all the LUNs on every FC port, so I don't understand why this is an issue. It seems to me like I've either misconfigured something or I've greatly overestimated the capabilities of this card.
Here's a summary of the fibre paths:
I've got 4x 16TB ActiveRAIDs, each configured with two 7-drive RAID5 LUNs. Each RAID controller is spanned across two switches. There are five QLogic switches in total, and all the 10Gb stacking ports are connected for maximum redundancy. The RAID controllers populate most of switches 1 and 2. The Mac Pro Server with the quad FC card is connected to switches 1-4.
I'd gladly take any recommendations for config changes or hardware upgrades. Perhaps we just need a different FC card?
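In case my math is part of the problem, here's the back-of-the-envelope version (assuming the Apple card is the 4Gb quad-port model; corrections welcome), plus what I was going to run next to see how the LUNs map onto the four ports:

  # Assumed numbers: 4Gb FC is roughly 400 MB/s usable per port
  # 4 ports x ~400 MB/s is roughly a 1600 MB/s ceiling for the card
  # ~500 MB/s observed is roughly one port's worth, which smells like
  # traffic favouring a single path rather than a slow card

  # How are the LUNs/paths spread across this client's HBA ports?
  sudo cvadmin -e 'paths'
  sudo cvlabel -l

If all eight LUNs really are reachable on every port but the I/O is mostly going down one of them, I'd guess zoning or LUN/controller affinity rather than the card itself.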
Does anyone know if the block size used by the metadata matches the block size used by the volume? I'm asking for performance-tuning purposes with ZFS ZVOLs, where you can specify the block size at creation time.
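For context, this is roughly what I'm trying to line up (volume name, pool, and sizes are made up; I'm assuming the FsBlockSize setting in the volume config is what governs the file system block size):

  # On the MDC: what block size was the volume created with?
  # (config location/format varies by Xsan version; look under
  # /Library/Preferences/Xsan/ or /Library/Filesystems/Xsan/config/)
  grep -i fsblocksize /Library/Preferences/Xsan/MyVolume.cfg

  # Hypothetical ZVOL that would back a LUN, matched to a 16K block size
  # (volblocksize can only be set when the ZVOL is created)
  zfs create -V 2T -o volblocksize=16K tank/xsan_lun0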
I have a client with a small Xsan volume (about 8TB). They got a used Xserve RAID, so we split it into 2 LUNs and added them to the volume to expand the storage. We also did a defrag afterwards.
Everything seems to work, except FCPX is now very slow, especially during exports. When we copy to/from the Xsan directly in the Finder, it's still fast, so we're not quite sure what the problem is. One thing we did notice is that when FCPX is working on a local external hard drive, the CPU load stays at 99% for the whole export. When working on the Xsan volume, however, the CPU load only hits 99% for about a quarter of the export's duration. It sounds like something is preventing FCPX from working 100% the way it should.
Wondering if anyone has run into this before and can offer some insight. I am rebooting the whole thing tonight, and if that still doesn't fix it, then I'll have to back up, wipe, and redo it.
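One thing I'm going to try before wiping is a raw throughput test straight to the volume, to separate an Xsan problem from an FCPX problem (rough sketch; the path is just where ours mounts):

  # Time a ~10 GB sequential write to the Xsan volume
  dd if=/dev/zero of=/Volumes/MyXsan/ddtest bs=1m count=10000

  # Read it back (dd prints a bytes/sec figure at the end)
  dd if=/Volumes/MyXsan/ddtest of=/dev/null bs=1m
  rm /Volumes/MyXsan/ddtest

If the raw numbers still look fine, my next suspicion is that the exports are landing on the much slower Xserve RAID LUNs we just added, since whichever LUNs the allocator favours will set the pace.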
If I have most of my storage in a configuration where there is one RAID head with a bunch of JBODs attached, so they all share a single controller and fibre port, is it still useful to let Xsan stripe across LUNs? If I didn't break the storage into smaller LUNs for Xsan to stripe across, I could be more economical and use RAID-6 with larger LUNs.
I have some RAID units that are being run through an ATTO SCSI-to-Fibre bridge. This unit went down, so I shut down my SAN while it was being repaired. Now that I have put everything back, the LUNs that were presented through the fibre bridge are not showing up anymore in Xsan Admin, and thus the volumes are not able to mount. Interestingly, the disks show up in Disk Utility and as Xsan formatted. Any ideas?
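For what it's worth, this is what I was going to check first (sketch only), since the disks clearly still exist at the OS level:

  # Do the LUNs behind the ATTO bridge still carry their Xsan labels?
  sudo cvlabel -l

  # What does the FC side report now that the bridge is back?
  system_profiler SPFibreChannelDataType

I'm holding off on relabeling anything, since as I understand it relabeling a LUN that belongs to an existing volume is destructive.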