Sorry to rehash an old question here, but I'm panicking a bit.
I've done the math (rough numbers sketched below), and our SAN should be capable of pushing out 2000MB/s+, easy. I have a Mac Pro Server with an Apple quad-port Fibre Channel card installed. My Qlogic switch ports are all set correctly per Apple's recommended Xsan/Qlogic settings. I'm barely breaking 500MB/s through this one server, though. I'm seeing all the LUNs on every FC port, so I don't understand why this is an issue. It seems to me like I've either misconfigured something or greatly overestimated the capabilities of this card.
Here's a summary of the fiber paths:
I've got 4x 16TB ActiveRAIDs, each configured with two 7-drive RAID5 LUNs. Each RAID controller is spanned across two switches. There are five Qlogic switches in total, and all the 10Gb stacking ports are connected for maximum redundancy. The RAID controllers populate most of switches 1 and 2. The Mac Pro Server with the quad FC card is connected to switches 1-4.
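For what it's worth, here's the back-of-the-envelope math I've been working from, as a rough Python sketch. The ~400MB/s-per-4Gb-port figure and the per-LUN rate are my own assumptions, not measured numbers; the point is that ~500MB/s is suspiciously close to a single port's ceiling, which makes me wonder whether the traffic is actually spreading across all four ports.

# Back-of-the-envelope throughput model -- all figures below are assumptions.
FC_PORT_GBIT = 4                           # assuming the Apple quad-port card runs 4Gb/s per port
USABLE_MB_PER_PORT = FC_PORT_GBIT * 100    # ~400MB/s usable per port after encoding overhead
HBA_PORTS = 4

LUNS = 8                                   # 4 ActiveRAIDs x two 7-drive RAID-5 LUNs
MB_PER_LUN = 250                           # assumed sequential rate per LUN

raid_side = LUNS * MB_PER_LUN              # what the storage could source in aggregate
hba_side = HBA_PORTS * USABLE_MB_PER_PORT  # ceiling of the quad-port card
one_port = USABLE_MB_PER_PORT              # ceiling if I/O sticks to a single port

print(f"RAID aggregate:      ~{raid_side} MB/s")   # ~2000
print(f"Quad-card ceiling:   ~{hba_side} MB/s")    # ~1600
print(f"Single-port ceiling: ~{one_port} MB/s")    # ~400, close to what I'm seeing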
I'd gladly take any recommendations for config changes or hardware upgrades. Perhaps we just need a different FC card?
With Xsan 2.2, Spotlight supports two different levels of searching: File system search and Indexed search.
File system search (FsSearch) - Searches against file system attributes are enabled but no indexed searching of content is performed.
Indexed search (ReadWrite) - Full file system and content searching is enabled; the index is updated as file system content changes.
By default, Indexed search is enabled when Spotlight is activated for an Xsan volume in Xsan 2.0 or later.
The FsSearch option requires Mac OS X v10.6 metadata controllers using Xsan 2.2.
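If you want a quick sanity check of whether Spotlight indexing is actually on for a mounted Xsan volume (this only shows on/off, not which search level is in use), something like the sketch below works; the volume path is a placeholder.

# Check Spotlight indexing status for a mounted volume via mdutil.
# "/Volumes/SanVolume" is a placeholder -- substitute your Xsan volume's mount point.
import subprocess

volume = "/Volumes/SanVolume"
result = subprocess.run(["mdutil", "-s", volume], capture_output=True, text=True)
print(result.stdout or result.stderr)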
Does anyone know if the block size used by the metadata matches the block size used by the volume? I'm asking for performance tuning of ZFS ZVOLs, where you can specify the block size at creation time.
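In case it helps frame the question, here's the kind of thing I'm trying to do, as a rough sketch. The config path, the FsBlockSize parameter name, and the pool/ZVOL names are all assumptions from memory; adjust for your setup.

# Read the volume's FsBlockSize from the Xsan volume config on the MDC and
# print a matching "zfs create" for a ZVOL-backed LUN. Paths and names are guesses.
import re

cfg_path = "/Library/Filesystems/Xsan/config/SanVolume.cfg"  # hypothetical volume name
with open(cfg_path) as f:
    cfg = f.read()

m = re.search(r"^\s*FsBlockSize\s+(\S+)", cfg, re.MULTILINE | re.IGNORECASE)
if m:
    block_size = m.group(1)  # e.g. "16K"
    print("Volume FsBlockSize:", block_size)
    # ZFS only lets you set a ZVOL's block size at creation time, via volblocksize
    print("zfs create -V 1T -o volblocksize=%s tank/xsan-lun1" % block_size.lower())
else:
    print("FsBlockSize not found -- check the config path / volume name")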
I have a client with a small Xsan volume (about 8TB). They got a used Xserve RAID, and we split it into two LUNs and added them to the volume to expand the storage. We also ran a defrag afterwards.
Everything seems to work, except FCPX now runs very slowly, especially during exports. Copying to and from the Xsan directly in the Finder is still fast, so we're not quite sure what the problem is. One thing we did notice: when FCPX works on a local external hard drive, the CPU load sits at 99% for the whole export, but when it works on the Xsan volume, the CPU load only hits 99% for about a quarter of the export. It seems like something is preventing FCPX from working at 100% the way it should.
Wondering if anyone has run into this before and can offer some insight. I'm rebooting the whole thing tonight, and if that still doesn't fix it, I'll have to back up, wipe, and redo it.
If I have most of my storage in a configuration where there is one RAID head with a bunch of JBODs attached so they all share a single controller and fibre port, is it still useful to let Xsan stripe across LUNs? If I didn't break the storage into smaller LUNs so Xsan can stripe them, I could be more economical and use RAID-6 with larger LUNs.
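Here's the toy model I'm picturing, just to make the trade-off concrete; all the numbers are made up for illustration.

# When every LUN sits behind the same RAID head / FC port, that port is the
# ceiling no matter how many LUNs Xsan stripes across. Numbers are invented.
def effective_throughput(num_luns, mb_per_lun, controller_limit_mb):
    return min(num_luns * mb_per_lun, controller_limit_mb)

# One big RAID-6 LUN vs. four smaller LUNs, both behind a single ~400MB/s port
print(effective_throughput(1, 350, 400))  # 350 MB/s
print(effective_throughput(4, 350, 400))  # 400 MB/s -- the port, not the LUN count, is the cap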
I have some RAID units that are being run through an Atto SCSI-to-Fibre bridge. The bridge went down, so I shut down my SAN while the unit was being repaired. Now that I have put everything back, the LUNs that used to show up from the fibre bridge no longer appear in Xsan Admin, and so the volumes can't mount. Interestingly, the disks do show up in Disk Utility as Xsan-formatted. Any ideas?
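A minimal way to check what the Xsan layer itself sees, independent of Xsan Admin, would be something like the sketch below; the cvlabel path is from memory, so adjust if it lives elsewhere on your version.

# List the labeled LUNs this host can see, straight from cvlabel.
# The tool path is an assumption (/Library/Filesystems/Xsan/bin on Xsan 2.x).
import subprocess

cvlabel = "/Library/Filesystems/Xsan/bin/cvlabel"
result = subprocess.run([cvlabel, "-l"], capture_output=True, text=True)
print(result.stdout or result.stderr)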
We have one SANBox 5602 in our facility, serving Mac Pro workstations with Atto Celerity FC-41 and 42 HBAs. It used to serve our Avid Unity with no issues. Last week I got rid of the Unity and upgraded all our systems to 10.8.4.
The problem is an invisible "quota": only the first 6 systems to be powered on get to see the storage. Any subsequent machines see nothing on the fabric, yet ALL the devices (switch and HBAs) on all ports show a good link, and the switch reports the clients as active and logged in to the fabric.
I already went through the entire fiber infrastructure: ensured all HBAs have the latest firmware and drivers, ditto for the switch, and that the SFPs and fiber runs match. The switch is fully licensed.
I tried a switch factory reset; set fixed port types and speeds (F for the hosts; the storage only takes FL); ensured only hosts have I/O StreamGuard enabled; tried putting everyone in one zone, and even disabling zoning altogether; I even tried Device Scan on the targets.
Nothing made any difference. The only workaround I know is to power-cycle everything which just restarts the "quota" again.
I should disclose that I'm not running Xsan but metaSAN... still, this is independent of any SAN software: if the underlying fabric doesn't work, nothing works (besides, I'm still considering Xsan :-) ).
Qlogic inspected the switch service dumps and couldn't find a single thing wrong with it. I'm at my wit's end, and I know most everyone here is experienced with similar fiber setups. Do you have any ideas?