I’m looking for some advice on reconfiguring our Xsan storage setup.
We currently have a VTrak E-Class with 16 × 2TB drives, and another VTrak E-Class with 16 × 1TB drives plus an attached J-Class expansion with 16 × 1TB drives.
We are looking to configure these to serve a single Xsan volume that will be used almost exclusively for HD video content (students will be using Final Cut Pro X).
When configuring a new volume for HD video in Xsan Admin, it is recommended to use LUNs in multiples of 4; in fact, if you don't, the Xsan configuration 'wizard' tells you that "For best performance, use multiples of 4 LUNs in each affinity that it used for data."
Looking through the documentation (http://support.apple.com/kb/HT1200) I don't see a way to configure the available VTraks to have a number of LUNs that is a multiple of 4. Should I just go with http://support.apple.com/kb/HT1160 for the 16 × 2TB VTrak and http://support.apple.com/kb/HT1163 for the second VTrak and its expansion chassis?
Any advice would be greatly appreciated, as it is not often I get to work on Xsan configurations and I'm definitely feeling a bit rusty!
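For what it's worth, the "multiples of 4" advice maps to how many disk nodes end up in each data stripe group. A sketch of what a 4-LUN stripe group looks like in a StorNext-style volume config, as Xsan writes it; the stripe group name, affinity, breadth, and CvfsDisk names here are placeholders, not a recommendation for this specific hardware:

```
[StripeGroup VideoSG]
Status Up
StripeBreadth 1024K
Affinity Video
Read Enabled
Write Enabled
Node CvfsDisk0 0
Node CvfsDisk1 1
Node CvfsDisk2 2
Node CvfsDisk3 3
```

The point of the wizard's warning is simply that each affinity used for data should contain a LUN count like this (4, 8, 12, …) so stripes spread evenly.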
After a month of work something went wrong, and files created on or copied to the Xsan volume started disappearing.
After this strange situation I tried to copy files from the Xsan to a backup disk, but during the copy the Finder started hanging, so I had to restart the MDC and the backup MDC.
I stopped the volume and ran sudo cvfsck -j followed by cvfsck -nv to check the integrity, which found some issues with the metadata journal. I then ran cvfsck -wv to repair the volume, and I watched a long list of files and data, about 2TB, disappear from the volume.
After the repair a list of files and folders appears.
After this command my Open Directory master stopped working.
Do you know how I can recover the lost files
and fix this error?
I have an iSCSI volume created with ATTO Xtend SAN (latest version) on Server 10.8.5.
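For anyone who finds this later: the usual guidance is to keep cvfsck read-only until you have captured its output and a backup, because -wv can discard entries it cannot reconcile. A rough sketch of that order, run on the MDC; "MyVolume" is a placeholder volume name and this obviously assumes an Xsan environment:

```shell
# Stop the volume before any fsck work ("MyVolume" is a placeholder)
sudo cvadmin -e 'stop MyVolume'

# Replay the metadata journal, then do a READ-ONLY check;
# -n means no changes are written, so this is safe to repeat
sudo cvfsck -j MyVolume
sudo cvfsck -nv MyVolume | tee ~/Desktop/MyVolume-check.txt

# Only after reviewing the report (and with a metadata/data backup)
# run the destructive write repair
sudo cvfsck -wv MyVolume
```

Once -wv has removed files there is no cvfsck-level undo; recovery at that point means restoring from backup, which is why saving the -nv report first matters.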
I know Xsan is now part of the OS, but I really bunged up the config files of one of my Xsan clients running Mavericks. I used to be able to uninstall Xsan, reinstall, and add the client back in. Can't do that now. I think I know the answer to this (re-install the OS), but how can I re-install just Xsan?
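Since there's no separate installer any more, the client-side state is just configuration files, mostly under /Library/Preferences/Xsan. An unsupported reset that is sometimes suggested is to back up and clear that directory, then re-add the client from Xsan Admin on the MDC so fresh config is pushed out. The launchd plist path below is an assumption; verify it on your own system before unloading anything:

```shell
# Back up the existing (broken) Xsan client config first
sudo cp -R /Library/Preferences/Xsan ~/Desktop/Xsan-config-backup

# Stop the Xsan daemon (path is an assumption; check with: sudo launchctl list | grep xsan)
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.xsan.plist

# Clear the per-client config (fsnameservers, automount settings, etc.)
sudo rm -rf /Library/Preferences/Xsan/*

# Reboot, then re-add the computer to the SAN from Xsan Admin on the MDC
```

This is a sketch, not a supported procedure; a full OS reinstall remains the Apple-sanctioned route.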
I've got a SANbox 9100 with five SB9004 4/2/1G blades in it and have run out of ports. I'm having a hard time sourcing 4/2/1G blades but can come up with 8/4/2G SB9008 blades pretty easily. Anyone know if it's kosher to pop those newer blades into the 9100? I'm assuming it will work fine, and that any ports with 8G at both ends will use it and downscale otherwise?
I'm looking for an ActiveRAID main chassis (with or without drives) or an expansion chassis to replace an expansion chassis that is giving me trouble. I'm also looking for some spare ActiveRAID controllers.
We are a post-production company based in Amsterdam, The Netherlands.
This is something wholly unsupported, but we're playing around with it anyway for experimental reasons. If anyone has any thoughts I would love to hear them.
We have a lab / sandbox system using commodity storage. Specifically, it's a FreeNAS server. We are using it as an iSCSI target and have set up a series of zvols as device extents. On the client side we have Mac Pros; we use one physical port for the iSCSI network, and the other port is virtualized with a connection to our public network and metadata network. We are using globalSAN on the client side to connect to the iSCSI storage. We labeled the LUNs, and Xsan let us use them to create a SAN.
So my question: one of the things we're playing with is resizing volumes and adjusting storage on the fly (or at least with a restart). Let's say I resize my zvols on the iSCSI side (which I did), which should produce a larger volume on the Xsan side, but it doesn't; the volume stays the same size. How would I tell Xsan that my LUNs (and subsequently my volume) should be larger?
Yes, this is like crossing the streams with our proton packs. It's just a fun thought experiment about using low cost hardware to take advantage of FCPX's SAN Location benefits in low cost labs.
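As I understand it, a grown LUN is not picked up automatically: the initiator has to rescan the target so the OS sees the new size, the Xsan label still records the old sector count, and the filesystem itself only grows when you tell it to. A rough, unsupported sketch of the checks involved ("MyVolume" is a placeholder):

```shell
# After growing the zvol, rescan in globalSAN (GUI action), then confirm
# the OS now reports the larger raw disk
diskutil list

# Compare against what the Xsan label still records for each LUN
sudo cvlabel -l

# Stop the volume, then apply the configuration update
sudo cvadmin -e 'stop MyVolume'
sudo cvupdatefs MyVolume
```

One caveat worth stating plainly: whether cvupdatefs can actually consume extra capacity on an *existing* LUN (as opposed to newly added LUNs in a new stripe group) varies by Xsan/StorNext version. Adding a fresh zvol as a new LUN and growing the volume with an additional stripe group is the route I would expect to work reliably.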
I am wondering how many setups out there have six switches in the fibre stack.
I am again seeing a build-out with six switches in the stack, and in the past I have seen huge issues with latency,
so I wanted feedback from users who have had success with that many switches.
I know a 9000 series is the smart way to go.
Had some serious funkiness with our SAN volume. Lots of inode issues. I ran cvfsck -j, then -nv, then -wv, which seems to have cleaned up all the errors. After trying to start the volume again, it immediately went into an RPL_upgrade process. A couple of times, that failed near the end (the fsm process crashed), then the volume would fail over to the other MDC and start the whole RPL process again. I have seen a "completed" message at least three times now, but the volume will still not mount correctly. Both MDCs have their fsm processes running in standby mode now, though the volume appears to be started and hosted by the master in Xsan Admin. Mounting the volume anywhere just causes another crash.
After running cvadmin activate on the master MDC, it eventually tells me:
Admin Tap Connection to FSM failed
This also kicks off another round of RPL upgrades.
I'm hoping I don't have to completely restore from our backup, because that'll take days… Any suggestions greatly appreciated. Thank you!
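When the FSM keeps crashing like this, the volume's cvlog usually records why before the "Admin Tap Connection to FSM failed" message surfaces in cvadmin. A quick, read-only way to see which MDC is hosting what and to catch the crash reason ("MyVolume" and the log path are the standard Xsan locations, but verify them on your install):

```shell
# List volumes and show which MDC (if any) is actively hosting each one
sudo cvadmin -e 'select'

# Tail the FSM log for the volume to find the error logged at crash time
tail -n 100 "/Library/Filesystems/Xsan/data/MyVolume/log/cvlog"
```

The lines logged just before each fsm exit are what you'd want to post or search on before deciding whether a restore is really necessary.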
I have this problem: Xsan displays an incorrect LUN size. As you can see, the Disk Utility app shows the correct configuration, but Xsan Admin displays one LUN of 20TB (I don't know where it came from) and another of 42TB (XsanLUN1).
I think it should show two LUNs of 20TB each, according to what Disk Utility shows.
The drive began experiencing read/write problems, and I think that may be part of this problem.
Can you help me?
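One thing worth checking is whether the on-disk Xsan labels still match what the OS reports; Xsan Admin takes its sizes from the labels, so a stale or corrupted label could explain a phantom size. Both of these commands are read-only:

```shell
# What the OS thinks the attached disks are, with sizes
diskutil list

# What Xsan's on-disk labels record for each labeled LUN
sudo cvlabel -l
```

If cvlabel shows a sector count that disagrees with Disk Utility for the same device, the label (rather than the RAID) is the likely culprit, though with read/write errors appearing, a failing controller or drive reporting bad capacity is also plausible.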
We've had three Qlogic 5800 switches "bricked" in under a year at work, running firmware 126.96.36.199. Two of them because of a power failure, and one simply didn't want to turn on again after being turned off. But isn't a power failure similar to turning it off? I know there is a shutdown command in the command-line interface, but it is very well hidden in some old manuals for the SANbox switches. We have always simply cut the power to turn off Qlogic 5600/5800 switches without issues, and suddenly this year three 5800 switches died when the power was cut. Not good.
Anybody else had issues with 5800-switches going into "brick mode" with the heartbeat LED constantly lit? Pushing the maintenance mode button doesn't help. It just tries to reload, and then it's stuck again.
The 5600 switches we have, though, have been running solid since 2006 without any serious issues.