Andrew Allen's picture

Xsan Failback


We have an Xsan 2.1 environment with 3 SANs, 3 MDCs, and 11 clients. Occasionally we've had the odd failover occur over the years. We're currently in a state where the 3 MDCs and 2 of the clients are running Mavericks and Xsan 3.1. We're eventually aiming to move all clients to Mavericks and Xsan 3.1, but in the meantime the older Snow Leopard machines are running Xsan 2.2.2 (build 148).

Every once in a while a SAN will fail over to its secondary metadata controller. However, recently we had our second SAN fail over and then fail BACK to the original controller. I read in the physical copy of the Xsan 2 Administration Guide (2009) that this is called failback, and that a failback should never occur automatically: it must be manually instigated by a person. However, we had a failback occur without anyone instigating it.

Has anyone experienced this? Is it a concern? I'm heading to the site to investigate the console logs and I'll post them below shortly.


Gerard's picture

Xsan Permission Propagation


Within my environment we have:

a 70 TB Xsan volume running v3.0, two 10.8.5 MDCs on Xserves, a combination of Promise E-Class and J-Class RAIDs, four QLogic 5600 switches, and another Xserve used as a file server sharing out folders from the Xsan volume.

I have noticed that when I propagate ACL permissions with full read/write from the root folders down our folder structure, the permissions don't trickle down all the way.

For example: I have a folder structure with ten nested folders, and within the last folder there are files. Viewed in Xsan Admin, the files lack the ACL rights because the last (tenth) folder doesn't have the proper rights, though the folders above it (1-9) do have the ACL rights.

Is there any type of bug within Xsan where permissions stop trickling down at a certain point?
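I'm not aware of a documented depth limit, but one way to pin down exactly where inheritance stops is to dump the tree's ACLs with `ls -leR` (run on a client or the MDC) and scan for the first entry with no matching ACE. A rough sketch of the scan in Python; the sample output and the `group:editors` ACE here are made up for illustration:

```python
# Sketch: scan captured `ls -leR` output for entries missing an expected ACE.
# Real input would come from something like:
#   ls -leR /Volumes/Xsan/Projects > acl_dump.txt

SAMPLE = """\
/Volumes/Xsan/Projects/folder9:
drwxrwxr-x+ 3 admin staff 102 Mar  1 10:00 folder10
 0: group:editors allow list,add_file,search,delete_child,file_inherit,directory_inherit

/Volumes/Xsan/Projects/folder9/folder10:
-rw-r--r--  1 admin staff 512 Mar  1 10:00 clip.mov
"""

def entries_missing_ace(listing: str, ace_substring: str):
    """Return names of entries whose listing block has no ACE containing ace_substring."""
    missing, current, has_ace = [], None, False
    for line in listing.splitlines():
        if line and line[0] in "d-l":        # a new file/dir entry line (dir, file, symlink)
            if current and not has_ace:
                missing.append(current)
            current, has_ace = line.split()[-1], False
        elif line.strip()[:1].isdigit():     # an indented "N: ..." ACE line
            has_ace = has_ace or ace_substring in line
    if current and not has_ace:
        missing.append(current)
    return missing

print(entries_missing_ace(SAMPLE, "group:editors"))   # -> ['clip.mov']
```

Once you know the folder where propagation stopped, re-running the propagate from that folder (or applying the ACE from Terminal with `chmod -R +a`) tends to be much faster than redoing the whole volume.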


larspetter's picture

Users not able to log in after recreating LDAP in Mavericks


After recreating LDAP in 10.9.2 (fresh setup, importing users/groups) and rebinding the 10.6.8 Fcsvr installation to the new LDAP, users are unable to log in.

Any ideas?

Users are able to log in on the server, so the directory binding in the OS is working.

Debugging fcsvr gives this:

dsAttrTypeStandard:RecordName lpo lpo

} 21:33:04.503661 0xb0103000  DEBUG2 findNodeForRecord node.C:165 [DS] using node /LDAPv3/ for auth
21:33:04.503681 0xb0103000  DEBUG2 runThread PmsAuthUser.C:115 [PxModelStore] found node for user:lpo
21:33:04.503703 0xb0103000  { auth::auth auth.C:33 [DS] this=0xb0102d6c
21:33:04.503716 0xb0103000  } auth::auth auth.C:33 [DS] this=0xb0102d6c
21:33:04.503729 0xb0103000  DEBUG3 runThread PmsAuthUser.C:131 [PxModelStore] waiting on sem
21:33:04.503744 0xb0103000  DEBUG3 runThread PmsAuthUser.C:133 [PxModelStore] finished waiting on sem
21:33:04.503761 0xb0103000  DEBUG2 runThread PmsAuthUser.C:143 [PxModelStore] authenticating with token: username=bHBv
21:33:04.503777 0xb0103000  { saslStart auth.C:432 [DS] this=0xb0102d6c PPS, username=bHBv
21:33:04.516185 0xb0103000  } saslStart auth.C:432 [DS] this=0xb0102d6c
21:33:04.516224 0xb0103000  DEBUG1 doAuthStep auth.C:152 [DS] auth failed with result -24
21:33:04.516249 0xb0103000  DEBUG2 runThread PmsAuthUser.C:147 [PxModelStore] auth status: -14483
21:33:04.516265 0xb0103000  DEBUG2 runThread PmsAuthUser.C:192 [PxModelStore] unsuccessful auth
21:33:04.516364 0xb0103000  { auth::~auth auth.C:39 [DS] this=0xb0102d6c
21:33:04.516470 0xb0103000  } auth::~auth auth.C:39 [DS] this=0xb0102d6c
21:33:04.516476 0xa0c54540  { readCB KsSlaveThread.C:108 [KsStream] this=0x357ce90
21:33:04.516495 0xb0103000    DEBUG3 ~node node.C:40 [DS] closing node
21:33:04.516593 0xa0c54540    DEBUG2 acceptEvent KsNode.C:148 [KsStream] accepting event [ evt { SLAVE_REPLY_VALUE = { CODE = E_LOGIN, DESC = Please re-enter the username and password or contact the server administrator. Please note that the username and password are case-sensitive., SRC_FILE = PmsAuthUser.C, SRC_LINE = 206, OPEN_DIRECTORY_ERROR = -14483 } } ] on node ["PmsTask_UserLogin" 0x3568e80, ref=4, wref=3] lockToken=1536 holding locks:(335cffd7-6133-4910-af40-0b0154e123af WR token=1536) taskState=0
dbqueue=0x2805944 needTrans<
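One small decoding note that may help narrow this down: the `username=` value in the token line is just the record name base64-encoded, which you can confirm with plain Python (nothing fcsvr-specific):

```python
import base64

# The debug line "authenticating with token: username=bHBv" carries the
# short name base64-encoded; decoding it shows which record fcsvr sent.
decoded = base64.b64decode("bHBv").decode("utf-8")
print(decoded)  # -> lpo  (matches the dsAttrTypeStandard:RecordName line)
```

So fcsvr is finding the right record (lpo) on the new /LDAPv3/ node; it's the password step (`saslStart ... PPS`) that fails, which suggests the imported users' passwords or authentication authorities didn't survive the export/import, rather than the binding itself.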

abstractrude's picture

Avid Releases 2013 Mac Pro Config Guide


Supported CPU
  • Single Intel® 6-Core Xeon® E5 processor @ 3.5GHz, 12MB cache / 1866MHz memory
  • Single Intel® 8-Core Xeon® E5 processor @ 3.0GHz, 25MB cache / 1866MHz memory
  • Single Intel® 12-Core Xeon® E5 processor @ 2.7GHz, 30MB cache / 1866MHz memory
Supported video card
  • Dual AMD FirePro D500 or D700 graphics with 3GB or 6GB GDDR5 VRAM
System storage
  • 256GB, 512GB, or 1TB PCIe-based flash storage
Standard Avid memory configurations
  • 16GB (4 x 4GB) DDR3 1866 ECC memory
  • 32GB (4 x 8GB) DDR3 1866 ECC memory
  • 64GB (4 x 16GB) DDR3 1866 ECC memory
Memory configuration constraints
  • The 12GB memory option is NOT supported, as it uses only 3 DIMM slots, which results in poor performance

henrique's picture

Enable Access Log in Xsan




I have an Active Storage array with an Xsan volume, and I need an access log to know who modifies or deletes a file.

If this log does not exist, could you recommend auditing software for the Mac?



Mac OS X 10.6.8

Xsan 2.2.1

Active Storage AC16SFC01






Expanding Xserve RAID Array - confused over whether I'll lose my data.


I'm not using Xsan, but I'm using hardware I'm hoping (praying) someone here has in-depth experience administering.


I don't want to have to reformat the RAID and I don't want to lose data because the organization will suffer extended downtime. I've used ChronoSync to back up the entire RAID twice onto two 2TB USB drives. Each one took over two days to complete, so if I mess this up and cause data loss by merging slices, it'll take at least that long to get the data back on the system.


These and Apple's documentation are a few of the many different sources of info on this topic I've sought.



My situation.


I inherited administration responsibilities for an Xserve RAID running the latest firmware (1.5.1).

The host Xserve is a dual 2 GHz Intel Xeon running Mac OS X Server 10.6.8.

The RAID had six (sigh) drives in the left controller. None in the right.

When I became admin, one of these six was dead. So I went on eBay and got three refurb ADMs (Apple Drive Modules):

--one to replace the dead drive.

--another to expand the RAID 5 to a full seven drives (from 2.5T to 2.73T).

--a third to put on the shelf as a spare.
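For anyone sanity-checking those capacity figures: assuming the ADMs are 500 GB modules (which matches the 2.5T reading for six drives), RAID 5 gives you n−1 drives' worth of usable space, and the 2.73T figure looks like the same 3 TB expressed in binary TiB. A quick check; the 500 GB drive size and the decimal/binary mix are my assumptions, not something stated in the post:

```python
GB = 10**9
DRIVE = 500 * GB                  # assumed 500 GB Apple Drive Modules
TB, TIB = 10**12, 2**40

def raid5_usable(n_drives):
    # RAID 5 stores one drive's worth of parity, so n-1 drives hold data
    return (n_drives - 1) * DRIVE

print(raid5_usable(6) / TB)       # 2.5 -> the "2.5T" figure (decimal TB)
print(raid5_usable(7) / TB)       # 3.0 TB usable after the expansion...
print(raid5_usable(7) / TIB)      # ~2.73 -> i.e. "2.73T" if displayed in binary TiB
```

If that reading is right, the two numbers are just the same array reported in different units, not a sign that the expansion shortchanged you.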


I am using RAID Admin utility 1.5.2b3.

The replacement drive rebuilt beautifully. Happy 6 drive RAID.

I put in the 7th drive and expanded. It took several days.

I'm at the point of "merging" slices, the step that gives everyone pause about losing data.


Is it true that Intel Macs can expand the file system with this slice-merging function without data loss, whereas PowerPC Macs cannot?


Next, my RAID as I inherited it was one slice. That is to say, one volume. One partition. In RAID Admin under "Arrays and Drives" all the little drives display a [1]. They did when the RAID was degraded to 5 drives. They did when I added the 6th drive replacement. They do now, having used the Expansion function under "Advanced" to add the 7th drive.


So, next question. WAS IT EVER SLICED?

That seems to be a deal-breaker for merging slices without data loss.


For this, I looked in RAID Admin at the Events tab, which dates all the way back to 2008.

I see exactly zero mentions of slicing anywhere.

RAID Admin Utility screen shot

EricInDenver's picture

RAID not mounting after doubling space

We recently added 3 new 24TB shelves to our Dot Hill array, and after 2 days the server crashed.  We've been troubleshooting for a few days now and have been working on the metadata controller to get the volume to at least mount again.  After digging and running cvfsck, we have managed to get to this point.  I have pasted where we are at now.


meta1:~admin$ sudo cvfsck -n OCEAN3


Checked Build disabled - default

*Warning*: Icb has a configuration mismatch!

Icb mismatches configuration:

  File System Stripegroup count Mismatch. exp-5 rcv-3


Created directory /tmp/cvsck451a for temporary files.

Attempting to acquire arbitration block… successful.


Creating MetadataAndJournal allocation check file.

Creating Video-1 allocation check file.

Creating Video-2 allocation check file.

*Fatal*: Metadata inode for stripe group 'Video-3' has been damaged. Giving up.

*Fatal*: cvfsck ASSERT failed "idinobuf.idi_flags & InodeFlagMetaData" file cvfsck.c, line 6072


It seems that it is not recognizing our newly added shelves, given the stripe group count mismatch.  I've read in another forum thread that the sector count could be exceeding the kernel limit.  We'd like at least to get all the RAIDs recognized so we can go in and reconfigure back to the original RAID, giving up on adding the new space for now.
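If that sector-count theory holds, the arithmetic is easy to check. Assuming the shelves are presented with 512-byte sectors (an assumption about the LUN configuration, not something in the post), a single 24 TB LUN far exceeds what a 32-bit sector count can address:

```python
SECTOR = 512                    # bytes per sector (assumed)
TIB = 2**40

lun = 24 * 10**12               # one 24 TB shelf presented as a single LUN
sectors = lun // SECTOR

print(sectors > 2**32)          # True -> overflows a 32-bit sector counter
print(2**32 * SECTOR / TIB)     # 2.0  -> a 32-bit count caps a LUN at 2 TiB
```

That 2 TiB figure is why older kernels and HBAs often have a 2 TiB-per-LUN ceiling; if that's what is biting here, carving each shelf into LUNs under 2 TiB before adding them to a stripe group would sidestep it.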


Any thoughts would be appreciated, and please forgive me if I've missed some basic info that someone may need to help with our issue.  Just ask and I'll find/supply any additional info necessary.





abstractrude's picture

Mac Pro Shipments Still Months Out

The new Mac Pro continues to have extremely constrained supplies, and shipping times are now missing their stated dates. MacRumors has an interesting article on this. Has anyone else had issues getting the new Mac Pro?

Can anyone remember an Apple product being backordered this long? Maybe the early PowerBooks?

Apple Knowledge Base's picture

About the Snow Leopard Graphics Update (Apple KB)

The Snow Leopard Graphics Update contains stability and performance fixes for graphics applications and games in Mac OS X v10.6.4.


undercover's picture

Achievement Unlocked: Upgrade!

I finally did it.

After looking forward to / fearing the upgrade for years, I finally migrated off my old Xserves to Mac minis, and moved from Mac OS X 10.5 to 10.9.

The move had been long delayed for lack of scheduled down time, lack of funding for new servers, and a little bit of fear of things going wrong.

The background story.

My clients (6 of them) were all running 10.6, even though the servers were 10.5. I almost never had trouble with Xsan itself, except the occasional ACL issue or an snfsdefrag needed when disks got near full. We had been planning to upgrade for a long time, but for one reason or another it just never happened. Even lately, I had planned to upgrade clients to new Mac Pros when they were released, but I think we will change our short-term plans to work with four new iMacs and four of our old Mac Pros.

Of course, adding new clients required an Xsan upgrade, which also required new servers, since I was on first-gen Xserves. So Mac mini + xMac chassis + old fibre cards were the plan, whenever we were able to get the funds set aside.

Things get scary.

My util server failed about a year back. It really wasn't doing much at the time, so it wasn't a great loss. But then my mdc2 failed. It was always the more stable MDC; sometimes Xsan Admin would not work well on mdc1. Also, mdc2 was my OD master, so I had to promote mdc1, which was the replica.

Fortunately, mdc1 continued to work fine.

Time to prepare.

So we purchased the Mac mini servers back in October, but we were in the middle of our busy projects, so I could not upgrade until January at the earliest. I got the team to stay on top of backups in case things went sour and to prepare for the migration. I also made a Time Machine backup of my MDC, exported the LUN labels, and copied the Xsan config folder.

Should be good to go? Am I forgetting anything?

It's time.

Shut it all down. Disconnect clients, shut down volumes, etc. Connect the mini, plug in the mdc1 Time Machine backup, import data. Go through setup.

Got hung up on network config. Apparently it locks up if you have two network ports connected. Unplug one, reboot, do it again, replug the network. No big deal. Oh wait, fibre zoning. I was using an old card, so I had to re-alias and re-zone it.

After all that, one of my Xsan volumes decided to mount! Even before I launched Xsan Admin. Launch Xsan Admin, start the second volume, things look good!

Do it again.

First problem: these minis shipped with 10.8. Shut down volumes, download Mavericks, do another Time Machine backup, install. Good to go.

And again.

Turn on the new mdc2, upgrade to Mavericks, change fibre zoning, add the second MDC in Xsan Admin. No problems.

Oh wait, where's Open Directory? 

For some reason my OD did not make it through the upgrade to 10.8. No big deal, as my OD pretty much consists of one group. Recreated it on mdc1, set up a replica on mdc2. [Note: I plan to move the primary to the util server.]

Re-add the clients!

Booted up the clients. Some of them somehow mounted one of the volumes before I re-added them in Xsan Admin. Weird. Added them in Xsan Admin, and despite complaints that their Xsan versions were older, everything worked fine.  Turned Spotlight on for both volumes. Everything is good! Woohoo!

This was not nearly as bad as I thought it was going to be!

Still to do:

  • Set up the util server for SMB shares.
  • Upgrade clients to 10.9 (some will stay at 10.6 because they are old)
  • Add new iMacs when they arrive
  • Change IP to new subnet
  • Change to new Active Directory domain
  • Figure out if I can now remove lunzero from my Nexsan SATABeast. I think Abs told me something about ALUA in 10.7+; I will have to go look at old notes.

