Andrew Allen

Can a corrupt/buggy Xsan filesystem cause extremely low read/write speeds on a SAN?

For the sake of brevity, I'll leave out the 3 months of background information on trying to resolve the problems we've had.

We have a SAN that used to get 450-550 MB/s read/write speeds. It performed this way for 4.5 years. Now it's writing and reading at much lower speeds: 40-150 MB/s write speeds and 250-300 MB/s read speeds. YES, we've investigated all the likely things: SFPs, fibre cables, the fibre switch (thoroughly), the RAID controllers, etc.

My question is simply this: does anyone know whether a degraded/corrupted/buggy Xsan filesystem can cause such heavily reduced speeds? To our knowledge, we have never run cvfsck on this Xsan volume, and it's been in heavy use for almost 5 years. I'm going to run it tomorrow. Could the Xsan filesystem be the cause of our degraded drive speeds?
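For anyone following along, a read-only check is the safe first step while deciding what to do. A minimal sketch of the two invocations, assuming a volume named MyVolume (a placeholder, substitute your own) and that cvfsck is on the MDC's path:

```shell
# Placeholder volume name -- substitute your actual Xsan volume.
VOL="MyVolume"

# -n runs cvfsck read-only (reports problems, changes nothing),
# -v makes it verbose. Safe to run while gathering information.
CHECK="sudo cvfsck -nv $VOL"

# An actual repair pass (-w rewrites metadata) should only be run
# with the volume stopped.
REPAIR="sudo cvfsck -wv $VOL"

echo "$CHECK"
echo "$REPAIR"
```

Since the complaint here is performance rather than corruption, heavy fragmentation after 5 years of constant use is also worth ruling out; Xsan ships snfsdefrag, which can report per-file extent counts.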

aaulich

Archiware P5 Archive v5.1.2 adds incremental archives to CP Archive App

Archiware has just released the 5.1.2 version of P5, their data management suite, which the CP Archive App uses to add archiving features to Cantemo Portal.

Version 5.1.2 introduces a feature I've been asked for many times in the past, and finally it's there: incremental archives.
Though P5 has supported incremental archives in earlier versions, version 5.1.2 adds this feature to the command-line API, which the CP Archive App uses. For every file the CP Archive App sends to P5 on the command line, P5 now checks whether it has already been archived and whether it has changed since then.

If a file is new or has been modified since its original archiving, P5 will archive it again; otherwise, it will not store it on tape again.
This is a huge benefit for media archives, as most restores of media files only read those files without modifying them.
The CP Archive App, like other solutions, usually triggers archiving of items automatically x days after creation, so without incremental archiving your archive can grow much faster than you expect.
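The per-file decision described above can be pictured as a tiny function. This is a toy sketch only: the real P5 keeps its own archive index, and the function name and timestamps here are made up for illustration:

```shell
# decide NAME MTIME INDEXED_MTIME
# Prints "archive NAME" if the file has never been archived (empty
# indexed mtime) or has been modified since archiving; otherwise "skip".
decide() {
  name="$1"; mtime="$2"; indexed="$3"
  if [ -z "$indexed" ] || [ "$mtime" != "$indexed" ]; then
    echo "archive $name"
  else
    echo "skip $name"
  fi
}

decide clip1.mov 1700000000 ""            # never archived
decide clip2.mov 1700000000 1700000000    # unchanged since archiving
decide clip3.mov 1700000500 1700000000    # modified since archiving
```

In media workflows the third case is rare, which is why the tape savings add up quickly.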

If you prefer to archive complete projects without linking to older, existing tapes, you can still do this. Incremental archiving can be turned on in the UI while setting up an archive plan:

As you can see in the screenshot, activating incremental archiving is a single switch in the settings of an archive plan.

I think this alone is a very good reason to upgrade (or switch) to Archiware P5 Archive.

growl88

Xsan 2 on an Xserve2,1 (10.6.8) and Mavericks

Hi everybody,

I have a question about an old OS X server and its clients. Can I have clients running different OS X versions using Xsan 2 with my 10.6.8 Xserve2,1 server?

I want to buy a new Mac, and maybe upgrade another one, but I don't know whether clients running different OS versions can coexist in the same Xsan network.

Thank you so much!


Andrew Allen

Xsan Failback


We have an Xsan 2.1 environment with 3 SANs, 3 MDCs and 11 clients. Occasionally we've had the odd failover occur over the years. We're currently in a state where the 3 MDCs and 2 of the clients are running Mavericks and Xsan 3.1. We're eventually aiming to move all clients to Mavericks and Xsan 3.1, but in the meantime the older Snow Leopard machines are running Xsan 2.2.2 (build 148).

Every once in a while a SAN will fail over to its secondary metadata controller. However, we recently had our second SAN fail over and then fail BACK to the original controller. I read in my physical copy of the Xsan 2 Administration Guide (2009) that this is called failback, and that a failback should never occur automatically: it must be manually instigated by a person. However, we had a failback occur without anyone instigating it.

Has anyone experienced this? Is it a concern? I'm heading to the site to investigate the console logs and I'll post them below shortly.


Gerard

Xsan Permission Propagation


Within my environment we have:

A 70 TB Xsan volume running v3.0, two 10.8.5 MDCs on Xserves, a combination of Promise E- and J-class RAIDs, four Qlogic 5600 switches, and another Xserve used as a file server sharing out folders from the Xsan volume.

I have noticed that when I propagate ACL permissions with full read/write from the root folders down our folder structure, the permissions don't trickle down all the way.

Example: I have a folder structure with ten nested folders, and the last folder contains files. Via Xsan Admin, the files lack ACL rights because the last (tenth) folder doesn't have the proper rights, though the folders above it (1-9) do have the ACL rights.

Is there any type of bug within Xsan where permissions stop trickling down at a certain point?
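As a workaround while investigating, an ACE can be pushed down the whole tree from the command line instead of relying on Xsan Admin's propagation. A sketch only: the group "editors" and the volume path are hypothetical placeholders, and the exact ACE rights should match what your plan grants:

```shell
# Hypothetical group and path -- substitute your own.
ACE="group:editors allow read,write,delete,add_file,add_subdirectory,delete_child,file_inherit,directory_inherit"
TARGET="/Volumes/MySanVolume/Projects"

# chmod -R +a walks every existing item and adds the ACE explicitly,
# rather than depending on inheritance flags reaching folder ten.
CMD="sudo /bin/chmod -R +a \"$ACE\" \"$TARGET\""

echo "$CMD"
```

Afterwards, `ls -le` on one of the deep files will show whether the ACE actually landed.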


larspetter

Users not able to log in after recreating ldap in Mavericks


After recreating LDAP in 10.9.2 (fresh setup, importing users/groups) and rebinding the 10.6.8 Fcsvr installation to the new LDAP, users are unable to log in.

Any ideas?

Users are able to log in on the server, so the binding in the OS is working.

Debugging fcsvr gives this:

dsAttrTypeStandard:RecordName lpo lpo

} 21:33:04.503661 0xb0103000       DEBUG2 findNodeForRecord node.C:165 [DS] using node /LDAPv3/ for auth
21:33:04.503681 0xb0103000       DEBUG2 runThread PmsAuthUser.C:115 [PxModelStore] found node for user:lpo
21:33:04.503703 0xb0103000       { auth::auth auth.C:33 [DS] this=0xb0102d6c
21:33:04.503716 0xb0103000       } auth::auth auth.C:33 [DS] this=0xb0102d6c
21:33:04.503729 0xb0103000       DEBUG3 runThread PmsAuthUser.C:131 [PxModelStore] waiting on sem
21:33:04.503744 0xb0103000       DEBUG3 runThread PmsAuthUser.C:133 [PxModelStore] finished waiting on sem
21:33:04.503761 0xb0103000       DEBUG2 runThread PmsAuthUser.C:143 [PxModelStore] authenticating with token: username=bHBv
21:33:04.503777 0xb0103000       { saslStart auth.C:432 [DS] this=0xb0102d6c PPS, username=bHBv
21:33:04.516185 0xb0103000       } saslStart auth.C:432 [DS] this=0xb0102d6c
21:33:04.516224 0xb0103000       DEBUG1 doAuthStep auth.C:152 [DS] auth failed with result -24
21:33:04.516249 0xb0103000       DEBUG2 runThread PmsAuthUser.C:147 [PxModelStore] auth status: -14483
21:33:04.516265 0xb0103000       DEBUG2 runThread PmsAuthUser.C:192 [PxModelStore] unsuccessful auth
21:33:04.516364 0xb0103000       { auth::~auth auth.C:39 [DS] this=0xb0102d6c
21:33:04.516470 0xb0103000       } auth::~auth auth.C:39 [DS] this=0xb0102d6c
21:33:04.516476 0xa0c54540       { readCB KsSlaveThread.C:108 [KsStream] this=0x357ce90
21:33:04.516495 0xb0103000         DEBUG3 ~node node.C:40 [DS] closing node
21:33:04.516593 0xa0c54540         DEBUG2 acceptEvent KsNode.C:148 [KsStream] accepting event [ evt { SLAVE_REPLY_VALUE = { CODE = E_LOGIN, DESC = Please re-enter the username and password or contact the server administrator. Please note that the username and password are case-sensitive., SRC_FILE = PmsAuthUser.C, SRC_LINE = 206, OPEN_DIRECTORY_ERROR = -14483 } } ] on node ["PmsTask_UserLogin" 0x3568e80, ref=4, wref=3] lockToken=1536 holding locks:(335cffd7-6133-4910-af40-0b0154e123af WR token=1536) taskState=0 dbqueue=0x2805944 needTrans<

abstractrude

Avid Releases 2013 Mac Pro Config Guide


Supported CPU
  • Single Intel® 6-Core Xeon® E5 Processor @ 3.5GHz, 12MB cache / 1866MHz memory
  • Single Intel® 8-Core Xeon® E5 Processor @ 3.0GHz, 25MB cache / 1866MHz memory
  • Single Intel® 12-Core Xeon® E5 Processor @ 2.7GHz, 30MB cache / 1866MHz memory
Supported Video Card
  • Dual AMD FirePro D500 or D700 graphics with 3GB or 6GB GDDR5 VRAM
System Storage
  • 256GB, 512GB, or 1TB PCIe-based flash storage
Standard Avid memory configurations
  • 16GB (4 x 4GB) DDR3 1866 ECC memory
  • 32GB (4 x 8GB) DDR3 1866 ECC memory
  • 64GB (4 x 16GB) DDR3 1866 ECC memory
Memory configuration constraints
  • The 12GB memory option is NOT supported, as it uses only 3 DIMM slots, which results in poor performance.

henrique

Enable Access Log in Xsan




I have an Active Storage array with an Xsan volume, and I need an access log to know who modifies or deletes a file.

If this log does not exist, could you recommend auditing software for the Mac?
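For what it's worth, 10.6.8 ships the OpenBSM audit subsystem, which can record file writes and deletes along with the user who performed them. A sketch of /etc/security/audit_control, where fw and fd are the file-write and file-delete audit classes; treat the exact flags as a starting point to tune:

```
#
# /etc/security/audit_control -- sketch
#
dir:/var/audit
flags:fw,fd        # audit file writes and deletes
minfree:20
naflags:lo         # non-attributable events: login/logout
```

After editing, restart auditing with `sudo audit -s` and read the logs with `praudit`. Be warned that auditing every write on a busy SAN volume generates a lot of log data.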



Mac OS X 10.6.8

Xsan 2.2.1

Active Storage AC16SFC01


Expanding xServe RAID Array - confused over whether I'll lose my data.


I'm not using Xsan, but I'm using hardware I'm hoping (praying) someone here has in-depth experience administering.


I don't want to have to reformat the RAID and I don't want to lose data because the organization will suffer extended downtime. I've used ChronoSync to back up the entire RAID twice onto two 2TB USB drives. Each one took over two days to complete, so if I mess this up and cause data loss by merging slices, it'll take at least that long to get the data back on the system.


Forum threads like these, along with Apple's documentation, are a few of the many sources of info I've consulted on this topic.



My situation.


I inherited administration responsibilities for an Xserve RAID using the latest firmware (1.5.1).

The Xserve is a dual 2 GHz Intel Xeon, running Mac OS X Server 10.6.8.

The RAID had six (sigh) drives in the left controller. None in the right.

When I became admin, one of those six was dead. So I went on eBay and got three refurb ADMs (Apple Drive Modules):

--one to replace the dead drive.

--another to expand the RAID 5 to a full seven drives. (from 2.5T to 2.73T)

--a third to put on the shelf as a spare.


I am using RAID Admin utility 1.5.2b3.

The replacement drive rebuilt beautifully. Happy 6 drive RAID.

I put in the 7th drive and expanded. It took several days.

I'm now at the "merging slices" step that gives everyone pause about losing data.


Can Intel Macs expand the file system with this slice-merging function without data loss, whereas PowerPC Macs cannot?


Next, my RAID as I inherited it was one slice. That is to say, one volume. One partition. In RAID Admin under "Arrays and Drives" all the little drives display a [1]. They did when the RAID was degraded to 5 drives. They did when I added the 6th drive replacement. They do now, having used the Expansion function under "Advanced" to add the 7th drive.


So, next question. WAS IT EVER SLICED?

That seems to be a deal-breaker for merging slices without data loss.


For this, I looked in RAID Admin at the Events tab, which dates all the way back to 2008.

I see exactly zero mentions of slicing anywhere.

RAID Admin Utility screen shot

EricInDenver

RAID not mounting after doubling space

We recently added 3 new 24TB racks to our DotHill array, and after 2 days the server crashed. We've been troubleshooting for a few days now, working on the metadata controller to get the volume to at least mount again. After digging and running cvfsck, we have managed to get to this point. I have pasted where we are now.


meta1:~admin$ sudo cvfsck -n OCEAN3


Checked Build disabled - default

*Warning*: Icb has a configuration mismatch!

Icb mismatches configuration:

  File System Stripegroup count Mismatch. exp-5 rcv-3


Created directory /tmp/cvsck451a for temporary files.

Attempting to acquire arbitration block… successful.


Creating MetadataAndJournal allocation check file.

Creating Video-1 allocation check file.

Creating Video-2 allocation check file.

*Fatal*: Metadata inode for stripe group 'Video-3' has been damaged. Giving up.

*Fatal*: cvfsck ASSERT failed "idinobuf.idi_flags & InodeFlagMetaData" file cvfsck.c, line 6072


It seems it is not recognizing our newly added racks, given the stripe group count mismatch. I've read in another forum that the sector count could be exceeding the kernel limit. We'd like at least to have all the RAIDs recognized so we can go in and reconfigure back to the original RAID, giving up on adding the new space for now.
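Before giving up on the new space, it may help to compare what the MDC can actually see against what the volume config expects. The "exp-5 rcv-3" mismatch means the configuration names 5 stripe groups but only 3 were found on disk, which points at LUN visibility rather than filesystem damage. A sketch of the two things to check (the config path assumes Xsan's default location):

```shell
# cvlabel -l lists every labeled LUN this controller can see; if the
# new DotHill LUNs are absent here, cvfsck cannot fix that.
CMD1="sudo cvlabel -l"

# The stripe groups the volume *expects* are declared in its config file.
CMD2="grep StripeGroup /Library/Filesystems/Xsan/config/OCEAN3.cfg"

echo "$CMD1"
echo "$CMD2"
```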


Any thoughts would be appreciated, and please forgive me if I've missed some basic info someone may need to help with our issue. Just ask and I'll find/supply any additional info necessary.





