xadrdm's picture

Volume = MDC LUN?

We have decided that the best way to share storage among different servers is to publish a separate volume for each server. This fits our needs: accurate reporting in client Finders of the space remaining per server, and quota control for specific areas as defined by server access.

This was the direction we were going, until we found that each volume created needs a separate metadata LUN!

Each metadata LUN needs to be on an Xserve RAID controller by itself. According to the documentation and Apple support, we cannot put more than one metadata LUN on a single Xserve RAID controller. That means if we have 4 volumes (and that is the number of servers we planned to start the Xsan with), we need 4 separate Xserve RAID sides for metadata to achieve the deployment. We are not equipped to handle that: we have only 2 Xserve RAIDs at this time, and one side of one of them is already in production as direct-attached storage for a server.

Other strategies, such as affinities, will not work for us. Affinities report the wrong space size in the Finder on clients, creating lots of confusion. In addition, quotas will not work the way we need them to: we want to restrict user space on one portion of a single server, not across the whole set of servers.

We worked for weeks and weeks on diagrams for this, and even got Apple Sales quotes with diagrams showing only one LUN on one side of one Xserve RAID used for metadata across several servers. Knowing our company's needs and budget for this project, it is regrettable that we have come this far in planning the deployment only to have this unknown pop up out of nowhere. I don't see how we could have known of this issue, and it certainly wasn't clear to us after our time with Apple Consulting Services.

Waiting for a fix for the Finder is possible, but I think it will not go over well here if we let all this equipment lie dormant until the fix has been completed by Apple, tested by us, and incorporated into every machine here.

Any creative ideas as to what we can do to deal with this issue?

ScottyD's picture

Xsan Admin doesn't show unlabeled LUNs correctly.

Xsan 1.1 and 10.3.9 on all MDCs and clients.

I've seen this on two different occasions.

The first time, it showed 1 LUN containing 2.02 TB of space. However, this was incorrect; it should have shown 2 RAID 5 LUNs with a total of 2.02 TB. I made a huge mistake and added what was listed in Xsan Admin to an existing storage pool. Then the LUN started to drop off the volume. The result was copious data loss on the Xserve RAID I was migrating from. This could have been caused by the fact that I never initialized the pre-existing Xserve RAID, i.e., the one that was showing up incorrectly. The bottom line: don't add a LUN if it shows up incorrectly.

The second occurrence of this potential bug is showing up now. I decided to redo the volume from scratch and reformatted my metadata LUN, which should show up as a 233 GB RAID 1 LUN. Instead it's showing up as 467.5 GB. What should I do? My guess is that I need to label my LUNs using the command line.
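If it does come down to labeling by hand, a minimal sketch of that workflow, assuming the StorNext-derived `cvlabel` utility that ships with Xsan (the temp file path and label name are illustrative):

```shell
# List the LUNs the MDC can see, with any existing labels
sudo cvlabel -l

# Generate a label template for visible LUNs, edit it, then apply it
sudo cvlabel -c > /tmp/luns
# (edit /tmp/luns so the metadata LUN gets a meaningful label, e.g. MetaLUN)
sudo cvlabel /tmp/luns
```

Re-labeling destroys any data on the LUN, so this is only sensible on a LUN you have already decided to reformat, as here.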

Any suggestions or insights?

Scott Dominick

Drocko's picture

Volume expansion questions

Hello everyone.

I'm setting up a new Xsan for a group of video editors. I have one XServe RAID unit set up with 2 MDC's and everything is working great.

One of our video editing G5s is not on the Xsan right now. It has its own Xserve RAID hooked up to it, and that RAID is filled with data (about 2 TB of it).

1. Can I put a second HBA (connected to the Xsan) into the G5 that already has the Xserve RAID attached, and upload the data from the RAID to the Xsan?

2. After I do that, can I add this Xserve RAID to the Xsan? Will I lose the data that is on the SAN? Will the volume on the Xsan grow?

I can't lose any data while doing this. Is this possible?

Thanks in advance.

Pogo Films's picture

Xsan coping with high data rates

We are toying with the idea of using shared storage. Our facility consists of 6 suites. Our maximum requirements are 1 SD online (10-bit uncompressed PAL), 2 HD onlines (10-bit uncompressed 1080), 1 HD VFX suite (10-bit uncompressed 1080), and 2 DV offlines. We were really looking forward to implementing Xsan, as shared storage would save us loads of aggravation moving between suites and, to a degree, allow multiple suites to work on a series.

We have been made aware of concerns that our required data rates could cause Xsan to fall over. I had thought that more RAID controllers = more bandwidth, and that if we have one RAID for each HD suite we should be fine.
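As a sanity check on the aggregate requirement, a rough back-of-envelope calculation (assuming v210-style 10-bit 4:2:2 packing at 16 bytes per 6 pixels, 25 fps for the 1080 streams, and about 3.6 MB/s per DV25 stream):

```shell
# Per-stream rate in MB/s for uncompressed 10-bit 4:2:2 video
# (assumes v210-style packing: 16 bytes per 6 pixels, ~2.67 B/px)
rate() { awk -v w="$1" -v h="$2" -v f="$3" 'BEGIN { printf "%.1f", w*h*f*16/6/1e6 }'; }

sd=$(rate 720 576 25)      # 10-bit uncompressed PAL SD
hd=$(rate 1920 1080 25)    # 10-bit uncompressed 1080 (25 fps assumed)
# 1 SD online + 2 HD onlines + 1 HD VFX + 2 DV offlines (DV25 ~3.6 MB/s each)
total=$(awk -v s="$sd" -v h="$hd" 'BEGIN { printf "%.0f", s + 3*h + 2*3.6 }')
echo "SD ~${sd} MB/s, HD ~${hd} MB/s, aggregate ~${total} MB/s"
```

That puts the worst case in the neighborhood of 450 MB/s, which is far more than a single RAID controller of this era can sustain, so the question really is how many controllers the volume must be striped across, with headroom to spare.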

We were going to start with 3 RAIDs and add more as needed. We have seen what sort of setup is needed through meetings with Apple and others, and Apple says it works! But we feel it is too much expense for a gamble, as more and more of our work depends on HD capabilities. Being in the UK, there are precious few people doing HD on FCP, and I haven't been able to spot anyone doing HD on Xsan at the moment.

Are these valid concerns?
Can we get the data rates high enough to support HD through Xsan on top of our other suites with the proper redundancy?

Thank you and awaiting your reply....

xadrdm's picture

Xsan 1.1 and ACLs

Just a quick question here. I am curious to know whether there is any way an Xsan-based volume can be made to allow ACL controls, rather than just the basic POSIX permissions, from either Workgroup Manager or the CLI.

The reason for this is that Windows-based clients bound to the directory are already being managed with ACLs on other regular server shares, and it would be nice to allow them on the Xsan-based folders, whether those are shared out over Fibre Channel or over TCP/IP.
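For reference, on a system that supports ACLs at all (a 10.4-era assumption; whether Xsan 1.1 volumes honor them is exactly the open question), the commands would look something like the following. The volume path and group name are hypothetical:

```shell
# Enable ACL support on the volume (10.4-era fsaclctl)
sudo fsaclctl -p /Volumes/SanVol -e

# Grant an ACE so the "editors" group can work inside a project directory
sudo chmod +a "editors allow list,add_file,search,delete,readattr,writeattr" /Volumes/SanVol/Projects

# Inspect the result (-e shows ACL entries alongside POSIX bits)
ls -le /Volumes/SanVol/Projects
```

This is a sketch of the general OS X ACL mechanism, not a statement that it works on Xsan 1.1; clients still on 10.3.9 have no ACL support at all.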

Thanks for any information on this...

jbrooksii's picture

Apple G5 Xsan MDCs with Win2k clients

We are having an issue trying to connect Win2k clients to the SAN. This is a new SAN with 2 G5 Xserves (MDCs), a SANbox 5200 switch, and 2 Win2k ADIC clients. Latest and greatest on all software: Xsan 1.1, and ADIC 2.5.1 on the Windows boxes. The 2 G5s can mount the SAN volume fine, but I can't for the life of me get the Win2k clients to see the SAN volume. All servers can see each other on the metadata network (192.168.255.x), and all servers can see the FC-attached storage. Still talking with ADIC, but I wanted to hear from the real world.

nick's picture

A Umask Tutorial

Take Off That Umask So We Can See You


When a user creates a file in OS X, that file receives an initial set of permissions determined by the system's umask, or user file mode creation mask. The default umask in OS X dictates that the creator of a file has full read and write access, while everyone else has read-only access. This pesky default umask has bothered OS X users for quite some time. Users create files and drag them to server volumes shared by groups, only to realize that other group members can read the files, but not modify them. Collaboration is effectively broken, but not beyond repair ...
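The default behavior is easy to demonstrate in a shell session. With the usual 022 umask, new files come out group read-only; relaxing it to 002 makes them group-writable:

```shell
# With the default 022 umask, group members get read-only access
umask 022
touch report.txt
ls -l report.txt        # -rw-r--r--

# With a 002 umask, new files are group-writable
umask 002
touch shared.txt
ls -l shared.txt        # -rw-rw-r--
```

The new mode is simply 666 (for files) with the umask bits masked off, which is why 002 clears only the world-write bit.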

ron's picture

Interesting question

Interesting question this morning:

Hi Ron. One of my editors is having trouble: when he opens our SAN through the Finder, his performance becomes extremely sluggish. Looking at Activity Monitor at that point, the Finder shows as hogging all of the system resources. One thing he thought could be the cause: when he clicks on the SAN and goes to "View Options," the "Calculate all sizes" option is checked (as it seems to be on every system). He unchecks it, but when he looks at the options again, it is checked once more. Could this be the problem, and if so, how do we change the setting so the SAN doesn't continually revert to calculating its size? Thanks.

P.S. Clients and MDCs are running 10.3.9.
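One thing worth trying, on the assumption that the Finder is persisting the "Calculate all sizes" view setting in per-folder .DS_Store files on the volume (the mount point below is hypothetical): clear those files, then uncheck the option again.

```shell
# Remove per-folder Finder view settings across the volume
# (they will be recreated the next time a folder is viewed)
sudo find /Volumes/OurSan -name ".DS_Store" -delete
```

On a shared volume, another client with the option still enabled can write the setting back, so it may need to be unchecked on each machine.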

brandon's picture

Xsanity Defrag 1.1 Released


[url=http://www.xsanity.com/easyfile/file.php?show=20050707160216586]Download Xsanity Defrag 1.1[/url]

Xsanity Defrag 1.1 has been updated for Xsan 1.1. The app is a GUI wrapped around the snfsdefrag command-line utility that ships with Xsan, allowing you to defragment files on Xsan volumes or storage pools with ease. It indicates the level of fragmentation on your Xsan volumes or storage pools, and allows you to monitor the progress of the defragmentation process.

Open files or files that have been modified in the past 10 seconds are not defragmented. If a file is modified while defragmentation is in progress, defragmentation will stop and the file will be skipped.

Xsanity Defrag will run only as superuser. It is recommended that Xsan volumes be unmounted on all client computers before defragmentation.
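For those who prefer the underlying tool directly, the equivalent `snfsdefrag` invocations look roughly like this (paths are illustrative):

```shell
# Report a file's extent count without modifying it
snfsdefrag -c /Volumes/SanVol/Captures/clip001.mov

# Recursively defragment a directory tree
sudo snfsdefrag -r /Volumes/SanVol/Captures
```

The GUI adds the fragmentation reporting and progress monitoring on top of these calls.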


brandon's picture

Xsanity Defrag 1.1 Released

We're happy to announce an update to our home-grown Xsan defragmentation app, Xsanity Defrag.


We'll be providing support for the app in a dedicated forum on this site. All feedback is welcomed!

