xsanguy

Client kernel panics - Apple 4Gb FC adapters?

MDC OS: 10.7.5

Client OS: 10.7.4 Server - Mac Pro 4,1 - 4Gbit LSI 7404EP.

The client copies a few (sometimes a few hundred) GB to the Xsan volume, then kernel panics hard.

Stable if not accessing the FC a lot.

Other machines on the SAN are fine (but they are 2Gbit).
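
A couple of things might be worth checking before blaming the 4Gb card itself: whether that client is booted into the 64-bit kernel (see the Apple KB article in the next post for which Macs do so by default), and which kext the panic reports actually blame. A minimal sketch using standard 10.7 commands, nothing Xsan-specific:

# Kernel architecture the client is currently running (look for RELEASE_X86_64 vs RELEASE_I386)
uname -v
# Recent kernel panic reports; the backtrace usually names the offending driver/kext
ls -lt /Library/Logs/DiagnosticReports/*.panic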

Thoughts?

Apple Knowledge Base

Mac OS X v10.6: Macs that use the 64-bit kernel (Apple KB)

Learn which Macs can use the 64-bit kernel in Mac OS X v10.6 and which use it by default.

Read more: http://support.apple.com/kb/HT3770

jtopoleski

Should I upgrade to 10.7 or go straight to 10.8?

We are prepping to upgrade our MDCs to 10.7, but I was wondering if I might be better served jumping to 10.8, since the hardware they are editing on will eventually be replaced and the replacements will likely be 10.8 clients, forcing me to upgrade anyway.

Right now we are running 10.6.8 and everything is running fine, but we are planning to upgrade our Atempo and CatDV software, so in my mind it would make sense to upgrade the servers anyway.

What has everyone else done? Are there any pitfalls I should be wary of, beyond the normal prep you do for MDC upgrades?

mcjunkie

Corrupted inode. Volume won't start.

Yesterday we ran into a big problem.
After shutting down and rebooting the MDC, the Xsan volume suddenly won't start.
Please refer to the cvfsck output below. Has anyone ever seen this kind of error?
If there is any person or company that can fix this issue, please contact me (mcjunkie@malgn.com).
I would like to pay to have the volume fixed as soon as possible.

Currently all LUNs (the metadata LUN and user data LUNs) are recognized normally.
I'm in South Korea, and you can access the MDC remotely.

MDC OS : 10.8.1
Xsan 3.0

Please let me know if you have any question.

sh-3.2# cvfsck -nv XSAN
Checked Build disabled - default.

BUILD INFO:

  !@$ Revision 4.2.2 Build 7443 (480) Branch Head
  !@$ Built for Darwin 12.0
  !@$ Created on Sun Jun 24 23:05:13 PDT 2012

Created directory /tmp/cvfsck251a for temporary files.
Creating Meta allocation check file.
Creating Data1 allocation check file.
Creating Data2 allocation check file.

 ** NOTE ** Read Only Check.

File system journal will not be recovered.
The results may be inconsistent and mis-leading.

Super Block information.
FS Created On : Wed Aug 29 16:10:19 2012
Inode Version : '2.6' - 4.0 inode version with big inodes (0x206)
File System Status : *Dirty*
Allocated Inodes : 13312
Free Inodes : 13278
FL Blocks : 3
Next Inode Chunk : 0x696b
Metadump Seqno : 0
Restore Journal Seqno : 0
Windows Security Indx Inode : 0x6
Windows Security Data Inode : 0x7
Quota Database Inode : 0x8
ID Database Inode : 0xc
Client Write Opens Inode : 0x9

Stripe Group Meta ( 0) 0x3a35400 blocks.
Stripe Group Data1 ( 1) 0x57500c00 blocks.
Stripe Group Data2 ( 2) 0x57500c00 blocks.

Inode block size is 1024

Building Inode Index Database 13312 (100%).

 *Error*: buildinodes: Corrupted IEL block/0x696b Index 208: Repairing.
 *Error*: buildinodes: Corrupted inode 0xd2d6d0 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d1 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d2 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d3 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d4 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d5 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d6 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d7 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d8 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6d9 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6da (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6db (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6dc (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6dd (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6de (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6df (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e0 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e1 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e2 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e3 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e4 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e5 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e6 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e7 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e8 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6e9 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6ea (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6eb (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6ec (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6ed (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6ee (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6ef (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f0 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f1 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f2 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f3 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f4 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f5 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f6 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f7 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f8 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6f9 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6fa (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6fb (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6fc (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6fd (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6fe (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d6ff (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d700 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d701 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d702 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d703 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d704 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d705 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d706 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d707 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d708 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d709 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d70a (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d70b (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d70c (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d70d (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d70e (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d70f (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d710 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d711 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d712 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d713 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d714 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d715 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d716 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d717 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d718 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d719 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d71a (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d71b (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d71c (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d71d (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d71e (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d71f (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d720 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d721 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d722 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d723 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d724 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d725 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d726 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d727 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d728 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d729 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d72a (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d72b (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d72c (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d72d (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d72e (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d72f (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d730 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d731 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d732 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d733 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d734 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d735 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d736 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d737 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d738 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d739 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d73a (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d73b (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d73c (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d73d (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d73e (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d73f (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d740 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d741 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d742 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d743 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d744 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d745 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d746 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d747 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d748 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d749 (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d74a (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d74b (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d74c (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d74d (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d74e (Bad Marker(s)).
 *Error*: buildinodes: Corrupted inode 0xd2d74f (Bad Marker(s)).

Building Inode Index Database 13824 (103%).

 *Error*: buildinodes: Corrupted IEL block/0x698b Index 464: Repairing.

Segmentation fault: 11
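
For anyone searching later, a rough sketch of the usual next steps on a StorNext-based volume in this state, assuming the LUNs themselves are healthy and you accept the risk of a repair pass. The volume name XSAN is taken from the output above; flag meanings are as documented for cvfsck (-n read-only, -j journal replay only, -w write fixes), so double-check against the man page on your MDC. Given that even the read-only check segfaults here, involving Apple or Quantum support before any write pass would be prudent:

# Stop the FSM on the MDCs so nothing else touches the metadata
sudo cvadmin -e "stop XSAN"
# Try a journal replay first (less invasive than a full repair)
sudo cvfsck -j XSAN
# Full repair pass: -w writes fixes to the metadata, -v is verbose
sudo cvfsck -wv XSAN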

cinesys-jdub

Can we help you convert from Xsan to StorNext?

What are your expectations of Xsan?

Is Xsan fitting your needs?

As a reseller, it has been brought to my attention that there is an aggressive conversion program to assist with the financial side of converting from Xsan to Quantum's StorNext. For more information, feel free to contact us regarding the special pricing that's available. http://www.quantum.com/Solutions/Apple/Index.aspx

The conversion to StorNext-based MDCs is quite simple. Your current hardware and clients are supported with StorNext, and for newer Lion and Mountain Lion clients the client license is free, making the transition more worthwhile.

A little info about who CineSys Oceana is -
Cinesys Oceana is the result of the merger of two of North America’s premier distributors of hardware, software, and specialized services to the Media and Entertainment Industry.

CineSys, founded in 1997, is well known as the industry’s leading supplier of motion picture hardware, software, and engineering support in the southern United States.

Oceana Digital, headquartered in Toronto, Canada, is one of the leading suppliers of systems, support, and software to the Media and Entertainment Industry across Canada and the north-eastern United States.

With offices in New York, Miami, Atlanta, Dallas, Houston, Toronto, and Vancouver, the new CineSys Oceana will continue to provide exceptional sales and support services to the Media & Entertainment Industry in most major production centers across the continent.

We were honored with an award this year at IBC as well as being a finalist (http://www.ibc.org/page.cfm/link=328).

You can find us at:
http://cinesysinc.com

tomran

Network groups are "fetching" on Xsan Volumes ONLY

We're using an all-Apple OD infrastructure with 10.7.4 and Xsan 2.3, and have been experiencing this fetching issue for the past year or so, ever since 10.6.
Not long ago we came across Apple's official article on how to fix hexadecimal group IDs such as FFFFEEEE-DDDD-CCCC-BBBB-AAAA82000400: http://support.apple.com/kb/TS3556.

After reading Apple's article, we found that the problem occurs whenever a user logs into the machine BEFORE OD communication has completed.
To get things working, we usually unmount the Xsan volume and then re-mount it once the user is ALREADY logged in. This worked for us for a couple of months, until we came across a new problem: after re-mounting, we can see the right permissions in Terminal, but in Finder it's still fetching.

Finder vs. Terminal:
http://i48.tinypic.com/35leqnm.png
http://i45.tinypic.com/4zusro.png

I've tried re-launching Finder without any luck; it seems like every time I re-launch Finder, a different network group appears as "Fetching..."
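
A hedged way to narrow this down is to check whether the group on a "Fetching..." folder actually resolves through Open Directory at that moment; if it resolves in Terminal but not in Finder, that points at Finder/Directory Services caching rather than the volume's ACLs. The volume, folder and group names below are placeholders, and only standard OS X directory tools are used:

# Show the ACL on a folder Finder reports as "Fetching..."; resolvable groups appear by name
ls -led /Volumes/XsanVolume/SomeFolder
# Ask Directory Services to resolve the group by name
dscacheutil -q group -a name editors
# Flush the local directory cache, then relaunch Finder and compare
sudo dscacheutil -flushcache
killall Finder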

dombera

Zoning help: Qlogic, ActiveRaid and Promise VTrak

Hello Everyone!
I am very new to Xsan and am currently having some performance issues with our setup.
When I started at my new workplace there wasn't any zoning set up on our Xsan fabric.
I have started reading up and playing with the setup, but I believe I can make things run much smoother and faster.

Equipment we have:

Fibre switches:
1. Qlogic 5602
2. Qlogic 5200
3. Qlogic 5600

I have updated every switch to the same latest firmware version: 7.4.0.29.0.
All switches have all ports licensed.
All of them are connected to each other via XPAK cables, utilising all 4 XPAK ports on each switch.
Each has its own IPv4 address assigned; IPv6 is disabled. Each switch has a different domain number: 1, 2, 3.

Metadata switches:
1x Netgear GS724T
1x Netgear GS724TS

Metadata Controllers:
Primary: OSX 10.6.8 , XSan 2.2 (Build 248)
Secondary: OSX 10.6.8 , XSan 2.2 (Build 248)

Storage:
1x VTrak E610f (head unit)
1x VTrak J610f
1x Active Storage ActiveRaid ES

Clients:
Around 15 OSX clients

Storage is connected to the switches with the original Apple 4Gb cables.
Finisar transceivers all round, plus brand-new fibre cables.

Current zoning setup:
1x zone set containing:
2x zones: Controllers&Storage, Clients&Storage
3x aliases: ClientsOnly, Controllers (the 2 metadata controllers), Storage (the VTraks and the ActiveRaid)

Zone membership:
1. Controllers&Storage: Controllers + Storage aliases
2. Clients&Storage: ClientsOnly + Storage aliases

Now, I have read many documents and online guides, but there are so many opinions and variations.
I was wondering if the specialists here could help me optimise the performance of my Xsan.
With a few clients pushing data it is fast: performance seems to be OK, around 25 minutes to copy a 70GB file.
When more clients kick in, performance seems to drop drastically: files just sit and wait, as if queued, before they start moving.
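
For reference, 70GB in about 25 minutes works out to roughly 47MB/s even with a single client, so it may be worth baselining raw throughput before and after any zoning change. A quick sketch with dd; the volume path, file name and sizes are placeholders:

# Write test: ~10GB of zeros to the Xsan volume
dd if=/dev/zero of=/Volumes/XsanVolume/ddtest bs=1m count=10240
# Read test: use a file larger than the client's RAM so caching doesn't flatter the result
dd if=/Volumes/XsanVolume/ddtest of=/dev/null bs=1m
rm /Volumes/XsanVolume/ddtest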

What do you guys think of my zoning?
I have read that I should create a zone for each client, containing only that client plus the storage alias; this is supposed to separate traffic and increase performance.

I would really appreciate anyone taking a look at this thread, and any comments or input.

Thank you in advance.

LilooTubs

Heh... need to get X-Rumer 7.5.31

Oops! I want to get xrumer 7.5.31 Elite for free.
Or maybe for money. Is anybody selling? I can pay via PayPal!

Thank you

PaulRS

Duplicate LUN

Hi All;

I have an installation with 4 volumes, 4 MDCs, and all Promise storage (12 E/J pairs + 3 x30s). The trouble is that Xsan Admin reports a duplicate LUN on all but one of the machines (I have tried it on all the MDCs and 2 clients). It's the metadata LUN for one of the volumes. Two of the MDCs pick the bogus LUN and will not start the FSM for this volume. From the command line, cvlabel -L, cvadmin disks, and diskutil list do not show the duplicate LUN.

I have tried trashing the Xsan Admin support & cache files and reconnecting it to the SAN, but it still shows the duplicate LUN, and the same 2 MDCs can't host the volume. I can't figure out where Xsan Admin gets its LUN list from. These are all Xserve 3,1 machines running 10.7.4.
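
In case it helps with comparing notes, a rough way to confirm that the command-line view of the LUNs really is identical across nodes, using the same cvlabel/cvadmin/diskutil tools mentioned above (the host names in the diff line are placeholders):

# On each MDC and client, capture what the Xsan tools and the OS see
sudo cvlabel -l > /tmp/luns-$(hostname).txt
sudo cvadmin -e "disks" >> /tmp/luns-$(hostname).txt
diskutil list >> /tmp/luns-$(hostname).txt
# Then compare any two machines, e.g. an MDC that shows the duplicate vs. one that doesn't
diff /tmp/luns-mdc1.txt /tmp/luns-mdc2.txt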

Thanks for your help
Paul

yoth

StorNext FX client version

Does anyone know how to find out which version we currently have installed on our one Windows client?

All I can find is client revision 3.1.2 build 8.

I need to know if it is 1.4, 2 or 2.2.

I'm looking at upgrading our 2.2 Xsan to Mountain Lion, and this is the first of the hurdles I'm researching.
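
One thing that might work, assuming the FX client ships the standard StorNext command-line tools: cvversions reports the installed client software revision, which can then be checked against Quantum's Xsan/StorNext compatibility matrix to see which FX release (1.4, 2 or 2.2) it corresponds to. On Windows it should live under the StorNext install directory (the exact path is an assumption, something like C:\Program Files\StorNext\bin):

cvversions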

thanks!
