Xsan Volume Starts and then Stops Within a Few Seconds

T.K Sreekar:

Hi,

We have set up Xsan 2.0, which suddenly stopped working one day. I have restarted the MDC and the Promise Vtrak and run the cvfsck command with the options -vj, -vn, and -vw, but with no positive results. After that, the volume starts and then stops after a few seconds. I am afraid I may have to rebuild the volume.
Is there anyone who can help me sort out this issue?
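For reference, the usual cvfsck sequence on a stopped Xsan volume looks roughly like the following (a sketch only; `STORAGE` is the volume name from this setup, the volume must be stopped first, and this is run as root on the MDC):

```shell
# -n : no-modify (read-only) check; reports whether the volume is clean or dirty
# -j : journal recovery pass
# -w : write mode, i.e. actually repair what the check found
# -v : verbose output

sudo cvfsck -nv STORAGE   # dry run first: report problems, change nothing
sudo cvfsck -jv STORAGE   # recover the journal
sudo cvfsck -wv STORAGE   # repair pass
```

The read-only pass is worth repeating after any repair, so you know whether the volume actually came out clean.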

Configuration:
Xserve Intel Xeon - 01 (PMC)
Promise Vtrak E-Class 12TB - 01
Mac Pro Clients - 03
Mac OS X 10.5.2
Xsan 2.0

One Mac Pro client acts as the SMC. I have copied a few entries from the system log of the PMC:

Jul 29 12:40:48 pmc servermgrd[50]: xsan: get_fsmvol_stripe_groups: SNFS Generic Error
Jul 29 12:40:48 pmc fsm[109]: PANIC: /Library/Filesystems/Xsan/bin/fsm "OpHangLimitSecs exceeded VOP-Rename 183.06 secs Conn[125] Thread-0xb4eb5000 Pqueue-0x20dc38 Workp-0x67d0018 MsgQ-0x67d0008 Msg-0x67d005c now 0x4532456de73e4 started 0x453244bf53999 limit 180 secs.\n" file queues.c, line 619
Jul 29 12:40:48 pmc KernelEventAgent[56]: tid 00000000 received VQ_NOTRESP event (1)
Jul 29 12:40:48 pmc servermgrd[50]: xsan: [50/103B90] ERROR: get_fsmvol_stripe_groups: Unable to get UTF8 string of †¬Á^A
Jul 29 12:40:48 pmc KernelEventAgent[56]: tid 00000000 type 'acfs', mounted on '/Volumes/STORAGE', from '/dev/disk4', not responding
Jul 29 12:40:48 pmc servermgrd[50]: xsan: get_fsmvol_stripe_groups: SNFS Generic Error
Jul 29 12:40:48 pmc servermgrd[50]: xsan: [50/103B90] ERROR: get_fsmvol_stripe_groups: Unable to get UTF8 string of †¬Á^A
Jul 29 12:40:48 pmc KernelEventAgent[56]: tid 00000000 found 1 filesystem(s) with problem(s)
Jul 29 12:40:48 pmc loginwindow[55]: 1 server now unresponsive
Jul 29 12:40:48 pmc fsm[109]: Xsan FSS 'STORAGE[0]': PANIC: aborting threads now.
Jul 29 12:40:48 pmc com.apple.launchd[1] (com.apple.servermgrd[50]): Exited abnormally: Broken pipe
Jul 29 12:40:49 pmc kernel[0]: Reconnecting to FSS 'STORAGE'
Jul 29 12:40:49 pmc fsmpm[103]: PortMapper: Initiating activation vote for FSS 'STORAGE'.
Jul 29 12:40:53 pmc kernel[0]: Reconnect successful to FSS 'STORAGE' on host '10.0.0.6'.
Jul 29 12:40:53 pmc fsmpm[103]: PortMapper: Reconnect Event for /Volumes/STORAGE
Jul 29 12:40:53 pmc kernel[0]: Using v2 readdir for 'STORAGE'
Jul 29 12:40:53 pmc KernelEventAgent[56]: tid 00000000 received VQ_NOTRESP event (1)
Jul 29 12:40:53 pmc fsmpm[103]: PortMapper: Requesting MDS recycle of /Volumes/STORAGE
Jul 29 12:40:53 pmc loginwindow[55]: No servers unresponsive
Jul 29 12:40:54 pmc servermgrd[17304]: servermgr_calendar: created default calendar virtual host
Jul 29 12:41:01 pmc ReportCrash[17314]: Formulating crash report for process fsm[109]
Jul 29 12:41:01 pmc fsmpm[103]: PortMapper: FSS 'STORAGE' disconnected.
Jul 29 12:41:01 pmc fsmpm[103]: PortMapper: kicking diskscan_thread -1338454016.
Jul 29 12:41:01 pmc fsmpm[103]: Portmapper: FSS 'STORAGE' (pid 109) exited on signal 6
Jul 29 12:41:01 pmc ReportCrash[17314]: Saved crashreport to /Library/Logs/CrashReporter/fsm_2008-07-29-124100_PMC.crash using uid: 0 gid: 0, euid: 0 egid: 0
Jul 29 12:41:11 pmc fsmpm[103]: PortMapper: RESTART FSS service 'STORAGE[0]' on host pmc.pub.vasanth.tv.
Jul 29 12:41:11 pmc fsmpm[103]: PortMapper: Starting FSS service 'STORAGE[0]' on pmc.pub.vasanth.tv.
Jul 29 12:41:11 pmc fsmpm[103]: PortMapper: FSS 'STORAGE'[0] (pid 17315) at port 55625 is registered.
Jul 29 12:41:18 pmc com.apple.metadata.mds[54]: XSANFS_FSCTL_SpotlightRPC fsctl failed (errno = 35)ERROR: _MDSChannelInitForXsan: _XsanCreateMDSChannel failed: 35
Jul 29 12:41:18 pmc com.apple.metadata.mds[54]: ERROR: _MDSChannelXsanFetchAccessTokenForUID: dead channel
Jul 29 12:41:18 pmc mds[54]: (Error) Message: MDSChannel RPC failure (fetchPropertiesForContext:) [no channelAccessToken]
Jul 29 12:41:18 pmc mds[54]: (Error) Store: {channel:0x33e610 localPath:'/Volumes/STORAGE'} bring up failed -- will retry
Jul 29 12:42:29 pmc kernel[0]: FusionMPT: Notification = 9 (Logout) for SCSI Domain = 0

Jul 29 19:29:15 pmc KernelEventAgent[56]: tid 00000000 received VQ_NOTRESP event (1)
Jul 29 19:29:15 pmc fsm[108]: Xsan FSS 'STORAGE[0]': PANIC: aborting threads now.
Jul 29 19:29:16 pmc fsmpm[102]: PortMapper: Initiating activation vote for FSS 'STORAGE'.
Jul 29 19:29:27: --- last message repeated 5 times ---
Jul 29 19:29:27 pmc ReportCrash[149]: Formulating crash report for process fsm[108]
Jul 29 19:29:27 pmc fsmpm[102]: PortMapper: FSS 'STORAGE' disconnected.
Jul 29 19:29:27 pmc fsmpm[102]: PortMapper: kicking diskscan_thread -1338454016.
Jul 29 19:29:27 pmc ReportCrash[149]: Saved crashreport to /Library/Logs/CrashReporter/fsm_2008-07-29-192926_PMC.crash using uid: 0 gid: 0, euid: 0 egid: 0
Jul 29 19:29:28 pmc fsmpm[102]: PortMapper: Initiating activation vote for FSS 'STORAGE'.
Jul 29 19:29:28 pmc fsmpm[102]: Portmapper: FSS 'STORAGE' (pid 108) exited on signal 6
Jul 29 19:29:30 pmc fsmpm[102]: PortMapper: Initiating activation vote for FSS 'STORAGE'.
Jul 29 19:29:36 pmc kernel[0]: Could not mount filesystem STORAGE, cvfs error 'Timeout' (25)
Jul 29 19:29:38 pmc fsmpm[102]: PortMapper: RESTART FSS service 'STORAGE[0]' on host pmc.pub.vasanth.tv.
Jul 29 19:29:38 pmc fsmpm[102]: PortMapper: Starting FSS service 'STORAGE[0]' on pmc.pub.vasanth.tv.
Jul 29 19:29:38 pmc fsmpm[102]: PortMapper: FSS 'STORAGE'[0] (pid 153) at port 49199 is registered.
Jul 29 19:51:06 pmc fsm[330]: PANIC: /Library/Filesystems/Xsan/bin/fsm "free_pending_inode_thread: Cannot lookup inode [0x807f800000ce3134]! -Unknown error: 0\n" file inode.c, line 3510
Jul 29 19:51:06 pmc fsmpm[102]: PortMapper: Initiating activation vote for FSS 'STORAGE'.
Jul 29 19:51:06 pmc fsm[330]: Xsan FSS 'STORAGE[0]': PANIC: aborting threads now.
Jul 29 19:51:08 pmc fsmpm[102]: PortMapper: Initiating activation vote for FSS 'STORAGE'.
Jul 29 19:51:15: --- last message repeated 3 times ---
Jul 29 19:51:15 pmc ReportCrash[335]: Formulating crash report for process fsm[330]
Jul 29 19:51:15 pmc servermgrd[259]: xsan: [259/1DBB00] ERROR: get_fsmvol_at_index: Could not connect to FSM because Admin Tap Connection to FSM failed: [errno 54]: Connection reset by peer\nFSM may have too many connections active.
Jul 29 19:51:15 pmc servermgrd[259]: xsan: [259/1DBB00] ERROR: -[SANFilesystem spotlightSearchLevelForVolume:]: 'STORAGE': -doSpotlightRpcForVolume failed (2)
Jul 29 19:51:15 pmc fsmpm[102]: PortMapper: FSS 'STORAGE' disconnected.
Jul 29 19:51:15 pmc fsmpm[102]: PortMapper: kicking diskscan_thread -1338454016.
Jul 29 19:51:15 pmc com.apple.xsan[43]: xsan:perfDispatchMicroseconds = 316
Jul 29 19:51:15 pmc com.apple.xsan[43]: xsan:perfFunctionMicroseconds = 2673
Jul 29 20:13:47 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: -[SANFilesystem spotlightSearchLevelForVolume:]: 'STORAGE': -doSpotlightRpcForVolume failed (2)
Jul 29 20:13:47 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: get_fsmvol_at_index: Could not connect to FSM because File System Manager "STORAGE" on Macedit1.xsan.vasanth.tv is on standby.
Jul 29 20:13:47 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: -[SANFilesystem spotlightSearchLevelForVolume:]: 'STORAGE': -doSpotlightRpcForVolume failed (2)
Jul 29 20:13:47 pmc servermgrd[50]: xsan: [50/103B90] ERROR: get_fsmvol_at_index: Could not connect to FSM because File System Manager "STORAGE" on Macedit1.xsan.vasanth.tv is on standby.
Jul 29 20:14:00 pmc fsm[218]: Xsan FSS 'STORAGE[0]': FSM Alloc: Stripe Group "UDATA" 141652367 free blocks in 131231 fragments inserted.
Jul 29 20:14:00 pmc fsm[218]: Xsan FSS 'STORAGE[0]': FSM Alloc: Stripe Group "UDATA" 1668681 free blocks in 120419 fragments ignored.
Jul 29 20:14:00 pmc fsm[218]: Xsan FSS 'STORAGE[0]': Windows Security has been turned off in config file but clients have been requested to enforce ACLs. Windows Security remains in effect.
Jul 29 20:14:00 pmc fsm[218]: Xsan FSS 'STORAGE[0]': PANIC: /Library/Filesystems/Xsan/bin/fsm "free_pending_inode_thread: Cannot lookup inode [0x807f800000ce3134]! -Unknown error: 0 " file inode.c, line 3510
Jul 29 20:14:00 pmc fsm[218]: PANIC: /Library/Filesystems/Xsan/bin/fsm "free_pending_inode_thread: Cannot lookup inode [0x807f800000ce3134]! -Unknown error: 0\n" file inode.c, line 3510
Jul 29 20:14:00 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: get_fsmvol_at_index: Could not connect to FSM because Admin Tap Connection to FSM failed: [errno 54]: Connection reset by peer\nFSM may have too many connections active.
Jul 29 20:14:00 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: -[SANFilesystem spotlightSearchLevelForVolume:]: 'STORAGE': -doSpotlightRpcForVolume failed (2)
Jul 29 20:14:00 pmc fsm[218]: Xsan FSS 'STORAGE[0]': PANIC: aborting threads now.
Jul 29 20:14:00 pmc com.apple.xsan[43]: xsan:perfDispatchMicroseconds = 314
Jul 29 20:14:00 pmc com.apple.xsan[43]: xsan:perfFunctionMicroseconds = 554
Jul 29 20:14:00 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: get_fsmvol_at_index: Could not connect to FSM because Could not find File System Manager for "STORAGE" on pmc.xsan.vasanth.tv.
Jul 29 20:14:00 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: -[SANFilesystem spotlightSearchLevelForVolume:]: 'STORAGE': -doSpotlightRpcForVolume failed (2)
Jul 29 20:14:00 pmc servermgrd[50]: xsan: [50/1CF380] ERROR: get_fsmvol_at_index: Could not connect to FSM because Could not find File System Manager for "STORAGE" on pmc.xsan.vasanth.tv.

Thanks.

Mr.Bean:

Do you get the same behavior if you try hosting the volume on another controller?

cheers,
Mr.Bean :D

T.K Sreekar:

Yes. I get the same error if I make another client machine the controller.

Mr.Bean:

Can you do a graceful restart of the whole setup, including the FC switch, and check:
1. After repairing the volume using "cvfsck", did you recheck whether the volume was clean or dirty?
2. In one place there is an error for disk4 ('/dev/disk4', not responding). Can you check that?
3. The FSM process crashed. Can you post its crash report?
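Points 1 and 3 can be checked on the MDC along these lines (a sketch; the crash-report path pattern is taken from the ReportCrash lines in the log above):

```shell
# 1. Re-check the volume read-only: the tail of the report states
#    whether the file system is clean or dirty.
sudo cvfsck -nv STORAGE

# 3. ReportCrash saved the fsm crash reports here, per the log;
#    list them newest-first.
ls -lt /Library/Logs/CrashReporter/fsm_*.crash
```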

cheers,
Mr.Bean :D

T.K Sreekar:

Hello Bean,

I have restarted the whole setup, including the FC and Ethernet switches.

1. After repairing, the volume is CLEAN.

2. There is no problem with disk4. That disk is configured as a spare.

3. The FSM crash report points to the free pending inode thread.

I have copied some entries from the cvlog file:

[0730 12:02:22] 0xa0663fa0 (Info) Self (pmc.pub.vasanth.tv) IP address is 192.168.0.150.
[0730 12:02:22.556548] 0xa0663fa0 (Debug) No fsports file - port range enforcement disabled.
[0730 12:02:22] 0xa0663fa0 (Info) Listening on TCP socket pmc.pub.vasanth.tv:58459
[0730 12:02:22] 0xa0663fa0 (Info) Node [0] [pmc.pub.vasanth.tv:58459] File System Manager Login.
[0730 12:02:22] 0xa0663fa0 (Info) Service standing by on host 'pmc.pub.vasanth.tv:58459'.
[0730 12:02:23.439685] 0xa0663fa0 (Debug) Standby service - NSS ping from Macedit1.xsan.vasanth.tv:49586.
[0730 12:02:23.439928] 0xa0663fa0 (Debug) FOUsurpCheck: read ARB info (pass 1): host (192.168.0.151:49225) conns 0 age 1217397925.00 secs his delta 0.00 secs my delta 0.00 secs.
[0730 12:02:23.439934] 0xa0663fa0 (Debug) FOUsurpCheck: polling ARB block to check for active peer (pass 1).
[0730 12:02:24.440167] 0xa0663fa0 (Debug) FOUsurpCheck: read ARB info (pass 2): host (192.168.0.151:49225) conns 0 age 1217397925.00 secs his delta 0.00 secs my delta 1.00 secs.
[0730 12:02:24.440192] 0xa0663fa0 (Debug) FOUsurpCheck: peer found idle (pass 2): his conns 0 my votes 1 arb usurpee ::.
[0730 12:02:24] 0xa0663fa0 (Info) Branding Arbitration Block (attempt 1) votes 1.
[0730 12:02:26.442570] 0xa0663fa0 (Debug) Cannot find fail over script /Library/Filesystems/Xsan/bin/cvfail.pmc.pub.vasanth.tv - looking for generic script.
[0730 12:02:26] 0xa0663fa0 (Info) Launching fail over script ["/Library/Filesystems/Xsan/bin/cvfail" pmc.pub.vasanth.tv 58459 STORAGE]
[0730 12:02:26.480741] 0xa0663fa0 (Debug) Starting journal log recovery.
[0730 12:02:26.631061] 0xa0663fa0 (Debug) Completed journal log recovery.
[0730 12:02:26.631317] 0xa0663fa0 (Debug) Inode_init_post_activation: FsStatus 0xd07, Brl_ResyncState 1
[0730 12:02:26] 0xb8e2f000 (Info) FSM Alloc: Loading Stripe Group "MDJ". 698.48 GB.
[0730 12:02:26] 0xb8eb1000 (Info) FSM Alloc: Loading Stripe Group "UDATA". 6.82 TB.
[0730 12:02:26] 0xb8e2f000 (Info) FSM Alloc: Stripe Group "MDJ" active.
[0730 12:02:27] 0xb8eb1000 (Warning) FSM Alloc: Stripe Group "UDATA" 141652367 free blocks in 131231 fragments inserted.
[0730 12:02:27] 0xb8eb1000 (Warning) FSM Alloc: Stripe Group "UDATA" 1668681 free blocks in 120419 fragments ignored.
[0730 12:02:27] 0xb8eb1000 (Info) FSM Alloc: free blocks 141652367 with 0 blocks currently reserved for client delayed buffers.Reserved blocks may change with client activity.
[0730 12:02:27] 0xb8eb1000 (Info) FSM Alloc: Stripe Group "UDATA" active.
[0730 12:02:27] 0xa0663fa0 (Warning) Windows Security has been turned off in config file but clients have been requested to enforce ACLs. Windows Security remains in effect.
[0730 12:02:27] 0xa0663fa0 (Info) File system 'STORAGE' requires UTF8-NFC file names
[0730 12:02:27] 0xa0663fa0 (Info) File System Service 'STORAGE[0]' now active on host 'pmc.pub.vasanth.tv:58459'.
[0730 12:02:27] 0xb8dad000 (**FATAL**) PANIC: /Library/Filesystems/Xsan/bin/fsm "free_pending_inode_thread: Cannot lookup inode [0x807f800000ce3134]! -Unknown error: 0
" file inode.c, line 3510
[0730 12:02:27.558342] 0xa0663fa0 (Debug) Listener_thread: flushing journal.
[0730 12:02:27.558359] 0xa0663fa0 (Debug) Listener_thread: journal flush complete.
[0730 12:02:27.558397] 0xa0663fa0 (Debug) FSM memory SUMMARY resident size 64MB.
[0730 12:02:27.558401] 0xa0663fa0 (Debug) FSM I/O SUMMARY writes total/6.
[0730 12:02:27.558405] 0xa0663fa0 (Debug) FSM I/O SUMMARY writes journal/0 sb/0 buf/0 abm/0.
[0730 12:02:27.558412] 0xa0663fa0 (Debug) FSM I/O SUMMARY writes inode/0 ganged/0 (0.00%).
[0730 12:02:27.558415] 0xa0663fa0 (Debug) FSM wait SUMMARY inode pool expand waits/0.
[0730 12:02:27.558418] 0xa0663fa0 (Debug) FSM wait SUMMARY journal waits/0.
[0730 12:02:27.558421] 0xa0663fa0 (Debug) FSM wait SUMMARY journal bytes used avg/245760 max/245760.
[0730 12:02:27.558424] 0xa0663fa0 (Debug) FSM wait SUMMARY free buffer waits/0.
[0730 12:02:27.558427] 0xa0663fa0 (Debug) FSM wait SUMMARY free inode waits/0.
[0730 12:02:27.558432] 0xa0663fa0 (Debug) FSM wait SUMMARY revokes/0 avg/0 min/0 max/0.
[0730 12:02:27.558435] 0xa0663fa0 (Debug) FSM threads SUMMARY max busy hi-prio/0 lo-prio/0.
[0730 12:02:27.558438] 0xa0663fa0 (Debug) FSM threads SUMMARY max busy dmig/0 events/0.
[0730 12:02:27.558441] 0xa0663fa0 (Debug) FSM msg queue SUMMARY hi-prio now/0 min/0 max/0.
[0730 12:02:27.558445] 0xa0663fa0 (Debug) FSM msg queue SUMMARY lo-prio now/0 min/0 max/0.
[0730 12:02:27.558448] 0xa0663fa0 (Debug) FSM msg queue SUMMARY dmig now/0 min/0 max/0.
[0730 12:02:27.558451] 0xa0663fa0 (Debug) FSM msg queue SUMMARY events now/0 min/0 max/0.
[0730 12:02:27.558458] 0xa0663fa0 (Debug) FSM cache SUMMARY inode lookups/6 misses/5 hits/16.67%.
[0730 12:02:27.558461] 0xa0663fa0 (Debug) FSM cache SUMMARY free incore inodes now/8192 min/8191 max/8192.
[0730 12:02:27.558465] 0xa0663fa0 (Debug) FSM cache SUMMARY buffer lookups/12 misses/9 hits/25.00%.
[0730 12:02:27.558469] 0xa0663fa0 (Debug) FSM cache SUMMARY free buffers now/2048 min/1 max/2048.
[0730 12:02:27.558472] 0xa0663fa0 (Debug) FSM cache SUMMARY attrs now/0 min/0 max/0.
[0730 12:02:27.558476] 0xa0663fa0 (Debug) FSM extent SUMMARY extent lookups/0 misses/0 hits/0.00%.
[0730 12:02:27.558479] 0xa0663fa0 (Debug) FSM extent SUMMARY hint tries/0 misses/0 hits/0.00%.
[0730 12:02:27.558508] 0xa0663fa0 (Debug) SG SUMMARY MDJ space total/698.48 GB free/698.26 GB (99.97%)
[0730 12:02:27.558526] 0xa0663fa0 (Debug) SG SUMMARY MDJ space minfree/698.26 GB (99.97%) maxfree/698.26 GB (99.97%)
[0730 12:02:27.558531] 0xa0663fa0 (Debug) SG SUMMARY MDJ alloc extent cnt/0 avgsize/0.00 B.
[0730 12:02:27.558558] 0xa0663fa0 (Debug) SG SUMMARY 135 btree free space fragments 168 splay tree fragments
[0730 12:02:27.558564] 0xa0663fa0 (Debug) SG SUMMARY UDATA space total/6.82 TB free/2.11 TB (30.94%)
[0730 12:02:27.558570] 0xa0663fa0 (Debug) SG SUMMARY UDATA space minfree/2.11 TB (30.94%) maxfree/2.11 TB (30.94%)
[0730 12:02:27.558574] 0xa0663fa0 (Debug) SG SUMMARY UDATA alloc extent cnt/0 avgsize/0.00 B.
[0730 12:02:27.558578] 0xa0663fa0 (Debug) SG SUMMARY 131232 btree free space fragments 261630 splay tree fragments
[0730 12:02:27.558584] 0xa0663fa0 (Debug) PIO Read SUMMARY RAID1_MDJ_C1 cnt/133 maxq/9.
[0730 12:02:27.558589] 0xa0663fa0 (Debug) PIO Read SUMMARY RAID1_MDJ_C1 avg/11824 min/133 max/46360.
[0730 12:02:27.558594] 0xa0663fa0 (Debug) PIO Read SUMMARY RAID1_MDJ_C1 sysavg/3585 sysmin/120 sysmax/5905.
[0730 12:02:27.558598] 0xa0663fa0 (Debug) PIO Read SUMMARY RAID1_MDJ_C1 avglen/666477 minlen/512 maxlen/1048576.
[0730 12:02:27.558602] 0xa0663fa0 (Debug) PIO Write SUMMARY RAID1_MDJ_C1 cnt/6 maxq/1.
[0730 12:02:27.558606] 0xa0663fa0 (Debug) PIO Write SUMMARY RAID1_MDJ_C1 avg/481 min/364 max/575.
[0730 12:02:27.558611] 0xa0663fa0 (Debug) PIO Write SUMMARY RAID1_MDJ_C1 sysavg/468 sysmin/354 sysmax/560.
[0730 12:02:27.558615] 0xa0663fa0 (Debug) PIO Write SUMMARY RAID1_MDJ_C1 avglen/5802 minlen/512 maxlen/16384.
[0730 12:02:27] 0xb8dad000 (**FATAL**) PANIC: aborting threads now.
Logger_thread: sleeps/11 signals/0 flushes/7 writes/7 switches 0
Logger_thread: logged/74 clean/74 toss/0 signalled/0 toss_message/0
Logger_thread: waited/0 awakened/0
[0730 12:02:46] 0xa0663fa0 (Info) Server Revision 3.1.0 Build 2 (339.12)
[0730 12:02:46] 0xa0663fa0 (Info) Built for Darwin 9.0 ppc
[0730 12:02:46] 0xa0663fa0 (Info) Created on Tue Feb 19 17:29:50 PST 2008
[0730 12:02:46] 0xa0663fa0 (Info) Built in /SourceCache/XsanFS/XsanFS-339.12
[0730 12:02:46] 0xa0663fa0 (Info)
Configuration:
DiskTypes-3
Disks-3
StripeGroups-2
ForceStripeAlignment-1
MaxConnections-139
ThreadPoolSize-256
StripeAlignSize-32
FsBlockSize-16384
BufferCacheSize-32M
InodeCacheSize-8192
RestoreJournal-Disabled
RestoreJournalDir-None

Please help me. Do I need to run cvfsck -C or cvupdatefs?

I need help very badly.

Thanks

ravi:

Try cvfsck -C to clobber the free inode list and see if that resolves your issue. It shouldn't impact your data.
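In outline, and assuming the volume is stopped before touching it (a sketch, not verified against this exact setup; cvadmin's `stop`/`start` commands are one way to do that from the MDC):

```shell
# Stop the volume first so no fsm is running against it:
sudo cvadmin -e 'stop STORAGE'

# Clobber (rebuild) the free inode list. This discards the corrupt
# free/pending-inode chain that free_pending_inode_thread was
# panicking on; file data itself is not touched.
sudo cvfsck -C STORAGE

# Confirm the volume now checks out clean, then bring it back:
sudo cvfsck -nv STORAGE
sudo cvadmin -e 'start STORAGE'
```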

Mr.Bean:

Any luck, T.K Sreekar?

cheers,
Mr.Bean :D

T.K Sreekar:

Yes, Mr. Bean, I had plenty of luck in this case. Clobbering the free inode list worked.

Thanks to you guys.

Hi palamudikkilli,

What does it mean when it says, "Metadata usage will be increased after running this command"? Just curious to learn; could you please explain, if you don't mind?

Thanks
Sreekar

ravi:

Sreekar,

The metadata usage may be increased, not necessarily will be increased. The Apple guys who participate in this forum might be able to give a more precise technical answer dealing with the topic of free inodes and Xsan's dynamic allocation of inodes as needed (since we don't have any documentation to go by on Apple's/Quantum's practical implementation of those concepts). If I were to surmise, they are referring to the amount of metadata and metadata lookups needed to locate a file's data. One possible way to cut this down is to periodically defragment the volume to keep the file extents count low (for every 10 extents needed for a file's data, a new inode needs to be associated with that file and this causes increased metadata reads--see the "Advanced Fragmentation Analysis" section of the snfsdefrag man page).
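To gauge fragmentation before defragmenting, the snfsdefrag analysis options can be used along these lines (a sketch; the flags are from the StorNext/Xsan snfsdefrag tool, and the paths under /Volumes/STORAGE are hypothetical examples):

```shell
# Report the extent count of each file in a tree, without changing anything:
snfsdefrag -r -c /Volumes/STORAGE/Projects

# List the individual extents of one suspect file:
snfsdefrag -e /Volumes/STORAGE/Projects/capture.mov

# Actually defragment the tree (best done during quiet hours):
snfsdefrag -r /Volumes/STORAGE/Projects
```

Keeping extent counts low this way reduces the per-file metadata that the FSM has to read.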

Cheers

T.K Sreekar:

Thanks, Palamudikkilli. I got more than I needed. Thanks for all the help.

Cheers.
Sreekar