Mandakini

Xsan 2.2.1 - Volumes do not mount

Hi All,

Need urgent help with the following.

I am trying to mount volumes on an Apple client using Xsan 2.2.1.

When I run the cvadmin command I get the following error:

'The Xsan file system services on 127.0.0.1 may be stopped
Xsan administrator
Error in getting central control info'.

I have already tried the steps below many times:
1) Uninstalled and reinstalled Xsan Admin 2.2.1
2) Recreated the xsan file, the config.plist file, and the automount.plist file
3) Tried multiple reboots
4) Removed the uuid file
5) Confirmed there is no .auth_secret file that needs to be removed

Any help will be greatly appreciated.
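That "Error in getting central control info" message usually just means cvadmin cannot reach the local fsmpm port mapper, so before another reinstall cycle it may be worth confirming whether fsmpm is even running. A minimal pre-flight sketch (process name `fsmpm` per the Xsan 2.x layout; adjust if your version differs):

```shell
#!/bin/sh
# Quick pre-flight before reinstalling again: cvadmin talks to the local
# fsmpm port mapper, so if fsmpm is not running the "central control info"
# error is expected rather than a sign of a broken config.
if pgrep -x fsmpm >/dev/null 2>&1; then
    status=up
else
    status=down
fi
echo "fsmpm is $status"
```

If fsmpm is down, check the system log for why launchd is not keeping it alive before touching the config files again.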

huntson

Interesting failover situation

Any clue why my volumes are failing over when there appears to be no issue? I have my switch configured correctly, with fresh installs of Lion and the latest updates. Here is my log. I am seeing a few errors which don't necessarily make sense, as everything appears to be communicating properly.

] 0x103581000 INFO Disk rescan found 9 disks
[0721 23:35:28] 0x103581000 NOTICE compare_disks: label transition for disk /dev/rdisk12
[0721 23:35:41] 0x102e81000 NOTICE PortMapper: waking up diskscan thread.
[0721 23:35:41] 0x103581000 INFO Starting Disk rescan
[0721 23:35:41] 0x103581000 INFO Disk rescan delay completed
[0721 23:35:41] 0x103581000 INFO Disk rescan found 10 disks
[0721 23:35:41] 0x103581000 NOTICE compare_disks: label transition for disk /dev/rdisk6
[0721 23:36:49] 0x7fff738b8960 NOTICE PortMapper: launching configuration reload thread
[0721 23:36:49] 0x1003af000 NOTICE PortMapper: fsmpm configuration reload initiated (flags FFFFFFFF)
[0721 23:36:49] 0x1003af000 INFO NSS: Primary Name Server is '192.168.0.5' (192.168.0.5)
[0721 23:36:49] 0x10370a000 INFO NSS: Name Server '192.168.0.5' (192.168.0.5) port is 49858, revision is 0x0102.
[0721 23:36:49] 0x1003af000 INFO NSS: Secondary #1 Name Server is '192.168.0.6' (192.168.0.6)
[0721 23:36:49] 0x102f87000 (debug) nss_port_acquire_thread [4344803328] exit
[0721 23:36:49] 0x1003af000 (debug) FSS 'San' STOPPED -> STOPPED (idle)
[0721 23:36:49] 0x1003af000 NOTICE PortMapper: fsmpm configuration reload complete
[0721 23:36:49] 0x10378d000 INFO NSS: Name Server '192.168.0.6' (192.168.0.6) port is 51531, revision is 0x0102.
[0721 23:36:49] 0x103281000 (debug) nss_port_acquire_thread [4347924480] exit
[0721 23:36:49] 0x7fff738b8960 (debug) Start: Setting AUTOSTART for FSS 'San'
[0721 23:36:49] 0x1003af000 NOTICE PortMapper: Starting FSS service 'San[1]' on mdc.xsan.rvchost.int.
[0721 23:36:49] 0x7fff738b8960 (debug) FSS 'San' STOPPED (idle) -> LAUNCHED, next event in 60s
[0721 23:36:49] 0x7fff738b8960 (debug) FSS 'San' LAUNCHED -> REGISTERED
[0721 23:36:49] 0x7fff738b8960 NOTICE PortMapper: FSS 'San'[1] (pid 436) at port 49230 is registered.
[0721 23:36:49] 0x7fff738b8960 (debug) Dropping 192.168.0.5 coordinator 0 for new 49858
[0721 23:36:49] 0x7fff738b8960 INFO NSS: Standby FSS 'San[1]' at id 192.168.0.5 port 49230 (pid 436) - registered.
[0721 23:36:49] 0x103687000 NOTICE Portmapper: File System RAS events undeliverable to Coordinator '192.168.0.5'. Please upgrade Xsan on this host.
[0721 23:36:49] 0x7fff738b8960 (debug) Dropping 192.168.0.6 coordinator 0 for new 51531
[0721 23:36:49] 0x7fff738b8960 INFO NSS: Standby FSS 'San[0]' at id 192.168.0.6 port 49198 (pid 426) - registered.
[0721 23:36:49] 0x103687000 NOTICE Portmapper: File System RAS events undeliverable to Coordinator '192.168.0.6'. Please upgrade Xsan on this host.
[0721 23:36:49] 0x7fff738b8960 (debug) NSS: Coordinator 192.168.0.5 flags changed from 0x2 to 0x7
[0721 23:36:49] 0x7fff738b8960 (debug) NSS: Coordinator 192.168.0.5 id is 192.168.0.5
[0721 23:36:49] 0x7fff738b8960 (debug) Heartbeat from ID 192.168.0.5 updating LOCAL San to 192.168.0.5:49230
[0721 23:36:49] 0x7fff738b8960 (debug) NSS: Coordinator 192.168.0.6 flags changed from 0x2 to 0x7
[0721 23:36:49] 0x7fff738b8960 (debug) NSS: Coordinator 192.168.0.6 id is 192.168.0.6
[0721 23:36:49] 0x7fff738b8960 (debug) NSS: Computing nss_coord_sum
[0721 23:36:50] 0x7fff738b8960 INFO NSS: election initiated by 192.168.0.6:51531 (id 192.168.0.6) - admin request.
[0721 23:36:50] 0x7fff738b8960 INFO NSS: Vote call for FSS San is inhibited - vote dis-allowed.
[0721 23:36:56] 0x7fff738b8960 (debug) NSS: FSS mount list for client 192.168.0.6 (id 192.168.0.6) - San
[0721 23:36:56] 0x7fff738b8960 (debug) NSS: New mount registered for 'San'.
[0721 23:36:56] 0x7fff738b8960 (debug) NSS: FSS mount list for client 192.168.0.5 (id 192.168.0.5) - San
[0721 23:36:57] 0x102f87000 NOTICE PortMapper: Mount Event for /Volumes/San on /dev/disk13
[0721 23:37:02] 0x103581000 INFO Starting Disk rescan
[0721 23:37:41] 0x103581000 INFO Disk rescan delay completed
[0721 23:37:41] 0x103581000 INFO Disk rescan found 10 disks
[0721 23:40:14] 0x7fff738b8960 (debug) find_fsm fsm San ipaddr 192.168.0.6 port 49198 TestLink failed: getsockopt(SO_ERROR) returned error 61 [errno 61]: Connection refused
[0721 23:40:14] 0x7fff738b8960 INFO NSS: Active FSS 'San[0]' at 192.168.0.6:49198 (pid 426) - dropped.
[0721 23:40:15] 0x7fff738b8960 INFO NSS: election initiated by 192.168.0.6:51531 (id 192.168.0.6) - client request.
[0721 23:40:15] 0x7fff738b8960 (debug) find_fsm fsm San ipaddr 192.168.0.6 port 49198 TestLink failed: getsockopt(SO_ERROR) returned error 61 [errno 61]: Connection refused
[0721 23:40:15] 0x7fff738b8960 NOTICE PortMapper: Initiating activation vote for FSS 'San'.
[0721 23:40:15] 0x7fff738b8960 (debug) Initiate_nss_vote for FSS San
[0721 23:40:15] 0x7fff738b8960 (debug) NSS: sending message (type 2) to Name Server '192.168.0.5' (192.168.0.5:49858).
[0721 23:40:15] 0x7fff738b8960 (debug) NSS: sending message (type 2) to Name Server '192.168.0.6' (192.168.0.6:51531).
[0721 23:40:15] 0x7fff738b8960 (debug) NSS: FSS activation initiated by coordinator 192.168.0.6:51531 (id 192.168.0.6) votes 1
[0721 23:40:15] 0x7fff738b8960 INFO NSS: Vote call for FSS San is inhibited - vote dis-allowed.
[0721 23:40:20] 0x103281000 NOTICE PortMapper: Reconnect Event for /Volumes/San
[0721 23:40:20] 0x103281000 NOTICE PortMapper: Requesting MDS recycle of /Volumes/San
[0721 23:40:26] 0x7fff738b8960 INFO NSS: Standby FSS 'San[0]' at id 192.168.0.6 port 49365 (pid 510) - registered.
[0721 23:40:50] 0x7fff738b8960 NOTICE PortMapper: Stopping FSS 'San'
[0721 23:40:50] 0x7fff738b8960 NOTICE PortMapper: FSS 'San' has been stopped.
[0721 23:40:50] 0x7fff738b8960 (debug) FSS 'San' REGISTERED -> DYING, next event in 60s
[0721 23:40:50] 0x7fff738b8960 INFO NSS: Standby FSS 'San[0]' at 192.168.0.6:49365 (pid 510) - dropped.
[0721 23:40:51] 0x7fff738b8960 NOTICE PortMapper: Initiating activation vote for FSS 'San'.
[0721 23:40:51] 0x7fff738b8960 (debug) Initiate_nss_vote for FSS San
[0721 23:40:51] 0x7fff738b8960 (debug) NSS: sending message (type 2) to Name Server '192.168.0.5' (192.168.0.5:49858).
[0721 23:40:51] 0x7fff738b8960 (debug) NSS: sending message (type 2) to Name Server '192.168.0.6' (192.168.0.6:51531).
[0721 23:40:51] 0x7fff738b8960 INFO NSS: election initiated by 192.168.0.5:49858 (id 192.168.0.5) - client request.
[0721 23:40:51] 0x7fff738b8960 INFO NSS: Active FSS 'San[1]' at 192.168.0.5:49230 (pid 436) - dropped.
[0721 23:40:51] 0x7fff738b8960 (debug) NSS_VOTE2 to 192.168.0.5:49858
[0721 23:40:51] 0x7fff738b8960 (debug) NSS: removing vote inhibitor for FSS 'San'.
[0721 23:40:51] 0x7fff738b8960 (debug) start_fss_vote could not find FSS San in master - vote aborted.
[0721 23:40:51] 0x1003af000 (debug) Portmapper: FSS 'San' (pid 436) exited with status 0 (normal)
[0721 23:40:51] 0x1003af000 (debug) FSS 'San' DYING -> STOPPED (explicit request)
[0721 23:40:52] 0x7fff738b8960 (debug) Start: Setting AUTOSTART for FSS 'San'
[0721 23:40:52] 0x1003af000 NOTICE PortMapper: Starting FSS service 'San[1]' on mdc.xsan.rvchost.int.
[0721 23:40:52] 0x7fff738b8960 (debug) FSS 'San' STOPPED (explicit request) -> LAUNCHED, next event in 60s
[0721 23:40:52] 0x7fff738b8960 (debug) FSS 'San' LAUNCHED -> REGISTERED
[0721 23:40:52] 0x7fff738b8960 NOTICE PortMapper: FSS 'San'[1] (pid 520) at port 49538 is registered.
[0721 23:40:52] 0x7fff738b8960 INFO NSS: Standby FSS 'San[1]' at id 192.168.0.5 port 49538 (pid 520) - registered.
[0721 23:40:52] 0x7fff738b8960 (debug) Heartbeat from ID 192.168.0.5 updating LOCAL San to 192.168.0.5:49538
[0721 23:40:52] 0x7fff738b8960 INFO NSS: Standby FSS 'San[0]' at id 192.168.0.6 port 49377 (pid 512) - registered.
[0721 23:40:53] 0x7fff738b8960 NOTICE PortMapper: Initiating activation vote for FSS 'San'.
[0721 23:40:53] 0x7fff738b8960 (debug) Initiate_nss_vote for FSS San
[0721 23:40:53] 0x7fff738b8960 (debug) NSS: sending message (type 2) to Name Server '192.168.0.5' (192.168.0.5:49858).
[0721 23:40:53] 0x7fff738b8960 (debug) NSS: sending message (type 2) to Name Server '192.168.0.6' (192.168.0.6:51531).
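For what it's worth, the "Connection refused" TestLink lines are usually the smoking gun in dumps like this: the standby could not reach the active FSM, so it was dropped and an election fired. A small grep pass (run here against an excerpt of the log above; on a live system point it at the fsmpm log under /Library/Logs/Xsan/, exact path varies by version) pulls out just the failover-relevant events:

```shell
#!/bin/sh
# Triage sketch: count the events that actually drive a failover in an
# fsmpm log. The sample lines below are copied from the log in this post.
LOG=/tmp/fsmpm_sample.log
cat > "$LOG" <<'EOF'
[0721 23:40:14] 0x7fff738b8960 (debug) find_fsm fsm San ipaddr 192.168.0.6 port 49198 TestLink failed: getsockopt(SO_ERROR) returned error 61 [errno 61]: Connection refused
[0721 23:40:14] 0x7fff738b8960 INFO NSS: Active FSS 'San[0]' at 192.168.0.6:49198 (pid 426) - dropped.
[0721 23:40:15] 0x7fff738b8960 NOTICE PortMapper: Initiating activation vote for FSS 'San'.
EOF
refused=$(grep -c 'Connection refused' "$LOG")       # standby could not reach active FSM
dropped=$(grep -c 'dropped\.' "$LOG")                # active FSM declared gone
votes=$(grep -c 'Initiating activation vote' "$LOG") # elections triggered
echo "refused=$refused dropped=$dropped votes=$votes"
```

If, in the full log, every "dropped" event is preceded by a Connection refused to the same peer, the metadata network path between the MDCs is the place to look rather than the volume configuration.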

cmonico

Xsan not starting after adding storage

I added storage to my Xsan this a.m. and now I can't get it started. I get this error in Terminal:

State: BLOCKED (configuration file problem) 2012-07-21 09:44:49
Last Admin: START 2012-07-21 09:44:48
Last Termination: exit(19) 2012-07-21 09:44:49
Launches 5, core dumps 0, flags

FSS 'XSAN01' start unsuccessful. Check event log for more details.

Any advice would be greatly appreciated.
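"BLOCKED (configuration file problem)" points at the volume's config file rather than the LUNs themselves, and after adding storage, a disk label accidentally referenced twice across the stripe groups is a classic way to get there. A rough sanity pass is sketched below; the .cfg layout shown is an illustrative approximation of the StorNext-style format Xsan uses, not cmonico's actual file, and the real config lives under /Library/Filesystems/Xsan/config/:

```shell
#!/bin/sh
# Hypothetical check: flag disk labels that appear more than once across the
# stripe groups of a volume config. Sample .cfg content is illustrative only.
CFG=/tmp/XSAN01.cfg
cat > "$CFG" <<'EOF'
[StripeGroup MetadataAndJournal]
Node CvfsDisk0 0
[StripeGroup Data1]
Node CvfsDisk1 0
Node CvfsDisk1 1
EOF
# Each "Node" line names a disk label; the same label listed twice will
# stop the FSM from starting.
dupes=$(grep '^Node' "$CFG" | awk '{print $2}' | sort | uniq -d)
echo "duplicate disk labels: ${dupes:-none}"
```

It is also worth diffing the current config against whatever backup copy you have of the pre-change file to see exactly what the storage addition changed.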

CitraMalus

Xsan 1.4 LUNs got unlabeled - Volume does not mount - SOLVED

Hey Guys..

One of our clients tried something terrible on a live Xsan volume. They connected a Windows 2000 server to the FC switch and tried to merge three LUNs of the Xsan volume from the Windows OS. After that failed, the three Xsan LUNs were left unlabeled and now the volume does not mount. So my questions are:

1. Can we just relabel the LUNs to mount the volume and preserve the data?
2. If the answer to the above is NO, will they lose data from the entire volume if we try to add the LUNs back (as when expanding) after relabeling them?

Please help. The client is a TV broadcaster and wants their precious data back.
Forgot to mention: this is Xsan 1.4 on Mac OS X 10.5.

THANKS.

kittonian

MOV File Corruption on XSAN 2.3

We're getting very strange, but very damaging, file corruption happening with ProRes MOV files residing on XSAN volumes. While the files can still be opened without issue, there are frames within the MOV that are being corrupted (skips, garbled images, etc.).

Unfortunately this seems to be happening very randomly and it doesn't look like it's a specific volume problem as I created a brand new volume and saw the problem rear its head within a week.

Every time we have shut down the system and brought it back online I have run cvfsck -jv, cvfsck -nv, and if there are any issues (which has only happened once) cvfsck -wv. All the LUNs are online and I don't see any errors in the logs.

Apparently this is something that has been around for quite a long time: the owner of this company used to work for Apple in the iTunes unit back in 2006, and they saw this same issue. It looks like Apple isn't telling anyone about it and has never addressed it, but I'd love to hear any ideas/solutions from the community.

This only seems to happen with ProRes MOV files. All MPG files are just fine.

Here's a rundown of our XSAN hardware:

(2) Mac Mini MDCs running Lion Server with 2x SSD drives in a RAID 1 mirror, a TrendNet 1Gb USB Ethernet adapter, and a Promise SANLink
Brocade DCX-8518 Fiber Switch
(6) Proware 42-bay chassis with dual 2GB RAID controllers and dual 1100 watt power supplies
(20) ATTO Faststream 8550 SAS->Fiber RAID controllers
(40) 24-bay JBODs

Almost all of the clients are Mac Pro machines running Lion (a few are on Snow Leopard), and we've got 8 Windows 7 machines running StorNext FX client software.
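One thing worth noting: cvfsck validates filesystem metadata, so it can pass cleanly even when file data blocks were silently damaged. A generic way to at least pin down the corruption window (nothing Xsan-specific; all paths below are stand-ins for the real volume) is to baseline checksums for the MOVs and re-run the sweep after each shutdown/startup cycle:

```shell
#!/bin/sh
# Sketch: baseline checksums so silent data corruption shows up as a diff on
# the next sweep. /tmp/sanroot stands in for the Xsan volume path.
ROOT=/tmp/sanroot
BASELINE=/tmp/prores.md5
mkdir -p "$ROOT"
printf 'fake-prores-frames' > "$ROOT/clip.mov"   # stand-in for a real MOV
# On macOS, substitute 'md5 -r' for md5sum.
find "$ROOT" -name '*.mov' -exec md5sum {} + | sort > "$BASELINE"
# ...later, after a suspect event, sweep again and compare:
if find "$ROOT" -name '*.mov' -exec md5sum {} + | sort | diff -q "$BASELINE" - >/dev/null
then result="no changes"
else result="files changed since baseline"
fi
echo "$result"
```

If a file's checksum changes between sweeps while nobody wrote to it, that narrows the fault to whatever happened in that window (RAID controller cache behavior on power-down is a common suspect) rather than to the volume itself.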

altavek

Missing LUNs on XRAID

I've got 2 LUNs on the top controller that are not showing up in Xsan Admin or even Disk Utility. I've reset the RAID controller, checked all cables, cleaned the dust out of the connectors on the mid-plane board, repaired the LUN map, renamed the LUNs to something other than their previous names, and even swapped the controllers. I can see them in RAID Admin and nothing is reporting an error.

Any help would be greatly appreciated. Thanks!

francisyo

Antivirus on MDC's and Clients

Hello guys,

Is it advisable to install antivirus for Mac on our Xsan servers (MDCs) and clients? I just want to scan Xsan volumes for viruses. Is it recommended? I haven't tried it and just want to know your opinion on this. For those of you who have experience running antivirus on Xsan, please do share. Thanks.

abstractrude

Quantum Stornext 4.3 Released. Supports Named Streams.

The most relevant section of the press release:

"For Mac OS X only-based StorNext File System environments, StorNext 4.3 now supports Named Streams. StorNext 4.3 enables simpler hardware upgrades and disaster recovery. With the ability to migrate file system metadata stored on the metadata controller to a different disk geometry, the software allows customers to take advantage of new drive capacities and technologies."


kaeddong

Xsan 2.2 admin guide leads to a question about capacities

Hello
The Xsan 2.0 admin guide lists, under Xsan Capacities, "Number of computers on a SAN (metadata controllers and clients): Maximum 64". I cannot find that figure anywhere in the 2.2 or 2.3 manuals.
Does each of those versions no longer have a limit?
If there is still a limit, please tell me what the maximum is.

mrtubz

NDMP Backups or Better

Hi All

We have recently implemented a BlueARC NAS device with 60TB of storage and have a dual LTO-5 tape library. The BlueARC supports NDMP backup, but from a post on your knowledge base it appears you are not looking to implement NDMP in your software in the future. My question really is: what would be the best way to get the full potential from the LTO drives when using Presstore?

I currently have a trunked 4Gb link to our switch and 10Gb to our NAS device. The tape library is connected over 4Gb Fibre, but I very rarely get above a 5% buffer on the tape device, so the drives are obviously not being fed quickly enough. I have a client with the same setup but running Atempo Time Navigator with NDMP, and they get significantly higher throughput to their drives.
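The arithmetic backs up that buffer reading: an LTO-5 drive needs roughly 140 MB/s native to keep streaming, so two drives want about 280 MB/s sustained, while a single 4Gb FC path delivers only around 400 MB/s of payload at best, and the trunked 4Gb client side has to share that with everything else. A quick back-of-envelope:

```shell
#!/bin/sh
# Back-of-envelope feed-rate check for a dual LTO-5 library.
lto5_native=140                  # MB/s native streaming rate per LTO-5 drive
drives=2
needed=$((lto5_native * drives)) # sustained MB/s to keep both drives streaming
fc4_payload=400                  # ~MB/s usable payload on one 4Gb FC link
echo "need ~${needed} MB/s sustained; one 4Gb FC path tops out near ${fc4_payload} MB/s"
```

A 5% buffer at those rates means the drives are shoe-shining; interleaving multiple source streams per drive (if your backup software version supports it) usually helps more than faster tape links, since the bottleneck is the feed, not the pipe to tape.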

Thanks
