larspetter's picture

Users not able to log in after recreating ldap in Mavericks


After recreating LDAP in 10.9.2 (a fresh setup, importing users/groups) and rebinding the 10.6.8 Fcsvr installation to the new LDAP, users are unable to log in.

Any ideas?

Users are able to log in on the server itself, so the binding in the OS is working.

Debugging fcsvr gives this:

dsAttrTypeStandard:RecordName lpo lpo

}
21:33:04.503661 0xb0103000 DEBUG2 findNodeForRecord node.C:165 [DS] using node /LDAPv3/ldapserver.xxx.xx for auth
21:33:04.503681 0xb0103000 DEBUG2 runThread PmsAuthUser.C:115 [PxModelStore] found node for user:lpo
21:33:04.503703 0xb0103000 { auth::auth auth.C:33 [DS] this=0xb0102d6c
21:33:04.503716 0xb0103000 } auth::auth auth.C:33 [DS] this=0xb0102d6c
21:33:04.503729 0xb0103000 DEBUG3 runThread PmsAuthUser.C:131 [PxModelStore] waiting on sem
21:33:04.503744 0xb0103000 DEBUG3 runThread PmsAuthUser.C:133 [PxModelStore] finished waiting on sem
21:33:04.503761 0xb0103000 DEBUG2 runThread PmsAuthUser.C:143 [PxModelStore] authenticating with token: username=bHBv
21:33:04.503777 0xb0103000 { saslStart auth.C:432 [DS] this=0xb0102d6c PPS, username=bHBv
21:33:04.516185 0xb0103000 } saslStart auth.C:432 [DS] this=0xb0102d6c
21:33:04.516224 0xb0103000 DEBUG1 doAuthStep auth.C:152 [DS] auth failed with result -24
21:33:04.516249 0xb0103000 DEBUG2 runThread PmsAuthUser.C:147 [PxModelStore] auth status: -14483
21:33:04.516265 0xb0103000 DEBUG2 runThread PmsAuthUser.C:192 [PxModelStore] unsuccessful auth
21:33:04.516364 0xb0103000 { auth::~auth auth.C:39 [DS] this=0xb0102d6c
21:33:04.516470 0xb0103000 } auth::~auth auth.C:39 [DS] this=0xb0102d6c
21:33:04.516476 0xa0c54540 { readCB KsSlaveThread.C:108 [KsStream] this=0x357ce90
21:33:04.516495 0xb0103000 DEBUG3 ~node node.C:40 [DS] closing node
21:33:04.516593 0xa0c54540 DEBUG2 acceptEvent KsNode.C:148 [KsStream] accepting event [ evt { SLAVE_REPLY_VALUE = { CODE = E_LOGIN, DESC = Please re-enter the username and password or contact the server administrator. Please note that the username and password are case-sensitive., SRC_FILE = PmsAuthUser.C, SRC_LINE = 206, OPEN_DIRECTORY_ERROR = -14483 } } ] on node ["PmsTask_UserLogin" 0x3568e80, ref=4, wref=3] lockToken=1536 holding locks:(335cffd7-6133-4910-af40-0b0154e123af WR token=1536)
taskState=0 dbqueue=0x2805944 needTrans<
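One thing the trace does confirm: the `username=bHBv` token is just the short name base64-encoded, so fcsvr is handing the correct account to Open Directory and the failure happens at the SASL/password step. A quick sketch to verify the encoding (the `dscl` line is a suggestion for testing the bind outside fcsvr, using the node name from the log):

```shell
# base64 of the short name "lpo" is exactly the token in the trace
printf 'lpo' | base64                   # prints: bHBv
printf 'bHBv' | base64 --decode; echo   # prints: lpo

# To test authentication against the same directory node outside fcsvr,
# run something like this on the Final Cut Server machine:
#   dscl /LDAPv3/ldapserver.xxx.xx -authonly lpo
```

If `dscl -authonly` also fails for these users, the problem is in the rebuilt LDAP (e.g. missing password server data for imported users) rather than in fcsvr itself.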

abstractrude's picture

Avid Releases 2013 Mac Pro Config Guide

Highlights:

Supported CPUs:
  • Single Intel® 6-Core Xeon® E5 processor @ 3.5GHz, 12MB cache / 1866MHz memory
  • Single Intel® 8-Core Xeon® E5 processor @ 3.0GHz, 25MB cache / 1866MHz memory
  • Single Intel® 12-Core Xeon® E5 processor @ 2.7GHz, 30MB cache / 1866MHz memory
Supported video cards: Dual AMD FirePro D500 or D700 graphics with 3GB or 6GB GDDR5 VRAM
System storage: 256GB, 512GB, or 1TB PCIe-based flash storage
Standard Avid memory configurations:
  • 16GB (4 x 4GB) DDR3 1866 ECC memory
  • 32GB (4 x 8GB) DDR3 1866 ECC memory
  • 64GB (4 x 16GB) DDR3 1866 ECC memory
Memory configuration constraints: the 12GB memory option is NOT supported, as it uses only 3 DIMM slots, which results in poor performance.

http://resources.avid.com/supportfiles/attach/AVIDMacProIvyBridgeConfigguideRevA.pdf

henrique's picture

Enable Access Log in Xsan


Hello,

I have an Active Storage array with an Xsan volume, and I need an access log to know who modifies or deletes a file.

If such a log does not exist, could you recommend auditing software for the Mac?
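Xsan itself keeps no per-file access log, but Mac OS X 10.6 ships with OpenBSM kernel auditing, which can record which user writes or deletes files. A rough sketch follows; the exact flag set is an assumption on my part (see `man audit_control` and `man praudit`):

```shell
# Audit event classes: fw = file write, fd = file delete, fm = attribute modify.
# On the server you would add them to the 'flags:' line in
# /etc/security/audit_control, restart auditing with `sudo audit -s`,
# and later read the trail with `sudo praudit /var/audit/current`.
# Example flags line (written to a temp file here so this is safe to run):
tmp=$(mktemp)
printf 'flags:lo,aa,fw,fd,fm\n' > "$tmp"
grep '^flags:' "$tmp"    # prints: flags:lo,aa,fw,fd,fm
rm -f "$tmp"
```

Be warned that auditing file writes on a busy Xsan volume generates a lot of trail data, so keep an eye on free space on the boot disk.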

 

Versions:

Mac OS X 10.6.8

Xsan 2.2.1

Active Storage AC16SFC01

Thanks.

Henrique

Expanding xServe RAID Array - confused over whether I'll lose my data.


I'm not using Xsan, but I'm using hardware I'm hoping (praying) someone here has in-depth experience administering.

 

I don't want to have to reformat the RAID and I don't want to lose data because the organization will suffer extended downtime. I've used ChronoSync to back up the entire RAID twice onto two 2TB USB drives. Each one took over two days to complete, so if I mess this up and cause data loss by merging slices, it'll take at least that long to get the data back on the system.

 

These threads, plus Apple's documentation, are a few of the many sources of info I've consulted on this topic:

 

https://discussions.apple.com/message/3330193

https://discussions.apple.com/message/3643384

https://discussions.apple.com/message/4543149

My situation.

 

I inherited administration responsibilities for an xServe RAID using the latest firmware (1.5.1).

The xServe is an intel xeon dual 2 GHz, running Mac OS X Server 10.6.8

The RAID had six (sigh) drives in the left controller. None in the right.

When I became admin, one of these six was dead, so I went on eBay and got three refurb ADMs (Apple Drive Modules):

--one to replace the dead drive.

--another to expand the RAID 5 to a full seven drives. (from 2.5T to 2.73T)

--a third to put on the shelf as a spare.

 

I am using RAID Admin utility 1.5.2b3.

The replacement drive rebuilt beautifully. Happy 6 drive RAID.

I put in the 7th drive and expanded. It took several days.

I'm at the point of "merging" slices, the step that gives everyone pause about losing data.

 

Is it true that Intel Macs can expand the file system with this slice-merging function without data loss, whereas PowerPC Macs cannot?

 

Next, my RAID as I inherited it was one slice. That is to say, one volume. One partition. In RAID Admin under "Arrays and Drives" all the little drives display a [1]. They did when the RAID was degraded to 5 drives. They did when I added the 6th drive replacement. They do now, having used the Expansion function under "Advanced" to add the 7th drive.

 

So, next question. WAS IT EVER SLICED?

That seems to be a deal-breaker for merging slices without data loss.

 

For this, I looked at the Events tab in RAID Admin, which dates all the way back to 2008.

I see exactly zero mentions of slicing anywhere.

EricInDenver's picture

RAID not mounting after doubling space

We recently added 3 new 24TB racks to our DotHill server, and after 2 days the server crashed.  We've been troubleshooting for a few days now, working on the metadata controller to get the volume to at least mount again.  After digging and running cvfsck, we have managed to get to this point; I have pasted where we are now.

 

meta1:~admin$ sudo cvfsck -n OCEAN3

Password:

Checked Build disabled - default

*Warning*: Icb has a configuration mismatch!

Icb mismatches configuration:

  File System Stripegroup count Mismatch. exp-5 rcv-3

 

Created directory /tmp/cvsck451a for temporary files.

Attempting to acquire arbitration block… successful.

 

Creating MetadataAndJournal allocation check file.

Creating Video-1 allocation check file.

Creating Video-2 allocation check file.

*Fatal*: Metadata inode for stripe group 'Video-3' has been damaged. Giving up.

*Fatal*: cvfsck ASSERT failed "idinobuf.idi_flags & InodeFlagMetaData" file cvfsck.c, line 6072

 

It seems that it is not recognizing our newly added racks, given the stripe group count mismatch.  I've read in another forum that the sector count could be exceeding the kernel limit.  For now we'd like at least to have all the RAIDs recognized, so we can go in and reconfigure back to the original RAID, giving up on adding the new space.
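The exp-5/rcv-3 mismatch reads as cvfsck expecting five stripe groups but finding only three whose LUNs are visible. One first check worth making (the file name OCEAN3.cfg and the stanza layout below are assumptions based on the standard Xsan/StorNext config location): count the `[StripeGroup ...]` stanzas in the volume config on the MDC and compare against what `cvlabel -l` can actually see on the fibre network.

```shell
# Stand-in for /Library/Filesystems/Xsan/config/OCEAN3.cfg (hypothetical
# contents; your real file lives on the metadata controller):
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[StripeGroup MetadataAndJournal]
[StripeGroup Video-1]
[StripeGroup Video-2]
[StripeGroup Video-3]
[StripeGroup Video-4]
EOF
grep -c '^\[StripeGroup' "$cfg"   # prints: 5
rm -f "$cfg"
# If the count here is 5 but `sudo cvlabel -l` shows fewer labeled LUNs,
# the new arrays are not visible to the MDC (zoning or sector-count
# limits), which would match the rcv-3 in the cvfsck output.
```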

 

Any thoughts would be appreciated, and please forgive me if I've missed some basic info someone may need to help with our issue.  Just ask and I'll find/supply any additional info necessary.

 

Thanks!

 

Eric

abstractrude's picture

Mac Pro Shipments Still Months Out

The new Mac Pro continues to have extremely constrained supplies, and shipping times are now missing their stated dates. MacRumors has an interesting article on this; has anyone else had issues getting the new Mac Pro?

http://www.macrumors.com/2014/02/27/mac-pro-delayed-orders/

Can anyone remember an Apple product being backordered this long? Maybe the early PowerBooks?

Apple Knowledge Base's picture

About the Snow Leopard Graphics Update (Apple KB)

The Snow Leopard Graphics Update contains stability and performance fixes for graphics applications and games in Mac OS X v10.6.4.

Read more: http://support.apple.com/kb/HT4286

undercover's picture

Achievement Unlocked: Upgrade!

I finally did it.

After looking forward to (and fearing) the upgrade for years, I finally migrated off my old XServes to Mac Minis, and moved from Mac OS X 10.5 to 10.9.

The move had been long delayed for lack of scheduled down time, lack of funding for new servers, and a little bit of fear of things going wrong.

The background story.

My clients (6 of them) were all running 10.6, even though the servers were 10.5. I almost never had trouble with XSAN itself, except the occasional ACL issue or snfsdefrag necessary when disks got near full. We had been planning to upgrade for a long time, but for one reason or another it just never happened. Even lately, I had planned to upgrade to new Mac Pros for clients when they were released, but I think we will change our short term plans to work with four new iMacs and four of our old Mac Pros.

Of course adding new clients required an XSAN upgrade, which also required new servers since I was on first-gen XServes. So Mac Mini + xMac chassis + old fiber cards were the plan, whenever we were able to get the funds set aside.

Things get scary.

My util server failed about a year back. It really wasn't doing much at the time so it wasn't a great loss. But then my mdc2 failed. It was always the more stable mdc. Sometimes XSAN admin would not work well on mdc1. Also mdc2 was my OD master. So I had to promote mdc1 which was the replica.

Fortunately, mdc1 continued to work fine.

Time to prepare.

So we purchased the Mac Mini Servers back in October, but we were in the middle of our busy projects, so I could not upgrade until January at the earliest. I got the team to stay on top of backups in case things went sour and to prepare for the migration. I also made a Time Machine backup of my mdc, exported the LUN labels, and copied the Xsan config folder.
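For anyone following the same path, the pre-flight copies described above amount to something like the following (the Xsan paths are the standard 2.x locations; the runnable part rehearses against a throwaway directory so the snippet is safe to execute anywhere):

```shell
# The real run on the MDC would be:
#   sudo cvlabel -l > ~/Desktop/lunlabels.txt        # record LUN labels
#   sudo cp -R /Library/Filesystems/Xsan/config ~/Desktop/xsan-config-backup
# plus a full Time Machine backup of the MDC boot volume.
# Rehearsal against a temp directory standing in for the config folder:
src=$(mktemp -d)
printf 'sample fsmlist\n' > "$src/fsmlist"
backup=$(mktemp -d)
cp -R "$src"/. "$backup"/
ls "$backup"     # prints: fsmlist
rm -rf "$src" "$backup"
```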

Should be good to go? Am I forgetting anything?

It's time.

Shut it all down. Disconnect clients, shut down volumes, etc. Connect mini, plug in mdc1 time machine backup, import data. Go through setup.

Got hung up on network config. Apparently it locks if you have two network ports connected. Unplug one, reboot, do it again, replug network. No big deal. Oh wait, fiber zoning. Was using an old card, had to re-alias and re-zone it.

After all that, one of my XSAN volumes decided to mount! Even before I launched XSAN admin. Launch XSAN admin, start second volume, things look good!

Do it again.

First problem - these minis shipped with 10.8. Shut down volumes, download Mavericks, do another time machine backup, install. Good to go.

And again.

Turn on new mdc2, upgrade to Mavericks, change fiber zoning, add second mdc in XSAN admin. No problems.

Oh wait, where's Open Directory? 

For some reason my OD did not make it through the upgrade to 10.8. No big deal, as my OD pretty much consists of one group. Recreated it on mdc1 and set up a replica on mdc2. [Note: I plan to move the primary to the util server.]

Re-add the clients!

Booted up the clients. Some of them somehow mounted one of the volumes before I re-added them in XSAN admin. Weird. Added them in XSAN admin, and despite complaints that their XSAN versions were older, everything worked fine. Turned Spotlight on for both volumes. Everything is good! Woohoo!

This was not nearly as bad as I thought it was going to be!

Still to do:

  • Setup util server for SMB shares.
  • Upgrade clients to 10.9 (some will stay at 10.6 because they are old)
  • Add new iMacs when they arrive
  • Change IP to new subnet
  • Change to new Active Directory domain
  • Figure out if I can now remove lunzero from my NexSAN SATABeast. I think Abs told me something about ALUA in 10.7+; I will have to go look at old notes.
Andrew Allen's picture

Unable to Demote or Make Metadata Controllers

I have recently taken over managing an Xsan system. The system is composed of 2 (soon to be 3) SANs, 3 metadata controllers, and 11 client machines. MDC1 (Metadata Controller 1) hosts SAN1, and MDC2 hosts SAN2; they fail over to each other. The third controller is also the Final Cut Server, and it is the last failover. The SANs, clients, and MDCs are all connected via a Qlogic fibre switch. The switch is zoned such that each client is in its own zone, with aliases for the 2 (soon to be 3) SANs.

This Xsan system was incredibly messy. It was running 4 different versions of Xsan on the clients (2.1, 2.2, and 2.2.2, plus 3.0 on one client). Two of the 3 controllers were running Mountain Lion to match the one edit system on Mountain Lion; however, the third controller was never updated from Snow Leopard 10.6.4. The rest of the clients were running various flavors of Snow Leopard.

We've needed to upgrade to Mavericks to use the newest Adobe Creative Cloud. There are two workstations we want on Mavericks to run the newest CC; we want to keep the rest of the workstations on Snow Leopard and Final Cut 7. We have updated all of the client systems to Snow Leopard 10.6.8 and Xsan 2.2.2. One of these clients will be moved to Mavericks eventually. There is also a 10.8.5 Mountain Lion machine that will be moved to Mavericks as well.

In order to move to Mavericks we, of course, need to upgrade our MDCs to Mavericks and Xsan 3.1. We have done this as instructed. While I'm sure such a ragtag mixture of OS and Xsan versions is not ideal, the system has functioned this way for years.

The problem we have is that we need a new MDC to host SAN3. We have upgraded one of the clients to Mavericks in hopes of making it an MDC and demoting the old Final Cut Server (which we do NOT want to upgrade to Mavericks).

However, the action options to Remove a Computer from the SAN, Promote to Metadata Controller, Make Client, and . . . something else, are all grayed out.

I am well aware that in order to perform these actions, all of the clients and metadata controllers must be turned on and connected to the SAN. They are! The existing SANs are visible and functional on each and every client. Xsan seems to think, however, that some machine isn't on the network. I assume that's the problem: it's acting as if one of the machines is not connected to the SAN, and so it won't let us perform the actions that require all the MDCs/clients to be online.
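One way to see which machine the FSM thinks is missing, without relying on Xsan Admin, is the command-line `cvadmin` tool on an MDC (the volume name below is a placeholder):

```shell
# On a metadata controller (interactive session):
#   sudo cvadmin
#     select              # lists the running FSMs (volumes)
#     select YourVolume   # attach to one volume (hypothetical name)
#     who                 # lists every client connected to that FSM
# Comparing the `who` output against your expected client list should
# reveal which machine Xsan Admin believes is offline.
```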

 

My hunch is that the problem is the Final Cut Server. It is a 10.6.8 MDC, while MDC1 and MDC2 are Mavericks controllers. Surely this is causing a problem, but we don't want to upgrade the Final Cut Server to Mavericks because it's the Final Cut Server and we use Final Cut 7. (Can anyone confirm that Final Cut Server 7 works properly on Mavericks?)

So we're in a pickle. We can't demote the Final Cut Server or make a new MDC, and we can't remove it from the SAN either, because the option to do so is greyed out.

We have not actually added the third SAN in Xsan yet, but the volumes are visible in Xsan and Disk Utility and are properly configured for our setup. I just haven't made a third SAN in Xsan yet because we wanted the new Mavericks MDC in place to host it.

What can be done to get us out of this position? I've been trying to integrate this new SAN all week and we've been hung up on this for a long time.

Any thoughts and help is much appreciated.

aaron's picture

Mac Pro Tricks

A Mac Pro Easter egg discovered by Robert Hammen.

 
