arls1481's picture

Can someone help me out with some basics on snfsdefrag?
I've read through a ton of info out there on using cvfsck -f to identify how many files are fragmented, but how do I correlate that with snfsdefrag in order to "clean up" the fragmentation?

arls1481's picture

As an add-on to this: how can I find __defragtmp files that have been left behind by stalled/failed snfsdefrag runs?

arls1481's picture

arls1481 wrote:
as an add on to this, how can I find __defragtmp files that have been created by stalled/failed snfsdefrag runs?

Disregard this one:

sudo find . -name "*__defragtmp"

in case anyone wants it.
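For anyone following along, here's a slightly fuller version of that one-liner, sketched against a throwaway directory so it's safe to try; point find at your real volume mount point in practice, and only delete once you're sure no snfsdefrag run is still in flight:

```shell
# Safe demo: exercise the pattern against a scratch directory rather
# than a live volume. Substitute your volume's mount point in practice.
scratch=$(mktemp -d)
touch "$scratch/image.dmg" "$scratch/image.dmg__defragtmp"

# List the leftovers first...
find "$scratch" -name '*__defragtmp' -print

# ...then delete them (only when no snfsdefrag is still running).
find "$scratch" -name '*__defragtmp' -delete
find "$scratch" -name '*__defragtmp' -print   # nothing left

rm -rf "$scratch"
```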

singlemalt's picture

It sounds like you're trying to feed/pipe output from cvfsck -f to snfsdefrag. Why? I'm not trying to sound flippant, but have you read the man page for snfsdefrag?

Normally you just run snfsdefrag with the appropriate options depending on what you're trying to accomplish. Usually -G, -K, -k, and in some cases -m. Sometimes -p can be helpful if there is a lot of file turnover. Are you just wanting to defrag a bunch of files? You really don't need to specify which files. If a file can't be defragged, snfsdefrag will just skip it, so why bother trying to tell snfsdefrag which ones to operate on?

arls1481's picture

I should have detailed that those were two separate trains of thought; sorry for the confusion.

Yes yes, RTFM, I'm a preacher for that too...

I have a two-part situation; one of which I resolved on my own.
I had previously tried running snfsdefrag and had to force-quit it because it was running for days on end. As a result, I knew I'd ended up with some __defragtmp files but didn't exactly know how to snuff them out. I figured it out on my own shortly after I posted that part.

The primary question: I'm looking for some real-world usage of snfsdefrag on a large shared volume with many, many files (not large sequences from FCP or whatnot) ranging in size from a few bytes to 500+ GB blocks (bit-level replicas of hard drives). These tend to get pretty snarled up over time, so I need to defrag the thing. But when I run snfsdefrag against the root of the shared volume, it porks out after a bit and idles for days, because it's running against such a vast array of files with tons of extents all over the place. So what I need (aside from running snfsdefrag against each individual folder on the volume) is a focused way to defrag files that have a large number of extents and ignore the small stuff. Does that make any sense?

I'm not looking to pipe cvfsck to snfsdefrag, because I'm not sure you can. What I'm looking to do is take the results from a cvfsck run that reveal large files with lots of extents, and use that as targeting for snfsdefrag.

I've read MattG's Xsanity article on dealing with defrag and it's great, but I seem to keep running into stalled processes when I try anything, since I've got such a mess...
I'm trying snfsdefrag with -m (the minimum extent count threshold) set pretty high now to see what that does.
sudo snfsdefrag -v -r -m 100 /Volumes/Foo
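In case it helps anyone later, here's a rough sketch of the targeting idea. It assumes snfsdefrag has a report-only extent-count mode (-c in the man pages I've seen; verify on your release) and a "path: N extents" report format, which is my guess, so adjust the awk to match what your snfsdefrag actually prints. The awk/sort filter is the portable part, demonstrated on a faked report; the snfsdefrag steps are left as comments:

```shell
# Step 1 (commented out; needs StorNext tools and the real volume):
#   sudo snfsdefrag -r -c /Volumes/Foo > /tmp/extent-report.txt
#
# Step 2: keep only files above an extent threshold, worst first.
# The "path: N extents" report format below is an assumption.
printf '%s\n' \
  '/Volumes/Foo/drive01.dmg: 412 extents' \
  '/Volumes/Foo/notes.txt: 1 extents' \
  '/Volumes/Foo/drive02.dmg: 97 extents' |
awk -F'[: ]+' '$2 > 50 { print $2, $1 }' |
sort -rn

# Step 3 (commented out): feed the worst offenders back in, e.g.
#   ... | awk '{ print $2 }' | xargs -n1 sudo snfsdefrag -v
```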

singlemalt's picture

Ah! Got it. Now this makes more sense. In that case, why not use -r (or just cd into the desired directory)?
In other words, take 5 or 10 or 30 of the worst offenders, put them in their own folder on the volume, then run
snfsdefrag -r . from there. If it's still stalling out, then you may have really serious free-space fragmentation.
In other words, snfsdefrag is searching, possibly in vain, for a large enough chunk of free space to rewrite each file into a single extent.
Search the volume's cvlog for the word "ignored" very soon after the last time the volume was started. If you see something like:
(Warning) FSM Alloc: Stripe Group “” free blocks in fragments ignored.
then the volume's free space is swiss-cheesed into small fragments.
cvfsck -f will quantify how bad it is.
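A hedged sketch of that log check. The cvlog location varies by platform (Linux MDCs typically keep it under /usr/cvfs/data/<volume>/log/cvlog; Xsan under /Library/Logs/Xsan/data/<volume>/log/cvlog), so the demo below greps a faked log line rather than a real path:

```shell
# Demo against a fake cvlog line; in practice, point grep at the real
# cvlog for your volume and look just after the last volume start.
log=$(mktemp)
echo '(Warning) FSM Alloc: Stripe Group "sg1" free blocks in fragments ignored.' > "$log"

grep -i 'fragments ignored' "$log"   # any hit = free space is badly fragmented

rm -f "$log"
```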

If it's NOT free-space fragmentation, then maybe you just have a damaged file (no valid EOF, maybe?). If that's the case, then using -r should help
find out which one it is. Just run it on some folder, rinse and repeat on other folders until it stalls. When it stalls, take half the files out and
try again. In other words, do a search by halves until you find the culprit.
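That rinse-and-repeat loop is just a bisection. For illustration only, here's the shape of it in bash, with a stub check() standing in for "run snfsdefrag on this batch and see whether it stalls" (file names are made up):

```shell
#!/bin/bash
# check() is a stand-in: in real life you'd move the batch into a
# quarantine folder and run snfsdefrag -r on it, treating a stall as
# failure. Here it just "fails" when the known-bad file is present.
check() {
  local f
  for f in "$@"; do
    [[ $f == bad.dmg ]] && return 1
  done
  return 0
}

# Narrow a list of suspects down to one culprit by halves.
bisect() {
  local files=("$@")
  while (( ${#files[@]} > 1 )); do
    local half=$(( ${#files[@]} / 2 ))
    local first=("${files[@]:0:half}")
    if check "${first[@]}"; then
      files=("${files[@]:half}")   # first half is clean; look in the rest
    else
      files=("${first[@]}")        # culprit is in the first half
    fi
  done
  printf '%s\n' "${files[0]}"
}

bisect a.dmg b.dmg bad.dmg c.dmg   # prints bad.dmg
```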

singlemalt's picture

Oh, I should add: if the line "ABMFreeLimit Yes" (without quotes) has been added to the "global section for defining file system-wide parameters" section of the volume's config file,
then you will never see the "FSM Alloc… ignored" warning in the cvlog, regardless of how badly the free space is fragmented. So you should check that as well.
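Checking for that is a one-line grep against the volume's config file. The path varies (Linux typically /usr/cvfs/config/<volume>.cfg; Xsan keeps its configs under /Library/Preferences/Xsan/), so this sketch fakes the config with a scratch file just to show the shape:

```shell
# Demo against a fake config snippet; substitute your real .cfg path.
cfg=$(mktemp)
printf '%s\n' 'ABMFreeLimit Yes' 'AllocationStrategy Round' > "$cfg"

if grep -qi '^ABMFreeLimit[[:space:]]\{1,\}Yes' "$cfg"; then
  echo 'ABMFreeLimit is set: the "fragments ignored" warning is suppressed'
fi

rm -f "$cfg"
```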