Ack... complete misuse of logging...

Dec 05, 2008 02:12

Well, Coding Horror is discussing The Problem With Logging. And I have to admit, I was horrified by what he described.

Of course, I'm a big fan of log4j, so this is interesting. First off, the logging library took a lock? wtf? That should not happen. Hopefully that's a log4net thing, and log4j is free from the idea that it needs a mutex. Second off, I found real insight in his "more lines of code" point.

Well, I was never a "debugger" sort of guy; nope, I was always more into the "printf" style (not print or println, it's printf! You heard me :) ). In particular, if it got bad enough, I'd just do a binary search with:
printf("Here 1");
...
printf("Here 2");
And zero in on the problem :).

Okay, now you're probably horrified and may not think I'm much of a programmer, but I don't care.

I want to address the "more code = more bugs" conundrum presented.

One thing I always ran into was that those printf statements had to come out before the code was released/polished/finished.

Now, I agree that, in general, "more code = more bugs", but going through and deleting code is, in some ways, a change to the code as well. I have "deleted code that didn't matter" way too many times only to discover later that oh yes it did matter, so removing your printf-style debug code adds one more test/verify stage to your release process.

So, in that regard, log4j-style logging enables a different approach. I can add log statements around code that has been problematic and not delete them. Now, I have deleted logging statements from my code on occasion, but generally I have found that logging lets me make fewer changes to my code that require me to wastefully reverify that all is well (in some sense, all changes require verification).
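To make that concrete, here's a minimal sketch of what I mean using the log4j 1.2 API (the class and method names are invented for illustration). The debug statements never get deleted; they get silenced by the logger's level in production:

import org.apache.log4j.Logger;

public class Worker {
    // hypothetical class, purely for illustration
    private static final Logger log = Logger.getLogger(Worker.class);

    int divide(int a, int b) {
        // instrumentation stays in the code; the level decides whether it speaks
        log.debug("divide called with a=" + a + ", b=" + b);
        int result = a / b;
        log.debug("divide returning " + result);
        return result;
    }
}

When the trouble comes back, flip that logger to DEBUG in your configuration and the old instrumentation lights up again, with no code change and no reverification.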

And that, my friends, is the purpose of log4j. Well, that and pretty much always logging "fatalities". If my code exits when it shouldn't (but the error is unrecoverable), that needs to be logged.
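Something like this, say (again a sketch with invented names; the point is just that log4j's fatal level exists for exactly this):

import org.apache.log4j.Logger;

public class Main {
    private static final Logger log = Logger.getLogger(Main.class);

    public static void main(String[] args) {
        try {
            run();
        } catch (Exception e) {
            // unrecoverable: record why we died before we die
            log.fatal("Unrecoverable error, exiting", e);
            System.exit(1);
        }
    }

    // hypothetical entry point for the real work
    static void run() throws Exception { /* ... */ }
}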

Now, log4j style is even one better in a situation with a server or other app which runs "headless". It's probably the only way to observe it with minimal intrusion. In the article Jeff argues that if it's valuable enough to log, it's valuable enough to present in the UI. The issue with this is when you have no access to the "UI" because a bug is preventing that. "printf"/log4j-style logging gives you a primitive layer which is dumb, so dumb that it's nearly impossible for it to fail. In fact, if it does fail, you likely have an Epic Fail coming up soon (which is, in itself, useful to know). Log4j is dumb, and that has value. Fancy things fail, so having a lower level is valuable in and of itself.
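For a headless server, the observation layer can be as dumb as a flat file. A log4j.properties along these lines (the file path is just an example) puts about as little machinery as possible between your program and your eyeballs:

# everything at INFO and above goes straight to a flat file
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/var/log/myapp.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n

No UI, no network, nothing that can be taken out by the very bug you're chasing. If writing a line to a file fails, you have bigger problems, and now you know it.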

Anyway, this is probably waxing way too eloquent over logging, but mostly I realized the article was probably an example of doing it wrong. Of course, use log4j-style logging however it works for you, but its purpose is not an event log or UI tool or historical record. Nope, its purpose is to enable printf-style braindead debugging whose chief value is that it rarely fails (because it's too simple to fail). So yeah, that means if you have a logging style which is anything other than "ad hoc", you're probably doing it wrong. (Note: this *is* a style, but it's pretty much ad hoc and random, mostly focused on the places the trouble seems to be coming from, and it's one better than printf debugging because you don't have to remove the code, which a) forces a reverification and b) is likely to be needed again if the area is problematic.)
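One small idiom worth knowing if you leave this ad hoc instrumentation in place (another sketch with an invented class): log4j's isDebugEnabled() guard keeps a disabled debug statement from even building its message string, so the leftovers cost nearly nothing:

import org.apache.log4j.Logger;

class Hotspot {
    private static final Logger log = Logger.getLogger(Hotspot.class);

    void step(Object state) {
        // the string concatenation only runs when DEBUG is actually on,
        // so this can stay in shipped code essentially for free
        if (log.isDebugEnabled()) {
            log.debug("state dump: " + state);
        }
    }
}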

Edit: This timely post by Linus Torvalds (yes, THAT Linus), Debugging Hell, is about exactly the sort of case where you want some braindead, last-ditch logging facility. At least, it seems to me like it proves my point.

software engineering, mlp
