The process is really simple when you strip it down to the essentials:
Requirements -> Functional decomposition -> Size estimate -> Estimation tool -> Effort, cost, and schedule estimates
First you gather some high-level requirements and try to get a reasonable functional breakdown of the software to be built. You want to break it into pieces that individual developers would feel comfortable assigning a SLOC number to.
Then you get a bunch of developers to come up with these SLOC numbers on their own (no peeking at each other's estimates). This is where a tool like Surveyor (or even your find command, as long as everyone is using the same command) will come in handy, as it allows developers to measure previously written code that they feel is similar to the proposed modules.
Next, everybody gets together and the anonymous results are compared. There is a certain amount of give and take as the team tries to reconcile the outlying data points. In my experience, the outliers are always from those who didn't fully understand the requirements. The majority of the numbers have an almost uncanny tendency to converge (which I think reflects the experience of the estimators).
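The reconciliation meeting doesn't need tooling, but a tiny script helps surface which numbers are worth talking through. This is a hypothetical sketch of my own (the 50% spread threshold is an arbitrary assumption, not a standard): it flags any anonymous estimate far from the group's median for one module.

```python
from statistics import median

def flag_outliers(estimates, spread=0.5):
    """Flag estimates further than `spread` (as a fraction) from the median.

    estimates: dict mapping estimator -> SLOC estimate for one module.
    Returns (median, outliers). An outlier usually means the estimator
    read the requirements differently, which is exactly the discussion
    you want the meeting to have.
    """
    mid = median(estimates.values())
    outliers = {who: est for who, est in estimates.items()
                if abs(est - mid) > spread * mid}
    return mid, outliers
```

For example, `flag_outliers({"ann": 1200, "bob": 1350, "cho": 1100, "dee": 4000})` flags only dee's 4000, which is the data point the team would reconcile before tallying.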
Once the size estimates have converged enough (leaving room for uncertainty), they are tallied up and fed into an estimation tool (I use Construx Estimate, which is a bit unstable but at least it's free of charge) that is calibrated with past projects' actual effort and schedule data. The estimation tool, via all that PHB math you alluded to, then produces cost and effort estimates, along with staffing recommendations and scheduling options. It's pretty neat when it all works (and I've seen it work several times to great effect).
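I can't speak to Construx Estimate's internals, but parametric tools of this kind generally fit an effort model of the shape effort = a * size^b to your historical data; "calibrated" means a and b come from your own past projects. As a stand-in illustration only, here are Basic COCOMO's published organic-mode constants plugged into that shape:

```python
def effort_person_months(ksloc, a=2.4, b=1.05):
    """Effort = a * KSLOC**b, the shape of most parametric models.

    a and b default to Basic COCOMO's organic-mode constants purely
    for illustration; a calibrated tool fits them to your own actuals.
    """
    return a * ksloc ** b

def schedule_months(effort_pm, c=2.5, d=0.38):
    """Basic COCOMO organic-mode schedule: TDEV = c * effort**d.

    Note the exponent below 1: doubling effort does not double the
    calendar time, which is why these tools can offer several
    staffing/schedule trade-off options from one size number.
    """
    return c * effort_pm ** d
```

So a 32 KSLOC size estimate yields roughly 91 person-months and a nominal schedule of about 14 months under these illustrative constants; with your own calibration the numbers (not the shape) change.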
The best example is the convention of not counting comments. They take time to write and they add to the quality of the software, yet virtually everybody insists on not counting them. As for blank lines: run your sources through a source reformatter to get them into a known format (you should be doing that anyhow), and then even blank-line counts become comparable.
If you want to count comments, then count comments. The point is, if your estimation process is grounded in your own historical data, and you counted comments when measuring the code you've already written, then you're using a standardized (for you) size metric, and everything stays consistent. Things only break down if you're using industry productivity data instead of your own historical data, but industry data is generally considered unreliable anyway.
I don't really care much what someone wrote about line count. I'm aware that I've been working on F-22 type software and DC-3 software, working in a crack team and working in a team of people who never should have touched a compiler. I've worked in a happy little software house and I've worked in a big corp where the word "outsourcing" was uttered twice a day. I've worked in Visual Studio and I've worked with little more than vim and gcc, worked on C++ source that was "C with classes" and on C++ source that was thick with templates and multiple inheritance.
Me too... what's your point? I've been able to work without excessive schedule pressure because I can produce really, really good estimates. Is that bad?
Line count is one factor. Many, many others come into the equation. As such, I consider anything beyond that "find"-statement above overkill.
Those many other factors are accounted for in later stages of estimation. That's how you arrive at an effort number. However, size numbers are independent of the skill level of your team or any other external factors. Size is intrinsic to the software that's to be built, which is what makes it a great proxy measure to get the estimation process started.
Go ahead and use find. As long as you use the same find command and keep things consistent, it shouldn't matter. I just find Surveyor easier to work with and more powerful.