Sunday, May 17, 2015

What Do Sergey Brin, Larry Page, Bill Gates, et al. Really Have in Common?

Here's a different take on the key attributes that the most successful innovators and entrepreneurs are often claimed to have held in common. You've no doubt seen it as I have - it's a favorite speakers' technique to cite highly successful people as models for all of us. It's a handy way to bolster a set of admirable attributes they already had in mind and want to promulgate for their own purposes.

How many times have you heard or seen this popular refrain, something like: "What do Bill Gates, Sergey Brin, Larry Page, Will Wright, Jimmy Wales and Jeff Bezos have in common?"

Then what usually follows is a nice list of personality attributes (things like ambition, drive, determination, get-up-and-go, a stubborn refusal to give up or quit, maybe even creativity, rebelliousness, introversion, or geekiness) that just happens to nicely support the speaker's theme. But these are soft personality attributes that any researcher knows are devilishly difficult to measure unambiguously. To me it's such a self-serving argument that it's self-evidently a suspicious one.

Instead, I say all these successful high-tech innovators and successful business leaders had these three things in common:
1) Attended a Montessori school or pre-school
2) Knew how to write code
3) Grew up in a family that was financially secure

The only one I know to be true is #1, but I bet the others are true as well. They're certainly more likely to be held in common than those soft attributes, and they're much more objective and measurable, so at least I can be proven wrong. Am I?

We constantly hear, instead, about those soft but inspiring attributes that motivational speakers would have us believe they all held in common. Do we really think these six very different personalities are all that similar on such complex characteristics? Besides, who has ever measured and profiled all six of them in that way?

Saturday, December 13, 2014

A Fond Recollection

"When research walks on the field, judgment does not walk off." - as told to me by Dick Kampe, my boss at Citicorp years ago. Thanks Dick, I haven't forgotten.

 

Monday, July 21, 2014

Tiny American English Word Settles into a New Home in Our Spoken Sentences

"Have you ever noticed...?" (as Andy Rooney used to say in his 60 Minutes monologue) how so many people nowadays are opening their first sentence - whenever it becomes their turn to speak - with the same little two-letter word (that actually appears in this sentence, too, though in a more meaningful way)? It's the innocent-looking innocuous little word: "so".

No matter what the previous question, comment or statement was, or the nature of the conversation hitherto, there it is. Nor does the context, content or subject matter matter. In virtually every case, this habit can be seen - or perhaps I should say heard. No wait, not "habit"; it's more like a tic. A mindless, meaningless linguistic tic, that's what it is.

One curiosity about this phenomenon is that "so" appears to have found this new home far from its first home inside our spoken sentence structure. It has jumped right to the front of the line. Perhaps it snuck up there one day when no one was looking, and now it thinks it has found a more prestigious and prominent home (from which I'll wager it'll be hard to dislodge). A nice trick for a tic, especially for such a tiny little word - if it can get away with it!

You can look and listen for it yourself. Now that I've mentioned it, you'll find it with ease. It's all over the place. Watch enough talking-head programs - especially the Charlie Rose show, but really almost any traditional or online chat, conversation or interview show - and you'll see it. I can almost guarantee it.

If the interviewee is a successful entrepreneur in IT, an author, a highbrow elite, a young person earning attention for some noteworthy accomplishment, an arts or entertainment figure, or a more hardscrabble success-story type (though with them it seems less common), then no matter the subject, field of learning or line of work represented, it's there. Be he or she a sports figure, business leader, playwright, a whiz-kid or just a whiz of some other kind, you'll almost invariably hear them respond, when called upon, by speaking thus: "So,...".

It's evidently an American (language) thing, but it's not exclusively so (oops, there it is, though I'll argue that here it carries some real meaning). And it's more than just American, because it's thoroughly global in application. People ranging widely in age, education, geography, culture and ethnic background all do it. Tech people for sure, but also non-techies.

OK, not "all do it" but many do.

Now, since its usage implies something preceding, this "so" tic gloms onto the front end of so (oops! there it is again, but, again, I'll argue that here it holds some meaning) many a speaker's remarks when asked a question or when the discussion just turns his or her way. It serves as a kind of purposeless lead-in to whatever - and I mean to ABSOLUTELY WHATEVER - that person says next. Used as a tic like this, it has no meaning whatsoever and carries none in the sentence in which it falls, even though it leads the sentence - indeed, it leads the entire spoken paragraph or soliloquy. It leads but does no other work whatever.

What's also peculiar about this development is that "so" used to be relegated to the end of so (oops,...) many sentences - again, spoken sentences. Remember when a youngster (often young, perhaps less articulate, or just constrained in vocabulary) might say something like: "I didn't see what else I could do,... so....." or: "I told him: 'No, I'm just not interested', ...so...", then simply trail off in that unfinished but resigned way with no more words to follow? In those days (now largely gone, aren't they?), it was just a dangling hanger-on word to signal, apparently, that the person had run out of thoughts and lacked a snappier, more confident or decisive way to close the comment, wanting to simply let the listener take it from there. Remember?

Anyway, that seems to have fallen by the wayside in favor, now, of this new and opposite tic. So (oops) OK, that's it, that's all I have to say. So long. (Oops, but wait, here the word is appearing in wholly different usage contexts, isn't it?) So (oops) these don't count, even though, in any literal sense, their meanings aren't terribly clear either, are they? So (oops), it's just a quirk of our idiolects, I guess.

Saturday, May 12, 2012

Like NPS? I'll Take the "P", You Can Keep the "N" and "S"


Here's a simple exercise, or anecdote, to bring up when the subject of NPS (Net Promoter Score) arises:

"Did you hear about the CEO who's company saw these NPS numbers in his customer service groups quarterly reports: 15 in period 1, then it moved up steadily to reach 30 a few periods later.  Hooray!" (Right? - Wrong!)

15:  50 (in the top boxes of the "Would you recommend..." scale) minus 35 (in the bottom boxes) = 15. (Or was it 20 minus 5, which also equals 15?)

30:  30 (in the top boxes of the "Would you recommend..." scale) minus 0 (in the bottom boxes) = 30. (Or was it 50 minus 20? ...or 60 minus 30? ...or 65 minus 35?)

What actually (might have) happened is that the unhappy customers left, while the share of the happiest customers dwindled by 20 percentage points. That may be why such a company could soon go bankrupt (if its competitors are on their toes!).

(Of course, considering the alternative numbers in parentheses above, maybe what happened instead is that top-box responses, i.e., "Recommenders", rose by 15 percentage points, e.g., from 50% to 65% - or by 45 percentage points, e.g., from 20% to 65%.)
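
To make the arithmetic concrete, here's a minimal sketch - the percentages are the hypothetical ones above, and I'm assuming the conventional NPS definition of percent in the top boxes minus percent in the bottom boxes - showing how very different response distributions collapse into the very same score:

    # Illustrative sketch only: hypothetical top-box / bottom-box percentages
    # from the anecdote above, all collapsing into the same headline NPS.

    def nps(top_box_pct, bottom_box_pct):
        """Net Promoter Score = % in the top boxes minus % in the bottom boxes."""
        return top_box_pct - bottom_box_pct

    # Period 1: each of these distributions reports an NPS of 15.
    print(nps(50, 35))   # 50% recommenders, 35% non-recommenders -> 15
    print(nps(20, 5))    # 20% recommenders,  5% non-recommenders -> 15

    # A few periods later: each of these reports an NPS of 30.
    print(nps(30, 0))    # 30% recommenders,  0% non-recommenders -> 30
    print(nps(50, 20))   # 50% recommenders, 20% non-recommenders -> 30
    print(nps(60, 30))   # 60% recommenders, 30% non-recommenders -> 30
    print(nps(65, 35))   # 65% recommenders, 35% non-recommenders -> 30

    # The headline number "improved" from 15 to 30 even in the scenario where
    # recommenders fell from 50% to 30% and the unhappy customers simply left.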

So, don't tell me your NPS, tell me the shape of your "Will/Won't Recommend Curve"!

Tuesday, August 31, 2010

Moving Parts Theory of Customer Service

I'm quoting myself here again, but this piece is so old by now that I'm giving myself permission to do so. Oh yes, it's also terribly dated, as you'll see - but it makes a point that still seems relevant.

"Try to remember back to the first time you rode in a car with modern features like power steering, brakes, antenna and windows, a stereo tape deck, air conditioning and a sun roof. With all those new and complicated moving parts, didn't it occur to you that, sooner or later, something was bound to go wrong?  And didn't it, usually?

This is the principle behind the "Moving Parts Theory of Customer Satisfaction", which originated from a series of research studies I managed for the personal services side of a full-service bank (Northern Trust Bank, to be specific, my employer at the time) on the topic of customer satisfaction. When the findings showed areas like Personal Banker services, credit cards and checking services getting relatively lower satisfaction scores (though still quite high), while regular savings accounts and especially safe deposit box services (remember them?) received the highest scores, the Moving Parts Theory was born.

According to this theory, service areas or accounts having greater transaction volume and/or more personal interactions (i.e., more "moving parts") are presumed naturally to produce relatively lower satisfaction levels. After all, what can go wrong with delivery of a safe deposit box service? Originally called the "Moving Parts Hypothesis of Customer Service", the concept was elevated to a theory when, upon subsequent studies by the same bank, the effect persisted.

Implications of the Moving Parts Theory

One lesson learned from this experience was that the appropriate frame of reference for analyzing such satisfaction scores is to compare the bank's overall scores and those of each department, not against each other, but each against itself over time. Otherwise, ridiculous conclusions such as this can be drawn: "The Safe Deposit Box area is providing higher quality service than are the Personal Bankers."

Management must decide how high is up for satisfaction scores for the firm as a whole and for each service area. Only then can judgments begin to be made about the quality of service provided and progress toward goals. Of course, those management decisions need to observe the strategic imperatives of the firm. The bank that is committed to growing its "upscale" business with more sophisticated services and accounts may be well advised to expect increased volume of customer contacts, inquiries, and, perhaps, even lower satisfaction scores as it embarks on that program.

However, in the credit card business, it has been shown that while the Moving Parts Theory does seem to apply relative to cardmember volume, it need not automatically lead to lower overall satisfaction levels. The answer seems to lie in developing a professional and efficient customer service process as a proactive and strategically integrated program, not merely a remedial one."

Could it be - is this old message still on target?
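
To make that frame-of-reference lesson concrete, here's a minimal sketch; the service areas echo the ones above, but every number is invented purely for illustration:

    # Illustrative sketch only: invented satisfaction scores by service area.
    # The point: track each area against its own history, not against the others.

    scores = {
        "Safe Deposit Boxes": [96, 95, 96],   # few "moving parts"
        "Personal Bankers":   [84, 86, 89],   # many "moving parts"
    }

    for area, history in scores.items():
        change = history[-1] - history[0]
        print(f"{area}: latest = {history[-1]}, change vs. own baseline = {change:+d}")

    # A cross-area comparison ("Safe Deposit beats Personal Bankers") misses that
    # Personal Bankers improved by 5 points while Safe Deposit held steady.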

Tuesday, August 10, 2010

"How satisfied are you with the way we 'dunned' you?"

Believe it or not, I actually saw a Quality Assurance manager present findings in response to this question in an Operations Dept. quarterly customer service review meeting. One of my not-so-fond - but definitely hilarious - memories from the corporate side of my career.

So enamored was that group with the concept of obtaining customer feedback on its service delivery that it laid down a template imposing a regimen so rigorous, uniform and mindless in its application that this ridiculous question was the proud result. I remember commenting to a colleague in the coffee break room that the question was like asking a wife how satisfied she is with the way her husband has been beating her this year.

Can you top this one?  If so, leave a comment and explain!  I can't wait.

Sunday, July 25, 2010

"I Can't Get No Satisfaction"

When I see some customer satisfaction surveys, I can't help but hear that memorable Rolling Stones song title: "I Can't Get No Satisfaction".

What most offends me are those scales that use "expectations" as the key criterion, ranging from "Exceeded My Expectations" at one end to something like "Failed to Meet My Expectations" at the other. At E-RM, two major objections prohibit use of this approach:
  1. Unless I know what your customers' expectations are, how do I know how to interpret their responses to this scale?
  2. Over time, a customer becomes accustomed to the level and quality of a brand's product or service. Thus, by definition, that customer's responses should trend toward the mid-point of this scale, usually labeled "Met Expectations".
My most fundamental problem with this line of inquiry is the vast number of unknowns it embraces - indeed, it welcomes them! Questions abound as to what the firm’s long-term goal is and whether exceeding expectations means something wholly good (e.g., genuinely, pleasantly surprising and impressing the customer) or indicates some mismatch in customer communications and targeting. Ironically, a similar dilemma clouds the opposite result: ratings of a failure to meet expectations. Did the product (or service) really fail to deliver, or were customers misinformed about the level of service or product quality to “expect”? Data interpretation is tough enough without adding to the ambiguity with all these unknowns (or are they “unknown unknowns”?). My hope is that, now, they are at least “known unknowns”.

So, as a criterion measure for “satisfaction”, the expectations scale only adds to the problem. Can we please banish it forthwith?


Saturday, July 24, 2010

Basic Purpose of Marketing Research: Reduce Risk and Uncertainty

"Judgment doesn't walk off the field when research walks on." 
A former boss of mine - a most impressive and charismatic businessman - told me that once many years ago and I've remembered it ever since.

It reminds me that, at its best, what customized marketing research must do is reveal fresh insights specifically on target to the most pressing strategic issues on the table. Then, that vital and indispensable managerial judgment can be exercised with more confidence, amid fewer risks and less uncertainty than otherwise.

It’s Not Enough to Uncover Needs and Wants

At E-RM we endeavor to go beyond basic “needs and wants” to uncover “whims and wishes”, so our clients know which “bells & whistles” are best to include when building (or re-building) a brand.

Consumers, whether B2C or B2B, want more than table stakes when they buy. E-RM believes they often end up choosing based on which brand best fulfills, not just their “needs and wants”, but also their “whims and wishes”. When well understood, insights about these potent “whims and wishes” can lead directly to meaningful - not frivolous - “bells and whistles” that can separate market leaders from the pack. When “whims and wishes”-based “bells and whistles” are linked to brand identity, you’re off to the races!

Friday, July 23, 2010

For a Usable Measure of Customer Satisfaction, Don’t Use the “S” Word

E-RM prefers to avoid the “S” word – satisfaction – in our quantitative customer "satisfaction" studies. That’s because it’s such a wimp-word; it packs no punch and is often abused (deliberately or otherwise) to show high ratings by applying it in an undemanding scale.

At E-RM we prefer to use a scale that asks how Delighted vs. how Disappointed the respondent is with the brand, service, etc.

With a sufficiently stringent top-box adjective (e.g., Very or Extremely), the scale becomes a solid and useful measure of attitudes and behavioral tendencies. It also produces a nicely balanced positive/negative scale for discrete modeling of motivator and demotivator effects.
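
As a rough sketch of what I mean - the scale labels, scale length and toy data below are illustrative assumptions, not any prescribed or actual instrument - the top box and bottom box of such a balanced scale can be reported side by side:

    # A minimal, illustrative sketch: a balanced five-point Delighted /
    # Disappointed scale with stringent end-point anchors, summarized as
    # top-box and bottom-box shares. Labels and responses are assumptions.

    from collections import Counter

    SCALE = [
        "Extremely Disappointed",
        "Somewhat Disappointed",
        "Neither Delighted nor Disappointed",
        "Somewhat Delighted",
        "Extremely Delighted",
    ]

    # Hypothetical survey responses.
    responses = [
        "Extremely Delighted", "Somewhat Delighted", "Somewhat Delighted",
        "Neither Delighted nor Disappointed", "Somewhat Disappointed",
        "Extremely Delighted", "Extremely Disappointed", "Somewhat Delighted",
    ]
    assert all(r in SCALE for r in responses)

    counts = Counter(responses)
    n = len(responses)

    top_box = counts["Extremely Delighted"] / n         # demanding positive criterion
    bottom_box = counts["Extremely Disappointed"] / n   # demanding negative criterion

    print(f"Top box (Extremely Delighted):       {top_box:.0%}")
    print(f"Bottom box (Extremely Disappointed): {bottom_box:.0%}")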

Try it, you'll like it!

Amid Social Media Fury, Don’t Forget Old Faithful Research Tools

Data abounds in today’s digital world, but when - and how - do you determine how broadly representative the feedback is?

How do you work your way through the abundant, but raw, unstructured commentary from multiple Social Media channels to produce a fair, comprehensive composite picture of the view from the outside looking in?

A view that you can then integrate, tabulate or correlate with specific brand, product and service-quality ratings, rankings, and other preference or choice data from those same customers (or prospects, users, site visitors, etc.).

How, indeed, unless you have a solid program of primary research work on your agenda!

Research Cannot Serve Two Masters

If you'll pardon the Biblical paraphrase, here's why research cannot serve two masters.

Master #1:  Publicity for the firm in the form, for example, of high scores on customer satisfaction, to be bragged about in customer relations and PR efforts, newsletters, direct mail, blogs, corporate LinkedIn Profile, Facebook page, Twitter and other Social Media venues.

Master #2:  Honest market or customer feedback that’s useful for business decisions and competitive strategies because it’s valid, tough and unbiased.

The latter requires internal confidentiality and a demanding measurement regimen; the former shuns both like the plague. To design an appropriate research project for customer satisfaction, competitive brand allegiance, etc., I need to know which purpose is being served.

Of course, only one of the above purposes works for me, in any case.