~ Yes. It May Indeed Be About Downsizing.

I have a standard promotional talk that I give at conferences and early in the process of securing consulting engagements. Early in that talk, I discuss the rationale for knowledge management and the pitfalls that projects can encounter when their sponsors and managers do a poor job of communicating their intentions to the knowledge workers who will be involved.

The value of knowledge capture and reuse, I intone, comes from enabling the organization to move responsibility for resolving frequently recurring issues down to the customer-facing agents who are the first to hear about them. The ideal I’m talking about is, of course, “first contact resolution” — enabling the first person who talks to the customer to quickly diagnose the problem, retrieve a resolution and remove the customer’s pain. If knowledge is captured effectively, it should lead to consistent answers and clients who are happier with the results. And because the issue stays at “Level 1” and does not have to be escalated to more senior technical specialists, it costs less to resolve issues this way.

All well and good. But there is often a suspicious murmur in organizations going through the KM adoption process, especially those that describe the KM initiative as a “do more with less” measure. The literature of KM is full of references to projects that foundered because team members refused to comply, assuming the whole purpose of the exercise was to capture the expertise that makes individuals essential, so that agents could be reduced to expendable, interchangeable parts.

No, I assure my audience — the knowledge base provides more leverage from a static set of resources. Reducing escalations and making each agent capable of handling a wider variety of issues, by pooling expertise and rewarding contribution to the collective knowledge base, will increase each agent’s value, not reduce it. KM is seldom, if ever, about reducing headcount. This is one of my favorite slides; it never fails to elicit a room full of self-satisfied nodding.

There’s just one problem with it. These days, KM may very well be about reducing headcount. I’ve been involved in two projects recently in which enabling management to lay off staff was precisely the point. In one case, an assessment was intended to identify areas of expertise that had become redundant as a result of the restructuring of IT. In another, the knowledge base was conceived and built with the specific purpose of codifying procedures heretofore escalated to senior specialists, so that a number of those specialists could be let go and replaced by new kids who, using the knowledge base, would be doing the same work at 40% lower personnel cost.

I have interviewed people to analyze and deconstruct their business processes so they could be modeled in knowledge tools, and to capture their troubleshooting expertise, knowing that some of them would shortly be out of a job. They didn’t know, at least not in any official way. (Naturally, a lot of them knew in the many unofficial ways employees learn or intuit things like this — companies don’t go out of their way to hire stupid analysts.)

This isn’t what I set myself up as a KM consultant to do. It isn’t the sort of communication I advocate among members of a project team, or between a project sponsor and his or her constituents. And it isn’t what I’ve sold as the value proposition for KM for the last 18 years.

Nor, however, is it fundamentally wrong.

Companies are downsizing now, and it is pointless to pass value judgments on them for it. Some organizations will take the process too far, and one way this will inevitably hurt them in the long run is by reducing the essential store of knowledge that enables them to do what they do competitively. Management won’t see this until it is too late; at best, they will wind up inviting laid-off experts back at far higher consulting rates. At worst, they will lose a technical or customer relationship edge that will allow a competitor to eat their lunch when demand eventually revives. But they will have no choice; the cuts often are mandatory and indiscriminate.

Can KM be a valuable enabler in a time of layoffs? I’m prepared to answer “yes.” You can gain a differential advantage by broadening the capabilities of your remaining knowledge workers, and this may be enormously important if you’re going to be trying to function with fewer of them. The advantage will be measurable if you are handling an equivalent volume of issues with little or no loss of productivity, as measured in terms of average time to close an incident, or some similar metric, and with little or no degradation of customer satisfaction.
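
To make that measurement concrete, here is a back-of-envelope sketch in Python. The incident records and field names are invented for illustration; a real service desk would pull these from its incident management system.

    # Toy incident records; times are hours from a common reference point.
    incidents = [
        {"opened": 0.0, "closed": 1.5, "escalated": False},
        {"opened": 0.0, "closed": 4.0, "escalated": True},
        {"opened": 0.0, "closed": 0.5, "escalated": False},
    ]

    # Average time to close an incident.
    avg_time_to_close = sum(i["closed"] - i["opened"] for i in incidents) / len(incidents)

    # First contact resolution rate: the share of incidents never escalated.
    fcr_rate = sum(not i["escalated"] for i in incidents) / len(incidents)

    print(f"average time to close: {avg_time_to_close:.1f} hours")  # 2.0 hours
    print(f"first contact resolution: {fcr_rate:.0%}")              # 67%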

Customer satisfaction is subjective and depends on expectation management. It’s probably cold comfort, but customers are likely to cut you more slack in the current economic climate. As for productivity gains, knowledge base proponents have always promised that knowledge sharing would enable greater capacity from thinning resources. The challenge is to motivate people who probably know their days in the organization may be numbered to facilitate this by sharing what they know, rather than simply taking it with them when they leave.

How do you get them to play along?  To be blunt: Pay them.

Don’t undermine everyone’s confidence in management and the organization by imposing a KM program and then dropping the hammer on the team you are downsizing. Bad times bring morale problems; you will only make them worse for your layoff survivors by cloaking your intentions and then springing them abruptly. Survivors will conclude that more layoffs are coming in waves, and they will refuse to participate in (or may even sabotage) the KM initiative if they feel exploited.

If layoffs are coming, explain the circumstances, be plain about where and why the cuts are coming, and offer what may be an enticing carrot: A knowledge transfer exit program.

Explain that everyone in the affected team will be asked to help establish a knowledge repository for the business function they support — customer service, tech support, HR, inbound sales, whatever. The business processes for that function will be examined and deconstructed to identify the most frequently recurring, mission-critical issues, problems or transactions — e.g., the “top 200” technical problems managed by a service desk, the ones that place the greatest burden on the team. The solutions to those issues will be documented and encoded as core, “seed content” in a knowledge base, using a suitable software repository, so that all members of the team can diagnose and solve these issues quickly and effectively without escalating them to the next level experts.
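
As a sketch of how that triage might begin, the few lines of Python below count closed tickets by category to surface the highest-volume issues. The CSV export and its “category” column are hypothetical; substitute whatever your incident management system actually produces.

    import csv
    from collections import Counter

    def top_issues(ticket_csv_path, n=200):
        """Count closed tickets by category; return the n most frequent."""
        counts = Counter()
        with open(ticket_csv_path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["category"]] += 1
        return counts.most_common(n)

    # The seed-content candidates, most burdensome first.
    for category, volume in top_issues("closed_tickets.csv", n=10):
        print(f"{volume:6d}  {category}")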

Be frank — explain that for some members of the team, this project may be their last as employees of the organization, and that this is pure economics and not necessarily a value judgment on their individual worth to the organization. If possible, suggest that the quality of each individual’s contribution to the knowledge capture effort may be taken into account in the decisions as to who stays and who goes, but do not make this promise unless you truly mean it and know you have the authority to say it.

You are going to be taking people offline for some portion of their day to do this knowledge gathering — off the phone, away from the day-to-day work you hired them to do. You probably will take a hit to productivity, as it normally is measured, in order to get it done. Manage expectations with your customers. More importantly, let your employees know you anticipate the disruption and that no one will suffer consequences for it.

Finally, announce an amendment to the usual severance package for those you must let go. Offer to retain these employees for an additional period — three to five days, say — to participate in an intensive, facilitated debriefing and knowledge-building session in which these individuals work solely on completing the population, technical verification and polishing of the knowledge base. Position this as an opportunity to leave a personal mark on the shared knowledge of the team, and earn consideration for post-severance contract work or even eventual rehiring when the economic climate improves. Again, do not promise such consideration unless you know you have authority to do so and that contract work or rehire are plausible.

Some of your people will opt out, of course. Severance is a stressful business, fraught with resentment no matter how you handle it. But, handled correctly, with empathy and candor and with the right kind of facilitation, a knowledge transfer exit program might be the means by which you finally get the sponsorship, resource commitment and breathing room to launch a KM initiative you may have wanted to undertake in better days, but couldn’t.

~ New Partnership: Atlassian

These days, it’s difficult to get too excited about a new piece of collaboration technology, and I waver when I feel myself going too far with testimonials. But I’ve been working on a knowledge management project built on an enterprise wiki platform called Confluence, and I’m inclined at this point to pay it the ultimate compliment: The thing works. The project’s thrown me its share of curves, and the vendor, Australia-based Atlassian, has had a workable answer for every one of them.

I’m prepared to say that virtually everything I’ve done for the last 18 years with enterprise knowledge base tools, I can now do in Confluence at about one twentieth the license cost. This is, as we heard a lot during the late Presidential campaign, a game-changer — a textbook example of disruptive change in a market that was ripe for it.

I’m sure this is true of lots of open source wiki tools, and there are dozens of them to choose from. But I’m hip deep in this project, and I’m having what is unfortunately an all-too-rare experience with technology these days: I’m having a satisfying customer experience.

So KnowledgeFarm is now an Atlassian partner. Confluence isn’t Atlassian’s only product; the company is probably better known for Jira, a bug-tracking system used by a lot of corporate internal software development teams. And there are a host of other products I haven’t even seen yet. For now, I’m immersing myself in the subject matter expertise of Atlassian and its user community with a specific focus on Confluence as a knowledge base platform.
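
For the technically curious, here is a minimal sketch of seeding a Confluence space with knowledge articles through the Confluence REST API. The REST interface belongs to later Confluence releases than the version I first worked with, and the server URL, account and space key are placeholders, so treat this as an illustration rather than a recipe.

    import requests

    BASE = "https://confluence.example.com"  # placeholder server
    AUTH = ("km_bot", "app-password")        # placeholder service account

    def create_solution_page(space_key, title, body_html):
        """Create one knowledge-base page; return the new page's id."""
        payload = {
            "type": "page",
            "title": title,
            "space": {"key": space_key},
            "body": {"storage": {"value": body_html,
                                 "representation": "storage"}},
        }
        resp = requests.post(BASE + "/rest/api/content", auth=AUTH, json=payload)
        resp.raise_for_status()
        return resp.json()["id"]

    page_id = create_solution_page(
        "KB", "VPN drops after laptop wakes from sleep",
        "<p>Symptoms, diagnosis, and the verified fix go here.</p>")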

For years, I’ve opened conference presentations with a specific talking point: That in order for KM to amount to anything in corporate business functions, it has to shed all of its ivory-tower trappings and be accepted as a simple discipline, accessible to anyone with ordinary office competencies. I’m sticking to that story, but I’m adding a corollary. These are not the good times, economically, and nothing is sacred — any project deemed discretionary is at high risk of losing its funding. KM, to survive, has to get cheaper. A lot cheaper.

I see wiki tools like Confluence as the means to, for all intents and purposes, strip the enterprise software cost out of the KM funding equation, so that the executive sponsor can keep the project’s focus where it belongs: On people, process and content issues.

If you have an at-risk project, let’s talk about it.

~ Imaginary Interview: KM and ITIL

This is an interview that never happened. Something like this was supposed to take place in connection with my appearance at the HDI annual conference in 2008. The organizers typically publish interviews with some of the presenters in the run-up to the conference, but this one never actually occurred. I made it up. It’s what I’d have wanted to say.

Question. You’ve been involved in Knowledge Management for 18 years, as a software vendor, a process consultant, an author, an analyst and a conference speaker. We’ve talked about this a lot. Has KM panned out the way you anticipated it would in Customer Support?

Dorfman. Nope.

[pause]

Q. Nope? I think you have more on your mind than “nope.”

Dorfman. Well, for the entire decade of the 1990s, we made the excuse that organizations were working their way through the process of Customer Relationship Management adoption, which was a huge, six- or seven-figure investment for a lot of companies, and once they were out the other side of that, then they would get around to KM. It worked out that way to a degree, but not the degree we anticipated. Knowledge bases were supposed to be everywhere by 2004, 2005. They’re not.

Q. This isn’t like you.

Dorfman. I’m exaggerating a little. KM’s not a bust. Lots of organizations have deployed self-help knowledge bases to enable customers, and internal end users, in the IT service desk context, to solve some of their own issues, and thereby lower the cost for support by diverting incidents from the phone to the web or intranet site. Companies have gotten a lot of value from those. There are success anecdotes, certainly, in implementing knowledge bases for use by service desk analysts in the Incident Management and Problem Management processes. I wouldn’t be doing KM any more if I hadn’t had successes and didn’t believe knowledge management adds big value to customer support.

But there were some articles of faith, especially back when I was on the vendor side, that have not panned out. I wrote a book about knowledge-based problem resolution tools in 2005. At the time, vendors were still suggesting that differences in search and retrieval technology were critical differentiators between these systems. From a technologist’s point of view, the distinctions are very interesting, and there are meaningful differences for a small segment of the market. But from the end user perspective, for the vast majority of adopters, those aren’t the differences that matter.

The differences that matter are much simpler, more pragmatic things: Will the system integrate smoothly with our call management platform – the version we have in place today – at anything like a reasonable cost? Will it really help us make sense of all this documentation we have and that people care about? Will using the knowledge base get in our analysts’ way, and will they really be more productive if they’re now spending time searching and creating content? And even if they are more productive, how is this worth an additional $5000 a seat to my service desk?

Q. That’s a real number – $5000 a seat?

Dorfman. I’ve seen it in RFP responses, if you do the math. Of course it gets worse when you add in the cost of integration. I know running a software company is an expensive proposition. But a lot of prospective KM adopters are deciding that the price of tools is far in excess of the value they will get from the knowledge base, and on the whole I agree with them.

There used to be a low end to the KM tools market, where you could license a system with surprisingly strong functionality, at quite reasonable cost. But the low end has been very unstable. Those vendors have been acquired and their tools absorbed into higher-end CRM or IT service management platforms, or they have just not made it.

The other interesting question in 2005 was whether there was a genuine, sustainable market for vendors of pure problem resolution systems – at the time they were positioning themselves as “Service Resolution Management” vendors, and promoting this concept as if it were a full-fledged IT service management process, like Problem Management or Change Management. The subtext of course was that SRM should be a line item in the service desk budget.

I expressed some doubt about this in the book – in my mind, it remained to be seen whether buying a “best of breed” problem resolution system from a separate vendor would really make more sense than buying the knowledge base add-on from your incident management system vendor. I never questioned whether the best of breed tools were really technically superior – they have been and they are. But does that difference make up for the headaches of maintaining a separate vendor contract and relationship, and the cost of integration? Any vendor can make that case in certain marquee accounts, but my sense is that for the majority of cases, that is a very tough proposition to prove.

Q. So where can you make the case for best of breed?

Dorfman. Where you’re going to have a very large, sprawling knowledge base with tens of thousands of distinct solutions, especially when the people searching for solutions are relatively unsophisticated and may have a very unclear picture of what they’re looking for. But most knowledge bases are much smaller than that; in the real world, a knowledge base with 300 or 500 well-crafted solutions covering issues that actually happen to your end users (as opposed to being things that obviously could happen but virtually never do) can be hugely valuable. And it doesn’t, or shouldn’t, cost you a king’s ransom to build and maintain it. Then again, if you’re, say, Cisco Systems, and the scope of the knowledge base is everything you sell, you probably should be looking at best of breed. The issue for the vendors is, how many Ciscos are there, really?

The other place where best of breed probably is worth the expense is where it really, really matters whether the answer you get from the knowledge base is correct. Let’s say you support a medical device, a blood glucose meter or a CAT scanner. You can easily imagine scenarios where providing the wrong solution to a technical question could really hurt someone, and put you at big liability risk. One way best of breed solutions justify their cost is by minimizing the likelihood of an answer being wrong, and by providing very sophisticated audit trails for content authoring and maintenance.

Q. OK. So here we are again, talking about knowledge tools. It’s ironic, since you’re no longer a tools guy. You’re really a process consultant.

Dorfman. True. Once people associate you with technology, it’s really hard to shake that. Hence the book.

Q. So let’s move to process. Are you a proponent of the Knowledge-Centered Support best practices?

Dorfman. Sure. There are some very profound things in the KCS framework – for example, the idea that the most valuable knowledge base content is generated in a bottom-up fashion, in the workflow, through actual customer interactions. The idea that you should only invest time and effort in generating content that real experience proves is relevant to your end users – common sense, really, but it’s a KCS principle. The evolved scheme for evaluating and controlling content quality is a valuable part of KCS.

I teach KCS and help organizations to adopt it. Or, rather, we borrow from KCS what makes sense in each specific case. The framework is good; it evolves and is getting better. But it isn’t necessarily complete, and I’ve seen some groups decide that parts of it are too formal to be realistic for them.

The most valuable thing about KCS, and this goes generally for best practices, is that it exists and is branded. KM adoption is a big deal. Its ultimate sponsor is likely to be someone remote from the actual processes in which it is going to be applied and who may have little personal involvement in its implementation. You need to convince the sponsor that funding the KM project will provide a benefit to the business and reflect well on him or her. The existence of a widely adopted Best Practice supporting KM helps to make the case that there is a proven and documented “right way” to do KM, and that the proposed project has a high probability of success because Best Practice will be adopted. Honestly, this probably has as much value as anything KCS actually says.

Q. You’re now an IT Service Management/ITIL® practitioner.

Dorfman. Thanks for noticing.

Q. Yes. Well. The whole ITIL framework has been refreshed, and ITIL Version 3 has a lot to say about KM. Is that significant?

Dorfman. Yes.

[pause]

Q. You’re doing it again.

Dorfman. Dramatic pause for emphasis. Don’t mind me. I’m actually very intrigued by this.

ITIL has not been particularly helpful in clarifying or setting objectives for KM – until Version 3. The refresh finally puts knowledge management on the IT executive’s map – but not in the way KM proponents might have expected.

ITIL 2 was filled with glancing references to KM as focused on problem resolution. ITIL 3 establishes a context and a roadmap for the management of institutional knowledge – KM as a metaprocess for IT service management, beyond the everyday business of managing incidents, rooting out errors in the infrastructure and resolving recurring problems.

KM can offer significant productivity gains beyond the service desk. Knowledge sharing enables more informed decision-making, and the people who manage IT services are always making critical decisions, about changes to the infrastructure, organization of projects and teams, adoption of new technologies, protection of IT assets from disaster or hacking, and other relatively strategic issues remote from routine, tactical PC support.

So ITIL 3 is good for proponents of KM – especially process consultants. It’s unclear what the impact will be on the current vendors of KM software tools, most of which are narrowly focused on problem resolution. Many of them are scrambling to be seen as embracing ITIL 3. The visionary vendors will build collaboration and community-building tools onto the ITSM platforms of their clients, and some of what works for problem resolution will have a place in the larger ITIL 3 context. The vendors of the ITSM platforms themselves, the largest of which already provide their own KM solutions, are positioned best.

ITIL 3 raises the profile of KM among executives who provide the sponsorship and funding for IT. For the first time, recognition of the strategic value of institutional knowledge is elevated to a basic Best Practice.

Q. What’s really new in ITIL 3?

Dorfman. The framework recognizes and accounts for the fact that IT evolves. After seven years of ITIL 2, the organizations responsible for ITIL recognized that building out the framework would still bring the enterprise to a static endpoint, in which the dozen distinct processes are often owned by separate teams and isolated in silos. ITIL 3 is intended to take the adopting organization to a higher level of process maturity. Among the explicit goals of the refresh are to remove the process silos; to more closely integrate service management processes with the processes of the business; and to create a framework that recognizes that businesses, and IT infrastructures, constantly change. ITIL 3 embraces the concept of the IT service lifecycle, and the need to manage services through recurring cycles of design, implementation, adoption, operation, feedback and improvement.

Q. Did you memorize that?

Dorfman. Yes.

Now, ITIL never explicitly advocated development of a knowledge base, although the Known Error Database (in Problem Management, a repository for infrastructure errors and the solutions to those errors) certainly sounds like a knowledge base. ITIL never said how a Known Error Database should be built – it could be something as simple as a spreadsheet, or it could be a complex, enterprise knowledge management system.

The Incident Management process includes a step called Matching, where the analyst compares a new incident to past incidents, to classify it and identify it against known errors, and to suggest appropriate workarounds or fixes. If you’re conditioned to see this as an occasion to consult a knowledge base, you can, but ITIL never proposes a specific way of going about Matching.
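
To make the point concrete that none of this requires heavy machinery, here is a toy Python sketch of a Known Error Database and a naive Matching step based on keyword overlap. The record fields and the matching threshold are my own illustration; ITIL prescribes neither.

    from dataclasses import dataclass

    @dataclass
    class KnownError:
        error_id: str
        symptoms: str       # free-text description of the failure
        workaround: str     # what the analyst can offer today
        permanent_fix: str  # filled in when Problem Management resolves it

    KEDB = [
        KnownError("KE-042",
                   "outlook crashes opening large pst archive",
                   "move the archive below 2 GB",
                   "patch scheduled for the next desktop build"),
    ]

    def match_incident(description, kedb=KEDB, threshold=2):
        """Return known errors sharing at least `threshold` words with the incident."""
        words = set(description.lower().split())
        return [ke for ke in kedb
                if len(words & set(ke.symptoms.split())) >= threshold]

    print(match_incident("Outlook crashes when opening my archive"))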

KM can be a pervasive sub-process throughout IT service management, and ITIL finally offers some guidelines in version 3.

ITIL 3 proposes a “Service Knowledge Management System” – a solution intended to capture knowledge from sources ranging from one end of the service management process life cycle to the other. ITIL is vague about what an SKMS looks like. But it clearly envisions an enterprise knowledge platform, as opposed to a point solution for problem resolution.

Knowledge assets flow into the SKMS from a variety of sources and directions, including data housed in the Configuration Management Database. Configuration data passes from the CMDB through a higher-level logical repository called the Configuration Management System (CMS). At the top level, the Presentation Layer, the SKMS is supposed to provide multiple views into the end results of knowledge processing that happens at lower levels. There’s a general portal view, as well as specific dashboards for business functions such as governance, quality management, asset and configuration management, and the service desk. A customer view, for self-service, also is proposed. Obviously, this is different from a knowledge base where you ask a specific question and you get a list of possible answers.
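
One way to picture that layering is as toy data structures: raw configuration items at the data layer, a logical CMS view over them, and audience-specific views at the top. This is my reading of the books, reduced to a Python sketch, not a product design.

    # Data layer: raw configuration items in the CMDB.
    cmdb = [
        {"ci": "mail-srv-01", "type": "server", "status": "live"},
        {"ci": "hr-portal", "type": "application", "status": "live"},
    ]

    # Configuration Management System: a logical view over CMDB data.
    def cms_view(ci_type):
        return [r for r in cmdb if r["type"] == ci_type]

    # Presentation Layer: different dashboards over the same knowledge.
    def presentation(audience):
        if audience == "service_desk":
            return {"live_servers": cms_view("server")}
        if audience == "governance":
            return {"asset_count": len(cmdb)}
        return {"portal": cmdb}  # the general portal view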

Benefits include many of the conventional things vendors talk about in reference to problem resolution tools deployed in service desks: Better, faster, more accurate problem-solving; higher first-call resolution rates or lower rates of escalation to higher level subject matter experts; reduced analyst training and the like. But the ITIL conception of successful KM includes broader metrics such as successful adoption of new or changed services; greater responsiveness to changing business demands; and improved adoption of standards and policies.

Q. ITIL 3 is new. Has anyone actually built an SKMS?

Dorfman. Correct – it’s brand new, and no, I haven’t seen one yet. But I believe there are big opportunities here.

I don’t think any one vendor can provide all of the components at all layers of the architecture, and there are opportunities for narrowly focused vendors to create high-value “snap-in” components, such as ready-made dashboard components for specific IT and business functions, to drop in at the Presentation Layer; SKMS-tailored business intelligence, query and monitoring tools for the Knowledge Processing Layer (where technologies commonly used in conventional service desk problem resolution could be applied as plug-ins); and other components.

If the SKMS is broadly accepted by the ITIL community as Version 3 takes the place of Version 2, Service Knowledge Management represents a huge opportunity for systems integrators.

Q. And for the companies who have tried to do KM?

Dorfman. ITIL 3 is an opportunity to get KM onto executive radar screens, maybe for the first time. Managers who have tried to promote KM adoption may see this as a golden moment to advance a personal objective, and they may be right.

But one piece of objective counsel in KM adoption does not change as a result of ITIL 3. The framework calls for a strategic vision for enterprise knowledge. But the best roadmap for success in knowledge management will get the adopter there in small increments. An effective KM adoption is a big win, and the way to win big is by repeatedly winning small.

So have a long term vision for the SKMS, but have a plan that gets you there by building it in bite-sized chunks. That will greatly reduce the risk of failure for you and for your executive sponsor.

~ Déjà Vu: Social Media and Knowledge

I’ve lived through 18 years of boom and bust for knowledge management adoption. For most of that time I’ve been making the case that success or failure at KM depends far more on the organization’s ability to come to grips with the people/cultural, process and content issues than on factors specific to an adopted software tool. In fact, I started making that argument while I was employed by a tool vendor, in articles and in front of conference audiences.

But it’s just a fact of life: People understand and get jazzed about the business of buying enterprise software. They know how this works in their organizations; they know what authority they have, and who has the power to approve funds for software purchases, and what hoops they have to jump through to get a deal done. And let’s face it – we’re gearheads, we consultants and our corporate clients. We like tools. Software adoption is concrete and fun and motivating. It’s also, frequently, really expensive, but there’s a certain egotistical thrill in throwing those kinds of funds around.

Adopting process and all these other essential intangibles…that feels abstract and subjective, undocumented and risky. How do you know you’ve got it right? What does right look like?

My life as a process guy is easier thanks to the existence of globally accepted Best Practices like ITIL and Knowledge-Centered Support. But my biggest challenge, year after year, tends to be getting the client to put the horse where it belongs, in front of the cart. I.e., don’t start an initiative like KM adoption by firing off an RFP to tool vendors. Define your strategy and your objectives, figure out what your KM effort is supposed to actually do for you, see what resources you have available to help, inventory your knowledge and determine how to source what you don’t have, decide how you’re going to measure success…and then go looking for software that will help you accomplish all of the foregoing. On your terms, not the vendor’s.

I lose this argument about as often as I win it. Often I find myself engaged in remedial projects fixing problems caused by a mismatch between KM expectations and the capabilities of tools adopted because they were there on the client’s shelf before the project even started.

Projects that had prospective champions and even budgets never got off the ground because the client surveyed the market and found two dozen vendors whose tools were, to their eyes, functionally indistinguishable. I wound up writing a book in 2005, laying out the real issues in KM adoption and profiling 13 different tools, just to clear away the confusion in the hopes that clients would feel comfortable enough to get moving and confront the real issues.

That’s how it’s been for knowledge based problem resolution systems – transactional knowledge base platforms designed to allow agents or customers to pose specific questions or problems and retrieve specific answers or diagnoses/solutions. And history is repeating itself for the Social Media category.

Problem resolution, or Service Resolution Management, as a few of the vendors were styling it a couple of years back, is just so…a couple of years back. Now it’s all about socializing knowledge, networking for insight and context and for the sheer fun of networking. Organizations will still need knowledge bases as long as end users go on asking questions and lodging complaints. But the buzz is all focused on forums, wikis, blogs and what, for lack of a term that seems satisfyingly descriptive, I call TTLLFs (Things That Look Like Facebook). For those of you who have been at this long enough to remember “Communities of Practice”…that’s where CoPs went, down the social networking rabbit hole.

Social media scare a lot of managers, who see their staffers’ productivity vanishing into the vortex of networking, instant messaging and, well, socializing. When, they ask, is the work supposed to get done? It’s going to take time for it to be universally understood that in businesses where the exchange of knowledge is a core, critical process, social media are the channels by which the work actually does get done.

But perhaps more immediate is the richness-of-choice issue. In about a third of the time it took to happen to the problem resolution market, the field has become swamped with hard-to-distinguish social media players – Open Source wiki platforms at the free-to-cheap end of the spectrum; comprehensive, enterprise, multimedia-enabled social media packages like Awareness at the other; and all of the usual suspects in the portal and enterprise content management spaces in the middle, adding social media functionality. And then there’s Microsoft riding the increasingly insistent buzz for MS Office SharePoint Server (MOSS), which is getting to be a social media platform in its own right.

Once again, the point of analysis paralysis has gone from, “What do we need any of this new gear for?” to “Which flavor is the right one for us?”

Frustrating for adopters. A gravy train for Gartner, Forrester and IDC. Simultaneously a speedbump for new projects and, perhaps, an opportunity for people like me. * sigh * Is this going to require another book?

~ More Petulant Thoughts About Energy

As I await the arrival of a service technician to fix my Kenmore oven and range, whose electronics apparently fried during a recent brownout, I’m pondering the likelihood of a future marked by frequent recurrences of this kind of life-hiccup.


We’ve been without a stove for a week. A two-hour brownout caused various devices in the house to shut down and then flail like landed trout, trying to come back to life. Most things came through this intact. But the stove’s will to live and fight its way out of the induced coma was so strong that it apparently burned out two different circuit boards clicking on and off, eventually settling irretrievably into something called Sabbath Mode.

This event worries me because we’ve had two power failures in the last week, and every damn thing in the house is electronic. Like NutraSweet, silicon is a ubiquitous element of middle-class life. Can all these solid-state ganglia that run our lives withstand frequent power drops and surges? Because I’m imagining these interruptions becoming increasingly common as utilities, like everyone else, fret through the ongoing — let’s say normalization — of fuel prices in the USA, relative to those of the rest of the world.

I’m reminded of the rolling blackouts in California in 2000 and 2001. Those were caused by the failure of infrastructure to keep up with demand, but it wasn’t as though transformers were blowing up on every corner. It was a failure of the market to maintain an adequate supply — a failure of economics. Do continuing rises in fuel costs mean that we’re headed for more such supply-side failures, and more rolling blackouts, this time not limited to specific markets like California’s?

Given the friction generated every time utilities attempt to raise their rates, this strikes me as likely, and suggests another prognostication: There are lots of appliance service calls in your future and mine, as power drops and spikes cause seizures in more and more of our cherished devices, whose innards may turn out to be more vulnerable than anyone’s recognized before.

Does Kenmore know this? Does anyone know? Is anyone tracking the failure rates of personal electronics? (How many American homes have uninterruptible power supplies? Is that a new market opportunity for someone?) Is anyone making market projections for replacement boards?

And what are the customer service implications? I mean, I’m not having a good experience today. The technician is running 3 to 4 hours behind schedule (maybe more — I’ll know when he eventually shows up). I do know that four replacement parts were due to be sent here prior to this call and only two got here. I’m wondering why I had to tell him this — isn’t he supposed to know where all these parts are before he shows up?

Maybe this is going to be a typical customer experience with electronics-dependent appliances from here on. Maybe some of these things, for that very reason, aren’t worth the trouble to own. There are lots of days when I miss experiences like turning a knob and having a blue flame pop up to heat my frying pan. How well an individual manufacturer’s products hold up under changing power conditions could become a point of differentiation when it comes time to buy a washing machine or a stove or a phone system.

(Phones…it used to be that when the power went out, the one thing you could still rely on was the phone. Not any more — the line’s still alive, but the phone now is a complex console and remote handset system stuffed with electronics and sucking power. I’ve already had two such systems eviscerated by lightning strikes. Today the lifeline is my cell phone…if it happens to be charged.)

I never was much of a gadget guy, and I tend to use only the most basic features of the electronics I do own. Maybe one of the little side benefits of the fuel cost crisis is that my attitude about these things will be vindicated. It feels that way today.

~ Fuel Costs vs. Face Time

The news for this traditional travel holiday weekend in the US is that fuel costs are actually cutting into Americans’ travel plans. It’s the first significant year-on-year drop in projected travel since the 1979 fuel crisis. This time around, it could be different, though. We all saw the ’79 gas shock as a skirmish in the West’s cold war with OPEC — and as a temporary event. Today, however, there is a sense that oil prices in the US are permanently resetting themselves at a level more consistent with those the rest of the world pays.

Combine this ground-level observation with the desperate measures airlines are taking to cope with fuel prices and it becomes obvious that something basic in the economies of the industrialized world is changing. Already, the relationship between service businesses and their customers is changing, and not for the better. And within organizations, this is going to have an effect on our relationships with one another. While the data really aren’t there yet to prove this, it is reasonable to expect the rising costs of air fares, car rentals and hotels to cut significantly into business travel, soon. (Possibly a leading indicator: Air travel in first and business class is already off. This is where the real value is for the airlines.)

What this means, if it’s borne out, is that we’re all going to start seeing less of one another.

That’s worrisome to those of us whose business it is to see to it that people in organizations successfully exchange knowledge. Knowledge doesn’t travel solely in explicit forms — in e-mails, memos, articles published in knowledge bases, or bulletins posted in SharePoint sites. Knowledge often is transmitted subliminally, or accidentally, in the body language and conversational nuance that only comes from face time.

That’s why executives insist on reviewing the troops in person; it’s why meetings (when they’re run effectively) continue to matter. If companies cut back sharply on travel to meetings, something that materially affects productivity and competitiveness — for individual companies and eventually for the economy as a whole — is going to be lost.

Closer to ground level, in service desks and call centers (the ones that are still physically located in North America), there is growing interest in a parallel coping strategy: The “agent at home” concept, in which customer service or help desk agents, instead of commuting every day to a warren of cubicles, do what they do from home, connected through CRM software, IP telephony and instant messaging.

What the agent at home loses is the opportunity to “prairie-dog” — to collaborate on problems with peers one cubicle away. But it’s a sacrifice many agents will willingly make when it costs $70 to gas up the minivan. This option is already making it possible for companies to find and retain capable people in jobs that are neither glamorous nor particularly lucrative — but actually can be done rather effectively from home, if these people are both disciplined and knowledgeable. So the agent at home has the feel of inevitability, at least in the US.

It seems self-evident that as people increasingly view themselves as knowledge workers (since everything that isn’t knowledge work is being outsourced), their ability to pick one another’s brains is of strategic importance to the organizations they work for. It is unrealistic to conceive of an agent at home operation without some kind of shared knowledge base. That one is obvious.

But work at every level, in every functional area in organizations, is increasingly collaborative, and collaboration is increasingly virtual. Those companies that have thus far resisted adoption of social media applications (forums, wikis, collaboration systems, blogging platforms, Facebook) will have to investigate them to fill the knowledge gap created when the travel clamps go on and peers become, more and more, disembodied voices and blog personae.

All organizations are feeling the pressure to change the way they operate. Asking technology to enable managers to go on functioning as they always have — e.g., using videoconferencing to try to run the same types of meetings they used to run in the face to face world — will be a short-lived pattern. Work is changing — and the tipping point, when it becomes too obvious to ignore, is likely to be expressed as a price per gallon.

~ KM and the 800 Lb. Gorilla

[This article also appeared in the online newsletter CRMAdvocate.]

We management consultants love to draw attention to our enterprise-scale experience. But we all work with smaller teams — departmental units within large companies, and small to midsize businesses. This surely is the case in Knowledge Management. Even at a mammoth multinational, the first beachhead for KM is likely to be a small functional group responsible for a specific type of work, such as an IT Service Desk.

Once, there was a range of effective KM solutions for small service desks — a reasonably stable low end of the market. But the low-end vendors have been swallowed by enterprise solution suppliers and moved up-market. At one pharmaceutical company I’ve assisted, a small, cost-center functional team needed a KM solution and sent its RFP to the usual suspects, only to see two of the household-name vendors decline to respond because the budget for the license was under a quarter million dollars.

I have a couple of questions for my friends in the software space. With all due respect, guys…are you having a good time, in the present economy, explaining to people who run departmental service desks where they’re going to find $250,000 in defensible value in a problem resolution tool? I mean, of course, before the cost of integrating this tool with the incident management system effectively doubles that investment?

Oh, and have you noticed the 800-pound gorilla in the room? His name is Web 2.0.

You’ve heard of Web 2.0. What you may or may not have gathered is that it isn’t a new generation of technology. It’s a mindset. What it signifies is that the value to the business that your applications generate comes not from the designed-in features, but from the contributions of the end users — especially the content, but to an increasing degree the user-modifiable attributes of the software.

In fact, users are generating some of today’s most interesting software. The Open Source Community, driven by a geeky but grand ethic that software ingenuity is meant to be shared for little or no cost, has spawned thousands of useful applications, including many that look and act a lot like KM systems.

Guys, who’s really your competition in the problem resolution space? Heh…you’re all looking at each other. Have you seen what wikis can do?

In case you’ve never tried one, a wiki is basically a user-editable web site. The most famous example is, of course, Wikipedia, the user-generated, living encyclopedia. You don’t like the entry on your favorite rugby team, Iranian movie director or management fad? Just register, and you can go in and change it. It’s the wisdom of crowds, on steroids. It’s fast becoming the way knowledge proliferates across the globe. And you can now buy the technology that makes this possible.

The service desk world has begun to notice wikis in a big way — and to ask a really interesting question about them: What does a “conventional” knowledge management tool do that you can’t do with a wiki?

What is a knowledge management tool, really? It has three fundamental components:

  • A repository for “solutions” — essentially a document management system;
  • A means for retrieving solutions based on specific queries — generally some variant on search; and
  • A workflow engine to manage the authoring, review, approval, publishing and eventual retirement of solutions.
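
To underline how small that skeleton really is, here is a toy sketch of all three components in Python. Any real product does far more, but these are the bones.

    class KnowledgeBase:
        """Toy sketch: repository, retrieval and authoring workflow."""

        STATES = ["draft", "reviewed", "published", "retired"]

        def __init__(self):
            self.solutions = {}   # 1. repository: id -> solution document
            self._next_id = 1

        def add(self, title, body):
            sid = self._next_id
            self._next_id += 1
            self.solutions[sid] = {"title": title, "body": body, "state": "draft"}
            return sid

        def search(self, query):
            # 2. retrieval: naive keyword match over published solutions only
            q = query.lower()
            return [s for s in self.solutions.values()
                    if s["state"] == "published"
                    and (q in s["title"].lower() or q in s["body"].lower())]

        def advance(self, sid):
            # 3. workflow: draft -> reviewed -> published -> retired
            s = self.solutions[sid]
            i = self.STATES.index(s["state"])
            s["state"] = self.STATES[min(i + 1, len(self.STATES) - 1)]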

There are service desks whose knowledge requirements are such that they cannot depend solely on search for quick and specific knowledge retrieval. These situations are very choice opportunities for KM vendors…but they’re rare. Most knowledge bases are small and narrow in scope, and, given the comfort nearly everyone has now with Google and its brethren, search tends to suffice.

A wiki will, in general, satisfy the content management and search requirements. Try a few — the web site WikiMatrix lists 80-odd wiki platforms, many of which provide at least free trial access. What they lack, by design, is the authoring workflow engine.

Orthodox KM proposes that the workflow be thoughtful, carefully designed to ensure that all knowledge content is fully vetted for accuracy and style, and that the tool enforce or at least facilitate this process. So that’s a point of differentiation for the commercial problem resolution tools. (Actually, it might be a rather compelling point if the content and the process for generating it are subject to compliance audits.)

But, how much differentiation? Is it $250,000 worth? Or, conversely, would having to manage this workflow manually, without help from the tool, be okay if it saved $250,000 in license fee? Because, while some of the wiki platforms are marketed and priced like enterprise KM tools, there are quite capable little wiki platforms that can be had for as little as $50 a year. A few are actually free.

Could a service desk design a KM process that engineers around the limitations of wiki technology so that it actually meets its problem resolution needs? I suspect a growing number of them are going to try.