The Semantic Advantage

February 24, 2011

What’s up with Watson?

Filed under: products for semantic approach,semantic technology,Watson — Phil Murray @ 1:13 pm

The IBM/Watson Jeopardy! “challenge” — three days of Jeopardy! matches first aired on Feb. 14-16, 2011 — is a watershed event in the related worlds of Knowledge Representation (KR) and search technology. The match features IBM hardware, software, and data resources developed over seven years by a dedicated IBM team matching wits with two all-time Jeopardy! champions. Mainstream media are playing it up, too. (Get the IBM perspective on Watson at http://www-943.ibm.com/innovation/us/watson/.)

The result: A big win for Watson. And IBM. And potentially very big losses for those working in the fields associated with Knowledge Representation and information search.

The angst of the KR community is evident in the posts to the Ontolog forum immediately preceding and during the televised challenge. (See the forum archives at http://ontolog.cim3.net/forum/ontolog-forum/ for Feb. 9, 2011 and the following days.) A profession already in “We need to make a better case for our profession” mode received a major jolt from IBM’s tour-de-force demonstration of “human” skills on a popular game show.

Although Watson incorporates significant ideas from the KR and search communities — it was, after all, developed by experts from those communities — it’s the effectiveness of the statistical component that drives much of the uneasiness of the KR community. Watson relies heavily on such statistical search techniques as the co-occurrence of words in texts. Lots of texts.
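
For readers unfamiliar with the statistical side, here is a minimal sketch (mine, not IBM’s) of the kind of co-occurrence signal involved: counting how often pairs of words appear in the same passage across a corpus. Watson’s actual evidence-scoring pipeline is far more elaborate; this only illustrates the raw statistic.

```python
from collections import Counter
from itertools import combinations

# A toy corpus standing in for "lots of texts."
passages = [
    "henry viii founded the church of england",
    "the church of england split from rome under henry viii",
    "lake nicaragua is the largest lake in central america",
]

def cooccurrence_counts(passages):
    """Count how often each pair of distinct words appears in the same passage."""
    counts = Counter()
    for passage in passages:
        words = sorted(set(passage.split()))
        for pair in combinations(words, 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(passages)
# Words that frequently co-occur are treated as related; no "understanding" is required.
print(counts[("church", "henry")])   # 2
print(counts[("church", "lake")])    # 0
```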

By contrast, the KR community focuses more heavily on interpreting and representing the meaning of natural language — usually by building a model of language from the ground up: concepts assembled according to syntax. The results range from simple “taxonomies” that support advanced search in organizations to very large “computer ontologies” that can respond to open-ended natural-language queries and attempt to emulate human problem-solving. But none, so far, can lay claim to besting smart humans in a challenge most think of as uniquely human.

So major sales of new search engines in big business are going to come to a screeching halt until upper management figures out what happened. All they know now is that an IBM machine outperformed two really smart humans in the domain of common knowledge and made their current and planned investments in search technology look like losing bets. Budget-chomping losers at that.

Why Watson?

Did IBM invest substantial expertise and millions of dollars of computer hardware and software to create what one contributor to the Ontolog forum called a “toy”? Yes, it is a “toy” in the sense that it is designed to play a quiz show.

But oh what an impressive toy! And you know it’s an important toy precisely because the people who understand it best — the members of the KR community — are really queasy about it, devoting hundreds of posts — many of them very defensive — to this subject on the Ontolog forum. Ever notice how participants in live political debates get louder and interrupt more frequently when the weaknesses in their arguments are being exposed?

The good news is that these discussions have surfaced and explored the root goals and benefits of the KR field itself — often in language that makes those goals and benefits more accessible to the outside world than discussions on the fine points of semantic theory.

IBM’s end game, of course, is quite simple:

  1. Demonstrate that the path it took has been successful — especially relative to other solutions — and
  2. Make the buying public aware of that success.

And what could be a more perfect audience than diehard Jeopardy! watchers — millions of college-educated viewers every night, many of whom will influence buying decisions in business and government organizations? IBM consultants won’t have to explain what they’re talking about to non-technical decision makers. The decision makers will include more than a few Jeopardy! watchers. Even better, the mainstream media has been talking about the Watson challenge for days already, often misunderstanding and exaggerating the nature of Watson’s victory.

Score a big win for IBM. A really big win.

What does Watson do?

If you haven’t watched the three-day Jeopardy! event, you can find it in several places online. Beware of sites that charge for downloads.

The DeepQA/Watson project team leader, David Ferrucci, gives a very good explanation of how it works here: http://www-943.ibm.com/innovation/us/watson/watson-for-a-smarter-planet/building-a-jeopardy-champion/how-watson-works.html.

What Watson does not do

Watson is a brilliant achievement, both in terms of technology and marketing. But you need to take it all with a grain of salt. To begin with, the Jeopardy! categories chosen for this challenge have at least two significant constraints: No audio clues and no visual clues. Watson cannot “see” pictures or videos, and it responds only to electronically encoded text.

In theory, at least, those limitations could be overcome quite easily. We already have smartphone apps that will “listen” to a radio tune and tell you the name of that tune. Speech-recognition apps for smartphones and personal computers are remarkably good. Identifying the voice of a particular person seems plausible, too, if the detective shows are accurate. Facial recognition software and applications that identify other objects in static images are available now.

I’m not qualified to tell you how effective such applications are, but they seem impressive to me. And, just as Watson has extracted information from millions of texts for use during the show, there’s no reason to assume that its designers could not build structured descriptions of non-text resources prior to the show. Watson might, in fact, have a huge advantage over humans in establishing matches with such non-text objects … at least some of the time.

How the Jeopardy! format is an advantage to Watson

The Jeopardy! format itself imposes inherent constraints — most of which are advantageous to the Watson team. And the IBM Watson team fully understands that. They just don’t talk about it too much — perhaps because what it does do is so remarkable.

  1. The Jeopardy! clue team consciously limits the difficulty of each clue in several ways.
    • Some clues are harder than others, but most rely on “general knowledge.” Using its human experience, the clue team avoids clues that would be too difficult for the average smart person. Such constraints limit Watson’s advantage. Giving the value of pi to 10 places or listing all vice presidents of the US would be child’s play for Watson. When it comes to raw memory, Watson is going to win.
    • The clues rarely require analysis of complex conditions. After all, the object of the game is for humans to come up with the right question in a few seconds. The absence of more complex and subtle clues is generally an advantage for Watson.
    • The clues and questions fall within the cultural experience of Americans with a typical college education. Listing great Bollywood films would be easy for Watson but tough for most Americans. (That may change over time.)
  2. The response to most clues is a question that identifies a small set of concepts or entities — usually only one.
    • By “entity” I mean specific people, places, or things. [Who/What is] Henry VIII, Lake Nicaragua, and The Declaration of Independence are among the specific “questions” I have heard.
    • By “concept” I mean a class of things, whether concrete or abstract — like dogs, weaponry, human health, or happiness. I believe that if we took a statistical survey of Jeopardy! questions (the responses), we would find that the clue frequently consists of a list of things belonging to a class (definition by extension: a subset of the things in that class) rather than a definition by intension (a set of properties that define the class). I suspect that this also favors Watson in a substantial way; see the sketch after this list.
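
A hypothetical illustration of that distinction in code (the examples are mine, not drawn from actual clue data): a class defined by extension is just an enumerated set, while a class defined by intension is a property test over structured knowledge.

```python
# Definition by extension: enumerate the members of the class.
MONARCHS_NAMED_HENRY = {
    "Henry I", "Henry II", "Henry III", "Henry IV",
    "Henry V", "Henry VI", "Henry VII", "Henry VIII",
}

# Definition by intension: state the properties that make something a member.
def is_monarch_named_henry(person):
    """A made-up property test over structured data about a person."""
    return person.get("title") == "monarch" and person.get("name", "").startswith("Henry")

# Matching a clue against an extensional definition is a list-membership lookup,
# the kind of task that favors Watson's massive text-derived memory.
print("Henry VIII" in MONARCHS_NAMED_HENRY)                                # True

# Matching against an intensional definition requires structured knowledge about
# properties, which is much harder to extract reliably from raw text.
print(is_monarch_named_henry({"title": "monarch", "name": "Henry VIII"}))  # True
```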

So Ken Jennings and Brad Rutter took a thumping on national television because categories that might have favored humans at this time were eliminated, and because there are other significant constraints imposed by the “rules” of the game itself. The thumping could have been worse. And IBM knew that.

So is Watson besting humans at a human skill?

In its Jeopardy! challenge, is Watson besting humans at a human skill? That’s the picture often painted in the media:

IBM trumpets Watson as a machine that can rival a human’s ability to answer questions posed in natural human language.

Source: Computer finishes off human opponents on ‘Jeopardy!’ By Jason Hanna, CNN
February 17, 2011

Well, it really depends on what you mean by “answering questions.” Sometimes you are looking for the name of a British monarch; sometimes you are looking for slight changes in spelling that result in strange changes in meaning.

However, in most senses, what Watson’s designers have asked it to do is very simple when compared to what humans do when they answer questions. (See above, “How the Jeopardy! format is an advantage to Watson.”) Humans also do not ask random questions. (OK, your young children and some of your adult friends may do that, but those are different challenges.) In fact, your objective in asking a question is usually to carefully identify and frame the right question so that you improve your chances of getting the answers you want … in order to address a specific problem. Unless, of course, you are a quiz-show contestant or taking a fill-in-the-blanks final history exam.

Keep in mind that, as more than one contributor to the Ontolog forum has observed, Watson doesn’t “understand” its responses. It only knows that its responses are correct when Alex Trebek says so. And, unlike in most human exchanges of meaning, it has no goals or purposes in mind, so it doesn’t know what the next question should be.

In many senses, Watson is an advanced search engine — like Google. Once you understand the nature of the game, there’s a temptation to call the Jeopardy!/Watson match a cheap parlor trick. But it wasn’t so cheap, was it? Still, brilliant work by the Watson team. Clever, too. (That’s not a criticism.) They really understood the nature of the game.

Watson got an unexpected boost from Alex Trebek, too, as Doug Foxvog noted on the Ontolog forum. My wife and I are longtime Jeopardy! watchers. It seems to us that Alex and his “clue team” have become increasingly arbitrary in their acceptance of specific answers, whether for the correct phrasing of the question or for errors of fact. Some of their judgments are clearly wrong. That’s understandable. It’s the trend that irritates us, so we end up yelling at Alex. I guess we need to “get a life.”

Those are my abstract complaints. Looking at the multiple responses considered by Watson (shown on the bottom of the screen in the broadcast) gives you a gut feel for how little true “understanding” is involved. And you can be certain that the [types of] clues Watson responds to correctly are different from the types of clues humans respond to correctly. Statistically, there will be variance in the specific correct answers.

There’s more to be learned (by the general public, like me) about what actually happened by more careful analysis of the Jeopardy!/Watson challenge. But we need to let it go as a metaphor for computers outsmarting people.

Could Watson-like technology solve business problems?

Could Watson-like technology solve business problems? In some important ways, Yes. It could be customized to answer a variety of business-oriented questions with a high degree of confidence … and tell you how confident it was about the responses it provided. Applied to a narrow domain rather than the open-ended domain of common knowledge (as on Jeopardy!), Watson-like technology should have a high degree of confidence in most of its responses when retrieving information from a massive resource, and like a typical search engine, it should be able to tell you where it found those answers.
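
As a rough illustration of that idea (not IBM’s DeepQA pipeline, just a toy retriever over an invented, narrow document set), the following sketch returns its best response, a crude confidence score, and the source it found it in:

```python
import math
from collections import Counter

# A narrow, domain-specific corpus: invented internal policy documents.
documents = {
    "expenses.txt": "travel expenses must be submitted within 30 days of the trip",
    "security.txt": "passwords must be rotated every 90 days and never shared",
    "leave.txt": "employees accrue vacation leave at 1.5 days per month",
}

def similarity(query, doc):
    """Cosine similarity between bag-of-words vectors, used as a crude confidence proxy."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def answer(query):
    """Return the best-matching passage, a confidence score, and where it was found."""
    ranked = sorted(((similarity(query, text), name, text) for name, text in documents.items()),
                    reverse=True)
    confidence, source, text = ranked[0]
    return {"answer": text, "confidence": round(confidence, 2), "source": source}

print(answer("how often must passwords be rotated"))
```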

That’s truly valuable, especially when the retrieval problem is well understood. It might even qualify as a good return on investment, in spite of Peter Brown’s comment on the Ontolog forum:

That’s because “artificial intelligence” is neither. It is neither artificial – it requires massive human brainpower to lay down the main lines of the processors’ brute force attacks to any problem; and It is not intelligent – and I seriously worry that such a failed dismal experiment of the last century now re-emerges with respectable “semantic web” clothing.

Source: Posting to the Ontolog forum by Peter Brown, Re: [ontolog-forum] IBM Watson’s Final Jeopardy error “explanation”, 17-feb-2011, 9:27 am.

It won’t be cheap, at least initially. But that’s not the real problem. Watson team leader David Ferrucci himself brings up the medical/diagnostic possibilities. And who has the most money today, after all???!!!!

In the end, however, neither Watson nor Google nor the inevitable Watson-like imitators will do what we need most. Nor will the work of the KR community when it focuses solely on machine interpretation of natural language. Not by themselves.

Watson-like technologies also risk becoming the end itself — the beast that must be fed — just like the many current information technologies they are likely to replace. It will be a great tragedy if the KR community, the search community, and the organizations and individuals they serve assume that Watson-like approaches are the primary solution to today’s information-driven business problems.

But Watson-like technologies are an important complement to what we need most. As well as a brilliant achievement and a watershed event in technology.

March 13, 2010

Putting meaning to work

Filed under: knowledge work,Uncategorized — Phil Murray @ 3:30 pm

KMWorld [finally] published my article, “Putting meaning to work.” See http://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=61174&PageNum=1

It begins ….

Committing vast resources to the “fragmented and miscellaneous” aspect of our Internet-driven economy is a deer-in-the-headlights reaction to the superabundance of information. That reaction might be unavoidable, but it is also unfortunate, because information—and, in particular, unstructured content—is a surface characteristic of knowledge-based activities, not their essence. Focusing exclusively on new ways to handle or respond to the superabundance of information distracts us, ironically, from solving the most important problems of the Information Age.

Let me know what you think.

December 7, 2009

Resisting the hive mentality

Filed under: knowledge work — Phil Murray @ 6:53 pm

We certainly need better ways to find expertise in organizations, but we should carefully consider the implications of HiveMind and other technologies that look at the surface manifestations of behaviors rather than at the actors and activities themselves.

From the BinaryPlex website:

We’re building a product called HiveMind that helps you know what knowledge and expertise the people in your organization are demonstrating, without them needing to update a manual profiling system. Our philosophy is to manage information on behalf of people instead of adding to the flood. We call this “People Centric Software” [emphasis added].

NO, it’s not people-centric. It’s information-centric. It does what it does based on information, in the same way that you can learn things about bees and ants by converting their movements and interactions into information and interpreting that information.

But people aren’t bees or ants. The core problem, for both managers and individual knowledge workers, is that the knowledge-based organization quite literally does not know what its members are doing. It does not know (or have a record of) who is engaged in what activities with what tools. It does not have an accounting of the inputs and outputs of those activities. It does not track — except in formally organized projects or processes — who or what processes are the beneficiaries of those activities. This is an astonishing reality accepted as par for the course, a level of ignorance that would be considered grounds for immediate dismissal in a manufacturing environment.

An information-driven tool is a poor solution for this problem. Consider the following:

  • A solution like HiveMind replaces analysis of work (what people actually do on a daily basis) with guessing games. That doesn’t seem like a great organizational policy, especially when it is possible to know what they actually do. A well-conceived analysis of work activities does not have to be intrusive, time-consuming, or static. It can be helpful to individuals themselves, to managers, and to the organization.
  • The HiveMind solution asserts a top-down association between language (vocabulary) and skills or job roles. I’m sorry, but pairing a language-based solution — even one supported by well-designed ontologies — with static and/or highly conventional descriptions of work activities is going to produce only a marginal advantage. What’s more, many (maybe most) productive work activities occur at a finer level of granularity or specificity than “skill/expertise” or “job description.”
  • Like other applications that skim large amounts of information for certain kinds of facts or for consumer sentiment, an algorithmic analysis of information about skills is a “derivative” instrument. As we have seen in the marketplace, derivatives are often accorded the highest value, even though the information on which they are based is farthest from reality. And we know what happens when derivatives drive mind share and management. This is not a stretched metaphor. Meaning connected to reality is the source of value in knowledge-based organizations.
  • The worst possible situation is one in which a solution seems to make sense — especially when it grabs the imagination — but is actually deeply wrong-headed or distracting. I think this one falls into the latter category.
  • All practices and technologies that create, consume, or process information — especially the language we use to communicate meaning — ultimately have a deep impact on how we work. You have to work out the implications before you adopt those practices and technologies.

Do you really want to encourage a hive mentality? A hive perspective? Do you really believe we behave like bees? That our individual acts have value only when they are summed? [paranoia alert] I believe that just the opposite is true, but there are people who want us to believe that because they know how to steal some of that value from us.

November 15, 2009

An old favorite plus a new favorite = solution

Filed under: knowledge management,products for semantic approach — Phil Murray @ 3:03 pm

No rants about search engines this week. Instead, praise for a terrific desktop search engine — dtSearch — and ABC Amber SeaMonkey Converter, one of many converters offered by Yernar Shambayev’s Process Text Group.

Online searches lead mostly to … more online searches instead of to reusable value. But we can’t live without them, and I have to admit that Google and Yahoo! are steadily improving the effectiveness of their products. However, sometimes our needs are more narrowly defined than locating something in all the world’s information.

When I’m building a network of knowledge using the approach I have designed, I need to know whether the idea or concept I want to add to the network is the same as — or similar to — other ideas or concepts already in the network. Let me stop for a minute and define idea as an observation about reality — the equivalent in meaning to a simple sentence in natural language. Contrast that with concept — the essential name of a thing, whether material or imagined. Concept appears to be the preferred terminology for practitioners who construct taxonomies (or facets), thesauri, and ontologies that organize such entities into larger structures. I won’t go into the fine points here.

I have not built a rich ontology of the concepts in the ad hoc spaces I discuss, and I haven’t found any affordable tools that allow me to look for similarities among ideas. So I resort to a very simple practice: I maintain a directory in which each idea and each concept occupies a separate file. The file contains the name of the concept or idea, explanations of those items, and text examples that contain instances of those items. A full text search of that directory using the new concept or idea as the query retrieves the search engine’s best guess at files that contain similarities with concepts and ideas already in the network of knowledge.

Or not. Because most search engines are primarily string-matching tools, and the files retrieved may not be what I want.

dtSearch is better than that. In addition to the features you might expect in a good desktop or enterprise search engine — including stemming, wild cards, fuzzy search, proximity search, and Boolean operators — you have the option of looking for files that contain synonyms based on Princeton’s WordNet — a kind of semantic network that anyone can use. So even if you can’t keep track of synonyms, the dtSearch tools will. You can add your own synonyms, too, within dtSearch.
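
For readers who want a feel for the mechanics, here is a rough sketch of a similar workflow using the NLTK interface to Princeton’s WordNet (my code, not dtSearch’s): expand a query with synonyms, then rank a one-file-per-concept directory by shared terms. The directory name and query below are hypothetical.

```python
# Requires: pip install nltk, then a one-time nltk.download("wordnet").
from pathlib import Path
from nltk.corpus import wordnet

def expand_terms(text):
    """Return the query terms plus their WordNet synonyms (lemma names across synsets)."""
    terms = set(text.lower().split())
    for term in list(terms):
        for synset in wordnet.synsets(term):
            for lemma in synset.lemma_names():
                terms.add(lemma.replace("_", " ").lower())
    return terms

def rank_concept_files(network_dir, new_item):
    """Rank concept/idea files by how many synonym-expanded terms they share with a new item."""
    query_terms = expand_terms(new_item)
    ranked = []
    for path in Path(network_dir).glob("*.txt"):
        file_terms = set(path.read_text(encoding="utf-8", errors="ignore").lower().split())
        shared = query_terms & file_terms
        if shared:
            ranked.append((len(shared), path.name))
    return sorted(ranked, reverse=True)

# Hypothetical usage: check a new idea against the files already in the network of knowledge.
print(rank_concept_files("network_of_knowledge", "meaning is the source of value"))
```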

Great stuff. Some consider the dtSearch interface dated, but I think it’s highly functional. Real easy to set up separate named indexes for different sets of directories, too. (Excuse me. I’m dating myself. We call them “folders” now, don’t we?)

I also use dtSearch for a variety of other search tasks — including finding emails from the thousands I have captured in SeaMonkey. Making those emails accessible in a reasonable (and, ideally, consistent) way has been virtually impossible. The native SeaMonkey search features — like those in other email clients I have encountered — are simply inadequate.

And even if those email search features were superb, they wouldn’t solve the problem, because SeaMonkey stores each mailbox as one big file. I do mean big for some of my mailboxes. So finding a huge file is almost meaningless. Big files will satisfy many queries unless you use proximity searches and other tricks, and even if one mailbox does contain the information you want, it may take a long time to find the right spot within that file. And you have to go through the same process if you want to execute that query again.

ABC Amber SeaMonkey Converter solves that problem by allowing me to split SeaMonkey mailboxes into separate HTML files. (I could use ABC Amber options to convert them to text or a couple dozen other output formats, but I prefer HTML for a variety of reasons.) When I use a dtSearch query against the directories containing those exported HTML emails, I get a highly relevant selection of small files — exactly what I want.
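
The underlying trick is simple enough to sketch. SeaMonkey stores each mail folder in mbox format, so something along these lines (a rough sketch of the idea, not ABC Amber’s actual code; the paths are hypothetical) splits a mailbox into one small HTML file per message:

```python
import html
import mailbox
from pathlib import Path

def split_mbox_to_html(mbox_path, out_dir):
    """Write each message in an mbox mail folder to its own small HTML file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, msg in enumerate(mailbox.mbox(mbox_path)):
        body = b""
        if msg.is_multipart():
            # Take the first plain-text part; a real converter handles attachments, encodings, etc.
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True) or b""
                    break
        else:
            body = msg.get_payload(decode=True) or b""
        text = body.decode("utf-8", errors="replace")
        subject = html.escape(msg.get("Subject", "(no subject)"))
        page = "<html><body><h3>{}</h3><pre>{}</pre></body></html>".format(subject, html.escape(text))
        (out / "message_{:05d}.html".format(i)).write_text(page, encoding="utf-8")

# Hypothetical paths: point this at a SeaMonkey mail folder file.
split_mbox_to_html("Mail/Local Folders/Inbox", "exported_emails")
```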

Very easy, too. When I ran ABC Amber the first time, it found the SeaMonkey mailboxes automatically. The emails in each folder were displayed in a list, and I could easily select as many or as few as I wished. Oh, and I should mention that ABC Amber promotional pages stress the ability of the converter to output a single, integrated HTML file from a mailbox. That’s a plus for many people, but not what I want.

I also tested the mailbox-to-TreePad converter. (You just click a different output option in ABC Amber.) The results were flawless, and the TreePad outliner let me explore the email content by date. Cool.

One caution: As of this writing, it appears that SeaMonkey has changed where it places email folders. So folders I created with SeaMonkey 2.0 — and any new email since the changeover — did not show up in the ABC Amber converter, but I was able to redirect the program to the new location using an ABC Amber option. I have advised the Process Text people about this.

UPDATE (16-nov-2009): The ProcessText people have already updated the converter. It now finds the SeaMonkey 2.0 mailboxes automatically. That was quick!

I’ve been using dtSearch for nearly a decade now. It’s still worth the money — about $200 for an individual license. Adding ABC Amber SeaMonkey Converter (about $20) to my set of tools will really make a difference.

September 16, 2009

Using circles and arrows

Many “semantic” practices and applications — including “brainstorming” and construction of computer ontologies — involve the use of (a) circles or other symbols (“nodes”) to represent concepts or ideas and (b) arrows (connecting arcs or “edges”) to represent the relationships among the concepts or ideas.

(Tim Berners-Lee uses the phrase “circles and arrows” in at least one of his papers: “The Semantic Web starts as a simple circles-and-arrows diagram relating things, which slowly expands and coalesces to become global and vast.” in “The Semantic Web lifts off” by Tim Berners-Lee and Eric Miller. ERCIM News, No. 51, October 2002. http://www.ercim.org/publication/Ercim_News/enw51/berners-lee.html. His original vision was of metadata for documents.)

In some cases, the graphic representation is not the tool itself but a method of helping users visualize and/or manipulate complex, abstract data that is difficult for the average human to understand quickly — for example, RDF expressed in XML.
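
A tiny hedged example with the rdflib library (assuming that library is available; the URIs and names are made up) shows the gap: a two-node, one-arrow diagram is trivial to read, while the RDF/XML it expands into is not.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

g = Graph()
# One "circle" (node) per concept or entity, one "arrow" (edge) per relationship.
g.add((EX.Watson, EX.developedBy, EX.IBM))
g.add((EX.Watson, EX.playsGame, Literal("Jeopardy!")))

# The same simple circles-and-arrows diagram, serialized as RDF/XML:
# the form that is hard for the average human to scan quickly.
print(g.serialize(format="xml"))
```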

Mapping arguments on a whiteboard in support of decision making is a common practice in many meetings. (But integrating those representations into subsequent discussions is almost always a challenge.)

We need a much better and more widely usable set of tools for such purposes, but even applying today’s limited tools is useful in its own right. One thing you definitely begin to understand as you try to deconstruct your arguments into their underlying meaning — especially when using graphical tools for that purpose — is that the process itself helps you get to that meaning.

The process is useful in exposing what is tangential, peripheral or simply irrelevant. You tend to create and refine elemental, focused, unambiguous assertions that can be verified as true or debunked.

You certainly expose conditions and constraints that apply to those assertions. Sweeping generalizations quickly become far less general … but often more useful. And you find that most of what you have written is not part of the core meaning that you want to represent and transfer.

That process, however, is still not easy. And you need to have a set of guidelines to keep yourself on track.

I will explore some of the tools and issues in this area in future posts.

August 10, 2009

The problem of situating ideas

Filed under: semantic technology,visualization of semantic information — Phil Murray @ 9:06 pm

I have papers scattered across my office. Some are printed documents filled with marginalia. Some started blank and are now filled with isolated observations. Some of those observations are in the form of sentences written in small blocks at different angles on the page or enclosed in circles or rectangles linked to other blocks by curved and straight lines. Some are post-it notes inserted into books I’m reading.

I have a stack of spiral-bound notebooks I use for taking notes at meetings. My notes and comments are liberally interspersed among those notes. Some pages are filled with notes and comments by themselves.

My computer files contain notes in at least 10 different formats (right now: a system for building help files, five (maybe six) different PIMs, outlines made with TreePad, files in Open Office Writer, HTML files created with SeaMonkey, emails and HTML files exported from email, and text files created in Notepad++). Some of the products I’m reviewing contain notes and ideas locked in those particular tools.

Other ideas are scattered across the Web in wikis, blogs, and several web sites.

Let’s face it. I have a problem. Those ideas are not “situated.” They have little or no context. I don’t know — or at least I cannot demonstrate — how they are connected and where they overlap or duplicate each other. And that’s a problem, because I certainly don’t remember most of them.

I have no way around one of the roadblocks to improving this situation: Sometimes I can’t easily record those ideas on a computer. It’s just inconvenient.

At other times, using the computer just seems inappropriate. (Yesterday, I wrote six pages of notes on paper about using Ron C. de Weijze’s Personal Memory Manager (PMM) — a tool for capturing and integrating ideas on your computer! — while sitting at my computer … with PMM open.)

I do go back periodically and try to capture some of the stuff on paper, putting large X’s through notes that I have transcribed. That helps a bit, but it doesn’t connect them. It does little to make their meaning explicit or trace their impact on other ideas. It does nothing to aid in finding the other contexts in which this idea may have occurred.

I have long railed against trapping ideas in formats that make them effectively not re-usable. That’s a problem with most concept-mapping tools and PIMs — even those that support export to Web formats. I really thought I could solve my problem with David Karger’s Haystack or the NEPOMUK semantic desktop, now being commercialized by (or as) Gnowsis, but I found them clumsy, incomplete, or lacking support.

But the problem of situating those ideas has become so great — and the value of connecting and superimposing structure on those ideas has become so obvious — that I am giving up (for at least a while) my insistence on making everything (including relationships) convertible to RDF and XHTML. Or DITA.

So I’m going to try to live in the proprietary world of Personal Memory Manager for a while. “Try” is the operative word. And I will do so within a set of constraints, including continuing to create content in HTML — XHTML as much as possible — and referencing those files in PMM, rather than embedding them solely in PMM.

UPDATE: KMWorld [finally] published my article, “Putting meaning to work.” See http://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=61174&PageNum=3.

July 9, 2009

What is the Semantic Web all about?

Filed under: semantic technology — Phil Murray @ 12:22 pm

Best quick read on the Semantic Web I’ve seen in a long time — by James Hendler, an expert and innovator in computer ontologies himself:

What is the Semantic Web all about?

Hendler was co-author of the Scientific American article that was primarily responsible for bringing Tim Berners-Lee’s ideas to a much broader audience, so he brings much more authority to his observations than, well, just about everyone else except Tim B-L himself.

Among the best information points and assessments in the brief post:

  • A brief description of his own work on Simple HTML Ontology Extensions, which preceded formalization of the Semantic Web.
  • A very brief history of the Semantic Web itself, including its DARPA funding.
  • His distinctions among “linked data,” “Web 3.0,” and the “Semantic Web.”

Required reading if you want to talk sensibly about the Semantic Web and semantic technologies and practices in general.

This is also a good time for me to remind people of another item on my list of required readings: Barry Smith and Chris Welty, “Ontology: Towards a New Synthesis.” Available at http://www.cs.vassar.edu/~weltyc/papers/fois-intro.pdf

Also brief and clear.

May 27, 2009

How to embarrass and anger workers: Tips from HR

Filed under: odds and ends — Phil Murray @ 10:24 pm

Every once in a while, people who should know better make it seem that stereotypes are true. This Human Resources expert thought she was helping. But she simply ended up embarrassing all HR people:

What’s in a Name? Strategies for Reviving a Culture During Turbulent Times

… which recommends that you can improve flagging morale in a company by giving people titles like:

Manager of Customer Delight
Chief People Officer
Director of First Impressions

Commenters almost unanimously agreed that this wasn’t just unproductive; it was counterproductive.

For example: “Oh gawd no…. treating people like children and giving them stupid titles doesn’t invigorate anyone it only embarrasses them. Hey, why not give them a funny hat with a propeller that they can walk around with showing they they’ve been promoted to “chief dork”. Ever see all the eyes roll and here the groans at team building events when the “fun” activity is really just dumb and childish?  …  Stop giving me candy during meetings whenever I answer one of your questions. I’m not a child and I’m not a dog either.”

Nailed it.

My eyes rolled fully back in my head whenever I heard managers talk about how they wanted to “empower” their workers. I haven’t heard that in a while.

Add the “silly job title” strategy to the long list of things that are sure indicators that a company is about to crumble.

Other telltale tactics and policies:

  • You get 10 times as many trinkets (T-shirts, toys, gadgets) as raises.
  • The Board of Directors insists that you hire a VP from Digital (DEC) or Xerox in order to “take the company to the next level.” (OK, I’m dating myself with that one.)
  • Management announces that they are building (or moving to) a huge, gleaming new site … before they have a major customer.
  • The CTO of your small company insists that it can create a competitive market advantage by building a slightly better product in a market in which such products are, in effect, commodities.
  • The marketing team is based on a different continent.

All of those things — and a few others — happened where I worked a few years ago. And tens of millions of dollars in venture capital [predictably] went down the drain.

April 10, 2009

An economy of meaning. Or, why “semantics” is ugly but important.

Filed under: semantic technology — Phil Murray @ 8:32 pm

An economy of meaning? Yep. And I mean that explicitly in the sense of economic competitiveness and socio-economic solutions that pay direct attention to meaning. Not information.

Of course, we all rely on information. Always will. But we can’t rely on information the way we once did. We might be proud of our bookshelves or our long lists of browser bookmarks, but they’re decreasingly effective in helping us solve our problems. It’s not our fault. It’s the fault of information itself. There’s just too much to handle and apply.

That is, in part, the message of the Semantic Web. And my use of the phrase an economy of meaning is closest perhaps to Ilkka Tuomi’s use of the phrase Towards the New Economy of Meaning in his presentation, “Networks of Innovation”, which focuses on new socially-constructed forms of innovation.

But a “semantic perspective” is much more than that. It’s more than just innovation. It’s not just about controlling costs. It’s about survival. It is the reason, for example, that progress in providing healthcare has ground to a halt, even slipping backward at times, in spite of rapid advances in medical knowledge and treatment technology and in spite of massive financial resources.

Together with former associates at the Center for Semantic Excellence (CSE), I have been evaluating the causes of the problems in healthcare in the United States for many months — from multiple perspectives. But it got personal recently when my wife underwent hip-replacement surgery.

  • In the pre-op phase, we experienced a surgical team that operated with both efficiency and humanity … while surrounded by dozens of different technologies. (I misplaced my wife’s cane because I tried to put it someplace in that maze of technologies where people wouldn’t trip over it.) Lots of direct personal contact.
  • On the recovery floor, it was often quite different. The hallways were crowded with what my wife (a medical professional herself) refers to as “COWs” — Computers on Wheels — and other technologies. The staff spent a lot of time at those laptops. Caregivers are, of necessity these days, at the beck and call of what CSE member Tom Bigda-Peyton pointedly refers to as “people not in the room.”

    My personal impression was that the less-competent and less-caring members of that staff were far more concerned with what those screens demanded of them than with what their patients (and their families) asked of them. It had little to do with inadequate staffing. There seemed to be an abundance of personnel. That can be deceiving, of course, but both minor requests (a blanket for the chilled elderly patient in the next bed) and important treatment concerns went unaddressed for far longer than seemed reasonable. The staff ignored our surgeon’s standard practices for pain management.

  • At home, the first 45-minute visit by a physical therapist was occupied almost exclusively by notetaking … on yet another laptop. Not much therapy. A technician arrived to draw blood that would be used to check levels of Coumadin, a blood-thinning agent vital to safe recovery in such invasive surgical procedures. The results were available in a few hours to the doctor’s staff, but they seemed in no hurry to report those results — and deliver any changes in dosage — even with a long weekend coming up. In fact, the staff seemed rather clueless about the importance of correct and timely adjustments of the medication.

Nothing went wrong. My wife is recovering much more rapidly than expected. The surgeon did his job very well. And neither my wife nor I would want to go back to pre-technology medicine.

What’s troubling here is less the current state of care than the warning signs of problems to come: the demands of capturing information, demands that distract from properly interpreting that information and delivering the service itself; the growing role of intermediaries who have no stake in how well that service is provided (in particular, those “people not in the room”); and the continuing growth in costs, even as technology is applied successfully to specific requirements.

Those troubles are hardly limited to healthcare. And the only way to solve them is to understand how we have created, transferred, and applied meaning in these situations in the past … and how we must do so in a world now dominated by information.

March 30, 2009

New publications

Filed under: semantic technology — Phil Murray @ 8:30 pm

I’m finally adding some of the things I’ve written over the past year to my web site. The following two papers are available at http://www.semanticadvantage.com/id22.html

  • The Idiot Savant of Search
  • “Sarkozy bites Obama child”: A commentary on the benefits and distractions of the Semantic Web

Enjoy. Comments are very welcome.
