The Semantic Advantage

February 24, 2011

What’s up with Watson?

Filed under: products for semantic approach,semantic technology,Watson — Phil Murray @ 1:13 pm

The IBM/Watson Jeopardy! “challenge” — three days of Jeopardy! matches first aired on Feb. 14-16, 2011 — is a watershed event in the related worlds of Knowledge Representation (KR) and search technology. The match features IBM hardware, software, and data resources developed over seven years by a dedicated IBM team matching wits with two of Jeopardy!’s all-time champions. Mainstream media are playing it up, too. (Get the IBM perspective on Watson at http://www-943.ibm.com/innovation/us/watson/.)

The result: A big win for Watson. And IBM. And potentially very big losses for those working in the fields associated with Knowledge Representation and information search.

The angst of the KR community is evident in the posts to the Ontolog forum immediately preceding and during the televised challenge. (See the forum archives at http://ontolog.cim3.net/forum/ontolog-forum/ for Feb. 9, 2011 and the following days.) A profession already in “we need to make a better case for our profession” mode received a major jolt from IBM’s tour-de-force demonstration of “human” skills on a popular game show.

Although Watson incorporates significant ideas from the KR and search communities — it was, after all, developed by experts from those communities — it’s the effectiveness of the statistical component that drives much of the uneasiness of the KR community. Watson relies heavily on such statistical search techniques as the co-occurrence of words in texts. Lots of texts.
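
Purely as an illustration of what co-occurrence-based scoring means, the toy sketch below ranks candidate answers by how much vocabulary they share with documents that mention them. The corpus, clue, and scoring are invented for the example; IBM's DeepQA pipeline is vastly more sophisticated.

```python
# Toy illustration of co-occurrence scoring; not IBM's DeepQA pipeline.
# The corpus, clue, and candidate answers are invented for the example.
import re

corpus = [
    "Henry VIII broke with Rome and founded the Church of England.",
    "Lake Nicaragua is the largest freshwater lake in Central America.",
    "The Declaration of Independence was adopted in 1776 in Philadelphia.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def cooccurrence_score(clue, candidate, docs):
    """Count clue words that co-occur with the candidate across the corpus."""
    clue_words = tokens(clue)
    score = 0
    for doc in docs:
        doc_words = tokens(doc)
        if tokens(candidate) <= doc_words:          # the candidate is mentioned here
            score += len(clue_words & doc_words)    # shared vocabulary with the clue
    return score

clue = "This document was adopted in Philadelphia in 1776."
candidates = ["Henry VIII", "Lake Nicaragua", "The Declaration of Independence"]
print(max(candidates, key=lambda c: cooccurrence_score(clue, c, corpus)))
# -> The Declaration of Independence
```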

By contrast, the KR community focuses more heavily on interpreting and representing the meaning of natural language — usually by building a model of language from the ground up: concepts assembled according to syntax. The results range from simple “taxonomies” that support advanced search in organizations to very large “computer ontologies” that can respond to open-ended natural-language queries and attempt to emulate human problem-solving. But none, so far, can lay claim to besting smart humans in a challenge most think of as uniquely human.

So major sales of new search engines in big business are going to come to a screeching halt until upper management figures out what happened. All they know now is that an IBM machine outperformed two really smart humans in the domain of common knowledge and made their current and planned investments in search technology look like losing bets. Budget-chomping losers at that.

Why Watson?

Did IBM invest substantial expertise and millions of dollars of computer hardware and software to create what one contributor to the Ontolog forum called a “toy”? Yes, it is a “toy” in the sense that it is designed to play a quiz show.

But oh what an impressive toy! And you know it’s an important toy precisely because the people who understand it best — the members of the KR community — are really queasy about it, devoting hundreds of posts — many of them very defensive — to this subject on the Ontolog forum. Ever notice how participants in live political debates get louder and interrupt more frequently when the weaknesses in their arguments are being exposed?

The good news is that these discussions have surfaced and explored the root goals and benefits of the KR field itself — often in language that makes those goals and benefits more accessible to the outside world than discussions on the fine points of semantic theory.

IBM’s end game, of course, is quite simple:

  1. Demonstrate that the path it took has been successful — especially relative to other solutions — and
  2. Make the buying public aware of that success.

And what could be a more perfect audience than diehard Jeopardy! watchers — millions of college-educated viewers every night, many of whom will influence buying decisions in business and government organizations. IBM consultants won’t have to explain what they’re talking about to non-technical decision makers. The decision makers will include more than a few Jeopardy! watchers. Even better, the mainstream media have been talking about the Watson challenge for days already, often misunderstanding and exaggerating the nature of Watson’s victory.

Score a big win for IBM. A really big win.

What does Watson do?

If you haven’t watched the three-day Jeopardy! event, you can find it in several places online. Beware of sites that charge for downloads.

The DeepQA/Watson project team leader, David Ferrucci, gives a very good explanation of how it works here: http://www-943.ibm.com/innovation/us/watson/watson-for-a-smarter-planet/building-a-jeopardy-champion/how-watson-works.html.

What Watson does not do

Watson is a brilliant achievement, both in terms of technology and marketing. But you need to take it all with a grain of salt. To begin with, the Jeopardy! categories chosen for this challenge have at least two significant constraints: No audio clues and no visual clues. Watson cannot “see” pictures or videos, and it responds only to electronically encoded text.

In theory, at least, those limitations could be overcome quite easily. We already have smartphone apps that will “listen” to a radio tune and tell you the name of that tune. Speech-recognition apps for smartphones and personal computers are remarkably good. Identifying the voice of a particular person seems plausible, too, if the detective shows are accurate. Facial recognition software and applications that identify other objects in static images are available now.

I’m not qualified to tell you how effective such applications are, but they seem impressive to me. And just as Watson extracted information from millions of texts for use during the show, there’s no reason to assume that its designers could not build structured descriptions of non-text resources prior to the show. Watson might, in fact, have a huge advantage over humans in establishing matches with such non-text objects … at least some of the time.

How the Jeopardy! format is an advantage to Watson

The Jeopardy! format itself imposes inherent constraints — most of which are advantageous to the Watson team. And the IBM Watson team fully understands that. They just don’t talk about it too much — perhaps because what it does do is so remarkable.

  1. The Jeopardy! clue team consciously limits the difficulty of each clue in several ways.
    • Some clues are harder than others, but most rely on “general knowledge.” Using its human experience, the clue team avoids clues that would be too difficult for the average smart person. Such constraints limit Watson’s advantage. Giving the value of pi to 10 places or listing all vice presidents of the US would be child’s play for Watson. When it comes to raw memory, Watson is going to win.
    • The clues rarely require analysis of complex conditions. After all, the object of the game is for humans to come up with the right question in a few seconds. The absence of more complex and subtle clues is generally an advantage for Watson.
    • The clues and questions fall within the cultural experience of Americans with a typical college education. Listing great Bollywood films would be easy for Watson but tough for most Americans. (That may change over time.)
  2. The response to most clues is a question that identifies a small set of concepts or entities — usually only one.
    • By “entity” I mean specific people, places, or things. [Who/What is] Henry VIII, Lake Nicaragua, and The Declaration of Independence are among the specific “questions” I have heard.
    • By “concept” I mean a class of things, whether concrete or abstract — like dogs, weaponry, human health, or happiness. I believe that if we took a statistical survey of Jeopardy! questions (the responses), we would find that the clue frequently consists of lists of things belonging to a class (definition by extension — a subset of the things in that class) rather than definition by intension (a set of properties that define a class). I suspect that this also favors Watson in a substantial way. (A rough illustration of the distinction follows this list.)

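Here is that rough illustration: a minimal Python sketch of extension vs. intension, with the class, members, and property tests invented purely for this example (Jeopardy! clues don't come this tidy).

```python
# Illustrative only: the class "lake" defined two different ways.
# Members and property names are invented for the example.

# Definition by extension: enumerate (some of) the members of the class.
lakes_by_extension = {"Lake Nicaragua", "Lake Superior", "Lake Victoria"}

def is_lake_extensional(name):
    return name in lakes_by_extension

# Definition by intension: state the properties that make something a member.
def is_lake_intensional(thing):
    return (thing.get("body_of_water", False)
            and thing.get("inland", False)
            and not thing.get("flowing", False))

print(is_lake_extensional("Lake Nicaragua"))    # True: it is on the list
print(is_lake_intensional({"body_of_water": True, "inland": True}))    # True: it satisfies the properties
```

A clue that simply lists members plays to the extensional side, which is exactly the kind of lookup a statistical system handles well.
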
So Ken Jennings and Brad Rutter took a thumping on national television because categories that might have favored humans at this time were eliminated, and because there are other significant constraints imposed by the “rules” of the game itself. The thumping could have been worse. And IBM knew that.

So is Watson besting humans at a human skill?

In its Jeopardy! challenge, is Watson besting humans at a human skill? That’s the picture often painted in the media:

IBM trumpets Watson as a machine that can rival a human’s ability to answer questions posed in natural human language.

Source: Computer finishes off human opponents on ‘Jeopardy!’ By Jason Hanna, CNN
February 17, 2011

Well, it really depends on what you mean by “answering questions.” Sometimes answering a question just means producing the name of a British monarch, or spotting slight changes in spelling that result in strange changes in meaning.

However, in most senses, what Watson’s designers have asked it to do is very simple when compared to what humans do when they answer questions. (See above, “How the Jeopardy format is an advantage to Watson.”)  Humans also do not ask random questions. (OK, your young children and some of your adult friends may do that, but those are different challenges.) In fact, your objective in asking a question is usually to carefully identify and frame the right question so that you improve your chances to get the answers you want … in order to address a specific problem. Unless, of course, you are a quiz-show contestant or taking a fill-in-the-blanks final history exam.

Keep in mind that, as more than one contributor to the Ontolog forum has observed, Watson doesn’t “understand” its responses. It only knows that its responses are correct when Alex Trebek says so. And, unlike in most human exchanges of meaning, it has no goals or purposes in mind, so it doesn’t know what the next question should be.

In many senses, Watson is an advanced search engine — like Google. Once you understand the nature of the game, there’s a temptation to call the Jeopardy!/Watson match a cheap parlor trick. But it wasn’t so cheap, was it? Still, brilliant work by the Watson team. Clever, too. (That’s not a criticism.) They really understood the nature of the game.

Watson got an unexpected boost from Alex Trebek, too, as Doug Foxvog noted on the Ontolog forum. My wife and I are longtime Jeopardy! watchers. It seems to us that Alex and his “clue team” have become increasingly arbitrary in their acceptance of specific answers, whether for the correct phrasing of the question or for error in facts. Some of their judgments are clearly wrong. That’s understandable. It’s the trend that irritates us, so we end up yelling at Alex. I guess we need to “get a life.”

Those are my abstract complaints. Looking at the multiple responses considered by Watson (shown on the bottom of the screen in the broadcast) gives you a gut feel for how little true “understanding” is involved. And you can be certain that the [types of] clues Watson responds to correctly are different from the types of clues humans respond to correctly. Statistically, the two will get different specific answers right.

There’s more to be learned (by the general public, like me) about what actually happened by more careful analysis of the Jeopardy!/Watson challenge. But we need to let it go as a metaphor for computers outsmarting people.

Could Watson-like technology solve business problems?

Could Watson-like technology solve business problems? In some important ways, Yes. It could be customized to answer a variety of business-oriented questions with a high degree of confidence … and tell you how confident it was about the responses it provided. Applied to a narrow domain rather than the open-ended domain of common knowledge (as on Jeopardy!), Watson-like technology should have a high degree of confidence in most of its responses when retrieving information from a massive resource, and like a typical search engine, it should be able to tell you where it found those answers.
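
As a very rough sketch of what “an answer plus a confidence score plus a source” might look like in a narrow domain (the documents, scoring, and threshold below are invented; this is nothing like Watson's actual DeepQA architecture):

```python
# Illustrative sketch: return the best answer, a confidence score, and its source.
# Documents, scoring, and threshold are invented; this is not DeepQA.
import re

documents = {
    "policy-manual.txt": "Purchase orders over $50,000 require CFO approval.",
    "travel-guide.txt": "Travel expenses must be filed within 30 days of the trip.",
}

def answer_with_confidence(question, docs, threshold=0.3):
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    best_source, best_score = None, 0.0
    for name, text in docs.items():
        d_words = set(re.findall(r"[a-z]+", text.lower()))
        score = len(q_words & d_words) / max(len(q_words), 1)  # crude word overlap
        if score > best_score:
            best_source, best_score = name, score
    if best_score < threshold:
        return None, best_score, None          # not confident enough to answer
    return docs[best_source], best_score, best_source

answer, confidence, source = answer_with_confidence(
    "Who must approve purchase orders over $50,000?", documents)
print(answer, round(confidence, 2), source)
```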

That’s truly valuable, especially when the retrieval problem is well understood. It might even qualify as a good return on investment, in spite of Peter Brown’s comment on the Ontolog forum:

That’s because “artificial intelligence” is neither. It is neither artificial – it requires massive human brainpower to lay down the main lines of the processors’ brute force attacks to any problem; and It is not intelligent – and I seriously worry that such a failed dismal experiment of the last century now re-emerges with respectable “semantic web” clothing.

Source: Posting to the Ontolog forum by Peter Brown, Re: [ontolog-forum] IBM Watson’s Final Jeopardy error “explanation”, 17-Feb-2011, 9:27 am.

It won’t be cheap, at least initially. But that’s not the real problem. Watson team leader David Ferrucci himself brings up the medical/diagnostic possibilities. And who has the most money today, after all???!!!!

In the end, however, neither Watson nor Google nor the inevitable Watson-like imitators will do what we need most. Nor will the work of the KR community when it focuses solely on machine interpretation of natural language. Not by themselves.

Watson-like technologies also risk becoming the end itself — the beast that must be fed — just like the many current information technologies they are likely to replace. It will be a great tragedy if the KR community, the search community, and the organizations and individuals they serve assume that Watson-like approaches are the primary solution to today’s information-driven business problems.

But Watson-like technologies are an important complement to what we need most. As well as a brilliant achievement and a watershed event in technology.

November 15, 2009

An old favorite plus a new favorite = solution

Filed under: knowledge management,products for semantic approach — Phil Murray @ 3:03 pm

No rants about search engines this week. Instead, praise for a terrific desktop search engine — dtSearch — and ABC Amber SeaMonkey Converter, one of many converters offered by Yernar Shambayev’s Process Text Group.

Online searches lead mostly to … more online searches instead of to reusable value. But we can’t live without them, and I have to admit that Google and Yahoo! are steadily improving the effectiveness of their products. However, sometimes our needs are more narrowly defined than locating something in all the world’s information.

When I’m building a network of knowledge using the approach I have designed, I need to know whether the idea or concept I want to add to the network is the same as — or similar to — other ideas or concepts already in the network. Let me stop for a minute and define idea as an observation about reality — the equivalent in meaning to a simple sentence in natural language. Contrast that with concept — the essential name of a thing, whether material or imagined. Concept appears to be the preferred terminology for practitioners who construct taxonomies (or facets), thesauri, and ontologies that organize such entities into larger structures. I won’t go into the fine points here.

I have not built a rich ontology of the concepts in the ad hoc spaces I discuss, and I haven’t found any affordable tools that allow me to look for similarities among ideas. So I resort to a very simple practice: I maintain a directory in which each idea and each concept occupies a separate file. The file contains the name of the concept or idea, explanations of those items, and text examples that contain instances of those items. A full text search of that directory using the new concept or idea as the query retrieves the search engine’s best guess at files that contain similarities with concepts and ideas already in the network of knowledge.
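
A bare-bones approximation of that practice, with the folder name and scoring invented for illustration (a real desktop engine such as dtSearch does far more):

```python
# Minimal sketch of "one idea per file, then search for look-alikes".
# Folder name and scoring are invented; a real engine does far more.
import pathlib
import re

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def similar_ideas(new_idea, folder="ideas"):
    """Rank existing idea/concept files by word overlap with a new idea."""
    new_words = words(new_idea)
    scores = []
    for path in pathlib.Path(folder).glob("*.txt"):
        overlap = len(new_words & words(path.read_text(encoding="utf-8")))
        if overlap:
            scores.append((overlap, path.name))
    return sorted(scores, reverse=True)

# Usage: list the files most likely to describe the same or a similar idea.
for score, name in similar_ideas("Search engines mostly match strings, not meaning")[:5]:
    print(score, name)
```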

Or not. Because most search engines are primarily string-matching tools, and the files retrieved may not be what I want.

dtSearch is better than that. In addition to the features you might expect in a good desktop or enterprise search engine — including stemming, wild cards, fuzzy search, proximity search, and Boolean operators — you have the option of looking for files that contain synonyms based on Princeton’s WordNet — a kind of semantic network that anyone can use. So even if you can’t keep track of synonyms, the dtSearch tools will. You can add your own synonyms, too, within dtSearch.
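
dtSearch builds its WordNet support in, so you never see the mechanics. Purely as an illustration of the idea of synonym expansion (this is not dtSearch's API), a query term can be widened with WordNet synonyms using the NLTK library:

```python
# Illustration of WordNet-based query expansion (not dtSearch's API).
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def expand_query(term):
    """Return the term plus its WordNet synonyms."""
    synonyms = {term.lower()}
    for synset in wn.synsets(term):
        for lemma in synset.lemma_names():
            synonyms.add(lemma.replace("_", " ").lower())
    return sorted(synonyms)

print(expand_query("notebook"))  # e.g. notebook, notebook computer, ...
```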

Great stuff. Some consider the dtSearch interface dated, but I think it’s highly functional. Real easy to set up separate named indexes for different sets of directories, too. (Excuse me. I’m dating myself. We call them “folders” now, don’t we?)

I also use dtSearch for a variety of other search tasks — including finding emails from the thousands I have captured in SeaMonkey. Making those emails accessible in a reasonable (and, ideally, consistent) way has been virtually impossible. The native SeaMonkey search features — like those in other email clients I have encountered — are simply inadequate.

And even if those email search features were superb, they wouldn’t solve the problem, because SeaMonkey stores each mailbox as one big file. I do mean big for some of my mailboxes. So finding a huge file is almost meaningless. Big files will satisfy many queries unless you use proximity searches and other tricks, and even if one mailbox does contain the information you want, it may take a long time to find the right spot within that file. And you have to go through the same process if you want to execute that query again.

ABC Amber SeaMonkey Converter solves that problem by allowing me to split SeaMonkey mailboxes into separate HTML files. (I could use ABC Amber options to convert them to text or a couple dozen other output formats, but I prefer HTML for a variety of reasons.) When I use a dtSearch query against the directories containing those exported HTML emails, I get a highly relevant selection of small files — exactly what I want.
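
ABC Amber does all of this from a point-and-click interface. Purely to illustrate what “split one big mailbox into one file per message” involves, here is a rough sketch using Python's standard mailbox module; the paths and output names are invented, and the real converter handles far more detail (attachments, encodings, formatting):

```python
# Rough sketch of splitting an mbox-style mailbox into one HTML file per message.
# Paths and filenames are invented; ABC Amber's converter handles far more detail.
import html
import mailbox
import pathlib

def split_mailbox(mbox_path, out_dir="mail_html"):
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, msg in enumerate(mailbox.mbox(mbox_path)):
        body = msg.get_payload(decode=True) if not msg.is_multipart() else b""
        text = (body or b"").decode("utf-8", errors="replace")
        page = "<html><body><h1>{}</h1><pre>{}</pre></body></html>".format(
            html.escape(msg.get("subject", "(no subject)")), html.escape(text))
        (out / f"msg{i:05d}.html").write_text(page, encoding="utf-8")

split_mailbox("Inbox")  # SeaMonkey mailboxes are plain mbox files named after the folder
```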

Very easy, too. When I ran ABC Amber the first time, it found the SeaMonkey mailboxes automatically. The emails in each folder were displayed in a list, and I could easily select as many or as few as I wished. Oh, and I should mention that ABC Amber promotional pages stress the ability of the converter to output a single, integrated HTML file from a mailbox. That’s a plus for many people, but not what I want.

I also tested the mailbox-to-TreePad converter. (You just click a different output option in ABC Amber.) The results were flawless, and the TreePad outliner let me view the email content by date. Cool.

One caution: As of this writing, it appears that SeaMonkey has changed where it places email folders. So folders I created with SeaMonkey 2.0 — and any new email since the changeover — did not show up in the ABC Amber converter, but I was able to redirect the program to the new location using an ABC Amber option. I have advised the Process Text people about this.

UPDATE (16-nov-2009): The ProcessText people have already updated the converter. It now finds the SeaMonkey 2.0 mailboxes automatically. That was quick!

I’ve been using dtSearch for nearly a decade now. It’s still worth the money — about $200 for an individual license. Adding ABC Amber SeaMonkey Converter (about $20) to my set of tools will really make a difference.

September 16, 2009

Using circles and arrows

Many “semantic” practices and applications — including “brainstorming” and construction of computer ontologies — involve the use of (a) circles or other symbols (“nodes”) to represent concepts or ideas and (b) arrows (connecting arcs or “edges”) to represent the relationships among the concepts or ideas.

(Tim Berners-Lee uses the phrase “circles and arrows” in at least one of his papers: “The Semantic Web starts as a simple circles-and-arrows diagram relating things, which slowly expands and coalesces to become global and vast.” in “The Semantic Web lifts off” by Tim Berners-Lee and Eric Miller. ERCIM News, No. 51, October 2002. http://www.ercim.org/publication/Ercim_News/enw51/berners-lee.html. His original vision was metadata for documents.)

The graphic representation is not the tool itself in some cases, but a method of helping users visualize and/or manipulate complex, abstract data that is difficult for the average human to understand quickly — for example, RDF expressed in XML.
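
For readers new to the idea, the circles-and-arrows picture maps directly onto subject-predicate-object triples. A tiny hand-rolled sketch, with nodes and edge labels invented for illustration:

```python
# A circles-and-arrows diagram is just nodes plus labeled edges, i.e. triples.
# The example nodes and edge labels are invented for illustration.
triples = [
    ("Watson", "developed_by", "IBM"),
    ("Watson", "competes_on", "Jeopardy!"),
    ("Jeopardy!", "hosted_by", "Alex Trebek"),
]

def outgoing_edges(node, graph):
    """All arrows leaving a given circle."""
    return [(p, o) for s, p, o in graph if s == node]

print(outgoing_edges("Watson", triples))
# [('developed_by', 'IBM'), ('competes_on', 'Jeopardy!')]
```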

Mapping arguments on a whiteboard in support of decision making is a common practice in many meetings. (But integrating those representations into subsequent discussions is almost always a challenge.)

We need a much better and more widely usable set of tools for such purposes, but just applying current, limited tools is useful in its own right. One thing you definitely begin to understand as you try to deconstruct your arguments into their underlying meaning — especially when using graphical tools for that purpose — is that the process itself is what gets you to that meaning.

The process is useful in exposing what is tangential, peripheral, or simply irrelevant. You tend to create and refine elemental, focused, unambiguous assertions that can be verified as true or debunked.

You certainly expose conditions and constraints that apply to those assertions. Sweeping generalizations quickly become far less general … but often more useful. And you find that most of what you have written is not part of the core meaning that you want to represent and transfer.

That process, however, is still not easy. And you need to have a set of guidelines to keep yourself on track.

I will explore some of the tools and issues in this area in future posts.

August 10, 2009

The problem of situating ideas

Filed under: semantic technology,visualization of semantic information — Phil Murray @ 9:06 pm

I have papers scattered across my office. Some are printed documents filled with marginalia. Some started blank and are now filled with isolated observations. Some of those observations are in the form of sentences written in small blocks at different angles on the page or enclosed in circles or rectangles linked to other blocks by curved and straight lines. Some are post-it notes inserted into books I’m reading.

I have a stack of spiral-bound notebooks I use for taking notes at meetings. My notes and comments are liberally interspersed among those notes. Some pages are filled with notes and comments by themselves.

My computer files contain notes in at least 10 different formats (right now — a system for building help files, five (maybe six) different PIMs, outlines made with TreePad, files in Open Office Writer, HTML files created with Sea Monkey, emails and HTML files exported from email, and text files created in Notepad++). Some of the products I’m reviewing contain notes and ideas locked in those particular tools.

Other ideas are scattered across the Web in wikis, blogs, and several web sites.

Let’s face it. I have a problem. Those ideas are not “situated.” They have little or no context. I don’t know — or at least I cannot demonstrate — how they are connected and where they overlap or duplicate each other. And that’s a problem, because I certainly don’t remember most of them.

I have no way around one of the roadblocks to improving this situation: Sometimes I can’t easily record those ideas on a computer. It’s just inconvenient.

At other times, using the computer just seems inappropriate. (Yesterday, I wrote six pages of notes on paper about using Ron C. de Weijze’s Personal Memory Manager (PMM) — a tool for capturing and integrating ideas on your computer! — while sitting at my computer … with PMM open.)

I do go back periodically and try to capture some of the stuff on paper, putting large X’s through notes that I have transcribed. That helps a bit, but it doesn’t connect them. It does little to make their meaning explicit or trace their impact on other ideas. It does nothing to aid in finding the other contexts in which this idea may have occurred.

I have long railed against trapping ideas in formats that make them effectively not re-usable. That’s a problem with most concept-mapping tools and PIMs — even those that support export to Web formats. I really thought I could solve my problem with David Karger’s Haystack or the NEPOMUK semantic desktop, now being commercialized by (or as) Gnowsis, but I found them clumsy, incomplete, or lacking support.

But the problem of situating those ideas has become so great — and the value of connecting and superimposing structure on those ideas has become so obvious — that I am giving up (for at least a while) my insistence on making everything (including relationships) convertible to RDF and XHTML. Or DITA.

So I’m going to try to live in the proprietary world of Personal Memory Manager for a while. “Try” is the operative word. And I will do so within a set of constraints, including continuing to create content in HTML — XHTML as much as possible — and referencing those files in PMM, rather than embedding them solely in PMM.

UPDATE: KMWorld [finally] published my article, “Putting meaning to work.” See http://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=61174&PageNum=3.

July 9, 2009

What is the Semantic Web all about?

Filed under: semantic technology — Phil Murray @ 12:22 pm

Best quick read on the Semantic Web I’ve seen in a long time — by James Hendler, an expert and innovator in computer ontologies himself:

What is the Semantic Web all about?

Hendler was co-author of the Scientific American article that was primarily responsible for bringing Tim Berners-Lee’s ideas to a much broader audience, so he brings much more authority to his observations than, well, just about everyone else except Tim B-L himself.

Among the best information points and assessments in the brief post:

  • A brief description of his own work on Simple HTML Ontology Extensions, which preceded formalization of the Semantic Web.
  • A very brief history of the Semantic Web itself, including its DARPA funding.
  • His distinctions among “linked data,” “Web 3.0,” and the “Semantic Web.”

Required reading if you want to talk sensibly about the Semantic Web and semantic technologies and practices in general.

This is also a good time for me to remind people of another item on my list of required readings: Barry Smith and Chris Welty, “Ontology: Towards a New Synthesis.” Available at http://www.cs.vassar.edu/~weltyc/papers/fois-intro.pdf

Also brief and clear.

April 10, 2009

An economy of meaning. Or, why “semantics” is ugly but important.

Filed under: semantic technology — Phil Murray @ 8:32 pm

An economy of meaning? Yep. And I mean that explicitly in the sense of economic competitiveness and socio-economic solutions that pay direct attention to meaning. Not information.

Of course, we all rely on information. Always will. But we can’t rely on information the way we once did. We might be proud of our bookshelves or our long lists of browser bookmarks, but they’re decreasingly effective in helping us solve our problems. It’s not our fault. It’s the fault of information itself. There’s just too much to handle and apply.

That is, in part, the message of the Semantic Web. And my use of the phrase an economy of meaning is closest perhaps to Ilkka Tuomi’s use of the phrase Towards the New Economy of Meaning in his presentation, “Networks of Innovation”, which focuses on new socially-constructed forms of innovation.

But a “semantic perspective” is much more than that. It’s more than just innovation. It’s not just about controlling costs. It’s about survival. It is the reason, for example, that progress in providing healthcare has ground to a halt, even slipping backward at times, in spite of rapid advances in medical knowledge and treatment technology and in spite of massive financial resources.

Together with former associates at the Center for Semantic Excellence (CSE), I have been evaluating the causes of the problems in healthcare in the United States for many months — from multiple perspectives. But it got personal recently when my wife underwent hip-replacement surgery.

  • In the pre-op phase, we experienced a surgical team that operated with both efficiency and humanity … while surrounded by dozens of different technologies. (I misplaced my wife’s cane because I tried to put it someplace in that maze of technologies where people wouldn’t trip over it.) Lots of direct personal contact.
  • On the recovery floor, it was often quite different. The hallways were crowded with what my wife (a medical professional herself) refers to as “COWs” — Computers on Wheels — and other technologies. The staff spent a lot of time at those laptops. Caregivers are, of necessity these days, at the beck and call of what CSE member Tom Bigda-Peyton pointedly refers to as “people not in the room.”

    My personal impression was that the less-competent and less-caring members of that staff were far more concerned with what those screens demanded of them than what their patients (and their families) asked of them. It had little to do with inadequate staffing. There seemed to be an abundance of personnel. That can be deceiving, of course, but both minor requests (a blanket for the chilled elderly patient in the next bed) and important treatment concerns went unaddressed for far longer than seemed reasonable. The staff ignored our surgeon’s standard practices for pain management.

  • At home, the first 45-minute visit by a physical therapist was occupied almost exclusively by notetaking … on yet another laptop. Not much therapy. A technician arrived to draw blood that would be used to check levels of Coumadin, a blood-thinning agent vital to safe recovery in such invasive surgical procedures. The results were available in a few hours to the doctor’s staff, but they seemed in no hurry to report those results — and deliver any changes in dosage — even with a long weekend coming up. In fact, the staff seemed rather clueless about the importance of correct and timely adjustments of the medication.

Nothing went wrong. My wife is recovering much more rapidly than expected. The surgeon did his job very well. And neither my wife nor I would want to go back to pre-technology medicine.

What’s troubling here is less the current state of care than the warning signs of problems to come: the demands of capturing information, demands that distract from properly interpreting that information and delivering the service itself; the growing role of intermediaries who have no stake in how well that service is provided (in particular, those “people not in the room”); and the continuing growth in costs, even as technology is applied successfully to specific requirements.

Those troubles are hardly limited to healthcare. And the only way to solve them is to understand how we have created, transferred, and applied meaning in these situations in the past … and how we must do so in a world now dominated by information.

March 30, 2009

New publications

Filed under: semantic technology — Phil Murray @ 8:30 pm

I’m finally adding some of the things I’ve written over the past year to my web site. The following two papers are available at http://www.semanticadvantage.com/id22.html

  • The Idiot Savant of Search
  • “Sarkozy bites Obama child”: A commentary on the benefits and distractions of the Semantic Web

Enjoy. Comments are very welcome.

March 26, 2009

Mills Davis’ “Web 3.0 Manifesto”

Filed under: semantic technology — Phil Murray @ 7:10 pm

A few weeks ago, Mills Davis offered me an evaluation copy of his “Web 3.0 Manifesto: How Semantic Technologies in Products and Services Will Drive Breakthroughs in Capability, User Experience, Performance, and Life Cycle Value.”

I jumped at the opportunity, because Davis is one of those rare intelligences who can get his arms around complex market and technology trends, providing substantive new information and helpful perspective at the same time. A friend accused him of being “too far ahead of the curve,” but I’d love being insulted like that from time to time.

In this dense 32-page report, Davis

  • Differentiates semantic (“Web 3.0”) technologies from “Web 1.0” (connecting information) and “Web 2.0” (social computing) phases.
  • Describes the link between semantic technologies and generation of value.
  • Provides a graphic representation of semantic technology product and service opportunities broken down into 70 discrete “elements of value.” Each opportunity is described in the text. Some random examples: visual language & semantics, semantic cloud computing, and collective knowledge systems.
  • Assesses general market readiness for semantic technologies.
  • Lists over 300 “suppliers” (“research organizations, specialist firms, and major players”) in the semantic technologies space.

What does “Web 3.0” represent?

According to Davis, Web 3.0 is starting now. “It is about representing meanings, connecting knowledge, and putting these to work in ways that make our experience of internet more relevant, useful, and enjoyable.”

What do “semantic solutions” include, according to Davis? Well, pretty much everything that isn’t structured data in the traditional sense. That’s not unreasonable, if you accept — as I do — that if you are dealing with meaning and you believe that everything is connected and meaningful, then it’s really hard to avoid semantics. And I will, once more, quote the simple but extraordinarily astute observation of Aw Kong Koy: “You can’t manage what you can’t describe.”

You may think you’re new to “semantic technologies” but you’re not. If you’re reading this, you probably use and understand relational databases. You may actually design them. And if you do, you have engaged in a form of semantic modeling for business requirements. In fact, as fellow CSE member Samir Batla (see Batla’s Semanticity blog) observes, the idea of relational databases and the Semantic Web’s Resource Description Framework (RDF) both have roots in first-order logic.
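
To make those shared first-order roots concrete, here is a minimal sketch showing the same fact expressed as a relational row and as RDF-style triples; the table, columns, and predicate names are invented for illustration.

```python
# The same fact as a relational row and as RDF-style triples.
# Table, column, and predicate names are invented for illustration.

# Relational view: Employee(id, name, department)
employee_row = {"id": 42, "name": "Ada Lovelace", "department": "Research"}

# Triple view: each non-key column becomes a predicate about the row's subject.
# Either way, the underlying first-order assertion is, e.g., name(Employee42, "Ada Lovelace").
def row_to_triples(row, table, key):
    subject = f"{table}/{row[key]}"
    return [(subject, column, value) for column, value in row.items() if column != key]

print(row_to_triples(employee_row, "Employee", "id"))
# [('Employee/42', 'name', 'Ada Lovelace'), ('Employee/42', 'department', 'Research')]
```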

This “semantic” thing is simple, really: It’s the necessary solution to having too much information and too little time to consume it. Engineers get it. Just hand me the schematic! You can talk all you want about principles of product or building design — or even about a specific product — but I want to see how, exactly, Tab A fits into Slot B. I want the realities expressed explicitly … and in a consistent way. Tab A doesn’t fit into Slot B until that happens.

The heart of semantic technologies: knowledge representation

It’s simple, really. But that doesn’t mean it’s easy, because we’re dealing with one of the most difficult challenges facing business and computing: representing knowledge. The domain of knowledge representation has been with us for a while, and in his Manifesto, Davis clearly asserts that it is the rock on which semantic technologies rest: “In Web 3.0, knowledge representation (KR) goes mainstream. This is what differentiates semantic technologies from previous waves of IT innovation.”

But we do have to distinguish between (a) KR in the broad sense of representing [common sense] reality — as targeted by the massive Cyc ontology, for example — and (b) the practical and quite limited representations of reality that are and will be used for most business applications in the next few years, in which the representation (typically, perhaps, an ontology or simply an RDF resource) enables applications to understand each other in better (but still limited) ways by referencing a common/shared “understanding” of a narrow domain.

Sometimes the product of a KR project is a life’s work, as Cyc has been for Doug Lenat. At other times, it is much more modest — little more than normalizing and organizing a small part of a domain’s vocabulary.

The core graphic

The core graphic of Davis’ Manifesto (“Web 3.0 Semantic Technology Product and Service Opportunities”) is a quadrant of functions that follows the AQAL model — interior vs. exterior and individual vs. collective axes. (See, for example, Completing the AQAL model: Quadrants, states and types.) This quadrant-based arrangement of semantic applications is actually quite useful in getting a handle on the possible dimensions of semantic solutions, but — in spite of Davis’ high-level descriptions of each area — it doesn’t eliminate the need for more structured explanations of the application areas … let alone validate their existence. (And I’m definitely not ready to commit to the Holon/AQAL perspective on the world.)

Quadrants aside, the core objection from some corners will be that Davis includes activities and solutions that are not drawn from the Semantic Web. Well, I have two responses to that: (1) Davis is absolutely right to talk about more than the Semantic Web and (2) some distinguished folks in the semantic community — which existed long before the Semantic Web — have expressed resentment that academic inquiry into semantic approaches is increasingly limited to the Semantic Web brand. I can’t verify that this is the case; I’m just reporting what has been written by a few experts.

If I have an objection, it is that applying such broad labels to the many real and possible areas of semantic activity in business may contribute to further “siloing” of applications, one of the business problems that semantic approaches should actually help solve. Everybody wants to be a specialist, but this is a time for semantic generalists. And a semantic infrastructure should enable (useful) deconstruction of conventional models for business processes, technology, and creation of value, especially in knowledge work. (Take that, Mills! I can speak high-level, too!)

Another surface criticism: Just putting the word semantic in front of current work practices and technologies does not mean they do or will exist, at least by those names. Let’s not get too far ahead of ourselves with this labelling thing. It’s reminiscent of early (mid-1990s) pontifications on knowledge management, in which one well-known KM “guru” proclaimed a need for “knowledge reporters” and other gurus raced to assert the need for “knowledge engineers.” Well, it turns out that several existing, widely known professions (including, but not limited to, systems analysts and technical writers) were already filling that “knowledge reporter” gap. And “knowledge engineers” had been around for a long time building expert systems. The news of a sudden new need for their job title was a bit of a surprise to them.

Recommendation: Go get it

Mills Davis’ dense, sweeping, high-level look at the promise of “semantic solutions” will open your eyes, give you pause for thought, and make your brain hurt. Each sentence requires — and deserves — careful parsing. And it will at times make you go “Huh?”

Manifestos are like that, I guess. But better your brain should hurt every once in a while than simply be filled up with comfortable fluff.

February 5, 2009

Reducing dependence on tacit knowledge

Filed under: knowledge management,semantic technology — Phil Murray @ 7:33 pm

Much is made of the importance of tacit knowledge — which might be loosely understood as “things you do on autopilot” or highly internalized experience that can be applied in work situations. Examples of the value of tacit knowledge might include the stock trader with 10 clients on the line or a nurse practitioner making rapid decisions about the status and treatment of a distressed infant.

You’ll see references to the importance of tacit knowledge everywhere you turn. (In my experience, nearly everyone without a background in “knowledge management” who becomes interested in KM latches on to this idea uncritically.) One rationale for this mindset is the generalization that you can’t really capture knowledge in an explicit or formal way. That is usually combined with the assertion that these skills are the most important skills in an organization — not all that trivial explicit knowledge (articulated knowledge) stuff (which anyone can get his or her hands on).

(BTW, everyone seems to claim that everyone else is misinterpreting Michael Polanyi’s tacit vs. explicit distinction. [See, for example, the Wikipedia entry on Tacit knowledge http://en.wikipedia.org/wiki/Tacit_knowledge.] I simply don’t care. Argue among yourselves and don’t send me any nasty pedantic emails on the topic. I use the distinction in the way described above. And please don’t send me your favorite definition of knowledge.)

Sure, we depend on tacit knowledge in many cases where we are applying knowledge to work. But overemphasis on tacit knowledge as a business strategy or vital business practice is fundamentally wrongheaded and counterproductive.

  • What is tacit for one person may be very explicit for another. Part of the problem today is that, as individuals, we are forced to deal with a much wider range of situations and conditions than in the past. There are so many more things that touch our jobs and so much more information about those things is readily available to us. But someone, somewhere has in fact explicitly represented much of what we as individuals deem “tacit.”
  • A corollary: Examined closely, any particular skill that depends on highly internalized information may turn out, in fact, to be easily represented not only explicitly, but also very formally. Knowledge engineers — in the traditional sense, creators of expert systems — have demonstrated this to be true in many cases. (A toy example follows this list.)
  • The dividing line between internalized “knowledge” and information is very fuzzy. These days, nearly every application of knowledge to work is deeply dependent on explicit knowledge and information.
  • The emphasis on tacit knowledge is fundamentally elitist … and shortsighted. The working assumption is that those who already demonstrate or are capable of demonstrating superior skills in an activity deserve more attention. This attention and investment in time and money may actually be counterproductive, because when the expert walks out the door, so does his knowledge. People who are deeply committed to improving their knowledge and skills will do so anyway, assuming you let them. Those who do not have that drive for excellence and improvement aren’t going to be prodded like cattle into improved learning and better behaviors.
  • The “tacit agenda” heavily emphasizes the role of learning in an organization. But I agree with my friend Jim Giombetti that focusing on learning — enhancing the knowledge of individuals — is fundamentally a bad investment for enterprises, especially if it comes at the expense of more thoughtful approaches to making knowledge work more effective. In general, you simply don’t get a good, predictable return on that investment.
  • Tacit knowledge simply doesn’t apply in some situations. Lately I’ve been listening in on the NASA/Ontolog conversation about Ontology in Knowledge Management & Decision Support (OKMDS). The diverse and distributed community in this discussion can’t depend in any significant way on tacit knowledge. That is probably true of many enterprises and communities of practice as well. (The large pool of experts in IBM comes to mind.)

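Here is that toy example of what “represented very formally” can look like in the expert-system tradition. The facts, rules, and conclusions below are invented and are far simpler than anything a real clinical or trading system would use.

```python
# Toy forward-chaining rules, in the spirit of classic expert systems.
# The facts and rules are invented and far simpler than any real system.
facts = {"infant_temp_over_38C", "infant_age_under_3_months"}

rules = [
    ({"infant_temp_over_38C", "infant_age_under_3_months"}, "escalate_to_physician"),
    ({"escalate_to_physician"}, "order_blood_work"),
]

def forward_chain(facts, rules):
    """Apply rules until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules) - facts)  # {'escalate_to_physician', 'order_blood_work'}
```
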
Don’t get me wrong. The last thing I want organizations to do is to chain experts to desks and make them write down their “knowledge” in formal ways. By the time they finish doing so, the world has changed. And it’s simply impossible to treat this kind of knowledge capture as a manageable top-down enterprise activity.

But it is vital, IMHO, to pursue ways of converting what we know as individuals into what is useful for others in the organization to know. Technology and new thinking about knowledge work will help us do so.

July 9, 2008

Normalizing ideas

Filed under: semantic technology — Phil Murray @ 2:37 pm

The relational database model rests on the basic principle of normalization of data.

Semantic technology approaches need to apply this principle, too. Not just at the level of concepts but also — and perhaps just as importantly — at the level of ideas. By ideas I mean complex expressions or assertions about reality, like “Our opportunity in the marketplace is to apply IBM’s UIMA to unstructured information in the enterprise.”
The “truth” of that assertion is obviously critical to the success of a company in that business. However, even if such assertions can be specified in a theory of meaning (like an ontology language), it’s not clear that they can be shown to be true by any means other than the consensus of experts.
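
A hand-made sketch of what “normalizing” that idea might look like, with the decomposition and labels invented for illustration; the point is only that elemental assertions can be examined, and agreed or disagreed with, one at a time:

```python
# Hand-made illustration: one complex idea decomposed into elemental assertions,
# each of which can be examined, sourced, and contested separately.
complex_idea = ("Our opportunity in the marketplace is to apply IBM's UIMA "
                "to unstructured information in the enterprise.")

elemental_assertions = [
    ("UIMA", "is_developed_by", "IBM"),
    ("UIMA", "processes", "unstructured information"),
    ("our company", "has_opportunity_to_apply", "UIMA"),
    ("the opportunity", "is_located_in", "the enterprise marketplace"),
]

# Each elemental assertion can carry its own status, e.g. pending expert consensus.
status = {assertion: "asserted, pending expert consensus" for assertion in elemental_assertions}
for assertion, note in status.items():
    print(assertion, "->", note)
```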