The Semantic Advantage

March 26, 2009

Mills Davis’ “Web 3.0 Manifesto”

Filed under: semantic technology — Phil Murray @ 7:10 pm

A few weeks ago, Mills Davis offered me an evaluation copy of his “Web 3.0 Manifesto: How Semantic Technologies in Products and Services Will Drive Breakthroughs in Capability, User Experience, Performance, and Life Cycle Value.”

I jumped at the opportunity, because Davis is one of those rare intelligences who can get his arms around complex market and technology trends, providing substantive new information and helpful perspective at the same time. A friend accused him of being “too far ahead of the curve,” but I’d love to be insulted like that from time to time.

In this dense 32-page report, Davis

  • Differentiates semantic (“Web 3.0”) technologies from “Web 1.0” (connecting information) and “Web 2.0” (social computing) phases.
  • Describes the link between semantic technologies and generation of value.
  • Provides a graphic representation of semantic technology product and service opportunities broken down into 70 discrete “elements of value.” Each opportunity is described in the text. Some random examples: visual language & semantics, semantic cloud computing, and collective knowledge systems.
  • Assesses general market readiness for semantic technologies.
  • Lists over 300 “suppliers” (“research organizations, specialist firms, and major players”) in the semantic technologies space.

What does “Web 3.0” represent?

According to Davis, Web 3.0 is starting now. “It is about representing meanings, connecting knowledge, and putting these to work in ways that make our experience of the internet more relevant, useful, and enjoyable.”

What do “semantic solutions” include, according to Davis? Well, pretty much everything that isn’t structured data in the traditional sense. That’s not unreasonable, if you accept — as I do — that if you are dealing with meaning and you believe that everything is connected and meaningful, then it’s really hard to avoid semantics. And I will, once more, quote the simple but extraordinarily astute observation of Aw Kong Koy: “You can’t manage what you can’t describe.”

You may think you’re new to “semantic technologies” but you’re not. If you’re reading this, you probably use and understand relational databases. You may actually design them. And if you do, you have engaged in a form of semantic modeling for business requirements. In fact, as fellow CSE member Samir Batla (see Batla’s Semanticity blog) observes, the relational database model and the Semantic Web’s Resource Description Framework (RDF) both have roots in first-order logic.
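
To make that parallel concrete, here is a minimal sketch in Python. The Employee relation, its column names, and the example.org URIs are all invented for illustration; the point is only that a relational row and a set of RDF-style triples state the same first-order predicates:

```python
# A hypothetical relational row: Employee(id, name, dept).
# In first-order-logic terms, each column is a predicate over the key:
#   name(emp42, "Ada Lovelace"), dept(emp42, "Engineering")
row = {"id": "emp42", "name": "Ada Lovelace", "dept": "Engineering"}

# The same facts as RDF-style (subject, predicate, object) triples.
# The http://example.org/ namespace is invented for this sketch.
EX = "http://example.org/"
triples = [
    (EX + "emp42", EX + "name", "Ada Lovelace"),
    (EX + "emp42", EX + "dept", "Engineering"),
]

# Both representations answer the same question in the same logical terms:
# what is emp42's dept?
dept_from_row = row["dept"]
dept_from_triples = next(
    o for s, p, o in triples if s == EX + "emp42" and p == EX + "dept"
)
assert dept_from_row == dept_from_triples
```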

This “semantic” thing is simple, really: It’s the necessary solution to having too much information and too little time to consume it. Engineers get it. Just hand me the schematic! You can talk all you want about principles of product or building design — or even about a specific product — but I want to see how, exactly, Tab A fits into Slot B. I want the realities expressed explicitly … and in a consistent way. Tab A doesn’t fit into Slot B until that happens.

The heart of semantic technologies: knowledge representation

It’s simple, really. But that doesn’t mean it’s easy, because we’re dealing with one of the most difficult challenges facing business and computing: representing knowledge. The domain of knowledge representation has been with us for a while, and in his Manifesto, Davis clearly asserts that it is the rock on which semantic technologies rest: “In Web 3.0, knowledge representation (KR) goes mainstream. This is what differentiates semantic technologies from previous waves of IT innovation.”

But we do have to distinguish between (a) KR in the broad sense of representing [common sense] reality — as targeted by the massive Cyc ontology, for example — and (b) the practical and quite limited representations of reality that are and will be used for most business applications in the next few years, in which the representation (typically, perhaps, an ontology or simply an RDF resource) enables applications to understand each other in better (but still limited) ways by referencing a common/shared “understanding” of a narrow domain.

Sometimes the product of a KR project is a life’s work, as Cyc has been for Doug Lenat. At other times, it is much more modest — little more than normalizing and organizing a small part of a domain’s vocabulary.

The core graphic

The core graphic of Davis’ Manifesto (“Web 3.0 Semantic Technology Product and Service Opportunities”) is a quadrant of functions that follows the AQAL model — interior vs. exterior and individual vs. collective axes. (See, for example, Completing the AQAL model: Quadrants, states and types.) This quadrant-based arrangement of semantic applications is actually quite useful in getting a handle on the possible dimensions of semantic solutions, but — in spite of Davis’ high-level descriptions of each area — it doesn’t eliminate the need for more structured explanations of the application areas … let alone validate their existence. (And I’m definitely not ready to commit to the Holon/AQAL perspective on the world.)

Quadrants aside, the core objection from some corners will be that Davis includes activities and solutions that are not drawn from the Semantic Web. Well, I have two responses to that: (1) Davis is absolutely right to talk about more than the Semantic Web and (2) some distinguished folks in the semantic community — which existed long before the Semantic Web — have expressed resentment that academic inquiry into semantic approaches is increasingly limited to the Semantic Web brand. I can’t verify that this is the case; I’m just reporting what has been written by a few experts.

If I have an objection, it is that applying such broad labels to the many real and possible areas of semantic activity in business may contribute to further “siloing” of applications, one of the business problems that semantic approaches should actually help solve. Everybody wants to be a specialist, but this is a time for semantic generalists. And a semantic infrastructure should enable (useful) deconstruction of conventional models for business processes, technology, and creation of value, especially in knowledge work. (Take that, Mills! I can speak high-level, too!)

Another surface criticism: Just putting the word semantic in front of current work practices and technologies does not mean they do or will exist, at least by those names. Let’s not get too far ahead of ourselves with this labelling thing. It’s reminiscent of early (mid-1990s) pontifications on knowledge management, in which one well-known KM “guru” opined that there was a need for “knowledge reporters” and other gurus raced to assert the need for “knowledge engineers.” Well, it turns out that several existing, widely known professions (including, but not limited to, systems analysts and technical writers) were already filling that “knowledge reporter” gap. And “knowledge engineers” had been around for a long time building expert systems. The news of a sudden new need for their job title was a bit of a surprise to them.

Recommendation: Go get it

Mills Davis’ dense, sweeping, high-level look at the promise of “semantic solutions” will open your eyes, give you pause for thought, and make your brain hurt. Each sentence requires — and deserves — careful parsing. And it will at times make you go “Huh?”

Manifestos are like that, I guess. But better your brain should hurt every once in a while than simply be filled up with comfortable fluff.


February 5, 2009

Reducing dependence on tacit knowledge

Filed under: knowledge management, semantic technology — Phil Murray @ 7:33 pm

Much is made of the importance of tacit knowledge — which might be loosely understood as “things you do on autopilot” or highly internalized experience that can be applied in work situations. Examples of the value of tacit knowledge might include the stock trader with 10 clients on the line or a nurse practitioner making rapid decisions about the status and treatment of a distressed infant.

You’ll see references to the importance of tacit knowledge everywhere you turn. (In my experience, nearly everyone without a background in “knowledge management” who becomes interested in KM latches on to this idea uncritically.) One rationale for this mindset is the generalization that you can’t really capture knowledge in an explicit or formal way. That is usually combined with the assertion that these skills are the most important skills in an organization — not all that trivial explicit knowledge (articulated knowledge) stuff (which anyone can get his or her hands on).

(BTW, everyone seems to claim that everyone else is misinterpreting Michael Polanyi’s tacit vs. explicit distinction. [See, for example, the Wikipedia entry on Tacit knowledge http://en.wikipedia.org/wiki/Tacit_knowledge.] I simply don’t care. Argue among yourselves and don’t send me any nasty pedantic emails on the topic. I use the distinction in the way described above. And please don’t send me your favorite definition of knowledge.)

Sure, we depend on tacit knowledge in many cases where we are applying knowledge to work. But overemphasis on tacit knowledge as a business strategy or vital business practice is fundamentally wrongheaded and counterproductive.

  • What is tacit for one person may be very explicit for another. Part of the problem today is that, as individuals, we are forced to deal with a much wider range of situations and conditions than in the past. There are so many more things that touch our jobs and so much more information about those things is readily available to us. But someone, somewhere has in fact explicitly represented much of what we as individuals deem “tacit.”
  • A corollary: Examined closely, any particular skill that depends on highly internalized information may turn out, in fact, to be easily represented not only explicitly, but also very formally. Knowledge engineers — in the traditional sense, creators of expert systems — have demonstrated this to be true in many cases.
  • The dividing line between internalized “knowledge” and information is very fuzzy. These days, nearly every application of knowledge to work is deeply dependent on explicit knowledge and information.
  • The emphasis on tacit knowledge is fundamentally elitist … and shortsighted. The working assumption is that those who already demonstrate or are capable of demonstrating superior skills in an activity deserve more attention. This attention and investment in time and money may actually be counterproductive, because when the expert walks out the door, so does his knowledge. People who are deeply committed to improving their knowledge and skills will do so anyway, assuming you let them. Those who do not have that drive for excellence and improvement aren’t going to be prodded like cattle into improved learning and better behaviors.
  • The “tacit agenda” heavily emphasizes the role of learning in an organization. But I agree with my friend Jim Giombetti that focusing on learning — enhancing the knowledge of individuals — is fundamentally a bad investment for enterprises, especially if it comes at the expense of more thoughtful approaches to making knowledge work more effective. In general, you simply don’t get a good, predictable return on that investment.
  • Tacit knowledge simply doesn’t apply in some situations. Lately I’ve been listening in on the NASA/Ontolog conversation about Ontology in Knowledge Management & Decision Support (OKMDS). The diverse and distributed community in this discussion can’t depend in any significant way on tacit knowledge. That is probably true of many enterprises and communities of practice as well. (The large pool of experts in IBM comes to mind.)

Don’t get me wrong. The last thing I want organizations to do is to chain experts to desks and make them write down their “knowledge” in formal ways. By the time they finish doing so, the world has changed. And it’s simply impossible to treat this kind of knowledge capture as a manageable top-down enterprise activity.

But it is vital, IMHO, to pursue ways of converting what we know as individuals into what is useful for others in the organization to know. Technology and new thinking about knowledge work will help us do so.

July 9, 2008

Normalizing ideas

Filed under: semantic technology — Phil Murray @ 2:37 pm

The relational database model rests on the basic principle of normalization of data.

Semantic technology approaches need to apply this principle, too. Not just at the level of concepts but also — and perhaps just as importantly — at the level of ideas. By ideas I mean complex expressions or assertions about reality, like “Our opportunity in the marketplace is to apply IBM’s UIMA to unstructured information in the enterprise.”

The “truth” of that assertion is obviously critical to the success of a company in that business. However, even if such an assertion can be specified in a theory of meaning (like an ontology language), it’s not clear that its truth can be established by any means other than the consensus of experts.
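
As a minimal sketch of what normalization at the level of ideas might look like, consider representing that assertion as a structured object whose concepts are shared, normalized references and whose “truth” accumulates from expert endorsements. The identifiers, fields, and consensus threshold below are all my own inventions for illustration:

```python
# Normalizing an idea: each concept the assertion mentions becomes one
# shared entry, referenced by id rather than repeated as free text --
# the analog of normalizing data in a relational schema.
concepts = {
    "c1": "IBM UIMA",
    "c2": "unstructured information",
    "c3": "the enterprise",
}

idea = {
    "id": "i1",
    "text": ("Our opportunity in the marketplace is to apply IBM's UIMA "
             "to unstructured information in the enterprise."),
    "concepts": ["c1", "c2", "c3"],  # normalized references
    "endorsements": [],              # expert judgments accumulate here
}

def accepted_by_consensus(idea, threshold=3):
    """The assertion's 'truth' is established only by expert consensus."""
    return len(idea["endorsements"]) >= threshold

idea["endorsements"] += ["expert-1", "expert-2", "expert-3"]
print(accepted_by_consensus(idea))  # True once enough experts endorse it
```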

December 12, 2007

Fun with technology haters

Filed under: knowledge management, semantic technology — Phil Murray @ 8:47 pm

Recently I posted an extended opinion to the OKMDS (Ontology for Knowledge Management and Decision Support) forum (okmds-convene) about paying more attention to structured support of discussion as the source of good decisions in enterprises … and less attention to computer ontologies. (The full post follows.) I made the unpardonable mistake of asserting that such discussion can and should be supported by technology.

Well, the technology-haters came out of the woodwork to complain about the pitfalls of technological approaches to knowledge management … making posts to the forum [perhaps] written in a word-processor, sent via email, organized and published via Web-supported forum technology, and viewed by more than a few people in their Web browsers and email clients (or via an RSS feed). I’ll bet some of them saved the thread on their desktops where it was indexed automatically with Google Desktop or another desktop search engine.

I was warned about the dangers of “throwing technology” at a problem, of confusing knowledge management with the tools used for it, and of being biased by my lurid obsession with technology (“If you’re a hammer, everything looks like a nail.”).

Umm, er, has everyone forgotten that people identify their business needs, then choose, deploy, use, and manage the technologies to address them? I’ve noticed that my Nissan Murano works fine most of the time, as long as I put gas in it and avoid large trees. I was aware that it didn’t get 40 mpg when I bought it, but it was big enough to haul the stuff and people I needed to transport. And it satisfied my vanity, too.

On the other hand, has everyone forgotten that designers and developers of KM technology were inspired by work problems? (Well, OK, there have been more than a few who see the world as a Matrix-like playground.) Did their resulting creations provide good solutions to those problems? Not always. Do they make bad assumptions about how their solutions will be adopted and applied? Not infrequently. But given a choice between listening to delirious rants about epistemology and listening to how a technologist addressed a real problem, I always tune in to the latter … when I’m looking for answers that work.

The irony is, perhaps, that I’m not a technologist. Never wrote or sold software. Sure, I’ve written a few small Perl scripts. Wrote SGML DTDs and XML Schemas. Served up tons of documentation on computer software. Used hundreds of different applications for creating, managing, organizing, and publishing content. Examined hundreds more software applications for KM requirements, but rejected most of them as inadequate for those purposes.

I’m not ashamed of my technology bent. Many aspects of “managing knowledge” are potentially improved, if not always solved, by technology:

  • Providing scalable solutions in general.
  • Capturing and integrating the enterprise’s “knowledge.” (I’ll admit that it’s not “knowledge” in the dictionary sense. Then we get into another endless argument about “tacit” and “explicit” knowledge. I don’t want to go there, either.)
  • Serving as a collective memory.
  • Enabling detection of patterns and relationships in communications that are effectively undetectable to individual human brains.
  • Observing and quantifying the flow of information among people in organizations. (Think Social Network Analysis.)

And that’s just the short list, of course. You can’t make judgments about the effectiveness of technologies you don’t know about … or which don’t exist yet.

Folks, ya gotta examine the needs first … from as many different perspectives as possible. Even, God forbid, a “people-oriented” perspective. But first of all from a business perspective.

——————

As a relative newbie to formal ontologies, I apologize in advance for imprecise terminology. You did ask for input from the KM community, right?

The Ontolog forum recently featured a wonderful discussion between John Sowa, Leo Obrst and others on standards for ontologies and the utility of ontologies in enterprises. (Ontolog forum, 17-nov-2007) The two distinguished experts disagreed gently on the topic, but it appears that they agree ontologies are important for solving enterprise problems.

I have to object … also gently … and with qualifications. The distinguished members of the ontology community have demonstrated that ontologies and other technologies that have evolved from the study of logic and language can be applied successfully to data-mining, interpretation of natural language, and even situation awareness requirements in military scenarios, but the recent buzz about ontologies is — like the recent buzz about metadata in general — largely a reaction to the superabundance of unstructured information. From the perspective of what enterprises need in general, this emphasis seems to be an overreaction.

I do believe that ontologies in general and the Semantic Web in particular will play huge roles in “semantic” aspects of business activities. However, overemphasizing ontologies and metadata distracts us from the most common enterprise activities: the core processes of building an opportunity (or a domain) and successfully managing the enterprise. In the knowledge-based enterprise, those processes consist primarily of identifying and evaluating facts and conditions — statements about market and physical realities. An enterprise or domain — for example, the extended NASA community — is the sum of all the conditions and responses for that enterprise.

Stated in a somewhat different way, decisions and implementation particulars grow out of evaluation and acceptance (or rejection) of ad hoc assertions. By ad hoc assertions I mean statements that identify an opportunity or situation — for example, “The presence of Chinese and Japanese satellites around the Moon is a threat to our pre-eminence in space exploration.” Or “Nanotubes based on composites are the best choice for building a space elevator.”

By evaluation of those assertions, I mean such statements as, “This assertion is relevant (or valid).” Or, “This assertion must be expressed more precisely.” Other evaluations may consist of identifying (and quantifying) the impact of a set of assertions A on assertion B — an analog to applying If … And If … Then logic to conditions in programming. But I’m not talking about precise programming statements. Evaluations are (or can be) collective, quantitative, “fuzzy,” qualitative, and/or arbitrary.

No matter what our particular role in a knowledge-driven organization may be, we communicate and evaluate assertions on a daily basis — in meetings, casual encounters, emails, personal note-taking, forums, and documentation of many types. These activities of the organization are frequent, pervasive, and vital to successful decision making and execution. In spite of that, there is no mandate to apply technology or business practices to making these activities more effective.

That’s unfortunate. Assertions and the evaluations of those assertions can be represented explicitly. They can be known — expressed as structured objects. They can be supported directly by technology, management practices, and education of workers in semantic principles. Decisions can be traced back to the conditions/assertions that influence those decisions.
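
Here is a minimal sketch in Python of what those structured objects might look like. The class and field names are my own invention, not a design taken from any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class Assertion:
    id: str
    text: str

@dataclass
class Evaluation:
    assertion_id: str
    judgment: str   # e.g. "relevant", "must be expressed more precisely"
    weight: float   # evaluations can be quantitative or fuzzy

@dataclass
class Decision:
    text: str
    based_on: list = field(default_factory=list)  # assertion ids: the trace

a1 = Assertion("a1", "Nanotubes based on composites are the best choice "
                     "for building a space elevator.")
e1 = Evaluation("a1", "relevant", 0.8)
e2 = Evaluation("a1", "must be expressed more precisely", 0.4)
d = Decision("Fund composite-nanotube research.", based_on=["a1"])

# Evaluations attach to the assertion by id ...
print([e.judgment for e in (e1, e2) if e.assertion_id == a1.id])
# ... and the decision can be traced back to the assertions behind it.
print(d.based_on)  # ['a1']
```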

Objects of this type and relationships among those objects can also be visualized easily. I am aware that software applications for specifying, visualizing, and evaluating assertions do exist. But those I have seen (like the Compendium Institute’s Compendium hypermedia tool) seem fundamentally disconnected from goals of precise representation of the meaning in natural language. They lack methods of formally representing assertions as objects that can be addressed with multiple tools, or they simply don’t scale well. Others simply don’t make the distinction between assertion and evaluation of assertions at all.

Supporting these core semantic activities of the organization should drive how knowledge-development, knowledge-representation, and information-management technologies are selected and implemented in enterprises — not vice versa. Similarly, the goal of enterprise strategies and personnel-management tactics should be, first and foremost, to make these core semantic activities as effective as possible.

Re-centering our focus on assertions and evaluation of assertions provides several important advantages:

  • Participants in the enterprise can deal with such assertions directly. Average Joes and Janes have a fundamental grasp of what constitutes an assertion — although they will need help in framing and evaluating assertions in a structured way. (It’s a lot like parsing a sentence. Not everyone’s favorite activity, but a lot easier than grasping subtleties of inheritance of semantic properties in ontologies.)
  • Participants can “see” the impact of their participation. Feedback is vital to participation. And a record of decision making can be kept and analyzed.
  • Evaluations can be weighted. An approximate sum of the evaluations of the quality of assertions — evaluations contributed by multiple stakeholders in the enterprise — should point to good decisions, especially as the number of relevant assertions and evaluations grows. (A rough sketch of what such weighting might look like follows this list.)
  • Participation in gathering and evaluation of assertions can be a source of objective information in performance evaluations.
  • Assertions become product. If you’re a software developer, you can see specifications emerge from assertions. (Currently, implementers have to leap that great chasm between unstructured descriptions of functionality and structured modeling of processes.) If you’re a tech writer, you can see that the rationale for features and functionality — and sometimes the behavior of features themselves — is captured in the assertion/evaluation process.
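
Here is that rough sketch of the weighting idea. The scores, stakeholder weights, and weighted-average scheme are my own assumptions, offered only to make the aggregation concrete:

```python
# Each assertion collects (score, stakeholder_weight) pairs from multiple
# stakeholders; a weighted average approximates the "sum of evaluations"
# that should point toward the better-supported assertions.
evaluations = {
    "a1": [(0.9, 2.0), (0.7, 1.0), (0.8, 1.5)],
    "a2": [(0.3, 1.0), (0.4, 2.0)],
}

def support(scores):
    """Weighted average of stakeholder scores for one assertion."""
    total_weight = sum(w for _, w in scores)
    return sum(s * w for s, w in scores) / total_weight

# Assertions ranked by aggregate support.
ranked = sorted(evaluations, key=lambda a: support(evaluations[a]), reverse=True)
print(ranked)  # ['a1', 'a2']: a1 is the better-supported assertion
```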

I want to stress that I’m not dismissing the importance of ontologies. Among other things, ontologies should support interpretation and management of assertions and evaluations. But we need to take a step back and re-center our pursuit of effective solutions for the challenges facing knowledge-based organizations. We have to ask,

  • What matters most to the participants in an organization?
  • What explicit information or “knowledge” most directly affects understanding and successful decision making?
  • What information is most directly relevant to a broad cross-section of people in the organization?
  • What information can people react to or evaluate with minimal education and effort?
  • How, in general, can participants most effectively contribute to improvement of the information that leads to success?

If this is yet another spin on the issues, I hope it is at least a more positive spin.

Phil Murray

November 27, 2007

Semantic technologies and hypertext

Filed under: hypertext, semantic technology — Phil Murray @ 3:45 pm

I suspect that if you’re interested in semantic technologies, there’s a good chance that you go back a ways in hypertext. Some of you even know that “hypertext” didn’t start with the World Wide Web.

My interest in hypertext began in 1988, a year or so after the watershed event of the community — the Hypertext 87 workshop at Chapel Hill. That exposes my age a bit, but Andries van Dam’s Hypertext 87 Keynote Address makes it clear that I’m a relative newbie. Van Dam observes in his keynote, “I’m a Johnny-come-lately to hypertext: I didn’t get started until 1967 …”

The hypertext pioneers I met in the ’80s and ’90s were immensely talented and just downright interesting to be with. And I suspect that many of them have strongly influenced — yea, verily, are still strongly influencing — the still-emerging domain of semantic technologies.

I’d like to begin the process of finding out where they are now … and how what they did contributed to the way we work now.

Let me start with Bob Glushko, who recently co-authored Document Engineering (MIT Press, 2005) with Tim McGrath. Bob was closely associated with one of the more successful commercial pre-Web hypertext systems — Electronic Book Technologies’ DynaText application.

More recently, Bob can lay claim to first use of the term semantic literacy:

We emphasize “computer literacy” (desktop applications and web surfing) but I’ve never heard anyone fret about how poorly people name and define the things and concepts that their computer applications capture and process for them, which seems more important to me. We need “semantic literacy” or maybe even “ontological literacy” but maybe we don’t teach it because it is too hard to explain what they mean.

Amen to that. (But I am envious that Bob came up with the term first.)

Hypertext seems to have infected Bob and many others, including myself, with a compulsion to solve the core problem of the Information Age: Finding useful knowledge in an ocean of information.

