
Semantic Wikis

One of the buzzphrases at WikiSym 2006 was “semantic wikis” (corporate wiki, wikis in education and wiki markup being the other main areas). From a user perspective, the main problem with semanticizing stuff is the tedious nature of tagging. Even scrolling through my list of two dozen blog categories is annoying, so I can’t imagine doing the same to a page of wiki content that continues to change over time.

Three papers were presented on this topic this past Tuesday morning in Odense: SweetWiki, Towards Wikis as semantic hypermedia, and Constrained Wiki: an Oxymoron?

SweetWiki
Michel Buffa presented a semantic wiki called SweetWiki, still in a prototype state. The name comes from “Semantic WEb Enabled Technology” (SWEET) and is intended to address some of the problems wikis face simply by being an open-structure, living document.

Wikis don’t scale well. At present, there are few standards. Migration is difficult, and the open structure makes navigation hard, too. These problems have been around as long as wikis have. SweetWiki began in 2005 as an attempt to answer the question, “What if we were to re-invent wikis now?”

The project wanted to maintain a site’s “wikiness” while addressing the shortcomings above. There is a WYSIWYG editor and no wiki markup language. AJAX powers wizards that help link WikiWords, and dynamic content is made possible by SPARQL, a query language for metadata. Most importantly, much of the tagging is done behind the scenes or as part of normal navigation.

SweetWiki extracts metadata and then queries it using SPARQL to get the most out of that information. Most of the metadata is generated automatically. The user can access the ontology editor (for tagging) or the WYSIWYG editor (for page content). As impressive as the concept is the obvious care that went into selecting the combination of available technologies to make it all work. The prototype’s presentation could use some usability polish, but the programming decisions are very sound.
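To give a flavor of what querying wiki metadata with SPARQL looks like, here is a minimal sketch using Python’s rdflib. The vocabulary and property names are invented for illustration and are not SweetWiki’s actual ontology.

```python
# Minimal sketch of querying wiki page metadata with SPARQL (via rdflib).
# The "wiki:" vocabulary below is hypothetical, not SweetWiki's real ontology.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

WIKI = Namespace("http://example.org/wiki#")

g = Graph()
page = URIRef("http://example.org/wiki/SemanticWikis")
g.add((page, RDF.type, WIKI.Page))
g.add((page, WIKI.taggedWith, Literal("semantic-web")))
g.add((page, WIKI.linksTo, URIRef("http://example.org/wiki/SweetWiki")))

# Find every page carrying a given tag -- the kind of query a wiki engine
# could run behind the scenes to build navigation or "related pages" lists.
results = g.query(
    """
    PREFIX wiki: <http://example.org/wiki#>
    SELECT ?page WHERE {
        ?page a wiki:Page ;
              wiki:taggedWith "semantic-web" .
    }
    """
)
for row in results:
    print(row.page)
```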

Towards Wikis as semantic hypermedia
In the next talk, Elena Paslaru Bontas Simperl reported on an analysis by the Networked Information Systems group at the Free University of Berlin. The team was motivated by the high popularity of wiki systems and by interest in domain-dependent semantic extensions. Wiki technology, she argued, should also rest on a sound foundation of design and architecture.

The analysis examined ontologies (machine-understandable syntax and XHTML) and several features of modern hypermedia, among them transclusions, annotations, and backtracking. The conclusion confirms that semantic technologies can augment wikis for organization and navigation, but current wiki design and limited multimedia support are hurdles to clear. Simperl proposed the creation of a Wiki object model (WOM) and a standardized, remotely accessible API to allow for a distributed architecture in which these services can evolve.
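Neither the WOM nor its API is spelled out in this summary, but a standardized, remotely accessible wiki object model might look roughly like the sketch below. All class and method names here are my own invention, purely to illustrate the idea of an interface that semantic services could build on.

```python
# Hypothetical sketch of a "wiki object model" interface -- not the paper's
# actual proposal, just an illustration of what a standardized, remotely
# accessible API for wiki content might expose.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class WikiNode:
    """One addressable unit of wiki content (page, section, or transclusion)."""
    uri: str
    title: str
    body: str
    annotations: dict[str, str]  # semantic annotations attached to the node


class WikiObjectModel(ABC):
    """Operations a remote semantic service could call against the wiki."""

    @abstractmethod
    def get_node(self, uri: str) -> WikiNode: ...

    @abstractmethod
    def links_from(self, uri: str) -> list[str]: ...

    @abstractmethod
    def transclude(self, target_uri: str, source_uri: str) -> None:
        """Embed one node inside another, one of the hypermedia features discussed."""

    @abstractmethod
    def annotate(self, uri: str, predicate: str, value: str) -> None:
        """Attach a machine-readable annotation to a node."""
```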

Constrained Wiki: an Oxymoron?
Angelo Di Iorio, of the University of Bologna, asked the question: Is wiki editing free? That is, are wiki authors free to change and produce new content at will? His answer is no. Even wiki editing is often bounded by implicit rules, and explicit management of those rules can help authoring.

Examples of implicit rules include:

  • Structures are correctly repeated over multiple pages
  • A page is syntactically correct (MoinMoin)
  • In-line content is well-formed (OpenWiki, SnipSnap)
  • Templates and lists are consistent (Wikipedia)
  • A user page is modifiable only by its owner (TWiki)

These issues are addressed either through ad hoc solutions, implemented as modifications to the wiki engine or as plug-ins, or through manual corrections by “WikiGnomes.”

Light constraints (LC) are constraints on wiki content that can be violated, temporarily or permanently. Making these constraints explicit offers users a number of potential benefits: more information is available to both readers and authors, leading to more efficient community collaboration, and checks that would otherwise happen a posteriori can be moved a priori.

A basic wiki page is characterized by its Markup (a string with the actual wiki markup), its Name (a string univocally denoting the page), and a Version ID (a string denoting the version number). For each light constraint, a validator, itself a computational entity, decides whether a page fulfills the constraint. The constraint’s status records the outcome of the last validation attempt, and its context indicates which pages are required for that validation. The context is where complications arise: running validators only on the submitted content creates gaps, since changes can affect the validity of other pages, too. Angelo’s solution is to notify a batch validator of any SAVE event and let it run on all pages included in that context.
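The paper’s own formalism is richer than this, but as a rough sketch in code, the page/validator split might look like the following. Names and details are illustrative, not Di Iorio’s implementation.

```python
# Illustrative sketch of light constraints: pages carry markup, a name, and a
# version; each constraint has a validator plus the context of pages it needs.
# This is a reading of the model described above, not the paper's code.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class WikiPage:
    name: str        # string univocally denoting the page
    markup: str      # the actual wiki markup
    version_id: str  # version number


@dataclass
class LightConstraint:
    description: str
    # The validator decides whether a set of pages fulfills the constraint.
    validator: Callable[[dict[str, WikiPage]], bool]
    context: set[str] = field(default_factory=set)  # names of pages the check needs
    status: str = "unchecked"  # outcome of the last validation attempt


def on_save(saved: WikiPage, pages: dict[str, WikiPage],
            constraints: list[LightConstraint]) -> None:
    """Batch validator notified of a SAVE event: re-run every constraint whose
    context includes the saved page, since an edit can break other pages too."""
    pages[saved.name] = saved
    for c in constraints:
        if saved.name in c.context or not c.context:
            scope = {n: pages[n] for n in (c.context or pages) if n in pages}
            ok = c.validator(scope)
            c.status = "valid" if ok else "violated"  # violations are allowed, just flagged
```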

By Kevin Makice

A Ph.D student in informatics at Indiana University, Kevin is rich in spirit. He wrestles and reads with his kids, does a hilarious Christian Slater imitation and lights up his wife's days. He thinks deeply about many things, including but not limited to basketball, politics, microblogging, parenting, online communities, complex systems and design theory. He didn't, however, think up this profile.