Monthly Archives: February 2008

DITA wiki

DITA and wiki hybrids – they’re here

Combinations - DNA and dice, relevant to Darwin?

Lisa Dyer and Alan Porter presented at last week’s DITA Central Texas User Group meeting, and both told tales of end-user doc written and sourced in DITA, with wikitext in mind as an output. About 20 people attended and we all enjoyed the show. I wanted to post my notes to follow up, and I’ll post a link to slide shows as well.

This post covers Lisa Dyer’s presentation on a wiki sourced with DITA topics. I’ll write another post to cover Alan’s presentation.

Actually, first things first: Bob Beims shared Meet Charlie, a description of Enterprise 2.0. It seemed very appropriate for the discussions about wikis, RSS subscriptions, and web-based documentation that we’ve had at recent Central Texas DITA User Group meetings.

Lisa has made her presentation available online. My notes are below the slideshow.

DITA source to wiki output case study

Lisa Dyer walked us through her DITA to wiki project. Their high level vision and business goals merged with a wiki as one solution, and Lombardi has customers who had requested a wiki. Lombardi’s wiki is available to customers that have a support login, so I won’t link to it, but she was able to demo the system they’ve had in place since July 2007.

What wiki toolset – open source or enterprise wiki engine?

On the question of choosing an open source or enterprise wiki engine, Lisa said to ask questions while evaluating, such as: Where do you want the intellectual property to develop? Will you pay for support? Who are your key resources internally, and do you need to supplement them with external help? They found it faster to get up and running and supported with an enterprise engine and chose Confluence, but she also noted that you “vote” for updates and enhancements with dollars rather than, say, community influence. (Editorial note – I’m only guessing that you get updates to open source wiki engines through community influence. I’m certain you can pay for support and enhancements to open source efforts with dollars.)

Run a pilot wiki project

She recommends a pilot wiki, internal only at first, to ferret out problems while building in time to fix them. Although Michele Guthrie from Cisco had to drop off the panel at the last minute, she has also found that internal-only wikis helped Cisco understand best practices for wiki documentation.

Meet customer needs – or decipher what they want and need

Lisa said that customers wanted immediate updates, knowledge of what’s new with the product and doc (800 pages worth), and wanted to tell others what they had learned. She found that all of these customer requests could be met with a wiki engine – RSS feeds, immediate updates, and the ability to share lessons learned. At her workplace, customers work extensively with the services people and document the implementation specifically, and that information could be scrubbed of customer-specific info. They found that rating and voting features give good content more exposure. Also, by putting the information into wikis, they found that there were fewer “I can’t find this information” complaints.

Intelligent wiki definition and separate audiences for each wiki

They have two wikis – one is for end-user documentation, one is for Services information. In the screens she showed us, Wiki was the tab label for the Services wiki, Documentation was the tab label for the doc wiki. The Documentation wiki does not allow anyone but the technical writers to edit content, but people can comment on the content and attach their own documents or images. The Services wiki allows for edits, comments, and attachments. The customers and services people wanted a way to share their unsanctioned knowledge such as samples, tips, and tricks, and the wiki lets them do that. The Services wiki has all the necessary disclaimers of a community-based wiki, such as “use this info at your own risk” type of disclaimers. Edited to add: The search feature lets users search both wikis, though.

Getting DITA to talk wiki

There are definite rules they’ve had to follow to get DITA to “talk wiki” and to ensure that Confluence knows the intent of the DITA content. For one, when they want to use different commands for UNIX and Windows steps in an installation or configuration task, they use ditaval metadata around the command-line text (using the “platform” property) and conditional processing for that topic. However, because the Confluence engine requires one unique name for each wiki article, they had to create separate Spaces for each condition of the deliverable (a UNIX Admin guide or a Windows Admin guide, for example). This limit results in something like 12 Spaces, but considering it’s output for several books for separate platforms, 32 individual books in all, that number of Spaces didn’t seem daunting to me. She uses a set of properties files during the build process to tell Confluence which file set to use and which ditavals apply, and then passes the properties to the Ant build task. The additional wiki Spaces do mean that your URLs aren’t as simple as they could be, but in my estimation they’re not completely awful either.
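To make the conditional markup concrete, here is a minimal sketch of what a platform-conditioned step and its ditaval filter might look like. The filenames and command text are my own invention, not Lombardi’s actual source:

```xml
<!-- install-task.dita: one step with platform-specific command lines -->
<step>
  <cmd>Start the server from the command line.</cmd>
  <info platform="unix"><codeblock>./startup.sh</codeblock></info>
  <info platform="windows"><codeblock>startup.bat</codeblock></info>
</step>
```

```xml
<!-- unix.ditaval: build the UNIX deliverable by filtering out Windows content -->
<val>
  <prop att="platform" val="windows" action="exclude"/>
  <prop att="platform" val="unix" action="include"/>
</val>
```

The build would point the toolkit at unix.ditaval for the UNIX Space and a matching windows.ditaval for the Windows Space.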

While I was researching this blog post further, Lisa also added these details about the Spaces and their individual SKUs (Stock Keeping Units, or individual deliverables): “Building on this baseline set of spaces, each new SKU would add 1 to 7 spaces hosting 3 to 21 deliverables, depending on the complexity of the ditaval rules and the product. Obviously, the long pole in this system is ditaval. A more ideal implementation would probably be to render the correct content based on user preferences (or some other mechanism to pass the user’s context to the engine for runtime rendition). Or, a ditaslice approach where you describe what you need, and the ditaslice is presented with the right content. Certainly innovation to be done there.”

Creating a wiki table of contents from a DITA map

She creates a static view of the TOC from the DITA map as the “home page” of the wiki, currently generated by a DITA map XSLT transform that assigns sort IDs. She said they implemented a dynamic TOC based on the logical order of the ditamap by dynamically adding a piece of metadata to each topic – a sort ID set with a {set-sort-id} Confluence macro. The IDs are used to populate a page tree macro (the engine involved is Direct Web Remoting, or DWR, an Ajax technology). Currently, their dynamic TOC is broken due to a DWR engine conflict, which should be fixed in the next release. In the meantime, they are auto-generating a more static but fully hyperlinked TOC page on the home page of each Space. It’s a functional solution, not great for back-and-forth navigation, but it shows the logical order, which is pretty critical for a decent starting point.
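Based on Lisa’s description, the transform stamps each generated page with its sort ID via the macro; the exact parameter syntax here is my guess, not verified output:

```
{set-sort-id:0070}
h1. Configuring the server
(page content transformed from the DITA topic follows)
```

The page tree macro can then read those IDs to reproduce the ditamap’s logical order.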

Dynamic TOC created with sort-id attribute

DITA conref element becoming a transcluded wiki article

Another innovation she wanted to demonstrate was the use of DITA conrefs output as transclusions in the Confluence wiki engine, so that in the wiki, the transcluded content can’t be edited inside an article that transcludes it. I don’t think it quite behaved the way she wanted it to, but knowing it’s a possibility is exciting. Edited to add: This innovation really does work; Lisa simply was looking at the wrong content (she admits, red-faced). 🙂
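Here’s a rough sketch of the idea, with names invented for illustration. In the DITA source, a conref pulls in a shared note:

```xml
<!-- DITA source: reuse a shared warning via conref -->
<note conref="shared-warnings.dita#warnings/backup-first"/>
```

The transform can then emit Confluence’s {include} macro, so the wiki transcludes a single shared page at view time instead of copying its text into each article:

```
{include:DOCS:Shared warnings}
```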

Wikitext editor view of a conref referenced into a wiki page with a wiki macro

Burst the enthusiasm bubble, there are limitations and considerations

One limitation that I observed is that when you transform the DITA source to Confluence wikitext, there are macros embedded, so when someone clicks the edit tab in the wiki, they must edit in wikitext, not the rich-text editor, to make sure the macros are preserved. In the case of the Documentation wiki, they can instruct their writers to always use the wikitext editor. For the Services wiki, one attendee asked whether users prefer the wikitext editor, and Lisa believes they do. Someone running MoinMoin at their office said they finally just disabled the rich-text editor because they didn’t want to risk losing the “cool” things that they could do with wikitext. The problem at the heart of this issue is that if users really like the wikitext editor and do a lot of “fancy” wikitext markup (like macros), then another wiki user using the rich-text editor can break the macros by saving over in rich text. Edited to add: Lisa wrote me with these additional details, which are very helpful – “actually, the macros are preserved when in Rich Text Editor (RTE) mode. the problem is that it looks ugly as heck – and if the user is not techie, potentially confusing. the RTE does add all kinds of escape characters to the content – in a seemingly random way – and can negatively impact the formatting in general when viewing, but it doesn’t seem to affect our macros. However, if a user wants to use macros to spiffy up the content, then wiki markup mode is definitely recommended.”

If you’re interested in a copy of the case study, you can purchase it for $10 here:


White paper, Structured Wikis in Software Engineering

This white paper describes using Darwin Information Typing Architecture (DITA) and wiki collaborative authoring environments in concert to enable software development processes including Agile development.


Slideshow: Lombardi Wikis – a CenTex DITA UG panel presentation, posted to SlideShare by lisa.dyer.


Wiki for documentation

Stewart Mader has created a series of videos called 21 Days of Wiki Adoption.

Each video is short, encapsulated, and easily digested when you need a break. I’m really enjoying them, and the cool US map background behind Stewart.

Stewart Mader day 12 of wiki adoption video series - Documentation

Day 12 is Documentation. Great ideas there, involving authoring in the wiki and using the wiki engine to publish a PDF. I’m working on a blog entry describing authoring in DITA and outputting to a wiki, but it’s also nice to hear of the other way around.


Upcoming Austin XO User Group Meeting Saturday 2/23

Hey, that’s tomorrow! Sorry for the late post. I have reserved the Spicewood Springs library branch meeting room from 10-11AM Saturday February 23rd. I couldn’t get a weeknight reservation there, but we could work our way around north, central, and south pretty easily for future meetings. We’ll try it and see.

XO is fun

The library has a wireless connection, the meeting room is free, and it can hold up to 85 people. All Austin XO owners, kids, and interested people are welcome. It will definitely be kid friendly, and there may even be cupcakes.

You can check for updates, and RSVP by posting an update with the number of people (and XOs) that you’ll bring.

Please also send me ideas for a “theme” for future meetings. This one’s theme will be “communicating”. For example, I just learned that there’s a Jabber server set up for Austin. To set up your XO, open the Terminal activity and type:

sugar-control-panel -s jabber

I read the Dallas-Fort Worth meeting notes for ideas, and they had a wide variety of activities at their meeting. I also like the idea of showing an emulation of the Sugar interface, because I could display my laptop on an overhead for demonstrations.


How to be an Agile Technical Writer with a cool acronym like XTW

One reason I like the show Dirty Jobs so much is that Mike Rowe, the host, is so respectful and honest about the work that people do each day. I recently discovered that Sarah Maddox, a technical writer at Atlassian (makers of the Confluence wiki engine), has written two great posts about being an Agile technical writer, or an eXtreme Technical Writer (XTW). These posts on Agile technical writing offer wonderful windows into the work that technical writers are doing around the world. Plus, they offer some down-to-earth how-tos that make sense to apply in a modern technical writing career. If you’re thinking about technical writing as a career, check out these two posts for your research; considering the direction this career path is heading, I believe agility is one of the best skills you can bring to it.

  • Plus don’t miss the great photo art mashup in the agile technical writer part II, another excellent post that really describes what it’s like to write in the Agile development environment.

Here are some highlights from each that I could identify with:

  • Responding to IMs from all over. Carrying on multiple conversations intelligently is a gift.
  • Concerns about information overload – it’s daunting, but do-able. (Okay, funny side note – My typo for overload was “overlord.” Yipes. That can happen too.) My advice is to ride the serendipitous river of information. Sounds like hippy advice but somehow it works.
  • What a wonderful viewpoint of how the daily life of a technical writer has changed so greatly over the years. I listened to Linda Oestreich’s podcast about the direction the STC is heading at Tech Writer Voices, and one great quote from her was, “We’re not the technical writer from the 70s.” I’d say we’re not even the technical writer from the 90s.
  • Love the Swaparoo idea – similar to pair programming, but call it a swaparoo when you want to trade tasks with another writer to get cross-product knowledge.
  • Respond to customers through the varied means of communication offered to you in this awesome world of documentation. And if it doesn’t have to do with the doc, don’t meddle; pass it along to the support team.

Thanks Sarah, for sharing such a great “day in the life” slice for technical writers.

Authoring and delivery tools for the Agile Technical Writer?

I’ve had a question via email recently, asking something like, “What is the ideal toolkit for writing in an Agile environment?” Or, “What would you choose if you had to write in an Agile environment in order to be most effective as a technical writer?”

It’s tempting to actually try to answer that question with a straightforward response of one tool – but of course the answer is not that easy. The product and audience and company that you’re writing for all dictate the documentation deliverable with far more weight than the “manufacturing” process that is used to build the product. Sarah’s posts don’t directly mention their toolkit, but her “eat your own dog food” bullet point hints that the doc is delivered via their wiki engine. (Sarah, do correct me if that’s an inaccurate leap in logic.)

But, if the product you’re documenting isn’t itself a wiki, you’re going to need to evaluate tools. I borrow directly from Don Day’s editor evaluation heuristic as a methodology for evaluating a writing and publishing toolkit that fits an Agile environment. Evaluate a tool (no matter what you’re trying to deliver or how you’re authoring it) using “cost/benefit judgments on the features that mean most to your intended scenarios, business rules that need to be supported, and the willingness of your team to learn some new ways of doing things.” Well stated, Don, and whether you’re trying to find the right DITA toolkit or the right Agile toolkit, scenarios and business processes are quite useful. Anyone have great authoring or publishing scenario or business process suggestions for the XTW?
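One lightweight way to apply that heuristic is a weighted scoring matrix. The criteria, weights, and scores below are placeholders I invented to show the method, not a real evaluation:

```python
# Weighted cost/benefit scoring for candidate authoring toolkits.
# Weights reflect which scenarios and business rules matter most to *your* team.
weights = {
    "fits_agile_iterations": 0.3,
    "output_quality_pdf": 0.2,
    "output_quality_help": 0.2,
    "learning_curve": 0.15,
    "business_rule_support": 0.15,
}

# Scores from 1 to 5 per criterion; these numbers are illustrative only.
candidates = {
    "DITA + Open Toolkit": {"fits_agile_iterations": 4, "output_quality_pdf": 2,
                            "output_quality_help": 4, "learning_curve": 2,
                            "business_rule_support": 5},
    "Wiki engine":         {"fits_agile_iterations": 5, "output_quality_pdf": 3,
                            "output_quality_help": 3, "learning_curve": 4,
                            "business_rule_support": 2},
}

def weighted_score(scores, weights):
    """Sum of score * weight across all criteria."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

The value isn’t the final number; it’s that the exercise forces you to write down your scenarios and business rules before you fall in love with a tool.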


Can DITA train writers? Or does it require too much programming?

DITA for writers (content creators)

I just did a search for books for beginning technical writers, and also to investigate what books are being written for our profession and for others wanting to start in it. I came across a book called Writing Software Documentation: A Task-Oriented Approach that suggests three categories for writing:

  • writing to teach (for eager learners)
  • writing to guide (for reluctant users of the product)
  • writing to provide a reference (for experts who need only occasional support)

I immediately saw a connection to the three content types that DITA prescribes:

  • concepts to teach understanding
  • tasks to guide performance
  • reference to offer facts or lists of information

Because writers have to immediately place the information they want to record into one of these three types of information, they are being trained on how to write in a task-oriented, performance-based manner, via DITA. I am especially interested in this “training” for wiki authors and talked about the idea at our recent presentation at the Central Texas DITA User Group meeting.
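For readers who haven’t seen DITA markup, here is a minimal task topic; the structure itself pushes the writer toward step-by-step, performance-based content. The ID and text are invented for illustration:

```xml
<task id="set-sale-pricing">
  <title>Setting a sale price for a product</title>
  <shortdesc>Set a temporary sale price for a product on your site.</shortdesc>
  <taskbody>
    <steps>
      <step><cmd>Open the product record.</cmd></step>
      <step><cmd>Enter the sale price and its start and end dates.</cmd></step>
      <step><cmd>Save the record.</cmd></step>
    </steps>
  </taskbody>
</task>
```

There is simply no element in a task topic for a long conceptual digression; that material has to move to a concept topic, which is the training effect in action.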

DITA for publishers (formatters)

Recently a few techpubs bloggers have been talking about DITA and its weaknesses, such as a lack of online help outputs and how technically difficult it can be if you don’t already have a staff with pseudo-programming skills. Gordon McLean writes “DITA is not the answer,” and I think the question he is trying to answer is, “What is a single-sourcing tool we can use in our environment (which includes Technical Communications, Training, Pre-Sales, and Marketing) with our current resources?” Instead of DITA, it looks like he’ll go with Author-it.

Since I just this past year moved from BMC, which is still moving to DITA, to a small techpubs group that uses Author-it, I can understand his reasoning and agree with his business case assessment. The toolchain for DITA is very nearly there, but often a CMS-based approach has too much overhead for small companies. It can be cost overkill when you have few topics to contain.

Scott Nesbitt followed up with his post, “DITA’s not THE answer for single sourcing.” I think he’s spot on with the analysis “it’s difficult to get good PDF or online help from DITA without extensively customizing XSL stylesheets or passing DITA source files through tools like FrameMaker, Flare, or WebWorks.” One of his commenters said something about consultants smelling blood in the water, yikes. In other words, I think he meant that XML consultants knew how much customization would be desired and can have a feeding frenzy on the potential work possibilities. My guess is that the people who have been around XML for years know that there are still basic needs for output, and their experiences have shown them that nothing that is structured is an “out of the box” experience. So much of the success depends on your content to begin with.

I’ve found the same conclusions about the output in my experience. When you dig into single sourcing, be it with DITA or another tool (MadCap Flare, Author-it, FrameMaker, RoboHelp, or the Adobe Technical Communication Suite), the real business-case killer still seems to be: where can I get pretty PDFs that are formatted just as I like them? With DITA, one answer is to go get the Mekon FrameMaker plug-in for the DITA Open Toolkit. No XSL-FO knowledge required.

People love their tools to get their pretty PDFs or sleek online help systems. Plus, so many of the employers out there have a lot of content that already looks pretty nice in a specific tool. The legacy documentation may be one reason why DITA hasn’t helped our industry get away from tool love. Tech writers and their employers fall in love with tools. I’m not saying Gordon or Scott are tool lovers, but certainly some people they’re hiring will be. There is probably also an element of “if it ain’t broke, don’t fix it.”

DITA for all?

Sarah O’Keefe has a thought-provoking analogy in her comments on her post signed “DITA Dissident.” The analogy is that creating desserts using a frozen pie crust is one method of getting results. If a pretty PDF is your ultimate dessert, then for some, DITA is a bag of flour, meaning you’d better be a skilled baker if you’re going to use it for the best pie (PDF) ever. For others, DITA is a frozen pie crust that makes a perfectly good cherry pie (PDF) or apple pie (plain HTML) or chocolate creme pie (Eclipse help). Although isn’t the filling the content and the pie crust the DITA map?

Their conversation first started with Eliot Kimber discussing DITA’s use for narrative documents. Alan Porter talks about DITA use for narrative writing as well, but in a different line of thought in his post, Is DITA Just a Story?

All the posts I’ve linked to are enjoyable for me to read, to point to, and to think about. I’ve said it before, and I’ll say it again: I believe, along with others, that DITA has the potential to transform our industry. Just last night I said to the San Antonio STC group: today we all speak HTML tags pretty fluently. In ten years, will we all speak DITA tags just as fluently? “I wrote the shortdesc according to the guidelines and it works for the topic, but I am not sure if my conref target is going to be there every time. I guess I should rewrite the concept topic.” Heed the warnings and experiences of others before making the leap to topic-oriented single sourcing, or your expectations and those of your customers may not be met.


Upcoming wiki talks in the central Texas area

Next week I’m presenting at the Alamo STC Chapter, giving a talk titled “A Technical Writer’s Role in Web 2.0 — Wiki-fy Your Doc Set.” It’s at the Igo Library in northwest San Antonio and you’ll want to refer to their website for directions. It’s Tuesday February 12th with the presentation starting at 7:00.

I plan to update the presentation from the last time I gave the presentation at the Austin STC chapter and I’ll post the slides to slideshare when they’re ready. I’ll take it out of Google Presentation format and go with PowerPoint since the 800 x 600 display was pretty dismal using Google Presentations. It’s too bad because sharing that presentation was so easy.

The week after next on Wednesday February 20th, the Central Texas DITA User’s Group is continuing the wiki panel discussion we started in January with three more speakers talking about their wiki experiences, including one wiki that uses DITA as source. Here are the presenters:

The networking starts at 7:00 with the panel starting at 7:30. It’s at the Freescale campus on Parmer and directions are available on the DITA wiki. I’m looking forward to this presentation as an audience member as well to learn about more wiki best practices and DITA conversions to wikitext.


Info architecture work that sometimes makes my head hurt

  • Most info architects agree – planning for reuse is harder than conditional text. But even conditional text can be difficult, especially if there are multiple conditions that overlap. The winner of the “most conditional text” contest was this commenter on my talk.bmc post with 64 conditions in a FrameMaker document.
  • I still struggle with topic authoring – but I’m finally “over” separating content from format. Whew! That only took a couple of years. This week I’m chunking information using the rule of “seven plus or minus two.” That doesn’t usually make my head hurt, until I start coming up with all sorts of scenarios (maybe the user wants to set up their web site pricing for a DVD sale in the month of March!) and then I find myself writing too many topics.
  • I also read Jon Udell’s great post about potential reasons why social information management hasn’t really gone mainstream, Discovering versus teaching social information management. I think my own tag merging and pruning best practices need work. My favorite lines are from the comments, such as “people need to both realize that they can do that database query, and that they can refer to the results using a stable URL. I’m coming to believe that both those operations are still way beyond the capabilities of mainstream web users.”
  • And finally, inline linking versus grouping links together. Usability studies and experts disagree on the correct way to link. I’m not sure I have the answers yet either. Better keep studying and linking.

What are some aspects of information architecture that are making your head hurt lately?

work writing

Find your user’s vocabulary and use his or her key terms as keywords

I just used this “trick” to find out what job titles are relevant for some of the task analysis we’re doing while writing new materials. I think it helps you get into your user’s shoes and also realize the value that your software or hardware product brings to those who decide to become an expert user with it. Here is an example – plug in your keywords and see what you find out about your users.

  1. Go to a job search aggregator site.
  2. Type in the name of the main product you’re documenting. In my case, it’s a software product called iMIS.
  3. Fill in a location that you think would have a lot of interest or activity around your software product. For my product, that location is Washington, DC.

Voila – look through the search results and pick out 5 keywords to use as index entries, as role or persona names the next time you do task analysis, or as terms to sprinkle liberally in the headings of your online documentation to aid findability.

Example job titles from my scenario: database administrator, project leader, project coordinator, manager, accountant, administrative assistant, and a sprinkling of director.
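If you pull back a long list of results, a short script can do the tallying for you. This is just a sketch; the titles list below stands in for whatever the job board actually returns:

```python
from collections import Counter

def top_keywords(job_titles, n=5):
    """Tally individual words across job titles and return the n most common."""
    words = []
    for title in job_titles:
        words.extend(word.lower() for word in title.split())
    return [word for word, _ in Counter(words).most_common(n)]

# Titles like the ones the iMIS search turned up
titles = [
    "Database Administrator", "Project Leader", "Project Coordinator",
    "Accounting Manager", "Accountant", "Administrative Assistant",
    "Project Manager", "Director of Membership",
]
print(top_keywords(titles))
```

Running it against real search results would surface the role names worth reusing as personas or index entries.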

If I were to subscribe to the RSS feed for this search, I’d call it yet another use for RSS feeds. For me, though, it’s a nice one-time check on the types of jobs people are trying to do with the software product I document.

Try it and let us know what you find, especially if any of it is surprising to you.