I attended the central Texas DITA users group meeting last night, and wanted to write up some notes. We had two speakers share their thoughts after attending two related conferences this spring.
Bob Beims from Freescale shared his thoughts on attending the DITA 2006 conference at North Carolina State in Raleigh, NC, the first conference of its kind. He heard there were about 185 attendees, and he was pleasantly surprised at the range of users he met there: people from medical companies with products for nurses, from the financial industry, from power and electric companies, plus the usual hardware and software crowd. He had a couple of great quotes from different sessions. How about “This is not rocket science… it is really bow and arrow stuff that has been implemented with technology,” from Michael Priestley of IBM, or “there’s never enough time and money to do things right, but always enough time and money to do things twice!” from Bernard Aschwanden of Publishing Smarter. I personally liked “Take the leap (or fall off the cliff!)” from Bob himself.
Bob said he realized that DITA solves some topic orientation problems that our industry has faced for decades. He was pleased at the pace at which the DITA Technical Committee is churning out releases… 1.1 due out soon, and 1.2 in the next nine months. He feels that the OASIS leadership proves that DITA is not “just an IBM thing.” He thinks DITA maps should be awarded innovation of the year. He said, if you hate the limitations of FrameMaker conditional text, you’ll love the future of DITA with key values (DITA proposed feature #40), which would allow boolean queries against conditions for output. A conditional text tags contest ensued: the opening bid was a document with 13 conditional text tags, and in the end someone with a Frame document carrying 39 conditional text markers won. 🙂

I appreciated his comments on the two strata of tools: either very expensive, very functional, and easy to use, or (almost) free, fairly functional, but you’d better be a gear head to use ’em. He sees a definite lack of conversion helpers for legacy content. Of course, with those words, a lively discussion ensued about transforming content versus just getting the text out by converting. As nearly everyone experienced in unstructured-to-structured conversion projects discovers, a real human has to figure out how to make topics out of the text that comes from a conversion. People who had done conversions said that running Perl on MIFs exported from Frame does the trick for getting the text out, but in some cases you’re better off starting from scratch so you can plan for reuse and true topic orientation. Still, a conversion script (or set of scripts) at least turns your existing text into a structured starting point.
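Feature #40 was only a proposal at the time, but the appeal of boolean queries over conditions is easy to sketch. This toy Python (the dict-based attributes and query tuples are invented for illustration, not any real DITA toolkit API) shows the kind of combined filtering that flat FrameMaker-style show/hide condition tags can’t express:

```python
# Toy sketch of boolean condition filtering over content metadata.
# Attribute names and the query structure are hypothetical.

def matches(attrs, query):
    """Evaluate a boolean query against an element's condition attributes.

    A query is ("attr", name, value) or ("and"|"or", [subqueries]),
    so conditions can be combined rather than toggled one tag at a time.
    """
    op = query[0]
    if op == "attr":
        _, name, value = query
        return attrs.get(name) == value
    _, subqueries = query
    results = [matches(attrs, q) for q in subqueries]
    return all(results) if op == "and" else any(results)

elements = [
    {"text": "Linux install steps", "attrs": {"platform": "linux", "audience": "admin"}},
    {"text": "Windows install steps", "attrs": {"platform": "windows", "audience": "admin"}},
    {"text": "Overview", "attrs": {}},  # unconditional content
]

# Keep unconditional content plus anything matching "linux AND admin".
query = ("and", [("attr", "platform", "linux"), ("attr", "audience", "admin")])
kept = [e["text"] for e in elements if not e["attrs"] or matches(e["attrs"], query)]
print(kept)  # ['Linux install steps', 'Overview']
```

The point is the `and`/`or` nesting: a single query can select “Linux content for administrators,” which with single-valued condition tags would take a combinatorial explosion of tags.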
Bob also said that one thing he has learned from many presentations, inside and outside the DITA conference, is that you must develop an Information Architect role or you’ll end up chasing your tail when it comes to truly gaining benefit from a topic-oriented architecture for your information.
What does Bob see as next for DITA? He’d like to see a lower bar for entry. Currently the entry “fee” includes a lot of time for preparing your content and training your writers, the skills necessary to participate are high, and there’s money required for a bat and ball. He thinks there can be integration with non-DITA XML information streams, especially for those who interface with manufacturing industries. His example from Freescale’s perspective was the RosettaNet effort, where hardware manufacturers can offer “product and detailed design information, including product change notices and product technical specifications” via XML specifications. Incorporating that with DITA topics would help them build their information deliverables. He also noted that the DITA community might be a small one, but it is definitely composed of bleeding-edge technology and technologists.
Next, Paul Arellanes, an information architect at IBM, gave his impressions of the Content Management Strategies 2006 conference in San Francisco. He saw a definite eagerness to adopt and use the DITA Open Toolkit, as well as eagerness to reuse, reuse, reuse. His talk, Taxonomy Creation and Subject Classification for DITA Topics, was highly attended (standing room only) and very well received. He also stressed the importance of training on topic orientation before going to XML. He has a programming background and likens DITA to object-oriented documentation. He’d like to see code reviews of how the tags are used and whether they’re used correctly, and he got a couple of good ideas at the conference for how to build code reviews into the document review cycle; I’ll talk about those in the next paragraph. Paul talked about reuse and asked whether it’s a boon or a curse. Can you reuse a topic if you can’t find it? What if the topic was never designed for reuse in the first place? How do you design for reuse? He’d like some best practices for reuse.
He said that implementing DITA is a chance to change your documentation processes: going to topics with a fresh start at the content is more successful than a legacy conversion, because you can build and design for reuse from the beginning. His takeaways are that we need best practices for reuse and that he’d like to build source code reviews into the document cycle; he found a cool method for doing that, using an editing process that runs the files through a syntax checker and then color-codes the output with CSS. These are the common errors you could find and mark up that way. Often these kinds of syntax/markup errors happen because the writer is tagging for looks, not for the meaning of the content, but they can also come from legacy conversion.
- placement of index entries
- sections that should be separate topics
- use of definition lists to create sections
- ordered list tags instead of using step tags
- lists of parameters in ordered list tags instead of a parameter list
- use of unordered list tags with bold instead of a definition list
- use of <ol> or <ul> instead of substeps or choices element in a task topic
- use of <filepath> for variables and terms
- <menucascade> not used
- <uicontrol> not used
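Several of these checks can be automated before a human reviewer ever looks at the file. Here is a minimal sketch in Python rather than CSS (the rule table and function names are my own invention, not Paul’s actual checker), flagging generic `<ol>`/`<ul>` markup inside a task topic where step or choice elements belong:

```python
# Sketch of an automated tagging review for one class of error from the
# list above: generic list markup inside a DITA task topic. The rule
# table and hints are illustrative assumptions.
import xml.etree.ElementTree as ET

SUSPECT_IN_TASK = {"ol": "steps or substeps", "ul": "choices"}

def review_task(xml_text):
    """Return (tag, hint) pairs for suspect markup in a DITA task topic."""
    root = ET.fromstring(xml_text)
    findings = []
    for elem in root.iter():
        if elem.tag in SUSPECT_IN_TASK:
            findings.append((elem.tag, "consider " + SUSPECT_IN_TASK[elem.tag]))
    return findings

topic = """<task id="t1"><title>Install the driver</title>
<taskbody><context><ol><li>Download</li><li>Run the installer</li></ol>
</context></taskbody></task>"""

print(review_task(topic))  # [('ol', 'consider steps or substeps')]
```

A real checker would walk the whole DTD context (an `<ol>` in a concept topic is fine), but even a crude pass like this gives reviewers a colored-in hit list instead of a blank page.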
Paul also has good ideas for the future, including a troubleshooting (problem analysis and determination) specialization based on the task topic, and perhaps a way to pull DITA elements out of a topic and plug them into interactive content using AJAX. He was pleased to see that the skill set among attendees is pretty high, including XML, XSLT, SAX, FOP, CSS, and Ant build tool skills.
Interestingly, as far as our group could see, Adobe was not represented at the DITA 2006 conference, even though they have a group implementing DITA for solutions documentation.
If you’re like me and didn’t attend the DITA 2006 conference, you might enjoy (as I did) the transcript of Norman Walsh’s talk. Norm is the chair of the DocBook Technical Committee, and DocBook and DITA are constantly pitted against each other as solutions to the problems information developers face.