Monthly Archives: August 2006


Seducing users to learn by reading the manual

Thought provoking post over at Creating Passionate Users — shiny slick seducing user manuals

From a recent post by Kathy Sierra at Creating Passionate Users, “What if instead of seducing potential users to buy, we seduced existing users to learn?” I don’t really like the title of her post, which is Why marketing should make the user manuals! because I do think that skilled and talented technical writers can be organized in different locations in the company. But I do like the heart of her question above, about convincing users to learn and care about the technical details of a product. Armed with technical knowledge, our products and accompanying manuals should help us kick butt at our jobs or hobbies.

Manuals as sales tools

She also talks about arming your sales force with these slick glossy manuals and using manuals to sell your product. I’ve discussed a “Tech writers as sales reps?” article and dissected and questioned the best practices in that article in a series of posts. Yes, well-done technical manuals can help close the deal.

Yes, there’s real skill to this tech writing if you want it to be good

I think that good tech writing is a highly-valued skill and includes all the components of design, layout, conviction/convincing, inspiring, motivating, and teaching that the ultimate manual should include. Since I’m motivated to help everyone realize the value of good manuals for selling and sustaining products, I have written commentary on the best practices for tech writing from the “Tech writers as sales reps?” article, including Best practices in tech comm for fit in the organization, Best practices in tech comm for customer feedback, and Best practices for Document Management Systems.

A great example of a “slick” manual

Many tech writers are passionate about their jobs and just need the opportunity to let that passion spill over into their writing, conveying the technical information that inspires users to evangelize about the product they’re using. One such interesting example of a slick manual showed up in the comments on Kathy’s post — a link to a user manual for a microphone. Now, the opening style of the writing has a few too many exclamation points for my taste as a writer and reader, but it is a neat example of bubbling enthusiasm for technical topics and a product that you’ll get the most out of when you thoroughly understand the technical details. Here’s the opening paragraph that is enjoyable but not necessarily technical.

We know you hate to read manuals. So do we! But because the Snowball is such a unique recording tool, we really hope you take the time to familiarize yourself with its features and try the suggested application tips that are designed to help you get the most out of the Snowball. You might just learn something too! With proper care and feeding, the Snowball will reward you with many years of recording and performance enjoyment and it won’t end up as a pool of water on your desktop! Now on with the show. (No refrigeration necessary.)
This introduction sets expectations that this is not a typical user manual while also offering very compelling reasons to continue reading. So I did.

Next, I was delighted with the explanation of the technical design of the mic. A little twinge of early elementary school target audience but not too pedantic.

The Snowball uses two separate capsules to offer you a wide variety of applications. The first capsule generally “hears” what’s right in front of it in a fixed cardioid pattern with a neutral sonic signature (engineering geeks call this unidirectional). The second capsule generally “hears” everything around it with a brighter overall sound (engineering geeks call this omnidirectional).

The best part of this manual is that it offers scenario information for each type of sound that you might be capturing with the mic. Here’s the text from the Strings section.

The Snowball is an excellent choice for miking all members of the bowed string family. In general, the diaphragm should be angled toward the instrument’s bridge to pick up a blend of body resonance and bow sound. On bass and cello, placement from 3 to 6 inches in front of the bridge is usually ideal. For violin and viola, it is preferable to position the microphone 1 to 2 feet above the instrument. Angle the diaphragm toward the bridge for more bow sound and low tones, or toward the tuning pegs to capture a more diffuse, brighter sound.
Great practical advice with immediate steps to take to troubleshoot certain problems. That’s good tech writing to me.

Back to the heart of the matter… bridge the gap

I think that my main point on this topic of blurring the line between the stereotypical “boring black and white” manual and “enticing full color” marketing material is that while it may seem like there’s a dividing line between manuals and the rest of the docs that go out with a product, that chasm should be bridged early and often. Keep the conversations going between your users and writers as well as your writers and sales and marketing to continue to help users kick butt after the purchase.

Kathy says she’ll follow up with a Part II to this post and I look forward to reading more. She has a very engaging and enjoyable blog, one that helps you think about passion in new ways.


Facing the Dell laptop with a Sony battery recall… can a CMDB help?

Determining how a CMDB could help the business with a recall like the Dell laptop with a Sony battery fire hazard

So, was your laptop affected by the recent Sony battery recall? I have a Dell Latitude D600 and had to check the serial number, but fortunately my battery number did not match those listed on the recall website where you check whether you need a new one.

Now, if a CMDB had contained the serial number of my battery, could I have been saved that extra step? It’s a question of granularity for the CMDB – when would you kick yourself for not going more granular on your CMDB? And is it possible to think of all scenarios such as this, especially for all hardware parts that go into laptops and servers and desktops? I sincerely doubt it’s worth the trouble… until something like this recall comes up and then I wonder.

It seems like entering all that information into your CMDB is not worth it for these rare exceptions when you want the information. Until the information could be automatically discovered somehow, it’s just as easy to have your end-users look it up for themselves. If the serial number information was available from the manufacturers or through discovery, it could be a federated attribute in an Asset Management database rather than stored in the CMDB. But, for a level of granularity that helps you pinpoint a subset of your entire collection of hardware, you could use the CMDB to help you determine who might be affected, based on who has laptops or who has Dell laptops with the exact model numbers that are affected. This sounds like a sensible and balanced approach.
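A rough sketch of that balanced approach: keep model-level granularity in the CMDB, and use it to narrow the recall down to a short list of people who should go check their own battery serial numbers. The record fields and model names below are hypothetical, just for illustration.

```python
# Hypothetical CMDB records at model-level granularity (no battery serials).
cmdb = [
    {"owner": "anne", "type": "laptop", "vendor": "Dell", "model": "Latitude D600"},
    {"owner": "david", "type": "laptop", "vendor": "Dell", "model": "Inspiron 6000"},
    {"owner": "steve", "type": "desktop", "vendor": "HP", "model": "dc7600"},
]

# Models named in the (hypothetical) recall notice.
recalled_models = {"Latitude D600", "Inspiron 6000"}

def possibly_affected(records, models):
    """Return the owners whose hardware might be affected, so they
    can look up their own battery serial numbers on the recall site."""
    return [r["owner"] for r in records
            if r["vendor"] == "Dell" and r["model"] in models]

print(possibly_affected(cmdb, recalled_models))  # ['anne', 'david']
```

The point is the trade-off: the CMDB can’t answer “which batteries are recalled,” but it can cheaply answer “who should go check.”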

How about you? Any ideas on the practicality of granularity for these recall situations? What is the next step — Change Management for tracking all the replaced batteries?

Updated to add: Here’s a link to a relevant podcast with Tom Bishop, where he talks about the relativity of data. Thanks to Ynema’s comment I can get even more familiar with the best approaches to these types of CMDB design questions.


DITA Open Toolkit now has a user guide

Just released last week: the DITA Open Toolkit now has its own User Guide

Don Day just announced the release of the DITA Open Toolkit User Guide, which you can download from the Open Source website as a PDF or as HTML. Naturally, it was authored using DITA. My tiny contribution to some of the troubleshooting information pales in comparison to the amount of work that the team of Anna van Raaphorst and Dick Johnson did to get all the content into topics and tested thoroughly. It supports up to the most recent release of the Toolkit, which is 1.2.2.

Don has a post on the dita-users Yahoo Group that talks more specifically about the efforts, other complementary documentation (like the DITA User Guide from the folks at Comtech and the version 1.0 language specification), and a call to continue to contribute and refine the content.

Great going, open source community! I expect to use the dickens out of this Guide and will contribute what I can as we continue with our DITA implementation work.


I won’t be at BMC UserWorld

But I have a really good excuse…

I’m cruising through the BMC UserWorld session catalog, looking at the sessions I’d like to attend but can’t. Fortunately, it’s for a very good reason. I’ll be a little bit too pregnant to board the plane for the trip home to Austin from San Francisco!

The main conference starts August 29th and goes until September 1st. I’ll be 32 weeks pregnant on the 30th of August, and both my doctor and the airlines discourage airplane travel after 32 weeks. It’s our second child so I do have some experience with the whole pregnancy scene, and completely agree not to board a plane at that point in the gestation period.

The BMC UserWorld website itself is a treat, with videos of the hosts of the various tracks. David Wagner has a blog at talk.bmc, and David also has a cool video on the UserWorld site. The video style is like iPod styling, with a dual-chromatic tint for the entire video. The effect is rather cool.

Speaking of iPod-styled images, here’s our first born son dancin’ with a photoshopped iPod. I used this graphic for our holiday newsletter this year, following these instructions in a tutorial on photoshop Lab. The second baby on the way is also a boy, so we’re looking forward to lots of fun and adventures!

Enjoy BMC UserWorld and let me know what you learn. I plan to have lots of guest bloggers while I’m on maternity leave in a few more months, so keep an eye out for some new voices and ideas on my blog this fall and winter.


Using the DITA catalog for your specializations, creating a Public ID

Thought our discovery might help you as you specialize DITA

At BMC we are dusting off some DITA specializations that we did about two years ago. We were using the non-OASIS release of DITA at the time, and our XML guru discovered that where the DITA dtd, mod, and entity files once said “IBM,” they now say “OASIS.” So all of our specializations had to be changed so that the entity declarations had //OASIS// instead of //IBM//.

But even with those changes, I still couldn’t get ant to build my DITA map files. No matter what relative path I put in my topic file pointing to the DTD, or where I put the topic XML files relative to the DTD directory, ant refused to find the DTDs, giving me errors like “Cannot find \dtd\tBasicConcept.dtd” or “Cannot find C:\DITA-OT1.2.2\projects\dtd\tBasicConcept.dtd.” Originally, I thought the problem was that ant looks relative to the ant basedir directory (it’s set to C:\DITA-OT1.2.2 on my machine) instead of relative to the topic file itself. However, I learned that public identifiers are your friends, and you can avoid these relative path problems via the DITA catalog file in the DITA Open Toolkit.

Using Public IDs

I didn’t actually discover this helpful method of tracking DTD references. Our XML guru and information architect discovered that we needed to have entries in the catalog-dita.xml file in order for ant to find the specializations. That addition allows us to use public IDs instead of absolute or relative paths to the DTDs in our topic XML files. Whew!

So, in my topic files, I can refer to our specialized Concept topic, “tBasicConcept.dtd” (no relative or absolute file path) and as long as the catalog-dita.xml file has these entries for the publicIds, the DITA Open Toolkit finds the DTD and builds happily.

Here are examples of the catalog entries:

<public publicId="-//BMC//DTD DITA topic BMC Basic Concept//EN" uri="bmcproc/tBasicConcept.dtd"></public>

<public publicId="-//BMC//ELEMENTS DITA topic BMC Basic Concept//EN" uri="bmcproc/tBasicConcept.mod"></public>

Here is an example of the reference in the XML topic file:

<!DOCTYPE basicConcept PUBLIC "-//BMC//DTD DITA topic BMC Basic Concept//EN" "tBasicConcept.dtd">
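Conceptually, the catalog is a lookup table from public identifiers to local files, which is why the DOCTYPE no longer needs a usable path. Here’s a minimal sketch of that resolution step (my own illustration of the idea, not the Toolkit’s actual resolver code):

```python
# Mirror of the catalog entries above: public ID -> local file.
catalog = {
    "-//BMC//DTD DITA topic BMC Basic Concept//EN": "bmcproc/tBasicConcept.dtd",
    "-//BMC//ELEMENTS DITA topic BMC Basic Concept//EN": "bmcproc/tBasicConcept.mod",
}

def resolve(public_id, system_id):
    """Prefer the catalog match; fall back to the system identifier
    from the DOCTYPE only when the public ID is unknown."""
    return catalog.get(public_id, system_id)

# The "tBasicConcept.dtd" system ID in the DOCTYPE never has to be a
# working relative path, because the public ID lookup wins:
print(resolve("-//BMC//DTD DITA topic BMC Basic Concept//EN", "tBasicConcept.dtd"))
```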

Whoops, the catalog can be overwritten

After a little searching on the dita-users Yahoo Group, I did find the following caveat about altering your catalog-dita.xml file directly. From Deborah Pickett in this message: “Note: catalog-dita.xml is a generated file. It is overwritten when you run the ‘integrator’ task. If you find your changes are being removed, put them in catalog-dita_template.xml instead.” I think you only run the integrator task when you are installing plug-ins, and so far we haven’t installed plug-ins in our Open Toolkit environment. So our modified catalog file is safe for now.

Separating your authoring environment from your processing environment

Now, if you were really paying attention to the catalog entries, you’d notice we’re using a “bmcproc” directory for our specializations. In our environment, we have two sets of specializations by design. One is a processing specialization, and that is the one I’m using for the ant builds. The other set of specializations is a “bmcauth” directory, and we have slightly stricter authoring standards in that set of specializations. But, we want the processing to work with the out-of-the-box transforms supplied by the Open Toolkit, so we designed more DITA-specific processing specializations.

An example of the difference between our authoring environment and the processing environment is that our BMC definition of a section is more restrictive: it disallows the mixed PCDATA content model that non-specialized DITA allows. This stricter definition prevents you from writing a section without tagged markup in our authoring environment. Where DITA allows “#PCDATA | %basic.block; | %title; | %txt.incl;”, BMC allows just “%title;, (%basic.block; | %txt.incl;)+”, and hopefully our content will be more semantically tagged because of it.
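To make the difference concrete, here is a sketch of what each content model accepts. The element content is invented for illustration; only the content models come from the DTDs.

```xml
<!-- Valid in base DITA (mixed PCDATA allowed), but rejected by the
     stricter BMC authoring model because the text is untagged: -->
<section>Restart the server after you edit the configuration file.</section>

<!-- What the BMC authoring model requires: a title, then block content: -->
<section>
  <title>Restarting the server</title>
  <p>Restart the server after you edit the configuration file.</p>
</section>
```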


Learning a lot by reading through the ITIL glossary

I’m impressed with the level of detail the OGC has for their vocabulary surrounding ITIL

In working on a glossary of terms for Business Service Management, I’ve discovered the ITIL glossary, version 3.0, and I’m reading it with interest. It’s a writing task taken very seriously with wonderful cross referencing and consistency of terms and usage. I haven’t yet found a contradiction or hole in logic so hats off to the writers who put it together. The only additional feature I’d like is clickable cross-references to the other terms used within a definition.

With the terms of use, you can reference the definitions as long as you’re not using the glossary specifically to sell your own products or services, and you must use the term accurately. So it’s a wonderful resource and source of content.

One item I found very interesting while studying the glossary: IT Service seems interchangeable with any Service. However, that interchangeability is not specifically spelled out in the IT Service definition. In the Service definition, they call Service synonymous with IT Service. But there’s also the Business Service definition, which gives examples such as financial services. So the generic term “Service” is never related to, say, banking services or financial services, but “Service” is always related to IT Services.

Confused yet? I confess I am a little confused as well, especially if I want to use Business Service Management principles to interrelate banking services and IT services such as ATM software (firmware?) that runs on Linux. At that point, I guess it’s all about Service, no matter what type you’re talking about. I’ll have to ask Peter Armstrong where he draws the line for his definitions of service management.

What criteria do you use for your definitions of IT Service in contrast to Business Service?


When user manuals cause a recall

A postcard insert will be the remedy for an incorrect hotline listing

I wonder what the tech writer’s explanation is on this Honda and Acura Owner’s Manual Correction. Any tech writers for Honda who want to let us know the inside scoop? The story says that the National Highway Traffic Safety Administration has recalled over a million cars and motorcycles because the manual contains incorrect information for NHTSA’s vehicle safety hotline. Reuters reports that it only affects 2006 and 2007 models. They’re sending out postcards to dealers and owners with the corrected information.

I own a 2000 Honda Accord, so I looked at the manual just to get an idea of how much text is affected. In mine it appears in the last column of one page, and it’s only about nine lines. The paragraph probably measures less than 3 inches by 3 inches.

In looking at the text to be corrected, I was reminded of the great sticker days from my grad school experience. As a grad student I worked on instructional materials for science teacher workshops. Once, when a typo got through our many layers of editing, we had to print stickers and then peel and stick them by hand on a brochure to cover up and correct that typo. I don’t remember exactly how it got through, nor do I recall what we were correcting, but I sure do remember putting stickers on hundreds of brochures.

I know that there’s hardly any fair comparison between a brochure for science teachers and critical safety information for vehicles, but I do know incorrect information can make its way into any deliverable, and we’re always on the lookout for those last-minute corrections. Sometimes our best efforts are thwarted when something makes it to print. Ah, for the correctability of online doc… but wait, there’s always Google’s cache or the Wayback Machine, which we can’t just put a sticker on (although you can request that Google remove a URL if you need to). Plus, once a product ships with integrated online help, there’s no way to correct the help content other than a reinstall.

What are some of your favorite correction memories that you’d like to share? Postcards? White-out? Stickers? Any more creative cover-ups?


Blogger podcasts are now live

As part of a blogger series, you can listen to me talk instead of reading what I write

Always interested in trying the next cool communication technology, I jumped at the chance to record a podcast when asked. As a result, my first podcast is live now. It’s about a half hour long, but as you will soon hear, I talk too fast. Not that fast talking makes it go any quicker, but it is crammed full of connections. Ynema Mangum came up with all the interview questions, conducted the interview, and did a great write-up as well. Thanks, Y!

I talk about why I like to blog, how tech writing helps with my blogging, wikis and tech doc, integration doc, system administration, how RSS is like Tivo to me, how enthusiastic I am about DITA, and my excitement for OPML.

Anne Gentle podcast, Exploring Information Technology

Please do listen to our casual and fun conversation and let me know what you think.

Now I have to go listen to Steve Carl’s podcast, and you should too!

Evaluating XML editors for DITA

Notes from the July 2006 Central Texas DITA User Group meeting

At this month’s meeting, Don Day discussed “More than just another XML editor” to help us all with editor evaluations that are specific to DITA support. Our summer series of topics has been centered around tools, and next month a Quadralay rep will present “something cool.” Quadralay, the folks who bring us WebWorks, is here in Austin, so we’re lucky to have some vendors in the neighborhood.

Evaluating DITA editors

Don talked us through an editor evaluation from start to finish. When they’re available, I’ll link to Don’s slides, but I’ll also do a notes dump here. These notes are my thoughts, and not a summary of Don’s presentation.

He stepped us through the classes of editors and also worked his way up to the types of use cases that enterprise users are looking for. I found the use cases very helpful. Yes, there are DITA-specific features to keep in mind as well. But installation and configuration, customization for writers, suitability for your company’s strategic format, usability, and productivity gains are the right considerations. It’s also good to hear from someone who has been working in an XML editor for a while and can tell you what to look for next, once you satisfy your first few rounds of requirements (for example, the editor has to validate XML; that’s definitely a first-round requirement).

After validation, what sets an XML editor apart? For me, the next level is entity support. For DITA, this support is all about how usable the conref feature is for reusing parts of content. As an example, if you want your writers to use a standard list of product names, you can use conref to bring in only the correct names from the master list. (Think of FrameMaker variables.)
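For example, a shared topic could hold the master list of product names, and each writer pulls in a single name by its ID with conref. The file names, IDs, and product name here are hypothetical:

```xml
<!-- In a shared topic file, productnames.dita, with topic id "prodnames": -->
<ph id="suite-name">Example Product Suite 6.0</ph>

<!-- In a writer's own topic, the name is pulled in by reference: -->
<p>Install <ph conref="productnames.dita#prodnames/suite-name"/> before
you configure the server.</p>
```

Change the name once in the shared topic and every referencing topic picks it up on the next build.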

The next level is the resource manager: how well does it handle graphics and links to external documents? Let’s definitely not make that a pain when it’s straightforward in other writing tools.

After that, consider which views your writers can see (structured, tags on or off) and try to anticipate what your writers will want in the views. For me, that’s not easy, because every writer will want a different view, but customization is definitely a needed feature. Does it have pre- and partial-rendering views (or would you rather it didn’t, to help train writers to get away from output and format and focus on content)?

Additional pluses include native (or plugin) viewing of notation or namespaced data, such as SVG graphics. For hardware companies I would imagine this is a must have. For software companies, I’m still investigating how important this feature is – I imagine it’s not a deal breaker for us.

The next plus he named was the suitability of user interface features to user assistance content authoring. My guess is that this is a huge one for most writers – does the editor feel like an author’s tool or a programmer’s tool? Now, interestingly, while I was drafting this post, I read The Content Wrangler’s great summation of his recent interviews with people doing DITA implementations. It’s a top ten list — 10 DITA Lessons Learned From Tech Writers in the Trenches. My favorite gems are from number 9: Don’t Fall In Love With Software.

  • Far too many technical writers fall hopelessly in love with an authoring tool for no apparent business reason.
  • Software love affairs are good for vendors because they convert regular, ordinary writers into mindless, unpaid software evangelists.
  • Love affairs of this sort are bad for organizations that employ technical writers because they prevent us from asking questions about our own choices, our own motivations, and the impact our personal preferences may have on the organizations for which we work. Love is an emotional thing, while business is about return on investment, profit and longevity.

This is a great post, one that I will read and re-read and tell others to read as we go through more planning for our structured authoring projects.

One item that Don mentioned that I hadn’t thought of before this talk is whether the editor supports macros. (However, I also think that macro support might fly in the face of the previous “author’s tool or programmer’s tool” question.) His example macro was a cut and paste across multiple docs with a save at the end, saving the writer a lot of time by using a simple recorded macro.

Another good consideration is the viability of the company that makes the editor you like. Will service and support be around in the years to come? That’s always a good question to investigate while testing out software to purchase.

Does the editor match your processing framework (or can it be made to do so)? On an enterprise level this is really important. It has to be able to get the docs in front of customers in a sensible manner without a lot of intervention (in my perfect world).

Typical authoring scenarios

Now, to me, these are authoring scenarios straight out of IBM, so your company may have different scenarios in mind. I have a few more I might add as well that I’ll be evaluating.

  • Start a new topic. This one is huge for people who are accustomed to the ease of templates. How about starting a new Map though? We’re one of the few companies I’ve heard of specializing map files so that writers just have map templates that they’d fill in with specific topics. We might back off that idea as it causes a lot of specialization work, but it’s a scenario I’d like to try out.
  • Clean up a topic newly migrated. This one is definitely a biggie with thousands of migrated topics, youch. But, if you only plan to write new content as topics, this scenario might not be a high priority.
  • Revise existing already validated content. Yes, this scenario can make or break an editor’s ease of use. How difficult is it to pick through to discover where you broke your perfectly valid document (and how much flex does the editor give you while you try to organize your structure and sentences?)
  • Create conrefs and links. I think of writing a glossary using conrefs to a giant master glossary and just picking and choosing the terms that matter for your particular deliverable.
  • Authoring metadata, then hiding metadata that you don’t want to read in the topic as you author. I think of the prolog in a topic with things like authors and versions, info that might get in the way of readability as you author.
  • Inline alternative content in place and the ability to hide it. I think of index entries cluttering up your nice content while you author.
  • Pre-rendered views and partially rendered views. With this scenario, I think of whether you want to be able to view headings in a certain font or size, and to view other formatting renditions.

Let’s talk about DITA-friendly features

DITA-friendly features include awareness of DITA’s approach to topics, such as specialization awareness and conref awareness. Don especially feels that CSS styling and XSLT styling, both W3C standards, are a high priority for DITA friendliness. Ease of use for topic-length content and documentation is another DITA-friendly item. Specifically for CSS, a nicety is substring matches on class attributes. Proprietary styling isn’t exactly in the best interest of a DITA effort, where part or most of the goal is reuse (not just of your topics but of the processes that make the topics).
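The class-attribute nicety deserves a quick illustration. DITA markup carries each element’s specialization ancestry in its class attribute, so a stylesheet that matches an ancestor token automatically covers specializations too. A sketch (the selector pattern is standard CSS; the specialized class value is a hypothetical example):

```css
/* A specialized section might carry class="- topic/section bmcSection/bmcSection ",
   so matching the whitespace-separated token "topic/section" styles both
   base sections and anything specialized from them. */
*[class~="topic/section"] {
  margin-left: 1em;
  border-top: 1px solid gray;
}
```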

Another consideration (from the audience, I believe) is how well will the tool pick up on new releases of the DITA Open Toolkit? A responsive vendor will be well aware of us chomping at the bit for the newest releases especially for new specializations and stylesheets or transforms.

Imparted final wisdom

The business benefits of DITA are at the heart of the matter. Think of how your business plans to use DITA to help your deliverables shine… will you require new specialization to do so and therefore want a lot of design features? Or will you stick to the core info types (concept, task, and reference), and therefore move your primary criterion to customization or maintenance? As Don says, these guides will help you select top candidates, but may not reduce the selection to one very quickly, so you must represent your business’s value system in the final decision. Now, that said, his last question on his Heuristic Evaluation is “Will your writers use it?” That’s a big question to keep in mind!

I’d love to hear your thoughts on what editor evaluations you’ve done – not necessarily a discussion on “this editor is the best” but more about what scenarios we should test, and why, especially from a business case perspective.