Monthly Archives: February 2006


How many songs are on your iPod?

Interesting discovery about the reality of how much we’re willing to load on an iPod, wondering what parallels could be drawn for infrastructure management

I read on Collision Detection the other day that, on average, people have only 375 songs stored on their mp3 players, with iPod owners averaging slightly more at 504 songs. While it's impressive that about two hundred CDs' worth of music can fit in a tiny hand-held device, those numbers are still a small percentage of what the bigger-capacity mp3 players and iPods can hold. With an upper-end capacity of 20,000 songs, wouldn't you think most people would have at least 2,000 songs on average, or about 10% of the space filled? Not so, according to this report, in which 1,062 owners of digital music players were surveyed. Fascinating!

Along these lines, it sounds like our forecast about infrastructure monitoring using BMC Performance Manager is on track. You don’t necessarily want to monitor everything under the sun including whether the kitchen sink is draining properly. You only want to monitor a set of parameters that matter to your business. Like the “most played songs” marker on your iTunes library, a “most monitored parameter” marker is what you seek when you set up your library of infrastructure and application monitoring parameters. What do you think? Simpler, fewer, and more targeted seems to be the way to go. Plus, connect the monitoring activities to business services and business applications that matter to your bottom line.

From the iPod to the Infrastructure and Application Management Route to Value and back again in two paragraphs. I’m no James Burke but I sure admire his Connections. Although I had a really hard time reading Twin Tracks, where the book layout has one story line written on the left-hand pages and the other story line written on the right-hand pages.

So, how many songs are on my iPod? I have the 512 MB iPod Shuffle, and it contains over 100 songs, which puts it at "full." Based on capacity, I'm making the most of my investment, right? Or perhaps I just don't have time to listen to more music than that. 🙂


How to implement a document or records management system that meets ISO standards

As a follow-on to the "Best Practices in document management systems" post, I received an email with a pointer to a comprehensive doc management process

I received a great follow up email from Bob, the person who originally asked me about best practices in document management systems. He shared what he found as he continued to research. Here’s his note, and it’s a very informative one at that! Thanks Bob!

[This information] might help anyone tasked with finding and implementing a records management system OR a document management system for any organization, to properly organize and execute the project.

Australia’s National Archives’ DIRKS How-To manual for government entities tells exactly how, in minute detail, to go about the process of determining your organization’s needs, orchestrating buy-in from your organization, and deciding on and implementing a records management system AS A FORMAL, ISO-COMPLIANT PROJECT:

Technical communicators may not necessarily be schooled in project management fundamentals or ISO guidelines when their organization taps them to help orchestrate such a project. If that applies to you, take a peek at this manual; it's jammed with helpful, practical advice, and it's the most thorough document of its type that I've found in several months of dogged web-digging.

One could even probably search/replace 'records management' with 'document management' and replace "government agency" with "corporation" and end up with a fairly usable skeletal methodology for implementing a corporate document management system, too... as long as you understand the distinction between 'documents' and 'records'... though I grant this idea might make some people grind their teeth... :-)

Incidentally, there are guidelines and tips vaguely like DIRKS posted on the websites of various U.S. state Archives – for example, there’s one at:

Bob says he hopes this helps someone else who finds themselves in a similar situation someday. Let us know if you find it helpful and if you have any tips to add.


Best practices in tech comm for fit in the organization

Discussion of where a tech pubs department best fits in a company, and what staffing levels are best suited for success

This post is another part in a series about best practices in technical publications for software companies. Other posts include "Questions for best practices in technical publications" and "Best practices for document management systems."

A couple of the best practices listed in the article, "Tech writers as sales reps?" that we referred to for our Austin STC Meeting in October 2005 are related to where a tech pubs department best fits in a company, and what staffing levels are best suited for success. Specifically, the four best practices that the article mentions are:

#4: Have a reasonable ratio of writers to developers.
#5: Place technical writers somewhere sensible in your org chart.
#7: Encourage technical writers to meet customers.
#8: Use customer advisory boards to get feedback on documentation.

I then asked each of the three managers to answer the questions below from their perspective. Here are summaries of the answers and the wisdom they shared.

Q: The author of this article places technical writers in customer support. Is that the right placement for your company and team?

A: All three managers have teams located not in customer support but in research and development, although the writers at the smallest company sit near its technical consultants. Their interpretation of the article was that customer support perhaps played a very strong role at that particular company, so from a power standpoint, it was a good placement in the author's case. One manager noted that regardless of where you are in the org chart, you can be physically located near development to build rapport and get the information you need.

All the managers recognized that having the documentation team close to a group that drives the company can be a positive position. They noted that this positioning can also be a double-edged sword, because a shift in power can place a layoff target on the documentation team.

Q: Do you agree with the author’s 8:1 developer:writer approach? How do you estimate your ratios? What’s the right ratio? How have you used this ratio when asking for resources?

The article notes that ratios of even 20 or 30 developers per writer exist, and the managers agreed that these figures vary. At the smallest company, a 5:1 ratio is in effect, and it works very well. The range among the companies was anywhere from 5:1 to 10:1, but the managers agreed that the age and maturity of the product being documented should really drive this number: newer products should have fewer developers per writer. That made sense to me; the ratio really depends on how much new material is needed versus maintenance of established documentation.


DITA from the trenches

Kristin Thomas, an information architect from IBM, presented to the Central Texas DITA Users Group meeting last week, and here are my notes.

Here are my notes from the February Central Texas DITA Users Group meeting. You can join our Yahoo Group to keep up with the group. There were about sixteen of us in attendance. Kristin Thomas spoke to us about converting the AIX doc set from SGML to DITA. Don descriptively called it "DITA from the trenches."

Here’s her bio:

Kristin Thomas is the information architect for IBM's AIX information center. She has written installation and security documentation for AIX, and she also oversaw the conversion of the AIX library to DITA and topics. Before working on AIX, Kristin wrote Linux documentation in IBM's Linux Technology Center for open source projects, such as the Linux Standard Base, openCryptoki, and the IBM Carrier Grade Open Framework reference implementation. Kristin has been with IBM since 2000, writing documentation for IBM's Software Group and Systems and Technology Group. She is a 2000 graduate of the Master of Arts in Technical Communication program at Texas Tech University.

We talked about some administrative items first with Don Day. Next month they're hoping to schedule France Baril, who has done a lot of DITA work and may be in Austin. Don asked for ideas for topics for future meetings. One person requested a sample talk on Task Modeler, an Eclipse plug-in. It's a tool that lets you quickly create DITA maps and generate stub content for a prototype build, including support for the major map structures, such as hierarchies (like tables of contents) and relationship tables (like related links), and the base DITA topic types (concept, task, reference). You can download it from alphaWorks under an evaluation license, but you do need to register with alphaWorks before downloading.

We had some discussion about an animal mascot to represent our users group. I vote for some Darwin-related animal, but the finch with the specialized beak is already taken, as is an iguana.

Don also let us know about the new web site.

Here are my notes from Kristin's talk. They're not really cohesive or comprehensive, but they represent the parts I took away. I'm hoping to link to her slides once she posts them. (Updated to add the link, but you have to be a member of the CTDUG Yahoo Group for access.) All in all, it was a very informative talk. I always like it when you can ask questions of the people really using the technology.

IBM's AIX docs were originally in SGML, and the library went from 13,421 to 18,062 files in the move to topic-based, XML-based DITA authoring. They call their deliverables information units, not books or help, and they use an Eclipse-based authoring environment.

Users loved that they could customize the info to their needs, that they could find things consistently in the same place across information units, and so on.

Workflow: the writer creates a pre-conversion outline, and an editor helps by reviewing it and helping "type" the info, meaning looking at each section and determining whether it belongs in a concept, reference, or task topic. With the outline, they'd track topic IDs and filenames in an Excel spreadsheet for later use, to help with broken links, tracking, and so on. Next they'd run a conversion script on the existing content, and then fix everything that was broken. Quite the undertaking, it sounds like.

It helps to have meaningful IDs on things, although when I asked about a nomenclature scheme for naming IDs, it turns out they don't have one; typically the ID is just the topic title. I would suppose that starting the file name with con_ for a concept, ref_ for a reference, and tsk_ for a task might help you sort in the file system. And yes, they use only a file system with relative paths, not a heavy-duty content management system. IBM keeps its DITA content in flat file systems and uses the map for metadata. So a writer might have 300 flat files in one folder for one information unit, with only the file system tools and the ditamap for searching through or ordering files.
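To play with the file-name idea, here's a minimal Python sketch. The con_/ref_/tsk_ prefixes are the hypothetical scheme I suggested above, not IBM's actual convention, and the function name is my own invention:

```python
from collections import defaultdict
from pathlib import Path

# Hypothetical prefixes, as suggested above: con_ for concepts,
# ref_ for reference topics, tsk_ for tasks.
TOPIC_PREFIXES = {"con_": "concept", "ref_": "reference", "tsk_": "task"}

def group_topics_by_type(folder):
    """Group the DITA topic files in a flat folder by their type prefix."""
    groups = defaultdict(list)
    for path in sorted(Path(folder).glob("*.dita")):
        topic_type = "untyped"
        for prefix, name in TOPIC_PREFIXES.items():
            if path.name.startswith(prefix):
                topic_type = name
                break
        groups[topic_type].append(path.name)
    return dict(groups)
```

With 300 flat files in one folder, even a tiny helper like this (or just sorting by name in the file manager) makes the concept/reference/task split visible at a glance.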

With DITA, you manage links in maps; cross-linking directly between topic files is discouraged.
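For readers new to DITA maps, here's a small hypothetical map fragment (the file names are invented) showing how related links live in the map's relationship table rather than inside the topics themselves:

```xml
<!-- Hypothetical map fragment: related links are declared in the
     map's relationship table, not hard-coded in the topic files. -->
<map title="Installing the product">
  <topicref href="con_overview.dita" type="concept">
    <topicref href="tsk_install.dita" type="task"/>
  </topicref>
  <reltable>
    <relrow>
      <relcell><topicref href="con_overview.dita"/></relcell>
      <relcell><topicref href="tsk_install.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

Because the links live in the map, a topic can be reused in another map with a different set of related links, which is much harder when topics link to each other directly.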

IBM uses Xenu, a link checker, as a post-processing step. After a build is complete, reports with lists of broken links are sent out.

The editor reviewed the outline before the conversion, checking topic typing, flow, and so on. Kristin said this review was VERY helpful. Don't underestimate the value of your editors: they can step back and see the big picture when you're buried in the details.

There was some pre-migration work, such as cleaning up tags, refining the outline, and checking for duplicated information.

For users, information units are grouped by category. Here's the URL to their content, which mostly contains DITA topics.

They found that task topics were the most difficult for writers to write, and I would agree. The name of the <cmd> tag was hard for writers to remember at first (DITA uses <step><cmd>Click this thing</cmd></step> for each numbered step in a list). Yes, some writers would get around it by just using numbered lists in a generic topic file rather than a task file.
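For reference, here's what that markup looks like in a minimal, hypothetical task topic (the id, title, and step text are invented):

```xml
<!-- Hypothetical DITA task: each numbered step wraps its
     instruction in <cmd> inside <step>. -->
<task id="tsk_install">
  <title>Installing the plug-in</title>
  <taskbody>
    <steps>
      <step><cmd>Download the package.</cmd></step>
      <step><cmd>Click <uicontrol>Install</uicontrol>.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Compared with a generic numbered list, the extra tagging looks fussy, but it's what lets processing tools treat steps as steps (numbering, formatting, reuse) rather than as arbitrary list items.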

They had some content they didn't convert, such as man pages, but they hope to move toward a specialization of the reference topic that would describe a man page.

Their search engine brings up the text within a <shortdesc> tag in addition to the <title> text, so it's important to write good <shortdesc> info.

Don showed us the Serna app, which uses XSLT for styling, so you could tell it to display the shortdesc and title when working in a DITA map file.

Lessons learned

  • Offer writer training as close to the conversion dates as possible. Delay means more difficulty in remembering how you’re supposed to do topic-oriented information typing.
  • Create a frequently asked questions page, and log questions where everyone can access them, including members of other teams who will be doing migrations later.
  • Have weekly meetings with all your writers working on the content, led by the information architect.
  • Solicit feedback from everyone involved at the end (and throughout, as well, she noted verbally).
  • Have a realistic timeframe for conversion. (I can’t remember how long she said this took them! I’ll find out and post it later.)
  • Be ready to cheerlead for something that might not be popular at first. Quote from Kristin: "someone has to be the optimist."

An ITIL-centric search engine

Dr. ITiL tipped me off to a specialized search engine for ITIL

There's a new research tool for those of us learning about and researching ITIL (the Information Technology Infrastructure Library). I've used it, and I like the categories it gives you to search within. Like Google's Images, News, or Groups categories, you can choose news, articles, white papers, blogs, or training once you do your search, and find your keywords within that type of document. It defaults to searching within articles. He says journalists like using it, and I would agree; the hits I'm seeing are good-quality docs.

Thanks again, Dr. ITiL, for a good research tidbit.


Creating and visualizing simulations in virtual worlds

Creating training and visualization for the real world in a virtual one

I found this Wired News article, "Making a Living in Second Life," fascinating because of a tidbit within it about virtual training projects in Second Life. Second Life is a virtual world built entirely by the people who spend time online there, with avatars represented by three-dimensional graphics. I'm sure I'm not doing the experience justice, but that's my take on it.

The interesting link to the real world is something I hadn’t realized before this article — people are building virtual training simulations that can be run in this virtual world. Here’s the quote from the article:

Just ask Rufer-Bach, known in Second Life as Kim Anubus, who works full time making virtual objects for real-life organizations. In a recent contract with the UC Davis Medical Center, Rufer-Bach created virtual clinics in Second Life to train emergency workers who might be called upon to rapidly set up medical facilities in a national crisis. The work is funded by the Centers for Disease Control. “In the event of a biological attack … the CDC have to set up emergency 12-hour push sites, to distribute antibiotics,” said Rufer-Bach.

To create the most realistic simulation possible, Rufer-Bach crafted about 80 distinct objects, “from chairs (to) a forklift, plumbing, wiring,” she said. The end result is a training environment that’s not only lifelike, but relatively inexpensive. “There are substantial advantages to doing this training in the virtual world,” said UC Davis professor Peter Yellowlees. For one thing, it’s “incredibly cheaper.”

Seems like it would be more interesting to be trained this way than through other forms of online training. This approach would break down the barriers of geography as well as of obtaining and physically manipulating real objects (anyone can be a forklift driver in a simulation). The Second Life website has a great set of selling points for experiential learning within Second Life's environment, such as practicing skills safely, learning from mistakes, and so on.

For some reason, I'm also reminded of a visual website traffic analysis tool, I believe it's VisitorVille, where you see MSN or Yahoo search engine traffic represented by a bus bringing visitors to your site, in real time. Now that is a cool visual representation, one that lets you learn immediately and hopefully spot patterns and trends right away.

I think that the visual representation of an airport and the gaming feel are both factors that make the BSM airport simulation class so effective for learning. What are some other examples of neat graphical or virtual representations of work you’re doing or training to do?


Hybrid approach for performance monitoring

A hybrid approach for performance monitoring is as useful and practical as a hybrid approach to your car’s engine

A recent news story points out that hybrid cars aren't getting the gas mileage drivers expected based on EPA testing. Consumer Reports is testing hybrids with its own standards, which it hopes offer a realistic test environment. The automotive industry is responding by encouraging drivers to take courses that will help them change their driving habits so that the combined technologies are optimized. Still, the winning proposition of a hybrid vehicle is that the combination of two technologies should give you the benefits of each while lessening the detrimental effects of either. Each technology should be able to help the other.

Interestingly, electric and gas combustion combinations aren't the only hybrids available, either. Before the holidays, slashdotters were discussing BMW's announcement of a steam hybrid engine that collects 80% of the heat from the exhaust system and reuses it to assist the gas engine, offering power and torque gains. With these automotive hybrids, you gain the extra power and pickup of a gas-based vehicle when you need it, but you also conserve gas and dollars at the pump by relying on electric power when you can.

Much like a hybrid vehicle's technology choices, BMC Performance Manager gives you choices: deploy agents to collect data, use a lightweight piece to collect only the data you're most interested in, or do remote monitoring only. How you optimize your system for data collection depends on the type of "driving" you do and your goals for the environment you're monitoring. Bill Hunter and I recently co-authored a white paper titled "Agentless or Agent-based Monitoring? Choose a Hybrid Approach with BMC Performance Manager" that offers more explanation of relating your monitoring to the business and choosing the approach that makes the most sense for the type of monitoring you do. Here's the introductory paragraph.

As most administrators know, some IT components are more important than others. Most availability and performance-monitoring products, however, cannot make that distinction. Consider for example, that a router serving a single employee fails while that employee is at home eating dinner two time zones away. Do you need to be awakened in the middle of the night just to fix the router, which the employee will not even need until the next business day? Most likely the answer is no. Ideally, you need the flexibility to monitor what matters at the level of granularity that is required by the business. Choosing between robust products with deep functionality and simple ones with “good enough” functionality is a common dilemma.

The BMC® Performance Manager product from BMC® Software resolves this issue by combining the robust functionality of BMC® PATROL® product architecture with the simplicity of BMC® Performance Manager Express.

Download the paper if you’d like to read more. What do you think about hybrid technology offered to us today? Feel free to let us know your thoughts.


Watch and read books in progress

O’Reilly publishers are releasing content for a book while it’s still being written

O'Reilly's calling it "Rough Cuts," and it's a really neat concept. While a book is being written, drafts are posted in electronic format, and you can download, read, and comment on the book. I think readers and authors alike will benefit from this arrangement, especially for topics like Ajax (Asynchronous JavaScript and XML), Ruby, and Ruby on Rails, where people are learning so much so quickly that books-in-progress might be the only way to give and get information.

The introductory description on the first page says you get to help shape the book (scrubbing out the term "edit"), and I like that. Books can be molded, and sometimes sculpture terminology works better than a flat-paper metaphor. Think of ways you can shape your current writing projects. What are your ideas for works-in-progress? Do you like what O'Reilly's doing here?

Thanks, Erik, for sending that to me!


Supplementing product documentation with Google searches and blogs

Today I found a counter argument to the previous post about how good product documentation makes the product worthwhile

Earlier this week I posted about how good product documentation can sell a product, but today I came across "Manuals, conversations, and RSS" by CTO Sean McGrath, in which he talks about playing "a well known IT adventure game known as 'catch the randomly recurring problem in the mission critical system.'" I'm sure many of you IT adventurers have played this game as well.

He estimates that his information-seeking time is spent in these areas:

  • 10% reading vendor manuals
  • 20% Googling, then reading
  • 70% reading developer blogs, user mailing lists, etc.

Of that 70%, he further breaks it down as:

  • RSS feeds: 20%
  • RSS-only search engines: 20%
  • Blog surfing: 30%

Connecting to conversations: that's what it's all about. What an interesting look at two different approaches to getting the information you need to solve a problem. Perhaps debugging requires more detailed information than the setup and administration that the previous post talks about? Still, it helps me realize that product doc doesn't always provide for every user's needs.

That said, we should constantly strive for combinations of deliverables and delivery methods that can work for a broad range of needs. For example, a DITA/wiki combination would offer structure to an editable web site that both product developers and end users could edit and add to in a structured way. We'd need an authoring tool that works like a web form and can validate XML against a DTD, and a wiki that can accept the DITA XML topics and display them as navigable, editable wiki pages.
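To make the validation idea concrete, here's a minimal Python sketch. The element names and rules are hypothetical, and since Python's standard library doesn't do true DTD validation, this stand-in only checks well-formedness and a required skeleton; a real web form would use a validating parser (for example, lxml with the DITA DTDs):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal rule set for a submitted DITA concept topic.
REQUIRED_CHILDREN = {"title", "conbody"}

def check_topic(xml_text):
    """Return (ok, message) for a submitted DITA concept topic."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return False, f"not well-formed: {exc}"
    if root.tag != "concept":
        return False, f"expected <concept>, got <{root.tag}>"
    children = {child.tag for child in root}
    missing = REQUIRED_CHILDREN - children
    if missing:
        return False, "missing elements: " + ", ".join(sorted(missing))
    return True, "ok"
```

The point is the workflow, not the code: a contributor types into a form, the wiki refuses anything that isn't valid topic markup, and everything that gets saved stays processable as DITA.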

Another neat combination that’s already out there is user-supplemented help, as described on the Usable Help blog, where the help itself can contain comments and conversations occur through those comments. As Gordon Meyer says, it “allows end-users to communicate directly with the developer, and more importantly, with each other about the quality of the documentation and the features of the software.” Well put.

While I can't always retrace the exact steps I take to a certain article, I like to explain how I find some of these links. In this case, I found it as a link from "Exploring Agile Methods for Web Design" in a post titled "Why QA professionals throw away manuals and blog instead."

I won't ignore the fact that these blogs are also an opportunity for conversation with our end users. I'd love to hear more about your thoughts on documentation, our products, BSM, ITIL, you name it, and we'll talk about it. Think about ways you can open conversations with your end users when you roll out new IT applications. What are some of your ideas?