Monthly Archives: April 2006


Invention and instruction go hand in hand

A walk through some neat sites and ideas I’ve seen lately related to instructions and inventing new things

I found a really nifty website the other day — instructables.com. It contains instructions for do-it-yourself projects, such as a Rubik’s cube that operates using magnets instead of a mechanical turning mechanism. I also enjoy the Make magazine blog, although I have yet to pick up a copy of the magazine itself. Maybe I’ll get a copy of their new sister zine, Craft. Make has its own “how-tos” section with instructions, plus some links to instructables, which are more project-oriented.

Another related item that came across my newsfeed this morning is about the Idea to Product Competition at the University of Texas. My favorites are The Micro Dynamo, an ultra-compact human-powered battery charger capable of recharging cell phones (and other portable devices), and ParkSpot, a service that lets you locate and/or reserve parking spots in real time via any cell phone.


Merriam-Webster’s Pocket Dictionary on your iPod

For about US$10, you can have a pocket dictionary on your iPod

Found this iPod Pocket Dictionary on my Gizmodo feed yesterday, and thought I’d pass it along. Since it’s the pocket version of the Merriam-Webster dictionary, it’s only about 40,000 words. Interestingly, it appears that the interface doesn’t make you spell out the word by scrolling through letters; instead, you select the first letter, then scroll through the choices. Sounds like the right design balance (limit the lookup choices, but make sure the interface isn’t frustrating to the user).
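
Just for fun, here’s a minimal sketch of that lookup design in Python (the word list and function names are my own illustration, not anything from the actual product):

```python
from collections import defaultdict

def build_first_letter_index(words):
    """Group words by first letter, so the user only scrolls a short list."""
    index = defaultdict(list)
    for word in sorted(words):
        index[word[0].lower()].append(word)
    return index

# A hypothetical, tiny word list; the real device holds about 40,000 entries.
index = build_first_letter_index(["abstract", "abacus", "banana", "bandana", "cellar"])

# Step 1: pick the first letter. Step 2: scroll through only those choices.
print(index["a"])   # ['abacus', 'abstract']
```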

Now, an additional feature that would really combine the audio power of the iPod with the dictionary would be a pronunciation guide that speaks the word aloud on demand. I really appreciate that feature in the online version of Merriam-Webster at www.m-w.com. I’m usually a decent speller but can really butcher word pronunciations. The example word on the product site, abstract, has different pronunciations for the verb and the noun. Seize the opportunity for the technology mashup when you can, I say.


Notes from the Central Texas DITA User Group meeting

Two speakers shared their takeaways from DITA 2006 and CMS 2006

I attended the Central Texas DITA User Group meeting last night and wanted to write up some notes. We had two speakers share their thoughts after attending two related conferences this spring.

Bob Beims from Freescale shared his thoughts on attending the DITA 2006 conference at North Carolina State in Raleigh, NC, the first conference of its kind. He thinks he heard there were 185 attendees, and he was pleasantly surprised at the range of users he met there: people from medical companies with products for nurses, from the financial industry, from power and electric companies, as well as the usual hardware and software crowd. He had a couple of great quotes from different sessions. How about “This is not rocket science… it is really bow and arrow stuff that has been implemented with technology” from Michael Priestley of IBM, or “there’s never enough time and money to do things right, but always enough time and money to do things twice!” from Bernard Aschwanden of Publishing Smarter. I personally liked “Take the leap (or fall off the cliff!)” from Bob himself.

Bob said he realized that DITA solves some topic orientation problems that our industry has faced for decades. He was pleased at the pace at which the DITA Technical Committee is churning out releases: 1.1 due out soon, and 1.2 in the next nine months. He feels that the OASIS leadership proves that DITA is not “just an IBM thing.” He thinks DITA maps should be awarded innovation of the year. He said that if you hate the limitations of FrameMaker conditional text, you’ll love the future of DITA with key values (DITA proposed feature #40), which would allow Boolean queries against conditions for output. A conditional text tags contest ensued, starting with a bid of a document carrying 13 conditional text tags; someone with a Frame document bearing 39 conditional text markers won. :)

I appreciated his comments on the two strata of tools — either very expensive, very functional, and easy to use, or (almost) free, fairly functional, but you’d better be a gearhead to use ‘em. He sees a definite lack in conversion helpers for legacy content. Of course, with those words, a lively discussion ensued about transforming content versus just getting the text out by converting. As nearly everyone experienced in unstructured-to-structured conversion projects discovers, a real human has to figure out how to make topics out of the text that comes from a conversion. People who had done conversions said that Perl on MIFs out of Frame does the trick for getting the text out, but in some cases you’re better off starting from scratch to plan for reuse and true topic orientation. Still, a conversion script (or set of scripts) at least takes your existing text into a structured start. Bob also said that one thing he has learned from many presentations inside and outside of the DITA conference is that you must develop an Information Architect role, or you’ll end up chasing your tail when it comes to truly gaining benefit from a topic-oriented architecture for your information.

What does Bob see as next for DITA? He’d like to see a lower bar for entry. Currently the entry “fee” includes a lot of time spent preparing your content and training your writers, the skills necessary to participate are high, and there’s money required for a bat and ball. He thinks there can be integration with non-DITA XML information streams, especially for those who interface with manufacturing industries. His example from Freescale’s perspective was the RosettaNet effort, where hardware manufacturers can offer “product and detailed design information, including product change notices and product technical specifications” via XML specifications. Incorporating that with DITA topics would help them build their information deliverables. He also noted that the DITA community might be a small one, but it is definitely composed of bleeding-edge technology and technologists.

Next, Paul Arellanes, an information architect at IBM, gave his impressions of the Content Management Strategies 2006 conference in San Francisco. He saw a definite eagerness to adopt and use the DITA Open Toolkit, as well as an eagerness to reuse, reuse, reuse. His talk, Taxonomy Creation and Subject Classification for DITA Topics, was highly attended (standing room only) and very well received. He also stressed the importance of training on topic orientation before going to XML. He has a programming background and likens DITA to object-oriented documentation. He’d like to see code reviews of how the tags are used and whether they’re used correctly, and he got a couple of good ideas at the conference for how to build those reviews into the document review cycle; I’ll talk about them in the next paragraph. Paul talked about reuse and asked whether it’s a boon or a curse. Can you reuse a topic if you can’t find it? What if the topic was never designed for reuse in the first place? How do you design for reuse up front? He’d like some best practices for reuse.

He said that implementing DITA is a chance to change your documentation processes — going to topics with a fresh start at the content is more successful than a legacy conversion, because you can build and design for reuse from the beginning. His takeaways are that we need best practices for reuse, that he’d like to build in source code reviews, and that he found a cool method for doing the latter with an editor’s CSS process that checks syntax. These are the common errors that you could find and mark up with CSS (basically, color-coding the output after running it through a syntax checker built on CSS); a rough sketch of this kind of check follows the list below. Often these types of syntax/markup errors happen because the writer is tagging for looks rather than for the meaning of the content, but they can also creep in with legacy conversion.

  • placement of index entries
  • sections that should be separate topics
  • use of definition lists to create sections
  • ordered list tags instead of using step tags
  • lists of parameters in ordered list tags instead of a parameter list
  • use of unordered list tags with bold instead of definition list
  • use of <ol> or <ul> instead of the substeps or choices elements in a task topic
  • use of <filepath> for variables and terms
  • menucascade not used
  • uicontrol not used

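As a rough illustration of what such a markup review could catch, here is a small Python sketch that walks a DITA topic and flags a few of the patterns above. The element names are standard DITA, but the script and its rules are my own example, not the CSS-based checker Paul described:

```python
import sys
import xml.etree.ElementTree as ET

# (ancestor element, offending descendant, suggestion) -- a few of the patterns
# from the list above, expressed as my own example rules.
RULES = [
    ("step",     "ol", "consider <substeps> instead of an ordered list"),
    ("step",     "ul", "consider <choices> instead of an unordered list"),
    ("taskbody", "ol", "consider <steps> with <step> elements"),
    ("body",     "dl", "check whether this definition list is faking a section"),
]

def review(path):
    """Return (tag, suggestion) findings for one DITA topic file."""
    root = ET.parse(path).getroot()
    findings = []

    def walk(elem, ancestors):
        for ancestor, bad_tag, suggestion in RULES:
            if elem.tag == bad_tag and ancestor in ancestors:
                findings.append((elem.tag, suggestion))
        for child in elem:
            walk(child, ancestors + [elem.tag])

    walk(root, [])
    return findings

if __name__ == "__main__":
    for topic_file in sys.argv[1:]:
        for tag, suggestion in review(topic_file):
            print(f"{topic_file}: <{tag}>: {suggestion}")
```
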
Paul also has good ideas for the future, including a troubleshooting or problem analysis and determination specialization based on the task topic, and perhaps a way to pull DITA elements out of a topic and plug them into interactive content using AJAX. He was pleased to see that the skill set among attendees is pretty high, including XML, XSLT, SAX, FOP, CSS, and Ant build tool skills.

Interestingly, as far as our group could see, Adobe was not represented at the DITA 2006 conference, even though they have a group implementing DITA for solutions documentation.

If you’re like me and didn’t attend the DITA 2006 conference, you might enjoy (as I did) the transcript of Norman Walsh’s talk. Norm is the chair of the DocBook Technical Committee, and DocBook and DITA are constantly pitted against each other for solving the problems of information developers.


Time to change the name of my talk.bmc blog

I want the name to reflect the content I’ve got

I’ve been blogging since last September, believe it or not, with almost 75 posts to show for it. After talking it over with my peers and the web team, and studying my content, including its categories and trends, I’ve decided to change the name of this blog to “Exploring Information Design and Development.”

My Bloglines subscription already picked it up, which is quite cool. It is probably time to re-import the talkbmc.opml file to get feeds for the new additions. Welcome all!


Getting expert content from outside sources

Riffing on some ideas for getting authoritative technical content from new sources

What can you do when expert content is hard to come by? I’m talking about the upper-crust, trusted sources of technical information, much the way A-list bloggers tend to get the higher page rankings on certain topics. Even Technorati now lets you filter by authority when you search for keywords. From the Technorati blog: “The new Filter By Authority slider makes it easy to refine a search and look for either a wider array of thoughts and opinions, or to narrow the search to only bloggers that have lots of other people linking to them.” So is there a shortcut to authority? No, but you can find ways to connect to authoritative content. Here are a couple of ideas.

So let’s say you’re a workaday, humble tech writer who hasn’t yet made it to a high level of authority on a technical subject, but your users are constantly looking for higher-tech, higher-value documentation. What can you do? This blog post explores two ideas about expanding the sphere of collection when it comes to technical documentation. Look high and low, and especially outside of the content owned by your company, and you can find documentation that your users want and need. Neither of these ideas is originally mine; I just helped implement them technically. A former manager of mine gets all the credit for thinking creatively about technical documentation and jumping through the legal hoops to make it happen. Thanks, Mike!

The first idea I’ve presented at an STC conference and published as an article at WritersUA. In a nutshell: go to the companies whose products your company’s products integrate with, and talk to them about information sharing. I’m talking about single sourcing in a whole new way: you’re sourcing content from other companies, and giving other companies your source to integrate into their products. This concept is what the future of technical publications can look like, especially with XML source standards such as DITA and DocBook to facilitate sharing. In this case, though, unstructured FrameMaker was the source. You can follow the links for the details on converting that source to the output we needed.

In this case, the particular type of content we pursued was error messages — message text, explanation, and user response — for the major database vendors: IBM, Oracle, Sybase, and Microsoft (SQL Server). The product our team was documenting at the time integrated with all of these databases, and often the product passed the vendor error straight through. Since Oracle (and all the others) had comprehensive error message documentation that was similarly structured, we asked for the source files and, with some legal contract work, received them. Once we got the source content, I wrote up a process for transforming it all to structured XML, then wrote an XSLT transform that could work in the product itself, transforming the content on the fly and offering HTML that contained both the explanation for the error and what you could do to correct or work around it. Now that is expert content, directly from the vendors who create the error messages.
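
To give a flavor of that on-the-fly transform, here is a simplified sketch in Python rather than the XSLT we actually shipped. The element names and the sample record are hypothetical stand-ins for the structure we normalized the vendor content into:

```python
from html import escape
import xml.etree.ElementTree as ET

def message_to_html(xml_string):
    """Turn one structured error-message record into an HTML fragment."""
    msg = ET.fromstring(xml_string)

    def field(name):
        return escape(msg.findtext(name, default=""))

    return (
        f"<h2>{field('msgnum')}: {field('msgtext')}</h2>\n"
        f"<h3>Explanation</h3>\n<p>{field('explanation')}</p>\n"
        f"<h3>User response</h3>\n<p>{field('userresponse')}</p>"
    )

# A made-up record showing the three pieces we cared about:
# message text, explanation, and user response.
record = """
<message>
  <msgnum>XYZ-00123</msgnum>
  <msgtext>example message text from the database vendor</msgtext>
  <explanation>Why the error occurs, straight from the vendor documentation.</explanation>
  <userresponse>What the user can do to correct or work around the error.</userresponse>
</message>
"""
print(message_to_html(record))
```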

The second idea I haven’t really written up until now. Mike Wethington talked about it at the Region 5 conference in the fall of 2003, and an email exchange with Cote and a post at his People Over Process blog prompted me to write it up here. The other place we sought out expert content was O’Reilly, a well-known and trusted technical book publisher. It was in the first year or so that Safari, the online book repository, was offering content using a subscription model. With much legal wrangling that I know little about, my manager and the BMC legal team worked out a contract with O’Reilly to offer selected reference books in an online format with selected expert database products. The titles were all Oracle-centered, and we selected books that we knew were popular sellers and contained information that users of our database products would find helpful for day-to-day tasks as well as future planning and fine-tuning. For this particular product, we supplied a “teaser” set of content, letting the user know that O’Reilly books are available and giving an example of the content they would get if they purchased a permanent license. According to Cote, CITTIO claims they were the first product to integrate with Safari. That may well be, but we had a precursor to that: embedding the content and shipping it with the product. They claim access to 3,300 books; we just had eight. :) I’m not sure which implementation is the more usable, but I love that more and more expert technical content is being distributed and shared in these ways.


Follow-up for ITIL and monitoring

I posted a scenario last week and got some feedback from the CONTROL-M folks

Shortly after posting my ITIL and monitoring scenario about BMC Performance Manager notifying CONTROL-M that an SAP job was failing, I got a great response about how CONTROL-M even takes it a step further towards ITIL ideals with the Batch Impact Manager module.

Just read your blog. I wanted to update you on a couple of items which help to fill out the drive to ITIL standards achievement in the operations and monitoring environments.

BMC CONTROL-M has an additional module called BMC Batch Impact Manager. This module allows a user to reduce a large flow of jobs to a single service instance and monitor those services critical to the business. Is there any other kind? When a service is predicted by Batch Impact Manager to fail its due out time, Batch Impact Manager issues alerts to operations and provides an interface to Service Impact Manager to focus attention on bringing the service back to normal. Therefore if an SAP process was included in that critical service, as soon as the non-availability of the SAP process impacted the service, CONTROL-M and or Batch Impact Manager would start to squeal.

Great stuff! I’ve just added some links and spelled out the acronyms; otherwise it’s unretouched. The embarrassing thing is, I knew about this module but hadn’t made the connection. Thanks, Ronnie, for bringing it to my attention!


How to save money by monitoring closely

How monitoring voltage closely can save money… are there parallels for monitoring your systems closely?

On my way in to work the other day, I caught an NPR story, Utility’s ‘Voltage Reduction’ Plan Saves Energy, about how a Washington state public utility is using “conservation voltage reduction” to save energy, and light bulbs. The electric company monitors the amount of voltage it sends to customers (the amount of electrical “water pressure” through the “pipes”). In the U.S., utilities are supposed to deliver 120 volts +/-5%, which is 114 volts to 126 volts. The way most power companies currently send out power, customers in the first half-mile radius get 125 volts, customers within a mile radius get 124 volts, and so on; the companies send out more power than they need so that the furthest-out customers still get output that meets the regulation after the drop over distance. However, by regulating precisely, they can reduce the initial send-out to 117 volts, a cut of 2.5%, which buys the average customer about 450 kilowatt-hours and saves the company $3.5 million per year. Now, if you get down to 111 to 112 volts, there’s a risk of damage to appliances, but apparently a sophisticated monitoring system can prevent that type of drop. A voltage-regulation startup, Microplanet, lets power companies use electronic voltage regulators to deliver that more precise electrical “water pressure.” Now that is a great application of precision monitoring technology.
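
The arithmetic is easy to check; here’s a quick back-of-the-envelope sketch (my own numbers for the check, not NPR’s):

```python
nominal = 120.0                       # regulated nominal voltage in the U.S.
band = (nominal * 0.95, nominal * 1.05)
print(band)                           # (114.0, 126.0) -- the +/-5% allowance

reduced = 117.0                       # the lower, precisely regulated send-out
print((nominal - reduced) / nominal)  # 0.025, i.e. the 2.5% cut in the story
```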

If you’re inclined to take monitoring of electricity usage into your own hands, there’s the nifty “Kill-A-Watt” electrical usage meter. Using a similar device in Australia, a boingboing.net reader found that his computers were surprisingly inexpensive to run, but he now turns off the whole TV-DVD-VCR stack every night, and makes sure the coffee machine is only turned on while it’s making coffee.

This monitoring of electrical power leads me to believe that precisely monitoring server behavior should let you make small, incremental changes that save time, power, or resources such as disk space. How about closely monitoring applications that require a lot of heavy-duty processing power and finding ways to virtualize those servers? You maintain one piece of hardware but have multiple applications running in virtual environments. That’s all I could come up with for now, but there have to be other examples. I do know that making sure all your monitors have a power-save mode, and use it, can definitely cut back on electricity costs. What other ideas do you have for saving money by monitoring?