
techpubs writing

Treat Docs like Code: Doc Bugs and Issues

This is a follow-on to a post on opensource.com about using git and GitHub for technical documentation. In that article, I discuss reviews and keeping up with contributions. This post talks about fixes and patches.

[Photo: typewriter, courtesy htakashi]

What about doc issues in GitHub? How do you get through all those?

In OpenStack, we document how to triage doc bugs, and that’s what you need to do in GitHub: establish your process for incoming issues. Use labels in GitHub to indicate status and priority. Basically, you triage each report: accept it as a doc bug, and if you can’t tell whether it is one, ask the reporter for more information. If you create labels for particular deliverables, like the API doc or end-user doc, you can organize your doc issues further. You will need to define priorities as well: what’s critical and must be fixed? What’s more “wishlist,” to get to when you can? If you use similar systems for both issues and pull requests, you’ll have your work prioritized for you when you look at the GitHub backlog.
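
For example, here’s a minimal sketch of seeding a repository with triage labels through the GitHub REST API. The repo name, token, and label scheme below are placeholders; adapt them to your own project:

import requests

# Placeholders: substitute your own repository and access token.
REPO = "your-org/your-docs"
TOKEN = "your-token"

# Status, priority, and deliverable labels for doc-issue triage.
labels = [
    {"name": "confirmed", "color": "0e8a16"},
    {"name": "needs-info", "color": "fbca04"},
    {"name": "priority:critical", "color": "b60205"},
    {"name": "priority:wishlist", "color": "c5def5"},
    {"name": "api-doc", "color": "1d76db"},
    {"name": "end-user-doc", "color": "5319e7"},
]

for label in labels:
    response = requests.post(
        "https://api.github.com/repos/" + REPO + "/labels",
        json=label,
        headers={"Authorization": "token " + TOKEN},
    )
    response.raise_for_status()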

How can you encourage contributors to create a good pull request for docs?

The best answer for this is “documentation” but also great onboarding. Make sure someone’s first pull request is handled well with a personal touch. There’s a lot of coaching going on when you do reviews. Ensure that you’ve written up “What goes where” as this is often the hardest part of doc maintenance for a large body of work that already exists. This expansion problem is only getting harder in OpenStack as more projects are added. We’re having a lot of documentation sessions at the OpenStack Summit this week and we’d love to talk more about creating good doc patches.

One person I work with uses GitHub emojis every chance he gets when he reviews pull requests. I think that’s fun and sets a nice tone for reviews.

Nitpicking can be averted if you point newcomers to your style guide and conventions as part of a good orientation, so that new contributors don’t get turned off by feeling nitpicked.

Have you heard of anyone who has combined GitHub with a different UI “top layer” to simplify the UI?

O’Reilly has done this with their Atlas platform. For reviews, the Gerrit UI has been extremely useful to a large collection of projects like OpenStack. There’s also Penflip, a better frontend for writers than GitHub. Its background story is great in that it offers anecdotes about GitHub being super successful for collaborative writing projects.

I think that GitHub itself is fine if your docs are treated like code. I think GitHub is great for technical writing, API documentation, and the like. Academic writers haven’t found GitHub that much of a match for their collaborative writing; see “The Limitations of GitHub for Writers” for example. It’s the actual terms that have to be adapted and adopted for GitHub to be a match for writers. For example, do you track doc bugs as issues and write collaboratively, with added content treated like software features? I say, yes!

If you just want simple markup like Markdown for collaborative writing, check out Beegit. With git in the name I have to wonder whether it’s git-backed, but I couldn’t figure that out from a few minutes on their site. It looks promising, but again, my focus is treating docs like code, living and working with developers.

API community content strategy techpubs

Deep Data Dive into Supporting Developers

Here at Rackspace, we watch several sources to determine if a developer is having trouble with Rackspace Cloud services, many of which are based on OpenStack. We even have a notification tool, aptly named peril, that offers an aggregation of the many sites where developers may seek help. I’ve been on the Rackspace developer experience team since the very early days when we started supporting developers at the code level a few years ago. We monitor stackoverflow.com, serverfault.com, and superuser.com for questions tagged with Rackspace, rackspace-cloud, fog, cloudfiles, jclouds, pyrax, or keystone. At Rackspace we have been supporting cross-cloud SDKs such as Apache jclouds, Node.js pkgcloud, Ruby Fog, Python Pyrax, .NET, and PHP. Let’s look at the data from these many places to find out the patterns for application development.

A developer is in peril!

We have a group email address that we monitor for developer support requests. In the last year, about 6.25% of support requests came in through email. We saw nearly half of support requests on the jclouds bug list (a JIRA tracker) and a community forum at community.rackspace.com. Tracking GitHub issues on our supported SDKs was another 40%, with the remaining 14% of support requests coming from Stack Overflow, where we track certain tags on questions asked. Here’s a screenshot showing what our notifications look like in a Slack channel internally. I like how it cycles through various alert messages, “Heads up, incoming!” and, not shown, “BWEEEEEP BWEEEEP BWEEEEP”, which naturally makes us want to help!

Peril example
True story: sometimes our slurps catch and notify before our email server. We are on it!
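
peril itself is internal to Rackspace, but the Slack side of an alert like the one above is easy to sketch. Assuming an incoming-webhook URL of your own (the URL and example arguments below are made up), the posting step might look something like this:

import random
import requests

# Placeholder: an incoming-webhook URL for your own Slack channel.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

ALERTS = ["Heads up, incoming!", "BWEEEEEP BWEEEEP BWEEEEP"]

def notify(source, title, link):
    # Post a developer-support alert into the channel, cycling the alert text.
    text = "%s [%s] %s %s" % (random.choice(ALERTS), source, title, link)
    requests.post(WEBHOOK_URL, json={"text": text}).raise_for_status()

notify("stackoverflow", "pyrax auth question", "https://stackoverflow.com/q/12345")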

Documentation comments

If you’ve read my blog for a while or my book, you know I appreciate documentation that offers back-and-forth discussion in comment threads. We use Disqus comments for developer documentation at Rackspace. These comments tend to uncover three categories of requests:

  • Request for help when something doesn’t work as expected
  • Request for a feature that doesn’t exist
  • Request for correction: pointing out typos or incorrect formatting, such as JSON examples that lose their indentation

We see about 20 comments a month on API docs spanning all our products, with about 27.5% on Cloud Files (Object Storage), about 20% on Cloud Servers (Compute), and Identity coming in third at 12%. In OpenStack docs, we have a doc bug link that serves the third type of comment (pointing out a doc bug), but not on the API reference, yet. One pattern we see that’s comparable to a review of the document is someone asking a bunch of questions at once to gain understanding.

Stack Exchange sites

Stack Exchange sites are question-and-answer sites with built-in features to boost motivation for answering questions posted by others. Some example sites include Stack Overflow for developers and Server Fault for administrators.

We track these tags on Stack Overflow with our peril tool:

rackspace rackspace-cloud fog cloudfiles jclouds pyrax keystone

When people don’t get a satisfactory answer on Stack Overflow, one interesting pattern is that they come to ask.openstack.org, the open source equivalent site for OpenStack.

The Stack Exchange API is a ton of fun for scraping data. I wrote some Python scripts that request data from these calls, using the tag “openstack”:

  • Top Answerers: Matt Joyce, Everett Toews, and Lorin Hochstein are great at answering questions on Stack Overflow, and since Everett’s on the developer support team at Rackspace that makes sense.
  • Related Tags: openstack-nova, cloud, python are the top three related tags for openstack on Stack Overflow.
  • Top Tags: python, ruby, csharp/.net, php, javascript (node.js). One interesting observation is that overarching concepts like security and networking were often tagged along with the language itself.
  • Frequently Asked Questions: It’s not surprising that authentication is the root of the most frequently asked questions. Networking is also particularly complex and it shows in the number of questions asked. What was interesting though is the number of questions about monitoring and metrics on the cloud consumption itself.
  • Unanswered Questions: For the openstack tag, the unanswered questions had less than 100 views, compared to over 1000 views for the answered questions.

While your browser automatically decompresses the results when you enter http://api.stackexchange.com/2.2/tags/openstack/top-answerers/all_time?site=stackoverflow, I had to figure out how to have Python unzip the results and parse them into JSON. Then I got stuck trying to automate turning the JSON into CSV, so I used konklone.io/json/ to convert the JSON to CSV.
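
In case it saves someone else the same detour, here’s roughly the shape of that pipeline in one pass: fetch, gunzip, parse, and flatten to CSV. This is a sketch of the approach rather than my original script:

import csv
import gzip
import json
import urllib.request

URL = ("http://api.stackexchange.com/2.2/tags/openstack/"
       "top-answerers/all_time?site=stackoverflow")

# The Stack Exchange API gzips every response, so decompress before parsing.
with urllib.request.urlopen(URL) as response:
    data = json.loads(gzip.decompress(response.read()).decode("utf-8"))

# Flatten the nested user records into rows for a spreadsheet or Tableau.
rows = [
    {
        "display_name": item["user"]["display_name"],
        "score": item["score"],
        "post_count": item["post_count"],
    }
    for item in data["items"]
]

with open("top_answerers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["display_name", "score", "post_count"])
    writer.writeheader()
    writer.writerows(rows)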

Once I had the CSV files, I could use Tableau to get interesting data visualizations like a bubble grouping for related tags showing the frequency of the tags.

Tag bubbles

GitHub data

I found that GitHub issues were a great place to dive into the use cases for particular software development kits. I even discovered that our Austin-based real estate agent’s website was developed by Rackspace Cloud php-opencloud users!

TryStack

One extremely helpful sandbox for trying OpenStack services through API use is TryStack, with over 15,000 users and growing every year. I find it very helpful to have my own sandbox besides the Rackspace Cloud to test the actual API calls and see what is returned from OpenStack. In looking at the logs, I found many queries about quotas, which also explains the many questions about policies. It’s a free, community-donated cloud, and support is entirely through a Facebook group, so it’s a bit unusual, but I still found interesting data around it.

Other findings

I found that many of the SDKs do not yet have full support for certain OpenStack services. For example, jclouds, the Java multi-cloud toolkit, has a lot of users but doesn’t yet support the newest OpenStack services: Orchestration (heat templates), the Metering module (ceilometer), the new versions of the Images and Identity APIs, and the latest storage policies implementation for Object Storage.

I also surmise that the service catalog coming through the Identity service needs stricter documentation and expectation setting. The OpenStack API Working Group is tackling that issue by first discovering all the common patterns for the service catalog. Feel free to join the OpenStack API Working Group and review the incoming suggestions for consistency going forward, or review patches going into OpenStack services that affect APIs.
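
For context, here’s roughly what a single service catalog entry looks like in an Identity v2 token response, trimmed down; the URLs are invented:

# One trimmed serviceCatalog entry, Identity v2 style; the URLs are invented.
catalog_entry = {
    "type": "compute",
    "name": "nova",
    "endpoints": [
        {
            "region": "RegionOne",
            "publicURL": "https://compute.example.com/v2/%(tenant_id)s",
            "internalURL": "https://compute.internal.example.com/v2/%(tenant_id)s",
        }
    ],
}

Small variations in entries like this (naming, regions, endpoint variants) are exactly the kinds of patterns the working group is cataloging.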

Presentation

You can get the full presentation from slideshare.net, or watch me give it on YouTube.

API techpubs tools work

API Archaeology: Complexity and sizing of an interface

For both OpenStack and Rackspace cloud APIs, we use WADL (Web Application Description Language) to build an API reference listing for all REST API calls. In a previous post I discuss how the reference pages at http://developer.openstack.org/api-ref.html are made with WADL source and a Maven plugin, clouddocs-maven-plugin. You can see the output for the Rackspace API reference page at http://api.rackspace.com, built with the same toolchain but different branding through CSS. I can discuss the tooling decision process in another post, but let’s talk about ongoing upkeep and maintenance of this type of API reference information.

In this post I want to dig beneath the surface to discover how complex these APIs are, how that complexity might translate into difficulty or time spent in documenting the interfaces, and discuss some of the ways you could assign the work for creating and maintaining reference information for APIs. In another post I said, start with a list. This post looks into what happens after you have a list and need more lists to know the shape and size of your API and its documentation needs.

Some of the complexity also lies in documenting the parameters and headers for each API. Just like unearthing the walls of an ancient structure, you can look at the various ways an API is put together by looking at the number of calls, the number of parameters on each call, whether there are headers on any given call, and how the calls are grouped and related. I’ve summarized some of that below for the comparison cultures, er, grouped APIs.

[Photo courtesy pedrosz on Flickr]

Definitions

A call is defined as a GET, PUT, POST, or DELETE command sent to a resource. These commands are known as HTTP verbs.

A header is defined as an optional or required component of an HTTP request or response. There are plenty of standard headers; what I’m talking about here are the extra headers defined specifically by the API you document.

A parameter may be a query parameter that, for example, provides a way to filter the response. Parameters specify a varying part of the resource, and your users need to know which parameters are available and what they can do with them.
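
To make those three terms concrete, here’s a minimal sketch of one call against a made-up Compute endpoint, with an API-specific header and two query parameters:

import requests

# Hypothetical endpoint and token, just to show where each piece lives.
url = "https://compute.example.com/v2/servers/detail"  # the resource
headers = {"X-Auth-Token": "abc123"}                   # an API-specific header
params = {"status": "ACTIVE", "limit": 10}             # query parameters

# The call: an HTTP verb (GET) sent to the resource.
response = requests.get(url, headers=headers, params=params)
print(response.status_code)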

Running the numbers

To get these numbers, I first built each reference site so that the WADL files are built into a single folder, which lets me do a grep for a count.

So I cloned each repo, ran mvn clean generate-sources within the api-ref directory, then ran this command from within api-site/api-ref/target/docbkx/html:

grep -c "rax:id" wadls/*.wadl | sort -n -k2 -t:

Then I imported the output from the command as a colon-delimited file into a spreadsheet to get these counts.
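
If you’d rather skip the spreadsheet import, a few lines of Python produce the same per-file tallies as CSV directly (a sketch, not what I actually ran):

import csv
import glob

# Count rax:id occurrences per WADL file, the same tally as the grep above.
with open("call_counts.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["wadl", "calls"])
    for path in sorted(glob.glob("wadls/*.wadl")):
        with open(path) as wadl:
            writer.writerow([path, wadl.read().count("rax:id")])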
OpenStack API Reference metrics:

  • Number of Compute v2.0 calls: 290
  • Number of Networking v2.0 calls: 92
  • Number of Orchestration v1.0 calls: 41
  • Total documented calls: 755

Rackspace API Reference metrics:

  • Number of Cloud Files calls: 21
  • Number of Compute v2.0 calls: 70
  • Number of Networking v2.0 calls: 18
  • Total documented calls: 670

Here’s a breakdown of header and parameter counts for just a few of the OpenStack APIs:

  • Object Storage API: 12 parameters, 75 headers
  • Volume API: 23 parameters, 0 headers
  • Compute core API: 69 parameters, 1 header

Other metrics

We track doc bugs for the OpenStack API reference in Launchpad with the openstack-api-site project. There are nearly 200 doc bugs logged against the API Reference right now.

The three APIs with the most calls for Rackspace are Monitoring, Email & Apps, and Load Balancers, none of which are OpenStack APIs. So a full two-thirds of Rackspace calls are not OpenStack-sourced. Conversely, that means a full third of Rackspace calls are identical to OpenStack’s.

What are some of the differences between OpenStack and Rackspace?

  • Extensions are complete in OpenStack; Rackspace implements only a handful of extensions.
  • Internal (admin-only) and external (user) calls are documented in OpenStack; the Rackspace API reference documents only external calls.
  • Rackspace has paid API writers and accepts pull requests on GitHub; OpenStack docs are written by writers and developers in the community (often with corporate sponsors) using the OpenStack Gerrit process.

Conclusion

So that’s a lot of numbers, but what’s your point? My point is that making lists helps you determine the size and complexity of documenting multiple APIs. Not all companies or projects will have more than one API to document, but as we move toward more application interfaces for more business reasons, I believe that writers and developers need to get really accurate in their estimates of just how much time it takes to document their APIs well. Since these estimates are for API reference information only, don’t forget to also estimate the time to write and maintain viable, tested example code. That’s a post for another day. Thanks for reading about the complexity and comparison of OpenStack and Rackspace cloud APIs.

content strategy techpubs

How to get started writing API docs

I know a lot of people who want to consume awesome API docs. Let’s talk about what it takes to get started writing them. I’m not talking about completing your API docs. I’m talking about just getting started: what does it take?

For API documentation, especially for REST APIs, I’d advocate a reference-first approach. Like the couch-to-5K program for running, let’s start API documentation on your couch. You look under your coffee table and find your shoes. You pull them on (make sure you have wrinkle-free socks!) and lace them up. You’re ready!


Please write API docs! Photo courtesy jypsygen on Flickr.

If you are working on REST APIs, I’d also recommend understanding how they’re designed. I like the book “A Practical Approach to API Design: From Principles to Practice” by D. Keith Casey Jr. and James Higginbotham. You can read a sample for free online, and then support their efforts by buying a copy at a price you set!

If your work is with another style of API, be sure you understand the underlying reasons for using that interface. It’ll help you understand your audience first.
Make a list. First, a list of all the calls. Then, for each call, make a list of requirements. What must a user give to the interface to get back what they want? What are the requests, and what are the responses? Then list the optional parameters the call can take. Are there any headers that can be sent or received? Be sure to write those on your list too.
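
If it helps to make the list concrete, one entry might look like the following sketch, whether on paper or in a file. Every name here is invented for an imaginary API:

# One entry in the reference list; all names are invented for illustration.
list_servers = {
    "call": "GET /v2/servers",
    "purpose": "List the servers the caller owns",
    "required_headers": {"X-Auth-Token": "a valid token for the user"},
    "optional_params": {
        "status": "filter by server status",
        "limit": "cap the number of results returned",
    },
    "request_body": None,
    "response": "200 OK with a JSON list of server records",
    "errors": ["401 Unauthorized", "403 Forbidden"],
}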

Let’s not get tied up in tools yet; this initial writing work can be done in a notebook or any text editor. You don’t run out and buy an expensive heart monitor or activity sensor when you’re just starting out as a runner. Figure out how far you can get with a pair of shoes (your notebook) before investing in cool tools and gear. Otherwise, you’ll get distracted by the cool tools and gear and not write stuff down!

With these lists, you’re building scaffolding. Just like a running program, you need to make a pattern to learn to pace yourself and spend your energy wisely. Once you have a list of the calls, you can write or diagram the users, the tasks they want to complete, and then see if your reference is filled in for all the users and tasks. I highly advocate the reference-first approach as it’ll help you test the completeness and helpfulness of your documentation.

I have some additional posts I’m writing for this year where I want to dive into the increasing complexity of APIs so that writers and developers can estimate the amount of time needed for good documentation. I’ll also analyze some tooling for REST API documentation and offer benefits and tradeoffs for different tools. Looking forward to digging into API documentation in 2015!

community techpubs tools

Tearing down obstacles to OpenStack documentation contributions

Rip. Shred. Tear. Let’s gather up the obstacles to documentation contribution and tear them down one by one. I’ve designed a survey with the help of the OpenStack docs team to determine blockers for docs contributions. If you’ve contributed to OpenStack, please fill it out here:

https://docs.google.com/forms/d/136-BssH-OxjVo8vNoOD-gW4x8fDFpvixbgCfeV1w_do/viewform

[Photo: barriers, courtesy sameleighton]
I want to use this survey to avoid shouting opinions and instead make sure we gather data first. This survey helps us find the biggest barriers so that we can build the best collaboration systems for documentation on OpenStack. Here are the obstacles culled from discussions in the community:

  • The git/gerrit workflow isn’t in my normal work environment
  • The DocBook and WADL (XML source) tools are not in my normal work environment
  • My team or manager doesn’t value documentation so we don’t make time for it
  • Every time I want to contribute to docs, I can’t figure out where to put the information I know
  • When I’ve tried to patch documentation, the review process was difficult or took too long
  • When I’ve contributed to docs, developers changed things without concern for docs, so my efforts were wasted
  • Testing doc patches requires an OpenStack environment I don’t have set up or access to in a lab
  • I think someone else should write the documentation, not me
  • I would only contribute documentation if I were paid to do so

Based on the input from the survey, I want to gather requirements for doc collaboration.

We have different docs for different audiences:

  • cross-project docs for deploy/install/config: openstack-manuals
  • API docs, references, and standards: api-site and others

These are written with the git/gerrit method. I want to talk about standing up a new docs site that serves our requirements:

Experience:

  • The solution must be completely open source
  • Content must be available online
  • Content must be indexable by search engines
  • Content must be searchable
  • Content should be easily cross-linked by topic and type (priority: low)
  • Enable comments, ratings, and analytics (or ask.openstack.org integration) (priority: low)

Distribution:

  • Readers must get versions of technical content specific to the version of the product
  • Modular authoring of content
  • Graphic and text content should be stored as files, not in a database
  • Consumers must get technical content in PDF, HTML, video, and audio
  • Workflow for review and approval prior to publishing content

Authoring:

  • Content must be re-usable across authors and personas (single sourcing)
  • Must support many content authors with multiple authoring tools
  • Existing content must migrate smoothly
  • All content versions need to be comparable (diff) across versions
  • Content must be organizationally segregated based on user personas
  • Draft content must be reviewable in HTML
  • Link maintenance: links must update with little manual maintenance, with link validation, to avoid broken links

Please take the survey and make your voice heard! Also please join us at a cross-project session at the OpenStack Summit to discuss doc contributions. We’ll go over the results there. The survey is open until the first week of May.

community techpubs work writing

OpenStack Operations Guide Mini Sprint

[Book cover: OpenStack Operations Guide, O’Reilly edition]

We held a two-day mini-sprint in Boston at the end of January to update the OpenStack Operations Guide. You may remember the first five-day sprint was in Austin in February 2013. This time, the sprint was shorter with fewer people in Boston and a few remote, but we had quite specific goals:

  • Update from Folsom to Havana (about a year’s worth of OpenStack features)
  • Roadmap discussion about nova-network and neutron, the two software-defined networking solutions implemented for OpenStack
  • Add upgrade instructions from Grizzly to Havana
  • Implement and test the use of parts to encapsulate chapters
  • Address editor comments from our developmental editor at O’Reilly
  • Add a reference architecture using Red Hat Enterprise Linux and neutron for networking

We added and updated content like mad during the two days, with some quick wins for new content.

The two toughest updates are still in progress, and our deadline for handover to O’Reilly is this Wednesday. The first tough nut to crack was getting agreement on adding an example architecture for Red Hat Enterprise Linux. We are nearly there, with just a few more fixes to go, at https://review.openstack.org/#/c/69816/. The second is testing the upgrade process from Grizzly to Havana on both Ubuntu and Red Hat Enterprise Linux. That’s still in progress at https://review.openstack.org/#/c/68936/.

The next steps for the O’Reilly edition are proofreading, copyediting, and indexing over the next six weeks or so. I’ll be keeping the O’Reilly edition in sync with our community-edited guide. As always, anyone in the OpenStack community can contribute to the Operations Guide using the steps on our wiki page. This guide follows the O’Reilly style guide rather than our established OpenStack documentation conventions. I’m looking forward to a great future for this guide, and we’re all pretty happy with the results of the second mini-sprint.

Thanks to everyone for making this a priority! Our host at MIT was Jon Proulx, joined by Everett Toews, who braved airport layovers and snow, and Tom Fifield, who wrote the most patches despite a complete lack of sleep. Joe Topjian worked on edits for months leading up and has been tireless in making sure our integrity and truth live on through this guide. Thanks too to the hardworking developmental editor at O’Reilly who offered lunch in Boston, Brian Anderson, joined by Andy Oram. David Cramer got DocBook parts working for us in time for the sprint. Summer Long worked long and hard on the example architecture for Red Hat. Our remote reviewers Matt Kassawara, Andreas Jaeger, and Steve Gordon were so valuable during the process, and still are. Shilla Saebi gave some nice copyediting this past week. What an effort!

community content strategy techpubs tools

A Few of my Favorite Things for 2013


This year has been filled with interesting finds, discoveries, and productivity. Plus oxford commas! Here are my favorite things for 2013.

The Hunger Games trilogy, because it’s like a window into the mind of a smart writer who writes with purpose.

The Documentation chapter of the Developer Support Handbook has to be one of my very favorite things I discovered this year. I’m on the Developer Relations Group team at Rackspace, and this is a great handbook for our whole team.

Animated GIFs, pronounced jifs, am I right? OpenStack Reactions cracks me up.


Grace Hopper Conference by the Anita Borg Institute, especially the Open Source day, and the GNOME Outreach Program for Women, which OpenStack started participating in this year. Women in technology are my favorite!

The Houzz App on my Android tablet for eye candy while messy remodeling was actually happening. Plus it’s the best content remix site I’ve seen in a while, more targeted than Pinterest.

[Photo: my kids]

Probably the best photo of my kids this year. I make it a favorite because at their ages it’s difficult to get one with both of them in it.

OpenStack Docs Boot Camp

OpenStack Security Guide book sprint, read it at http://docs.openstack.org/sec/.

[Book cover: OpenStack Operations Guide, O’Reilly edition]

OpenStack Operations Guide book sprint, now an O’Reilly edition, read it at http://docs.openstack.org/ops/.

How about you? What are some of your favorite things from this past year?

techpubs work writing

Who Wrote OpenStack Havana Docs?

I know, I know, OpenStack is too obsessed with statistics for contributors. I agree! I want to rise above it, but the release-over-release trend for docs is way too tempting for me in my documentation research lab. So allow me to indulge in some analysis similar to my post about this last release, Who Wrote OpenStack Grizzly Docs? Remember? We had 79 docs contributors overall, with 3 of us writing half of the changes for Grizzly. This time we had 130 docs contributors, with 7 of us writing just over half the changes in the overarching install/config/deploy/operations guides. Progress! We also had at least three supporting companies hire writers dedicated to OpenStack upstream docs. Full of win. I love having OpenStack job postings to offer great tech writers.

[Photo: Worcester Terrace]

We added 100,000 more lines than last release. What? Yes, it’s true. Much of that is from our autodoc efforts, pulling all 1,500 configuration options directly from the code, but that’s a compelling number to share.

I ran the same scripts as last time to maintain consistency. All these stats come from openstack-gitdm, tooling that isn’t being maintained any more.

Here are the numbers for the openstack-manuals repository only:

Processed 966 csets from 130 developers
92 employers found
A total of 187524 lines added, 275056 removed (delta -87532)

Developers with the most changesets
 Andreas Jaeger 233 (24.1%)
 annegentle 89 (9.2%)
 Tom Fifield 70 (7.2%)
 Diane Fleming 70 (7.2%)
 Christian Berendt 65 (6.7%)
 Sean Roberts 44 (4.6%)
 Stephen Gordon 41 (4.2%)
 Summer Long 21 (2.2%)
 Lorin Hochstein 20 (2.1%)
 nerminamiller 17 (1.8%)
 Gauvain Pocentek 15 (1.6%)
 Emilien Macchi 15 (1.6%)
 Colin McNamara 14 (1.4%)
 Shaun McCance 13 (1.3%)
 Deepti Navale 11 (1.1%)
 Phil Hopkins 9 (0.9%)
 Aaron Rosen 8 (0.8%)
 Kurt Martin 8 (0.8%)
 Scott Radvan 7 (0.7%)
 Edgar Magana 7 (0.7%)
 Covers 80.434783% of changesets

This is another interesting data set:

Employers with the most hackers (total 136)
Red Hat                     12 (8.8%)
IBM                         12 (8.8%)
Rackspace                    8 (5.9%)
HP                           7 (5.1%)
Nicira                       4 (2.9%)
Mirantis                     4 (2.9%)
SUSE                         2 (1.5%)
Yahoo!                       2 (1.5%)
eNovance                     2 (1.5%)

I also ran these same stats for the api-site repository, where the API user docs are sourced. These docs are still quite different from a contributor and sourcing standpoint, and I’m not sure why. Rackspace dedicating Diane Fleming has made a huge difference here; think of what we could do with one more dedicated API doc writer. Okay, we can’t clone Diane, but think of the possibilities.

Developers with the most changesets
Diane Fleming 46 (58.2%)
annegentle 5 (6.3%)
Cyril Roelandt 2 (2.5%)
Kersten Richter 2 (2.5%)
Brian Rosmaita 2 (2.5%)
Rupak Ganguly 2 (2.5%)
QingXin Meng 2 (2.5%)
ladquin 2 (2.5%)
dcramer 2 (2.5%)

Employers with the most hackers (total 23)
Rackspace 5 (21.7%)
IBM 5 (21.7%)
Red Hat 2 (8.7%)

I’m also a web analytics hound. What docs were the most accessed during the lead-up to the Havana release? Here are the top five:

  1. Swift Developer site
  2. Install Guides (Basic and Deploy, both Ubuntu and Red Hat/Fedora/CentOS)
  3. API Quick Start
  4. OpenStack Operations Guide
  5. Getting Virtual Machine Images page from the OpenStack Compute Administration Guide

To me, these stats show that we’re doing the right things, such as dedicating a contractor to the install guide. Thank you, Cisco. We still have areas to improve, such as API docs and ensuring end-user docs are a top priority. Our readers are definitely after both deployment and consumption of OpenStack clouds.

techpubs work

OpenStack DocImpact Flag Walk-through

In our unconference sessions at the OpenStack Docs Boot Camp, we talked about integration with development and the DocImpact flag, and I came up with these guidelines for what a DocImpact commit message should contain. I wanted to talk about it here on my blog before posting to the wiki, to see if I’m missing anything crucial.

  • Who would use the feature?
  • Why use the feature?
  • What is the exact usage for the feature? For a CLI call, provide examples of all the parameters the patch includes.
  • Does the feature also have permissions/policies attached?
  • If it is a configuration option, which flag grouping should it go into? (Defaults and a description are already required by their gate test on nova.)

I’ll walk through it with a DocImpact bug I worked on today, titled “havana: nova Add force_nodes to scheduler hints”. I went on IRC to ask the developer the questions and got these answers:

  • Who would use the feature? Only administrators using the baremetal driver for nova.
  • Why use the feature? Baremetal is used when doing high-performance computing or standing up clouds from bare metal. It turns out this driver is getting moved out of nova soon, spans the realm of Heat + TripleO, and is still a work in progress, but it’s getting really close. HP is using it already to get compute nodes ready for action. This specific feature is used to specify exactly which node to send the next set of commands to.
  • What is the exact usage for the feature? For a CLI call, provide examples of all the parameters the patch includes. It’s a call on the nova boot command with the --availability_zone flag, and you can have nova boot --availability_zone=zone:host,node. You have to get the node in the form of a UUID.
  • Does the feature also have permissions/policies attached? In this case, it turns out some API features didn’t land in Havana, so if you have to use a UUID for a baremetal node, you have to make a SQL database call in order to get the UUID. This usage requiring database access is sort of “ugly,” so I’m still trying to decide what to do with it. Also, I recall a lot of discussion around scheduler hints passed in with the --availability_zone flag, so I’m still working on this doc bug. It would help greatly if the developer had done some of this end-user thinking up front, but he couldn’t have known back in May that the API changes wouldn’t land in September.
  • If it is a configuration option, which flag grouping should it go into? (Defaults and a description are already required by their gate test on nova.) For the baremetal driver, we already have docs for the configuration reference, but it was a good check to do. I found that the wiki page had a redirect to a simpler page name, so I’ve included that in my patch.

techpubs work writing

Discipline and Diplomacy: Docs in the Open

In open source, all sorts of interesting connections happen. In open source documentation, an even more narrowly defined group of folks connect the dots for others. Recently I was interviewed by Mirantis, an OpenStack services startup, about my involvement with OpenStack documentation. They’re doing a series of interviews with the technical leads in OpenStack. We had a good time talking, and here’s an excerpt with a link to the full interview. I wanted to share it for my readers to see my open source views as a snapshot.

Mirantis: Can you please introduce yourself?

Anne Gentle: I work on OpenStack documentation full time at Rackspace, and I actually was the second hire Rackspace did for the OpenStack project. It was the greatest match I could ever wish for. I wrote a book in 2009 about how to do community collaborative documentation, and I had volunteered a lot with open source docs projects. This job showed up in my backyard in Austin, Texas, and I just jumped at it.

Q: What is the major difference between open source and closed source documentation?

A: The first big difference is interest in openness everywhere, from authoring to publishing to display. I was even asked whether all of our fonts are open source in the first few weeks after I started! Our toolchain is open for anyone to author with, or to tinker with and re-use themselves. The second difference is in licensing. In a closed source environment, the documentation is very legally bound to provide a certain service level or billing agreement. The idea behind open collaborative docs is that anyone can edit them and, in some communities, the ethos is very involved in the attribution of content. That’s a really good case for Creative Commons licenses.

So there’s a whole range, but a lot of it is around licensing and the freedom of the content. I also believe there’s a lot of interesting innovation going on in open source. For many of the same reasons you would do open source coding, I think there’s a similar draw for open source docs.

Q: What makes open source documentation so special?

A: There is a need to have a lot of discipline around documentation, and open source surprisingly lends itself to that. Open source projects, especially ones that try to tie docs to code as much as possible, are actually going to be very disciplined in their processes. Read more…