Tag Archives: techcomm

community techpubs work writing

Book Sprint for OpenStack Security Guide

The legendary book sprint method has come through again! This past week in a bunker, I mean, secure location near Annapolis, a team of security experts got together to write the OpenStack Security Guide. I’m pleased as can be to have the privilege of sharing the epub with you here and now, the evening of the fifth day!

Download the epub file and start reading. One of the goals for this book is to bring together interested members to capture their collective knowledge and give it back to the OpenStack community.

This cover gives you a glimpse of the amazing feat this team pulled off. We’ll have HTML and PDF in the next couple of weeks to fulfill your multi-output consumption wants and needs. For now, fire up your ereader, and start reading! The team wants your input.

content strategy techpubs work writing

Tools and skills in the red

If this isn’t a snapshot of our industry, I don’t know what is.

A couple of observations:

  • “Documentations” [sic] to me indicates English-as-a-second-language speakers. The 245K members listing that term as a Skill outnumber the 107K listing “Technical Documentation”.
  • It looks like an easy popularity-contest win for “Technical Documentation” over “Technical Communication,” with nearly 5 times as many LinkedIn members citing “Technical Documentation” as a Skill.
  • Content strategy as a Skill listing is growing 16% year over year.

Fascinating snapshot. What do you think of this data capture at this point in time?

tools work writing

DocBook, ePub, Hackathon, What More Could You Ask For?

This Friday, on 11/11/11, the Austin Rackspace office is holding a Hackathon. The projects range from “fix the arcade game” to “install notification system to indicate availability of the men’s room” to my pet hack project, “create epub output for Rackspace and OpenStack manuals.”

Here’s a short introduction about making epubs from the FLOSS Manuals book, E-Book Enlightenment.

Of all the formats for e-books only EPUB combines small file sizes with the ability to do formatted text and illustrations. An EPUB is like a website contained in a Zip file, with a Table of Contents attached. It is also in one important way different from a website. A website is made with HTML (usually) but an EPUB is made with XHTML.

The difference is small but crucial. HTML is meant to be forgiving. If you make a web page you can leave out some tags, fail to close tags, or close tags in a different order than you opened them in. A web browser is supposed to forgive that, as much as possible. XHTML, on the other hand, is like HTML that is not forgiving. You can’t leave out a tag or put in a tag where the XHTML browser does not expect it. If an XHTML browser discovers an error in your page it can simply refuse to display it.

The end result is that an XHTML browser is easier to make than an HTML browser. A lot easier. It does put a burden on the e-book author to get his tags right, but in practice you’ll never create an XHTML file by hand.
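To make the “website contained in a Zip file” idea concrete, here’s a rough Python sketch of the minimal skeleton an EPUB 2 file needs: an uncompressed mimetype entry stored first, a META-INF/container.xml pointing at the OPF package file, and the XHTML content itself. (Real EPUBs also want an NCX table of contents; this is an illustration of the structure, not our production toolchain.)

```python
import zipfile

def make_minimal_epub(path):
    """Build a bare-bones EPUB: a zip whose first entry is an
    uncompressed 'mimetype', plus a container, a package file,
    and one strictly well-formed XHTML page."""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and must be stored, not deflated.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""", compress_type=zipfile.ZIP_DEFLATED)
        # Minimal OPF: metadata, manifest, spine (real readers also expect an NCX).
        z.writestr("OEBPS/content.opf", """<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Demo</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="id">demo-0001</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>""", compress_type=zipfile.ZIP_DEFLATED)
        z.writestr("OEBPS/chapter1.xhtml", """<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title></head>
  <body><p>Every tag closed, or the reader may refuse the page.</p></body>
</html>""", compress_type=zipfile.ZIP_DEFLATED)

make_minimal_epub("demo.epub")
```

Unzip any epub off your ereader and you’ll see the same shape.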

With automation in mind, we’re going to use our existing toolchain to make the epub. Earlier this year, Robert Nagle, a tech writer in Houston who writes for the TeleRead website, wrote up his findings about DocBook and epub. What’s interesting to me is that he published that blog post last year (11/7/10), about making epub from DocBook source, and this past September said his next priority is moving his workflow to Oxygen + Ant + DocBook. David Cramer, our Doc Build Developer, has briefly tested the Maven-based toolchain by simply adding the generate-epub goal to a pom.xml file and building. That method did not copy over images. Then he tried building as part of the clouddocs plugin and received an error. We’ll start our debugging with the toolset, but we’ll need to debug our DocBook source as well.

Most of the “success” of the hack fest will be having fun and not sweating the small stuff. To me, we can call it done when we have epub examples for one OpenStack book and one Rackspace book that you can page through and read on a plane. Images should behave within reason, tables should be readable, and notes should be designated as such. Beyond those criteria, we’re exceeding expectations.

In case you’re curious about our toolchain for OpenStack docs, I have instructions on the OpenStack wiki. If you’re on a Mac or Linux machine, here are the quick steps for getting started with the toolchain for the openstack-manuals project:

1. First install the Apache Maven project.
With MacPorts already installed on a Mac, you can do this:

sudo port install maven2

Or, on Ubuntu:

sudo apt-get install maven2

2. Install Git by referring to Mac or Linux instructions.

3. Get (git it? Ha!) and then build the docs with these commands:

git clone https://github.com/openstack/compute-api.git
cd compute-api/openstack-compute-api-1.1
mvn clean generate-sources

You will see a /target directory containing HTML and PDF output. Perhaps after Friday, you’ll also see epub output, who knows? While Friday’s Hackathon is for Austin Rackers, I’m happy to share our experiences here. Here’s hoping the OpenStack community will benefit from our hacking.

techpubs wiki

Why Wiki?

In OpenStack-land, the wiki was chosen before I got here. It has a couple of flaws for my vision for open source documentation, which became more apparent when I recently outlined my reasoning for what content goes where. By walking through the “what goes where” talk about documentation and audience, I realized something about the system we have set up (and will tweak more): a wiki versions a page at a time, but our doc system with comments versions a site at a time, namely a release at a time. It’s an important distinction, because it determines what a web “page” contains. You need to be able to articulate how a web page is updated so that you can tell authors how much information belongs on a page.

A page at a time is really tough for ongoing maintenance when you haven’t chosen the right amount of information for a page. There are also difficulties with the rather immature technology in many wikis. Wikis were designed for simple editing and fast publishing. What if you have a sweeping name change? Anyone doing tech comm in a wiki knows that’s a headache for many wiki systems. What about a final spell check? Page at a time. How about search and replace? Page at a time. And let’s not talk about the times when you have to add an entire column to a table in ASCII-based wikitext. Your wrists will revolt, I assure you.
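That table chore is exactly the kind of thing you end up scripting rather than retyping. Here’s a rough Python sketch, assuming a simple pipe-delimited table with one row per line; real MediaWiki table markup is messier, so treat this as an illustration of the idea, not a drop-in tool:

```python
def add_column(table_text, header, default=""):
    """Append a column to a simple pipe-delimited text table.
    Rough sketch: assumes one row per line, cells separated by '|',
    and that the first line is the header row."""
    lines = table_text.strip().splitlines()
    out = []
    for i, line in enumerate(lines):
        value = header if i == 0 else default
        # Drop the trailing '|', then append the new cell and re-close the row.
        out.append(line.rstrip().rstrip("|").rstrip() + " | " + value + " |")
    return "\n".join(out)
```

Five lines of code, and your wrists thank you.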

But an interesting expectation for a wiki that wasn’t considered when choosing ours is the need for comments on each page. Right now the only OpenStack web systems that enable comments on a page are the OpenStack blog and the docs site. The docs.openstack.org site is built with DocBook XML with Disqus comments embedded on each page. It’s not quite perfect: we’re learning as we go that not everyone wants to moderate every comment from every book on the site, and we’re still figuring out how to turn off hundreds of comments at a time, but it’s a great solution to a specific need. Yes, even comment system selection needs an information architect’s analysis before you begin, but that’s the topic of a future post.

Ironically, our wiki doesn’t have a way to comment. So we just use it for project documentation and any comments on a project design are actually done in person at a Design Summit, which is coming up next week. Even with the great opportunity for in-person interaction at the Summit, I get the feeling there are more people wanting to “talk back” and give feedback. And certainly people want to receive feedback, to a point where they specifically bring docs to me and request publishing through the docs.openstack.org system due to the commenting system.

So what this meandering post is coming around to stating is this: You don’t need a wiki to gather feedback on your docs. Find a way to embed comments on each page and a way for collaborators to edit and you’ve got two of the basic end-goals done.

Now, make sure the end goals are known in the first place. It’s possible you need a wiki because your information would be best written a page or an article at a time. That’s obvious through the hindsight lens, but it takes discipline to state it up front. Answer the question “why” about five levels down and it’s likely you have a solution. It may or may not be a wiki, but if you’re producing web content, readers will come to it with certain expectations.

techpubs tools wiki work writing

Observations from the Open Help Conference

I attended the Open Help Conference in lush, green, over-20-inches-of-rain Cincinnati over the weekend and learned so much. I wanted to share my observations in a longer format than the 140 characters I used sporadically during the conference.

First of all, if you want to read through the presentations, we’ve gathered them on the Open Help event page in Slideshare. My presentation was called Sprints and Stacks: Building a Documentation Community. I walked through my experiences ramping up open source documentation for the OpenStack project. I was pretty honest about my expectations going in and the reality that ensued, which you can read about in the notes.

My Giveaways

These points are all made in my talk, but I wanted to share them here as well. Here are some of the surprises I’ve had while working on OpenStack:

  • Publishers want OpenStack content. I feel like I’m in a fight to be an acquisitions editor some days. Everyone wants an OpenStack book, or blog entries about OpenStack to publish on their site.
  • Doc sprint timing must occur with specific releases. Originally I thought I’d just run sprints at the Design Summit twice a year. Now I see that sprints should probably occur just prior to releases.
  • I once thought docs were a good entry point for new contributors. I now sense that it’s really difficult for new people to contribute to the docs, and I also believe that developers should write for developers.
  • Doc contributors need access to people they can interview incessantly. Plus they need access to hardware, big time. Neither is easy to offer for OpenStack right now. I recognize this shortfall, yet haven’t solved the problem completely.
  • I am seriously shopping around the idea of holding an OpenStack tweet chat regularly. If you are interested in hosting a regular tweet chat, please let me know.
  • I have been amazed at the quality of content about OpenStack coming from bloggers. I am happily contacting them and incorporating CC-licensed content.

My Takeaways

I felt like I was in such great company. Shaun McCance did a great job recruiting like-minded people who love to share, discuss, and have fun with documentation. Attendance was split half-and-half between women and men, with lots of women presenters. This ratio is a big deal to me as I get deeper into open source. Gender balance in open source, and in technology in general, is a difficult, sometimes contentious topic, but we’re definitely onto something good with this group. One of my favorite tweets was this one:

‘The number of women in an IT organization is the same as the number of people named “Dave”’ (quoting @Loquacities (Lana Brindley) #openhelp)

and another great one from @jenzed:

The discussions at #openhelp are much enriched by having participants who are not members of the open source choir.

We had fewer than thirty people in attendance, but these were “my people.”

XML versus wiki

I have to talk about the dichotomy between using a wiki for community-contributed documentation and using XML plus a version-control system for collaborative authoring. Both methods were well represented, by Red Hat and Mozilla respectively.

One of Red Hat’s 60 technical writers, Lana Brindley, spoke about their awesome XML-based writing workflow in Open Source Documentation in Four Easy Steps (and one slightly more difficult one). They have dedicated editors, a style guide, an IRC channel dedicated to grammar, and a search engine specifically created for writers to find content to reuse, plus a topic-based doc platform that lets writers assemble doc builds, which they built themselves when they realized they didn’t want a component CMS. What she described was a very mature, high-quality, yet agile, flexible, and open documentation process. Red Hat rivals any of the large enterprise documentation projects I’ve seen and accomplishes everything an enterprise needs to, yet with open source tooling, standards-based XML, and somewhat hacked-together tool chains. Their content is translated into 23 languages, a feat otherwise accomplished only by companies such as Dell, IBM, and Microsoft. Their culture sounds amazing: working for the “good guys,” with tales of writers and developers working together to build the needed tool chains. I learned a lot about Publican and translations, and about when to keep tweaking and when to release something to the wild.

On the Mozilla side, Janet Swisher presented a talk titled Engaging developers in Mozilla’s documentation. Their documentation process is tightly coupled with development practices, including tracking doc requests in Bugzilla. Considering that they write entirely on the wiki-based Mozilla Developer Network, and that they’ve refined their workflow over years, it seemed like a perfect match. They are also running frequent doc sprints, now on their third this year. Their easy-to-use doc tools paired with their high number of page views have created a perfect storm for recruiting contributors and content. For example, Google’s writers have chosen to put content that’s not strictly about Chrome but geared toward Chrome web development on MDN. It sounds like a perfect combination: curation from the creation point. She also covered the reasons people don’t contribute. One, the site is so beautiful and engaging that people don’t see the Edit button (they don’t know it’s a wiki); two, they’ve got “yet another login” syndrome and don’t want to bother setting up an account; or three, they’re too intimidated by the relative “fame” of the original author to change “the” documentation.

Lots of Work

During Dru’s talk about Starting an Open Source Certification Program, I tweeted the following and @Sheppy (Mozilla writer Eric Sheppard) agreed with me.

Great talk from BSD author and community manager Dru Lavigne about certification programs at #openhelp. Wow just wow, the amount of work.

We’re in the early investigation stages of certification for OpenStack, and we have a long road ahead of us. From the number of surveys, to psychometricians (I had to look that one up), to collaborative exam design, open source certification is a unique program to run. Exam delivery and Angoff sessions were new concepts to me. Exam delivery has been an ongoing, six-years-in-the-making process for BSD. Six years gives you pause. These groups are serious about certification, and it shows. Fortunately we’ve hired Belinda Lopez, who can navigate these areas and knows how to engage the community.

On the second day, we discussed language and localization, and I learned that I can export our DocBook files to po files for translators to fit into their workflow quite readily. You most likely have to be strict about freezing the English version, or you’ll end up with widely out-of-sync documents. Since I already have two types of source files, both RST and DocBook, I’m not certain about freezing the English version just yet. Perhaps after some reference configurations are available and documented, and I get more hypervisor docs, we can look into translations. There was lots of respect in the room for the project managers of translation projects in open source.
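In practice that export would go through a tool like xml2po or Publican rather than anything hand-rolled, but the core idea, pulling translatable element text out of DocBook into gettext msgid entries, is easy to sketch. This toy Python version (the element list and helper name are my own illustration) ignores inline markup and context, which the real tools handle:

```python
import xml.etree.ElementTree as ET

def docbook_to_pot(xml_text, tags=("title", "para")):
    """Toy sketch: extract text from translatable DocBook elements
    into gettext-style msgid entries. Real exports (xml2po, Publican)
    handle inline markup, plurals, and context; this does not."""
    root = ET.fromstring(xml_text)
    entries = []
    for elem in root.iter():
        # Strip any namespace prefix from the tag name.
        tag = elem.tag.rsplit("}", 1)[-1]
        if tag in tags and elem.text and elem.text.strip():
            text = " ".join(elem.text.split())
            entries.append('msgid "%s"\nmsgstr ""' % text)
    return "\n\n".join(entries)

sample = """<section>
  <title>Installing the toolchain</title>
  <para>Install Apache Maven before you build the docs.</para>
</section>"""
print(docbook_to_pot(sample))
```

Translators then fill in the empty msgstr lines in their usual po workflow.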

We also had a great discussion about recruiting writers and the difficulty of choosing channels for communication: go where your people are, or branch out to lots of social media sites? One of the Mozilla writers, who has now participated in two doc sprints, said he found out about the doc sprint by searching for free t-shirts on Twitter. He was a shining example of finding recruits through lots of channels, not just the classic ones for your project. To me, this shows that social media and tech comm have a connection, and that outreach and community support are upward trends. Jennifer Zickerman’s talk, Coordinating Documentation and Support: Turning Complaints into Contributions, was outright inspirational. Thunderbird has 10 employees, 50 contributors, and over 10 million users. Those numbers are mind-boggling, yet they manage and thrive. I tweeted,

Seriously cool way to create a support community with Twitter – see the Army of Awesome at http://bit.ly/mRty7U #openhelp

I’m very impressed with Mozilla, an organization that’s willing to take the negativity in some tweets and turn it into altruistic goodness and pay-it-forward attitudes with the Army of Awesome. These cutting-edge techniques are what we all should look for in open source projects. They get the job done with what they have because they are driven to help.

I relish the work ahead, and I am so pleased to have the opportunity to work on open source docs with OpenStack. We had a great time at Open Help, and Scott Nesbitt has a nice write-up and links to each presentation in his post, Looking back at the Open Help conference. We’re all hoping Shaun has the energy to put one together again next year!

techpubs work writing

Playing with the Future of Technical Communication

I have a great group of mom friends who also happen to be technical communicators. One day last month, my friend posted this picture her 6-year-old daughter made and asked if technical writing is genetic. Ha!

The first 12 steps for making a dinosaur of Play-Doh

Her mom says, “These are her first 12 steps of 20 that show how to make a Play-Doh apatosaurus.” You may know the apatosaurus as the brontosaurus.

This elementary-school student definitely “gets” that the future of tech comm is in pictures. She knows her audience: likely a non-reading audience that can recognize numerals. Plus, she starts at the very beginning, piecing together the steps and not making any assumptions. She even shows how to pop the top off the container of Play-Doh.

I loved this illustration and just had to share. I got permission from the copyright holder through her legal representative, her mom. :) Thanks to both of you for sharing your talents!

So, how do we learn to get the steps right? One educational exercise I’ve used to demonstrate technical communication is to have students write out instructions for preparing a bowl of ice cream. You can have students write and illustrate the steps, and then exchange instructions to test their quality. Ice cream may never land in a bowl, or there may be no scoop tool, but it sure is fun to take a task and break it into the smallest self-contained steps that you can. Students learn quickly that you should write down prerequisites and ensure your assumptions about the starting point match the end user’s. I find myself coaching technical writing now that I work with a volunteer writing group. I wonder if I can run an ice cream demo at one of the doc sprints sometime. What are some other coaching ideas for technical writers?


The Data Transaction

Several times lately I’ve caught myself overthinking a bit while typing online. Just this week, for example, I typed a tidbit of info, an answer to a question, as a reply to a friend’s status update on Facebook, only to delete it without clicking the Reply button. Once it was in response to a post about countertop materials. Another time it was a query about the best Mac money management software to replace MS Money. Both times, I checked myself because I realized that I didn’t want Facebook to have knowledge of my opinions or preferences! Now, those who know me would say that I’m normally very open and giving with information online. I love sharing my experiences. But lately I get the willies when I’m on the Facebook site and think of the enormous amount of data they have about me.

You may scoff at such a realization – especially since I’ve been blogging for five years. But somehow my blog is different. I own the archives, I know how to take down posts, and though they are likely forever archived on the ‘net somewhere, I feel a little more control over their availability. With Facebook, I have no control over their storage or retrieval of my opinions.

Funny thing is, if I still want to give my friends a bit of info or advice, I’ll often hop over to… wait for it… GMail. Why do I trust Google with these tiny tidbits of information about myself and not Facebook? I’m not sure I know the answer yet, but I think about it more and more lately.

Affecting Online Help Statistics

Now, with my blog, I nearly always bring my thinking around to: how does this affect online help? I recently presented at the WebWorks RoundUp here in Austin, and was excited to hear about their new product, WebWorks Reverb. But one audience member asked a great question during my talk about web analytics: “How will data collection be affected if Congress passes a law that regulates how much information can be collected from a browser?” It’s a great question. In fact, just this week, browser makers Mozilla (Firefox) and Google (Chrome) made privacy plugins available that give users the ability to select websites where they do not want to be tracked, using a header indicator rather than blocking cookies or scripts.
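That header indicator works because the browser simply adds a DNT: 1 request header and leaves it to the site’s analytics code to honor it. Here’s a minimal, hypothetical sketch of honoring it server-side (the function names are mine, not from any particular analytics package):

```python
def should_track(headers):
    """Honor the Do Not Track request header: the privacy plugins
    set 'DNT: 1' rather than blocking cookies or scripts, so the
    server side has to opt the user out of analytics itself."""
    # Header names are case-insensitive in HTTP, so normalize first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

def render_analytics(headers, snippet="<script>/* tracker */</script>"):
    """Only emit the analytics snippet for users who haven't opted out."""
    return snippet if should_track(headers) else ""
```

Which means our page-view counts only cover the readers who let us count them.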

Here’s my take on where we stand today as we collect information about our online help and user assistance sites.

Thing is, you can already protect yourself online while browsing by installing plug-ins that refuse cookies and limit tracking of personal information and identity, but they’re kind of a pain. And you have to understand the whole concept of what’s being collected and draw your own lines. Certainly, depending on the product you’re documenting, the percentage of people with high protection levels on their browsers will be higher or lower. Government or regulated industries may already lock their employees’ browsers down tight, preventing data collection while they browse. In open source communities, I believe there’s a healthy disdain for data collection and a heightened awareness of what’s going on under the browser’s hood. It’s possible I’m only tracking 90% of my readers, or fewer.

But I think that, generally, readers of online help sites are willing to inform us of their searches and their time spent on site, if it helps us improve the content. Sarah Maddox’s post shows that Atlassian and its customers get great value from their online user documentation. As we implement more and more conversational content, it’s apparent that readers want to tell us what’s working well and what’s not. I’m heading to O’Reilly Strata next week to learn more about big data, telling data stories, and Twitter data analytics. I hope to learn more about data applications for technical communication.

What do you think? Are you more aware of the data you’re giving away about yourself? Are the trade-offs worth the data transaction?

techpubs tools

Why You Might Care about the Cloud

Talk about clouds, hybrid clouds, private clouds, and suddenly throw in software as a service and platform as a service, and you might be wondering, what does it mean? Whoa double rainbow, as some would say.

I wanted to put some perspective on the cloud for technical communicators. In the past I’ve run a great guest post about cloud computing from Ynema Mangum, titled Clearing the Air on Cloud, but when I saw Ellis Pratt tweet about using the cloud for one of his projects, I followed up with him to learn more. Here’s an interview with Ellis Pratt of Cherryleaf about his recent experiences with cloud computing.

Q1: Could you describe the project when you recently used a cloud computing environment?

We’ve created a report publishing system, based on Confluence for a client. The reports are fire risk assessment reports, so they want the ability to complete the reports “on the road”.

Q2. What compelled (or required) you to use virtual computers available on a network for the project?

We put a version of the prototype in the cloud for a number of reasons:

  • The client’s IT person hasn’t yet installed Confluence on their Virtual Private Network (VPN), so to keep the project rolling along, we created a version in the cloud that they could access to review the prototype of the system.
  • We’d also outlined in our proposal how they could host the system in the cloud instead of on their VPN, so it gave us the opportunity to show them what it would mean for the report writers.
  • Our own VPN can be slow at times and holds sensitive data, so it was an excuse to test out the potential of a hosted application server for our own use.
  • There may be cases in the future where clients would want to access a documentation solution hosted by us, so we wanted to research the possibilities and potential.

It was prompted by:

  • A chat I had at a 4Networking (business networking) meeting with a software developer, who said how cheap it was to create an application server these days. We’d looked at it about 6 months ago (when we put our file storage in the cloud), but had found it a bit too pricey. Prices seem to have come down.
  • A blog on the Confluence blog about how you can turn an application server on and off, so you’re only paying for it when you need it.
There was also an underlying interest. We’ve been working on making it possible to work away from the office for longer periods of time. There’s quite a difference between working off-site for a day and working off-site for a month.

Q3. Have you seen the Microsoft TV ads with the line “To the Cloud” (link)? What’s your reaction to that type of consumer messaging about the cloud?

Those adverts haven’t been running in the UK, as far as I’m aware. The message seems to be that the cloud is for when you’re up against a deadline and when you want to be cool. I suspect its aim is to get non-technical people to associate the word “cloud” with Microsoft, so they go to the Microsoft site if and when they want to investigate what “the cloud” is.

It doesn’t tell you what the cloud is. Collaboration can be done on a LAN, a VPN, or a wiki, so it doesn’t tell you how the cloud differs from those options. I guess the key message to the consumer market is: work on any computer, anywhere you like. I’d like them to make some comparison to Google Mail (or Hotmail), as many people will have experience of those.

Q4. For tech comm, do you think relevance to cloud computing lies in collaboration (access to more people and networks) or scale (access to more computing power) or another aspect?

I’d say collaboration, bypassing the IT dept (!) and the ability to work from home.

Q5. What’s a great way to introduce cloud computing to technical writers? How do you make it relevant?

I’d suggest real-time collaboration, the ability to work from anywhere, the ability to have a fast system, the ability to test software among a group, user-generated content, and the ability to make stuff web-accessible in a way that doesn’t put any company-critical content at risk.

Q6. Do you agree with the statement that cloud computing is becoming “the 21st century equivalent of the printing press,” from Nicholas Carr’s blog entry, The cloud press?

No, I believe collaborative authoring/cognitive surplus/wikis are the 21st-century equivalent of the printing press: a low-cost way to get more people to write (and read) more quickly.

The article does raise the Wikileaks/Amazon issue. Putting the rights and wrongs of Wikileaks aside, Amazon’s actions do show that cloud providers can terminate their service to you in an instant if they choose to. Although it’s unlikely that many of us will host anything as contentious as Wikileaks, it will lead people to ask: what would happen if they did pull the plug on us? There are also national laws to consider around data protection: EU laws, the Patriot Act, etc. We have our cloud data hosted in the European Union, for example.

Q7. How does cloud computing affect tech comm delivery?

It makes user-generated content and a more distributed authoring team more feasible. It also makes it easier to get contractors using your software.

It means we can do more Webby things with documentation.

The biggest challenge we faced was setting up the server. You can end up in a no-man’s-land between the hosting service and the application vendor, where there’s no one document telling you how to install the software in the cloud. It’s easier with Windows Server than with Linux, but then you’ll be paying more for your hosting.

Thanks Ellis for sharing your perspective and experiences!

Advocate for Community Documentation

“Anne, I see you as an advocate for community documentation” – what a great compliment. I was so pleased with the response to my STC Summit talk last week, Strategies for the Social Web for Documentation. Here’s the short description of the talk:

Let’s say that the most driven and driving developer on your team, who also happens to be a popular blogger, comes to you and asks why your end-user documentation doesn’t allow comments or ratings. Rather than stammering something about Wikipedia’s latest scandal, or reaching for imperfect responses that sound like lame excuses, do your homework and learn best practices from others who are implementing social web content that is conversational or based on community goals. Along the way you may realize there are good reasons not to implement a social media strategy, based on studying the potential community and time you’d spend in arbitration with community members on contentious issues, or you may discover that you can borrow from benefits of a single approach while still meeting business goals.

Objectives:

  1. Identify specific types of tools on the social web, such as tags, blogs, wikis.
  2. List risk areas and pitfalls.
  3. Identify writers’ roles with social media (instigator or enabler).
  4. Plan a strategy of listening, participating, building and then offering a platform or community.

I’ve also posted the slides on Slideshare for all to see and share with others.

While talking to technical writers who are struggling to find the vocabulary to describe their new way of working in a content curator or community role, I got the sense that we’re all trying to reinvent our approach to traditional documentation. Coming together at a real-time, in-person event helped me focus my thinking and I appreciate all the dedication that went into the event.

Talkin’ ’bout a revolution at the STC Summit 2010

I don’t know if it’ll sound like a whisper, but I am excited that my proposal was accepted for the 2010 STC Summit in Dallas! Here’s what I’ll be presenting:

I’m participating in a Content Strategy Progression as described on the STC Content Strategy Special Interest Group blog entry on said progression. I’ll talk about content that is “Shareable, Searchable, Sociable, and Don’t Forget Syndicated.” That should be a fun session, and I’m just sad I won’t be able to wander around the room myself and soak in the Content Strategy goodness!

My proposal for a presentation titled “Strategies for the Social Web for Documentation” was accepted, hurrah. Here’s what I have as learning objectives for the session, but I’d love to hear your questions as well before I prepare the slide deck. What would you want to learn?

Session Objectives:

  1. Identify specific types of tools on the social web, such as tags, blogs, and wikis
  2. List risk areas and pitfalls
  3. Identify writers’ roles with social media (instigator or enabler)
  4. Plan a strategy of listening, participating, building and then offering a platform or community

Session Description:
Let’s say that the most driven and driving developer on your team, who also happens to be a popular blogger, comes to you and asks why your end-user documentation doesn’t allow comments or ratings. Rather than stammering something about Wikipedia’s latest scandal, or reaching for imperfect responses that sound like lame excuses, do your homework and learn best practices from others who are implementing social web content that is conversational or based on community goals. Along the way you may realize there are good reasons not to implement a social media strategy, based on studying the potential community and time you’d spend in arbitration with community members on contentious issues, or you may discover that you can borrow from benefits of a single approach while still meeting business goals.

(Kudos if you recognize the song lyrics to which the title and lead refer.)