Q&A with editing technologist James Mathewson

James Mathewson recently caused a buzz among editors on Twitter and elsewhere with a study that found a quantifiable value to editing.

Mathewson is the Global Search Strategy and Expertise Lead for IBM. He has trained more than 1,000 IBM employees on writing and editing topics.

In his 15-year career as a writer and editor, Mathewson has held positions as editor in chief of ComputerUser magazine, ComputerUser.com and ibm.com. He is co-author of “Audience Relevance and Search: Targeting Web Audiences with Relevant Content.” He blogs at writingfordigital.com.

In this interview with ACES, conducted by e-mail, Mathewson talks about his job and research about editing.

Question: Describe your job at IBM. What is your typical day like?

Answer: My job is evolving from editor in chief of ibm.com to global search strategy and expertise lead for IBM. As editor in chief, I set standards and created education and governance to help diverse content teams publish more effective Web content. It was a big job with more than 100,000 stakeholders covering more than 100 IBM brands in 91 countries. Because our top problem is creating more content than our users could possibly find and consume, most of my effort centered on trying to help teams publish only the most relevant content for their audiences. A big part of that involved developing and delivering education related to developing search-optimized content.

My new job is related to the EIC role. Because we still have a long way to go to get the 100,000+ stakeholders trained and enabled to create search-optimized content, I am now focusing on this aspect of the EIC role exclusively. The rest of the EIC job can pretty much run itself, because I helped develop governance systems that enable a more democratic model of standards creation and maintenance. Instead of me leading all that work, it’s distributed among several other people.

In a typical day, I work with colleagues around the globe using conference call and Lotus Live Meetings to collaborate on presentations for executives and stakeholders. Executive presentations typically recommend investments or strategic priorities related to our search-first content strategy. Stakeholder presentations are interactive workshop modules that teach writers and editors Web content best practices.

For example, this morning, I worked with two colleagues in our CIO office to develop a business case for more investment in content strategy, search transformation and Web optimization across the enterprise. This afternoon, I will work with colleagues across the company on our ongoing Search Transformation project. That project is a complete overhaul of the search functions on ibm.com and our intranet. Later I will connect colleagues who do similar things in different functional groups to help them collaborate on the larger content governance mission. This will result in new education avenues for writers and editors.

Q. Why did you decide to try to put a value on editing, and how did you go about doing that?

A. It is difficult to demonstrate the value of editing. When organizations look for aspects of content processes to cut in order to save money, editing is often at the top of the list. Against this backdrop, we continually need to justify editing as a vital part of any content process, particularly on the Web, where writers often don’t understand their audiences as well as they should.

The A/B test was conceived to try to preserve the editorial roles we had and to grow the editorial function where we see gaps in our content processes, particularly in emerging markets. I am pleased to say that the study is enjoying some success within IBM, as it is driving investment in more editorial resources. It is also validation for this kind of testing; I expect more of these A/B tests in the future to validate and strengthen the case for editors in the Web content process.

A/B tests take existing content and change it in one crucial way. The two versions are then published simultaneously to randomly selected visitors. The results (bounce rates, engagement rates) are measured and compared.

The version of the page with the lower bounce rates and the higher engagement rates is then adopted as the only version. Typically, you repeat the process by changing another thing on the page and measuring the success of that change. Over time, this leads to optimized Web experiences for the audience.
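The comparison step Mathewson describes can be sketched in a few lines of Python. The visit, bounce, and click counts below are hypothetical illustration data, and the decision rule (lowest bounce rate first, highest engagement rate as tiebreaker) is an assumption for the sketch, not IBM's actual methodology.

```python
# Minimal sketch of an A/B comparison: measure bounce and engagement
# rates for each variant, then adopt the better-performing page.
# All numbers here are made up for illustration.

def bounce_rate(visits, bounces):
    """Share of visitors who leave without interacting."""
    return bounces / visits

def engagement_rate(visits, clicks):
    """Share of visitors who click through to further content."""
    return clicks / visits

def pick_winner(variants):
    """Adopt the variant with the lowest bounce rate; break ties
    with the highest engagement rate."""
    best = min(
        variants,
        key=lambda v: (bounce_rate(v["visits"], v["bounces"]),
                       -engagement_rate(v["visits"], v["clicks"])),
    )
    return best["name"]

variants = [
    {"name": "A (unedited)", "visits": 1000, "bounces": 520, "clicks": 180},
    {"name": "B (edited)",   "visits": 1000, "bounces": 410, "clicks": 240},
]
print(pick_winner(variants))  # the edited version wins on both measures
```

In a real test the traffic split is randomized and the difference is checked for statistical significance before adopting a winner.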

Google does this every day on thousands of its search engine results pages, which is why it is hard to sustain search ranking if visitors who come to your site from Google bounce off your page, meaning they leave within 10 seconds without clicking anything but the back button.

Technically, our test was not a true A/B test because we changed multiple things on the test pages at one time. This is called a multivariate test, because it tests the result of making several changes at once. These are done when you don’t have time to test one thing at a time, or when you want to test the value of a process or best practice. In this case, the best practice we were testing was the existence of an editor in the Web editorial process.

Q. In your study for IBM, you conclude “that well-edited pages do 30 percent better than unedited pages,” meaning that readers clicked on links more often on edited pages vs. unedited ones. What suggestions would you have for proving the value of editing to organizations in which it is more difficult to set such a value — newspapers, magazines, nonprofits, government and so on?

A. It is harder to measure in the print medium. The way we did this when I was EIC of ComputerUser was through reader studies. We interviewed a representative sample of our regular readers and asked questions about what contributed to their reader loyalty. Quality editorial (the seven c’s — clean, credible, concise, compelling, complete, coherent and conversational) was always tops on the list by a long shot.

These tests consistently showed how important editors were in the magazine. Despite this, publishers were always looking to cut costs in the hope of dropping more of the revenue to the bottom line. In my experience, every cut to editorial resources resulted in a corresponding cut to revenue. I watched as ComputerUser slowly died the death of a thousand cuts. I was one of the last cuts. Perhaps that experience heightened my resolve to demonstrate the value of editorial investment.

We measure print and other media value in IBM through our Brand Health Monitor, which is a similar survey that measures the effect quality editorial has on brand perception, among many other things such as design. Hint: It helps a lot.

In our book (Audience Relevance and Search: Targeting Web Audiences with Relevant Content), we emphasize the differences between the print and Web medium. Testing the value of quality content is a key differentiator. If your Web users don’t like something, you can change it much more quickly than you can in print, if you can change it in print at all. That’s why A/B tests are so important in the Web medium. I would think this is true for all Web properties, regardless of the industry.

Q. If you were asked to send your bosses a memo on the value of editing, what concrete things would you mention beyond the 30 percent better engagement?

A. I did this just recently, with the A/B test I referenced on the blog. My VP was thrilled with the results. In addition, a lot of third-party resources show more indirect benefits, but nonetheless corroborate the A/B test. The best we have found are on the UsabilityNet site, especially this one: http://www.usabilitynet.org/management/c_value.htm

Q. I like your use of history and its relationship to editing with the example of Jefferson and “citizen” vs. “subjects.” Can you think of instances where editing to get a clearer word choice would have a direct effect on the bottom line?

A. We are testing using more verbs in our headings, especially active verbs. (Gerunds don’t do well in our studies.) We have found that our users use a lot more verbs in their search queries than we thought. The thinking is, if we use more active verbs in our headings, more of our target audience will find our content. Active verbs also attract more clicks than nouns. That is a highly generic statement, however.

As far as specific word choice, I can give you an example from a recent case study. I was in charge of pulling the keywords for a speech by IBM CEO Sam Palmisano. The speech was long and multifaceted, so trying to find one word or phrase that encapsulated its meaning was a considerable challenge. I ended up suggesting “sustainable development.” Even though he never used that exact phrase in the speech, every project he highlighted in the speech was a sustainable development project. I have since discovered that semantics alone shouldn’t dictate keyword choice.

There are all kinds of factors that come into play in the optimal words. In this case, the biggest one is that “sustainable development” is not a term typically associated with industry, but with government and academic activities. The other thing is, Google doesn’t do deep semantics. It does latent semantic indexing (LSI), a statistical technique that counts word frequencies and looks for matches to the most frequently used terms. Even though the speech was about sustainable development, the fact that the phrase never appeared in the speech prevented Google from making that connection.
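The statistical point can be illustrated with a rough sketch. This is plain term-frequency counting, not Google's actual ranking and not full LSI (which decomposes a term-document matrix); the sample speech text is invented. The point it shows is that a phrase which never occurs in the text has a frequency of zero, so a purely statistical matcher cannot connect the document to that keyword.

```python
# Illustration only: a frequency-based matcher cannot surface a keyword
# phrase that never literally appears in the text, no matter how well
# the phrase describes the content semantically.

import re
from collections import Counter

def term_counts(text):
    """Frequency of each word in the text (lowercased)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def phrase_frequency(text, phrase):
    """How many times the exact phrase occurs in the text."""
    return len(re.findall(re.escape(phrase.lower()), text.lower()))

# Hypothetical speech text: every topic is sustainable development,
# but the phrase itself is never used.
speech = ("Our projects cut emissions, modernize the grid, "
          "and build smarter cities for the long term.")

print(phrase_frequency(speech, "sustainable development"))  # prints 0: no match
print(term_counts(speech)["emissions"])                     # prints 1
```

A matcher like this would rank the speech for “emissions” or “grid” but never for “sustainable development,” which is the trap Mathewson describes.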

Though this is a negative example, we learned a lot from it. First, industry Web content owners should try to avoid nonprofit competition. Second, try to be more literal in your keyword choice. Those are valuable lessons for us.

Q. You mention the need for more studies. Any plans to explore that and share them with others in the writing and editing professions?

A. We will do more A/B tests like that one on a larger scale and in more environments. We didn’t come close to controlling for all the variables.

For example, Dave Harlan is an extraordinarily good editor, as I mentioned in the blog. We want to see the value of less superhuman editors. Also, the environments we chose might have unique characteristics that affect the audiences’ likelihood to engage. We want to try these tests in many other environments and see if this phenomenon is more general. I think it is.

Follow him on Twitter @James_Mathewson or connect with him on LinkedIn.