What happens inside the content world seems opaque and mysterious. Operating models for content are mostly stuck in the 1980s: they push word processing and spreadsheets as far as they will go, but they are woefully inadequate for a world where content goes far beyond “documents”.
Content is expected to be componentised, re-used, adapted, reconfigured and repackaged, and delivered in multiple ways, into multiple interfaces including CMSs, apps, intranets, knowledge bases, support centres, and yes, even PDFs.
What can we learn about the need for content operations, and why can’t we optimise operations by focusing on the last mile? Let’s get into the topic.
What is “the last mile”?
In supply chain management, there is a concept called “the last mile”. This refers to the final stage – the delivery aspect – of getting a product from its sender to its final destination. In the world of physical goods, the last mile usually covers the movement of goods from a transportation hub to their final delivery point.
The goal of last mile delivery is to optimise the process so that the delivery goes as efficiently and cost-effectively as possible. The last mile has nothing to do with the design, manufacturing, packaging, or packing of the merchandise. The delivery aspect is concerned only with what happens once the goods leave the warehouse.
Let’s use the example of ordering food from a restaurant. The food is to be delivered by one of the ubiquitous food delivery services that pick up food orders from the restaurant and drop them off at customers’ homes. There is a strong business need to operationalise the process so that the businesses keep their competitive advantage. The delivery companies start optimising their processes. They develop an app to track their orders. The drivers use scooters instead of bicycles.
They study how to shave minutes from the process, all to make the customer happy. All of this builds a strong operating model for the last mile.
The problem with optimising the last mile
In the restaurant scenario, what happens if the kitchen isn’t optimised for faster food production, if the restaurant has a couple of pots, a few wooden spoons, and a bunch of kitchen staff running around, bumping into each other like a scene from the television show, Kitchen Nightmares? No matter how much the delivery services optimise their processes, the customers won’t get their food any faster if the kitchen isn’t organised for efficient production. When the operating model of the kitchen is poor, it becomes the weak link in the supply chain.
In the content world, the last mile is when delivery-ready content is delivered into a CMS – there are several flavours of CMS, it doesn’t really matter which one for purposes of this discussion – and awaits publication. The CMS is the delivery mechanism that takes content from the pick-up point to the consumer of that content. The entire industry is set up to think about optimising that last mile.
The optimisation continues – but it is more aptly called delivery optimisation. Optimising the production process upstream, in the content kitchen, so to speak, is not on the industry’s radar.
Content operations starts before the last mile
The restaurant scenario is an apt metaphor for what happens in the content industry. A lot of work goes into optimising the content management system – the last mile – using publication-ready content, akin to taking the ready-to-go content and delivering it to the right audiences. The content management system manipulates the content in the ways the business needs it to, and there is post-publication analysis and improvement of the delivery mechanism. But none of this optimises the operating model for the production of content.
So who is in charge of optimising the production of content upstream, in the “kitchen”? Ask those responsible for the management of content delivery about optimising the operating model for content production, and they’ll tell you that’s not their responsibility. They are responsible for optimising the delivery of the publication-ready content. Ask them what it takes to get content produced, and they’ll know shockingly little about the supply chain before content hits the last mile. It’s not dissimilar to the delivery drivers you see outside a busy restaurant, waiting for their orders to be ready.
What happens inside the kitchen is of no concern to them; they spring into action only once the packaged food is ready for delivery.
Who’s fixing the “content kitchen”?
Ask the product team who they think is responsible for optimising content production upstream, and you’re likely to get an “I don’t know” shrug or a guess at “whoever’s in charge of content”. Ask the person in charge of content, and you are likely to be told that’s a tech problem.
The problem is that whoever is responsible for content may be willing, but not particularly able, to fix the problem. So what happens in those content kitchens? Content production remains stuck in the 1980s, and content teams struggle to keep pace with product development, to scale in a resource-efficient way, to produce regulatory-compliant content, and to reduce time to market.
The question begs to be asked: if the technologists don’t understand enough about content production processes and governance to implement a more efficient operating model, and the content people don’t understand enough about technology to know what is possible to gain more efficiency in their production, then who does? This is often the critical gap that leaves an organisation vulnerable, not just in the area of content production, but in other areas as well.
Sometimes, organisations bring in big-name consultants, usually with disappointing results. One product director told me that the single biggest pain point (and I believe he said “exponentially problematic”) in a prominent UK government project – the COVID-19 Test and Trace app – was content production. The Accenture consultant assigned to the project either would not or could not recommend an operating model that would accommodate content production in multiple languages, insisting that Confluence could do the job. (It blatantly couldn’t, and any content person worth their salt would know that.) How much this cost the public in time, resources, money, reputation, and effectiveness wasn’t measured, but it is likely in the many millions of pounds.
The skills needed to figure out content operations span a number of disciplines: business analysis, content analysis, process mapping, governance and change management, and technology – as well as someone knowledgeable about the challenges and potential efficiencies to take the lead. When an organisation is fortunate enough to have a content operations strategist or content engineer on hand, that’s a step in the right direction – but there are precious few of those to go around. Convincing management and the other gatekeepers is often the largest challenge of all, as they’ve heavily invested in the last mile and don’t want to hear that they now also need to invest in fixing their content kitchen problems.
Is fixing the content production process worth it?
The biggest challenge for organisations is to figure out whether the exercise is worth it. It’s said that companies measure what they value, and this is a good time to start measuring what happens upstream. So much waste is built into most content production systems that it’s only once all the waste has been identified, Lean for Services style, that an estimation can be made of how much time and effort can be saved and what benefits can accrue.
It should be noted that adopting an operating model that optimises content production generally does not affect the existing technology stack. The CMS continues to do what it does best: publish content for consumption by the user. A content operations stack replaces the word processing and/or spreadsheet and/or note-taking software that’s held together by things like Slack, Trello or Jira, and email. The finished content is still delivered to wherever it’s supposed to go, but likely with a lot more accuracy and automation.
Content production is not for the faint-hearted
The content production process is far more complicated than it appears to anyone outside the content team. It’s not uncommon for a relatively refined process to have dozens of steps, including multiple editing, review, and approval loops, before the content is sent off, often by email or copied and pasted into a CMS. However, the relative cost of content production is generally not in the creation of content. If content truly worked as a supply chain, where content is created once, delivered, and forgotten, then calculating the cost would be quite straightforward. However, that is rarely the case – at least outside of marketing content.
Content operates on a lifecycle, where it is revised, versioned, localised, and so on. In a single lifecycle, it’s not unusual to be able to reduce content production time by well over 50%. Once a piece of content has been slightly modified for, say, five new products, and each of those products then gets a new version, there are ten pieces of content in circulation – and a single change now means a change to each one of those ten. The following year, there may be ten more variations. Tracking those variants through spreadsheets only goes so far, aside from being a slow and error-prone process. The biggest pain points and time wasters turn out to be things that are complete surprises to practitioners outside of content. Once the content lifecycle is subjected to any additional stress, such as adding a language variant, processes can slow to a crawl.
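To make the arithmetic concrete, here is a minimal sketch, using purely hypothetical numbers, of how quickly variants multiply once products, versions, and languages combine – and why a copy-based workflow tracked in spreadsheets struggles to keep up:

```python
# A minimal sketch with hypothetical numbers: how content variants multiply,
# and what a single upstream change costs in a copy-based workflow versus a
# single-sourced one where the change is made once and propagated.

products = 5       # content adapted for five products
versions = 2       # each product ships a new version
languages = 3      # each variant is localised into three languages

variants = products * versions * languages   # pieces of content in circulation
edits_copy_based = variants                  # every copy edited (and tracked) by hand
edits_single_sourced = 1                     # one component updated, then propagated

print(f"content variants in circulation: {variants}")
print(f"edits per change, copy-based:    {edits_copy_based}")
print(f"edits per change, single-source: {edits_single_sourced}")
```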
What kind of improvements can you get from content operations?
Here are some metrics from my own work with clients, and from colleagues who kindly shared theirs with me:
- A government department produced a particular type of content for about 50 audiences. By changing the operating model and moving content operations upstream to the author, they calculated a 49% reduction in the number of steps, a 65% reduction in production time, a 66% reduction in overall production cost, and a shortened publishing cycle – from 12-18 months to 4.5 months – meaning that they could deliver updated, accurate content to the CMS over 60% sooner.
- A start-up producing software that got embedded in medical devices needed to deliver content with a tight audit trail. They created specifications, code, code descriptions, and so on. By adopting an operating model that tagged all of the content with precise metadata, this company of a CEO and two developers were able to use a “build” command to combine all of the content and generate over 200 sets of content for various clients and uses, including online topics, embedded content, and PDF documents (a simplified sketch of this kind of metadata-driven assembly follows this list).
- A company producing complicated software – many modules with many functions per module – used a technology that isolated UI strings and labels in a library that could be auto-inserted into the interface and into the usage instructions that went into online help, training materials, and user guides. The cost of keeping the four outputs up to date was reduced by 75%, while the always-accurate content ensured that users could stay productive.
- A company changed their operating model from the common method of using word processing software to create topics and spreadsheets to track where the content was used across multiple outputs. They upgraded to an authoring system that allowed them to create topics, auto-track re-used content, update that content in a single place and propagate the update throughout the content corpus with a single click, and deliver updated content to multiple outputs – a CMS, and a publishing partner for print – with minimal effort. They eliminated the 48 tracking spreadsheets completely, significantly reduced production throughput time, and found that, instead of expanding the team to deal with growth, the existing team could handle all the content and still have capacity to handle scale.
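The start-up example above hinges on one idea: components are written once, tagged with metadata, and assembled into deliverables by a build step, so an update made in one place reaches every output that uses it. The tooling in that project wasn’t specified; the following is a deliberately simplified sketch in Python, with hypothetical component names and tags, of how such an assembly step can work:

```python
# A simplified, hypothetical sketch of metadata-driven content assembly.
# Each component is written once and tagged; each "build" filters components
# by metadata and assembles them into a deliverable (help topic, embedded
# text, PDF source, etc.). Updating a component updates every deliverable.

from dataclasses import dataclass, field

@dataclass
class Component:
    id: str
    body: str
    tags: set = field(default_factory=set)   # e.g. {"client-a", "safety"}

@dataclass
class BuildSpec:
    name: str
    required_tags: set                        # a component must carry all of these

def build(spec: BuildSpec, components: list) -> str:
    """Assemble one deliverable from every component matching the spec."""
    selected = [c for c in components if spec.required_tags <= c.tags]
    return f"== {spec.name} ==\n" + "\n".join(c.body for c in selected)

# Single-sourced components, tagged for reuse across clients and outputs.
components = [
    Component("warn-01", "Do not open the housing while powered.", {"safety", "client-a", "client-b"}),
    Component("setup-01", "Connect the device before first use.", {"setup", "client-a"}),
]

specs = [
    BuildSpec("Client A user guide", {"client-a"}),
    BuildSpec("Client B embedded help", {"client-b"}),
]

# One change to "warn-01" now flows into every deliverable that selects it.
for spec in specs:
    print(build(spec, components))
    print()
```

The design point worth noting is that deliverables are defined by metadata queries rather than by copies of the content, which is what makes generating 200+ output sets manageable for a three-person company.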
It may be tempting to say that these are outliers, exceptions to the rule. The truth is that there is too little data yet to make that call. All of the success stories presented at conferences and discussed between professionals seem to show over a 50% improvement, but we have to assume that there are failure stories that don’t get factored into the mix. Anecdotally, however, the organisations that go all-in reap significant benefits.
Where does the damage happen when it comes to content operations?
If the organisations that commit to the process reap significant benefits of content operations, where do things go wrong, and how does it damage the operating model? At the risk of making sweeping generalisations, let’s look at some of the situations that hobble content operations or do outright damage.
- Gaps in knowledge of staff outside of the content team. The lack of understanding of the rudimentary processes, problematic complications, and governance tensions that content people face on a daily basis is legendary. So when an organisation consults software developers, content management integrators, data scientists, or other technologists about what content people need to operate efficiently, it’s not surprising that they can’t help. They are used to dealing with the delivery side of things – once the content has actually been produced and finalised. When consulted, they tend to recommend software that’s inadequate for content developers to improve their operating model. The reaction can range from well-intentioned but uninformed to downright hostile, and everything in between. The alternatives that are offered are often advantageous to those in charge of the delivery side, but damaging to the content people, whose time and cost are never factored into the overall cost of operations.
- Gaps in knowledge of staff inside of the content team. Content people – writers, editors, content designers, content managers – are so used to coping with whatever tools they are given that they just limp along. They are provided with word processing tools that were designed for “documents” such as business correspondence or reports. The right workflow tools – or even any workflow tools – are not provided, so spreadsheets are pressed into service, creating slow, manual, error-prone processes. In the final example in the section above, the conversations were quite typical and very enlightening. The department manager and team gave us a short list of “must haves” and a “wish list” that they would like, if affordable. They were surprised to discover that there were commercial solutions that could satisfy their must haves, the items on the wish list, and a few more items that they didn’t realise they needed until we explained them. These conversations happen all too often; management within the content area traditionally have expertise on the editorial side of content, but not on the technical side.
- Gaps in a content operations strategy. If you don’t know where you’re going, any route will do, goes an old saying. A good content strategy will have a section on content operations – operational efficiencies – and how they affect the velocity of content production and, in turn, how that affects the product itself. The background to the strategy should include a current-state operational baseline and a future-state operating model. The strategy should address processes, technologies, and governance. One calculation that is commonly overlooked is the consequence for content production. One organisation had over a dozen CMSs, and each time a new one was adopted, the product owner would do an assessment of the cost-benefit ratio. However, they neglected to include in the calculation the impact on the content people expected to maintain content in each of those systems. It was impractical to ask the central team of content designers – some staff and some contractors – to learn multiple systems, with multiple logins and no centralised repository, workflow, or reporting functions. Given that a system is set up once, but content is produced day in and day out, this oversight can range from highly inconvenient to downright damaging, depending on the required speed and real-time accuracy of the content being published.
The idea that an organisation can optimise content production by focusing on the last mile is no longer adequate; today it borders on naïve. Outdated technologies meant for casual business use, governance models meant for documents, and outdated processes that don’t allow for the management of components, the addition of metadata, or the efficient delivery of content to distribution systems are becoming (if they aren’t already) a critical bottleneck that flows downstream to the final consumption point.