In late 2012, Bowker released one of my favorite infographics on publishing, showing the percentage of book sales by major distribution channel:
In this graph, “eCommerce” represents book sales (both print and digital) happening through online retailers such as Amazon, Barnes & Noble, Wal-Mart, etc. In roughly a two-year period, the percentage of books sold online jumped from 25.1 percent to 43.8 percent. Meanwhile, large chain bookstores, such as Barnes & Noble and Borders, fell from 31.5 percent of book sales to 18.7 percent of book sales.
The steep decline in brick-and-mortar bookstore sales from 2011 to 2012 is largely attributable to the Borders bankruptcy. Barnes & Noble’s future is far from certain; the company plans to close about 10 percent of its stores in the coming years, and one wonders whether the final number will be higher.
All this to say: the bookstore has long been the primary means of book discovery, but soon it will be a minor player in how books get marketed and sold. As sales increasingly move online, very different dynamics take hold, such as search optimization, algorithm-driven recommendations and social conversations. Probably the biggest buzzword these days in publishing insider circles is “metadata,” particularly ever since Nielsen released a study showing a dramatic increase in sales for books that satisfied the industry’s core and enhanced metadata requirements.
Core metadata includes: ISBN, title, author, category, price and publisher. Enhanced metadata includes: cover, blurb, author biography, sample chapters, quotes and reviews.
Metadata serves different purposes depending on the context. For now, I want to focus primarily on its importance for readers who are more likely to discover books online than in a store. In the online shopping environment, a reader has no personal guidance but an unlimited selection of books; what surfaces is determined by recommendation algorithms and by keyword searches run against each book’s metadata.
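To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of how a retailer-style keyword search might match a reader’s query against a book’s metadata. The field names, scoring, and sample records are illustrative assumptions only, not any real retailer’s algorithm; the point is simply that richer metadata gives a book more chances to match a search:

```python
# Hypothetical sketch: core + enhanced metadata as a simple record,
# and a toy keyword search that scores books by metadata matches.
# Field names and scoring are illustrative assumptions only.

def make_book(isbn, title, author, category, price, publisher,
              blurb="", keywords=()):
    """A record with the core metadata fields plus two enhanced ones."""
    return {"isbn": isbn, "title": title, "author": author,
            "category": category, "price": price, "publisher": publisher,
            "blurb": blurb, "keywords": list(keywords)}

def score(book, query):
    """Count how many query terms appear anywhere in the metadata.
    Richer metadata means more chances to match a reader's search."""
    haystack = " ".join([book["title"], book["category"], book["blurb"],
                         " ".join(book["keywords"])]).lower()
    return sum(1 for term in query.lower().split() if term in haystack)

def search(catalog, query):
    """Return titles ranked by descending match score, dropping non-matches."""
    ranked = sorted(catalog, key=lambda b: score(b, query), reverse=True)
    return [b["title"] for b in ranked if score(b, query) > 0]

catalog = [
    make_book("978-0-00-000000-1", "The Velvet Spring", "A. Novak",
              "Fiction / Coming of Age", 9.99, "Example House",
              blurb="A gay coming-of-age story set in Communist Czechoslovakia.",
              keywords=["Prague", "1980s", "LGBT"]),
    # Sparse metadata: no blurb, no keywords, generic category.
    make_book("978-0-00-000000-2", "Untitled Novel", "B. Writer",
              "Fiction / General", 9.99, "Example House"),
]

print(search(catalog, "coming-of-age Czechoslovakia"))
# The richly described book matches; the sparse one is invisible to this query.
```

A real retailer’s ranking is of course far more sophisticated, but the asymmetry holds: the book with only core metadata cannot be found by thematic search terms at all.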
In “The Future of Metadata,” a talk at Frankfurt Book Fair’s CONTEC 2013 conference, Ronald Schild emphasized the need for semantic analysis, which means identifying the “core concept” of a book. Without semantic analysis, recommendations are less valuable; people do not search for books by ISBN, but by theme (e.g., a gay coming-of-age story set in Communist Czechoslovakia) and by emotional topic.
What’s most fascinating about the metadata discussion thus far is how much it can affect the sales of fiction; conventional wisdom might’ve led us to believe that metadata would matter most for information-driven books, but Bowker’s data indicates just the opposite. In a talk at 2013 BookExpo America, Phil Madans from Hachette said, “If you don’t want your books to be found ever, use the Fiction: General category as your BISAC code.” Perhaps publishers have historically been less detailed with fiction metadata, thinking it didn’t matter, but they have now changed course, delivering dramatic lifts in sales. The same discussion has been happening in the self-publishing community, where authors have discovered that being careful and intentional with their categories, keywords and summary descriptions has resulted in better visibility and thus sales.
The metadata discussion doesn’t stop with filling out the fields appropriately when cataloguing a book. Veteran book marketer Peter McCarthy has argued that there are far more potential readers for each book than are ever reached, and that if publishers are to keep their value to authors, they need to be the best at connecting authors and titles to the right readers. When he develops a marketing campaign, he uses a subset of 100 tools to triangulate, plan and execute, including a range of social analytics, search-engine optimization and other support tools, to help him understand how “ordinary” readers (not publishing insiders) go about searching for things, and to make sure those people find the right book. A good part of what McCarthy suggests amounts to uncovering and analyzing how online conversations represent potential markets for a book.
This falls in line with a keynote talk that journalist Sascha Lobo recently gave on how the Internet will change the book. His argument is that selling books has always been social, and that the social element has, in fact, always been the most important one. People buy books that are talked about, and his contention is that the best tool for selling books on the Internet is buzz. Buzz is exactly what McCarthy is attempting to quantify with his subset of 100 tools, and what metadata experts want to see captured, analyzed and displayed alongside book search results.
But one question often bothers the more astute industry observers: Do readers really have trouble finding books, or is discoverability a problem for the publisher (and/or author)? If you look at the magazine/periodical world (or other media surveys), you often find that people’s biggest problem is not finding stories, information or entertainment, but finding the time to consume everything they discover. One strategy in the self-publishing community, which has been a double-edged sword for authors, is keeping prices very low (even free) so that a purchase poses little risk, encouraging a large volume of readers to buy. However, this can have the unintended effect of encouraging readers to download or buy many more books than they could ever read, with little or no consequence for not consuming what they’ve bought.