Will books evolve?

Evolution takes a great deal of time and happens over generations rather than lifetimes.

The definition of a ‘book’ will not evolve until paper copies of books cease to be published. Long-playing vinyl records are no longer mass-produced for music, but the notion of an album still exists on iTunes even though the physical product is almost dead. A collection of songs in a particular style will continue to be defined as an album, just as a collection of words on a theme will continue to be defined as a book.

What will evolve, however, is the notion of a story in fiction, or of a designated expert writer in non-fiction. Around a campfire, or when reading to children, many people can contribute to a story orally or change the stories they tell. Digitized fiction books will take this ‘playfulness’ we have with the creation of stories and provide more ways to play with a story, whereas digitized non-fiction books will mean there is no longer one single expert but a number of contributors who could all be accessible to the reader, in much the same way as the Sprint Beyond the Book project.

A book will remain a book as it exists in its current form, but the notion of a story, or of non-fiction content, will evolve in the future.

There will always be books

Books are important. They educate us. They give us a great time. They show us marvelous adventures. Isn’t it great to sit in Starbucks and read your book the old-school way, on paper? The scent of a book, and the way a book looks as you read it, make it a different experience from reading on an iPad. Books cannot be replaced by digital books.

How will the concept of the book evolve in the future?

What is a book and what isn’t? Are books an accident of history? How do books shrink space and crunch time? And what on Earth is a zimboe? Read our authors’ ruminations on the evolving concept of the book to find out:

Learn more about our project and share your vision for the future of the book at SprintBeyondtheBook.com!

What Is a Book? Discuss

In the news sphere, there can be endless arguments over whether this person or that person is a journalist. It’s a pointless conversation, because the real question is: What is journalism? Edge cases are easy. The New York Times is journalism. The “BlahBlahBlog” isn’t. But it gets blurry fast, and that’s where the conversation gets interesting.

We’re starting to have the same discussion in the book world. Again, the edge cases are easy. Here’s a book:

[Image: Cover of Charlie Stross’ book The Atrocity Archives]

Charlie Stross’ novel comes in print — bound pages — and in several e-book formats. It’s a book, period.

Not all books in the traditional realm are based on text, of course, though I’m hard-pressed to name a book that doesn’t include at least some text. Graphic novels and the heavy oversized volumes of photography we put on our coffee tables are just as much books as Charlie’s novel or Moby-Dick. But just as a collection of blog posts isn’t a book, the latest installment in some comic series isn’t either (though we do call them comic books).

This is also a collection of bound pages. It’s not a book, at least not in the context I want to use here:

[Image: Notebook]

The little notebook I carry around, and into which I write notes of various kinds based on ideas and conversations, isn’t meant to be seen by others. It doesn’t start here and end there. It’s random. Book? Nope.

What about this volume, called Between Page and Screen:

[Image: Cover of the book “Between Page and Screen”]

Its authors call it “an augmented reality book of poems.” Here’s a video of how it works.

Come back when you’ve watched it.

Is this really a book? Or is it something else, even if part of it fits between two covers?

Now check out “The Elements” on the iPad.

I love it. Is it a book? Probably, but I’m not sure what I’d say if I had to give a yes or no answer.

Welcome to the blurry world of tomorrow’s books — blurry in precisely the same way that some other media forms have become. It’s all about digital technology, of course, which subsumes everything that existed before, and then extends it into new realms. Things bleed into each other: The New York Times posts excellently produced video online, and the BBC publishes text-based articles.

The experimentation in book publishing today is great to see. People are using technology to push out the boundaries. At some point, though, what they create no longer seems to fit into any category with historical antecedents.

I’ve asked any number of people in recent months what a book is. The answers have ranged about as widely as you’d expect. Several zeroed in on a fairly simple but powerful notion: a book starts here, holds your attention for a non-trivial period of time and ends there. Then again, so does a walk in the woods, or a film.

I suspect a book will be anything we decide to call one. Traditional books, after all, span an enormous range of presentation methods, not just topics and styles. Maybe we’re just adding new methods.

Words take on new meanings in any case. When was the last time you dialed a phone number by turning a little wheel on a landline telephone with a wire connected to a wall plug? But you knew what I meant by dialing.

I do worry that our shrinking attention spans will make traditional reading less and less relevant. But, ever optimistic, I’ll predict that books — whatever that means — do have a future, because we need them.

Book as Fluke: A Thought Experiment

What if the existence of books were a cosmic accident, not something irrevocably part of our evolution, not intrinsic to the human experience, but more or less the most profitable thing to be produced from a printing press, a function of commerce and retail and/or a function of religious or scholarly systems? What if the book perpetuated itself not out of necessity, but through a human desire for profits, ego inflation and prestige? Particularly when looking at contemporary attitudes toward the book, as Richard Nash discussed in his essay “The Business of Literature” (2013), books might be seen as culturally “important” partly because of a public relations campaign mounted by the father of spin, Edward Bernays:

“Where there are bookshelves,” [Bernays] reasoned, “there will be books.” So he got respected public figures to endorse the importance of books to civilization, and then he persuaded architects, contractors, and decorators to put up shelves on which to store the precious volumes.

There is so much mythology and self-important discussion surrounding books that we sometimes forget the book is a technology, so old a technology it has disappeared into the background. A book as a set of bound, typeset pages has nothing in particular to do with the survival of storytelling, reading or writing. But the advent of the printing press and the advent of the book are so closely connected that we tie the benefits and importance of the printing press (cheap and quick distribution of information) to the benefits and importance of the book (the vessel carrying the information). Perhaps they were inseparable throughout much of their history, but now that we’re undergoing another paradigm shift – a new way of distributing information quickly and cheaply, through the Internet – one has to question whether the book, which is tied so closely to the advent of the printing press, will retain its meaning, relevance and utility in the digital age.

The great growth of reading and writing we’re now experiencing is connected to the Internet’s abundance of information and instant-publishing opportunities, not books. In fact, books have been mostly absent from the Internet (for reading and reference) because they’re closed off in separate universes, not often made available for search, and not as freely distributed, copied and subscribed to as other digital media. In Google’s attempts to bring books inside the fold of the Web, they have faced innumerable challenges and legal battles from people who wish to strictly protect the copyright and profits related to books.

But it may not matter in the end, because the book – either as a unit of commerce or as a unit of attention – may not be the best way to satisfy the needs and desires of people who can instantly access information from mobile devices and be entertained by an unlimited amount of media. As Markus Dohle said at the 2013 Frankfurt Book Fair, “We want [customers] to choose books as a future, and not Netflix – and that is a big task.” Industry consultant Mike Shatzkin has also said that the biggest challenge facing publishers isn’t the digitization of books or Amazon’s retail practices but the consumer deciding to pass the time by playing Angry Birds or scanning Facebook rather than reading a book.

This challenge, as Dohle says, is a big one. Some controversial articles have argued that the best storytelling today is found on TV, not in books. Some have accused literary fiction of becoming irrelevant to contemporary life. Tim O’Reilly famously said the following on Charlie Rose in 2012:

I don’t really give a shit if literary novels go away. They’re an elitist pursuit. And they’re relatively recent. The most popular author in the 1850s in the US wasn’t Herman Melville writing Moby-Dick, you know, or Nathaniel Hawthorne writing The House of the Seven Gables. It was Henry Wadsworth Longfellow writing long narrative poems that were meant to be read aloud. So the novel as we know it today is only a 200-year-old construct. And now we’re getting new forms of entertainment, new forms of popular culture.

Recent innovations in delivering stories have not typically come from book publishers, but from start-ups or online-based companies that can closely evaluate reader reaction and behavior. Amazon, following the lead of other media start-ups, has launched digital initiatives such as Kindle Singles (to deliver stories between 10,000 and 30,000 words), Kindle Serials (to sell story subscriptions) and Kindle Worlds (to deliver fan fiction). None of these genres or formats really fit into the existing paradigm of the book or the legal strictures surrounding it; therefore the traditional publishing business, concerned about profits and marketability, has rarely pursued such content. Now that such areas are flourishing in the digital environment, we begin to recognize the artificial construct of the book – that its length, shape and purpose are based on manufacturing, marketing and other commercial considerations.

Yet some do effectively argue that the book has evolved to encompass the perfect unit of attention and the perfect length to expound on an idea. Maybe this is true, or maybe this is just what we’re used to; after all, attention spans appear to be changing. Still, it’s difficult to envision that a book-length work of fiction – the novel – will become extinct any time soon. It seems likely to continue, but as a less popular form. Consider how the invention of the LP once led musicians to focus on the art and craft of the album: now the digitization of music has ushered in the age of the single. Perhaps fiction is headed in the same direction, toward something more befitting our short bursts of attention, for the times when we’re seeking 5-10 minutes of entertainment while standing in line at the grocery store or waiting at the doctor’s office.

The idea actually under threat is the book as information vehicle. Much of the publishing industry – especially the educational sector – is acutely aware that the typical book doesn’t necessarily do the best job of imparting information. Many nonfiction publishers have completely stopped talking about “books” and now focus on content strategy and media agnosticism, recognizing the need to deliver information in many different channels, formats and environments. Wiley’s CEO Steve Smith has said in a range of talks that his company’s job as an educational publisher is not to deliver information or content, but to develop services. By that he means: servicing universities, students and professionals with online courses, assessment, workflow tools, communities and, yes, digital books.

I also wonder about the feasibility, particularly in the nonfiction realm, of culture continuing to deify the author, according him great respect, authority and prestige for producing a book. Writers who subject themselves to the wisdom of the crowd, whether through an agile publishing model that collects reader feedback or through a series of blog posts, are deeply aware that their own knowledge and perspective, without the knowledge and input of others, often fall short. Case in point: Nature found that Wikipedia is about as accurate as Encyclopædia Britannica. Wikipedia of course has its weaknesses (mainly in structure and style), but the resource is still in its infancy when compared to Britannica.

As far as the role and primacy of the author in storytelling, I can’t help but refer once again to the strength of current TV writing, where a room of writers debate and produce a story arc collectively. While there is usually a creator or visionary, someone who has come up with the premise (as in the case of Vince Gilligan of Breaking Bad), most show creators are quick to give equal credit to the many writers they work with.

Bottom line, we forget that the idea of authorship – and the creation of copyright – came along with the printing press. Before the printing press, there really wasn’t any such thing as an “author.” There were scribes and historians, but authorship is a relatively new idea. With the digital age, we may see the role of the author start to disappear or diminish. Futurist David Houle has predicted this and said in an interview that the younger generations are not as concerned with control as they are with influence. They are more interested in completing projects in a collaborative manner, rather than the ego- and identity-centered “I’ve got to go off by myself and create my work of art.” This latter attitude pretty much nails the primary mode and concern of novel writers today, who find themselves in dramatic opposition to the technology surrounding them. (See: Jonathan Franzen and Dave Eggers.)

Building Worlds Out of Books

We’ve been writing about the future of the book without having given much thought to the question of what a book is in the first place. Is it a physical, papery artifact, a thing? An autonomous textual unit of attention made up of meaningful bite-sized subunits? A word whose persistence in language is merely a matter of convention, a residue of more bookish times?

I’d like to propose that a book is a window onto a world. If this is true, we have good reason to believe that books will survive in a form that will remain recognizable to us.

Books project worlds by objectifying thought. They freeze in place a story, a longish idea or a description of life. Books are one means of taking a world, real or invented, and compressing it, encoding it and presenting it. Books shrink space and crush time. So long as we enjoy shrinking space and crushing time, we’ll crave book-like things.

Then again, in the same breath that they create worlds, books also destroy themselves. When I read a science fiction novel (and not only science fiction), I read for worlds. I define the word world as the sum total of relations – among things, characters, settings, laws, etc. – within a bounded imaginative space. If the book does its job, its bookishness will dissolve into the reader’s concern for characters and situations and plots. Even the most intensely avant-garde poetry (think Kenneth Goldsmith’s American trilogy) or the boldest experiments in book design (think Jonathan Safran Foer’s Tree of Codes) construct worlds. Even the most book-conscious books are finally self-effacing.

Which might lead us to doubt that books need to survive. If I can watch a skillfully made rendition of Frank Herbert’s Dune, of what special use is the novel? It’s true, literary style is only one sort of window onto interesting worlds. But it’s a window with strengths and weaknesses, zones of clarity and opacity. Despite a century of efforts to do so, no novel will ever offer the visceral experience of a play or film or television show or video game. Contrariwise, non-literary modes of world-building still stink at dramatizing thought or deploying metaphor. Within the domain of prose fiction, moreover, short stories can only hint at the fullness of an imagined or real world, a job the novel does with ease.

There are also economic reasons why books will likely survive. In an age of vertically integrated multinational media conglomerates, books remain useful as vehicles for the creation of worlds on the cheap, worlds that subsequently spawn other higher-margin worldish media products. A company like DC Comics sustains its comics division almost purely as a means of research and development for its profitable films. Film producers often outsource creativity to popular novels or book series. The book (whether of poetry, drama or prose) fits snugly in curricula and on syllabi at every level of education. Finally, the novel is still at the peak of the pyramid of narrative and cultural prestige. No other form comes close to capturing the imagination of a world-hungry public. These are forces that will, fortunately, be hard to dislodge.

The future of the concept of the book is therefore the future of the book’s capacity to facilitate the reader’s access to worlds. As long as humans are hungry for fully evoked worlds that include figuration or densely packed information or renditions of characters (or people) whose inner lives are richly accessible, something very much like the book will survive.

Do Zimboes Dream of Electric Sheep?

The act of reading is inextricably linked to the intertwined structures of language and consciousness.

We are conscious beings; as mammals, when we experience the world around us we weave a narrative account of our existence that gives us a retrospective timeline in which to anchor our viewpoint and sense of unitary identity.  We possess a “theory of mind” which allows us to ascribe intentionality to other organisms – the dog bit the postman because it was frightened (and fear provokes a fight/flight response) – a valuable survival ability during our prehistory on the plains of Africa. And we possess language with syntax and deep semantics and grammar, a possibly unique and very powerful capability that allows us to encode behavior and insights and transfer them from one mind to another.

Cognitive philosophers have, over the years, chewed on the concept of consciousness until it is grey and mushy about the edges – but with little digestive success. One thought experiment they use to examine this phenomenon is the idea of the zombie: in cognitive science, a zombie is a human being with no interior state, no sense of identity, no “I.” Philosophical zombies do not, as far as we know, exist, but they possess a number of interesting attributes; they presumably eat, sleep, breathe and respond to stimuli, but possess no personhood. If you ask one who he or she is, or what they are experiencing, they won’t be able to frame a reply that encodes any sense of identity: they observe but they do not experience.

To probe some questions arising from philosophical zombies, Daniel Dennett proposed a new category: the  “zimboe.” A zimboe is a special type of zombie which, when asked, will deny that it is a zombie. That’s its sole specialty. It’s like an empty house where the lights are on and nobody’s home, but the absent householder has left a tape-recording of a dog barking or a baby crying playing on a continuous loop to convince burglars that it’s a bad prospect. If you ask a zombie about themselves they can’t tell you anything. If you ask a zimboe about themselves they will spin a convincing yarn, but it’s a lie – they don’t feel anything. Detecting a zimboe is next to impossible because they claim to be conscious; we might be surrounded by them, or even married to one, but we might never know.

When we read fiction or autobiography or any other narrative text that encodes a human experience as opposed to some assertion about the non-human universe, we are participating in an interesting process that Stephen King described as the nearest thing to telepathy that humanity has yet developed. An author has encoded their interior experience, serialized it as text and handed it to the reader in some kind of package. The reader then inputs the text and, using their theory of mind, generates a simulation of the interior mental states the writer was encoding.

What happens when a zimboe reads Pride and Prejudice and Zombies?

The lights are on, but there’s no consciousness present and therefore no theory of mind to be deployed to generate an emulation of the interior states of Jane Austen’s characters. You can quiz the zimboe about their reading matter and they can answer factual questions about the text, but they can’t tell you why Elizabeth and Mr. Darcy are feeling any given emotion, because they lack the theory of mind – the cognitive toolkit – necessary to infer interior states and ascribe them to other entities.

We may therefore expect zimboe lairs to be curiously deficient in the kind of reading matter that provokes emotional engagement and long interior arguments with recalcitrant fictional protagonists who need to recognize the error of their ways, pull their heads out of their fictional asses and sort themselves out.

And, more fundamentally, we may infer the existence of a cast-iron test for whether a person is a person or a zimboe…because zimboes can’t write fan fic. Not even bad fan fic. They probably can’t write any kind of fiction at all, or even reliably recognize the distinction between fiction and narrative fact.

Zimboes don’t dream of electric sheep. And, come the zombie apocalypse, we can use this fact to defend ourselves from them!

Mixtures and Compounds

As a religious publisher, I am aware of the curse that attaches to making the slightest change to Holy Writ.

All publishers know authors who object to the slightest editorial change to their great opus.

Movies exist, audiobooks exist, games and quizzes exist; for news there is journalism. Adding video clips or databases to a book makes a new synthetic or hybrid product, but the video is still video, the pictures are still pictures, and the text is still text. Chemists make a distinction between a mixture and a compound: I am not convinced that books-plus can go from being mixtures to becoming something new.

Gutenberg was also a bookseller

It seems clear that consumers may want different formats: a library may desire a sewn hardback that will last centuries; a traveller may want a read-once paperback that fits in a handbag; an older reader may desire larger print. A student may want not whole books but only the must-read chapters for a course. All of these point to on-demand production of books, either by ordering online or by finding bookshops that have printing machines as well. So there will be fewer, larger physical bookstores, and workers in those stores will need to work with publishing software.

This comes with a risk that copyright owners will fail to get paid, so shop managers and production staff will need to be conscious of security as well.

Gutenberg would recognise a world where books are personalised, expensive and made to order, but he would be surprised that there is also the cheap alternative of buying the e-version.

Amazon has recently linked a cut-price offer for an ebook to a previous physical purchase – the deal could work the other way as well: taste the e-book, then buy the physical copy. Having a history of customer purchases is a huge advantage in making the link. How copyright owners can guard against abuse is not so clear!

The changing role of copy editors

As authors no longer depend on publishers to publish their works, workflows are changing. Authors are more independent in choosing how to publish. Still, the process of writing itself remains hard work and is not easily done. Self-publishing does not mean that the work of copy editors is no longer needed: texts still have to be read, revised and read again several times before they can be published. But where exactly is the place of the copy editor in ebook publishing? Roles are changing, and I am happy to be a part of the changes.

Authors must have authority!

From a young age we are used to being ‘told’ to read this or that; mums, older siblings, teachers, counsellors, mentors, bosses and even competitors come up with forceful suggestions which have the advantage of making their job easier as well as providing topics for conversation.

Authorial authority is a self-reinforcing thing: how do people know you are an expert? Because you have written a peer-reviewed book. But how did the book come to be reviewed at all? Because you are an expert!

Authority is hardly a new concept, but internet collaboration does not seem to enhance it: Wikipedia may be a popular site, but how far can we trust its entries? And at what educational level are they written?

The Announcement of My Death Was Premature: A Traditional Book Speaks Out

Many have written my obituary, on more platforms than you can imagine: in old-media newspapers, new-media blogs, online magazines, e-alerts for smartphones. You name the venue and someone has rejoiced at or opined about my demise.

But most of the blather has been misdirected. True, books are evolving, but more in the way that birds and reptiles evolved from a common ancestor. Aspects of the DNA of one organism remain genetically embedded in the other.

That’s what’s happening with me. Part of my DNA has been spliced, creating spin-offs that will diverge from me, blurring our connection until it becomes almost invisible. Within a short span, children will be enjoying interactive “experiences” that include text, pictures, embedded video, soundtracks, a plethora of options. A paper-bound book will seem as unrelated to these as chiseled hieroglyphics seem to modern word processing.

But I will survive, surprisingly intact. Of course, I am not immune to the force of evolutionary change.  Already many experience me via dedicated ebook readers, tablets and phones, and wipeable, changeable smart paper beckons from the horizon.

But my magic remains undimmed by these new methods of access. I still have the power to transport a child or parent into a world of mystical wonder, where pirates battle brigands, where grieving daughters mourn fathers, where a son seeks vengeance for an untimely murder.

I myself provide a unique experience: the opportunity to actually lose yourself in the words. No other medium can do that. And no technological innovation can destroy the siren song of my power.

So do not fret. I am not only alive and well; in a world of technological cacophony, I’m growing stronger.

How will books be written and edited in the future?

Will there soon be a GitHub for books? Will frustrated authors finally destroy the hegemony of Microsoft Word? What is the role of editing in a literary marketplace rife with self-published books? And most importantly, what does Michel Foucault think about all of this? Find the answers to these questions and more as our authors confront the future of writers and their editors:

Today is your last chance to join us as a co-author! This morning we’ll be tackling the question, “What will the economics of authorship be in the future? In what new ways will authors engage with their readers?”

Why Microsoft Word Must Die

I hate Microsoft Word. I want Microsoft Word to die. I hate Microsoft Word with a burning, fiery passion. I hate Microsoft Word the way Winston Smith hated Big Brother. Our reasons are, alarmingly, not dissimilar….

Microsoft Word is a tyrant of the imagination, a petty, unimaginative, inconsistent dictator that is ill-suited to any creative writer’s use. Worse: it is an aspiring monopolist, having nearly 80 percent of the word processing field to itself. Such dominance has brutalized the minds of software developers to such an extent that few can imagine a word processing tool other than as a shallow imitation of the Redmond Behemoth. So what’s wrong with it?

I’ve been using word processors and text editors for nearly 30 years. There was an era before Microsoft Word’s dominance when a variety of radically different paradigms for text preparation and formatting competed in an open marketplace of ideas. One early and particularly effective combination was the idea of a text file containing embedded commands or macros that could be edited with a programmer’s text editor (such as ed or TECO or, later, vi or Emacs) and subsequently fed to a variety of tools: offline spelling checkers, grammar checkers and formatters like Scribe, Troff and LaTeX that produced a binary page image that could be downloaded to a printer.
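
To make the embedded-commands model concrete, here is a minimal toy formatter in Python. The dot-command mini-language below is hypothetical (it is not real Troff or Scribe syntax), but it captures the division of labor the paragraph describes: a plain text file carries the commands, and a separate formatting pass interprets them.

    # A toy of the embedded-commands model: the source is plain text with
    # dot-commands, and a separate formatter pass interprets them.
    # (Hypothetical mini-language, not actual Troff or Scribe syntax.)
    SOURCE = """\
    .title The Future of the Book
    .para
    Bound pages are one rendering; the text file is the real substrate.
    """

    def format_document(source):
        """Interpret embedded dot-commands and emit formatted text."""
        out = []
        for line in source.splitlines():
            line = line.strip()
            if line.startswith(".title"):
                out.append(line[len(".title"):].strip().upper())  # render as headline
            elif line.startswith(".para"):
                out.append("")                                    # paragraph break
            else:
                out.append(line)                                  # plain text passes through
        return "\n".join(out)

    print(format_document(SOURCE))

Because the source is just text, every other tool in the chain (spelling checker, grammar checker, printer driver) can read and transform the same file.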

These tools were fast, powerful, elegant and extremely demanding of the user. As the first 8-bit personal computers appeared (largely consisting of the Apple II and the rival CP/M ecosystem), programmers tried to develop a hybrid tool called a word processor: a screen-oriented editor that hid the complex and hostile printer control commands from the author, replacing them with visible highlight characters on screen and revealing them only when the user told the program to “reveal codes.” Programs like WordStar led the way, until WordPerfect took the market in the early 1980s by adding the ability to edit two or more files at the same time in a split screen view.

Then, in the late 1970s and early 1980s, research groups at MIT and Xerox’s Palo Alto Research Center began to develop the tools that fleshed out the graphical user interface of workstations like the Xerox Star and, later, the Apple Lisa and Macintosh (and finally the Johnny-come-lately imitator, Microsoft Windows). An ongoing war broke out between two factions. One faction wanted to take the classic embedded-codes model and update it to a graphical bitmapped display: you would select a section of text and mark it as “italic” or “bold” and the word processor would embed the control codes in the file and, when the time came to print the file, it would change the font glyphs being sent to the printer at that point in the sequence. But another group wanted to use a far more powerful model: hierarchical style sheets. In a style sheet system, units of text – words or paragraphs – are tagged with a style name, which possesses a set of attributes which are applied to the text chunk when it’s printed.
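
Under the (hypothetical) assumption that a style is simply a named set of attributes with an optional parent, a short Python sketch makes the contrast concrete: in the embedded-codes model the formatting travels inline with the characters, while in a hierarchical style-sheet system each chunk of text carries only a style name, and the attributes are resolved by walking the parent chain.

    # Embedded codes: formatting instructions live inline, mixed into the text.
    embedded = r"The \i{quick} brown fox \b{jumps}."

    # Hierarchical style sheets: text carries style *names*; the attributes
    # live in one table and cascade down from parent styles.
    STYLES = {
        "body":     {"font": "Garamond", "size": 11},
        "emphasis": {"parent": "body", "italic": True},
    }

    def resolve(name):
        """Flatten a style by merging in attributes inherited from its parent."""
        style = dict(STYLES[name])
        parent = style.pop("parent", None)
        return {**resolve(parent), **style} if parent else style

    paragraph = [("The ", "body"), ("quick", "emphasis"), (" brown fox.", "body")]
    for text, style_name in paragraph:
        print(repr(text), "->", resolve(style_name))

The practical difference: redefine “emphasis” once and every tagged chunk updates, whereas with embedded codes each occurrence must be edited by hand. Mixing the two models in a single document, as described below, is what creates the mess.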

Microsoft was a personal computer software company in the early 1980s, mostly notable for their BASIC interpreter and MS-DOS operating system. Steve Jobs approached Bill Gates to write applications for the new Macintosh system in 1984, and Bill agreed. One of his first jobs was to organize the first true WYSIWYG word processor for a personal computer – Microsoft Word for Macintosh. Arguments raged internally: should it use control codes or hierarchical style sheets? In the end, the decree went out: Word should implement both formatting paradigms. Even though they’re fundamentally incompatible and you can get into a horrible mess by applying simple character formatting to a style-driven document, or vice versa. Word was in fact broken by design from the outset – and it only got worse from there.

Over the late 1980s and early 1990s Microsoft grew into a behemoth with a near-monopoly position in the world of software. One of its tactics became known (and feared) throughout the industry: embrace and extend. If confronted with a successful new type of software, Microsoft would purchase one of the leading companies in the sector and then throw resources at integrating their product into Microsoft’s own ecosystem, if necessary dumping it at below cost in order to drive rivals out of business. Microsoft Word grew by acquiring new subsystems: mail merge, spelling checkers, grammar checkers, outline processing. All of these were once successful cottage industries with a thriving community of rival product vendors striving to produce better products that would capture one another’s market share. But one by one, Microsoft moved into each sector and built one of the competitors into Word, thereby killing the competition and stifling innovation. Microsoft killed the outline processor on Windows, stalled development of the grammar checking tool, stifled spelling checkers. There is an entire graveyard of once-hopeful new software ecosystems, and its name is Microsoft Word.

This planned obsolescence (Word’s proprietary file format changed with almost every major release, quietly locking older documents out of newer software) is of no significance to most businesses, for the average life of a business document is less than 6 months. But some fields demand document retention. Law, medicine and literature are all areas where the life expectancy of a file may be measured in decades, if not centuries. Microsoft’s business practices are inimical to the interests of these users.

Nor is Microsoft Word easy to use. Its interface is convoluted, baroque, making the easy difficult and the difficult nearly impossible to achieve. It guarantees job security for the guru, not transparency. For the zen adept who wishes to focus on the task in hand, not the tool with which the task is to be accomplished, it’s a royal pain in the arse and a perpetual distraction. It imposes its own concept of how a document should be structured upon the writer, a structure best suited to business letters and reports (the tasks for which it is used by the majority of its users). Its proofing tools and change tracking mechanisms are baroque, buggy and inadequate for true collaborative document preparation; its outlining and tagging facilities are piteously primitive compared to those required by a novelist or thesis author; its macro language (a descendant of BASIC) is an insult to the intelligence of the programmer, and the procrustean dictates of its grammar checker would merely be funny if the ploddingly sophomoric business writing style it mandates were not so widespread.

But this isn’t why I want Microsoft Office to die.

The reason I want Word to die is that until it does, it is unavoidable. I do not write novels using Microsoft Word. I use a variety of other tools, from Scrivener (a program designed for managing the structure and editing of large compound documents, which works in a manner analogous to a programmer’s integrated development environment if Word were a basic text editor) to classic text editors such as Vim. But somehow, the major publishers have been browbeaten into believing that Word is the sine qua non of document production systems. They have warped and corrupted their production workflow into using Microsoft Word DOC files as their raw substrate, even though this is a file format ill-suited for editorial or typesetting chores. And they expect me to integrate myself into a Word-centric workflow, even though it’s an inappropriate, damaging and laborious tool for the job. It is, quite simply, unavoidable. And worse, by its very prominence, we become blind to the possibility that our tools for document creation could be improved. It has held us back for nearly 25 years already; I hope we will find something better to take its place soon.