Writers and Readers of the New Digital Economy, Unite!

Imagine this scenario: The Tweetosphere suddenly goes silent. The New York Times reports that erstwhile tweeters have declared a universal boycott of the social network. Jobs are disappearing, courtesy of technologies like Twitter’s, even as its revenues soar. But Twitter isn’t sharing that windfall with its content creators, who argue that they are at the heart of its success. Sure, the network dangles the potential for social capital, which can be very nice, but there ain’t nothing like the real thing.

Perhaps they’ve read Jaron Lanier’s Who Owns the Future? (2013), the provocative book that inspired this thought experiment, and have realized that, if things continue as they are, it sure won’t be them. The only topic trending now is mute outrage, and it’s earsplitting. There’s just the lone tweet of a Twitter employee as she’s handed her pink slip: “WTF?! #We’reAllF*cked”

What is Twitter without tweeters? The same as any network without content: Useless. Bankrupt. (The irony that a universal boycott of Twitter could not be tweeted and would thereby render a universal boycott of Twitter nigh impossible is not lost on me. But stay with this….)

Now imagine that Twitter isn’t the only target of the boycotters’ wrath. Facebook, YouTube, Instagram and Tumblr are next. Soon all who participate in online “conversations” demand pay for their contributions—videos, music, comments, reviews, blog entries—and for the time spent creating them. The whole Internet, that great network and aggregator of networks, is on trial, and save for the bots who continue to generate derivative filler, and the odd contribution from employees of Amazon, Google, and their ilk, the whole thing goes static.

Network contributors, nay workers, are as mad as hell, and they’re not gonna…well, you know. They’re fed up with going hungry as the networks they feed employ alarmingly fewer people and grow ever fatter. At the core of their gripe is that these “free” online services are not so much boons for users, as the networks assure them, as boons for the networks themselves. And it’s not only user content that’s free for them—it’s also user data, a commodity far more precious.

With so much money flowing toward those closest to these “Siren Servers,” as Lanier calls them, and away from the middle class, something must give. For a digital economy to be viable and sustainable, a new paradigm is needed. In the above scenario, content creators are hellbent on being at the center of it. This is no socialist revolution. It’s an intensely capitalist one. Participants make networks valuable. Why, then, don’t they get paid?

The fact is Twitter is not “free.” Neither is Facebook or any other social network—or, for that matter, any online experience in which we, the participants, freely offer content and invaluable information about ourselves for others to use or sell as they will. Sure, it may seem that we’re the beneficiaries of largesse, and many wax utopian about the virtues of a world in which free content means cheaper living and a return to a universal barter system, which means we won’t need much money at all. (I’ve spoken to a number of well-educated millennials aligned with the Free Culture Movement who actually believe this. I love their idealism but question what’s in their water.)

Others opine about the social capital made possible by networks like Twitter and how it can be parlayed into wealth. “I’ll plug my wares for free on social networks, millions will buy, and I’ll soon be rich.” And still others, followers of The Singularity with Mosaic tablets bearing Moore’s Law, proclaim that the exponential progress of technology will create unlimited wealth by the 2040s, so that in a few decades, we’ll all be rich (not to mention immortal)!

As Lanier keenly observes, we need a reality check. In a digital economy, “free” means participating blithely, thinking how lucky we are to live in such an age, as the Siren Servers, and those closest to them, co-opt our economic future.

But back to our thought experiment: What might it imply for the economy of writing and reading future books in a digitized world? A tweet is, after all, a byte-sized text generated by an author (or authors) who seeks an audience for her “work.” And followers of tweeters are readers who seek out these texts, engage with them and comment, carrying the conversation forward. Will future “books,” no matter what they look like or what they’re called, really be so much different?

We talk about the “works” of authors and sometimes the “work” associated with reading and processing a great, challenging novel. Both require a great deal of attention and are a kind of work. Does posting a book review on Amazon or engaging in a book discussion on Goodreads constitute “work”? Why not, if these networks would be nothing without such content?

We English majors dream of being paid for doing what we love, but very few of us actually live that dream, and those who do are hampered by academic pressures like securing tenure or the economic pressures of a market that treats novels more like commodities than art.

What if there were a digital, network-based ecosystem that truly supported communities of readers and writers so that the telling of stories and interactions around them could flourish without its participants going broke? Compensation could come in the form of micro-payments from the host network, a top-down approach, but assessing the value of that content would be an emergent phenomenon, based on algorithms that factor in the community’s attention to and merit-rating of each contribution.
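
Lanier doesn’t spell out the mechanics, but a toy version of that emergent valuation might look like the sketch below: each contribution earns a share of a fixed payout pool in proportion to a blend of the attention it receives and the community’s ratings of it. The pool size, the weights, and the data structure here are my own hypothetical choices for illustration, not anything prescribed in Who Owns the Future?

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    author: str
    views: int     # attention: how much of the community engaged with it
    ratings: list  # merit: the community's ratings of it, e.g. 1-5 stars


def micropayments(contributions, pool=100.0, attention_weight=0.4, merit_weight=0.6):
    """Split a fixed payout pool across contributions according to a blend of
    attention (views) and merit (average rating). Purely illustrative."""
    def score(c):
        avg_rating = sum(c.ratings) / len(c.ratings) if c.ratings else 0.0
        # Weighting merit by views keeps a five-star post that nobody read
        # from outearning a widely read, well-rated one.
        return attention_weight * c.views + merit_weight * avg_rating * c.views

    total = sum(score(c) for c in contributions) or 1.0
    return {c.author: round(pool * score(c) / total, 2) for c in contributions}


posts = [
    Contribution("review_of_middlemarch", views=1200, ratings=[5, 4, 5, 5]),
    Contribution("goodreads_thread_reply", views=300, ratings=[4, 3]),
    Contribution("one_line_hot_take", views=5000, ratings=[2, 1, 2]),
]
print(micropayments(posts))
```

Whether the pool and the weights are set by the host network (the top-down part) or are themselves shaped by community behavior (the emergent part) is exactly the kind of design question such an ecosystem would have to settle.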

There are plenty of scary, dystopian implications here, but it’s easy to see how such an ecosystem could spur creativity and innovation in future books. And I, for one, might just tear myself away from the printed page and give up my privacy for a chance to get paid for doing something I love.

Books Without Pages: Reading Beyond the Skeuomorph

Skeuomorphism, the idea that digitally designed “objects” should mimic their real-world counterparts, is decidedly out. Ask any designer of digital anything these days.

In 2012, designers, amid much controversy, hailed the dawn of the “post-linen” era (linen had been a trademark texture of Apple’s mobile devices, per an edict from on high), which was taken by many to be synonymous with the “post-Jobs” one. (See the October 2012 New York Times piece by Nick Wingfield and Nick Bilton to learn more.)

After Jobs died, his “spiritual partner at Apple,” Jony Ive, was named Senior Vice President of Design, and with the announcement of iOS 7 at the 2013 Apple Worldwide Developers Conference (WWDC), Apple declared the death of skeuomorphism.

Still, in 2014, skeuomorphs abound—in Apple software and beyond. Think of all those icons on your laptop and mobile device screens: the address book, the camera lens, the time-honored trash can. And there are auditory skeuomorphs: the shutter-click sound emitted by most camera phones when taking a picture. The click, of course, doesn’t come from a mechanical shutter (camera phones have none) but from a sound file in the phone’s operating system. Ditto for “analog” mobile phone ringtones, which hark back to a bygone era in the evolution of physical phone technology.

But tying digital experiences to physical ones, in addition to creating usability challenges (in music production, a physical knob or dial is much easier to operate than its digital representation), may limit our creativity as we envision future books. And the most obvious “knob” of digital books has to be the animated turns of deckled pages on tablet-based e-books.

Page turns are fast losing favor with designers and readers alike, who are choosing, where possible, the “swiping” alternative. Swiping from right to left moves the screen’s content in that same direction. I almost said, “Swiping from right to left turns the page,” which just goes to show that I’m still as stuck in the old skeuomorphic paradigm as anyone.

Must we see page turns to know that we’re reading a book? Do they provide a needed transition—a pause or breather between units of text—on which we’ve come to rely? Are they a comfort? A nostalgia? What might happen if we stopped defining books as units of thought broken down into other units called “pages”? Could dropping this convention give way to shedding other skeuomorphs endemic to e-books and free our imagination further? In a future no longer concerned with skeuomorphic concordances, can a “real page-turner” become a “real swiper”?

We certainly don’t need to continue representing facing pages; they’re an artifact of physical bookbinding and serve no practical or aesthetic function. But I think that digital narratives will always need to be broken down into discrete, quantifiable bytes—both for easy reference and to help orient the reader and give her a sense of her progress through them.

Pagination seems key in that it manages expectations. Before we begin reading, we want to have a sense of what our time commitment will be, so that we can make an informed choice about engaging with the content. With physical books, the very heft of the tome often settles these questions, however imprecisely. And we can always flip to the back of the book for a page count.

Likewise, pages (and the lines on them) act as critical points of reference and orientation. While reading, we might make a mental note of an interesting passage on page 43 and then refer back to it a bit later. Or we may want to reference the passage in a critical exegesis. One of the boons of digital book design is that the reader may adjust font size to her need or preference, thereby customizing the reading experience. But changing font sizes reflows text and alters page and line counts, making such referencing and orientation a fraught exercise.

Digital book designers are faced with a choice: include a progress bar, which removes dependence on page numbers but confounds easy referencing of the content, or include pagination. The latter yields a further choice: assign page numbers to every new page of the reflow (so if the 450-page book is now 7,000 pages, reflect that in the pagination) or tie the maximum page count to that of the physical version of the book, or to a particular font size. In the former case, page count can be daunting, and a deterrent to engagement. In the latter, any increase in font size will maintain the 450 pages, but the reader may now be faced with multiple “page 4s” to accommodate the fixed count. Clunky. Confusing. Unreferenceable. And we’ve once again tied ourselves to the physical world!
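
To make the second option concrete, here is a minimal sketch of a print-anchored page map (my own illustration, not any vendor’s actual scheme): every character offset in the text is tied to a fixed “logical” page derived from the print edition’s density, and the device then renders however many screens each logical page needs at the reader’s chosen font size. A reference to “page 43” stays stable across font sizes; the cost is several consecutive screens all labeled page 43.

```python
# A crude model of reflowable text with a print-anchored page count.
# Real e-reader layout engines are far more sophisticated (hyphenation,
# line breaking, images), but the trade-off is the same.

PRINT_CHARS_PER_PAGE = 1800  # assumed character density of the print edition


def logical_page(char_offset: int) -> int:
    """Map a character offset to its stable, print-anchored page number."""
    return char_offset // PRINT_CHARS_PER_PAGE + 1


def screens_for(text: str, chars_per_screen: int):
    """Break the text into screens at the current font size, labeling each
    screen with the logical page on which it begins."""
    return [
        (logical_page(start), text[start:start + chars_per_screen])
        for start in range(0, len(text), chars_per_screen)
    ]


book = "Lorem ipsum dolor sit amet. " * 2000          # stand-in for the full text
large_type = screens_for(book, chars_per_screen=600)  # bigger type, fewer characters fit

# Consecutive screens share a label: stable for citation, clunky for
# navigation, since several different screens all claim to be "page 4."
print([page for page, _ in large_type[:12]])
```

A progress bar, by contrast, is just that same character offset divided by the length of the whole text: perfectly fluid, perfectly unquotable, and the other horn of the dilemma.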

Good creative solutions for digital books, which offer no tactile cues, remain elusive. How do we move beyond the page paradigm? It may seem a small matter, but I think that examining our attachment to features of physical books—discerning which are artifacts of physical production and which are fundamentally supportive of the reading process in any format—can be a first step toward imagining future books whose integrity isn’t compromised by the drag of ancestral ties.

Why I’m Here

I’m here because I do not go lightly into the realm of digital books, with all of their implications for the intensely human experience of reading. My innately progressive bent turns stubbornly conservative when threatened with the extinction of the hallowed tomes I’ve counted among my closest friends since childhood. I caress physical books. I inhale them. I display them prominently. I achieve a sensory high every time I remove one from its shelf. As an art director, I especially delight in choosing paper stock, finishes, colors. From an early age, they’ve been my drug, and now I’m faced with the DTs of withdrawal.

And yet I manage the creative production of multimedia books that are best served in app form. And so I have found myself making book apps that combine novels, music, and art. I enjoy imagining and executing creative solutions for fusing narrative fiction with rich media in ways that feel faithful and organic to the content of the story. Though wary of gimmicks, bells and whistles associated with gaming, and the distractions that social media can introduce to an experience that requires deep attention, I do believe that fiction can be well-served by native app technology (though I remain adamantly unconvinced about e-books).

Moore’s Law means that these technologies are evolving at an exponential rate, and it can be a lot to keep up with. In addition to long, arduous editing and proofing cycles, there’s the interminable exercise known as beta testing. For an old-school book lover, this can be maddening! Designing for multiple platforms is time-consuming and costly, but the company I work for is in the unusual and fortunate position of being able to afford high-quality development for all of them. I work with only one author and on only one or two titles at a time. Still, we will take a year or more to carry a story from first draft to app. Books seem to take longer than ever!

The author I work with and for is even more conservative and anti-Web 2.0 than I, and it’s my job to carry his work forward into ever more daring and engaging formats that will reach new, presumably younger, audiences who demand social interactivity. (To me, reading has always been an inherently interactive experience, so I chafe at this idea, even as I write it.) My publishing company has the resources to push limits, and so I must lead the brigade and buck my own limited ideas about what a book can and should be. Incubating ideas with a coterie of science fiction writers, futurists, and publishers grappling with similar questions seems to me a very good start.