New Modes of Knowledge Production and the Book


It goes without saying that digital technologies have lowered the barriers to writing, printing, and publishing books. And yet, when we think about the future of the book, too often we (historians especially) imagine the book in terms of the large commercial or academic press that follows an age-old process through which authors sit down at a typewriter and peck away at the keyboard, filling page after page of text. What I’ve realized, though, is that I’ve arrived at these problems of the future of the book from a quite different point, by a roundabout journey that began without much consideration of either platform or press.

The questions I am presently exploring are quite specific. How do we deploy an e-publishing solution for mobile interpretive projects powered by Curatescape (+ Omeka)? That problem has transformed my colleagues and partners into publishers, revealing a convergence between public humanities projects and traditional scholarly endeavors. This convergence suggests that as we sprint beyond the book, we should appreciate both the importance of the book’s unique presence and the ways in which the book can be enriched by new approaches to the production of knowledge.

Curatescape, the framework for mobile publishing developed by my research lab, emerged from several professional practices that have converged in the digital age.

Urban and public historians have long curated the landscape, since well before the term “curation” was applied as widely as it has been in the digital age. Often emerging out of innovative community-driven teaching, these “local” historians and their students and collaborators studied neighborhoods, communities, and civic spaces. The outcomes of that work—papers, presentations, walking tours, and public history projects—frequently made their way back to the community through interactive projects featuring dialogues between students and their key informants. That dialogue, framed by historical scholarship and primary source documents, yielded remarkable experiential learning of the sort that produces civic engagement. This approach has become a standard feature on many university campuses through service learning and experientially based classroom assignments. The digital age has yielded new ways to feature that work, ranging from blogs to digital archival platforms. Suddenly, we’ve moved from one-off projects to those that can (potentially) build upon one another.

The ability to create shared learning environments led innovators to create standards-based platforms and tools for publishing on the Internet. WordPress, Blogger, and Tumblr are the best-known surviving tools from this moment, having become common blogging (or microblogging) software. In the archival world, open-source archival content management systems emerged to help librarians and curators document and share their collections—books, material culture, and photographs. In academic and library settings, tools like CollectiveAccess and Omeka have become commonly used archival systems, emulating blogging platforms in their approach to helping heritage professionals engage publics with their important cultural collections.

At the turn of the century, oral history practice underwent a dramatic transformation, driven by the emergence of digital tools for collecting, processing, and archiving oral history. The results accelerated trends already underway in the field, away from reliance on written transcripts to mediate what is a deeply human and aural experience. Digital collection of stories democratized oral history by allowing anyone to record narratives, and it made those sound files more shareable than they’d ever been. Coupled with easier indexing, annotating, and archiving, oral history became malleable and could be included easily in the emerging ecosystem of digital humanities projects. Setting aside the work of filmmakers, these trends allowed scholars and documentarians for the first time to widely share human voices as part of their interpretive work. As part of a broader proliferation of interpretive multimedia, the very nature of storytelling has shifted toward layered multimedia presentation.

In 2005, as these trends emerged and I engaged them with students, teachers, and colleagues, I was asked to produce content for history kiosks that would be located along a rapid bus route in Cleveland, Ohio. Our team built elaborate multimedia stories for these kiosks, which appeared on the streets at the very moment of the emergence of the iPhone. Recognizing that such locative technologies promised to transform cities into living museums, our team adapted the kiosk project to mobile devices. Bringing together a series of convergences—in engaged-student learning, open-source content management systems, and digital oral history—our first project, Cleveland Historical, developed as a web-based mobile interpretive project that allowed our team to curate the city through interpretive, layered multimedia stories. Cleveland Historical became the first iteration of Curatescape, a broader framework for mobile curation that uses the Omeka content management system as its core archive. Importantly, we don’t call our work a “platform” but a framework that uses multiple digital tools, content management systems, and standards. We exist within a broader system of knowledge production that is both technical and conceptual.

In building our Cleveland project, as well as working with more than 30 partners to launch their projects, we’ve realized that our teams of students, communities, and scholars are curating landscape through interpretive stories. They’re also publishing rich collections of multimedia stories that engage the landscape in remarkable ways. These projects transform how we experience place, and also provide an avenue for shaping conversations about place.

Critically, our audiences and interpreters also have challenged the boundaries of our community, urging us to produce information feeds in a variety of formats, including e-books and even real books. They want to read our interpretive historical stories as collections, with different sorts of connections to other interpretive projects (both inside and outside the Curatescape system).

Quite suddenly, we’ve found ourselves asking what these travelogues should look like. We’re asking about the role of multimedia, the formats, and the outputs (e-books, print, how to format the RSS feed). We’re just as interested in the use cases: is this for local urban walking tours, for thematic books that feature the apps’ tours, for aggregations of stories across space—about parks or Civil Rights? The questions of what this might look like, and of what it means to write a book, have challenged our sense of the book itself. What is it that we’re publishing? If it is not a book, what is it? Critically, the convergence of tools, approaches, and materials suggests to me that whatever forms emerge should reflect emerging approaches to systems of knowledge production. Hearkening back to a mythic book as standard and goal may be the wrong move as we sprint toward the future of the book.
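To make the feed question concrete, here is a minimal sketch of one such output: a script that turns a handful of story items into a basic RSS 2.0 feed. The item fields, the example URLs, and the idea that the records come from a Curatescape/Omeka install are illustrative assumptions, not the project’s actual export code.

```python
# A minimal sketch: turning interpretive story items into an RSS 2.0 feed.
# The `stories` list stands in for records pulled from a Curatescape/Omeka
# install; titles, URLs, and field names here are illustrative only.
import xml.etree.ElementTree as ET

stories = [
    {"title": "Euclid Avenue", "url": "https://example.org/items/1",
     "summary": "A layered multimedia story about Millionaires' Row."},
    {"title": "League Park", "url": "https://example.org/items/2",
     "summary": "Baseball, neighborhood change, and memory on the east side."},
]

def build_rss(channel_title, channel_link, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = channel_link
    for story in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = story["title"]
        ET.SubElement(item, "link").text = story["url"]
        ET.SubElement(item, "description").text = story["summary"]
    return ET.tostring(rss, encoding="unicode")

print(build_rss("Cleveland Historical Stories", "https://example.org", stories))
```

The same records could just as plausibly be packaged as an e-book manifest or a print layout; the point is that the stories, once structured, can flow into whatever container the audience asks for.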

Books, Books Everywhere and Not a Drop to Drink


A significant impediment for a reader considering whether to enter into the world of a book is that it is resource-intensive. As C. Max Magee discussed, books are expensive in “time and emotional energy.” The overall commitment is significant, and perhaps even more importantly, the commitment required merely to sample is high too. Spending an hour and a half reading a book you decide you don’t like is a deeply unpleasant experience, and the reader frequently quantifies that loss in terms of dollars rather than time: “I can’t believe I wasted $15 on this piece of crap.” A book you don’t want to read is worse than an absence of value; it destroys value (subjectively, of course).

One ramification is that the price of a book has to be radically discounted in order to persuade a reader to take a risk on something that could prove to be a negative experience. Dollar for dollar, a book is the cheapest form of narrative cultural experience there is, cheaper than music or film, and the perceived value, in the consumer’s mind, of content in digital format exacerbates the situation, putting even more downward pressure on pricing. The shift in consumption patterns away from ownership towards an access model, one driven by companies like Netflix in film and by Spotify, Pandora, Last.fm, and the like in music, creates yet more pressure. Indeed, in one respect, piracy means that all content is now, in effect, free, if you know how and where to look.

Nevertheless, the cost denominated in time and emotional energy remains as high as ever, higher still if you consider that we are now swimming in content. Almost all platform innovation around content in the past five hundred years has occurred at the level of supply, whereas relatively little effort has been expended figuring out how to integrate all the stories we’re now actively telling. Probably the greatest effort has been expended by search engines on finding things you know you’re looking for, and by social networks on organizing the output of social activity, whether that activity is expressed in short bursts of words or in pictures and short videos. But little effort has been expended on the largest and most demanding agglomerations of words, and on considering how to permit serendipity. Serendipity seems to require a sense of an encounter with the unexpected, which is difficult to engender when we expect to have stories flowing by us throughout space and time.

The primary new platform innovation in books in 2013–2014 has been the subscription service, which seeks to apply the film/TV/music paradigm shift: a move toward paid streaming subscriptions and away from both advertising-supported analog broadcast (radio and TV) and pay-per-download models like iTunes.

Currently, however, these services—the most discussed are Oyster and Scribd—focus on acquiring the largest possible libraries of content (each touts 100,000+ titles) and the lowest price ($9.99 and $8.99 respectively). However, with so much content in the world, more than any human alive could even name, never mind consume, and with most of it available either for free already or easily hackable, what value could such services possibly provide a reader?

My belief is that the power of any such service will inhere less in its ability to make more reading available more cheaply, and more in its ability to help us integrate reading into our daily lives. How this will happen is probably the determining factor in both how these platforms will evolve and the extent to which people will migrate to these reading services from other modes of acquiring content for reading. I’m now working for a service called Byliner, which shares with Oyster and Scribd a library model and a monthly subscription fee. However, it is also exploring ways to structure the library in a manner that enables a satisfying journey through all the stories. In this regard it has one advantage over Oyster and Scribd, which is that it began life as a publisher of stories that can be read, typically, in 30–40 minutes, with stories (fiction and narrative nonfiction) ranging in length from 5,000 to 20,000 words. As such, the reader is not called upon in each instance to embark on a long, potentially unpleasant journey—the fact that the stories are shorter than full-length books allows the reader to nibble her way through and, if we are able to serve her up successive stories that appeal, we’re able, ideally, to bring about a progressive sense of depth. A different experience we’re exploring is to select five stories around a particular theme, say Genius, or Hustle, or Lust, and send those to subscribers once a week. So the first structure is akin to a reader journeying through the City of Stories, while the second operates more like a wine club, delivering weekly a set of new stories to read.

Regardless of how these various enterprises evolve, their existence signifies a positive development in the business of digital content, in that they do not require the enormous number of users that large-scale advertising-driven corporations need to survive. Stories of a significant length do not interest advertisers, since an individual serious narrative is never going to attract millions of readers. So a model wherein there is predictable recurring revenue, based on readers looking for precisely that, is a positive outcome for the reading-writing ecosystem overall.

Books as Platforms for Surveillance


One major trend in current technological innovation is personalization. People can look up anything of interest with unprecedented speed, and are presented with information specifically tailored to their needs, preferences, and past behaviors. To effect this personalization, massive amounts of data are continuously collected about users’ interactions with technology—what they search for, what they look at, and what they choose to share with others online. There is a tension between the usefulness of having technology anticipate your needs and the Orwellian implications of having all the data you generate collected, stored, and analyzed.

In thinking about the production of e-books, we have to recognize that these knowledge systems will increasingly incorporate knowledge about the consumers of the books. For digital books to become more intelligent and adaptive to reader characteristics, they need to collect massive amounts of data about individual readers. Other essays from this book sprint have positioned e-books as platforms for performance, platforms for expression, and platforms for community in ways that emphasize the positive role of books in modern society. We also need to recognize that digital books, like much modern computing technology, are platforms for large-scale surveillance in ways that can have problematic implications.

One area of surveillance is the intentional actions users take: books they buy, books they read, passages they underline, annotations they make, and comments or reviews they leave for the broader online community. This data can be logged and stored, and it is easy to imagine scenarios where the act of reading books counter to your group’s norms is discouraged by the fact that it could be made public. It will soon be possible to interpret most text data automatically, and comments and annotations will be crawled and categorized. The thought of an automated aggregation of every spontaneous and potentially trivial reaction by each individual reader across several years is somewhat discomfiting. On the other hand, this data generated by intentional actions is easily interpretable by readers themselves. In today’s world, many people are comfortable sharing this kind of information about themselves with their broader community. When readers have the power to manage and curate this data as part of the way they present their identity, the collection of the data somehow seems less ominous.

A second area of surveillance is how books are read—user reactions to the text that are less intentional but integral to the act of reading itself. Gaze data can tell us where on the page the reader is looking at any given point in time; and while eye trackers are currently expensive and cumbersome, in the near future it is entirely likely that accurate tracking will be accomplished through camera-based technologies. Physiological data can provide information about readers’ emotional reactions to particular passages, and brain data can provide information about their cognitive states. While currently these technologies are intrusive and mostly limited to research applications, they will not always be.

The implications of this second kind of data collection are sinister. If Sara is assigned a reading from a textbook, and eye tracking indicates she barely glanced at one section, is that going to have negative academic consequences? Should it? If Jane has an emotional reaction to a passage that provokes a painful memory, should that be catalogued, stored, and interpreted, even if that information is never used? If Bob is recreationally reading a book on business, and cognitive state information indicates that he does not understand an essential concept, could that information be found and held against him later in a job interview for a position as a market analyst?

The more data we collect on the reader, the more we can tailor books to their unique needs and preferences. The knowledge system of the digital book of the future includes the characteristics of the reader. Readers themselves might want to examine that data, finding that it provides them with insight into their own habits, or curate that data, finding that it enhances how they wish to present themselves online. However, the collection of data which users do not produce intentionally while reading—gaze, physiological, and brain data—will mean that every failure of understanding or frustration is permanently indexed and potentially accessible. The future book is a platform for gathering an unprecedented level of information about each individual reader that catalogs their past experiences, current abilities, and potential for future success.

In the Future, We’ll All Have Pet Bots


Right now bots are primarily annoyances; 98% are spammers, often delivering commercial come-ons via inscrutable language meant to evade anti-spam algorithms.

But some bots are more playful—intentional or unintentional performance art. Some recent examples that have bubbled up into the public consciousness include poetic e-book spammer turned subversive art project @Horse_ebooks and playful Twitter bot-makers Ranjit Bhatnagar and Darius Kazemi.

Bhatnagar’s @Pentametron finds a tweet inadvertently written in iambic pentameter and then finds another with a rhyming final syllable.

Kazemi’s @TwoHeadlines scans the web for headlines and mashes up two at a time, with results that sound inadvertently plausible.
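To make that mechanism concrete, here is a toy sketch of the mash-up idea. It is not Kazemi’s actual code, which works from live news feeds and smarter handling of named entities; the headlines below are hard-coded placeholders and the “subject” is crudely assumed to be a headline’s first two words.

```python
# A toy headline mash-up in the spirit of @TwoHeadlines. Real bots pull live
# headlines and swap recognized named entities; here the headlines are
# placeholders and the "subject" is crudely assumed to be the first two words.
import random

headlines = [
    "Taylor Swift Announces World Tour",
    "Federal Reserve Raises Interest Rates Again",
    "Cleveland Browns Sign Veteran Quarterback",
]

def mash(subject_source, predicate_source):
    """Graft the assumed two-word subject of one headline onto another."""
    return " ".join(subject_source.split()[:2] + predicate_source.split()[2:])

first, second = random.sample(headlines, 2)
print(mash(first, second))
```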

Follow @robotuaries and it will occasionally tweet out a fake twitter obituary for you.

While these bots amuse, others are useful, keyed to stock market movements or weather conditions. New York Times senior software architect Jacob Harris has created iron_ebooks, a utility that lets you create an _ebooks account whose tweets are derived from a regular Twitter account, effectively giving you a bizarro version of your Twitter self to observe and enjoy.

 

@tofu_product does the same, but you have to ping it first.
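Under the hood, _ebooks-style accounts are typically built on Markov chains trained on the source account’s tweets. The sketch below shows that general technique with a placeholder corpus; it is not Harris’s iron_ebooks code.

```python
# A minimal word-level Markov chain of the kind _ebooks-style bots typically
# use: learn which word tends to follow which in a source corpus, then walk
# the chain to produce a new, slightly uncanny utterance. The corpus below is
# a placeholder for an account's tweet history.
import random
from collections import defaultdict

corpus = (
    "the future of the book is a platform "
    "the book is a conversation about the future "
    "a platform for the conversation is the book"
).split()

transitions = defaultdict(list)          # word -> words observed to follow it
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def babble(length=12):
    word = random.choice(corpus)
    output = [word]
    for _ in range(length - 1):
        followers = transitions.get(word)
        word = random.choice(followers) if followers else random.choice(corpus)
        output.append(word)
    return " ".join(output)

print(babble())
```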

These are rudimentary creatures, but even at this early stage they appear capable of poetry that can elicit the same reactions that traditional (i.e., human-created) poetry is intended to elicit. In the controlled world of Twitter, each bot performs its prescribed function, but what could future bots do?

Certainly there are whole business models built on creating bots that are meant to learn our habits and help us in our daily lives (including, of course, pushing advertising our way). Google Now is a leading-edge example of this. Even now, it’s offering me things to do nearby, giving me the weather here in Arizona and at home in New Jersey, and showing me links to new articles on a variety of websites it knows I read.

Here’s someone else’s Google Now:

[Image: a screenshot of another user’s Google Now feed]

But might there also be promise in these bots in the worlds of art and literature? To take the Twitter example, could a bot learn enough to send me bespoke bits of poetry or personalized aphorism that it knows will elevate my mind and mood?

What about a bot that breaks the 140-character bounds of Twitter to send me personalized machine-generated art, snippets of music, or found and remixed narrative, all riffing on cues found in my online travels?

A pet bot just for me that sends me art made just for me.

Talking It Out


As I sit here in a nearly silent room filled with creative thinkers about the future of books, I cannot avoid asking whether we’re doing this all wrong. As a couple of our participants have pointed out, it’s slightly perverse to bring these people together and then ask them to spend much of their time tapping silently at flimsy plastic input devices based on flawed 19th century machines.

Shouldn’t we be talking about this stuff instead?

I’d like to argue, borrowing from Churchill, that this method is the worst form of collaboration except for all the others. The book sprint that we’re running here is inspired by an ambition to reinvent the concept of the book, but perhaps more importantly, the process and performance of publishing. But it is also an effort to reimagine how intellectual conversations can happen. The best conversations are live, spontaneous, and require the high bandwidth of sharing a physical space. You can do it remotely, even by exchanging a series of letters over decades, but to actually create a sense of energy and improvisation—to get people thinking out loud and thinking together—you need live performance.

So the process of our book sprint needs to include live conversation but also something more. A great conversation, by definition, is not transferrable—you were there or you weren’t. Our challenge is to perform a kind of alchemy that distills the energy of collaborative thinking into a new medium. I say alchemy because this involves transmuting one fundamentally magical substance into another. The conversation itself is unique, and even an ESPN-style multipoint camera crew could not capture the live intensity of smart people thinking on their feet—at best, it would be an archival recording of something cool that happened once.

The traditional solution to this problem has been to let people figure it out for themselves: have a great conversation, take it home with you, and maybe months or years later it will emerge as some kind of intellectual outcome. In the humanities, the process is even more stylized: almost all intellectual action happens before or after the big conference, when the paper gets written and when it gets revised. All that happens in the conference room is a bunch of people reading things at one another.

Our project here is not only to pose a series of provocative questions about the future of the book, but also to experiment with new processes for curating these conversations. The series of short writing deadlines and structured groups we’ve deployed here offers people a set of friendly challenges: converse, and then articulate your best ideas in a short essay. At its best the blending of these modes sharpens both the talking and the writing through a set of simple constraints. Our series of quick marches asks participants to articulate a few positions that are neither over-determined (because nobody had time to prepare, to do their work beforehand, to pick an answer before the question was fully voiced) nor consequence-free (because it’s not just a conversation, it’s a text that will live on through multiple publishing iterations).

So the exercise is a kind of thinking by doing on multiple levels of process. Everyone in this room is working out their own solution to the structure, the hurdles and pathways we’ve set before them. And collectively we are discussing the process of authorship and publishing itself. The most important part of the exercise is the possibility, really the embrace, of failure. This is one of the beautiful things about a good conversation in performance: the inescapable flow of oral utterance, as Barthes (1975) or Ong (1982) argued, does not allow things to be unsaid, only to be reframed. The book sprint is a digital reinvention of that idea (not by forbidding revision, but by persistently nudging people out of their comfort zones).

The process is performance. The room is talking again; it’s filling with laughter and movement as people come out of another cycle to share notes, to talk things out and to keep pushing forward.

Living in an Amazon World


If nothing changes the trajectory, we book people are going to be living in an Amazon world. That means the future of the book hinges heavily on leveraging Amazon’s tools, distribution muscle, and audience.

In the short term, the benefits are great. Amazon’s publishing platforms are inexpensive, easy to use, and guarantee wide coverage both within the U.S. and around the world. Whether print-on-demand (Amazon’s CreateSpace unit, chiefly) or pure e-book (Kindle), Amazon offers the full spectrum of services for both fledgling and mature publishers.

Does that mean we are condemned to learn to love the dark side of Janus-faced Amazon—its penchant for loss-leader pricing designed to reinforce technological “lock-in” (having a library of e-books, for instance, that operate only on the Kindle hardware family)? Or the infant Amazon enterprise of allowing owners of e-books to “share” them across computer networks, thus effectively depriving authors and content owners of payment?

To be sure, the position of Amazon in the world of book publishing is not yet hegemonic. Print publishers of seriousness, size, and scope, notably Oxford, Simon & Schuster (CBS), and Macmillan (Holtzbrinck), remain counterweights against any emergent Amazon monopoly. And in e-books, where Amazon reigns supreme, the traditional analog-to-digital transfer model, where the goal for the e-book is to replicate the print reading experience, opens Amazon to attacks from technological innovators who wish to leapfrog it by revolutionizing the book, both as artifact and as experience. Even today, so many platforms for book publishing are effectively free and “consumer friendly” that you not only can publish books easily in digital form, you can publish them in a wide variety of ways, incorporating all media types in ways that both enhance the reading experience and deliver audio, video, and still photography. So as a practical matter, Amazon is not the sole option, not at all.

Yet the rising tide of the Kindle means that readers, at least for the moment, are wedded to a platform that not only can’t be ignored but must be embraced. From the standpoint of the liquid present, then, the future of books is now, and readers and authors alike are reading, writing, and publishing…in an Amazon world.

 

Exhuming the Mastodon


“Let us tenderly and kindly cherish, therefore, the means of knowledge. Let us dare to read, think, speak, and write.”

—John Adams

As a cultural historian, and one involved in rethinking graduate education, I find the notion of pathways resonant in obvious ways. We are heirs to a tradition of valuing archives that are arranged synchronically and chronologically (classes, curricula (from the Latin for “to run”), and credentials) to effect a set of knowledge outputs and practices—the educated individual, critically forged and capable. That person extends the means and ends. So, John Adams, thanks.

But what happens when those means clot or forestall the impulse to dare and act in language—when the pathways become sclerotic and unnecessarily difficult? I’m thinking, for the moment, of the dissertation as we’ve inherited it from the nineteenth century. It takes the form of a thesis, but is really a book: chaptered, indexed, bound. It must be “defended,” in the form of an oral meeting that theoretically works as an opportunity to counter and call bullshit on written material that can cloak error or ambiguity in its formal, officializing guise of print. The defense completes the delivery of new knowledge by the newly “minted” scholar.

We might view it as a kind of curtain lifting, not unlike the iconic Charles Willson Peale, in his self-portrait as gatekeeper to the objects of knowledge: “The Artist in His Museum,” 1822.

[Image: Charles Willson Peale, “The Artist in His Museum,” 1822]

Since 1822, the museum of scholarly production has advanced through a few more chambers, but the performance and the architecture are basically the same. Of late, we take the text product, make it a codex via arbitrary formatting, contract with ProQuest to digitize it and make it available on the Internet (not open-access, but close), and then usually provide it to the degree-granting institution’s library to archive. Many humanities students have begun to choose to forgo publication at the moment of credentialing, for fear that they might be precluding their pathway not into “knowledge” but into the publication systems that market knowledge—academic presses embedded in a shrinking trade in knowledge commodities.

But that access issue is almost the least of the problems with the PATHWAY of doctoral credentialing. It’s the form itself. That culminating experience is the place where the “running” in curriculum hits obstacles, stalls, crashes, burns, evaporates. Perhaps the digital offers ways to dredge the riverbed and make that knowledge system much more fertile.

I’d like to see dissertations that continue the curriculum—that are, as the MLA and AHA are taking preliminary steps toward advocating, process projects. They would arise out of a richer mix of inputs than an advisor and a few co-advisors, drawing on communities of intra- and inter-institutional faculty and students. They would break down the wall between institutional knowledge and its publics by inviting widespread access to the project as a work in process. Graduate faculties would be configured to critique and follow real-time progress rather than rely on dangerously episodic check-ins. The archive, too, would not be spatially remote, giving the student little excuse to get “lost.” Indeed, the line between reading and curating would be forever blurred. And the metaphor of “defense” becomes unnecessary, since the discrete, bounded knowledge output, the one we must “suspect” of flaws, has always already been produced through an engagement with multiple voices and assessments.

So rather than Peale in his museum, we’d have the dissertation as a collaborative dig, pulling forth, over time. As in:

[Image: Charles Willson Peale, “The Exhumation of the Mastodon,” 1805–08]

Also by Charles Willson Peale, this is an image of “The Exhumation of the Mastodon,” 1805–08. Note the temporality Peale foregrounds, the wheel in motion, the dating over a three-year period—this is a rendering of process. And it’s a process of manufacturing knowledge collaboratively, over time. It is a lesson from the past about how not to bury things.

Traveling the Landscape of the Book


Here is what I want to ask: Books provide us paths through the world. Can the world provide us paths through books? Or, more appropriately, what can the world itself tell us about how we should sprint beyond the book?

So let me digress with a comment on reading and books as sensory experiences. Books are read; text is visual. Nearly every assumption built into the imaginary of books depends on reading and sight. Too often we fail to appreciate the breadth and depth of books in terms of their sensory evocation, much less how we might experience what is within. Of course, books themselves are tactile. Old books, in particular, have a certain smell—for the historian, opening an old book is akin to the experience of that new car smell. Ahhh, yes, the mustiness of an old library. I fondly recall the reddish hue of the archives that adhered to my white gloves. More typically, we think of the senses in terms of the sensory experiences evoked by a book: a petite madeleine, chocolate, or the smell of baked bread in a Bret Easton Ellis novel. Do these evocations go only one way, from the book to the imagination to the senses? Can we reverse that path, bringing the physical experience—of the senses, of the material—to the book? Wouldn’t that enhance our experience? I am thinking presently of how a sound historian has used the digital humanities to evoke the auditory sensibilities of early 20th-century New York City. Our senses might offer entirely new paths into and through literature, allowing us to move beyond the book, envisioning a multisensory experience.

Likewise, reading itself is not just a literal act of moving eyes over text and processing that text; it has become a metaphor for the production of knowledge itself. We do more than read text. We also “read” landscape, images, and environment. And yet, this imagining still elevates reading above not only the senses but also the material world with all its depth and expressions. Of course, books have never been isolated from the world, but discussions of the book usually imagine it as a knowledge system all but closed to anything outside the human imagination. I would argue that imagination is shaped by social and historical experience. Rather than imagine books as blazing paths through our minds, perhaps we should look to social and historical experiences—to the materiality of everyday experience—to find ways of imagining paths through books themselves.

Consider how the landscape can be exposed, confronted, and expressed to create a path through a book, one where the materiality of space helps us find the logic of a book, or perhaps the materiality of experience—perhaps the work of an aged craft iron worker, whose voice and talents reveal narrative. What about hyper-textual approaches to the book, where links structure our reading—connections to the material, the ephemeral, the momentary?

I want a world of non-textual paths, generated by the materiality of the world, that structures our paths through individual books, libraries of books, or literatures. I don’t want to abandon the narrative, the story, the text, the argument in favor of the archival. Rather, I want a connectedness between book and materiality of experience that transforms not only our reading of the world but also our reading of the book.

As we sprint beyond the book, let’s not race toward the book as an individuated form (and I’m not advocating abandoning authorship) without connection to other books or to the materiality of experience. Rather, let’s build something that is interlaced with the world, with the materiality of experience, including especially a richer sensory experience. Let’s create books that are meta-analytical and meta-experiential.

Creating Multiple Adaptive Paths Through the Book


A traditional book encourages the reader to take a direct path from beginning to end. Pages are arranged in a fixed order and numbered. But there are many cases where a book is not read in the order of its pages. Imagine Mary, who consults her textbook to understand a particular physics principle. She looks up the name of the principle in the index, and then turns directly to that page. After reading the description, Mary realizes that she doesn’t understand. She flips the pages to earlier in the textbook where she remembers a key related concept was first introduced.

Instructional texts are not the only contexts where you might want to navigate non-linearly. James is reading a crime novel. He reads a few pages, and then, as he always does, flips to the last chapter to see how the story ends. He finishes reading the book, remembers a part that he particularly liked, and then flips back to re-read it.

Digital technologies have opened up new possibilities for facilitating the way we navigate through texts. If Mary were reading a digital book, a search for the concept she does not understand might return a variety of relevant information: where the concept is first explained, what she needs to know to understand the new concept, and where that concept is later used in the text. The book could recommend, based on her knowledge, which content she should view first. Using hyperlinks, it is now possible to easily jump between different parts of a book, and using adaptive recommendations, a system can indicate which parts of a book are most relevant to a particular reader.

If James were reading a digital book, the possibilities of new technology suggest a more interactive and more personalized reading experience. The author could indicate multiple ways a book could be read to suit different preferences. For James, the book could be automatically reordered to present the final chapter first. Based on James’ reading behavior, the book could automatically infer which parts James liked the best, and link back to those parts at the end of the book.

To facilitate multiple paths through a book, there are several considerations related to technology and user experience design: semantic indexing, designing for non-linear navigation, making intelligent recommendations, and adaptive reconfigurations.

Semantic indexing. At a minimum, the content of the book needs to be indexed (either through natural language processing technologies or crowdsourcing) so that semantically meaningful links between different parts of the book can be made.
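A minimal sketch of what such an index might hold, with Mary’s physics textbook as the running example (the concept names and section numbers are invented): each concept records where it is explained, what it presupposes, and where it is used later, so a single lookup can answer all three of the questions raised above.

```python
# A minimal semantic index for a textbook: each concept records where it is
# explained, which concepts it presupposes, and where it is later applied.
# Concept names and section numbers are invented for illustration.
semantic_index = {
    "momentum": {"explained_in": "2.3", "prerequisites": ["velocity", "mass"],
                 "used_in": ["4.1", "7.2"]},
    "velocity": {"explained_in": "1.4", "prerequisites": [], "used_in": ["2.3", "3.1"]},
    "mass":     {"explained_in": "1.2", "prerequisites": [], "used_in": ["2.3"]},
}

def lookup(concept, known_concepts):
    """Where is this explained, what should I (re)read first given what I
    already know, and where is it used later in the book?"""
    entry = semantic_index[concept]
    missing = [c for c in entry["prerequisites"] if c not in known_concepts]
    return {
        "read": entry["explained_in"],
        "read_first": [semantic_index[c]["explained_in"] for c in missing],
        "see_also": entry["used_in"],
    }

print(lookup("momentum", known_concepts={"mass"}))
# -> {'read': '2.3', 'read_first': ['1.4'], 'see_also': ['4.1', '7.2']}
```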

Designing for non-linear navigation. With non-linear navigation comes the need to design the book’s interface to support the user in taking multiple views of the text. Side-by-side split-screen views should be facilitated so readers can make direct comparisons between content. Reading history should be saved so the reader does not lose the page they were interested in, and can retrace their steps through the book if necessary.

Making intelligent recommendations. As the number of navigation paths increases, the reader may need recommendations for which path to view next. The quality of these recommendations depends on how effectively the book can construct a reader profile, interpret reading history, and understand how its contents can meet the reader’s needs.
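A deliberately naive sketch of that recommendation step (invented data, and far simpler than any production system): score each unread section by how well its topics match the reader’s interests, discounting sections whose prerequisites the reader has not yet covered.

```python
# A deliberately naive next-section recommender: score each unread section by
# the overlap between its topics and the reader's interests, minus a penalty
# for prerequisites the reader has not yet covered. All data is illustrative.
sections = {
    "2.3 Momentum":     {"topics": {"momentum", "mass"},   "prereqs": {"1.4 Velocity"}},
    "3.1 Acceleration": {"topics": {"velocity", "force"},  "prereqs": {"1.4 Velocity"}},
    "4.1 Collisions":   {"topics": {"momentum", "energy"}, "prereqs": {"2.3 Momentum"}},
}

def recommend(interests, already_read):
    scored = []
    for name, info in sections.items():
        if name in already_read:
            continue
        overlap = len(info["topics"] & interests)
        missing_prereqs = len(info["prereqs"] - already_read)
        scored.append((overlap - missing_prereqs, name))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # stable sort keeps ties in order
    return [name for _, name in scored]

print(recommend(interests={"momentum"}, already_read={"1.4 Velocity"}))
# -> ['2.3 Momentum', '3.1 Acceleration', '4.1 Collisions']
```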

Adaptive reconfigurations. For an engaging reading experience, a book could adaptively reconfigure its contents based on reader reactions and preferences. Using different navigation paths, writers could author multiple reading experiences within a single book, tailored toward different profiles.

One final consideration in this discussion is ensuring that these adaptive technologies support how readers perceive their own needs. In general, users want to maintain control when interacting with technologies. For this reason, recommendations may be better received than adaptive reconfigurations. Readers will want to be able to understand how the book is being reconfigured and potentially select their own path. As adaptive technologies become more sophisticated, the goal should be to enable the reader to make more informed choices about how and what they read. 

The Best of All Possible Worlds?


The traditions of serious publishing are imperiled by the emergence of new technologies that more easily, inexpensively—and at global scale—produce books of all kinds.

Academic and scholarly writers inside and outside of the academy face the vexing problem of whether to abandon traditional platforms for book publishing that have served their interests and embrace new forms of publishing that undermine the unity of the book.

The central question is: how will traditional books co-evolve with the new forms of books—purely digital or print-digital hybrids—in which text is unstable, merged with other media types, and increasingly ephemeral?

The traditional book is unlikely to vanish—never mind the forces of creative destruction at play in the publishing world—because copyright and intellectual property law privilege the book over other kinds of published artifacts (most dramatically, the “newspaper” article). Path dependence is a powerful ally to book traditionalists. Retro-book advocates benefit from a powerful nexus of institutions—universities, foundations, libraries and even book sellers—that will continue to support and enhance the traditional book.

The Kindle, the leading e-book platform, chiefly serves to reinforce the hegemony of the traditional book. The entire thrust of Amazon’s “innovation” around the Kindle is to improve and enhance the direct analog-to-digital transfer. The Kindle strives to replicate, not undermine or revolutionize, the traditional experience of book reading. Amazon’s reward for assuming this retro posture is market dominance. The market leader in e-books is curiously reinforcing the hegemonic position of the traditional bounded, print-on-paper book.

Scholars and serious thinkers face, perhaps improbably, the paradoxical situation that creative destruction and technological change are opening multiple pathways for publishing their work, in a real sense providing them with the best of both worlds: lower barriers to reaching readers through traditional book publishing and new hybrid forms of (multimedia) books that expand and redefine the notion of what a book is and can be.

We book authors of all stripes now exist in the best of all possible worlds—on the production side. The reader, for whom we care deeply, is more estranged from us than ever before. Therein lies the riddle of the author’s existence—and the reason why, bluntly, we authors are profoundly anxious, destabilized, and in fear of our inevitable doom.

 

Setting the Demons Loose


Many of the interventions offered to book culture, or to what you could call the reading-writing economy, are currently coming from start-ups, entities described by one entrepreneur-cum-academic as organizations formed to search for a business model. As such, they may fail to find that business model even though they succeed at finding outcomes. One that I worked with closely, Small Demons, met that fate. What we did find, while not a business model, was a tacit cultural map, one formed by the culturally resonant details set jewel-like within books, one which, when illuminated by a kind of UV light, glows so as to allow one to navigate through the storyverse—our term at Small Demons for the universe that exists parallel to the “In Real Life” one in which we live. A Borgesian world, then, a planet-like library with paths that may be traversed to allow a richer life for us humans.

The company created a taxonomy of keywords grouped as persons (fictional and/or real), places (fictional and/or real), and things (encompassing songs, movies, other books, events, sports, drugs, foodstuffs, cars, and so forth) and managed to use entity extraction software to highlight those words in books, collect useful information about them, and link them to one another. One might then travel from Nick Hornby’s High Fidelity to Haruki Murakami’s Kafka on the Shore via Prince’s “Little Red Corvette.” Unlike the typical recommendation engines explored by C. Max Magee, these paths are not designed to lead from one recommended cultural artifact to the next but merely to offer an alternative mode of browsing. However, much like those services, it does offer signal amidst the noise, a heat map that offers clues to those artifacts, much as surveying the restaurants in an urban plaza allows a prospective diner to gauge the vibe of each restaurant: see how the diners are dressed, hear the music playing, check out the decor.
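A sketch of the underlying structure, with invented data standing in for the extraction pipeline rather than reproducing it: each book maps to the persons, places, and things detected in it, and inverting that map yields the paths between books that share a detail.

```python
# A sketch of the Small Demons idea as a data structure: books mapped to the
# persons, places, and things detected in them, then inverted so any shared
# detail becomes a path between books. The data is invented; in practice the
# entities came from automated extraction plus editorial cleanup.
from collections import defaultdict

book_entities = {
    "High Fidelity":      {"Little Red Corvette", "London", "Bruce Springsteen"},
    "Kafka on the Shore": {"Little Red Corvette", "Johnnie Walker", "Takamatsu"},
    "The Big Sleep":      {"Los Angeles", "Dewar's"},
}

entity_books = defaultdict(set)          # entity -> the books it appears in
for book, entities in book_entities.items():
    for entity in entities:
        entity_books[entity].add(book)

def paths_from(book):
    """List the details that lead from this book to another."""
    return [(book, entity, other)
            for entity in book_entities[book]
            for other in entity_books[entity] - {book}]

print(paths_from("High Fidelity"))
# -> [('High Fidelity', 'Little Red Corvette', 'Kafka on the Shore')]
```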

In this respect, what Small Demons envisioned is books referring not just to one another but to entire cultural tapestries, situating these narratives within and around all other narratives, actual and imagined. From a commercial standpoint, books transcend their ghetto; without abandoning their edges, they become permeable—which is in fact what they’ve always been. As such, the books become more truly themselves. As Rick Joyce, Chief Marketing Officer at Perseus, a consumer books company, likes to remark: “There are lots of books about shoes, but no shoes about books.” Books, by their very nature, contain worlds.

Now, how might Small Demons live on, not as a business but as a vision? During our existence, what became clear was that there was an intense appetite among some (though by no means all) of the people who visited the site to actively participate, not just marvel (or frown). All our data was generated in-house, via automated entity extraction and a small group of editors tweaking the results. Users wanted to add data, both stuff that the computers missed and stuff the computers couldn’t ascertain. We could tell you that Dewar’s appeared in a book, but not who drank it, or what the role of the whiskey-drinking was in the plot. Was the protagonist drowning his sorrows? Was it spiked? Did she order Glenmorangie and was told nope, all we’ve got is Dewar’s? And so forth. As Erin Walker wrote, books are props in people’s lives, and so are the details within books, and people want to share those details, just as they like to share the books that contain them.

So if we are going to create tools to foster and support that impulse, the key thing will be to build into the system, from the beginning, the ability for users to add, amend, clarify, correct, and connect the details they themselves see. We were not unaware of this need; we just didn’t move quickly enough to respond to it, and we ran out of resources before we could deploy those tools.

Further to this principle, this data—from both an output and an input standpoint—should live on the entire web, not just within the site or app. In other words, a read-write API. Again, this was something we were aware of, as there was a real appetite from web media companies large and small to integrate our data into their user experience, along with interest from libraries, from geo-location apps, from e-commerce retailers, and from textbook publishers. But we ran out of time, in part because we didn’t prioritize it early enough. From a revenue-generating standpoint, this appetite for the API is clearly a major opportunity, if not the major opportunity, and it would apply to either a for-profit or a nonprofit entity.

That said, if it were a nonprofit it would be particularly wise to be aware of the larger context of linked open data. In other words, it should play well with others. Just like books do.

Following the Path from Book to Book



The question I get asked most often by strangers when they find out what I do: “What should I read next?”

The question is asked eagerly, and yet we are supposed to have solved this problem by now through the power of algorithms that ingest reader habits and learn reader behaviors and deliver book recommendations precisely calibrated to sate reader hungers.

Are these algorithms giving me the kind of life-changing book recommendation that I have received from other readers from time to time?

Is technology helping readers find better paths from book to book, with fewer false starts and pitfalls and more transformative and transporting experiences along the way?

The best book recommendation engine is the knowledgeable clerk at a well-stocked, well-curated independent bookstore. To this recommender you verbally input the last few books you read and liked, and she outputs a title, physically handing you the book which you can buy and read alongside a cup of coffee in the café next door.

This recommendation engine has been replicated in the online space via the very low-tech Biblioracle, an occasional feature of the online magazine The Morning News (themorningnews.org). In this feature, author John Warner, the son of an independent bookstore owner, gives bespoke recommendations to online commenters. They input the last five titles they read and enjoyed; he spits out a recommendation. To this eye, his recommendations are quite good.


Like the real-world experience it replicates, however, it is not scalable.

The question that I get asked so fervently from time to time—“What should I read next?”—is surprisingly fraught. Books represent a large investment for readers in money and especially time and emotional energy. Acquiring a book and investing the time to read 25 or 50 or 100 pages only to cast it aside is a souring experience, maybe enough to sour certain readers on reading entirely.

The stakes are high.

Part of Amazon’s business model hinges on the notion that it can mine your behavior to suggest products—for our purpose, books—that you will like and want to read.

In the real world space, this function is served by the “featured” front table in the bookstore, or by the books face-out on the shelves.

But these efforts are laden with commercial conflicts that seem bound to get in the way of providing a useful recommendation.

Publishers and bookstores engage in “cooperative advertising” by which publishers pay bookstores to secure prime shelf space and placement on front tables.

Amazon engages in similar practices, with promotion in its online bookstore often contingent on payments from publishers. Whether or not these considerations come into play with regard to Amazon’s book recommendations, they are opaque to the reader, and a temptation to push books or categories based on outside factors is undoubtedly strong.


Amazon’s recommendations are also curious in that they are, by default, based on what readers have bought and not necessarily what they have read and loved.

What should a recommendation engine strive to do?

  • Be transparent
  • Ignore retail considerations
  • Base recommendations on a reader’s reading habits
  • Seek clues to what factors might make a reader enjoy a book that they wouldn’t otherwise pick up

Neither a human nor an algorithm can meet these requirements perfectly, but a human is better suited to grasp the intangibles in play.

So what can algorithms strive to do?

Cataloging sites like Goodreads and LibraryThing seem best placed. These sites give the reader control over which books they catalog and therefore which books are the basis for the recommendations. They also do not have an explicit retail function (though Goodreads is now owned by Amazon), hopefully lessening the possibility of conflicts of interest.
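As a sketch of the kind of signal a cataloging site can draw on (tiny invented shelves, and a bare-bones co-occurrence count rather than any site’s actual algorithm): books that keep appearing on the same shelves as the ones you have loved become candidates for what to read next.

```python
# A bare-bones "readers who shelved X also shelved Y" recommender built on
# co-occurrence counts across users' catalogs. The shelves are invented and
# the method is far simpler than what any cataloging site actually runs.
from collections import Counter

shelves = {
    "alice":  {"Middlemarch", "Wolf Hall", "Gilead"},
    "ben":    {"Wolf Hall", "Bring Up the Bodies", "Gilead"},
    "carmen": {"Middlemarch", "Gilead", "Housekeeping"},
}

def recommend(my_books, top_n=3):
    counts = Counter()
    for shelf in shelves.values():
        if shelf & my_books:                # this reader shares a book with me
            for book in shelf - my_books:   # tally the books they have that I don't
                counts[book] += 1
    return [book for book, _ in counts.most_common(top_n)]

print(recommend({"Middlemarch", "Wolf Hall"}))
# -> e.g. ['Gilead', 'Bring Up the Bodies', 'Housekeeping']
```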


But the human element shouldn’t be dismissed as unworkable in the digital era:

Book communities may hold the most promise. Like-minded readers can offer recommendations that have the human touch, while crowd-sourcing makes the process scalable.

These ideas may have to suffice until technology allows us each our own personal Biblioracle.

Proscenium and Thrust


The traditional mode of book publishing maps cleanly onto the dominant mode of theatrical performance of the late 19th century, one that has, with some exceptions, carried forth into the present day: a mode we could call Proscenium Realism. The proscenium first appeared in 1618 at the Farnese Theatre in Parma, Italy. However, it was not until the 19th century that it fully came into its own. It provided, quite literally, a frame for the performance, as if it were a photograph, or an aperture through which the audience could peer into some actual “real” scene unfolding before their eyes. The Realist playwrights of the time wanted to create a sense that there was no artifice, that life as actually lived was occurring before the audience’s eyes—the proscenium enabled that. The more fantastical performances, including opera and ballet, could benefit from the picture-window effect: the sense that the audience was witnessing a complete and total illusion, that of a painting come to life.

In both cases, of course, there was an elaborate apparatus undergirding the entire performance. Actors running off-stage to get props, gas and then electric lights dimming as night falls, trees moving on and off, angels being lowered by winches, all carefully hidden by the walls and (when necessary) by the curtain closing and reopening to a new vista no less real than the one that preceded it.

Meanwhile, the world of publishing had been building a machine not dissimilar from the apparatus for producing the theatrical illusion. Theatre has its playwrights, yes, but also stage managers, lighting designers, scenic artists, actors, and composers, its lights, its rigging, its costumes, its sleight-of-hand around forced perspective, the clacking of coconut shells mimicking the clip-clop of horse’s hooves, and so forth. So too with publishing, though in that black box the machine was a manufacturing and distribution apparatus. As with the theatre there were wordsmiths yes, authors yes, at the beginning, but also agents to help frame and contextualize the authors for the editors, editors to evaluate the authors, but also to ensure the author’s writing fit style guides that wouldn’t trip up the ultimate consumer with anachronisms and inconsistencies, designers to create covers to serve partly as images to represent the book, in the manner of classical architecture, but also to help sell the book, like the tried-and-true maneuvers of the strip tease, showing a little of what’s there but suggesting that more, oh so much more is to come. Sales reps, whether the door-to-door snake oil peddlers of the 19th century selling subscriptions out of a bag, or the 20th century model of showing up at the retailers persuading them to stock that publisher’s inventory. Then too over the course of the second half of the century, all the innovations around distribution, often using computing power of the mainframe and PC variety—just-in-time inventory, demand forecasting, tighter product cycles, granular sales data.

And the writers and readers, at opposite ends of the supply chain, in a strict producer-consumer relationship, stand at either end of the machine, marveling at its mysterious processes as it selects a handful of writers and magically transforms them into bestsellers, and as it consigns readers to gape slack-jawed at its marvelous outputs and then rush, after it is all over and the magical words THE END are read, to the stage door, where they hope to catch a glimpse of the creator before s/he is hustled to a waiting car.

In the theatre, along comes Brecht to rip down the curtain. While that is, and what follows is, a radical simplification of very complex processes, Brecht, for reasons combining the political and the aesthetic, proposed to blow up the entire architecture of illusion and realism, to show how things are actually made, to show why things were the way they were. Stagehands walked around, handed props to people, brandished the coconut shells, proudly ate the banana whose peel would be dropped just in time for the actor to slip on it; actors changed costumes in full view; the lights turned around, no longer mimicking the dawn rising, instead turned onto the audience, now suddenly busted for being Peeping Toms. The means of production had been laid bare.

And to book publishing now enters the Internet, stage left, stage right, stage center. The fluorescent lights now turned on. Freelance cover designers now available as guns-for-hire since everyone has Photoshop now, agents trawling Wattpad for the popular writers, forums discussing royalty structures, kerning and leading. Short-run digital printing via Lulu, Lightning Source, Blurb, CreateSpace. Retail access via Amazon and Amazon and Amazon and Amazon. Tablets, phones, and E Ink devices rendering almost all the foregoing optional. The bride stripped bare by her bachelors, even. The world is now the stage.