Audiobooks are Books, Too

Every so often, some iteration of the same debate pops up somewhere on the internet: does listening to audiobooks really count as reading?

Predictably, this takes the form of one person calling out another because they didn’t really read a book, they just listened to it. Audiobooks, in other words, don’t “count” in the same way print books do.

Now, I’ve interacted with the written word in a lot of ways—eyes on print, fingers on Braille, and audio read by humans and synthesized by text-to-speech. So I have some thoughts about this. I wrote them out in a massive Twitter thread, and a few people asked me to publish it here as well. Here it is (slightly cleaned up and with links to relevant resources).



First Things: Words and their Jobs

First, let’s take a brief foray into linguistics and acknowledge that words do not have inherent meanings. People use and combine words to create and communicate meaning with others.

As semanticists say, “Words don’t mean. People mean.”

And in different contexts, people use words to mean different things. For the verb “to read,” there are three relevant usages to consider:

  1. reading as an ability
  2. reading as an activity
  3. reading as an accomplishment

To understand the differences and why they matter, we have to think historically. As time passes, culture, technology, and lifestyles change in ways that create new communicative needs. Most of the time, these needs are met not by inventing wholly new words, but by adapting pre-existing words by analogy. The process by which this happens is seldom reasoned or systematic, tending rather to be intuitive and incidental.

Tactile writing is only two hundred years old and audiobooks are less than ninety, so it shouldn’t be surprising that our language has not fully adapted to their use. We are recycling the language of older technologies—spoken language and visual writing—to describe these new things and the ways we use them.

In the case of Braille and other tactile writing systems, the analogy with visual writing was clear and straightforward. Both used characters in a sequence to represent language across a page or other flat surface, and both were stable over time. Thus, the adoption of “reading” and “writing” language presented few problems outside of very technical contexts.

(Note—I don’t know if there were debates in the 1800s over whether the verb “to read” could be legitimately applied to Braille. If there were, that would be super interesting and I’d love to see them. In either case, reading quickly became the dominant way of talking about consuming Braille.)

Controversy over audiobooks, I think, stems from uncertainty over which pre-existing technology they should be analogized to: printed texts or spoken language. The format is auditory, and thus resembles speech, but books, magazines, newspapers, signs, menus, etc., are understood as essentially textual entities, which are read.

So in our language, do we privilege the format and delivery method, or the original/essential nature of the content?

The problem is different in each of the three usages of the verb “to read,” because each at its heart is trying to convey different information. Let’s consider each in turn:

1) Reading as an ability

Basically, this answers the question “can you read?” In other words, if presented with a given physical object containing text, will you be able to decode its meaning?

There’s a lot to unpack about reading as an ability, but I’m not going to do it here. In this context, I think it’s safe to say that if you cannot read at least one print or tactile script in at least one language, you should not say you can read.

However, that doesn’t get to the heart of the debate or the ways people use the reading/listening distinction to flex on each other.

2) Reading as an activity

This answers the question “what are you doing?” Consider four answers:

  1. “I’m reading a book.”
  2. “I’m listening to a book.”
  3. “I’m listening to an audiobook.”
  4. “I’m reading an audiobook.”

If we imagine ourselves as sticklers who insist that print and audiobooks are so different that they require different verbs, then only the first and third answers make any sense at all. (I mean, I suppose I could press my ear to my paperback copy of War and Peace, but I won’t get much out of it.)

Now consider another scenario. I am pointing my phone at a large sign. I have the Seeing AI app up and it’s reading any text that comes into my camera’s view. You ask “what are you doing?” Two possible answers:

  1. “I am trying to see what that sign says.”
  2. “I am trying to read that sign.”

Now, you and I the imaginary sticklers know that both of these are absurd. I am not reading, in a literal sense, nor will I ever truly *see* what it says. What I should say is “I am pointing my phone at that sign so it can feed the image into an optical character recognition engine, then translate the results into sound using text-to-speech software, so that I can apprehend the information encoded on its surface.”

But the point of the first two answers is not to communicate the sum of their words. They are trying to communicate a more general point: I am trying to get the information from that sign into my head using a newfangled kind of technological mediation.

There are times when we can all turn off our inner literalists and realize that “reading” can be shorthand for getting textual information from a physical object into our heads.

So let’s not be sticklers, ok?

Of course, there may be times when it is important to specify the exact mode and method we used to apprehend some bit of text. This should be done to prevent or correct misunderstandings, but it applies equally to Braille and print.

For example, if a sighted someone asks to borrow your copy of 20,000 Leagues Under the Sea, you might disappoint them by saying you have it on audio, but they would probably not be thrilled if you dropped off ten massive Braille volumes, either.

And that leads to usage number 3.

3) Reading as an accomplishment

This is where it gets real, because this is where people start adding value judgments and putting each other down.

The pertinent question here is “did you read X?”

I often hear people say things like “you didn’t actually read X, you just listened to it.” What’s the point of saying this? It does more than maintain a procedural distinction, it establishes a hierarchy where reading is superior and listening is inferior. It implies that listening to a book is not an accomplishment in the same way that reading it visually or tactilely is. In some sense, it doesn’t “count.”

The foundations of this hierarchy lie in cultural notions regarding the types of material that are usually conveyed in written and oral form and the relative merits of each. Books, especially, are prestige objects because of their historical associations with education and class privilege. Historical roots notwithstanding, though, is this hierarchy justified? Is there any inherent superiority to reading words from a page by eye or finger as opposed to hearing them read or synthesized into speech?

It depends on our goals. In my research, I use Braille for close reading (especially in non-English languages) and audio to work quickly through long articles and books. Keeping two lists in my head—one of things I have read in Braille and one of books I have listened to—would be untenable and pointless.

This is because the point of saying I have read something is to indicate that I have interacted with the information it contains and internalized it to some degree. If it could be demonstrated that comprehension and retention rates differed significantly between auditory and visual/tactile book input, then I could be convinced that we should insist on the terminological distinction. But they do not.

Neurological imaging studies reveal that listening to audiobooks activates the same cognitive and emotional regions of the brain as reading print, and tests of comprehension and retention do not show significant differences between audio and print consumption of text.

Additionally, a moment’s reflection reveals that not all visual or tactile reading leads to the same learning. Sometimes print and Braille reading are done with care and attention, and sometimes they are done while unwilling or distracted. I have learned a lot from reading print books and articles, and I have finished others and realized immediately that I could not tell you anything about what I had just read. The same can be said for audio reading. Most often, the returns we get from the time and energy we invest in reading have more to do with our focus and attention than with inherent qualities of the medium or modality.

To my mind, then, insisting on a value distinction between print/Braille and audio is baseless and counterproductive. The value of time spent reading is in the changes it makes to your base of knowledge and depth of thought. Neither of these results directly or necessarily from the part(s) of your body you use in the process. So as a flex? As a way to boost your own intellectual achievements and cast doubt on those of others? It doesn’t work and it doesn’t make sense.

To sum up, here are a few takeaways:

Should we learn Braille? YES. I hope nothing I’ve said here implies that I don’t think Braille is important. Learn Braille to the extent that you are physically and neurologically able, because it gives you the opportunity to interact with information in a greater variety of ways in a greater variety of circumstances. Even if all you can do is read bathroom signs and label your medications, that’s better than nothing. And if you gain the fluency to read whole books? Go to town!

But should we enforce the distinction between Braille and audio, relegating audio always to second place? NO. Indulge your curiosity. Read widely in whatever medium is most accessible to you. Expand your perspective with print, Braille, audio, whatever. Don’t be discouraged and don’t be held back. Read read read read read!

And come on, people, if someone says they read a book and you KNOW they listened to the audio, don’t call them out or “correct” them on it. There’s no point to it and it’s not a good look.

Basically, be as precise as you want but don’t try to prove Braille is important by denigrating audio.

Braille is important.

Audio is important.

Nitpicking each other’s language to enforce a baseless distinction between the two is not.

My Braille Toolbox: Epilogue


Building Braille: The History of Braille, and Where Design is Taking it Next

My friend and accessibility maestro Jennifer Sutton brought this article from Print Magazine to my attention yesterday, and it seemed like a fitting epilogue to my Braille Toolbox series (which starts here). People are doing more innovative and creative things with Braille than I knew or imagined!

My Braille Toolbox Part 4: What’s Next?

Well, we’ve reached the end of my small Braille toolbox (see Parts One, Two, and Three of this series, if you haven’t already), but the fun isn’t over! I still anticipate needing a few other Braille gadgets, and the inventions and innovations that are popping up everywhere in the Braille world make the future look very exciting.

What’s Next for Me?

The next item down my Braille wishlist is an embosser—basically the Braille version of a regular printer.

A product photo of the Juliet 120 embosser, from the front.

This connects to a computer and embosses any text file in hard-copy Braille. It cuts down the time involved in making Braille, since you can type and edit on the computer with a QWERTY keyboard, make multiple copies, etc. Since my work involves a lot of comparing texts side-by-side, being able to print them out instead of switching back and forth on a Braille display will be a huge time-saver.

There are a couple of variables to consider when choosing a Braille embosser. Some only emboss on one side of the paper; others emboss both sides, staggering the lines of dots so they don’t interfere with one another. Some only do Braille text, while others specialize in tactile graphics, and some do both.

These machines tend to be expensive, ranging from $2,000 to about $7,000 for personal embossers (industrial embossers can run $50,000 or more), so knowing what you want is critical. I’m very interested in trying to use tactile graphics to represent cuneiform texts, so that I can still read them in the original sign system, rather than relying on transliteration. I also anticipate a high volume of embossing, so double-sided would be very nice.

My current dream machine is the Juliet 120, from Humanware. It quickly embosses double-sided Braille and comes with tactile graphics software. Do you have a Braille embosser you love and think I should consider? Tell me about it in the comments!

What is the Future of Braille Tech?

A product photo of the new BLITAB Braille tablet.

Like everything in tech right now, there’s a lot of innovation happening in accessibility. For Braille displays, it looks like devices are going to get better, more diverse, and much cheaper in coming years. 

Humanware has created a Braille display/tablet hybrid, the BrailleNote Touch, which has a Braille display and traditional keyboard, as well as a touch screen interface that runs on Android.

A number of companies now produce multiline Braille displays, including Canute from Bristol Braille Technologies and the TACTIS100 from Tactisplay Corp.

These two are primarily for desktop use, but the race is on to produce the first Braille tablet/ebook—a standalone, full-page Braille display that is light and durable enough to be truly portable.

The first one to market will probably be BLITAB. This tablet is being developed by an international team in Austria, and it’s being intentionally designed for a worldwide user base, so it should handle multiple languages easily. The pins are raised and lowered by smart materials instead of mechanical actuators, which increases its durability and decreases its complexity and weight. It looks like BLITAB is now available for preorder, and will ship later this year!

Another company working on Braille tablets is Dot, which is already getting quite a bit of good press for their Braille smartwatch, the Dot Watch, which displays not only the time, but text messages and alerts from your phone. Once the Dot Watch ships (starting April 1), they will shift their R&D energy to developing two Braille tablets, the Dot Mini and the Dot Tab. 

There are rumors of other technologies in development, too, like rotary Braille displays that have the cells set on the edge of a rotating disc. This way, you could read continuously without even having to move your finger. 

I’m glad I chose to invest in a mature technology this time, because most of the next-generation Braille tech will need a few years to iron the kinks out, but I’m very excited about the amount of innovation and improvement that is happening.

My Braille Toolbox Part 3: The Refreshable Braille Display

(This is Part Three of a Series. Here’s Part One and Part Two)

Ok, we’re skipping a few historical developments here, but this is my beautiful new Braille display, the VarioUltra 20, by BAUM.

A photo showing the entirety of my new VarioUltra, out of its leather case and at a slanty angle. 

The current generation of Braille displays like the VarioUltra combine the functionality of two earlier pieces of technology, the Braille notetaker and the refreshable Braille display. They can function as either independent PDAs or as displays for a phone or computer.

The Hardware

The display portion is a single line containing, in this case, 20 Braille cells. There is also a VarioUltra 40, with 40 cells, and other displays range from 14 to 80 cells in length.

These things are truly mechanical marvels. The tiny nylon pins that make the Braille dots are only spaced 2.2 mm apart, and they must all be able to raise and lower independently. Each of the 160 dots on this display is connected to its own lever, which is raised up and down by a crystal that expands under electrical current and contracts when it is removed. They refresh in a fraction of a second—much less than the time that it takes to move your finger from the end of the line back to the beginning. And though they be small, they must be reliable and durable enough to be read hundreds of thousands of times.

The interface is entirely tactile, and the device is simply rife with buttons to navigate menus and files, enter and edit text, and manage physical and wireless connections.

At the top of the unit, there is an eight-key keyboard that is analogous to the six-key keyboard on the Perkins Brailler. Each key corresponds to one dot. Below that is the row of Braille cells, each of which has a small button above it used for cursor routing and text manipulation. On either side of the Braille display are three buttons used to navigate whatever text, file, or website you are reading. The bottommost row contains a little joystick they call a NaviStick, used to navigate the operating system, four system keys, and two space bars.


In independent notetaker mode, the VarioUltra has its own OS with a suite of productivity apps: a text reader and word processor, PDF viewer, spreadsheet viewer, calculator, etc.  

I can store files on there, from notes and handouts to whole books. It’s finally going to make Braille portable for me in a real and useful way. I mean, if you wanted to take a book to the park or coffeeshop to read, what would you rather carry?

Photo of a stack of large Braille volumes, my library loan of Twenty Thousand Leagues Under the Sea, next to my sleek little VarioUltra.

When it’s hooked up to an iPhone, I can use it to read my email, articles, websites, Facebook, Twitter, and any other accessible material that’s available.

Since this is my first Braille display, I anticipate a steep learning curve. To be honest, I haven’t even turned it on yet. This isn’t the intuitive, easy-to-pick-up-and-start-using technology we’ve gotten so used to. It’s the kind of technology where you read all the documentation before you even get started, and it still takes a while to get up to speed. With the crazy week I had, I just haven’t had that kind of time. That’s what the next few days are for.

I’m excited to get to know this device. I’m excited to carry Braille with me, to be able to read and work quietly again, and to get better and faster at reading Braille because I’m using it more and using it more seriously. I’ll keep you posted on how it goes!

My Braille Toolbox Part 2: The Perkins Brailler

(This post is part 2 in a series. See Part One here.) 

The second Braille gadget I acquired was the legendary Perkins Brailler. My case worker at the Department of Rehabilitation had an extra one lying around the warehouse, so he told me to take it home and see if I liked it. I wasn’t interested at first, thinking I would be getting some higher tech gizmos soon, and this one would just end up on a shelf. I’m glad I gave it a try, because it is a cool little machine and it has been a lot of fun to play around with.

So what exactly is a Brailler? Essentially, it is a manual Braille typewriter.

A photo of my Perkins Brailler from the front side.

Mechanical Braillewriters first appeared in 1892, but before the invention of the Perkins Brailler, they had all been expensive, fragile, and unreliable. When Dr. Gabriel Farrell took over as head of the Perkins School for the Blind in 1931, he determined to create a portable, durable, and inexpensive Brailler. He commissioned David Abraham, a woodworker who taught manual trades at the school, to design and engineer the project. The venture was funded by the Perkins School’s subsidiary, Howe Press, at a substantial financial risk. By the time the first run of units was produced, Howe Press had expended more than half of its capital on the project. 

Mr. Abraham spent around 15 years engineering the Perkins Brailler, and his skill and perfectionism showed. After its release in 1951, the company could barely keep up with demand. It was quite simply the best and most reliable Brailler on the market, and set the standard for other Braillers to meet for fifty years. Now there is a new and improved model, but I have one of the trusty originals.

Note that the keyboard only has nine keys. The six main keys, three per side, each emboss one of the six dots in a Braille cell. The inner two correspond to the top row, the next ones out to the middle row, and the furthest out to the bottom row. The raised button on the left is a line break, the one on the right is a backspace, and the middle button is space. The carriage must be returned manually to the beginning after every line break, using that swoopy piece of plastic just above the keyboard.

 Since each main key controls only one dot, every character must be typed by pressing the correct combination of keys simultaneously. If you miss a dot, you can use the backspace key to set the carriage over the previous cell and easily add it in. It’s not nearly as fast as typing on a QWERTY keyboard, but it is much, much faster than the slate and stylus. That page of notes that took me two hours before? Now I could type it in five or ten minutes.  
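Incidentally, this chord-to-character logic maps neatly onto Unicode’s Braille Patterns block, where each of the six dots corresponds to one bit. Here’s a small sketch in Python (the function name is mine, but the dot-to-bit mapping is the standard one):

```python
# In the Unicode Braille Patterns block (U+2800-U+28FF), each dot of the
# cell is one bit: dot 1 -> bit 0, dot 2 -> bit 1, ..., dot 6 -> bit 5.
BRAILLE_BASE = 0x2800

def chord_to_char(dots):
    """Turn a chord (the set of dot keys pressed at once) into a character."""
    bits = 0
    for dot in dots:
        if not 1 <= dot <= 6:
            raise ValueError(f"no such dot: {dot}")
        bits |= 1 << (dot - 1)
    return chr(BRAILLE_BASE + bits)

# 'a' is dot 1; 'b' is dots 1-2; 'l' is dots 1-2-3.
print(chord_to_char({1}))        # ⠁
print(chord_to_char({1, 2}))     # ⠃
print(chord_to_char({1, 2, 3}))  # ⠇
```

Pressing all the right keys at once really is all there is to it—the machine just embosses whichever of the six dots you asked for.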

On top of that, it’s just plain fun to use. You can feed most kinds of paper into it, which means you can emboss on blank paper or add Braille to just about any printed page, card, gift tag, etc. Since I got my hands on it, I’ve been Brailling everything in sight. 

A photo of some gift tags I Brailled for Christmas presents.

The Upshot: This is still not the apotheosis of Braille technology, but its simplicity and versatility mean that I will probably always find uses for it.



Jan Seymour-Ford. “History of the Perkins Brailler.” Outlook for the Blind (Nov. 2009).

My Braille Toolbox: A Guided Trip through Braille-Writing History

A close up image of the VarioUltra 20 from the front.

My new Bluetooth Braille display finally arrived in the mail!

It has been on back order since December, and I’ve had these long weeks of waiting to think about Braille writing and how the technology has evolved over time. I have a few other Braille gadgets, and I realized my acquisitions had unintentionally imitated the course of Braille-writing technology.

So this week I am going to share a bit about the tools I have and how useful they are. I don’t have something from every stage in the development of Braille tech, but it will be enough to give you a general idea.

Let’s start at the very beginning.

The Slate and Stylus

Welcome to the nineteenth century! This simple tool was invented even before Braille. Napoleon wanted a way for his armies to communicate at night, without light or sound, so he commissioned a guy named Charles Barbier to create a writing system that could be read without any light. Barbier had the idea of using fingers to read raised dots and lines. He invented a system and the slate and stylus to write it. His system was too complicated and never caught on, but Louis Braille learned about it a few decades later, and simplified it to create the six-dot Braille system we use today.

I got my slate and stylus last August from the Lighthouse for the Blind in San Francisco. I picked it up on a whim, because I wanted practice using my newly acquired Braille and this was the cheapest and simplest way to get started.

An image of my hand holding a Braille stylus and pressing it into the back of a slate with a piece of paper inside.

It works kind of like a stencil. The slate is a hinged piece of metal that clamps onto a sheet of paper. It provides a template that ensures the exact spacing necessary to create readable Braille. To write, you have to press the stylus, a blunt awl, into the appropriate guide holes.

One dot at a time.


That’s right, backwards. Because you’re poking the dots in from the back to raise them on the front side, you have to write every line and every cell in the wrong direction, like writing in a mirror.

It takes forever.

And then you flip it over to see how many mistakes you made.

An image of my hand lifting up the front of the slate to reveal the sentence

The problem is, since Braille cells have two columns of dots, almost every character is the mirror image of another one. If you aren’t paying attention and forget to flip them, you end up with ‘i’ instead of ‘e’ or ‘z’ instead of ‘and.’ One time I was making a sheet of notes. It took me more than two hours, and when I was done it was filled with typos (Braille-os? stylos? I don’t know).
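The mirror-image trap is easy to state precisely: flipping a cell left-to-right swaps dot 1 with 4, dot 2 with 5, and dot 3 with 6. A quick sketch using Unicode Braille characters (the function name is mine) shows why ‘e’ and ‘i’ are so easy to confuse:

```python
def mirror(ch):
    """Flip a six-dot Braille character left-to-right: 1<->4, 2<->5, 3<->6."""
    bits = ord(ch) - 0x2800     # dots 1-6 live in bits 0-5
    left = bits & 0b000111      # dots 1, 2, 3 (left column)
    right = bits & 0b111000     # dots 4, 5, 6 (right column)
    return chr(0x2800 | (left << 3) | (right >> 3))

# 'e' is dots 1-5 (⠑); its mirror is dots 2-4 (⠊), which reads as 'i'.
print(mirror("\u2811"))  # ⠊
# A symmetrical cell like 'c' (dots 1-4, ⠉) is its own mirror image.
print(mirror("\u2809"))  # ⠉
```

Only the handful of symmetrical cells are safe; everything else silently becomes some other letter if you forget to flip it.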

You can get a better sense of the process by using this neat Slate and Stylus Simulator I found.

The Upshot: it’s better than nothing, but barely.

My Quest for the Perfect Word Processor: Act Two

Photograph of a winding path through a dark forest. This is a quest, after all.

In Act One of this epic tale, our hero had fallen on dark days. Forced away from Mellel, his comfortable word-processing home, he began to wander the land seeking new possibilities and brighter horizons.

Now we see him revisiting familiar territory. Microsoft Word for Mac 2011 is already installed on his machine, after all. But it too offers only disappointment. Hazardous to navigate and full of unmarked and unlabeled dangers, it is a VoiceOver nightmare. 

He considers other options: Pages, VoiceDream Writer. These are friendly and accessible, but nowhere near full-featured enough for a dissertation. He falls to using TextEdit—at least it works well with VoiceOver. Perhaps he will write his whole dissertation in plain text and typeset it with LaTeX. But of course this is absurd. Navigating a document as long as a dissertation in plain text would be next to impossible. Plus he would have to learn LaTeX, so…

And then at last, on the verge of despair, he finds hope. There is a new version of Microsoft Word for Mac, and it has been substantially rebuilt and reconfigured. Word has always had features galore, of course, and is capable of handling large projects like books and dissertations. In the new 2016 version, the development team has increased VoiceOver compatibility and improved support for Hebrew (as long as the Hebrew keyboard is used). 

Almost all of the buttons, tabs, and menus are clearly labeled for VoiceOver, and navigating the interface is relatively easy. Setting VoiceOver Hotspots for the ribbon and main text pane makes it even more painless. The only problem with this is that the Hotspots for the ribbon are document-specific, so if you have two documents open at the same time, you have to make sure you go to the correct ribbon. 

Navigating long documents can also be cumbersome. You can navigate by page or line, but it would be very useful to be able to navigate within your document structure. The VoiceOver rotor could come in handy here, connecting the headings menu to document headings and allowing users to skip back and forth that way.

The biggest bug in Word for Mac 2016 comes when documents get long and cover multiple pages. If you make changes to early pages in the document that affect later pages, VoiceOver can get confused about what it should be reading. When you use the “Read Line” or “Read Paragraph” commands, it will read the wrong line or paragraph, or start or stop too early. When this happens, closing and reopening the document solves the problem. It is not insurmountable, but it does get very tedious.

Track Changes and Comments—two critical tools in academic work—are also difficult to use, but these are acknowledged issues that Word is working to improve.

So our hero takes up this tool, imperfect though it is, and sets his hand to the work. But his vigilance remains constant, and from afar he hears rumours of a new kind of tool: a powerful writing suite with deep VoiceOver compatibility. Tune in next time, brave readers, as our hero encounters…the Scrivener.


(This epic post reviewed MS Word for Mac 2016 Version 15.24. Any subsequent improvements to accessibility in later versions are not covered.)