
How much listening is too much?

– Michele

Listening is hard work. At the end of a long day of meetings I’m exhausted. When I share this with my hearing colleagues they’ll say “Oh, I know—me too!” But is it the same? Really? 

Studies have shown that users of hearing aids like me, who rely on speech reading along with amplification, experience listening fatigue at much higher rates than hearing people (e.g., Bess and Hornsby, 2014). We are working much harder than everyone around us to piece things together and make sense of what we are able to hear. Most listening fatigue studies are on school-aged children, and the few studies of adults show that “Adults with hearing loss require more time to recover from fatigue after work, and have more work absences” (Hornsby et al., 2016). As academics, our jobs require us to listen to others all the time—in our classes, in faculty meetings, in seminars, and when meeting with students. How do we recognize cognitive fatigue due to too much listening and mitigate this fatigue so that we can manage our work responsibilities? This is a tremendous challenge for deaf/HoH academics, and The Mind Hears will explore this topic in several blog posts.

In this post I share how I figured out my daily listening limit, which turns out to be 3 hours with good amplification and clear speech reading. For many years, I pushed through my day not paying attention to how much time I was spending in meetings and classes. Some days I felt okay while other days I ended up utterly exhausted. The kind of exhausted where I can’t track conversation and even have trouble putting my own sentences together. When this happens, I can’t converse with my family, and exercise class is out of the question because I can’t follow the instructor. I just take my hearing aids out and lie on the floor with the dog—I don’t need to speech read him and he gets me. Yay dogs!

When I explain my listening fatigue to non-native English speakers, they get it right away. They recognize that this listening fatigue is just like when they first moved to a country with a new language; while they had good command of the new language, following it all day exhausted them. Exactly! Except I’m not going to get any better at my native language.

After a while—actually a really long while because for many years I tried to work as if I was a hearing person due to internalized ableism, which really is a whole different blog topic—and now this sentence has really gotten off track so I’m going to start over. After a while, I started to realize that for my own health I needed to avoid becoming so exhausted that several times a week, I could only commune with the dog.

It turns out that my fancy new Garmin watch that tells me to “MOVE” every hour also detects my stress level. The image at left is from a day at a conference. All I did that day was sit in one room listening to talks with occasional breaks for coffee and meals. My heart rate stayed elevated all day due to the work of following the conversation and the anxiety of constantly deciding whether I should ask for clarification on something I may have missed or just let it go. When even my watch is telling me ‘enough is enough’ or, more specifically, “You’ve had very few restful moments on this day. Remember to slow down and relax to keep yourself going”, it might be time to figure out how much listening is too much.

So last February I tracked both my hours each day spent listening and my evening exhaustion level in my bullet journal. 

Actually, I didn’t track this much detail—I just made marks in my bullet journal for each hour and then noted whether this was manageable. Below are two example pages. For the day on the left, the 3 Xs represent 3 hours of listening and this was an OK day. The image on the right is from another day that month. The horizontal line below the Xs means that I was on the floor with the dog that evening after 5 hours of listening. 

Yes, I know that my handwriting is messy and I tend to kick a lot of tasks to the next day. But this blog post is not about my untidiness and unreliability. What I learned from this exercise was that any day including more than 3 hours of listening would be a tough, unmanageable day. Armed with this knowledge, I could start to rearrange my schedule to avoid having days with more than 3 hours of listening.

Interestingly, this goes against the advice that many academics give each other. Early career researchers are encouraged to push all meetings to one day so that you have a day free for research. This is great advice… for a hearing person. Many of us who are deaf/HoH may do better with two free mornings a week rather than one full day, so that no single day is overloaded with listening.

So how successful have I been? Moderately. While I have control over some aspects of my schedule, I don’t over others. I schedule my one-on-one meetings with my research assistants on days that I don’t have a lot of other meetings. If I’m teaching a 3-hour lab, sometimes it’s just impossible for me to have no other teaching or meetings that day. But I am considering restructuring my lab activities so that I don’t need to be ‘on’ the whole time. I’ve also started talking with my department head about my effort to limit my daily meetings; this involves educating him on why listening fatigue is different for me than for hearing faculty. Had I been more savvy, I might have negotiated a listening limit when I was hired. Take note of this, future academics! 

I’m still sorting out how to manage my day and eager to learn more from others on how they successfully manage listening fatigue. As I mentioned at the start of this post, The Mind Hears wants to have a series of posts about listening fatigue. Tell us how this fatigue has affected your workday and your health. What solutions have you found?

References cited

  • Bess, F.H., & Hornsby, B.W. (2014). Commentary: Listening can be exhausting—Fatigue in children and adults with hearing loss. Ear and Hearing, 35(6), 592.
  • Hornsby, B.W., Naylor, G., & Bess, F.H. (2016). A taxonomy of fatigue concepts and their relation to hearing loss. Ear and Hearing, 37(Suppl 1), 136S.

Traveling and Conferences: When Bacteria Have a Party

In my first post for The Mind Hears, I want to tell you a little about my background, then outline some strategies that I’ve found successful for traveling and attending conferences.

I have been a regular at my ENT’s (ear, nose, and throat doctor) office since I was young, getting new tubes, replacement tubes, removing cholesteatomas, and repairing perforated ear drums. On a good day, I have about 50% of normal hearing range—less if I have sinus or ear infections. Because I had my right ear completely reconstructed, I am unable to use any hearing aids effectively. Due to my upbringing in an impoverished rural town, I didn’t have access to speech therapy or options to learn American Sign Language. My hearing loss was never officially designated a disability, so I moved through most of my life trying to find creative ways to be successful at school or professionally. Now I wish I had spoken up more, but the aforementioned lack of resources and accommodations made it difficult.

Traveling is a necessity for (geo)scientists, whether for fieldwork, attending conferences, or networking with the scientific community. The quickest mode of transportation is air travel, with changing pressure and humidity that apparently have a big impact on my sinuses. I remember attending the American Geophysical Union (AGU) Fall Meeting in San Francisco as an undergraduate in 2006, overwhelmed by the size of the conference and harder of hearing than usual. I thought I happened to catch a cold and tried to communicate with my fellow scientists in loud poster sessions. I repeated this trip a few more times in graduate school and, sure enough, the bacteria in my sinuses decided to have a party that moved to my ears. We all have flora in our systems; mine just like to come unannounced and frequently. Later in graduate school, I traveled to Italy for fieldwork and found myself with (surprise!) a sinus infection. It was not fun being in a foreign country and being unable to communicate at all in the local language; in addition, the infections made communicating effectively with my own team difficult. Nevertheless, I powered through these situations.


Because of my experiences, I’ve found myself being more vocal about my needs; I’ve realized that I’m my best advocate. Here are some strategies that have helped me.

Medical help: I have built a great relationship with my ENT and we’ve developed a system for traveling that helps prevent weeks of sinus congestion and nearly complete deafness. I travel for my job too often to make visits to the ENT feasible prior to every trip, but occasional visits a few times a year help. Please note, this is my personal plan; please consult your physician. I take the steroid prednisone and prescription-strength Sudafed right before a flight—this medication regimen means I have a better chance of flying with limited, or even better, no sinus impacts. One downside to the medications, however, is that I’m sensitive to the steroid; I feel amped and often can’t sleep that first night if there are significant time zone changes—west coast to east coast in particular. This is not a minor downside; my reaction can make important meetings stressful. But the benefits far outweigh the cons. Since I’ve become a chronic sinus infection patient, normal courses of antibiotics no longer work on existing infections. Proactively heading off infections is my preference, since if I’m at a conference or meeting, I cannot wait the two weeks for the medications to work. Waiting would mean that I’d miss conferences with breakthrough discoveries and vital conversations. I don’t love that I have to depend on medication and deal with the side effects, but it helps me to be an active participant in conferences rather than a passive observer.

Communication tips:

  • Live-captioning platforms and apps are improving, and more conferences are starting to use them for talks and poster sessions.
  • Teleconferencing:
    • An example is InnoCaption, an app for both Android and iPhone that can be used for teleconferencing meetings. A federally administered fund pays for it, and you must register, as it enables a live stenographer to generate captions. It requires a 4G network or reliable Wi-Fi.
    • Another approach is using smarter work phones that can use programs such as CapTel to do live captioning. These are phones, such as the Cisco (model 8861), that do live captions during video calls. There are also applications such as Jabber that enable you to transfer captions to a computer screen for easier accessibility.
  • Traveling to foreign countries: Google Translate now has several offline dictionaries! Five years ago if you didn’t have Wi-Fi or data, you didn’t have Google Translate. But I recently used Google Translate successfully for Spanish! Google Translate is simple to use by talking into your smartphone—you can get good translations to or from English.


  • I find it helpful to sit up front in conference rooms, both to hear better and to be seen.
  • If I didn’t quite catch the presentation, I ask the speakers for business cards to get a copy of presentations or posters.
  • Depend on the conference moderators: Another technique to anticipate impaired hearing depends on the conference size and style. If I’m a speaker, I ask moderators in advance (via email) to repeat questions from the audience. This helps ensure that I understand the question and helps with accents. I’ve had mixed results—often there is no moderator to contact directly, which means I have to track down that individual in person before the session, which is a lot of work.
  • Normalize captions: The best way to normalize is to use Google Slides or captioned presentations for everyone all the time!

What tricks and tips do you use for communicating?

BIO: Circe Verba, PhD, is a research geologist, science communicator, and STEMinist at a government lab. She earned her doctorate in geology and civil engineering at the University of Oregon. Dr. Verba specializes in using advanced microscopy (petrography and electron microscopy) and microanalysis techniques to tackle challenges facing the safe and efficient use of fossil energy resources. Outside of the lab, Dr. Verba is an avid LEGO buff, even designing her own set featuring field geology and a petrographic laboratory.

Captions and Craptions for Academics


In recent years, to my delight, captions have been showing up in more and more places in the United States. While I’ve been using captioning on my home TV for decades, now I see open captioning on TVs in public places, many internet videos, and most recently, in academic presentations. Everyone benefits from good captioning, not just deaf/HoH or folks with an auditory processing disorder. Children and non-native English speakers, for example, can hone their English skills by reading captions in English. And nearly everyone has trouble making out some dialogue now and then. But not all captioning is the same. While one form of captioning may provide fabulous access for deaf/HoH, another is useless. To ensure that our materials optimize inclusion, we need to figure out how to invest in the former and avoid the latter.


To unpack this a bit, I’m going to distinguish between 4 types of captioning that I’ve had experience with: 1) captions, 2) CART (communication access real-time translation), 3) auto-craptions, and 4) real-time auto-captions with AI. The first two are human produced and the last two are computer produced.

Captions: Captions are word-for-word transcriptions of spoken material. Open captions are displayed automatically, while closed captions require the user to activate them (click the CC option on a TV). To make these, a human-produced script is added to the video as captions. Movies and scripted TV shows (i.e., not live shows) all use this method and the quality is usually quite good. In a perfect world, deaf/HoH academics (including students) would have access to captioning of this high quality all the time. Stop laughing. It could happen.

CART: This real-time captioning utilizes a stenotype-trained professional to transcribe the spoken material. Just like the court reporters who document court proceedings, a CART professional uses a coded keyboard (see image at right) to quickly enter phonemes that are matched against a vocabulary database to form words. The CART transcriptionist will modify the results as they go to ensure a quality product. While some CART transcriptionists work in person (same room as the speakers), others work remotely by using a microphone system to listen to the speakers. Without a doubt, in-person CART provides far better captioning quality than remote CART. In addition to better acoustics, the in-person service can better indicate when the speaker has changed, and transcriptionists can more easily ask for clarification when they haven’t understood a statement. As a cheaper alternative to CART, schools and universities sometimes use C-Print for lectures, where non-steno-trained transcribers capture the meaning but do not provide a word-for-word transcription. In professional settings, such as academic presentations, where specific word choice is important, CART offers far better results than C-Print but requires trained stenographers.

Some drawbacks of CART are that the transcription lags, so sometimes the speaker will ask “Any questions?” but I and other users can’t read this until the speaker is well into the next topic. Awkward, but eventually the group will get used to you butting in late. CART also can be challenging with technical words in academic settings. Optimally, all the technical vocabulary is pre-loaded, which involves sending material to the captionist ahead of time for the topics likely to be discussed. Easy-peasy? Not so fast!  For administrative meetings of over 10 people, I don’t always know in advance where the discussion will take us.  Like jazz musicians, academics enjoy straying from meeting agendas. For research presentations, most of us work on and tweak our talks up until our presentation. So getting advance access to materials for a departmental speaker can be… challenging.

Craptions: These are machine-produced auto-captions that use basic speech recognition software. Where can you find these less-than-ideal captions? Many YouTube videos and Skype use them. We call them ‘crap’tions because of the typical quality. Craptions can do an okay job if the language is clear and simple, but for academic settings, these auto-craptions with basic speech recognition software are pretty much useless.


The picture at right shows auto-craptions for a presentation at the 2018 American Geophysical Union conference about earthquakes. I know, right? Yes, the speaker was speaking in clear English… about earthquakes. The real crime of this situation is that I had requested CART ahead of time, and the conference’s ADA compliance subcontractor hired good quality professional transcriptionists. Then, the day before the conference, the CART professionals were told they were not needed. Of course, I didn’t know this and thought I was getting remote CART. By the time the craptions began showing up on my screen, it was too late to remedy the situation. No one that I talked with at the conference seemed to know anything about the decision to use craptions instead of CART; I learned all of this later directly from the CART professionals. The conference contractor figured that they could ‘save money’ by providing auto-craptions instead of CART. Because of this cost-saving measure, I was unable to get adequate captioning for the two sessions of particular interest to me and for which I had requested CART. From my previous post on FM systems, you may remember that all of my sessions at that conference were in the auxiliary building where the provided FM systems didn’t work. These screw-ups meant it was a lousy meeting for me. Five months have passed since the conference, and I’m still pretty steamed. Mine is but one story; I would wager that every deaf/HoH academic can tell you other stories about material being denied to them because quality captioning was judged too expensive.

Real-time auto-caption with AI: These new programs use cloud-based machine learning that goes far beyond the stand-alone basic speech recognition of craptioning software. The quality is pretty good and shows signs of continuous improvement. Google Slides and Microsoft Office 365 PowerPoint both have this functionality. Link to a video of Google Slides auto-caption in action. You need to have internet access to utilize the cloud-based machine learning of these systems. One of the best features is that the lag between the spoken word and the text is very short. I speak quickly and the caption is able to keep up with me. Before you start singing hallelujah, keep in mind that it is not perfect. Real-time auto-captioning cannot match the accuracy of captioning or CART for transcribing technical words. While it might get many of the words, if the captions miss one or two technical words in a sentence, then deaf/HoH still miss out. Nevertheless, many audience members will benefit, even with these missing words. So, we encourage all presenters to use real-time auto-caption for every presentation. However, if a deaf/HoH person requests CART, real-time auto-caption, even if it is pretty darn good, should never be offered as a cheaper substitution. Their accommodation requests should be honored.

An offshoot of real-time auto-caption with AI is the growing set of apps that work on your phone. Android phones now have a Google app (Live Transcribe) that utilizes the same speech recognition power used in Google Slides. Another app that works on multiple phone platforms is Ava. I don’t have an Android phone and have only tried Ava in a few situations. It seems to do okay if the phone is close to the speaker, which might work in small conversations but poses problems for meetings of more than 3 people or academic presentations. Yes, I could put my phone up by the speaker, but then I can’t see the captions. So yeah, no.

What are your experiences with accessing effective captions in academic settings? Have you used remote captioning with success? For example, I recently figured out that I can harness Google Slides real-time auto-caption to watch webinars by overlapping two browser windows. For the first time, I can understand a webinar. I’ve got a lot of webinars to catch up on! Tell us what has worked (or not worked) for you.

New Year’s Resolution: Make Your Workplace Accessible

The new year brings a fresh start to our lives; it’s a natural time to reflect on the year past and make plans for the coming year. For your new year, why not work towards making your academic workplace more accessible for your deaf/HoH colleagues? To help in this effort, we’ve assembled a list of guidelines that might improve your workplace’s inclusivity.

Ideas on this list come from a variety of sources but primarily our own experiences. Would you like to add to or revise the list? We welcome your comments and suggestions directly on the linked Google Doc. We will endeavor to update the list posted below as we collect more comments and suggestions on the Google Doc. If you find that you want to explore a topic in more detail, we encourage you to write a blog post for The Mind Hears—we will link your post to this list.

What can you do to improve the academic workplace for your deaf and hard-of-hearing colleagues?

Overarching philosophy: If a participant requests accommodation for a presentation or meeting, you can work with them to figure out the best solution. It may be signed interpreters (there are different kinds of signing), oral interpreters, CART (Communication Access Realtime Translation), or FM systems (assistive listening devices). It could be rearranging the room or modifying the way that the meeting is run. Keep in mind that what works for one deaf/HoH person may not work for another person with similar deafness. What works for someone in one situation may not work at all for that same person in another situation, even if the situations seem similar to you. The best solution will probably not be the first approach that you try, nor may it be the quickest or cheapest approach; it will be the one that allows your deaf and hard-of-hearing colleagues to participate fully and contribute to the discussion. Achieving an academic workplace accessible to deaf/HoH academics is a journey; we’ve assembled this list to capture just a few tools that can help us along the way.


Presentations (e.g. departmental seminars and conference talks)

  • Leave sufficient lights on in the room so that the speaker’s face and interpreters (if present) can be seen.
  • Have presenters use a microphone when it exists; do not let them assume they don’t need amplification. Ditto for audience questions.
    • Note: check that the microphone system works well before the presentation. A bad microphone system can be worse than none at all.
  • Have presenters repeat questions from the audience before answering.
  • If the presenter is deaf/HoH, the convener/host should be ready to repeat audience questions.
  • Encourage all presenters to use real-time auto-caption with Google Slides or Microsoft’s Presentation Translator add-on for PowerPoint (for Windows only at this point). At the end of January 2019, Microsoft Office 365 will include built-in real-time auto-caption. Our experience is that the AI with these programs far outperforms typical voice recognition software and has less lag than CART.
  • If speakers are using videos, encourage them either to turn on captioning for the videos (CC button, usually on lower right) or eliminate use of videos without captioning.
  • If CART services are provided for deaf/HoH participants, consider projecting the captioning onto a screen so that all in the room can benefit.
  • When available, use “looped” rooms for presentations (indicated by the symbol at right) that allow users of hearing aids and cochlear implants with telecoil functionality to access amplified sound directly.
    • In the UK and US (2010 update to Americans with Disabilities Act), loop systems are mandated by law for any public venues that have amplified sound (summary of US regulation). However, our experience in the US is that few universities have such rooms for meetings and departmental presentations. In contrast, some of us have noticed that in the UK virtually all public institutions (even grocery stores!), have loop systems, but they are almost never turned on. It may be wise to notify the hosts 2 or more days in advance to make sure the loop system is powered up and turned on.
    • Some hearing aids and cochlear implants may not transmit looped sound and ambient sound at the same time; so don’t bother chatting with your deaf/HoH neighbor during the main presentation!!

Meetings > 10 people (e.g. faculty meetings) 

  • Start the meeting with a communication check: “Is this communication setup working for everyone?”
  • As much as is possible with a large group, have all participants sit around a table or set of tables so that they face each other.
  • CART services can be helpful for meetings where multiple people are speaking. We have found that having a CART captionist in the room works better than working with a remote captionist; having several microphones in the room does not always provide clear access to the speakers for the remote captionist.
  • Having a microphone that is passed from speaker to speaker and transmits to a computer can help CART or one of the better-quality real-time auto-captioning programs.
  • An FM system (assistive listening device) can help for such meetings. Using the microphone/transmitter as a ‘talking stick’ ensures that all conversations are amplified. You can also place an omni-directional FM microphone (or, even better, two) in the center of the room to catch conversation around the group.
    • Note: Unlike looped rooms, FM systems work with specific hearing aids or cochlear implants, so if the meeting has more than one deaf/HoH person using FM, the technology issues can become complex.
  • If conversation devolves to rapid interjections, the discussion leader should rein in the conversation and recap what was discussed.
  • For quick conversations, signaling the next speaker, for example by raising a hand, can help deaf/HoH participants know where to look next for speech reading. Hearing aids and cochlear implants are notoriously bad for directionality of sound, and some of us only detect sound on one side. While interpreters strive to indicate who is talking throughout conversations, visual signaling can help us track the conversation.
  • The meeting organizer should check in periodically to ensure that the communication environment is working for everyone.
  • A written summary distributed afterwards to meeting participants can ensure that everyone has the same information.

Meetings < 10 people (e.g. committee meetings) 

A lot of the strategies for larger meetings also work for small meetings. The following notes are specifically for smaller groups.

  • Have participants sit in a circle, so that all faces are visible for speech reading. Conference rooms with long narrow tables can be challenging.
  • Use the smallest room possible to accommodate the size of the meeting.
  • Encourage meetings in rooms with minimized resonance; rooms with carpet and soundproof walls are better listening environments. Also, avoid rooms with a lot of external noise (e.g., busy roads or construction).
  • Use rooms with window treatments that can be adjusted to reduce glare so that speakers are not backlit.
  • Discourage people from talking over one another in meetings.
  • Check in periodically to ensure that the communication environment is working for everyone.
  • Deaf/HoH people are notoriously bad at catching jokes, as comments are typically made more quickly than we can track the conversation. It can be helpful to repeat jokes for your deaf/HoH neighbor.


One-on-one and small group conversations

  • Face people while conversing.
  • If the deaf/HoH person is using a signing or oral interpreter, direct all conversation to the deaf/HoH person, not the interpreter.
  • When you’re in a noisy room, you won’t be heard by speaking with cupped hands directly into the deaf/HoH person’s ears, because in this situation we can’t speech read your face. Similarly, we cannot understand whispering behind a cupped hand or into our ears. Come to think of it, whispering hardly ever works because the sound and speech reading information are so distorted—best avoided altogether.
  • Avoid covering your mouth. If you are chewing, please wait to speak until you are done chewing. Also avoid blocking visibility of your mouth with your cup at gatherings with coffee/tea.
  • If a deaf/HoH person asks for repetition, please repeat as closely as possible what you just said. Sometimes we hear part but not all. If you change up the words to reframe what you said, we are back to square one.
  • If a deaf/HoH person asks for repetition/clarification, never say “Oh, it’s not important.” This conveys that you don’t value their participation in the conversation. So even if you think your comment is not worth repeating, please repeat yourself to avoid excluding your colleague.
  • When a deaf/HoH person joins the conversation, it’s helpful to give them a little recap of the current topic of discussion.
  • Hearing aid and cochlear implant batteries go dead at the most inopportune times. Most of us go through one or more batteries per aid each week; the chances of this happening while we are in a presentation, meeting, or conversation are quite high. As we search for and replace the fiddly batteries, please keep in mind that we are missing out on the conversation.

Incidental conversations (e.g. passing in the hallway)  

  • When greeting your deaf/HoH colleagues in passing, give a wave. We may not hear a quiet greeting.
  • To get the attention of deaf/HoH colleagues, waving your hand where we can see it is more pleasant than being shouted at.
  • Not all deaf/HoH people wear hearing aids throughout their workday. Some of us enjoy periodically being able to take out our hearing aids or turn off our cochlear implants and focus in the quiet.
  • Our communication skills can vary with fatigue level. The cognitive fatigue of speech reading is taxing so that after a few hours of teaching and meetings with spoken conversation, we may avoid all conversations or switch from speaking to signing.
  • Not all deaf/HoH people speech read. Some of us rely on writing notes or using voice recognition apps on our mobile devices; you may be asked to communicate using modes unfamiliar to you. A motto of the deaf community: use whatever form of communication works.
  • Not all people are easy to speech read. People with facial hair and people who either don’t move their lips/face or over-enunciate can be very difficult to understand. Some of us hear high-pitched voices better than low, and some of us hear low voices better. For better or for worse, many of us avoid conversations with people we don’t understand even though they may be wonderful people. It is not rude to ask if you are easy to understand and how you could be better understood.

You may have noticed that all of these considerations not only increase access for deaf and hard-of-hearing colleagues but make these situations more inclusive for all participants, such as non-native English speakers. Some of these strategies ensure that the loudest in the group don’t monopolize conversation and allow space for less confident participants. If you make your workplace more accessible for your deaf and hard-of-hearing colleagues, you will make a more accessible workplace for everyone.

Other resources for deaf/HoH including papers about d/Deaf in academia

Contributors include: Michele Cooke, Ana Caicedo, Oliver Lamb, Wren Montgomery, Ryan Seslow, Megan Maxwell

Last revision date: 2 January 2019