
Navigating a masked world when you are deaf/HoH

-Ana

While the pandemic rages around the world, I know I have been incredibly lucky. Like many, I have struggled to keep my kids busy and to some degree engaged with their education, struggled to keep any semblance of work productivity, and struggled to remain optimistic about a return to a post-pandemic life that resembles my pre-pandemic one. However, I have been healthy, and nobody close to me has fallen sick. And—through the accident of timing—I have also experienced the pandemic in two geographic areas, one of which has thus far managed the coronavirus quite well (Germany), and one where I arrived once it was under control (Massachusetts, USA).

Definitely lucky.

And yet… There is a part of me that very much wants to throw a tantrum and howl at the moon about the unfairness of it all. All because of the need for face masks, which have greatly reduced my ability to communicate. 

In the last 4 months, face masks have emerged as the cheapest, most reliable method to stop the spread of COVID-19. We all have to wear them. And while all the deaf/hard of hearing (HoH) people I know are 100% behind mask wearing, many of us have been put in a bind. Navigating effective communication when out and about is never effortless for us. Lip-reading does not capture all spoken sounds, and there is a great cognitive load involved in filling the gaps to understand what is being said. Add masks, and communication with others becomes nearly impossible. 

To begin with, face masks make it very hard for those of us relying on speech- and lip-reading and on signed languages to understand speech.

This has been documented very eloquently in this article by Sara Nović for the Washington Post; in this interview of Gallaudet professor Dr. Julie Hochgesang; in this article by Shari Eberts for the “Living with Hearing Loss” blog; and in this post by Nehama Rogozen for Slate magazine.

And, despite the feel-good idea of face masks with clear “windows,” our communication travails aren’t likely to end any time soon, as explained by Katherine Woodcock (@safeandsilent) in this and this blog post. 

And, surprisingly, masks pose an unexpected hazard to our hearing devices.

Alt Text: A worried face trying alternative orders of putting on a behind-the-ear hearing aid, face mask, and glasses. Each time the objects end up in a tangled mess (many thanks to M. Cooke for help with the animation).

As a wearer of behind the ear (BTE) hearing aids and glasses—and now masks—I find that there are just too many things hanging from my ears. Trying to adjust or remove any of them leads to a tragi-comic (yes, I am still capable of laughing at myself as I nurse my tantrum) Rube Goldberg machine chain reaction that inevitably ends badly for at least one of my accessories. I derive some solace (and humor) from knowing I’m not the only one facing these issues: 

Alt text: In the first frame the face of a person wearing a behind-the-ear hearing aid, face mask, and glasses celebrates that everything is on correctly and they can go out. In the second frame the glasses have fogged up.

But it is a pyrrhic sort of consolation. Inevitably I find that the effort of trying to navigate a masked world becomes too laborious, leading to a temptation to disengage and isolate. I want the world to beat COVID-19; I also want to not be cut off from the world. On the worst days it seems neither is possible.

Many of us are struggling to come up with solutions for this conundrum wrapped in a mask. Suggestions of relying on pen and paper or speech-to-text apps are helpful for short interactions, but I see friends starting to cautiously socialize in masks, an activity I feel cut out of. While I know that there is likely not a one-size-fits-all solution for those of us who are deaf/HoH, I would love to collect suggestions on how to be a part of the masked world.

I leave you with some parting words from Sara Nović to hearing people:

“The burden of communication has never been solely on deaf people. The pandemic has simply unmasked the fact that we usually do most of the work for you. Now that we physically can’t, we need you to do your part.”

Sudden Remote Teaching – Deaf/HoH

-Ryan

Here we are, navigating our 5th week of remote/online classes in NYC (and beyond, of course) and adapting to our “new lives.” I can’t think of anything else to call it right now, so I’m going with this. I say this from the perspective of integration: I’m still very much in the “I’m really perplexed about how we are even in this position” phase, while simultaneously having adjusted to this new life and fulfilled so many new, mandated compliance requirements to keep my courses going. (That was a long sentence, too!) I originally started writing this post about 3 weeks ago. A lot has changed, which makes it seem harder to update, since I’ve made more progress than I thought I would. Or could.

After following all of the administrative protocols, attending endless Zoom meetings, making course updates, reformatting everything, and dealing with the staggering amount of e-mail and overall communication—and that’s just work stuff, not including connecting with family and friends. Whew!—I’m finally starting to reflect on things. Or… wait, is my ego reflecting on what it thinks it is reflecting on? Reflection invites in ALL of the emotions. And the feelings—both positive and negative. And there’s been quite a bit of the negative! Why am I reminded of past failures at a time like this? We humans like routines; they help us stay focused and structured. Uncertainty isn’t something we’re really good at, right? Wrong: we certainly adapt, and adapt quickly. I can see that as I edit this post!


Here is a visual interpretation of me after the first two days of my three-in-a-row Zoom meetings.

I have a lot of thoughts and feelings about the conversion to remote and online teaching in general. As I see it now, especially as a deaf/HoH professor who depends on “visual everything,” I have much more to organize than I thought. I teach simultaneously at four colleges here in NYC. I’m teaching seven courses across these schools to 99.9% hearing people. As we know, the reality of “just switching to video chat classes” is NOT easy, even for a hearing person teaching hearing people, especially if you’ve never done this before. Video chat platforms can actually work well for me in a one-on-one situation, but add 5, 15, or 24 people and access really changes.

Simply put, I need to see a face and mouth at all times to have access to a spoken conversation. Yes, I wear hearing aids, but they are NOT magical devices that let me “hear” what normal-hearing people hear. I don’t, not even close… because I’m deaf. My hearing loss is degenerative, and my hearing has declined steadily since birth. So these days, I only catch about 30% with hearing aids. The other 70% of the conversation is absorbed from lip reading, speech patterns, emotional rapport, facial expressions, and body language. When all I can see is someone’s face, head, and shoulders on a flat monitor or screen, that 70% contextual part is naturally limited, and understanding speech becomes harder.


When switching to “synchronous style” remote teaching on Zoom, Skype, Google Meet, Google Hangouts, or another video-chat platform (I recently tested Microsoft Teams, which DOES have a live, real-time captioning feature; I much prefer it over the others and will be switching to it, though you do need to download the desktop application and have access to business or education licensing), things can get really challenging, especially with a sea of small icon-like faces as the number of people in the chat session increases. Three of my classes have 20-plus students in them. As I mentioned, one-on-one video chat works well for me, but add several others to the chat, and, well, the faces get smaller and visual access decreases. I adapt to this by using a text-chat feature to support the visuals. This can be done, and I have been making several adaptations as time has passed. However, typing out the conversation slows down the process, and others in the virtual classroom may become a little impatient. “Please be compassionate, please be patient, please put yourself in the shoes of others and try to understand.” Hmm, this is tough, especially if I’m your first Deaf professor. Believe me, I know I am. We are learning together in this experience, in real-crazy-time. Things will be tweaked as we go along. We can’t be selfish and expect communication to function as it would in the normal classroom. It’s just not the same thing.

Aside from what I said above, accessibility has a context that expands far beyond myself; it is collective and contextual. I can share my own experiences here, but my experiences obviously relate to my life experiences as a whole—and that includes all of my students. I care about them deeply and protect them fiercely. They come first, always have, and I am fully responsible for making the choice to teach and work where I do. What does accessibility look like on my students’ end? My students have their own issues, struggles, and problems. Some have no access to the internet, no access to a computer, laptop, desktop, smartphone, or tablet, which means no access to certain software applications. Some do not have a physical space to sit and be present in a video chat class, as their living space is shared with parents, siblings, and other relatives who are also home, and in some cases working from home. Many have lost their jobs altogether. Some are living with multiple family members who are sick, whether with Covid-19 or other pre-existing conditions. This all happens simultaneously. But what we don’t really talk about, when we discuss how wonderful all of these new adaptations are, are the emotional and psychological aspects of this entire situation. Do we have enough perspective yet to fully understand the current and continuing impact of the last 5 weeks? No way.

I have adopted the mantra of Compassion, Patience, Understanding, Accessibility, Adaptability, Inclusion, Helpfulness, and Humility. We can do this together both inside and outside academia. Fellow students, faculty, and colleagues, both those with accessibility needs and those who need help working with folks with accessibility needs, let’s pull together and contribute our resources and knowledge to help each other. Blogs like this one and other social media have a huge reach and can be used to share useful perspectives and resources.

It is also crucial that we communicate honestly with our colleagues, students, and administration. I have been guilty of failing to do this in the past myself! I have reached out, and continue to reach out, to my people. All of my students already know that I am deaf/HoH. I was upfront with them from Day One of our semester. I explained my communication needs and stated that I always need to see a face, lips, and body language to follow verbal conversations. If not, then we need to type, write, text, or make written communication happen. A speech-to-text application like Cardzilla (which I love! iOS / Android) or another form of text/type/visual communication also helps! Of course, content management system (CMS) platforms like WordPress websites are also super effective, and I have built a website for every class that I teach! No, not Blackboard or Canvas. I build my own websites for my courses so that I have full autonomy over the administrative aspects of communication and access, and so much more.

The combination of Zoom and the CMS platforms has allowed for a relatively smooth integration for me. As I mentioned above, I will integrate MS Teams this week in place of Zoom. Zoom allows for simultaneous Video, Audio, and Text Chat, which for me and my students is crucial! I can see a face to speech-read and then ask for additional follow-up via text in the chat box. Plus, if I turn the audio on my computer speaker up, in my case to Very High, I can place my iPhone next to it and have the Cardzilla app transcribe the audio to text. It is a hack, but it works, and I am grateful for that. My students have been super patient and seriously awesome at this point! Accessibility is EVERYTHING! Especially in this very NEW situation we find ourselves in.

Aside from teaching, hacking accessibility, and expanding my awareness of how amazing our collective human potential is, how are you all coping with the isolation and the order to stay home? I’m focusing on self-care: making healthy meals and creating a cozy and loving environment in my space. I’m also making a lot of new art, I mean A LOT!

[Images: breakfast, and new ink-jet series and works-in-progress paintings]

Communication is EVERYTHING so please be mindful and specific about what you NEED.

Much Love to all!

How much listening is too much?

– Michele

Listening is hard work. At the end of a long day of meetings I’m exhausted. When I share this with my hearing colleagues they’ll say “Oh, I know—me too!” But is it the same? Really? 

Studies have shown that users of hearing aids like me, who rely on speech reading along with amplification, experience listening fatigue at much higher rates than hearing people (e.g., Bess and Hornsby, 2014). We are working much harder than everyone around us to piece things together and make sense of what we are able to hear. Most listening fatigue studies are on school-aged children, and the few studies of adults show that “Adults with hearing loss require more time to recover from fatigue after work, and have more work absences” (Hornsby et al., 2016). As academics, our jobs require us to listen to others all the time—in our classes, in faculty meetings, in seminars, and when meeting with students. How do we recognize cognitive fatigue due to too much listening and mitigate this fatigue so that we can manage our work responsibilities? This is a tremendous challenge for deaf/HoH academics, and The Mind Hears will explore this topic in several blog posts.

In this post I share how I figured out my daily listening limit, which turns out to be 3 hours with good amplification and clear speech reading. For many years, I pushed through my day not paying attention to how much time I was spending in meetings and classes. Some days I felt okay while other days I ended up utterly exhausted. The kind of exhausted where I can’t track conversation and even have trouble putting my own sentences together. When this happens, I can’t converse with my family and exercise class is out of the question because I can’t follow the instructor. I just take my hearing aids out and lie on the floor with the dog—I don’t need to speech-read him and he gets me. Yay dogs!

When I explain my listening fatigue to non-native English speakers, they get it right away. They recognize that this listening fatigue is just like when they first moved to a country with a new language; while they had good command of the new language, following it all day exhausted them. Exactly! Except I’m not going to get any better at my native language.

After a while—actually a really long while because for many years I tried to work as if I was a hearing person due to internalized ableism, which really is a whole different blog topic—and now this sentence has really gotten off track so I’m going to start over. After a while, I started to realize that for my own health I needed to avoid becoming so exhausted that several times a week, I could only commune with the dog.

It turns out that my fancy new Garmin watch, which tells me to “MOVE” every hour, also detects my stress level. The image at left is from a day at a conference. All I did that day was sit in one room listening to talks with occasional breaks for coffee and meals. My heart rate stayed elevated all day due to the work of following the conversation and the anxiety of constantly deciding whether I should ask for clarification on something I may have missed or just let it go. When even my watch is telling me ‘enough is enough’—or more specifically “You’ve had very few restful moments on this day. Remember to slow down and relax to keep yourself going”—it might be time to figure out how much listening is too much.

So last February I tracked both my hours each day spent listening and my evening exhaustion level in my bullet journal. 

Actually, I didn’t track this much detail—I just made marks in my bullet journal for each hour and then noted whether this was manageable. Below are two example pages. For the day on the left, the 3 Xs represent 3 hours of listening and this was an OK day. The image on the right is from another day that month. The horizontal line below the Xs means that I was on the floor with the dog that evening after 5 hours of listening. 

Yes, I know that my handwriting is messy and I tend to kick a lot of tasks to the next day. But this blog post is not about my untidiness and unreliability. What I learned from this exercise was that any day including more than 3 hours of listening would be a tough, unmanageable day. Armed with this knowledge, I could start to try to rearrange my schedule to avoid having days with more than 3 hours of listening.
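If you want to try the same experiment digitally rather than on paper, here is a minimal sketch in Python (purely hypothetical; my own log was nothing fancier than pen and ink) of tallying daily listening hours and flagging the days that blow past a personal limit:

```python
# Hypothetical digital version of my paper listening log: record hours of
# listening per day and flag any day that goes over a personal limit.
from datetime import date

DAILY_LIMIT_HOURS = 3  # my limit; yours may differ

# Each entry: (day, hours spent listening, how the evening went)
listening_log = [
    (date(2019, 2, 4), 3, "OK evening"),
    (date(2019, 2, 7), 5, "on the floor with the dog"),
]

for day, hours, note in listening_log:
    status = "over my limit" if hours > DAILY_LIMIT_HOURS else "manageable"
    print(f"{day}: {hours} h of listening -> {status} ({note})")
```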

Interestingly, this goes against the advice that many academics give each other. Early-career researchers are encouraged to push all meetings onto one day so that they have a day free for research. This is great advice… for a hearing person. For many of us who are deaf/HoH, two free mornings a week may work better than one full free day, so that no single day is overloaded with listening.

So how successful have I been? Moderately. While I have control over some aspects of my schedule, I don’t over others. I schedule my one-on-one meetings with my research assistants on days that I don’t have a lot of other meetings. If I’m teaching a 3-hour lab, sometimes it’s just impossible for me to have no other teaching or meetings that day. But I am considering restructuring my lab activities so that I don’t need to be ‘on’ the whole time. I’ve also started talking with my department head about my effort to limit my daily meetings; this involves educating him on why listening fatigue is different for me than for hearing faculty. Had I been more savvy, I might have negotiated a listening limit when I was hired. Take note of this, future academics! 

I’m still sorting out how to manage my day and am eager to learn more from others about how they successfully manage listening fatigue. As I mentioned at the start of this post, The Mind Hears wants to have a series of posts about listening fatigue. Tell us how this fatigue has affected your workday and your health. What solutions have you found?

References cited

  • Bess, F.H., & Hornsby, B.W. (2014). Commentary: Listening can be exhausting—Fatigue in children and adults with hearing loss. Ear and Hearing, 35(6), 592.
  • Hornsby, B.W., Naylor, G., & Bess, F.H. (2016). A taxonomy of fatigue concepts and their relation to hearing loss. Ear and Hearing, 37(Suppl 1), 136S.

New Year’s resolution 2020: Make Your Workplace Accessible

The new year brings a fresh start to our lives; it’s a natural time to reflect on the year past and make plans for the coming year. At the start of 2019 The Mind Hears offered a post on making your academic workplace more accessible for your deaf/HoH colleagues. For 2020, we’ve updated the list of recommendations on the Google Doc and expanded below on the reasons why you should work to improve your workplace’s inclusivity today.

Universal design your workplace: Our spaces become more inclusive for all when we improve access for any subgroup of our community. Consequently, by increasing the accessibility of our workplaces for our deaf and hard of hearing colleagues, we create a better workplace for everyone. This includes hearing folks who have an auditory processing disorder, use English as a second language, or are acquiring hearing loss during their careers. Chances are that someone in your department has hearing loss, whether they’ve disclosed this or not, and will benefit from your efforts to make your workplace more accessible (see post on Where are the deaf/HoH academics). This is why you should universal design your workplace now and not wait until someone who is struggling asks you to make modifications.

Sharing the work: With a Google search you can find several resources on workplace accessibility for deaf/HoH employees, such as the Hearing Loss Association of America’s (HLAA) very useful employment toolkit. One drawback of these resources is that nearly all of the suggestions are framed as actions for the deaf/HoH employee. While deaf and hard of hearing academics need to be strong self-advocates and take steps to improve their accommodations, our hearing colleagues can help us tremendously by sharing the work and not expecting us to bear all of the burden of creating accessible workplaces. Speech-reading conversations, planning accommodations, and making sure that technology and accommodations function is never-ending and exhausting work that we do above and beyond our teaching, research, and service. Your understanding and your help changing our workplaces can make a huge difference to us. For example, if a speaker doesn’t repeat the question, ask them to repeat it even if you heard the question just fine. The people who didn’t hear the question are already stressed and fatigued from working hard to listen, so why expect them to do the added work of ensuring speakers repeat questions? Repeating the question benefits everyone.

One size doesn’t fit all: If a participant requests accommodation for a presentation or meeting, follow up with them and be prepared to iterate to a solution that works. It may be signed interpreters (there are different kinds of signing), oral interpreters, CART (see post on Captions and Craptions), or FM systems (see post on Using FM systems at conferences). It could be rearranging the room or modifying the way that the meeting is run. Keep in mind that what works for one deaf/HoH person may not work for another person with similar deafness. What works for someone in one situation may not work at all for that same person in another situation, even if the situations seem similar to you. The best solution will probably not be the first approach that you try, nor the quickest or cheapest one; it will be the one that allows your deaf and hard-of-hearing colleagues to participate fully and contribute to the discussion. Achieving an academic workplace accessible to deaf/HoH academics is a journey.

Want to be a better ally and make your workplace accessible for your deaf and hard of hearing colleagues? Follow this link to read our list of recommendations. This is a living document, and we welcome your comments and suggestions either on this post or directly within the document.

 

I owe my career to the invention of email

-Michele

The title of this post says it all, really. Several times a week I marvel at all the work communication that I can do now that would have been extremely difficult several decades ago. I earned my PhD in 1996. So, I remember the days of making physical presentation slides, where you had to use special film and rush to the developing place in order to get your slides produced before the meeting. I also remember searching the World Wide Web for the very first time. I realize that I’m dating myself with these reminiscences and you are probably impatient for me to get to the point.

Email was around when I was in graduate school, but, at that time, most professionals relied on phoning each other to exchange ideas and get information. At my request, the Dean’s office of my graduate school installed a TTY for me to use for making professional phone calls. I used it a few times and I was very grateful that the captionists relayed voiced information so that I didn’t have to piece together the message from fragments heard on my amplified phone. For example, I used this for some job interviews (see our post on disclosing deafness in job search).

Around the third year of my PhD, I was also having serious doubts about whether an academic career was for me. This is not uncommon for graduate students, but my deafness exacerbated a sense of not belonging in academia. I didn’t know any deafened professors or researchers. How does one have a successful academic career with deafness? How will I follow discussions at meetings? How will I hear my students’ comments? How will I communicate with colleagues on the phone?

I’m still asking some of these questions and The Mind Hears blog is doing a great job of probing these questions (e.g., teaching large classes post). Fortunately, the last question is now moot. Email allows us to communicate with colleagues, exchange ideas and get questions answered. I don’t have to worry about how my voice sounds or whether I can hear folks. I don’t have to fuss with the complications of the relay service, though I still use my new captioned phone when necessary.

Do I ever miss being able to use the telephone? Heck no! I far prefer email or talking in person to using the phone. Recently Ana and I were chatting with a hearing person and they suggested that we contact someone by phone. Ana and I looked at each other and I could tell from her face that she was thinking the same thing as me. A combination of “Ew! Why use the phone when you can email?” and “Eek! Don’t make me use the phone!” When I need an answer from someone on my campus, I take the opportunity to stroll over to their office. Sometimes I get asked “Why didn’t you just phone rather than walk over here?” I laugh and say “It’s good to get out and about, and besides, now I’ve had the chance to visit you.” If I choose to, I can use these encounters as opportunities to talk about my deafness. But honestly, the work of educating the hearing community is draining, so I prefer to have some control over when and where educational moments occur.

I marvel that while I’ve met many deaf and hard of hearing academics who are younger than me, I’ve not yet met one older than myself who has navigated an academic career with significant deafness (i.e., not age related). I wonder if this is because I was at the leading edge of the email revolution that changed the way academics communicate. Academics even a few years my senior had to rely on telephones for networking. The thought of that makes me deeply appreciative of starting my academic career when I did. What an amazing and empowering time this is to be an academic! I’m grateful for the luck of living on this side of the email revolution. Thank you, email!

So, I will end with this Haiku:

    The telephone sits on my desk
    Gathering much dust
    While I type and weave science

P.S. The increasing utilization of remote video conferencing is presenting new challenges for us deaf and hard of hearing academics. Who wants to contribute a The Mind Hears post on navigating these settings?

The sound we can see: working with hearing loss in the field

When I was 19 I went for a checkup with an audiologist and found out that I was hearing only 90% of what I should be. The doctor said that for my age, this was a high level of hearing loss, and attributed it possibly to the intense course of antibiotics I took for kidney failure when I was one year old. He suggested that I come back yearly to repeat the hearing exam, to check whether my hearing had declined further. Of course I ignored this advice and never went back. When I started my graduate studies six years later, I decided it was finally time to visit the audiologist again, because I discovered that I could not hear the species of frog I had decided to base my research on. This was a very scary moment for me. How did I find myself in this situation?

In the last year of my undergraduate studies I took an ecology course and fell in love with the topic. I knew I wanted to earn a master’s degree in ecology, ideally working with animal populations. In Brazil, one has to take a standardized exam to enter a graduate program. I traveled 440 km to take the test and passed; I began my studies in the Federal University of Paraná located in Curitiba, in the south of Brazil. Among all the available mentors, there was one who carried out research on ecological dynamics of insects and anuran amphibians. I chose his lab and wrote a project proposal examining the population dynamics of an endemic species of stream frog (Hylodes heyeri) in the Atlantic Forest in Brazil, specifically Pico do Marumbi State Park, Piraquara, in the state of Paraná. Much of what I was to be doing was completely new to me: I had never worked with frogs and I also had never practiced the mark-and-recapture method. I thus faced a steep learning curve and had to learn a LOT about lab and fieldwork from my team and my mentor. In my first field outing, during which I was to learn how to identify and capture the species I would study, I discovered that I could not hear the frog. A labmate who accompanied me to the field said, “Are you listening? The frog is so close to us.” He thought I was not hearing the frog due to lack of experience, or because of the background noise of the stream. I worried that something else was amiss, and this finally prompted me to go back to my audiologist. There, I discovered that I had lost 2% more of my hearing, and this loss compromised treble sounds, those in the range of high to very high frequencies, precisely overlapping my frog’s vocalizations.

Now, I’m a PhD student and I use hearing aids programmed specifically for my hearing loss, which primarily encompasses frequencies above 4000 Hz. I was initially ashamed to wear hearing aids because people mocked them. But I didn’t consider changing projects, because I knew I could get help localizing the frog. I also knew there would be ways for me to analyze the sound without necessarily hearing it. Even with hearing aids, however, I can only hear the call of my frog when I am no more than 4 meters away. Other members of my lab can detect the sound of the frog from much farther away, even when they are 20 meters or more from the stream. This means that for every survey I carry out in the field, I need a person to accompany me to guide me to the frog, using their sense of hearing to identify the sound. But the assistance I receive in the field goes beyond locating my frog; fieldwork can be hazardous for many reasons: I may not hear dangerous animals—such as pumas, collared peccaries, or leopards—approaching, and I may lose track of my team if people call me from too far away. Even for scientists without hearing loss, it is advisable not to carry out fieldwork alone.

In recent years, I have had the opportunity to learn Brazilian sign language (LIBRAS) in graduate courses. I am happy that it is a requirement for my degree! When I am in the field I communicate primarily with gestures. I am lucky that my frogs are diurnal, because I am able to see my companions in the field, making communication much easier. Once my companion hears the frog, they look at me so I can read their lips or we make gestures so as to not scare the frogs. Sometimes I use headphones, point the microphone of my recorder in the general direction of the frog, and increase the volume to better understand where the sound comes from—this trick of using my main research tool (my recorder) to find my frogs was taught to me by a friend who also carried out research in bioacoustics and had the challenge of finding a tiny mountain frog species that hid in leaf-litter (thank you, André Confetti). My frogs are also tiny, only 4 cm long. They camouflage in the streams and spook very easily, but in order to obtain my data, I need to get as close as 50 cm from the frog. Only then can I really start. The aim of my work is to analyze the effect of anthropogenic noise (such as traffic road sounds transmitted by playback) on frog communication. Once I am in position, I can play the anthropogenic sound, and record the frog’s call. I take these recordings back to the lab and experience the most rewarding aspect of my efforts to find these frogs. The recordings are transformed into graphs of the frequency and length of each call. Although I cannot hear the sounds my frog makes, I can see them! After seeing the sound I can analyze several call variables and calculate various statistics.
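For readers curious about what “seeing the sound” can look like in practice, here is a minimal sketch in Python (assuming the librosa and matplotlib libraries and a placeholder file name; the exact software I use may differ) of turning a field recording into a spectrogram, so that a call’s frequency content and duration can be measured by eye:

```python
# Minimal sketch: convert a field recording into a spectrogram so that a
# call's frequency range and duration can be inspected visually.
# Assumes librosa and matplotlib are installed; "frog_call.wav" is a placeholder.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

audio, sr = librosa.load("frog_call.wav", sr=None)       # keep the native sample rate
stft = librosa.stft(audio, n_fft=2048, hop_length=512)   # short-time Fourier transform
spec_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

fig, ax = plt.subplots(figsize=(8, 4))
img = librosa.display.specshow(spec_db, sr=sr, hop_length=512,
                               x_axis="time", y_axis="hz", ax=ax)
ax.set(title="Spectrogram of a recorded call")
fig.colorbar(img, ax=ax, format="%+2.0f dB")
plt.show()
```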

Would I recommend fieldwork such as mine to somebody who finds themselves in my predicament? If you are open to creative workarounds, such fieldwork is possible for all. Having a field companion, using signs to communicate, and making use of the amplification provided by my recording equipment have solved the majority of my problems. Most important of all, having support from your mentor and from other people who can help and whom you can trust is crucial. I do not intend to continue with bioacoustics research after I graduate, but if I need to mentor any students in the area, I’ll be happy to do it. I worry about my hearing loss too, in thinking of how it will affect my teaching in the future, because sometimes I hear words incorrectly and confuse their meaning. But I recently disclosed my hearing loss in an interview; reading more at The Mind Hears and on other blogs has inspired me to worry less about my hearing loss and to continue to forge ahead in my career.

 

Biography: My name is Michelle Micarelli Struett and I am a doctoral candidate in the Graduate Program in Ecology and Conservation (where I also received my MS) at the Federal University of Paraná in Curitiba, Paraná, Brazil. I did my undergraduate degree at Maringá State University in Maringá, which is also in Paraná. I am interested in animal behavior, especially in frogs, and in my research I will examine multi-modal communication in the Brazilian Torrent Frog (Hylodes heyeri). This unique frog can sing from one or both sides of its mouth (it has two vocal sacs), depending on context. I will attempt to determine what context stimulates those two possibilities (auditory, visual, or tactile), and how anthropogenic noise may interfere with communication and social interactions in this frog. Despite my hearing loss (which primarily encompasses frequencies above 4000 Hz), I have not been kept from working with frog calls and bioacoustics.

Mandated equal opportunity hiring may not ensure equal considerations by hiring committees: A hypothetical scenario

-Ryan

Imagine that you are a deaf/hard-of-hearing (HoH) person applying for a full-time academic position at a U.S. public institution of higher learning. The position is listed nationally across multiple job boards. At the offering institution, deaf/HoH faculty, students, administrators, and staff members represent 1% of the population. You are highly qualified, with an extensive résumé of accomplishments in your field and a strong history of service. Information about you is readily available on the internet at large.

You investigate and discover that the offering department does not currently have a deaf or hard-of-hearing person among their full-time and adjunct faculty.

Applying for the position:

When applying, you check the general “YES, I have a disability” box on the institution’s application and contact the human resources (HR) department directly to let them know that you are applying specifically as a deaf/HoH person. If you are offered an interview for the position, you request, as is your right, to meet with the search committee in person, rather than have the interview over a conference call. You cross your fingers, hoping that the HR department communicates with the department offering the position to ensure that they are presenting an equal opportunity for employment for those with disabilities. Does the HR department actually communicate your request for accommodation to the academic department? You may never know but let’s say that it does in this case…

Considerations of the hiring committee:

When the academic department’s search committee learns that you are deaf/HoH, how will they respond? Are they experienced in the process of interviewing a deaf or hard-of-hearing person? How many deaf/HoH applicants have they interviewed in the past? How many of those applicants made it to the second or third round of the process, or were hired full-time? Where are the statistics to prove that equal opportunities are being given?

When the search committee learns of your request to meet in person for an interview because you are deaf/HoH, how aware and educated are the search committee members of Deaf culture and what it means to be deaf or hard of hearing? How aware are they of what it means to be a deaf/HoH faculty member teaching in a mainly all-hearing environment? Do they know the benefits of having a deaf or hard-of-hearing person as a part of their full-time or part-time faculty? What evidence is there within the department’s current publications, seminars, exhibitions, faculty development, and outreach efforts of awareness of the advantages brought about by workplace diversity that is inclusive of disability?

Is the typical academic faculty search committee equipped, skilled, and supportive enough to interview a deaf/HoH candidate if none of its members are deaf or hard of hearing? If they don’t have deaf/HoH members, are they sufficiently trained in deaf/HoH experiences to judge your application fairly against the numerous other applicants who do not have any disabilities? Are search committees trained enough to distinguish between medical and cultural models of disability, and to understand how these models affect their perceptions of your strengths? Are they savvy enough to move away from focusing on what you can’t do, and to focus instead on what your diverse perspective brings to the hiring unit?

Answers to many of the questions I ask above should be part of the public record. My experience on the job search circuit thus far has left me disillusioned and believing that departmental search committees and HR departments are likely ill-equipped to handle deaf/HoH applicants. Studies have shown that search committees have many implicit biases. One of these biases is the assumption that, since deafness may impede academic success, it is safer to hire a hearing applicant.

It’s time to fix this.

Have you ever been on a faculty search committee where a deaf or hard-of-hearing person applied? If so, did that person receive the position? If not, would you like to share your experience?

Under-represented: Where are all the deaf and hard-of-hearing academics?

-Michele

Through working on The Mind Hears since September 2018, I’ve had the chance to meet some amazing deaf and hard-of-hearing scholars and researchers. Our backgrounds, areas of expertise, degrees of hearing, and jobs differ. But one very common experience for deaf/HoH academics at mainstream institutions (i.e., not at a primarily deaf/HoH university) is the lack of mentors who are deaf/HoH. This isolation drove us to start the blog. But our common experiences lead to the question: Where are all the deaf and hard-of-hearing academics?

The American Speech-Language-Hearing Association classifies degree of hearing loss on a scale of mild (26–40 dB), moderate (41–55 dB), moderately severe (56–70 dB), severe (71–90 dB), and profound (91+ dB) (ASHA). Despite these straightforward definitions, understanding the statistics on hearing loss requires nuance. While testing shows that many people have some degree of hearing loss, only a subset of these folks wear hearing aids or use signed language; even fewer request work accommodations. The National Institute on Deafness and Other Communication Disorders, part of the federal National Institutes of Health, reports that 14% of the working-age adult population aged 20–69 has significant hearing loss (Hoffman et al., 2017). This 14% report a greater than 35 decibel threshold for hearing tones within speech frequencies in one or both ears (NIDCD). The number of people with high-frequency hearing loss is double the number with speech-range loss (Hoffman et al., 2017). However, not hearing watch alarms or computer keyboards is not considered to be as impactful as missing speech-range frequencies.
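As a quick illustration of the ASHA scale above, here is a small hypothetical Python function (not ASHA's code; just a sketch of the thresholds quoted in this paragraph) that maps a hearing threshold in dB to its category:

```python
# Sketch of the ASHA degree-of-hearing-loss categories quoted above.
def degree_of_hearing_loss(threshold_db: float) -> str:
    if threshold_db <= 25:
        return "normal to slight"      # below the categories listed above
    elif threshold_db <= 40:
        return "mild"
    elif threshold_db <= 55:
        return "moderate"
    elif threshold_db <= 70:
        return "moderately severe"
    elif threshold_db <= 90:
        return "severe"
    else:
        return "profound"

print(degree_of_hearing_loss(35))  # "mild" (the NIDCD figure uses a >35 dB threshold)
```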

As Figure 1 shows, the statistics on hearing loss are further complicated by age, which correlates with incidence of hearing loss. Among folks aged 60–69 years, 39% have hearing loss (Hoffman et al., 2017). Within the larger disabled community, we crips joke that we are a community that can recruit new members. Joking aside, the reality is that if you are a hearing person reading this, there is a very good chance that hearing loss will affect you or someone close to you during your working lifetime. The Mind Hears can be a valuable resource for folks with newly acquired hearing loss.

Figure 1: Modified from Hoffman et al., 2017

So where are the deaf and hard-of-hearing academics? Doctoral degrees are generally awarded to academics between the ages of 20 and 29; the incidence of significant hearing loss within this population is 2.2% (Hoffman et al., 2017). The National Science Foundation’s annual survey on doctoral recipients reports that 54,664 graduate students earned PhD degrees in 2017 (NSF 2017)—wow, that represents a lot of hard work! Great job y’all! Now, if the graduate student population resembles the general population, then we should expect that 1202 of those newly minted PhDs are deaf/HoH. Instead, the survey reports that only 654 PhDs, or 1.2%, were issued to deaf or hard of hearing people (NSF, 2017). This suggests that deaf/HoH PhDs have half the representation that they do within the general population.
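For readers who want to check the arithmetic, here is a short sketch in Python using the figures quoted above:

```python
# Quick check of the representation gap, using the figures cited above.
total_phds_2017 = 54_664         # PhDs awarded in 2017 (NSF, 2017)
rate_hearing_loss_20s = 0.022    # significant hearing loss, ages 20-29 (Hoffman et al., 2017)
deaf_hoh_phds_2017 = 654         # deaf/HoH PhDs reported (NSF, 2017)

expected = total_phds_2017 * rate_hearing_loss_20s       # ~1202-1203
observed_share = deaf_hoh_phds_2017 / total_phds_2017    # ~0.012, i.e. 1.2%

print(f"expected deaf/HoH PhDs: {expected:.0f}")
print(f"observed deaf/HoH PhDs: {deaf_hoh_phds_2017} ({observed_share:.1%} of all PhDs)")
print(f"observed / expected: {deaf_hoh_phds_2017 / expected:.0%}")   # roughly half
```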
Furthermore, the distribution of deaf/HoH PhDs is not even among the fields of the NSF doctoral survey. In 2017, as shown in Figure 2, each of the fields of Humanities and arts, Education, and Psychology and social sciences has a greater percentage of deaf/HoH PhDs than each of the fields of Engineering, Life sciences, Physical and earth sciences, or Mathematics and computer sciences. Anecdotally, I’ve heard of greater numbers of deaf/HoH scholars and researchers in the fields of Deaf Studies, Deaf Education, and Signed Languages Studies than in other fields, which could affect the distribution. Or perhaps some fields are more welcoming to deaf/HoH scholars and researchers. Nevertheless, deaf and HoH people are underrepresented among PhD-holding scholars and researchers in all fields.

Figure 2: Percentage of deaf/HoH among 2017 doctoral recipients, by field (NSF, 2017)

So, what can we do? These numbers reveal why so many of us feel isolated in our experiences within academia. The Mind Hears is one effort to facilitate networking and raise awareness of inclusion issues for deaf/HoH academics.

 

 

References

American Speech-Language-Hearing Association. Available at https://www.asha.org/public/hearing/degree-of-hearing-loss/

Hoffman HJ, Dobie RA, Losonczy KG, Themann CL, Flamme GA. Declining Prevalence of Hearing Loss in US Adults Aged 20 to 69 Years. JAMA Otolaryngol Head Neck Surg. 2017;143(3):274–285. doi:10.1001/jamaoto.2016.3527

National Institute on Deafness and Other Communication Disorders (NIDCD), Available at https://www.nidcd.nih.gov/health/statistics/quick-statistics-hearing.

National Science Foundation, National Center for Science and Engineering Statistics. 2018. Doctorate Recipients from U.S. Universities: 2017. Special Report NSF 19-301. Alexandria, VA. Available at https://ncses.nsf.gov/pubs/nsf19301/.

 

Traveling and Conferences: When Bacteria Has a Party

In my first post for The Mind Hears, I want to tell you a little about my background, then outline some strategies that I’ve found successful for traveling and attending conferences.

I have been a regular at my ENT’s (ear, nose, and throat doctor) office since I was young, getting new tubes and replacement tubes, having cholesteatomas removed, and having perforated eardrums repaired. On a good day, I have about 50% of the normal range of hearing—less if I have a sinus or ear infection. Because I had my right ear completely reconstructed, I am unable to use any hearing aids effectively. Due to my upbringing in an impoverished rural town, I didn’t have access to speech therapy or options to learn American Sign Language. My hearing loss was never formally recognized as a disability, so I moved through most of my life trying to find creative ways to be successful at school and professionally. Now I wish I had spoken up more, but the aforementioned lack of resources and accommodations made it difficult.

Traveling is a necessity for (geo)scientists, whether for fieldwork, attending conferences, or networking with the scientific community. The quickest mode of transportation is air travel, with its changing pressure and humidity, which apparently has a big impact on my sinus system. I remember attending the American Geophysical Union (AGU) Fall Meeting in San Francisco as an undergraduate in 2006, overwhelmed by the size of the conference and harder of hearing than usual. I thought I had caught a cold and tried to communicate with my fellow scientists in loud poster sessions. I repeated this trip a few more times in graduate school and, sure enough, the bacteria in my sinuses decided to have a party that moved to my ears. We all have flora in our systems; mine just like to come unannounced and frequently. Later in graduate school, I traveled to Italy for fieldwork and found myself with (surprise!) a sinus infection. It was not fun being in a foreign country and unable to communicate at all in the local language; in addition, the infections made communicating effectively with my own team difficult. Nevertheless, I powered through these situations.


Because of my experiences, I’ve found myself being more vocal about my needs; I’ve realized that I’m my best advocate. Here are some strategies that have helped me.

Medical help: I have built a great relationship with my ENT and we’ve developed a system for traveling which helps prevent weeks of sinus congestion and nearly complete deafness. I travel for my job too often to make visits to the ENT feasible prior to every trip, but occasional visits a few times a year help. Please note, this is my personal plan; please consult your physician. I take the steroid prednisone and prescription-strength Sudafed right before a flight—this medication regimen means I have a better chance of flying with limited, or even better, no sinus impacts. One downside to the medications, however, is that I’m sensitive to the steroid; I feel amped and often can’t sleep that first night if there are significant time zone changes—west coast to east coast in particular. This is not a minor downside; my reaction can make important meetings stressful. But the benefits far outweigh the cons. Because I have become a chronic sinus-infection patient, normal antibiotics no longer work on existing infections. Proactively heading off infections is my preference, since if I’m at a conference or meeting, I cannot wait the two weeks for the medications to work. Waiting would mean that I’d miss conferences with breakthrough discoveries and vital conversations. I don’t love that I have to depend on medication and deal with the side effects, but it helps me to be an active participant in conferences rather than a passive observer.

Communication tips:

  • Live-captioning platforms and apps are improving, and more conferences are starting to use them for talks and poster sessions.
  • Teleconferencing:
    • An example is InnoCaption, an app for both Android and iPhone that can be used for teleconferencing meetings. A federally administered fund pays for it, and you must register, as it enables a live stenographer to generate captions. It requires a 4G network or reliable Wi-Fi.
    • Another approach is using smarter work phones that can run programs such as CapTel to do live captioning. These are phones such as the Cisco model 8861, which does live captions during video. There are also applications such as Jabber that enable you to transfer captions to a computer screen for smart accessibility.
  • Traveling to foreign countries: Google Translate now has several offline dictionaries! Five years ago if you didn’t have Wi-Fi or data, you didn’t have Google Translate. But I recently used Google Translate successfully for Spanish! Google Translate is simple to use by talking into your smartphone—you can get good translations to or from English.

Conferences:

  • I find it helpful to sit up front in conference rooms, both to hear better and to be seen.
  • If I didn’t quite catch the presentation, I ask the speakers for business cards to get a copy of presentations or posters.
  • Depend on the conference moderators: Another technique to anticipate impaired hearing depends on the conference size and style. When I’m a speaker, I’ve asked moderators in advance (via email) to repeat questions from the audience. This helps ensure I understand the question and helps with accents. I’ve had mixed results—often there is no moderator to contact directly, which means I have to track down that individual in person before the session, and that is a lot of work.
  • Normalize captions: The best way to normalize is to use Google Slides or captioned presentations for everyone all the time!

What tricks and tips do you use for communicating?

BIO: Circe Verba, PhD, is a research geologist, science communicator, and STEMinist at a government lab. She earned her doctorate in geology and civil engineering at the University of Oregon. Dr. Verba specializes in using advanced microscopy (petrography and electron microscopy) and microanalysis techniques to tackle challenges facing the safe and efficient use of fossil energy resources. Outside of the lab, Dr. Verba is an avid LEGO buff, even designing her own set featuring field geology and a petrographic laboratory.

Captions and Craptions for Academics

-Michele

In recent years, to my delight, captions have been showing up in more and more places in the United States. While I’ve been using captioning on my home TV for decades, now I see open captioning on TVs in public places, in many internet videos, and, most recently, in academic presentations. Everyone benefits from good captioning, not just deaf/HoH folks or those with an auditory processing disorder. Children and non-native English speakers, for example, can hone their English skills by reading captions in English. And nearly everyone has trouble making out some dialogue now and then. But not all captioning is the same. While one form of captioning may provide fabulous access for deaf/HoH, another is useless. To ensure that our materials optimize inclusion, we need to figure out how to invest in the former and avoid the latter.


To unpack this a bit, I’m going to distinguish between 4 types of captioning that I’ve had experience with: 1) captions, 2) CART (communication access real-time translation), 3) auto-craptions, and 4) real-time auto-captions with AI. The first two are human produced and the last two are computer produced.

Captions: Captions are word-for-word transcriptions of spoken material. Open captions are automatically displayed, while closed captions require the user to activate them (click the CC option on a TV). To make these, a human-produced script is added to the video as captions. Movies and scripted TV shows (i.e., not live shows) all use this method, and the quality is usually quite good. In a perfect world, deaf/HoH academics (including students) would have access to captioning of this high quality all the time. Stop laughing. It could happen.

CART: This real-time captioning utilizes a stenotype-trained professional to transcribe the spoken material. Just like the court reporters who document court proceedings, a CART professional uses a coded steno keyboard (see image at right) to quickly enter phonemes that are matched against a vocabulary database to form words. The CART transcriptionist will modify the results as they go to ensure a quality product. While some CART transcriptionists work in person (in the same room as the speakers), others work remotely by using a microphone system to listen to the speakers. Without a doubt, in-person CART provides way better captioning quality than remote CART. In addition to better acoustics, the in-person service can better highlight when the speaker has changed, and transcriptionists can more easily ask for clarification when they haven’t understood a statement. As a cheaper alternative to CART, schools and universities sometimes use C-Print for lectures, where the non-steno-trained translators capture the meaning rather than a word-for-word transcription. In professional settings, such as academic presentations, where specific word choice is important, CART offers far better results than C-Print but requires trained stenographers.

Some drawbacks of CART are that the transcription lags, so sometimes the speaker will ask “Any questions?” but I and other users can’t read this until the speaker is well into the next topic. Awkward, but eventually the group will get used to you butting in late. CART also can be challenging with technical words in academic settings. Optimally, all the technical vocabulary is pre-loaded, which involves sending material to the captionist ahead of time for the topics likely to be discussed. Easy-peasy? Not so fast!  For administrative meetings of over 10 people, I don’t always know in advance where the discussion will take us.  Like jazz musicians, academics enjoy straying from meeting agendas. For research presentations, most of us work on and tweak our talks up until our presentation. So getting advance access to materials for a departmental speaker can be… challenging.

Craptions: These are machine-produced auto-captions that use basic speech recognition software. Where can you find these abominations, er, less-than-ideal captions? Many YouTube videos and Skype use this. We call them ‘crap’tions because of the typical quality. It is possible that craptions can do an okay job if the language is clear and simple. For academic settings, these auto-craptions with basic speech recognition software are pretty much useless.


The picture at right shows auto-craption for a presentation at the 2018 American Geophysical Union conference about earthquakes. I know, right? Yes, the speaker was speaking in clear English… about earthquakes. The real crime of this situation is that I had requested CART ahead of time, and the conference’s ADA compliance subcontractor hired good-quality professional transcriptionists. Then, the day before the conference, the CART professionals were told they were not needed. Of course, I didn’t know this and thought I was getting remote CART. By the time the craptions began showing up on my screen, it was too late to remedy the situation. No one that I talked with at the conference seemed to know anything about the decision to use craptions instead of CART; I learned all of this later directly from the CART professionals. The conference contractor figured that they could ‘save money’ by providing auto-craption instead of CART. Because of this cost-saving measure, I was unable to get adequate captioning for the two sessions of particular interest to me and for which I had requested CART. From my previous post on FM systems, you may remember that all of my sessions at that conference were in the auxiliary building where the provided FM systems didn’t work. These screw-ups meant it was a lousy meeting for me. Five months have passed since the conference, and I’m still pretty steamed. Mine is but one story; I would wager that every deaf/HoH academic can tell you other stories about material being denied to them because quality captioning was judged too expensive.

Real-time auto-caption with AI: These new programs use cloud-based machine learning that goes far beyond the stand-alone basic speech recognition of craptioning software. The quality is pretty good and shows signs of continuous improvement. Google Slides and Microsoft Office 365 PowerPoint both have this functionality. Link to a video of Google Slides auto-caption in action. You need to have internet access to utilize the cloud-based machine learning of these systems. One of the best features is that the lag between the spoken word and the text is very short. I speak quickly and the caption is able to keep up with me. Before you start singing hallelujah, keep in mind that it is not perfect. Real-time auto-captioning cannot match the accuracy of captioning or CART for transcribing technical words. Keep in mind that while it might get many of the words, if the captions miss one or two technical words in a sentence, then deaf/HoH folks still miss out. Nevertheless, many audience members will benefit, even with these missing words. So, we encourage all presenters to use real-time auto-caption for every presentation. However, if a deaf/HoH person requests CART, real-time auto-caption, even if it is pretty darn good, should never be offered as a cheaper substitution. Their accommodation requests should be honored.

An offshoot of real-time auto-captioning with AI is the set of apps that work on your phone. Android phones now have a Google app (Live Transcribe) that utilizes the same speech recognition power used in Google Slides. Another app that works on multiple phone platforms is Ava. I don’t have an Android phone and have only tried Ava in a few situations. It seems to do okay if the phone is close to the speaker, which might work in a small conversation but poses problems for meetings of more than 3 people or academic presentations. Yes, I could put my phone up by the speaker, but then I can’t see the captions. So yeah, no.

What are your experiences with accessing effective captions in academic settings? Have you used remote captioning with success? For example, I recently figured out that I can harness Google Slides real-time auto-caption to watch webinars by overlapping two browser windows. For the first time, I can understand a webinar. I’ve got a lot of webinars to catch up on! Tell us what has worked (or not worked) for you.