I look forward to and dread research conferences simultaneously.
I look forward to seeing my friends and colleagues, learning about new research, and exercising my neurons as I ponder different research topics and directions. I eagerly anticipate exploring the different cities and countries where the conferences are held. I long for those few days where I control my own schedule.
At the same time, I dread discovering that the provided access services are inadequate for catching the various research presentations and posters—the interpreting and/or captioning quality ranges from poor to excellent, so the significance of my getting the gist of the new research is >0.05 (I know I shouldn’t be using 0.05 as a baseline, my dear statistician friends). I also worry about whether the quality of my research work is reflected accurately by the interpreters for my presentations.
But what I dread the most is being viewed as the deaf person, not as a scientist. At the first few conferences I attended, people would come up to me and ask questions such as, “How do you come up with signs for phytoplankton or photorespiration?” Often, they would try to strike up a conversation with the interpreter right in front of me and commiserate about how hard it must be to keep up with the scientific jargon, especially with people speaking at warp speeds. These conversations were always awkward, since interpreters know they cannot have personal conversations while they are interpreting. They would look to me for guidance on how to handle these situations, since they knew the protocol, even if my colleagues did not.
I’ve mastered responding with a strained smile on my face, “Yes, it isn’t easy. By the way, what is your research on? And do you have a poster or talk here?” Most people get the hint and are more than happy to talk about their own research. After twenty years in the field, these encounters have become less frequent, but they still occur.
Those encounters have become rarer over time because I have become more assertive about going up to other researchers to ask them about their work; but that assertiveness and confidence have come in part from my growing scientific reputation in the field of estuarine science and oceanography. Now, I suspect that if I stand around and wait for people to come talk to me, they either won’t come due to fear, or they will come with the dreaded questions. I truly appreciate my colleagues who come to me to discuss science.
At academic conferences, I am a scientist first, and deaf person second.
Dr. Solomon has been a faculty member at Gallaudet since 2000. She also is an adjunct at the University of Maryland Center for Environmental Science, and serves on masters and doctoral committees for research on increasing participation of deaf and hard of hearing people in STEM and estuarine science especially in the areas of nutrient and microbial dynamics.
Through working on The Mind Hears since September 2018, I’ve had the chance to meet some amazing deaf and hard-of-hearing scholars and researchers. Our backgrounds, areas of expertise, degrees of hearing, and jobs differ. But one very common experience for deaf/HoH academics at mainstream institutions (i.e., not at a primarily deaf/HoH university) is the lack of mentors who are deaf/HoH. This isolation drove us to start the blog. But our common experiences lead to the question: Where are all the deaf and hard-of-hearing academics?
The American Speech-Language-Hearing Association classifies degree of hearing loss on a scale of mild (26–40 dB), moderate (41–55 dB), moderately severe (56–70 dB), severe (71–90 dB), and profound (91+ dB) (ASHA). Despite these straightforward definitions, understanding the statistics on hearing loss requires nuance. While tests show that many people have some degree of hearing loss, only a subset of these folks wear hearing aids or use signed language; even fewer request work accommodations. The National Institute on Deafness and Other Communication Disorders, part of the federal National Institutes of Health, reports that 14% of working-age adults aged 20–69 have significant hearing loss (Hoffman et al., 2017). This 14% report a greater than 35 decibel threshold for hearing tones within speech frequencies in one or both ears (NIDCD). The number of people with high-frequency hearing loss is double the number with speech-range loss (Hoffman et al., 2017). However, not hearing watch alarms or computer keyboards is not considered to be as impactful as missing speech-range frequencies.
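For readers who think in code, the ASHA scale above can be written as a small lookup function. This is purely an illustrative sketch: the thresholds come from the scale quoted above, and the function name is my own invention, not anything ASHA publishes.

```python
# Illustrative only: maps a hearing threshold in dB to the ASHA
# degree-of-loss labels quoted above. The function name is hypothetical.
def asha_category(threshold_db: float) -> str:
    if threshold_db <= 25:
        return "normal"             # below the scale quoted above
    if threshold_db <= 40:
        return "mild"               # 26-40 dB
    if threshold_db <= 55:
        return "moderate"           # 41-55 dB
    if threshold_db <= 70:
        return "moderately severe"  # 56-70 dB
    if threshold_db <= 90:
        return "severe"             # 71-90 dB
    return "profound"               # 91+ dB
```

Note how the NIDCD figure below (a >35 dB threshold in speech frequencies) sits squarely in the "mild" band, which is part of why the statistics need nuance.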
As Figure 1 shows, the statistics on hearing loss are further complicated by age, which correlates with incidence of hearing loss. Among folks aged 60–69 years, 39% have hearing loss (Hoffman et al., 2017). Within the larger disabled community, we crips joke that we are a community that can recruit new members. Joking aside, the reality is that if you are a hearing person reading this, there is a very good chance that hearing loss will affect you or someone close to you during your working lifetime. The Mind Hears can be a valuable resource for folks with newly acquired hearing loss.
So where are the deaf and hard-of-hearing academics? Doctoral degrees are generally awarded to academics between the ages of 20 and 29; the incidence of significant hearing loss within this population is 2.2% (Hoffman et al., 2017). The National Science Foundation’s annual survey on doctoral recipients reports that 54,664 graduate students earned PhD degrees in 2017 (NSF, 2017)—wow, that represents a lot of hard work! Great job y’all! Now, if the graduate-student population resembled the general population, we would expect about 1,202 of those newly minted PhDs to be deaf/HoH. Instead, the survey reports that only 654 PhDs, or 1.2%, were awarded to deaf or hard-of-hearing people (NSF, 2017). This suggests that deaf/HoH PhDs have roughly half the representation that they do within the general population.
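For those who like to see the arithmetic spelled out, here is the back-of-envelope calculation, using only the figures cited above:

```python
# Back-of-envelope check of the representation gap, using the figures
# cited above (NSF 2017; Hoffman et al., 2017).
total_phds = 54_664     # PhDs awarded in 2017 (NSF)
pop_rate = 0.022        # hearing-loss incidence, ages 20-29 (Hoffman et al.)
deaf_hoh_phds = 654     # deaf/HoH PhDs reported (NSF)

expected = int(total_phds * pop_rate)               # ~1202 expected at the population rate
actual_pct = 100 * deaf_hoh_phds / total_phds       # ~1.2% of all PhDs
ratio = (deaf_hoh_phds / total_phds) / pop_rate     # ~0.54: about half the expected representation

print(expected, round(actual_pct, 1), round(ratio, 2))
```

The ratio of the actual share (1.2%) to the population rate (2.2%) works out to roughly one half, which is the underrepresentation described above.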
Furthermore, the distribution of deaf/HoH PhDs is not even among the fields of the NSF doctoral survey. In 2017, as shown in Figure 2, each of the fields of Humanities and arts, Education, and Psychology and social sciences had a greater percentage of deaf/HoH PhDs than each of the fields of Engineering, Life sciences, Physical and earth sciences, and Mathematics and computer sciences. I’ve heard of greater numbers of deaf/HoH scholars and researchers in Deaf Studies, Deaf Education, and Signed Language Studies than in other fields, which could skew the distribution. Or perhaps some fields are simply more welcoming to deaf/HoH scholars and researchers. Nevertheless, deaf/HoH people are underrepresented among PhD-holding scholars and researchers in all fields.
So, what can we do? These numbers reveal why so many of us feel isolated in our experiences within academia. The Mind Hears is one effort to facilitate networking and raise awareness of inclusion issues for deaf/HoH academics.
Hoffman HJ, Dobie RA, Losonczy KG, Themann CL, Flamme GA. Declining Prevalence of Hearing Loss in US Adults Aged 20 to 69 Years. JAMA Otolaryngol Head Neck Surg. 2017;143(3):274–285. doi:10.1001/jamaoto.2016.3527
National Science Foundation, National Center for Science and Engineering Statistics. 2018. Doctorate Recipients from U.S. Universities: 2017. Special Report NSF 19-301. Alexandria, VA. Available at https://ncses.nsf.gov/pubs/nsf19301/.
Graduate students, postdocs, and other academics applying for jobs face a hypercompetitive job market, limited geographic options, and a potentially withering assessment of their research productivity, teaching abilities, and overall potential. If you are deaf or hard of hearing, add to this scenario the weighty decision of when you should reveal your deafness during the job application process. In the United States employers are prohibited from discriminating against job applicants based on disability. However, not all countries offer this protection, and even in the U.S. many of us worry that unconscious, or even conscious, bias can often taint the work of search committees.
So what is a deaf/HoH job applicant to do? Do you reveal your deafness in your CV? Once you are offered an interview? When you are on site for an interview or visit? Do you reveal it to the search committee chair? To the human resources department? Do you request accommodations when invited for an interview, or do you wing it? Answers to these questions may vary depending on your degree of hearing loss, the ethos of the institution or position you are applying to, and your personal style. Answers may even vary depending on whether you are applying for a job today or several years ago, and your perception of the societal climate at the time.
The Mind Hears would like to learn from academics who have navigated (or are navigating) the job-search phase about the choices they have made, what they wish they had done differently, and what they have found particularly effective. Please help us out by answering this short survey (5–10 minutes) about your experiences. The survey will be available until July 18, 2019:
We would like to collate this collective knowledge and experience into a compendium of anonymized comments to be posted at the end of summer as a blog post. By sharing the strategies we have tried, we hope to create a resource that can serve as a guide for all of us, and particularly for the upcoming generation of students and postdocs.
In recent years, to my delight, captions have been showing up in more and more places in the United States. While I’ve been using captioning on my home TV for decades, now I see open captioning on TVs in public places, many internet videos, and most recently, in academic presentations. Everyone benefits from good captioning, not just deaf/HoH or folks with an auditory processing disorder. Children and non-native English speakers, for example, can hone their English skills by reading captions in English. And nearly everyone has trouble making out some dialogue now and then. But not all captioning is the same. While one form of captioning may provide fabulous access for deaf/HoH, another is useless. To ensure that our materials optimize inclusion, we need to figure out how to invest in the former and avoid the latter.
To unpack this a bit, I’m going to distinguish between 4 types of captioning that I’ve had experience with: 1) captions, 2) CART (communication access real-time translation), 3) auto-craptions, and 4) real-time auto-captions with AI. The first two are human produced and the last two are computer produced.
Captions: Captions are word-for-word transcriptions of spoken material. Open captions are displayed automatically, while closed captions require the user to activate them (e.g., click the CC option on a TV). To make these, a human-produced script is added to the video as captions. Movies and scripted TV shows (i.e., not live shows) all use this method, and the quality is usually quite good. In a perfect world, deaf/HoH academics (including students) would have access to captioning of this high quality all the time. Stop laughing. It could happen.
CART: This real-time captioning utilizes a stenotype-trained professional to transcribe the spoken material. Just like the court reporters who document court proceedings, a CART professional uses a coded keyboard (see image at right) to quickly enter phonemes that are matched against a vocabulary database to form words. The CART transcriptionist modifies the results as they go to ensure a quality product. While some CART transcriptionists work in person (in the same room as the speakers), others work remotely by using a microphone system to listen to the speakers. Without a doubt, in-person CART provides way better captioning quality than remote CART. In addition to better acoustics, in-person transcriptionists can better indicate when the speaker has changed and can more easily ask for clarification when they haven’t understood a statement. As a cheaper alternative to CART, schools and universities sometimes use C-Print for lectures, where non-steno-trained captionists capture the meaning but not a word-for-word transcription. In professional settings, such as academic presentations, where specific word choice is important, CART offers far better results than C-Print but requires trained stenographers.
Some drawbacks of CART are that the transcription lags, so sometimes the speaker will ask “Any questions?” but I and other users can’t read this until the speaker is well into the next topic. Awkward, but eventually the group will get used to you butting in late. CART also can be challenging with technical words in academic settings. Optimally, all the technical vocabulary is pre-loaded, which involves sending material to the captionist ahead of time for the topics likely to be discussed. Easy-peasy? Not so fast! For administrative meetings of over 10 people, I don’t always know in advance where the discussion will take us. Like jazz musicians, academics enjoy straying from meeting agendas. For research presentations, most of us work on and tweak our talks up until our presentation. So getting advance access to materials for a departmental speaker can be… challenging.
Craptions: These are machine-produced auto-captions that use basic speech-recognition software. Where can you find these less-than-ideal captions? Many YouTube videos and Skype use them. We call them ‘crap’tions because of their typical quality. Craptions can do an okay job if the language is clear and simple, but for academic settings, auto-craptions based on basic speech-recognition software are pretty much useless.
The picture at right shows auto-craption for a presentation at the 2018 American Geophysical Union conference about earthquakes. I know, right? Yes, the speaker was speaking in clear English… about earthquakes. The real crime of this situation is that I had requested CART ahead of time, and the conference’s ADA compliance subcontractor hired good quality professional transcriptionists. Then, the day before the conference, the CART professionals were told they were not needed. Of course, I didn’t know this and thought I was getting remote CART. By the time the craptions began showing up on my screen, it was too late to remedy the situation. No one that I talked with at the conference seemed to know anything about the decision to use craptions instead of CART; I learned all of this later directly from the CART professionals. The conference contractor figured that they could ‘save money’ by providing auto-craption instead of CART. Because of this cost-saving measure, I was unable to get adequate captioning for the two sessions of particular interest to me and for which I had requested CART. From my previous post on FM systems, you may remember that all of my sessions at that conference were in the auxiliary building where the provided FM systems didn’t work. These screw-ups meant it was a lousy meeting for me. Five months have passed since the conference, and I’m still pretty steamed. Mine is but one story; I would wager that every deaf/HoH academic can tell you other stories about material being denied to them because quality captioning was judged too expensive.
Real-time auto-caption with AI: These new programs use cloud-based machine learning that goes far beyond the stand-alone basic speech recognition of craptioning software. The quality is pretty good and shows signs of continuous improvement. Google Slides and Microsoft Office 365 PowerPoint both have this functionality. Link to a video of Google Slides auto-caption in action. You need internet access to utilize the cloud-based machine learning of these systems. One of the best features is that the lag between the spoken word and the text is very short. I speak quickly, and the captioning is able to keep up with me. Before you start singing hallelujah, keep in mind that it is not perfect. Real-time auto-captioning cannot match the accuracy of captioning or CART for transcribing technical words. And while it might get many of the words, if the captions miss one or two technical words in a sentence, then deaf/HoH viewers still miss out. Nevertheless, many audience members will benefit, even with these missing words. So we encourage all presenters to use real-time auto-captioning for every presentation. However, if a deaf/HoH person requests CART, real-time auto-captioning, even if it is pretty darn good, should never be offered as a cheaper substitute. Their accommodation requests should be honored.
An offshoot of real-time auto-captioning with AI is the category of apps that work on your phone. Android phones now have a Google app (Live Transcribe) that utilizes the same speech-recognition power used in Google Slides. Another app that works on multiple phone platforms is Ava. I don’t have an Android phone and have only tried Ava in a few situations. It seems to do okay if the phone is close to the speaker, which might work for small conversations but poses problems for meetings of more than three people or academic presentations. Yes, I could put my phone up by the speaker, but then I can’t see the captions. So yeah, no.
What are your experiences with accessing effective captions in academic settings? Have you used remote captioning with success? For example, recently, I figured out that I can harness google slides real time auto-caption to watch webinars by overlapping two browser windows. For the first time, I can understand a webinar. I’ve got a lot of webinars to catch up on! Tell us what has worked (or not worked) for you.
You’re wearing your hearing aids, sitting at a conference presentation, feeling confident that you’re understanding what’s going on, when it happens. The audience reacts to something the speaker said, and you have no idea why. Until then, you’d thought that you were grasping enough of the presentation, but you’ve clearly missed something good. Reality check: your hearing aids might be good but you still can’t hear like a hearing person. I’ve been there. And I’ve found that when I’ve been able to get a good FM system set up at conferences, I can catch a lot more of the speaker’s remarks and subsequent discussions than when I try and go it alone with just my hearing aids. Getting FM systems to work effectively, however, can sometimes challenge even the most intrepid academic. So I thought that I would share what I’ve learned through several decades of requesting and using FM systems at conferences. I’ve occasionally used Real-Time Captioning (CART) and ASL interpreters at conferences, but someone more expert should post about those.
What is an FM system?
Frequency Modulation (FM) systems involve a paired transmitter and receiver that provide additional amplification to either a headset or, even better, directly to our hearing aids. That additional amplification can be invaluable in some difficult-to-hear situations. The audio signal is transmitted via waves within a narrow range of the FM spectrum—yup, the same as non-satellite radio. FM systems are sometimes called Assistive Listening Devices (ALDs). At conferences these systems can help by amplifying speakers’ remarks, audience questions, and ensuing discussions, as well as elevating conversations around posters above background noise.
Requesting FM systems at large conferences in the US
Because of the Americans with Disabilities Act (ADA), large conferences in the US will have a box to check on the registration page to request accommodation. If they provide an open response box, I typically write:
“I need an FM system with 60 decibels of undistorted gain. The system should have a neckloop or induction earhooks that work with the telecoil in my hearing aids. Headsets are not compatible with my hearing aids.”
Through years of bad experiences, I’ve learned to provide very specific instructions.
Although I provide these specifics, I am often disappointed when I arrive at the conference center. Many conference FM systems are pretty weak and provide only a small amount of clear amplification (maybe 15–20 dB). This might be okay for someone who has a mild hearing loss—such as someone with recently acquired loss—but it is pretty useless for me. At other conferences, such as the 2017 American Geophysical Union meeting, I’m offered a setup like the one in the photo at right.
Me: These are not compatible with hearing aids
Clueless but earnest conference liaison: Oh yes, they are! You just put the headset over your ears.
Me: Um no. I use behind-the-ear hearing aids and my microphones are behind my ears. This is why I specifically requested a neckloop to directly communicate with the telecoil in my hearing aids.
Clueless but earnest conference liaison: A what?
Clueless but earnest conference liaison: Oh. Well, why don’t you just take your hearing aids out and use the headset instead?
Me: Umm no. My hearing aids are tuned for my particular frequency spectrum of hearing loss. I asked for 60 decibels gain for the system to boost above what my hearing aids offer and to compensate for people speaking softly, people not speaking directly into the microphone. . . That sort of thing.
Clueless but earnest conference liaison: Huh. Well, we don’t have anything like that.
After such unfruitful conversations I usually begin sorting out my own accommodations with my personal FM system (more on that in a bit). The few times that I’ve pushed for conferences or their sites to find a neckloop or a stronger FM system, I’ve never had success. For example, at one conference, a team of six technicians met with me to tell me that there was not a single induction neckloop to be had in the entire city of New Orleans—their hands were tied. Sure.
Warning about accommodation requests: Although conferences are becoming more responsive, I’ve found that about a third of the time, my requests on the registration forms are ignored. I never hear back from the conference, and when I show up they have no idea what I’m talking about. So as part of my conference prep, I now contact them about a month before the meeting if I haven’t received notification. I also budget an extra hour or two when I first arrive at the conference to sort out the accommodations.
Paired FM systems versus direct wired rooms
With paired FM systems, one transmitter is paired to one receiver that you carry with you. The transmitter must be set up in the conference room in advance of the session and is usually patched into the sound system so that your receiver picks up signals directly from the room’s microphones. In order to set this up, large conferences need to know which sessions you will attend several weeks ahead of time. This means that you can’t pop from one session to another as our hearing peers might do at large conferences. Also, if two HoH people want to attend the same session, the room may need to have two transmitters patched into the sound system.
Newer (or newly renovated since 2012) convention centers in the US and UK may have built-in transmitters throughout the convention hall. This means that you can take any receiver into any room and instantly get amplification without setting things up ahead of time. This flexibility is quite nice! The picture at right shows a charging rack of FM headsets and induction loops for the Washington DC Convention Center. I was really looking forward to using those at the 2018 AGU meeting, but unfortunately, all the sessions in my discipline were in the Marriott hotel next door and the system didn’t work at all there.
Small conferences and meetings outside of the US
For small conferences, as well as meetings outside of the US where the ADA is not in effect, I bring my personal FM system. At the top of this post are pictures of the FM system that I first started using around 1994 (left) and my current outdated fourteen-year-old system (middle). I can’t get this set repaired anymore, so I’m going to get a new one like the one on the right. One benefit of personal systems over conference-provided systems is that personal systems are more powerful. My first FM system had audio boots that hooked directly to my hearing aids (left picture), which reduced the signal degradation that can happen with neckloops (middle image).
At small conferences, I put my transmitter at the lectern before each session to help me catch more of the speaker’s presentation. Alas, this doesn’t help with questions and discussions, which can be a large challenge. At some conferences where microphones are used for questions and discussions, I ask the AV crew to patch my transmitter into the sound system. Right is a picture of all the different adaptors that I bring with me to ensure that my transmitter will work with the venue’s sound system. Some of these may be outdated.
While patching my transmitter into the sound system has worked very well in the past, I’ve had problems lately. Maybe sound systems have become more fussy about patching in auxiliary outputs. I am also not sure whether the newest FM systems, which use Bluetooth rather than FM signal, even have input jacks. Another hack that I came up with is to put my transmitter in front of a speaker (the photo at left is my transmitter taped to a microphone pole in front of a speaker stand at the 2018 Southern California Earthquake Center annual meeting). This hack allowed me to access the presentations and discussions that used microphones.
FM systems in poster halls
If the poster hall is crowded, you can aim the microphone of the FM system transmitter towards any speaker to elevate their voice above the background noise. This approach has worked well for me when using my own FM system. Note that the systems provided by convention centers are not mobile; it is best to bring your own to use in poster halls.
FM systems are expensive (~US$1000 – $4000), and like hearing aids, are often not covered by US health insurance. Full-time students in the US are eligible for personal FM systems through vocational rehab (degree of coverage depends on income). Many audiologists may not be aware of this (my own weren’t!), but check with the disability office at your university and they can hook you up with your state’s vocational rehab office. These FM systems are worth getting before you graduate! Some employers do purchase FM systems for their workers because they can be critical for career success; however, I’ve yet to meet an academic who has successfully negotiated an FM system from their employer (and would love to hear if you have). While insurance didn’t cover my last FM system, I was able to use a health spending account through my employer that saved me from paying taxes on the device. It is my understanding that outside of the US, personal FM systems are nearly always paid for out of pocket.
Why am I so pushy?
Since I end up using my personal FM system most of the time at large conferences, you might wonder why I keep requesting accommodations. I do so because I want the conference center to know that we are here. I want them to know that deaf/HoH academics should be considered when they are planning their meetings and ADA accommodations. If we don’t make waves, they will believe that the level of accommodation currently offered is satisfactory. I’ve heard too many stories of older academics who stop attending conferences because of declining hearing, and younger HoH academics discouraged from academic careers because of the difficulty of networking at large conferences. We owe it to ourselves and our community to be counted, advocate for flexible, effective amplification systems, and share our successful strategies.
Is my experience consistent with your own? What successful strategies have you used for FM systems at conferences?
I wrote this post on an airplane coming back from an international conference I attended in Thailand. Because of the distance involved, participation at this meeting was pretty light on scientists from North and South America, but had a lot of participants from Europe (primarily the Netherlands, France, Spain, and Belgium) and Asia (primarily Thailand, China, Japan, Taiwan, but several other countries too). It was a wonderful conference: great venue, warm hosts, cutting-edge talks, great food, new people to meet, and some fun sightseeing thrown in. It also brought with it the usual challenges of trying to understand talks and poster presentations and network with important scientists in noisy settings. But this conference also brought home a specific problem that has stymied me throughout my career: understanding unfamiliar accents.
Deaf/HoH academics who depend on oral communication will likely be familiar with the problem that, even in an optimal hearing environment, we better understand those who speak like we do. Unfamiliar or “foreign” is relative, of course. I speak English and Spanish, but, due to the particularities of my upbringing, my best shot at hearing/understanding Spanish is with people who speak Colombian Spanish, or even more so, the version of Colombian Spanish spoken in and around Bogotá (indeed, that is the accent I speak with – immediately recognizable to most Latin Americans). My Argentinean and Mexican friends can attest to how obnoxious I can be in asking them to repeat themselves. Likewise, for English, I fare best with a northern Midwestern US type of English; Australian, British, Indian, and many other accents will leave me floundering. I imagine that the same is true for other deaf/HoH academics, but with whichever version of their language they are most used to.
Scholarly research, of course, is a global venture, and it is wonderful that many luminaries in my field hail from around the world. I’m already incredibly lucky that most professional communication is conducted in English, a language I happen to know. But, while hearing people can be quite understanding of my communication difficulties in suboptimal environments, it seems cruel (and professionally unwise) to tell colleagues that I can’t ‘hear’ them because of their accents—especially because many such colleagues have worked hard to acquire their English skills, thus going the extra mile to ensure communication. Because of globalism, the problem with understanding unfamiliar accents goes beyond conferences and professional networking. Many of my undergraduate and graduate students are also from various international locations. I am heartbroken every time I feel that my difficulty understanding my students negatively affects my ability to mentor them.
I have not found ideal strategies to deal with the challenges of unfamiliar accents. Every accent becomes a little more familiar with constant exposure, so I do understand my graduate students (with whom I communicate almost daily) better as time goes by. But it never stops being a challenge, and I sometimes have to resort to written communication in our one-on-one meetings. Since the undergraduates I teach change each semester, I don’t have similar opportunities to become familiar with their accents. For conferences and professional networking, I imagine that real-time captioning would be the ideal solution; but such a resource is not available at all conferences (though it should be!) and is generally not an option for networking. I’ve been excited by the recent advances in speech recognition software, such as that demonstrated by Google Slides, and wonder both if the technology can accommodate a range of accents and, if so, if it could ever become a portable “translator” for deaf/HoH individuals (I know some portable translator apps exist, but haven’t tried them and don’t know the scope of their utility; perhaps some readers can share their experiences?). I’m also curious whether unfamiliar accents are ever a challenge for deaf/HoH academics who rely on sign language interpreters. What other strategies have deaf/HoH academics employed to help navigate the challenge of unfamiliar accents in a professional setting?
Just like their non-Deaf colleagues, Deaf academics teach students, discuss and present their research, attend various professional meetings, and give media interviews. Communicating and sharing knowledge with others is a critical part of academia. However, not everyone has had experience communicating with somebody using sign language, and many non-signers are unfamiliar with the protocols of working with ASL-English interpreters. Ashley Campbell, the staff ASL-English interpreter at Saint Mary’s University in Halifax, Nova Scotia, and Linda Campbell, a Senior Research Fellow of Environmental Science at Saint Mary’s, have put together a rich set of resources: a series of tip sheets on how best to work with interpreters in various academic scenarios. By sharing these resources with The Mind Hears, Ashley and Linda provide quick reference tools that will simultaneously educate and lessen any stress around facilitating communication through interpreters. Though originally written to facilitate ASL-English communications, these tip sheets can be applied to any settings that incorporate signed language-spoken language communications.
Do you have ideas on further tip sheets to add to this resource? Are there other recommendations that you would add to the existing tip sheets? Please let us know what strategies you have found useful in educating non-signers, and help Ashley and Linda expand the reach and utility of the resources they have created. Write to Ashley at Ashley.N.Campbell@smu.ca or share your thoughts in the comments below.
Since 2015 I have been the staff ASL-English interpreter within the Faculty of Science at Saint Mary’s University in Halifax, Canada. My first exposure to sign language was in Belleville, Ontario where I lived for a short period early in life. Many years later I took ASL night classes for enjoyment and through learning the language and culture I became interested in studying it more formally. I graduated from an interpreting training program in 2010 and along with interpreting have volunteered for both provincial and national interpreting association boards. I have a passion for sharing knowledge with the mentality of “each one, teach one”. When I’m not working I am a mom to a very active toddler, cooking feasts for my family, and enjoying the odd Netflix program.
Dr. Campbell is a Professor and a Senior Research Fellow at Saint Mary’s University in Halifax. She moved to Halifax from a Canada Research Chair (Tier II) faculty position at Queen’s University in Kingston. Her research and teaching at Saint Mary’s University focus on contaminants in the environment and on sustainability / resilience issues, with emphasis on aquatic ecosystems and water resources. Currently, Dr. Campbell’s research group is examining environmental contaminants across the Maritimes and around the world, with projects looking at impacts of legacy gold mine tailings from the 1800’s and contaminant transfer in aquatic food webs, birds, bats and humans.
The new year brings a fresh start to our lives; it’s a natural time to reflect on the year past and make plans for the coming year. For your new year, why not work towards making your academic workplace more accessible for your deaf/HoH colleagues? To help in this effort, we’ve assembled a list of guidelines that might improve your workplace’s inclusivity.
Ideas on this list come from a variety of sources but primarily our own experiences. Would you like to add to or revise the list? We welcome your comments and suggestions directly to the linked google doc. We will endeavor to update the list posted below as we collect more comments and suggestions on the google doc. If you find that you want to explore a topic in more detail, we encourage you to write a blog post for The Mind Hears—we will link your post to this list.
What can you do to improve the academic workplace for your deaf and hard-of-hearing colleagues?
Overarching philosophy: If a participant requests accommodation for a presentation or meeting, work with them to figure out the best solution. It may be signed interpreters (there are different kinds of signing), oral interpreters, CART (Communication Access Realtime Translation), or FM systems (assistive listening devices). It could be rearranging the room or modifying the way that the meeting is run. Keep in mind that what works for one deaf/HoH person may not work for another person with similar deafness. What works for someone in one situation may not work at all for that same person in another situation, even if the situations seem similar to you. The best solution will probably not be the first approach that you try, nor will it necessarily be the quickest or cheapest; it will be the one that allows your deaf and hard-of-hearing colleagues to participate fully and contribute to the discussion. An academic workplace accessible to deaf/HoH academics is a journey, not a destination; we’ve assembled this list to capture just a few tools that can help along the way.
Leave sufficient lights on in the room so that the speaker’s face and interpreters (if present) can be seen.
Have presenters use a microphone when one is available; do not let them assume they don’t need amplification. The same goes for audience questions.
Note: check that the microphone system works well before the presentation. A bad microphone system can be worse than none at all.
Have presenters repeat questions from the audience before answering.
If the presenter is deaf/HoH, the convener/host should be ready to repeat audience questions.
Encourage all presenters to use real-time auto-captioning with Google Slides or Microsoft’s Presentation Translator add-on for PowerPoint (Windows only at this point). At the end of January 2019, Microsoft Office 365 will include built-in real-time auto-captioning. Our experience is that the AI in these programs far outperforms typical voice recognition software and has less lag than CART.
If speakers are using videos, encourage them either to turn on captioning for the videos (CC button, usually on lower right) or eliminate use of videos without captioning.
If CART services are provided for deaf/HoH participants, consider projecting the captioning onto a screen so that all in the room can benefit.
When available, use “looped” rooms for presentations (indicated by the symbol at right), which allow users of hearing aids and cochlear implants with telecoil functionality to access amplified sound directly.
In the UK and US (2010 update to the Americans with Disabilities Act), loop systems are mandated by law for any public venues that have amplified sound (summary of US regulation). However, our experience in the US is that few universities have such rooms for meetings and departmental presentations. In contrast, some of us have noticed that in the UK virtually all public institutions (even grocery stores!) have loop systems, but they are almost never turned on. It may be wise to notify the hosts two or more days in advance to make sure the loop system is powered up and turned on.
Some hearing aids and cochlear implants may not transmit looped sound and ambient sound at the same time, so don’t bother chatting with your deaf/HoH neighbor during the main presentation!
Meetings > 10 people (e.g. faculty meetings)
Start the meeting with a communication check: “Is this communication setup working for everyone?”
As much as is possible with a large group, have all participants sit around a table or set of tables so that they face each other.
CART services can be helpful for meetings where multiple people are speaking. We have found that having a CART captionist in the room works better than working with a remote captionist; having several microphones in the room does not always provide clear access to the speakers for the remote captionist.
Passing a microphone from speaker to speaker, with the signal transmitted to a computer, can help CART or one of the better-quality real-time auto-captioning programs.
An FM system (assistive listening device) can help for such meetings. Using the microphone/transmitter as a ‘talking stick’ ensures that all conversations are amplified. You can also place an omni-directional FM microphone (or, even better, two) in the center of the room to catch conversation around the group.
Note: Unlike looped rooms, FM systems work with specific hearing aids or cochlear implants, so if the meeting has more than one deaf/HoH person using FM, the technology issues can become complex.
If conversation devolves to rapid interjections, the discussion leader should rein in the conversation and recap what was discussed.
For quick conversations, signaling the next speaker, for example by raising a hand, can help deaf/HoH participants know where to look next for speech reading. Hearing aids and cochlear implants are notoriously bad for directionality of sound, and some of us only detect sound on one side. While interpreters strive to indicate who is talking throughout conversations, visual signaling can help us track the conversation.
The meeting organizer should check in periodically to ensure that the communication environment is working for everyone.
A written summary distributed afterwards to meeting participants can ensure that everyone has the same information.
Meetings < 10 people (e.g. committee meetings)
A lot of the strategies for larger meetings also work for small meetings. The following notes are specific to smaller groups.
Have participants sit in a circle, so that all faces are visible for speech reading. Conference rooms with long narrow tables can be challenging.
Use the smallest room possible that still accommodates the size of the meeting.
Encourage meetings in rooms with minimized resonance; rooms with carpet and soundproof walls are better listening environments. Also, avoid rooms with a lot of external noise (e.g., busy roads or construction).
Use rooms with window treatments that can be adjusted to reduce glare so that speakers are not backlit.
Discourage people from talking over one another in meetings.
Check in periodically to ensure that the communication environment is working for everyone.
Deaf/HoH people are notoriously bad at catching jokes, as quips are typically made more quickly than we can track the conversation. It can be helpful to repeat jokes for your deaf/HoH neighbor.
Face people while conversing.
If the deaf/HoH person is using a signing or oral interpreter, direct all conversation to the deaf/HoH person, not the interpreter.
When you’re in a noisy room, speaking with cupped hands directly into the deaf/HoH person’s ear won’t work, because in that situation we can’t speech read your face. Similarly, we cannot understand whispering behind a cupped hand or into our ears. Come to think of it, whispering hardly ever works because the sound and speech reading information are so distorted—best avoided altogether.
Avoid covering your mouth. If you are chewing, please wait to speak until you are done chewing. Also avoid blocking visibility of your mouth with your cup at gatherings with coffee/tea.
If a deaf/HoH person asks for repetition, please repeat as closely as possible what you just said. Sometimes we hear part but not all. If you change up the words to reframe what you said, we are back to square one.
If a deaf/HoH person asks for repetition/clarification, never say “Oh, it’s not important.” This conveys that you don’t value their participation in the conversation. So even if you think your comment is not worth repeating, please repeat yourself to avoid excluding your colleague.
When a deaf/HoH person joins the conversation, it’s helpful to give them a little recap of the current topic of discussion.
Hearing aid and cochlear implant batteries go dead at the most inopportune times. Most of us go through one or more batteries per aid each week; the chances of this happening while we are in a presentation, meeting, or conversation are quite high. As we search for and replace the fiddly batteries, please keep in mind that we are missing out on the conversation.
Incidental conversations (e.g. passing in the hallway)
When greeting your deaf/HoH colleagues in passing, give a wave. We may not hear a quiet greeting.
To get the attention of deaf/HoH colleagues, waving your hand where we can see it is more pleasant than being shouted at.
Not all deaf/HoH people wear hearing aids throughout their workday. Some of us enjoy periodically being able to take out our hearing aids or turn off our cochlear implants and focus in the quiet.
Our communication skills can vary with fatigue level. Speech reading is cognitively taxing, so after a few hours of teaching and meetings with spoken conversation, we may avoid all conversations or switch from speaking to signing.
Not all deaf/HoH people speech read. Some of us rely on writing notes or using voice recognition apps on our mobile devices; you may be asked to communicate using modes unfamiliar to you. A motto of the deaf community: use whatever form of communication works.
Not all people are easy to speech read. People with facial hair and people who either don’t move their lips/face or over-enunciate can be very difficult to understand. Some of us hear high-pitched voices better than low, and some of us hear low voices better. For better or for worse, many of us avoid conversations with people we don’t understand, even though they may be wonderful people. It is not rude to ask if you are easy to understand and how you could be better understood.
You may have noticed that all of these considerations not only increase access for deaf and hard-of-hearing colleagues but also make these situations more inclusive for all participants, such as non-native English speakers. Some of these strategies ensure that the loudest in the group doesn’t monopolize conversation and allow space for less confident participants. If you make your workplace more accessible for your deaf and hard-of-hearing colleagues, you will make a more accessible workplace for everyone.
In May I received the outstanding researcher award from the College of Natural Sciences at UMass Amherst. This was a great honor and I even got to give a 3-minute acceptance speech. While the speech starts with some of the challenges, the main point is that my deafness shapes my approach to science in ways that benefit my research. PhD student extraordinaire, Laura Fattaruso, made a video of me re-enacting the speech and here is the transcript:
Academic success was not always expected of me. I have a severe-profound high-frequency hearing loss and was language delayed in my early education. The letters on the page don’t match the sounds that I hear so it took until 2nd grade for me to figure out the basics of reading. I also had years of speech therapy to learn how to pronounce sounds that I can’t hear. Just before middle school, some visual-based aptitude tests showed I actually had some talent and I also started to do well in math. So, then teachers started expecting more of me and as you probably figured out, I caught up well enough.
Now, as a professor at a university that serves a predominantly hearing community, my broken ears are a nuisance sometimes. But this 3-minute speech is not about overcoming challenges. Instead, I want to talk about something called <signing Deaf gain>. This sign is translated into English as Deaf gain or Deaf benefit. This term, coined by Gallaudet scholars, describes the value that Deaf and Hard-of-Hearing people provide to the larger community because of their differences. Our ecology colleagues tell us that more diverse ecological communities can better withstand stress than homogeneous communities – so too with science communities. All of our differences make CNS stronger.
Here are three examples of deaf gain in my research approach
Deaf gain 1: My way of doing research is intensely visual. My students know well that I have to show 3D concepts in the air with my hands and sketch whenever we do science. I don’t believe it until I can see it. We use the figures in our papers to tell the scientific story. In this way, my research is not about elegant verbal arguments; instead, it focuses on connections between ideas and demonstration of geologic processes.
Deaf gain 2: Deaf are known for being blunt. My students will tell you that my reviews can sometimes be painfully blunt. For deaf scientists, being understood is never taken for granted. So, we strive for clear and direct communication of our science.
Deaf gain 3: Being deaf in a hearing world requires stamina, courage, empathy, self-advocacy, a flexible neck to lip read people in the corners of the room and a sense of humor. An added benefit is being able to accessorize using blue hearing aids with blue glitter molds that match any outfit.
I’ve been lucky to have great students and colleagues who have joined me in my Deaf way of science, and we’ve had a blast. Thank you.
Do you share some of these characteristics? Are there ways that deaf/HoH gain has shaped your scholarship or research?
This semester I am teaching a large lecture course with about 175 students. I have taught this course 6 times before, with enrollment varying between 150 to 200. To be completely accurate, I only teach a third of the course, usually the first third of the semester, with two hearing faculty leading the other portions. Of course, teaching even a third of a course represents a challenge when your hearing is as crappy as mine. Therefore, my top priority for this class is ensuring that the students and I can communicate effectively (I speech-read and don’t sign). How do I do it? And does it work?
Like much of my professional life, the answer to the question “does it work?” shifts frequently. Some days I come out of class thinking I’ve nailed it and given students the educational experience they deserve. Other days, not so much. But, for better or worse, here is what I do:
I start out by making a very explicit announcement about being deaf/HoH the first day of class. I love the language that Michele used in her recent post about announcing your deafness to your class, and am thinking of borrowing some of this language next semester. Besides giving students tips on how best to communicate with me, my main preoccupation this first day is to emphasize that my deafness should not in any way scare them from asking questions, as I will work hard to ensure our communication. In a class this size, I am not always 100% sure I am getting this message across, but I try.
The second thing I started doing 3 or 4 years ago is using clickers. This classroom response system allows students to use handheld remotes to choose from alternative answers to a question I have posed, and I can assess their understanding in real-time. For me, this opportunity to interact with ALL students in my very large class, bypassing the usual difficulties of oral communication, is a radical departure from the usual state of affairs. I really like clickers, and love not having to dread the very solid silence that sometimes followed my lobbing a question to the class, while vainly hoping that an individual would venture an answer. However, clicker questions only go in one direction; they are no substitute for class discussion or questions asked by students.
So the final frontier—answering students’ questions! Large classes are, by their very nature, less interactive than smaller ones, as students are much more reticent about speaking out. I will here make a shameful confession in the era of “active learning” buzzwords—I derive some amount of comfort (or at least a decrease in anxiety) from knowing that a large class means fewer questions for me. Of course, questions still get asked, so the problems remain (and what serious instructor would prefer that their students ask fewer questions?!).
Walking up to students when they ask a question is not really an option in this course. I teach in auditorium-style classrooms and there is no way to get close to a student sitting in the middle of a row. What I have been doing instead is getting myself a student translator. I don’t have a TA, so I designate somebody in the class, ideally seated in the first row, to repeat questions for me. I have tried a few different student translator strategies. One semester I hired a work-study student to perform this role. The student was not a biology major and struggled mightily with the scientific vocabulary in the class—which meant that I struggled to understand the questions. I chalked this up as one of my not-so-good semesters. Another semester I asked a different student in the course to play the translator role each class period (in the interest of not overburdening anybody); this led to a lot of re-explaining of what I needed at the start of each class, which in turn led to awkwardness. Most semesters what I’ve done is ask two students—one for each side of the room—at the beginning of the semester if they are willing to play this role.
In general, things worked better once I started asking enrolled students for help, as students immersed in the class are very capable of understanding their classmates’ questions. A nice consequence is that most students feel surprised and elated to be asked to perform the translator role (that said, a few students have turned me down). Yet each year I find myself re-evaluating what I do. There can be (and have been for me) hiccups with this approach. For example, a designated student may miss class, leaving you without a translator. Or, students’ unease about speaking up in large classes might result in your designated translator whispering, and now you have TWO students you can’t understand; to work around this, I have occasionally fitted my student translator with a directional mic that my FM system can pick up, but have found the amplified sound of notebook pages being turned too overwhelming. Finally, there is that constant whispering doubt: is it fair to ask a student to perform this extra bit of work for me?
You will notice an underlying thread to these strategies. At no point have I asked my university or department for help (though I should clarify that my department contributed to the work-study hire I once tried). Why not? Hmm, this sounds like material for another blog post. What I’m doing seems, for the most part, to be working for me so far. But there is room for improvement. I would be thrilled to hear from other deaf/HoH instructors about the strategies used to manage large classes.