white young man with helmet on Stromboli volcano with gas steam in background and rocky landscape

Profile: Dr. Oliver Lamb

NRC Research Associate, University of North Carolina at Chapel Hill, USA

Field of expertise: Geophysical monitoring of natural phenomena, with a particular interest in volcanoes.

Years of experience (since start of PhD): Nearly 6.

Describe your hearing: Moderately hard of hearing in both ears, since birth. I have worn hearing aids since the age of 4.

Education/Background

I grew up in a small village in Wales about 10 minutes' drive out of the university town of Aberystwyth. My first school was taught completely in Welsh. This might sound daunting for a young HoH kid from an English-speaking family, but looking back I don't think I was really fazed by it. It helped to have supportive and understanding teachers and a teacher's assistant, always happy to repeat things for me (thank you Mrs. Fuller!). My lucky streak with helpful teachers continued into my English secondary school (ages 11-18), where I never really had any major issues with my hearing. Obviously, there were plenty of noisy classes, but the vast majority of teachers were able to keep them relatively quiet when necessary.

I cannot remember meeting any other deaf or HoH kid my age, but I also don’t remember ever feeling like I was the ‘weird kid’. I guess growing up with a father and grandfather who also wore hearing aids probably made me feel somewhat ‘normal’.

white man in bright sunshine (hat and sunglasses) on a rocky volcano barren of vegetation

How did you get to where you are?

I've always had an interest (some might call it a passion) in volcanoes. I was lucky to visit Mt. St. Helens when my family visited the US when I was 8. It wasn't until I spoke to a careers advisor a few years later that I realized I could work on volcanoes as a career. From then on, I was determined to study geology. After school, I went to the University of Oxford, where I got a Masters in Earth Sciences (the equivalent of a bachelor's plus a master's degree). From there I was fortunate to land at the University of Liverpool to pursue a PhD. There, I worked on a project looking at seismic and infrasonic data from various lava dome eruptions around the world, and also carried out a little experimental work with acoustic emissions in the lab. After graduating in 2017, I was eager to take up whatever postdoctoral job offer might come my way, and that is how I ended up in North Carolina, USA, in early 2018.

What is your typical day like?

It's hard to say I have a typical day, really. Each day brings a new task or challenge, whether it is coding a new way to analyze or plot my data, preparing equipment for fieldwork, or writing and editing an article or grant application. Most of my days are spent writing either code or scientific articles. The coding can be tedious and frustrating, but it is such a nice feeling when it works and you get a great result and plot at the end. Article writing has also been very difficult (I always struggle with the discussion section), but I think I'm slowly getting the hang of it. When I am not writing, I am probably grappling with some equipment or instrument, attending various seminars, or preparing for an upcoming conference.

The days spent on fieldwork are probably the most special though. I have been fortunate enough to travel around the world to places such as Mexico, Guatemala, Italy, and Chile. The preparation for fieldwork can be very intense, because lots of different arrangements have to be made in a short amount of time and in the right order. When I'm in the field, I am typically running around digging holes to put seismometers into the ground and/or leaving infrasound microphones to listen for very low-frequency noises around the volcano. It can be exhausting, back-breaking work (each station might include 30-40 kilos of equipment and batteries), but it is well worth it when you get a chance to work in some truly spectacular landscapes. Also, it is a huge privilege for me to meet the many wonderful and generous people around the world, most of whom share a passion for what I do.

What is the biggest professional challenge (as educator or researcher)? How do you mitigate this challenge?

Managing the workload and the high level of anxiety that comes with it. I do my best to keep on top of projects, but I often find myself neglecting tasks that I told myself I would do weeks or months before. It was, and still is, a huge learning curve for me since starting my postdoctoral career, because of the greater responsibilities I suddenly faced. These include managing projects, producing articles, writing grant proposals, rewriting grant proposals, managing equipment for fieldwork, preparing for fieldwork, checking there's enough money in the budget for fieldwork, managing administrative bureaucracy, rewriting grant proposals again, and data management, all with one eye on a somewhat unclear future, because I have no clear idea of where or when the next postdoctoral or lecturing position might become available. The anxiety has been very difficult to manage at times, but I am grateful that I have had, and still have, a network of colleagues, friends, and family that I can talk to.

What is an example of accommodation that you either use or would like to use in your current job?

This is not really a specific accommodation, but one of my biggest and most frequent frustrations is having to deal with terrible acoustics during talks, whether at department seminars or at conferences. More speakers need to learn how to project their voice to the whole room, with or without a microphone. I should not have to sit in the first few rows to hear you. Similarly, it's an instant turn-off for me whenever a speaker at a conference declines to use the provided microphone just so they can move around the room or stage. There's no point in trying to seem like a dynamic TED speaker when I (and I suspect a lot of other people in the same room) can't even hear what you're saying.

Having said that, it would be great to have a personal microphone/transmitter kit that I could bring to talks to help me hear the speaker. Something that doesn't disturb the speaker and/or other attendees and transmits the speaker's voice directly to my ears via Bluetooth or telecoil. I have seen a few systems like this out there, but none of them seemed to have quite the right specifications for me to consider using them. I'm all ears to anyone who might have suggestions for something like this.

What advice would you give your former self?

Be more confident in your own abilities, but don't be afraid to ask for help on even the most menial things.

Any funny stories you want to share?

This might not be a funny story for some people, but what ho. I am a pretty heavy sleeper and have slept through lots of noisy situations. However, when camping next to active volcanoes I have been woken up by large noisy rockfalls (in Mexico) and by a particularly large explosion (in Guatemala). It's good to know my ears are at least tuned to noises that I definitely need to pay attention to!

The sound we can see: working with hearing loss in the field

When I was 19, I went for a checkup with an audiologist and found out that I was hearing only 90% of what I should be. The doctor said that for my age this was a high level of hearing loss, and attributed it possibly to the intense course of antibiotics I took for kidney failure when I was one year old. He suggested that I come back yearly to repeat the hearing exam, to verify whether my hearing had decreased further. Of course, I ignored this advice and never went back. When I started my graduate studies six years later, I decided it was finally time to visit the audiologist again, because I discovered that I could not hear the species of frog I had decided to base my research on. This was a very scary moment for me. How did I find myself in this situation?

In the last year of my undergraduate studies I took an ecology course and fell in love with the topic. I knew I wanted to earn a master’s degree in ecology, ideally working with animal populations. In Brazil, one has to take a standardized exam to enter a graduate program. I traveled 440 km to take the test and passed; I began my studies in the Federal University of Paraná located in Curitiba, in the south of Brazil. Among all the available mentors, there was one who carried out research on ecological dynamics of insects and anuran amphibians. I chose his lab and wrote a project proposal examining the population dynamics of an endemic species of stream frog (Hylodes heyeri) in the Atlantic Forest in Brazil, specifically Pico do Marumbi State Park, Piraquara, in the state of Paraná. Much of what I was to be doing was completely new to me: I had never worked with frogs and I also had never practiced the mark-and-recapture method. I thus faced a steep learning curve and had to learn a LOT about lab and fieldwork from my team and my mentor. In my first field outing, during which I was to learn how to identify and capture the species I would study, I discovered that I could not hear the frog. A labmate who accompanied me to the field said, “Are you listening? The frog is so close to us.” He thought I was not hearing the frog due to lack of experience, or because of the background noise of the stream. I worried that something else was amiss, and this finally prompted me to go back to my audiologist. There, I discovered that I had lost 2% more of my hearing, and this loss compromised treble sounds, those in the range of high to very high frequencies, precisely overlapping my frog’s vocalizations.

Now, I’m a PhD student and I use hearing aids programmed specifically for my hearing loss, which primarily encompasses frequencies above 4000 Hz. I was initially ashamed to wear hearing aids because people mocked them. But I didn’t consider changing projects, because I knew I could get help localizing the frog. I also knew there would be ways for me to analyze the sound without necessarily hearing it. Even with hearing aids, however, I can only hear the call of my frog when I am no more than 4 meters away. Other members of my lab can detect the sound of the frog from much farther away, even when they are 20 meters or more from the stream. This means that for every survey I carry out in the field, I need a person to accompany me to guide me to the frog, using their sense of hearing to identify the sound. But the assistance I receive in the field goes beyond locating my frog; the field can be dangerous for many reasons: I may not hear dangerous animals—such as puma, collared peccary, or leopards—approaching; and I may lose track of my team if people call me from too far away. Even for scientists without hearing loss, it is advisable not to carry out fieldwork alone.

In recent years, I have had the opportunity to learn Brazilian sign language (LIBRAS) in graduate courses. I am happy that it is a requirement for my degree! When I am in the field I communicate primarily with gestures. I am lucky that my frogs are diurnal, because I am able to see my companions in the field, making communication much easier. Once my companion hears the frog, they look at me so I can read their lips or we make gestures so as to not scare the frogs. Sometimes I use headphones, point the microphone of my recorder in the general direction of the frog, and increase the volume to better understand where the sound comes from—this trick of using my main research tool (my recorder) to find my frogs was taught to me by a friend who also carried out research in bioacoustics and had the challenge of finding a tiny mountain frog species that hid in leaf-litter (thank you, André Confetti). My frogs are also tiny, only 4 cm long. They camouflage in the streams and spook very easily, but in order to obtain my data, I need to get as close as 50 cm from the frog. Only then can I really start. The aim of my work is to analyze the effect of anthropogenic noise (such as traffic road sounds transmitted by playback) on frog communication. Once I am in position, I can play the anthropogenic sound, and record the frog’s call. I take these recordings back to the lab and experience the most rewarding aspect of my efforts to find these frogs. The recordings are transformed into graphs of the frequency and length of each call. Although I cannot hear the sounds my frog makes, I can see them! After seeing the sound I can analyze several call variables and calculate various statistics.
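The idea behind "seeing the sound" can be sketched in a few lines of plain Python. This toy example is purely illustrative (a synthetic tone, made-up numbers, no real frog recordings): it generates a 4 kHz "call", standing in for a high-frequency vocalization above a hearing cutoff, and recovers its frequency from the spectrum rather than from the ear.

```python
import cmath
import math

# Toy illustration of "seeing" a sound: recover the dominant frequency of a
# synthetic 4 kHz tone (standing in for a high-frequency call) from its spectrum.
# All values here are illustrative, not real field data.

RATE = 16_000    # samples per second
N = 512          # analysis window length
CALL_HZ = 4_000  # synthetic "call" frequency, in the hard-to-hear treble range

samples = [math.sin(2 * math.pi * CALL_HZ * n / RATE) for n in range(N)]

def dominant_frequency(x, rate):
    """Return the frequency (Hz) of the strongest bin of a naive DFT."""
    n = len(x)
    mags = []
    for k in range(n // 2):  # only bins below the Nyquist frequency
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    peak = max(range(len(mags)), key=mags.__getitem__)
    return peak * rate / n

print(dominant_frequency(samples, RATE))  # 4000.0
```

In practice, bioacoustics software computes many short transforms like this over time and displays them as a spectrogram, so call frequency and duration can be read straight off the plot.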

Would I recommend fieldwork such as mine to somebody who finds themselves in my predicament? If you are open to creative workarounds, such fieldwork is possible for all. Having a field companion, using signs to communicate, and making use of the amplification provided by my recording equipment have solved the majority of my problems. Most important of all, having support from your mentor and from people who can help and whom you can trust is crucial. I do not intend to continue with bioacoustics research after I graduate, but if I need to mentor any students in the area, I'll be happy to do it. I worry about my hearing loss too, in thinking of how it will affect my teaching in the future, because sometimes I hear words incorrectly and confuse their meaning. But I recently disclosed my hearing loss in an interview; reading more at The Mind Hears and on other blogs has inspired me to worry less about my hearing loss and to continue to forge ahead in my career.

 

Biography: My name is Michelle Micarelli Struett and I am a doctoral candidate in the Graduate Program in Ecology and Conservation (where I also received my MS) at the Federal University of Paraná in Curitiba, Paraná, Brazil. I did my undergraduate degree at Maringá State University in Maringá, which is also in Paraná. I am interested in animal behavior, especially in frogs, and my research examines multi-modal communication in the Brazilian Torrent Frog (Hylodes heyeri). This unique frog can sing from one or both sides of its mouth (it has two vocal sacs), depending on context. I will attempt to determine what context stimulates those two possibilities (auditory, visual, or tactile), and how anthropogenic noise may interfere with communication and social interactions in this frog. Despite my hearing loss (which primarily encompasses frequencies above 4000 Hz), I have not been constrained from working with frog calls and bioacoustics.

Mandated equal opportunity hiring may not ensure equal considerations by hiring committees: A hypothetical scenario

-Ryan

Imagine that you are a deaf/hard-of-hearing (HoH) person applying for a full-time academic position at a U.S. public institution of higher learning. The position is listed nationally across multiple job boards. At the offering institution, deaf/HoH faculty, students, administrators, and staff members represent 1% of the population. You are highly qualified, with an extensive résumé of accomplishments in your field and a strong history of service. Information about you is readily available on the internet at large.

You investigate and discover that the offering department does not currently have a deaf or hard-of-hearing person among their full-time and adjunct faculty.

Applying for the position:

When applying, you check the general “YES, I have a disability” box on the institution’s application and contact the human resources (HR) department directly to let them know that you are applying specifically as a deaf/HoH person. If you are offered an interview for the position, you request, as is your right, to meet with the search committee in person, rather than have the interview over a conference call. You cross your fingers, hoping that the HR department communicates with the department offering the position to ensure that they are presenting an equal opportunity for employment for those with disabilities. Does the HR department actually communicate your request for accommodation to the academic department? You may never know but let’s say that it does in this case…

Considerations of the hiring committee:

When the academic department's search committee learns that you are deaf/HoH, how will they respond? Are they experienced in the process of interviewing a deaf or hard-of-hearing person? How many interviews have they given to deaf/HoH applicants in the past? How many of those previous applicants made it to the second or third round of the process, or were hired full-time? Where are the statistics to prove that equal opportunities are being given?

When the search committee learns of your request to meet in person for an interview because you are deaf/HoH, how aware and educated are the search committee members of Deaf culture and what it means to be deaf or hard of hearing? How aware are they of what it means to be a deaf/HoH faculty member teaching in a mainly all-hearing environment? Do they know the benefits of having a deaf or hard-of-hearing person as a part of their full-time or part-time faculty? What evidence is there within the department’s current publications, seminars, exhibitions, faculty development, and outreach efforts of awareness of the advantages brought about by workplace diversity that is inclusive of disability?

Is the typical academic faculty search committee equipped, skilled, and supportive enough to interview a deaf/HoH candidate if none of its members are deaf or hard of hearing? If they don't have deaf/HoH members, are they sufficiently trained in deaf/HoH experiences to judge your application fairly against the numerous other applicants who do not have any disabilities? Are search committees trained enough to distinguish between medical and cultural models of disability, and to understand how these models shape their perceptions of your strengths? Are they savvy enough to move away from focusing on what you can't do, and to focus instead on what your diverse perspective brings to the hiring unit?

Answers to many of the questions I ask above should be part of the public record. My experience on the job-search circuit thus far has left me disillusioned and believing that departmental search committees and HR departments are likely ill-equipped to handle deaf/HoH applicants. Studies have shown that search committees hold many implicit biases. One of these biases is the assumption that, since deafness may impede academic success, it is safer to hire a hearing applicant.

It’s time to fix this.

Have you ever been on a faculty search committee where a deaf or hard-of-hearing person applied? If so, did that person receive the position? Either way, would you like to share your experience?

Who am I at a Research Conference: the Deaf Person or the Scientist?

– Caroline

I look forward to and dread research conferences simultaneously.

I look forward to seeing my friends and colleagues, learning about new research, and exercising my neurons as I ponder different research topics and directions. I eagerly anticipate exploring the different cities and countries where the conferences are held. I long for those few days where I control my own schedule.

At the same time, I dread discovering that the provided access services are inadequate for catching the various research presentations and posters—the interpreting and/or captioning quality ranges from poor to excellent, so my odds of getting the gist of the new research hover around p > 0.05 (I know I shouldn't be using 0.05 as a baseline, my dear statistician friends). I also worry whether the quality of my research work is reflected accurately by the interpreters for my presentations.

But what I dread the most is being viewed as the deaf person, not as a scientist. At the first few conferences I attended, people would come up to me and ask questions such as, "How do you come up with signs for phytoplankton or photorespiration?" Often, they would try to strike up a conversation with the interpreter right in front of me and commiserate about how hard it must be to keep up with the scientific jargon, especially with people speaking at warp speed. These conversations were always awkward, since interpreters know they cannot have personal conversations while they are interpreting. They would look to me for guidance on how to handle the situation, since they knew the protocol, even if my colleagues did not.


I've mastered responding with a strained smile on my face, "Yes, it isn't easy. By the way, what is your research on? And do you have a poster or talk here?" Most people get the hint and are more than happy to talk about their own research. After twenty years in the field, these encounters have become less frequent, but they still occur.

Those encounters have become rarer over time because I have become more assertive about going up to other researchers to ask them about their work; but that assertiveness and confidence have come in part from my growing scientific reputation in the field of estuarine science and oceanography. Now, I suspect that if I stand around and wait for people to come talk to me, they either won't come due to fear, or they will come with the dreaded questions. I truly appreciate my colleagues who come to me to discuss science.

At academic conferences, I am a scientist first, and deaf person second.

Dr. Solomon has been a faculty member at Gallaudet since 2000. She is also an adjunct at the University of Maryland Center for Environmental Science and serves on masters and doctoral committees for research on increasing participation of deaf and hard-of-hearing people in STEM and in estuarine science, especially in the areas of nutrient and microbial dynamics.


Under-represented: Where are all the deaf and hard-of-hearing academics?

-Michele

Through working on The Mind Hears since September 2018, I've had the chance to meet some amazing deaf and hard-of-hearing scholars and researchers. Our backgrounds, areas of expertise, degrees of hearing, and jobs differ. But one very common experience for deaf/HoH academics at mainstream institutions (i.e., not at a primarily deaf/HoH university) is the lack of mentors who are deaf/HoH. This isolation drove us to start the blog. But our common experiences lead to the question: Where are all the deaf and hard-of-hearing academics?

The American Speech-Language-Hearing Association classifies degree of hearing loss on a scale of mild (26–40 dB), moderate (41–55 dB), moderately severe (56–70 dB), severe (71–90 dB), and profound (91+ dB) (ASHA). Despite these straightforward definitions, understanding the statistics on hearing loss requires nuance. While hearing tests show that many people have some degree of hearing loss, only a subset of these folks wear hearing aids or use signed language; even fewer request work accommodations. The National Institute on Deafness and Other Communication Disorders, part of the federal National Institutes of Health, reports that 14% of working-age adults (ages 20–69) have significant hearing loss (Hoffman et al., 2017). This 14% report a greater than 35 decibel threshold for hearing tones within speech frequencies in one or both ears (NIDCD). The number of people with high-frequency hearing loss is double the number with speech-range loss (Hoffman et al., 2017). However, not hearing watch alarms or computer keyboards is not considered as impactful as missing speech-range frequencies.
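The ASHA scale quoted above maps directly onto a small lookup. Here is a minimal sketch (thresholds in dB HL as listed; anything below the quoted mild range is simply reported as unclassified, since the scale above starts at 26 dB):

```python
def degree_of_hearing_loss(threshold_db):
    """Classify a hearing threshold (dB HL) using the ASHA ranges quoted above."""
    if threshold_db < 26:
        return "not classified on this scale"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 70:
        return "moderately severe"
    if threshold_db <= 90:
        return "severe"
    return "profound"

print(degree_of_hearing_loss(35))  # mild
print(degree_of_hearing_loss(65))  # moderately severe
print(degree_of_hearing_loss(95))  # profound
```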

As Figure 1 shows, the statistics on hearing loss are further complicated by age, which correlates with incidence of hearing loss. Among folks aged 60–69 years, 39% have hearing loss (Hoffman et al., 2017). Within the larger disabled community, we crips joke that we are a community that can recruit new members. Joking aside, the reality is that if you are a hearing person reading this, there is a very good chance that hearing loss will affect you or someone close to you during your working lifetime. The Mind Hears can be a valuable resource for folks with newly acquired hearing loss.

Figure 1: Modified from Hoffman et al., 2017

So where are the deaf and hard-of-hearing academics? Doctoral degrees are generally awarded to academics between the ages of 20 and 29; the incidence of significant hearing loss within this population is 2.2% (Hoffman et al., 2017). The National Science Foundation's annual survey of doctoral recipients reports that 54,664 graduate students earned PhD degrees in 2017 (NSF, 2017)—wow, that represents a lot of hard work! Great job y'all! Now, if the graduate student population resembled the general population, we would expect about 1,202 of those newly minted PhDs to be deaf/HoH. Instead, the survey reports that only 654 PhDs, or 1.2%, were awarded to deaf or hard-of-hearing people (NSF, 2017). This suggests that deaf/HoH PhDs have about half the representation that deaf/HoH people have within the general population of the same age.
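The arithmetic above is easy to reproduce. A quick check in Python, using only the figures cited in the text (Hoffman et al., 2017; NSF, 2017):

```python
# Back-of-the-envelope check of deaf/HoH representation among 2017 PhD recipients.

total_phds = 54_664    # PhDs awarded by US universities in 2017 (NSF)
incidence = 0.022      # significant hearing loss among ages 20-29 (Hoffman et al.)
deaf_hoh_phds = 654    # PhDs reported awarded to deaf/HoH recipients (NSF)

expected = int(total_phds * incidence)    # expected count if representation matched
actual_rate = deaf_hoh_phds / total_phds  # observed share of deaf/HoH PhDs

print(expected)                                                     # 1202
print(f"{actual_rate:.1%}")                                         # 1.2%
print(f"{actual_rate / incidence:.0%} of expected representation")  # 54% of ...
```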

Furthermore, the distribution of deaf/HoH PhDs is not even among the fields of the NSF doctoral survey. In 2017, as shown in Figure 2, each of the fields of Humanities and arts, Education, and Psychology and social sciences has a greater percentage of deaf/HoH PhDs than each of the fields of Engineering, Life sciences, Physical and earth sciences, or Mathematics and computer sciences. I seem to have heard of more deaf/HoH scholars and researchers in the fields of Deaf Studies, Deaf Education, and Signed Language Studies than in other fields, which could influence the distribution. Or perhaps some fields are friendlier to deaf/HoH scholars and researchers. Nevertheless, deaf and HoH academics are underrepresented in all fields among scholars and researchers with PhDs.

Figure 2: Data from NSF, 2017

So, what can we do? These numbers reveal why so many of us feel isolated in our experiences within academia. The Mind Hears is one effort to facilitate networking and raise awareness of inclusion issues for deaf/HoH academics.


References

American Speech-Language-Hearing Association. Available at https://www.asha.org/public/hearing/degree-of-hearing-loss/

Hoffman HJ, Dobie RA, Losonczy KG, Themann CL, Flamme GA. Declining Prevalence of Hearing Loss in US Adults Aged 20 to 69 Years. JAMA Otolaryngol Head Neck Surg. 2017;143(3):274–285. doi:10.1001/jamaoto.2016.3527

National Institute on Deafness and Other Communication Disorders (NIDCD), Available at https://www.nidcd.nih.gov/health/statistics/quick-statistics-hearing.

National Science Foundation, National Center for Science and Engineering Statistics. 2018. Doctorate Recipients from U.S. Universities: 2017. Special Report NSF 19-301. Alexandria, VA. Available at https://ncses.nsf.gov/pubs/nsf19301/.

 

Applying for jobs – When should you reveal your deafness?

-Ana

Graduate students, postdocs, and other academics applying for jobs face a hypercompetitive job market, limited geographic options, and a potentially withering assessment of their research productivity, teaching abilities, and overall potential. If you are deaf or hard of hearing, add to this scenario the weighty decision of when you should reveal your deafness during the job application process. In the United States employers are prohibited from discriminating against job applicants based on disability. However, not all countries offer this protection, and even in the U.S. many of us worry that unconscious, or even conscious, bias can often taint the work of search committees.

So what is a deaf/HoH job applicant to do? Do you reveal your deafness in your CV? Once you are offered an interview? When you are on site for an interview or visit? Do you reveal it to the search committee chair? To the human resources department? Do you request accommodations when invited for an interview, or do you wing it? Answers to these questions may vary depending on your degree of hearing loss, the ethos of the institution or position you are applying to, and your personal style. Answers may even vary depending on whether you are applying for a job today or several years ago, and your perception of the societal climate at the time.

The Mind Hears would like to learn from academics who have navigated (or are navigating) the job search phase about the choices they have made, what they wish they had done differently, and what they have found particularly effective. Please help us out by answering this short survey (5-10 minutes) about your experiences. The survey will be available until July 18, 2019:

https://forms.gle/BWVjspLQAhuZLSreA

We would like to collate this collective knowledge and experiences into a compendium of anonymized comments to be posted at the end of summer as a blog post. By sharing the strategies we have tried, we hope to create a resource that can serve as a guide for all of us, and particularly for the upcoming generation of students and postdocs.

Traveling and Conferences: When Bacteria Has a Party

In my first post for The Mind Hears, I want to tell you a little about my background, then outline some strategies that I’ve found successful for traveling and attending conferences.

I have been a regular at my ENT's (ear, nose, and throat doctor) office since I was young, getting new tubes, replacement tubes, removing cholesteatomas, and repairing perforated ear drums. On a good day, I have about 50% of the normal range of hearing—less if I have a sinus or ear infection. Because I had my right ear completely reconstructed, I am unable to use any hearing aids effectively. Due to my upbringing in an impoverished rural town, I didn't have access to speech therapy or options to learn American Sign Language. My hearing loss was never recognized as an official disability, so I moved through most of my life trying to find creative ways to be successful at school and professionally. Now I wish I had spoken up more, but the aforementioned lack of resources and accommodations made it difficult.

Traveling is a necessity for (geo)scientists, whether for fieldwork, conferences, or networking with the scientific community. The quickest mode of transportation is air travel, with its changing pressure and humidity, which apparently has a big impact on my sinus system. I remember attending the American Geophysical Union (AGU) Fall Meeting in San Francisco as an undergraduate in 2006, overwhelmed by the size of the conference and harder of hearing than usual. I thought I happened to catch a cold and tried to communicate with my fellow scientists in loud poster sessions. I repeated this trip a few more times in graduate school, and sure enough, the bacteria in my sinuses decided to have a party that moved to my ears. We all have flora in our systems; mine just like to come unannounced and frequently. Later in graduate school, I traveled to Italy for fieldwork and found myself with (surprise!) a sinus infection. It was not fun being in a foreign country and unable to communicate at all in the local language; in addition, the infections made communicating effectively with my own team difficult. Nevertheless, I powered through these situations.


Because of my experiences, I’ve found myself being more vocal about my needs; I’ve realized that I’m my best advocate. Here are some strategies that have helped me.

Medical help: I have built a great relationship with my ENT, and we've developed a system for traveling that helps prevent weeks of sinus congestion and nearly complete deafness. I travel for my job too often to make visits to the ENT feasible before every trip, but occasional visits a few times a year help. Please note, this is my personal plan; please consult your physician. I take prednisone (a steroid) and prescription-strength Sudafed right before a flight—this regimen means I have a better chance of flying with limited, or even better, no sinus impacts. One downside to the medications, however, is that I'm sensitive to the steroid; I feel amped and often can't sleep that first night if there are significant time zone changes—west coast to east coast in particular. This is not a minor downside; my reaction can make important meetings stressful. But the benefits far outweigh the cons. Since I've become a chronic sinus infection patient, normal antibiotics no longer work on existing infections. Proactively heading off infections is my preference, since if I'm at a conference or meeting, I cannot wait the two weeks for medications to work. Waiting would mean missing conferences with breakthrough discoveries and vital conversations. I don't love that I have to depend on medication, with its side effects, but it helps me be an active participant in conferences rather than a passive observer.

Communication tips:

  • Live-captioning platforms and apps are improving, and more conferences are starting to use them for talks and poster sessions.
  • Teleconferencing:
    • An example is InnoCaption, an app for both Android and iPhone that can be used for teleconference meetings. It connects you to a live stenographer who generates the captions; a federally administered fund pays for the service, so you must register to use it. It requires a 4G network or reliable Wi-Fi.
    • Another approach is using smart work phones that can run programs such as CapTel for live captioning—phones such as the Cisco model 8861, which displays live captions during video calls. There are also applications such as Jabber that let you transfer the captions to a computer screen for easier reading.
  • Traveling to foreign countries: Google Translate now has several offline dictionaries! Five years ago if you didn’t have Wi-Fi or data, you didn’t have Google Translate. But I recently used Google Translate successfully for Spanish! Google Translate is simple to use by talking into your smartphone—you can get good translations to or from English.

Conferences:

  • I find it helpful to sit up front in conference rooms, both to hear better and to be seen.
  • If I didn’t quite catch the presentation, I ask the speakers for business cards so I can request a copy of their presentation or poster.
  • Depend on the conference moderators: Another technique to anticipate impaired hearing depends on the conference size and style. When I’m a speaker, I’ve asked moderators in advance (via email) to repeat questions from the audience. This helps ensure I understand the question and helps with accents. I’ve had mixed results—often there is no moderator to contact directly; then I have to track down that individual in person before the session, which is a lot of work.
  • Normalize captions: The best way to normalize is to use Google Slides or captioned presentations for everyone all the time!

What tricks and tips do you use for communicating?

BIO: Circe Verba, PhD, is a research geologist, science communicator, and STEMinist at a government lab. She earned her doctorate in geology and civil engineering at the University of Oregon. Dr. Verba specializes in using advanced microscopy (petrography and electron microscopy) and microanalysis techniques to tackle challenges facing the safe and efficient use of fossil energy resources. Outside of the lab, Dr. Verba is an avid LEGO buff, even designing her own set featuring field geology and a petrographic laboratory.

Captions and Craptions for Academics

-Michele

In recent years, to my delight, captions have been showing up in more and more places in the United States. While I’ve been using captioning on my home TV for decades, now I see open captioning on TVs in public places, many internet videos, and most recently, in academic presentations. Everyone benefits from good captioning, not just deaf/HoH or folks with an auditory processing disorder. Children and non-native English speakers, for example, can hone their English skills by reading captions in English. And nearly everyone has trouble making out some dialogue now and then. But not all captioning is the same. While one form of captioning may provide fabulous access for deaf/HoH, another is useless. To ensure that our materials optimize inclusion, we need to figure out how to invest in the former and avoid the latter.


To unpack this a bit, I’m going to distinguish between 4 types of captioning that I’ve had experience with: 1) captions, 2) CART (communication access real-time translation), 3) auto-craptions, and 4) real-time auto-captions with AI. The first two are human produced and the last two are computer produced.

Captions: Captions are word-for-word transcriptions of spoken material. Open captions are displayed automatically, while closed captions require the user to activate them (click the CC option on a TV). To make these, a human-produced script is added to the video as captions. Movies and scripted TV shows (i.e. not live shows) all use this method, and the quality is usually quite good. In a perfect world, deaf/HoH academics (including students) would have access to captioning of this high quality all the time. Stop laughing. It could happen.

CART: This real-time captioning utilizes a stenotype-trained professional to transcribe the spoken material. Just like the court reporters who document court proceedings, a CART professional uses a coded keyboard (see image at right) to quickly enter phonemes that are matched against a vocabulary database to form words. The CART transcriptionist will modify the results as they go to ensure a quality product. While some CART transcriptionists work in person (same room as the speakers), others work remotely by using a microphone system to listen to the speakers. Without a doubt, in-person CART provides way better captioning quality than remote CART. In addition to better acoustics, the in-person service can better highlight when the speaker has changed, and transcriptionists can more easily ask for clarification when they haven’t understood a statement. As a cheaper alternative to CART, schools and universities sometimes use C-Print for lectures, where non-steno-trained captionists capture the meaning but not a word-for-word transcription. In professional settings, such as academic presentations, where specific word choice is important, CART offers far better results than C-Print but requires trained stenographers.

Some drawbacks of CART are that the transcription lags, so sometimes the speaker will ask “Any questions?” but I and other users can’t read this until the speaker is well into the next topic. Awkward, but eventually the group will get used to you butting in late. CART also can be challenging with technical words in academic settings. Optimally, all the technical vocabulary is pre-loaded, which involves sending material to the captionist ahead of time for the topics likely to be discussed. Easy-peasy? Not so fast!  For administrative meetings of over 10 people, I don’t always know in advance where the discussion will take us.  Like jazz musicians, academics enjoy straying from meeting agendas. For research presentations, most of us work on and tweak our talks up until our presentation. So getting advance access to materials for a departmental speaker can be… challenging.

Craptions: These are machine-produced auto-captions that use basic speech recognition software. Where can you find these less-than-ideal captions? Many YouTube videos and Skype use them. We call them ‘crap’tions because of the typical quality. Craptions can do an okay job if the language is clear and simple, but for academic settings, auto-craptions with basic speech recognition software are pretty much useless.


The picture at right shows the auto-craption for a presentation at the 2018 American Geophysical Union conference about earthquakes. I know, right? Yes, the speaker was speaking in clear English… about earthquakes. The real crime of this situation is that I had requested CART ahead of time, and the conference’s ADA compliance subcontractor hired good quality professional transcriptionists. Then, the day before the conference, the CART professionals were told they were not needed. Of course, I didn’t know this and thought I was getting remote CART. By the time the craptions began showing up on my screen, it was too late to remedy the situation. No one I talked with at the conference seemed to know anything about the decision to use craptions instead of CART; I learned all of this later directly from the CART professionals. The conference contractor figured that they could ‘save money’ by providing auto-craptions instead of CART. Because of this cost-saving measure, I was unable to get adequate captioning for the two sessions of particular interest to me and for which I had requested CART. From my previous post on FM systems, you may remember that all of my sessions at that conference were in the auxiliary building where the provided FM systems didn’t work. These screw-ups meant it was a lousy meeting for me. Five months have passed since the conference, and I’m still pretty steamed. Mine is but one story; I would wager that every deaf/HoH academic can tell you other stories about material being denied to them because quality captioning was judged too expensive.

Real-time auto-caption with AI: These new programs use cloud-based machine learning that goes far beyond the stand-alone basic speech recognition of craptioning software. The quality is pretty good and shows signs of continuous improvement. Google Slides and Microsoft Office 365 PowerPoint both have this functionality (link to a video of Google Slides auto-caption in action). You need internet access to utilize the cloud-based machine learning of these systems. One of the best features is that the lag between the spoken word and the text is very short. I speak quickly and the captions are able to keep up with me. Before you start singing hallelujah, keep in mind that it is not perfect. Real-time auto-captioning cannot match the accuracy of captioning or CART for transcribing technical words. And while it might get most of the words, if the captions miss one or two technical words in a sentence, then deaf/HoH viewers still miss out. Nevertheless, many audience members will benefit, even with these missing words. So, we encourage all presenters to use real-time auto-captioning for every presentation. However, if a deaf/HoH person requests CART, real-time auto-caption, even if it is pretty darn good, should never be offered as a cheaper substitute. Their accommodation requests should be honored.

An offshoot of real-time auto-caption with AI is apps that work on your phone. Android phones now have a Google app (Live Transcribe) that utilizes the same speech recognition power used in Google Slides. Another app that works on multiple phone platforms is Ava. I don’t have an Android phone and have only tried Ava in a few situations. It seems to do okay if the phone is close to the speaker, which might work in a small conversation but poses problems for meetings of more than 3 people or academic presentations. Yes, I could put my phone up by the speaker, but then I can’t see the captions. So yeah, no.

What are your experiences with accessing effective captions in academic settings? Have you used remote captioning with success? For example, I recently figured out that I can harness Google Slides real-time auto-captioning to watch webinars by overlapping two browser windows. For the first time, I can understand a webinar. I’ve got a lot of webinars to catch up on! Tell us what has worked (or not worked) for you.

Using FM Systems at Conferences

-Michele

You’re wearing your hearing aids, sitting at a conference presentation, feeling confident that you’re understanding what’s going on, when it happens. The audience reacts to something the speaker said, and you have no idea why. Until then, you’d thought that you were grasping enough of the presentation, but you’ve clearly missed something good. Reality check: your hearing aids might be good but you still can’t hear like a hearing person. I’ve been there. And I’ve found that when I’ve been able to get a good FM system set up at conferences, I can catch a lot more of the speaker’s remarks and subsequent discussions than when I try and go it alone with just my hearing aids. Getting FM systems to work effectively, however, can sometimes challenge even the most intrepid academic. So I thought that I would share what I’ve learned through several decades of requesting and using FM systems at conferences. I’ve occasionally used Real-Time Captioning (CART) and ASL interpreters at conferences, but someone more expert should post about those.

What is an FM system?

Frequency Modulation (FM) systems involve a paired transmitter and receiver that provide additional amplification, either to a headset or, even better, directly to our hearing aids. That additional amplification can be invaluable in some difficult-to-hear situations. The audio signal is transmitted via waves within a narrow range of the FM spectrum—yup, the same as non-satellite radio. FM systems are sometimes called Assistive Listening Devices (ALDs). At conferences these systems can help by amplifying speakers’ remarks, audience questions, and ensuing discussions, as well as elevating conversations around posters above background noise.

Requesting FM systems at large conferences in the US

Because of the Americans with Disabilities Act (ADA), large conferences in the US will have a box to check on the registration page to request accommodation. If they provide an open response box, I typically write:

“I need an FM system with 60 decibels of undistorted gain. The system should have a neckloop or induction earhooks that work with the telecoil in my hearing aids. Headsets are not compatible with my hearing aids.”

Through years of bad experiences, I’ve learned to provide very specific instructions.

Headset offered at 2017 AGU

Although I provide these specifics, I am often disappointed when I arrive at the conference center. Many conference FM systems are pretty weak and provide only a small amount of clear amplification (maybe 15-20 dB). This might be okay for someone with a mild hearing loss—such as someone with a recently acquired loss—but it’s pretty useless for me. At other conferences, such as the 2017 American Geophysical Union, I’m offered a setup like the one in the photo at right.

  • Me: These are not compatible with hearing aids
  • Clueless but earnest conference liaison: Oh yes, they are! You just put the headset over your ears.
  • Me: Um no. I use behind-the-ear hearing aids and my microphones are behind my ears. This is why I specifically requested a neckloop to directly communicate with the telecoil in my hearing aids.
  • Clueless but earnest conference liaison: A what?
  • Me:
  • Clueless but earnest conference liaison: Oh. Well, why don’t you just take your hearing aids out and use the headset instead?
  • Me: Umm no. My hearing aids are tuned for my particular frequency spectrum of hearing loss. I asked for 60 decibels gain for the system to boost above what my hearing aids offer and to compensate for people speaking softly, people not speaking directly into the microphone. . . That sort of thing.
  • Clueless but earnest conference liaison: Huh. Well, we don’t have anything like that.

After such unfruitful conversations, I usually begin sorting out my own accommodations with my personal FM system (more on that in a bit). The few times that I’ve pushed for conferences or their sites to find a neckloop or a stronger FM system, I’ve never had success. For example, at one conference, a team of six technicians met with me to tell me that there was not a single induction neckloop to be had in the entire city of New Orleans—their hands were tied. Sure.

Warning about accommodation requests: Although conferences are becoming more responsive, I’ve found that about a third of the time, my requests on the registration forms are ignored. I never hear back from the conference, and when I show up they have no idea what I’m talking about. So as part of my conference prep, I now contact them about a month before the meeting if I haven’t received notification. I also budget an extra hour or two when I first arrive at the conference to sort out the accommodations.

Paired FM systems versus direct wired rooms

With paired FM systems, one transmitter is paired to one receiver that you carry with you. The transmitter must be set up in the conference room in advance of the session and is usually patched into the sound system so that your receiver picks up signals directly from the room’s microphones. In order to set this up, large conferences need to know which sessions you will attend several weeks ahead of time. This means that you can’t pop from one session to another as our hearing peers might do at large conferences. Also, if two HoH people want to attend the same session, the room may need to have two transmitters patched into the sound system.

The 2018 AGU meeting provided headsets and telecoil loops. Progress!

Newer (or newly renovated since 2012) convention centers in the US and UK may have built-in transmitters throughout the convention hall. This means that you can take any receiver into any room and instantly get amplification without setting things up ahead of time. This flexibility is quite nice! The picture at right shows a charging rack of FM headsets and induction loops for the Washington DC Convention Center. I was really looking forward to using those at the 2018 AGU meeting, but unfortunately, all the sessions in my discipline were in the Marriott hotel next door and the system didn’t work at all there.

Small conferences and meetings outside of the US

For small conferences, as well as meetings outside of the US where the ADA is not in effect, I bring my personal FM system. At the top of this post are pictures of the FM system that I first started using around 1994 (left) and my current, outdated, fourteen-year-old system (middle). I can’t get this set repaired anymore, so I’m going to get a new one like the one on the right. One benefit of personal systems over conference-provided systems is that personal systems are more powerful. My first FM system had audio boots that hooked directly to my hearing aids (left picture), which reduces the signal degradation that can happen with neckloops (middle image).

At small conferences, I put my transmitter at the lectern before each session to help me catch more of the speaker’s presentation. Alas, this doesn’t help with questions and discussions, which can be a large challenge. At some conferences where microphones are used for questions and discussions, I ask the AV crew to patch my transmitter into the sound system. At right is a picture of all the different adaptors that I bring with me to ensure that my transmitter will work with the venue’s sound system. Some of these may be outdated.

While patching my transmitter into the sound system has worked very well in the past, I’ve had problems lately. Maybe sound systems have become more fussy about patching in auxiliary outputs. I am also not sure whether the newest FM systems, which use Bluetooth rather than an FM signal, even have input jacks. Another hack that I came up with is to put my transmitter in front of a loudspeaker (the photo at left shows my transmitter taped to a microphone pole in front of a speaker stand at the 2018 Southern California Earthquake Center annual meeting). This hack allowed me to access the presentations and discussions that used microphones.

FM systems in poster halls

If the poster hall is crowded, you can aim the microphone of the FM system transmitter towards any speaker to elevate their voice above the background noise. This approach has worked well for me when using my own FM system. Note that the systems provided by convention centers are not mobile; it is best to bring your own to use in poster halls.

FM systems are expensive (~US$1000 – $4000), and like hearing aids, are often not covered by US health insurance. Full-time students in the US are eligible for personal FM systems through vocational rehab (degree of coverage depends on income). Many audiologists may not be aware of this (my own weren’t!), but check with the disability office at your university and they can hook you up with your state’s vocational rehab office. These FM systems are worth getting before you graduate! Some employers do purchase FM systems for their workers because they can be critical for career success; however, I’ve yet to meet an academic who has successfully negotiated an FM system from their employer (and would love to hear if you have). While insurance didn’t cover my last FM system, I was able to use a health spending account through my employer that saved me from paying taxes on the device. It is my understanding that outside of the US, personal FM systems are nearly always paid for out of pocket.

Why am I so pushy?

Since I end up using my personal FM system most of the time at large conferences, you might wonder why I keep requesting accommodations. I do so because I want the conference center to know that we are here. I want them to know that deaf/HoH academics should be considered when they are planning their meetings and ADA accommodations. If we don’t make waves, they will believe that the level of accommodation currently offered is satisfactory. I’ve heard too many stories of older academics who stop attending conferences because of declining hearing, and younger HoH academics discouraged from academic careers because of the difficulty of networking at large conferences. We owe it to ourselves and our community to be counted, advocate for flexible, effective amplification systems, and share our successful strategies.

Is my experience consistent with your own? What successful strategies have you used for FM systems at conferences?

Understanding unfamiliar accents

-Ana

I wrote this post on an airplane coming back from an international conference I attended in Thailand. Because of the distance involved, participation at this meeting was pretty light on scientists from North and South America, but had a lot of participants from Europe (primarily the Netherlands, France, Spain, and Belgium) and Asia (primarily Thailand, China, Japan, Taiwan, but several other countries too). It was a wonderful conference: great venue, warm hosts, cutting-edge talks, great food, new people to meet, and some fun sightseeing thrown in. It also brought with it the usual challenges of trying to understand talks and poster presentations and network with important scientists in noisy settings. But this conference also brought home a specific problem that has stymied me throughout my career: understanding unfamiliar accents.

Deaf/HoH academics who depend on oral communication will likely be familiar with the problem that, even in an optimal hearing environment, we better understand those who speak like we do. Unfamiliar or “foreign” is relative, of course. I speak English and Spanish, but, due to the particularities of my upbringing, my best shot at hearing/understanding Spanish is with people who speak Colombian Spanish, or even more, the version of Colombian Spanish spoken in and around Bogotá (indeed, that is the accent I speak with – immediately recognizable to most Latin Americans). My Argentinean and Mexican friends can attest to how obnoxious I can be asking them to repeat themselves. Likewise, for English, I fare best with a northern Midwestern US type of English; Australian, British, Indian and many other accents will leave me floundering. I imagine that the same is true for other deaf/HoH academics, but with different versions of their language they are most used to.

Scholarly research, of course, is a global venture, and it is wonderful that many luminaries in my field hail from around the world. I’m already incredibly lucky that most professional communication is conducted in English, a language I happen to know. But, while hearing people can be quite understanding of my communication difficulties in suboptimal environments, it seems cruel (and professionally unwise) to tell colleagues that I can’t ‘hear’ them because of their accents—especially because many such colleagues have worked hard to acquire their English skills, thus going the extra mile to ensure communication. Because of globalism, the problem with understanding unfamiliar accents goes beyond conferences and professional networking. Many of my undergraduate and graduate students are also from various international locations. I am heartbroken every time I feel that my difficulty understanding my students negatively affects my ability to mentor them.

I have not found ideal strategies to deal with the challenges of unfamiliar accents. Every accent becomes a little more familiar with constant exposure, so I do understand my graduate students (with whom I communicate almost daily) better as time goes by. But it never stops being a challenge, and I sometimes have to resort to written communication in our one-on-one meetings. Since the undergraduates I teach change each semester, I don’t have similar opportunities to become familiar with their accents. For conferences and professional networking, I imagine that real-time captioning would be the ideal solution; but such a resource is not available at all conferences (though it should be!) and is generally not an option for networking. I’ve been excited by the recent advances in speech recognition software, such as that demonstrated by Google Slides, and wonder both if the technology can accommodate a range of accents and, if so, if it could ever become a portable “translator” for deaf/HoH individuals (I know some portable translator apps exist, but haven’t tried them and don’t know the scope of their utility; perhaps some readers can share their experiences?). I’m also curious whether unfamiliar accents are ever a challenge for deaf/HoH academics who rely on sign language interpreters. What other strategies have deaf/HoH academics employed to help navigate the challenge of unfamiliar accents in a professional setting?