Category Archives: meetings

Sudden Remote Teaching – Deaf/HoH

-Ryan

Here we are navigating our 5th week of remote/online classes here in NYC (and beyond, of course) and adapting to our “new lives.” I can’t think of anything else to call it right now, so I’m going with this. I say this from a perspective of integration: I’m very much still in the “I’m really perplexed about how we are even in the position that we are in” phase, while at the same time having adjusted to this new life and fulfilled so many new, mandated requirements to keep my courses going. (That was a long sentence, too!) I originally started writing this post about 3 weeks ago. A lot has changed, which makes it seem harder to update, since I’ve made more progress than I thought I would. Or could.

After following all of the administrative protocols, attending endless Zoom meetings, making course updates, reformatting everything, and dealing with the staggering amount of e-mail and overall communication—and that’s just the work stuff, not counting connecting with family and friends (whew!)—I’m finally starting to reflect on things. Or… wait, is my ego reflecting on what it thinks it is reflecting on? Reflection invites in ALL of the emotions. And the feelings—both positive and negative. And there’s been quite a bit of the negative! Why am I reminded of past failures at a time like this? We humans like routines; they help us stay focused and structured. Uncertainty isn’t something we’re really good at, right? Wrong: we certainly adapt, and adapt quickly. I can see that as I edit this post!


Here is a visual interpretation of me after the first 2 days of my 3-in-a-row-straight Zoom meetings.

I have a lot of thoughts and feelings about the conversion to remote and online teaching in general. As I see it now, especially as a deaf/HoH professor who depends on “visual everything,” I have much more to organize than I thought. I teach at 4 colleges here in NYC simultaneously: 7 courses across all of these schools, to classes that are 99.9% hearing people. As we know, the reality of “just switching to video chat classes” is NOT easy, even for a hearing person teaching hearing people, especially if you’ve never done this before. Video-chat platforms can actually work well for me in a one-on-one situation, but add 5, 15, or 24 people and access really changes.

Simply put, I need to see a face and mouth at all times to have access to a spoken conversation. Yes, I wear hearing aids, but they are NOT magical devices that let me “hear” what people with normal hearing hear. I don’t, not even close… because I’m deaf. My hearing loss is degenerative and has progressed over time since birth. These days, I only catch about 30% with hearing aids. The other 70% of the conversation is absorbed from lip reading, speech patterns, emotional rapport, facial expressions, and body language. When all I can see is someone’s face, head, and shoulders on a flat monitor or screen, that 70% contextual part is naturally limited, and understanding speech becomes harder.


When switching to “synchronous style” remote teaching formats using Zoom, Skype, Google Meet, Google Hangouts, or another video-chat platform, things can get really challenging, especially with a sea of small icon-like faces as the number of people in the chat session increases. (I recently tested Microsoft Teams, which DOES have a live, real-time captioning feature. I much prefer it over the others and will be switching to it. You do need to download the desktop application and have access to business or education licensing, though.) Three of my classes have 20-plus students in them. As I mentioned, one-on-one video chat works well for me, but add several others to the chat and, well, the faces get smaller and visual access decreases. I adapt by using a text-chat feature to support the visuals. This can be done, and I have been making several adaptations as time has passed. However, typing out the conversation slows down the process, and others in the virtual classroom may become a little impatient. “Please be compassionate, please be patient, please put yourself in the shoes of others and try to understand.” Hmm, this is tough, especially if I’m your first Deaf professor. Believe me, I know I am. We are learning together in this experience, in real-crazy-time. Things will be tweaked as we go along. We can’t be selfish and expect communication to function as it would in the normal classroom. It’s just not the same thing.

Aside from what I said above, accessibility has a context that expands and extends far beyond myself; it is collective and contextual. I can share my own experiences here, but my experiences obviously relate to my life experiences as a whole—and that includes all of my students. I care about them deeply and protect them fiercely. They come first, always have, and I am fully responsible for making the choice to teach and work where I do. What does accessibility look like on my students’ end? My students have their own issues, struggles, and problems. Some have no access to the internet, no access to a computer, laptop, desktop, smartphone, or tablet, which means no access to certain software applications. Some do not have a physical space to sit and be present in a video-chat class, as their living space is shared with parents, siblings, and other relatives who are also home, and in some cases working from home. Many have lost their jobs altogether. Some are living with multiple family members who are sick, whether with Covid-19 or other pre-existing conditions. This all happens simultaneously. But what we don’t really talk about, when we discuss how wonderful all of these new adaptations are, are the emotional and psychological aspects of this entire situation. Do we have enough perspective yet to fully understand the current and continuing impact of the last 5 weeks? No way.

I have adopted the mantra of Compassion, Patience, Understanding, Accessibility, Adaptability, Inclusion, Helpfulness, and Humility. We can do this together both inside and outside academia. Fellow students, faculty, and colleagues, both those with accessibility needs and those who need help working with folks with accessibility needs, let’s pull together and contribute our resources and knowledge to help each other. Blogs like this one and other social media have a huge reach and can be used to share useful perspectives and resources.

It is also crucial that we communicate honestly with our colleagues, students, and administration. I have been guilty of not doing this in the past myself! I have reached out, and continue to reach out, to my people. All of my students already know that I am deaf/HoH. I was upfront with them from Day One of our semester. I explained my communication needs and stated that I always need to see a face, lips, and body language to follow verbal conversations. If that’s not possible, then we need to type, write, text, or make written communication happen some other way. A speech-to-text app like Cardzilla (which I love! iOS / Android) or another form of text/type/visual communication also helps! Of course, content management system (CMS) platforms like WordPress websites are also super effective, and I have built a website for every class that I teach! No, not Blackboard or Canvas. I build my own websites for my courses so that I have full autonomy over the admin aspects of communication and access, and so much more.

The combination of Zoom and the CMS platforms has allowed for a relatively smooth integration for me. As I mentioned above, I will switch from Zoom to MS Teams this week. Zoom allows for simultaneous video, audio, and text chat, which for me and my students is crucial! I can see a face to speech-read and then ask for additional follow-up via text in the chat box. Plus, if I turn the audio on my computer speaker up (in my case to Very High), I can place my iPhone next to it and have the Cardzilla app transcribe the audio to text. It is a hack, but it works, and I am grateful for that. My students have been super patient and seriously awesome at this point! Accessibility is EVERYTHING! Especially in this very NEW situation we find ourselves in.

Aside from teaching and hacking accessibility and expanding my awareness of how amazing our collective human potential is, how are you all coping with the isolation and the order to stay home? I’m focusing on self-care: making healthy meals and setting up a cozy and loving environment in my space. I’m also making a lot of new art. I mean A LOT!

[Images: breakfast, and works in progress from my ink-jet painting series]

Communication is EVERYTHING so please be mindful and specific about what you NEED.

Much Love to all!

How much listening is too much?

– Michele

Listening is hard work. At the end of a long day of meetings, I’m exhausted. When I share this with my hearing colleagues, they’ll say “Oh, I know—me too!” But is it the same? Really?

Studies have shown that hearing-aid users like me, who rely on speech reading along with amplification, experience listening fatigue at much higher rates than hearing people (e.g., Bess & Hornsby, 2014). We are working much harder than everyone around us to piece things together and make sense of what we are able to hear. Most listening-fatigue studies are on school-aged children, and the few studies of adults show that “Adults with hearing loss require more time to recover from fatigue after work, and have more work absences” (Hornsby et al., 2016). As academics, our jobs require us to listen to others all the time—in our classes, in faculty meetings, in seminars, and when meeting with students. How do we recognize cognitive fatigue due to too much listening and mitigate it so that we can manage our work responsibilities? This is a tremendous challenge for deaf/HoH academics, and The Mind Hears will explore this topic in several blog posts.

In this post I share how I figured out my daily listening limit, which turns out to be 3 hours with good amplification and clear speech reading. For many years, I pushed through my day without paying attention to how much time I was spending in meetings and classes. Some days I felt okay, while other days I ended up utterly exhausted. The kind of exhausted where I can’t track conversation and even have trouble putting my own sentences together. When this happens, I can’t converse with my family, and exercise class is out of the question because I can’t follow the instructor. I just take my hearing aids out and lie on the floor with the dog—I don’t need to speech-read him, and he gets me. Yay dogs!

When I explain my listening fatigue to non-native English speakers, they get it right away. They recognize that this listening fatigue is just like when they first moved to a country with a new language: while they had a good command of the new language, following it all day exhausted them. Exactly! Except I’m not going to get any better at my native language.

After a while—actually a really long while because for many years I tried to work as if I was a hearing person due to internalized ableism, which really is a whole different blog topic—and now this sentence has really gotten off track so I’m going to start over. After a while, I started to realize that for my own health I needed to avoid becoming so exhausted that several times a week, I could only commune with the dog.

It turns out that my fancy new Garmin watch, which tells me to “MOVE” every hour, also detects my stress level. The image at left is from a day at a conference. All I did that day was sit in one room listening to talks, with occasional breaks for coffee and meals. My heart rate stayed elevated all day due to the work of following the conversation and the anxiety of constantly deciding whether I should ask for clarification on something I may have missed or just let it go. When even my watch is telling me ‘enough is enough’, or more specifically, “You’ve had very few restful moments on this day. Remember to slow down and relax to keep yourself going,” it might be time to figure out how much listening is too much.

So last February I tracked, in my bullet journal, both the hours I spent listening each day and my evening exhaustion level.

Actually, I didn’t track this much detail—I just made marks in my bullet journal for each hour and then noted whether the day was manageable. Below are two example pages. For the day on the left, the 3 Xs represent 3 hours of listening, and this was an OK day. The image on the right is from another day that month; the horizontal line below the Xs means that I was on the floor with the dog that evening after 5 hours of listening.

Yes, I know that my handwriting is messy and I tend to kick a lot of tasks to the next day. But this blog post is not about my untidiness and unreliability. What I learned from this exercise was that any day including more than 3 hours of listening would be a tough, unmanageable day. Armed with this knowledge, I could start to rearrange my schedule to avoid days with more than 3 hours of listening.
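For readers who like to tinker, the same tally is easy to automate. Here is a minimal sketch in Python of the exercise I did on paper; the logged hours are hypothetical, and the 3-hour threshold is my personal limit, not a universal one.

```python
# A minimal sketch of the bullet-journal tally in code form.
# The logged hours below are hypothetical, and the 3-hour limit is
# *my* limit -- calibrate yours by tracking your own days.

DAILY_LISTENING_LIMIT = 3  # hours of listening per day

# hypothetical log: date -> (hours spent listening, evening was manageable?)
listening_log = {
    "Feb 04": (3, True),   # three Xs in the journal: an OK day
    "Feb 12": (5, False),  # line under the Xs: floor-with-the-dog evening
    "Feb 19": (2, True),
}

for day, (hours, manageable) in listening_log.items():
    status = "over limit" if hours > DAILY_LISTENING_LIMIT else "ok"
    print(f"{day}: {hours}h listening ({status}), manageable evening: {manageable}")

# Days over the limit should line up with the unmanageable evenings;
# those are the days to reschedule around.
over_days = [d for d, (h, _) in listening_log.items() if h > DAILY_LISTENING_LIMIT]
print("Days to reschedule around:", over_days)
```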

Interestingly, this goes against the advice that many academics give each other. Early career researchers are encouraged to push all meetings to one day so that they have a day free for research. This is great advice… for a hearing person. Many of us who are deaf/HoH may do better with two free mornings a week rather than one full day, so that no single day is overloaded with listening.

So how successful have I been? Moderately. While I have control over some aspects of my schedule, I don’t over others. I schedule my one-on-one meetings with my research assistants on days that I don’t have a lot of other meetings. If I’m teaching a 3-hour lab, sometimes it’s just impossible for me to have no other teaching or meetings that day. But I am considering restructuring my lab activities so that I don’t need to be ‘on’ the whole time. I’ve also started talking with my department head about my effort to limit my daily meetings; this involves educating him on why listening fatigue is different for me than for hearing faculty. Had I been more savvy, I might have negotiated a listening limit when I was hired. Take note of this, future academics! 

I’m still sorting out how to manage my day and am eager to learn more from others about how they successfully manage listening fatigue. As I mentioned at the start of this post, The Mind Hears wants to host a series of posts about listening fatigue. Tell us how this fatigue has affected your workday and your health. What solutions have you found?

References cited

  • Bess, F.H., & Hornsby, B.W. (2014). Commentary: Listening can be exhausting—Fatigue in children and adults with hearing loss. Ear and Hearing, 35(6), 592.
  • Hornsby, B.W., Naylor, G., & Bess, F.H. (2016). A taxonomy of fatigue concepts and their relation to hearing loss. Ear and Hearing, 37(Suppl 1), 136S.

Captions and Craptions for Academics

-Michele

In recent years, to my delight, captions have been showing up in more and more places in the United States. While I’ve been using captioning on my home TV for decades, I now see open captioning on TVs in public places, in many internet videos, and, most recently, in academic presentations. Everyone benefits from good captioning, not just deaf/HoH people or folks with an auditory processing disorder. Children and non-native English speakers, for example, can hone their English skills by reading captions in English. And nearly everyone has trouble making out some dialogue now and then. But not all captioning is the same. While one form of captioning may provide fabulous access for deaf/HoH people, another is useless. To ensure that our materials optimize inclusion, we need to figure out how to invest in the former and avoid the latter.


To unpack this a bit, I’m going to distinguish between 4 types of captioning that I’ve had experience with: 1) captions, 2) CART (communication access real-time translation), 3) auto-craptions, and 4) real-time auto-captions with AI. The first two are human produced and the last two are computer produced.

Captions: Captions are word-for-word transcriptions of spoken material. Open captions are always displayed, while closed captions require the user to activate them (e.g., click the CC option on a TV). To make these, a human-produced script is added to the video as captions. Movies and scripted TV shows (i.e., not live shows) all use this method, and the quality is usually quite good. In a perfect world, deaf/HoH academics (including students) would have access to captioning of this high quality all the time. Stop laughing. It could happen.
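If you’ve never peeked behind the scenes, here is what a human-produced caption script can look like. This is a made-up two-cue fragment in the common SubRip (.srt) format, where each cue gets an index, a start and end time, and the text to display:

```
1
00:00:01,000 --> 00:00:04,200
Welcome to today's seminar on listening fatigue.

2
00:00:04,400 --> 00:00:07,500
Let's start with what the research shows.
```

Because a human writes and times each cue against the dialogue, the result is the word-for-word quality described above.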

CART: This real-time captioning utilizes a stenotype-trained professional to transcribe the spoken material. Just like the court reporters who document court proceedings, a CART professional uses a coded keyboard (see image at right) to quickly enter phonemes that are matched against a vocabulary database to form words. The CART transcriptionist modifies the results as they go to ensure a quality product. While some CART transcriptionists work in person (in the same room as the speakers), others work remotely, using a microphone system to listen to the speakers. Without a doubt, in-person CART provides far better captioning quality than remote CART. In addition to better acoustics, in-person transcriptionists can better indicate when the speaker has changed and can more easily ask for clarification when they haven’t understood a statement. As a cheaper alternative to CART, schools and universities sometimes use C-Print for lectures, where non-steno-trained transcribers capture the meaning but not a word-for-word transcription. In professional settings, such as academic presentations, where specific word choice is important, CART offers far better results than C-Print but requires trained stenographers.

Some drawbacks of CART are that the transcription lags, so sometimes the speaker will ask “Any questions?” but I and other users can’t read this until the speaker is well into the next topic. Awkward, but eventually the group will get used to you butting in late. CART also can be challenging with technical words in academic settings. Optimally, all the technical vocabulary is pre-loaded, which involves sending material to the captionist ahead of time for the topics likely to be discussed. Easy-peasy? Not so fast!  For administrative meetings of over 10 people, I don’t always know in advance where the discussion will take us.  Like jazz musicians, academics enjoy straying from meeting agendas. For research presentations, most of us work on and tweak our talks up until our presentation. So getting advance access to materials for a departmental speaker can be… challenging.

Craptions: These are machine-produced auto-captions that use basic speech-recognition software. Where can you find these abominations, er, less-than-ideal captions? Many YouTube videos and Skype use them. We call them ‘crap’tions because of their typical quality. Craptions can do an okay job if the language is clear and simple. For academic settings, auto-craptions built on basic speech-recognition software are pretty much useless.


The picture at right shows auto-craptions for a presentation about earthquakes at the 2018 American Geophysical Union conference. I know, right? Yes, the speaker was speaking in clear English… about earthquakes. The real crime of this situation is that I had requested CART ahead of time, and the conference’s ADA-compliance subcontractor hired good-quality professional transcriptionists. Then, the day before the conference, the CART professionals were told they were not needed. Of course, I didn’t know this and thought I was getting remote CART. By the time the craptions began showing up on my screen, it was too late to remedy the situation. No one that I talked with at the conference seemed to know anything about the decision to use craptions instead of CART; I learned all of this later directly from the CART professionals. The conference contractor figured that they could ‘save money’ by providing auto-craptions instead of CART. Because of this cost-saving measure, I was unable to get adequate captioning for the two sessions of particular interest to me, the very sessions for which I had requested CART. From my previous post on FM systems, you may remember that all of my sessions at that conference were in the auxiliary building where the provided FM systems didn’t work. These screw-ups meant it was a lousy meeting for me. Five months have passed since the conference, and I’m still pretty steamed. Mine is but one story; I would wager that every deaf/HoH academic can tell you other stories about material being denied to them because quality captioning was judged too expensive.

Real-time auto-captions with AI: These new programs use cloud-based machine learning that goes far beyond the stand-alone basic speech recognition of craptioning software. The quality is pretty good and shows signs of continuous improvement. Google Slides and Microsoft Office 365 PowerPoint both have this functionality. Link to a video of Google Slides auto-caption in action. You need internet access to utilize the cloud-based machine learning of these systems. One of the best features is that the lag between the spoken word and the text is very short. I speak quickly, and the captions are able to keep up with me. Before you start singing hallelujah, keep in mind that they are not perfect. Real-time auto-captioning cannot match the accuracy of captioning or CART for transcribing technical words. While it might get many of the words, if the captions miss one or two technical words in a sentence, then deaf/HoH viewers still miss out. Nevertheless, many audience members will benefit, even with these missing words. So we encourage all presenters to use real-time auto-captions for every presentation. However, if a deaf/HoH person requests CART, real-time auto-captioning, even if it is pretty darn good, should never be offered as a cheaper substitute. Their accommodation requests should be honored.
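For the technically curious, the craption-versus-cloud distinction is easy to demonstrate in a few lines of code. The sketch below uses the third-party Python SpeechRecognition package (a common wrapper for experimentation, not what Google Slides or PowerPoint actually use) to send the same hypothetical audio clip to an offline engine and to a cloud recognizer; the gap in accuracy, especially on technical vocabulary, is usually obvious.

```python
# Minimal sketch using the SpeechRecognition package
# (pip install SpeechRecognition pocketsphinx).
# Illustrates offline "craption-grade" recognition vs. cloud-based
# recognition of the same clip. The file name is hypothetical, and this
# is NOT how Slides/PowerPoint implement their live captions.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("lecture_clip.wav") as source:
    audio = recognizer.record(source)  # read the whole clip into memory

# Offline, stand-alone engine (CMU Sphinx): no internet needed, but
# error-prone on technical words -- the "craption" end of the spectrum.
try:
    print("Offline:", recognizer.recognize_sphinx(audio))
except sr.UnknownValueError:
    print("Offline engine could not understand the audio")

# Cloud-based engine: requires internet access, but the machine-learning
# models behind it are far stronger, so accuracy is much better.
try:
    print("Cloud:", recognizer.recognize_google(audio))
except (sr.UnknownValueError, sr.RequestError):
    print("Cloud recognition unavailable or audio not understood")
```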

An offshoot of real-time auto-captioning with AI is the phone app. Android phones now have a Google app (Live Transcribe) that utilizes the same speech-recognition power used in Google Slides. Another app that works on multiple phone platforms is Ava. I don’t have an Android phone and have only tried Ava in a few situations. It seems to do okay if the phone is close to the speaker, which might work in small conversations but poses problems for meetings of more than 3 people or academic presentations. Yes, I could put my phone up by the speaker, but then I can’t see the captions. So yeah, no.

What are your experiences with accessing effective captions in academic settings? Have you used remote captioning with success? For example, I recently figured out that I can harness Google Slides real-time auto-captions to watch webinars by overlapping two browser windows. For the first time, I can understand a webinar. I’ve got a lot of webinars to catch up on! Tell us what has worked (or not worked) for you.

Understanding unfamiliar accents

-Ana

I wrote this post on an airplane coming back from an international conference I attended in Thailand. Because of the distance involved, participation at this meeting was pretty light on scientists from North and South America but heavy on participants from Europe (primarily the Netherlands, France, Spain, and Belgium) and Asia (primarily Thailand, China, Japan, and Taiwan, but several other countries too). It was a wonderful conference: great venue, warm hosts, cutting-edge talks, great food, new people to meet, and some fun sightseeing thrown in. It also brought with it the usual challenges of trying to understand talks and poster presentations and of networking with important scientists in noisy settings. But this conference also brought home a specific problem that has stymied me throughout my career: understanding unfamiliar accents.

Deaf/HoH academics who depend on oral communication will likely be familiar with the problem that, even in an optimal hearing environment, we better understand those who speak like we do. Unfamiliar or “foreign” is relative, of course. I speak English and Spanish, but, due to the particularities of my upbringing, my best shot at hearing/understanding Spanish is with people who speak Colombian Spanish, or better yet, the version of Colombian Spanish spoken in and around Bogotá (indeed, that is the accent I speak with, immediately recognizable to most Latin Americans). My Argentinean and Mexican friends can attest to how obnoxious I can be asking them to repeat themselves. Likewise, for English, I fare best with a northern Midwestern US type of English; Australian, British, Indian, and many other accents will leave me floundering. I imagine that the same is true for other deaf/HoH academics, each with the version of their language they are most used to.

Scholarly research, of course, is a global venture, and it is wonderful that many luminaries in my field hail from around the world. I’m already incredibly lucky that most professional communication is conducted in English, a language I happen to know. But, while hearing people can be quite understanding of my communication difficulties in suboptimal environments, it seems cruel (and professionally unwise) to tell colleagues that I can’t ‘hear’ them because of their accents—especially because many such colleagues have worked hard to acquire their English skills, going the extra mile to ensure communication. Because of globalization, the problem of understanding unfamiliar accents goes beyond conferences and professional networking. Many of my undergraduate and graduate students are also from various international locations. I am heartbroken every time I feel that my difficulty understanding my students negatively affects my ability to mentor them.

I have not found ideal strategies to deal with the challenges of unfamiliar accents. Every accent becomes a little more familiar with constant exposure, so I do understand my graduate students (with whom I communicate almost daily) better as time goes by. But it never stops being a challenge, and I sometimes have to resort to written communication in our one-on-one meetings. Since the undergraduates I teach change each semester, I don’t have similar opportunities to become familiar with their accents. For conferences and professional networking, I imagine that real-time captioning would be the ideal solution; but such a resource is not available at all conferences (though it should be!) and is generally not an option for networking. I’ve been excited by the recent advances in speech recognition software, such as that demonstrated by Google Slides, and wonder both if the technology can accommodate a range of accents and, if so, if it could ever become a portable “translator” for deaf/HoH individuals (I know some portable translator apps exist, but haven’t tried them and don’t know the scope of their utility; perhaps some readers can share their experiences?). I’m also curious whether unfamiliar accents are ever a challenge for deaf/HoH academics who rely on sign language interpreters. What other strategies have deaf/HoH academics employed to help navigate the challenge of unfamiliar accents in a professional setting?

How to work with ASL-English interpreters and Deaf academics in academic settings

Just like their non-Deaf colleagues, Deaf academics teach students, discuss and present their research, attend various professional meetings, and give media interviews. Communicating and sharing knowledge with others is a critical part of academia. However, not everyone has had experience communicating with somebody who uses sign language, and many non-signers are unfamiliar with the protocols of working with ASL-English interpreters. Ashley Campbell, the staff ASL-English interpreter at Saint Mary’s University in Halifax, Nova Scotia, and Linda Campbell, a Senior Research Fellow in Environmental Science at Saint Mary’s, have put together a rich set of resources: a series of tip sheets on how best to work with interpreters in various academic scenarios. By sharing these resources with The Mind Hears, Ashley and Linda provide quick reference tools that will simultaneously educate and lessen any stress around facilitating communication through interpreters. Though originally written to facilitate ASL-English communication, these tip sheets can be applied to any setting that incorporates signed language-spoken language communication.

The tip sheets can be found at:

https://smu.ca/academics/departments/environmental-science-work-with-interpreter.html

Do you have ideas on further tip sheets to add to this resource? Are there other recommendations that you would add to the existing tip sheets? Please let us know what strategies you have found useful in educating non-signers, and help Ashley and Linda expand the reach and utility of the resources they have created. Write to Ashley at Ashley.N.Campbell@smu.ca or share your thoughts in the comments below.

 

Ashley Campbell

Since 2015 I have been the staff ASL-English interpreter within the Faculty of Science at Saint Mary’s University in Halifax, Canada. My first exposure to sign language was in Belleville, Ontario, where I lived for a short period early in life. Many years later I took ASL night classes for enjoyment, and through learning the language and culture I became interested in studying it more formally. I graduated from an interpreting training program in 2010 and, along with interpreting, have volunteered for both provincial and national interpreting association boards. I have a passion for sharing knowledge with the mentality of “each one, teach one.” When I’m not working, I am a mom to a very active toddler, cooking feasts for my family and enjoying the odd Netflix program.

Linda Campbell

Dr. Campbell is a Professor and a Senior Research Fellow at Saint Mary’s University in Halifax. She moved to Halifax from a Canada Research Chair (Tier II) faculty position at Queen’s University in Kingston. Her research and teaching at Saint Mary’s University focus on contaminants in the environment and on sustainability/resilience issues, with emphasis on aquatic ecosystems and water resources. Currently, Dr. Campbell’s research group is examining environmental contaminants across the Maritimes and around the world, with projects looking at the impacts of legacy gold mine tailings from the 1800s and contaminant transfer in aquatic food webs, birds, bats, and humans.