What the Bleep?

There’s a lot to celebrate in how far we’ve come in the fight for Diversity, Equity, and Inclusion (DEI) in our country, but these noble efforts sometimes conflict with accessibility. Too often, accommodations created by and/or for the Deaf community (such as text-to-911 and closed captioning) are first fought against, then begrudgingly accepted, and finally usurped by the masses, with the needs of the Deaf community forgotten. In the age of technology, people who are Deaf or hard of hearing can experience digital media like never before. With the click of a button (or even automatically), we can read what’s happening in the news, online, or as we Netflix and chill. But what the f**k is going on with captioning?

I recently discovered that Automated Speech Recognition, or “ASR,” is being CENSORED. My strategist and I communicate using a combination of Slack (obsessed), text, and video chat. We rely heavily on Google Meet for our meetings, where we can use live video and it’s easy to turn on captions and chat along the side of the video. She is a hearing person and I am Deaf, and we have been able to communicate successfully. Our relationship is fun, and we tend to swear … a lot. Recently, on one of our planning calls, I was stunned when I saw her language start to appear with stars where letters belonged. That data set looked f*cking awesome; my new design was f**king gorgeous! I stared in utter disbelief that I, as a Deaf person, was not being clearly shown what was being said. I immediately reached out to some of my accessibility connections and let them know what I’d found. They were all equally flabbergasted and enraged. We started poking around as a group and found that this censorship had recently been rolled out to filter offensive language in automated captioning. But had anyone even considered the conflict this created with accessibility, the very reason captioning exists?

First, let’s clarify the difference between captions and subtitles. While they are similar, each serves its own purpose. Captioning is specifically meant to enhance the viewing experience for Deaf or hard of hearing individuals. In addition to the spoken dialogue, captions are supposed to convey information like speaker differentiation and describe other relevant sounds in words. Captions may be permanently embedded in the video, or they may be available to turn on and off (closed captions). Subtitles, on the other hand, are simply text meant for a hearing audience. It’s assumed the viewer can hear all that other stuff, so subtitles are good for someone who may not speak the language or needs a translation. Automated captioning is more similar to subtitles than to true captioning and often lacks punctuation or capitalization. Some companies offer a hybrid method that combines ASR with human editing; in that situation, the output may resemble captioning if it includes descriptive sound effects and punctuation.

Closed captioning was first mandated by the Television Decoder Circuitry Act of 1990, which required televisions with screens 13” or larger to have a built-in closed captioning option. Other laws have since been passed to require captioning in more places, such as content first created for television and then placed on the internet. Technology now allows for a lot more automation, which means even more subtitling, typically without punctuation and other markers of sentence structure, thereby reducing readability. In an effort to avoid offending anyone with profanity or racially charged language, online media like YouTube, and even companies like Rev, whose very mission is speech-to-text for accessibility, have started censoring out those naughty words, which is bullsh*t.

When profane words are heard on video but censored out of the subtitles and/or captions, Deaf (and often hard of hearing) people do not know what is being said. Don’t get me wrong, I’m not aching to see those four-letter words, but I would at least like to have an accurate understanding of exactly what’s being said. People who are deaf deserve an experience equal to that of their hearing counterparts, even if it is offensive. Wouldn’t you want to know if a friend or colleague was swearing at you or using racial slurs in conversation? Can you imagine if that happened, and instead of immediately unfriending them, you carried on the conversation with a smile, none the wiser? Tech giants who are censoring captions claim that seeing FXXX, BullXXXX, etc. is good enough, but it’s not. What is heard MUST BE SEEN.

The real problem is that when the audio isn’t altered but the captions (or subtitles, for that matter) are censored, the Deaf and hard of hearing community is hurt almost exclusively. No one is saving gentle eyes that might burn and shrivel at the sight of a few f-bombs, and even worse, the move to censor select sensitive text is causing words that merely sound like the bad ones to get bleeped out, too. This affects more than a Deaf person’s viewing pleasure of their favorite show; people who are deaf are impacted on video conferences and in other scenarios where censorship is practiced. Where captioning is concerned, companies are excluding the very people for whom it is most relevant. One must ask: where does this censorship start and end?

As we continue to make progress in DEI for everyone, let’s not lose track of the real goal: fair treatment and equal access for our diverse population. Captions started as an accommodation for the Deaf and hard of hearing and are now used by most people (a whopping 85% of videos on mobile are viewed in silent mode), yet, ironically, our needs are no longer top of mind.