Podcasts

Living and Working with Speech Recognition Technology

Real experiences and accessible design insights on Dragon Dictation

Abstract digital illustration of a human face formed by glowing blue lines and geometric points, paired with a soundwave coming from its mouth.

Speech recognition technology has evolved from clunky early tools into powerful systems built into the devices we use every day. But for people with disabilities, these tools are far more than convenient; they can be life-changing.

In this episode, host Katie Samson traces the story of speech recognition through the lived experiences of Markus Goldman, whose father with ALS used Dragon in its early days; physicist Jim Isenberg, who relies on Dragon after a spinal cord injury; and web developer Leah Mattern, who breaks down how design decisions impact user experience. Together, they explore the independence, frustrations, and opportunities that voice technology creates, and why inclusive design matters now more than ever.

Learn more about supporting people with disabilities in Tamman’s Learn Center and stay connected with us on LinkedIn.

Meet our guests:

Jim Isenberg wearing a Packers shirt and beanie while sitting in his electric wheelchair in a long tree-lined driveway in the fall.

Jim Isenberg is a physicist and quadriplegic who relies on Dragon daily for work and writing, offering first-hand insight into the challenges and rewards of speech recognition software.

Markus Goldman half smiling at the camera in a blue shirt, with some graffiti on the wall behind him.

Markus Goldman is the former producer of Article 19 who grew up using Dragon speech recognition with his father after an ALS diagnosis and now works in accessibility, bringing full-circle perspective to the technology’s impact!

Leah smiling and posing for the camera whilst holding a guide cane to the side.

Leah Mattern is a web developer, trainer, and accessibility consultant at Tamman and Chax who specializes in creating inclusive digital experiences for users of assistive technologies.

Listen to more Article 19 Podcast Episodes


Full Transcript

Access the PDF Transcript

Markus Goldman:

Who was Mr. Goldman? Well, he was my father. He was born in Denver. We grew up there. He went to the University of Colorado. He was a salesperson for Xerox when copy machines were all the rage in the seventies. We were outdoor people. We were always camping, always outside doing stuff, riding bikes, playing sports. He was always very involved in our lives as much as he could be.

We had our own garden and things like that, and chemicals like Roundup were in use at our house, unfortunately, because people didn’t know how horribly bad those chemicals are for us. My dad got sick, and he was diagnosed with Lou Gehrig’s disease. It was the late 80s, and at that time the average lifespan of somebody with ALS was about four and a half years once diagnosed.

We got a van. I don’t know how my mom got a hold of it, through insurance or something, but we got a van with a wheelchair lift on it so we could take Dad to see my brother’s baseball games. It was hard, but we still did it. I mean, we did some road trips. We did some traveling. And before he became fully immobile, his friends took him on a fly fishing trip to Alaska as a last hurrah. I’m thankful for Dragon being there, because it allowed us to communicate with him a lot more easily, for a little longer.

Katie Samson:

Welcome to Article 19. I’m Katie and I’ll be your host for this episode. However you might look at it, we are all on our way to experiencing a mind-body disability in one way, shape, or form in our lifetime. This can be episodic, situational, temporary or permanent. Is this new information? Just think about your personal sphere, friends, family, extended family, colleagues and neighbors.

Katie Samson:

Technology often shows up at the right time for many of us when we’re confronted with catastrophic change. Markus Goldman’s father, Nelson, received his ALS diagnosis as a middle-aged businessman, husband, and father of three in the 1980s. It was the dawn of the personal computing era in America. Speech recognition software was the solution that gave hope, purpose, and a way back to the life Nelson once had prior to his ALS diagnosis.

Dragon’s journey began in 1982, when doctors James and Janet Baker founded Dragon Systems in Newton, Massachusetts. The company’s first breakthrough came in 1990 with Dragon Dictate, but users had to pause between words due to technological limitations. The real revolution occurred in 1997 with Dragon Naturally Speaking, the first continuous speech recognition software that allowed natural speaking without pauses.

Just three years later, I experienced my own disabling event. At 20 years old, I broke my neck sledding, leaving me paralyzed from the chest down, with limited mobility in my hands and fingers. I learned about Dragon Naturally Speaking, and it became a lifeline for my communication, recovery and rehabilitation.

Fast forward to today: speech to text is everywhere. We use it in our daily lives for sending text messages or asking Siri, Alexa, or Google to remember a task. But does speech recognition software still hold the importance it did for Markus’s father, or for me?

What is the Dragon user experience like today? And how do web developers and software engineers design for this technology? I’m getting after these questions in this episode, and I can’t wait for you all to learn with me.

A19 and Eleanor Roosevelt Voice Clip:

Expression is one of the most powerful tools we have. A voice, a pen, a keyboard.  “The real change we must give to people throughout the world in their human rights must come about in the hearts of people. We must want our fellow human beings to have rights and freedoms which give them dignity.” Article 19 is the voice in the room.

Katie Samson:

I invited my colleague Leah Mattern back on the pod. You may remember her from our previous episode on normalizing workplace accommodations from August 2024. With Leah’s expertise as a web developer, trainer, and consultant, we began our conversation with some level setting. I asked her to explain what speech recognition software is and who uses it.

Leah Mattern:

Speech recognition software is any software that you can add to your operating system to allow you to control your digital device without using any peripherals like a keyboard or a mouse. And technically anyone can use this software. And in fact, you may already be using this software without even really knowing what it is.

For example, on a cell phone, if you click on the little microphone and speak to dictate to your phone what your text message is and then send it out, that’s an example of speech recognition software. But it can go deeper than that too. For example, if you have a motor disability that renders you unable to use your limbs for whatever reason, you can use speech recognition software to actually control everything from your desktop to web browsing. So it’s actually really fantastic software.

Katie Samson:

How do users navigate web content actually using speech commands? I wonder if you could give us some examples.

Leah Mattern:

Absolutely. So there are a couple of ways that you can interact with speech recognition software. For Windows, the most common one is Dragon Naturally Speaking, and of course, as mentioned, you can do the same kind of thing on a Mac using Voice Control. You can imitate the keyboard shortcuts. So for example, you can say the word “tab” to tab between interactive components like links or images or buttons, or you can say things like “page up” or “page down” to pop between web views, or you can say something like “go all the way to the top” or “go all the way to the bottom” to navigate the entire web page. Or you can use something called mouse control, where you enable mouse control and it shows you a grid of numbers that lets you focus in on parts of the screen. So for example, you can say “2,” and your computer will zoom in on the square with the number two in it, and it’ll make another grid superimposed on top of that, so you can just keep zooming in until you’ve found the place that you want to view or activate. So it’s really versatile.

Katie Samson:

So versatile. In fact, I realized I know a physicist who is a quadriplegic who uses Dragon Naturally Speaking. Some of our listeners may be more impressed that I actually know a physicist rather than a quadriplegic who uses assistive technology, but I digress.

While lecturing at the University of Sydney in December 2017, Professor Jim Isenberg sustained a cervical spinal cord injury at Bondi Beach. While Jim and I are both quadriplegics, Jim uses a combination of a ventilator at night and a diaphragmatic pacemaker during the day to control his breathing. Through Jim’s wife Pauline and her diligent research, Jim was flown to Jefferson Magee Rehabilitation in Philadelphia a few months later, where he was introduced to an incredible therapy team. I’ll let Jim take it from here.

Jim Isenberg:

So one of the wonderful occupational therapists, Natalie, asked me if I wanted to learn how to do voice recognition, to be able to use that kind of approach to continue to work. So Natalie got me going. So did my wife Pauline, who was very helpful in this whole thing. So she started teaching me Dragon. But there was also this wonderful person named Lucia at a place called Inglis House in Philadelphia. What happened then is that every Monday, for probably a year, I would travel over to Inglis House and she would just train me on Dragon. There are a lot of rules you have to learn, but she was so patient. She helped me a huge amount. I think everyone knows that your voice sounds different to the outside world than what you hear inside. I mean, sometimes my voice probably seems like it’s terrible, but somehow Dragon gets used to it.

Jim Isenberg:

There are two things: there’s learning how to speak properly, and there’s all the little rules of Dragon. It’s not just taking down words; there are also all the little commands you have to give. It is a very different vocabulary. And of course, Dragon supposedly learns from what you’re doing. There’s this dichotomy of dictation and commands like “page down,” and it’s funny, whoever’s near me when they hear me on Dragon will hear me say “Scratch that, scratch that, scratch that.” There’s “open Word,” there’s “page down,” there’s go to “end of line,” and so on. Somehow Dragon has to be able to distinguish between writing those words down and actually going to the end of the line. Amusingly, sometimes it thinks I’m saying “end of life,” but you’re not interested in that. People at the University of Oregon helped me a lot with the technology. It’s set up so that when you turn the screen and the computer on, Dragon comes on automatically. Then it depends on what document I want to work with. I have to say “open home,” and then I can choose. One of the aspects of Dragon is that if you want to actually pick a particular document to open, you say something like “mouse grid,” which gives you nine pieces, and you keep saying “press five,” “press nine,” whatever, and eventually you can finally open the document you want to work with.

Katie Samson:

So we often talk at Chax and at Tamman about keyboard navigation, about using something like the tab key. One quick exercise that I want to invite all of our listeners to do the next time they’re in front of a computer: literally go to a webpage, simply press the tab key, and see what happens. One of the questions we first ask when it comes to digital accessibility is: How does it feel? How does a website feel? And what we’re really talking about is someone, you know, like yourself, like myself, who has a mobility disability. How does a website feel when you’re trying to keyboard navigate through it or dictate your way through it? Can you touch or tab your way through, or speak your way through, a website? I wonder, can you talk a little bit about that process?

Katie Samson:

When you come to an inaccessible website that has barriers to your assistive technology, that frustration, you know, often comes in the form of just, okay, I’m not going to use this. But do you find that sometimes one part helps and then another part is not working?

Jim Isenberg:

Yeah, so that happens. And I should point out there are a number of websites, even for things I use all the time, which are just not very accessible for Dragon. So sometimes what happens is I’ll go to a website, try to dictate and have Dragon do that, and with some of them the whole thing, the computer, just freezes. So what I usually do in that case, because I am fortunate enough to have 24-hour nursing care here, is ask one of the nurses to type in what I want. I wish more websites were compatible with Dragon. Some are and some are not. You know, that’s just the way it is. Again, because my work is very important to me and I do have to rely on this, I just have to accommodate it.

Katie Samson:

What do you think is important for people to know when it comes to using Dragon from the user experience side first?

Jim Isenberg:

Well again, I think in the early stages, first of all, you have to get used to your voice. That’s important. But again, for me, what was really crucial was all that work that wonderful person Lucia did with me. You know, especially for someone who has a spinal cord injury or something like that, there’s a relatively long period, I’d say several weeks, when you just have to learn all the commands. You just have to learn how to modulate your voice. So if there is a…

Jim Isenberg:

an occupational therapist or somebody who’s just being a wonderful helper, then that’s very important. You know, I do a little bit of mentoring with people at Magee and elsewhere. There’s one fellow, for example, who had an injury about a month or two ago, and he gets very depressed: “What am I going to do with my life?” Just to have a window so he could connect with people. You could write emails, you could write documents, you could even write fiction. You know, it was a while before I really got my own work going again.

I started writing a memoir and I think this could just open up your life a little. Especially if you just had a debilitating spinal cord injury, you feel like you might be shut off from the world. And these things like Dragon, or the Apple equivalent, just open up your life.

Katie Samson:

Thank you, Jim, for opening up your life and your lived experience with speech recognition. I get the satisfaction of meditating with you in our health and fitness zooms weekly, and it is a privilege to call you my friend.

Let’s get back to our conversation with Leah. Here she discusses how good UX can make or break the experience for someone like Jim.

Leah Mattern:

So it’s really important that the content in your website is labeled properly. And what I mean by that is, when you’re looking at a web viewport, there are two kinds of tree systems that are basically stacked. I don’t want to get too deep into the weeds and geek out all over the place, but the first tree is the large tree we call the DOM tree. DOM stands for Document Object Model, where everything in the web viewport is stacked on top of each other: you’ve got your footer, then the body of the content stacked on top of that, and then you’ve got the header, and everything in between.

And then there’s a tree overlaid on top of that we call the accessibility tree. The accessibility tree filters out all of the extraneous nonsense and brings forward anything that we consider interesting to assistive technologies like screen readers or speech recognition software. For example, images with alternative text, or buttons, or links, things like that, or landmark regions. And those things get what’s called an accessible name, based on the text that we give them. So for example, if I want to find a button with the title “submit,” the text “submit” that’s fed to that button is going to be that button’s accessible name. At least, that’s the first way you can assign an accessible name to it. And if you haven’t named that button well, or if you are faking a button with what we call a div, which is an HTML tag we call a generic tag, an uninteresting tag, then the folks who are using the speech recognition software may not get access to that button. They may not be able to interact with it.
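
A minimal sketch of the contrast Leah describes, in HTML; the “Submit” label, class name, and submitForm() handler are illustrative placeholders, not from the episode:

    <!-- Native button: exposed in the accessibility tree with the role "button"
         and the accessible name "Submit", taken from its visible text. -->
    <button type="submit">Submit</button>

    <!-- "Fake" button built from a generic div: no role and no reliable accessible
         name, so a speech recognition user saying "click submit" may not reach it.
         (submitForm() is a hypothetical handler, shown only for illustration.) -->
    <div class="submit-button" onclick="submitForm()">Submit</div>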

Katie Samson:

I can understand. If you were trying to select something using your voice and multiple elements were tagged with a similar name over and over again, the computer might read it as you selecting that same button, when in fact there are multiple different selections that particular user might be wanting to make.

Leah Mattern:

Yeah, absolutely. A really great example of this that you see all over the place is the link with the text “learn more” in it. We like to use “learn more” all of the time. And while visually you may be able to understand that “learn more” is going to take you somewhere specific based on the context of the text around it, somebody with speech recognition software is going to see a whole bunch of “learn mores.” And if they want to click on one particular “learn more,” the software may take the text out of each link and pull you to the wrong place, or it may just pick the first one and take you to that one every time, no matter which one in the lineup you’re trying to access, because it’s confused about the accessible name.
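
A small illustrative sketch of the “learn more” problem and one common fix, unique link text; the URLs are made up:

    <!-- Ambiguous: every link ends up with the same accessible name, "Learn more". -->
    <a href="/pricing">Learn more</a>
    <a href="/support">Learn more</a>

    <!-- Clearer: each link's visible text is unique, so "click learn more about
         pricing" can land on the right link. -->
    <a href="/pricing">Learn more about pricing</a>
    <a href="/support">Learn more about support</a>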

Katie Samson:

I see. I hear this term thrown around all the time, and I wondered if it has anything to do with accessible naming. When we talk about something called semantic HTML and common web elements, I wonder if you could break that down a little for our listeners and explain how it might relate to this topic.

Leah Mattern: 

In web development, there are basically two kinds of widgets that we make for anything on the web: you can custom make something, or you can use what we call native HTML. HTML, if you’re unfamiliar, is HyperText Markup Language, and it’s how we create the content that you see: any text blocks, any images, interactive elements like buttons, things like that. The browser has a bunch of elements that it recognizes. So there’s a button element, there’s a link element, there’s actually an image element, things like that. Native links and buttons and images come with a whole bunch of hidden magic that the browser knows about and communicates. So if I were just to code a button and give it the text “submit,”

and then push it out, the browser is going to make it look like a button, and it’s going to give it an accessible name of “submit” automatically, because it sees the text within the button and knows what it is. And then, you know, you can assign it some kind of action in JavaScript later, but it knows that it’s a button. Whereas if I were going to custom make a button out of the uninteresting content that I mentioned earlier, like a division, or something called a span tag that we can make a button out of, the browser doesn’t know what that is, basically. It’s just a generic container that holds stuff. It doesn’t have any of that magic that it communicates. Let’s say I’m using a screen reader and I come across a button: it’ll say “Submit, button,” and then it’ll give you some actions to do with the button, like “to click on this button, use these keys.”

Leah Mattern:

But if I come across the generic element, it’s just going to say the word inside the element. And then it’ll say text element basically, and it won’t do anything. So when we’re creating content on the web, the best course of action is always to start with native HTML, because that’s the best bang for your buck.
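
To make that “hidden magic” concrete, here is a rough comparison; the class name is arbitrary and the exact announcement wording varies by screen reader:

    <!-- Native: focusable with Tab, activates on Enter and Space, and announced
         along the lines of "Submit, button", with no extra code. -->
    <button type="submit">Submit</button>

    <!-- Generic span: receives no keyboard focus, has no role, and is announced
         only as plain text. -->
    <span class="fake-button">Submit</span>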

Katie Samson:

I think I get it now, a little bit more. I’m getting there, I’m getting there.

Leah, can you talk a little bit about WAI-ARIA? I’ve heard that in a lot of conversations and meetings. I know you do a lot of work in this area, especially with advising and consulting for our clients. How does this contribute to providing accessibility on the web?

Leah Mattern:

Absolutely. So this is a hot topic, and it can be easily misunderstood. WAI-ARIA is a big old long acronym, and it stands for Web Accessibility Initiative, Accessible Rich Internet Applications. What ARIA does is allow us to make the web a little more verbose, and it allows us to customize things a little more. Earlier I mentioned that we have a few basic native HTML elements like buttons and links; we don’t have native versions of some of the more complicated widgets, so we can use ARIA to convey roles, states, and names and make that happen. So for example, if I wanted to make a custom button, and yes, I do not recommend doing this, no ARIA is better than bad ARIA.

Leah Mattern:

But if you wanted to make a custom button, for whatever reason you decided to, you could take a generic span tag, like I mentioned earlier, and put some text in it. And then you could give it a role and feed it a string that says button, so role equals “button.” Then assistive tech will recognize it as a button. And then you could give it, let’s see, an aria-pressed attribute, or aria-selected for another kind of element. Yeah, basically, adding ARIA to things makes them more recognizable to assistive technology.
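
A rough sketch of the custom button Leah walks through, with a hypothetical id and the extra wiring a native button would give you for free; as she says, this is shown for illustration, not as a recommendation:

    <!-- Custom "button" retrofitted with ARIA and keyboard support. -->
    <span id="save-toggle" role="button" tabindex="0" aria-pressed="false">
      Save
    </span>

    <script>
      // Hypothetical wiring: toggle aria-pressed and respond to Enter and Space,
      // all of which a native <button> handles on its own.
      const toggle = document.getElementById('save-toggle');
      function activate() {
        const pressed = toggle.getAttribute('aria-pressed') === 'true';
        toggle.setAttribute('aria-pressed', String(!pressed));
      }
      toggle.addEventListener('click', activate);
      toggle.addEventListener('keydown', (event) => {
        if (event.key === 'Enter' || event.key === ' ') {
          event.preventDefault();
          activate();
        }
      });
    </script>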

Katie Samson:

So Leah, oftentimes, because I’m a novice when it comes to web accessibility, when I am doing my own basic testing for accessibility I might suggest to someone, “Can you tab through the website just using your keyboard?” But are you saying that’s not enough? Are there elements of accessibility that we’re actually missing in that process because of speech recognition software?

Leah Mattern:

Yes. And it can be nuanced, and it depends entirely on, you know, what you’re hoping to achieve. But if you’re just pressing the tab key, there could be a lot of elements that you’re missing. We actually spend some time providing labels to a lot of elements that people with assistive technologies will need to interact with. So if you’re pressing tab, the idea is that you’re going to hit all of the interactive elements like links and buttons, but that also leaves behind things like images.

And it can leave behind landmark content too. What I mean by that is the webpage is sectioned and set up very specifically so that we can provide a legend, a roadmap, for people with assistive tech. We provide that roadmap to users by giving things a name, a role, and a value using ARIA. So for example, just as a general overview, we generally divide a webpage into header, body, and footer, and the body contains all of the main content. To provide the roadmap for users, we can actually take what would be the header element in HTML and give it the role of “banner.” And then for, say, the footer content, we have the HTML footer element, but we can give that the role of “contentinfo.” Then assistive technology is going to recognize that. And then inside of the body content, there’s a whole bunch of content that we show to users that you can see visually, like images. We can give those alternative text, and that gives the images accessible names. Or, if the image is decorative, we can just have users pass right by it by giving the alternative text what we call a null string or an empty string. There are all sorts of places we can direct users to.
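
A simplified sketch of the landmark roadmap and the alt text choices Leah describes; the file names and alt text are invented for illustration, and native header, main, and footer elements already imply these roles:

    <header role="banner">Site name and navigation</header>

    <main>
      <!-- Informative image: the alt text becomes its accessible name. -->
      <img src="team-photo.jpg" alt="The team gathered around a conference table">

      <!-- Decorative image: an empty alt string tells assistive tech to skip it. -->
      <img src="divider-flourish.png" alt="">
    </main>

    <footer role="contentinfo">Copyright and contact information</footer>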

Katie Samson:

Leah, what is the relationship between semantic HTML and WAI-ARIA in creating accessible web content, and are there best practices?

Leah Mattern:

That is an excellent question. And this one, I believe, is where a lot of assistive technologies get hung up. And it’s not the AT’s fault, actually; this is where we as web developers can refine our skills and help create cleaner sites. For example, one thing that we run into a lot is visible text versus label text. ARIA has a couple of ways that you can label an element to give it an accessible name. The lowest level is just the text you give to the element. So for example, if I give you the button with “submit” in it, the visible text, you can actually add something to that button called an aria-label. And when you give it that attribute, you can feed it anything in the string. So for example, I can have a button that visually says “submit,” and then I can have an aria-label that says “submit this form.” And the aria-label will actually supersede the visible text.

Leah Mattern:

And where this creates a bit of a problem for AT users is that when you’re using speech recognition software, you’re saying the word “click” and the visible text of the button. So “click submit,” and that’s going to submit your form. But if that aria-label supersedes that text and it says “submit this form,” the AT user may not necessarily know that. In some speech recognition software there is a way to view accessible names, I believe, for these interactive elements, but you may not always know how to turn that on, or know that you want it turned on, because it can be a little verbose sometimes. So that creates an issue: you’re saying “click submit,” and the button isn’t submitting, and that’s going to make anyone want to rage quit your website. Seriously.
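
A small illustrative example of the mismatch Leah describes, plus a safer pattern in which the accessible name starts with the visible text; the label wording is hypothetical:

    <!-- Problem: the aria-label overrides the visible text, so the accessible name
         is "Submit this form" while the user sees "Submit" and says "click submit". -->
    <button aria-label="Submit this form">Submit</button>

    <!-- Safer: let the visible text be the name, or begin the label with it. -->
    <button>Submit</button>
    <button aria-label="Submit registration form">Submit</button>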

Katie Samson:

Rage quitting. Yeah, that is a phenomenon. 

Leah Mattern:

Yeah, most common to people who are using assistive tech on a site that is not coded correctly.

Katie Samson:

We laugh, but inside we’re crying.

Leah Mattern:

Yes! Just tears. As a VoiceOver user myself, absolutely.

Katie Samson:

Lastly, Leah, what is an accessible description and how is it implemented? Can you give an example of its usefulness?

Leah Mattern:

We use this one quite often on web forms, and it’s another really great use for ARIA attributes. So for example, you can create a password input, and you can create a little label for it, just a separate semantic HTML label. So then when you pop to that input, it’s going to say “input,” and it’ll say “password,” because that’s its label. But then if you have specific password requirements, like “password must be eight characters long,” visually we’ll usually stick that right below the input, and you may not encounter it; if you’re using assistive technology, it may come later in the document object model. So what we can do is put an aria-describedby on that input, and we pair it with that text by giving the aria-describedby the ID of the text. What I mean by that is that the text is in its own little tag. We give that text, “password must be eight characters,” an ID, say “password description.” And then I’ll take the words “password description” and feed them to that aria-describedby. So on the input, it’ll read out the label first, “password,” and then “password must be eight characters long.” So it’s really, really helpful in generating that extra content and associating elements with one another.
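
A minimal sketch of the pattern Leah describes, with hypothetical ids:

    <label for="new-password">Password</label>
    <input type="password" id="new-password" aria-describedby="password-description">

    <!-- The id ties this hint to the input, so assistive tech reads the label first
         ("Password") and then this description. -->
    <p id="password-description">Password must be at least eight characters long.</p>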

Katie Samson:

I sort of think of it like you’re feeding healthy vegetables to the element. They’re like maybe a little extra work to like cut and saute and stir fry up for your protein, but better digested together, fiber and protein.

Leah Mattern:

I’ve never heard it put that way, but yeah, actually. Yep, that’s a pretty good way to look at it. You’re making a really nice, like, information salad.

Katie Samson:

Yeah. I mean, everybody can relate to food, right?

Leah Mattern:

Yeah, sure. And visually you can see that little bit of extra word salad underneath your input. But for example, if you’re using a screen reader and you can’t see the text visually, it’ll allow you to hear it without having to move your cursor down below the input. So it’s really helpful.

Katie Samson:

Gotcha. So Leah, I know you’re not an occupational therapist, and you’re not necessarily working with people on a day-to-day basis advising them on different types of assistive technology. But I wonder, because we did speak with Jim, who was very eloquent in describing some of the difficulties and challenges he’s had in navigating with Dragon versus a Mac: do you have any thoughts or opinions on that from the testing that you’ve done in your profession, from the development side?

Leah Mattern:

Yeah, I am also primarily a Mac user myself. So, you know, sometimes it hurts my heart that they’re so different, because I love the operating system of a Mac. Windows people, don’t come at me. I use Windows a lot too, but I just have a special place in my heart for the Mac operating system.

The assistive tech for Windows has been around for a very long time, and it’s developed over time. Different companies have taken on this personal software; they’ve R&D-ed it and built it themselves. Whereas Mac has built everything internally, so they’re not pulling in any third-party content. And, I hate saying this, but it’s not as verbose as something like Dragon Naturally Speaking would be.

I’m primarily a VoiceOver user, which is the screen reader for Mac, and working with VoiceOver is very different from working with, say, JAWS or NVDA. It’s got its own little quirks that you kind of have to figure out. And a lot of times, too, users who are working with, say, a vocational rehabilitation program through their state are going to end up with Windows-based content and things like Dragon or JAWS more often, because of the contracts those companies have with the state; they’re usually cheaper to obtain as well. They’re more widely used, and in addition, sometimes the training isn’t quite there for the Mac either. I think we could probably dive a little deeper into that, but I think Mac needs to come a little further with their voice software. I’m sad to say that, but yes, I’m waiting for the day when they’re both equal.

Katie Samson:

Well, there you have it, folks. I think a safe answer, but a good one, right? Because we’re trying to advise as best we can, while also knowing that every user out there has a different experience, whether it’s a physical disability or a combined set of disabilities that they have to navigate with multiple types of assistive technology at the same time. But I think you’ve given us and our listeners some really good fodder.

So I appreciate your time and thank you for being with us.

Leah Mattern:

Absolutely, but my closing thought would be just to make your websites as clean as possible. With ARIA, I go with a “when in doubt, leave it out” approach. Like they say, no ARIA is better than bad ARIA, and it’s super true. If you’re making your websites clean, both technologies, both operating systems, should be able to get access to every bit of content.

Katie Samson:

Definitely. If there are people out there listening to this and they don’t really know where to start, please reach out to us. Leah does consulting and advising. You can check out www.chaxtc.com and look through all of our accessibility services and consulting, and just hit us up, because we’re really interested in talking more and working with you on building a more inclusive web. So thanks so much, Leah.

Leah Mattern:

Yeah, thank you, Katie.

Katie Samson:

So much to digest here. Really, Leah helped us understand the versatility and the depth of modern speech recognition tools. She explained how these systems empower users to navigate digital environments hands-free, making technology more accessible and inclusive. And then there’s Jim’s journey, which illuminated both the steep learning curve and the profound rewards of mastering assistive technology, as well as the ongoing need for accessible design and patient support. This episode underscores that while technology has advanced, its true value lies in the doors it opens for people facing life-changing circumstances. Speech recognition software is not just about convenience. It’s about autonomy, dignity, and the ability to fully participate in life. As we look to the future, the challenge for developers, advocates, and all of us is to ensure that these tools remain accessible, adaptable, and empowering for everyone.

Hey Markus, are you there? Could you step out from behind the Oz curtain? I just want to give you an opportunity to share your thoughts as we wrap up this episode.

Markus Goldman:

Yes, I’m here.

You know, Katie, it’s been really, really interesting and fascinating to dig in and learn about where Dragon has come to today, because my family’s experience with Dragon was in its early days. I guess you could call it the early model of Dragon. And it made life easier for us then. And today, it is even stronger and making a bigger difference in the accessibility space. My life has come full circle in this way: I was part of using Dragon for our family’s needs at that time, and now I am working in the accessibility space with Dragon and other tools. I feel very lucky that life has circled in this way.

Katie Samson:

Thank you, Markus. I am your host, Katie Samson. Our guests today were Jim Isenberg, Leah Mattern, and Markus Goldman, who was both a guest and an executive producer for this episode. Support also came from Sydney Bromfield, Lena Marchese, and Kristen Witucki.

Article 19 is a call for others to join us in a bigger conversation around the ADA, digital accessibility, and access to information. At Tamman and Chax, we’re working to build the inclusive web every day. And to do that, it takes all of us working together and learning together. Until next time, thank you so much for listening and being a part of our journey. Take care.
