It’s tempting to see web and application accessibility as altruistic rather than profitable. But that’s not true, contends Navya Agarwal, a senior software engineer and technical lead at Adobe who focuses on frontend development.
Agarwal is also an accessibility expert who actively contributes to the W3C Accessible Rich Internet Applications (ARIA) Working Group.
“Building equitable products isn’t simply about altruism,” said Agarwal. “It can create opportunities for market expansion, penetration and sustainable growth. So that’s a section that is often left out by someone who is developing a new product, but building for all makes sure that you are getting more revenue at the end.”
Adobe’s AI Assistant Prioritizes Accessibility
Agarwal was on the team that built Adobe Express’ new AI Assistant, which was released in beta in October. The assistant will soon be integrated with ChatGPT Plus as well, she added.
The assistant is a general-purpose conversational interface designed to make creativity more accessible and intuitive for everyone, she said.
“What we want to present to the world is a more humanly centered model where you focus on the intention, and the system helps you orchestrate everything else around you so it can go from any possibilities, basically creating images, rewriting content, making quick edits, anything,” she said.
Accessibility is often treated as an add-on rather than an essential part of a product, layered on top of an experience created for a general audience rather than embedded in the development process. Adobe Express’ AI Assistant, by contrast, was designed to support accessibility from its inception.
“It expands to cognitive disabilities, for example, things like ADHD, dyslexia, which are not really talked about right now; it’s underrepresented,” she said. “For example, if someone is going on a website who is facing dyslexia and ADHD, the website looks cluttered.”
The offering shows what’s possible when AI is applied to accessibility. While many think of accessibility as relevant only to the vision or hearing impaired, with AI it can accommodate other challenges as well. For instance, the Adobe Express AI Assistant can adjust a design to be less cluttered for those with ADHD, autism or other sensory issues. It can also simply be helpful to people as they age, she added.
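On the frontend, some of those adaptations can start with preferences the browser already exposes. Below is a minimal sketch using the standard `prefers-reduced-motion` and `prefers-contrast` CSS media features; the class names it toggles are hypothetical stand-ins for an app’s own styling hooks.

```typescript
// Sketch: adapt a UI to accessibility preferences the browser already exposes.
// The media queries are standard CSS media features; the class names
// ("no-animations", "high-contrast") are hypothetical app-specific hooks.
function applyUserPreferences(root: HTMLElement): void {
  // Respect reduced motion for users with vestibular or sensory sensitivities.
  if (window.matchMedia("(prefers-reduced-motion: reduce)").matches) {
    root.classList.add("no-animations");
  }
  // Honor a request for higher contrast.
  if (window.matchMedia("(prefers-contrast: more)").matches) {
    root.classList.add("high-contrast");
  }
}

applyUserPreferences(document.body);
```

An AI assistant can go further than static preference flags, but the same principle applies: the interface adapts to the user rather than the other way around.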
“Just imagine that you have [an] agent where you only have a voice command; you’re just talking and it is … giving you the results,” she said. “All these are use cases that can be served with adaptive technology.”
While AI does introduce the risk of hallucinations, Agarwal sees that as a lesser evil than having no text descriptions or support at all.
“It expands to cognitive disabilities, for example, things like ADHD, dyslexia, which are not really talked about right now; it’s underrepresented.”
— Navya Agarwal, senior software engineer and technical lead at Adobe
As the tech world moves toward agentic AI, she foresees users having a digital personal shopping assistant that helps them find clothes based on preferred parameters.
Benefits to Developers
With AI, developers are no longer limited to tactics that only assist the vision or hearing impaired, she said. Instead, users can tell the assistant what accommodations they need, and the AI can provide them. That means users don’t have to tolerate a cluttered site or toolbar, for example; they can simply talk to the web using voice commands or written prompts. Some screen readers have already added a feature that lets users request an image description from ChatGPT or Claude without having to switch context, she said.
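Voice-driven interaction, at least, is already within reach of a frontend developer today. Here is a hedged sketch using the browser’s Web Speech API, which remains vendor-prefixed in some browsers; the `handleCommand` hook is a hypothetical stand-in for an app’s assistant logic.

```typescript
// Sketch: a voice-command entry point via the Web Speech API.
// SpeechRecognition is still vendor-prefixed in some browsers, hence the fallback.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

// Hypothetical hook into the app's assistant; here it just logs the command.
function handleCommand(command: string): void {
  console.log(`Voice command received: ${command}`);
}

if (SpeechRecognitionImpl) {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.onresult = (event: any) => {
    // The first result's first alternative is the most likely transcript.
    const command: string = event.results[0][0].transcript;
    handleCommand(command);
  };
  recognition.start();
}
```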
Previously, developers could only give an image a simple alt-text description: “this is a long-sleeve knitted jumper in black that’s 100% cotton,” for instance.
“But it doesn’t tell you so many different things, whether it’s lightweight or whether it’s chunky, etc.,” Agarwal said. “As AI enters the system, now we can just simply have our image being described in the context by using ChatGPT or Claude. Basically, my screen reader already has a feature that lets me request an image description from ChatGPT or Claude without having to switch context to do it.”
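From the frontend side, that pattern might look something like the sketch below. The `/api/describe-image` route and the response shape are hypothetical stand-ins for whatever vision-capable model (ChatGPT, Claude or another) a product integrates; only the DOM handling is standard.

```typescript
// Sketch: asking a vision-capable model for a richer image description
// and applying it as alt text. The backend route and response shape are
// hypothetical; a real integration would call the model provider's API.
async function describeImage(img: HTMLImageElement): Promise<void> {
  const response = await fetch("/api/describe-image", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: img.src,
      prompt: "Describe this product for a shopper using a screen reader.",
    }),
  });
  // Assumed response shape: { description: string }
  const { description } = await response.json();
  img.alt = description; // e.g. "A chunky, loose-fit black jumper in ribbed cotton knit"
}
```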
Incorporating accessibility also offers benefits to developers themselves, she added.
“By embedding more equitable practices into our product development process up front, rather than as an afterthought, we can enable teams to launch products faster, with lower risk and greater success for broader audiences,” Agarwal said.
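One concrete way teams embed those practices up front is automated accessibility testing in the development pipeline. The sketch below uses jest-axe, a Jest wrapper around the axe-core engine; the toolbar markup is a hypothetical example. Automated checks catch only a subset of accessibility issues, so they complement rather than replace manual and assistive-technology testing.

```typescript
// Sketch: shifting accessibility left with an automated check in the
// test suite, using jest-axe (a wrapper around the axe-core engine).
import { axe, toHaveNoViolations } from "jest-axe";

expect.extend(toHaveNoViolations);

test("toolbar markup has no detectable accessibility violations", async () => {
  // Hypothetical component markup; note the ARIA roles and labels.
  const html = `
    <div role="toolbar" aria-label="Editing tools">
      <button aria-label="Undo">↺</button>
      <button aria-label="Redo">↻</button>
    </div>`;
  expect(await axe(html)).toHaveNoViolations();
});
```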