
Digital accessibility and accessible digital tools in everyday life

Banner featuring a smiling man, smartphone, and digital icons, titled “Accessible Digital Tools in Everyday Life.”

A digital tool can be a hardware device (PC, smartphone, tablet), a piece of software (apps, management software), or an online service (cloud, e-banking). These tools are essential for learning, communication, and automation.

When we talk about accessibility, though, we're not just talking about people with disabilities. We're talking about the ability of these tools to be truly usable by the greatest possible number of people, with different conditions, different skills, and different needs. Let me explain: you don't need a certified disability to run into barriers that prevent you from using a digital tool. Lighter websites and simpler interfaces allow navigation even for those with slow connections, older devices, or little familiarity with technology.

Then there are generational factors: older people don't necessarily have a disability, but they may have naturally declining vision or less confidence with technology. You know those phones with big buttons? They exist for a reason. And cultural factors: we don't all have the same level of education, but we should all be able to access institutional portals and websites without first earning a master's degree in computer engineering.

In short, accessibility transforms a digital product into a democratic tool, available to everyone.

But what are the latest innovations in digital accessibility? I asked Google and I'm passing on their answers to you, because I'm not subscribed to a special newsletter that keeps me up to date. I'm one of you. Even a little less so.


THE APPLE ECOSYSTEM


The American multinational based in California (lucky them, my house is in front of a drain) has standardized its systems, with many contextual controls that appear when needed, leaving more space for the content.

Essentially, iPhone, Mac, and iPad menus appear only on demand without cluttering the screen, giving priority to content enjoyment.

Here's a quick example: Have you ever tried typing a word on your iPhone and then wanted to copy or delete it? I imagine so. Until you highlight it by long-pressing on it, the menu that allows you to copy, cut, or paste it doesn't appear.

Many people struggle when faced with a screen full of icons, notifications, and complex menus, whether it's due to a lack of familiarity with technology or a cognitive disability.

People with visual impairments often use magnification, and a menu can take up half the screen, obscuring the content.

Screen reader software (programs that read the screen aloud for blind users) often can't distinguish between main content and menus, and ends up reading everything on the screen, making navigation slower, more tiring, and less efficient.

And then there's the issue of optimization: for task-oriented users (those who access an app to do a specific thing on the fly, like paying for parking or sending an email), not having a thousand things on the screen makes everything more intuitive.

One last example, but there are many more: think about home automation. Hiding secondary menus allows you to display larger, more readable information, and instead of navigating through three submenus to turn on the kitchen light, you create a single, large, dynamic button on the home screen that only appears when you're home.


There are also a number of other features that I'll quickly summarize for you, such as direct integration with hearing aids for audio streaming and settings control, or AssistiveTouch, which creates a virtual floating button on the screen, useful for those who have difficulty using physical buttons, complex gestures, or a broken Home button. It allows you to simulate presses and swipes, use Siri, lock the screen, take screenshots, and customize actions, reducing wear and tear on physical buttons.

Not to mention what has always been Apple's strength (no, I'm definitely not talking about the price 😅): the software and hardware optimization that allows the various Apple devices to work together without too many problems and run apps that would never run on other devices (with the same power), because they are all children of the same parents.


An example: try editing a 4K video packed with effects on a MacBook Air, first in Premiere Pro (a third-party app) and then in Final Cut (Apple's own app). The video file is the same, and so is the computer, but in Premiere Pro everything stutters and crashes, while in Final Cut it feels like you're sitting at a $4,000, unstoppable machine. The power of hardware and software having the same parent.


ANDROID


The competition, for its part, doesn't have the advantage of its own offspring on which to build an ecosystem: you know better than I do that Android is an operating system found on countless smartphone models, each from a different manufacturer. Samsung, Oppo, Motorola, LG, and so on.

However, it has overcome this limitation by integrating Google's artificial intelligence.

The TalkBack screen reader has been enhanced with Gemini's multimodal capabilities. With a simple gesture, users can get a detailed description of the entire screen, including graphics, photos, or icons without alt text.

The “Ask Gemini” feature allows you to ask specific questions about the screen you are viewing, either by voice or via the keyboard, and interact more naturally with apps.

Real-time subtitles use AI to analyze audio and transcribe not just words, but also emotions and audio context. In fact, if the speaker intentionally elongates a word, like "nooooooo," the feature notices and writes it that way.

The AI detects volume levels, displaying subtitle text in all caps to emphasize shouting, and captures background sounds of human interactions like sighs, clapping, or coughing and reports those, too.
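To make the idea concrete, here's a toy sketch in Python of how a captioner might render loudness and background sounds. It's purely illustrative: the decibel threshold and the bracket notation are my assumptions, not a description of Google's actual pipeline.

```python
def format_caption(text, loudness_db, background_sounds=()):
    """Render one caption line the way an AI captioner might:
    shouting (high loudness) becomes ALL CAPS, and non-speech
    sounds are reported in brackets alongside the words."""
    SHOUT_THRESHOLD_DB = 75  # assumed threshold, for illustration only
    line = text.upper() if loudness_db >= SHOUT_THRESHOLD_DB else text
    for sound in background_sounds:
        line += f" [{sound}]"
    return line
```

So a shouted "no way" at 80 dB would come out as "NO WAY", while a quiet "hello" over applause would read "hello [applause]".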

There are then two functions that reduce physical interaction.


AutoClick: When enabled, this feature makes the system click automatically when the cursor remains still for a customizable amount of time. In practice, if for some reason the user is unable to click, they simply drag the cursor to the required spot and, after a few seconds, the software recognizes the need to click and clicks for them.
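The dwell-click logic behind a feature like this can be sketched in a few lines. This is a minimal Python illustration under assumed parameters (dwell time, movement radius), not Android's actual implementation: watch the cursor, and if it stays within a small radius for the dwell time, click once.

```python
import math

def dwell_clicks(samples, dwell_time=1.5, radius=10):
    """Emit a click position whenever the cursor stays within
    `radius` pixels of its anchor point for at least `dwell_time`
    seconds. `samples` is a list of (timestamp, x, y) tuples."""
    clicks = []
    anchor = None    # (t, x, y) where the current dwell started
    clicked = False  # avoid repeated clicks during a single dwell
    for t, x, y in samples:
        if anchor is None or math.hypot(x - anchor[1], y - anchor[2]) > radius:
            anchor = (t, x, y)  # cursor moved: restart the dwell timer
            clicked = False
        elif not clicked and t - anchor[0] >= dwell_time:
            clicks.append((x, y))  # dwell satisfied: click once
            clicked = True
    return clicks
```

The real feature also has to suppress accidental dwells (for example while reading), which is why the dwell time is user-customizable.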


Voice Access: Allows you to navigate an app's interface using your voice. It can be activated by voice, too: just say "Hey Google, launch Voice Access."

 


MICROSOFT


The other distinct competitor has transformed its operating system (the very famous, sometimes infamous, Windows), into a proactive and adaptive system.

How did they do it? More than the technical details, what matters here is the practical result for the user.

But let's see where Windows is going.


Local real-time subtitles: The latest versions of the system are increasingly capable of generating instant subtitles for any audio passing through it (movies, videos, calls, apps). And it does so locally, so latency (the response time: how long it takes to hear the audio and type out the text) is almost zero. Previously, this was only possible through apps that did everything over an internet connection, so with slow round-trip connections, the subtitles weren't exactly real-time.


Voice Access: like Android, but everything is local, like subtitles. Windows itself manages everything, allowing for virtually unlimited control. Users have full control over the entire interface, which can be further customized to meet their motor or visual needs. It can be simplified, enlarged, and do a whole host of other things, without slowing down the computer due to the graphics overhead, because everything is handled by AI.


Eye Tracking and Optical Control: The new Windows "system" is optimized to handle eye tracking without taxing the CPU. Support for technologies like eye tracking and alternative inputs continues to improve, but the actual experience still depends heavily on compatible hardware and the quality of integration.

And all of this, as mentioned, isn't just "additions," like adding salt to pasta. Windows has become salted water, ready for cooking.


META

In March 2026, Meta, the company that owns Facebook and Instagram, added new features to its smart glasses: those Ray-Bans that record videos (sometimes Oakleys).

We're talking about hands-free features that enable something truly revolutionary, like asking the glasses to describe the context around them. This is useful if the user can't see, but also if they simply can't understand what's in front of them. Imagine being at an art exhibition, but unable to get close to the label on a painting to see who painted it or when. "Hey, glasses, whose painting is that? What does it represent?" I mean, not even Tony Stark.


We are facing something more than a simple technological leap: this is a fundamental change in the social and cultural model.

Accessibility has stopped being an isolated technical aspect and is becoming the beating heart of design. We've stopped thinking about who a product is for and started thinking about how many people it can reach and how.

We just have to wait for it to happen on the streets too.


The challenge for the coming years will be to ensure that all this accessibility is accompanied by real privacy and security. Users must be assured that assistive technologies do not become a tool for tracking or discrimination.

Why do I say this? Because wearable devices and assistive technologies constantly monitor biometric data, habits, and movements, creating detailed user profiles.

And the unprotected, unauthorized sharing of health data can lead, for example, to discrimination in the workplace.


Imagine an employee using an eye-tracking device or an adaptive keyboard due to a motor or neurological disability. In order to function, these devices collect data such as reaction times, click frequency, and other information.


Now, imagine a company using that data to evaluate employee productivity and decide whose contracts to renew. Or if an insurance company requested that data to offer the company a policy. Who would be discriminated against?

Welcome to the Matrix. Nope.


In Europe, the issue concerns not only the AI Act (EU Regulation 2024/1689), but also the GDPR, anti-discrimination regulations, and, in many cases, labor law. Because when an assistive technology collects sensitive or behavioral data, the question isn't just what it can do, but who can access that data, for what purposes, and with what guarantees. But I'll leave you with a bit of suspense; we'll talk about that next time.



Support ForAllWe

ForAllWe is an independent project focused on digital accessibility and inclusive technology. If you find our content useful, you can support us with a voluntary contribution. Your support helps us stay independent and involve people with disabilities as authors and testers.



