
Opinion | I Have a Choice to Make About My Blindness

I recently visited a local Mexican restaurant with my family. It was the first time we’d gone out together for a meal since the start of the pandemic. As I gazed up at the familiar menu boards hanging behind the counter, I realized with some dismay that I could no longer read them.

I could still make out the headlines — information I already knew, like the fact that the restaurant served burritos and tacos and beverages. But all the text below those headings was indecipherable. This has become a common occurrence as I enter the late stages of retinitis pigmentosa, an untreatable degenerative retinal disease that over decades has been destroying my vision.

In that moment, I had a choice: I could pull out my phone and try to use its magnification or text-to-speech capabilities to read the menu, or ask my family for help. There’s a powerful tension between the independence facilitated by assistive technologies and the possibility of interdependence that can emerge from the exchange between disabled and non-disabled people. This tension has never been more pronounced than today, when advances in technology stand to usher in an unprecedented era of independence for disabled users.

In the last few years, a new category of technology for the blind has emerged, called “visual interpreters.” With the Be My Eyes app, a blind person can point her phone at something she can’t see — a pair of pants, for instance, which may or may not match her shirt — and connect her phone’s camera to the screen of a sighted volunteer who can talk her through the situation. Even if, technically speaking, the blind person is still relying on someone else for help, the anonymity and digital frictionlessness of the app experience create the feeling of an automated solution to the problem.

Advances in machine vision, like the astonishingly powerful image-recognition capabilities of modern A.I., are erasing even these human actors from the equation. This year, Be My Eyes released a beta version of a service called the Virtual Volunteer, which replaces the human at the other end of the line with A.I. (powered by OpenAI’s GPT-4 model). A blind beta tester pointed his camera at a frozen meal, and the A.I. read him the description of the contents printed on the package, including the expiration date and the size of the meal.

But the pitfalls of artificial intelligence are as present in the assistive-tech sphere as they are in the rest of society. As delighted as blind beta testers of OpenAI’s visual interpreter were, the tool also made some obvious mistakes: As Kashmir Hill recently reported in The New York Times, it confidently described a remote control for a blind user, including buttons that weren’t there. When another beta tester showed the tool the contents of a fridge, asking for recipe ideas, it recommended “whipped cream soda” and a “creamy jalapeño sauce.” And OpenAI recently decided to blur people’s faces in the photos that the blind beta testers were uploading, severely limiting the Virtual Volunteer’s social utility for a blind user.

The visual world of information that is inaccessible to blind people is impossibly vast — think of every image and video and text that’s uploaded to the internet, let alone all the information that fills our offline world. (According to the World Blind Union, 95 percent of the world’s published knowledge is “locked” in inaccessible print formats.) This constantly refreshing storehouse of information, most of it difficult if not impossible for people with visual or print disabilities to access, makes a universal technological solution seem like the only path forward. But in spite of technology’s well-documented power to transform the lives of people with disabilities, it cannot be the only solution.

Machine-vision bots have begun to automatically describe images online, but the results are still wildly variable — on Facebook, when my screen reader encounters photos of my friends and family, it invariably offers howlers like “image may contain: fruit.” If people wrote their own image descriptions, I’d get a much clearer sense of what was going on, with far more context. Likewise, companies such as accessiBe and AudioEye have amassed millions of dollars offering “accessibility overlays” and widgets that claim to automatically fix websites that are broken for their disabled users (and thus help the sites avoid costly A.D.A. lawsuits) with a few lines of A.I.-generated code. But frequently, accessibility overlays have made websites even more difficult for blind users to navigate. The solution, many advocates suggest, is to rely less on A.I. and instead hire human accessibility experts to design websites with disability in mind from the outset. Again, people must remain part of the process.

Waiting in line for dinner this summer, I felt unwilling to pull out my phone to use any of the cybernetic solutions available to help me decipher the menu. I decided to just ask my wife, Lily, to tell me about the taco options. Our son, Oscar, who’s 10, interrupted her: “Let me do it!” He proudly read the various taco descriptions to me, and we both set to discussing which ones sounded good. Relying on Oscar to read the menu didn’t feel anything like a loss of independence. It was a fun, affectionate dialogue — a shared experience with a loved one, which was, beyond basic sustenance, the real reason we were there. His eyes and ears and brain are far superior sensors to any assistive device out there, and he’s far more charming to interact with.

Independence is essential for everyone, and especially for disabled people, whom the world tends to look at with pity, revulsion and exceedingly low expectations. I’m eager to see how technology enables that independence in entirely new ways. But there is irreplaceable value in interdependence, too — the feeling of shared experience that comes when two people interact and exchange ideas and abilities. Oscar may have read me the menu, but I helped him interpret it and figure out what he wanted to eat, too.

I think this is what the disability-justice educator Mia Mingus means when she talks about “access intimacy” — an idea, she says, that “reorients our approach from one where disabled people are expected to squeeze into able-bodied people’s world, and instead calls upon able-bodied people to inhabit our world.” I know that A.I. will be transformative in its ability to restore some of the independence that blindness threatens to take from me. But I hope I won’t lose sight of this other experience, too, of the moments of intimacy and exchange that appear when two people come together to collectively explore parts of the world they couldn’t have encountered, in quite the same way, on their own.

Andrew Leland is the author of “The Country of the Blind: A Memoir at the End of Sight.”

