
Wearable system helps visually impaired users navigate

Device provides information from a 3-D camera, via vibrating motors and a Braille interface.

Image: New algorithms power a prototype system for helping visually impaired users avoid obstacles and identify objects. (Image courtesy of the researchers.)

Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths.

White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that uses a 3-D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface to give visually impaired users more information about their environments.

The system could be used in conjunction with or as an alternative to a cane. In a paper they’re presenting this week at the International Conference on Robotics and Automation, the researchers describe the system and a series of usability studies they conducted with visually impaired volunteers.

“We did a couple of different tests with blind users,” says Robert Katzschmann, a graduate student in mechanical engineering at MIT and one of the paper’s two first authors. “Having something that didn’t infringe on their other senses was important. So we didn't want to have audio; we didn’t want to have something around the head, vibrations on the neck — all of those things, we tried them out, but none of them were accepted. We found that the one area of the body that is the least used for other senses is around your abdomen.”

Katzschmann is joined on the paper by his advisor Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; his fellow first author Hsueh-Cheng Wang, who was a postdoc at MIT when the work was done and is now an assistant professor of electrical and computer engineering at National Chiao Tung University in Taiwan; Santani Teng, a postdoc in CSAIL; Brandon Araki, a graduate student in mechanical engineering; and Laura Giarré, a professor of electrical engineering at the University of Modena and Reggio Emilia in Italy.

Parsing the world

The researchers’ system consists of a 3-D camera worn in a pouch hung around the neck; a processing unit that runs the team’s proprietary algorithms; the sensor belt, which has five vibrating motors evenly spaced around its forward half; and the reconfigurable Braille interface, which is worn at the user’s side.

The key to the system is an algorithm for quickly identifying surfaces and their orientations from the 3-D-camera data. The researchers experimented with three different types of 3-D cameras, which used three different techniques to gauge depth but all produced relatively low-resolution images — 640 pixels by 480 pixels — with both color and depth measurements for each pixel.

The algorithm first groups the pixels into clusters of three. Because the pixels have associated location data, each cluster determines a plane. If the orientations of the planes defined by five nearby clusters are within 10 degrees of each other, the system concludes that it has found a surface. It doesn’t need to determine the extent of the surface or what type of object it’s the surface of; it simply registers an obstacle at that location and begins to buzz the associated motor if the wearer gets within 2 meters of it.
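The paper describes the surface test only at this level of detail, but the geometry is straightforward to sketch. Below is a minimal Python/NumPy illustration, not the researchers' implementation: each triple of depth pixels defines a plane via a cross product, and five neighboring triples whose normals agree to within 10 degrees are registered as a surface. Grouping consecutive points stands in for true image-neighborhood clustering, and all function names are hypothetical.

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three 3-D points."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    return n / norm if norm > 1e-9 else None

def is_surface(cluster_normals, tol_deg=10.0):
    """True if the plane orientations of nearby clusters all agree
    to within tol_deg degrees of each other."""
    for i in range(len(cluster_normals)):
        for j in range(i + 1, len(cluster_normals)):
            cos = abs(np.dot(cluster_normals[i], cluster_normals[j]))
            if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > tol_deg:
                return False
    return True

def detect_obstacles(points, buzz_range_m=2.0):
    """Group 3-D points into triples, fit a plane to each triple, and
    flag a surface when five nearby triples agree in orientation.
    `points` is an (N, 3) NumPy array from the depth camera; grouping
    by image neighborhood is simplified to consecutive triples here."""
    obstacles, normals, centers = [], [], []
    for i in range(0, len(points) - 2, 3):
        n = plane_normal(points[i], points[i + 1], points[i + 2])
        if n is not None:
            normals.append(n)
            centers.append(points[i:i + 3].mean(axis=0))
    # Slide a window of five clusters; on agreement, register an obstacle.
    for i in range(len(normals) - 4):
        if is_surface(normals[i:i + 5]):
            center = np.mean(centers[i:i + 5], axis=0)
            if np.linalg.norm(center) <= buzz_range_m:
                obstacles.append(center)  # would trigger the matching belt motor
    return obstacles
```

Taking the absolute value of the dot product makes the orientation comparison insensitive to the arbitrary winding order of each pixel triple.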

Chair identification is similar but a little more stringent. The system needs to complete three distinct surface identifications, in the same general area, rather than just one; this ensures that the chair is unoccupied. The surfaces need to be roughly parallel to the ground, and they have to fall within a prescribed range of heights.
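That chair test can be rendered in the same style, again as a hedged sketch rather than the paper's method: horizontal surfaces are those whose normals point nearly straight up, and three of them clustered within a small radius and a seat-height band count as a vacant chair. The height band and radius below are illustrative guesses, not values from the paper.

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])  # assumed gravity-aligned z-axis

def is_horizontal(normal, tol_deg=10.0):
    """A surface is roughly parallel to the ground if its unit normal
    is within tol_deg degrees of vertical."""
    cos = abs(np.dot(normal, UP))
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))) <= tol_deg

def is_vacant_chair(surfaces, min_h=0.35, max_h=0.55, area_radius_m=0.5):
    """`surfaces` is a list of (center, normal) pairs already found by
    the surface detector. A chair requires three distinct horizontal
    surfaces in the same general area, inside the seat-height band."""
    candidates = [c for c, n in surfaces
                  if is_horizontal(n) and min_h <= c[2] <= max_h]
    if len(candidates) < 3:
        return False
    centroid = np.mean(candidates, axis=0)
    near = [c for c in candidates
            if np.linalg.norm(c - centroid) <= area_radius_m]
    return len(near) >= 3
```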

Tactile data

The belt motors can vary the frequency, intensity, and duration of their vibrations, as well as the intervals between them, to send different types of tactile signals to the user. For instance, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction indicated by that particular motor. But when the system is in chair-finding mode, for example, a double pulse indicates the direction in which a chair with a vacant seat can be found.
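One plausible way to encode those rules in software, purely as an assumed mapping (the article gives the qualitative behavior but not the actual curves, units, or signal names):

```python
def motor_signal(distance_m, mode="navigate", max_range_m=2.0):
    """Map obstacle distance to a vibration pattern for one belt motor.
    Closer obstacles get higher frequency and intensity; chair-finding
    mode uses a double pulse instead. All values are illustrative."""
    if distance_m > max_range_m:
        return None  # out of range: motor stays off
    proximity = 1.0 - distance_m / max_range_m  # 0 = far .. 1 = near
    if mode == "chair":
        # Double pulse signals the direction of a vacant chair.
        return {"pattern": "double_pulse", "intensity": 0.5 + 0.5 * proximity}
    return {
        "pattern": "continuous",
        "frequency_hz": 50 + 200 * proximity,  # rises as the obstacle nears
        "intensity": 0.2 + 0.8 * proximity,
    }
```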

The Braille interface consists of two rows of five reconfigurable Braille pads. Symbols displayed on the pads describe the objects in the user’s environment — for instance, a “t” for table or a “c” for chair. The symbol’s position in the row indicates the direction in which it can be found; the row it appears in indicates its distance. A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide.
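Those layout rules translate naturally into a small rendering function. The sketch below assumes a 2 x 5 grid in which column encodes bearing and row encodes a near/far distance band; the object codes follow the article, while the banding, bearing range, and function name are assumptions.

```python
def braille_frame(objects, n_cols=5, max_range_m=4.0):
    """Lay out detected objects on a 2 x 5 grid of Braille pads.
    Column encodes bearing (left to right); row encodes a distance
    band (near row first). Each object is (symbol, bearing_deg,
    distance_m), e.g. ('c', 30, 1.5) for a chair ahead-right."""
    grid = [[" "] * n_cols for _ in range(2)]
    for symbol, bearing_deg, distance_m in objects:
        # Map a bearing in [-90, 90] degrees onto the five columns.
        col = min(n_cols - 1, int((bearing_deg + 90) / 180 * n_cols))
        row = 0 if distance_m < max_range_m / 2 else 1
        grid[row][col] = symbol
    return grid

# Example: a chair 30 degrees right at 1.5 m, a table slightly left at 3 m.
print(braille_frame([("c", 30, 1.5), ("t", -15, 3.0)]))
# [[' ', ' ', ' ', 'c', ' '], [' ', ' ', 't', ' ', ' ']]
```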

In tests, the chair-finding system reduced subjects’ contacts with objects other than the chairs they sought by 80 percent, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86 percent.

Press Mentions

Fox News

FOX News reporter Grace Williams writes that MIT researchers have developed a new system to assist people with visual impairments in navigating their surroundings. “We wanted to primarily complement the white cane to allow users with visual impairments to quickly assess their environment in a contactless manner,” explains graduate student Robert Katzschmann. 

BBC News

Prof. Daniela Rus and graduate student Robert Katzschmann speak with BBC reporter Gareth Mitchell about the device they developed to help the visually impaired navigate. Rus explains that they applied the technologies used for autonomous driving to develop a system that can “guide a visually impaired person in the same way a suite of sensors can guide a self-driving car.”

TechCrunch

TechCrunch reporter Brian Heater writes that MIT researchers have developed a vibrating wearable device to help people with visual impairments navigate. “In a world where computers help us with everything from navigating space travel to counting the steps we take in a day, I think we can do better to support visually impaired people,” explains Prof. Daniela Rus.

Fortune

Fortune reporter Aaron Pressman highlights how MIT researchers have developed a new wearable device to help visually impaired people navigate and avoid obstacles. Pressman writes that CSAIL researchers are “combining cutting edge techniques from 3D cameras and image recognition software to build an automated navigation system for the visually impaired.”

Boston Herald

Boston Herald reporter Jordan Graham writes that MIT researchers have developed a wearable device aimed at helping visually impaired users navigate their environments. The system is equipped with “a 3-D camera, a vibration pack and an electronic braille screen that will tell users not just where things are — but what they are.”
