Neuroscientist David Eagleman has been working with a team of engineers to design a device that will convert the frequencies of sound waves into vibrations that deaf people will be able to feel and interpret. These vibrations will allow the deaf to actually "hear" words that are spoken to them. This sounds like an amazing technological development, and I've posted some excerpts from the article in The Atlantic:
The VEST, or the Versatile Extra-Sensory Transducer, is a wearable tool that allows the deaf to, as Eagleman puts it, "feel" speech. An app downloaded onto a smartphone or tablet with a microphone will pick up sounds and send them via Bluetooth to the vest. The vest will then "translate" those sounds into a series of vibrations that reflect the frequencies picked up by the mic, using a network of transducers (devices that convert the signals into vibrations). So, if you spoke to the person wearing the vest, that person would "feel" what you're saying through vibrations on their back, instead of through their ears.
But Eagleman is quick to point out that the vest isn't just translating the sounds into a code — the patterns felt aren't a "language" to be interpreted like braille. In fact, the device doesn't use a specific language; it responds to all ambient noises and sounds.
"What you're feeling is not code for a letter or a word...you're actually feeling a representation of the sound."
"The pattern of vibrations that you're feeling [while wearing the VEST] represent the frequencies that are present in the sound," he said. "What that means is what you're feeling is not code for a letter or a word — it's not like morse code — but you're actually feeling a representation of the sound."
So far, it all works, Eagleman said. The team tested a prototype on a 37-year-old deaf man who, after five days of wearing the VEST, understood the words said to him out loud by feeling the vibrations because, as Eagleman put it, "his brain is starting to unlock what the data mean."
That "unlocking" phenomenon, like adding a new sense, is hard to explain. How do a series of vibrations that supposedly reflect sound eventually have meaning when there's no language assigned to them? How does the brain on the first day have no idea what a couple of vibrations on, say, the lower back means, but by the fifth day, know that they form a specific word?
This leads me to wonder whether a bat's sonar system might work this way and whether something similar could be designed to aid blind people in navigating their environment.
"My view is that the brain is a general-purpose computational device," Eagleman told me. "You could take any kind of data stream and the brain will figure it out. I consider it the biggest miracle no one's heard of."
How does the brain perform this astonishing miracle? More specifically, how did such an ability ever evolve through chance and natural selection, since there was clearly no need for the brain to manipulate data streams in this fashion until now? I suppose we should put our doubts aside and just will ourselves to have faith in the capability of blind, purposeless forces to effect such miracles.