
Written by Devin Coldewey

Scientists pull speech directly from the brain

In a feat that could eventually unlock the possibility of speech for people with severe medical conditions, scientists have successfully recreated the speech of healthy subjects by tapping directly into their brains. The technology is a long, long way from practical application but the science is real and the promise is there.

Edward Chang, neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team’s work in a press release: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

To be perfectly clear, this isn’t some magic machine that you sit in and it translates your thoughts into speech. It’s a complex and invasive process that decodes not exactly what the subject is thinking, but what they actually said.

Led by speech scientist Gopala Anumanchipalli, the experiment involved subjects who had already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky people read several hundred sentences aloud while the electrodes recorded the brain signals that accompanied the speech.

The electrode array in question

See, it happens that the researchers know a certain pattern of brain activity that comes after you think of and arrange words (in cortical areas like Wernicke’s and Broca’s) and before the final signals are sent from the motor cortex to your tongue and mouth muscles. There’s a sort of intermediate signal between those stages, one that Anumanchipalli and his co-author, grad student Josh Chartier, had previously characterized and thought might work for reconstructing speech.

Analyzing the audio directly let the team determine which muscles and movements would be involved when (this is pretty established science), and from this they built a sort of virtual model of the person’s vocal system.

They then mapped the brain activity detected during the session to that virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It’s important to understand that this isn’t turning abstract thoughts into words — it’s understanding the brain’s concrete instructions to the muscles of the face, and determining from those which words those movements would be forming. It’s brain reading, but it isn’t mind reading.
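To make the two-stage idea concrete, here is a minimal sketch of that kind of pipeline. Everything in it is illustrative: the data is synthetic, the names and feature counts are invented, and plain least-squares regression stands in for the recurrent neural networks the team actually used. It shows only the structure of the approach: brain activity is decoded into vocal-tract movements, and those movements are then rendered as acoustic features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real signals (all names and shapes are illustrative):
#   neural:       ECoG-like features, one row per time step
#   articulation: vocal-tract kinematics (e.g. lip and tongue trajectories)
#   acoustics:    speech features a synthesizer could turn into audio
T, n_neural, n_artic, n_acoustic = 200, 16, 6, 8
neural = rng.normal(size=(T, n_neural))
articulation = neural @ rng.normal(size=(n_neural, n_artic)) \
    + 0.01 * rng.normal(size=(T, n_artic))
acoustics = articulation @ rng.normal(size=(n_artic, n_acoustic)) \
    + 0.01 * rng.normal(size=(T, n_acoustic))

# Stage 1: learn a map from neural activity to articulator movements
# (least squares here, in place of the paper's recurrent networks).
A_hat, *_ = np.linalg.lstsq(neural, articulation, rcond=None)

# Stage 2: learn a map from articulator movements to acoustic features.
B_hat, *_ = np.linalg.lstsq(articulation, acoustics, rcond=None)

# Decoding: a recording of a brain drives the virtual vocal tract,
# whose movements are then rendered as sound features.
decoded_acoustics = (neural @ A_hat) @ B_hat
```

The key design point, as in the study, is that the middle representation is muscle movement, not meaning: the model never sees "thoughts," only the motor instructions that would have produced speech.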

The resulting synthetic speech, while not exactly crystal clear, is certainly intelligible. And set up correctly, it could be capable of outputting 150 words per minute from a person who may otherwise be incapable of speech.

“We still have a ways to go to perfectly mimic spoken language,” said Chartier. “Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

For comparison, a person so afflicted, for instance with a degenerative muscular disease, often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more disabled individuals going even slower. It’s a miracle in a way that they can communicate at all, but this time-consuming and less than natural method is a far cry from the speed and expressiveness of real speech.

If a person were able to use this method, they would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it’s not a magic bullet.

The problem with this method is that it requires a great deal of carefully collected data from what amounts to a healthy speech system, from brain to tip of the tongue. For many people that data can no longer be collected, and for others the invasive collection process will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever speaking rule the method out entirely.

The good news is that it’s a start, and there are plenty of conditions it would work for, theoretically. And collecting that critical brain and speech recording data could be done preemptively in cases where a stroke or degeneration is considered a risk.

LEGO Braille bricks are the best, nicest and, in retrospect, most obvious idea ever

Braille is a crucial skill to learn for children with visual impairments, and with these LEGO Braille Bricks, kids can learn through hands-on play rather than more rigid methods like Braille readers and printouts. Given the naturally Braille-like structure of LEGO blocks, it’s surprising this wasn’t done decades ago.

The truth is, however, that nothing can be obvious enough when it comes to marginalized populations like people with disabilities. But sometimes all it takes is someone in the right position to say “You know what? That’s a great idea and we’re just going to do it.”

It happened with the BecDot (above), and it seems to have happened at LEGO. Stine Storm led the project, but Morten Bonde, whose own vision is degenerating, helped guide the team with the passion and insight that only come with personal experience.

In some remarks sent over by LEGO, Bonde describes his drive to help:

When I was contacted by the LEGO Foundation to function as internal consultant on the LEGO Braille Bricks project, and first met with Stine Storm, where she showed me the Braille bricks for the first time, I had a very emotional experience. While Stine talked about the project and the blind children she had visited and introduced to the LEGO Braille Bricks I got goose bumps all over the body. I just knew that I had to work on this project.

I want to help all blind and visually impaired children in the world dare to dream and see that life has so much in store for them. When, some years ago, I was hit by stress and depression over my blind future, I decided one day that life is too precious for me not to enjoy every second of. I would like to help give blind children the desire to embark on challenges, learn to fail, learn to see life as a playground, where anything can come true if you yourself believe that they can come true. That is my greatest ambition with my participation in the LEGO Braille Bricks project.

The bricks themselves are very much like the originals, specifically the common 2×4 blocks, except they don’t have the full eight “studs” (so that’s what they’re called). Instead, they have the letters of the Braille alphabet, which happens to fit comfortably in a 2×3 array of studs, with room left on the bottom to put a visual indicator of the letter or symbol for sighted people.
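The fit between the Braille cell and the brick is easy to see in code. The dot data below is the standard Braille alphabet (dots 1–3 run down the left column, 4–6 down the right; only the first few letters are included here), while the grid-rendering helper and its names are our own illustration of how each letter lands on a 2×3 stud array.

```python
# Standard Braille dot assignments for the letters a-f. Dot numbering:
# 1-3 down the left column, 4-6 down the right.
DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4},
        "d": {1, 4, 5}, "e": {1, 5}, "f": {1, 2, 4}}

def stud_grid(letter):
    """Render a letter as the 2x3 pattern of studs on a Braille brick.

    'o' marks a raised stud, '.' marks an empty position.
    """
    dots = DOTS[letter]
    rows = []
    for row in range(3):
        left, right = row + 1, row + 4  # dot numbers in this row
        rows.append("".join("o" if d in dots else "." for d in (left, right)))
    return "\n".join(rows)

print(stud_grid("d"))
# d is dots 1, 4 and 5:
# oo
# .o
# ..
```

Since every letter fits in the 2×3 grid, the bottom row of a standard 2×4 brick is left free, which is where LEGO prints the visual indicator for sighted players.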

They’re compatible with ordinary LEGO bricks and, of course, can be stacked and attached to one another, though not with quite the same versatility as an ordinary block, since some symbols have fewer studs. You’ll probably want to keep them separate from your regular bricks, since they’re more or less identical unless you inspect them individually.

All told, the set, which will be provided for free to institutions serving vision-impaired students, will include about 250 pieces: A-Z (with regional variants), the numerals 0-9, basic operators like + and =, and some “inspiration for teaching and interactive games.” Perhaps some specialty pieces for word games and math toys, that sort of thing.

LEGO was already one of the toys that can be enjoyed equally by sighted and vision-impaired children, but this adds a new layer (or, I suppose, re-engineers an existing and proven one) to extend and specialize the decades-old toy for a group that already seems to have taken to it:

“The children’s level of engagement and their interest in being independent and included on equal terms in society is so evident. I am moved to see the impact this product has on developing blind and visually impaired children’s academic confidence and curiosity already in its infant days,” said Bonde.

Danish, Norwegian, English and Portuguese blocks are being tested now, with German, Spanish and French on track for later this year. The kit should ship in 2020 — if you think your classroom could use these, get in touch with LEGO right away.

Alphabet’s Wing gets FAA permission to start delivering by drone

Wing Aviation, the drone-based delivery startup born out of Google’s X labs, has received the first FAA certification in the country for commercial carriage of goods. It might not be long before you’re getting your burritos sent par avion.

The company has been performing tests for years, making thousands of flights and supervised deliveries to show that its drones are safe and effective. Many of those flights were in Australia, where in suburban Canberra the company recently began its first commercial operations. Launches in Finland and other countries are also in the works.

Wing’s first operations, starting later this year, will be in Blacksburg and Christiansburg, Va.; obviously an operation like this requires close coordination with municipal authorities as well as federal ones. You can’t just get a permission slip from the FAA and start flying over everyone’s houses.

“Wing plans to reach out to the local community before it begins food delivery, to gather feedback to inform its future operations,” the FAA writes in a press release. Here’s hoping that means you can choose whether or not these loud little aircraft will be able to pass through your airspace.

Although the obvious application is getting a meal delivered quickly even when traffic is bad, there are plenty of other applications. One imagines quick delivery of medications ahead of EMTs, or blood being transferred quickly between medical centers.

I’ve asked Wing for more details on its plans to roll this out elsewhere in the U.S. and will update this story if I hear back.