One thing that it’s best the world never hears is my thoughts. Nope, keep them locked away up there in my big head, bouncing around saying terrible things like the true monster that I am, nobody ever able to hear them, to see the real me.
So why on earth would I connect a big hulking bit of space-age tech to my head that can read my thoughts? Why would that be a thing that I’d do? It wouldn’t, that’s why – but it would be a thing that some clever clogs over at MIT in the US would do. Because they’ve done it.
The researchers have invented a snazzy white gadget that clips onto the side of your face and picks up the words that you say in your head. The device contains electrodes that detect ‘neuromuscular signals’ in your face and jaw that are supposedly triggered by internal verbalisations – that is, by saying words in your head. It’s all too minuscule for even you – who literally lives inside your own head – to notice, but this tech will pick it up.
The head-strap also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear, without actually blocking the ear canal. This essentially means you can hear everything around you, as well as whatever the system is telling you. Certainly not the kind of thing to drive you to madness or anything.
Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the tech, said:
“The motivation for this was to build an IA device — an intelligence-augmentation device.
“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
Pattie Maes, Kapur’s thesis advisor, reiterates:
“We basically can’t live without our cellphones, our digital devices.
“But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.”
Thad Starner, a professor in Georgia Tech’s College of Computing says:
“I think that they’re a little underselling what I think is a real potential for the work.
“Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to?
“[Or for] people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”
I mean, sure, it might make things easier in the short term, but let’s look at the bigger picture here – this will not end well.
A bad idea from a cabal of megalomaniacs, this – my brain is the only private space I have left, and if I want to sit at work and think about nothing else but all the cheese I’m going to eat when I get home, that’s nobody’s business but mine. I can’t have Amazon Prime turning up to the office with a wheel of Camembert every day – people would think I’m some sort of dairy pervert.
AND I’M NOT. AS I KEEP TELLING YOU.