Amazon released its connected wristband, Halo, on Thursday. It stands out from its competitors through its use of artificial intelligence, which listens to the user's voice to infer their emotional state. An innovation that goes too far?
Halo, what? Amazon made a notable and already controversial entry into the connected-health market on Thursday, August 27, with a new wristband. Called Halo, it resembles the Fitbit and other devices that monitor sleep and measure heart rate. But it introduces two innovations that have drawn strong reactions.
And for good reason: one of Amazon's main selling points is that the wristband can "read" the user's emotions using artificial intelligence. Boredom, elation, hesitation, joy and confusion are among the feelings that Halo's AI is supposed to recognize from the tone of voice.
Emotions, "complex materials to handle" for AI
What's in it for health? In theory, algorithmic recognition and analysis of emotions can "help people who are depressed or who have, for example, anger management problems," according to artificial intelligence experts. The voice recordings could serve as supporting material for the doctor treating the patient, or even alert a person to their own emotional state.
But this technology is not yet fully reliable. "Emotions are very complex to handle [for a neural network] because they depend on the context, the person, their culture and their social environment." This is why, to perform well, algorithms must be finely tuned to the individual whose emotions they are supposed to recognize. A generic product like the Halo wristband can hardly claim to pick up vocal nuances that vary from one individual to another.
Amazon is aware of this and is careful not to present its wristband as a medical device. The Internet giant did not seek approval from the FDA (the US Food and Drug Administration), as Apple did for its connected watch, for example. Halo merely claims to improve its owner's "emotional well-being."
A marketing tool?
Amazon's AI analyzes "the volume of speech, its intensity, tempo and rhythm of the voice to determine what emotions others can detect in the tone," notes the press release introducing the wristband. The idea would be to tell users how an audience might perceive a speech, or how sleep quality can affect their emotional state, the group told The Washington Post.
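Amazon has not detailed how its model works. As a rough sketch of what extracting this kind of vocal feature can look like, here is a toy Python example using the librosa audio library; the features chosen, the thresholds, the labels and the "sample.wav" file are all illustrative assumptions, not Amazon's method.

```python
# Toy sketch: compute the kinds of vocal features the press release
# mentions (volume, intensity, tempo/rhythm) and map them to a crude
# "perceived tone" label. Purely illustrative -- a real system would
# use a model trained on labeled speech, not hand-picked rules.
import numpy as np
import librosa

def describe_tone(wav_path: str) -> str:
    y, sr = librosa.load(wav_path, sr=None)

    # Volume/intensity: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]
    loudness = float(np.mean(rms))

    # Tempo/rhythm proxy: acoustic onsets per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    duration = len(y) / sr
    onset_rate = len(onsets) / max(duration, 1e-6)

    # Pitch variability: fundamental frequency via the YIN estimator.
    f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)
    pitch_var = float(np.std(f0))

    # Arbitrary thresholds, chosen only to make the sketch run.
    if loudness > 0.05 and onset_rate > 3.0:
        return "energetic / excited"
    if pitch_var < 15.0 and onset_rate < 1.5:
        return "flat / subdued"
    return "neutral"

print(describe_tone("sample.wav"))
```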
The idea of an AI on your wrist judging when you seem to be in a bad mood or not "enthusiastic enough" seems "straight out of a dystopia invented for an episode of Black Mirror," CNN noted.
The usefulness of such a gadget is highly questionable for consumers... but much more obvious for Amazon. "It's an ideal marketing tool," stresses an artificial intelligence specialist. With this type of data, the king of e-commerce could easily tailor its offers to its customers' emotional states.
An AI that tracks body fat
Halo doesn't just guess emotions. The app that accompanies the wristband also invites users to photograph themselves - in light clothing, from the front, back and side - to estimate a body fat index (the ratio of fat mass to muscle mass). Amazon claims this is a more relevant indicator of health status than weight or body mass index.
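Amazon has not published its estimation method, but the argument that body composition carries information BMI misses is easy to illustrate. Below is a toy Python calculation with made-up numbers showing two people with identical BMIs but very different body fat percentages:

```python
# Hypothetical numbers showing why body composition and BMI can
# disagree: same height and weight, different fat mass.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def body_fat_pct(fat_mass_kg: float, weight_kg: float) -> float:
    return 100.0 * fat_mass_kg / weight_kg

# Same height and weight -> identical BMI of ~24.7...
print(bmi(80, 1.80))         # person A: 24.7
print(bmi(80, 1.80))         # person B: 24.7

# ...but very different body composition (fat masses are made up).
print(body_fat_pct(12, 80))  # person A: 15.0 %
print(body_fat_pct(28, 80))  # person B: 35.0 %
```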
In addition to this index, the AI uses the photos to create a 3D rendering of the person's body and offers the option of seeing what they would look like with more or less fat. "It's a dangerous tool that risks perpetuating the culture of excessive dieting and the fear of gaining weight in the name of 'health,'" warned several Twitter users.
It is a cocktail of extremely intrusive and undesirable uses of AI. It is also a particularly insidious approach, because the various services are presented as fun features to use. "This can make people dependent on these technological tools to monitor their well-being, and prepare them to accept improved versions that will collect even more accurate data."
Halo is yet another illustration of the need for international standards governing the ethical development of artificial intelligence.