New AI Coming to Human-Machine Interfaces from Mitsubishi Electric
The Mitsubishi Group has its hands in many different industries, and spreading across multiple markets gives it multiple paths to success. This is especially true of Mitsubishi Motors and Mitsubishi Electric. The former we all know well, but the latter isn’t talked about nearly as much. Often thought of only as a supplier of electric appliances and infrastructure, Mitsubishi Electric actually plays a large role in designing, developing, and producing much of the future technology going into Mitsubishi Motors vehicles. From the self-driving autonomous technology at the 2019 Consumer Electronics Show to the artificial intelligence at the 2019 Tokyo Motor Show, Mitsubishi Electric is changing the way we look at cars with an advanced Human-Machine Interface (HMI).
Virtual Assistants and HMIs
Today, many people own some form of virtual assistant or platform with a human-machine interface. Anyone who owns an Amazon Alexa or Google Home has a virtual assistant, and most of us who own one use it every day, from setting an alarm clock to making shopping lists, and even making video calls with the Echo Show. Anyone who talks to their TV to search for content or perform a function knows what it’s like to have an HMI on hand when the remote is all the way across the couch.
We also know how aggravating these virtual assistants and HMIs can be. Unless the user speaks clearly and at a steady pace, most devices won’t pick up voice commands properly, and consumers are left repeating themselves to a robot. It’s not a very enjoyable experience, and it’s probably why some consumers eventually forget they own a smart device altogether, aside from their phones. New technology from Mitsubishi Electric, specifically from the company’s Maisart® artificial intelligence (AI) technology brand, can help resolve such difficulties.
The unnamed HMI platform can make sense of vague voice commands and still respond accordingly by employing a built-in, ever-changing knowledge graph: a structured database that collects user information, device specifications and functionality, and external information. The HMI uses this database to match voice commands with stored data. For example, say someone records *Avengers: Endgame* while they’re out. When they come home and simply say, “Play Avengers,” the HMI already knows what they’re talking about.
This process works by matching a user’s voice command against the knowledge graph and filling in missing information based on relevance, using triples built from three components: subjects, predicates, and objects. Cross-referenced with user information, device specifications and functionality, and external information, the HMI platform responds accordingly without the user needing to speak like a robot. Easy, simple, human-to-machine interaction.
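To make the idea concrete, here is a minimal sketch of how a knowledge graph of subject-predicate-object triples could resolve the vague “Play Avengers” command from the earlier example. This is purely illustrative, not Mitsubishi Electric’s actual implementation; the triples, predicate names, and matching logic are all assumptions.

```python
# Toy knowledge graph of (subject, predicate, object) triples.
# All entries and predicate names are hypothetical examples.
knowledge_graph = [
    ("Avengers: Endgame", "is_a", "recording"),
    ("Avengers: Endgame", "stored_on", "DVR"),
    ("DVR", "supports", "play"),
    ("living room TV", "connected_to", "DVR"),
]

def resolve_command(command: str):
    """Match a vague voice command against the stored triples.

    The first word is treated as the verb and checked against device
    functionality ("supports" triples); the remaining words are
    loosely matched against stored subjects, so a partial title like
    "Avengers" can resolve to the full recording name.
    """
    words = command.lower().split()
    verb, keywords = words[0], words[1:]
    # Devices whose declared functionality matches the verb.
    devices = {s for (s, p, o) in knowledge_graph
               if p == "supports" and o == verb}
    # Loose keyword match against known subjects.
    for subject, predicate, _ in knowledge_graph:
        if predicate == "is_a" and any(k in subject.lower() for k in keywords):
            return subject, devices
    return None, devices

title, devices = resolve_command("Play Avengers")
print(title)  # resolves the partial title to the full stored one
```

The key point the sketch shows is that the user never has to say the exact stored title: the graph supplies the missing specifics, which is exactly the “filling in missing information” step described above.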
The database built into the HMI platform can also constantly update and change based on the user. Because it doesn’t need large amounts of computation and memory to handle complex commands, the HMI is free to adjust the relevance of its information with reference to user commands and sensing information. Whether this means the HMI will predict what the user is asking for based on a vague command is unclear, but if someone were having an emergency, calling out for “help” is a lot more natural than yelling, “911!”
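One simple way to picture that relevance adjustment is a score on each entry that rises when the user’s commands touch it and decays otherwise, so frequently used entries win ties when a vague command matches several candidates. The decay and boost values below are invented for illustration; nothing in the source describes how Mitsubishi Electric actually weights its graph.

```python
# Hypothetical relevance re-weighting: scores decay a little on every
# interaction, and the entry the user actually selected is reinforced.
DECAY = 0.9   # assumed per-interaction decay factor
BOOST = 1.0   # assumed reinforcement for a matched entry

relevance = {"Avengers: Endgame": 0.5, "Avengers: Infinity War": 0.5}

def update_relevance(matched: str) -> None:
    for entry in relevance:
        relevance[entry] *= DECAY   # everything fades slightly
    relevance[matched] += BOOST     # the used entry is reinforced

def best_match(keyword: str) -> str:
    """Among entries containing the keyword, return the most relevant."""
    candidates = [e for e in relevance if keyword.lower() in e.lower()]
    return max(candidates, key=relevance.get)

update_relevance("Avengers: Endgame")
print(best_match("avengers"))  # "Avengers: Endgame"
```

After one reinforcement, the vague keyword “avengers” resolves to the title the user actually watches, which is the kind of per-user adaptation the paragraph above describes.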
Such natural HMI interaction could even save a life. What do you think of this cutting-edge technology from Mitsubishi Electric and Maisart? Join the discussion on Miami Lakes Mitsubishi social media.