Taking a step back, AI breaks down into one broad category and two subsets. AI itself is the idea that a machine can learn, think, and behave like a human, while deep learning and neural networks – subsets of AI that use data to learn – allow a computer to make decisions the way a human would.
As humans, we can build models only up to a certain level of complexity – beyond that, we need to “teach” AI to take things further, enhance our results, and run tests around the clock. As the number of parameters and the complexity of a model grow, it becomes harder for us to keep track of all the dependencies and corner cases, and distinguishing significant parameters from insignificant ones gets more and more difficult. AI algorithms, by contrast, let us find optimal solutions to a multitude of such challenges.
This is exactly the situation our team in Krakow encountered with our gesture recognition technology. First launched with BMW in 2015, the technology lets a user control a car’s center console using basic hand gestures. That means the system needs to know which hand movements are valid and which aren’t – it can’t react to every movement.
To accomplish this, the team uses recordings of gestures to extract and map various hand features and movements. That data is then used to set the parameters of statistical models that represent each gesture. In turn, we use these parameters to craft our AI algorithms and essentially “teach” the system.
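To make the idea concrete, here is a minimal sketch of that approach: fit a simple statistical model (a Gaussian over a few hand features) per gesture from recorded samples, then classify a new movement by likelihood, rejecting anything too far from every known gesture. The feature names, toy data, and rejection threshold are illustrative assumptions, not the team's actual pipeline.

```python
# Hypothetical sketch of gesture classification via per-gesture Gaussian models.
# Features, data, and threshold are made up for illustration.
import math

def fit_gaussian(samples):
    """Estimate per-dimension mean and variance from recorded feature vectors."""
    n, d = len(samples), len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(d)]
    variances = [max(sum((s[i] - means[i]) ** 2 for s in samples) / n, 1e-6)
                 for i in range(d)]
    return means, variances

def log_likelihood(x, model):
    """Log-likelihood of feature vector x under a diagonal Gaussian."""
    means, variances = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, variances))

def classify(x, models, reject_threshold=-10.0):
    """Return the best-matching gesture, or None for an invalid movement."""
    best, score = None, float("-inf")
    for name, model in models.items():
        ll = log_likelihood(x, model)
        if ll > score:
            best, score = name, ll
    return best if score >= reject_threshold else None

# Toy training recordings: [hand speed, fingertip spread] per sample.
models = {
    "swipe": fit_gaussian([[0.9, 0.2], [1.1, 0.25], [1.0, 0.22]]),
    "pinch": fit_gaussian([[0.2, 0.8], [0.25, 0.85], [0.22, 0.9]]),
}

print(classify([1.05, 0.23], models))  # close to the swipe recordings
print(classify([5.0, 5.0], models))    # far from everything: rejected as invalid
```

The rejection threshold is what keeps the system from reacting to every movement: a feature vector that is unlikely under all trained gesture models is simply ignored.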