Abstract:
Recent advances in mobile computing, wireless sensing, communication technologies,
and consumer electronics have modernized our cities and living environments. Buildings,
roads, and vehicles are now equipped with a variety of smart sensors and objects that
are interconnected via machine-to-machine communication protocols and accessible via the
Internet, forming what is known as the Internet of Things (IoT). The power of IoT expands
when coupled with machine learning, since the latter offers techniques for
analyzing the vast amounts of data generated by sensors and actuators. Smart buildings
are an appealing example of IoT and machine learning applications, offering higher energy
savings and occupant satisfaction through dynamic control.
Voice-based virtual assistants (e.g., Amazon Alexa, Google Home) are now a central component
of the smart home. However, they are not adapted to deaf and mute people who
communicate using sign language. Efficient alternative communication means inside the
house are required to support the interaction of deaf and hearing-impaired people.
The main goal of this thesis is to design and realize a machine-learning-based
solution for sign language recognition that enables the control of a smart home environment
through gestures.
Keywords: Smart buildings, Machine learning, Sign language, Disabled people, Human-Computer
interaction.