This document discusses integrating multiple modalities into user interfaces. It covers three topics:
1. Context-sensitive interfaces let devices sense contextual information, such as location, to better infer user intentions. This requires understanding users through personas (see the location-sensing sketch after this list).
2. Computer vision and barcode scanning can replace cumbersome manual input, but they are less suitable when interactions are complex (see the barcode sketch below).
3. Multimodal interfaces combine different input modes, such as touch and voice, to mimic natural human interaction (see the fusion sketch below).
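
As a rough illustration of context sensing, the sketch below uses the browser Geolocation API to read the device's location and surface location-relevant options before asking the user to type anything. `findNearbyStores` is a hypothetical helper standing in for a real location service; it is not part of any library.

```typescript
// Minimal sketch: adapt a UI to the user's location instead of asking for it.
// Assumes a browser environment where the Geolocation API is available.

interface NearbyStore {
  name: string;
  distanceKm: number;
}

// Hypothetical lookup; a real app would query a location-aware backend.
function findNearbyStores(lat: number, lon: number): NearbyStore[] {
  return [{ name: "Example Store", distanceKm: 0.4 }];
}

function adaptUiToContext(): void {
  navigator.geolocation.getCurrentPosition(
    (position) => {
      const { latitude, longitude } = position.coords;
      const stores = findNearbyStores(latitude, longitude);
      // Surface location-relevant options first, inferring intent from context.
      console.log(`Showing ${stores.length} nearby stores`, stores);
    },
    (error) => {
      // Fall back to manual input when context sensing fails or is denied.
      console.warn("Location unavailable, falling back to search box:", error.message);
    }
  );
}

adaptUiToContext();
```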
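
Barcode scanning as an input shortcut could look roughly like the following. This assumes a Chromium-based browser exposing the experimental `BarcodeDetector` (Shape Detection) API; the type declaration is simplified for the sketch and the format list is only an example.

```typescript
// Minimal sketch: read a barcode from a camera feed instead of typing a code.
// BarcodeDetector is experimental and not available in all browsers, so it is
// declared here in simplified form rather than relying on built-in typings.

declare class BarcodeDetector {
  constructor(options?: { formats?: string[] });
  detect(image: ImageBitmapSource): Promise<Array<{ rawValue: string }>>;
}

async function scanBarcode(video: HTMLVideoElement): Promise<string | null> {
  const detector = new BarcodeDetector({ formats: ["ean_13", "qr_code"] });
  const barcodes = await detector.detect(video);
  // One scan replaces typing a long product code by hand.
  return barcodes.length > 0 ? barcodes[0].rawValue : null;
}
```

For complex interactions (editing details, resolving ambiguity), a scan alone is not enough and the interface still needs conventional input.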
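
One simple way to combine touch and voice is late fusion: collect events from each channel and pair a spoken command with the most recent tap, much as people point while speaking. The sketch below assumes pointer events and the (often prefixed) Web Speech API; the pairing rule and the two-second window are illustrative assumptions, not a standard algorithm.

```typescript
// Minimal sketch: fuse touch and voice into one command stream.

type UserInput = { modality: "touch" | "voice"; payload: string; time: number };

const recentInputs: UserInput[] = [];

function fuse(input: UserInput): void {
  recentInputs.push(input);
  if (input.modality !== "voice") return;
  // Pair a spoken command with the most recent touch, e.g. tap an item
  // and say "delete this" to act on it. The 2 s window is an assumption.
  const touch = [...recentInputs]
    .reverse()
    .find((e) => e.modality === "touch" && input.time - e.time < 2000);
  if (touch) {
    console.log(`Apply "${input.payload}" to element #${touch.payload}`);
  }
}

document.addEventListener("pointerdown", (e) => {
  const id = (e.target as HTMLElement).id || "unknown";
  fuse({ modality: "touch", payload: id, time: Date.now() });
});

// Web Speech API is prefixed in some browsers, hence the fallback lookup.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognizer = new SpeechRecognitionCtor();
recognizer.continuous = true;
recognizer.onresult = (e: any) => {
  const phrase: string = e.results[e.results.length - 1][0].transcript.trim();
  fuse({ modality: "voice", payload: phrase, time: Date.now() });
};
recognizer.start();
```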