Towards a Situated, Multimodal Interface for Multiple UAV Control
Multiple autonomous Unmanned Aerial Vehicles (UAVs) can be used to complement human teams. This paper presents the results of an exploratory study investigating gesture/speech interfaces for situated interaction with robots, along with the development of three iterations of a prototype command set. The command set was compiled by observing users interacting with a simulated interface in a virtual reality environment. We found that users consider this type of interface intuitive and that their commands tend to group naturally into 'High-Level' and 'Low-Level' instructions. However, as the robots moved further away, the loss of depth perception and direct feedback degraded the interaction. In a second experiment we found that simple heads-up display elements could mitigate these issues.