In human-robot collaboration domains, augmented reality (AR) technologies have enabled people to visualize the state of robots. Current AR-based visualization policies are designed manually, which requires substantial human effort and domain knowledge. When too little information is visualized, human users find the AR interface unhelpful; when too much is visualized, they find the information difficult to process. In this paper, we develop a framework, called VARIL, that enables AR agents to learn visualization policies (what to visualize, when, and how) from demonstrations. We created a Unity-based platform for simulating warehouse environments in which human-robot teams collaborate on delivery tasks, and collected a dataset of demonstrations of visualizing robots' current and planned behaviors. Results from experiments with real human participants show that, compared with competitive baselines from the literature, our learned visualization strategies significantly increase the efficiency of human-robot teams while reducing the distraction level of human users. VARIL has also been demonstrated in a mock warehouse built in our lab.
No AR visualization
Full AR visualization
Learned AR visualization (ours)
For demonstration, a mock warehouse was constructed in a 2500-square-foot room in our lab. Three TurtleBot robots were used to reproduce the human-multi-robot collaborative delivery task from the Unity environment. The speed of the TurtleBots was set to 0.5 m/s. The human worker was provided with an AR device (a tablet) with a 10-inch screen. The figure on the right shows the ``warehouse'' environment for the demonstration of VARIL, where a human worker is holding an AR device to track the status of the three TurtleBots.
Mock warehouse and AR visualizations, including color-coded robot avatars and trajectories.
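To make the notion of a visualization policy concrete, the following is a minimal, hand-written sketch (not VARIL's learned model, and all names here are hypothetical): a policy maps a robot's state to the set of AR elements to render, which is the kind of mapping VARIL learns from demonstrations rather than from hand-coded rules like these.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    distance_to_human: float  # meters from the human worker
    replanned: bool           # robot just changed its planned path
    in_view: bool             # robot is physically visible to the human

def visualization_policy(state: RobotState) -> set[str]:
    """Toy stand-in for a learned policy: render more AR elements when
    the robot is nearby, out of sight, or has replanned; render nothing
    when the robot is far away, visible, and following its plan."""
    elements: set[str] = set()
    if state.distance_to_human < 3.0 or not state.in_view:
        elements.add("avatar")        # show the robot's position
    if state.replanned:
        elements.add("trajectory")    # show the updated planned path
    return elements
```

A policy learned from demonstrations replaces the hard-coded thresholds above with decisions inferred from when human demonstrators chose to show or hide each element.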