Autonomous machines and robots increasingly interact with humans and navigate unbounded environments. A mature vision pipeline and advances in artificial intelligence have created the initial momentum; the next step is sensor fusion. To reduce errors and enable higher levels of autonomy and human interaction, multi-modal systems are required for more robust and accurate perception, yet these systems must still meet cost, power, size, and weight requirements. In this session, panelists will discuss why sensor fusion technology is critical to next-generation smart-sensor systems, why it can be difficult and costly to implement and how to mitigate those challenges, and where the technology is headed.