Gary Bradski of OpenCV discusses the evolution of autonomous systems, from his early involvement with the winning DARPA Grand Challenge team in 2005 that formed the basis for Waymo, to the current state of robotics and AI. He reflects on the two-decade journey of self-driving cars, attributing the long adoption cycle to the complexity of real-world edge cases and the maturation of deep learning, which was a necessary ingredient for automating perception from data.
Bradski shares his perspective on sensor fusion for autonomous vehicles, advocating for multiple sources of information, such as cameras combined with LiDAR, to ensure safety. While acknowledging the progress of camera-only systems like Tesla’s, he suggests a flexible approach that allows for the integration of new sensors as they become more cost-effective. He recounts his experience on Sebastian Thrun’s Stanford team, highlighting the unique confluence of talent and the government’s role in spurring innovation through the DARPA challenges.
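A standard first step in the kind of camera-plus-LiDAR fusion Bradski describes is projecting LiDAR returns into the camera's image plane so detections from both sensors can be associated. The sketch below illustrates the geometry only, using a made-up pinhole intrinsic matrix and identity extrinsics (i.e., assuming the LiDAR and camera frames coincide); real systems calibrate both.

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 pinhole camera: fx = fy = 500,
# principal point at (320, 240). Real values come from calibration.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project (N, 3) LiDAR points into (M, 2) pixel coordinates.

    Points behind the camera (z <= 0 in the camera frame) are dropped;
    the boolean mask of kept points is returned alongside the pixels.
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4)
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]          # (N, 3) camera frame
    in_front = cam[:, 2] > 0
    cam = cam[in_front]
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                       # perspective divide
    return pix, in_front

# Identity extrinsics: an assumption made purely to keep the example short.
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],    # straight ahead -> principal point
                [1.0, 0.0, 10.0]])   # 1 m to the right at 10 m depth
pix, mask = project_lidar_to_image(pts, T, K)
print(pix)  # first point lands at the principal point (320, 240)
```

Once LiDAR points have pixel coordinates, they can be matched against camera detections (e.g., points falling inside a bounding box), which is one common way to cross-check the two sensors.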
Looking beyond automotive, Bradski notes that robotics is finally entering a significant growth phase, driven by labor shortages in sectors like agriculture, mining, and defense. He describes this era as the rise of “AI and atoms,” where intelligent systems interact with the physical world, moving beyond chatbots. He emphasizes a shift from purely vision-based systems to broader sensor-based robotics, where proprioceptive feedback and touch will be crucial for advanced capabilities.
Addressing the topic of AI model efficiency, Bradski explains the trend towards optimizing models for size and performance. He points to architectures like Yann LeCun’s Joint Embedding Predictive Architecture (JEPA), which learns compact representations and makes predictions in that latent vector space rather than in raw pixel space. This enables a range of model sizes—large, medium, and small—with smaller models becoming increasingly powerful on embedded platforms.
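The core idea behind a joint-embedding predictive setup can be shown in a few lines: both the context and the target are encoded into latent vectors, a predictor maps the context latent to a predicted target latent, and the loss is measured between latents, never against raw inputs. The sketch below is a deliberately tiny illustration with random, untrained linear maps; it shows where the loss lives, not how a real JEPA is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a random linear map from a 64-dim input to an 8-dim latent.
W_enc = rng.normal(size=(8, 64)) / np.sqrt(64)
# Toy "predictor": maps the context latent to a predicted target latent.
W_pred = rng.normal(size=(8, 8)) / np.sqrt(8)

def encode(x):
    return W_enc @ x

def jepa_style_loss(context, target):
    """Loss computed between latent vectors, not raw inputs.

    This is the distinguishing feature of the joint-embedding predictive
    idea: the model predicts representations instead of reconstructing
    pixels. (Real implementations use a separate target encoder, e.g. an
    EMA copy, and deep networks rather than these random matrices.)
    """
    z_ctx = encode(context)
    z_tgt = encode(target)
    z_hat = W_pred @ z_ctx
    return float(np.mean((z_hat - z_tgt) ** 2))

x = rng.normal(size=64)
loss = jepa_style_loss(x, x + 0.01 * rng.normal(size=64))
```

Because prediction happens in the compact latent space, the heavy lifting of modeling can be decoupled from input resolution, which is part of why such representations scale down well to smaller models.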
OpenCV plays a key role in this ecosystem by providing a DNN library designed to make AI models run optimally on any hardware platform. The library aims for state-of-the-art performance on CPUs and leverages specific acceleration architectures like CUDA where available. Bradski mentions that OpenCV is also adapting to support many new and upcoming architectures, ensuring that developers can deploy computer vision and AI workloads efficiently across a diverse hardware landscape.



