Intel Laptop 20% performance boost using Frore Systems AirJet Mini: 20W to 24W sustained CPU power

Posted by – January 18, 2026
Category: Exclusive videos

In this quick laptop thermal retrofit demo, Frore Systems swaps the stock fan assembly in a 14-inch Samsung Galaxy Book5 Pro for four AirJet Mini solid-state cooling modules. The baseline machine is shown sustaining about 20 W of CPU power with audible fans; the modified unit is tuned to hold about 24 W, roughly a 20% uplift, while aiming for near-silent operation and a more sealed chassis. https://www.froresystems.com/products/airjet-mini

AirJet Mini is designed as a thin active heat-sink module rather than a rotary fan: it uses ultrasonic actuation to move air through micro-vents, producing high back pressure (around 1,750 Pa) in a compact form factor. Frore rates the original Mini at roughly 5.25 W of heat removal at about 21 dBA while drawing up to about 1 W, so scaling to multiple modules can add meaningful sustained cooling without the tonal fan whine that often dominates thin laptops at load.
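To put those module ratings in context, here is a back-of-the-envelope sketch of the net cooling budget when stacking modules, using only the spec-sheet figures quoted above (not measurements from the demo unit):

```python
# Rough cooling budget for a multi-module AirJet Mini retrofit, using the
# figures quoted above: ~5.25 W heat removal and up to ~1 W draw per module.
# Spec-sheet numbers, not measurements from the demo laptop.

HEAT_REMOVED_PER_MODULE_W = 5.25   # rated heat removal per AirJet Mini
POWER_DRAW_PER_MODULE_W = 1.0      # worst-case electrical draw per module

def net_cooling_budget(modules: int) -> float:
    """Heat the modules can remove minus the heat they add themselves."""
    gross = modules * HEAT_REMOVED_PER_MODULE_W
    self_heat = modules * POWER_DRAW_PER_MODULE_W
    return gross - self_heat

for n in (1, 2, 4):
    print(f"{n} module(s): ~{net_cooling_budget(n):.2f} W net cooling budget")
```

Four modules give roughly 17 W of net budget in this simplistic model; how much of that survives as sustained CPU power depends on the rest of the thermal stack, which is why the demo lands at a 4 W (20 W to 24 W) uplift rather than more.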

What matters here is sustained package power, not short boost: once a notebook hits its steady-state thermal limit, firmware clamps PL1 and clocks settle. Holding 24 W instead of 20 W can translate into higher all-core frequency, steadier interactive latency, and fewer dips from thermal throttling in long compiles, renders, or exports. The footage was filmed at CES Las Vegas 2026, and it’s a useful example of how solid-state airflow can change the acoustics-perf trade space at a booth.

As always, outcomes depend on the whole stack: heat spreader quality, vapor chamber or heat pipe routing, fin and vent geometry, and how the BIOS enforces PL1/PL2 with skin-temperature limits. AirJet-style modules can also support dust-resilient, water-resistant industrial design because airflow can be routed through controlled paths rather than large open fan grilles, which may help consistency over time in real work.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment! Brands I film are welcome to support my work this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=QyFY9Z9npEo

VESA DSC logo explained: DP2.1 DP54 bandwidth, 4K165 HDR workflows, compression limit, ClearMR 21000

Posted by – January 17, 2026
Category: Exclusive videos

VESA uses certification programs as a shorthand for what a display pipeline can really sustain, and this booth walk-through focuses on how those logos map to measurable signal integrity and motion performance. The headline demo is an LG QHD OLED gaming monitor certified for ClearMR 21000 (top tier motion blur rating), AdaptiveSync Display, and DisplayHDR True Black 500, running at 540 Hz over DisplayPort 2.1 using UHBR13.5 (54 Gb/s) with a DP54 cable. https://www.vesa.org/

ClearMR is essentially VESA’s way of normalizing blur metrics across panels and refresh regimes, so “21000” isn’t marketing fluff but a tier that implies very low perceived motion smear when the whole chain—panel response, overdrive, and scanout timing—behaves. On top of 540 Hz at QHD, the monitor also exposes a dual-mode toggle: it can drop resolution and push refresh up to 720 Hz, which is interesting for esports latency budgets even if it falls short of VESA’s Dual Mode certification threshold because that program requires at least 1080p.
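As a rough sketch of how a measured Clear Motion Ratio maps to a tier label (the exact bracket bounds live in VESA's ClearMR spec; the nearest-1000-step assumption here is mine):

```python
def clearmr_tier(cmr: float) -> int:
    """Map a measured Clear Motion Ratio (in %) to a ClearMR tier label.
    Assumption: tiers are labeled in steps of 1000 and bracket the measured
    value to the nearest step; the real bounds are defined in VESA's spec."""
    return int(round(cmr / 1000) * 1000)

print(clearmr_tier(20800))  # lands in the ClearMR 21000 tier
print(clearmr_tier(6900))   # lands in the ClearMR 7000 tier
```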

The conversation then shifts from desktop gaming to mobile HDR, showing OLED tandem panels in laptops from LG and Lenovo. Tandem OLED stacks two emissive layers to raise peak luminance while keeping OLED black levels, which is how these systems hit VESA DisplayHDR True Black 1000. VESA mentions more than 100 True Black 1000 laptop models certified, with some families peaking around 1,600 nits—numbers that are easier to appreciate in person at CES Las Vegas 2026.

A recurring technical theme is Display Stream Compression (DSC): it has existed for years as an optional feature in older DisplayPort generations, but it’s a mandatory capability in DisplayPort 2.1 and now has a dedicated VESA logo program to indicate a validated implementation. DSC is typically visually lossless and is what makes extreme pixel rates feasible—think high-refresh QHD OLED, multi-display MST docking, or pushing beyond raw link budgets like 54 Gb/s UHBR13.5 and up to 80 Gb/s UHBR20.
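A quick sanity check shows why DSC is effectively mandatory for that headline QHD/540 Hz mode: the uncompressed stream exceeds what UHBR13.5 can carry. This sketch assumes 10 bpc RGB (30 bits/pixel) and ignores blanking overhead, which only makes the raw number worse:

```python
# Link-budget check: QHD @ 540 Hz over DisplayPort 2.1 UHBR13.5.
# Assumes 10 bpc RGB; blanking and secondary-data overhead ignored.

def raw_gbps(h, v, hz, bits_per_pixel=30):
    """Uncompressed video payload rate for active pixels only."""
    return h * v * hz * bits_per_pixel / 1e9

LINK_RAW_GBPS = 54.0                           # UHBR13.5: 4 lanes x 13.5 Gb/s
LINK_PAYLOAD_GBPS = LINK_RAW_GBPS * 128 / 132  # 128b/132b channel coding

need = raw_gbps(2560, 1440, 540)               # ~59.7 Gb/s uncompressed
print(f"needed ~{need:.1f} Gb/s vs ~{LINK_PAYLOAD_GBPS:.1f} Gb/s payload")
print("DSC required:", need > LINK_PAYLOAD_GBPS)
```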

That DSC logo idea also shows up in TVs: LG’s newly announced C6 is highlighted because it targets 4K at 165 Hz with HDR, a case where compression is effectively required to move enough pixels even when the physical input is HDMI. VESA’s point is less about inventing a new codec and more about making interoperability predictable by certifying the DSC behavior, while keeping the standard itself royalty-free for members (with certification handled via test-house costs) rather than a per-unit licensing fee.


source https://www.youtube.com/watch?v=ciI_iUrkugs

VESA Thunderbolt 5 / USB4 v2 DP Tunneling at 120 Gbps: Single-Cable Dual 5K 165Hz Bandwidth

Posted by – January 17, 2026
Category: Exclusive videos

VESA walks through two real-world PC display pipelines that push modern interconnect limits: DisplayPort tunneling over USB4 v2 (aligned with Thunderbolt 5 behavior) and native DisplayPort 2.1 UHBR20 Multi-Stream Transport. The through-line is certification-grade thinking: link training, bandwidth allocation, DSC behavior, and the practical “does it stay stable when you unplug, re-route, and re-daisy-chain” edge. https://www.vesa.org/

The first setup, filmed at CES Las Vegas 2026, is a single-cable “wide + fast” scenario: a Gigabyte Thunderbolt 5 add-in card takes multiple DisplayPort inputs and tunnels two DP streams over one USB4 v2 output into a Kensington Thunderbolt dock. From there, two 5120×2160 5K panels run at 165 Hz, effectively demonstrating a dual-5K high-refresh desktop over one cable, with video traffic prioritized and kept coherent by the tunneling stack.

A key detail is USB4 v2 asymmetric mode: instead of the usual 2-lane up / 2-lane down, the link can shift to 3 lanes downstream (up to 120 Gbps) and 1 lane upstream (up to 40/60 Gbps depending on implementation). That’s what enables enough downstream headroom for multiple high-rate DP streams, and it pairs well with Display Stream Compression (DSC) on the panels to stretch effective payload without changing the physical lane rate.
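A rough budget check illustrates why the asymmetric mode matters for this dual-5K/165 Hz demo (assuming 10 bpc and ignoring blanking and protocol overhead):

```python
# Raw bandwidth for two uncompressed 5K/165 Hz streams vs. the 120 Gb/s
# downstream link in USB4 v2 asymmetric mode. 10 bpc assumed; blanking
# and tunneling overhead ignored.

def stream_gbps(h, v, hz, bpp=30):
    return h * v * hz * bpp / 1e9

per_panel = stream_gbps(5120, 2160, 165)   # ~54.7 Gb/s each
total = 2 * per_panel                      # ~109.5 Gb/s for both
print(f"two panels: ~{total:.1f} Gb/s raw vs 120 Gb/s downstream")
# A symmetric 80 Gb/s link could not carry this uncompressed; the 3-lane
# downstream shift (plus DSC on the panels) is what creates the headroom.
```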

The second demo switches to native DisplayPort 2.1 UHBR20 with MST daisy-chaining: an NVIDIA RTX 5090 drives three 32-inch Gigabyte AORUS FO32U2 Pro 4K HDR monitors from a single UHBR20 output, using each monitor’s DP in/out MST hub to forward streams down the chain. The visible target is 3840×2160 at 120 Hz across the chain (even if each monitor can do higher), highlighting the real constraint: GPU port policy and bandwidth budgeting per output, not just cable capability.
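The same kind of budget math applies to the MST chain: three uncompressed 4K/120 streams do not fit the UHBR20 payload, which is where DSC or lower bit depth comes in (10 bpc assumed, blanking ignored; real MST allocation happens in timeslots):

```python
# MST budget sketch for the triple-4K/120 daisy chain over one UHBR20 output
# (80 Gb/s raw, ~77.6 Gb/s after 128b/132b coding). 10 bpc assumed.

def stream_gbps(h, v, hz, bpp=30):
    return h * v * hz * bpp / 1e9

payload = 80.0 * 128 / 132
streams = [stream_gbps(3840, 2160, 120) for _ in range(3)]  # ~29.9 Gb/s each
total = sum(streams)
print(f"3 x 4K120 ~{total:.1f} Gb/s vs ~{payload:.1f} Gb/s payload")
print("needs DSC (or lower bpc):", total > payload)
```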

VESA also frames why MST compliance work matters: topology changes, stream re-enumeration, and hub routing are where users feel pain, so more exhaustive test coverage aims to make daisy-chained setups behave predictably across many permutations. In theory MST can scale to large fan-out counts, but the demo keeps it grounded in what’s achievable today for multi-monitor gaming, simulation, and high-density workstation layouts.


source https://www.youtube.com/watch?v=hizvFMf72Ao

TeleCANesis thin middleware for in-vehicle HMI: CAN-to-cloud routing, hypervisor IPC

Posted by – January 17, 2026
Category: Exclusive videos

TeleCANesis shows what “getting data where it needs to go” looks like inside a modern off-road vehicle platform: routing signals and commands between infotainment UI, instrument cluster, and embedded services so the right data arrives at the right endpoint with predictable timing. In this demo, that includes moving Bluetooth media metadata (track, artist) and control commands between the HMI layer and the Bluetooth stack, without each app hard-wiring every connection. https://telecanesis.com/

On the vehicle side, the same message routes carry speed, gear state, and other telemetry into the cluster, and can also drive body functions like lighting or logic such as enabling a reverse camera when the gear selector changes. The takeaway is less about a single widget and more about a reusable data plane: map signals once, then reuse them across displays, ECUs, and services as the product evolves, while keeping latency and ordering in check.
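The routing idea can be sketched as a tiny publish/subscribe router, where one mapped gear signal drives both the cluster and a reverse-camera enable. The names and API here are illustrative, not TeleCANesis code:

```python
# Minimal sketch of "map signals once, reuse everywhere": a publish/subscribe
# router where a single gear-state signal feeds two endpoints. Illustrative
# only; a real in-vehicle data plane adds timing, ordering, and ACL layers.

from collections import defaultdict

class SignalRouter:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, signal, handler):
        self._subs[signal].append(handler)

    def publish(self, signal, value):
        for handler in self._subs[signal]:
            handler(value)

router = SignalRouter()
cluster = {}                 # stand-in for the instrument cluster
camera = {"on": False}       # stand-in for the reverse-camera function

router.subscribe("gear", lambda g: cluster.update(gear=g))
router.subscribe("gear", lambda g: camera.update(on=(g == "R")))

router.publish("gear", "R")  # one mapped signal updates both endpoints
print(cluster, camera)
```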

There’s also a cabin detail from Ottawa Infotainment: audio is produced via transducers bonded into the roof and doors, so the panels become the radiating surface instead of traditional speaker cones. The video was filmed at CES Las Vegas 2026, and the booth context matters because it ties UI, sensor inputs, and connectivity into one integrated experience rather than a lab bench.

Across the booth, TeleCANesis sits under multiple UI stacks and display technologies, feeding the same vehicle signals into different HMIs, and routing safety-related sensor data in other demos. A key point is how this scales when the compute architecture gets more complex: in a next-gen platform with a hypervisor and multiple guest environments, TeleCANesis acts as the messaging backbone between isolated partitions so apps can exchange only the intended data across a clean boundary.

Under the hood, the approach leans on thin middleware plus model-driven configuration and automated code generation (including the TeleCANesis Hub toolkit built on QNX), which makes verification and safety/security certification more tractable than hand-written glue code. They describe using AI during project ingestion and setup, but keeping runtime messaging deterministic, because safety-critical routing is one of the places you can’t tolerate “creative” behavior from tooling. That split—AI to accelerate setup, determinism to ship—captures the engineering mindset in one shot.


source https://www.youtube.com/watch?v=AGfMAuJzTlM

Ottawa Infotainment DragonFire OS demo: CAN-to-cloud IVI for ATVs, fleet telematics, safety UI

Posted by – January 17, 2026
Category: Exclusive videos

Ottawa Infotainment (Sean Hazaray) walks through a demo vehicle that represents a fast-growing niche: side-by-sides, ATVs, motorcycles, and neighborhood EVs that now expect “car-like” digital cockpit UX. The company positions itself as full-stack IVI and E/E architecture, spanning embedded hardware + OS, vehicle networks (CAN) and IO, and cloud-connected back ends that turn raw signals into driver-facing context on a large in-vehicle display. https://ottawainfotainment.com/pages/ces2026

A key theme is shortening OEM integration time by shipping pre-integrated building blocks instead of one-off engineering. In the cockpit, “infotainment” is framed as the orchestration layer for navigation, media, instrument-cluster data, and vehicle status, with an emphasis on configurable HMI that can be adapted across platforms and programs without restarting validation from zero each time.

Safety and fleet workflows are used as concrete examples of why tight integration matters. The vehicle shows attention-grabbing hazard lighting tied to Emergency Safety Solutions (ESS) concepts, and the broader message is that safety-critical alerts, coaching cues, and operational telemetry should live inside OEM-grade displays rather than on extra tablets, phone mounts, or aftermarket screens that increase distraction and training overhead.

Filmed at CES Las Vegas 2026, the booth pitch is “ecosystem-first”: partnerships like Geotab (fleet telematics and data intelligence embedded into DragonFire OS as an OEM option), ESS (connected hazard alerts), and modular E/E work with suppliers like Pektron point toward a software-defined vehicle approach where cockpit compute, ECUs, and cloud services evolve together through upgrades rather than hardware swaps.


source https://www.youtube.com/watch?v=Kg0iYo-bBSQ

LOTES Ultra96 HDMI 2.2 connectors: Category 4 board/cable-side parts and CTS approval path

Posted by – January 16, 2026
Category: Exclusive videos

LOTES (Lotes Co., Ltd.) focuses on the unglamorous but critical part of the HDMI upgrade cycle: the physical interconnect. In this interview, Cien Wong explains how the company manufactures both the board-side HDMI receptacle and the cable-side plug for HDMI 2.2, targeting the new Category 4 “Ultra96” ecosystem where signal integrity margins tighten as bandwidth climbs toward 64/80/96 Gbps. https://www.lotes.cc/en/

A key theme is traceability and compliance rather than hype. For HDMI 2.2, HDMI Licensing Administrator maintains approved Category 3/Category 4 connector lists under the Compliance Test Specification (CTS), and device makers must use listed connectors to pass Authorized Testing Center validation. The practical takeaway for buyers is simple: check the HDMI.org approved-connector resources instead of trusting look-alike parts, a point made on the CES Las Vegas 2026 show floor.

The demo connects the dots between connector design and lab-grade verification. LOTES highlights collaboration with test vendors such as Rohde & Schwarz and the HDMI plugfest path, where measurements like differential insertion loss, differential impedance, attenuation-to-crosstalk, and intra-/inter-pair skew decide whether a connector/cable assembly behaves at multi-tens-of-GHz edge rates. That discipline matters because small discontinuities at the plug, PCB launch, or cable termination can show up as eye-diagram closure, elevated BER, or flaky link training at speed.
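The compliance logic itself reduces to measured-versus-limit checks per parameter. The limit values below are placeholders for illustration, not the actual HDMI 2.2 CTS numbers:

```python
# Pass/fail sketch for connector signal-integrity checks like those named
# above. The limits are invented placeholders; the real thresholds come from
# the HDMI 2.2 Compliance Test Specification.

LIMITS = {
    "insertion_loss_db": 3.0,    # max allowed at the test frequency (placeholder)
    "intra_pair_skew_ps": 5.0,   # max allowed intra-pair skew (placeholder)
}

def check(measured: dict) -> list:
    """Return the parameters that exceed their (placeholder) limit."""
    return [k for k, limit in LIMITS.items() if measured.get(k, 0.0) > limit]

sample = {"insertion_loss_db": 2.4, "intra_pair_skew_ps": 6.1}
print("failures:", check(sample))   # only the skew measurement fails here
```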

Timing-wise, LOTES says the hardware is essentially ready, while broader market availability depends on finalizing the HDMI 2.2 cable/connector test procedures and certification cadence, with products likely appearing toward late 2026 and then ramping as TVs, GPUs, and consoles adopt the spec. The company is headquartered in Keelung, Taiwan, with multi-site manufacturing across China plus a plant in Vietnam, which is relevant for OEM supply planning in Asia.


source https://www.youtube.com/watch?v=70KnoWi7hj8

Elka Ultra96 HDMI 2.2 cables at CES 2026: passive coax 2 m now, 5–10 m roadmap

Posted by – January 16, 2026
Category: Exclusive videos

Elka walks through its HDMI cable roadmap with a focus on the new HDMI 2.2 “Ultra96” ecosystem: passive coaxial designs aimed at next-gen bandwidth targets, plus clear labeling so buyers can tell what they’re getting. The demo highlights a 2 m Ultra96 cable as the current reference build, while outlining longer-reach variants that follow the same electrical targets and compliance approach over time.

Filmed at CES Las Vegas 2026, the discussion frames HDMI 2.2 as a transition period where most consumer gear is still HDMI 2.1, but cable and connector vendors are already building toward higher data rates and stricter signal-integrity margins. Elka positions itself as a Taiwan-headquartered manufacturer with production across China, Laos, Vietnam, and Malaysia, using that footprint to scale different cable constructions and BOM choices.

On the technical side, the emphasis is on certification labels and performance claims tied to Ultra96: the transcript calls out 96 Gb/s class signaling and common use-cases like high-frame-rate 4K and 8K video modes for gaming, conference rooms, and pro AV installs. Even if end-devices lag, cabling that meets insertion-loss, impedance control, and crosstalk requirements is a prerequisite for stable links at higher symbol rates.
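The 2 m-now, 5-10 m-later split follows from how passive coax loss scales roughly linearly with length. The per-metre loss and budget below are illustrative placeholders, not Elka or HDMI 2.2 figures:

```python
# Why short passive runs ship first: loss grows with length until the
# end-to-end budget is gone. Both numbers below are illustrative placeholders.

LOSS_DB_PER_M = 8.0      # placeholder cable loss near the Nyquist frequency
BUDGET_DB = 20.0         # placeholder end-to-end insertion-loss budget

def margin_db(length_m: float) -> float:
    return BUDGET_DB - LOSS_DB_PER_M * length_m

for length in (2, 5, 10):
    print(f"{length} m: {margin_db(length):+.1f} dB margin")
# 2 m clears this placeholder budget; 5 m and 10 m go negative, which is
# why longer reaches tend toward active or optical constructions.
```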

There’s also a branding note: Elka mentions a broader company rebrand and a “Spider” retail presence, suggesting a push to make certification marks and product families easier to recognize across regions (North America, Europe, Japan, and broader Asia). The takeaway is less about flashy demos and more about the practical pipeline—manufacturing scale, compliance labeling, and a length roadmap from short passive runs toward longer options as the market catches up.


source https://www.youtube.com/watch?v=UhqyIp4RMHE

Arduino UNO Q: Dragonwing QRB2210 + STM32U585, Debian Linux, edge AI + robotics

Posted by – January 16, 2026
Category: Exclusive videos

Arduino’s UNO Q is a “dual-brain” dev board built with Qualcomm, combining a Linux-capable Qualcomm Dragonwing QRB2210 MPU with a real-time STM32U585 MCU in the familiar UNO form factor. The pitch is simple: you get a small SBC for UI, networking, and on-device inference, plus deterministic GPIO and motor-control timing on the microcontroller side—without having to design your own inter-processor plumbing. https://www.arduino.cc/product-uno-q

In the demo, the board runs standard Debian Linux with a preloaded IDE and a catalog of example apps, including a face-detection project. You can also drive the same workflow from a laptop over Wi-Fi, so the board can sit “headless” in a robot or enclosure while you iterate. The key abstraction is an Arduino “app” split across two worlds: a classic Arduino sketch for the MCU, and a Linux-side component you can write in Python (or anything that runs on Debian), tied together with simple RPC calls for message passing and control.
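The two-world split can be sketched as a Linux-side Python component exchanging RPC-style messages with endpoints the MCU sketch registers. The transport and function names below are illustrative stand-ins, not the actual Arduino UNO Q API:

```python
# Illustrative sketch of the Linux<->MCU split: JSON messages dispatched to
# named endpoints. A real board moves these over a serial or shared-memory
# link; here everything runs in one process to show the shape of the API.

import json

class McuBridge:
    """Stand-in for the inter-processor channel (not the Arduino API)."""
    def __init__(self):
        self.handlers = {}

    def register(self, name, fn):
        self.handlers[name] = fn

    def call(self, name, **kwargs):
        # Round-trip through JSON to mimic a serialized message boundary.
        payload = json.loads(json.dumps({"method": name, "args": kwargs}))
        return self.handlers[payload["method"]](**payload["args"])

bridge = McuBridge()
# "MCU side": deterministic pin control exposed as an RPC endpoint.
bridge.register("set_led", lambda pin, on: f"pin {pin} {'HIGH' if on else 'LOW'}")

# "Linux side": a vision result triggers a real-time action on the MCU.
face_detected = True
print(bridge.call("set_led", pin=13, on=face_detected))
```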

The robot-dog setup shows why this hybrid approach matters: the STM32 side handles real-time motor control while the QRB2210 hosts a lightweight web app that becomes the controller UI. Add a USB camera and you can loop vision results—like face detection or a custom classifier—back into low-latency behaviors on the microcontroller pins, without turning your control loop into a Linux scheduling problem. This was filmed at CES Las Vegas 2026, but the engineering theme is broader: making “UI + compute + control” feel like one coherent platform.

For AI workflows, the board story leans on a gentle on-ramp: start with “default models,” then move to custom training via Edge Impulse, export, and re-integrate into the same Arduino/Linux split application model. Hardware-wise, UNO Q is positioned as an entry board at $44, with a 2 GB RAM version shown and a 4 GB variant mentioned as upcoming; the goal is to keep the developer experience consistent as the line expands, while staying open source and accessible for robotics, IoT gateways, vision, and local web dashboards.

Overall, the UNO Q looks like Arduino trying to collapse the gap between maker-friendly GPIO and modern embedded compute: Cortex-A53 class Linux, GPU/ISP-capable silicon, Wi-Fi-based dev loops, and a clean API boundary to a real-time MCU. If you’ve ever duct-taped a Pi (or similar SBC) onto a microcontroller just to get a UI and networking, this is the same architecture—but packaged as one board with a curated software path from demo to product prototype.


source https://www.youtube.com/watch?v=z22RdSICsSc

Dentomi GumAI demo: smartphone photo gingivitis screening, plaque heatmap, self-care guidance

Posted by – January 16, 2026
Category: Exclusive videos

Dentomi (DTOMI Limited) demonstrates GumAI, a computer-vision oral-health tool that turns a phone camera into a fast, at-home dental screening flow. You take an intraoral photo with a smartphone or iPad, and the app returns an annotated view that highlights where brushing or flossing needs more attention, using a simple green/yellow/red overlay aimed at coaching rather than replacing a dentist visit. https://www.dentomi.biz/

Under the hood it maps a familiar dentistry step—visual inspection—into an AI pipeline: guided image capture, quality checks (focus, lighting, framing), then pixel-level segmentation and classification to mark gingival margins, plaque-heavy zones, and other visible hygiene indicators. The practical value is repeatability, so people can track changes over time and tighten daily technique at home.
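A toy version of that pipeline: gate on capture quality first, then bucket per-region scores into the green/yellow/red overlay. The thresholds and the "plaque score" itself are invented for illustration:

```python
# Toy sketch of the capture->check->classify flow described above. The
# brightness gate, thresholds, and per-region scores are all made up; a real
# system runs segmentation models on actual intraoral photos.

def quality_ok(pixels, min_brightness=40, max_brightness=220):
    """Reject frames that are too dark or blown out before inference."""
    mean = sum(pixels) / len(pixels)
    return min_brightness <= mean <= max_brightness

def overlay(scores, warn=0.4, alert=0.7):
    """Map per-region plaque scores (0..1) to traffic-light labels."""
    return ["red" if s >= alert else "yellow" if s >= warn else "green"
            for s in scores]

frame = [90, 120, 110, 95]            # fake grayscale patch means
if quality_ok(frame):
    print(overlay([0.1, 0.5, 0.8]))   # ['green', 'yellow', 'red']
```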

The team frames it as access tech for communities that don’t get regular dental care, with deployments via NGO partners, community centres, and elderly homes. In the interview (filmed at CES Las Vegas 2026), they also describe collaborations in Hong Kong, including sponsorship-style rollouts with Colgate-Palmolive that remove cost barriers and support preventive follow-up for health equity.

Ward describes a dentist and public-health background and an ongoing PhD at the University of Hong Kong, with the product starting as research intended to translate into community impact. Training follows the typical supervised-learning path: labeled clinical photos from partner clinics and hospitals, plus additional user images when consent is granted, which brings up real questions around data governance and privacy.

Commercially, the model leans toward funded access—brands, dental associations, or public programmes cover licences so end users can scan for free, while the system can nudge referrals when risk looks elevated. It’s easy to imagine insurer and teledentistry tie-ins later, but the core framing stays consistent: image-based screening and education that helps people decide when to seek care and how to improve day-to-day habits before issues grow.


source https://www.youtube.com/watch?v=FlnzG9ZLwtY

VESA DisplayPort DP80LL: UHBR20 active LRD cables, inrush power and compliance testing

Posted by – January 16, 2026
Category: Exclusive videos

This video digs into how VESA’s DisplayPort team validates the new DP80 low-loss cable class for DisplayPort 2.1, using a link-layer/protocol tester (Teledyne LeCroy quantumdata M42de) to run first-pass compliance checks. The core idea is simple: plug the cable into “in” and “out,” then verify it can link-train and move data across every lane count and configuration, including UHBR rates up to UHBR20, with a clean pass/fail report. That DP80 logo isn’t just marketing; it’s meant to give end users a quick signal that a cable has been through a defined compliance path rather than “it worked on my desk.” https://vesa.org/

A big theme is the practical limit of purely passive DP80 at the highest rates: once you chase 20 Gbit/s per lane, you quickly run out of electrical margin, especially past roughly a meter in common materials. DP80LL (DP80 “low loss”) is VESA’s answer: keep the same endpoint experience, but use active electronics to extend reach and improve margins. The demo focuses on LRD (linear redriver) designs with active components at both ends that reshape/restore the signal before it hits the receiver, and it also tees up active optical approaches for even longer spans where copper loss becomes the wall.

Filmed at CES Las Vegas 2026, the discussion gets refreshingly concrete about why “active” is hard: power behavior, not just eye diagrams. DisplayPort includes a DP_PWR pin intended to power adapters and active cables (historically 3.3 V at up to 500 mA), while USB-C variants can draw from the Type-C power domain, so every active design has to manage startup without browning out the port. Compliance testing drills into inrush (the plug-in current spike and voltage droop) and source/sink “outrush” robustness, which is why soft-start circuits and controlled capacitor charging become make-or-break details.
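The inrush problem comes straight from I = C * dV/dt: the faster the rail steps, the bigger the current spike into the cable's input capacitance. The capacitance and ramp times below are illustrative, using the historical 3.3 V DP_PWR rail mentioned above:

```python
# Inrush current into an active cable's input capacitance: I = C * dV/dt.
# Capacitance and ramp times are illustrative placeholders; 3.3 V matches
# the historical DP_PWR rail mentioned above.

def inrush_amps(cap_farads: float, dv_volts: float, dt_seconds: float) -> float:
    return cap_farads * dv_volts / dt_seconds

C = 10e-6   # 10 uF of input capacitance (placeholder)
V = 3.3     # DP_PWR rail voltage

hot_plug = inrush_amps(C, V, 1e-6)     # rail steps in ~1 us at plug-in
soft_start = inrush_amps(C, V, 10e-3)  # controlled ~10 ms charging ramp
print(f"uncontrolled: {hot_plug:.1f} A, soft-start: {soft_start*1000:.1f} mA")
# The uncontrolled spike blows far past a 500 mA port budget; the slow ramp
# stays comfortably inside it, which is what soft-start circuits buy you.
```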

There’s also nuance around interoperability and timing. When you connect a cable, HPD/AUX sideband activity kicks off link training, capability reads (DPCD/EDID paths), and clock recovery, all within spec-defined time windows. LRD-style cables behave like fast pass-through paths, while more complex repeater topologies can add training steps and delay, and optical links can introduce measurable latency if the run gets extreme. The video highlights how certification is expanding beyond straight cables into trickier categories like active adapters (for example USB-C to DP), where VESA needs test requirements that prevent “extension hacks” from silently breaking signal integrity.

The takeaway is that cable certification is becoming a first-class part of enabling UHBR20 in real setups: big, high-refresh desktop monitors, workstations, docks, and GPU-to-display runs that don’t fit the one-meter fantasy. DP80LL and related active/optical designs are about preserving link reliability at 80 Gbps class throughput while keeping user experience boring—in the good way—so the system link-trains once and stays locked. For anyone building or buying next-gen DisplayPort 2.1/2.1b gear, this is a peek into the engineering reality behind “it just works” at the edge of signal integrity.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=a6w1eAhk9ug

Edge Impulse XR + IQ9 edge AI 100 TOPS: YOLO-Pro, Llama 3.2, RayNeo X3 Pro AR1, PPE + QA LLM

Posted by – January 16, 2026
Category: Exclusive videos

Edge Impulse (a Qualcomm company) frames its platform as a model-to-firmware pipeline for edge AI: capture sensor or camera data, label it, train a compact model, then ship an optimized artifact that can run without a cloud round trip. The demos emphasize quantization, runtime portability, and repeatable edge MLOps where latency, privacy, and uptime matter for real work. https://edgeimpulse.com/

One highlight is an XR industrial worker assistant running on TCL RayNeo X3 Pro glasses built on Snapdragon AR1, with a dual micro-display overlay and a forward camera. Edge Impulse trains a YOLO-class detector (their “YOLO Pro” variant) to identify specialized parts, then a local Llama 3.2 flow pulls the right documentation and generates step-by-step context like part numbers, install notes, and purpose for a field crew guide.

The workflow focus is data: capture images directly from the wearable, annotate in Studio, and iterate via active learning where an early model helps pre-label the next batch. They also point to connectors that let foundation models assist labeling, plus data augmentation and synthetic data generation to widen coverage. This segment was filmed at the Qualcomm booth during CES Las Vegas 2026, but the core story is a repeatable edge pipeline, not a one-off demo.

A second showcase moves to the factory line: vision-based defect detection on Qualcomm Dragonwing IQ9, positioned for on-device AI at up to 100 TOPS. The UI runs with Qt, while the model flags defective coffee pods in real time and an on-device Llama 3.2 3B interface answers queries like defect summaries or safety prompts, all offline on the same device.

They round it out with PPE and person detection on an industrial gateway, plus Arduino collaborations: the UNO Q hybrid board (Dragonwing QRB2210 MPU + STM32U585 MCU) using USB-C hubs for peripherals, wake-word keyword spotting, and App Lab flows to deploy Edge Impulse models. There’s also a cascaded pattern where a small on-device detector triggers a cloud VLM only when extra scene context is needed, a practical tradeoff for cost and scale.
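
The cascaded pattern is easy to sketch. Below is a minimal, hedged illustration in Python; the detector and VLM are stand-in stubs (the names and the 0.6 threshold are invented for illustration), not Edge Impulse or Qualcomm APIs.

```python
# Sketch of the cascade: a cheap on-device detector runs on every frame,
# and a costly cloud VLM is consulted only when the local result is ambiguous.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def on_device_detector(frame: bytes) -> Detection:
    # Stand-in for a small quantized detector (e.g. a YOLO-class model).
    return Detection(label="person", confidence=0.42)

def cloud_vlm(frame: bytes, hint: str) -> str:
    # Stand-in for a cloud vision-language model call.
    return f"scene context for ambiguous '{hint}' detection"

def classify(frame: bytes, escalate_below: float = 0.6) -> str:
    det = on_device_detector(frame)
    if det.confidence >= escalate_below:
        return det.label                   # common path: stay on device, free
    return cloud_vlm(frame, det.label)     # rare path: pay for extra context

print(classify(b"\x00"))
```

The design choice is the threshold: raise it and you buy more cloud context per frame; lower it and you keep cost and latency on-device.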

Edge Impulse XR + IQ9 edge AI: YOLO-Pro, Llama 3.2, AR1 smart glasses, defect detection
Edge Impulse on-device GenAI workflows: Hexagon NPU, QNN, 8-bit quant, Arduino UNO Q

source https://www.youtube.com/watch?v=602KtzBVvFU

Tensor Level-4 personal robocar: GreenMobility Copenhagen, Lyft plan, VinFast production this year

Posted by – January 16, 2026
Category: Exclusive videos

Tensor is positioning its Robocar as a privately owned SAE Level 4 vehicle, engineered around autonomy rather than retrofitting sensors onto an existing platform. The design is sensor-first: 5 LiDAR units, 37 cameras, 11 radars, plus microphones and underbody detection to see close to the curb and avoid low obstacles, with a cleaning system (large fluid tank, air/liquid jets, wipers) to keep optics usable in real-world grime. https://www.tensor.auto/

A big theme is fail-operational redundancy: braking, steering, power and compute are treated as duplicated subsystems, with partners mentioned like Bosch, ZF and Autoliv for safety-critical hardware. Tensor’s approach relies on multi-modal sensor fusion—using the strengths of vision, radar and LiDAR together—so the stack can handle edge cases like occlusion, glare, and near-field perception without betting everything on a single modality, which is where many autonomy programs see risk.

The interview was filmed at CES Las Vegas 2026, where Tensor also talked about opening parts of its AI work to outside developers. Beyond the car itself, they point to open tooling for “physical AI” workflows (vision-language-action training and deployment) and say the core models are being released in open form via OpenTau, inviting collaboration while keeping the vehicle’s runtime data local to the car.

Inside, the cabin is treated like a productivity and media space: multiple displays, individual in-cabin cameras for calls, and privacy shutters for sensor coverage you want to disable. The signature mechanical element is a fold-away steering wheel and pedals that pop out on demand, making the handoff between Level 4 autonomy and manual control explicit and supporting a spectrum from Level 3/2 ADAS down to Level 0 fully human driving.

On go-to-market, Tensor frames a hybrid of personal ownership and fleet economics: owners can optionally connect the vehicle to ride-hailing when idle, while fleet partners like Lyft and the Copenhagen car-sharing operator GreenMobility have been announced as early channels. Manufacturing is planned via VinFast in Vietnam, with production targeted for the second half of 2026 and deployments likely constrained to geofenced ODD areas before a broader roll-out.

Tensor Robocar Level-4 autonomy: 100+ sensors, Nvidia Thor compute, dual-mode cabin
Tensor autonomous car: LiDAR/radar/camera fusion, retractable wheel, privacy-first on-device AI

source https://www.youtube.com/watch?v=0IglyT7SjX4

Savlink Ultra96 HDMI 2.2 AOC: 96Gbps over 100m, opto-electronic cable design

Posted by – January 15, 2026
Category: Exclusive videos

Savlink walks through how “Ultra96” cabling is reshaping practical HDMI 2.2 deployments: once you push toward 96Gbps (next-gen FRL), passive copper is quickly limited to very short runs, so their focus is active optical cable (AOC) builds that keep full-bandwidth signaling stable at 10m, 30m, and up to 100m while still presenting as a standard HDMI link end to end. https://smartavlink.com/

A key detail is power and topology: the optical transceivers draw from the HDMI +5V rail (and the cable is directional, with “source” and “display” ends), so you don’t need an external injector just to reach long distance. The demo contrasts a ~2m Ultra96-class copper lead with fiber-based AOC where attenuation, crosstalk, and EMI are far easier to control at high symbol rate.
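
As a rough sanity check on bus-powered designs, here is a hedged back-of-envelope in Python: the 5 V / 55 mA floor is the HDMI spec's minimum source capability, while the per-end optical-engine draw is an invented illustrative number, not a Savlink figure.

```python
# Back-of-envelope power budget for an AOC drawing from the HDMI +5V pin.
# Assumption: the engine draw below is illustrative, not a vendor spec.

RAIL_VOLTS = 5.0
MIN_SOURCE_MA = 55                     # HDMI minimum source capability
budget_mw = RAIL_VOLTS * MIN_SOURCE_MA # mA x V = mW -> 275 mW guaranteed

assumed_engine_mw = 120                # hypothetical optical-engine draw per end
fits = assumed_engine_mw <= budget_mw

print(f"budget: {budget_mw:.0f} mW, engine draw: {assumed_engine_mw} mW, fits: {fits}")
```

This is also why the cable is directional: each end's transceiver has to live inside whatever its local connector can actually deliver.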

Beyond pure reach, the engineering story is about mechanical packaging. Savlink shows ultra-slim micro-coax builds (down to ~2.7mm OD, ~36-AWG class conductors) for tight installs, plus armored variants that integrate Kevlar reinforcement for higher pull strength and abrasion resistance. This was filmed at CES Las Vegas 2026, where the same cable constraints show up everywhere from compact AV rigs to robotics at the expo.

They also highlight “optical engine” breakout concepts: converting USB, HDMI, or DisplayPort electrical lanes to fiber on a small PCB, then de-multiplexing on the far end into interfaces like DP, USB-C, and USB-A. That kind of modular conversion is useful when you need long-haul transport but still want standard connectors at the edge.

The broader theme is reliability in harsh environments: low-EMI fiber for medical imaging and industrial gear, and flex-life for robots where cables run through narrow arm tubing and survive drag-chain motion over millions of cycles. If you’re planning 8K or 4K-high-refresh pipelines, spatial/VR links, or long HDMI runs in noisy spaces, this is a practical look at what changes when the cable becomes an active opto-electronic system rather than just copper.

source https://www.youtube.com/watch?v=SI1tqqfEXos

East-Toptech Ultra96 HDMI 2.2 cable: 96Gbps, 16K, passive 2m, locking plug

Posted by – January 15, 2026
Category: Exclusive videos

East-Toptech (Shenzhen) positions itself as an OEM/ODM cable manufacturer with high-volume throughput (they cite ~10 million cables per year) and long experience building A/V interconnects for brands and distributors. The conversation focuses on how cable design is a system problem: conductor geometry, shielding, connector mechanics, jacket materials (nylon braid, TPE/PE-style mixes), and—crucially—how products are prepared for formal certification and retail packaging.
https://east-toptech.com/

The main showcase is an HDMI 2.2-ready “Ultra96” passive HDMI cable concept, aimed at the new 96Gbps-class link budgets (FRL) that enable very high resolution / high refresh transport profiles, up to 16K-class timing in the spec roadmap. The transcript briefly says “196,” but the industry label to watch is Ultra96 (up to 96Gbps) plus the official certification label on the box; they say broad availability will follow once certification is secured.

A lot of the booth story is about form factors that solve real install pain: a short 2 m passive lead for maximum margin, very slim cable builds for tight routing, and a coiled HDMI cable meant for VR or compact devices where bend radius, strain relief, and snag resistance matter. They also point to mechanical locking HDMI connectors, plus typical signal-integrity talking points like controlled differential impedance, EMI shielding strategy, and connector plating choices intended to keep insertion loss and crosstalk in check.

Filmed during CES Las Vegas 2026, the closing note is basically roadmap: passive Ultra96 where it makes sense, then longer-reach HDMI 2.2 options via active copper/equalized designs or AOC once the compliance ecosystem and labeling are fully settled. The takeaway isn’t one hero SKU, but a factory approach that can iterate cable geometry, jackets, and locking hardware quickly as 8K gaming, high-frame-rate workflows, and next-gen display timing become more common.

source https://www.youtube.com/watch?v=9Ubx2BAOhZo

Amazfit lineup tour: Balance 2 dive modes, T-Rex 3 Pro titanium, Helio Strap recovery

Posted by – January 15, 2026
Category: Exclusive videos

Amazfit walks through a full wearable lineup built around sports tracking, long runtimes, and a relatively lightweight software stack. The newest drop here is the Active Max, positioned as a mid-tier watch with a larger 1.5-inch AMOLED panel (up to 3000 nits), up to 25 days of claimed battery life, and 4GB storage that can hold roughly 100 hours of podcasts, plus offline maps for phone-free training. https://us.amazfit.com/products/active-max

The rest of the range is framed as “pick the form factor that fits your day, keep the data in one place.” Active 2 is the smaller, style-first option, while the Helio Strap is a screenless band aimed at recovery and sleep for people who don’t want a watch on at night; wearing it on the upper arm also improves comfort during hard sessions. The common thread is continuous sensor data feeding into Zepp, so readiness-style metrics, sleep staging, stress, and training load stay comparable across devices, even when you swap hardware or take the watch off for a while.

For tougher use-cases, Balance 2 and T-Rex 3 Pro lean into water and outdoor durability, both rated to 10 ATM and positioned for diving modes (including freediving/scuba, with marketing claims up to about 45 m). T-Rex 3 Pro also comes in 44 mm and 48 mm sizes and uses rugged materials like grade-5 titanium elements, while keeping practical features like mic/speaker for calls, GPS-based navigation, and offline mapping in the same app flow. This segment was filmed at CES Las Vegas 2026, which is why the pitch focuses on quick comparisons rather than deep lab testing here.

Zepp’s nutrition tooling is the other interesting angle: there’s an in-app food log that can estimate macros from a photo, and the “Vital Food Camera” concept pushes that idea into dedicated hardware that captures multiple images per minute to infer what you ate, in what order, and how much you actually consumed. If Amazfit ships something like that, the hard problems won’t be the camera: it’ll be privacy controls, on-device vs cloud inference, and accurate portion estimation across messy real meals, all while keeping battery budgets realistic. The price point mentioned for the Active Max is $169, and the broader message is a decade of power-management tuning via Amazfit’s own OS and athlete feedback loops, without moving the products out of reach for regular buyers.

source https://www.youtube.com/watch?v=fHg4P4eanEk

Sensereo Airo modular air monitor: CO2, PM, TVOC pods over Thread + Matter smart home

Posted by – January 15, 2026
Category: Exclusive videos

Sensereo’s Airo frames indoor air quality (IAQ) as a distributed sensing job: instead of one “main” monitor, you dock and charge small, battery-powered pods and place them where exposure actually happens. Each pod is focused on a metric—CO2 for ventilation/cognitive comfort, particulate matter (PM/PM2.5) for smoke and dust events, TVOC for chemicals and off-gassing (including 3D printing), plus temperature and humidity for thermal balance—and the app translates raw telemetry into readable context and next steps. https://sensereo.com/

The modular design fits real homes because rooms behave differently: a bedroom can drift into high CO2 overnight, a kitchen can spike particulates during cooking, and a hobby corner can push VOCs after cleaning sprays or resin work. Airo’s “choose what you need, duplicate what you need” approach helps you validate changes like opening a window, adjusting HVAC airflow, or running a purifier, using room-level signal rather than a single average for the whole space.

This interview was filmed at CES Las Vegas 2026, where Sensereo pitched “environmental intelligence” as an always-on measurement layer you can move and scale over time. The company describes a charging dock plus swappable sensor pods, with battery life on the order of weeks (around a month between charges for key pods), and notes its component sourcing with established sensor makers such as Bosch and Figaro for the sensing stack and calibration path.

On connectivity, Airo is positioned to plug into mainstream smart-home graphs: low-power Thread links between pods, and Matter-oriented integration so platforms like Apple Home and Google Home can consume readings and trigger automations from thresholds (CO2, PM, TVOC). In the demo you see trend lines and historical views, which is where IAQ gets actionable: separating baseline drift from short spikes like wildfire smoke, cleaning sessions, or indoor smoking.
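
As a sketch of that baseline-versus-spike separation, here is a minimal rolling-median detector in Python; the window size, threshold ratio, and PM2.5 numbers are arbitrary illustrative choices, not Sensereo's algorithm.

```python
# Minimal sketch: a rolling median tracks the slow baseline, and samples far
# above it are flagged as short events (e.g. a cooking spike).
from statistics import median

def flag_spikes(samples, window=5, ratio=1.5):
    """Return indices whose value exceeds `ratio` x the rolling median."""
    events = []
    for i, value in enumerate(samples):
        lo = max(0, i - window)
        baseline = median(samples[lo:i] or [value])  # median of recent history
        if value > ratio * baseline:
            events.append(i)
    return events

pm25 = [8, 9, 8, 10, 9, 55, 60, 12, 9, 8]  # brief particulate spike
print(flag_spikes(pm25))
```

A median (rather than a mean) keeps one big spike from dragging the baseline up, so the event ends cleanly when readings fall back.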

The video also mentions an upcoming Kickstarter with a starter kit (dock plus four sensor pods) aimed at an entry price point under about US$200 for early backers. The broader takeaway is that modular sensing plus interoperable networking can make IAQ manageable like temperature: measure locally, compare over time, and trigger small interventions that reduce exposure without constant manual checking.

source https://www.youtube.com/watch?v=dFG6w3mlHNA

Teledyne LeCroy HDMI 2.2 testing: M42h 96Gbps FRL/FEC protocol analyzer + generator

Posted by – January 15, 2026
Category: Exclusive videos

HDMI 2.2 pushes the ecosystem from “it works” to “it works at 96Gbps”, which changes what engineers need to validate: Fixed Rate Link (FRL) behavior, Forward Error Correction (FEC), link training, and the way metadata and audio ride alongside high-rate video. In this interview, Teledyne LeCroy’s quantumdata team frames their role as the plumbing behind the logos—tools chip vendors and device makers use to debug, pre-test, and get ready for formal certification. https://www.teledynelecroy.com/protocolanalyzer/quantumdata-m42h

The centerpiece is the quantumdata M42h, a compact HDMI generator + protocol analyzer built for HDMI 2.2 FRL rates up to 96Gbps (24Gbps per lane), with visibility into FRL packetization (superblocks / character blocks), control data, InfoFrames, and captured error conditions. Filmed at CES Las Vegas 2026, the demo lands on a key point: test gear can be available ahead of the final compliance program, so silicon teams can iterate while the certification details get locked.
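
For orientation, the quoted rates work out as follows; the 16b/18b line coding is HDMI 2.1's FRL scheme, assumed here only to illustrate the raw-versus-payload distinction, since HDMI 2.2's exact overheads aren't detailed in the video.

```python
# Quick arithmetic on the quoted FRL rates: 4 lanes x 24 Gbps = 96 Gbps raw.
# Assumption: 16b/18b coding is carried over from HDMI 2.1 FRL for illustration.

LANES = 4
GBPS_PER_LANE = 24.0

raw_gbps = LANES * GBPS_PER_LANE      # aggregate line rate
payload_gbps = raw_gbps * 16 / 18     # payload before FEC/packet overhead

print(f"raw: {raw_gbps} Gbps, 16b/18b payload: {payload_gbps:.1f} Gbps")
```

The gap between raw and payload is exactly why analyzers need visibility into FRL packetization and FEC, not just the headline number.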

A practical theme is emulation. When you can’t buy an HDMI 2.2 display or source off the shelf, a box that can impersonate a sink or a source becomes the reference endpoint, letting teams validate interoperability before TVs, consoles, and GPUs ship. The loopback workflow shown—generate a defined stream, feed it back, then analyze what returns—turns “the picture looks odd” into timestamped protocol events you can debug in a lab.

They also point to a more portable, battery-powered tester with a built-in screen for AV integrators who need on-site verification—EDID behavior, HDCP handshakes, and signal continuity—without hauling a full bench setup. Rollout expectations stay grounded: Ultra96-class cables tend to arrive first, while sources and sinks follow once compliance specs and logos are finalized, with the interview estimating late 2026 into early 2027 for broader shelf availability, depending on real-world timing.

Teledyne LeCroy positions this as one slice of a broader protocol-test stack spanning HDMI, DisplayPort, USB, PCI Express, Ethernet, Wi-Fi, Bluetooth, and MIPI. The takeaway is that “new standard” is mostly a test problem—repeatable stimulus, deep capture of the right protocol layers, and turning edge-case failures into actionable debug data for real hardware.

source https://www.youtube.com/watch?v=gVtjmaDhF54

Birdfy Hum Bloom, Bath Pro, Feeder Vista: 4K slow-mo hummingbirds, dual-cam birdbath, 6K 360° feeder

Posted by – January 15, 2026
Category: Exclusive videos

Birdfy is turning backyard birdwatching into a small edge-AI workflow: camera-equipped feeders and baths that push visitor alerts to your phone, record clips on landing, and run computer-vision classification so footage arrives tagged by species. The goal is less “random wildlife camera” and more a searchable, shareable stream—portrait framing for close detail, wide view for context, plus a lightweight bird journal inside one app. https://www.birdfy.com/

Hum Bloom is built around hummingbird behavior and optics. The feeder uses a biomimetic flower-style nectar bulb so the camera keeps a clean line of sight, and a hydraulic pump system that keeps nectar available right where the bird feeds. Pair that with 4K capture and slow motion to resolve wingbeats, and Birdfy’s AI layer that aims at hummingbird species coverage (the booth mentioned roughly 150), so the metadata is about what you saw, not just that “something flew by.”

In the walkthrough, filmed at CES Las Vegas 2026, the conversation shifts from software to mechanics that make better data. Feeder Vista is positioned as a 360° setup with dual ultra-wide lenses and up to 6K video (plus high-frame-rate slow motion), letting you choose panoramic context or a single wide perspective. Instead of gravity-fed seed, an air pump lifts a measured portion from a sealed container to the tray, keeping bulk feed dry while helping the camera get consistent framing on each visit.

Bath Pro applies the same dual-view idea to water: a wide-angle camera to catch group activity, plus a portrait camera for individual detail, with smart capture that prioritizes faces and feathers over background clutter. A solar-powered fountain creates moving-water cues that attract visits, and an optional de-icer/heater keeps water accessible in winter, useful in places where bird activity continues but the basin would otherwise freeze.

The interview also lands on a realistic limit: species recognition is getting strong at scale, but true “this exact individual bird” re-identification is still hard without dependable visual markers. Treated as connected edge cameras with event-based recording, motion/weight sensing, and ongoing model updates, the interesting engineering story is how lens choice, placement geometry, and feeder mechanics co-design to turn backyard visits into clean, low-noise datasets you can enjoy in the moment.

Birdfy Hum Bloom: 4K slow-mo hummingbird feeder with hydraulic nectar pump + AI ID
Birdfy Feeder Vista: 360° 6K dual-lens bird feeder cam with air-pump seed system
Birdfy Bath Pro: dual-camera smart birdbath with solar fountain, de-icer/heater, AI alerts

source https://www.youtube.com/watch?v=aOt8Ps1XasM

Geniatech 42-inch ePaper whiteboard: AI modules on ARM: NXP/Qualcomm/MediaTek, Kinara, Hailo, MemryX

Posted by – January 15, 2026
Category: Exclusive videos

Geniatech is pushing ePaper beyond “static signage” by turning large-format E Ink into interactive, workflow-aware displays that behave more like tools than screens. The headline demo is a 42-inch ePaper interactive whiteboard designed for classrooms and meeting rooms, pairing a reflective, eye-friendly panel with low-latency handwriting, reusable templates (like weekly reports), and easy sharing via QR code. https://www.geniatech.com/product/42-epaper-interactive-whiteboard/

A nice touch is the “lecture replay” idea: voice recording can be captured alongside the pen strokes so students can re-watch how a solution was built, step by step, without needing a power-hungry LCD. Because it’s reflective ePaper, it avoids backlight glare and keeps heat generation low, which matters when a board is on all day in a bright room. The emphasis here is practical UX: smooth pen feel, fast refresh where it counts, and simple content distribution for real teaching.

For outdoor infrastructure, the same platform shows up as ePaper transit signage: a bus-stop style display with three panels driven from a single control board, built for low power and weather exposure. Reflections and finish come up in the discussion (matte vs glossy), and Geniatech highlights partial-refresh modes to update just the changing regions (like arrival times) instead of doing full-screen flashes all the time. The video is filmed at CES Las Vegas 2026, and the broader theme is “ultra-low power, always-visible info” for public space.
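
The partial-refresh idea can be sketched as a dirty-rectangle diff; this toy Python version (frames as small integer grids) illustrates the concept only, not Geniatech's controller logic.

```python
# Diff the new frame against the previous one and refresh only the bounding
# box of changed cells (e.g. an arrival time) instead of flashing the panel.

def dirty_rect(old, new):
    """Bounding box (row0, col0, row1, col1) of changed cells, or None."""
    changed = [(r, c) for r, row in enumerate(new)
               for c, v in enumerate(row) if old[r][c] != v]
    if not changed:
        return None  # nothing to refresh at all
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (min(rows), min(cols), max(rows), max(cols))

old = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 0, 0, 0]]
new = [[0, 0, 0, 0],
       [0, 1, 3, 0],   # only the "minutes" digit changed
       [0, 0, 0, 0]]
print(dirty_rect(old, new))  # only this small region needs an update
```

On real E Ink the driver would still need periodic full refreshes to clear ghosting, but confining routine updates to the dirty region is what keeps the sign from flashing constantly.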

Smaller ePaper devices round out the story, including digital photo frames that aim for near-zero maintenance by using indoor light energy harvesting (a “room-light charging” approach) so a once-per-day image update can run for extremely long periods without manual charging. There’s also a 28.5-inch color ePaper display pitched as “photo-like,” with both partial and full refresh options depending on whether you’re updating small UI elements or switching the whole layout.

Then Geniatech pivots from displays to compute: embedded edge AI on ARM, with boards and modules spanning NXP, MediaTek, and Qualcomm platforms, built to run inference locally for low latency, offline operation, and better data control. The partner ecosystem matters here: accelerator modules mentioned include Kinara Ara-class NPUs, plus options like Hailo-8, MemryX, and DeepX in Geniatech’s modular lineup, letting integrators match TOPS, power, and cost to the deployment instead of locking into one silicon path.

source https://www.youtube.com/watch?v=klaQaYe4w-Y

VESA DisplayPort Automotive Extensions at CES 2026: CRC ROI metadata, OpenGMSL, quantumdata M42de

Posted by – January 15, 2026
Category: Exclusive videos

VESA’s DisplayPort Automotive Extensions (DP AE) is about treating the in-car display path like a safety-critical data link, not “just pixels.” The idea is to detect corruption, dropped or repeated frames, and even intentional tampering so a rear-view camera, speedometer, or driver instrument cluster can be flagged as invalid instead of silently showing the wrong thing. https://vesa.org/

A key mechanism is functional-safety metadata riding on top of standard DisplayPort: CRC (cyclic redundancy check) signatures plus frame counters and timing checks, computed per region of interest (ROI) so the most critical parts of a screen get the tightest scrutiny. If a CRC mismatch appears, or if a frame freezes or skips, the system can raise a warning immediately rather than leaving the driver to guess what happened.
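As an illustration of the region-scoped checking idea (not the actual DP AE packet format, CRC polynomial, or counter width, which the spec defines), a sink-side check can be sketched like this: recompute a CRC over just the critical ROI and verify the frame counter advances by exactly one.

```python
import zlib

def roi_crc(frame: bytes, width: int, roi: tuple[int, int, int, int]) -> int:
    """CRC32 over a rectangular region of interest in an 8-bit grayscale
    frame. roi = (x, y, w, h). Illustrative only: DP AE defines its own
    CRC scheme and metadata packets."""
    x, y, w, h = roi
    crc = 0
    for row in range(y, y + h):
        start = row * width + x
        crc = zlib.crc32(frame[start:start + w], crc)
    return crc

def check_frame(frame, width, roi, expected_crc, counter, last_counter):
    """Flag the three failure modes the article describes: corrupted
    pixels (CRC mismatch), a frozen frame (counter repeat), and a
    skipped frame (counter jump)."""
    errors = []
    if roi_crc(frame, width, roi) != expected_crc:
        errors.append("crc_mismatch")
    if counter == last_counter:
        errors.append("frozen_frame")
    elif counter != (last_counter + 1) % 256:  # assumed 8-bit counter
        errors.append("skipped_frame")
    return errors

# Source side computes the metadata; sink side re-checks it per frame.
W = 8
frame = bytes(range(W * W))   # toy 8x8 "instrument cluster" frame
roi = (2, 2, 4, 4)            # protect only the critical region
meta = roi_crc(frame, W, roi)
assert check_frame(frame, W, roi, meta, counter=6, last_counter=5) == []
```

A real implementation runs in display hardware against the secondary data packets; the point is that a mismatch or a stalled counter becomes an immediate, machine-checkable signal rather than a visual glitch the driver has to notice.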

DP AE also adds security concepts aimed at image integrity and authentication, so attempts to modify rendered content in transit can be detected at the display level. This matters as vehicles add more high-resolution interior panels and camera feeds, while the attack surface grows across GPUs, head units, and external links in a modern car.

The demo is filmed at CES Las Vegas 2026 and ties DP AE to automotive wiring realities: long cable runs and SerDes links. VESA highlights collaboration with the OpenGMSL ecosystem to carry DisplayPort over longer distances (the video mentions up to 15 m), while keeping end-to-end checks consistent across silicon vendors, Tier-1s, and test tool chains.

On the validation side, Teledyne LeCroy’s quantumdata platform is shown as a practical way to emulate DP AE sources and sinks, inspect the new secondary data packets, and inject faults to prove detection works. Between FPGA setups, software models, and compliance workflows, the takeaway is an ecosystem push: interoperable safety/security profiles that different suppliers can test the same way and ship with fewer integration surprises.
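The fault-injection idea can be sketched in miniature, assuming nothing about the quantumdata tooling itself (the fault names below are illustrative, not test-case IDs): corrupt a known-good frame in controlled ways, then confirm the CRC comparison flags every injected fault while passing the clean frame.

```python
import zlib

def inject(frame: bytes, fault: str) -> bytes:
    """Corrupt a known-good frame to emulate link faults."""
    buf = bytearray(frame)
    if fault == "bit_flip":
        buf[len(buf) // 2] ^= 0x01        # single-bit error mid-frame
    elif fault == "burst":
        buf[0:4] = b"\x00\x00\x00\x00"    # short burst error at the start
    return bytes(buf)

def detects(frame: bytes, good_crc: int) -> bool:
    """The sink 'detects' a fault when the recomputed CRC disagrees
    with the CRC carried in the safety metadata."""
    return zlib.crc32(frame) != good_crc

# Prove detection works for each fault, and that clean frames pass.
golden = bytes(range(256))
crc = zlib.crc32(golden)
for fault in ("bit_flip", "burst"):
    assert detects(inject(golden, fault), crc), fault
assert not detects(golden, crc)
```

That loop is the whole logic of a compliance campaign in three lines: every injected fault must be caught, and a fault-free path must not raise false alarms.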

source https://www.youtube.com/watch?v=a01I6FVdY_4