Iceplosion Home Frozen Carbonated Drinks: NFC capsules, CO2 60 L, 2–4 min freeze

Posted by – January 18, 2026
Category: Exclusive videos

Iceplosion is building a single-serve countertop machine that makes frozen carbonated beverages at home, basically the “fizzy slushie” you’d normally buy at a convenience store, but produced on demand from a capsule. The core idea is controlled carbonation plus rapid freezing, so you can switch between frozen carbonated drinks, non-carbonated slushies, and ice-cold soda from the same platform without needing a bulky commercial dispenser. https://icelosion.com

The drink workflow is deliberately “coffee-pod simple”: insert a syrup capsule, add water, connect a standard commercially available 60 L CO2 cylinder, then let the machine do metering, mixing, chilling, and freeze management. The capsule is read via NFC so the system can enforce flavor ID, recipe parameters, and use-by date checks before dispensing. The headline spec is taking room-temperature liquid to a frozen texture in roughly 2–4 minutes, with each capsule producing a single portion of roughly 16–20 fl oz (about 470–590 mL).
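
To make that capsule gate concrete, here is a tiny Python sketch of what “validate the NFC payload before dispensing” can look like. It is purely illustrative: the field names, recipe list, and thresholds are assumptions, not Iceplosion’s firmware.

# Hypothetical sketch of the capsule check described above; none of the field
# names or limits come from Iceplosion, they only illustrate the gating idea.
from dataclasses import dataclass
from datetime import date

@dataclass
class CapsuleTag:
    flavor_id: str
    co2_volumes: float      # target carbonation level from the recipe
    freeze_minutes: float   # expected freeze-down time
    use_by: date

KNOWN_RECIPES = {"cola", "cherry", "blue_raspberry", "strawberry_lemonade"}

def can_dispense(tag: CapsuleTag, today: date) -> tuple[bool, str]:
    """Return (ok, reason) for a scanned capsule."""
    if tag.flavor_id not in KNOWN_RECIPES:
        return False, f"unknown flavor id: {tag.flavor_id}"
    if tag.use_by < today:
        return False, f"capsule expired on {tag.use_by.isoformat()}"
    if not (2.0 <= tag.freeze_minutes <= 4.0):
        return False, "recipe outside the supported 2-4 min freeze window"
    return True, "ok"

if __name__ == "__main__":
    tag = CapsuleTag("cola", co2_volumes=3.5, freeze_minutes=3.0,
                     use_by=date(2026, 12, 31))
    print(can_dispense(tag, date(2026, 1, 18)))   # (True, 'ok')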

Midway through the interview (filmed at CES Las Vegas 2026), the CEO contrasts the newer black enclosure with an older, larger white demo unit, and explains why the chassis was repackaged. Feedback from a Berlin trade show pushed them to reduce footprint for realistic kitchen use, and the redesign is claimed to be about 40% smaller while keeping the mechanical and thermal “guts” essentially locked in. What’s left is consumer-grade industrial design and manufacturable packaging, rather than re-inventing the freeze/carbonation module.

Commercially, the target pricing discussed is about $700 for the appliance and around $1 per capsule, which frames the product as a convenience and repeat-use economics play rather than a one-off gadget. The single-portion format also avoids keeping a whole tank cold, and it fits common home moments: hosting, barbecues, and watching sports where quick turnaround matters. If the machine can truly maintain sustained performance across back-to-back pours, the interesting engineering story becomes consistency: temperature control, viscosity management, carbonation retention, and cleaning workflow.

Flavor is where the platform can scale: they mention roughly 20 varieties today (cola, cherry, blue raspberry, strawberry lemonade, plus sugar-free options), with the possibility to develop new syrups as long as the formulation hits the right composition. For “healthier” slushies, the constraints are technical as much as marketing: managing Brix, freezing-point depression, texture, and CO2 behavior when you move toward real-juice bases and low-sugar recipes. The company positioning is also international—an English founder, operations based in Sicily, and an American corporate setup to support rollout over time.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=rO27j5ZlhlM

Autel Energy bidirectional 12kW home EVSE, 50kW DC Compact, 640kW modular fast charge

Posted by – January 18, 2026
Category: Exclusive videos

Autel Energy walks through an EVSE portfolio that spans residential Level 2 all the way to depot-class DC infrastructure. The highlight at the front of the booth is a bidirectional home charger rated at 12 kW (50 A), positioned as a V2H bridge between an EV pack and a home load panel so the car can act like a much larger “battery” than typical stationary storage. https://autelenergy.com/

A recurring theme is interoperability: charging is “simple” only when the control pilot handshake, protection logic, and vehicle communication behave predictably every time. Autel’s roots in automotive diagnostics show up in how they talk about optimizing the charger-to-vehicle handshake, plus the expectation that an EVSE must play nicely across mixed fleets, firmware revisions, and grid constraints without drama, especially when tied into smart energy management and time-of-use rates.
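
On the smart-energy angle, here is a toy Python sketch of time-of-use-aware scheduling, the kind of logic an EVSE backend might run overnight. The rates, peak hours, and the way the 12 kW figure is used are illustrative assumptions, not Autel’s implementation.

# Toy time-of-use scheduler: pick the cheapest whole hours to deliver the
# requested energy. Rates and peak window are made up for illustration.
HOURLY_RATE_USD_PER_KWH = {h: (0.34 if 16 <= h < 21 else 0.12) for h in range(24)}
CHARGER_KW = 12.0   # the bidirectional home unit discussed in the video

def cheapest_hours(energy_needed_kwh: float) -> list[int]:
    hours_needed = int(-(-energy_needed_kwh // CHARGER_KW))   # ceiling division
    ranked = sorted(HOURLY_RATE_USD_PER_KWH, key=HOURLY_RATE_USD_PER_KWH.get)
    return sorted(ranked[:hours_needed])

if __name__ == "__main__":
    plan = cheapest_hours(energy_needed_kwh=40.0)             # top up ~40 kWh
    cost = sum(HOURLY_RATE_USD_PER_KWH[h] * CHARGER_KW for h in plan)
    print(plan, f"~${cost:.2f} (rounded up to whole hours)")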

On the hardware side, the booth tour highlights a compact DC fast charger in the ~50 kW class (single or dual port), with support for CCS and the Tesla/NACS connector (SAE J3400). The same “compact but practical” idea carries into their single-port commercial AC Level 2 station, also 12 kW, which adds an embedded Nayax payment terminal so small businesses or multi-tenant sites can offer tap-to-charge without building a bespoke billing flow.

For higher power sites, Autel contrasts monolithic all-in-one cabinets up to 480 kW with a distributed architecture up to 640 kW where a centralized power cabinet feeds smaller dispensers (similar in layout to many highway fast-charge sites). Internally, the modularity is framed like a server rack: 40 kW power modules can be added to scale output over time, with cooling and power electronics packaged to keep upgrades and service predictable at full power.

They also hint at where charging is headed: automated plug-in/plug-out experiments for fleet depots, plus monitoring workflows that matter when uptime is the product. The interview is filmed at CES Las Vegas 2026, and it lands on Autel’s global footprint with North American operations based in Anaheim, additional presence in Fremont, and manufacturing in Greensboro, alongside deployments across Europe and other regions.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=OEGGfZM1vRE

RayNeo X3 Pro AI+AR Glasses: Snapdragon AR1, MicroLED waveguide, Gemini demos

Posted by – January 18, 2026
Category: Exclusive videos

RayNeo’s smart-glasses lineup splits into two categories: AI+AR “information overlay” eyewear and high-FOV “portable cinema” display glasses. This interview starts with X3 Pro, described as the latest AI+AR model launched in December 2025, using dual-eye full-color MicroLED waveguides so the image is visible mainly inside the wearer’s viewing cone. It’s built on Qualcomm Snapdragon AR1 Gen 1 to keep latency and power low for a heads-up layer. https://www.rayneo.com/products/x3-pro-ai-display-glasses

In the demo, a wake phrase (“OK RayNeo”) triggers Google Gemini, turning the glasses into a voice-first assistant for quick lookups like weather and general knowledge. Control is shared with a right-side touchpad/trackpad: swipe to navigate, then tap or double-tap to enter features. When idle, the UI drops into sleep mode to preserve battery and manage heat on a face-worn form factor.

On the display side, the presenter mentions 640×480 content rendering and very high in-eye brightness claims (up to about 6,000 nits peak) to keep overlays readable outdoors. RayNeo also talks about opening up the platform through AR SDKs and Unity workflows, suggesting this is meant for third-party apps, not just built-in assistant prompts. The feature set shown leans practical: live translation (the booth claim is up to 14 languages) and around 5 hours of battery life in typical daily use, depending on workload.

The camera then moves to RayNeo Air 4 Pro, positioned less as AR and more as a head-mounted external monitor for gaming, phone mirroring, and laptop work over USB-C (DisplayPort Alt Mode). Around CES Las Vegas 2026, RayNeo and early coverage pointed to HDR10 FHD Micro-OLED panels, refresh rates reported up to 120 Hz, high-frequency PWM dimming, and a video-processing pipeline that can map SDR into HDR and simulate 3D from 2D sources. Audio is central here too: four built-in speakers tuned with Bang & Olufsen for immersive sound.

Pricing underlines the split: X3 Pro is discussed around $1,099 as a flagship AI+AR device, while Air 4 Pro is pitched closer to $299, with availability described as early 2026 (the booth mentions late February in some regions, while reports cite late January sales in others). Taken together, the video captures the current convergence in smart eyewear: MicroLED waveguides + on-device AI for lightweight overlays, and Micro-OLED HDR display glasses for high-bandwidth media over a single cable.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=EONf1cSHl64

iodyne Pro Mini at Frore booth: dual AirJet cooling for sustained NVMe transfers over USB4

Posted by – January 18, 2026
Category: Exclusive videos

iodyne Pro Mini is a portable, bus-powered SSD shown with Frore Systems AirJet cooling, aimed at keeping transfer speed steady instead of spiking and then throttling. In the demo, dual AirJet modules are credited with sustaining about 3 GB/s during long writes and reads, which is the real pain point when you’re moving multi-hundred-gigabyte camera originals and deliverables on tight deadlines. https://iodyne.com/promini/

What makes the design interesting is that the performance claim is tied to thermal behavior, not peak burst numbers: USB4/Thunderbolt class bandwidth can be available, but typical compact NVMe enclosures often hit a heat ceiling and drop throughput hard. AirJet is a solid-state active cooling approach (no fan blades), built to push airflow through a thin chassis so the SSD can hold its steady-state transfer rate under sustained load.
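
A quick back-of-the-envelope in Python shows why the sustained number matters more than the burst number for big offloads. The 3 GB/s figure is the booth claim; the throttled profile and card size below are made-up but typical assumptions.

# Time to offload one card if the drive holds 3 GB/s versus bursting and then
# heat-soaking down to a lower rate (illustrative numbers, not measurements).
CARD_GB = 512   # one full camera card

def offload_minutes(burst_gb_s: float, sustained_gb_s: float,
                    seconds_before_throttle: float) -> float:
    burst_gb = burst_gb_s * seconds_before_throttle
    if burst_gb >= CARD_GB:
        return CARD_GB / burst_gb_s / 60
    remaining_gb = CARD_GB - burst_gb
    return (seconds_before_throttle + remaining_gb / sustained_gb_s) / 60

print(f"steady 3 GB/s:                    {offload_minutes(3.0, 3.0, 1e9):.1f} min")
print(f"3 GB/s, then 0.8 GB/s after 60 s: {offload_minutes(3.0, 0.8, 60):.1f} min")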

Beyond throughput, the product pitch leans into security and operations features that usually live in IT gear rather than pocket storage: XTS-AES-256 encryption, hardware-backed access using device passkeys, and workflow-oriented touches like a customizable digital label and “containers” for separating data. It also adds Find My-style tracking plus fleet management so teams can locate drives and remotely lock or disable them, as shown at CES Las Vegas 2026.

In practice, this targets production and post workflows where the cost isn’t just time, but people waiting around while media offloads finish: copying, verifying, and making multiple backups at wrap is routine, and anything that avoids thermal throttling can compress that window. The takeaway from the booth walkthrough is straightforward: sustained bandwidth, always-on protection, and remote manageability are being packaged into a small USB4/Thunderbolt SSD meant for on-set and field use where minutes have real value.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=A0EzR5T1deg

Intel Laptop 20% performance boost using Frore Systems AirJet Mini: 20W to 24W sustained CPU power

Posted by – January 18, 2026
Category: Exclusive videos

In this quick laptop thermal retrofit demo, Frore Systems swaps the stock fan assembly in a 14-inch Samsung Galaxy Book5 Pro for four AirJet Mini solid-state cooling modules. The baseline machine is shown sustaining about 20 W of CPU power with audible fans; the modified unit is tuned to hold about 24 W, roughly a 20% uplift, while aiming for near-silent operation and a more sealed chassis. https://www.froresystems.com/products/airjet-mini

AirJet Mini is designed as a thin active heat-sink module rather than a rotary fan: it uses ultrasonic actuation to move air through micro-vents, producing high back pressure (around 1,750 Pa) in a compact form factor. Frore rates the original Mini at roughly 5.25 W of heat removal at about 21 dBA while drawing up to about 1 W, so scaling to multiple modules can add meaningful sustained cooling without the tonal fan whine that often dominates thin laptops at load.
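
The module arithmetic is easy to sanity-check in a few lines of Python, using only the figures quoted above.

# Four AirJet Mini modules at Frore's published ratings.
modules = 4
heat_removed_per_module_w = 5.25
power_drawn_per_module_w = 1.0

added_cooling_w = modules * heat_removed_per_module_w    # ~21 W of extra heat lift
module_overhead_w = modules * power_drawn_per_module_w   # ~4 W consumed doing it

print(f"added cooling capacity: {added_cooling_w:.1f} W")
print(f"module power overhead:  {module_overhead_w:.1f} W")
# The demo's 20 W -> 24 W sustained CPU delta sits comfortably inside that
# budget; the remaining headroom goes to spreader losses, other heat sources,
# and skin-temperature limits.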

What matters here is sustained package power, not short boost: once a notebook hits its steady-state thermal limit, firmware clamps PL1 and clocks settle. Holding 24 W instead of 20 W can translate into higher all-core frequency, steadier interactive latency, and fewer dips from thermal throttling in long compiles, renders, or exports. The footage is filmed at CES Las Vegas 2026, and it’s a useful example of how solid-state airflow can shift the acoustics/performance trade space.

As always, outcomes depend on the whole stack: heat spreader quality, vapor chamber or heat pipe routing, fin and vent geometry, and how the BIOS enforces PL1/PL2 with skin-temperature limits. AirJet-style modules can also support dust-resilient, water-resistant industrial design because airflow can be routed through controlled paths rather than large open fan grilles, which may help consistency over time in real work.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=QyFY9Z9npEo

VESA DSC logo explained: DP2.1 DP54 bandwidth, 4K165 HDR workflows, compression limit, ClearMR 21000

Posted by – January 17, 2026
Category: Exclusive videos

VESA uses certification programs as a shorthand for what a display pipeline can really sustain, and this booth walk-through focuses on how those logos map to measurable signal integrity and motion performance. The headline demo is an LG QHD OLED gaming monitor certified for ClearMR 21000 (top tier motion blur rating), AdaptiveSync Display, and DisplayHDR True Black 500, running at 540 Hz over DisplayPort 2.1 using UHBR13.5 (54 Gb/s) with a DP54 cable. https://www.vesa.org/

ClearMR is essentially VESA’s way of normalizing blur metrics across panels and refresh regimes, so “21000” isn’t marketing fluff but a tier that implies very low perceived motion smear when the whole chain—panel response, overdrive, and scanout timing—behaves. On top of 540 Hz at QHD, the monitor also exposes a dual-mode toggle: it can drop resolution and push refresh up to 720 Hz, which is interesting for esports latency budgets even if it falls short of VESA’s Dual Mode certification threshold because that program requires at least 1080p.

The conversation then shifts from desktop gaming to mobile HDR, showing OLED tandem panels in laptops from LG and Lenovo. Tandem OLED stacks two emissive layers to raise peak luminance while keeping OLED black levels, which is how these systems hit VESA DisplayHDR True Black 1000. VESA mentions more than 100 True Black 1000 laptop models certified, with some families peaking around 1,600 nits—numbers that are easier to appreciate in person at CES Las Vegas 2026.

A recurring technical theme is Display Stream Compression (DSC): it has existed for years as an optional feature in older DisplayPort generations, but it’s a mandatory capability in DisplayPort 2.1 and now has a dedicated VESA logo program to indicate a validated implementation. DSC is typically visually lossless and is what makes extreme pixel rates feasible—think high-refresh QHD OLED, multi-display MST docking, or pushing beyond raw link budgets like 54 Gb/s UHBR13.5 and up to 80 Gb/s UHBR20.
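
A rough Python calculation (blanking overhead ignored, 10-bit RGB assumed) shows why that 540 Hz QHD link needs DSC in the first place.

# Uncompressed payload for QHD at 540 Hz versus a UHBR13.5 (DP54) link budget.
h, v, hz, bpc = 2560, 1440, 540, 10

raw_gbps = h * v * hz * bpc * 3 / 1e9            # uncompressed RGB, Gbit/s
link_gbps = 54.0                                  # 4 lanes x 13.5 Gbit/s
payload_gbps = link_gbps * 128 / 132              # 128b/132b line coding

print(f"uncompressed video: ~{raw_gbps:.1f} Gbit/s")
print(f"UHBR13.5 payload:   ~{payload_gbps:.1f} Gbit/s")
print(f"with ~3:1 DSC:      ~{raw_gbps / 3:.1f} Gbit/s")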

That DSC logo idea also shows up in TVs: LG’s newly announced C6 is highlighted because it targets 4K at 165 Hz with HDR, a case where compression is effectively required to move enough pixels even when the physical input is HDMI. VESA’s point is less about inventing a new codec and more about making interoperability predictable by certifying the DSC behavior, while keeping the standard itself royalty-free for members (with certification handled through test-house costs) rather than per-unit licensing fees.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=ciI_iUrkugs

VESA Thunderbolt 5 / USB4 v2 DP Tunneling at 120 Gbps: Single-Cable Dual 5K 165Hz Bandwidth

Posted by – January 17, 2026
Category: Exclusive videos

VESA walks through two real-world PC display pipelines that push modern interconnect limits: DisplayPort tunneling over USB4 v2 (aligned with Thunderbolt 5 behavior) and native DisplayPort 2.1 UHBR20 Multi-Stream Transport. The through-line is certification-grade thinking: link training, bandwidth allocation, DSC behavior, and the practical “does it stay stable when you unplug, re-route, and re-daisy-chain” edge. https://www.vesa.org/

The first setup, filmed at CES Las Vegas 2026, is a single-cable “wide + fast” scenario: a Gigabyte Thunderbolt 5 add-in card takes multiple DisplayPort inputs and tunnels two DP streams over one USB4 v2 output into a Kensington Thunderbolt dock. From there, two 5120×2160 5K panels run at 165 Hz, effectively demonstrating a dual-5K high-refresh desktop over one cable, with video traffic prioritized and kept coherent by the tunneling stack.

A key detail is USB4 v2 asymmetric mode: instead of the usual 2-lane up / 2-lane down, the link can shift to 3 lanes downstream (up to 120 Gbps) and 1 lane upstream (up to 40/60 Gbps depending on implementation). That’s what enables enough downstream headroom for multiple high-rate DP streams, and it pairs well with Display Stream Compression (DSC) on the panels to stretch effective payload without changing the physical lane rate.
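
The same kind of rough arithmetic (blanking and tunneling overhead ignored, 10-bit RGB assumed) shows how the dual-5K/165 demo fits into that asymmetric downstream budget.

# Two 5120x2160 streams at 165 Hz versus 120 Gbit/s of downstream link rate.
h, v, hz, bpc = 5120, 2160, 165, 10

per_stream_gbps = h * v * hz * bpc * 3 / 1e9
two_streams_gbps = 2 * per_stream_gbps
DOWNSTREAM_GBPS = 120.0          # USB4 v2 asymmetric mode: 3 lanes x 40 Gbit/s

print(f"one 5K/165 stream, uncompressed: ~{per_stream_gbps:.1f} Gbit/s")
print(f"two streams:                     ~{two_streams_gbps:.1f} Gbit/s vs {DOWNSTREAM_GBPS:.0f} Gbit/s raw link")
print(f"two streams with ~3:1 DSC:       ~{two_streams_gbps / 3:.1f} Gbit/s")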

The second demo switches to native DisplayPort 2.1 UHBR20 with MST daisy-chaining: an NVIDIA RTX 5090 drives three 32-inch Gigabyte AORUS FO32U2 Pro 4K HDR monitors from a single UHBR20 output, using each monitor’s DP in/out MST hub to forward streams down the chain. The visible target is 3840×2160 at 120 Hz across the chain (even if each monitor can do higher), highlighting the real constraint: GPU port policy and bandwidth budgeting per output, not just cable capability.

VESA also frames why MST compliance work matters: topology changes, stream re-enumeration, and hub routing are where users feel pain, so more exhaustive test coverage aims to make daisy-chained setups behave predictably across many permutations. In theory MST can scale to large fan-out counts, but the demo keeps it grounded in what’s achievable today for multi-monitor gaming, simulation, and high-density workstation layouts too.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=hizvFMf72Ao

TeleCANesis thin middleware for in-vehicle HMI: CAN-to-cloud routing, hypervisor IPC

Posted by – January 17, 2026
Category: Exclusive videos

TeleCANesis shows what “getting data where it needs to go” looks like inside a modern off-road vehicle platform: routing signals and commands between infotainment UI, instrument cluster, and embedded services so the right data arrives at the right endpoint with predictable timing. In this demo, that includes moving Bluetooth media metadata (track, artist) and control commands between the HMI layer and the Bluetooth stack, without each app hard-wiring every connection. https://telecanesis.com/

On the vehicle side, the same message routes carry speed, gear state, and other telemetry into the cluster, and can also drive body functions like lighting or logic such as enabling a reverse camera when the gear selector changes. The takeaway is less about a single widget and more about a reusable data plane: map signals once, then reuse them across displays, ECUs, and services as the product evolves, while keeping latency and ordering in check.
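
For a feel of the pattern, here is a minimal Python sketch of “map signals once, fan them out to consumers” using the open-source python-can and cantools libraries. This is not TeleCANesis code; the DBC file, channel, and signal names are placeholders.

# Generic CAN-signal routing sketch: decode frames against a signal database,
# then dispatch each decoded signal to whatever consumers registered for it.
import can          # pip install python-can
import cantools     # pip install cantools

db = cantools.database.load_file("vehicle.dbc")   # signal definitions, mapped once

ROUTES = {
    "VehicleSpeed": [lambda v: print(f"cluster: speed {v:.0f} km/h")],
    "GearSelector": [lambda v: print("HMI: enable reverse camera") if v == "R" else None],
}

def pump(bus: can.BusABC) -> None:
    """Decode raw frames and fan each signal out to its registered consumers."""
    while True:
        msg = bus.recv(timeout=1.0)
        if msg is None:
            continue
        try:
            decoded = db.decode_message(msg.arbitration_id, msg.data)
        except KeyError:
            continue                      # frame not in the database, ignore
        for name, value in decoded.items():
            for consumer in ROUTES.get(name, []):
                consumer(value)

if __name__ == "__main__":
    pump(can.interface.Bus(channel="vcan0", interface="socketcan"))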

There’s also a cabin detail from Ottawa Infotainment: audio is produced via transducers bonded into the roof and doors, so the panels become the radiating surface instead of traditional speaker cones. The video was filmed at CES Las Vegas 2026, and the booth context matters because it ties UI, sensor inputs, and connectivity into one integrated experience rather than a lab bench.

Across the booth, TeleCANesis sits under multiple UI stacks and display technologies, feeding the same vehicle signals into different HMIs, and routing safety-related sensor data in other demos. A key point is how this scales when the compute architecture gets more complex: in a next-gen platform with a hypervisor and multiple guest environments, TeleCANesis acts as the messaging backbone between isolated partitions so apps can exchange only the intended data across a clean boundary.

Under the hood, the approach leans on thin middleware plus model-driven configuration and automated code generation (including the TeleCANesis Hub toolkit built on QNX), which makes verification and safety/security certification more tractable than hand-written glue code. They describe using AI during project ingestion and setup, but keeping runtime messaging deterministic, because safety-critical routing is one of the places you can’t tolerate “creative” behavior from tooling. That split—AI to accelerate setup, determinism to ship—captures the engineering mindset in one shot.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=AGfMAuJzTlM

Ottawa Infotainment DragonFire OS demo: CAN-to-cloud IVI for ATVs, fleet telematics, safety UI

Posted by – January 17, 2026
Category: Exclusive videos

Ottawa Infotainment (Sean Hazaray) walks through a demo vehicle that represents a fast-growing niche: side-by-sides, ATVs, motorcycles, and neighborhood EVs that now expect “car-like” digital cockpit UX. The company positions itself as full-stack IVI and E/E architecture, spanning embedded hardware + OS, vehicle networks (CAN) and IO, and cloud-connected back ends that turn raw signals into driver-facing context on a large in-vehicle display. https://ottawainfotainment.com/pages/ces2026

A key theme is shortening OEM integration time by shipping pre-integrated building blocks instead of one-off engineering. In the cockpit, “infotainment” is framed as the orchestration layer for navigation, media, instrument-cluster data, and vehicle status, with an emphasis on configurable HMI that can be adapted across platforms and programs without restarting validation from zero each time.

Safety and fleet workflows are used as concrete examples of why tight integration matters. The vehicle shows attention-grabbing hazard lighting tied to Emergency Safety Solutions (ESS) concepts, and the broader message is that safety-critical alerts, coaching cues, and operational telemetry should live inside OEM-grade displays rather than on extra tablets, phone mounts, or aftermarket screens that increase distraction and training overhead.

Filmed at CES Las Vegas 2026, the booth pitch is “ecosystem-first”: partnerships like Geotab (fleet telematics and data intelligence embedded into DragonFire OS as an OEM option), ESS (connected hazard alerts), and modular E/E work with suppliers like Pektron point toward a software-defined vehicle approach where cockpit compute, ECUs, and cloud services evolve together through upgrades rather than hardware swaps.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=Kg0iYo-bBSQ

LOTES Ultra96 HDMI 2.2 connectors: Category 4 board/cable-side parts and CTS approval path

Posted by – January 16, 2026
Category: Exclusive videos

LOTES (Lotes Co., Ltd.) focuses on the unglamorous but critical part of the HDMI upgrade cycle: the physical interconnect. In this interview, Cien Wong explains how the company manufactures both the board-side HDMI receptacle and the cable-side plug for HDMI 2.2, targeting the new Category 4 “Ultra96” ecosystem where signal integrity margins tighten as bandwidth climbs toward 64/80/96 Gbps. https://www.lotes.cc/en/

A key theme is traceability and compliance rather than hype. For HDMI 2.2, HDMI Licensing Administrator maintains approved Category 3/Category 4 connector lists under the Compliance Test Specification (CTS), and device makers must use listed connectors to pass Authorized Testing Center validation. The practical takeaway for buyers is simple: check the HDMI.org approved-connector resources instead of trusting look-alike parts, a point made on the CES Las Vegas 2026 show floor.

The demo connects the dots between connector design and lab-grade verification. LOTES highlights collaboration with test vendors such as Rohde & Schwarz and the HDMI plugfest path, where measurements like differential insertion loss, differential impedance, attenuation-to-crosstalk, and intra-/inter-pair skew decide whether a connector/cable assembly behaves at multi-tens-of-GHz edge rates. That discipline matters because small discontinuities at the plug, PCB launch, or cable termination can show up as eye-diagram closure, elevated BER, or flaky link training at speed.

Timing-wise, LOTES says the hardware is essentially ready, while broader market availability depends on finalizing the HDMI 2.2 cable/connector test procedures and certification cadence, with products likely appearing toward late 2026 and then ramping as TVs, GPUs, and consoles adopt the spec. The company is headquartered in Keelung, Taiwan, with multi-site manufacturing across China plus a plant in Vietnam, which is relevant for OEM supply planning in Asia.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=70KnoWi7hj8

Elka Ultra96 HDMI 2.2 cables at CES 2026: passive coax 2 m now, 5–10 m roadmap

Posted by – January 16, 2026
Category: Exclusive videos

Elka walks through its HDMI cable roadmap with a focus on the new HDMI 2.2 “Ultra96” ecosystem: passive coaxial designs aimed at next-gen bandwidth targets, plus clear labeling so buyers can tell what they’re getting. The demo highlights a 2 m Ultra96 cable as the current reference build, while outlining longer-reach variants that follow the same electrical targets and compliance approach over time.

Filmed at CES Las Vegas 2026, the discussion frames HDMI 2.2 as a transition period where most consumer gear is still HDMI 2.1, but cable and connector vendors are already building toward higher data rates and stricter signal-integrity margins. Elka positions itself as a Taiwan-headquartered manufacturer with production across China, Laos, Vietnam, and Malaysia, using that footprint to scale different cable constructions and BOM choices.

On the technical side, the emphasis is on certification labels and performance claims tied to Ultra96: the transcript calls out 96 Gb/s class signaling and common use-cases like high-frame-rate 4K and 8K video modes for gaming, conference rooms, and pro AV installs. Even if end-devices lag, cabling that meets insertion-loss, impedance control, and crosstalk requirements is a prerequisite for stable links at higher symbol rates.

There’s also a branding note: Elka mentions a broader company rebrand and a “Spider” retail presence, suggesting a push to make certification marks and product families easier to recognize across regions (North America, Europe, Japan, and broader Asia). The takeaway is less about flashy demos and more about the practical pipeline—manufacturing scale, compliance labeling, and a length roadmap from short passive runs toward longer options as the market catches up.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=UhqyIp4RMHE

Arduino UNO Q: Dragonwing QRB2210 + STM32U585, Debian Linux, edge AI + robotics

Posted by – January 16, 2026
Category: Exclusive videos

Arduino’s UNO Q is a “dual-brain” dev board built with Qualcomm, combining a Linux-capable Qualcomm Dragonwing QRB2210 MPU with a real-time STM32U585 MCU in the familiar UNO form factor. The pitch is simple: you get a small SBC for UI, networking, and on-device inference, plus deterministic GPIO and motor-control timing on the microcontroller side—without having to design your own inter-processor plumbing. https://www.arduino.cc/product-uno-q

In the demo, the board runs standard Debian Linux with a preloaded IDE and a catalog of example apps, including a face-detection project. You can also drive the same workflow from a laptop over Wi-Fi, so the board can sit “headless” in a robot or enclosure while you iterate. The key abstraction is an Arduino “app” split across two worlds: a classic Arduino sketch for the MCU, and a Linux-side component you can write in Python (or anything that runs on Debian), tied together with simple RPC calls for message passing and control.
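
To picture the split, here is a deliberately generic Python sketch of the Linux-side half sending compact commands to the MCU over a serial link. It is not the actual UNO Q bridge API (the board ships its own RPC layer); the device node and the small JSON command protocol are assumptions for illustration.

# Linux-side half of a hypothetical "vision on Linux, motors on the MCU" split.
import json
import serial   # pip install pyserial

PORT = "/dev/ttyACM0"      # placeholder device node for the MCU side

def send_command(link: serial.Serial, command: str, **params) -> None:
    """Ship a small JSON command; an MCU sketch would parse it and drive motors."""
    link.write((json.dumps({"cmd": command, **params}) + "\n").encode())

def main() -> None:
    with serial.Serial(PORT, 115200, timeout=1) as link:
        # Pretend a Linux-side face detector found a face at x = 0.7 of the
        # frame width; nudge the robot toward it, then stop.
        face_x = 0.7
        send_command(link, "turn",
                     direction="right" if face_x > 0.5 else "left", speed=0.3)
        send_command(link, "stop")

if __name__ == "__main__":
    main()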

The robot-dog setup shows why this hybrid approach matters: the STM32 side handles real-time motor control while the QRB2210 hosts a lightweight web app that becomes the controller UI. Add a USB camera and you can loop vision results—like face detection or a custom classifier—back into low-latency behaviors on the microcontroller pins, without turning your control loop into a Linux scheduling problem. This was filmed at CES Las Vegas 2026, but the engineering theme is broader: making “UI + compute + control” feel like one coherent platform.

For AI workflows, the board story leans on a gentle on-ramp: start with “default models,” then move to custom training via Edge Impulse, export, and re-integrate into the same Arduino/Linux split application model. Hardware-wise, UNO Q is positioned as an entry board at $44, with a 2 GB RAM version shown and a 4 GB variant mentioned as upcoming; the goal is to keep the developer experience consistent as the line expands, while staying open source and accessible for robotics, IoT gateways, vision, and local web dashboards.

Overall, the UNO Q looks like Arduino trying to collapse the gap between maker-friendly GPIO and modern embedded compute: Cortex-A53 class Linux, GPU/ISP-capable silicon, Wi-Fi-based dev loops, and a clean API boundary to a real-time MCU. If you’ve ever duct-taped a Pi (or similar SBC) onto a microcontroller just to get a UI and networking, this is the same architecture—but packaged as one board with a curated software path from demo to product prototype.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=z22RdSICsSc

Dentomi GumAI demo: smartphone photo gingivitis screening, plaque heatmap, self-care guidance

Posted by – January 16, 2026
Category: Exclusive videos

Dentomi (DTOMI Limited) demonstrates GumAI, a computer-vision oral-health tool that turns a phone camera into a fast, at-home dental screening flow. You take an intraoral photo with a smartphone or iPad, and the app returns an annotated view that highlights where brushing or flossing needs more attention, using a simple green/yellow/red overlay aimed at coaching rather than replacing a dentist visit. https://www.dentomi.biz/

Under the hood it maps a familiar dentistry step—visual inspection—into an AI pipeline: guided image capture, quality checks (focus, lighting, framing), then pixel-level segmentation and classification to mark gingival margins, plaque-heavy zones, and other visible hygiene indicators. The practical value is repeatability, so people can track changes over time and tighten daily technique at home.
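
As an illustration of that capture-quality step, here is a small OpenCV sketch of a focus/exposure gate. The thresholds are arbitrary placeholders, not Dentomi’s, and no segmentation model is included.

# Simple pre-analysis quality gate: reject blurry or badly exposed photos
# before any segmentation/classification runs.
import cv2          # pip install opencv-python
import numpy as np

def quality_check(path: str, min_sharpness: float = 80.0,
                  min_mean: float = 60.0, max_mean: float = 200.0) -> tuple[bool, str]:
    img = cv2.imread(path)
    if img is None:
        return False, "could not read image"
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # crude blur metric
    brightness = float(np.mean(gray))                   # crude exposure check
    if sharpness < min_sharpness:
        return False, f"too blurry (Laplacian variance {sharpness:.0f})"
    if not (min_mean <= brightness <= max_mean):
        return False, f"bad exposure (mean {brightness:.0f})"
    return True, "ok: pass to segmentation"

if __name__ == "__main__":
    print(quality_check("intraoral_photo.jpg"))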

The team frames it as access tech for communities that don’t get regular dental care, with deployments via NGO partners, community centres, and elderly homes. In the interview (filmed at CES Las Vegas 2026), they also describe collaborations in Hong Kong, including sponsorship-style rollouts with Colgate-Palmolive that remove cost barriers and support preventive follow-up for health equity.

Ward describes a background in dentistry and public health, plus an ongoing PhD at the University of Hong Kong, with the product starting as research intended to translate into community impact. Training follows the typical supervised-learning path: labeled clinical photos from partner clinics and hospitals, plus additional user images when consent is granted, which brings up real questions around data governance and privacy.

Commercially, the model leans toward funded access—brands, dental associations, or public programmes cover licences so end users can scan for free, while the system can nudge referrals when risk looks elevated. It’s easy to imagine insurer and teledentistry tie-ins later, but the core framing stays consistent: image-based screening and education that helps people decide when to seek care and how to improve day-to-day habits before issues grow.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=FlnzG9ZLwtY

VESA DisplayPort DP80LL: UHBR20 active LRD cables, inrush power and compliance testing

Posted by – January 16, 2026
Category: Exclusive videos

This video digs into how VESA’s DisplayPort team validates the new DP80 low-loss cable class for DisplayPort 2.1, using a link-layer/protocol tester (Teledyne LeCroy quantumdata M42de) to run first-pass compliance checks. The core idea is simple: plug the cable into “in” and “out,” then verify it can link-train and move data across every lane count and configuration, including UHBR rates up to UHBR20, with a clean pass/fail report. That DP80 logo isn’t just marketing; it’s meant to give end users a quick signal that a cable has been through a defined compliance path rather than “it worked on my desk.” https://vesa.org/

A big theme is the practical limit of purely passive DP80 at the highest rates: once you chase 20 Gbit/s per lane, you quickly run out of electrical margin, especially past roughly a meter in common materials. DP80LL (DP80 “low loss”) is VESA’s answer: keep the same endpoint experience, but use active electronics to extend reach and improve margins. The demo focuses on LRD (linear redriver) designs with active components at both ends that reshape/restore the signal before it hits the receiver, and it also tees up active optical approaches for even longer spans where copper loss becomes the wall.

Filmed at CES Las Vegas 2026, the discussion gets refreshingly concrete about why “active” is hard: power behavior, not just eye diagrams. DisplayPort includes a DP_PWR pin intended to power adapters and active cables (historically 3.3 V at up to 500 mA), while USB-C variants can draw from the Type-C power domain, so every active design has to manage startup without browning out the port. Compliance testing drills into inrush (the plug-in current spike and voltage droop) and source/sink “outrush” robustness, which is why soft-start circuits and controlled capacitor charging become make-or-break details.
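
The inrush constraint reduces to I = C × dV/dt, which is easy to play with in a few lines of Python; the capacitor values and ramp times below are assumptions for illustration.

# How fast can a bulk capacitor on an active cable charge from the 3.3 V
# DP_PWR pin without exceeding its ~500 mA budget?
DP_PWR_VOLTS = 3.3
DP_PWR_LIMIT_A = 0.5

def charge_current_a(bulk_cap_farads: float, ramp_seconds: float) -> float:
    """Average current while soft-start ramps the cap from 0 V to 3.3 V."""
    return bulk_cap_farads * DP_PWR_VOLTS / ramp_seconds

for cap_uf, ramp_ms in [(100, 0.1), (100, 1), (100, 10)]:
    i = charge_current_a(cap_uf * 1e-6, ramp_ms * 1e-3)
    verdict = "ok" if i <= DP_PWR_LIMIT_A else "exceeds budget"
    print(f"{cap_uf} uF over {ramp_ms} ms -> {i * 1000:.0f} mA ({verdict})")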

There’s also nuance around interoperability and timing. When you connect a cable, HPD/AUX sideband activity kicks off link training, capability reads (DPCD/EDID paths), and clock recovery, all within spec-defined time windows. LRD-style cables behave like fast pass-through paths, while more complex repeater topologies can add training steps and delay, and optical links can introduce measurable latency if the run gets extreme. The video highlights how certification is expanding beyond straight cables into trickier categories like active adapters (for example USB-C to DP), where VESA needs test requirements that prevent “extension hacks” from silently breaking signal integrity.

The takeaway is that cable certification is becoming a first-class part of enabling UHBR20 in real setups: big, high-refresh desktop monitors, workstations, docks, and GPU-to-display runs that don’t fit the one-meter fantasy. DP80LL and related active/optical designs are about preserving link reliability at 80 Gbps class throughput while keeping user experience boring—in the good way—so the system link-trains once and stays locked. For anyone building or buying next-gen DisplayPort 2.1/2.1b gear, this is a peek into the engineering reality behind “it just works” at the edge of signal integrity.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=a6w1eAhk9ug

Edge Impulse XR + IQ9 edge AI 100 TOPS: YOLO-Pro, Llama 3.2, RayNeo X3 Pro AR1, PPE + QA LLM

Posted by – January 16, 2026
Category: Exclusive videos

Edge Impulse (a Qualcomm company) frames its platform as a model-to-firmware pipeline for edge AI: capture sensor or camera data, label it, train a compact model, then ship an optimized artifact that can run without a cloud round trip. The demos emphasize quantization, runtime portability, and repeatable edge MLOps where latency, privacy, and uptime matter for real work. https://edgeimpulse.com/

One highlight is an XR industrial worker assistant running on TCL RayNeo X3 Pro glasses built on Snapdragon AR1, with a dual micro-display overlay and a forward camera. Edge Impulse trains a YOLO-class detector (their “YOLO Pro” variant) to identify specialized parts, then a local Llama 3.2 flow pulls the right documentation and generates step-by-step context like part numbers, install notes, and purpose for a field crew guide.

The workflow focus is data: capture images directly from the wearable, annotate in Studio, and iterate via active learning where an early model helps pre-label the next batch. They also point to connectors that let foundation models assist labeling, plus data augmentation and synthetic data generation to widen coverage. This segment was filmed at the Qualcomm booth during CES Las Vegas 2026, but the core story is a repeatable edge pipeline, not a one-off demo.

A second showcase moves to the factory line: vision-based defect detection on Qualcomm Dragonwing IQ9, positioned for on-device AI at up to 100 TOPS. The UI runs with Qt, while the model flags defective coffee pods in real time and an on-device Llama 3.2 3B interface answers queries like defect summaries or safety prompts, all offline on the same device.

They round it out with PPE and person detection on an industrial gateway, plus Arduino collaborations: the UNO Q hybrid board (Dragonwing QRB2210 MPU + STM32U585 MCU) using USB-C hubs for peripherals, wake-word keyword spotting, and App Lab flows to deploy Edge Impulse models. There’s also a cascaded pattern where a small on-device detector triggers a cloud VLM only when extra scene context is needed, a practical tradeoff for cost and scale.
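
The cascaded idea is easy to sketch in Python. The two helper functions below are hypothetical placeholders (not Edge Impulse’s SDK) standing in for an on-device detector and a hosted vision-language model; the point is the gating logic, which keeps cloud calls proportional to interesting frames.

# On-device detector gates an expensive cloud VLM call.
from typing import Any, Optional

CONF_THRESHOLD = 0.6          # only escalate confident on-device detections

def run_local_detector(frame: Any) -> list[dict]:
    """Placeholder for a small on-device detector returning labels with scores."""
    return [{"label": "person_no_helmet", "score": 0.82}]

def ask_cloud_vlm(frame: Any, question: str) -> str:
    """Placeholder for a remote VLM call, only made when escalated."""
    return "Worker near the conveyor is not wearing a helmet; flag for review."

def process(frame: Any) -> Optional[str]:
    detections = [d for d in run_local_detector(frame) if d["score"] >= CONF_THRESHOLD]
    if not detections:
        return None                        # nothing interesting, stay on-device
    labels = ", ".join(d["label"] for d in detections)
    return ask_cloud_vlm(frame, f"Describe the scene context for: {labels}")

if __name__ == "__main__":
    print(process(frame=None))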


I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=602KtzBVvFU

Tensor Level-4 personal robocar: GreenMobility Copenhagen, Lyft plan, VinFast production this year

Posted by – January 16, 2026
Category: Exclusive videos

Tensor is positioning its Robocar as a privately owned SAE Level 4 vehicle, engineered around autonomy rather than retrofitting sensors onto an existing platform. The design is sensor-first: 5 LiDAR units, 37 cameras, 11 radars, plus microphones and underbody detection to see close to the curb and avoid low obstacles, with a cleaning system (large fluid tank, air/liquid jets, wipers) to keep optics usable in real-world grime. https://www.tensor.auto/

A big theme is fail-operational redundancy: braking, steering, power and compute are treated as duplicated subsystems, with partners mentioned like Bosch, ZF and Autoliv for safety-critical hardware. Tensor’s approach relies on multi-modal sensor fusion—using the strengths of vision, radar and LiDAR together—so the stack can handle edge cases like occlusion, glare, and near-field perception without betting everything on a single modality, which is where many autonomy programs see risk.

The interview was filmed at CES Las Vegas 2026, where Tensor also talked about opening parts of its AI work to outside developers. Beyond the car itself, they point to open tooling for “physical AI” workflows (vision-language-action training and deployment), and say the core models are being released in an open form via OpenTau, inviting collaboration while keeping the vehicle’s runtime data local to the car.

Inside, the cabin is treated like a productivity and media space: multiple displays, individual in-cabin cameras for calls, and privacy shutters for sensor coverage you want to disable. The signature mechanical element is a fold-away steering wheel and pedals that pop out on demand, making the handoff between Level 4 autonomy and manual control explicit, and supporting a spectrum from Level 3/2 ADAS down to Level 0 for fully human driving mode.

On go-to-market, Tensor frames a hybrid of personal ownership and fleet economics: owners can optionally connect the vehicle to ride-hailing when idle, while fleet partners like Lyft and the Copenhagen car-sharing operator GreenMobility have been announced as early channels. Manufacturing is planned via VinFast in Vietnam, with production targeted for the second half of 2026 and early deployments likely constrained to geofenced ODD areas before a broader roll-out.


I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=0IglyT7SjX4

Savlink Ultra96 HDMI 2.2 AOC: 96Gbps over 100m, opto-electronic cable design

Posted by – January 15, 2026
Category: Exclusive videos

Savlink walks through how “Ultra96” cabling is reshaping practical HDMI 2.2 deployments: once you push toward 96Gbps (next-gen FRL), passive copper quickly hits short runs, so their focus is active optical cable (AOC) builds that keep full-bandwidth signaling stable at 10m, 30m, and up to 100m while still presenting as a standard HDMI link end to end. https://smartavlink.com/

A key detail is power and topology: the optical transceivers draw from the HDMI +5V rail (and the cable is directional, with “source” and “display” ends), so you don’t need an external injector just to reach long distance. The demo contrasts a ~2m Ultra96-class copper lead with fiber-based AOC where attenuation, crosstalk, and EMI are far easier to control at high symbol rate.

Beyond pure reach, the engineering story is about mechanical packaging. Savlink shows ultra-slim micro-coax builds (down to ~2.7mm OD, ~36-AWG class conductors) for tight installs, plus armored variants that integrate Kevlar reinforcement for higher pull strength and abrasion resistance. This was filmed at CES Las Vegas 2026, where the same cable constraints show up everywhere from compact AV rigs to robotics at the expo.

They also highlight “optical engine” breakout concepts: converting USB, HDMI, or DisplayPort electrical lanes to fiber on a small PCB, then de-multiplexing on the far end into interfaces like DP, USB-C, and USB-A. That kind of modular conversion is useful when you need long-haul transport but still want standard connectors at the edge.

The broader theme is reliability in harsh environments: low-EMI fiber for medical imaging and industrial gear, and flex-life for robots where cables run through narrow arm tubing and survive drag-chain motion over millions of cycles. If you’re planning 8K or 4K-high-refresh pipelines, spatial/VR links, or long HDMI runs in noisy spaces, this is a practical look at what changes when the cable becomes an active opto-electronic system rather than just copper.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=SI1tqqfEXos

East-Toptech Ultra96 HDMI 2.2 cable: 96Gbps, 16K, passive 2m, locking plug

Posted by – January 15, 2026
Category: Exclusive videos

East-Toptech (Shenzhen) positions itself as an OEM/ODM cable manufacturer with high-volume throughput (they cite ~10 million cables per year) and long experience building A/V interconnects for brands and distributors. The conversation focuses on how cable design is a system problem: conductor geometry, shielding, connector mechanics, jacket materials (nylon braid, TPE/PE-style mixes), and—crucially—how products are prepared for formal certification and retail packaging. https://east-toptech.com/

The main showcase is an HDMI 2.2-ready “Ultra96” passive HDMI cable concept, aimed at the new 96Gbps-class link budgets (FRL) that enable very high resolution / high refresh transport profiles, up to 16K-class timing in the spec roadmap. The transcript briefly says “196,” but the label to watch is Ultra96 (up to 96Gbps) together with the official certification label on the box; they say broad availability follows once certification is secured.
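
To put “16K-class timing” in context, even 96Gbps is nowhere near enough for uncompressed transport at that resolution, which is presumably why DSC-style compression stays in the picture. A rough Python estimate (10-bit 4:4:4 assumed, blanking and FRL coding overhead ignored, compression ratios illustrative):

# How far 96 Gbps goes at 16K: uncompressed payload vs. what DSC-style compression buys.
# Blanking and FRL coding overhead are ignored, so this is an optimistic estimate.

def gbps(width, height, hz, bpc=10, components=3):
    return width * height * hz * bpc * components / 1e9

raw_16k60 = gbps(15360, 8640, 60)          # ~239 Gbps uncompressed
print(f"16K @ 60 Hz uncompressed: ~{raw_16k60:.0f} Gbps (vs. a 96 Gbps link)")

for ratio in (2, 3):                        # illustrative compression ratios
    print(f"  at {ratio}:1 compression: ~{raw_16k60 / ratio:.0f} Gbps")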

A lot of the booth story is about form factors that solve real install pain: a short 2 m passive lead for maximum margin, very slim cable builds for tight routing, and a coiled HDMI cable meant for VR or compact devices where bend radius, strain relief, and snag resistance matter. They also point to mechanical locking HDMI connectors, plus typical signal-integrity talking points like controlled differential impedance, EMI shielding strategy, and connector plating choices intended to keep insertion loss and crosstalk in check.

Filmed during CES Las Vegas 2026, the closing note is basically roadmap: passive Ultra96 where it makes sense, then longer-reach HDMI 2.2 options via active copper/equalized designs or AOC once the compliance ecosystem and labeling are fully settled. The takeaway isn’t one hero SKU, but a factory approach that can iterate cable geometry, jackets, and locking hardware quickly as 8K gaming, high-frame-rate workflows, and next-gen display timings become more common.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=9Ubx2BAOhZo

Amazfit lineup tour: Balance 2 dive modes, T-Rex 3 Pro titanium, Helio Strap recovery

Posted by – January 15, 2026
Category: Exclusive videos

Amazfit walks through a full wearable lineup built around sports tracking, long runtimes, and a relatively lightweight software stack. The newest drop here is the Active Max, positioned as a mid-tier watch with a larger 1.5-inch AMOLED panel (up to 3000 nits), up to 25 days of claimed battery life, and 4GB storage that can hold roughly 100 hours of podcasts, plus offline maps for phone-free training. https://us.amazfit.com/products/active-max
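
The “4GB holds roughly 100 hours of podcasts” claim is easy to sanity-check if you assume a typical spoken-word bitrate; the 64 kbps figure below is my assumption, not an Amazfit spec:

# Sanity check: does 4 GB of on-watch storage plausibly hold ~100 hours of podcasts?
# The 64 kbps spoken-word bitrate is an assumption, not an Amazfit spec.

storage_bytes = 4 * 1e9          # 4 GB, decimal gigabytes
bitrate_bps = 64_000             # assumed spoken-word audio bitrate
hours = storage_bytes * 8 / bitrate_bps / 3600
print(f"~{hours:.0f} hours at 64 kbps")   # ~139 hours

At 64 kbps the raw math gives about 139 hours, so quoting roughly 100 hours leaves sensible headroom for maps, firmware, and filesystem overhead.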

The rest of the range is framed as “pick the form factor that fits your day, keep the data in one place.” Active 2 is the smaller, style-first option, while the Helio Strap is a screenless band aimed at recovery and sleep for people who don’t want a watch on at night; wearing it on the upper arm also improves comfort during hard sessions. The common thread is continuous sensor data feeding into Zepp, so readiness-style metrics, sleep staging, stress, and training load stay comparable across devices, even when you swap hardware or take the watch off for a while.
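
For readers curious what a “training load stays comparable across devices” pipeline can look like, here is a generic sketch of an exponentially weighted acute-vs-chronic load trend; this is a common pattern in sports analytics, not Zepp’s actual algorithm, and the session scores are made up:

# Generic illustration of a readiness-style trend: exponentially weighted acute vs.
# chronic training load. This is NOT Zepp's algorithm, just a common pattern for
# turning a per-session load stream into a comparable cross-device metric.

def ewma(values, span):
    alpha, out, acc = 2 / (span + 1), [], None
    for v in values:
        acc = v if acc is None else alpha * v + (1 - alpha) * acc
        out.append(acc)
    return out

daily_load = [50, 80, 0, 60, 90, 0, 40, 70, 100, 0, 55, 65, 0, 85]  # arbitrary session scores
acute = ewma(daily_load, span=7)     # roughly the last week
chronic = ewma(daily_load, span=28)  # roughly the last month
ratio = acute[-1] / chronic[-1]
print(f"acute:chronic ratio ~{ratio:.2f}  (values well above 1.0 suggest ramping up fast)")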

For tougher use-cases, Balance 2 and T-Rex 3 Pro lean into water and outdoor durability, both rated to 10 ATM and positioned for diving modes (including freediving/scuba, with marketing claims up to about 45 m). T-Rex 3 Pro also comes in 44 mm and 48 mm sizes and uses rugged materials like grade-5 titanium elements, while keeping practical features like mic/speaker for calls, GPS-based navigation, and offline mapping in the same app flow. This segment was filmed at CES Las Vegas 2026, which is why the pitch focuses on quick comparisons rather than deep lab testing.

Zepp’s nutrition tooling is the other interesting angle: there’s an in-app food log that can estimate macros from a photo, and the “Vital Food Camera” concept pushes that idea into dedicated hardware that captures multiple images per minute to infer what you ate, in what order, and how much you actually consumed. If Amazfit ships something like that, the hard problems won’t be the camera: they’ll be privacy controls, on-device vs cloud inference, and accurate portion estimation across messy real meals, all while keeping battery budgets realistic. The price point mentioned for the Active Max is $169, and the broader message is a decade of power-management tuning via Amazfit’s own OS and athlete feedback loops, without moving the products out of reach for regular buyers.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=fHg4P4eanEk

Sensereo Airo modular air monitor: CO2, PM, TVOC pods over Thread + Matter smart home

Posted by – January 15, 2026
Category: Exclusive videos

Sensereo’s Airo frames indoor air quality (IAQ) as a distributed sensing job: instead of one “main” monitor, you dock and charge small, battery-powered pods and place them where exposure actually happens. Each pod is focused on a metric—CO2 for ventilation/cognitive comfort, particulate matter (PM/PM2.5) for smoke and dust events, TVOC for chemicals and off-gassing (including 3D printing), plus temperature and humidity for thermal balance—and the app translates raw telemetry into readable context and next steps. https://sensereo.com/
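
To make “raw telemetry into readable context” concrete, here is a tiny Python sketch that maps readings to qualitative bands; the thresholds are loosely based on common indoor-air guidance and are my illustration, not Sensereo’s calibration or banding:

# Illustrative mapping from raw pod readings to human-readable context.
# Thresholds are loosely based on common indoor-air guidance; they are NOT
# Sensereo's banding or calibration. Adjust to the guidance you trust.

BANDS = {
    "co2_ppm":    [(800, "good"), (1200, "elevated"), (float("inf"), "ventilate")],
    "pm25_ugm3":  [(12, "good"), (35, "moderate"), (float("inf"), "high")],
    "tvoc_index": [(100, "baseline"), (250, "elevated"), (float("inf"), "high")],
}

def describe(metric, value):
    for limit, label in BANDS[metric]:
        if value <= limit:
            return label

reading = {"co2_ppm": 1350, "pm25_ugm3": 9, "tvoc_index": 180}
for metric, value in reading.items():
    print(f"{metric}: {value} -> {describe(metric, value)}")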

The modular design fits real homes because rooms behave differently: a bedroom can drift into high CO2 overnight, a kitchen can spike particulates during cooking, and a hobby corner can push VOCs after cleaning sprays or resin work. Airo’s “choose what you need, duplicate what you need” approach helps you validate changes like opening a window, adjusting HVAC airflow, or running a purifier, using a room-level signal rather than a single average for the whole space.

This interview was filmed at CES Las Vegas 2026, where Sensereo pitched “environmental intelligence” as an always-on measurement layer you can move and scale over time. The company describes a charging dock plus swappable sensor pods, with battery life on the order of weeks (around a month between charges for key pods), and notes that it sources components from established sensor makers such as Bosch and Figaro for the sensing stack and calibration path.

On connectivity, Airo is positioned to plug into mainstream smart-home graphs: low-power Thread links between pods, and Matter-oriented integration so platforms like Apple Home and Google Home can consume readings and trigger automations from thresholds (CO2, PM, TVOC). In the demo you see trend lines and historical views, which is where IAQ gets actionable: separating baseline drift from short spikes like wildfire smoke, cleaning sessions, or indoor smoking.
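
Here is a minimal sketch of that baseline-vs-spike distinction (my own illustration, not Sensereo’s firmware or a Matter integration): compare each reading against a slow rolling baseline and only act when the deviation is both large and sustained.

# Toy spike detector for room-level IAQ: trigger an automation only when a reading
# stays well above a slow rolling baseline for several samples in a row.
# Illustration only; not Sensereo's logic or a Matter integration.

from collections import deque

def spike_events(samples, window=30, factor=1.5, hold=3):
    """Yield (index, value, baseline) when value > factor * baseline for `hold` samples in a row."""
    history, streak = deque(maxlen=window), 0
    for i, value in enumerate(samples):
        baseline = sum(history) / len(history) if history else value
        streak = streak + 1 if value > factor * baseline else 0
        if streak == hold:
            yield i, value, baseline
        history.append(value)

pm25 = [8, 9, 8, 10, 9, 8, 9, 45, 60, 70, 65, 30, 12, 9]   # cooking-style spike
for i, v, base in spike_events(pm25):
    print(f"sample {i}: PM2.5 {v} vs baseline ~{base:.0f} -> trigger purifier automation")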

The video also mentions an upcoming Kickstarter with a starter kit (dock plus four sensor pods) aimed at an entry price point under about US$200 for early backers. The broader takeaway is that modular sensing plus interoperable networking can make IAQ manageable like temperature: measure locally, compare over time, and trigger small interventions that reduce exposure without constant manual checking.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=dFG6w3mlHNA