Arduino UNO Q: Dragonwing QRB2210 + STM32U585, Debian Linux, edge AI + robotics

Posted by – January 16, 2026
Category: Exclusive videos

Arduino’s UNO Q is a “dual-brain” dev board built with Qualcomm, combining a Linux-capable Qualcomm Dragonwing QRB2210 MPU with a real-time STM32U585 MCU in the familiar UNO form factor. The pitch is simple: you get a small SBC for UI, networking, and on-device inference, plus deterministic GPIO and motor-control timing on the microcontroller side—without having to design your own inter-processor plumbing. https://www.arduino.cc/product-uno-q

In the demo, the board runs standard Debian Linux with a preloaded IDE and a catalog of example apps, including a face-detection project. You can also drive the same workflow from a laptop over Wi-Fi, so the board can sit “headless” in a robot or enclosure while you iterate. The key abstraction is an Arduino “app” split across two worlds: a classic Arduino sketch for the MCU, and a Linux-side component you can write in Python (or anything that runs on Debian), tied together with simple RPC calls for message passing and control.
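
To make the split concrete, here is a minimal sketch of what the Linux half of such an app can look like: a Python script that sends a framed command to the MCU and waits for a reply. Arduino's actual UNO Q bridge exposes its own RPC layer, so the serial framing, device path, and method name below are illustrative assumptions, not the real API.

```python
# Minimal sketch of the Linux-side half of a split app: send a command to
# the MCU and read a reply over a serial link. Arduino's actual UNO Q
# bridge exposes its own RPC layer; the framing below (JSON over pyserial,
# /dev/ttyACM0, "set_servo") is purely illustrative.
import json
import serial  # pip install pyserial

def call_mcu(port: serial.Serial, method: str, **params) -> dict:
    """Send one JSON-encoded RPC request and block for the reply line."""
    request = {"method": method, "params": params}
    port.write((json.dumps(request) + "\n").encode())
    return json.loads(port.readline().decode())

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as mcu:
        # A vision result computed on the Linux side, acted on by the sketch.
        reply = call_mcu(mcu, "set_servo", channel=0, angle=90)
        print(reply)
```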

The robot-dog setup shows why this hybrid approach matters: the STM32 side handles real-time motor control while the QRB2210 hosts a lightweight web app that becomes the controller UI. Add a USB camera and you can loop vision results—like face detection or a custom classifier—back into low-latency behaviors on the microcontroller pins, without turning your control loop into a Linux scheduling problem. This was filmed at CES Las Vegas 2026, but the engineering theme is broader: making “UI + compute + control” feel like one coherent platform.

For AI workflows, the board story leans on a gentle on-ramp: start with “default models,” then move to custom training via Edge Impulse, export, and re-integrate into the same Arduino/Linux split application model. Hardware-wise, UNO Q is positioned as an entry board at $44, with a 2 GB RAM version shown and a 4 GB variant mentioned as upcoming; the goal is to keep the developer experience consistent as the line expands, while staying open source and accessible for robotics, IoT gateways, vision, and local web dashboards.

Overall, the UNO Q looks like Arduino trying to collapse the gap between maker-friendly GPIO and modern embedded compute: Cortex-A53 class Linux, GPU/ISP-capable silicon, Wi-Fi-based dev loops, and a clean API boundary to a real-time MCU. If you’ve ever duct-taped a Pi (or similar SBC) onto a microcontroller just to get a UI and networking, this is the same architecture—but packaged as one board with a curated software path from demo to product prototype.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=z22RdSICsSc

Dentomi GumAI demo: smartphone photo gingivitis screening, plaque heatmap, self-care guidance

Posted by – January 16, 2026
Category: Exclusive videos

Dentomi (DTOMI Limited) demonstrates GumAI, a computer-vision oral-health tool that turns a phone camera into a fast, at-home dental screening flow. You take an intraoral photo with a smartphone or iPad, and the app returns an annotated view that highlights where brushing or flossing needs more attention, using a simple green/yellow/red overlay aimed at coaching rather than replacing a dentist visit. https://www.dentomi.biz/

Under the hood it maps a familiar dentistry step—visual inspection—into an AI pipeline: guided image capture, quality checks (focus, lighting, framing), then pixel-level segmentation and classification to mark gingival margins, plaque-heavy zones, and other visible hygiene indicators. The practical value is repeatability, so people can track changes over time and tighten daily technique at home.
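
A capture-quality gate like the one described is easy to picture in code. The sketch below uses OpenCV to reject blurry or badly lit photos before any segmentation runs; the thresholds and the stubbed segmentation hand-off are assumptions for illustration, not Dentomi's actual pipeline.

```python
# Illustrative capture-quality gate: reject out-of-focus or badly lit
# photos before segmentation. Threshold values are placeholders, not
# Dentomi's production settings.
import cv2

BLUR_MIN = 100.0          # variance of Laplacian below this ~= out of focus
BRIGHT_RANGE = (60, 200)  # acceptable mean gray level (0-255)

def passes_quality(path: str) -> bool:
    img = cv2.imread(path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    brightness = gray.mean()
    return sharpness >= BLUR_MIN and BRIGHT_RANGE[0] <= brightness <= BRIGHT_RANGE[1]

if passes_quality("intraoral.jpg"):
    pass  # hand the frame to the pixel-level segmentation model here
```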

The team frames it as access tech for communities that don’t get regular dental care, with deployments via NGO partners, community centres, and elderly homes. In the interview (filmed at CES Las Vegas 2026), they also describe collaborations in Hong Kong, including sponsorship-style rollouts with Colgate-Palmolive that remove cost barriers and support preventive follow-up for health equity.

Ward describes a background in dentistry and public health, plus an ongoing PhD at the University of Hong Kong, with the product starting as research intended to translate into community impact. Training follows the typical supervised-learning path: labeled clinical photos from partner clinics and hospitals, plus additional user images when consent is granted, which brings up real questions around data governance and privacy.

Commercially, the model leans toward funded access—brands, dental associations, or public programmes cover licences so end users can scan for free, while the system can nudge referrals when risk looks elevated. It’s easy to imagine insurer and teledentistry tie-ins later, but the core framing stays consistent: image-based screening and education that helps people decide when to seek care and how to improve day-to-day habits before issues grow.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=FlnzG9ZLwtY

VESA DisplayPort DP80LL: UHBR20 active LRD cables, inrush power and compliance testing

Posted by – January 16, 2026
Category: Exclusive videos

This video digs into how VESA’s DisplayPort team validates the new DP80 low-loss cable class for DisplayPort 2.1, using a link-layer/protocol tester (Teledyne LeCroy quantumdata M42de) to run first-pass compliance checks. The core idea is simple: plug the cable into “in” and “out,” then verify it can link-train and move data across every lane count and configuration, including UHBR rates up to UHBR20, with a clean pass/fail report. That DP80 logo isn’t just marketing; it’s meant to give end users a quick signal that a cable has been through a defined compliance path rather than “it worked on my desk.” https://vesa.org/

A big theme is the practical limit of purely passive DP80 at the highest rates: once you chase 20 Gbit/s per lane, you quickly run out of electrical margin, especially past roughly a meter in common materials. DP80LL (DP80 “low loss”) is VESA’s answer: keep the same endpoint experience, but use active electronics to extend reach and improve margins. The demo focuses on LRD (linear redriver) designs with active components at both ends that reshape/restore the signal before it hits the receiver, and it also tees up active optical approaches for even longer spans where copper loss becomes the wall.

Filmed at CES Las Vegas 2026, the discussion gets refreshingly concrete about why “active” is hard: power behavior, not just eye diagrams. DisplayPort includes a DP_PWR pin intended to power adapters and active cables (historically 3.3 V at up to 500 mA), while USB-C variants can draw from the Type-C power domain, so every active design has to manage startup without browning out the port. Compliance testing drills into inrush (the plug-in current spike and voltage droop) and source/sink “outrush” robustness, which is why soft-start circuits and controlled capacitor charging become make-or-break details.
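
A quick worked example shows why soft-start matters. Assuming a 10 µF bulk capacitor behind the DP_PWR pin (an illustrative value, not from any specific cable design), an uncontrolled plug-in tries to draw amps for microseconds, while a modest ramp keeps charge current inside the 500 mA budget.

```python
# Back-of-envelope inrush numbers for an active cable fed from DP_PWR
# (3.3 V, 500 mA budget). The 10 uF bulk capacitance and ramp times are
# assumed for illustration, not taken from any specific design.
C = 10e-6    # bulk capacitance at the cable's power input, farads
V = 3.3      # DP_PWR rail, volts
I_MAX = 0.5  # allowed steady draw, amps

# Hard plug-in: charging 10 uF in ~10 us demands far more than 500 mA.
t_fast = 10e-6
print(f"uncontrolled inrush ~ {C * V / t_fast:.2f} A")    # ~3.3 A spike

# Soft-start: shortest ramp that keeps charge current under the budget.
t_min = C * V / I_MAX
print(f"minimum controlled ramp ~ {t_min * 1e6:.0f} us")  # ~66 us
```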

There’s also nuance around interoperability and timing. When you connect a cable, HPD/AUX sideband activity kicks off link training, capability reads (DPCD/EDID paths), and clock recovery, all within spec-defined time windows. LRD-style cables behave like fast pass-through paths, while more complex repeater topologies can add training steps and delay, and optical links can introduce measurable latency if the run gets extreme. The video highlights how certification is expanding beyond straight cables into trickier categories like active adapters (for example USB-C to DP), where VESA needs test requirements that prevent “extension hacks” from silently breaking signal integrity.

The takeaway is that cable certification is becoming a first-class part of enabling UHBR20 in real setups: big, high-refresh desktop monitors, workstations, docks, and GPU-to-display runs that don’t fit the one-meter fantasy. DP80LL and related active/optical designs are about preserving link reliability at 80 Gbps class throughput while keeping user experience boring—in the good way—so the system link-trains once and stays locked. For anyone building or buying next-gen DisplayPort 2.1/2.1b gear, this is a peek into the engineering reality behind “it just works” at the edge of signal integrity.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=a6w1eAhk9ug

Edge Impulse XR + IQ9 edge AI 100 TOPS: YOLO-Pro, Llama 3.2, RayNeo X3 Pro AR1, PPE + QA LLM

Posted by – January 16, 2026
Category: Exclusive videos

Edge Impulse (a Qualcomm company) frames its platform as a model-to-firmware pipeline for edge AI: capture sensor or camera data, label it, train a compact model, then ship an optimized artifact that can run without a cloud round trip. The demos emphasize quantization, runtime portability, and repeatable edge MLOps where latency, privacy, and uptime matter for real work. https://edgeimpulse.com/

One highlight is an XR industrial worker assistant running on TCL RayNeo X3 Pro glasses built on Snapdragon AR1, with a dual micro-display overlay and a forward camera. Edge Impulse trains a YOLO-class detector (their “YOLO Pro” variant) to identify specialized parts, then a local Llama 3.2 flow pulls the right documentation and generates step-by-step context like part numbers, install notes, and purpose for a field crew guide.

The workflow focus is data: capture images directly from the wearable, annotate in Studio, and iterate via active learning where an early model helps pre-label the next batch. They also point to connectors that let foundation models assist labeling, plus data augmentation and synthetic data generation to widen coverage. This segment was filmed at the Qualcomm booth during CES Las Vegas 2026, but the core story is a repeatable edge pipeline, not a one-off demo.
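
The active-learning loop reduces to a simple triage pattern: let the early model label what it is confident about and queue the rest for a human. The sketch below shows that shape with a stubbed detector; it is not the Edge Impulse Studio API.

```python
# Generic shape of the active-learning loop described: an early model
# pre-labels the next batch, and only low-confidence frames go to a human.
# The predict() stub stands in for a trained detector.
import random
from typing import List, Tuple

def predict(frame: str) -> Tuple[str, float]:
    """Stand-in for the early detector: returns (label, confidence)."""
    return "part_A", random.random()

def triage(frames: List[str], threshold: float = 0.8):
    auto_labeled, needs_review = [], []
    for frame in frames:
        label, conf = predict(frame)
        (auto_labeled if conf >= threshold else needs_review).append((frame, label, conf))
    return auto_labeled, needs_review

auto, review = triage([f"capture_{i}.jpg" for i in range(20)])
print(f"{len(auto)} pre-labeled, {len(review)} queued for annotation")
```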

A second showcase moves to the factory line: vision-based defect detection on Qualcomm Dragonwing IQ9, positioned for on-device AI at up to 100 TOPS. The UI runs with Qt, while the model flags defective coffee pods in real time and an on-device Llama 3.2 3B interface answers queries like defect summaries or safety prompts, all offline on the same device.

They round it out with PPE and person detection on an industrial gateway, plus Arduino collaborations: the UNO Q hybrid board (Dragonwing QRB2210 MPU + STM32U585 MCU) using USB-C hubs for peripherals, wake-word keyword spotting, and App Lab flows to deploy Edge Impulse models. There’s also a cascaded pattern where a small on-device detector triggers a cloud VLM only when extra scene context is needed, a practical tradeoff for cost and scale.
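
The cascaded pattern is worth sketching, since it is the cost lever: every frame hits the small local detector, and only qualifying events pay for a cloud call. The detector stub and the endpoint URL below are placeholders, not any vendor's actual service.

```python
# Sketch of the cascaded pattern: a cheap on-device detector runs on every
# frame; a cloud VLM is called only when a detection needs richer scene
# context. The detector stub and endpoint URL are hypothetical.
import requests

VLM_ENDPOINT = "https://example.com/vlm/describe"  # placeholder URL

def on_device_detect(frame_bytes: bytes) -> float:
    """Stand-in for the small local detector; returns event confidence."""
    return 0.9

def process(frame_bytes: bytes, escalate_above: float = 0.8) -> str:
    conf = on_device_detect(frame_bytes)
    if conf < escalate_above:
        return "no event"  # nothing to explain, no cloud cost incurred
    resp = requests.post(VLM_ENDPOINT, files={"frame": frame_bytes}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("description", "")
```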

Edge Impulse XR + IQ9 edge AI: YOLO-Pro, Llama 3.2, AR1 smart glasses, defect detection
Edge Impulse on-device GenAI workflows: Hexagon NPU, QNN, 8-bit quant, Arduino UNO Q

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=602KtzBVvFU

Tensor Level-4 personal robocar: GreenMobility Copenhagen, Lyft plan, VinFast production this year

Posted by – January 16, 2026
Category: Exclusive videos

Tensor is positioning its Robocar as a privately owned SAE Level 4 vehicle, engineered around autonomy rather than retrofitting sensors onto an existing platform. The design is sensor-first: 5 LiDAR units, 37 cameras, 11 radars, plus microphones and underbody detection to see close to the curb and avoid low obstacles, with a cleaning system (large fluid tank, air/liquid jets, wipers) to keep optics usable in real-world grime. https://www.tensor.auto/

A big theme is fail-operational redundancy: braking, steering, power and compute are treated as duplicated subsystems, with partners mentioned like Bosch, ZF and Autoliv for safety-critical hardware. Tensor’s approach relies on multi-modal sensor fusion—using the strengths of vision, radar and LiDAR together—so the stack can handle edge cases like occlusion, glare, and near-field perception without betting everything on a single modality, which is where many autonomy programs see risk.

The interview was filmed at CES Las Vegas 2026, where Tensor also talked about opening parts of its AI work to outside developers. Beyond the car itself, they point to OpenTau, open tooling for “physical AI” workflows (vision-language-action training and deployment), and say the core models are being released in an open form, inviting collaboration while keeping the vehicle’s runtime data local to the car.

Inside, the cabin is treated like a productivity and media space: multiple displays, individual in-cabin cameras for calls, and privacy shutters for sensor coverage you want to disable. The signature mechanical element is a fold-away steering wheel and pedals that pop out on demand, making the handoff between Level 4 autonomy and manual control explicit, and supporting a spectrum from Level 3/2 ADAS down to Level 0 fully manual driving.

On go-to-market, Tensor frames a hybrid of personal ownership and fleet economics: owners can optionally connect the vehicle to ride-hailing when idle, while fleet partners like Lyft and the Copenhagen car-sharing operator GreenMobility have been announced as early channels. Manufacturing is planned via VinFast in Vietnam, with production targeted for the second half of 2026 and deployments likely constrained to geofenced ODD (operational design domain) areas before any broader roll-out.

Tensor Robocar Level-4 autonomy: 100+ sensors, Nvidia Thor compute, dual-mode cabin
Tensor autonomous car: LiDAR/radar/camera fusion, retractable wheel, privacy-first on-device AI

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=0IglyT7SjX4

Savlink Ultra96 HDMI 2.2 AOC: 96Gbps over 100m, opto-electronic cable design

Posted by – January 15, 2026
Category: Exclusive videos

Savlink walks through how “Ultra96” cabling is reshaping practical HDMI 2.2 deployments: once you push toward 96Gbps (next-gen FRL), passive copper is quickly limited to short runs, so their focus is active optical cable (AOC) builds that keep full-bandwidth signaling stable at 10m, 30m, and up to 100m while still presenting as a standard HDMI link end to end. https://smartavlink.com/

A key detail is power and topology: the optical transceivers draw from the HDMI +5V rail (and the cable is directional, with “source” and “display” ends), so you don’t need an external injector just to reach long distance. The demo contrasts a ~2m Ultra96-class copper lead with fiber-based AOC where attenuation, crosstalk, and EMI are far easier to control at high symbol rate.

Beyond pure reach, the engineering story is about mechanical packaging. Savlink shows ultra-slim micro-coax builds (down to ~2.7mm OD, ~36-AWG class conductors) for tight installs, plus armored variants that integrate Kevlar reinforcement for higher pull strength and abrasion resistance. This was filmed at CES Las Vegas 2026, where the same cable constraints show up everywhere from compact AV rigs to robotics at the expo.

They also highlight “optical engine” breakout concepts: converting USB, HDMI, or DisplayPort electrical lanes to fiber on a small PCB, then de-multiplexing on the far end into interfaces like DP, USB-C, and USB-A. That kind of modular conversion is useful when you need long-haul transport but still want standard connectors at the edge.

The broader theme is reliability in harsh environments: low-EMI fiber for medical imaging and industrial gear, and flex-life for robots where cables run through narrow arm tubing and survive drag-chain motion over millions of cycles. If you’re planning 8K or 4K-high-refresh pipelines, spatial/VR links, or long HDMI runs in noisy spaces, this is a practical look at what changes when the cable becomes an active opto-electronic system rather than just copper.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=SI1tqqfEXos

East-Toptech Ultra96 HDMI 2.2 cable: 96Gbps, 16K, passive 2m, locking plug

Posted by – January 15, 2026
Category: Exclusive videos

East-Toptech (Shenzhen) positions itself as an OEM/ODM cable manufacturer with high-volume throughput (they cite ~10 million cables per year) and long experience building A/V interconnects for brands and distributors. The conversation focuses on how cable design is a system problem: conductor geometry, shielding, connector mechanics, jacket materials (nylon braid, TPE/PE-style mixes), and—crucially—how products are prepared for formal certification and retail packaging. https://east-toptech.com/

The main showcase is an HDMI 2.2-ready “Ultra96” passive HDMI cable concept, aimed at the new 96Gbps-class link budgets (FRL) that enable very high resolution / high refresh transport profiles, up to 16K-class timing in the spec roadmap. The transcript briefly says “196,” but the industry label to watch is Ultra96 (up to 96Gbps) plus the official certification label on the box; they say broad availability follows once certification is secured.

A lot of the booth story is about form factors that solve real install pain: a short 2 m passive lead for maximum margin, very slim cable builds for tight routing, and a coiled HDMI cable meant for VR or compact devices where bend radius, strain relief, and snag resistance matter. They also point to mechanical locking HDMI connectors, plus typical signal-integrity talking points like controlled differential impedance, EMI shielding strategy, and connector plating choices intended to keep insertion loss and crosstalk in check.

Filmed during CES Las Vegas 2026, the closing note is basically roadmap: passive Ultra96 where it makes sense, then longer-reach HDMI 2.2 options via active copper/equalized designs or AOC once the compliance ecosystem and labeling are fully settled. The takeaway isn’t one hero SKU, but a factory approach that can iterate cable geometry, jackets, and locking hardware quickly as 8K gaming, high-frame-rate workflows, and next-gen display timing become more common.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=9Ubx2BAOhZo

Amazfit lineup tour: Balance 2 dive modes, T-Rex 3 Pro titanium, Helio Strap recovery

Posted by – January 15, 2026
Category: Exclusive videos

Amazfit walks through a full wearable lineup built around sports tracking, long runtimes, and a relatively lightweight software stack. The newest drop here is the Active Max, positioned as a mid-tier watch with a larger 1.5-inch AMOLED panel (up to 3000 nits), up to 25 days of claimed battery life, and 4GB storage that can hold roughly 100 hours of podcasts, plus offline maps for phone-free training. https://us.amazfit.com/products/active-max
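
As a sanity check, the storage claim hangs together: 4 GB spread over 100 hours implies a bitrate squarely in typical spoken-word podcast territory.

```python
# Sanity check on the "4GB ~= 100 hours of podcasts" claim: the implied
# bitrate lands in typical spoken-word territory (~64-96 kbps).
storage_bits = 4 * 1e9 * 8  # 4 GB in bits (decimal GB assumed)
seconds = 100 * 3600        # 100 hours
print(f"implied bitrate ~ {storage_bits / seconds / 1e3:.0f} kbps")  # ~89 kbps
```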

The rest of the range is framed as “pick the form factor that fits your day, keep the data in one place.” Active 2 is the smaller, style-first option, while the Helio Strap is a screenless band aimed at recovery and sleep for people who don’t want a watch on at night; wearing it on the upper arm also improves comfort during hard sessions. The common thread is continuous sensor data feeding into Zepp, so readiness-style metrics, sleep staging, stress, and training load stay comparable across devices, even when you swap hardware or take the watch off for a while.

For tougher use-cases, Balance 2 and T-Rex 3 Pro lean into water and outdoor durability, both rated to 10 ATM and positioned for diving modes (including freediving/scuba, with marketing claims up to about 45 m). T-Rex 3 Pro also comes in 44 mm and 48 mm sizes and uses rugged materials like grade-5 titanium elements, while keeping practical features like mic/speaker for calls, GPS-based navigation, and offline mapping in the same app flow. This segment was filmed at CES Las Vegas 2026, which is why the pitch focuses on quick comparisons rather than deep lab testing.

Zepp’s nutrition tooling is the other interesting angle: there’s an in-app food log that can estimate macros from a photo, and the “Vital Food Camera” concept pushes that idea into dedicated hardware that captures multiple images per minute to infer what you ate, in what order, and how much you actually consumed. If Amazfit ships something like that, the hard problems won’t be the camera—it’ll be privacy controls, on-device vs cloud inference, and accurate portion estimation across messy real meals, all while keeping battery budgets realistic. The price point mentioned for Active Max is $169, and the broader message is a decade of power-management tuning via Amazfit’s own OS and athlete feedback loops, without moving the products out of reach for regular buyers today.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=fHg4P4eanEk

Sensereo Airo modular air monitor: CO2, PM, TVOC pods over Thread + Matter smart home

Posted by – January 15, 2026
Category: Exclusive videos

Sensereo’s Airo frames indoor air quality (IAQ) as a distributed sensing job: instead of one “main” monitor, you dock and charge small, battery-powered pods and place them where exposure actually happens. Each pod is focused on a metric—CO2 for ventilation/cognitive comfort, particulate matter (PM/PM2.5) for smoke and dust events, TVOC for chemicals and off-gassing (including 3D printing), plus temperature and humidity for thermal balance—and the app translates raw telemetry into readable context and next steps. https://sensereo.com/

The modular design fits real homes because rooms behave differently: a bedroom can drift into high CO2 overnight, a kitchen can spike particulates during cooking, and a hobby corner can push VOCs after cleaning sprays or resin work. Airo’s “choose what you need, duplicate what you need” approach helps you validate changes like opening a window, adjusting HVAC airflow, or running a purifier, using room-level signal rather than a single average for the whole space.

This interview was filmed at CES Las Vegas 2026, where Sensereo pitched “environmental intelligence” as an always-on measurement layer you can move and scale over time. The company describes a charging dock plus swappable sensor pods, with battery life on the order of weeks (around a month between charges for key pods), and notes its component sourcing with established sensor makers such as Bosch and Figaro for the sensing stack and calibration path.

On connectivity, Airo is positioned to plug into mainstream smart-home graphs: low-power Thread links between pods, and Matter-oriented integration so platforms like Apple Home and Google Home can consume readings and trigger automations from thresholds (CO2, PM, TVOC). In the demo you see trend lines and historical views, which is where IAQ gets actionable: separating baseline drift from short spikes like wildfire smoke, cleaning sessions, or indoor smoking.
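
The automation side boils down to threshold logic per pod. In a real deployment the readings would arrive through a Matter controller (Apple Home, Google Home, and so on); the limits and actions in this sketch are illustrative assumptions.

```python
# Threshold logic of the kind those automations encode. The limits and
# the reading dict below are placeholders, not Sensereo's values.
THRESHOLDS = {"co2_ppm": 1000, "pm25_ugm3": 35, "tvoc_index": 250}

def evaluate(reading: dict) -> list:
    """Return the metrics in one pod's reading that exceed their limits."""
    return [k for k, limit in THRESHOLDS.items() if reading.get(k, 0) > limit]

bedroom = {"co2_ppm": 1450, "pm25_ugm3": 8, "tvoc_index": 120}
for metric in evaluate(bedroom):
    print(f"trigger automation: {metric} over limit")  # e.g. boost HVAC
```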

The video also mentions an upcoming Kickstarter with a starter kit (dock plus four sensor pods) aimed at an entry price point under about US$200 for early backers. The broader takeaway is that modular sensing plus interoperable networking can make IAQ manageable like temperature: measure locally, compare over time, and trigger small interventions that reduce exposure without constant manual checking.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=dFG6w3mlHNA

Teledyne LeCroy HDMI 2.2 testing: M42h 96Gbps FRL/FEC protocol analyzer + generator

Posted by – January 15, 2026
Category: Exclusive videos

HDMI 2.2 pushes the ecosystem from “it works” to “it works at 96Gbps”, which changes what engineers need to validate: Fixed Rate Link (FRL) behavior, Forward Error Correction (FEC), link training, and the way metadata and audio ride alongside high-rate video. In this interview, Teledyne LeCroy’s quantumdata team frames their role as the plumbing behind the logos—tools chip vendors and device makers use to debug, pre-test, and get ready for formal certification. https://www.teledynelecroy.com/protocolanalyzer/quantumdata-m42h

The centerpiece is the quantumdata M42h, a compact HDMI generator + protocol analyzer built for HDMI 2.2 FRL rates up to 96Gbps (24Gbps per lane), with visibility into FRL packetization (superblocks / character blocks), control data, InfoFrames, and captured error conditions. Filmed at CES Las Vegas 2026, the demo lands on a key point: test gear can be available ahead of the final compliance program, so silicon teams can iterate while the certification details get locked.

A practical theme is emulation. When you can’t buy an HDMI 2.2 display or source off the shelf, a box that can impersonate a sink or a source becomes the reference endpoint, letting teams validate interoperability before TVs, consoles, and GPUs ship. The loopback workflow shown—generate a defined stream, feed it back, then analyze what returns—turns “the picture looks odd” into timestamped protocol events you can debug in a lab.
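
Conceptually, that loopback analysis is a diff between the stream you generated and the stream you captured, with every divergence timestamped. The sketch below mirrors the workflow only; it is not Teledyne LeCroy's analyzer API.

```python
# The loopback idea reduced to code: compare the stream you generated with
# what came back, and report each divergence as a timestamped event.
from dataclasses import dataclass

@dataclass
class Packet:
    timestamp_us: int
    payload: bytes

def diff_streams(sent: list, captured: list) -> list:
    events = []
    for tx, rx in zip(sent, captured):
        if tx.payload != rx.payload:
            events.append((rx.timestamp_us, "payload mismatch"))
    if len(captured) < len(sent):
        events.append((sent[len(captured)].timestamp_us, "packets dropped"))
    return events  # "picture looks odd" -> concrete, timestamped findings
```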

They also point to a more portable, battery-powered tester with a built-in screen for AV integrators who need on-site verification—EDID behavior, HDCP handshakes, and signal continuity—without hauling a full bench setup. Rollout expectations stay grounded: Ultra96-class cables tend to arrive first, while sources and sinks follow once compliance specs and logos are finalized, with the interview estimating late 2026 into early 2027 for broader shelf availability, depending on real-world timing.

Teledyne LeCroy positions this as one slice of a broader protocol-test stack spanning HDMI, DisplayPort, USB, PCI Express, Ethernet, Wi-Fi, Bluetooth, and MIPI. The takeaway is that “new standard” is mostly a test problem—repeatable stimulus, deep capture of the right protocol layers, and turning edge-case failures into actionable debug data for real hardware.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=gVtjmaDhF54

Birdfy Hum Bloom, Bath Pro, Feeder Vista: 4K slow-mo hummingbirds, dual-cam birdbath, 6K 360° feeder

Posted by – January 15, 2026
Category: Exclusive videos

Birdfy is turning backyard birdwatching into a small edge-AI workflow: camera-equipped feeders and baths that push visitor alerts to your phone, record clips on landing, and run computer-vision classification so footage arrives tagged by species. The goal is less “random wildlife camera” and more a searchable, shareable stream—portrait framing for close detail, wide view for context, plus a lightweight bird journal inside one app. https://www.birdfy.com/

Hum Bloom is built around hummingbird behavior and optics. The feeder uses a biomimetic flower-style nectar bulb so the camera keeps a clean line of sight, and a hydraulic pump system that keeps nectar available right where the bird feeds. Pair that with 4K capture and slow motion to resolve wingbeats, and Birdfy’s AI layer that aims at hummingbird species coverage (the booth mentioned roughly 150), so the metadata is about what you saw, not just that “something flew by.”

In the walkthrough, filmed at CES Las Vegas 2026, the conversation shifts from software to mechanics that make better data. Feeder Vista is positioned as a 360° setup with dual ultra-wide lenses and up to 6K video (plus high-frame-rate slow motion), letting you choose panoramic context or a single wide perspective. Instead of gravity-fed seed, an air pump lifts a measured portion from a sealed container to the tray, keeping bulk feed dry while helping the camera get consistent framing on each visit.

Bath Pro applies the same dual-view idea to water: a wide-angle camera to catch group activity, plus a portrait camera for individual detail, with smart capture that prioritizes faces and feathers over background clutter. A solar-powered fountain creates moving-water cues that attract visits, and an optional de-icer/heater keeps water accessible in winter—useful in places where bird activity continues but the basin would otherwise freeze.

The interview also lands on a realistic limit: species recognition is getting strong at scale, but true “this exact individual bird” re-identification is still hard without dependable visual markers. Treat these as connected edge cameras with event-based recording, motion/weight sensing, and ongoing model updates, and the interesting engineering story becomes how lens choice, placement geometry, and feeder mechanics are co-designed to turn backyard visits into clean, low-noise datasets you can enjoy in the moment.

Birdfy Hum Bloom: 4K slow-mo hummingbird feeder with hydraulic nectar pump + AI ID
Birdfy Feeder Vista: 360° 6K dual-lens bird feeder cam with air-pump seed system
Birdfy Bath Pro: dual-camera smart birdbath with solar fountain, de-icer/heater, AI alerts

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=aOt8Ps1XasM

Geniatech 42-inch ePaper whiteboard: AI modules on ARM: NXP/Qualcomm/MediaTek, Kinara, Hailo, MemryX

Posted by – January 15, 2026
Category: Exclusive videos

Geniatech is pushing ePaper beyond “static signage” by turning large-format E Ink into interactive, workflow-aware displays that behave more like tools than screens. The headline demo is a 42-inch ePaper interactive whiteboard designed for classrooms and meeting rooms, pairing a reflective, eye-friendly panel with low-latency handwriting, reusable templates (like weekly reports), and easy sharing via QR code. https://www.geniatech.com/product/42-epaper-interactive-whiteboard/

A nice touch is the “lecture replay” idea: voice recording can be captured alongside the pen strokes so students can re-watch how a solution was built, step by step, without needing a power-hungry LCD. Because it’s reflective ePaper, it avoids backlight glare and keeps heat generation low, which matters when a board is on all day in a bright room. The emphasis here is practical UX: smooth pen feel, fast refresh where it counts, and simple content distribution for real teaching.

For outdoor infrastructure, the same platform shows up as ePaper transit signage: a bus-stop style display with three panels driven from a single control board, built for low power and weather exposure. Reflections and finish come up in the discussion (matte vs glossy), and Geniatech highlights partial-refresh modes to update just the changing regions (like arrival times) instead of doing full-screen flashes all the time. The video is filmed at CES Las Vegas 2026, and the broader theme is “ultra-low power, always-visible info” for public space.
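
Partial refresh is easy to express as a dirty-rectangle pattern: diff the new frame against the last one and push only the changed window to the panel. The driver call below is a placeholder; real E Ink controllers expose their own windowed-update commands.

```python
# Dirty-rectangle partial refresh: find the bounding box of changed pixels
# and update only that window. `panel_update_window` stands in for a real
# E Ink controller's windowed-refresh command. Frames are 2-D grayscale.
import numpy as np

def dirty_rect(old: np.ndarray, new: np.ndarray):
    """Bounding box (x, y, w, h) of changed pixels, or None if unchanged."""
    ys, xs = np.nonzero(old != new)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)

def refresh(old, new, panel_update_window):
    rect = dirty_rect(old, new)
    if rect is not None:  # e.g. just the arrival-times region changed
        x, y, w, h = rect
        panel_update_window(new[y:y+h, x:x+w], x, y)
```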

Smaller ePaper devices round out the story, including digital photo frames that aim for near-zero maintenance by using indoor light energy harvesting (a “room-light charging” approach) so a once-per-day image update can run for extremely long periods without manual charging. There’s also a 28.5-inch color ePaper display pitched as “photo-like,” with both partial and full refresh options depending on whether you’re updating small UI elements or switching the whole layout.

Then Geniatech pivots from displays to compute: embedded edge AI on ARM, with boards and modules spanning NXP, MediaTek, and Qualcomm platforms, built to run inference locally for low latency, offline operation, and better data control. The partner ecosystem matters here: accelerator modules mentioned include Kinara Ara-class NPUs, plus options like Hailo-8, MemryX, and DeepX in Geniatech’s modular lineup, letting integrators match TOPS, power, and cost to the deployment instead of locking into one silicon path.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=klaQaYe4w-Y

VESA DisplayPort Automotive Extensions at CES 2026: CRC ROI metadata, OpenGMSL, quantumdata M42de

Posted by – January 15, 2026
Category: Exclusive videos

VESA’s DisplayPort Automotive Extensions (DP AE) is about treating the in-car display path like a safety-critical data link, not “just pixels.” The idea is to detect corruption, dropped or repeated frames, and even intentional tampering so a rear-view camera, speedometer, or driver instrument cluster can be flagged as invalid instead of silently showing the wrong thing. https://vesa.org/

A key mechanism is functional-safety metadata riding on top of standard DisplayPort: CRC (cyclic redundancy check) signatures plus frame counters and timing checks, computed per region of interest (ROI) so the most critical parts of a screen get the tightest scrutiny. If a CRC mismatch appears, or if a frame freezes or skips, the system can raise a warning immediately rather than leaving the driver to guess what happened.
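
A stripped-down version of that check fits in a few lines: CRC the critical region of each frame and verify the frame counter advances. DP AE's real metadata packets carry more structure; this only shows the core idea, with the ROI coordinates as assumed examples.

```python
# Minimal version of the per-ROI integrity check: CRC the critical region
# of each frame and verify the frame counter advances. Illustrative only;
# DP AE's actual secondary data packets are richer than this.
import zlib
import numpy as np

def roi_crc(frame: np.ndarray, x: int, y: int, w: int, h: int) -> int:
    return zlib.crc32(np.ascontiguousarray(frame[y:y+h, x:x+w]).tobytes())

def check(frame, expected_crc, counter, last_counter, roi=(0, 0, 400, 200)):
    errors = []
    if roi_crc(frame, *roi) != expected_crc:
        errors.append("ROI CRC mismatch: flag content invalid")
    if counter != last_counter + 1:
        errors.append("frame frozen or skipped")
    return errors
```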

DP AE also adds security concepts aimed at image integrity and authentication, so attempts to modify rendered content in transit can be detected at the display level. This matters as vehicles add more high-resolution interior panels and camera feeds, while the attack surface grows across GPUs, head units, and external links in a modern car.

The demo is filmed at CES Las Vegas 2026 and ties DP AE to real automotive wiring reality: long cable runs and SerDes links. VESA highlights collaboration with the OpenGMSL ecosystem to carry DisplayPort over longer distances (the video mentions up to 15 m), while keeping end-to-end checks consistent across silicon vendors, Tier-1s, and test tool chains.

On the validation side, Teledyne LeCroy’s quantumdata platform is shown as a practical way to emulate DP AE sources and sinks, inspect the new secondary data packets, and inject faults to prove detection works. Between FPGA setups, software models, and compliance workflows, the takeaway is an ecosystem push: interoperable safety/security profiles that different suppliers can test the same way and ship with fewer integration surprises.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=a01I6FVdY_4

Mobilint Edge AI roadmap ARIES REGULUS multi-LLM PCIe card, TOPS/Watt NPUs, vehicle-grade SoC prep

Posted by – January 15, 2026
Category: Exclusive videos

Mobilint is a Korea-based AI semiconductor company building power-efficient NPUs for on-device and on-premise inference, aiming to shift workloads from cloud GPUs into compact systems with predictable latency and power. In the interview they mention working across the memory and foundry ecosystem (including Samsung and SK hynix) while focusing the demo on ARIES: 80 TOPS in a 25 W TDP, PCI Express Gen4 x8, and 16 GB LPDDR4X (optional 32 GB) with 66.7 GB/s memory bandwidth. https://www.mobilint.com/

On the demo table, ARIES is framed through deployable computer-vision throughput: YOLO-11 object detection plus standard backbones like ResNet-50 and MobileNet, with attention on TOPS per watt rather than peak TOPS alone. The target is industrial PCs and compact edge servers where thermal headroom is tight, so inference stays local while multiple models share one host chip.

A second setup zooms out to a larger PCIe card concept that “crams” four Mobilint M800 accelerators onto one board, intended to run several ~8B-parameter language models concurrently, or scale up via partitioning and batching. That naturally leads to vision-language models: camera frames become embeddings, the text decoder turns them into scene descriptions, and multilingual output becomes a useful interface for inspection or support; this segment was recorded on the CES Las Vegas 2026 show floor.

For smaller, always-on endpoints, Mobilint highlights REGULUS, a full SoC that pairs an NPU with Arm Cortex-A53 CPU cores so it can run Linux and execute pre-trained models without a separate host. They cite around 10 TOPS under 3 W for drones, robots, and AI CCTV, then demonstrate high-input video analytics, including a 96-stream fire-risk example where bandwidth, buffering, and scheduling matter as much as raw compute in the field.
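
Dividing the cited figures gives the efficiency story in one line each:

```python
# Efficiency implied by the cited specs (simple division, using TDP as the
# power figure):
aries_tops_per_w = 80 / 25   # ARIES: 80 TOPS at 25 W      -> 3.2 TOPS/W
regulus_tops_per_w = 10 / 3  # REGULUS: ~10 TOPS under 3 W -> ~3.3 TOPS/W
print(f"ARIES: {aries_tops_per_w:.1f} TOPS/W, REGULUS: ~{regulus_tops_per_w:.1f} TOPS/W")
```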

The closing theme is vehicle and humanoid readiness: partners want edge AI that is fast and power-bounded, but also engineered for functional safety and security hardening, not just benchmarks. The takeaway is that autonomy progress is a mix of smarter models, tighter sensor-to-actuation pipelines, and consolidating silicon so the platform can scale compute without multiplying energy cost.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=2AUfvdShhgE

CT5 ZONE HSS1 multi-user AI companion: shared earbuds for translation, privacy-tilt camera, 4K

Posted by – January 14, 2026
Category: Exclusive videos

CT5 presents ZONE HSS1 as a screen-free, hands-free “hear what you see / see what you hear” AI interface: a lightweight head-worn module with an 8MP camera, microphone array, and an earbud-based audio path that keeps the user in a conversational loop with a chatbot. Instead of pulling out a phone, the device is meant to capture first-person context (vision + sound) and return spoken guidance, so the interaction feels more like an always-available voice assistant with situational awareness. https://www.ct5.co.kr/

A key design point is multi-user audio: two earbuds can be shared so two people can listen at the same time, which CT5 frames as a practical way to run live translation for face-to-face conversation without everyone staring at screens. The company also positions it as a “smart glasses without glasses” approach, aiming for longer runtime than typical camera-enabled wearables by pushing heavy inference to cloud LMM/LLM endpoints via a paired smartphone.

In the demo filmed at CES Las Vegas 2026, the CEO describes both continuous live video modes and a “memory” style mode that records snapshots over time so the assistant can answer questions based on what the user has been seeing. They highlight model choice (Gemini, OpenAI, and other APIs), and acknowledge that multimodal usage can map to token/API cost even if pricing isn’t enforced during early pilot runs.

Hardware-wise, CT5 quotes about 90 g total weight and more than 20 hours of battery life on a charge, with the weight supported around the head rather than the nose bridge. The device is slated around a US$300 target price, with pilot production underway and an expected product launch window around April, focusing on coaching and real-time assistance use cases that benefit from first-person context on the move.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=8rAyXEhlQNs

Rohde & Schwarz HDMI 2.2 Ultra96 cable compliance: ZNB3000 VNA, crosstalk, skew

Posted by – January 14, 2026
Category: Exclusive videos

Rohde & Schwarz engineer Patrick McKenzie explains what “HDMI 2.2 cable compliance” means at the electrical layer: proving an Ultra96 cable can carry multi-lane differential traffic with controlled loss and low coupling, not just “pass video.” The demo frames compliance as a measurement recipe that turns VNA data into the parameters used for certification. https://www.rohde-schwarz.com/us/solutions/electronics-testing/high-speed-digital-interface-testing/hdmi-testing/hdmi-connector-and-cable-testing_258387.html

At the instrument level, a vector network analyzer (VNA) stimulates the channel and measures what returns across frequency, lane by lane. Because each HDMI lane is differential (P/N on each side), one lane measurement typically needs four VNA ports, repeated across the four data lanes. From those sweeps you derive insertion loss, attenuation-to-crosstalk ratio, differential impedance, inter-pair skew, and mode conversion, and you can apply time-domain transforms (TDR-like views) to pinpoint impedance discontinuities, connector launches, and pair imbalance in the setup.
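
The differential quantities come from combining those single-ended measurements. As a sketch, here is the standard mixed-mode conversion for the through term, assuming ports 1/3 are the P/N legs at the near end and 2/4 at the far end (other port conventions permute the indices):

```python
# How four single-ended S-parameters collapse into one differential
# quantity. Assumed port mapping: 1/3 = P/N at the near end, 2/4 = P/N at
# the far end; other conventions permute the indices.
import numpy as np

def sdd21(S: np.ndarray) -> complex:
    """Differential insertion-loss term from a 4x4 single-ended S-matrix."""
    # Sdd21 = 0.5 * (S21 - S23 - S41 + S43), with 1-based port indices
    return 0.5 * (S[1, 0] - S[1, 2] - S[3, 0] + S[3, 2])

S = np.zeros((4, 4), dtype=complex)
S[1, 0] = S[3, 2] = 0.7   # through paths P->P, N->N
S[1, 2] = S[3, 0] = 0.02  # weak cross-coupling between the halves
print(abs(sdd21(S)))      # ~0.68: mostly clean differential transmission
```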

A key practical detail is the fixture stack: HDMI plugs into test-point adapters (TPAs) that break the high-speed pairs out to SMA coax so the analyzer can reference clean planes. The example uses a Wilder TPA, while the other lanes (and the eARC lane when relevant) are terminated so the lane under test isn’t distorted by unterminated stubs. This interview was filmed at CES Las Vegas 2026, so you also see how a compliance bench gets operated in a busy show-floor environment.

On the Rohde & Schwarz side, the platform discussed is the R&S ZNB3000 VNA family (released in February 2025), positioned as a faster mid-range instrument with strong dynamic range for small-signal crosstalk work. Options scale from 2 to 4 ports and up to 54 GHz, which is useful when fixtures, connectors, and cable launches push measurements into the tens of GHz. The UI is Windows-based, with FPGA-backed acquisition and DSP behind the screen, and firmware updates landing on a regular cadence.

If you build, qualify, or certify high-speed copper interconnect, the takeaway is how modern HDMI validation is basically signal-integrity engineering packaged into a standard: characterize the channel, quantify lane-to-lane coupling, and verify skew/impedance limits before any eye-diagram margin discussion. With HDMI 2.2 pushing the Ultra96 class up to 96 Gbit/s, VNAs plus well-controlled fixtures become the gatekeepers for interoperability and predictable link behavior in real product work.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=70L1rsnA76g

GetD AI Translation Glasses: 29g open-ear audio, triple-mic ENC, Find My safety

Posted by – January 14, 2026
Category: Exclusive videos

GetD is positioning these as everyday AI translation glasses rather than an AR display: 29 g frames with RX-compatible lenses, photochromic behavior (sunglass outdoors, blue-cut indoors), and a phone-linked stack that can run real-time speech translation plus an AI assistant. Translation is described as using Microsoft Azure Speech Translation, with ChatGPT-style interaction handled through the companion app and Bluetooth audio. https://igetd.com/

On the audio side, the design leans into open-ear speakers with a triple-microphone array for environmental noise cancellation (ENC) so the wearer can still hear the room while getting clearer capture for calls and translation. The demo emphasizes “premium sound” tuning and voice pickup when someone speaks nearby, which matters for face-to-face interpreting and meeting-style use.

A notable feature is Apple Find My integration on iPhone, framed as a safety workflow for seniors: alerts and location sharing can help relatives react quickly if someone falls or needs help. The hardware callout is practical rather than flashy—microprocessor, IMU/G-sensor, battery, speaker modules, and mic placement are shown through a transparent frame variant. This interview was filmed at CES Las Vegas 2026.

GetD is deliberately avoiding an always-on camera and a constant display in this model, arguing that “intelligent but invisible” wearability is the point: comfortable optics, low weight, and fewer privacy concerns in public spaces. They do mention future accessibility ideas like on-lens transcription for cinema or hearing-impaired users, but positioned as a later roadmap rather than the core product.

Commercially, the pitch is a consumer launch path with Kickstarter pricing: an early-bird target around $179 and a retail price above $200, alongside multiple frame colors including the transparent look. If the execution holds up, the technical story is less about AR and more about audio UX: low-latency Bluetooth, multi-mic beamforming/ENC, cloud speech translation, and a mobile AI layer that keeps the phone in the loop without forcing you to stare at it all day.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=xN-JfIstGKk

HDMI 2.2 booth tour at #ces2026 HDMI Licensing Administrator 12K120 video, eARC audio, gaming

Posted by – January 14, 2026
Category: Exclusive videos

HDMI Licensing Administrator walks through what HDMI 2.2 changes at the ecosystem level: the jump to next-gen HDMI Fixed Rate Link signaling and up to 96 Gbps, plus how the new Ultra96 cable and its labeling are meant to reduce confusion when people buy cables for high-bandwidth sources and displays. https://www.hdmi.org/spec/hdmi2

A big theme is uncompressed video headroom: think 4K at very high refresh (up to 480 Hz), 8K60 and 4K240 in full chroma 4:4:4, and higher-tier modes like 12K120, while keeping 10-bit and 12-bit workflows practical for HDR mastering, PC gaming, and pro creation. The booth also frames the Ultra96 feature name as a bandwidth marker (64/80/96 Gbps), not just a version sticker.
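To put those modes against the 64/80/96 Gbps tiers, here's a quick back-of-the-envelope rate calculation. It counts active pixels only (real links add blanking and FEC overhead), assumes 12K means 11520×6480, and shows which modes fit fully uncompressed at 10-bit 4:4:4 versus needing chroma subsampling or DSC:

```python
# Back-of-the-envelope uncompressed rates, active pixels only.
# 12K is assumed to mean 11520x6480 (16:9).
def video_gbps(w: int, h: int, fps: int, bpc: int = 10) -> float:
    return w * h * fps * bpc * 3 / 1e9      # 3 subpixels, full 4:4:4

modes = {"4K240": (3840, 2160, 240), "4K480": (3840, 2160, 480),
         "8K60": (7680, 4320, 60), "12K120": (11520, 6480, 120)}

for name, (w, h, fps) in modes.items():
    rate = video_gbps(w, h, fps)
    note = "fits 96 Gbps" if rate <= 96 else "needs 4:2:0 and/or DSC"
    print(f"{name:7s} ~{rate:6.1f} Gbps at 10-bit 4:4:4 -> {note}")
```

8K60 and 4K240 both land near 60 Gbps of active video, which is why they headline the full-4:4:4 story, while 4K480 and 12K120 sit well past 96 Gbps and rely on subsampling or compression.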

Shot on the show floor at CES Las Vegas 2026, the tour connects those numbers to the plumbing behind them: tighter compliance requirements, tougher tolerances, and the move toward higher-performance certified components like Category 4 connectors. On the cable side, Ultra96 certification plus scannable labels are positioned as a practical way to verify model and length, especially once early prototypes turn into retail stock.

Audio and latency are treated as first-class engineering problems rather than add-ons. eARC is framed as the day-to-day enabler for Dolby Atmos and DTS:X through soundbars and AVRs, while HDMI 2.2 Latency Indication Protocol (LIP) targets better A/V sync in multi-hop setups where a TV, receiver, and multiple sources all add delay. For gamers, the familiar stack stays central: VRR, ALLM, and Quick Frame Transport, shown alongside high-refresh displays and an HDMI-equipped handheld dock.
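Conceptually, a latency-indication scheme lets each hop report its video delay so the audio path can be delayed to match. The sketch below illustrates that bookkeeping with made-up device names and numbers; it is not the LIP wire format:

```python
# Conceptual multi-hop lip-sync bookkeeping. All names and delays are
# invented for illustration; this is not the LIP protocol itself.
chain_video_delay_ms = {
    "source": 5,     # e.g., console render-out
    "avr":    8,     # receiver passthrough processing
    "tv":    12,     # panel processing in game mode
}

video_latency = sum(chain_video_delay_ms.values())
audio_latency = 10                    # e.g., soundbar decode path via eARC

# Delay whichever path is faster so audio and video land together.
audio_delay_needed = max(0, video_latency - audio_latency)
print(f"video path: {video_latency} ms, audio path: {audio_latency} ms")
print(f"apply {audio_delay_needed} ms audio delay for lip sync")
```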

The last section widens the lens: HDMI as the default interconnect for streaming sticks, digital signage players, matrix switchers, and creator gear from cameras and drones to tracking follow-me stage cams. There’s also a brief nod to sustainability work like cable material recycling and smaller packaging labels, but the core message is interoperability—higher bandwidth, clearer certification, and fewer surprises when you plug it all together.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

source https://www.youtube.com/watch?v=e5SZVTXrWh0

Teledyne FLIR OEM Tura thermal ADAS: 640×512 FIR, GMSL2/FPD-Link, IP6K9K

Posted by – January 14, 2026
Category: Exclusive videos

Teledyne FLIR OEM outlines a thermal-first approach to vehicle perception, positioning longwave infrared (LWIR, 8–14 µm) as a complement to visible cameras, radar, and lidar when lighting or contrast collapses. Tura is presented as an automotive-qualified thermal camera module that detects heat signatures from pedestrians and animals and feeds an AI pipeline that can label objects in real time, aiming to reduce missed detections at night and in poor weather. https://oem.flir.com/tura/

The conversation stays on automotive constraints: ISO 26262 functional-safety development targeting ASIL-B, AEC-Q qualified components, and a heated IP6K9K enclosure that de-fogs and de-ices to keep the optical window clear. The module uses a 640×512 uncooled microbolometer with 12 µm pixel pitch and a shutterless signal path designed to avoid periodic shutter interruptions, which matters for uptime in production vehicles. As a reference point, they mention autonomous fleets (like Zoox) using multiple thermal cameras per vehicle to strengthen perception redundancy in low light and bad weather.

Teledyne frames the benefit as more reaction time: thermal can see roughly four times farther than headlight reach, and a published target is pedestrian/animal detection at around 200 m or more in suitable conditions. Integration details are OEM-oriented: multiple FOV options (24°/42°/70°), selectable frame rates up to 60 Hz, power-over-coax input (6–15 V), and SERDES variants for in-vehicle links (GMSL2 or FPD-Link over FAKRA) carrying MIPI video streams. They also discuss cost targets at automotive volume—potentially near $100 in scale, with nearer-term figures more like $300—plus the role of training data and perception software to accelerate deployment into an ADAS stack.
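Two quick sanity checks on those numbers, with assumed values for low-beam reach, vehicle speed, and pedestrian width (the 640-pixel sensor width and FOV options come from the published specs; everything else is an assumption):

```python
# Back-of-the-envelope checks: extra reaction time from ~200 m detection,
# and pixels-on-target per FOV option. Target width, speeds, and headlight
# reach are assumptions, not Teledyne figures.
import math

H_PIXELS = 640                        # sensor width, from published specs
TARGET_W_M = 0.5                      # assumed pedestrian torso width
DETECT_M, HEADLIGHT_M = 200.0, 60.0   # thermal range vs assumed low-beam reach

for kmh in (50, 100):
    v = kmh / 3.6
    gain_s = (DETECT_M - HEADLIGHT_M) / v
    print(f"{kmh} km/h: ~{gain_s:.1f} s extra reaction time")

for fov_deg in (24, 42, 70):
    ifov = math.radians(fov_deg) / H_PIXELS    # angle per pixel, rad
    px = (TARGET_W_M / DETECT_M) / ifov        # small-angle approximation
    print(f"{fov_deg:2d} deg FOV: ~{px:.1f} px across a pedestrian at "
          f"{DETECT_M:.0f} m")
```

The second loop also hints at why multiple FOV variants exist: only the narrow 24° option puts several pixels across a pedestrian at 200 m, while the wide 70° option trades that reach for near-field coverage.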

A less obvious theme is validation: Teledyne advocates thermally active pedestrian dummies that are heated to match human IR signatures, making nighttime AEB tests more representative than “cold” mannequins. Filmed at CES Las Vegas 2026 during Pepcom, the interview ties the hardware story to evolving safety expectations (including higher-speed nighttime scenarios referenced in FMVSS 127 discussions) and how repeatable targets could turn thermal performance into an engineering metric.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=zPwkcmlGG50

Sharp Poketomo pocket AI companion robot: ChatGPT LLM, face ID camera, 4 motors

Posted by – January 14, 2026
Category: Exclusive videos

Sharp’s Poketomo is a pocket-sized conversational character built as an always-with-you companion rather than a productivity assistant. In the interview, Sharp explains that it comes from the same team behind Robohon, but shifts the focus from complex movement to lightweight, curated dialogue powered by an LLM (including ChatGPT) plus Sharp’s own intelligence layer for a more guided experience. https://poketomo.com/
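A minimal sketch of what a persona-constrained LLM call looks like, using the public OpenAI Python SDK; the persona text and model name are stand-ins, and Sharp's own intelligence layer on top of the LLM is proprietary:

```python
# Illustrative persona-constrained chat call. Persona prompt and model are
# placeholders; this is not Sharp's actual stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a small pocket companion. Reply in one or two warm, short "
    "sentences. Offer encouragement and check-ins; never do task automation."
)

def companion_reply(user_text: str, history: list[dict]) -> str:
    messages = [{"role": "system", "content": PERSONA}, *history,
                {"role": "user", "content": user_text}]
    resp = client.chat.completions.create(model="gpt-4o-mini",
                                          messages=messages)
    return resp.choices[0].message.content

print(companion_reply("I'm a bit nervous about tomorrow.", history=[]))
```

The design point is the system prompt plus carried-over history: keeping replies short and on-theme is what separates a "curated dialogue" product from a general-purpose chatbot.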

A big part of the concept is “carry culture”: people put Poketomo in a bag, take it out for small moments, and even make custom outfits like knitted hats and mini uniforms. That physical personalization matters because it turns the device into something between a character collectible and a social object, where communities form around sharing looks, routines, and short daily interactions that feel more like check-ins than long chat sessions.

Later in the video—filmed at CES Las Vegas 2026—you see Sharp demo Poketomo speaking English, highlighting the idea of “short-form conversation” designed for reflections, encouragement, and mood tracking rather than task automation. The product is intentionally tuned to feel like a companion that builds familiarity over time, with behavior that stays on-theme instead of trying to be an all-purpose assistant.

On the hardware side, Poketomo uses a small camera in the mouth area for owner recognition, enabling more personalized exchanges once it knows who it is talking to. The unit animates with four motors (arms plus head turn and nod) to add nonverbal cues, and it pairs with an app so the same conversation history can continue even when the robot is not in your hand, keeping the “memory” consistent across sessions for that one unit.
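Owner recognition of this kind typically reduces to matching face embeddings; the sketch below shows the generic cosine-similarity step with random stand-in vectors, since Sharp's on-device pipeline isn't public:

```python
# Generic embedding-match step for owner recognition. Embeddings here are
# random stand-ins; a real pipeline would produce them from the camera with
# a face-embedding network. Not Sharp's implementation.
import numpy as np

def is_owner(candidate_emb: np.ndarray, enrolled_emb: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Cosine similarity between a captured face and the enrolled owner."""
    sim = float(candidate_emb @ enrolled_emb /
                (np.linalg.norm(candidate_emb) * np.linalg.norm(enrolled_emb)))
    return sim >= threshold

rng = np.random.default_rng(0)
owner = rng.normal(size=128)                            # enrolled at setup
same_person = owner + rng.normal(scale=0.3, size=128)   # noisy re-capture
stranger = rng.normal(size=128)

print(is_owner(same_person, owner))   # True: high similarity
print(is_owner(stranger, owner))      # False: uncorrelated embedding
```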

Pricing is positioned to be more reachable than earlier character robots: in Japan it’s around ¥39,600 (often described as about $250), plus a monthly Cocoro Plan subscription that scales by usage volume (entry tiers around ¥495/month, with higher tiers up to about ¥1,980/month for larger conversation allowances). Sharp is still treating global rollout as an open question, but the English demo is a clear step toward broader availability later.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=SGe_W5lUpa0