Teledyne LeCroy HDMI 2.2 testing: M42h 96Gbps FRL/FEC protocol analyzer + generator

Posted by – January 15, 2026
Category: Exclusive videos

HDMI 2.2 pushes the ecosystem from “it works” to “it works at 96Gbps”, which changes what engineers need to validate: Fixed Rate Link (FRL) behavior, Forward Error Correction (FEC), link training, and the way metadata and audio ride alongside high-rate video. In this interview, Teledyne LeCroy’s quantumdata team frames their role as the plumbing behind the logos—tools chip vendors and device makers use to debug, pre-test, and get ready for formal certification. https://www.teledynelecroy.com/protocolanalyzer/quantumdata-m42h

The centerpiece is the quantumdata M42h, a compact HDMI generator + protocol analyzer built for HDMI 2.2 FRL rates up to 96Gbps (24Gbps per lane), with visibility into FRL packetization (superblocks / character blocks), control data, InfoFrames, and captured error conditions. Filmed at CES Las Vegas 2026, the demo lands on a key point: test gear can be available ahead of the final compliance program, so silicon teams can iterate while the certification details get locked.
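
As a quick sanity check of what those numbers imply, the lane arithmetic fits in a few lines of Python. The four-data-lane FRL topology is carried over from HDMI 2.1, and the Ultra96 tiers map onto per-lane rates accordingly (a sketch of the arithmetic only, not test-tool code):

```python
# FRL bandwidth sanity check: aggregate rate = lanes x per-lane rate.
# Assumes the 4-data-lane FRL topology HDMI 2.2 carries over from 2.1.
LANES = 4
for per_lane_gbps in (16, 20, 24):          # 64 / 80 / 96 Gbps tiers
    print(f"{LANES} x {per_lane_gbps} Gbps = {LANES * per_lane_gbps} Gbps")
```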

A practical theme is emulation. When you can’t buy an HDMI 2.2 display or source off the shelf, a box that can impersonate a sink or a source becomes the reference endpoint, letting teams validate interoperability before TVs, consoles, and GPUs ship. The loopback workflow shown—generate a defined stream, feed it back, then analyze what returns—turns “the picture looks odd” into timestamped protocol events you can debug in a lab.

They also point to a more portable, battery-powered tester with a built-in screen for AV integrators who need on-site verification—EDID behavior, HDCP handshakes, and signal continuity—without hauling a full bench setup. Rollout expectations stay grounded: Ultra96-class cables tend to arrive first, while sources and sinks follow once compliance specs and logos are finalized, with the interview estimating late 2026 into early 2027 for broader shelf availability, depending on real-world timing.

Teledyne LeCroy positions this as one slice of a broader protocol-test stack spanning HDMI, DisplayPort, USB, PCI Express, Ethernet, Wi-Fi, Bluetooth, and MIPI. The takeaway is that “new standard” is mostly a test problem—repeatable stimulus, deep capture of the right protocol layers, and turning edge-case failures into actionable debug data for real hardware.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=gVtjmaDhF54

Birdfy Hum Bloom, Bath Pro, Feeder Vista: 4K slow-mo hummingbirds, dual-cam birdbath, 6K 360° feeder

Posted by – January 15, 2026
Category: Exclusive videos

Birdfy is turning backyard birdwatching into a small edge-AI workflow: camera-equipped feeders and baths that push visitor alerts to your phone, record clips on landing, and run computer-vision classification so footage arrives tagged by species. The goal is less “random wildlife camera” and more a searchable, shareable stream—portrait framing for close detail, wide view for context, plus a lightweight bird journal inside one app. https://www.birdfy.com/

Hum Bloom is built around hummingbird behavior and optics. The feeder uses a biomimetic flower-style nectar bulb so the camera keeps a clean line of sight, and a hydraulic pump system that keeps nectar available right where the bird feeds. Pair that with 4K capture and slow motion to resolve wingbeats, plus Birdfy’s AI layer aiming at broad hummingbird species coverage (the booth mentioned roughly 150), and the metadata tells you what you saw, not just that “something flew by.”

In the walkthrough, filmed at CES Las Vegas 2026, the conversation shifts from software to mechanics that make better data. Feeder Vista is positioned as a 360° setup with dual ultra-wide lenses and up to 6K video (plus high-frame-rate slow motion), letting you choose panoramic context or a single wide perspective. Instead of gravity-fed seed, an air pump lifts a measured portion from a sealed container to the tray, keeping bulk feed dry while helping the camera get consistent framing on each visit.

Bath Pro applies the same dual-view idea to water: a wide-angle camera to catch group activity, plus a portrait camera for individual detail, with smart capture that prioritizes faces and feathers over background clutter. A solar-powered fountain creates moving-water cues that attract visits, and an optional de-icer/heater keeps water accessible in winter—useful in places where bird activity continues but the basin would otherwise freeze.

The interview also lands on a realistic limit: species recognition is getting strong at scale, but true “this exact individual bird” re-identification is still hard without dependable visual markers. Treat these as connected edge cameras with event-based recording, motion/weight sensing, and ongoing model updates, and the interesting engineering story becomes how lens choice, placement geometry, and feeder mechanics are co-designed to turn backyard visits into clean, low-noise datasets you can enjoy in the moment.

Birdfy Hum Bloom: 4K slow-mo hummingbird feeder with hydraulic nectar pump + AI ID
Birdfy Feeder Vista: 360° 6K dual-lens bird feeder cam with air-pump seed system
Birdfy Bath Pro: dual-camera smart birdbath with solar fountain, de-icer/heater, AI alerts

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=aOt8Ps1XasM

Geniatech 42-inch ePaper whiteboard + AI modules on ARM: NXP/Qualcomm/MediaTek, Kinara, Hailo, MemryX

Posted by – January 15, 2026
Category: Exclusive videos

Geniatech is pushing ePaper beyond “static signage” by turning large-format E Ink into interactive, workflow-aware displays that behave more like tools than screens. The headline demo is a 42-inch ePaper interactive whiteboard designed for classrooms and meeting rooms, pairing a reflective, eye-friendly panel with low-latency handwriting, reusable templates (like weekly reports), and easy sharing via QR code. https://www.geniatech.com/product/42-epaper-interactive-whiteboard/

A nice touch is the “lecture replay” idea: voice recording can be captured alongside the pen strokes so students can re-watch how a solution was built, step by step, without needing a power-hungry LCD. Because it’s reflective ePaper, it avoids backlight glare and keeps heat generation low, which matters when a board is on all day in a bright room. The emphasis here is practical UX: smooth pen feel, fast refresh where it counts, and simple content distribution for real teaching.

For outdoor infrastructure, the same platform shows up as ePaper transit signage: a bus-stop style display with three panels driven from a single control board, built for low power and weather exposure. Reflections and finish come up in the discussion (matte vs glossy), and Geniatech highlights partial-refresh modes to update just the changing regions (like arrival times) instead of doing full-screen flashes all the time. The video is filmed at CES Las Vegas 2026, and the broader theme is “ultra-low power, always-visible info” for public space.
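
To see why partial refresh matters for a power-constrained sign, it helps to count the pixels actually driven per update. A minimal sketch, with panel and region dimensions that are illustrative assumptions rather than Geniatech specs:

```python
# Illustrative pixel budget: partial vs. full e-paper refresh.
# Panel and region sizes are assumptions for illustration only.
panel_w, panel_h = 1440, 2560        # hypothetical signage panel
region_w, region_h = 400, 120        # hypothetical "arrival time" field

fraction = (region_w * region_h) / (panel_w * panel_h)
print(f"partial refresh drives {fraction:.2%} of the panel's pixels")
```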

Smaller ePaper devices round out the story, including digital photo frames that aim for near-zero maintenance by using indoor light energy harvesting (a “room-light charging” approach) so a once-per-day image update can run for extremely long periods without manual charging. There’s also a 28.5-inch color ePaper display pitched as “photo-like,” with both partial and full refresh options depending on whether you’re updating small UI elements or switching the whole layout.

Then Geniatech pivots from displays to compute: embedded edge AI on ARM, with boards and modules spanning NXP, MediaTek, and Qualcomm platforms, built to run inference locally for low latency, offline operation, and better data control. The partner ecosystem matters here: accelerator modules mentioned include Kinara Ara-class NPUs, plus options like Hailo-8, MemryX, and DeepX in Geniatech’s modular lineup, letting integrators match TOPS, power, and cost to the deployment instead of locking into one silicon path.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=klaQaYe4w-Y

VESA DisplayPort Automotive Extensions at CES 2026: CRC ROI metadata, OpenGMSL, quantumdata M42de

Posted by – January 15, 2026
Category: Exclusive videos

VESA’s DisplayPort Automotive Extensions (DP AE) is about treating the in-car display path like a safety-critical data link, not “just pixels.” The idea is to detect corruption, dropped or repeated frames, and even intentional tampering so a rear-view camera, speedometer, or driver instrument cluster can be flagged as invalid instead of silently showing the wrong thing. https://vesa.org/

A key mechanism is functional-safety metadata riding on top of standard DisplayPort: CRC (cyclic redundancy check) signatures plus frame counters and timing checks, computed per region of interest (ROI) so the most critical parts of a screen get the tightest scrutiny. If a CRC mismatch appears, or if a frame freezes or skips, the system can raise a warning immediately rather than leaving the driver to guess what happened.
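
The checking logic itself is easy to sketch. The toy model below is illustrative only (it is not the DP AE secondary-data-packet format): it CRCs one configured ROI per frame and separates corruption from frozen or skipped frames:

```python
import zlib

# Toy CRC-per-ROI check (illustrative; not the DP AE packet format).
# frame: row-major bytes, one byte per pixel for simplicity.
def roi_crc(frame: bytes, stride: int, x: int, y: int, w: int, h: int) -> int:
    crc = 0
    for row in range(y, y + h):
        start = row * stride + x
        crc = zlib.crc32(frame[start:start + w], crc)
    return crc

def judge(ref_crc: int, prev_ctr: int, got_crc: int, got_ctr: int) -> str:
    if got_crc != ref_crc:
        return "CRC mismatch: mark region invalid"
    if got_ctr == prev_ctr:
        return "counter frozen: possible stuck frame"
    if got_ctr != prev_ctr + 1:          # ignoring wraparound in this toy
        return "frame skipped or repeated"
    return "ok"
```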

DP AE also adds security concepts aimed at image integrity and authentication, so attempts to modify rendered content in transit can be detected at the display level. This matters as vehicles add more high-resolution interior panels and camera feeds, while the attack surface grows across GPUs, head units, and external links in a modern car.

The demo is filmed at CES Las Vegas 2026 and ties DP AE to real automotive wiring reality: long cable runs and SerDes links. VESA highlights collaboration with the OpenGMSL ecosystem to carry DisplayPort over longer distances (the video mentions up to 15 m), while keeping end-to-end checks consistent across silicon vendors, Tier-1s, and test tool chains.

On the validation side, Teledyne LeCroy’s quantumdata platform is shown as a practical way to emulate DP AE sources and sinks, inspect the new secondary data packets, and inject faults to prove detection works. Across FPGA setups, software models, and compliance workflows, the takeaway is an ecosystem push: interoperable safety/security profiles that different suppliers can test the same way and ship with fewer integration surprises.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=a01I6FVdY_4

Mobilint Edge AI roadmap ARIES REGULUS multi-LLM PCIe card, TOPS/Watt NPUs, vehicle-grade SoC prep

Posted by – January 15, 2026
Category: Exclusive videos

Mobilint is a Korea-based AI semiconductor company building power-efficient NPUs for on-device and on-premise inference, aiming to shift workloads from cloud GPUs into compact systems with predictable latency and power. In the interview they mention working across the memory and foundry ecosystem (including Samsung and SK hynix) while focusing the demo on ARIES: 80 TOPS at a 25 W TDP, PCI Express Gen4 x8, and 16 GB LPDDR4X (optional 32 GB) with 66.7 GB/s memory bandwidth. https://www.mobilint.com/

On the demo table, ARIES is framed through deployable computer-vision throughput: YOLO-11 object detection plus standard backbones like ResNet-50 and MobileNet, with attention on TOPS per watt rather than peak TOPS alone. The target is industrial PCs and compact edge servers where thermal headroom is tight, so inference stays local while multiple models share one host chip.
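
The efficiency framing is simple arithmetic on the quoted figures; this is the headline ratio only, since delivered efficiency depends on utilization:

```python
# TOPS-per-watt framing from the quoted ARIES figures.
tops, tdp_w = 80, 25
print(f"{tops / tdp_w:.1f} TOPS/W at rated TDP")   # -> 3.2 TOPS/W
```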

A second setup zooms out to a larger PCIe card concept that “crams” four Mobilint M800 accelerators onto one board, intended to run several ~8B-parameter language models concurrently, or scale up via partitioning and batching. That naturally leads to vision-language models: camera frames become embeddings, the text decoder turns them into scene descriptions, and multilingual output becomes a useful interface for inspection or support, recorded on the CES Las Vegas 2026 show floor.

For smaller, always-on endpoints, Mobilint highlights REGULUS, a full SoC that pairs an NPU with Arm Cortex-A53 CPU cores so it can run Linux and execute pre-trained models without a separate host. They cite around 10 TOPS under 3 W for drones, robots, and AI CCTV, then demonstrate high-input video analytics, including a 96-stream fire-risk example where bandwidth, buffering, and scheduling matter as much as raw compute in the field.

The closing theme is vehicle and humanoid readiness: partners want edge AI that is fast and power-bounded, but also engineered for functional safety and security hardening, not just benchmarks. The takeaway is that autonomy progress is a mix of smarter models, tighter sensor-to-actuation pipelines, and consolidating silicon so the platform can scale compute without multiplying energy cost.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=2AUfvdShhgE

CT5 ZONE HSS1 multi-user AI companion: shared earbuds for translation, privacy-tilt camera, 4K

Posted by – January 14, 2026
Category: Exclusive videos

CT5 presents ZONE HSS1 as a screen-free, hands-free “hear what you see / see what you hear” AI interface: a lightweight head-worn module with an 8MP camera, microphone array, and an earbud-based audio path that keeps the user in a conversational loop with a chatbot. Instead of pulling out a phone, the device is meant to capture first-person context (vision + sound) and return spoken guidance, so the interaction feels more like an always-available voice assistant with situational awareness. https://www.ct5.co.kr/

A key design point is multi-user audio: two earbuds can be shared so two people can listen at the same time, which CT5 frames as a practical way to run live translation for face-to-face conversation without everyone staring at screens. The company also positions it as a “smart glasses without glasses” approach, aiming for longer runtime than typical camera-enabled wearables by pushing heavy inference to cloud LMM/LLM endpoints via a paired smartphone.

In the demo filmed at CES Las Vegas 2026, the CEO describes both continuous live video modes and a “memory” style mode that records snapshots over time so the assistant can answer questions based on what the user has been seeing. They highlight model choice (Gemini, OpenAI, and other APIs), and acknowledge that multimodal usage can map to token/API cost even if pricing isn’t enforced during early pilot runs.

Hardware-wise, CT5 quotes about 90 g total weight and more than 20 hours of battery life on a charge, with the weight supported around the head rather than the nose bridge. The device is slated around a US$300 target price, with pilot production underway and an expected product launch window around April, focusing on coaching and real-time assistance use cases that benefit from first-person context on the move.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=8rAyXEhlQNs

Rohde & Schwarz HDMI 2.2 Ultra96 cable compliance: ZNB3000 VNA, crosstalk, skew

Posted by – January 14, 2026
Category: Exclusive videos

Rohde & Schwarz engineer Patrick McKenzie explains what “HDMI 2.2 cable compliance” means at the electrical layer: proving an Ultra96 cable can carry multi-lane differential traffic with controlled loss and low coupling, not just “pass video.” The demo frames compliance as a measurement recipe that turns VNA data into the parameters used for certification. https://www.rohde-schwarz.com/us/solutions/electronics-testing/high-speed-digital-interface-testing/hdmi-testing/hdmi-connector-and-cable-testing_258387.html

At the instrument level, a vector network analyzer (VNA) stimulates the channel and measures what returns across frequency, lane by lane. Because each HDMI lane is differential (P/N on each side), one lane measurement typically needs four VNA ports, repeated across the four data lanes. From those sweeps you derive insertion loss, attenuation-to-crosstalk ratio, differential impedance, inter-pair skew, and mode conversion, and you can apply time-domain transforms (TDR-like views) to pinpoint impedance discontinuities, connector launches, and pair imbalance in the setup.
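
One concrete step in that derivation is the standard single-ended-to-mixed-mode conversion that yields differential insertion loss. A minimal sketch, assuming ports 1/3 are the near-end P/N and ports 2/4 the far-end P/N (check your VNA’s port mapping before reusing the term order):

```python
import numpy as np

# Differential insertion loss (Sdd21) from a 4-port single-ended sweep.
# Assumes ports 1/3 = near-end P/N and ports 2/4 = far-end P/N; other
# port mappings permute the terms. Indices below are 0-based.
def sdd21(S: np.ndarray) -> complex:
    return 0.5 * (S[1, 0] - S[1, 2] - S[3, 0] + S[3, 2])

def insertion_loss_db(S: np.ndarray) -> float:
    return -20.0 * np.log10(abs(sdd21(S)))
```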

A key practical detail is the fixture stack: HDMI plugs into test-point adapters (TPAs) that break the high-speed pairs out to SMA coax so the analyzer can reference clean planes. The example uses a Wilder TPA, while the other lanes (and the eARC lane when relevant) are terminated so the lane under test isn’t distorted by unterminated stubs. This interview was filmed at CES Las Vegas 2026, so you also see how a compliance bench gets operated in a busy show environment.

On the Rohde & Schwarz side, the platform discussed is the R&S ZNB3000 VNA family (released in February 2025), positioned as a faster mid-range instrument with strong dynamic range for small-signal crosstalk work. Options scale from 2 to 4 ports and up to 54 GHz, which is useful when fixtures, connectors, and cable launches push measurements into the tens of GHz. The UI is Windows-based, with FPGA-backed acquisition and DSP behind the screen, and firmware updates landing on a regular cadence.

If you build, qualify, or certify high-speed copper interconnect, the takeaway is how modern HDMI validation is basically signal-integrity engineering packaged into a standard: characterize the channel, quantify lane-to-lane coupling, and verify skew/impedance limits before any eye-diagram margin discussion. With HDMI 2.2 pushing the Ultra96 class up to 96 Gbit/s, VNAs plus well-controlled fixtures become the gatekeepers for interoperability and predictable link behavior in real product work.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=70L1rsnA76g

GetD AI Translation Glasses: 29g open-ear audio, triple-mic ENC, Find My safety

Posted by – January 14, 2026
Category: Exclusive videos

GetD is positioning these as everyday AI translation glasses rather than an AR display: 29 g frames with RX-compatible lenses, photochromic behavior (sunglass outdoors, blue-cut indoors), and a phone-linked stack that can run real-time speech translation plus an AI assistant. Translation is described as using Microsoft Azure Speech Translation, with ChatGPT-style interaction handled through the companion app and Bluetooth audio. https://igetd.com/

On the audio side, the design leans into open-ear speakers with a triple-microphone array for environmental noise cancellation (ENC) so the wearer can still hear the room while getting clearer capture for calls and translation. The demo emphasizes “premium sound” tuning and voice pickup when someone speaks nearby, which matters for face-to-face interpreting and meeting-style use.
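
For intuition about what a multi-mic array buys, here is a toy delay-and-sum beamformer. It illustrates the general steered-pickup idea, not GetD’s actual ENC pipeline:

```python
import numpy as np

# Toy delay-and-sum beamformer for a 3-mic line array (illustrative of
# multi-mic pickup in general, not GetD's ENC implementation).
FS, C, PITCH = 16_000, 343.0, 0.02   # sample rate (Hz), sound speed (m/s), mic spacing (m)

def steer(signals: np.ndarray, angle_deg: float) -> np.ndarray:
    # signals: (mics, samples). np.roll wraps at the ends; fine for a toy.
    out = np.zeros(signals.shape[1])
    for i, sig in enumerate(signals):
        delay_s = i * PITCH * np.sin(np.radians(angle_deg)) / C
        out += np.roll(sig, -int(round(delay_s * FS)))
    return out / len(signals)
```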

A notable feature is Apple Find My integration on iPhone, framed as a safety workflow for seniors: alerts and location sharing can help relatives react quickly if someone falls or needs help. The hardware callout is practical rather than flashy—microprocessor, IMU/G-sensor, battery, speaker modules, and mic placement are shown through a transparent frame variant. This interview was filmed at CES Las Vegas 2026.

GetD is deliberately avoiding an always-on camera and a constant display in this model, arguing that “intelligent but invisible” wearability is the point: comfortable optics, low weight, and fewer privacy concerns in public spaces. They do mention future accessibility ideas like on-lens transcription for cinema or hearing-impaired users, but positioned as a later roadmap rather than the core product.

Commercially, the pitch is a consumer launch path with Kickstarter pricing: an early-bird target around $179 and a retail price above $200, alongside multiple frame colors including the transparent look. If the execution holds up, the technical story is less about AR and more about audio UX: low-latency Bluetooth, multi-mic beamforming/ENC, cloud speech translation, and a mobile AI layer that keeps the phone in the loop without forcing you to stare at it all day.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=xN-JfIstGKk

HDMI 2.2 booth tour at #ces2026 HDMI Licensing Administrator 12K120 video, eARC audio, gaming

Posted by – January 14, 2026
Category: Exclusive videos

HDMI Licensing Administrator walks through what HDMI 2.2 changes at the ecosystem level: the jump to next-gen HDMI Fixed Rate Link signaling and up to 96Gbps, plus how the new Ultra96 cable and labeling are meant to reduce confusion when people buy cables for high-bandwidth sources and displays. https://www.hdmi.org/spec/hdmi2

A big theme is uncompressed video headroom: think 4K at very high refresh (up to 480Hz), 8K60 and 4K240 in full chroma 4:4:4, and higher-tier modes like 12K120, while keeping 10-bit and 12-bit workflows practical for HDR mastering, PC gaming, and pro creation. The booth also frames the Ultra96 feature name as a bandwidth marker (64/80/96Gbps), not just a version sticker.

Shot on the show floor at CES Las Vegas 2026, the tour connects those numbers to the plumbing behind them: tighter compliance requirements, tougher tolerances, and the move toward higher-performance certified components like Category 4 connectors. On the cable side, Ultra96 certification plus scannable labels are positioned as a practical way to verify model and length, especially once early prototypes turn into retail stock.

Audio and latency are treated as first-class engineering problems rather than add-ons. eARC is framed as the day-to-day enabler for Dolby Atmos and DTS:X through soundbars and AVRs, while HDMI 2.2 Latency Indication Protocol (LIP) targets better A/V sync in multi-hop setups where a TV, receiver, and multiple sources all add delay. For gamers, the familiar stack stays central: VRR, ALLM, and Quick Frame Transport, shown alongside high-refresh displays and an HDMI-equipped handheld dock.
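
The problem LIP attacks reduces to delay bookkeeping across the chain. A toy illustration with made-up hop latencies; the real protocol defines how devices discover and report these numbers:

```python
# Toy A/V sync bookkeeping for source -> AVR -> TV (illustrative only;
# LIP defines the actual latency-reporting mechanism, not this code).
video_hops_ms = [0.0, 8.3, 25.0]   # hypothetical per-hop video latency
audio_hops_ms = [0.0, 5.0]         # hypothetical per-hop audio latency

offset = sum(video_hops_ms) - sum(audio_hops_ms)
print(f"delay audio by {offset:.1f} ms to restore lip sync")
```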

The last section widens the lens: HDMI as the default interconnect for streaming sticks, digital signage players, matrix switchers, and creator gear from cameras and drones to tracking follow-me stage cams. There’s also a brief nod to sustainability work like cable material recycling and smaller packaging labels, but the core message is interoperability—higher bandwidth, clearer certification, and fewer surprises when you plug it all together.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

source https://www.youtube.com/watch?v=e5SZVTXrWh0

Teledyne FLIR OEM Tura thermal ADAS: 640×512 FIR, GMSL2/FPD-Link, IP6K9K

Posted by – January 14, 2026
Category: Exclusive videos

Teledyne FLIR OEM outlines a thermal-first approach to vehicle perception, positioning longwave infrared (LWIR, 8–14 µm) as a complement to visible cameras, radar, and lidar when lighting or contrast collapses. Tura is presented as an automotive-qualified thermal camera module that detects heat signatures from pedestrians and animals and feeds an AI pipeline that can label objects in real time, aiming to reduce missed detections at night and in poor weather. https://oem.flir.com/tura/

The conversation stays on automotive constraints: ISO 26262 functional-safety development targeting ASIL-B, AEC-Q qualified components, and a heated IP6K9K enclosure to keep the optical window clear for de-fog and de-ice. The module uses a 640×512 uncooled microbolometer with 12 µm pixel pitch and a shutterless signal path designed to avoid periodic shutter interruptions, which matters for uptime in production vehicles. As a reference point, they mention autonomous fleets (like Zoox) using multiple thermal cameras per vehicle to strengthen perception redundancy in low light and bad weather.

Teledyne frames the benefit as more reaction time: thermal can see roughly four times beyond headlight reach, and a published target is pedestrian/animal detection at around 200 m or more in suitable conditions. Integration details are OEM-oriented: multiple FOV options (24°/42°/70°), selectable frame rates up to 60 Hz, power-over-coax input (6–15 V), and SERDES variants for in-vehicle links (GMSL2 or FPD-Link over FAKRA) carrying MIPI video streams. They also discuss cost targets at automotive volume—potentially near $100 in scale, with nearer-term figures more like $300—plus the role of training data and perception software to accelerate deployment into an ADAS stack.
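
Those range figures can be gut-checked with pixels-on-target arithmetic. A sketch using the 640-pixel width and the 24° FOV option, with an assumed pedestrian width (my assumption, not a Teledyne figure):

```python
import math

# Pixels-on-target estimate: 640 px across a 24 deg horizontal FOV.
# Pedestrian width is an assumed value for illustration.
H_FOV_DEG, H_PIXELS = 24.0, 640
TARGET_W_M, RANGE_M = 0.5, 200.0

ifov_rad = math.radians(H_FOV_DEG) / H_PIXELS           # angle per pixel
target_rad = 2 * math.atan(TARGET_W_M / (2 * RANGE_M))  # angle subtended
print(f"~{target_rad / ifov_rad:.1f} px across the target")   # ~3.8 px
```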

A less obvious theme is validation: Teledyne advocates thermally active pedestrian dummies that are heated to match human IR signatures, making nighttime AEB tests more representative than “cold” mannequins. Filmed at CES Las Vegas 2026 during Pepcom, the interview ties the hardware story to evolving safety expectations (including higher-speed nighttime scenarios referenced in FMVSS 127 discussions) and how repeatable targets could turn thermal performance into an engineering metric.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=zPwkcmlGG50

Sharp Poketomo pocket AI companion robot: ChatGPT LLM, face ID camera, 4 motors

Posted by – January 14, 2026
Category: Exclusive videos

Sharp’s Poketomo is a pocket-sized conversational character built as an always-with-you companion rather than a productivity assistant. In the interview, Sharp explains that it comes from the same team behind Robohon, but shifts the focus from complex movement to lightweight, curated dialogue powered by an LLM (including ChatGPT) plus Sharp’s own intelligence layer for a more guided experience. https://poketomo.com/

A big part of the concept is “carry culture”: people put Poketomo in a bag, take it out for small moments, and even make custom outfits like knitted hats and mini uniforms. That physical personalization matters because it turns the device into something between a character collectible and a social object, where communities form around sharing looks, routines, and short daily interactions that feel more like check-ins than long chat sessions.

Later in the video—filmed at CES Las Vegas 2026—you see Sharp demo Poketomo speaking English, highlighting the idea of “short-form conversation” designed for reflections, encouragement, and mood tracking rather than task automation. The product is intentionally tuned to feel like a companion that builds familiarity over time, with behavior that stays on-theme instead of trying to be an all-purpose assistant.

On the hardware side, Poketomo uses a small camera in the mouth area for owner recognition, enabling more personalized exchanges once it knows who it is talking to. The unit animates with four motors (arms plus head turn and nod) to add nonverbal cues, and it pairs with an app so the same conversation history can continue even when the robot is not in your hand, keeping the “memory” consistent across sessions for that one unit.

Pricing is positioned to be more reachable than earlier character robots: in Japan it’s around ¥39,600 (often described as about $250), plus a monthly Cocoro Plan subscription that scales by usage volume (entry tiers around ¥495/month, with higher tiers up to about ¥1,980/month for larger conversation allowances). Sharp is still treating global rollout as an open question, but the English demo is a clear step toward broader availability later.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=SGe_W5lUpa0

Colorii USB4 v2 80Gbps #CES2026 + Thunderbolt 5 dock: NVMe enclosures, RAID clone, e-ink S.M.A.R.T.

Posted by – January 14, 2026
Category: Exclusive videos

Colorii shows a creator-focused MagSafe-style grip that turns an iPhone into a more “camera-like” rig: a USB-C direct-attach handle with a 2280 M.2 NVMe slot so you can record straight to a removable SSD instead of filling internal storage. The idea is practical for ProRes workflows, because 4K ProRes files get large fast, and the grip adds a tactile record button plus a safer mechanical hold while keeping the drive magnetically locked in place. https://www.colorii.cc/

The grip is tuned for iPhone 15 Pro / 16 Pro sizing, using a smart clamp geometry so it stays rigid and balanced in the hand, and it leaves room for pass-through USB-C power delivery so long takes don’t drain the phone. There’s also a second USB-C port for accessories, letting you stack a wireless mic receiver or compact light while still routing data to the SSD. In practice, this is a mini-rig that keeps audio, power, and storage on one clean cable route.

Colorii also demos a small rear-camera “selfie monitor” that mirrors the phone display so vloggers can frame using the better back camera rather than the front sensor. The current unit is a compact HD panel, with a larger 5-inch touchscreen follow-up that starts to feel like a dedicated on-camera monitor for short-form and live content. Together, the grip + monitor combo is a modular mobile video kit built around USB-C and MagSafe ergonomics.

On the storage side, the booth leans into high-speed external NVMe: a USB4 40Gbps enclosure (real-world throughput typically around 2.5–3.5GB/s depending on SSD and host), plus a more experimental “cyber” chassis targeting 80Gbps-class links such as USB4 v2 / Thunderbolt 5-capable hosts. Thermal design is a recurring theme, with metal housings and a copper plate to spread heat from hot-running PCIe drives, because throttling is often the limiting factor during sustained write load.
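
The gap between “40Gbps” and those real-world GB/s numbers is mostly unit conversion plus protocol and drive overhead. A back-of-envelope sketch, with overhead fractions that are assumptions rather than measurements:

```python
# Raw line rate vs. usable throughput for a "40 Gbps" USB4 link.
# Overhead fractions are illustrative assumptions, not measured values.
raw_gbs = 40 / 8                      # 5.0 GB/s upper bound
for overhead in (0.30, 0.45):         # tunneling/protocol + SSD behavior
    print(f"~{raw_gbs * (1 - overhead):.1f} GB/s usable")   # ~3.5 / ~2.8
```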

Rounding it out are productivity docks: a dual-bay enclosure offering RAID0/RAID1 and offline clone modes via hardware buttons, an e-ink enclosure that surfaces S.M.A.R.T.-style health metrics like temperature, power-on time, and written bytes, and a Thunderbolt 5 docking concept with integrated NVMe bay, DisplayPort/HDMI up to 8K, 2.5GbE, 10Gbps USB-A/USB-C, and SD reader. Filmed at CES Las Vegas 2026, it’s a snapshot of how accessory makers are merging phone capture and desktop-class I/O into compact, field-ready gear.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=smdX51_JRSY

faytech booth tour at CES 2026 Transparent OLED kiosk + Looking Glass HLD, optical bonding, IP69K

Posted by – January 14, 2026
Category: Exclusive videos

faytech’s booth tour is a good snapshot of where “display as interface” is going: not just a panel on a wall, but a complete front end for AI agents, payments, and wayfinding. The standout is a concierge-style station built with partners like Napster and Edo, blending audio (including a dedicated subwoofer/speaker) with showpiece visuals like lenticular-style depth effects and transparent display concepts meant for high-traffic public spaces. https://faytech.com/ces-highlights/

A practical thread running through the demos is how these kiosks are engineered for real deployments, not just show-floor gloss. The China rollout example focuses on self-service ordering plus card payment and voucher printing, which is a useful reminder that UX, peripherals, and compliance matter as much as pixels. Seen in context at CES Las Vegas 2026, the pitch is that interactive signage is becoming an AI-enabled “counter” that can talk, guide, and transact.

On the core product side, faytech leans hard on industrial display fundamentals: optical bonding to improve contrast and readability, plus rugged mechanics for touch reliability and long uptimes. A new USB touchscreen series is shown running from a Mac mini without driver drama, targeting machine-control and shop-floor HMI use where “one cable for signal + touch (and often power)” reduces integration friction. They also show a movable button accessory for haptic feedback, aiming to bring back tactile control where flat glass alone can feel vague.

Ruggedization gets specific with stainless steel outdoor and washdown designs rated up to IP69K, positioned for food processing, healthcare, and other environments that demand high-pressure cleaning and sealed I/O. The same approach extends to semi-outdoor and outdoor signage formats (strip displays for transit, kiosk enclosures, and modular housings), where brightness, sealing, and serviceability tend to decide whether a screen becomes a long-term asset. In other words, the “nice look” is backed by mechanical and environmental detail that helps it survive real work.

The other big theme is 3D and volumetric-style presentation without headsets: faytech pairs transparent OLED kiosk form factors with Looking Glass Hololuminescent Display tech to create a perceived depth volume behind the front surface, tuned for retail, signage, and character-driven content. That plugs neatly into the booth’s AI-avatar ecosystem, including large-format “holo box” builds (like an 86-inch class unit) where animated agents run all day—bandwidth permitting. It’s a coherent stack: durable enclosures + bonded touch + novel optics, built to make AI interfaces feel present in a physical space, not just on a flat screen.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=M4iKU-kycio

Camera Cooling using Frore Systems AirJet Gen2: vibration-free active cooling for FPGA + sensor

Posted by – January 13, 2026
Category: Exclusive videos

This interview looks at an industrial machine-vision camera that integrates Frore Systems AirJet Mini Gen 2 solid-state active cooling to keep an on-board FPGA running at sustained clock rates without resorting to a bulky passive heat sink. The clever mechanical detail is a user-replaceable intake filter, so the camera can stay dustproof and water-resistant while still moving enough air through the enclosure for long runtimes in a factory setting. https://www.froresystems.com/

A key point is back pressure: traditional tiny fans struggle when you add filtration because static pressure collapses, airflow drops, and temperatures rise. AirJet’s pumping approach tolerates higher restriction, so you can design for environmental sealing and serviceability at the same time—more like maintaining an HVAC filter than babying a fan that clogs and slowly derates.
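
The back-pressure point can be made concrete with a toy operating-point model: delivered airflow is where the mover’s pressure curve meets the system impedance curve P = k·Q². All numbers below are illustrative, not Frore or fan datasheet values:

```python
# Toy operating point: flow Q where the mover's pressure curve meets the
# system impedance curve P = k * Q^2. Numbers are illustrative only.
def airflow(p_max: float, q_max: float, k: float) -> float:
    lo, hi = 0.0, q_max
    for _ in range(60):                           # bisection
        q = (lo + hi) / 2
        if k * q * q > p_max * (1 - q / q_max):   # system demands more
            hi = q                                # pressure than available
        else:
            lo = q
    return q

K_OPEN, K_FILTERED = 1.0, 4.0                     # filter raises impedance
for p_max, name in ((50.0, "low-pressure fan"), (200.0, "high-pressure mover")):
    q0 = airflow(p_max, 10.0, K_OPEN)
    q1 = airflow(p_max, 10.0, K_FILTERED)
    print(f"{name}: flow drops {100 * (1 - q1 / q0):.0f}% once filtered")
```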

Thermals matter here not only for compute but for image quality. Keeping the sensor and the FPGA thermally stable reduces thermal drift, dark-current noise, and timing variability in the processing pipeline, which is especially relevant for high-frame-rate 4K/60 class workloads and on-camera ISP, compression, or embedded inference. The footage was filmed at CES Las Vegas 2026, but the use case is very much industrial uptime rather than show-floor spectacle.

The other constraint is vibration. In many vision systems, even small vibrations can translate into blur, calibration drift, or mechanical coupling into the optics and chassis, so a vibration-free cooler is attractive when you’re trying to shrink volume and mass without sacrificing reliability. The replaceable filter also turns “dust equals downtime” into a predictable maintenance task that can be scheduled around the environment and duty cycle.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=cOL5by3QRUM

Frore Systems Qualcomm 6mm 2-in-1 demo: AirJet Mini G2 solid-state cooling at 18W

Posted by – January 13, 2026
Category: Exclusive videos

Frore Systems walks through a Qualcomm 2-in-1 reference design that pushes thin-and-quiet device engineering by treating thermal design as the limiter, not raw compute. The prototype is about 6 mm thick and uses three solid-state AirJet modules to sustain roughly 18 W of TDP, positioned as a meaningful thickness drop versus a 10 mm class tablet while targeting similar sustained performance behavior. https://www.froresystems.com/

The interesting part is how AirJet changes the usual airflow constraints inside sealed or semi-sealed chassis. AirJet Mini G2 is a thin, solid-state active cooling module (roughly a few millimeters thick) that’s designed to move air with relatively high back pressure, which matters when you add restrictive inlet/outlet paths, gaskets, or fine filtration. Frore’s published figures for Mini G2 commonly reference around 7.5 W heat removal per module in a compact footprint, so scaling to multiple modules becomes a practical way to keep clocks up without resorting to thicker heatsinks or small, fast fans that bottleneck under sustained load and clog as dust accumulates.
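
The module count follows directly from those figures; a minimal sizing check:

```python
import math

# Module-count sizing from the figures quoted above: ~7.5 W of heat
# removed per AirJet Mini G2 vs. an 18 W sustained target.
TARGET_W, PER_MODULE_W = 18.0, 7.5
n = math.ceil(TARGET_W / PER_MODULE_W)
print(f"{n} modules -> {n * PER_MODULE_W:.1f} W capacity")  # 3 -> 22.5 W
```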

In this demo, the airflow path is also treated like an industrial reliability problem: the design is shown with dust-proof and water-resistant filtration on both intake and exhaust while still maintaining cooling flow, and the filter concept is meant to be replaceable rather than “clean it later with compressed air.” That framing makes more sense once you remember this was filmed at CES Las Vegas 2026, where a lot of “thin device” demos ignore what happens after months in a backpack, workshop, or fleet deployment, and where servicing matters as much as peak wattage.

Zooming out, Qualcomm reference designs like this are effectively templates for OEMs: they show that a Snapdragon-class 2-in-1 can target sustained performance at higher power budgets inside a very slim chassis, without the acoustic and maintenance tradeoffs that come with conventional active cooling. For AI-leaning workloads that mix CPU, GPU, and NPU utilization—plus continuous video, conferencing, or on-device inference—the payoff is less thermal throttling and more predictable performance per watt, which is ultimately what users notice when a thin system is supposed to behave like a thicker one during real compute.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=iwKqIos9x0Q

Booster Robotics humanoid dev kit: T1/K1 RL balance, Jetson Orin, ROS2

Posted by – January 13, 2026
Category: Exclusive videos

Booster Robotics frames its humanoids as a developer-first platform for education and research, and this interview leans into the “let people build on it” idea rather than a finished home-assistant pitch. The demo focuses on whole-body control: stable walking, quick recovery when pushed, and pre-baked motion clips like the Michael Jackson routine as a practical test for gait timing and joint coordination. https://www.booster.tech/booster-t1/

A key theme is how balance gets trained: reinforcement learning inside simulation, where the robot is exposed to lots of perturbed scenarios until it learns a robust policy for keeping its center of mass and contact forces inside safe limits. Filmed at CES Las Vegas 2026, the booth moment makes that tangible—you can physically shove the robot and watch it absorb the impulse with ankle/hip strategies instead of tipping over.
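
You can feel the shape of that recovery behavior in a much simpler toy: an inverted-pendulum “robot” with a PD ankle torque absorbing a push. This is a hand-tuned illustration of the ankle strategy, not Booster’s learned controller:

```python
import math

# Toy push recovery: inverted pendulum + PD "ankle" torque.
# Hand-tuned illustration only; not Booster's RL policy.
M, L, G, DT = 30.0, 0.8, 9.81, 0.001   # mass, CoM height, gravity, step
KP, KD = 800.0, 120.0                  # PD gains (toy values)

theta, omega = 0.0, 0.9                # push applied as an initial lean rate
for _ in range(4000):                  # simulate 4 s
    torque = -KP * theta - KD * omega  # ankle strategy
    alpha = (M * G * L * math.sin(theta) + torque) / (M * L * L)
    omega += alpha * DT
    theta += omega * DT
print(f"lean after 4 s: {math.degrees(theta):.2f} deg")   # ~0 = recovered
```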

Booster positions the larger T1 as its first model and talks about modularity—swapping end-effectors, adding dexterous hands, and integrating third-party components as the manipulation stack matures. Publicly listed T1 materials commonly emphasize a full developer API, ROS2 compatibility, and simulation tooling, plus an onboard compute tier based on NVIDIA Jetson Orin (often cited up to ~100 TOPS) for perception, state estimation, and onboard inference.

The conversation also hits the gap between “embodied AI” expectations and what ships: autonomous navigation with visual-language models is moving fast, but getting product-level reliability still takes work. For now, Booster’s near-term targets are safer, smaller humanoids for classrooms and labs, with entry configurations discussed around the $6k range and roughly 1 hour 20 minutes of walking on a charge—enough to iterate on locomotion, perception, and early manipulation without claiming it will do laundry next year.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=152oCd0KMBY

ROBROS IGRIS-C Korean humanoid robot at #ces2026 29-joint platform, RGB+depth sensing, IL training

Posted by – January 13, 2026
Category: Exclusive videos

ROBROS presents its compact “C human” humanoid as a developer-oriented research platform, built around indoor-safe locomotion, a friendly industrial design, and a strong focus on dexterous manipulation. A key differentiator is the in-house, tendon-driven hand architecture, where cable routing couples joint motion while still allowing independent finger control, aiming for human-like grasping without bulky linkages. https://robros.co.kr/

In the demo, the robot walks under a safety harness, highlighting stability while the team iterates on hardware and control. Each hand is described as having six degrees of freedom, with tendon actuation visible in the finger mechanism, and the overall build prioritizes compact proportions and a flatter head profile to reduce overhead clearance issues in indoor spaces while keeping the face intentionally simple.

On the sensing and compute stack, the robot uses a 3D-vision setup with two RGB cameras plus a rear depth sensor, paired with a PC and an NVIDIA Jetson Nano for onboard processing. The learning approach is centered on imitation learning: operators teleoperate using a “master hand,” repeat tasks many times to collect demonstrations, and then train models so the robot can reproduce the same task in a similar environment, captured in this interview filmed at CES Las Vegas 2026.
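
At its core, that recipe is behavior cloning: collect teleoperated (observation, action) pairs, then fit a policy that maps one to the other. A minimal sketch on synthetic data, with a linear least-squares “policy” standing in for a real network:

```python
import numpy as np

# Minimal behavior-cloning sketch: fit actions to observations from
# teleoperated demos. Synthetic data; a linear map stands in for a network.
rng = np.random.default_rng(0)
obs = rng.normal(size=(500, 12))                  # demo observations
W_true = rng.normal(size=(12, 6))                 # 6-DoF hand actions
acts = obs @ W_true + 0.01 * rng.normal(size=(500, 6))

W, *_ = np.linalg.lstsq(obs, acts, rcond=None)    # "train" the policy
print("mean action error:", float(np.abs(obs @ W - acts).mean()))
```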

Beyond the single prototype shown, the broader context is Korea’s fast-growing humanoid ecosystem, including a government-backed alliance presence at CES with multiple companies under one pavilion. ROBROS positions itself as a private company targeting research labs, universities, and government-funded institutes that want a full humanoid body for embodied AI experiments, with a team size now above forty and still scaling, pointing to a steady build-out toward real-world evaluation.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=PPJOHjpvHxM

Heybike booth tour at #ces2026 folding fat-tire e-bikes, hydraulic fork, air shock, 624–720Wh packs

Posted by – January 13, 2026
Category: Exclusive videos

Heybike’s booth walkthrough looks at how the brand is segmenting e-mobility into a few clear archetypes: a city-first commuter geometry, compact folding frames for mixed-mode travel, and smaller-wheel “dirt” formats aimed at short, punchy riding. The common thread is practical ergonomics—step-through options, portable fold points, and battery packaging meant to stay out of the way while keeping service access straightforward. https://heybike.com/

A detail that matters more than it sounds is the “dual-sensor” assist logic: being able to swap between cadence sensing (motor responds to crank rotation) and torque sensing (motor scales with rider effort) changes how controllable the bike feels at low speed and on grades. Torque-based pedal assist (PAS) typically delivers smoother ramp-up and can be more energy-efficient because assist tracks real load rather than constant cadence.
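
The control difference is easy to express in code. A simplified sketch of both assist laws; real controllers add filtering, ramp limits, and regulatory power caps:

```python
import math

# Simplified PAS models (illustrative only; real firmware adds filtering,
# ramp limits, and legal power caps).
def cadence_assist(cadence_rpm: float, level_w: float) -> float:
    # Cadence PAS: fixed assist power once the cranks are turning.
    return level_w if cadence_rpm > 10 else 0.0

def torque_assist(rider_torque_nm: float, cadence_rpm: float, gain: float) -> float:
    # Torque PAS: assist scales with measured rider input power.
    rider_power_w = rider_torque_nm * cadence_rpm * 2 * math.pi / 60
    return gain * rider_power_w
```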

In the folding lineup, Mars 3.0 is positioned as an all-terrain, fat-tire folder with full suspension and a 624Wh pack, rated up to about 65 miles of range, plus a torque sensor and quoted 95 N·m torque. Ranger 3.0 Pro pushes farther with a larger 720Wh battery, a stated 90-mile class range, and a full-suspension stack (hydraulic fork up front and a rear air shock). The Helio fold goes the other direction: an 18 kg build meant for stairs, train platforms, and tight storage, where fold geometry and carry weight matter as much as motor output.

The interview also touches on the commuter “Venus” family (including a “hybrid” upgrade described as smoother with more battery headroom) and a compact-wheel dirt model described with 14-inch front and 12-inch rear wheels plus a 50–60 mile claim. Those headline distances are always conditional—wind, temperature, tire pressure, stop-and-go braking, and how much throttle is used can shift Wh-per-km dramatically. Filmed on the CES Las Vegas 2026 show floor, it’s a useful snapshot of what e-bike vendors are optimizing around right now: sensor choice, suspension kinematics, and fold mechanics more than raw top speed.
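
A quick back-of-envelope on the quoted packs shows what the claims assume and how riding conditions move them (the consumption values are scenario assumptions, not Heybike data):

```python
# Implied efficiency behind the range claims, plus harsher scenarios.
# Wh/mi consumption values below are assumptions, not Heybike data.
for pack_wh, claim_mi in ((624, 65), (720, 90)):
    print(f"{pack_wh} Wh / {claim_mi} mi -> {pack_wh / claim_mi:.1f} Wh/mi implied")
for wh_per_mi in (15, 25):        # throttle-heavy / hilly assumptions
    print(f"720 Wh at {wh_per_mi} Wh/mi -> {720 / wh_per_mi:.0f} mi")
```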

On the business side, Heybike frames itself as a direct-to-consumer player with regional warehousing and pickup logistics, while demoing a higher-priced “Polaris” concept positioned as an adventure/commute crossover in the USD 3–4k bracket. The meaningful spec is the whole system—motor tuning, controller limits, battery BMS behavior, and chassis stiffness—which is what determines whether a long-range number feels realistic in daily use.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=xTGP7wEgGzw

Zeroth Jupiter roadmap at #ces2026 from compact M1 companion to full-size teleop/autonomous humanoid

Posted by – January 13, 2026
Category: Exclusive videos

Zeroth is pitching a small, consumer-oriented humanoid platform that prioritizes safety and everyday interaction over raw payload. The M1 prototype shown here stands about 50 cm tall and weighs roughly 2.5 kg, which changes the risk profile compared with full-height biped demos and makes “bump recovery” and self-righting a core behavior rather than a lab trick. https://www.zeroth0.com/

M1 is framed as an indoor companion for kids and older adults: reminders, simple guidance, and light assistance that stays within home-scale constraints. The demo highlights two mobility modes: walking on its own feet, and riding a self-balancing scooter as a wheeled base. That hybrid approach makes sense when you want smooth room-to-room travel without solving every edge case of legged locomotion on carpet and clutter.
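
One plausible way an autonomy layer could arbitrate between the two modes, as a hedged sketch (the inputs, names, and cutoffs are invented for illustration; Zeroth hasn’t published its switching logic):

from enum import Enum

class Mode(Enum):
    WALK = "legged gait"
    RIDE = "self-balancing wheeled base"

def pick_mode(floor_is_flat, clutter_density, distance_m):
    # Wheels win on long, flat, open runs; legs handle thresholds,
    # carpet edges, and cluttered rooms. All thresholds are assumptions.
    if floor_is_flat and clutter_density < 0.2 and distance_m > 3.0:
        return Mode.RIDE
    return Mode.WALK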

The interview was filmed at CES Las Vegas 2026, and it puts M1 next to a second concept called W1 that shifts the same “robot body” idea outdoors. W1 is positioned as a camping follower that hides a heavy power station inside the torso, so the user doesn’t carry a 10 kg-class battery pack by hand, and it can tow a small trailer advertised at around a 50 kg load for food, drinks, and gear.

From a robotics perspective, these products sit at the intersection of embodied AI, human-robot interaction, and practical mechatronics: stable balance control, fall detection, self-righting, and the perception stack needed to follow a person and avoid obstacles. The scooter mode also hints at a modular mobility strategy where the autonomy layer can swap between biped gait and wheeled stabilization depending on the task and environment.
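
The person-following piece, at its simplest, is proportional control on range and bearing to a tracked target. A minimal sketch, assuming invented gains and limits (this is not Zeroth’s stack):

def follow_step(range_m, bearing_rad, target_range_m=1.5,
                k_lin=0.8, k_ang=1.5, v_max=0.6, w_max=1.0):
    # Return (linear, angular) velocity commands: close the range gap,
    # steer toward the person, and saturate both commands.
    v = max(-v_max, min(v_max, k_lin * (range_m - target_range_m)))
    w = max(-w_max, min(w_max, k_ang * bearing_rad))
    if abs(bearing_rad) > 1.0:  # person far off-axis: turn in place first
        v = 0.0
    return v, w

Everything hard lives upstream of this loop: keeping the range and bearing estimates stable while the person weaves through furniture and other people.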

Zeroth also teases a larger “Jupiter” humanoid as the longer-term path toward home chores like fetching, wiping surfaces, vacuuming, and eventually kitchen work, which will demand better manipulation, safety envelopes, and reliability than a booth demo. In the near term, the story is about right-sizing the robot for real homes and pushing toward early shipment readiness rather than research-only prototypes.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=1iHI1RMmnL0

ZWHAND dexterous robot hand: 17–20 DOF, e-skin tactile sensing, micro actuators

Posted by – January 13, 2026
Category: Exclusive videos

ZWHAND brings a dexterous robotic hand that’s built around a micro drive approach: the motor, reducer, and control electronics are treated as a single module so each joint can be packaged tightly and still deliver repeatable torque and position control. In the booth demo, a simple UI mirrors finger poses, while an on-screen readout visualizes fingertip pressure as the hand detects touch, making the sensing layer as visible as the mechanics. https://www.zwhand.com/en/

On camera, the showcased unit is discussed as a 17 degree-of-freedom build, with a 20 active DOF variant referenced for richer thumb and finger articulation. Filmed at CES Las Vegas 2026, the conversation stays practical: how many micro actuators you can actually fit into a human-scale envelope, how a high-performance driver board and PCBA layout affect heat and cabling, and why the communication interface often determines whether a hand can be swapped onto a humanoid in the field.

Tactile sensing is the other half of the story. ZWHAND points to flexible e-skin and high-sensitivity pressure sensing to move beyond open-loop “close the fingers” grasps, toward force-aware manipulation that can detect slip, modulate grip strength, and support safer human–robot interaction. Even with a basic visualization, you can see the control stack implied here: per-finger calibration, force estimation, impedance control, and learned grasp policies that fuse touch with vision for stable grip.
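
The closed loop implied by that pressure readout can be sketched in a few lines. Everything here is an assumption for illustration (sensor scaling, thresholds, step size); ZWHAND’s actual control stack isn’t public:

def grip_update(pressure_now, pressure_prev, grip_force,
                slip_drop=0.15,   # fractional pressure drop that flags slip
                step=0.5,         # N added per slip event (assumed)
                force_max=15.0):
    # One finger's next grip-force setpoint: a sudden drop in fingertip
    # pressure is read as incipient slip, so squeeze slightly harder,
    # up to a safety ceiling.
    if pressure_prev > 0 and (pressure_prev - pressure_now) / pressure_prev > slip_drop:
        grip_force = min(grip_force + step, force_max)
    return grip_force

In a real hand this loop would run per finger at a high rate, sandwiched between low-level impedance control and the higher-level learned grasp policies mentioned above.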

The team also calls out a common limitation in dexterous hands: water exposure. For tasks like dishwashing, the blocker is usually sealing, corrosion resistance, and realistic IP ratings rather than DOF alone, so “loading dishes into a dishwasher” is more plausible than immersion. The booth shows a progression across generations, trending toward smaller form factor and longer duty life, with public materials citing 10,000+ hours as a target for continuous operation in controlled settings like a lab.

The bigger takeaway is why hands remain a bottleneck for embodied AI: multi-contact physics, compliance tuning, sensor noise, and the need to coordinate many joints under tight power, weight, and reliability limits. A 17–20 DOF design sits in a pragmatic zone where you can cover most everyday grasps without turning the end-effector into a constant maintenance project. As interfaces and tactile data pipelines mature, these hands start to look less like a demo prop and more like a usable device.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=Lg5S4tqBf9Y