Colorii USB4 v2 80Gbps #CES2026 + Thunderbolt 5 dock: NVMe enclosures, RAID clone, e-ink S.M.A.R.T.

Posted by – January 14, 2026
Category: Exclusive videos

Colorii shows a creator-focused MagSafe-style grip that turns an iPhone into a more “camera-like” rig: a USB-C direct-attach handle with a 2280 M.2 NVMe slot so you can record straight to a removable SSD instead of filling internal storage. The idea is practical for ProRes workflows, because 4K ProRes files get large fast, and the grip adds a tactile record button plus a safer mechanical hold while keeping the drive magnetically locked in place. https://www.colorii.cc/

The grip is tuned for iPhone 15 Pro / 16 Pro sizing, using a smart clamp geometry so it stays rigid and balanced in the hand, and it leaves room for pass-through USB-C power delivery so long takes don’t drain the phone. There’s also a second USB-C port for accessories, letting you stack a wireless mic receiver or compact light while still routing data to the SSD. In practice, this is a mini-rig that keeps audio, power, and storage on one clean cable route.

Colorii also demos a small rear-camera “selfie monitor” that mirrors the phone display so vloggers can frame using the better back camera rather than the front sensor. The current unit is a compact HD panel, with a larger 5-inch touchscreen follow-up that starts to feel like a dedicated on-camera monitor for short-form and live content. Together, the grip + monitor combo is a modular mobile video kit built around USB-C and MagSafe ergonomics.

On the storage side, the booth leans into high-speed external NVMe: a USB4 40Gbps enclosure (real-world throughput typically around 2.5–3.5GB/s depending on SSD and host), plus a more experimental “cyber” chassis targeting 80Gbps-class links such as USB4 v2 / Thunderbolt 5-capable hosts. Thermal design is a recurring theme, with metal housings and a copper plate to spread heat from hot-running PCIe drives, because throttling is often the limiting factor during sustained write load.
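To make those throughput numbers concrete, here is a rough offload-time estimate for a ProRes shoot; the ~1400 Mb/s bitrate for 4K60 ProRes 422 HQ and the 70% link-efficiency factor are illustrative assumptions, not Colorii figures:

```python
def offload_minutes(shoot_minutes, bitrate_mbps, link_gbps, efficiency=0.7):
    """Rough time to copy a recording off an external NVMe SSD.

    bitrate_mbps: recording bitrate (roughly 1400 Mb/s for 4K60 ProRes 422 HQ,
    an approximate figure). efficiency: fraction of the nominal link rate
    actually sustained, since enclosures rarely hit the headline number.
    """
    size_gb = shoot_minutes * 60 * bitrate_mbps / 8 / 1000   # gigabytes recorded
    throughput_gbps = link_gbps * efficiency                  # effective link rate
    return size_gb * 8 / throughput_gbps / 60                 # minutes to transfer

# 30 min of ~1400 Mb/s footage over a USB4 40 Gbps link at 70% efficiency
print(round(offload_minutes(30, 1400, 40), 1))
# Same shoot over an 80 Gbps-class link: the copy time halves
print(round(offload_minutes(30, 1400, 80), 2))
```

On these assumptions a half-hour take is roughly 315 GB, which is why an 80 Gbps-class link matters more for offload than for recording itself.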

Rounding it out are productivity docks: a dual-bay enclosure offering RAID0/RAID1 and offline clone modes via hardware buttons, an e-ink enclosure that surfaces S.M.A.R.T.-style health metrics like temperature, power-on time, and written bytes, and a Thunderbolt 5 docking concept with integrated NVMe bay, DisplayPort/HDMI up to 8K, 2.5GbE, 10Gbps USB-A/USB-C, and SD reader. Filmed at CES Las Vegas 2026, it’s a snapshot of how accessory makers are merging phone capture and desktop-class I/O into compact, field-ready gear.

I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8); watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=smdX51_JRSY

faytech booth tour at CES 2026: Transparent OLED kiosk + Looking Glass HLD, optical bonding, IP69K

Posted by – January 14, 2026
Category: Exclusive videos

faytech’s booth tour is a good snapshot of where “display as interface” is going: not just a panel on a wall, but a complete front end for AI agents, payments, and wayfinding. The standout is a concierge-style station built with partners like Napster and Edo, blending audio (including a dedicated subwoofer/speaker) with showpiece visuals like lenticular-style depth effects and transparent display concepts meant for high-traffic public spaces. https://faytech.com/ces-highlights/

A practical thread running through the demos is how these kiosks are engineered for real deployments, not just show-floor gloss. The China rollout example focuses on self-service ordering plus card payment and voucher printing, which is a useful reminder that UX, peripherals, and compliance matter as much as pixels. Seen in context at CES Las Vegas 2026, the pitch is that interactive signage is becoming an AI-enabled “counter” that can talk, guide, and transact.

On the core product side, faytech leans hard on industrial display fundamentals: optical bonding to improve contrast and readability, plus rugged mechanics for touch reliability and long uptimes. A new USB touchscreen series is shown running from a Mac mini without driver drama, targeting machine-control and shop-floor HMI use where “one cable for signal + touch (and often power)” reduces integration friction. They also show a movable button accessory for haptic feedback, aiming to bring back tactile control where flat glass alone can feel vague.

Ruggedization gets specific with stainless steel outdoor and washdown designs rated up to IP69K, positioned for food processing, healthcare, and other environments that demand high-pressure cleaning and sealed I/O. The same approach extends to semi-outdoor and outdoor signage formats (strip displays for transit, kiosk enclosures, and modular housings), where brightness, sealing, and serviceability tend to decide whether a screen becomes a long-term asset. In other words, the “nice look” is backed by mechanical and environmental detail that helps it survive real work.

The other big theme is 3D and volumetric-style presentation without headsets: faytech pairs transparent OLED kiosk form factors with Looking Glass Hololuminescent Display tech to create a perceived depth volume behind the front surface, tuned for retail, signage, and character-driven content. That plugs neatly into the booth’s AI-avatar ecosystem, including large-format “holo box” builds (like an 86-inch class unit) where animated agents run all day—bandwidth permitting. It’s a coherent stack: durable enclosures + bonded touch + novel optics, built to make AI interfaces feel present in a physical space, not just on a flat screen.

source https://www.youtube.com/watch?v=M4iKU-kycio

Camera Cooling using Frore Systems AirJet Gen2: vibration-free active cooling for FPGA + sensor

Posted by – January 13, 2026
Category: Exclusive videos

This interview looks at an industrial machine-vision camera that integrates Frore Systems AirJet Mini Gen 2 solid-state active cooling to keep an on-board FPGA running at sustained clock rates without resorting to a bulky passive heat sink. The clever mechanical detail is a user-replaceable intake filter, so the camera can stay dustproof and water-resistant while still moving enough air through the enclosure for long runtimes in a factory setting. https://www.froresystems.com/

A key point is back pressure: traditional tiny fans struggle when you add filtration because static pressure collapses, airflow drops, and temperatures rise. AirJet’s pumping approach tolerates higher restriction, so you can design for environmental sealing and serviceability at the same time—more like maintaining an HVAC filter than babying a fan that clogs and slowly derates.
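The back-pressure argument can be sketched numerically: model the fan as a simple quadratic pressure curve and the chassis plus filter as a quadratic resistance, then solve for the operating point where they meet. Both curve models and all numbers below are illustrative assumptions, not Frore or vendor data:

```python
import math

def operating_airflow(p_max, q_max, k_system):
    """Airflow where a simple fan curve meets a quadratic system-resistance curve.

    Fan model (assumed):    P = p_max * (1 - (Q/q_max)^2)
    System model (assumed): P = k_system * Q^2   (higher k = more restriction)
    Setting the two equal and solving for Q gives a closed form.
    """
    return math.sqrt(p_max / (k_system + p_max / q_max**2))

# Hypothetical small fan: 50 Pa max static pressure, 10 CFM free-air flow
open_chassis = operating_airflow(50, 10, k_system=0.5)   # light restriction
filtered     = operating_airflow(50, 10, k_system=5.0)   # fine filter added
print(round(open_chassis, 2), round(filtered, 2))
```

In this toy model, adding the filter drops airflow from about 7.1 to about 3.0 CFM, which is the "static pressure collapses, airflow drops" failure mode; a pump that holds pressure under restriction shifts that operating point back up.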

Thermals matter here not only for compute but for image quality. Keeping the sensor and the FPGA thermally stable reduces thermal drift, dark-current noise, and timing variability in the processing pipeline, which is especially relevant for high-frame-rate 4K/60 class workloads and on-camera ISP, compression, or embedded inference. The footage was filmed at CES Las Vegas 2026, but the use case is very much industrial uptime rather than show-floor spectacle.

The other constraint is vibration. In many vision systems, even small vibrations can translate into blur, calibration drift, or mechanical coupling into the optics and chassis, so a vibration-free cooler is attractive when you’re trying to shrink volume and mass without sacrificing reliability. The replaceable filter also turns “dust equals downtime” into a predictable maintenance task that can be scheduled around the environment and duty cycle, year after year if conditions allow.

source https://www.youtube.com/watch?v=cOL5by3QRUM

Frore Systems Qualcomm 6mm 2-in-1 demo: AirJet Mini G2 solid-state cooling at 18W

Posted by – January 13, 2026
Category: Exclusive videos

Frore Systems walks through a Qualcomm 2-in-1 reference design that pushes thin-and-quiet device engineering by treating thermal design as the limiter, not raw compute. The prototype is about 6 mm thick and uses three solid-state AirJet modules to sustain roughly 18 W of TDP, positioned as a meaningful thickness drop versus a 10 mm class tablet while targeting similar sustained performance behavior. https://www.froresystems.com/

The interesting part is how AirJet changes the usual airflow constraints inside sealed or semi-sealed chassis. AirJet Mini G2 is a thin, solid-state active cooling module (roughly a few millimeters thick) that’s designed to move air with relatively high back pressure, which matters when you add restrictive inlet/outlet paths, gaskets, or fine filtration. Frore’s published figures for Mini G2 commonly reference around 7.5 W heat removal per module in a compact footprint, so scaling to multiple modules becomes a practical way to keep clocks up without resorting to thicker heatsinks or small, fast fans that become the bottleneck under load and dust.
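The module math implied here is easy to sketch: the ~7.5 W per-module figure is from Frore's published Mini G2 materials, while the 10% headroom margin is an assumption of this sketch:

```python
import math

def modules_needed(tdp_watts, per_module_watts=7.5, margin=0.1):
    """How many cooling modules to cover a TDP with some headroom.

    per_module_watts uses the ~7.5 W figure Frore cites for AirJet Mini G2;
    margin adds assumed headroom for hotspots and ambient/altitude derating.
    """
    return math.ceil(tdp_watts * (1 + margin) / per_module_watts)

# An 18 W sustained TDP lands on three modules, matching the demo's layout
print(modules_needed(18))
```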

In this demo, the airflow path is also treated like an industrial reliability problem: the design is shown with dust-proof and water-resistant filtration on both intake and exhaust while still maintaining cooling flow, and the filter concept is meant to be replaceable rather than “clean it later with compressed air.” That framing makes more sense once you remember this was filmed at CES Las Vegas 2026, where a lot of “thin device” demos ignore what happens after months in a backpack, workshop, or fleet deployment, and servicing matters as much as peak wattage when the device must stay consistent and serviceable.

Zooming out, Qualcomm reference designs like this are effectively templates for OEMs: they show that a Snapdragon-class 2-in-1 can target sustained performance at higher power budgets inside a very slim chassis, without the acoustic and maintenance tradeoffs that come with conventional active cooling. For AI-leaning workloads that mix CPU, GPU, and NPU utilization—plus continuous video, conferencing, or on-device inference—the payoff is less thermal throttling and more predictable performance per watt, which is ultimately what users notice when a thin system is supposed to behave like a thicker one during real compute.

source https://www.youtube.com/watch?v=iwKqIos9x0Q

Booster Robotics humanoid dev kit: T1/K1 RL balance, Jetson Orin, ROS2

Posted by – January 13, 2026
Category: Exclusive videos

Booster Robotics frames its humanoids as a developer-first platform for education and research, and this interview leans into the “let people build on it” idea rather than a finished home-assistant pitch. The demo focuses on whole-body control: stable walking, quick recovery when pushed, and pre-baked motion clips like the Michael Jackson routine as a practical test for gait timing and joint coordination. https://www.booster.tech/booster-t1/

A key theme is how balance gets trained: reinforcement learning inside simulation, where the robot is exposed to lots of perturbed scenarios until it learns a robust policy for keeping its center of mass and contact forces inside safe limits. Filmed at CES Las Vegas 2026, the booth moment makes that tangible—you can physically shove the robot and watch it absorb the impulse with ankle/hip strategies instead of tipping over.
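As a toy illustration of the perturb-in-simulation idea (emphatically not Booster's actual pipeline), here is a 1-D balance model where a hand-tuned PD controller stands in for the learned policy and random impulses stand in for booth shoves; in a real RL setup the gains would be a neural network trained across thousands of such randomized episodes:

```python
import random

def simulate_push_recovery(kp, kd, pushes, dt=0.02, steps=200):
    """Toy 1-D inverted-pendulum balance with PD 'policy' torque and pushes.

    pushes maps a timestep to a velocity impulse (the shove). The robot
    'falls' if the angle exceeds about 35 degrees; otherwise it recovers.
    """
    theta, omega = 0.0, 0.0
    for step in range(steps):
        if step in pushes:                      # perturbation: impulse on velocity
            omega += pushes[step]
        torque = -kp * theta - kd * omega       # stabilizing feedback ("policy")
        omega += (9.81 * theta + torque) * dt   # gravity destabilizes, torque fights
        theta += omega * dt
        if abs(theta) > 0.6:                    # fell over (~35 degrees)
            return False
    return True

random.seed(0)
# Randomized training-style episodes: shove timing and magnitude vary per episode
episodes = [{random.randrange(10, 190): random.uniform(-1.0, 1.0)} for _ in range(20)]
survived = sum(simulate_push_recovery(kp=60.0, kd=12.0, pushes=ep) for ep in episodes)
print(survived, "of", len(episodes), "episodes recovered")
```

The point of the randomization is exactly what the booth demo shows: a policy tuned against many perturbed scenarios keeps the state inside safe limits, while an untuned one (try kp=0) tips over.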

Booster positions the larger T1 as its first model and talks about modularity—swapping end-effectors, adding dexterous hands, and integrating third-party components as the manipulation stack matures. Publicly listed T1 materials commonly emphasize a full developer API, ROS2 compatibility, and simulation tooling, plus an onboard compute tier based on NVIDIA Jetson Orin (often cited up to ~100 TOPS) for perception, state estimation, and onboard inference.

The conversation also hits the gap between “embodied AI” expectations and what ships: autonomous navigation with visual-language models is moving fast, but getting product-level reliability still takes work. For now, Booster’s near-term targets are safer, smaller humanoids for classrooms and labs, with entry configurations discussed around the $6k range and roughly 1h20 of walking on a charge—enough to iterate on locomotion, perception, and early manipulation without claiming it will do laundry next year.

source https://www.youtube.com/watch?v=152oCd0KMBY

ROBROS IGRIS-C Korean humanoid robot at #ces2026 29-joint platform, RGB+depth sensing, IL training

Posted by – January 13, 2026
Category: Exclusive videos

ROBROS presents its compact IGRIS-C humanoid as a developer-oriented research platform, built around indoor-safe locomotion, a friendly industrial design, and a strong focus on dexterous manipulation. A key differentiator is the in-house, tendon-driven hand architecture, where cable routing couples joint motion while still allowing independent finger control, aiming for human-like grasping without bulky linkages. https://robros.co.kr/

In the demo, the robot walks under a safety harness, highlighting stability while the team iterates on hardware and control. Each hand is described as having six degrees of freedom, with tendon actuation visible in the finger mechanism, and the overall build prioritizes compact proportions and a flatter head profile to reduce overhead clearance issues in indoor spaces while keeping the face intentionally simple.

On the sensing and compute stack, the robot uses a 3D-vision setup with two RGB cameras plus a rear depth sensor, paired with a PC and an NVIDIA Jetson Nano for onboard processing. The learning approach is centered on imitation learning: operators teleoperate using a “master hand,” repeat tasks many times to collect demonstrations, and then train models so the robot can reproduce the same task in a similar environment, as captured in this interview filmed at CES Las Vegas 2026.
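The demonstrate-then-train loop reduces to a simple core: collect (observation, action) pairs from teleoperation and fit a policy that minimizes imitation error. The 1-D linear least-squares fit below is a stand-in for the neural policies real systems train on full joint states and images; all numbers are invented:

```python
def behavior_clone(demos):
    """Fit a 1-D linear policy action = w * observation to teleop demonstrations.

    A toy stand-in for the imitation-learning loop described above: closed-form
    least squares for the gain w that best maps observations to demonstrated
    actions. Real pipelines swap this for a neural network and gradient descent.
    """
    num = sum(obs * act for obs, act in demos)
    den = sum(obs * obs for obs, act in demos)
    return num / den

# Invented 'demonstrations': operator opens gripper in proportion to object
# size, with a little human noise in each repetition of the task
demos = [(0.2, 0.41), (0.5, 0.99), (0.8, 1.62), (1.0, 2.01)]
w = behavior_clone(demos)
print(round(w, 2))         # learned gain recovered from noisy demos
print(round(w * 0.6, 2))   # policy rollout on a new, unseen observation
```

Repeating the task many times, as the team describes, is what averages out the operator noise so the fitted policy generalizes to similar environments.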

Beyond the single prototype shown, the broader context is Korea’s fast-growing humanoid ecosystem, including a government-backed alliance presence at CES with multiple companies under one pavilion. ROBROS positions itself as a private company targeting research labs, universities, and government-funded institutes that want a full humanoid body for embodied AI experiments, with a team size now above forty and still scaling, pointing to a steady build-out toward real-world evaluation.

source https://www.youtube.com/watch?v=PPJOHjpvHxM

Heybike booth tour at #ces2026 folding fat-tire e-bikes, hydraulic fork, air shock, 624–720Wh packs

Posted by – January 13, 2026
Category: Exclusive videos

Heybike’s booth walkthrough looks at how the brand is segmenting e-mobility into a few clear archetypes: a city-first commuter geometry, compact folding frames for mixed-mode travel, and smaller-wheel “dirt” formats aimed at short, punchy riding. The common thread is practical ergonomics—step-through options, portable fold points, and battery packaging meant to stay out of the way while keeping service access straightforward. https://heybike.com/

A detail that matters more than it sounds is the “dual-sensor” assist logic: being able to swap between cadence sensing (motor responds to crank rotation) and torque sensing (motor scales with rider effort) changes how controllable the bike feels at low speed and on grades. Torque-based pedal assist (PAS) typically delivers smoother ramp-up and can be more energy-efficient because assist tracks real load rather than constant cadence.
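The difference between the two sensing modes can be sketched as two assist laws; the models and numbers below are illustrative assumptions, not Heybike's firmware:

```python
def assist_watts(mode, cadence_rpm, rider_torque_nm, level=0.5, max_watts=500):
    """Contrast the two pedal-assist strategies described above.

    Cadence mode (assumed model): a fixed assist level once the cranks turn.
    Torque mode (assumed model): assist proportional to rider effort, where
    rider power = torque * crank angular velocity, scaled by the assist level.
    """
    if cadence_rpm <= 0:
        return 0.0                                 # no pedaling, no assist
    if mode == "cadence":
        return max_watts * level                   # flat assist regardless of effort
    rider_watts = rider_torque_nm * cadence_rpm * 2 * 3.14159 / 60
    return min(max_watts, rider_watts * (2 * level))   # tracks real load

# Soft pedaling at 60 rpm and 10 N*m: cadence mode still dumps full assist,
# while torque mode scales down, which is why it feels smoother at low speed
print(assist_watts("cadence", 60, 10))
print(round(assist_watts("torque", 60, 10)))
```

The efficiency point follows from the same model: when assist tracks rider load instead of crank rotation, the motor stops spending watts in situations where the rider is barely working.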

In the folding lineup, Mars 3.0 is positioned as an all-terrain, fat-tire folder with full suspension and a 624Wh pack, rated up to about 65 miles of range, plus a torque sensor and quoted 95 N·m torque. Ranger 3.0 Pro pushes farther with a larger 720Wh battery, a stated 90-mile class range, and a full-suspension stack (hydraulic fork up front and a rear air shock). The Helio fold goes the other direction: an 18 kg build meant for stairs, train platforms, and tight storage, where fold geometry and carry weight matter as much as motor output.

The interview also touches the commuter “Venus” family (including a “hybrid” upgrade described as smoother with more battery headroom) and a compact-wheel dirt model described with 14-inch front and 12-inch rear wheels plus a 50–60 mile claim. Those headline distances are always conditional—wind, temperature, tire pressure, stop-and-go braking, and how much throttle is used can shift Wh-per-km dramatically. Filmed on the CES Las Vegas 2026 show floor, it’s a useful snapshot of what e-bike vendors are optimizing around right now: sensor choice, suspension kinematics, and fold mechanics more than raw top speed.
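A back-of-envelope range model makes the conditionality concrete; the Wh-per-km figures and the 10% reserve are rough assumptions, not vendor data:

```python
def estimated_range_km(pack_wh, wh_per_km, reserve_frac=0.1):
    """Back-of-envelope e-bike range from pack size and real-world consumption.

    wh_per_km swings widely with assist level, terrain, wind, tire pressure,
    and throttle use; something like 8-20 Wh/km is a plausible commuter band
    (assumed here). reserve_frac keeps a buffer so the BMS never hits empty.
    """
    return pack_wh * (1 - reserve_frac) / wh_per_km

# Ranger 3.0 Pro's 720 Wh pack under easy vs hard conditions
print(round(estimated_range_km(720, 8)))    # light assist, flat terrain
print(round(estimated_range_km(720, 20)))   # heavy assist, hills, headwind
```

On these assumptions the same pack spans roughly 32 to 81 km, which shows why a headline 90-mile figure implies a very gentle consumption profile.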

On the business side, Heybike frames itself as a direct-to-consumer player with regional warehousing and pickup logistics, while demoing a higher-priced “Polaris” concept positioned as an adventure/commute crossover in the USD 3–4k bracket. The meaningful spec is the whole system—motor tuning, controller limits, battery BMS behavior, and chassis stiffness—which is what determines whether a long-range number feels realistic in daily use.

source https://www.youtube.com/watch?v=xTGP7wEgGzw

Zeroth Jupiter roadmap at #ces2026 from compact M1 companion to full-size teleop/autonomous humanoid

Posted by – January 13, 2026
Category: Exclusive videos

Zeroth is pitching a small, consumer-oriented humanoid platform that prioritizes safety and everyday interaction over raw payload. The M1 prototype shown here stands about 50 cm tall and weighs roughly 2.5 kg, which changes the risk profile compared with full-height biped demos and makes “bump recovery” and self-righting a core behavior rather than a lab trick. https://www.zeroth0.com/

M1 is framed as an indoor companion for kids and older adults: reminders, simple guidance, and light assistance that stays within home-scale constraints. The demo highlights two mobility modes: walking on its own feet, and riding a self-balancing scooter as a wheeled base, which is a hybrid approach when you want smooth room-to-room travel without solving every edge case of legged locomotion on carpet and clutter.

The interview was filmed at CES Las Vegas 2026, and it puts M1 next to a second concept called W1 that shifts the same “robot body” idea outdoors. W1 is positioned as a camping follower that hides a heavy power station inside the torso so the user doesn’t carry a 10 kg class battery pack by hand, and it can tow a small trailer advertised around a 50 kg load for food, drinks, and gear.

From a robotics perspective, these products sit at the intersection of embodied AI, human-robot interaction, and practical mechatronics: stable balance control, fall detection, self-righting, and the perception stack needed to follow a person and avoid obstacles. The scooter mode also hints at a modular mobility strategy where the autonomy layer can swap between biped gait and wheeled stabilization depending on the task and environment.

Zeroth also teases a larger “Jupiter” humanoid as the longer-term path toward home chores like fetching, wiping surfaces, vacuuming, and eventually kitchen work, which will demand better manipulation, safety envelopes, and reliability than a booth demo. In the near term, the story is about right-sizing the robot for real homes and pushing toward shipment readiness rather than research-only prototypes.

source https://www.youtube.com/watch?v=1iHI1RMmnL0

ZWHAND dexterous robot hand: 17–20 DOF, e-skin tactile sensing, micro actuators

Posted by – January 13, 2026
Category: Exclusive videos

ZWHAND brings a dexterous robotic hand that’s built around a micro drive approach: the motor, reducer, and control electronics are treated as a single module so each joint can be packaged tightly and still deliver repeatable torque and position control. In the booth demo, a simple UI mirrors finger poses, while an on-screen readout visualizes fingertip pressure as the hand detects touch, making the sensing layer as visible as the mechanics. https://www.zwhand.com/en/

On camera, the showcased unit is discussed as a 17 degree-of-freedom build, with a 20 active DOF variant referenced for richer thumb and finger articulation. Filmed at CES Las Vegas 2026, the conversation stays practical: how many micro actuators you can actually fit into a human-scale envelope, how a high-performance driver board and PCBA layout affect heat and cabling, and why the communication interface often determines whether a hand can be swapped onto a humanoid in the field.

Tactile sensing is the other half of the story. ZWHAND points to flexible e-skin and high-sensitivity pressure sensing to move beyond open-loop “close the fingers” grasps, toward force-aware manipulation that can detect slip, modulate grip strength, and support safer human–robot interaction. Even with a basic visualization, you can see the control stack implied here: per-finger calibration, force estimation, impedance control, and learned grasp policies that fuse touch with vision for stable grip.
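A minimal sketch of the force-aware idea, assuming a simple Coulomb-friction slip condition (this is not ZWHAND's actual control law, and all numbers are invented):

```python
def grip_force(normal_n, tangential_n, mu=0.5, margin=1.3, max_n=40.0):
    """Slip-margin grip controller sketch for a tactile fingertip.

    Idea: the e-skin estimates the tangential (load) force on the fingertip;
    to prevent slip, the normal (squeeze) force must exceed tangential / mu,
    and a margin keeps the grasp away from the friction-cone boundary
    without crushing the object. max_n caps force for safe interaction.
    """
    required = tangential_n / mu * margin        # minimum squeeze to avoid slip
    return min(max_n, max(required, normal_n))   # only tighten, capped for safety

# Object starts to slide: sensed tangential load rises from 2 N to 6 N,
# and the controller tightens the grip proportionally instead of crushing
print(grip_force(normal_n=5.0, tangential_n=2.0))
print(grip_force(normal_n=5.0, tangential_n=6.0))
```

This is the smallest version of "detect slip, modulate grip strength": the same structure scales up to per-finger calibration and impedance control once real sensor noise enters the loop.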

The team also calls out a common limitation in dexterous hands: water exposure. For tasks like dishwashing, the blocker is usually sealing, corrosion resistance, and realistic IP ratings rather than DOF alone, so “loading dishes into a dishwasher” is more plausible than immersion. The booth shows a progression across generations, trending toward smaller form factor and longer duty life, with public materials citing 10,000+ hours as a target for continuous operation in controlled settings like a lab.

The bigger takeaway is why hands remain a bottleneck for embodied AI: multi-contact physics, compliance tuning, sensor noise, and the need to coordinate many joints under tight power, weight, and reliability limits. A 17–20 DOF design sits in a pragmatic zone where you can cover most everyday grasps without turning the end-effector into a constant maintenance project. As interfaces and tactile data pipelines mature, these hands start to look less like a demo prop and more like a usable device.

source https://www.youtube.com/watch?v=Lg5S4tqBf9Y

Aurzen portable cinema lineup: ZIP pocket DLP + D1 MAX 1000 ANSI Google TV + Roku D1R

Posted by – January 13, 2026
Category: Exclusive videos

Aurzen’s Zip Cyber Edition is pitched as a tri-fold, ultra-portable DLP projector that turns quick “phone-to-wall” viewing into something closer to a pocket display system. The Cyber Edition styling is a reskin meant to signal exposed-tech vibes, but the practical story is wireless casting: direct mirroring and AirPlay-style playback, plus a stand-and-place workflow that’s meant to feel as casual as setting a device on a table. https://aurzen.com/products/aurzen-zip-tri-fold-portable-projector

In use, the Zip leans on automatic image correction so you can tilt it up or down and let keystone compensation square the frame, with a small onboard speaker for basic audio and Bluetooth for headphones or a car stereo. Specs floating around the Zip line point to native 720p with 1080p input support, brightness in the ~100 ANSI-lumen class, and a built-in 5000 mAh battery that’s roughly “one short film” territory, with USB-C PD fast charging and the option to extend runtime via an external power bank for longer play.
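Rough arithmetic backs up the "one short film" framing; the cell voltage, average draw, and conversion-efficiency figures below are assumptions for illustration, not Aurzen specs:

```python
def runtime_hours(mah, cell_voltage=3.85, draw_watts=14.0, efficiency=0.85):
    """Estimate pocket-projector runtime from a mAh battery rating.

    Assumed numbers: a single-cell ~3.85 V Li-ion pack, ~14 W average draw
    for a ~100 ANSI-lumen DLP engine plus speaker, and ~85% power-conversion
    efficiency. Converting mAh to Wh first makes the comparison honest,
    since mAh alone says nothing without the pack voltage.
    """
    pack_wh = mah / 1000 * cell_voltage
    return pack_wh * efficiency / draw_watts

print(round(runtime_hours(5000), 1))   # a bit over an hour on these assumptions
```

That ~19 Wh pack landing near 1.2 hours is exactly why USB-C PD pass-through from a power bank is the practical answer for anything longer than a short film.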

The wider Aurzen lineup shown here frames portability as a spectrum: from pocket projection to living-room brightness. The EAZZE D1 MAX is positioned as a higher-output, mixed-use model with Google TV (including mainstream streaming without a separate stick), 4K input support, MEMC motion smoothing, Wi-Fi/Bluetooth connectivity, and built-in Dolby Audio with higher-wattage speakers—basically the “set it up fast, still looks like a home-theater feed” category for a flexible room.

On the “smart platform” side, Aurzen also leans into Roku integration with the D1R Cube, one of the early projectors to run the Roku TV interface natively, combining 1080p output with 4K input support, app-first navigation, and portable sizing. In this video—filmed at CES Las Vegas 2026—the same portability theme shows up in playful optical add-ons (like a galaxy lens effect) and in the BOOM-series vibe of visible speaker design, lighting accents, and bass processing that’s described in algorithmic, transient-control language.

The most concrete mobility demo is the Travel Play accessory concept for a Tesla Model Y: a custom screen kit that stores in the trunk and turns the rear into a pop-up cinema for camping, parking breaks, or kid-friendly downtime. The interesting technical angle isn’t just projection, but systems thinking—casting pipeline, battery strategy, mounting geometry, and audio routing to Bluetooth/car speakers—so the setup behaves like a small, transportable AV stack rather than a fragile gadget you only use at home.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=OoeKLK6N-10

Piano LED Plus 2026: MIDI USB-B + LED strip piano tutor, hand colors, wait-mode tempo

Posted by – January 12, 2026
Category: Exclusive videos

Piano LED Plus is a MIDI-driven learning kit that adds an addressable LED strip above your digital piano keys, turning note-on/note-off data into a visual guide. The black controller box reads your keyboard via USB-to-Host (USB-B) or MIDI, then syncs the lesson flow with a companion app so the right notes light up at the right time. Color coding separates hands (for example green for right hand, blue for left), which makes two-hand coordination feel more like following a lane system than decoding sheet music. https://www.pianoledshop.com/

The learning loop is built around timing and accuracy: you can slow the tempo, practice one hand at a time, and use a “wait for correct notes” style flow where the system only advances when you hit the intended keys. That turns rhythm and fingering into measurable targets instead of guesswork, especially for people who are new to piano and still building motor memory. Difficulty can be scaled across multiple levels, and the same MIDI file becomes easier or harder depending on the chosen mode.
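The lane-system idea above is easy to sketch in code: map each MIDI note to a strip index, color it by hand, and gate progress on the "wait for correct notes" check. A minimal sketch, assuming a 2-LED-per-key mapping and a simple event model — the function names and layout are illustrative, not Piano LED Plus firmware:

```python
# Illustrative "wait mode" practice loop for a key-aligned LED strip.
RIGHT_HAND = (0, 255, 0)   # green, per the demo's color coding
LEFT_HAND  = (0, 0, 255)   # blue
LOWEST_KEY = 21            # MIDI note A0 on an 88-key keyboard

def key_to_led(note, leds_per_key=2):
    """Map a MIDI note number to the first LED index above that key."""
    return (note - LOWEST_KEY) * leds_per_key

def wait_for_chord(expected, played):
    """Advance only when every expected note has been played (wait mode)."""
    return set(expected) <= set(played)

def frame_for_beat(beat):
    """Light the target notes for the current beat, color-coded by hand."""
    return {key_to_led(n): (RIGHT_HAND if right else LEFT_HAND)
            for n, right in beat}

beat = [(60, True), (48, False)]          # C4 right hand, C3 left hand
frame = frame_for_beat(beat)
assert frame[key_to_led(60)] == RIGHT_HAND
assert not wait_for_chord([60, 48], [60])      # still waiting on the left hand
assert wait_for_chord([60, 48], [48, 60, 64])  # extra notes don't block advance
```

The "trim to keybed length" feature from the demo would just change the `LOWEST_KEY` offset and the strip's maximum index.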

In this demo you also get a practical look at the hardware fit: the strip can be mapped for shorter keyboards and physically trimmed to match the keybed length, while still covering the standard 88-key layout when needed. The conversation is filmed at CES Las Vegas 2026, so it’s very much a booth-style walkthrough of what the system does, how it connects, and the kind of learner it targets in 2026.

A more ambitious idea comes up too: using the incoming MIDI stream not just to display “what to press,” but to support creation—recording your own playing, visualizing it back on the LEDs, and eventually offering chord/key guidance for improvisation. The current focus stays on structured learning and repeatability (record, replay, refine), but the same MIDI parsing and key mapping could later underpin scale-aware or chord-tone highlighting for composition, if the product roadmap goes there in the future.

The team describes a few years of development, a growing song catalog, and an app-first workflow across common platforms, with a basic set of free pieces and an optional premium tier for more content and modes. It’s positioned as an add-on for families who already have a digital piano and want a guided practice layer without changing the instrument itself, while keeping the complexity in firmware, MIDI handling, and the mobile/desktop app.

source https://www.youtube.com/watch?v=k4MYaobOgUA

Alilo AI Smart Bunny + Quectel FC41D in kids toys: 2.4GHz 802.11n, Bluetooth 5.2, filtered chat

Posted by – January 12, 2026
Category: Exclusive videos

Alilo shows a range of screen-free “smart toy” designs that mix classic infant sensory play with embedded audio and connectivity, starting with a soft, light-up rattle/globe aimed at babies under one year. Instead of a single fixed jingle, the toy cycles through multiple sound profiles when shaken, while a diffused LED core steps through seven colors for low-light feedback and calming routines. https://www.aliloai.com/products/smart-ai-bunny

The more technical demo is an AI Smart Bunny that behaves like a voice-first companion: mic + speaker, on-device buttons for modes (music, light, AI talk), and optional offline playback via local storage or a memory card. With Wi-Fi available, the bunny can be configured to call a cloud LLM endpoint (described as OpenAI API, with the option to swap in other providers), so a press-to-talk interaction becomes conversational Q&A, story generation, and language practice without a screen in use.
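The press-to-talk flow described here reduces to building a chat request with a child-safety system prompt pinned first. A minimal sketch of that request shape, using the standard chat-messages format the OpenAI-style APIs accept — the model name, prompt text, and helper names are assumptions, not Alilo's actual service:

```python
# Illustrative press-to-talk request builder: transcript in, JSON payload out.
import json

SAFETY_PROMPT = ("You are a friendly companion for young children. "
                 "Keep answers short, cheerful, and age-appropriate.")

def build_chat_request(transcript, history=(), model="gpt-4o-mini"):
    messages = [{"role": "system", "content": SAFETY_PROMPT}]
    messages += list(history)                       # keep conversation context
    messages.append({"role": "user", "content": transcript})
    return json.dumps({"model": model, "messages": messages})

payload = json.loads(build_chat_request("Tell me a story about a bunny"))
assert payload["messages"][0]["role"] == "system"
assert payload["messages"][-1]["content"].startswith("Tell me")
```

Swapping providers, as the booth suggested, would mean changing the endpoint and `model` string while keeping this message structure.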

Under the hood, the conversation lands on the radio/compute building blocks: the PCB integrates a Quectel FC41D Wi-Fi + Bluetooth module built around an ARM968-class MCU. That points to a typical 2.4 GHz IEEE 802.11b/g/n + Bluetooth 5.2 stack and modern Wi-Fi security (WPA2/WPA3), which matters when the device sits on a home network alongside other IoT gear. The interview is filmed at CES Las Vegas 2026, where this “toy meets module” approach is becoming a recurring pattern.

A recurring theme is child safety at the software layer: the pitch is that responses can be constrained for age-appropriate content, and that the UI stays intentionally simple so a small child can operate it with predictable outcomes. The demo also highlights what is not included—no camera in the bunny—so the experience is primarily audio plus LED cues, reducing data capture while still enabling personalization through prompts and curated audio libraries.

Beyond the bunny, Alilo also shows educational SKUs like a child calculator aimed at early school ages, with graded difficulty that can move from basic arithmetic toward larger-number prompts. Taken together, the booth visit is a snapshot of how early-learning toys are being re-architected around low-power wireless MCUs, local audio pipelines, and optional cloud inference—useful context if you track where conversational UI meets family-oriented embedded hardware right now.

source https://www.youtube.com/watch?v=P2YelYppJTA

XbotGo Falcon 4K dual-lens AI sports camera: auto-tracking, auto-zoom, RTMP livestream

Posted by – January 12, 2026
Category: Exclusive videos

XbotGo’s idea is to turn youth and amateur sports filming into a mostly hands-free workflow: you set up the camera at the sideline, pick the sport, and let computer-vision tracking follow play while parents actually watch the match. In this interview, product manager Jordan Sherman frames it as an “AI cameraman” for soccer, basketball, tennis and other sports, with automated highlights so kids can replay key moments later. https://xbotgo.com/

The third-generation Falcon is positioned as the all-in-one unit: a dual-lens design where one camera is dedicated to tracking/analysis and the other to capture, enabling auto-framing plus auto-zoom for a broadcast-style shot. On the hardware side it’s built around a 6 TOPS AI processor, a Sony 4K image sensor, motorized 360° rotation with 160° tilt range, and a roughly 3–4 hour battery window depending on mode.

For sharing, Falcon supports local recording to microSD (up to 1 TB, exFAT) and optional cloud upload for team access, with live streaming designed around standard RTMP so it can push to YouTube, Facebook, or other endpoints. Control and connectivity lean on Wi-Fi 6 plus BLE 5.2, and in practice you’ll rely on venue Wi-Fi or a phone/hotspot for uplink. The demo was filmed at CES Las Vegas 2026, so you also get a quick look at the UI flow and sample footage.
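"Standard RTMP" means any ingest that takes an H.264/FLV push will work. A minimal sketch of what that push looks like from the software side, using YouTube's public ingest URL pattern and typical ffmpeg arguments — the stream key is a placeholder and the helper names are illustrative, not XbotGo's implementation:

```python
# Illustrative RTMP push: build the ingest URL, then an ffmpeg command line.
def rtmp_url(base, stream_key):
    return f"{base.rstrip('/')}/{stream_key}"

def ffmpeg_push_cmd(src, url):
    # Typical H.264 video + AAC audio, muxed as FLV for RTMP endpoints.
    return ["ffmpeg", "-re", "-i", src,
            "-c:v", "libx264", "-preset", "veryfast",
            "-c:a", "aac", "-f", "flv", url]

url = rtmp_url("rtmp://a.rtmp.youtube.com/live2", "xxxx-xxxx")
assert url.endswith("/xxxx-xxxx")
assert ffmpeg_push_cmd("match.mp4", url)[-1] == url
```

The camera presumably encodes in hardware rather than shelling out to ffmpeg, but the URL-plus-key contract with the ingest server is the same.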

Chameleon is the more entry-level approach: the base unit provides the tracking compute plus a motorized mount, while your smartphone becomes the capture camera through the XbotGo app on iOS or Android. That architecture keeps cost down (roughly $330–$350 depending on bundle) while still enabling auto-tracking, smart zoom behavior, and some sport-specific features like jersey-number tracking and AI basketball editing, with up to around 8 hours per charge.

The conversation also hints at the next step: multi-camera coverage with synchronized angles (behind each goal, midfield, or corners) and some form of automated switching, which is where youth-sports video starts to resemble a lightweight broadcast pipeline. Pair that with reliable time alignment, external wireless audio, and event detection that can cut highlights automatically, and you get a practical tool for coaches, families, and club media without a full production crew.

source https://www.youtube.com/watch?v=8avj7aTb124

GPD WIN 5 Ryzen AI Max+ 395 handheld PC + external 80Wh battery, DC + USB-C PD, Pocket 4 mini laptop

Posted by – January 12, 2026
Category: Exclusive videos

GPD’s CES lineup this year revolves around a clear idea: treat a handheld like a real PC, then solve the power and thermals so it can actually run modern AAA workloads. The WIN 5 demo centers on AMD’s Ryzen AI Max+ 395 (Strix Halo) paired with Radeon 8060S-class integrated graphics, pushing performance that normally lives in thicker laptops, but in a controller-first form factor. https://www.gpd.hk/

What makes the WIN 5 architecture interesting is the power system: instead of hiding a large pack inside the chassis, GPD uses a detachable external battery module (around 80Wh) that can be swapped and even “stacked” in practice by carrying spares. For peak load it can run from a high-power DC adapter, while USB-C PD (the booth mentioned up to 100W) is a more universal way to top up from a large power bank when you’re away from an outlet, keeping sustained clocks realistic without turning the device into a hot, throttling brick.
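The 80Wh figure makes the runtime math easy to reason about. A back-of-envelope sketch — the draw figures and conversion-efficiency factor are illustrative assumptions, not GPD-published numbers:

```python
# Rough runtime for a detachable 80 Wh pack at a sustained system draw.
def runtime_hours(pack_wh, draw_w, efficiency=0.9):
    usable = pack_wh * efficiency      # regulation/conversion losses
    return usable / draw_w

assert round(runtime_hours(80, 36), 1) == 2.0   # heavy gaming load
assert round(runtime_hours(80, 12), 1) == 6.0   # light desktop work
```

Carrying one spare pack doubles those numbers, which is the whole point of the swappable design.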

On the productivity side, the Pocket 4 keeps GPD’s tiny-laptop identity alive with a rotating display that flips into a tablet-like posture, plus a spec sheet that’s closer to an ultrabook than a novelty. Configurations around AMD Ryzen AI 9 HX 370 (Strix Point) and Radeon 890M iGPU, LPDDR5x memory, PCIe NVMe storage, and USB4/Thunderbolt-class I/O are designed for “real work” in a jacket pocket, and the modular bay concept (often used for things like RS-232, KVM, or LTE modules) is the kind of niche engineering that still matters in field deployments.

The smaller machines in the interview also show how GPD segments x86: an Intel N300-class unit aimed at light productivity and admin tasks, and a rugged MicroPC-style device focused on ports and practicality rather than raw GPU throughput. This was filmed at CES Las Vegas 2026, and the conversation is a good snapshot of how handheld PCs are converging with mini laptops: same Windows/Linux stack, same driver and firmware concerns, just tighter constraints on power density and cooling trade-offs.

GPD also draws a line around platform choices: today it’s Intel and AMD only, mainly because game compatibility and tooling are still easiest on x86. They do acknowledge the ARM angle if Valve’s Linux/Steam ecosystem keeps moving that direction, but the underlying message is pragmatic: follow the software library, then adapt the hardware. For viewers, that makes this less about one gadget and more about the roadmap for portable compute that can game, compile, and travel light.

source https://www.youtube.com/watch?v=9FG7uMVobrQ

AGIBOT Panda D1 quadruped demo: backflip, push-ups, dynamic gait control, jumping

Posted by – January 12, 2026
Category: Exclusive videos

AGIBOT’s panda-themed quadruped turns a playful mascot into a serious locomotion demo, switching modes on command and showing how much control bandwidth modern legged robots can deliver. What you’re really watching is a tight loop of sensing and actuation—stable stance, fast transitions, and short bursts of dynamic motion—packaged in a friendly shell that makes the mechanics easier to notice. https://www.agibot.com/

In the clip the “panda” drops into push-ups, pops back up, hops, pivots toward the camera, and runs a backflip routine—moves that depend on whole-body control, inertial measurement (IMU) feedback, foot-contact timing, and careful torque/position control across multiple joints. Even when it looks like a party trick, it stress-tests balance recovery, trajectory planning, and impact management during takeoff and landing.
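The backflip is a useful anchor for what "dynamic motion" demands. A rough ballistics sketch of the constraint — how much vertical launch speed buys enough airtime to rotate a full turn — using illustrative physics, not AGIBOT specs:

```python
# Rough airtime/rotation budget for a standing backflip.
import math

G = 9.81  # m/s^2

def airtime(v_launch):
    """Up-and-down flight time for a vertical launch speed (m/s)."""
    return 2 * v_launch / G

def required_spin_rate(v_launch, turns=1.0):
    """Body angular rate (rad/s) needed to complete the flip in the air."""
    return turns * 2 * math.pi / airtime(v_launch)

t = airtime(2.5)                       # ~0.51 s in the air
assert 0.50 < t < 0.52
assert required_spin_rate(2.5) > 12    # >12 rad/s, roughly 2 rev/s
```

That spin rate has to be generated by joint torques before takeoff and absorbed again on landing, which is why the flip doubles as a stress test of impact management.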

AGIBOT frames these platforms as part of a broader embodied-AI stack, spanning its D1 quadruped line and humanoid families such as A2 and X2, plus work on training data and “interaction + manipulation + locomotion” as a unified control problem. That context matters, because the same perception-to-control plumbing behind a stunt can be repurposed for patrol, guided interaction, or repeatable navigation tasks, and this quick panda demo was filmed on the CES Las Vegas 2026 show floor.

The fun moment is when the panda “comes after” the operator, but it also hints at the real gap between impressive locomotion and useful home autonomy: chores like laundry or cooking need robust perception, safe force control, and reliable manipulation, not just agile gait. Treat this video as a snapshot of where legged robotics is getting very capable—dynamic stability, motion primitives, and user-triggered behaviors—while the hard part is turning that athletic base into dependable everyday help.

source https://www.youtube.com/watch?v=x2n3kB_19mg

SenseRobot chess robot: screen-free robotic arm board, blitz mode, Chess Mini for kids

Posted by – January 12, 2026
Category: Exclusive videos

SenseRobot’s pitch is simple: bring board-game engines back into the real world with a screen-free tabletop robot that physically moves pieces on a real board, so practice feels closer to over-the-board play than tapping on an app. In this demo you see it set up across multiple tables, with support not only for chess but also checkers/draughts and Chinese chess (Xiangqi), switching boards while keeping the same “move a piece, press go, robot replies” flow. https://www.senserobotchess.com/

What makes it interesting technically is the closed-loop interaction: the system has to sense the current board state, validate your move, and then execute its reply with a small robotic arm and gripper while staying aligned to squares. When an illegal or clearly losing move happens, the robot flags it as a mistake and can restore the position, which implies some combination of move-history tracking and board-state verification rather than blindly trusting the user. That mix of physical HRI, motion control, and rules enforcement is the core of the product story.
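One way such board-state verification can work is by diffing two sensed occupancy snapshots to infer what the human did. A minimal sketch under simplifying assumptions (quiet moves only; captures and castling would need piece identity, not just occupancy) — this is illustrative, not SenseRobot's actual pipeline:

```python
# Infer a move from before/after occupancy. Squares are (file, rank) tuples.
def infer_move(before, after):
    """Return (from_sq, to_sq) for a single quiet move, else None."""
    vacated = before - after           # square the piece left
    occupied = after - before          # square it arrived on
    if len(vacated) == 1 and len(occupied) == 1:
        return vacated.pop(), occupied.pop()
    return None                        # capture/castling/no-change: needs more info

before = {("e", 2), ("d", 7)}
after  = {("e", 4), ("d", 7)}
assert infer_move(before, after) == (("e", 2), ("e", 4))
assert infer_move(before, before) is None
```

Once the move is inferred, checking it against a legal-move generator and a stored move history is what lets the robot flag mistakes and restore positions.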

Midway through the interview, filmed at CES Las Vegas 2026, the focus shifts from “robot opponent” to “robot coach.” The rep claims a wide range of difficulty levels from beginner up to grandmaster, plus training value that’s different from playing on a phone: you get a tactile board, a consistent practice partner, and less eye strain than a screen-first chess routine. They also reference a partnership with the European Chess Union, framing the device as a structured way to build confidence before facing human opponents.

There are a few practical engineering moments too: the arm is presented as stable and safe for home use, and when the host interrupts the motion, the robot pauses and finds its way back, hinting at basic obstruction handling, path recovery, and “return-to-home” style behaviors. The rep also mentions a blitz mode in newer products, which raises the bar on motor speed, acceleration limits, and reliable piece pickup and placement at higher tempo without sacrificing safety.

On roadmap and commercial details, they say they’ve sold around 20,000 units globally, built the robot over roughly four years, and that the North America “basic” version sits around the $1,000 mark with availability through mainstream retail. The notable next step is a smaller, more affordable Chess Mini aimed at kids, with talk of extra kid-focused features like STEAM-style programming hooks on top of board-game play, which could turn the robot into a gateway to both chess training and robotics literacy.

source https://www.youtube.com/watch?v=m8f30MFbLiA

Napster Station kiosk with faytech: VoiceField mic array + embodied AI concierge, Napster View 3D

Posted by – January 12, 2026
Category: Exclusive videos

Napster is being reframed here as a “streaming expertise” product: a library of domain AI companions you meet in real-time video instead of text chat. The demo focuses on embodied agents for tech support, fitness coaching, and personal guidance, plus digital twins that can mirror a real person and optionally escalate to a live call. The pitch is simple UX: talk naturally, keep context, and let the system handle the tool-wrangling under the hood. https://www.napster.ai/view

On desktop, the centerpiece is Napster View, a small clip-on display for Mac that uses a lenticular multi-view optical stack to create glasses-free stereoscopic depth, so an agent appears “above” your main screen and keeps eye contact. The team describes combining a custom lens with rendering tuned for multiple viewpoints to keep parallax consistent and reduce visual fatigue, with USB-C power and a low-cost hardware entry point. The footage is shot during CES Las Vegas 2026, where spatial UI for everyday computer work is turning into a practical form factor.
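The lenticular idea is that each eye angle lands on a different rendered view, so the two eyes receive different images and perceive depth. A toy model of that view selection — the view count and viewing-cone angle are illustrative parameters, not Napster View's actual optics:

```python
# Toy lenticular view mapping: eye angle within the viewing cone -> view index.
def view_index(eye_angle_deg, views=8, fov_deg=40):
    """Map a horizontal eye angle to one of `views` rendered viewpoints."""
    half = fov_deg / 2
    clamped = max(-half, min(half, eye_angle_deg))
    frac = (clamped + half) / fov_deg          # 0.0 .. 1.0 across the cone
    return min(views - 1, int(frac * views))

assert view_index(-20) == 0      # far left eye position -> leftmost view
assert view_index(0) == 4        # center of the cone
assert view_index(20) == 7       # far right clamps to the last view
```

Keeping parallax consistent, as the team describes, amounts to rendering those viewpoints from camera positions that match this angular mapping, so adjacent views disagree by a plausible inter-eye offset.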

Software-wise, View is paired with a companion app that can see you, and—when you grant permission—see what’s on your screen for situational awareness. That enables screen-guided help (for example, learning macOS app workflows quickly) and artifact generation like emails, plans, or images from what the model observes. They also preview “gated” control of macOS actions (launching apps, manipulating documents, editing media) with extra testing and safety checks, because automation shifts from advice to execution.

The same conversational layer is used for generative media: you pick a genre and scenario, and an AI “artist” produces lyrics, cover art, and multiple song variants, then returns them through the UI as shareable assets. The transcript stresses a model-agnostic approach—swapping underlying LLM or music models as they improve—so users don’t need to track the fast-moving ecosystem. It’s a clear example of orchestration: multimodal input, structured outputs, and lightweight creative iteration in one place.

For public spaces, Napster Station extends the idea into a kiosk: camera-triggered interaction plus a near-field microphone array meant to isolate the voice of the person directly in front, even in loud environments. The pitch is “AI outside the browser,” where an embodied concierge can drive existing web surfaces (retail, airports, hotels, venues) by taking a spoken intent and executing steps like a digital employee. Technically it’s a blend of UX, audio DSP, vision, and agent workflows tuned for a crowded trade-show floor.
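The "isolate the voice in front" behavior is classically built on delay-and-sum beamforming: delay each mic channel so a source at the target angle lines up, then average, which reinforces the target and smears off-axis noise. A minimal sketch — the geometry, sample rate, and function names are illustrative, since Napster hasn't published VoiceField's DSP:

```python
# Toy delay-and-sum beamformer for a linear mic array.
import math

def steering_delays(mic_x_m, angle_deg, fs=16000, c=343.0):
    """Integer sample delays that align a far-field source at angle_deg."""
    theta = math.radians(angle_deg)
    raw = [x * math.sin(theta) / c for x in mic_x_m]   # seconds per mic
    base = min(raw)
    return [round((d - base) * fs) for d in raw]

def delay_and_sum(channels, delays):
    """Align channels by their delays and average them."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

mics = [0.0, 0.05, 0.10]               # 5 cm spacing
assert steering_delays(mics, 0) == [0, 0, 0]   # broadside: no delay needed
on_axis = [[1.0, 0.0, 0.0]] * 3        # identical signals from the front
assert delay_and_sum(on_axis, [0, 0, 0])[0] == 1.0
```

Production arrays add adaptive weighting and echo cancellation on top, but the steer-then-sum core is why a kiosk can favor the person standing directly in front of it.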

source https://www.youtube.com/watch?v=RN8xqMVZ7aE

Frore Systems Booth Tour at CES 2026, From Edge to Cloud, AirJet Mini G2, AirJet PAK, LiquidJet

Posted by – January 11, 2026
Category: Exclusive videos

Thermal design is becoming a first-order limiter for thin clients, rugged edge boxes, and AI racks, and Frore Systems frames AirJet and LiquidJet as two complementary ways to raise sustained power without reverting to bulky fans or oversized heat sinks. The tour connects solid-state active airflow at the device level with direct-to-chip liquid cooling at the rack level, focusing on steady-state thermal envelope instead of brief boost behavior. https://www.froresystems.com/products/airjet-r-mini-g2

Later in the video, filmed at CES Las Vegas 2026, AirJet Mini G2 is presented as a sealed, solid-state active cooling module roughly 2.65 mm thick that targets about 7.5 W of heat removal while consuming about 1 W. Gen 2 is described as a ~50% heat-removal step over the first AirJet Mini, and the discussion keeps coming back to why that matters in shipping hardware: acoustic limits, dust-tolerant airflow paths, and multi-year reliability testing.

On client compute, the theme is turning passively cooled form factors into sustained-TDP systems. A Qualcomm reference mini PC built around Snapdragon X2 Elite uses three AirJet Mini G2 units to support about a 25 W thermal envelope in a sub-10 mm chassis, and similar integration patterns are shown for ultra-thin notebooks and tablet-class devices. The engineering win is not a single peak score, but fewer throttle cliffs during long exports, compilation, and on-device AI inference.
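The arithmetic behind that Qualcomm example is straightforward: three modules at ~7.5 W of heat removal each gets you most of the way to a ~25 W envelope, with the chassis handling the remainder passively. A sketch — the passive-dissipation figure is my assumption to make the numbers meet, not a Frore spec:

```python
# Thermal-envelope arithmetic for an AirJet-cooled chassis.
def sustained_envelope(modules, per_module_w=7.5, passive_w=2.5):
    """Heat the system can shed continuously: active modules + passive path."""
    return modules * per_module_w + passive_w

def cooling_overhead(modules, input_w=1.0):
    """Power the cooling modules themselves consume."""
    return modules * input_w

assert sustained_envelope(3) == 25.0
assert cooling_overhead(3) == 3.0
```

The ~7.5 W removed per ~1 W consumed ratio is what makes the trade attractive against the alternative of simply throttling.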

Where rugged packaging matters, Frore shows how high static pressure can keep airflow viable through filters. A class-1 5G hotspot example pushes roughly 31 dBm transmit power and pairs it with Wi-Fi 7, yet stays pocket-size by using AirJet modules behind IP53-grade filtered vents; the company cites back pressure around 1750 Pa to move air even when the intake and exhaust are constrained. The same idea is applied to compact SSD enclosures aimed at sustained read/write bandwidth, and to industrial cameras where vibration from a fan would blur imaging.

In the cloud segment, LiquidJet is positioned as a direct-to-chip coldplate built with 3D short-loop jet-channel microstructures, manufactured using semiconductor-style steps like lithography and bonding on metal wafers. By designing the internal jet geometry from an ASIC power map, more coolant can be directed at hotspot regions, with Frore citing support for very high local heat flux up to about 600 W/cm². The claimed upside is headroom to run accelerators cooler for efficiency, or to trade temperature margin for higher clocks, improving tokens per watt and overall data-center PUE at scale.
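To put the 600 W/cm² figure in scale, it helps to multiply flux by area. A quick sketch — the die and hotspot areas are illustrative, not tied to any specific accelerator:

```python
# Scale of the quoted hotspot heat flux: power shed at a given flux and area.
def hotspot_power_w(flux_w_per_cm2, area_cm2):
    return flux_w_per_cm2 * area_cm2

assert hotspot_power_w(600, 0.5) == 300.0   # a 0.5 cm² hotspot at 600 W/cm²
assert hotspot_power_w(100, 8.0) == 800.0   # vs. a full 8 cm² die at 100 W/cm²
```

The point of designing jet geometry from the ASIC power map is exactly this mismatch: average die flux is modest, but small regions can run several times hotter.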

source https://www.youtube.com/watch?v=ZQ8-D-xn7rQ

Gravity Universe Time 720° magnetic levitation clock with planetary time modes + app

Posted by – January 11, 2026
Category: Exclusive videos

Gravity is a Shenzhen-based startup building “sci-fi to function” desk objects, and this video focuses on Universe Time: a timepiece that behaves more like a kinetic display than a traditional clock, with a floating sphere acting as the moving indicator for how time “flows” across different reference frames and places. The core idea is to make time feel physical: you watch a miniature “planet” move rather than just reading digits. https://www.gravityplayer.com/

Universe Time uses a controlled magnetic levitation system to keep a metallic sphere hovering while it repositions itself to a target angle, then locks into a stable hover again; the demo also shows how the mechanism can articulate through wide orientation changes, including a 720° motion sequence and a 6-DoF style movement envelope while maintaining levitation. The interview is filmed at CES Las Vegas 2026, which fits the product’s intersection of consumer hardware, industrial design, and playful physics for home setups.

On the software side, the companion app turns the display into a “universe time” selector: you can switch between time zones or choose planet-based presets and watch the sphere accelerate to the new setpoint, then settle with closed-loop stabilization. The interface also exposes visual tuning such as LED color themes, plus time display modes where the orbit maps to hours, minutes, or a seconds cadence, so the motion becomes the readout rather than conventional clock hands.
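"Settle with closed-loop stabilization" is, at its simplest, a PD loop trimming coil force around a gravity feedforward. A 1-D simulation sketch of a sphere settling to the ~1 cm hover mentioned later — gains, mass, and time step are made up for illustration, not Gravity's controller:

```python
# 1-D maglev settle: PD control + gravity feedforward, semi-implicit Euler.
def simulate_hover(setpoint=0.01, steps=4000, dt=0.001, kp=600.0, kd=60.0):
    m, g = 0.05, 9.81          # 50 g sphere
    z, v = 0.0, 0.0            # start resting below the setpoint
    for _ in range(steps):
        err = setpoint - z
        force = m * g + kp * err - kd * v   # feedforward + PD correction
        a = force / m - g
        v += a * dt
        z += v * dt
    return z

final = simulate_hover()
assert abs(final - 0.01) < 1e-3   # settled within 1 mm of the 1 cm setpoint
```

Changing the sphere's mass or finish changes `m` and the field the sensors see, which is exactly why the interview treats "planet skins" as a recalibration problem rather than decoration.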

A practical engineering thread in the conversation is calibration: levitation height is configurable (the demo mentions roughly 1 cm hover with options up to about 7 cm), but changing mass, finish, or geometry can require recalculating control parameters for magnetic stability. They also mention how paints and surface treatments can perturb the magnetic field and sensor feedback, which is why “planet skins” and textured finishes become a non-trivial materials problem rather than just decoration, and why customization is treated as a premium, order-defined setup for now.

Behind the scenes, Gravity’s productization looks like a modern IoT pipeline: cloud + app + device identity, with OTA firmware updates and certificate-based onboarding, supporting a connected device that is as much embedded control as it is décor. The same levitation stack is shown branching into other categories (lighting, a levitating desk lamp form, audio speaker concepts, wall-mounted floating pieces, and levitating rocket collectibles), suggesting a platform approach where the control electronics, sensing, and magnetic actuation get reused across new form factors.

source https://www.youtube.com/watch?v=WGSCPc3uwJc

Artly Barista Bot: imitation learning, motion-capture training, autonomous latte art

Posted by – January 11, 2026
Category: Exclusive videos

Artly positions its VA platform as a “robot training school” for physical AI: instead of scripting a single demo, they build a reusable skill library that can drive a robotic barista workflow and then expand into other manipulation tasks. In this interview, CEO/co-founder Yushan Chen frames the coffee system as the first high-volume application, where the robot has to execute a full sequence—grind dose, tamp, pull espresso, steam milk, pour, and finish latte art—with repeatable timing and tool handling. https://www.artly.ai/
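A reusable skill library driving a fixed workflow can be sketched as named skills composed into the espresso sequence the video describes. This is a toy illustration of the pattern (the skill names mirror the post; the data structure and `run_workflow` helper are assumptions, not Artly’s API):

```python
# Each "skill" is a callable that advances a shared workflow context.
def run_workflow(skills, sequence, context):
    """Execute named skills in order, threading context between them."""
    log = []
    for name in sequence:
        context = skills[name](context)
        log.append(name)
    return context, log

skills = {
    "grind_dose": lambda ctx: {**ctx, "dose_g": 18.0},
    "tamp":       lambda ctx: {**ctx, "tamped": True},
    "pull_shot":  lambda ctx: {**ctx, "espresso_ml": 36.0},
    "steam_milk": lambda ctx: {**ctx, "milk_c": 60.0},
    "pour_art":   lambda ctx: {**ctx, "pattern": "rosetta"},
}
sequence = ["grind_dose", "tamp", "pull_shot", "steam_milk", "pour_art"]
```

The point of the library framing is that the same skills can be resequenced for new tasks instead of scripting each demo from scratch.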

A key technical idea here is learning-from-demonstration (imitation learning): an engineer performs the task while wearing sensors (motion capture / teleoperation style inputs), and the robot later reproduces the same trajectories. During training, the platform records synchronized action data plus camera streams, then uses perception to re-localize target objects at runtime. In the demo, the arm-mounted vision stack identifies items like oranges and apples and closes the loop so the robot can continue a pick-and-place motion even when the scene is slightly different each try.
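The replay-with-re-localization step can be reduced to a simple idea: record the demonstrated waypoints relative to the object’s pose during the demo, then re-anchor them to the pose perception reports at runtime. A 2D translation-only sketch (real systems handle full 6-DoF poses; the function name is illustrative):

```python
def relocalize(demo_waypoints, demo_object_xy, runtime_object_xy):
    """Shift a demonstrated trajectory so it targets the object's
    current position instead of where it sat during the demo.

    Translation-only for clarity; a full system would apply a rigid
    transform (rotation + translation) from pose estimation."""
    dx = runtime_object_xy[0] - demo_object_xy[0]
    dy = runtime_object_xy[1] - demo_object_xy[1]
    return [(x + dx, y + dy) for x, y in demo_waypoints]
```

This is why the robot can continue a pick-and-place motion even when the orange or apple is a few centimeters from where it was during training.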

They also call out Intel RealSense depth cameras for object perception, which fits the need for 3D pose estimation, reach planning, and gentle grasp control around deformable objects. The robot detects failed grasps, retracts, and retries—suggesting basic recovery logic plus confidence checks that keep the arm from “committing” to a bad pickup. Even with a short training session (they mention about two minutes), you can see how fast a narrow, well-instrumented skill can be brought to a usable level.
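The retract-and-retry behavior looks like a confidence-gated loop: attempt the grasp, score it, and back off rather than committing when the score is low. A minimal sketch (the `try_grasp` interface and threshold are assumptions for illustration):

```python
def attempt_grasp(try_grasp, max_retries=3, min_confidence=0.8):
    """Retry loop: abandon a low-confidence grasp, retract, and try
    again instead of committing to a bad pickup.

    try_grasp() performs one grasp attempt and returns a confidence
    score in [0, 1]. Returns the successful attempt number, or None
    if every attempt fell below the threshold."""
    for attempt in range(1, max_retries + 1):
        confidence = try_grasp()
        if confidence >= min_confidence:
            return attempt
    return None
```

The same gate generalizes beyond grasping: any skill with a perception-derived score can refuse to commit and re-approach.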

Beyond the lab, Artly says it has around 40 deployments across North America, and the point of that footprint is data: every real execution can become additional training signal to refine the policy and improve robustness across different cups, fruit sizes, and counter layouts. The video was filmed at CES Las Vegas 2026, where this kind of closed-loop manipulation is showing up less as a novelty and more as a practical “physical AI” pattern for retail automation on the show floor.

Artly’s roadmap in the conversation is basically dexterity plus generality: better end-effectors (including more hand-like grippers), richer sensory feedback, and progressively harder latte-art patterns that demand tighter control of flow rate, tilt angle, and microfoam texture. If the platform can keep turning demonstrations into dependable, auditable skills—perception, grasping, tool use, and recovery—it becomes a template for other tasks like drink garnish or fresh-ingredient handling without changing the overall training loop much, and that generality is the interesting part to watch.


source https://www.youtube.com/watch?v=B_TZLnS5Mw8