Aurzen portable cinema lineup: ZIP pocket DLP + D1 MAX 1000 ANSI Google TV + Roku D1R

Posted by – January 13, 2026
Category: Exclusive videos

Aurzen’s Zip Cyber Edition is pitched as a tri-fold, ultra-portable DLP projector that turns quick “phone-to-wall” viewing into something closer to a pocket display system. The Cyber Edition styling is a reskin meant to signal exposed-tech vibes, but the practical story is wireless casting: direct mirroring and AirPlay-style playback, plus a stand-and-place workflow that’s meant to feel as casual as setting a device on a table. https://aurzen.com/products/aurzen-zip-tri-fold-portable-projector

In use, the Zip leans on automatic image correction so you can tilt it up or down and let keystone compensation square the frame, with a small onboard speaker for basic audio and Bluetooth for headphones or a car stereo. Specs floating around the Zip line point to native 720p with 1080p input support, brightness in the ~100 ANSI-lumen class, and a built-in 5000 mAh battery that’s roughly “one short film” territory, with USB-C PD fast charging and the option to extend runtime via an external power bank for longer play.
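Auto-keystone works by pre-warping the source frame with a projective transform so the tilted projection lands as a rectangle again. A minimal sketch of that math (the corner coordinates are illustrative, not Aurzen's implementation):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform mapping src -> dst (4 point pairs, DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of A (smallest singular vector) is the flattened H
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Projector tilted up: the top edge of the frame lands wider than the bottom.
# Pre-warp the source so the projected image becomes a rectangle again.
measured = [(-0.15, 0.0), (1.15, 0.0), (1.0, 1.0), (0.0, 1.0)]  # where corners land
target   = [(0.0, 0.0),   (1.0, 0.0),  (1.0, 1.0), (0.0, 1.0)]  # where we want them
H = homography(measured, target)

def warp(pt):
    """Apply H to a point in homogeneous coordinates."""
    x, y = pt
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])
```

Because four point correspondences exactly determine a homography, the warped corners land precisely on the target rectangle.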

The wider Aurzen lineup shown here frames portability as a spectrum: from pocket projection to living-room brightness. The EAZZE D1 MAX is positioned as a higher-output, mixed-use model with Google TV (including mainstream streaming without a separate stick), 4K input support, MEMC motion smoothing, Wi-Fi/Bluetooth connectivity, and built-in Dolby Audio with higher-wattage speakers—basically the “set it up fast, still looks like a home-theater feed” category for a flexible room.

On the “smart platform” side, Aurzen also leans into Roku integration with the D1R Cube, one of the early projectors to run the Roku TV interface natively, combining 1080p output with 4K input support, app-first navigation, and portable sizing. In this video—filmed at CES Las Vegas 2026—the same portability theme shows up in playful optical add-ons (like a galaxy lens effect) and in the BOOM-series vibe of visible speaker design, lighting accents, and bass processing that’s described in algorithmic, transient-control language.

The most concrete mobility demo is the Travel Play accessory concept for a Tesla Model Y: a custom screen kit that stores in the trunk and turns the rear into a pop-up cinema for camping, parking breaks, or kid-friendly downtime. The interesting technical angle isn’t just projection, but systems thinking—casting pipeline, battery strategy, mounting geometry, and audio routing to Bluetooth/car speakers—so the setup behaves like a small, transportable AV stack rather than a fragile gadget you only use at home.

I’m publishing 100+ videos from CES 2026, uploading about four per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed with the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) and the dual wireless DJI Mic 2 microphones with the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=OoeKLK6N-10

Piano LED Plus 2026: MIDI USB-B + LED strip piano tutor, hand colors, wait-mode tempo

Posted by – January 12, 2026
Category: Exclusive videos

Piano LED Plus is a MIDI-driven learning kit that adds an addressable LED strip above your digital piano keys, turning note-on/note-off data into a visual guide. The black controller box reads your keyboard via USB-to-Host (USB-B) or MIDI, then syncs the lesson flow with a companion app so the right notes light up at the right time. Color coding separates hands (for example green for right hand, blue for left), which makes two-hand coordination feel more like following a lane system than decoding sheet music. https://www.pianoledshop.com/

The learning loop is built around timing and accuracy: you can slow the tempo, practice one hand at a time, and use a “wait for correct notes” style flow where the system only advances when you hit the intended keys. That turns rhythm and fingering into measurable targets instead of guesswork, especially for people who are new to piano and still building motor memory. Difficulty can be scaled across multiple levels, and the same MIDI file becomes easier or harder depending on the chosen mode.
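The wait-mode loop described above is straightforward to picture in code. A toy sketch, assuming a strip with two LEDs per key starting at A0 (MIDI note 21); the constants and colors are illustrative, not the product's firmware:

```python
# Hand colors as shown in the demo: green for right hand, blue for left
RIGHT_HAND, LEFT_HAND = (0, 255, 0), (0, 0, 255)

def note_to_led(note, lowest_note=21, leds_per_key=2):
    """Map a MIDI note number to its first LED index (A0 = MIDI 21 on an 88-key board)."""
    return (note - lowest_note) * leds_per_key

def wait_for_notes(expected, incoming):
    """'Wait for correct notes' flow: only advance once every expected
    note has arrived. `incoming` yields note-on numbers from the MIDI stream;
    wrong notes are simply ignored rather than advancing the lesson."""
    pending = set(expected)
    for note in incoming:
        pending.discard(note)
        if not pending:
            return True  # chord complete, advance to the next step
    return False
```

In a real lesson loop the function would block on a live MIDI port instead of a finite iterable, but the gating logic is the same.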

In this demo you also get a practical look at the hardware fit: the strip can be mapped for shorter keyboards and physically trimmed to match the keybed length, while still covering the standard 88-key layout when needed. The conversation is filmed at CES Las Vegas 2026, so it’s very much a booth-style walkthrough of what the system does, how it connects, and the kind of learner it targets.

A more ambitious idea comes up too: using the incoming MIDI stream not just to display “what to press,” but to support creation—recording your own playing, visualizing it back on the LEDs, and eventually offering chord/key guidance for improvisation. The current focus stays on structured learning and repeatability (record, replay, refine), but the same MIDI parsing and key mapping could later underpin scale-aware or chord-tone highlighting for composition, if the product roadmap goes that way.

The team describes a few years of development, a growing song catalog, and an app-first workflow across common platforms, with a basic set of free pieces and an optional premium tier for more content and modes. It’s positioned as an add-on for families who already have a digital piano and want a guided practice layer without changing the instrument itself, while keeping the complexity in firmware, MIDI handling, and the mobile/desktop app.


source https://www.youtube.com/watch?v=k4MYaobOgUA

Alilo AI Smart Bunny + Quectel FC41D in kids toys: 2.4GHz 802.11n, Bluetooth 5.2, filtered chat

Posted by – January 12, 2026
Category: Exclusive videos

Alilo shows a range of screen-free “smart toy” designs that mix classic infant sensory play with embedded audio and connectivity, starting with a soft, light-up rattle/globe aimed at babies under one year. Instead of a single fixed jingle, the toy cycles through multiple sound profiles when shaken, while a diffused LED core steps through seven colors for low-light feedback and calming routines. https://www.aliloai.com/products/smart-ai-bunny

The more technical demo is an AI Smart Bunny that behaves like a voice-first companion: mic + speaker, on-device buttons for modes (music, light, AI talk), and optional offline playback via local storage or a memory card. With Wi-Fi available, the bunny can be configured to call a cloud LLM endpoint (described as OpenAI API, with the option to swap in other providers), so a press-to-talk interaction becomes conversational Q&A, story generation, and language practice without a screen in use.

Under the hood, the conversation lands on the radio/compute building blocks: the PCB integrates a Quectel FC41D Wi-Fi + Bluetooth module built around an ARM968-class MCU. That points to a typical 2.4 GHz IEEE 802.11b/g/n + Bluetooth 5.2 stack and modern Wi-Fi security (WPA2/WPA3), which matters when the device sits on a home network alongside other IoT gear. The interview is filmed at CES Las Vegas 2026, where this “toy meets module” approach is turning into a repeat pattern.

A recurring theme is child safety at the software layer: the pitch is that responses can be constrained for age-appropriate content, and that the UI stays intentionally simple so a small child can operate it with predictable outcomes. The demo also highlights what is not included—no camera in the bunny—so the experience is primarily audio plus LED cues, reducing data capture while still enabling personalization through prompts and curated audio libraries.
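The filtered-chat idea sketches naturally as a constrained system prompt plus a post-filter before audio playback. A hypothetical outline (the blocklist, prompt text, and fallback line are invented for illustration; the provider call is stubbed so it can be swapped, as the company describes):

```python
# Placeholder terms for illustration only, not Alilo's actual policy
BLOCKLIST = {"violence", "scary"}

SYSTEM_PROMPT = (
    "You are a friendly bunny for young children. "
    "Answer in one or two short, cheerful sentences. Refuse unsafe topics."
)

def is_child_safe(reply: str) -> bool:
    """Crude post-filter: reject replies containing blocklisted terms."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKLIST)

def answer(question: str, llm_call) -> str:
    """llm_call(system, user) -> str; swap in OpenAI or any other provider here.
    Filtered replies fall back to a safe canned response before TTS playback."""
    reply = llm_call(SYSTEM_PROMPT, question)
    return reply if is_child_safe(reply) else "Let's talk about something fun instead!"
```

A production filter would be far more sophisticated (classifier models, topic allowlists), but the shape of the pipeline (constrain, generate, verify, then speak) is the point.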

Beyond the bunny, Alilo also shows educational SKUs like a child calculator aimed at early school ages, with graded difficulty that can move from basic arithmetic toward larger-number prompts. Taken together, the booth visit is a snapshot of how early-learning toys are being re-architected around low-power wireless MCUs, local audio pipelines, and optional cloud inference—useful context if you track where conversational UI meets family-oriented embedded hardware.


source https://www.youtube.com/watch?v=P2YelYppJTA

XbotGo Falcon 4K dual-lens AI sports camera: auto-tracking, auto-zoom, RTMP livestream

Posted by – January 12, 2026
Category: Exclusive videos

XbotGo’s idea is to turn youth and amateur sports filming into a mostly hands-free workflow: you set up the camera at the sideline, pick the sport, and let computer-vision tracking follow play while parents actually watch the match. In this interview, product manager Jordan Sherman frames it as an “AI cameraman” for soccer, basketball, tennis and other sports, with automated highlights so kids can replay key moments later. https://xbotgo.com/

The third-generation Falcon is positioned as the all-in-one unit: a dual-lens design where one camera is dedicated to tracking/analysis and the other to capture, enabling auto-framing plus auto-zoom for a broadcast-style shot. On the hardware side it’s built around a 6 TOPS AI processor, a Sony 4K image sensor, motorized 360° rotation with 160° tilt range, and a roughly 3–4 hour battery window depending on mode.

For sharing, Falcon supports local recording to microSD (up to 1 TB, exFAT) and optional cloud upload for team access, with live streaming designed around standard RTMP so it can push to YouTube, Facebook, or other endpoints. Control and connectivity lean on Wi-Fi 6 plus BLE 5.2, and in practice you’ll rely on venue Wi-Fi or a phone/hotspot for uplink. The demo was filmed at CES Las Vegas 2026, so you also get a quick look at the UI flow and sample footage.
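RTMP ingestion is a standard enough pipeline that a comparable push can be sketched with ffmpeg. An illustrative command builder (the encoder settings, URLs, and key are examples, not Falcon's internals):

```python
import subprocess

def rtmp_push(source: str, stream_url: str, stream_key: str) -> list:
    """Build an ffmpeg command that encodes `source` and pushes it to an RTMP endpoint."""
    return [
        "ffmpeg", "-re", "-i", source,          # -re: read input at native frame rate
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "6000k",
        "-c:a", "aac", "-b:a", "160k",
        "-f", "flv",                             # RTMP carries an FLV container
        f"{stream_url}/{stream_key}",
    ]

cmd = rtmp_push("match.mp4", "rtmp://a.rtmp.youtube.com/live2", "YOUR-KEY")
# subprocess.run(cmd)  # uncomment with a real stream key to actually go live
```

The same command shape works for Facebook Live or any self-hosted RTMP server, which is exactly why "standard RTMP" matters more than a proprietary streaming path.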

Chameleon is the more entry-level approach: the base unit provides the tracking compute plus a motorized mount, while your smartphone becomes the capture camera through the XbotGo app on iOS or Android. That architecture keeps cost down (roughly $330–$350 depending on bundle) while still enabling auto-tracking, smart zoom behavior, and some sport-specific features like jersey-number tracking and AI basketball editing, with up to around 8 hours per charge.

The conversation also hints at the next step: multi-camera coverage with synchronized angles (behind each goal, midfield, or corners) and some form of automated switching, which is where youth-sports video starts to resemble a lightweight broadcast pipeline. Pair that with reliable time alignment, external wireless audio, and event detection that can cut highlights automatically, and you get a practical tool for coaches, families, and club media without a full production crew.


source https://www.youtube.com/watch?v=8avj7aTb124

GPD WIN 5 Ryzen AI Max+ 395 handheld PC + external 80Wh battery, DC + USB-C PD, Pocket 4 mini laptop

Posted by – January 12, 2026
Category: Exclusive videos

GPD’s CES lineup this year revolves around a clear idea: treat a handheld like a real PC, then solve the power and thermals so it can actually run modern AAA workloads. The WIN 5 demo centers on AMD’s Ryzen AI Max+ 395 (Strix Halo) paired with Radeon 8060S-class integrated graphics, pushing performance that normally lives in thicker laptops, but in a controller-first form factor. https://www.gpd.hk/

What makes the WIN 5 architecture interesting is the power system: instead of hiding a large pack inside the chassis, GPD uses a detachable external battery module (around 80Wh) that can be swapped and even “stacked” in practice by carrying spares. For peak load it can run from a high-power DC adapter, while USB-C PD (the booth mentioned up to 100W) is a more universal way to top up from a large power bank when you’re away from an outlet, keeping sustained clocks realistic without turning the device into a hot brick of silicon.
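The swappable-pack strategy invites some back-of-envelope math. Only the 80 Wh capacity and ~100 W PD figure come from the booth; the load numbers below are illustrative guesses:

```python
import math

def runtime_hours(pack_wh: float, avg_draw_w: float) -> float:
    """Ideal runtime of one pack at a given average system draw (ignores conversion losses)."""
    return pack_wh / avg_draw_w

def packs_needed(session_h: float, pack_wh: float, avg_draw_w: float) -> int:
    """How many swappable packs a session of a given length would consume."""
    return math.ceil(session_h * avg_draw_w / pack_wh)

print(runtime_hours(80, 40))    # heavy gaming load (~40 W guess): 2.0 h per pack
print(packs_needed(6, 80, 40))  # a 6 h session at that load: 3 packs
```

Real numbers will be lower once regulator losses and display brightness are factored in, but the division explains why "carry spares" is a viable strategy for an 80 Wh module.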

On the productivity side, the Pocket 4 keeps GPD’s tiny-laptop identity alive with a rotating display that flips into a tablet-like posture, plus a spec sheet that’s closer to an ultrabook than a novelty. Configurations around AMD Ryzen AI 9 HX 370 (Strix Point) and Radeon 890M iGPU, LPDDR5x memory, PCIe NVMe storage, and USB4/Thunderbolt-class I/O are designed for “real work” in a jacket pocket, and the modular bay concept (often used for things like RS-232, KVM, or LTE modules) is the kind of niche engineering that still matters in field deployments.

The smaller machines in the interview also show how GPD segments x86: an Intel N300-class unit aimed at light productivity and admin tasks, and a rugged MicroPC-style device focused on ports and practicality rather than raw GPU throughput. This was filmed at CES Las Vegas 2026, and the conversation is a good snapshot of how handheld PCs are converging with mini laptops: same Windows/Linux stack, same driver and firmware concerns, just tighter constraints on power density and cooling trade-offs.

GPD also draws a line around platform choices: today it’s Intel and AMD only, mainly because game compatibility and tooling are still easiest on x86. They do acknowledge the ARM angle if Valve’s Linux/Steam ecosystem keeps moving that direction, but the underlying message is pragmatic: follow the software library, then adapt the hardware. For viewers, that makes this less about one gadget and more about the roadmap for portable compute that can game, compile, and travel light.


source https://www.youtube.com/watch?v=9FG7uMVobrQ

AGIBOT Panda D1 quadruped demo: backflip, push-ups, dynamic gait control, jumping

Posted by – January 12, 2026
Category: Exclusive videos

AGIBOT’s panda-themed quadruped turns a playful mascot into a serious locomotion demo, switching modes on command and showing how much control bandwidth modern legged robots can deliver. What you’re really watching is a tight loop of sensing and actuation—stable stance, fast transitions, and short bursts of dynamic motion—packaged in a friendly shell that makes the mechanics easier to notice. https://www.agibot.com/

In the clip the “panda” drops into push-ups, pops back up, hops, pivots toward the camera, and runs a backflip routine—moves that depend on whole-body control, inertial measurement (IMU) feedback, foot-contact timing, and careful torque/position control across multiple joints. Even when it looks like a party trick, it stress-tests balance recovery, trajectory planning, and impact management during takeoff and landing.
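The IMU feedback mentioned above usually starts with sensor fusion; the simplest classic form is a complementary filter blending gyro and accelerometer estimates. An illustrative sketch (not AGIBOT's actual estimator):

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse two tilt estimates: integrated gyro rate (fast but drifts)
    and accelerometer-derived angle (absolute but noisy).
    alpha near 1 trusts the gyro on short timescales while the
    accelerometer slowly corrects long-term drift."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Real legged robots run full state estimators (EKF-style, fusing joint encoders and foot contacts), but this one-liner captures why a quadruped can land a backflip: the gyro gives low-latency attitude during the flip, and the absolute reference pulls the estimate back once it settles.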

AGIBOT frames these platforms as part of a broader embodied-AI stack, spanning its D1 quadruped line and humanoid families such as A2 and X2, plus work on training data and “interaction + manipulation + locomotion” as a unified control problem. That context matters, because the same perception-to-control plumbing behind a stunt can be repurposed for patrol, guided interaction, or repeatable navigation tasks, and this quick panda demo was filmed on the CES Las Vegas 2026 show floor.

The fun moment is when the panda “comes after” the operator, but it also hints at the real gap between impressive locomotion and useful home autonomy: chores like laundry or cooking need robust perception, safe force control, and reliable manipulation, not just agile gait. Treat this video as a snapshot of where legged robotics is getting very capable—dynamic stability, motion primitives, and user-triggered behaviors—while the hard part is turning that athletic base into dependable everyday help.


source https://www.youtube.com/watch?v=x2n3kB_19mg

SenseRobot chess robot: screen-free robotic arm board, blitz mode, Chess Mini for kids

Posted by – January 12, 2026
Category: Exclusive videos

SenseRobot’s pitch is simple: bring board-game engines back into the real world with a screen-free tabletop robot that physically moves pieces on a real board, so practice feels closer to over-the-board play than tapping on an app. In this demo you see it set up across multiple tables, with support not only for chess but also checkers/draughts and Chinese chess (Xiangqi), switching boards while keeping the same “move a piece, press go, robot replies” flow. https://www.senserobotchess.com/

What makes it interesting technically is the closed-loop interaction: the system has to sense the current board state, validate your move, and then execute its reply with a small robotic arm and gripper while staying aligned to squares. When an illegal or clearly losing move happens, the robot flags it as a mistake and can restore the position, which implies some combination of move-history tracking and board-state verification rather than blindly trusting the user. That mix of physical HRI, motion control, and rules enforcement is the core of the product story.
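The validate-then-execute loop implies move-history bookkeeping with rollback. A toy sketch of that state tracking (a two-piece position and a supplied legal-move set, not full chess rules or SenseRobot's code):

```python
class BoardTracker:
    """Track piece positions and move history so an invalid move can be
    flagged and a position restored, as the robot does on the physical board."""

    def __init__(self):
        self.history = []                       # (from_sq, to_sq, captured) tuples
        self.state = {"e2": "wP", "e7": "bP"}   # toy position for illustration

    def try_move(self, frm, to, legal_moves):
        """Validate against the legal-move set before touching the board state."""
        if (frm, to) not in legal_moves:
            return False                        # flagged as a mistake; nothing moves
        captured = self.state.pop(to, None)     # remember any captured piece
        self.state[to] = self.state.pop(frm)
        self.history.append((frm, to, captured))
        return True

    def undo(self):
        """Restore the previous position (the 'put the piece back' behavior)."""
        frm, to, captured = self.history.pop()
        self.state[frm] = self.state.pop(to)
        if captured:
            self.state[to] = captured
```

In the real product the legal-move set would come from a chess engine and the board state from sensing, but the rollback structure is the same: every physical move the arm makes must be invertible.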

Midway through the interview, filmed at CES Las Vegas 2026, the focus shifts from “robot opponent” to “robot coach.” The rep claims a wide range of difficulty levels from beginner up to grandmaster, plus training value that’s different from playing on a phone: you get a tactile board, a consistent practice partner, and less eye strain than a screen-first chess routine. They also reference a partnership with the European Chess Union, framing the device as a structured way to build confidence before facing human opponents.

There are a few practical engineering moments too: the arm is presented as stable and safe for home use, and when the host interrupts the motion, the robot pauses and finds its way back, hinting at basic obstruction handling, path recovery, and “return-to-home” style behaviors. The rep also mentions a blitz mode in newer products, which raises the bar on motor speed, acceleration limits, and reliable piece pickup and placement at higher tempo without sacrificing safety.

On roadmap and commercial details, they say they’ve sold around 20,000 units globally, built the robot over roughly four years, and that the North America “basic” version sits around the $1,000 mark with availability through mainstream retail. The notable next step is a smaller, more affordable Chess Mini aimed at kids, with talk of extra kid-focused features like STEAM-style programming hooks on top of board-game play, which could turn the robot into a gateway to both chess training and robotics literacy.


source https://www.youtube.com/watch?v=m8f30MFbLiA

Napster Station kiosk with faytech: VoiceField mic array + embodied AI concierge, Napster View 3D

Posted by – January 12, 2026
Category: Exclusive videos

Napster is being reframed here as a “streaming expertise” product: a library of domain AI companions you meet in real-time video instead of text chat. The demo focuses on embodied agents for tech support, fitness coaching, and personal guidance, plus digital twins that can mirror a real person and optionally escalate to a live call. The pitch is simple UX: talk naturally, keep context, and let the system handle the tool-wrangling under the hood. https://www.napster.ai/view

On desktop, the centerpiece is Napster View, a small clip-on display for Mac that uses a lenticular multi-view optical stack to create glasses-free stereoscopic depth, so an agent appears “above” your main screen and keeps eye contact. The team describes combining a custom lens with rendering tuned for multiple viewpoints to keep parallax consistent and reduce visual fatigue, with USB-C power and a low-cost hardware entry point. The footage is shot during CES Las Vegas 2026, where spatial UI for everyday computer work is turning into a practical form factor.

Software-wise, View is paired with a companion app that can see you, and—when you grant permission—see what’s on your screen for situational awareness. That enables screen-guided help (for example, learning macOS app workflows quickly) and artifact generation like emails, plans, or images from what the model observes. They also preview “gated” control of macOS actions (launching apps, manipulating documents, editing media) with extra testing and safety checks, because automation shifts from advice to execution.

The same conversational layer is used for generative media: you pick a genre and scenario, and an AI “artist” produces lyrics, cover art, and multiple song variants, then returns them through the UI as shareable assets. The transcript stresses a model-agnostic approach—swapping underlying LLM or music models as they improve—so users don’t need to track the fast-moving ecosystem. It’s a clear example of orchestration: multimodal input, structured outputs, and lightweight creative iteration in one place.

For public spaces, Napster Station extends the idea into a kiosk: camera-triggered interaction plus a near-field microphone array meant to isolate the voice of the person directly in front, even in loud environments. The pitch is “AI outside the browser,” where an embodied concierge can drive existing web surfaces (retail, airports, hotels, venues) by taking a spoken intent and executing steps like a digital employee. Technically it’s a blend of UX, audio DSP, vision, and agent workflows tuned for a crowded trade-show floor.
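The near-field array idea rests on classic beamforming; the simplest form is delay-and-sum, which reinforces sound arriving from one direction and averages out the rest. An illustrative sketch (not Napster's actual DSP):

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Align each mic channel by its steering delay, then average.
    Sound from the steered direction adds coherently; off-axis noise
    (a loud show floor) adds incoherently and is attenuated.
    np.roll wraps at the buffer edge, which a real implementation
    would handle with overlapping frames instead."""
    out = np.zeros(len(channels[0]))
    for ch, d in zip(channels, delays_samples):
        out += np.roll(ch, -d)
    return out / len(channels)
```

The steering delays come from array geometry: each delay is the extra travel time from the talker's position to that mic, which is why a near-field design can favor the person directly in front of the kiosk.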


source https://www.youtube.com/watch?v=RN8xqMVZ7aE

Frore Systems Booth Tour at CES 2026, From Edge to Cloud, AirJet Mini G2, AirJet PAK, LiquidJet

Posted by – January 11, 2026
Category: Exclusive videos

Thermal design is becoming a first-order limiter for thin clients, rugged edge boxes, and AI racks, and Frore Systems frames AirJet and LiquidJet as two complementary ways to raise sustained power without reverting to bulky fans or oversized heat sinks. The tour connects solid-state active airflow at the device level with direct-to-chip liquid cooling at the rack level, focusing on steady-state thermal envelope instead of brief boost behavior. https://www.froresystems.com/products/airjet-r-mini-g2

Later in the video, filmed at CES Las Vegas 2026, AirJet Mini G2 is presented as a sealed, solid-state active cooling module roughly 2.65 mm thick that targets about 7.5 W of heat removal while consuming about 1 W. Gen 2 is described as a ~50% heat-removal step over the first AirJet Mini, and the discussion keeps coming back to why that matters in shipping hardware: acoustic limits, dust-tolerant airflow paths, and multi-year reliability testing.

On client compute, the theme is turning passively cooled form factors into sustained-TDP systems. A Qualcomm reference mini PC built around Snapdragon X2 Elite uses three AirJet Mini G2 units to support about a 25 W thermal envelope in a sub-10 mm chassis, and similar integration patterns are shown for ultra-thin notebooks and tablet-class devices. The engineering win is not a single peak score, but fewer throttle cliffs during long exports, compilation, and on-device AI inference.

Where rugged packaging matters, Frore shows how high static pressure can keep airflow viable through filters. A class-1 5G hotspot example pushes roughly 31 dBm transmit power and pairs it with Wi-Fi 7, yet stays pocket-size by using AirJet modules behind IP53-grade filtered vents; the company cites back pressure around 1750 Pa to move air even when the intake and exhaust are constrained. The same idea is applied to compact SSD enclosures aimed at sustained read/write bandwidth, and to industrial cameras where vibration from a fan would blur imaging.

In the cloud segment, LiquidJet is positioned as a direct-to-chip coldplate built with 3D short-loop jet-channel microstructures, manufactured using semiconductor-style steps like lithography and bonding on metal wafers. By designing the internal jet geometry from an ASIC power map, more coolant can be directed at hotspot regions, with Frore citing support for very high local heat flux up to about 600 W/cm². The claimed upside is headroom to run accelerators cooler for efficiency, or to trade temperature margin for higher clocks, improving tokens per watt and overall data-center PUE at scale.
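The 600 W/cm² figure is easier to feel with a quick unit conversion (the hotspot areas below are illustrative choices, not quoted specs):

```python
def hotspot_power_w(flux_w_per_cm2: float, area_mm2: float) -> float:
    """Total heat through a hotspot of a given area at a given flux.
    1 cm^2 = 100 mm^2, hence the division."""
    return flux_w_per_cm2 * area_mm2 / 100.0

print(hotspot_power_w(600, 25))   # a 5x5 mm hotspot at 600 W/cm^2: 150 W
print(hotspot_power_w(600, 100))  # a full 1 cm^2 region at that flux: 600 W
```

That is the point of designing jet geometry from the ASIC power map: whole-die average flux is far lower, but small regions can demand this kind of local removal.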


source https://www.youtube.com/watch?v=ZQ8-D-xn7rQ

Gravity Universe Time 720° magnetic levitation clock with planetary time modes + app

Posted by – January 11, 2026
Category: Exclusive videos

Gravity is a Shenzhen-based startup building “sci-fi to function” desk objects, and this video focuses on Universe Time: a timepiece that behaves more like a kinetic display than a traditional clock, with a floating sphere acting as the moving indicator for how time “flows” across different reference frames and places. The core idea is to make time feel physical: you watch a miniature “planet” move rather than just reading digits. https://www.gravityplayer.com/

Universe Time uses a controlled magnetic levitation system to keep a metallic sphere hovering while it repositions itself to a target angle, then locks into a stable hover again; the demo also shows how the mechanism can articulate through wide orientation changes, including a 720° motion sequence and a 6-DoF-style movement envelope while maintaining levitation. The interview was filmed at CES Las Vegas 2026, which fits the product’s intersection of consumer hardware, industrial design, and playful physics for home setups.

On the software side, the companion app turns the display into a “universe time” selector: you can switch between time zones or choose planet-based presets and watch the sphere accelerate to the new setpoint, then settle with closed-loop stabilization. The interface also exposes visual tuning such as LED color themes, plus time display modes where the orbit maps to hours, minutes, or a seconds cadence, so the motion becomes the readout rather than a conventional hand set.
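
The accelerate-then-settle behavior described above is classic closed-loop setpoint control. A minimal sketch, assuming a simple PD loop with invented gains and dynamics (not Gravity's actual controller, which also stabilizes hover height in 3D):

```python
# PD setpoint control sketch: drive a levitated sphere's angle to a new
# target, then hold it there. Gains, dynamics, and units are illustrative.

def pd_step(angle, velocity, target, kp=8.0, kd=3.0, dt=0.01):
    """One control step: returns new (angle, velocity) after applying torque."""
    torque = kp * (target - angle) - kd * velocity
    velocity += torque * dt
    angle += velocity * dt
    return angle, velocity

angle, velocity, target = 0.0, 0.0, 90.0  # e.g. switching to a new time zone
for _ in range(2000):                     # 20 simulated seconds
    angle, velocity = pd_step(angle, velocity, target)
print(f"settled at {angle:.1f} deg")      # converges near the 90 deg setpoint
```

Recalibration after changing the sphere's mass or finish amounts to re-tuning gains like `kp`/`kd` (and the hover-height loop) so the closed loop stays stable.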

A practical engineering thread in the conversation is calibration: levitation height is configurable (the demo mentions roughly 1 cm hover with options up to about 7 cm), but changing mass, finish, or geometry can require recalculating control parameters for magnetic stability. They also mention how paints and surface treatments can perturb the magnetic field and sensor feedback, which is why “planet skins” and textured finishes become a non-trivial materials problem rather than just decoration, and why customization is treated as a premium, order-defined setup for now.

Behind the scenes, Gravity’s productization looks like a modern IoT pipeline: cloud + app + device identity, with OTA firmware updates and certificate-based onboarding, supporting a connected device that is as much embedded control as it is décor. The same levitation stack is shown branching into other categories (lighting, a levitating desk lamp form, audio speaker concepts, wall-mounted floating pieces, and levitating rocket collectibles), suggesting a platform approach where the control electronics, sensing, and magnetic actuation get reused across new form factors.

source https://www.youtube.com/watch?v=WGSCPc3uwJc

Artly Barista Bot: imitation learning, motion-capture training, autonomous latte art

Posted by – January 11, 2026
Category: Exclusive videos

Artly positions its VA platform as a “robot training school” for physical AI: instead of scripting a single demo, they build a reusable skill library that can drive a robotic barista workflow and then expand into other manipulation tasks. In this interview, CEO/co-founder Yushan Chen frames the coffee system as the first high-volume application, where the robot has to execute a full sequence—grind dose, tamp, pull espresso, steam milk, pour, and finish latte art—with repeatable timing and tool handling. https://www.artly.ai/

A key technical idea here is learning-from-demonstration (imitation learning): an engineer performs the task while wearing sensors (motion capture / teleoperation style inputs), and the robot later reproduces the same trajectories. During training, the platform records synchronized action data plus camera streams, then uses perception to re-localize target objects at runtime. In the demo, the arm-mounted vision stack identifies items like oranges and apples and closes the loop so the robot can continue a pick-and-place motion even when the scene is slightly different each try.
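
The record-then-re-localize loop can be sketched in a few lines. This toy 2D version assumes the demonstration ends at the target object and that perception supplies a new object position at runtime; the names and simplifications are mine, not Artly's stack:

```python
# Toy shape of learning-from-demonstration: record a demonstrated trajectory,
# then replay it shifted to wherever perception re-localizes the target.

def record_demo(waypoints):
    """Store the demonstrated gripper path relative to the demo target."""
    target = waypoints[-1]  # assume the demo ends at the object
    return [(x - target[0], y - target[1]) for x, y in waypoints]

def replay(demo_relative, detected_target):
    """Re-anchor the stored trajectory on the newly detected object pose."""
    tx, ty = detected_target
    return [(dx + tx, dy + ty) for dx, dy in demo_relative]

demo = record_demo([(0.0, 0.5), (0.2, 0.3), (0.4, 0.1), (0.5, 0.0)])
path = replay(demo, detected_target=(0.9, 0.2))  # object moved since the demo
print(path[-1])  # gripper still ends on the detected object
```

Real imitation-learning policies generalize far beyond a rigid offset, but the core idea is the same: demonstrations define the motion, perception re-anchors it per execution.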

They also call out Intel RealSense depth cameras for object perception, which fits the need for 3D pose estimation, reach planning, and gentle grasp control around deformable objects. The robot detects failed grasps, retracts, and retries—suggesting basic recovery logic plus confidence checks that keep the arm from “committing” to a bad pickup. Even with a short training session (they mention about two minutes), you can see how fast a narrow, well-instrumented skill can be brought to a usable level.
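
The retract-and-retry behavior follows a common pattern, sketched below. `grasp()` and the confidence threshold are hypothetical stand-ins for the robot's real sensing and API, not Artly's implementation:

```python
# Grasp-retry sketch: attempt a grasp, check a confidence/force signal,
# retract and retry on failure, and give up after a bounded number of tries.

import random

def grasp() -> float:
    """Pretend grasp attempt returning a confidence score in [0, 1)."""
    return random.random()

def pick_with_retries(threshold: float = 0.7, max_attempts: int = 5) -> bool:
    for attempt in range(1, max_attempts + 1):
        confidence = grasp()
        if confidence >= threshold:
            print(f"attempt {attempt}: grasp ok ({confidence:.2f})")
            return True
        print(f"attempt {attempt}: low confidence ({confidence:.2f}), retracting")
    return False  # give up and flag for recovery / human help

random.seed(3)
pick_with_retries()
```

Bounding the retries is what keeps the arm from "committing" to a bad pickup indefinitely.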

Beyond the lab, Artly says it has around 40 deployments across North America, and the point of that footprint is data: every real execution can become additional training signal to refine the policy and improve robustness across different cups, fruit sizes, and counter layouts. The video itself was filmed at CES Las Vegas 2026, where this kind of closed-loop manipulation is showing up less as a novelty and more as a practical “physical AI” pattern for retail automation on the show floor.

Artly’s roadmap in the conversation is basically dexterity plus generality: better end-effectors (including more hand-like grippers), richer sensory feedback, and progressively harder latte-art patterns that demand tighter control of flow rate, tilt angle, and microfoam texture. If the platform can keep turning demonstrations into dependable, auditable skills—perception, grasping, tool use, and recovery—it becomes a template for other tasks like drink garnish or fresh-ingredient handling without changing the overall training loop, which is the interesting part to watch.

source https://www.youtube.com/watch?v=B_TZLnS5Mw8

Timekettle W4 AI Interpreter Earbuds: bone-voiceprint, 0.2s real-time dialogue

Posted by – January 11, 2026
Category: Exclusive videos

Timekettle introduces the W4 AI Interpreter Earbuds, built around a “shared-earbud” workflow for two-way, face-to-face translation. Two people each wear one earbud, while the phone app assigns languages and routes audio so each side hears the right translation stream. Voice pickup combines a bone-voiceprint (bone-conduction vibration) sensor with microphones to improve speech capture in loud places. https://www.timekettle.co/products/w4-ai-interpreter-earbuds

In the demo, the interviewer speaks Danish while Ela speaks Chinese, with left/right earbuds mapped to the two participants so the channels don’t get mixed. The intent is to avoid the stop-start feel of typical translator apps: you talk normally, and the system plays back the translated audio fast enough to keep eye contact and cadence. This clip was filmed at CES Las Vegas 2026, which is a useful stress test because booth floors are packed with competing voices and PA noise.
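
The left/right mapping can be sketched as a small router: each earbud is bound to a wearer and a language, and speech captured on one side is translated and played on the other. `translate()` here is a placeholder, not Timekettle's engine:

```python
# Shared-earbud routing sketch: two wearers, two languages, and utterances
# always cross to the other side's earbud in the other side's language.

def translate(text: str, src: str, dst: str) -> str:
    return f"[{src}->{dst}] {text}"  # stand-in for the real translation engine

PAIRING = {
    "left":  {"wearer": "interviewer", "lang": "da"},  # Danish
    "right": {"wearer": "Ela",         "lang": "zh"},  # Chinese
}

def route(utterance: str, heard_on: str) -> tuple[str, str]:
    """Translate speech captured on one earbud and route it to the other."""
    other = "right" if heard_on == "left" else "left"
    src = PAIRING[heard_on]["lang"]
    dst = PAIRING[other]["lang"]
    return other, translate(utterance, src, dst)

print(route("Hej, hvordan går det?", heard_on="left"))
```

Pinning each wearer to one channel is what keeps the two translation streams from mixing mid-conversation.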

On the spec side, Timekettle highlights ~0.2 s response time, “self-correcting” context-aware translation, and Babel OS 2.0 running on the companion app. Marketing claims include operation in environments up to roughly 100 dB and up to 98% translation accuracy, which in practice will vary by language pair, speaking style, and domain vocabulary. Language coverage is described as about 42–43 languages with roughly 95–96 accents, aimed at real conversational flow.

The company says it has been building translation earbuds for around 10 years, and the interview corrects the scale to about 150k users rather than “millions.” Pricing mentioned for W4 is $349, positioning it for travel, expo meetings, and quick multilingual coordination where hands-free audio separation beats passing a phone back and forth.

source https://www.youtube.com/watch?v=p_zxMXi8yUg

HumanBeam on faytech 86″ 4K touch TalkToMeAI: agentic avatar kiosk for clinics, resorts, training

Posted by – January 11, 2026
Category: Exclusive videos

HumanBeam is positioning “embodied AI” as a step beyond a text chatbot: a lifelike avatar trained on a defined knowledge base, delivered through a BeamBox-style 3D kiosk so the interaction feels like speaking with a front-desk companion. The emphasis is on agentic behavior—answering questions while also driving the next action (directions, check-in steps, forms, escalation), rather than dumping info and leaving the user to assemble the workflow. https://humanbeam.io/talktomeai

The demo was filmed at CES Las Vegas 2026 on the faytech booth, where the avatar runs on public-space display hardware instead of a typical monitor. faytech frames the install around an 86-inch 3840×2160 panel with optical bonding and infrared multi-touch, aimed at readability and durability for lobbies, clinics, and city kiosks. In the booth setup, they call out high-brightness operation (around the 1000-nit class) so the face and UI stay legible under show-floor lighting.

For hospitality, the avatar becomes a travel guide or concierge trained on resort and local content, designed for walk-up, high-volume conversation and multilingual coverage (they cite 27 languages). When requests cross policy, liability, or “needs a human” boundaries, the same channel can switch from AI to a live remote staff member via a beam-in handoff, keeping context and reducing friction for late-night arrivals or accessibility needs.

In education and healthcare, HumanBeam highlights virtual patient simulation for universities: configurable personas that let schools run repeatable ER and intake scenarios while observing how students ask questions and make decisions. On the operational side, the same interface can offload intake, wayfinding, and routine FAQ in a clinic, then escalate to a nurse or doctor only when needed—shifting humans away from admin loops and back toward empathy and triage.

A notable technical thread is “intent-based” interaction: the avatar infers what a visitor is trying to accomplish, captures qualified leads, and can surface context-relevant prompts without forcing a rigid script. The booth also acknowledges constraints such as needing reliable connectivity for some sessions, plus privacy and consent questions that come with vision cues, sentiment signals, and analytics in a public kiosk. The positioning is less “replace staff” and more “extend staff capacity” with a consistent, human-like front-end role.

source https://www.youtube.com/watch?v=UDp030pVkJg

Looking Glass hololuminescent display + faytech glasses-free 3D digital signage, 16″ FHD and 27″ 4K

Posted by – January 10, 2026
Category: Exclusive videos

Looking Glass and faytech walk through a new Hololuminescent Display (HLD) platform aimed at group-viewable, glasses-free 3D for digital signage and in-store product presentation. The core idea is a light-field optical stack that creates a fixed “holographic volume” while staying slim enough to mount like a normal screen, roughly under an inch thick on the shipping sizes. https://lookingglassfactory.com/hld-overview

The demo focuses on how parallax behaves in the real world: as you move your head, the background shifts naturally while a foreground layer can stay readable for UI, giving a hybrid of conventional 2D interface plus spatial content inside a visible “box.” Because it’s autostereoscopic and multi-view, it stays convincing for multiple people at once, and even reads well on camera for people filming the display.

They also outline the initial lineup and positioning versus earlier, more developer-centric light-field systems. HLD 16 is a 16-inch portrait display listed at 1080p, while HLD 27 is a 27-inch portrait display listed at 4K UHD, both designed for plug-and-play deployments and repeatable content loops. The pricing discussed is about $1,500 for the 16-inch unit and about $3,000 for the 27-inch unit.

On the deployment side, they frame HLD as a “taster” for retail endcaps and kiosks, with optional touchscreen integration through the faytech partnership, so a standard touch UI can sit alongside a floating 3D product render. Brightness is described around 500–600 nits in the booth context, with the implication that higher-brightness variants can be handled as specialty builds. This interview was filmed at CES Las Vegas 2026 inside the faytech booth area.

Finally, the conversation lands on AI-driven characters as a natural fit for spatial displays: Looking Glass previously built an early 3D chatbot concept (Lightforms) and now expects partners to bring modern LLM-driven agents onto this kind of hardware. The practical takeaway is that a conversational character, brand mascot, or guided product explainer becomes more “present” when it occupies depth in a shared viewing volume, even when driven by modest on-site compute like a tablet or signage player.

source https://www.youtube.com/watch?v=4DZaffaSbJU

Cuneflow E-Ink notebook demo: multimodal pen + audio, meeting library context, privacy

Posted by – January 10, 2026
Category: Exclusive videos

Cuneflow is building a voice+ink notebook that treats handwriting and audio as the primary inputs, then turns them into searchable transcripts and compact AI summaries. The core idea is simple: capture ideas at the speed of a pen, but keep the output as structured meeting notes you can actually retrieve later, without living in a laptop UI. https://www.cuneflow.com

Instead of being “just another notes app,” the device revolves around two surfaces: Meeting and Library. You import reference material into a Library (for example via Google Drive), and the notebook uses that corpus as context for summarizing what was said in a meeting. This interview was filmed at CES Las Vegas 2026, where the pitch is a focused, paper-like workflow rather than a full tablet experience.

On the hardware side, it’s an E-Ink display with a front light, running Android under a custom interface designed around pen input. You can write normal notes, but also draw quick symbols (a star, a smile, a scribble) to mark emphasis, and the system is meant to connect those marks to the audio timeline so key moments surface in the summary. Think multimodal note-taking: ink strokes + speech-to-text + semantic indexing, all in one place.
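
Linking marks to the audio timeline is essentially an interval lookup: each transcript segment covers a time span, and a pen mark tags the segment its timestamp falls inside. A minimal sketch with assumed data shapes (not Cuneflow's actual format):

```python
# Attach timestamped pen marks to the transcript segments they fall inside,
# so highlighted moments can be surfaced in the meeting summary.

segments = [  # (start_s, end_s, text) -- assumed transcript shape
    (0.0, 12.0, "intro and agenda"),
    (12.0, 47.0, "budget discussion"),
    (47.0, 80.0, "action items and owners"),
]
pen_marks = [(14.2, "star"), (51.0, "star"), (52.5, "scribble")]

def tag_segments(segments, marks):
    """Return transcript segments annotated with the marks inside their span."""
    tagged = []
    for start, end, text in segments:
        hits = [symbol for t, symbol in marks if start <= t < end]
        tagged.append({"text": text, "marks": hits})
    return tagged

for seg in tag_segments(segments, pen_marks):
    if seg["marks"]:
        print(seg["text"], "<-", seg["marks"])
```

A summarizer can then weight the tagged segments more heavily than untagged ones.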

Cuneflow also draws a boundary around collaboration: they’re not trying to replace Notion/Lark with real-time co-editing on the device, and they intentionally avoid pushing heavy typing on a glass keyboard. The point is low-friction capture during a meeting, with Wi-Fi sync as the transport layer (and the option to record even when connectivity is weak, then reconcile later). It’s a “capture first, organize later” model, tuned for speed and focus.

Security comes up quickly in any voice-transcription product, and they emphasize encryption plus compliance work, with an explicit stance that user data is not used to train their model. Processing is described as server-backed, but with a path for enterprises to host their own model if they need tighter control. On the roadmap: more microphones, a thinner chassis, newer compute silicon, and ongoing OTA software updates as the UI and summarization quality evolve.

source https://www.youtube.com/watch?v=RfqBNfbJkC0

VOCCI AI note-taking ring: tap highlights, 8h recording, 5m pickup, phone transcript

Posted by – January 10, 2026
Category: Exclusive videos

VOCCI (by Gyges Labs) is trying to make “capture mode” frictionless: a titanium smart ring that records conversations and turns them into searchable AI notes. Instead of pulling out a phone or opening a laptop, you double-click the ring’s button to start audio capture, then wear it like any other piece of jewelry — 3–5 g, designed for all-day use, with a charging case for top-ups.
https://vocci.ai/

What stands out is the interaction model: tap while recording to “highlight” a moment that matters, so the summary doesn’t treat every sentence as equal. Audio can stay on the ring until you sync to the companion app, where speech-to-text transcription feeds an AI agent that produces meeting notes, decisions, and action-style recaps, with prompts you can customize for your own reporting format.

From a systems perspective, Vocci is closer to a wearable voice recorder than a health ring: it targets roughly 8 hours of continuous recording and advertises an effective pickup range around 5 meters, aimed at classrooms or conference rooms. The interview was filmed at CES Las Vegas 2026, and the focus is on capturing natural dialogue without needing your phone in hand, then letting the phone do the heavier AI processing later.

Design details matter for a device you’ll actually keep on: aerospace-grade titanium for durability and skin tolerance, multiple sizes, and color options that should ship with the same base material. The team describes itself as California-based, and pricing wasn’t final in the demo, though early coverage suggests it may land under the $200 mark, positioning it against dedicated recorders and other “memory” wearables without forcing a bulky gadget vibe.

The bigger idea is selective recall: the ring becomes a physical “bookmark” for your brain, with highlight taps acting like labels for decisions, names, or sparks of insight. VOCCI says it plans to launch around mid-February, with broader shipping later in 2026, so the real test will be transcript quality in noisy spaces and how well the AI stays faithful to context when you need the note to be ready.

source https://www.youtube.com/watch?v=fv5YwgGCLeY

Ascentiz Modular Exoskeleton: swappable hip/knee assist, BodyOS open-source, USB-C API

Posted by – January 10, 2026
Category: Exclusive videos

Ascentiz is building a modular, belt-based exoskeleton that treats mobility assist like a plug-in platform: snap on a hip module for extra propulsion and energy return during walking, stairs, hills, and even running, then pair it with other modules when you need more support. In the demo, the hip assist is described as giving an extra push up or down slopes and cutting perceived effort by around 30%, with a swappable battery rated for about 10 hours or roughly 15.5 miles per pack. https://ascentizexo.com/

The interesting part is the architecture: a central control box acts as the “brain,” exposing a standard module interface and API, with physical connectivity shown as USB-C. Ascentiz calls the software layer BodyOS, framed as an open, developer-friendly “Android-like” stack for exoskeleton modules, so third parties can build compatible hip, knee, or upper-body attachments and still share sensing, power management, and coordinated control.
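
The control-box-plus-modules architecture can be sketched as a tiny plug-in registry: modules attach, and the "brain" fans one motion profile out to all of them. Class and method names here are invented for illustration, not the real BodyOS API:

```python
# Sketch of a modular exoskeleton "brain": modules register over a standard
# interface (e.g. a USB-C port), and the control box broadcasts one
# coordinated motion profile to every attached module.

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    joints: list[str]

    def apply_profile(self, profile: str) -> str:
        return f"{self.name}: assist tuned for '{profile}' on {self.joints}"

@dataclass
class ControlBox:
    modules: list[Module] = field(default_factory=list)

    def attach(self, module: Module) -> None:
        self.modules.append(module)

    def set_profile(self, profile: str) -> list[str]:
        """Fan one gait/motion profile out to all attached modules."""
        return [m.apply_profile(profile) for m in self.modules]

box = ControlBox()
box.attach(Module("hip-assist", ["hip_left", "hip_right"]))
box.attach(Module("knee-support", ["knee_left", "knee_right"]))
for line in box.set_profile("stairs"):
    print(line)
```

The "Android-like" framing suggests third-party modules would implement this kind of shared interface while the box owns sensing, power, and coordination.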

On the motion side, the system leans on onboard sensing and gait/motion algorithms to switch profiles for walking, uphill/downhill, stairs, running, or biking without feeling like a rigid robot frame. This interview was filmed at CES Las Vegas 2026, and the pitch is that consumer exosuits are finally getting small enough (higher power-density motors, better packaging, and quick-swap batteries) to be worn for real activities rather than lab demos.

Use cases go beyond “superhuman hiking”: camera operators hauling heavy rigs, workers lifting and carrying, and anyone who wants reduced fatigue across long days on foot. They also talk about assisted mobility for older adults and people with weak knees/legs, where added stability and strength could help lower fall risk, with a quick on/off setup time of around half a minute.

Commercially, Ascentiz positions the hip module as the entry point (quoted at $1,499 in this clip), with a knee-support module at $2,499, and optional upper-body pieces coming from partners via the same modular interface. They say they’ve completed a Kickstarter campaign around $2.5M with 2,000+ backers and are targeting mass production and initial deliveries around March, which will be a good real-world test of comfort, durability, and how well BodyOS can attract module makers at scale.

source https://www.youtube.com/watch?v=NCQdHno-234

Waterdrop A1 Reverse Osmosis Water Bar: 0.0001µm membrane, 6 temps, 5 volumes

Posted by – January 10, 2026
Category: Exclusive videos

Waterdrop’s A1 countertop reverse osmosis dispenser is shown as a self-contained way to turn tap water into temperature-controlled drinking water without plumbing. The front OLED screen gives direct control over temperature and dispense volume, with color-coded feedback (blue for cold, red/orange for hot) and a quick stop so you can dose for a cup, bottle, or thermos without guesswork. https://www.waterdropfilter.com/products/ro-hot-cold-water-dispenser-a1

Technically, the A1 is built around a multi-stage RO architecture: a pre-filter stage feeding a 0.0001 µm reverse osmosis membrane, plus UV sterilization intended to keep the internal tanks cleaner over time. Waterdrop’s published claims center on lowering TDS and reducing a wide spread of contaminants that matter in real tap water—PFAS (PFOA/PFOS), chlorine taste/odor, and heavy-metal ions among them—while listing a 2:1 pure-to-drain ratio and a 100 GPD (gallons-per-day) output class.
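
The 2:1 pure-to-drain ratio implies a recovery rate worth spelling out. A quick arithmetic sketch using only the figures quoted above:

```python
# Water budget implied by a 2:1 pure-to-drain RO ratio.

def ro_water_budget(pure_gal: float, pure_to_drain: float = 2.0):
    """Given pure output, return (drain gallons, feed gallons, recovery rate)."""
    drain = pure_gal / pure_to_drain
    feed = pure_gal + drain
    return drain, feed, pure_gal / feed

drain, feed, recovery = ro_water_budget(1.0)
print(f"1 gal pure -> {drain:.2f} gal drain, {feed:.2f} gal feed, "
      f"{recovery:.0%} recovery")
# At the rated 100 GPD class, a full day's pure output would send about
# 50 gallons to drain from roughly 150 gallons of feed water.
```

That ~67% recovery is why the separate wastewater compartment fills as you dispense, as the next paragraphs note.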

Where the product differentiates from a typical countertop purifier is the thermal layer on top of RO. You get six temperature presets spanning roughly 5°C to 95°C (41°F to 203°F), several fixed dispense volumes that auto-stop, and a child lock that must be held to unlock hot water, reducing burn risk. The UI also references modes like night mode, off-home mode, and altitude mode, and the demo, filmed at CES Las Vegas 2026, shows hot output arriving in seconds rather than a kettle-style wait.

Because it’s tank-based, portability is the key tradeoff: you refill a feed reservoir and periodically empty a separate wastewater compartment (a normal consequence of RO concentrate). Maintenance is designed around quick cartridge swaps—twist out and replace—with typical guidance of about 6 months for the composite filter and up to 12 months for the RO cartridge, depending on local water quality and how much you use it.

source https://www.youtube.com/watch?v=Z2iCDdi6_1M

Xthings Ultraloq Bolt Sense: palm vein + 3D face smart lock, Matter/Aliro, UWB

Posted by – January 10, 2026
Category: Exclusive videos

Xthings has spent more than a decade building smart access hardware that tries to feel “invisible”: you walk up, authenticate, and the door behaves like it understands intent. In this interview, the focus is on stacking multiple credentials—PIN, NFC tap, fingerprint, and now proximity plus computer vision—while keeping broad compatibility with mainstream ecosystems like Apple Home, Google Home, and Samsung SmartThings. https://xthings.com/

A big theme is proximity done properly. Their ultra-wideband (UWB) smart lock uses ranging to judge distance and approach direction, so it can unlock when you actually reach for the handle, not just because you walked nearby with a UWB phone. If you don’t have UWB, the same lineup supports NFC tap, keypad code entry, and (on some models) a physical key override, plus digital key sharing for households and small teams at the door.
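
The distinction between walking nearby and actually approaching can be expressed as a simple gate on a stream of UWB range samples: unlock only when the latest reading is within reach and recent readings are monotonically decreasing. The thresholds and the ranging feed here are assumptions for illustration, not Xthings' logic:

```python
# Approach-gated unlock sketch over UWB range samples (meters).

def should_unlock(ranges_m: list[float],
                  max_range_m: float = 1.0,
                  approach_samples: int = 3) -> bool:
    """True if the last samples are in reach AND consistently getting closer."""
    if len(ranges_m) < approach_samples:
        return False
    recent = ranges_m[-approach_samples:]
    in_reach = recent[-1] <= max_range_m
    approaching = all(a > b for a, b in zip(recent, recent[1:]))
    return in_reach and approaching

print(should_unlock([4.0, 3.1, 2.2, 1.4, 0.8]))  # walking up to the door
print(should_unlock([0.9, 0.9, 0.9]))            # loitering nearby
```

A production lock would also fold in direction-of-arrival and authentication, but the ranging gate is what prevents "unlock because a phone walked past."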

For higher assurance, Xthings is pushing multi-modal biometrics with Ultraloq Bolt Sense: palm-vein authentication plus 3D facial recognition. Palm vein ID typically uses near-infrared imaging to read sub-surface vascular patterns, which can work even with wet hands or in low light, and it’s generally harder to spoof than many surface-level biometrics. The conversation also touches on standards-first thinking, with newer locks like the Latch 7 Pro leaning on Matter over Thread for local control and Aliro-style interoperability for access credentials, while still offering familiar fallbacks.

The “Urban Guardian” concept stretches the same identity + sensing ideas into public space hardware. It’s presented as a self-contained safety node for streets or corporate campuses: solar panels charging an internal battery, 4G connectivity, 360° cameras, lighting, and an SOS/info interface, without trenching cables or deep backend integration. Practical touches like MagSafe-style wireless charging suggest it’s designed for real-world dwell time, not just passive monitoring at night.

On the monitoring side, the Ulticam camera line adds Matter-ready devices and Google Gemini-powered video understanding, shifting alerts from generic motion to more contextual summaries (like recognizing a delivery event). The lineup is positioned around details like 4K/HDR capture, wide field of view, two-way audio, and common installs such as PoE, alongside variants that emphasize floodlighting or longer-range wireless options. Filmed at CES Las Vegas 2026, the story here is less about one gadget and more about how access control, identity, and AI video context can converge into one cross-platform stack.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=5b35yUmj-kY

Beetles Gel Polish Zodiac Kit: UV/LED curing lamp, mini colors, DIY nail art

Posted by – January 10, 2026
Category: Exclusive videos

Beetles Gel Polish walks through a compact “Aucus” gift set built around zodiac themes: the color selection is meant to match a sign’s vibe, while the box bundles the core tools for a full gel manicure in one place. Alongside multiple mini gel bottles, you get a UV/LED nail lamp, nail art brushes, and small themed extras like pendants, so you’re not buying the essentials piece by piece. The smaller bottle format also makes it easier to treat this as a travel-friendly kit rather than a drawer full of full-size bottles. https://beetlesgel.com/

From a technical angle, the pitch is about an accessible soak-off gel workflow for DIY users: thin, controlled layers, LED/UV curing with consistent exposure, and enough pigment variety to build simple looks without mixing. A bundled lamp matters because cure quality is what drives wear time and scratch resistance; under-curing can leave soft layers, while over-curing can make removal harder. If you’re doing gel at home, the usual best practice still applies: prep and dehydrate the nail plate, keep product off skin (allergen risk), fully cure each layer, then remove with an acetone soak-off and gentle pushing, followed by basic skin care.

The brand positioning here is “online-first but moving into shelves,” starting from Amazon popularity and social discovery, then expanding toward big-box and pharmacy retail. In the interview, they mention Walmart and Target, with CVS referenced as an upcoming channel, which is a typical path for consumer beauty brands once packaging, compliance, and merchandising are ready. The conversation was filmed at CES Las Vegas 2026 during the Impact Global Connect event, so it’s framed as a quick booth intro rather than a long-form tutorial, with the focus on what’s inside the box and how it fits DIY habits in the US market.

What’s most interesting is how the product strategy leans into frequent seasonal drops: rotating curated palettes (often 20–30+ colors in mini format) plus small “collector” elements, making the set feel like a ready-made gift or starter pack. If you’re used to pro salon systems with larger bottles and strict base/top coat pairings, this is aimed more at variety and convenience than at building a single locked-in system. They also note that Europe isn’t the current launch focus, so for now the availability story is primarily US retail and online, with the kit concept built around quick, complete setups you can actually finish at home today.

I’m publishing 100+ videos from CES 2026, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=wN94ATrY1qs