Author:


AGIBOT Panda D1 quadruped demo: backflip, push-ups, dynamic gait control, jumping

Posted by – January 12, 2026
Category: Exclusive videos

AGIBOT’s panda-themed quadruped turns a playful mascot into a serious locomotion demo, switching modes on command and showing how much control bandwidth modern legged robots can deliver. What you’re really watching is a tight loop of sensing and actuation—stable stance, fast transitions, and short bursts of dynamic motion—packaged in a friendly shell that makes the mechanics easier to notice. https://www.agibot.com/

In the clip the “panda” drops into push-ups, pops back up, hops, pivots toward the camera, and runs a backflip routine—moves that depend on whole-body control, inertial measurement (IMU) feedback, foot-contact timing, and careful torque/position control across multiple joints. Even when it looks like a party trick, it stress-tests balance recovery, trajectory planning, and impact management during takeoff and landing.

AGIBOT frames these platforms as part of a broader embodied-AI stack, spanning its D1 quadruped line and humanoid families such as A2 and X2, plus work on training data and “interaction + manipulation + locomotion” as a unified control problem. That context matters, because the same perception-to-control plumbing behind a stunt can be repurposed for patrol, guided interaction, or repeatable navigation tasks, and this quick panda demo was filmed on the CES Las Vegas 2026 show floor.

The fun moment is when the panda “comes after” the operator, but it also hints at the real gap between impressive locomotion and useful home autonomy: chores like laundry or cooking need robust perception, safe force control, and reliable manipulation, not just agile gait. Treat this video as a snapshot of where legged robotics is getting very capable—dynamic stability, motion primitives, and user-triggered behaviors—while the hard part is turning that athletic base into dependable everyday help.

I’m publishing 100+ videos from CES 2026, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=x2n3kB_19mg

SenseRobot chess robot: screen-free robotic arm board, blitz mode, Chess Mini for kids

Posted by – January 12, 2026
Category: Exclusive videos

SenseRobot’s pitch is simple: bring board-game engines back into the real world with a screen-free tabletop robot that physically moves pieces on a real board, so practice feels closer to over-the-board play than tapping on an app. In this demo you see it set up across multiple tables, with support not only for chess but also checkers/draughts and Chinese chess (Xiangqi), switching boards while keeping the same “move a piece, press go, robot replies” flow. https://www.senserobotchess.com/

What makes it interesting technically is the closed-loop interaction: the system has to sense the current board state, validate your move, and then execute its reply with a small robotic arm and gripper while staying aligned to squares. When an illegal or clearly losing move happens, the robot flags it as a mistake and can restore the position, which implies some combination of move-history tracking and board-state verification rather than blindly trusting the user. That mix of physical HRI, motion control, and rules enforcement is the core of the product story today.
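SenseRobot’s internals aren’t public, but the sense–validate–execute loop with rollback can be sketched minimally. Everything below is a hypothetical stand-in: the `BoardTracker` class, and the deliberately simplified legality check (a piece of the right colour must sit on the source square), substitute for full rules enforcement.

```python
# Minimal sketch of the validate-then-execute loop described above.
# A real product enforces full game rules; here we only check that a
# piece of the right colour is on the source square, and we keep a
# move history so an illegal or interrupted move can be rolled back
# before the arm is allowed to actuate.

class BoardTracker:
    def __init__(self, position):
        self.position = dict(position)   # square -> piece, e.g. "e2" -> "wP"
        self.history = []                # stack of (src, dst, piece, captured)

    def try_move(self, src, dst, colour):
        piece = self.position.get(src)
        if piece is None or piece[0] != colour or src == dst:
            return False                 # flagged as a mistake, arm stays put
        self.history.append((src, dst, piece, self.position.get(dst)))
        self.position[dst] = piece
        del self.position[src]
        return True                      # state verified, robot may reply

    def restore_last(self):
        """Undo the most recent move, restoring any captured piece."""
        if not self.history:
            return
        src, dst, piece, captured = self.history.pop()
        self.position[src] = piece
        if captured is not None:
            self.position[dst] = captured
        else:
            del self.position[dst]

board = BoardTracker({"e2": "wP", "e7": "bP"})
print(board.try_move("e2", "e4", "w"))   # plausible move: accepted
print(board.try_move("d2", "d4", "w"))   # empty source square: rejected
board.restore_last()                     # roll the position back
print(board.position)
```

The same push/pop pattern is what makes “restore the position” cheap: the history stack is the single source of truth for undoing physical moves.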

Midway through the interview, filmed at CES Las Vegas 2026, the focus shifts from “robot opponent” to “robot coach.” The rep claims a wide range of difficulty levels from beginner up to grandmaster, plus training value that’s different from playing on a phone: you get a tactile board, a consistent practice partner, and less eye strain than a screen-first chess routine. They also reference a partnership with the European Chess Union, framing the device as a structured way to build confidence before facing human opponents.

There are a few practical engineering moments too: the arm is presented as stable and safe for home use, and when the host interrupts the motion, the robot pauses and finds its way back, hinting at basic obstruction handling, path recovery, and “return-to-home” style behaviors. The rep also mentions a blitz mode in newer products, which raises the bar on motor speed, acceleration limits, and reliable piece pickup and placement at higher tempo without sacrificing safety.

On roadmap and commercial details, they say they’ve sold around 20,000 units globally, built the robot over roughly four years, and that the North America “basic” version sits around the $1,000 mark with availability through mainstream retail. The notable next step is a smaller, more affordable Chess Mini aimed at kids, with talk of extra kid-focused features like STEAM-style programming hooks on top of board-game play, which could turn the robot into a gateway for both chess training and robotics literacy.

source https://www.youtube.com/watch?v=m8f30MFbLiA

Napster Station kiosk with faytech: VoiceField mic array + embodied AI concierge, Napster View 3D

Posted by – January 12, 2026
Category: Exclusive videos

Napster is being reframed here as a “streaming expertise” product: a library of domain AI companions you meet in real-time video instead of text chat. The demo focuses on embodied agents for tech support, fitness coaching, and personal guidance, plus digital twins that can mirror a real person and optionally escalate to a live call. The pitch is simple UX: talk naturally, keep context, and let the system handle the tool-wrangling under the hood. https://www.napster.ai/view

On desktop, the centerpiece is Napster View, a small clip-on display for Mac that uses a lenticular multi-view optical stack to create glasses-free stereoscopic depth, so an agent appears “above” your main screen and keeps eye contact. The team describes combining a custom lens with rendering tuned for multiple viewpoints to keep parallax consistent and reduce visual fatigue, with USB-C power and a low-cost hardware entry point. The footage is shot during CES Las Vegas 2026, where spatial UI for everyday computer work is turning into a practical form factor.

Software-wise, View is paired with a companion app that can see you, and—when you grant permission—see what’s on your screen for situational awareness. That enables screen-guided help (for example, learning macOS app workflows quickly) and artifact generation like emails, plans, or images from what the model observes. They also preview “gated” control of macOS actions (launching apps, manipulating documents, editing media) with extra testing and safety checks, because automation shifts from advice to execution.

The same conversational layer is used for generative media: you pick a genre and scenario, and an AI “artist” produces lyrics, cover art, and multiple song variants, then returns them through the UI as shareable assets. The transcript stresses a model-agnostic approach—swapping underlying LLM or music models as they improve—so users don’t need to track the fast-moving ecosystem. It’s a clear example of orchestration: multimodal input, structured outputs, and lightweight creative iteration in one place.

For public spaces, Napster Station extends the idea into a kiosk: camera-triggered interaction plus a near-field microphone array meant to isolate the voice of the person directly in front, even in loud environments. The pitch is “AI outside the browser,” where an embodied concierge can drive existing web surfaces (retail, airports, hotels, venues) by taking a spoken intent and executing steps like a digital employee. Technically it’s a blend of UX, audio DSP, vision, and agent workflows tuned for a crowded trade-show floor.
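Napster’s actual kiosk DSP isn’t disclosed, but the near-field isolation idea can be illustrated with a toy delay-and-sum beamformer. The array geometry, sample delays, and signals below are invented for illustration: a talker directly in front reaches every mic at the same time and sums coherently, while an off-axis noise source arrives with a per-mic delay and largely cancels.

```python
import math

# Toy delay-and-sum beamformer for a 4-mic linear array (broadside
# steering). The voice of the person in front arrives aligned at all
# mics; a side noise source arrives 3 samples later at each successive
# mic (a quarter period here), so averaging suppresses it.

def tone(n, period, shift=0):
    """Sampled sine wave; `shift` models an arrival delay in samples."""
    return [math.sin(2 * math.pi * (i + shift) / period) for i in range(n)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def delay_and_sum(signals):
    """Average the mic channels sample-by-sample (zero steering delays)."""
    n = len(signals[0])
    return [sum(sig[i] for sig in signals) / len(signals) for i in range(n)]

N = 480
target_mics = [tone(N, 12) for _ in range(4)]              # aligned arrivals
noise_mics = [tone(N, 12, shift=3 * m) for m in range(4)]  # 90 deg per mic

target_out = delay_and_sum(target_mics)
noise_out = delay_and_sum(noise_mics)
print(round(rms(target_out), 3), round(rms(noise_out), 3))
```

Real products add steering delays, adaptive filtering, and echo cancellation on top, but the core geometry trick is the same: coherent gain on-axis, destructive interference off-axis.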

source https://www.youtube.com/watch?v=RN8xqMVZ7aE

Frore Systems Booth Tour at CES 2026, From Edge to Cloud, AirJet Mini G2, AirJet PAK, LiquidJet

Posted by – January 11, 2026
Category: Exclusive videos

Thermal design is becoming a first-order limiter for thin clients, rugged edge boxes, and AI racks, and Frore Systems frames AirJet and LiquidJet as two complementary ways to raise sustained power without reverting to bulky fans or oversized heat sinks. The tour connects solid-state active airflow at the device level with direct-to-chip liquid cooling at the rack level, focusing on steady-state thermal envelope instead of brief boost behavior. https://www.froresystems.com/products/airjet-r-mini-g2

Later in the video, filmed at CES Las Vegas 2026, AirJet Mini G2 is presented as a sealed, solid-state active cooling module roughly 2.65 mm thick that targets about 7.5 W of heat removal while consuming about 1 W. Gen 2 is described as a ~50% heat-removal step over the first AirJet Mini, and the discussion keeps coming back to why that matters in shipping hardware: acoustic limits, dust-tolerant airflow paths, and multi-year reliability testing.

On client compute, the theme is turning passively cooled form factors into sustained-TDP systems. A Qualcomm reference mini PC built around Snapdragon X2 Elite uses three AirJet Mini G2 units to support about a 25 W thermal envelope in a sub-10 mm chassis, and similar integration patterns are shown for ultra-thin notebooks and tablet-class devices. The engineering win is not a single peak score, but fewer throttle cliffs during long exports, compilation, and on-device AI inference.
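Plugging in the figures quoted above (7.5 W removed per AirJet Mini G2 at about 1 W input, three modules in the Snapdragon mini PC) gives a quick sanity check on the ~25 W envelope. The split between active modules and passive chassis conduction is an assumption for illustration, not a Frore specification.

```python
# Back-of-envelope thermal budget from the numbers in the video.
heat_per_module_w = 7.5     # claimed heat removal per AirJet Mini G2
power_per_module_w = 1.0    # claimed electrical input per module
modules = 3                 # modules in the Snapdragon X2 Elite mini PC

active_removal_w = modules * heat_per_module_w            # 22.5 W moved actively
heat_moved_per_watt = heat_per_module_w / power_per_module_w  # 7.5 W per W spent
passive_margin_w = 25.0 - active_removal_w                # remainder left to the
                                                          # chassis (assumed split)
print(active_removal_w, heat_moved_per_watt, passive_margin_w)
```

The interesting ratio is the middle one: each watt spent on the module moves several watts of heat, which is why the chassis can stay passive-looking while sustaining a higher TDP.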

Where rugged packaging matters, Frore shows how high static pressure can keep airflow viable through filters. A class-1 5G hotspot example pushes roughly 31 dBm transmit power and pairs it with Wi-Fi 7, yet stays pocket-size by using AirJet modules behind IP53-grade filtered vents; the company cites back pressure around 1750 Pa to move air even when the intake and exhaust are constrained. The same idea is applied to compact SSD enclosures aimed at sustained read/write bandwidth, and to industrial cameras where vibration from a fan would blur imaging.

In the cloud segment, LiquidJet is positioned as a direct-to-chip coldplate built with 3D short-loop jet-channel microstructures, manufactured using semiconductor-style steps like lithography and bonding on metal wafers. By designing the internal jet geometry from an ASIC power map, more coolant can be directed at hotspot regions, with Frore citing support for very high local heat flux up to about 600 W/cm². The claimed upside is headroom to run accelerators cooler for efficiency, or to trade temperature margin for higher clocks, improving tokens per watt and overall data-center PUE at scale.
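The 600 W/cm² number above is a local heat-flux limit, so the power it corresponds to depends on hotspot area. A worked example: the 2 mm × 2 mm hotspot and the 10 K coolant temperature rise below are illustrative assumptions, not figures from the video.

```python
# What 600 W/cm^2 of local heat flux means for one small hotspot, and
# the coolant mass flow needed to carry that heat away at a 10 K rise.
heat_flux_w_per_cm2 = 600.0
hotspot_area_cm2 = 0.2 * 0.2            # assumed 2 mm x 2 mm patch
hotspot_power_w = heat_flux_w_per_cm2 * hotspot_area_cm2

# Water-like coolant properties (assumed): cp ~ 4186 J/(kg K).
cp_j_per_kg_k = 4186.0
delta_t_k = 10.0
flow_kg_per_s = hotspot_power_w / (cp_j_per_kg_k * delta_t_k)

print(hotspot_power_w, round(flow_kg_per_s * 1000, 3))  # W, grams/second
```

The takeaway is that the flux limit is about concentrating flow where the ASIC power map says it is needed, since a few square millimeters at that flux already amounts to tens of watts.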

source https://www.youtube.com/watch?v=ZQ8-D-xn7rQ

Gravity Universe Time 720° magnetic levitation clock with planetary time modes + app

Posted by – January 11, 2026
Category: Exclusive videos

Gravity is a Shenzhen-based startup building “sci-fi to function” desk objects, and this video focuses on Universe Time: a timepiece that behaves more like a kinetic display than a traditional clock, with a floating sphere acting as the moving indicator for how time “flows” across different reference frames and places. The core idea is to make time feel physical: you watch a miniature “planet” move rather than just reading digits. https://www.gravityplayer.com/

Universe Time uses a controlled magnetic levitation system to keep a metallic sphere hovering while it repositions itself to a target angle, then locks into a stable hover again; the demo also shows how the mechanism can articulate through wide orientation changes, including a 720° motion sequence and a 6-DoF style movement envelope while maintaining levitation. The interview is filmed at CES Las Vegas 2026, which fits the product’s intersection of consumer hardware, industrial design, and playful physics for home setups.

On the software side, the companion app turns the display into a “universe time” selector: you can switch between time zones or choose planet-based presets and watch the sphere accelerate to the new setpoint, then settle with closed-loop stabilization. The interface also exposes visual tuning such as LED color themes, plus time display modes where the orbit maps to hours, minutes, or a seconds cadence, so the motion becomes the readout rather than a conventional set of clock hands.
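The setpoint-and-settle behavior can be sketched as a one-dimensional closed-loop hover. The plant model, PD gains, gravity feed-forward, and timestep below are illustrative stand-ins, not Gravity’s actual levitation controller.

```python
# 1-D sketch of closed-loop hover: a PD controller plus gravity
# feed-forward commands coil lift so the "sphere" settles at a new
# height setpoint. All constants are illustrative assumptions.

def simulate(z0, z_set, steps=4000, dt=0.001):
    g = 9.81                 # m/s^2, gravity pulling the sphere down
    z, v = z0, 0.0           # hover height (m) and vertical velocity (m/s)
    kp, kd = 400.0, 40.0     # PD gains (critically damped for this plant)
    for _ in range(steps):
        lift = g + kp * (z_set - z) - kd * v   # commanded coil acceleration
        v += (lift - g) * dt                   # net acceleration -> velocity
        z += v * dt                            # velocity -> position
    return z

# Retarget the hover from the ~1 cm default toward the ~7 cm option.
print(round(simulate(0.01, 0.07), 4))
```

The calibration issue raised in the interview maps directly onto this sketch: change the sphere’s mass or finish and the effective plant changes, so the gains and feed-forward have to be re-derived for the hover to stay stable.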

A practical engineering thread in the conversation is calibration: levitation height is configurable (the demo mentions roughly 1 cm hover with options up to about 7 cm), but changing mass, finish, or geometry can require recalculating control parameters for magnetic stability. They also mention how paints and surface treatments can perturb the magnetic field and sensor feedback, which is why “planet skins” and textured finishes become a non-trivial materials problem rather than just decoration, and why customization is treated as a premium, order-defined setup for now.

Behind the scenes, Gravity’s productization looks like a modern IoT pipeline: cloud + app + device identity, with OTA firmware updates and certificate-based onboarding, supporting a connected device that is as much embedded control as it is décor. The same levitation stack is shown branching into other categories (lighting, a levitating desk lamp form, audio speaker concepts, wall-mounted floating pieces, and levitating rocket collectibles), suggesting a platform approach where the control electronics, sensing, and magnetic actuation get reused across new form factors.

source https://www.youtube.com/watch?v=WGSCPc3uwJc

Artly Barista Bot: imitation learning, motion-capture training, autonomous latte art

Posted by – January 11, 2026
Category: Exclusive videos

Artly positions its VA platform as a “robot training school” for physical AI: instead of scripting a single demo, they build a reusable skill library that can drive a robotic barista workflow and then expand into other manipulation tasks. In this interview, CEO/co-founder Yushan Chen frames the coffee system as the first high-volume application, where the robot has to execute a full sequence—grind dose, tamp, pull espresso, steam milk, pour, and finish latte art—with repeatable timing and tool handling. https://www.artly.ai/

A key technical idea here is learning-from-demonstration (imitation learning): an engineer performs the task while wearing sensors (motion capture / teleoperation style inputs), and the robot later reproduces the same trajectories. During training, the platform records synchronized action data plus camera streams, then uses perception to re-localize target objects at runtime. In the demo, the arm-mounted vision stack identifies items like oranges and apples and closes the loop so the robot can continue a pick-and-place motion even when the scene is slightly different each try.
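The record-and-replay idea can be shown in miniature: a demonstrated end-effector path is stored, then replayed after perception reports that the target object has shifted. The waypoint format and the planar offset transform below are invented stand-ins for Artly’s synchronized motion-capture data and real 3D pose re-localization.

```python
# Toy learning-from-demonstration loop: record a timestamped
# end-effector path, then replay it shifted by the offset the
# perception stack reports for the target object at runtime.

def record_demo():
    # (t, x, y) waypoints captured during the human demonstration (m).
    return [(0.0, 0.00, 0.00), (0.5, 0.10, 0.05), (1.0, 0.20, 0.10)]

def replay(demo, target_offset):
    """Shift every waypoint by the freshly perceived object offset."""
    dx, dy = target_offset
    return [(t, x + dx, y + dy) for t, x, y in demo]

demo = record_demo()
# Perception says the fruit sits 3 cm right and 1 cm forward of the
# pose seen during training, so the whole trajectory is retargeted.
plan = replay(demo, (0.03, 0.01))
print(plan[-1])
```

Real systems retarget in joint space with full 6-DoF poses and add grasp-confidence checks, but the division of labor is the same: demonstrations supply the trajectory shape, and perception supplies the per-execution correction.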

They also call out Intel RealSense depth cameras for object perception, which fits the need for 3D pose estimation, reach planning, and gentle grasp control around deformable objects. The robot detects failed grasps, retracts, and retries—suggesting basic recovery logic plus confidence checks that keep the arm from “committing” to a bad pickup. Even with a short training session (they mention about two minutes), you can see how fast a narrow, well-instrumented skill can be brought to a usable level.

Beyond the lab, Artly says it has around 40 deployments across North America, and the point of that footprint is data: every real execution can become additional training signal to refine the policy and improve robustness across different cups, fruit sizes, and counter layouts. The video itself was filmed at CES Las Vegas 2026, where this kind of closed-loop manipulation is showing up less as a novelty and more as a practical “physical AI” pattern for retail automation on the show floor.

Artly’s roadmap in the conversation is basically dexterity plus generality: better end-effectors (including more hand-like grippers), richer sensory feedback, and progressively harder latte-art patterns that demand tighter control of flow rate, tilt angle, and microfoam texture. If the platform can keep turning demonstrations into dependable, auditable skills—perception, grasping, tool use, and recovery—it becomes a template for other tasks like drink garnish or fresh-ingredient handling without changing the overall training loop too much, which is the interesting part to watch.

source https://www.youtube.com/watch?v=B_TZLnS5Mw8

Timekettle W4 AI Interpreter Earbuds: bone-voiceprint, 0.2s real-time dialogue

Posted by – January 11, 2026
Category: Exclusive videos

Timekettle introduces the W4 AI Interpreter Earbuds, built around a “shared-earbud” workflow for two-way, face-to-face translation. Two people each wear one earbud, while the phone app assigns languages and routes audio so each side hears the right translation stream. Voice pickup combines a bone-voiceprint (bone-conduction vibration) sensor with microphones to improve speech capture in loud places. https://www.timekettle.co/products/w4-ai-interpreter-earbuds

In the demo, the interviewer speaks Danish while Ela speaks Chinese, with left/right earbuds mapped to the two participants so the channels don’t get mixed. The intent is to avoid the stop-start feel of typical translator apps: you talk normally, and the system plays back the translated audio fast enough to keep eye contact and cadence. This clip was filmed at CES Las Vegas 2026, which is a useful stress test because booth floors are packed with competing voices and PA noise.

On the spec side, Timekettle highlights ~0.2 s response time, “self-correcting” context-aware translation, and Babel OS 2.0 running on the companion app. Marketing claims include operation in environments up to roughly 100 dB and up to 98% translation accuracy, which in practice will vary by language pair, speaking style, and domain vocabulary. Language coverage is described as about 42–43 languages with roughly 95–96 accents, aimed at real conversational flow.

The company says it has been building translation earbuds for around 10 years, and the interview corrects the scale to about 150k users rather than “millions.” Pricing mentioned for W4 is $349, positioning it for travel, expo meetings, and quick multilingual coordination where hands-free audio separation beats passing a phone back and forth.

source https://www.youtube.com/watch?v=p_zxMXi8yUg

HumanBeam on faytech 86″ 4K touch TalkToMeAI: agentic avatar kiosk for clinics, resorts, training

Posted by – January 11, 2026
Category: Exclusive videos

HumanBeam is positioning “embodied AI” as a step beyond a text chatbot: a lifelike avatar trained on a defined knowledge base, delivered through a BeamBox-style 3D kiosk so the interaction feels like speaking with a front-desk companion. The emphasis is on agentic behavior—answering questions while also driving the next action (directions, check-in steps, forms, escalation), rather than dumping info and leaving the user to assemble the workflow. https://humanbeam.io/talktomeai

The demo was filmed at CES Las Vegas 2026 on the faytech booth, where the avatar runs on public-space display hardware instead of a typical monitor. faytech frames the install around an 86-inch 3840×2160 panel with optical bonding and infrared multi-touch, aimed at readability and durability for lobbies, clinics, and city kiosks. In the booth setup, they call out high-brightness operation (around the 1000-nit class) so the face and UI stay legible under show-floor lighting.

For hospitality, the avatar becomes a travel guide or concierge trained on resort and local content, designed for walk-up, high-volume conversation and multilingual coverage (they cite 27 languages). When requests cross policy, liability, or “needs a human” boundaries, the same channel can switch from AI to a live remote staff member via a beam-in handoff, keeping context and reducing friction for late-night arrivals or accessibility needs.

In education and healthcare, HumanBeam highlights virtual patient simulation for universities: configurable personas that let schools run repeatable ER and intake scenarios while observing how students ask questions and make decisions. On the operational side, the same interface can offload intake, wayfinding, and routine FAQ in a clinic, then escalate to a nurse or doctor only when needed—shifting humans away from admin loops and back toward empathy and triage.

A notable technical thread is “intent-based” interaction: the avatar infers what a visitor is trying to accomplish, captures qualified leads, and can surface context-relevant prompts without forcing a rigid script. The booth also acknowledges constraints such as needing reliable connectivity for some sessions, plus privacy and consent questions that come with vision cues, sentiment signals, and analytics in a public kiosk. The positioning is less “replace staff” and more “extend staff capacity” with a consistent, human-like front-end role.

source https://www.youtube.com/watch?v=UDp030pVkJg

Looking Glass hololuminescent display + faytech glasses-free 3D digital signage, 16″ FHD and 27″ 4K

Posted by – January 10, 2026
Category: Exclusive videos

Looking Glass and faytech walk through a new Hololuminescent Display (HLD) platform aimed at group-viewable, glasses-free 3D for digital signage and in-store product presentation. The core idea is a light-field optical stack that creates a fixed “holographic volume” while staying slim enough to mount like a normal screen, roughly under an inch thick on the shipping sizes. https://lookingglassfactory.com/hld-overview

The demo focuses on how parallax behaves in the real world: as you move your head, the background shifts naturally while a foreground layer can stay readable for UI, giving a hybrid of conventional 2D interface plus spatial content inside a visible “box.” Because it’s autostereoscopic and multi-view, it stays convincing for multiple people at once, and even reads well on camera for people filming the display.

They also outline the initial lineup and positioning versus earlier, more developer-centric light-field systems. HLD 16 is a 16-inch portrait display listed at 1080p, while HLD 27 is a 27-inch portrait display listed at 4K UHD, both designed for plug-and-play deployments and repeatable content loops. The pricing discussed is about $1,500 for the 16-inch unit and about $3,000 for the 27-inch unit.

On the deployment side, they frame HLD as a “taster” for retail endcaps and kiosks, with optional touchscreen integration through the faytech partnership, so a standard touch UI can sit alongside a floating 3D product render. Brightness is described as around 500–600 nits in the booth context, with the implication that higher-brightness versions can be handled as a specialty build. This interview was filmed at CES Las Vegas 2026 inside the faytech booth area.

Finally, the conversation lands on AI-driven characters as a natural fit for spatial displays: Looking Glass previously built an early 3D chatbot concept (Lightforms) and now expects partners to bring modern LLM-driven agents onto this kind of hardware. The practical takeaway is that a conversational character, brand mascot, or guided product explainer becomes more “present” when it occupies depth in a shared viewing volume, even when driven by modest on-site compute like a tablet or signage player.

source https://www.youtube.com/watch?v=4DZaffaSbJU

Cuneflow E-Ink notebook demo: multimodal pen + audio, meeting library context, privacy

Posted by – January 10, 2026
Category: Exclusive videos

Cuneflow is building a voice+ink notebook that treats handwriting and audio as the primary inputs, then turns them into searchable transcripts and compact AI summaries. The core idea is simple: capture ideas at the speed of a pen, but keep the output as structured meeting notes you can actually retrieve later, without living in a laptop UI. https://www.cuneflow.com

Instead of being “just another notes app,” the device revolves around two surfaces: Meeting and Library. You import reference material into a Library (for example via Google Drive), and the notebook uses that corpus as context for summarizing what was said in a meeting. This interview was filmed at CES Las Vegas 2026, where the pitch is a focused, paper-like workflow rather than a full tablet experience.

On the hardware side, it’s an E-Ink display with a front light, running Android under a custom interface designed around pen input. You can write normal notes, but also draw quick symbols (a star, a smile, a scribble) to mark emphasis, and the system is meant to connect those marks to the audio timeline so key moments surface in the summary. Think multimodal note-taking: ink strokes + speech-to-text + semantic indexing, all in one place.

Cuneflow also draws a boundary around collaboration: they’re not trying to replace Notion/Lark with real-time co-editing on the device, and they intentionally avoid pushing heavy typing on a glass keyboard. The point is low-friction capture during a meeting, with Wi-Fi sync as the transport layer (and the option to record even when connectivity is weak, then reconcile later). It’s a “capture first, organize later” model, tuned for speed and focus.

Security comes up quickly in any voice-transcription product, and they emphasize encryption plus compliance work, with an explicit stance that user data is not used to train their model. Processing is described as server-backed, but with a path for enterprises to host their own model if they need tighter control. On the roadmap: more microphones, a thinner chassis, newer compute silicon, and ongoing OTA software updates as the UI and summarization quality evolve.


source https://www.youtube.com/watch?v=RfqBNfbJkC0

VOCCI AI note-taking ring: tap highlights, 8h recording, 5m pickup, phone transcript

Posted by – January 10, 2026
Category: Exclusive videos

VOCCI (by Gyges Labs) is trying to make “capture mode” frictionless: a titanium smart ring that records conversations and turns them into searchable AI notes. Instead of pulling out a phone or opening a laptop, you double-click the ring’s button to start audio capture, then wear it like any other piece of jewelry — 3–5 g, designed for all-day use, with a charging case for top-ups.
https://vocci.ai/

What stands out is the interaction model: tap while recording to “highlight” a moment that matters, so the summary doesn’t treat every sentence as equal. Audio can stay on the ring until you sync to the companion app, where speech-to-text transcription feeds an AI agent that produces meeting notes, decisions, and action-style recaps, with prompts you can customize for your own reporting format.
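
A minimal sketch of how tap-highlights could bias a recap, assuming highlights are just timestamps matched against transcript segments. The scoring window and ranking are hypothetical, not VOCCI's actual pipeline:

```python
def pick_recap_segments(segments, highlights, window=5.0, k=2):
    """Rank transcript segments so tap-highlighted moments surface first.

    segments: list of (timestamp, text); highlights: list of tap timestamps.
    A segment scores 1 per highlight landing within `window` seconds of it.
    """
    def score(seg):
        t, _ = seg
        return sum(1 for h in highlights if abs(h - t) <= window)
    ranked = sorted(segments, key=score, reverse=True)  # stable sort keeps order among ties
    return [text for _, text in ranked[:k]]

segments = [(0, "smalltalk"), (30, "we chose vendor B"), (60, "lunch plans")]
# A tap at 32 s pulls the decision to the top of the recap
print(pick_recap_segments(segments, highlights=[32.0]))
```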

From a systems perspective, VOCCI is closer to a wearable voice recorder than a health ring: it targets roughly 8 hours of continuous recording and advertises an effective pickup range of around 5 meters, aimed at classrooms or conference rooms. The interview was filmed at CES Las Vegas 2026, and the focus is on capturing natural dialogue without needing your phone in hand, then letting the phone do the heavier AI processing later.

Design details matter for a device you’ll actually keep on: aerospace-grade titanium for durability and skin tolerance, multiple sizes, and color options that should ship with the same base material. The team describes itself as California-based, and pricing wasn’t final in the demo, though early coverage suggests it may land under the $200 mark, positioning it against dedicated recorders and other “memory” wearables without forcing a bulky gadget vibe.

The bigger idea is selective recall: the ring becomes a physical “bookmark” for your brain, with highlight taps acting like labels for decisions, names, or sparks of insight. VOCCI says it plans to launch around mid-February, with broader shipping later in 2026, so the real test will be transcript quality in noisy spaces and how well the AI stays faithful to context when you actually need the note.


source https://www.youtube.com/watch?v=fv5YwgGCLeY

Ascentiz Modular Exoskeleton: swappable hip/knee assist, BodyOS open-source, USB-C API

Posted by – January 10, 2026
Category: Exclusive videos

Ascentiz is building a modular, belt-based exoskeleton that treats mobility assist like a plug-in platform: snap on a hip module for extra propulsion and energy return during walking, stairs, hills, and even running, then pair it with other modules when you need more support. In the demo, the hip assist is described as giving an extra push up or down slopes and cutting perceived effort by around 30%, with a swappable battery rated for about 10 hours or roughly 15.5 miles per pack. https://ascentizexo.com/

The interesting part is the architecture: a central control box acts as the “brain,” exposing a standard module interface and API, with physical connectivity shown as USB-C. Ascentiz calls the software layer BodyOS, framed as an open, developer-friendly “Android-like” stack for exoskeleton modules, so third parties can build compatible hip, knee, or upper-body attachments and still share sensing, power management, and coordinated control.
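
The "Android-like" module contract could look something like this sketch: a shared interface any hip, knee, or upper-body attachment implements, so the central control box can enumerate modules and coordinate them. Names like `AssistModule` and the torque policy are invented for illustration; BodyOS's real API is not public:

```python
from abc import ABC, abstractmethod

class AssistModule(ABC):
    """Hypothetical module contract: the control box polls modules each tick."""
    @abstractmethod
    def identify(self) -> dict: ...
    @abstractmethod
    def torque_command(self, gait_phase: float) -> float: ...

class HipModule(AssistModule):
    def identify(self):
        return {"type": "hip", "max_watts": 120}
    def torque_command(self, gait_phase):
        # Push during stance (first half of the gait cycle), coast otherwise.
        return 8.0 if gait_phase < 0.5 else 0.0

def tick(modules, gait_phase):
    """One control-loop step: collect a torque command from every module."""
    return {m.identify()["type"]: m.torque_command(gait_phase) for m in modules}

print(tick([HipModule()], gait_phase=0.3))
```

The point of the abstract base class is the ecosystem claim: a third-party knee module only has to satisfy the same two methods to share the controller's sensing and power management.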

On the motion side, the system leans on onboard sensing and gait/motion algorithms to switch profiles for walking, uphill/downhill, stairs, running, or biking without feeling like a rigid robot frame. This interview was filmed around CES Las Vegas 2026, and the pitch is that consumer exosuits are finally getting small enough (higher power-density motors, better packaging, and quick-swap batteries) to be worn for real activities rather than lab demos.

Use cases go beyond “superhuman hiking”: camera operators hauling heavy rigs, workers lifting and carrying, and anyone who wants reduced fatigue across long days on foot. They also talk about assisted mobility for older adults and people with weak knees/legs, where added stability and strength could help lower fall risk, with a quick on/off setup time of around half a minute.

Commercially, Ascentiz positions the hip module as the entry point (quoted at $1,499 in this clip), with a knee-support module at $2,499, and optional upper-body pieces coming from partners via the same modular interface. They say they’ve completed a Kickstarter campaign that raised around $2.5M from 2,000+ backers and are targeting mass production and initial deliveries around March, which will be a good real-world test of comfort, durability, and how well BodyOS can attract module makers at scale.


source https://www.youtube.com/watch?v=NCQdHno-234

Waterdrop A1 Reverse Osmosis Water Bar: 0.0001µm membrane, 6 temps, 5 volumes

Posted by – January 10, 2026
Category: Exclusive videos

Waterdrop’s A1 countertop reverse osmosis dispenser is shown as a self-contained way to turn tap water into temperature-controlled drinking water without plumbing. The front OLED screen gives direct control over temperature and dispense volume, with color-coded feedback (blue for cold, red/orange for hot) and a quick stop so you can dose for a cup, bottle, or thermos without guesswork. https://www.waterdropfilter.com/products/ro-hot-cold-water-dispenser-a1

Technically, the A1 is built around a multi-stage RO architecture: a pre-filter stage feeding a 0.0001 µm reverse osmosis membrane, plus UV sterilization intended to keep the internal tanks cleaner over time. Waterdrop’s published claims center on lowering TDS and reducing a wide spread of contaminants that matter in real tap water—PFAS (PFOA/PFOS), chlorine taste/odor, and heavy-metal ions among them—while listing a 2:1 pure-to-drain ratio and a 100 GPD production class output.
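
The listed 2:1 pure-to-drain ratio translates into simple water arithmetic: for every 2 parts of drinking water, 1 part goes to the waste compartment. A quick sketch of that budget (assuming the ratio is pure output to concentrate, as listed):

```python
def ro_water_budget(pure_liters, pure_to_drain=(2, 1)):
    """Feed and drain volumes implied by a pure:drain ratio.

    Assumption: the listed 2:1 figure means pure output to concentrate,
    so 2 L of drinking water consumes 3 L of feed water.
    """
    pure, drain = pure_to_drain
    drain_liters = pure_liters * drain / pure
    return {"feed": pure_liters + drain_liters, "drain": drain_liters}

print(ro_water_budget(2.0))  # 2 L of drinking water
```

That ratio is why the wastewater compartment needs periodic emptying: concentrate is a normal byproduct of RO, not a defect.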

Where the product differentiates from a typical countertop purifier is the thermal layer on top of RO. You get six temperature presets spanning roughly 5°C to 95°C (41°F to 203°F), several fixed dispense volumes that auto-stop, and a child lock that must be held to unlock hot water to reduce burn risk. The UI also references modes like night mode, off-home mode, and altitude mode, and the CES Las Vegas 2026 demo shows hot output arriving in seconds rather than a kettle-style wait.

Because it’s tank-based, portability is the key tradeoff: you refill a feed reservoir and periodically empty a separate wastewater compartment (a normal consequence of RO concentrate). Maintenance is designed around quick cartridge swaps—twist out and replace—with typical guidance of about 6 months for the composite filter and up to 12 months for the RO cartridge, depending on local water quality and how much you use it.


source https://www.youtube.com/watch?v=Z2iCDdi6_1M

Xthings Ultraloq Bolt Sense: palm vein + 3D face smart lock, Matter/Aliro, UWB

Posted by – January 10, 2026
Category: Exclusive videos

Xthings has spent more than a decade building smart access hardware that tries to feel “invisible”: you walk up, authenticate, and the door behaves like it understands intent. In this interview, the focus is on stacking multiple credentials—PIN, NFC tap, fingerprint, and now proximity plus computer vision—while keeping broad compatibility with mainstream ecosystems like Apple Home, Google Home, and Samsung SmartThings. https://xthings.com/

A big theme is proximity done properly. Their ultra-wideband (UWB) smart lock uses ranging to judge distance and approach direction, so it can unlock when you actually reach for the handle, not just because you walked nearby with a UWB phone. If you don’t have UWB, the same lineup supports NFC tap, keypad code entry, and (on some models) a physical key override, plus digital key sharing for households and small teams at the door.
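
The "reach for the handle, don't just walk by" logic can be sketched as a check on recent UWB range samples: unlock only when the phone is both close and closing in. The thresholds below are illustrative, not Xthings' actual tuning:

```python
def should_unlock(ranges_m, unlock_radius=0.8, min_approach=0.15):
    """Decide whether to unlock from a short history of UWB distance samples.

    ranges_m: recent distance readings in meters, oldest first.
    Requires the latest sample to be inside `unlock_radius` AND the distance
    to have shrunk by at least `min_approach` over the window, so someone
    merely lingering near the door with a UWB phone does not trigger it.
    """
    if len(ranges_m) < 2:
        return False
    close_enough = ranges_m[-1] <= unlock_radius
    approaching = (ranges_m[0] - ranges_m[-1]) >= min_approach
    return close_enough and approaching

print(should_unlock([2.0, 1.4, 0.7]))   # walking toward the door
print(should_unlock([0.7, 0.7, 0.7]))   # hovering nearby, not approaching
```

A production lock would add direction-of-arrival from the UWB antenna array on top of plain ranging, which is what the interview means by judging approach direction.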

For higher assurance, Xthings is pushing multi-modal biometrics with Ultraloq Bolt Sense: palm-vein authentication plus 3D facial recognition. Palm vein ID typically uses near-infrared imaging to read sub-surface vascular patterns, which can work even with wet hands or in low light, and it’s generally harder to spoof than many surface-level biometrics. The conversation also touches on standards-first thinking, with newer locks like the Latch 7 Pro leaning on Matter over Thread for local control and Aliro-style interoperability for access credentials, while still offering familiar fallbacks.

The “Urban Guardian” concept stretches the same identity + sensing ideas into public space hardware. It’s presented as a self-contained safety node for streets or corporate campuses: solar panels charging an internal battery, 4G connectivity, 360° cameras, lighting, and an SOS/info interface, without trenching cables or deep backend integration. Practical touches like MagSafe-style wireless charging suggest it’s designed for real-world dwell time, not just passive monitoring.

On the monitoring side, the Ulticam camera line adds Matter-ready devices and Google Gemini-powered video understanding, shifting alerts from generic motion to more contextual summaries (like recognizing a delivery event). The lineup is positioned around details like 4K/HDR capture, wide field of view, two-way audio, and common installs such as PoE, alongside variants that emphasize floodlighting or longer-range wireless options. Filmed at CES Las Vegas 2026, the story here is less about one gadget and more about how access control, identity, and AI video context can converge into one cross-platform stack.


source https://www.youtube.com/watch?v=5b35yUmj-kY

Beetles Gel Polish Zodiac Kit: UV/LED curing lamp, mini colors, DIY nail art

Posted by – January 10, 2026
Category: Exclusive videos

Beetles Gel Polish walks through a compact “Aucus” gift set built around zodiac themes: the color selection is meant to match a sign’s vibe, while the box bundles the core tools for a full gel manicure in one place. Alongside multiple mini gel bottles, you get a UV/LED nail lamp, nail art brushes, and small themed extras like pendants, so you’re not buying the essentials piece by piece. The smaller bottle format also makes it easier to treat this as a travel-friendly kit rather than a drawer full of full-size bottles. https://beetlesgel.com/

From a technical angle, the pitch is about an accessible soak-off gel workflow for DIY users: thin, controlled layers, LED/UV curing with consistent exposure, and enough pigment variety to build simple looks without mixing. A bundled lamp matters because cure quality is what drives wear time and scratch resistance; under-curing can leave soft layers, while over-curing can make removal harder. If you’re doing gel at home, the usual best practice still applies: prep/dehydrate the nail plate, keep product off skin (allergen risk), fully cure each layer, then remove with an acetone soak-off and a gentle push.

The brand positioning here is “online-first but moving into shelves,” starting from Amazon popularity and social discovery, then expanding toward big-box and pharmacy retail. In the interview, they mention Walmart and Target, with CVS referenced as an upcoming channel, which is a typical path for consumer beauty brands once packaging, compliance, and merchandising are ready. The conversation was filmed at CES Las Vegas 2026 during the Impact Global Connect event, so it’s framed as a quick booth intro rather than a long-form tutorial, with the focus on what’s inside the box and how it fits DIY habits in the US market.

What’s most interesting is how the product strategy leans into frequent seasonal drops: rotating curated palettes (often 20–30+ colors in mini format) plus small “collector” elements, making the set feel like a ready-made gift or starter pack. If you’re used to pro salon systems with larger bottles and strict base/top coat pairings, this is aimed more at variety and convenience than building a single locked-in system. They also note that Europe isn’t the current launch focus, so for now the availability story is primarily US retail and online, with the kit concept built around quick, complete setups you can actually finish at home.


source https://www.youtube.com/watch?v=wN94ATrY1qs

Benks DuPont Kevlar phone cases: 600D vs 1500D weave, TPU bumpers, MagSafe

Posted by – January 10, 2026
Category: Exclusive videos

Benks focuses on protective accessories built around genuine DuPont Kevlar aramid fiber, aiming for cases that stay very thin and light while still handling daily wear. In this interview they show a limited Chinese New Year “Year of the Horse” edition, using the woven pattern as both structure and design language, and clarify that “Kevlar-grade” doesn’t mean the phone is bulletproof even if similar fibers are used in ballistic gear. https://www.benks.com/

A key technical thread is how Benks differentiates 600D vs 1500D Kevlar: denier describes the yarn’s linear mass (thicker threads at 1500D, finer at 600D), and changing it affects weave tightness, visual contrast, and surface texture. The talk frames it as a tactile choice too—rougher grip versus a smoother feel—while keeping the same core benefits of aramid fiber: high tensile strength for its mass, low thickness, and minimal bulk in hand.

The broader story is case engineering as materials plus ergonomics, filmed at CES Las Vegas 2026. Ultra-thin aramid shells can feel excellent and pocket well, but edge impacts are difficult to solve at very low thickness, so Benks also shows variants that add a TPU perimeter to improve drop behavior while keeping the Kevlar backplate aesthetic. They also highlight fitment work for foldables, and even mention being ready with a tri-fold style case for a newly launched Samsung foldable.

Pricing is positioned around $40–$55 in the conversation, with a strong emphasis on authenticity: many “Kevlar-looking” patterns in the market are carbon fiber or generic aramid, while Benks points to DuPont licensing and consistent weave quality. The practical takeaway is to pick between maximum thinness and texture, or a hybrid build with TPU edges for a more forgiving drop profile, plus an ecosystem angle that extends the same material theme into items like power banks and stands.


source https://www.youtube.com/watch?v=9RltLVEMuxM

Plugable Thunderbolt 5 Dock TBT-UDH2: 140W PD, dual HDMI 2.1, 2.5GbE, 120Gbps Bandwidth Boost

Posted by – January 9, 2026
Category: Exclusive videos

Plugable founder Bernie Thompson explains why docking stations keep evolving: laptops lose ports, but workflows still demand power delivery, fast I/O, wired networking, and stable external displays over a single cable. The 2026 flagship Thunderbolt 5 dock is framed as a response to real connector habits—more USB-C, higher-wattage charging, and native display outputs people actually use in daily desk setups. https://plugable.com/products/tbt-udh2

In this booth chat from CES Las Vegas 2026, the dock’s layout is shown from the back: 180W input, up to 140W host charging, multiple USB-C data+power ports, dual HDMI, and Ethernet. Thunderbolt 5’s 80Gbps baseline with 120Gbps Bandwidth Boost lets a dock target high-refresh multi-monitor modes (for example dual 4K at 144Hz-class loads, with 8K modes depending on host and display). Plugable also leans into a fanless mechanical design, plus 2.5GbE and UHS-II SD/microSD for media ingest.

The technical subtext is that “one port to everything” only works when link negotiation is solid: TB5/USB4 tunneling, cable quality, monitor EDID behavior, and OS-specific display limits can all bite. Plugable’s compatibility-first approach is about taming those edge cases, so storage, capture devices, and high-speed peripherals behave predictably across Windows and macOS.
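
Whether a monitor mode fits is ultimately a bandwidth budget. A rough uncompressed estimate (ignoring DSC compression, which real links often use, and with a ballpark blanking factor) shows why the 120 Gbps Bandwidth Boost matters for dual 4K at 144 Hz:

```python
def display_gbps(w, h, hz, bpp=24, blanking=1.2):
    """Rough uncompressed pixel-rate estimate in Gbps.

    bpp=24 is 8-bit RGB; the 1.2 blanking factor is a ballpark for
    timing overhead, not a per-monitor exact figure.
    """
    return w * h * hz * bpp * blanking / 1e9

def fits(link_gbps, modes):
    """Sum the display load of all modes and compare against the link budget."""
    used = sum(display_gbps(*m) for m in modes)
    return used, used <= link_gbps

# Two 4K 144 Hz streams against a 120 Gbps Bandwidth Boost link
used, ok = fits(120, [(3840, 2160, 144), (3840, 2160, 144)])
print(round(used, 1), "Gbps needed; fits:", ok)
```

The same two streams would not fit in an 80 Gbps baseline without compression, which is exactly the negotiation detail (DSC, link training, EDID-advertised modes) that separates a dock that "just works" from one that silently drops a monitor to 60 Hz.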

The second demo uses Thunderbolt as external PCIe: a TB enclosure that hosts a desktop-class GPU, framed less as an eGPU gaming box and more as an AI inference engine. For local LLMs, the bottleneck is often VRAM and privacy, not peak frame rate—so a higher-memory card can load models quickly, generate tokens locally, and keep prompts and documents off the cloud. They also mention an in-house Plugable Chat app (Apache 2.0) built to run “chat with your data” workflows, including RAG over internal document stores.
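
The VRAM-not-framerate point is easy to quantify with a back-of-envelope estimate. The quantization level and overhead factor below are my assumptions for illustration, not Plugable's figures:

```python
def vram_gb(params_b, bytes_per_weight=0.5, overhead=1.2):
    """Rough VRAM needed to run a local LLM.

    params_b: parameter count in billions; bytes_per_weight=0.5 assumes
    4-bit quantization; overhead adds ~20% for KV cache and activations.
    A ballpark for sizing a card, not a guarantee.
    """
    return params_b * bytes_per_weight * overhead

for p in (8, 70):
    print(f"{p}B params ≈ {vram_gb(p):.1f} GB VRAM")
```

By this arithmetic an 8B model fits comfortably on mainstream cards, while a 70B model is exactly the case where the enclosure's "desktop-class, higher-memory GPU" pitch applies.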

Taken together, it’s a snapshot of docks turning into workstation front-ends: power, displays, Ethernet, removable media, and now desk-side AI acceleration over the same Thunderbolt fabric. If you’re building a dependable laptop-to-desk setup, the practical advice is to budget power, validate monitor modes (refresh matters as much as resolution), and treat cables as part of the system, not an afterthought.


source https://www.youtube.com/watch?v=xOWt8R_pj1M

iFixit FixHub USB-C 100W Smart Soldering + 55Wh Battery, Repairable Design

Posted by – January 9, 2026
Category: Exclusive videos

iFixit sits at the intersection of practical electronics work and sustainability: a huge, free library of repair manuals plus the parts and tooling to actually complete the job, whether it’s a phone, a laptop, or something as mundane as a small appliance. In this interview, Liz Chamberlain (Director of Sustainability) frames repair as a capability problem: if you own hardware, you should be able to maintain it, diagnose it, and restore it instead of treating it as sealed, disposable gear. https://www.ifixit.com/products/fixhub-power-series-portable-soldering-station

The hardware centerpiece is FixHub, a USB-C Power Delivery smart soldering iron system built around a 100W iron and a dual-port portable power station. The pack is rated 55Wh and is positioned as an “off-bench” setup: fast heat-up, sustained work time (quoted as up to 8 hours continuous), and the ability to run two irons with shared power limits when both ports are used. Temperature control is part of the point here, with a working range roughly 100°C to 420°C, plus safety behaviors like auto-standby and tip/heat indication via an illuminated ring.
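
The shared-power behavior across the two ports can be sketched as a simple budget split. An even split is my assumption for illustration; the interview only says the limits are shared when both ports are used:

```python
def port_budget(total_w=100, irons_connected=1):
    """Watts available per iron under a shared power budget.

    Assumes an even split across active ports; FixHub's actual
    firmware policy is not documented here.
    """
    if irons_connected == 0:
        return 0
    return total_w // irons_connected

print(port_budget(irons_connected=1))  # one iron gets the full budget
print(port_budget(irons_connected=2))  # two irons share it
```

This is the usual USB-C PD pattern: the source advertises what each port can draw, and plugging in a second sink renegotiates both contracts downward.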

What makes the demo feel real is that it’s not just “solder goes on”: it’s also rework. Chamberlain shows a beginner-friendly workflow, including desoldering an LED installed backwards, using copper braid (wick) to pull solder off pads—exactly the kind of mistake that turns “learning to solder” into “learning to debug assembly.” iFixit’s upcoming learn-to-solder kit leans into that by putting the instructions directly onto the PCB, while the commercial bundle pricing lands around $80 for the iron, about $250 for the iron plus battery station, and roughly $300 for a fuller kit with consumables and hand tools.

On the policy and product-design side, the conversation lands where iFixit often applies pressure: right-to-repair legislation and manufacturer choices that determine whether repair is routine or painful. They’re active in both EU and US advocacy, and the example that keeps coming up is batteries—moving away from aggressive adhesive, fragile pull-tabs, and solvent-based removal toward designs that are truly user-replaceable. It’s consistent with the FixHub philosophy too: screws, teardown guidance, and even 3D-printable case files so the tool itself is not a black box.

Finally, there’s the information layer: iFixit’s guides are split between in-house technical writers (often with engineering backgrounds) and a community-edited wiki model, which makes the content both broad and self-correcting. That same corpus has become training fodder for AI crawlers, so iFixit responded with its own mobile app and FixBot, an AI repair assistant that uses their manuals to ask diagnostic questions, route you to the right guide, and support voice or camera-based “show me what’s broken” workflows while still nudging users toward the photos and steps. The interview was filmed at CES Las Vegas 2026, where repairability, tool ecosystems, and AI-assisted troubleshooting are starting to converge in a very practical way.


source https://www.youtube.com/watch?v=JphaIeeCuho

OWC Thunderblade X12 192TB: 12x NVMe RAID via SoftRAID over Thunderbolt 5

Posted by – January 9, 2026
Category: Exclusive videos

OWC (Other World Computing) is framing Thunderbolt 5 storage as a way to keep “local-first” media workflows fast enough that you stop thinking about ports, bandwidth, and copy time, and instead treat external NVMe like a real extension of your workstation. The focus here is bus-powered performance, predictable sustained throughput, and RAID that stays readable across Mac and PC without being trapped behind a proprietary controller.
https://www.owc.com/

On the portable side, the Envoy Ultra is positioned as a single-SSD Thunderbolt 5 drive that pushes into the 6,000 MB/s class on burst and sustains a much higher baseline than typical USB-C SSDs once caches are exhausted. In the conversation, they describe roughly the first 10% of the capacity sustaining the top band, then settling into about 1,600–1,800 MB/s for longer transfers, which is still very usable for large camera originals, proxies, and scratch media. The enclosure is engineered for passive heat dissipation (no fan, no dust ingress path), and the emphasis is on rugged, field-friendly behavior under real copy pressure and heat.
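
The burst-then-sustain profile they describe lends itself to a quick copy-time estimate. Cache window, speeds, and the drive capacity below are illustrative numbers based on the rough figures from the conversation:

```python
def copy_time_s(total_gb, capacity_gb, burst_mbs=6000, sustained_mbs=1700,
                burst_fraction=0.10):
    """Estimate a large-copy time in seconds.

    Models a fast cache window covering ~the first 10% of drive capacity
    (per the interview) and a sustained floor for everything after it.
    """
    burst_gb = min(total_gb, capacity_gb * burst_fraction)
    rest_gb = total_gb - burst_gb
    return burst_gb * 1000 / burst_mbs + rest_gb * 1000 / sustained_mbs

# 500 GB of camera originals onto a hypothetical 4 TB drive:
print(round(copy_time_s(500, 4000), 1), "seconds")
```

The takeaway is why the sustained floor matters more than the headline number: the first 400 GB flies by, and the last 100 GB takes almost as long.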

The bigger “desktop-on-a-cable” concept is the Thunderblade X12: a 12-blade NVMe array scaling up to 192 TB, aimed at high-throughput editing, ingest, and on-set shuttling where capacity and sustained speed matter more than peak benchmarks. RAID modes mentioned include 0/1/4/5/10, with RAID 6 planned, and the pitch is end-to-end sustained transfer in the 6,000 MB/s range without thermal throttling even on very large moves. The physical design leans into a heavy-duty heatsink approach to keep performance consistent under load.

A key layer is SoftRAID: software-defined RAID with drive health monitoring and early warning, plus practical interoperability for creators who bounce between macOS and Windows. The point isn’t “RAID replaces backup” (they explicitly call out keeping backups), but that if the enclosure ever dies, the data layout isn’t locked to a proprietary bridge chip—move the blades to another compatible setup and the volume can come back. Thunderbolt 5 also helps with system-level plumbing: where older buses could see display traffic steal priority and cut storage throughput, the added headroom makes it easier to run high-res monitors and fast storage on the same connection without the same penalty.

They also highlight the often-ignored weak link: cabling. A certified 2 m Thunderbolt 5 cable means you can actually place storage, docks, or displays off-desk without gambling on random USB-C wiring, signal integrity, or intermittent drops. This interview was filmed at CES Las Vegas 2026, and it lands as a pragmatic look at how bandwidth, thermals, RAID metadata, and certification details add up to fewer workflow surprises on set.


source https://www.youtube.com/watch?v=HiBFXHJ-QcQ

Ambiq Apollo510B edge AI SoC: Cortex-M55 Helium, BLE 5.4, on-chip SRAM/NVM

Posted by – January 9, 2026
Category: Exclusive videos

Ambiq frames its mission as “ambient intelligence”: ultra-low-power Arm Cortex-M microcontrollers and SoCs that keep sensing, listening, and rendering without treating the cloud as a default. The core differentiator is SPOT (Subthreshold Power Optimized Technology), which operates transistors near or below their threshold voltage so that meaningful edge AI fits inside tight power, heat, and battery limits. https://ambiq.com/apollo510/

On the silicon side, the Apollo5 family moves from Cortex-M4 into Cortex-M55 plus Helium (MVE) vector extensions, effectively adding an on-device AI/ML DSP path for int8 and floating-point workloads. Apollo510-class parts pair that compute with big on-chip memory and wide internal buses to avoid costly off-chip RAM/flash I/O; Ambiq highlighted configurations around 4 MB SRAM and 4 MB non-volatile memory, plus a microcontroller-scale GPU aimed at smooth UI on a small die.
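What Helium accelerates is the inner loop of quantized inference: int8 multiplies accumulated into 32-bit registers, then rescaled back to real units. A scalar Python sketch of that pattern (toy scales and values; MVE performs this across vector lanes in a single instruction):

```python
def quantize(xs, scale):
    """Map floats to int8 with a simple symmetric scale (illustrative)."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_dot(a, b, scale_a, scale_b):
    """int8 x int8 dot product accumulated in 32-bit, then rescaled --
    the multiply-accumulate pattern Helium/MVE vectorizes."""
    acc = sum(int(x) * int(y) for x, y in zip(a, b))  # wide accumulator
    return acc * scale_a * scale_b                     # back to real units

weights = [0.5, -1.0, 0.25, 0.75]
inputs  = [1.0,  2.0, -4.0, 0.5]
qw = quantize(weights, 0.01)
qi = quantize(inputs, 0.05)

approx = int8_dot(qw, qi, 0.01, 0.05)
exact  = sum(w * x for w, x in zip(weights, inputs))
print(approx, exact)  # the quantized result closely tracks the float result
```

Keeping the weights and activations in 8-bit form is also what makes the "big on-chip SRAM" point matter: a model that fits in 4 MB at int8 would need far more memory (and off-chip I/O energy) at full precision.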

A concrete edge-compute example is AR eyewear: Even Realities smart glasses use Apollo510B to drive dual microLED microdisplays, manage multi-layer graphics, and run local audio pipelines like beamforming, noise reduction, and speech enhancement. The interview was recorded at the Pepcom media event during CES 2026 in Las Vegas, and it tied this workload to the broader shift from watches to rings, shoes, and other placements where thickness, mass, and thermal budget are hard constraints.

Ambiq positions the MCU as radio-agnostic—cellular, LoRa, Zigbee, or a nearby gateway—while wireless variants integrate Bluetooth Low Energy 5.4 and a dedicated network processor for low-duty connectivity. That maps cleanly onto “sensor hub” architectures: keep a Cortex-A Linux/Android host asleep, fuse sensors and do always-on inference on Cortex-M, then wake the heavy core only when something is worth the power to process or transmit.
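A back-of-envelope duty-cycle model shows why that split pays off. All currents below are illustrative assumptions, not Ambiq datasheet figures:

```python
def avg_current_ma(active_ma, sleep_ma, duty_cycle):
    """Time-weighted average draw for a part active duty_cycle of the time."""
    return active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)

def battery_life_hours(capacity_mah, avg_ma):
    return capacity_mah / avg_ma

# Illustrative currents (assumptions, not datasheet values):
# a Cortex-M hub doing always-on inference 2% of the time,
# versus waking a Cortex-A-class host at the same duty cycle.
mcu  = avg_current_ma(active_ma=3.0, sleep_ma=0.005, duty_cycle=0.02)
host = avg_current_ma(active_ma=300, sleep_ma=0.5,   duty_cycle=0.02)

battery = 200  # mAh, a coin-cell-class budget
print(f"MCU-only sensor hub: {battery_life_hours(battery, mcu):.0f} h")
print(f"Waking the big core: {battery_life_hours(battery, host):.0f} h")
```

With these toy numbers the hub-only path lasts roughly two orders of magnitude longer — the sleep floor and active current of the big core dominate everything else, which is the whole argument for gating it behind the MCU.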

The power story is really about system behavior: keep radios off, ship answers instead of streams, and even compress signals like PPG data by up to about 16× when you must preserve raw traces for later analysis. Ambiq also hinted at next-gen “Atomic” devices with a neural processor for local video triggers (package detection, security cues), and noted it began trading on the New York Stock Exchange under AMBQ on July 30, 2025, bringing more public visibility to its edge-AI push.
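That 16× figure is plausible for biosignals because traces like PPG change slowly from sample to sample. Ambiq’s actual codec isn’t specified in the video; a generic delta-plus-deflate sketch on a synthetic, strongly periodic waveform shows the principle (real, noisy data compresses less):

```python
import math
import struct
import zlib

# Synthetic PPG-like trace: a slow pulse waveform stored as 16-bit samples
samples = [int(2000 + 500 * math.sin(2 * math.pi * i / 50)) for i in range(2000)]
raw = struct.pack(f"<{len(samples)}h", *samples)

# Delta-encode first: consecutive samples barely change, so deltas are tiny
deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
delta_bytes = struct.pack(f"<{len(deltas)}h", *deltas)

ratio_raw   = len(raw) / len(zlib.compress(raw, 9))
ratio_delta = len(raw) / len(zlib.compress(delta_bytes, 9))
print(f"plain deflate: {ratio_raw:.1f}x, delta+deflate: {ratio_delta:.1f}x")
```

The point isn’t the exact ratio — that depends on noise and sample rate — but that shipping a compressed trace (or better, a computed answer) keeps the radio off far longer than streaming raw samples.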


source https://www.youtube.com/watch?v=L3_-ibDfQuU