Strada cloud-free ProRes 4K streaming: virtual filesystem to Final Cut, real-time HEVC, p2p storage

Posted by – January 8, 2026
Category: Exclusive videos

Strada is pitching a simple idea: keep raw media on your own drives, but make it reachable like a cloud app. Instead of uploading terabytes of camera originals, Strada’s peer-to-peer “Agent” turns any folder on a Mac or PC into a remotely accessible library, so collaborators can review, share, and start cutting without paying ongoing storage rent or dealing with cloud egress. They even estimate cloud storage can cost around 40x more per GB than a drive you already own. https://strada.tech/

The demo shows why this matters for video: a ProRes 4K file stored on an OWC ThunderBay in Los Angeles is played back from a laptop in Vegas with responsive scrubbing, instant jumps across the timeline, and even 8x playback. The drive activity light becomes the proof point—pause and it stops, play and it spins up—because the source really is staying on that local RAID.

Under the hood, Strada leans on real-time, variable-bitrate file streaming plus hardware encoders to adapt huge mezzanine files to whatever network you have. In this CES Las Vegas 2026 walk-through, they describe Apple Silicon as a good fit thanks to built-in HEVC, letting multi-GB media get compressed from “gigabit-class” data rates down to a few Mbps in roughly 100–200 ms, so you can work without pre-made proxy files.
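
To make the bandwidth gap concrete, here is a minimal, illustrative sketch (not Strada’s code) of picking an HEVC target bitrate from a measured link, with the ProRes data rate included only as a rough reference figure:

```python
# Illustrative sketch only: choose an HEVC target bitrate that fits the measured
# link, leaving headroom for scrubbing bursts and protocol overhead.

PRORES_4K_MBPS = 750     # rough ballpark for a ProRes 422 HQ UHD mezzanine source
HEADROOM = 0.6           # use ~60% of the measured link for the video payload

def target_bitrate_mbps(measured_link_mbps: float,
                        floor_mbps: float = 3.0,
                        ceiling_mbps: float = 50.0) -> float:
    """Clamp the encoder target between a watchable floor and a sensible ceiling."""
    budget = measured_link_mbps * HEADROOM
    return max(floor_mbps, min(budget, ceiling_mbps))

if __name__ == "__main__":
    for link in (5, 20, 100):   # Mbps available to the remote editor
        t = target_bitrate_mbps(link)
        print(f"link {link:>3} Mbps -> HEVC target ~{t:.0f} Mbps "
              f"(~{PRORES_4K_MBPS / t:.0f}x smaller than the ProRes source)")
```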

A newer piece is the virtual file system mount: Strada can appear in Finder like a local volume, so you can drag a remote clip directly into Final Cut Pro and keep the project pointing back to the original storage. On capable M-series machines, the same host can serve multiple concurrent ProRes streams for several editors, while Windows and Arm-based PCs can also participate when you need a mixed studio.

Looking forward, Strada talks about byte-range caching: edit lightweight H.265 proxies anywhere, then “conform” by pulling only the exact high-quality segments referenced on the timeline, minimizing transfer and avoiding manual relinks. The team also hints at Linux support and more appliance-like hardware paths (helped by OWC’s recent investment), which could turn direct-attached drives into a more distributed collaboration fabric for the years ahead.
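
The byte-range idea itself is easy to sketch. The snippet below is a concept illustration only, using plain HTTP Range requests as a stand-in for Strada’s peer-to-peer transport and assuming a near-constant-bitrate source:

```python
# Concept sketch of byte-range "conform": pull only the bytes covering the
# timeline segments actually used, rather than the whole camera original.
import requests

def frame_to_byte(frame: int, fps: float, bytes_per_sec: float) -> int:
    return int(frame / fps * bytes_per_sec)

def fetch_segment(url: str, start_frame: int, end_frame: int,
                  fps: float = 24.0, bytes_per_sec: float = 90e6) -> bytes:
    """Fetch roughly the byte span for [start_frame, end_frame]; a real tool
    would index exact frame offsets instead of assuming constant bitrate."""
    lo = frame_to_byte(start_frame, fps, bytes_per_sec)
    hi = frame_to_byte(end_frame + 1, fps, bytes_per_sec) - 1
    r = requests.get(url, headers={"Range": f"bytes={lo}-{hi}"}, timeout=30)
    r.raise_for_status()   # expect 206 Partial Content from a range-capable server
    return r.content

# Example: pull two short segments referenced by an edit, not the whole file.
# clips = [fetch_segment("https://example.com/A001_C012.mov", s, e)
#          for s, e in [(120, 180), (900, 960)]]
```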

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=OtTWLHDuTYg

Sonim MegaConnect Pocket Emergency WiFi, Frore Systems HPUE Class 1 5G hotspot cooling with AirJet

Posted by – January 8, 2026
Category: Exclusive videos

This interview digs into a very practical bottleneck in rugged wireless gear: once you push a mobile hotspot into HPUE/Power Class 1 transmit levels, the RF power amplifier heat load grows fast, and the product usually turns into a heavy “metal brick” to stay stable. Frore Systems shows how Sonim’s MegaConnect integrates three AirJet Mini solid-state active cooling modules so a higher-power 5G hotspot can remain pocketable while still serving as a team Wi-Fi node in remote coverage zones. https://www.froresystems.com/

The key point is efficiency: Class 1/HPUE amplifiers are often quoted around the teens in percent efficiency, meaning most battery energy becomes heat rather than radiated power. AirJet changes the packaging trade-off by moving heat with high back-pressure airflow in a thin, fanless module, letting Sonim shrink the thermal solution while keeping vents and airflow paths aimed at the PA and the main PCB heat sources.
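
As a back-of-envelope illustration (assumed numbers, not Sonim or Frore figures): Power Class 1 transmit is +31 dBm, and at a mid-teens PA efficiency most of the DC input ends up as heat the enclosure has to move:

```python
# Rough PA heat math: dBm -> watts, then split DC input into radiated power and heat.
def pa_heat_watts(tx_dbm: float = 31.0, efficiency: float = 0.15) -> tuple[float, float]:
    radiated_w = 10 ** ((tx_dbm - 30) / 10)      # +31 dBm ~ 1.26 W radiated
    dc_in_w = radiated_w / efficiency            # DC drawn from the battery
    return dc_in_w, dc_in_w - radiated_w         # (input power, heat to dissipate)

dc, heat = pa_heat_watts()
print(f"DC in ~{dc:.1f} W, heat to dissipate ~{heat:.1f} W")   # ~8.4 W in, ~7.1 W heat
```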

From a networking angle, MegaConnect is framed around FirstNet MegaRange operation (Band 14/n14) plus modern Wi-Fi, so the value is uplink reliability and coverage extension rather than peak downlink headlines. In the field, that translates to sending live video, uploading incident data, and creating a local hotspot bubble for nearby devices, while the thermal design reduces throttling and helps the enclosure target IP53-type dust and water-splash resilience at the edge.

Seen in the context of booth demos at CES Las Vegas 2026, it’s a nice reminder that radio “range” features are frequently limited by watts, not marketing. When cooling is compact enough to carry, a responder can walk the perimeter and keep a higher-power uplink available for the rest of the crew, without hauling a separate amplifier or a huge heatsink pack.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=uhrwT_LugKE

Compact Edge AI: Frore Systems AirJet PAK 5C G2: Jetson Orin NX Super 40W solid-state active cooling

Posted by – January 8, 2026
Category: Exclusive videos

Frore Systems is showing an AirJet PAK module aimed at rugged edge-AI boxes built around NVIDIA Jetson Orin NX “Super” class designs, where sustained 40W operation usually pushes passive heatsinks into bulky, heavy metal. The idea is to replace a large passive thermal mass with a compact solid-state active cooler that keeps the same compute envelope, but targets around an 80% reduction in volume and weight for the thermal hardware. https://www.froresystems.com/products/airjet-r-pak

At the core is AirJet Mini G2, a thin solid-state air-moving “chip” designed for high backpressure airflow in sealed enclosures (useful when you need to pull air through tight ducts and filters). In this demo the AirJet PAK 5C G2 integrates five Mini G2 devices plus a control board into a self-contained heatsink module; a heat spreader couples the SoC to the PAK, and a simple 4-pin interface handles power/ground and operating level control for closed-loop thermal management.
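
A minimal sketch of what closed-loop control over that interface could look like; the thresholds, levels, and hardware hooks below are placeholders for illustration, not Frore’s documented interface:

```python
# Hypothetical control-loop sketch: map SoC temperature to an AirJet PAK operating
# level on the control line, polled once per second.
import time

def select_level(soc_temp_c: float) -> int:
    """Placeholder thresholds; a real integration would follow the module datasheet."""
    if soc_temp_c < 60:
        return 1          # quiet idle
    if soc_temp_c < 75:
        return 3          # sustained 40 W inference load
    return 5              # maximum airflow before the SoC throttles

def thermal_loop(read_soc_temp_c, set_pak_level, period_s: float = 1.0) -> None:
    while True:
        # read_soc_temp_c / set_pak_level are board-specific hooks supplied by the caller
        set_pak_level(select_level(read_soc_temp_c()))
        time.sleep(period_s)
```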

What’s interesting here is the system-level trade: keeping “fanless-like” industrial attributes (silent operation, fewer moving parts, long service life) while avoiding the passive-only penalty in mass, z-height, and enclosure size. That matters for robots and humanoids where every gram affects payload, stability, battery sizing, and joint torque, and where thermal throttling directly cuts TOPS/W under sustained inference loads. The video itself was filmed at CES Las Vegas 2026.

Reliability is a big part of the pitch: the PAK can be paired with dustproof, water-resistant filtration so the device behaves more like a sealed industrial node than a laptop-style ventilated box. If your target is “deploy and forget” edge compute for 5–10 years, this kind of high-backpressure solid-state airflow plus filtration is a practical path to sustained 40W AI at the edge without oversizing the enclosure for passive heat soak.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=KRZeqrQjEIc

BreakReal R1 AI bartender: natural-language cocktails, QR ingredient scan, fridge + ozone clean

Posted by – January 8, 2026
Category: Exclusive videos

BreakReal R1 is a countertop conversational AI bartender that turns natural-language requests into a drink recipe, then mixes it without you measuring each pour. In the demo you can ask for something like a spicy piña colada or a sweeter fruit-forward mix, and the system suggests variations before you press start. The core idea is personalization through chat-style intent parsing plus a recipe engine that maps your flavor words to the ingredients currently loaded. https://breakreal.com/

On the hardware side, R1 is built around an eight-ingredient bay feeding multi-channel pumps, with sensors used to confirm what’s installed and to control dosing. Instead of a fixed menu, the machine composes a recipe from what you have on hand, then runs the pump sequence automatically to hit target ratios. It’s positioned as a flexible drink platform, supporting alcoholic and non-alcoholic builds and even coffee-style beverages, with an emphasis on repeatable dosing.
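
As an illustration of the dosing idea (not BreakReal’s firmware), a recipe of target volumes plus a per-channel flow calibration reduces to per-pump run times:

```python
# Illustrative dosing sketch: turn target volumes into per-pump run times,
# given a calibrated flow rate for each channel.
RECIPE = {"rum": 45, "pineapple": 90, "coconut_cream": 30}               # target ml per serve
FLOW_ML_PER_S = {"rum": 10.0, "pineapple": 12.0, "coconut_cream": 6.0}   # per-pump calibration

def pump_schedule(recipe: dict[str, float]) -> dict[str, float]:
    """Seconds to run each pump; a real machine also compensates for viscosity and priming."""
    return {ing: ml / FLOW_ML_PER_S[ing] for ing, ml in recipe.items()}

print(pump_schedule(RECIPE))   # {'rum': 4.5, 'pineapple': 7.5, 'coconut_cream': 5.0}
```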

Ingredient onboarding is handled through an app workflow that uses a camera scan (QR/label) to identify a bottle or carton and attach structured metadata like flavor profile and alcohol by volume. That database then drives recommendations—what you could make now, what you might want to buy next, and how a new ingredient changes the recipe graph. This interview was filmed at CES Las Vegas 2026, where the focus was less on mixology theater and more on the data pipeline behind the pour.

Temperature is treated as part of the system design, with a built-in refrigerated compartment and an adjustable setpoint intended to keep mixers cold, while you still add ice when you want fast chilling or dilution. The UI also plays with mood inputs—happy, tired, or in-between—to generate different cocktails, and there’s a community angle where recipes can be shared and downloaded across regions. In practice it’s an automation layer for parties: less time learning technique, more time tasting how different ingredient sets change the same base recipe.

Maintenance is handled via an auto-clean cycle that the company says uses ozonated water to reduce residue and bacterial growth in the fluid path, aiming to simplify pump/valve hygiene. As with any multi-ingredient dispenser, real-world reliability will hinge on sugar buildup, viscosity differences, and how consistently people run cleaning cycles. Pricing discussed around launch puts early-bird units around $1,099 with a ~$1,299 MSRP, and the pitch is clearly for home use rather than replacing a bartender today.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=ptqfF9WPawc

Momcozy Air 1 wearable breast pump: app control + charging case, plus 2-in-1 swing 66 lb

Posted by – January 8, 2026
Category: Exclusive videos

Momcozy’s booth walkthrough centers on two “hands-free” caregiving workflows: an electric swing meant to calm an awake infant, and wearable breast pumps built for discreet, mobile pumping. What connects them is human-factors design—reducing setup steps, keeping controls predictable, and making it easier for a parent to grab a short break without turning childcare into a project. https://momcozy.com/products/momcozy-air-1-ultra-slim-breast-pump

On the swing side, the standout spec is how long it stays useful. Instead of topping out around 30–35 lb like many infant swings, this 2-in-1 unit converts into a stationary toddler seat rated up to 66 lb; in the interview they frame that as usable through roughly age 5. Momcozy also positions it as more than a single motion: multiple swing motions/patterns with adjustable speed, built-in music (eight melodies are mentioned), and a detachable seat with recline options for supervised lounging, reading, or snack time.

For pumping, the focus shifts to app-connected wearables (the demo references Air 1, and Momcozy’s Mobile Flow M9 is another app-driven model in the same lineup). The technical angle is closed-loop power and tracking: a charging case that recharges the pumps between sessions (they claim about 15 pumping sessions per full case charge, depending on duration), plus app control for mode changes, timers, and milk-output logging. Momcozy also leans into fit and discretion—an ultra-slim profile is claimed for Air 1, while M9 marketing emphasizes multi-mode control, 15 intensity steps, and up to 300 mmHg suction—because comfort, seal quality, and flange alignment drive real-world efficiency.

Filmed during CES Las Vegas 2026, the video frames Momcozy as a broad pregnancy-to-toddler ecosystem rather than a single device pitch: soothing, feeding, pumping, and everyday parenting gear that prioritizes portability and low-interruption interaction. The most useful takeaway is the safety and usage framing—swings are for soothing while awake (not sleep), and wearables are about making pumping compatible with commuting, meetings, and travel—so the tech serves the routine, not the other way around.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=mfpGlwR3UYk

Airseekers Tron Ultra 4SWD robotic mower: LiDAR+AI vision, FlowCut 2.0 dual blade

Posted by – January 8, 2026
Category: Exclusive videos

Airseekers is pushing robot mowing beyond “flat suburban lawn” and into mobility-first outdoor robotics: the interview focuses on the new Tron Ultra platform (and what they learned from the first TRON generation). The big theme is reducing setup friction (wire-free operation, simpler boundary concepts) while increasing autonomy via sensor fusion, so the mower behaves more like a small off-road robot than a gadget. https://airseekers-robotics.com/

Later in the video (filmed at CES Las Vegas 2026), Tron Ultra is shown doing maneuvers you normally associate with skid-steer or tracked machines: zero-radius spin, tight-radius turns, and a sideways “crab” move for escaping narrow passages. Airseekers frames this as 4SWD (four-wheel steering + four-wheel drive), where each wheel can be controlled to improve traction, reduce rutting, and hold line on uneven ground, including steep grades around 85% (about 40°) when conditions allow.
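
For reference, the grade-to-angle conversion behind that figure is just the arctangent of rise over run:

```python
# Quick check of the slope figure: an 85% grade (rise/run) corresponds to about 40 degrees.
import math

grade = 0.85
print(f"{math.degrees(math.atan(grade)):.1f} deg")   # ~40.4
```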

Navigation is presented as a LiDAR + AI-vision stack with VSLAM-style mapping and multi-camera coverage (the demo unit references four cameras for near-360 perception). The goal is reliable obstacle detection, no-go zones, and path replanning around lawn furniture, plants, and edges in real time, without the brittleness people associate with weak GNSS areas. Airseekers also talks about expanding signal coverage using beacons to reduce dead zones under trees and near structures, so the robot can keep a clean boundary map.

On cutting, Tron Ultra upgrades the company’s FlowCut concept into a FlowCut 2.0 “3-in-1” approach with a double-bladed design and a wider cutting deck, aimed at higher throughput and finer mulch for nutrient recycling. The pitch is practical: more grass per pass, fewer missed strips, adjustable cutting height, and more consistent mulching so clippings don’t clump or smother the turf during a cut.

Commercially, Airseekers says the first-gen TRON launched via crowdfunding in 2024 and scaled into broader sales in 2025, with claimed shipments around 20k units across the US and Europe and user feedback trending positive despite early firmware bugs. For Tron Ultra, the interview mentions an April launch window and a price “over $2k,” while recent announcements around CES point to an April 2026 Kickstarter target closer to the $3k range depending on configuration, with swappable batteries and fast charging aimed at multi-zone and larger-property coverage.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=qppExGKhUzQ

Frore Systems LiquidJet coldplate: 3D jet-channel cooling, 600 W/cm² hotspots, 7.7°C

Posted by – January 7, 2026
Category: Exclusive videos

Frore Systems’ LiquidJet is a direct-to-chip liquid cooling coldplate aimed at the thermal limits of modern AI accelerators, and it was first unveiled at OCP in October 2025. Instead of treating a GPU package as one uniform heat source, it treats it as a power map, so the coldplate can be tuned for die, HBM stacks, and any localized hot spot that dominates junction temperature. https://www.froresystems.com/products/liquidjet-dlc-coldplate

A lot of today’s data-center coldplates are built with skived 2D microchannels: coolant enters, runs a long path, warms up, and you get a noticeable temperature gradient across the plate. LiquidJet flips that by using short-loop “jet-channel” microstructures and multi-level manifolding, so cooling can be uniform when you want it, or intentionally biased toward a 10×11 mm hot region on a larger die area. Feature sizes can get down to ~75 µm, which is why the manufacturing approach matters a lot.

The manufacturing angle is borrowing semiconductor-style fabrication, but on metal wafers: etch the microstructure rather than machining long channels, then build the stack as a precise, repeatable flow network. In this CES Las Vegas 2026 walkthrough, Frore shows hotspot demos and explains how the approach can support very high heat flux, up to about 600 W/cm² with liquid-metal TIM, versus roughly 300 W/cm² in many conventional plates, while holding a tighter temperature field on the plate.

On current high-power GPU platforms, Frore quotes results like ~7.7°C lower GPU temperature, ~75% higher heat removed per unit flow (kW/LPM), and ~50% lower coldplate mass using copper where it counts and a lighter top construction. Lower required flow (they cite about 1.0–1.4 LPM per kW) can also reduce pumping load and pressure stress on rack plumbing, which matters when a CDU is feeding many parallel loops at scale, with production targeting around June.
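
As a sanity check on those flow numbers, the textbook heat balance ΔT = P / (m_dot * c_p) shows what roughly 1.0–1.4 LPM per kW implies for coolant temperature rise, assuming a water-like coolant (illustrative physics only, not Frore data):

```python
# Coolant temperature rise across the loop for a given heat load and flow rate.
def coolant_delta_t(power_kw: float, flow_lpm: float,
                    cp_j_per_kg_k: float = 4186.0, density_kg_per_l: float = 1.0) -> float:
    mass_flow_kg_s = flow_lpm / 60.0 * density_kg_per_l
    return power_kw * 1000.0 / (mass_flow_kg_s * cp_j_per_kg_k)

for lpm in (1.0, 1.4):
    print(f"1 kW at {lpm} LPM -> dT ~ {coolant_delta_t(1.0, lpm):.1f} C")
# ~14.3 C at 1.0 LPM, ~10.2 C at 1.4 LPM across the loop
```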

Looking ahead, the discussion links cooling needs to packaging trends: single-reticle versus multi-reticle dies, rising total module power toward multi-kilowatt designs, and the need to cool both compute and adjacent HBM without overbuilding the whole loop. If hyperscalers can hold junction temperature down with less flow and less pressure drop, they can often sustain higher clocks and improve throughput-per-watt at the rack level, which is the real prize in an AI factory rack.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=WrwldCJg5-Q

Rokid Glasses HUD update: microLED waveguide, SDK + app store, AI model choice

Posted by – January 7, 2026
Category: Exclusive videos

Rokid’s latest smart-glasses update shows a deliberate split between two product directions: a display-free “AI glasses” form factor for all-day wear, and a lightweight AR option with a subtle heads-up overlay. The conversation stays grounded in practical wearability—swappable styling, prescription support up to about ±15 diopters, and lens choices like clear, tinted, and photochromic—while keeping the same voice-first AI intent. https://global.rokid.com/pages/rokid-glasses

The new display-free model, branded Rokid Style, leans into camera + audio rather than a microdisplay. Public specs around CES 2026 put it near 38.5 g with a 12 MP camera, open-ear speakers, and mic capture for commands, calls, and meeting-style transcription. Internally, it uses a dual-chip split: Qualcomm Snapdragon AR1-class silicon for imaging/AI tasks, plus an ultra-low-power NXP RT600-family MCU for always-on sensing and audio DSP, which helps battery life.

Rokid’s AR glasses remain a different product: dual microLED waveguide displays that render a green monochrome UI, so navigation prompts, captions, teleprompter text, and quick widgets stay glanceable without turning the frames into a bulky visor. Rokid says the earlier prototype matured through crowdfunding and shipment, and the team has already rolled out 300+ optimizations based on daily user feedback. This interview was filmed at CES Las Vegas 2026.

What stands out is the “model choice” story. Depending on region and preference, users can route queries to different AI backends (examples discussed include ChatGPT, Gemini, DeepSeek, and Alibaba Qwen), with cloud translation options as well as offline translation modes. For developers, Rokid points to Android and iOS SDKs/APIs, an AOSP-based stack on the glasses, and a lightweight app model, with an app store already live in China and planned for broader release.

The roadmap is framed through real use cases: an augmented interviewer that suggests better questions in real time, context-aware humor prompts, guided tours that narrate what you see, grocery coaching via short-term memory, and opt-in conference networking overlays. Rokid also hints at full-color waveguide displays as a future step, while acknowledging current cost and yield limits—so near-term progress is likely to come from software, ecosystem, and power management rather than spectacle.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=hJ-t-NnnJGQ

ZenoWell ear neuromodulation: left-ear vagus stimulation, HRV biofeedback, sleep + pain

Posted by – January 7, 2026
Category: Exclusive videos

ZenoWell demonstrates an ear-worn neuromodulation device that delivers transcutaneous auricular vagus nerve stimulation (taVNS) rather than audio. Small electrodes rest on the outer ear to stimulate the auricular branch of the vagus nerve, aiming to shift autonomic state toward parasympathetic “rest-and-digest” and make it easier to downshift into sleep. https://zenowell.ai/

In the demo, stimulation targets the cymba concha and cavum concha regions of the outer ear, and intensity is adjusted with simple +/– controls. The sensation is described as comfortable—closer to a gentle vibration than an electric shock. ZenoWell frames it as a multi-mode device with presets for sleep, acute stress, headache, and pain, plus programs intended to support faster recovery.

The newer generation adds app connectivity and personalization: stimulation parameters can be tuned to biometrics like resting heart rate, heart-rate variability (HRV), and sleep performance, using onboard sensing or third-party wearable data. The team also talks about expanding ear-based sensing (PPG for pulse metrics and even EEG-like signals) toward a tighter closed-loop “sense + stimulate” workflow. This interview was filmed at CES Las Vegas 2026, where taVNS is becoming a more visible consumer wellness category.
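
A hypothetical sketch of what a “sense + stimulate” loop could look like once HRV is in the picture; the thresholds and step logic below are illustrative assumptions, not ZenoWell’s algorithm:

```python
# Hypothetical closed-loop sketch: nudge stimulation intensity based on whether
# HRV (RMSSD) is trending up relative to a session baseline.
def adjust_intensity(intensity: int, baseline_rmssd_ms: float, current_rmssd_ms: float,
                     min_i: int = 1, max_i: int = 10) -> int:
    if current_rmssd_ms > baseline_rmssd_ms * 1.10:
        return max(min_i, intensity - 1)   # parasympathetic shift seen: back off
    if current_rmssd_ms < baseline_rmssd_ms * 0.95:
        return min(max_i, intensity + 1)   # no response yet: step up gently
    return intensity                       # hold steady inside the dead band

print(adjust_intensity(4, baseline_rmssd_ms=35.0, current_rmssd_ms=42.0))   # 3
```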

A practical detail is the left-ear recommendation: right-side vagal pathways can have stronger cardiac influence, so many non-invasive protocols default to the left to reduce unwanted heart effects. A user testimonial describes hyperarousal episodes where heart rate drops from roughly 120 to 80 within about 20 minutes of a session, consistent with a rapid sympathetic-to-parasympathetic shift for some users. They also note a short adaptation period where the brain gets used to the new sensation quickly.

The positioning here is “non-drug adjunct,” not a replacement for medical care, alongside ongoing research interest in taVNS for insomnia across populations and broader neuromodulation for rehab and symptom support. Usage guidance is protocol-driven: sleep programs used regularly for 2–4 weeks, and chronic pain sessions up to twice per day over similar time windows. Pricing discussed ranges from an entry tier around $30 up to roughly $300–$400 for higher-end, app-connected models, and the team mentions exploring reimbursement pathways with insurers in the future.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=k9MkzTl7R5Y

Frore Systems AirJet solid-state cooling for Snapdragon X2 Elite desktop: 21 dBA, 10mm

Posted by – January 6, 2026
Category: Exclusive videos

Frore Systems walks through a Qualcomm reference desktop built around Snapdragon X2 Elite, showing how three AirJet solid-state cooling modules can fit inside a mini PC that’s roughly 10 mm thin while staying very quiet. The point isn’t the industrial design; it’s demonstrating that active cooling doesn’t have to mean fans, thick heatsinks, or wide-open vents when you still want sustained SoC power around the 25 W class. https://froresystems.com/

AirJet is a fully self-contained thermal module that uses ultrasonic MEMS membranes to pull air in and push out high-velocity pulsating jets, trading “big fan airflow” for high static pressure and a controlled internal duct path. Frore’s published numbers put AirJet Mini G2 at about 7.5 W heat removal per chip, ~2.65 mm thickness, ~1,750 Pa back pressure, and ~21 dBA acoustics, which helps explain why stacking multiple chips scales cleanly inside a compact device package.
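
For a rough sense of how the published per-module figure scales, here is a naive sizing sketch using the ~7.5 W removed per AirJet Mini G2; real enclosures also shed a few watts passively, which is presumably how three modules cover a roughly 25 W class SoC:

```python
# Naive module-count sizing: ignore everything except the per-module heat-removal figure
# and whatever the chassis sheds passively.
import math

def modules_needed(sustained_soc_w: float, per_module_w: float = 7.5,
                   passive_w: float = 0.0) -> int:
    return math.ceil(max(sustained_soc_w - passive_w, 0) / per_module_w)

print(modules_needed(25))                 # 4 with no passive help
print(modules_needed(25, passive_w=3))    # 3, roughly the demo's three-module layout
```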

The reference design is shown in two form factors: an ultra-thin square slab and a circular puck variant, both intended as templates that Qualcomm can hand to OEMs rather than a finished retail product. Because intake happens through small, discrete inlets, the demo highlights adding fine filtration (they mention a MERV-14 class filter) to move toward dust-resilient, water-resistant designs that won’t clog like conventional fan grilles. This was filmed at CES Las Vegas 2026, where sealed edge compute is a recurring theme.

In practical terms, the interesting engineering claim is sustained performance: keeping an Arm Windows desktop from thermal throttling once CPU, GPU, and NPU blocks are loaded for long AI sessions, compiles, or local inference. The speaker frames it as “about five times thinner than a Mac mini” while maintaining higher sustained output, but even if you ignore the comparison, the takeaway is a thermal budget strategy that prioritizes flat, mount-anywhere PCs over short benchmark bursts.

If these concepts become shipping systems, expect them to show up behind monitors, inside kiosks, in conference-room AV racks, and in industrial/retail edge boxes where acoustic noise, dust ingress, and service intervals matter. Solid-state active airflow removes fan bearings as a failure point and gives OEMs more freedom on materials (plastic or metal) while keeping the system quiet.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=PKX_83sRMss

Yarbo M Series modular yard robot at CES 2026: LiDAR, mower+snow+trimmer for 0.25–1.5 acre auto

Posted by – January 6, 2026
Category: Exclusive videos

Yarbo is built around a single autonomous Core that turns into different outdoor machines by swapping seasonal modules, so one battery platform can cover mowing, debris handling, and snow work. The navigation approach is sensor fusion—RTK positioning, stereo vision, IMU/odometry, and ultrasonic obstacle sensing—paired with app mapping and automatic return-to-charge behavior.
https://www.yarbo.com/ces-2026

In this interview, filmed at CES Las Vegas 2026, Melody and Cory introduce a new compact M Series that keeps the modular idea but targets typical residential lots (about 0.25 to 1.5 acres) as a more affordable entry point. The unit shown adds a LiDAR-assisted perception layer alongside dual cameras, aiming for 360° situational awareness when combined with ultrasonic sensing. Pricing was still being finalized, but the discussed target range was roughly $2,500–$3,000.

For spring and summer, the mower module handles the main cut while a rear trimmer can run at the same time for boundaries the deck can’t reach, turning “mow + edge” into one job. The trimmer uses standard spool line (not proprietary), with capacity cited around 23 m, and the cutting height adjustment shown in the demo spans about 2–4 inches. This fits the broader theme: treat yard care as repeatable autonomy, not a one-off gadget, and keep maintenance simple at the edge.

When conditions change, modules swap: a collector can pick up clippings, leaves, pine needles, and small debris, then auto-dump when load reaches about 55 lb at a user-defined drop zone on the map. For snow, they show an angled plow blade that can yaw about 25° left or right, plus a two-stage snow blower option intended for heavier accumulation. They also describe tying the robot to a weather service so clearing can start overnight before morning routines start.

The conversation also covers ownership models: splitting cost across neighbors, or landscapers running small fleets that can be dropped off, charged, and rotated between jobs. As a reference point, Yarbo’s full-size mower module is rated around 6.2 acres per week at a typical twice-weekly schedule, while the M Series is positioned as the compact sibling. Warranty was described as 2 years standard with optional extensions up to 5, and safety relies on redundant sensing plus obstacle avoidance so the platform can run without constant supervision, with new attachments expected to extend capabilities over time.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=oZa5ea1fFZI

BLUETTI Pioneer Na+ sodium-ion 900Wh: -25°C portable power station + 1500W inverter, Charger 2 DC-DC

Posted by – January 6, 2026
Category: Exclusive videos

BLUETTI’s latest portable power lineup leans into two themes: higher energy density in smaller enclosures, and more complete charging “ecosystems” around the box. In this interview you get a quick look at the Elite Series form factor, plus the Elite 100 V2—a 1,024Wh LiFePO4 (LFP) unit rated around 1,800W, using bio-circular attributed polycarbonate (mass-balance materials such as Covestro Bayblend RE) to lower the housing’s carbon footprint while keeping a travel-ready enclosure for field use. https://www.bluettipower.com/pages/ces

A big real-world pain point for campers and overlanders is recharging away from the wall, and BLUETTI’s Charger 2 targets that directly with a DC-DC alternator + solar approach. It’s positioned as a smart energy hub: up to 800W from the vehicle side, up to 600W PV input (13–50V, 20A class), and up to 1,200W out toward a compatible power station, with cut-off logic intended to protect the starter battery once the engine is off or voltage drops. Installation is described as roughly an hour for a typical setup.
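
The protection behavior is essentially voltage hysteresis. The sketch below is a hypothetical illustration of that logic; the thresholds are placeholders, not BLUETTI’s published cut-off values:

```python
# Hypothetical starter-battery protection: keep charging while voltage stays healthy,
# stop when it sags with the engine off, and only resume once the alternator is clearly on.
def should_charge(battery_v: float, charging_now: bool,
                  cutoff_v: float = 12.6, resume_v: float = 13.2) -> bool:
    if charging_now:
        return battery_v > cutoff_v    # keep going until voltage sags below the cut-off
    return battery_v >= resume_v       # restart only once the alternator lifts the bus voltage

print(should_charge(13.8, charging_now=True))    # True: engine running, keep drawing
print(should_charge(12.4, charging_now=True))    # False: protect the starter battery
```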

The most technically distinct box on the table is Pioneer Na, marketed as an early commercial sodium-ion portable power station: 900Wh capacity with a 1,500W inverter-class output, fast charging up to about 1.9kW, and cold-weather operation down to roughly −25°C (with charging supported down to around −15°C). Sodium-ion trades energy density for thermal robustness and long cycle life (often quoted 4,000+ cycles), so it’s less about being the lightest pack and more about staying useful when LFP chemistry can feel sluggish in deep winter cold.

Filmed at CES Unveiled in Las Vegas 2026, the booth demo also nods to appliance-first backup thinking with a “FridgePower” concept: a slim, fridge-oriented backup station aimed at riding out outages without losing food, where surge headroom for compressor start and recharge paths (PV, wall, and vehicle DC) matter as much as the headline watt-hour figure. In other words, “portable power” is increasingly about power electronics behavior under load, not just battery capacity on the spec sheet.

Overall, BLUETTI is framing portable power as modular infrastructure: LFP for mainstream density and cost, sodium-ion for low-temperature resilience, and dedicated DC charging hardware to make energy replenishment predictable while traveling. If you’re comparing boxes, the useful spec list isn’t only Wh and W; it’s PV voltage/current windows, DC port topology, thermal operating limits, surge characteristics, and cycle-life expectations for your off-grid gear.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=-fdRpvARPB8

Vrch Agentic VJ System: real-time audio-to-diffusion visuals on local GPU + MIDI control

Posted by – January 6, 2026
Category: Exclusive videos

Vrch’s Agentic VJ System is a compact, backpack-friendly VJ computer that listens to a live audio feed (and can also ingest camera input) to generate visuals in real time, so a performance doesn’t depend on pre-rendered clips or a huge media library. https://www.vrch.io/aivj

Under the hood it chains several local AI components: a custom live-audio analysis model extracts tempo/BPM plus higher-level cues like genre and mood, a language-based agent turns those parameters into scene prompts (the demo mentions an Alibaba Qwen family model), and a diffusion model renders the frames on a discrete GPU (shown with RTX 4080-class hardware, with an upgrade path to 4090-class) with low latency.
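
To make the analysis-to-prompt hop concrete, here is an illustrative sketch (not Vrch’s models) of mapping live audio features to a scene prompt that a local diffusion model would then render:

```python
# Illustrative feature-to-prompt mapping: BPM, genre, and mood become a scene prompt,
# with an optional operator hint appended on the fly.
def features_to_prompt(bpm: float, genre: str, mood: str, user_hint: str = "") -> str:
    energy = ("frenetic strobing" if bpm > 140
              else "slow flowing" if bpm < 100
              else "pulsing")
    base = f"{mood} {genre} visuals, {energy} motion, stage lighting, abstract geometry"
    return f"{base}, {user_hint}" if user_hint else base

print(features_to_prompt(128, "techno", "dark", user_hint="neon wireframe tunnels"))
```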

The operator experience is closer to a visual synthesizer than “AI asset generation”: a touchscreen UI shows the auto-analysis, you can override or steer the prompt on the fly, and control can come from DJ gear via MIDI/OSC, gamepads, or other controllers. The hardware is designed around a swappable GPU and a tight parallel pipeline, aiming for high on-device throughput without cloud dependency.

For bigger stages, the system can scale out by running multiple nodes and stitching outputs over WebSockets, so each box renders a tile of a larger canvas for higher resolution projection. In the interview they reference tests in London’s Outernet immersive venue, and note that early prototypes are being rented frequently, with most interest coming from outside China; the clip itself was filmed at CES Las Vegas 2026 in Eureka Park.
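
The multi-node scaling is conceptually a tiling problem: each box renders one region of a larger canvas and the outputs are stitched together. A minimal sketch of the region math (the WebSocket stitching itself is omitted here):

```python
# Split one large canvas into per-node tiles using a simple horizontal strip layout.
def tile_regions(canvas_w: int, canvas_h: int, nodes: int) -> list[tuple[int, int, int, int]]:
    """Return (x, y, w, h) for each node; the last node absorbs any rounding remainder."""
    strip = canvas_w // nodes
    return [(i * strip, 0,
             strip if i < nodes - 1 else canvas_w - i * strip,
             canvas_h)
            for i in range(nodes)]

print(tile_regions(7680, 2160, 4))   # four nodes each rendering ~1920x2160 of an 8K-wide wall
```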

Today it’s an x86 Linux build packaged as a 3D-printed prototype, with a target mass-production price around USD $2,000–$3,000 depending on performance tier. They also acknowledge a future path toward ARM/NPU acceleration, but the current stack leans on NVIDIA CUDA, which keeps the real-time render path straightforward while the product roadmap takes shape.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=PJtupyq1R6g

Pebble Core Devices CEO Eric Migicovsky at CES 2026: Pebble Round 2, Pebble Time 2, Pebble 2 Duo

Posted by – January 5, 2026
Category: Exclusive videos

Eric Migicovsky frames the Pebble comeback as a response to a wearables market that drifted toward “a phone on your wrist.” The new lineup keeps the original thesis: e-paper readability in sun, physical buttons, notification triage, music control, and basic step/sleep tracking, without cellular, app sprawl, or constant charging anxiety. AI shows up as a lightweight helper rather than a center of gravity, which is part of the point: fewer features, executed cleanly. https://repebble.com/watch

On the hardware side, the Round 2 is a clear example of how far low-power silicon and display modules have moved: the bezel disappears, contrast is deeper, and battery life jumps from a few days on the old Time Round to roughly two weeks on the reboot. Pebble is still leaning on Sharp memory-display heritage and a reflective color e-paper LCD, with the design tuned for ambient light plus a wrist-flick backlight when it is dark.

Software is where the reboot becomes more interesting technically. PebbleOS sits on FreeRTOS rather than embedded Linux, with a compact UI framework and kilobyte-scale apps and watchfaces that load fast and sip power. Migicovsky talks about already using modern AI coding tools to generate watchfaces and apps, and the longer-term idea is to “describe an app” by voice and have the watch scaffold it automatically.

The story also reflects a rare IP arc: Pebble was acquired by Fitbit, then Fitbit by Google, and Google ultimately agreed to open-source large parts of the original operating system and tooling. That shift enables development in public, community pull requests, and a more durable ecosystem for iOS and Android users who want long battery life and tactile controls, as shown in this CES 2026 booth chat.

Alongside the watches, Pebble Index 01 extends the same minimal-compute philosophy into a ring: a thumb button and microphone capture short thoughts, then Bluetooth sync streams audio to the phone where speech-to-text and an on-device LLM can route it into notes, reminders, or timers. It is positioned as “external memory” more than a health tracker, using local processing and long-life batteries to keep the device simple, private, and quick in the moment.

I’m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=mcJsEaKZpE8

FlexAI PaaS: AI training + inference right-sizing, OpenAI-compatible API, multi-cloud

Posted by – January 5, 2026
Category: Exclusive videos

FlexAI frames itself as a platform-as-a-service for AI teams that need to train, fine-tune, and serve models without turning every ML sprint into MLOps firefighting. The idea is workload right-sizing: pick the smallest viable cluster shape for each job, reduce idle GPU time, and keep iteration loops tight, with a stated goal of cutting typical costs to around 30% while improving time-to-train for real projects. https://www.flex.ai/


On the training path, you connect a GitHub repo, point to your requirements file and entry point, then choose node count and accelerator count. Instead of manually aligning drivers, CUDA, PyTorch builds, containers, and dependency pinning, FlexAI automates the environment so scaling from 8 GPUs to 16 is a config edit rather than a re-install cycle. The platform is described as deployable across regions (including France and the US) and able to run on AWS, GCP, or Azure when you want it inside your broader stack.

Inference gets treated like a sizing and orchestration problem rather than “pick a GPU and hope”: an Inference Sizer asks for throughput targets (requests per second), token sizes, and model class, then recommends GPU SKU and GPU count based on benchmarking. The demo highlights fractional GPUs (down to slices such as 1/7), autoscaling bounds, and an OpenAI-compatible API endpoint you can drop into an app, plus built-in observability for latency, throughput, and utilization.
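
In practice, “OpenAI-compatible” means pointing the standard openai client at a different base URL. A minimal sketch, with a placeholder endpoint and model name rather than FlexAI’s actual values:

```python
# Standard openai (v1.x) client pointed at an OpenAI-compatible endpoint.
# The base_url and model name below are placeholders for illustration.
from openai import OpenAI

client = OpenAI(base_url="https://inference.example-flexai-endpoint.com/v1",
                api_key="YOUR_KEY")

resp = client.chat.completions.create(
    model="my-finetuned-model",   # whatever model you deployed on the platform
    messages=[{"role": "user", "content": "Summarize today's error logs in two lines."}],
)
print(resp.choices[0].message.content)
```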

A practical driver here is the move away from frontier-model APIs once cost curves and hallucination risk become product liability: teams start with hosted endpoints, then migrate to fine-tuned open models (or train from scratch for narrow domains like space, legal, or healthcare) where behavior, evaluation, and data control matter. Filmed at Web Summit Lisbon 2025, the interview also sketches FlexAI’s company arc: a public $30M seed round (Alpha Intelligence Capital, Elaia, Heartcore), a roughly 30-person team spread across France, India, and the US, and plans to build more community presence via Station F in Paris in January.

The longer-term bet is heterogeneous compute without lock-in: support for multiple GPU families (NVIDIA H100/H200 and newer Blackwell-era options like GB200/B200, plus AMD paths such as MI300-class inference), and the ability to route workloads across clouds while keeping the developer surface area stable. Combine that with utilization-driven scheduling and you can imagine carbon-aware placement — steering big training runs to cheaper, lower-carbon grids when deadlines allow — without forcing a rewrite of pipelines.

I’m publishing about 90+ videos from Embedded World North America 2025, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in next few days) Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=rtlVVemXNDw

Startup: OneDonate charity donation app: 3-tap giving, 3.5% fee, impact tracking + donor analytics

Posted by – January 5, 2026
Category: Exclusive videos

OneDonate is a mobile-first platform founded by Arijit to make donating feel as quick as any other in-app transaction: donors browse charities by category (health, environment, mental health, sport and more) and can complete a donation in a few taps, while charities sit inside a shared discovery layer instead of sending supporters through long web forms, logins, and repeated data entry. https://onedonate.co.uk/


Product-wise, it behaves like a searchable charity directory plus a lightweight checkout and record system. Users can filter by cause, set preferences, keep a donation-history ledger, and (where applicable) follow UK-style flows such as Gift Aid. Critically, OneDonate states it doesn’t handle the money itself: payments are processed by external providers and routed directly to the selected charity, with emphasis on secure, PCI-compliant gateways and separation of donor financial data from charity operations.

A core idea is closing the feedback loop between donors and organisations. On the donor side that means impact-style updates and a sense of connection to a cause; on the charity side it means analytics around who is donating, what categories convert, and which cohorts are showing up. The conversation was filmed at Web Summit Lisbon 2025, and the “billionaire budget” example hints at programmable giving: set a fixed budget and auto-allocate across multiple causes rather than making a single manual choice.

The business model is positioned as fee transparency instead of tip prompts. OneDonate talks about a disclosed platform fee around 3.5%, aiming to reduce the effective “extra” people pay on top of a donation and keep more money predictable for organisations, especially smaller charities trying to expand reach to Gen Z. The team described a recent UK go-live and a push to onboard more European charities into the same ecosystem.
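
Rough arithmetic only: the breakdown below assumes the fee comes off the gross gift and that UK Gift Aid (25p reclaimed per £1 on eligible donations) applies to the gross amount; how OneDonate actually sequences fees and Gift Aid is not specified in the interview:

```python
# Illustrative donation breakdown under the stated assumptions (fee on gross, Gift Aid on gross).
def donation_breakdown(amount_gbp: float, fee_rate: float = 0.035, gift_aid: bool = True) -> dict:
    fee = amount_gbp * fee_rate
    reclaim = amount_gbp * 0.25 if gift_aid else 0.0   # UK Gift Aid: 25p per £1 donated
    return {"fee": round(fee, 2), "to_charity": round(amount_gbp - fee + reclaim, 2)}

print(donation_breakdown(20.0))   # {'fee': 0.7, 'to_charity': 24.3}
```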

I’m publishing about 90+ videos from Embedded World North America 2025, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in next few days) Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=48kvHxU74mY

Startup: Launchify AI Go-To-Market Engineer: CRM-driven ICP, persona signals, outreach orchestration

Posted by – January 5, 2026
Category: Exclusive videos

Launchify (sometimes styled Launchyfi) describes an “AI go-to-market engineer”: an agentic layer that can plan, run, and optimize a startup’s GTM motion from a chat-style prompt interface, instead of hiring a dedicated GTM engineer or stitching together an agency plus a chaotic toolchain. The emphasis is operating the revenue engine end-to-end, with specialist agents that connect to your stack and execute repeatable plays while founders stay focused on product and customer discovery. https://launchyfi.com/


The workflow starts with deep CRM integration, treating the CRM as the source of truth for first-party pipeline data. Launchify analyzes historical deals to infer your actual ICP (not the aspirational one), then expands into firmographics like vertical, company size, and revenue bands. It also builds persona-level buyer maps (Head of Support, CMO, VP Sales, etc.), capturing pain points, motivations, KPIs, and common objections so targeting and messaging are grounded in proof.
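
Launchify’s internals are not shown, but the “infer the actual ICP from historical deals” step could look roughly like grouping closed-won deals by firmographic segment and ranking segments by win rate; the field names below are assumptions, not Launchify’s schema.

```python
# Rough sketch of ICP inference from CRM history: group deals by firmographic
# segment and rank segments by win rate. Field names are assumptions.
from collections import defaultdict

deals = [
    {"vertical": "SaaS", "size": "11-50", "won": True},
    {"vertical": "SaaS", "size": "11-50", "won": True},
    {"vertical": "SaaS", "size": "201-500", "won": False},
    {"vertical": "Retail", "size": "11-50", "won": False},
]

stats = defaultdict(lambda: {"won": 0, "total": 0})
for d in deals:
    key = (d["vertical"], d["size"])
    stats[key]["total"] += 1
    stats[key]["won"] += d["won"]

# Segments with the best historical win rate are the evidence-based ICP.
icp = sorted(stats.items(), key=lambda kv: kv[1]["won"] / kv[1]["total"], reverse=True)
for (vertical, size), s in icp:
    print(f"{vertical} / {size}: {s['won']}/{s['total']} won")
```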

From there, execution is framed as orchestration across sales-tech primitives: prospecting, enrichment, sequencing, and multichannel outreach. In practice that means automating lead discovery, attaching contact data, and running outbound via email plus LinkedIn automation, with guardrails around deliverability limits and LinkedIn throttling. The product focus in this clip is B2B SaaS outbound rather than paid acquisition, so signal-led segmentation and pipeline hygiene become the raw material for the agent.
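
The guardrails piece is essentially rate-limited sequencing; here is a minimal sketch assuming per-channel daily caps, which is an illustration of the idea rather than anything Launchify documents.

```python
# Minimal sketch of channel-capped outreach sequencing. Daily caps and channel
# names are assumptions meant to illustrate deliverability/throttling guardrails.
from collections import Counter

DAILY_CAPS = {"email": 50, "linkedin": 20}

def plan_day(queue: list[dict]) -> list[dict]:
    """Pick today's sends without exceeding any channel's daily cap."""
    sent = Counter()
    todays_batch = []
    for step in queue:              # queue items: {"lead": ..., "channel": ...}
        channel = step["channel"]
        if sent[channel] < DAILY_CAPS[channel]:
            todays_batch.append(step)
            sent[channel] += 1
    return todays_batch             # everything else rolls to the next day

queue = [{"lead": f"lead{i}", "channel": "linkedin"} for i in range(30)]
print(len(plan_day(queue)))  # 20 — the LinkedIn cap holds the rest back
```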

Zooming out, the interview frames GTM engineering as a shift away from spray-and-pray volume toward a measurable system that continuously refines ICP and adapts to market signals. This was filmed at Web Summit Lisbon 2025, where the team talked about building in stealth, opening a waitlist, and onboarding startups in cohorts after the event at the booth.

For technical founders, the most useful lens is RevOps plus automation: clean objects, consistent stage definitions, and tight feedback loops determine whether an AI agent can create pipeline rather than activity. The evaluation is straightforward—conversion by persona, win-rate by segment, pipeline velocity, and CAC payback—while keeping human review for compliance, consent, and brand voice so execution stays aligned with intent and value.
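
Those evaluation metrics are easy to make concrete; the snippet below applies the standard pipeline-velocity and CAC-payback formulas to made-up inputs.

```python
# Standard GTM metric formulas with invented inputs, just to make the
# evaluation loop concrete.

def pipeline_velocity(opps: int, win_rate: float, avg_deal: float, cycle_days: float) -> float:
    """Expected pipeline value converted per day."""
    return opps * win_rate * avg_deal / cycle_days

def cac_payback_months(cac: float, monthly_revenue: float, gross_margin: float) -> float:
    """Months to recover customer acquisition cost from gross profit."""
    return cac / (monthly_revenue * gross_margin)

print(pipeline_velocity(opps=40, win_rate=0.25, avg_deal=12_000, cycle_days=60))  # 2000.0 per day
print(cac_payback_months(cac=9_000, monthly_revenue=1_000, gross_margin=0.8))     # 11.25 months
```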

I’m publishing about 90+ videos from Embedded World North America 2025, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=Alq5MYrF3Uo

Unicorn Factory Lisboa startup incubator, acceleration, €1B+ raised, global scaling

Posted by – January 4, 2026
Category: Exclusive videos

Unicorn Factory Lisboa is a Lisbon City Hall-backed innovation hub that helps founders move from idea to revenue to international scale, without having to “figure out the ecosystem” alone. It blends incubation and acceleration with practical support like pitch preparation, hiring playbooks, mentor access, and investor introductions, and it reports supporting over 1,000 startups and helping teams raise over €1B across its network. https://unicornfactorylisboa.com/


In this short walkthrough, you see how the booth acts as a live showroom for a rotating cohort of startups, typically spanning pre-seed through post-seed and up to Series A. The team frames the space as a place to make the ecosystem legible: founders can demo products, meet partners, and join roundtables with the public and private stakeholders who shape Lisbon’s startup pipeline.

The conversation also shows how the event moment is used to amplify year-round programs, including an 8-month Scaling Up track aimed at post-seed scaleups with product–market fit, revenue signals, and an international growth plan. Filmed at Web Summit Lisbon 2025, it captures the “innovation week” dynamic where municipal actors, ecosystem builders, and founders cross-pollinate quickly around funding, GTM strategy, and org design for growth.

What comes through is a public-private model for startup enablement: city-led convening power paired with operator-grade acceleration, plus thematic tracks that connect startups to sector partners (for example, Net Zero and health-focused initiatives). If you are building in Europe, the clip is a concise look at how a city tries to turn community, capital, and capability-building into repeatable startup momentum.

I’m publishing about 90+ videos from Embedded World North America 2025, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=UxvA0s_cRhk

Chatronix Multi-LLM Turbo Mode: One Perfect Answer, DeepSeek vs GPT, Claude, Gemini

Posted by – January 4, 2026
Category: Exclusive videos

Chatronix is a multi-model chat workspace that runs one prompt through a bundle of leading LLMs in parallel (they describe it as “for the price of ChatGPT, you get six models at once”), then synthesizes a single merged output they call One Perfect Answer. The idea is classic model-ensemble orchestration: keep diversity (different reasoning styles, tool use, and web-search behaviors) while using aggregation to smooth out single-model failure modes like brittle logic, missing context, or hallucination. https://chatronix.ai/
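
Chatronix has not published its internals, but the “one prompt, many models, one merged answer” pattern is a classic fan-out-and-aggregate loop; this sketch uses placeholder model calls, not real vendor SDKs, to show the shape of it.

```python
# Hypothetical fan-out/aggregate sketch of a multi-LLM "one merged answer" flow.
# The model callable is a placeholder, not a real vendor SDK call.
import asyncio

async def ask_model(name: str, prompt: str) -> tuple[str, str]:
    await asyncio.sleep(0.1)                     # stand-in for a network request
    return name, f"[{name}] answer to: {prompt}"

async def one_merged_answer(prompt: str, models: list[str]) -> str:
    # All models are queried in parallel, so latency tracks the slowest model,
    # not the sum of all of them.
    candidates = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    # A real arbitration layer would distill, cross-check, and summarize the
    # candidates; concatenation here just marks where that step plugs in.
    return "\n".join(answer for _, answer in candidates)

print(asyncio.run(one_merged_answer("Compare these three laptops", ["model_a", "model_b", "model_c"])))
```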


In the demo, Andy (founding CPO) explains that Chatronix intentionally avoids “rating” models head-to-head, since they partner with providers; instead you either pick an individual engine output or rely on the merged response. Under the hood, this implies a proprietary arbitration layer that does response distillation, redundancy checking, and summarization across multiple candidate answers, with an emphasis on concise, decision-ready text rather than a long, transcript-like dump of tokens.
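
One simple way to approximate the redundancy-checking step is to keep statements that multiple candidate answers agree on; the sketch below uses naive sentence overlap purely as an illustration, since the actual arbitration layer is proprietary.

```python
# Naive redundancy check across candidate answers: keep sentences that appear
# (after normalisation) in at least two model outputs. Purely illustrative.
from collections import Counter

def consensus_sentences(answers: list[str], min_votes: int = 2) -> list[str]:
    counts = Counter()
    for answer in answers:
        seen = {s.strip().lower() for s in answer.split(".") if s.strip()}
        counts.update(seen)  # each answer votes once per unique sentence
    return [s for s, votes in counts.items() if votes >= min_votes]

answers = [
    "Six models run in parallel. The merged answer is the default view.",
    "The merged answer is the default view. Each model tab is inspectable.",
    "Each model tab is inspectable. Six models run in parallel.",
]
print(consensus_sentences(answers))  # every sentence here is backed by two answers
```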

The UI concept is simple but practical for power users: model tabs/icons let you inspect each engine’s response (they call out DeepSeek for longer-form replies, plus options like Gemini and Perplexity), and the merged answer is positioned as the default when you want a compressed takeaway. They also mention attachment support, including images, where only a subset of the connected models may receive vision inputs depending on capability and cost, which is a common constraint in multi-LLM routing pipelines.
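
That attachment detail hints at capability-based routing; here is a tiny sketch of the decision, assuming a per-model capability table invented for illustration.

```python
# Capability-based routing sketch: only send an attachment to models that
# declare support for it. The capability table is an assumption, not Chatronix data.
MODEL_CAPS = {
    "model_a": {"text", "image"},
    "model_b": {"text"},
    "model_c": {"text", "image"},
}

def route(models: dict[str, set], required: set[str]) -> list[str]:
    """Return the models whose capabilities cover everything the request needs."""
    return [name for name, caps in models.items() if required <= caps]

print(route(MODEL_CAPS, {"text", "image"}))  # ['model_a', 'model_c']
print(route(MODEL_CAPS, {"text"}))           # all three models qualify
```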

Pricing and go-to-market are framed as consumer-first: about $25/month, reportedly live with early paying users after a mid-year launch, and they’re looking for seed funding after bootstrapping. A key detail is customization: if you want longer or more structured output, you steer it with prompting, and the “six answers + merged answer” pattern gives you a built-in range of styles for the same request. The interview itself was filmed on the Web Summit show floor in Lisbon, with the product positioned as a general-purpose B2C layer for anyone juggling multiple LLM subscriptions in daily work.

Stepping back, Chatronix fits into a fast-growing category of LLM aggregators that sit above vendors and sell workflow: parallel inference, comparative viewing, synthesis, and a consistent history across models. The interesting technical tension is cost vs quality: “highest-tier everything” is expensive, so the platform has to choose model versions, route selectively, and keep latency predictable while still making the merged answer feel more reliable than any single engine alone, even when prompts vary widely across tasks.

I’m publishing about 90+ videos from Embedded World North America 2025, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=C91hgQz4RKg

Startup: Talma HR Copilot for Recruiting: AI sourcing, interview scorecards, candidate screening

Posted by – January 4, 2026
Category: Exclusive videos

Talma describes itself as an HR copilot that “clones” a talent acquisition team: an AI agent sits between a hiring company and the market, helping source, interview, and score candidates so early-stage teams can scale without turning recruiting into a full-time fire drill. It’s positioned for startups and scaleups, but also for freelance recruiters who want repeatable, automated pipelines across European hubs like Paris, Berlin, and Barcelona. https://talma.ai/


A key theme is methodology over hype: before you search profiles or open an ATS, you write the role brief, define competencies, and build a scorecard that turns “gut feel” hiring into structured evaluation. That scorecard becomes the backbone for consistent screening questions, calibrated interviewer feedback, and faster decisions—especially when the first hires set the trajectory for the next year.
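
The scorecard idea is straightforward to express in code: weight each competency, score interviews against it, and compare candidates on the same rubric; the competencies and weights below are invented, not Talma’s template.

```python
# Illustrative competency scorecard: weighted scoring turns interviewer
# feedback into a comparable number. Competencies and weights are invented.
SCORECARD = {"role_skills": 0.4, "communication": 0.3, "ownership": 0.3}

def weighted_score(ratings: dict[str, int]) -> float:
    """ratings: competency -> 1..5 rating agreed by the interview panel."""
    return round(sum(SCORECARD[c] * ratings[c] for c in SCORECARD), 2)

candidate_a = {"role_skills": 5, "communication": 3, "ownership": 4}
candidate_b = {"role_skills": 4, "communication": 4, "ownership": 4}
print(weighted_score(candidate_a))  # 4.1
print(weighted_score(candidate_b))  # 4.0
```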

Technically, this kind of copilot usually blends LLM-driven conversation, semantic matching (embeddings), and retrieval over job requirements, candidate profiles, and prior feedback to keep context tight. The practical wins are automated outreach, pre-screens, scheduling, interview-note synthesis, and ranking summaries—while keeping a human in the loop for final decisions, bias checks, and compliance needs like consent, audit trails, and GDPR limits. This interview was filmed at Web Summit Lisbon 2025, and the pitch is clearly about making structured hiring easier to run at speed.
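
To make the semantic-matching part concrete, here is a minimal cosine-similarity ranking over toy vectors; a production copilot would get embeddings from a real model, but the ranking step looks the same.

```python
# Toy semantic matching: rank candidate profiles against a job brief by cosine
# similarity. Vectors are made up; a real system would use an embedding model.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

job_brief = [0.9, 0.1, 0.4]                     # pretend embedding of the role brief
candidates = {
    "candidate_1": [0.8, 0.2, 0.5],
    "candidate_2": [0.1, 0.9, 0.3],
}
ranked = sorted(candidates.items(), key=lambda kv: cosine(job_brief, kv[1]), reverse=True)
for name, vec in ranked:
    print(name, round(cosine(job_brief, vec), 3))
```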

The value proposition is less “replace recruiters” and more “compress time-to-hire and reduce variance”: fewer missed signals, fewer unstructured interviews, and clearer trade-offs when comparing candidates. If Talma plugs cleanly into the tools teams already use (calendar, ATS/CRM, email) and keeps scoring transparent, it becomes a leverage layer for lean teams hiring across markets and time zones, without losing the human judgment that makes a hire stick.

I’m publishing about 90+ videos from Embedded World North America 2025, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=-TiQCFBnEq0