AIPC Local LLM Box: AR-First PC + AI Box, ROCm, Ryzen 7 780M, Ryzen AI Max+ 395 (126 TOPS), 128GB, 10GbE

Posted by – February 9, 2026
Category: Exclusive videos

AIPC shows two takes on AR-first compute: a portable, AR-ready AI PC and a compact local-inference box aimed at running large language models without cloud dependency. The portable system is positioned as a self-contained workstation with USB-C DisplayPort output for direct headset or display connection, plus enough GPU compute to handle everyday office workloads and light gaming in a single device.

On the handheld/keyboard form factor, the demo unit is specced around an AMD Ryzen 7 with Radeon 780M graphics, 32 GB RAM, up to 2 TB of storage, and Windows 11 Pro. The pitch is practicality: a built-in pointing device, active-cooling airflow channels, and an estimated ~8 hours of battery in monitor-style use, with keyboard layouts beyond the US version shown also possible.

The bigger story is the “AIPC” local AI box built around the AMD Ryzen AI Max+ 395, quoted at 126 TOPS, paired with Radeon 8060-class integrated graphics and configurable with 96–128 GB of memory and a 2 TB SSD. The point of the high memory ceiling is straightforward: bigger models, larger context windows, more KV-cache headroom, and fewer compromises when you run heavier Qwen/Llama-class checkpoints locally rather than streaming tokens from a hosted API.
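To make the memory-ceiling argument concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is an assumption for illustration (a 70B-parameter model at 4-bit quantization, with grouped-query-attention dimensions loosely modeled on a Llama-3-70B-class checkpoint), not a spec from the video:

```python
# Rough local-LLM memory budgeting. All numbers below are illustrative
# assumptions, not vendor specs from the video.

def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size: 2 (K and V) x layers x kv_heads x head_dim x tokens."""
    return 2 * layers * kv_heads * head_dim * context_tokens * bytes_per_elem / 1e9

# Hypothetical example: a 70B-parameter model quantized to 4 bits per weight...
print(f"weights : {weights_gb(70, 4):.1f} GB")
# ...plus a 32k-token context with assumed GQA dims (80 layers, 8 KV heads, head_dim 128)
print(f"kv cache: {kv_cache_gb(80, 8, 128, 32_768):.1f} GB")
```

Under those assumptions, weights plus a 32k-token KV cache come to roughly 46 GB: comfortably inside a 96–128 GB unified-memory budget, but well beyond a typical 16–24 GB discrete GPU.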

The workflow shown is very “local model ops”: a Windows environment with a model marketplace/manager (Nova Studio) to download, start/stop, and swap models, then run offline prompts (including quick multilingual queries) with no internet access. They also demo voice recording and TTS voice cloning to produce speech in another language using the recorded sample, framing the box as a 24/7 agent machine for coding, research, and multimodal generation with predictable cost and privacy characteristics.
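Nova Studio's own API isn't shown in detail, but many local model managers (llama.cpp's server, LM Studio, Ollama) expose an OpenAI-compatible HTTP endpoint, so a minimal offline-prompt sketch under that assumption looks like this; the port and model name are placeholders:

```python
# Minimal offline prompt against a local model server. Assumes an
# OpenAI-compatible HTTP endpoint on localhost (as exposed by llama.cpp
# server, LM Studio, Ollama, etc.); Nova Studio's actual API may differ.
import json
import urllib.request

def local_chat(prompt: str, base_url: str = "http://localhost:8080/v1") -> str:
    payload = {
        "model": "local-model",  # placeholder name; server-dependent
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# The whole round trip stays on the box: no API key, no metered tokens.
print(local_chat("Summarize the tradeoffs of running LLMs locally."))
```

That is the pattern behind the "24/7 agent machine" framing: latency and cost are bounded by local compute, not by a network or a per-token bill.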

Filmed at ISE 2026 in Barcelona, the interview leans into platform tradeoffs: comparisons to Mac Studio and NVIDIA “AI boxes,” plus the AMD ROCm vs CUDA ecosystem discussion, and the practical I/O checklist (HDMI, 10GbE Ethernet, high-speed external storage). The core claim is that this class of high-TOPS APU + large unified memory makes “biggish” local LLM work feel less like a lab setup and more like a normal desktop routine, at a fixed hardware price.
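On the ROCm-vs-CUDA point: PyTorch's ROCm builds reuse the torch.cuda namespace on top of HIP, so much CUDA-targeted code runs unmodified on AMD GPUs. A quick portability check, assuming a ROCm-enabled PyTorch install on a box like this, might look like:

```python
# Portability check: PyTorch's ROCm builds expose HIP through the
# torch.cuda API. Assumes a ROCm-enabled PyTorch install; output is
# illustrative and device-dependent.
import torch

if torch.cuda.is_available():                   # True on ROCm builds too
    name = torch.cuda.get_device_name(0)        # e.g. an RDNA3 iGPU string
    backend = "HIP/ROCm" if torch.version.hip else "CUDA"
    print(f"accelerator: {name} via {backend}")
    x = torch.randn(1024, 1024, device="cuda")  # 'cuda' maps to HIP on ROCm
    print((x @ x).sum().item())
else:
    print("no GPU backend available; falling back to CPU")
```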

I’m publishing 60+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).

“Super Thanks” are welcome 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=J6KHnxFN3ZM