ARM demonstrated TrustZone Media Protection working with the open-source Trusted Execution Environment, fully integrated with the Android operating system.
Here's my full video in 4K from my front row seat of the ARM Press Conference at Computex 2018. You can also watch my Interview with Nandan Nayampally here.
The ARM Cortex-A76 is a new microarchitecture based on DynamIQ technology. On 7nm it delivers 35% faster, laptop-class performance (comparable to an Intel Core i3 or Core i5) with 40% improved efficiency, while maintaining the power efficiency of a smartphone. The Cortex-A76 also delivers a 4x compute performance improvement for AI/ML at the edge. The new ARM Mali-G76 enables higher-performance gaming and cross-platform experiences with 30% better efficiency and performance density, as the gaming market is expected to reach $137.9 billion in 2018 and possibly as much as $180 billion by 2021, with around 60% of that on mobile. The ARM Mali-V76 supports 8K60 video decode, and can also handle simultaneous 4K encode and decode for 4K video conferencing.
The new Arm Allinea Studio release is a comprehensive, integrated tools suite that helps scientific computing, HPC, and enterprise developers achieve the best performance on modern server-class Arm-based platforms. Check out https://developer.arm.com/hpc for more info.
OPEN AI LAB aims to promote the development of the Arm embedded smart-machine industry, build an embedded SoC computing framework for smart-machine application scenarios, and integrate application-scenario service interfaces. It is committed to deep collaboration across the entire industry chain of chips, hardware, and algorithm software, so that artificial intelligence will be available wherever there is computation. You can also watch Mingfei Huang's keynote about OPEN AI LAB here.
HKG18-200K2 – Keynote: Mingfei Huang: Accelerating AI from Cloud to Edge
Computing changes where machines meet AI. AI should not only be directed from the cloud but also be embedded in the edge and in things themselves. We cannot imagine a future in which the intelligent machines surrounding us become idiotic, or even horrific, when disconnected. More and more instinctive intelligence in perception, cognition, and decision-making should be embedded into machines. How can diverse AI algorithms run across different embedded computing hardware? It requires a platform on which silicon companies, algorithm providers, and device makers can collaborate. Android NN is one such platform, but there are many more devices without Android that need to be covered. OPEN AI LAB, initiated by Arm China, Allwinner, and Horizon and open to all partners, was born to focus on eliminating these barriers. Its AI Distro contains a Tensor Engine that abstracts ML/DL computing across Arm-based CPUs, GPUs, and third-party accelerators for diverse algorithm models. Through the collaboration between Linaro 96Boards and OPEN AI LAB, algorithm and application developers will have the best support with optimized AI libraries for different hardware.
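The core idea of a Tensor Engine that runs the same operators on whichever hardware is present can be sketched as a small backend registry. This is a minimal illustration only: the class and function names (`Backend`, `register_backend`, `dispatch_conv`) are hypothetical and are not the actual OPEN AI LAB or Tensor Engine API.

```python
# Sketch of backend dispatch: one operator, multiple hardware backends.
# All names are illustrative; this is NOT the real Tensor Engine API.

class Backend:
    """A compute backend (e.g. CPU, GPU, NPU) that can run operators."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # higher = preferred when available

    def run_conv(self, inputs, weights):
        # A real backend would call optimized kernels (NEON, OpenCL, NPU
        # driver); here we just report which backend handled the operator.
        return f"conv on {self.name}"

_registry = []

def register_backend(backend):
    """Register a backend and keep the list sorted by preference."""
    _registry.append(backend)
    _registry.sort(key=lambda b: b.priority, reverse=True)

def dispatch_conv(inputs, weights):
    """Run the operator on the highest-priority registered backend."""
    if not _registry:
        raise RuntimeError("no backend available")
    return _registry[0].run_conv(inputs, weights)

register_backend(Backend("cpu", priority=0))
register_backend(Backend("npu", priority=2))
print(dispatch_conv(None, None))  # the NPU wins: "conv on npu"
```

The point of the pattern is that algorithm code calls `dispatch_conv` without knowing which silicon is underneath; adding support for a new accelerator only means registering another backend.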
Learn More at http://connect.linaro.org
HKG18-200K1 – Keynote: Mark Hambleton: The Fog
Today’s world of devices connected to clouds looks set to evolve with more intelligence and processing being pushed to the edge or migrating between the cloud and the edge. The very definition of edge is evolving too. In this presentation we will look at some potential futures made possible by the emergence of the fog and its implications for the segments that it embraces.
Mark Hambleton / ARM
Mark has approaching 20 years of experience in embedded systems, ranging from real-time control of wind tunnels early in his career to, more recently, mobile devices. He has been working with the Linux kernel for nearly 15 years, initially creating networking products focused on traffic classification and shaping for core and edge routers. Working as a Chief Architect at Symbian (and Nokia), Mark established himself within the ARM community; he then joined Broadcom in 2012 to refocus on Linux on ARM, working on their leading-edge mobile SoCs, before moving to ARM in 2014.
The Arm Machine Learning processor provides up to 4.6 trillion machine-learning operations per second. As part of Project Trillium, Arm's Machine Learning (ML) platform, it enables a new era of advanced, ultra-efficient inference at the edge, with programmable layer engines for future-proofing and a design highly tuned for advanced process geometries. Specifically designed for ML and neural network (NN) capabilities, the architecture is versatile enough to scale to any device, from IoT to connected cars and servers.
Built from the ground up for optimal performance and efficiency, Project Trillium completes the Arm heterogeneous ML compute platform with the Arm ML processor, the second-generation Arm Object Detection (OD) processor, and open-source Arm NN software.
The Arm Machine Learning processor consists of state-of-the-art optimized fixed-function engines that provide best-in-class performance within a constrained power envelope. Additional programmable layer engines support the execution of non-convolution layers and the implementation of selected primitives and operators, allowing for future innovation and new algorithm generations. The network control unit manages the overall execution and traversal of the network, while a DMA engine moves data in and out of main memory. Onboard memory provides central storage for weights and feature maps, reducing traffic to external memory and, therefore, power.
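Why onboard storage for weights and feature maps saves power can be shown with a back-of-envelope traffic calculation: if weights stay on chip, they are fetched from external memory once instead of once per tile. The layer dimensions and tile count below are made up purely for illustration and do not describe the actual Arm ML processor.

```python
# Back-of-envelope sketch: external-memory traffic for one conv layer,
# with and without on-chip weight storage. All numbers are illustrative.

def conv_dram_traffic(h, w, cin, cout, k, tiles, weights_on_chip):
    """Approximate bytes of external-memory traffic for one k x k conv
    layer with 8-bit weights and activations, processed in `tiles` tiles.
    Without on-chip storage, the full weight set is re-fetched per tile."""
    weight_bytes = k * k * cin * cout
    act_bytes = h * w * cin + h * w * cout  # read input once, write output once
    weight_traffic = weight_bytes if weights_on_chip else weight_bytes * tiles
    return act_bytes + weight_traffic

# A hypothetical 56x56x128 -> 56x56x128 3x3 layer, processed in 16 tiles:
streamed = conv_dram_traffic(56, 56, 128, 128, 3, tiles=16, weights_on_chip=False)
cached = conv_dram_traffic(56, 56, 128, 128, 3, tiles=16, weights_on_chip=True)
print(f"streamed weights: {streamed / 1e6:.1f} MB, cached on chip: {cached / 1e6:.1f} MB")
```

Even in this toy example, keeping the ~144 KB of weights on chip cuts external traffic by roughly a factor of three, which is exactly the effect the onboard memory in the design above targets.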