Interview with @rexstjohn. The Arm ecosystem, including @arduino, @google, @docker, @sparkfun and many more, will be in Mountain View on December 2nd-3rd for the Arm AIoT Dev Summit! You can register at: http://armsummit.bemyapp.com
At SID Display Week 2019, Arm officially launches the Arm Mali-D77 DPU, a display processor that significantly improves the VR user experience with dedicated hardware functions for VR HMDs, namely Lens Distortion Correction (LDC), Chromatic Aberration Correction (CAC) and Asynchronous Timewarp (ATW). Built on top of the already feature-rich Mali-D71 DPU for premium mobile devices, the Mali-D77 changes the way we think about VR workload distribution across the SoC. It enables a significant step-up in the display resolutions and frame rates that can be achieved within the power constraints of mobile VR HMDs. This will pave the way towards lighter, smaller, more comfortable VR devices free from any cables, which, in turn, could drive the widespread adoption of consumer VR. You can read more about the Mali-D77 here: https://community.arm.com/developer/tools-software/graphics/b/blog/posts/introducing-the-arm-mali-d77-display-processor
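LDC and CAC are both radial warps of the rendered frame; the Mali-D77 performs them in fixed-function display hardware rather than on the GPU. As a rough illustration of the underlying math, here is a minimal sketch using a polynomial radial distortion model (the coefficients and per-channel values are illustrative, not Mali-D77 parameters):

```python
def predistort(x, y, k1=0.22, k2=0.24):
    """Barrel pre-distortion of a normalized screen coordinate.

    The displayed image is warped so that, after passing through the
    HMD lens (which applies pincushion distortion), it looks correct.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def predistort_rgb(x, y):
    """Chromatic aberration correction: the lens refracts each
    wavelength differently, so each color channel gets slightly
    different distortion coefficients (values here are made up)."""
    return {
        "r": predistort(x, y, k1=0.21, k2=0.23),
        "g": predistort(x, y, k1=0.22, k2=0.24),
        "b": predistort(x, y, k1=0.23, k2=0.25),
    }
```

Applying this warp per frame, per eye, per color channel is exactly the kind of regular, bandwidth-heavy work that benefits from moving off the GPU and into the display pipeline.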
The Neoverse N1 CPU is optimized for a wide range of cloud-native server workloads, executing with world-class compute efficiency. This enables an infrastructure transformation where processing is pushed to the edge where data is generated, providing more scalability than moving all data to centralized datacenters.
The Arm Neoverse E1 CPU delivers best-in-class throughput efficiency. It incorporates a new simultaneous multithreading (SMT) microarchitecture design: with SMT, the processor can execute two threads concurrently, resulting in better aggregate throughput performance.
The Neoverse E1 delivers 2.1x more compute performance, 2.7x more throughput performance and 2.4x better throughput efficiency compared to the Cortex-A53. The design is highly scalable to support throughput demands for next-generation edge-to-core data transport.
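Taking the announced figures at face value, and assuming "throughput efficiency" means throughput per watt (my assumption, not stated in the announcement), the implied power cost of the extra throughput can be worked out directly:

```python
# Neoverse E1 vs Cortex-A53, figures from the announcement above
throughput_gain = 2.7   # 2.7x more throughput performance
efficiency_gain = 2.4   # 2.4x better throughput efficiency (assumed = throughput/watt)

# If efficiency = throughput / power, then power = throughput / efficiency:
implied_power_ratio = throughput_gain / efficiency_gain
print(implied_power_ratio)  # 1.125: ~12.5% more power buys 2.7x the throughput
```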
Jem Davies, ARM VP, Fellow and GM of the Machine Learning Group, talks about ARM's new Helium technology, a machine learning extension for ARM Cortex-M based microcontrollers. It follows on from the ARM CMSIS-NN neural network kernels, which boosted machine learning efficiency on microcontrollers by 5x last year. Helium, part of the new ARMv8.1-M architecture, promises up to 50x improvement on machine learning workloads and about 5x improvement on regular DSP-based workloads, delivered with open source software, with ARMv8.1-M to be integrated in microcontroller designs to come in the future.
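CMSIS-NN gets much of its efficiency from fixed-point (q7) arithmetic that maps well onto SIMD and, with Helium, vector instructions. As a hedged illustration of what a quantized kernel computes (plain Python, not the CMSIS-NN API; the function name and requantization scheme are simplified):

```python
def q7_fully_connected(vec, weights, bias, out_shift):
    """Fully-connected layer on q7 (8-bit signed) fixed-point data.

    Accumulate in a wide integer, shift right to requantize, then
    saturate the result back into the q7 range [-128, 127].
    """
    out = []
    for row, b in zip(weights, bias):
        acc = b + sum(v * w for v, w in zip(vec, row))
        acc >>= out_shift                      # requantize accumulator
        out.append(max(-128, min(127, acc)))   # saturate to q7
    return out
```

Helium (the M-Profile Vector Extension) accelerates exactly this inner multiply-accumulate loop by processing multiple q7 lanes per instruction.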
Grant Likely is a Senior Software Developer at ARM and a developer on the EBBR project: https://github.com/ARM-software/ebbr. EBBR, or Embedded Base Boot Requirements, is a specification for bootloaders on ARM-based devices. It would enable ARM-based devices to share the same bootloader, reducing development costs and letting the same OS boot more easily on multiple devices.
In this demo, Trusted Firmware-M provides the SPE and JWT signing, Zephyr provides the NSPE, and the Google IoT application runs on Zephyr using secure services from Trusted Firmware-M.
– Platform Security Architecture (PSA) is an IoT security framework being developed by Arm.
– Trusted Firmware M (TF-M) is an open source project to provide PSA compliant secure firmware for M profile devices.
– Zephyr is a Linux Foundation Collaboration Project to provide a small, scalable RTOS for connected, resource-constrained devices.
– Arm Musca-A1 is a subsystem based on Armv8-M that allows partitioning software execution into Secure and Non-Secure domains.
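The JWT that gets signed inside the SPE is simply a compact, signed token: two base64url-encoded JSON segments plus a signature over them. A minimal stdlib-only sketch of the token format, using HMAC-SHA256 purely for illustration (the actual demo signs with a key that never leaves the secure side, and cloud IoT services typically use asymmetric algorithms such as ES256):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded, URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"
```

The point of the demo's SPE/NSPE split is that Zephyr (non-secure side) assembles the claims, but the equivalent of the signing step happens behind a secure-service call, so the non-secure side never sees the key.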
Jem Davies is the General Manager of the Machine Learning Group at Arm. He talks about the new machine learning collaboration between Arm NN and Linaro, where Arm is donating the Arm NN inference engine and software development kit (SDK) to Linaro’s Machine Intelligence Initiative. As part of this initiative – which aims to be a focal point for collaborative engineering in the ML space – Arm is also opening Arm NN to external contributions.
Linaro’s Machine Intelligence Initiative will initially focus on inference for Arm Cortex-A SoCs and Arm Cortex-M MCUs running Linux and Android, both for edge compute and smart devices. The team will collaborate on defining an API and modular framework for an Arm runtime inference engine architecture based on plug-ins supporting dynamic modules and optimized shared Arm compute libraries. The work will rapidly develop to support a full range of processors, including CPUs, NPUs, GPUs, and DSPs, and it is expected that Arm NN will be a crucial part of this.
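The plug-in runtime architecture described above – a common inference API with dynamically registered, per-processor backends – can be sketched in a few lines. This is a toy illustration of the pattern, not the Arm NN API (class and method names are invented):

```python
class BackendRegistry:
    """Toy plug-in registry: backends register themselves, and the
    runtime picks the first registered backend supporting an operator."""

    def __init__(self):
        self._backends = []

    def register(self, backend):
        self._backends.append(backend)

    def select(self, op):
        for backend in self._backends:
            if op in backend.supported_ops:
                return backend
        raise LookupError(f"no backend supports {op!r}")

class NpuBackend:
    name = "Npu"
    supported_ops = {"conv2d"}                     # accelerator covers a subset

class CpuRefBackend:
    name = "CpuRef"
    supported_ops = {"conv2d", "relu", "softmax"}  # reference fallback

registry = BackendRegistry()
registry.register(NpuBackend())     # preferred: registered first
registry.register(CpuRefBackend())  # fallback for everything else
```

Operators the accelerator supports run there; everything else falls back to the CPU reference path, which is how such a framework can support a full range of processors behind one API.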
You can watch Jem Davies’ keynote at Linaro Connect here.
Vector Packet Processor (VPP) works on various ARM platforms out of the box: all CI tests pass, ARM boards are being added to the FD.io lab, and CSIT integration and performance benchmarking/analysis are in progress.
Arm ServerReady is a compliance program that ensures the ecosystem is enabled to support Arm servers, so that operating systems just work and can be installed without extensive patching. Arm asks ODMs and silicon providers to work with Arm to comply with the standards and make sure everything works out of the box. Linaro LEG also built an SBSA QEMU model, well aligned with the Arm ServerReady program, letting people run the compliance tests even before hardware is available.
You can find the slideshow about this here: https://www.slideshare.net/linaroorg/hkg18317-arm-server-ready-program
ARM is showing TrustZone Media Protection working with an open source Trusted Execution Environment, integrated within the Android operating system.
Here’s my full video in 4K from my front row seat of the ARM Press Conference at Computex 2018. You can also watch my Interview with Nandan Nayampally here.
The ARM Cortex-A76 is a new microarchitecture based on DynamIQ technology. It delivers 35% faster, 7nm laptop-class performance (comparable to Intel Core i3/Core i5) with 40% improved efficiency, maintaining the power efficiency of a smartphone, and it delivers 4x compute performance improvements for AI/ML at the edge. The new ARM Mali-G76 enables higher-performance gaming and cross-platform experiences with 30% more efficiency and performance density, as the gaming market is expected to reach $137.9 billion in 2018 and possibly as much as $180 billion by 2021, with perhaps 60% of that on mobile. The ARM Mali-V76 supports 8K60 video decode, and can also handle simultaneous 4K encode and decode for 4K video-conferencing.
The new Arm Allinea Studio release is a comprehensive and integrated tools suite to help Scientific computing, HPC and Enterprise developers to achieve best performance on modern server-class Arm-based platforms. Check out https://developer.arm.com/hpc for more info.
OPEN AI LAB aims to promote the industry development of Arm embedded smart machines, build an embedded SoC basic computing framework for smart machine application scenarios, and integrate application scenario service interfaces. It is committed to promoting in-depth collaboration across the entire industry chain of chips, hardware, and algorithm software, so that artificial intelligence is available wherever there is computation. You can also watch Mingfei Huang’s keynote about OPEN AI LAB here.
HKG18-200K2 – Keynote: Mingfei Huang: Accelerating AI from Cloud to Edge
Computing changes where machines meet AI. AI should not only be directed from the cloud but also embedded in the edge and in things themselves. We can’t imagine the intelligent machines that will surround us in the future becoming dumb, or even frightening, when disconnected. More and more instinctive intelligence in perception, cognition and decision-making should be embedded into machines. How do we support diverse AI algorithms running across different embedded computing hardware? It needs a platform that silicon companies, algorithm providers and device makers can collaborate on. Android NN is one such platform, but there are many more devices without Android that need to be covered. OPEN AI LAB, initiated by Arm China, Allwinner and Horizon, and open to all partners, was born to focus on eliminating these barriers. Its AI Distro contains a Tensor Engine that abstracts ML/DL computing across Arm-based CPUs, GPUs and third-party accelerators for diverse algorithm models. Through the collaboration between Linaro 96Boards and OPEN AI LAB, algorithm and application developers will have the best support with optimized AI libraries for different hardware.
Learn More at http://connect.linaro.org
HKG18-200K1 – Keynote: Mark Hambleton: The Fog
Today’s world of devices connected to clouds looks set to evolve with more intelligence and processing being pushed to the edge or migrating between the cloud and the edge. The very definition of edge is evolving too. In this presentation we will look at some potential futures made possible by the emergence of the fog and its implications for the segments that it embraces.
Mark Hambleton / ARM
Mark has approaching 20 years of experience in embedded systems, ranging from real-time control of wind tunnels in his early career to, more recently, mobile devices. He has been working with the Linux kernel for nearly 15 years, initially creating networking products focussing on traffic classification and shaping for core and edge routers, before moving to mobile devices. Working as a Chief Architect at Symbian (and Nokia), Mark established himself within the ARM community; he then joined Broadcom in 2012 to refocus on Linux on ARM, working on their leading-edge mobile SoCs, and moved on to ARM in 2014.