Using IBM POWER9 and Nvidia Volta GPU technology, the Summit system is a proven AI pioneer expected to deliver in excess of 200 petaflops of performance, likely making it one of the world's most powerful supercomputers.
I previously interviewed Paul McKenney at Linaro Connect in Hong Kong 5 years ago here. Since then he has been working on many things at IBM, and this is his first time back at Linaro Connect since that initial interview. He says there may be 20 billion Linux machines in the world, most of them running on ARM, and all of them contain Paul McKenney's Read-Copy Update (RCU) code.
Read-copy update (RCU) is a synchronization mechanism that was added to the Linux kernel in October of 2002. RCU achieves scalability improvements by allowing reads to occur concurrently with updates. In contrast with conventional locking primitives that ensure mutual exclusion among concurrent threads regardless of whether they be readers or updaters, or with reader-writer locks that allow concurrent reads but not in the presence of updates, RCU supports concurrency between a single updater and multiple readers. RCU ensures that reads are coherent by maintaining multiple versions of objects and ensuring that they are not freed up until all pre-existing read-side critical sections complete. RCU defines and uses efficient and scalable mechanisms for publishing and reading new versions of an object, and also for deferring the collection of old versions. These mechanisms distribute the work among read and update paths in such a way as to make read paths extremely fast. In some cases (non-preemptable kernels), RCU's read-side primitives have zero overhead.
Doug Anson, ARM IoT Solutions Architect, talks about how ARM has worked with IBM to integrate ARM mbed with the IBM Watson IoT platform on the Bluemix Cloud Platform. The demo uses ARM Cordio Bluetooth to send a Google Physical Web URL to the Chrome browser on a smartphone, enabling the browser to interact with the parking meter over Bluetooth and pay for parking. The backend is hooked up through IBM Watson, integrating with IBM's payment service and blockchain service.
IBM Internet of Things Foundation is a fully managed, cloud-hosted service that makes it simple to derive value from Internet of Things (IoT) devices, be it a sensor, a gateway or something else. Using IBM's recipes, a device can get connected and start sending data securely up to the cloud using the open, lightweight MQTT messaging protocol. From there, you can set up and manage the IoT devices using the online dashboard or IBM's secure APIs, so that IoT apps can access live and historical data fast. Users can easily start creating applications using device data within the IBM Bluemix platform, another cloud, or their own servers.
In this video, the dashboard displays an example of some of the analytics that can be calculated using IBM IoT Foundation, such as the impact analysis from live hits on the hard hat, which are then displayed on Bluemix, IBM's cloud infrastructure. This platform provides a "one-stop shop" for a device developer to get started and make use of the sensor data and connected devices immediately.
Jonathan Hinkle, Director of Enterprise Server System Architecture at Lenovo and Chairman of the Open Server Summit, talks about Software Defined Storage, Network and Compute, featuring open source and open standards to lower costs and accelerate innovation in the server market. Lenovo acquired IBM's x86 server business last month, integrating all of IBM's x86 server products into Lenovo's server portfolio.
Paul McKenney talks about the ARM big.LITTLE Linux kernel work.
Samsung, IBM and GlobalFoundries join forces to create the future of ARM processors. They share knowledge and research and development costs to bring amazing new technologies to products faster. They synchronize their manufacturing processes to ensure that customers' chip designs can be produced at multiple sources on three different continents with no redesign required.
A Humans to Mars mission might cost as little as $30 billion. Google has $40 billion in the bank, Apple $50 billion, Microsoft $80 billion, Intel $50 billion, IBM $100 billion, HP $110 billion, Dell $30 billion, Cisco $70 billion, and Oracle $50 billion.
Google should announce tomorrow that it will invest $5 billion in the Mars Direct 2016 mission (3 years ago, they made fun of the idea in the Google Mars Mission April 1st joke), with $2.5 billion going to SpaceX now to develop the heavy-lift launcher within 3 years. Google would publicly ask Microsoft and Apple to join in and each also invest $5 billion. They can argue for it to their shareholders by saying 1) they can monetize the new patents that come of it, and 2) they can each get 1/3rd of all the advertising that comes out of 7 years of live HD video streaming of the whole project: the years the project is under development, then the launch, and the 2.5 years of travel for the first crews. The live HDTV broadcast can be sent all over the world; they could recoup their investments just from the TV and live-streaming advertising.
So now you've got $15 billion from private US tech companies. The other $15 billion could be paid like this: $3 billion from the EU, $3 billion from China, $2 billion from India, $1 billion from Brazil/Japan/South Korea, $3 billion from the US (Obama can do his Mars speech) and $3 billion from the oil-rich Middle East. The project would be an international technological collaboration, like the CERN LHC project, instead of a race.
Here are some of the reasons why I think US Silicon Valley companies should get bold and announce the financing of a major mission to send humans to Mars within 5-10 years:
1. These tech companies want to attract more scientists and engineers. The big investments mean they can, for example, expand their R&D and combine it more with NASA's.
2. These tech companies have made unreasonably large profits these past few years and decades. If they all agree to spend the same large amount on this mission, none of them would be at a disadvantage against the others in terms of how much of their cash is being spent. They owe it to society to give back, if only for the scientific enthusiasm it would generate for the world.
3. These are basically investments in education and universities. Since they would be US investments, the money would go to US universities; all the government would have to do is open up H-1B visas for the scientists and engineers from China and India coming in to work on the project.
4. The US army spends $1 billion per day. For the price of just 4 days of war, the US could fund its share of the world's biggest scientific project, and arguably such a project can help bring peace, as humans can think bigger, look up to space, and think about life instead of fighting meaningless wars over limited natural resources.
Why am I a long time supporter of the Humans to Mars project and member of the Mars Society?
- I believe it's worth looking for fossils or other proof of previous life on Mars; finding it would explain a great deal about why life appeared on Earth. This is the most fundamental question for humans and should be the most important one to try to answer.
- Just the work put into the project will bring us new technologies to solve our Energy and Pollution problems.
- It will help solve our political problems; it would be the strongest message for peace.
- Robots on Mars are awesome, but in 1 day a human geologist on Mars can walk around and cover as much ground as the Mars rovers did in the last 6 years. The only way we can realistically look for proof of life on Mars is to send geologists there and have them look for fossils and other evidence over a period of a year and a half, which is the minimum time they would have to stay anyway before being able to fly back to Earth; the trip itself takes about 6 months each way.
What do you think about the idea of sending humans to Mars as soon as possible? And why couldn't the richest tech companies and billionaires pay a significant part of it, to get things going much sooner than politicians and their 4-year election cycles care to invest over the longer term?
ARM and IBM have been collaborating for 3 years on designing smaller and smaller processors for the industry, improving SoC density, routability, manufacturability, power consumption and performance. Just a year ago, the standard was about 65nm for most ARM Cortex-A8 processors in devices on the market such as the Nexus One. About 6 months ago, 45nm ARM Cortex-A8 processors appeared on the market, such as in the iPad/iPhone 4, Galaxy Tab/Galaxy S, and Droid X/Pro. Recent devices with the Nvidia Tegra 2 are 40nm. The next steps for the ARM Cortex-A9 dual-core and quad-core processors appearing on the market this year are designs at 32nm (roughly a 50% area shrink from the 45nm node) and 28nm (roughly a 50% area shrink from the 40nm node). What's next? They are working on 22nm and 20nm designs for 2012, and since ARM Techcon last November they have been announcing 14nm designs, under work with IBM, for as soon as 2014.
You have to consider: it's not possible to make them smaller than 0nm; there are no minus nanometers. Their achievements in shrinking processor designs are insane. The investments are huge. They have to invest billions of dollars in fundamental research into materials and processes, and they have to invent new mathematical tricks. Some of these technologies take 10 years to go from lab research to something that can be mass manufactured. To make it feasible, the ARM industry has to collaborate (2).
The reason for wanting a smaller process size is to consume less power, increase performance, and potentially lower the cost of devices at the same time (factoring out the increasing cost of R&D for smaller designs through very large scale).
Watch my video of IBM's Vice President of Semiconductor Research and Development, Dr. Gary Patton, keynoting on how they are getting to 14nm ARM Processor designs and smaller: