Nvidia Tegra 3: what is the performance?

Posted by Charbax – November 9, 2011

Nvidia is launching the Tegra 3 next month in the $499 Asus Transformer Prime (with a $149 optional keyboard dock). It is an impressive new Quad-core ARM Cortex-A9 SoC with a fifth lower-power "companion core" for reduced power usage.

They are publishing a lot of claims about the performance.

Nvidia has published several other new videos on its YouTube channel showcasing the Tegra 3 processor: 1, 2, 3, 4

How does a Dual-core ARM Cortex-A9 clocked at 1.5GHz (as in the TI OMAP4460 and Qualcomm MSM8660) or at up to 1.8GHz (as in the upcoming TI OMAP4470) compare with a Quad-core Tegra 3 clocked at 1.3GHz?

Nvidia naturally claims that the Quad-core design performs faster. And in my video with Freescale about their upcoming i.MX6Quad, it was said that running a Dual-core at a higher frequency can introduce more leakage and higher power consumption. But what is the reality? How will the performance compare for the most popular Android tasks, the most common Android usage scenarios, and most current Android apps?

I am looking forward to testing these "second generation" ARM Cortex-A9 processors. I want to believe that they can provide the full performance required to replace x86 for a complete ARM Powered laptop and desktop experience, and that Nvidia has improved its ARM Cortex-A9 design enough to deliver significantly faster memory bandwidth.

Basically, what I expect we are getting now is enough performance, and fast enough memory bandwidth, to run as many tabs as we want in the Android and Chrome web browsers on ARM. We can even expect to begin doing things like video editing (HTML5, cloud based), photo editing (HTML5, cloud based) and console-quality gaming (with cloud-powered engines like OnLive if needed), all through this new class of ARM Cortex-A9 processors coming out now.

Which one do you pick among these?

  • TI OMAP4460 Dual-core 45nm 1.5GHz (December)
  • TI OMAP4470 Dual-core 45nm 1.8GHz (next 3 months?)
  • Qualcomm MSM8660 Dual-core 45nm 1.5GHz (now)
  • Qualcomm S4 Krait Dual/Quad-core 28nm 1.5GHz (next 6 months?)
  • Freescale i.MX6Quad 1.2GHz (next 6 months? higher clock speeds later?)
  • Samsung Exynos 4210 45nm 1.2GHz (now)
  • Samsung Exynos 4212 32nm 1.5GHz (next 6 months?)
  • Apple A6 (32nm? dual or quad?) (next 6 months?)
  • Marvell Armada PXA2128 (next 6 months?)
  • ST-Ericsson U9500 45nm 1.2GHz (now)
  • ST-Ericsson U9540 32nm 1.85GHz (next 6 months?)
  • Nvidia Tegra 3 40nm 1.3GHz (December, higher clock speeds later?)

And don't forget that the ARM Cortex-A15 designs at 28nm are going to arrive within a few months after that.

I think we are going to have a lot of fun with these new faster ARM Powered devices, do you agree?

  • Anonymous

    Considering recent synthetic benchmarks, it seems Tegra 3 is clearly the winner -> http://www.androidauthority.com/transformer-prime-benchmark-explained-30731/

  • I don’t know if we can trust some leaked random benchmark. We’ve seen how CyanogenMod can “magically” double certain benchmark results, so I’m not sure that actually means anything. I’d like to see actual performance comparisons in real usage scenarios: for example, multi-tab web browsing with Flash, opening certain popular apps and games, the video codecs/bitrates/framerates/profiles supported, etc.

  • Anonymous

    Real life gaming example: http://www.youtube.com/watch?v=eBvaDtshLY8 Can any of the competitors do that?

  • Yeah why not? The Mali-400, SGX54x and Adreno 220 are pretty amazing for graphics also..

  • Anonymous

    Do you know any games for tablets with dynamic lighting? I really doubt any of the SoCs you mentioned in the blog post can handle that.

    PS: here is another demo: http://www.youtube.com/watch?v=C30ShWQm5pI

  • I don’t know that they can’t support stuff like that. Nvidia has been building its Tegra Zone to get games optimized only for Tegra, but I don’t think that means other processors with other GPUs can’t play the same games. It’s just a question of Google doing a bit of work so that all the most advanced games can easily be hardware-optimized for each GPU and made available on all the platforms equally.

    HD-quality games are certainly a big priority for Google, as these new ARM processors are reaching console-level quality even when outputting 3D games to an external 1080p HDTV.

  • Anonymous

    Game engines like Unity3D do all the game optimizations, not Google.

  • Anonymous

    Here are some more benchmarks comparing Tegra 3 with Tegra 2 and some platforms: http://www.xbitlabs.com/news/mobile/display/20110921142759_Nvidia_Unwraps_Performance_Benchmarks_of_Tegra_3_Kal_El.html

  • Anonymous

    The clock speed when 1–3 cores are running is 1.4GHz, and it only drops to 1.3GHz when all 4 cores are running at the same time.

    As for the GPU, Tegra 3 has a 12-core Nvidia GPU with support for stereo 3D. 3D display support was one of the things the Tegra 2 lacked, along with its limited H.264 support, both of which Tegra 3 addresses. So at the very least it’s much better than the Tegra 2.

    The Asus Transformer Prime’s 1280×800 Super IPS+ screen should look really nice as well.

    Meanwhile, Samsung is also supposed to be coming out with a quad-core Cortex-A9 solution for the Sony PS Vita, so in addition to the continued dual-core solutions we may also see some other quad-core chips hit the market before the next-gen 28nm chips come out.

  • Yes, but all the other new processors are also much better than Tegra 2, so it’ll be interesting to see some reliable real-usage performance tests posted.

  • Anonymous

    I never said it wouldn’t be interesting to see the comparisons. I’m just pointing out that the Tegra 3 is still definitely an improvement, significant in that it is already rated to run Windows 8, and we’re also likely to see some head-on quad-core solutions from other companies, not just higher-clocked dual-cores.

  • Yup, I think Microsoft was demoing Windows 8 on OMAP4 and Qualcomm MSM8660 also..

    I would like to see performance info for normal/popular Android/Chrome use comparing the announced 1.5GHz and 1.8GHz Dual-cores with the announced 1.2GHz and 1.3GHz Quad-cores.

  • Sure, but Google must be coordinating all that with the chip providers, game engine providers and game developers to make sure more and better games are available in the Google Marketplace and ready to work fully hardware accelerated on all the platforms equally.

  • Anonymous

    Even if Google does, it can’t prevent companies like Nvidia from giving themselves a performance edge. After all, even among the companies working with Google, not all of them have full control over the closed drivers they use, and generic support usually comes second to specialized support.

    Nothing new, really: Nvidia and ATI competed for years in the PC market with game deals that made a game work better on one or the other. Similar competition is starting to appear in the growing ARM gaming market, and it’s just a question of who can garner the most deals, and how that compares to what is developed for the generic offerings, to draw the most users to their product.

    Next-gen ARM chips and ICS should go a long way toward improving the overall gaming experience on all devices, though. So while a few of these companies may get an edge, I don’t think it will be an overly dominating advantage. It will also be interesting to see what starts to come out when Windows 8 joins the fray.

  • Marc Guillot Puig

    I hate marketing lies.

    Nvidia says that Tegra 3 is 5 times faster than Tegra 2. But its CPU can at most be 2 times the CPU of Tegra 2 (and that’s assuming perfect scaling), and they say their GPU is 3 times faster than the Tegra 2 GPU, yet they only went from 8 processing units to 12 processing units (not cores, as some say).

    In the best-case scenario (and only there), a Tegra 3 could be about twice as fast as a Tegra 2, but they keep saying that Tegra 3 is a 5x improvement (now we can understand their roadmaps promising 100x speed improvements in 5 years).

    A nice processor (if it doesn’t have a memory bottleneck, as SemiAccurate says), but please, Nvidia, stop lying.

  • I do think that improved memory bandwidth can help a lot. If Tegra 2 had a Dual-core at 1GHz sharing one memory pipeline with, let’s say, 1000 bogomips of throughput, and Tegra 3 has a Quad-core at 1.3GHz sharing four memory pipelines, that’s maybe up to 5200 bogomips of throughput. That might be what they mean by a 5x improvement.
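As a back-of-envelope sketch of that guess (the pipeline counts and bogomips figures are illustrative assumptions from the comment above, not published Nvidia specifications):

```python
# Hypothetical throughput estimate: scale a baseline per-pipeline
# figure linearly by clock speed and number of memory pipelines.
# All numbers below are illustrative guesses, not real chip specs.

def estimated_throughput(clock_ghz, pipelines, bogomips_per_ghz_pipeline=1000):
    """Rough bogomips estimate under the assumed linear scaling."""
    return clock_ghz * pipelines * bogomips_per_ghz_pipeline

tegra2 = estimated_throughput(1.0, 1)  # dual-core 1.0GHz, one shared pipeline
tegra3 = estimated_throughput(1.3, 4)  # quad-core 1.3GHz, four pipelines
print(tegra2, tegra3, tegra3 / tegra2)  # ratio works out to about 5.2x
```

Of course real memory systems don't scale linearly like this, so this is only one way to see where a "5x" marketing figure could come from, not a measurement.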