Qualcomm’s upcoming Snapdragon 835 processor is expected to power many of 2017’s flagship phones. The company has already said the new 10nm chip will offer up to 27 percent better performance than Qualcomm’s existing processors, while taking up less space and using less power.

But just days before the chip is set to be showcased at the Consumer Electronics Show, a set of slides has leaked that gives us a pretty good idea of what to expect in terms of specs and performance.

According to the presentation slides posted to VideoCardz, the chip features Qualcomm Kryo 280 CPU cores, Adreno 540 graphics, a gigabit-class LTE modem called the Snapdragon X16 LTE, and support for biometric security, cameras with fast autofocus, and more.

The new graphics core is said to offer up to 25 percent faster rendering and support for 60 times as many colors, and the chip is said to use about half the power of a 2014-era Snapdragon 801 processor. It’s also more energy efficient than Snapdragon 810 or 820 chips, but the gains aren’t quite as great there.

The slides say the system supports 10-bit 4K video playback at 60 frames per second, along with OpenGL ES, Vulkan, and DirectX 12 graphics as well as HEVC video.
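
If the "60 times as many colors" figure refers to the jump from 8-bit to 10-bit color per channel, an assumption that fits the 10-bit playback support mentioned above but isn't spelled out in the slides, the arithmetic works out to roughly that figure:

```latex
% Assumption: "60x more colors" compares 10-bit per channel to 8-bit per channel
\[
\frac{(2^{10})^3}{(2^{8})^3} = \frac{2^{30}}{2^{24}} = 2^{6} = 64 \approx 60\times
\]
```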

via /r/Android


23 replies on “Qualcomm Snapdragon 835 chip details leaked”

  1. Lots of bold claims from Qualcomm's PR people. Being 50% more power efficient than the S801 would mean using only half the peak power the S801 used, and I really don't think so. The S82x chips are far less efficient, and the first generation of Kryo cores is far from efficient even compared to reference A72s. The chip will be more power efficient by spending 80% of its time on the small A53 cluster, and again thanks to the lithography process used, even though A53s are much less efficient than the old 32-bit Kraits. But as ARM showed us with the A32 vs. the A35, that is a necessary cost of implementing the full 64-bit ARMv8 ISA.
    I really hoped for a consumer-oriented SoC and a return to a single quad-core cluster based on reference A73 cores, with a reasonable peak frequency of around 2GHz, nicely regulated to stay within acceptable (around 1.6GHz), good (around 1GHz) and green (400MHz) levels using the interactive CPU frequency governor, built on 10nm FinFET lithography. But I guess no one will deliver that.

  2. Qualcomm is the best, but I’ll take Kirin and Mediatek at half the price. If I could use the same Qualcomm device for more than 4 years I would gladly pay the premium, but mobile devices just aren’t there yet. If my company is buying, sure I will take the most expensive device but not on my own dime.

    1. Just buy a midrange Qualcomm device with the SD 650/652; two Cortex A72 cores and a powerful Adreno 510 are enough for sub-$200 devices.

  3. 27% sounds like a small speed bump, but people really forget just how powerful the Snapdragon 820 is, let alone the QSD 821.

    This might actually be a situation where the stated performance increase is realistic, maybe +25% real world (?), rather than some niche benchmarking scenario where it performs 400% better, the way other companies present it.

    I'd like to see how the QSD 835 fares against Intel's Core i7-7Y75 and Apple's A10 chips.
    Perhaps the iPad Pro and Windows RT/UWP weren't such bad ideas after all.

    1. The S82x series wasn't all that much more powerful than good implementations of reference A72 cores; in CPU performance it's a mere 30%. The S835 is a different kind of animal with a quad-core Kryo cluster (the S82x had two dual-core Kryo clusters), so it should be up to 2x faster in the rare apps that can utilise 4 threads. I am not very satisfied with the way they are pushing the GPU, as the Adreno 530 throttles a lot; even the Adreno 510 throttles when pushed hard after 20 minutes (down to a 450MHz minimum over longer usage), but at least the SoCs containing the Adreno 510 were based on 28nm HPM planar lithography…
      I am also disappointed with the announced S660; I really hoped for reference A73 cores there.

      1. Well Qualcomm started work on their Kryo-200 cores when the Cortex A57/A53 were available to them. They took these references and made improvements. Minor improvements on the A53, but major improvements on the A57. In fact, their fast cores are almost like ARM’s Cortex A72s!

        So the QSD 820 is almost like a dual-core A53 plus a dual-core A72, 64-bit and on 14nm lithography. That's one of the reasons why the difference between the QSD 650 and QSD 820 feels precisely like the difference between a 28nm and a 14nm chipset… unlike the QSD 808, which uses a far less efficient architecture.

        Now ARM has introduced the Cortex A73, which design-wise has some things in common with Qualcomm's Krait 450 cores rather than with their base A72s. They're stepping away from big.LITTLE designs. The A73 cores are actually slower than A72s in terms of pure peak performance, but they look to be better in terms of overall and sustained performance. So big.LITTLE is a disadvantage with A73s; it's not really what they were designed for.

        It seems that Qualcomm has adapted some of ARM's Cortex A72 design elements into its Kryo-280 cores, and that Qualcomm has again returned to its regular recipe of quad-cores like the S4 Play, QSD 600, QSD 800, QSD 801 and QSD 805. So it seems like ARM and Qualcomm are reaching the same conclusions despite working and designing independently of each other.

        In other words:
        – What the Tegra 3 was compared to the QSD S4 Pro (dual Krait-200) is similar to what the Exynos 7420 was to the QSD 820.
        – What the QSD S4 Pro was to the QSD 600 (quad Krait-300) is similar to what the QSD 820 is to the QSD 835.
        – The big jump from the Tegra 3 to the QSD 600 is similar, in design and performance, to the jump from the Exynos 7420 to the QSD 835.

        I would assume, as per history, that Stock Cortex A73 cores are slightly more efficient than Kryo-280 cores. However, Qualcomm’s execution with great lithography, graphics units, and radios will make their flagship chip competitive against what Samsung and Apple may release.

        1. a) A73s are not slower than A72s in most scenarios; their projected overall performance is 8~10% higher compared to the A72.
          b) The A73 is a descendant of the A17 design and doesn't have anything in common with the Krait architecture.
          c) The A73 is almost like two A53s, delivering around 175% of the performance while using 2x the power (87.5% efficiency); the A72 delivers 165% with 3x the power consumption (55% efficiency), while the current Kryo is very far from that, delivering 190% of the performance while consuming 5x more power (38% efficiency).
          d) Architecturally the A73 is around 12.5% slower than the A72 in instructions per clock, but it comes with a bigger and more efficient cache that gives roughly a 25% boost (100 - 12.5 = 87.5; 87.5 × 1.25 = 109.38), so about 9% more instructions per clock overall (restated in the worked arithmetic below). It really is an example of good engineering design, considering it has 66% of the size and power consumption of an A72, and it will be very hard to beat.
          I am certain the Kryo could actually pull a little more if the architecture had proper support in compilers.
          You assume too much and don't really read enough; 99% of the things you said are inaccurate and wrong. The only thing I could agree with you on is that A73 cores made on 10nm FinFET could go solo without a small A53 cluster.
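
          For reference, here is the arithmetic from points c) and d) above restated in display form; all of the performance and power ratios are the commenter's own estimates, not published ARM figures:
          ```latex
          % Point d): 12.5% IPC deficit from the narrower decoder, offset by a claimed 25% cache-related gain
          \[
          \mathrm{IPC}_{A73} \approx (1 - 0.125) \times 1.25 = 0.875 \times 1.25 \approx 1.09 \times \mathrm{IPC}_{A72}
          \]
          % Point c): efficiency = relative performance / relative power (a single A53 as the baseline)
          \[
          \frac{1.75}{2} = 0.875 \ (\mathrm{A73}), \qquad \frac{1.65}{3} = 0.55 \ (\mathrm{A72}), \qquad \frac{1.90}{5} = 0.38 \ (\mathrm{Kryo})
          \]
          ```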

          1. You have a comprehension problem.
            Like I said, the Cortex A72 performs better at peak. However, it is still a bit thirsty, so it makes for a perfect big.LITTLE combination. A quad-core A72 is quite a thirsty chip, hence the need to underclock, under-utilise, or use efficient lithography; just look at the industry.

            The Cortex A73s don't have that problem. They are more balanced by design, and not really intended to be used in big.LITTLE (though they still can be). A quad-core A73 will be just as powerful as an A72 big.LITTLE setup, but with better overall performance. And they would be able to sustain it, which is important for VR.

            And yes, the Cortex A73s are the successor to the Cortex A17s… which, if you actually do some reading, you will find are close in design to Qualcomm's Krait 450 cores. I'm well versed in the field of technology, thank you.
            I would refer you to this link, where the comments include an overview of ARM chips that might help you:
            https://liliputing.com/2016/12

          2. Lol.
            "A quad-core A73 will be just as powerful as an A72 big.LITTLE setup"
            I think I gave you a detailed and pretty accurate performance, power and efficiency comparison between the A7x series and the current Kryo, with the 8-stage in-order A53 design as a reference; I think you're capable of the simple math from there.
            "I'm well versed in the field of technology": then stop making childish, inaccurate and wrong comparisons like all of the above.

          3. Lol.
            You're so childish.

            I didn't say you were wrong, I simply said you misinterpreted what I was saying. I am not saying that the A73 is worse than the A72. I was saying the A73s were better because they are a more balanced design. These are SoCs after all, where top-end performance isn't as important as efficiency and overall performance.

            Yet, of all things, you forgot to mention the A72's three-wide instruction decoder compared to only two on the A73, darling. Face it, these are two very differently designed processor architectures, and their characteristics will be too. I mean, they were designed by different teams at ARM. Hence why (2x A72) + (2x A53) makes a lot of sense compared to a simple 4x A73 SoC. As a big fan of big.LITTLE computing, I would actually still prefer the 4x A73, as the Android market has proven yet again that most developers ignore the advanced features of these complex systems. So a simpler design will yield better performance and battery life overall. (Overkill example: think of the Xbox 360 vs. the PS3; while the PS3 was much more powerful, games performed worse because developers weren't taking advantage of its system design.)

            But overall, the two approaches aim for the same thing: great performance when needed, and great battery life whenever possible.

          4. "Yet, of all things, you forgot to mention the A72's three-wide instruction decoder compared to only two on the A73": yes, and it translates into 12.5% fewer instructions per clock for the A73, which, as I mentioned, has a 2x bigger L1 cache that is also 50% faster than the A72's (that is 2 × 1.5 = 3 times more instructions fetched from L1; the L2 is likewise 50% faster even though its size didn't change, which translates to 1.5x). Altogether that translates into roughly 25% more instruction throughput. In simple math: 100 - 12.5 = 87.5; 87.5 × 1.25 = 109.38, so the A73 is about 1.09x the performance of the A72 clock for clock.
            "Hence why (2x A72) + (2x A53) makes a lot of sense compared to a simple 4x A73 SoC": it looks like you really don't know much about clusters, or about architecture for that matter. There is no 2+2 SoC anywhere, so I will base an example on an imaginary quad-A73, single-cluster SoC on 10nm FinFET compared to the current Snapdragon 625 on 14nm FinFET, both clocked at a peak of 2GHz. The S625 is an octa-core with two clusters (2×4); compared to it, the imaginary CPU block would cost about the same number of gates, and roughly 15~20% more on 10nm vs. 14nm FinFET, while being able to save some 30% more power.
            The first problem with having more than one cluster, whether big.LITTLE or LITTLE.LITTLE, is ARM power states: if you switch a core off, it later has to power both its cache and logic blocks back on, run a cache test and a core test, rebuild the cache, and so on. In the state where only the logic blocks are shut down and the cache is kept at an active-idle frequency, that time cost is roughly cut in half, but it's still too much. Because this costs so much time, multi-cluster designs tend to keep two cores in active idle on each and every cluster (S652, S650, S625) to avoid micro-stuttering and stalls. You must also take into consideration the not-so-short time needed to migrate a task from cluster to cluster.
            Now let's estimate power and performance for both typical and peak usage (the key ratios are restated after this comment). You are probably aware that the faster cluster will most of the time be limited to about 1.6GHz (S82x, S625). The quad-A73 cluster on 10nm, operating at 1.6GHz, would provide 75% more performance than the A53-based 14nm octa-core SoC running at the same frequency, while consuming only a little more power (when all 8 A53 cores are active, with the slower ones at idle frequency) or even a little less (only 2 A73s vs. 6 A53s, two at 1.6GHz and the rest at idle). The quad-A73 SoC would also give 40% better performance running at 1.6GHz vs. only 4 A53s running at their maximum of 2GHz, while consuming 10% less energy (20% less when you count in the additional 4 A53s that would idle). In a fully multi-threaded environment, the MP8 performance of the S625 would be at least around 25% lower than the MP4 A73, though using less power.
            Now let's look at typical usage and the common tasks it involves, in terms of CPU core architecture. An A53 needs a typical frequency of around 700MHz to give you a smooth scrolling/web-scrolling experience, but an A73 achieves the same at 400MHz, leading to actually lower power consumption (roughly 2x base power for the A73 at 400MHz vs. 1.5x for the A53 at 700MHz, relative to the A53 at 400MHz). As you go further up the frequency range, things only get worse for the A53: at the standard optimum of around 1GHz, the A73 achieves 1.75x the performance of the A53, which is more than the A53 manages at its typical-usage peak of 1.6GHz, while consuming less energy (the A73 at 1GHz drawing about 1.4x vs. 1.9x for the A53 at 1.6GHz, relative to the A53 at 1GHz).
            So you see that combining big.LITTLE with A73s isn't really very useful, nor will it help much with efficiency. I am for a separate small dual-core A35 multi-purpose cluster, clocked at a peak frequency typical of microcontrollers (400MHz), that would offload general tasks like storage I/O and audio playback all the time, while handling all functions in sleep mode.
            So it's actually possible to make a much better consumer product with A73s only, one that would be much more performant and snappier while keeping power consumption and costs at a more than reasonable level.
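
            The key ratios from the scenario above, restated for readability; every figure is the commenter's assumption about a hypothetical 10nm quad-A73 SoC versus the 14nm Snapdragon 625, not a measured result:
            ```latex
            % Both designs capped at the commenter's assumed sustained clock of 1.6GHz
            \[
            \frac{\mathrm{Perf}(4 \times \text{A73 @ 1.6 GHz})}{\mathrm{Perf}(8 \times \text{A53 @ 1.6 GHz})} \approx 1.75
            \quad \text{at roughly equal or slightly lower power}
            \]
            % Quad A73 held at 1.6GHz vs. quad A53 pushed to its 2.0GHz peak
            \[
            \frac{\mathrm{Perf}(4 \times \text{A73 @ 1.6 GHz})}{\mathrm{Perf}(4 \times \text{A53 @ 2.0 GHz})} \approx 1.40
            \quad \text{at about 10\% less energy}
            \]
            ```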

          5. Everything you wrote is exactly what I was alluding to.
            As you concur, a quad-core A73 is going to sustain performance much more efficiently than a dual A53 + dual A72.

            And similarly, if the QSD 835 is a single-cluster quad-core chip on the new Kryo-280 cores, that will give it a similar advantage over the QSD 820's big.LITTLE computing system, as we discussed.

            The QSD 820 is great, but even it suffers from big.LITTLE's disadvantage of time wasted handing threads from one cluster to another. This problem was much more evident on the QSD 810 and the Exynos 5433 SoCs.

            So in conclusion: big.LITTLE is theoretically better*, but it requires great individual clusters, great drivers/optimisation, and developers who actually think about the system when writing software.

            But we don't live in a perfect world, so a single cluster with 4 cores that can run both low-power and high-power is simpler, and therefore better overall.

          6. "Everything you wrote is exactly what I was alluding to"?
            You didn't actually say anything concrete.
            Everything you did say is TOTALLY WRONG, and you are doing it again and again:
            "if the QSD 835 is a single-cluster quad-core chip on the new Kryo-280 cores": well, it is not; it is a dual quad-core cluster in a big (Kryo) and little (A53) configuration.
            "The QSD 820 is great, but even it suffers from big.LITTLE's disadvantage of time wasted handing threads from one cluster to another": it is not big.LITTLE at all; it is a configuration of two big dual-core clusters (a rather utterly moronic config).
            big.LITTLE has been a load of bullshit from day one. It made some sense in A7, A12 and A17 configurations and did manage to save some juice and give better efficiency; since then, in all consumer SoCs carrying 64-bit ARMv8 cores, it has given only the opposite results.

            And I give up on further explanations and on this conversation about the matter.

          7. Okay, what you wrote here is now false.

            You should edit and correct your statements.

  4. I’m really curious to see unbiased, independent benchmarks of the Snapdragon 835 doing x86 Emulation under Windows 10.

    I've been saying for years that the only way I'd ever consider getting a Windows phone was if it could run legacy win32 apps (at least when docked), and I had already lost hope when Intel announced the cancellation of products below 4W TDP / 2W SDP.

    The Snapdragon 835 got me more excited about the possibilities of mobile tech than the ARM11 chips powering the first Android phones (and the original iPhone, I guess…) did.

      Emulation costs a lot of performance, performance taxes power draw, and power draw requires capacity or efficiency or both.

      I'm not saying emulating some Windows XP programs on the QSD 835 is impossible; I'm saying you will experience horrible performance or horrible battery drain. More likely BOTH.
      It's not practical at all. You would be better served using an Intel Atom X7-8750 instead. See the GPD Win as one example.

      The best way forward (for Microsoft) would be to hit the reset button.
      Create a new operating system from the ground up for ARM devices, and strip out some legacy features that are now more of a burden than a bonus. Then develop a new set of software to best run new applications on the OS, whether it runs on ARM or x86 architecture. However, make this next-gen software easy for users, individual developers, and large studios. Follow that by incentivising developers to port their old executables to the new platform by providing conversion software, monetary incentives and promotions. And also keep it proprietary so that it cannot be forked/copied by competitors. That way you stop jeopardising efficiency in new systems, and can actually focus on compatibility and increasing efficiency in the future.
      … … …
      …and that’s what Microsoft has been trying to accomplish with Windows 10 and UWP.
      You could say Windows 8/8.1 were alpha builds and Windows 10 is pretty much a beta program.
      But MS hopes to finalise this goal before Android and iOS get more traction, and before legacy Windows programs become less useful.

      So why now? Or why is MS so late to the party?
      The thing is, Microsoft was cozy in its HTC Windows Mobile niche and its Windows 7 platforms. They saw the original iPhone as a failure and dismissed the iPhone 3G, iPhone 3GS, Moto Droid and Nexus One as useless competition. By proxy this meant the mobile market and ARM architecture meant little to nothing to them. The industry showed them. And MS didn't get serious until really 2013… whereas the industry had been evolving since 2007.

      This approach is the most painful for MS and consumers, but it is the best in the long term.
      I mean, think about a gaming PC today (e.g. 6700K, GTX 1080, 64GB RAM, etc.).
      It's vastly powerful and can churn through your newest games (e.g. BF1) with no problems.
      However, you can also play Age of Mythology with no problems.
      Now think of a PS4 Pro and try playing Tony Hawk's Pro Skater 2 on it. Yeah, it won't work. New hardware alienating old software. Thus an HTPC objectively makes for a better console than the actual consoles. That's the power of controlling a platform. The big companies understand this, and consumers are beginning to appreciate it.

      Another example would be the iPhone 7 Plus and playing Super Monkey Ball on it. No problems.

      The sooner Microsoft makes the jump from an x86-only platform to a Windows-wherever-it-is platform, the better. Because you can bet that in 2020 there will be people enjoying their new devices on the Windows platform and wanting to play Sonic Dash on them, even though it's now a "legacy app".

      1. "The best way forward (for Microsoft) would be to hit the reset button.
        Create a new operating system from the ground up for ARM devices"

        People keep saying this while entirely ignoring the sheer amount of value in legacy win32 apps and the hordes of people who disagree and want win32/legacy compatibility.

        It's the same reason Google would rather have Chrome use Android apps, and why Google and Apple would never allow or promote HTML5 webapps over dedicated apps in their ecosystems.

        This notion that Microsoft HAS to do a ground-up rebuild is the reason MSFT has made so many half-assed efforts in so many ways on so many occasions.

        1. Not true.
          Both iOS and Android were built with HTML5 webapps in mind. However, the (software) technology wasn't there at the time, so Google used a Java implementation and Apple used its Objective-C implementation from OS X.

          You see, these companies could've made some half-assed attempt and tried to squeeze in compatibility for old Symbian programs. They didn't.
          They both came to the conclusion that they needed to start from the ground up with the user in mind, and with the software and hardware technology available at the time. That is one of the big reasons for their success.

          Windows needs to do the same thing to have any hope of competing in the long-term race. And they've realised this, and started to do so with Windows RT and Windows 8.1. Naturally, further convergence into the Windows 10 core was the following step. The next step/major update will be a refinement on MS's end.

          MS has only itself to blame for making so many half-assed efforts. If MS had taken the market seriously, we would've gone from Windows Mobile 6.5 and Windows 7… straight to Windows 10 Mobile and Windows 10 Pro.

          All those OS steps in between were MS attempting to catch up.
          If you don't believe me, check the market share of those operating systems.
          Those extra OS steps from 2010-2015 were practically alpha tests, and ultimately failures. You know which ones I'm talking about: Windows Phone 7, Windows Phone 7.5, Windows Phone 7.8, Windows Phone 8, Windows Phone 8.1, Windows 8, and Windows 8.1.

          MS reacted late and slowly, and now it's come back around to bite them.
          That is also why MS must change its business strategy too.

          1. I know what you're saying, but I fundamentally disagree with your narrative here.

            The main draw of using a Microsoft OS for me is legacy/backwards compatibility.

            If I can't use unaltered legacy software on it, it's "just another OS", and the market doesn't need ANOTHER OS. The current mobile landscape is dominated by apps, and users don't care about which platform is best; they care about whether the apps they want to use are available on it, and Windows is severely lacking in that department.

            People have had 8-9 years to accumulate apps and media tied to specific ecosystems, and even if a newcomer is objectively better, the switching cost of needing to re-buy things or find alternatives for the programs and workflows they've become accustomed to is so high in most cases that you need a killer feature or people won't care.

            As we've established before, it looks like the future belongs to HTML5 / platform-agnostic webapps, so the only thing any platform can really do to differentiate itself in a meaningful way is offer something the others simply CAN'T replicate. For MS that would be legacy win32 support, since neither Apple nor Google can do anything that makes "100% of the software you know from 95% of all desktop computers of the last 20 years run directly on your phone".

      2. Why can the Xbox One emulate Xbox 360 games without a performance penalty?

        And there are a lot of win32 apps compiled for ARM.

        1. Because it's not emulation, not in the usual sense. Most (all?) titles actually do run on the Xbox One hardware, since both consoles share a similar hardware architecture and software language.

          However, not ALL Xbox 360 titles can and do run on the XB1; you are still at the mercy of MS here. Unlike a PC, which doesn't need any special treatment to do "backwards compatibility" or to simply crank up the settings when you get better hardware.

          And unlike the PS4 Pro, which needs a patch for games to take advantage of the better hardware. Also, the PS4 does not share the same architecture and software base as previous PlayStations; it needs titles to be either converted/remastered or emulated.

          That's why building an ecosystem, and as early as possible, is an important task for the major players at the table. Whilst Windows is happy with its desktop ecosystem, even OS X has taken some steps to catch up in recent years. However, MS's ARM ecosystem is sorely lacking, unlike the iOS and Android ecosystems. And currently the ARM ecosystem is looming over the desktop ecosystem, so one could say it is the more important ecosystem to control.

          edit: for clarity

          1. PowerPC and x86 are the same architecture? I thought they were different.

          2. Sorry, I mistyped.
            The way Xbox 360 games run on the XB1 is that the processors in the XB1 were actually designed to mimic the 360's hardware design, and then they run the 360's operating system like a virtual machine inside the XB1, which means the XB1 can run 360 titles almost as if they were native. See the interview with Phil Spencer.

            That’s why the backwards compatibility titles run very well on the XB1.
            In fact, they've recently released a patch that lets 360 games actually run BETTER on the XB1 than on the 360.

            This is a big game changer… but it sucks that MS didn't bother to enable this killer feature at the console's launch back in 2013. It's almost too late now.

            A similar thing happened with the PS2 running some PS1 games better, and the PS3 running some PS2 (?) games better, but these all had dedicated hardware inside.

            I hope that clears it up.
