Leaked Intel Bay Trail lineup: Atom, Celeron, and Pentium chips use less than 10W

Intel’s next-generation architecture for low-power chips, code-named “Bay Trail,” is expected to hit the streets by the end of the year. Chips based on the platform are designed for tablets, notebooks, desktops, and other low-power computers, and they’ll replace today’s Intel Atom chips.

But Bay Trail processors aren’t just the future of Intel’s Atom family. We’ll also see Bay Trail chips sold as Celeron and Pentium processors. And they’ll all use less than 10 watts, offering moderate performance and long battery life (assuming you’ve got a decent battery).

Bay Trail

We already knew that some next-generation Celeron and Pentium chips would have the same “Silvermont” cores as Intel Atom Bay Trail processors. But now a leaked product list is providing more details about Intel’s planned chip lineup.

Here’s the breakdown:

Intel Bay Trail-I (Atom, likely for tablets, notebooks)

  • Atom E3810 – 1.46 GHz single core CPU w/400 MHz GPU and 5W TDP
  • Atom E3821 – 1.33 GHz dual-core CPU w/533 MHz GPU and 6W TDP
  • Atom E3822 – 1.46 GHz dual-core CPU w/667 MHz GPU and 7W TDP
  • Atom E3823 – 1.75 GHz dual-core CPU w/792 MHz GPU and 8W TDP
  • Atom E3840 – 1.91 GHz quad-core CPU w/792 MHz GPU and 10W TDP

Intel Bay Trail-M (Celeron, Pentium for notebooks, convertibles)

  • Celeron N2805 – 1.46 GHz dual-core CPU w/667 MHz GPU and 4.5W TDP
  • Celeron N2810 – 2 GHz dual-core CPU w/756 MHz GPU and 7.5W TDP
  • Celeron N2910 – 1.6 GHz quad-core CPU w/756 MHz GPU and 7.5W TDP
  • Pentium N3510 – 2 GHz quad-core CPU w/750 MHz GPU and 7.5W TDP

Intel Bay Trail-D (Desktops)

  • Celeron J1750 – 2.41 GHz dual-core CPU w/792 MHz GPU and 10W TDP
  • Celeron J1850 – 2 GHz quad-core CPU w/792 MHz GPU and 10W TDP
  • Pentium J2850 – 2.41 GHz quad-core CPU w/792 MHz GPU and 10W TDP

Update: AnandTech also spotted a listing for a Bay Trail-T mobile chip last month:

  • Atom Z3770 – 2.4 GHz quad-core CPU w/2W SDP (scenario design power)

Unfortunately there’s still no word on the graphics technology or the actual TDP of that Bay Trail-T chip, so it’s tough to say how it’ll compare to chips that don’t use Intel’s SDP standard.

Even the slowest, lowest-power Silvermont chips are expected to offer about twice the performance of today’s Intel Atom processors, often while using the same amount of power or less.

Intel is dropping support for hyperthreading, but since most of the new chips will have multiple processor cores, they’ll still be able to handle multi-threaded applications. Silvermont chips also support out-of-order execution, so they’ll be more efficient at many single-threaded tasks as well.

Graphics performance should also be much better in Bay Trail chips than in the Clover Trail and Cedar Trail chips they replace, since Intel is basically using a slimmed-down version of the same HD 4000 graphics technology used in its 3rd-generation Intel Core “Ivy Bridge” chips.

via FanlessTech

  • Seta

    Next netbook generation will replace my beloved eee 1000H; I find netbooks far more useful than the fanciest of the tablets

    • Steve

      By tablet, you mean mobile OS based ones, right? I find netbooks far more useful than Android and iOS tablets, but only more (without the “far”) useful than Windows 8 (not RT) tablets.

  • me

    Why do the Atoms for tablets use more power than the seemingly equivalent or faster (based only on CPU and GPU speed) notebook and desktop chips? What other differences are there to make this so?

    • James

      What happened to the TDPs from earlier leaks where Bay Trail-T was <= 3 W? Does this mean those slides were just talking about the slowest possible single core Atom that's not yet leaked?

      Hopefully, that "I" instead of a "T" is not a typo and we haven't seen the actual T models yet.

      • CyberGusa

        It’s not a typo; I looked at the original leak and it states that the “I” stands for the Industrial version of Bay Trail and not the tablet version…

        Though, Bay Trail also supports Burst Technology… which is like the Core i-Series Turbo Boost. So there’s probably some TDP overhead for all Bay Trail SoCs to support the higher-performance overclocking.

        The list also lacks the octo-core (8-core) parts for the Bay Trail-D series. So it’s not a complete list…

        Right now, they just released some benchmarks for a Bay Trail T sample model that was clocked at 1101MHz, running Android…

        AnTuTu benchmark gets an overall score of 43,416!!!

        Qualcomm Snapdragon 800 SoC, clocked at 2.3GHz, scores over 35,000 in the same test.

      • monstercameron

        that antutu score means nothing…

      • CyberGusa

        You keep telling yourself that :P

      • Vcoleiro1

        Got a source for that benchmark, I’d like to check it out?

      • CyberGusa

        Intel one is from…

        http://rbmen.blogspot.ro/2013/07/intelsocbaytrail-tantutu43426.html

        It’s in Japanese, btw…

        While it’s easy to look up similar scores for ARM SoCs, as AnTuTu is a common test…

      • Brogon

        The T chips are different. I hope the quad core T chips still fall under the 3 W TDP. I hope to run virtual machines on an ultra mobile device.

  • Steve

    What’s the TDP of the Z2760 Atom?

    • CyberGusa

      Clover Trail is rated for 1.7 to 3W max for the SoC itself, not counting the rest of the system…

      Mind that Bay Trail supports Burst Technology… which is basically the ATOM version of the Core i-Series Turbo Boost. So, while Bay Trail-T is targeted for ≤3W, it can go a bit higher for short periods when a little extra performance is needed, but like ARM it will have very low idle states when performance isn’t needed.

      The max clock speed it should be able to reach is around 2.1GHz…

      While Bay Trail is much more power efficient, with features like being able to dynamically channel power where needed between the CPU cores and GPU… So only the part that needs the power needs to be actually running and the rest can go into a low power state.

      Previously, the entire SoC had to change clock/power at the same time…

      • Jeff

        The Z2760 also has turbo. Turbo works by sharing the TDP among the cores. TDP could be exceeded for short periods even before turbo and multiple cores were introduced.

      • CyberGusa

        Actually, not really… Intel Burst Technology is really only used in the Medfield and Clover Trail+ ATOM SoCs, with base clocks of 1.3GHz and max Burst ranges of 1.6GHz, 1.8GHz, and 2GHz for the respective models.

        The Z2760, meanwhile, actually operates at its rated clock speed of 1.8GHz and can cycle down in 100MHz increments toward its idle state, which is why the TDP rating is a range of 1.7W to 3W…

        For Bay Trail though, the Burst Mode Technology should be standard for all versions and goes beyond what was possible with Clover Trail, allowing dynamic power management between the CPU cores and GPU.

        So power only goes where it is needed and each core and GPU can be separately clocked… Meaning it’s much more efficient…

    • maroon1

      You can’t compare these to Z2760

      You should wait for Bay Trail-T to compare it to Atom Z series

  • ed

    Wonder how long we have to wait for a cheap Zotac or Foxconn nettop with one of these chips inside.

    • Labmouse

      I would imagine at least until the chips are released…sometime by the end of the year?

    • http://www.fanlesstech.com/ FanlessTech

      These two particular brands have roadmaps full of Kabinis ;)

  • Tsais

    nice scoop!

    Wonder how well that quad core Atom does in comparison to the rest of the line.

    The 10W TDP on it makes it pretty clear that they’re not using the method people use in designing multicore ARM chips for cell phones, mating low power cores with high power cores, with a controller choosing which ones to activate depending on computational load.

    I was hoping Bay Trail would contain more really low power quadcores…

    But I guess Intel is busy just streamlining their existing designs for low power. Not complaining, you can only change so many things at a time… which is why the tick-tock method works so well.

    • CyberGusa

      The TDP rating can be a little deceptive… While they may not make use of an extra low-power core or anything like ARM’s big.LITTLE solution, they still make use of advanced power management to give similar power efficiency to ARM SoCs.

      Even the existing Clover Trail can outlast a Tegra 3 for something like playing an HD video continuously until the battery is drained, for example…

      Keep in mind that traditional x86 chips don’t idle much at all in comparison… A typical Ivy Bridge part can maybe go as low as 3W without entering an S3 suspend state, for example… while the rest of the system also adds its power load, so many Ultrabooks only idle to a little over 6W, if not higher!

      Mobile SoCs, on the other hand, can reach very low power states, and Intel has managed to achieve this for the ATOM… The Medfield Z2460 for example operates at just ~700mW when making a 3G phone call, while an Apple iPhone 4S would operate at ~800mW doing the same task!

      These particular Bay Trail models won’t be going into a phone, but the same architecture is there, allowing for very low power states that can still perform work.

      Mind that Silvermont significantly improves power efficiency over Clover Trail… The CPU and GPU, for example, can be dynamically and separately clocked. So power and performance are provided only where and as needed, instead of having to ramp the entire SoC no matter what work is being done.

      Even ARM SoCs are starting to go over 5W TDP… Some of the latest ARM SoCs can go over 4W for the CPU cores and an additional 4W+ for the GPU(s)… but mobile SoCs rarely go full out and even more rarely use everything at the same time under full load.

      Yet those same ARM SoCs can still be power efficient enough to go into smartphones, etc.

      Btw, it’s also partly why Intel is trying to push SDP instead of TDP now, as average operational states are becoming more important as chips become more efficient…

      Besides, TDP has never been an accurate way to indicate power consumption and is becoming less so now with these mobile-optimized chips.

  • Labmouse

    It’s interesting that Intel lists several of the Bay Trail-M CPUs as Celerons and Pentiums. That raises the question: Will the next generation of Celeron and Pentium be based solely on Silvermont?

    • http://www.fanlesstech.com/ FanlessTech

      Haswell-based Pentiums are coming too (G3220, G3220T, G3420, G3420T, G3430), this is when it gets confusing.

      • Labmouse

        Wow, so Celerons and Pentiums will be based on at least 2 different architectures… I’m assuming the G series of chips are socketed rather than soldered on, so at least you can differentiate them.

      • CyberGusa

        Pricing and performance range will probably be the main discerning factors… Along with model numbers…

        Brad was correct in an earlier article when he suspected the Nxxxx model numbers were based on Bay Trail… Intel has previously used that model naming system for netbook-range ATOMs, and it looks like they’ll be using it for at least some of the models…

    • CyberGusa

      There are already Ivy Bridge Celeron/Pentiums being released this year and according to this article…

      http://technewspedia.com/release-dates-for-new-cpus-intel-haswell-mb-h-haswell-ultulx-and-valleyview-m-t/

      There will be Haswell based versions as well to be released by September…

  • iAMunderDOg

    People, let’s get the facts straight…

    Clover Trail+ single-thread performance at 1GHz in Cinebench 11.5 is 0.10 points, while Bay Trail is 50% faster, thus it will score 0.15 points at 1GHz.

    Bay Trail will consume 1/2 the power for the same performance.

    Please don’t get fooled by marketers and slides; they won’t lie outside lawyer-determined borders, but you should not naively believe Intel’s marketing slides either. I also want to point out that AMD’s Temash single-threaded performance at 1GHz scores 0.26 points in Cinebench 11.5.

    In my opinion, Bay Trail is a failure compared to AMD’s Temash.

    A4-1200 Temash dual-core 1GHz, TDP: 3.9W (SDP: 2.2W), beats:

    Atom E3810 TDP: 5W (kicks the living life out of it!)
    Atom E3821 TDP: 6W (it slams it to the ground)
    Atom E3822 TDP: 7W (gets beaten with ease)
    Atom E3823 TDP: 8W (equal, but it has the same TDP as the A6-1450 Temash)

    A6-1450 Temash quad-core 1GHz, TDP: 8W (SDP: 5W), beats:

    Atom E3840 TDP: 10W (the A6-1450 is 10-20% faster in GPU while 10% slower in CPU)

    AMD’s Temash can compete with Bay Trail, and “Beema/Leopard” in Q1-Q2 2014 will get Bay Trail nicknamed Bay Loser.

    Another great example of why Intel intentionally cripples its Atom chips so it does not eat into its own ULV margins. It’s just sad. Pretty sad.

    • CyberGusa

      You seem to be under the erroneous impression that AMD’s Temash actually compares to Intel’s Bay Trail… It doesn’t, because Temash isn’t yet capable of competing at ARM-like power efficiency levels, but the Intel ATOM SoCs can!

      Really, AMD’s Temash is more efficient than the previous AMD Fusion/Brazos, but that’s only an improvement and not a true mobile optimization. So they’ll only give you about 1-2 more hours of run time on average than the previous AMD Fusion/Brazos products.

      AMD has yet to implement the advanced power states and management needed to reduce power consumption into the low-mW ranges and to support mobile features like Always Connected Standby.

      Even if Bay Trail doesn’t manage to reach its claimed up-to-5x better power efficiency than Clover Trail, it’ll still be significantly more power efficient than anything AMD can offer right now!

      Really, Silvermont can go into a phone, but even the lowest-end AMD Temash can barely go into a fanless tablet, and even that requires fairly strict constraints on its power usage!

      Sure, AMD has better GPUs, but they still consume a lot of power… The main difference between the AMD Temash A4-1200 and the A4-1250 is a 75MHz speed bump to the GPU, raising it from 225MHz for the A4-1200 to 300MHz for the A4-1250, but that’s enough to raise the TDP rating from 3.9W to 8W!

      The quad-core Temash is also not really an 8W TDP part… That rating is only for its base clock speed, but it supports a turbo mode, which raises the CPU cores from the base 1GHz to 1.4GHz and the GPU from 300MHz to 400MHz… And that’s not a Turbo Core-like feature but an actual switching of performance mode, as it can stay in turbo mode indefinitely; it’s more like Intel’s SpeedStep in reverse!

      So the A6-1450 is more like a 14-18W TDP part, like Kabini… which makes sense, since they’re basically the same SoC; Kabini is just targeted more towards laptops than tablets!

      Never mind that all the Bay Trail models you were comparing are not for the tablet range, any more than Kabini is… Bay Trail-I is for industrial usage, like embedded systems. So, like Bay Trail-D, it’s meant for more desktop/server-like environments where mobile power optimization isn’t needed.

      Bay Trail-M is for higher-end mobile usage like laptops, which is why it goes under the Celeron branding!

      Also, Silvermont isn’t 50% faster… it’s 50% more efficient! That’s separate from other factors, like clocking it faster and the number of cores…

      Even with the Bobcat cores vs the older in-order ATOMs, anything more than a few hundred MHz of extra clock speed started to give the ATOM the processing advantage.

      Silvermont eliminates most of the processing efficiency disadvantages with a fully out-of-order 64-bit architecture and can be clocked faster than AMD can manage for a given TDP range!

      The Atom E3823 you tried to compare to the quad core Temash for example is rated for 1.91Ghz and at 8W the quad core Temash is only clocked at 1GHz!

      Besides, Silvermont actually supports up to eight cores, which will be applied to Bay Trail-D at a later date, while the other Bay Trail releases will max out at quad cores…

      Silvermont also has what is called Burst Technology, which is based on the Core i-Series Turbo Boost. So it can temporarily exceed its normal limits for as long as it doesn’t overheat.

      And as pointed out, AMD didn’t implement its Turbo Core technology into Temash!

      So don’t underestimate what Intel is offering with Bay Trail and don’t over estimate what AMD is offering at this time!

      • iAMunderDOg

        Windows Experience is not a benchmark, but it can give you a proper picture of what to expect from the product you may buy, though you must account for other factors…

        You don’t understand the difference between the A4-1200 and A4-1250, do you? The A4-1250 has fewer constraints and can use faster RAM, plus its GPU is clocked 1/4 higher than the A4-1200’s.

        The A6-1450 is an 8 watt TDP APU, but if it’s docked and/or you activate performance mode then it goes up and it’s a 12-15 watt TDP APU.

        Clocks=/=Performance

        “The Atom E3823 you tried to compare to the quad core Temash for example is rated for 1.91Ghz and at 8W the quad core Temash is only clocked at 1GHz!” <- you fail right there

        Temash/Kabini have higher IPC than Bay Trail…

        Temash/Kabini have Jaguar cores. Jaguar is more efficient, with 20% higher IPC per clock than Bobcat, plus Bobcat’s old VLIW5 GPU gets demolished by the much faster GCN GPU inside Temash/Kabini. It’s far more efficient than Bobcat.

        You don’t understand that I compared them to the Atom line of Bay Trail chips that are for tablets, but you just can’t understand. Yes, Bay Trail has better idle, but its GPU is weak and far less power efficient than AMD’s GCN GPUs.

        I won’t argue with you since you are a fanboi and you won’t accept facts. I have done my research based on available data, and no, Bay Trail is not 4 times as power efficient as Clover Trail.

        For the same performance as Clover Trail in terms of power consumption, Bay Trail needs 1/2 the power for the same results. Bay Trail is a bear when it sleeps/idles, but when it wakes up, it’s a wattage sucker like the Jaguars.

      • CyberGusa

        Incorrect, WEI is a benchmark… it’s just not a very accurate one, as it only provides a general analysis of the hardware and the score is set by the lowest-scoring test.

        And no, I do understand the difference between the Temash products, and the main difference between the A4-1200 and A4-1250 is the GPU clock speed… You won’t get a massive increase in power consumption by using faster RAM… especially when it’s optional and only barely faster than the base-rated RAM!

        Besides which, they both have only 1MB of L2 cache, operate at the same 1GHz clock speed, are both only dual cores, and use the same 128-core Radeon GPUs.

        So there’s no real difference in performance other than the increased clock speed for the GPU!

        While you’re also wrong on IPC… Even against the old ATOM, the Bobcat cores were still a dual-issue, dual-core design, so high-IPC or highly threaded workloads showed little difference between it and the Atom.

        The only real CPU advantage the Bobcat cores had was in single-threaded processing, especially with the Bobcat cores’ out-of-order execution vs the ATOM’s in-order execution. But Silvermont is a fully out-of-order architecture that significantly improves the ATOM’s single-threaded processing and further improves its multi-threaded processing.

        In fact, multi-threaded processing is where it shows a 2.8x increase, vs 2x for single-threaded!

        So the Jaguar’s 20% improvement is like nothing in comparison!

        And the power comparison to Clover Trail is stated as 4.7x more efficient for the same results single-threaded and 4.4x multi-threaded.

        So you’re obviously confusing specs by trying to say it’s only using 1/2 the power to perform the same work as the present Clover Trail.

        Higher performance does come at a cost, but only while under heavy load. The point remains that when not under load, the Jaguar cores would consume more power because they can’t go as low as the Silvermont cores can!

        So, sorry, but Temash is only a step up from the previous Desna and Hondo APUs and doesn’t allow AMD to fully compete in the mobile space.

      • iAMunderDOg

        You fail miserably…

        AMD won’t compete with Intel when it comes to smartphones. I watched Intel’s conference on YouTube and they were showing off Torchlight 2 on a tablet at 1080p, while AMD already did that with its tablet at much higher settings than Intel’s reference tablet.

        When I said 4 watts idle, I meant when the whole device is at idle, while Jaguar alone idles at 0.75W and playing HD videos consumes 1-2W from the CPU itself, not the whole system. You have been taught.

        Anyway, next gen ARM will destroy Bay Trail.

      • CyberGusa

        Sorry, but the only one failing here is you!

        Torchlight 2 was never demonstrated at higher than 1080p on AMD’s Temash.

        Temash and Kabini GPUs can at best barely beat an Intel HD3000, and anything less than Llano or Intel HD4000 is incapable of even comparing to an entry-level discrete gaming graphics card!

        So don’t ever confuse what Kabini and Temash are capable of with something like Richland!

        Besides, only the quad-core Temash, which clocks the GPU up to 400MHz, can exceed the performance of an Intel HD3000… but the dual-core A4-1200 clocks its GPU at nearly half that, 225MHz… meaning graphical performance would be far lower for the only AMD SoC that gets anywhere below the 5W TDP threshold needed to go into a fanless tablet.

        Meaning the A4-1200’s graphical performance is closer to the previous Hondo Z-60 APU, and that was pretty much at the lowest end of the graphical performance range for AMD APUs!

        Kabini and Temash are AMD’s low-end solutions that replace AMD Fusion/Brazos in the netbook performance range!

        Meanwhile, 0.75W for CPU idle is terrible for a mobile SoC! Even the previous Hondo’s Bobcat cores could idle down to 0.75W!

        ARM Krait cores operate at no more than 800mW per core under max load, for comparison!

        Really, mobile ARM and Intel SoCs can idle down to low two-digit mW levels and still do basic work… Besides, AMD’s version of idling is doing absolutely nothing! It’s just one step away from going into an S3 suspend state for the Jaguar cores, and doing even the most basic tasks will rapidly raise its power consumption!

        Really, for mobile SoCs from ARM or Intel, playing HD video means the 1-2W is for the entire SoC and not just the CPU!

        So no, the only one who has to learn anything here is you!

        The reason AMD is resorting to using ARM cores next year for servers, and perhaps more later, is that they have nothing even remotely capable of being that power efficient yet!

        Meanwhile, AMD’s graphical advantage on the low end is no longer that big compared to Intel’s!

        Back in the Pine Trail ATOM days, sure… The Ontario C-50 APU had about 5x the graphical performance of the Intel ATOM’s GMA 3150, but the present Saltwell ATOM reduced that lead to less than 2x, and the upcoming Bay Trail will blow past it with another 3x advancement.

        The AMD Desna was just a power-optimized version of the C-50… which the Hondo then optimized a bit more, and which Temash finally replaces with a small performance boost that only the quad-core version can claim as a 2x improvement…

        So the graphical performance is still within the same range as the previous AMD Fusion/Brazos despite going GCN… Like most architectural improvements, it only provided a boost, nothing in the multiples range without scaling up, which they can’t do without rapidly raising power consumption and heat generation!

        That is the reality! Get used to it!

      • Charles Sexton – CAD/CAM Eng.

        The Intel video chip is an incomplete piece of shit. Comparing the Intel video core to the ATI video core is like comparing a kid with crayons to Picasso. Who cares what the “performance” of the HD**** is? Try running CAD software or serious video editing software on it.

        Granted, the ATI solution isn’t a quadro or the like, but at least I can RUN the software with a realistic expectation of output instead of some crippled piece of shit like the Intel solution.

        I am waiting for the Temash. I am waiting for it so I can hold up a tablet PC as a Window to my creations in Solidworks or Autodesk Inventor. I don’t need high FPS, it’s not Quake, Halo, or Call of Duty or whatever “kids are playing these days”, but I do need my object to look like my object when I show it off to customers.

        This is something that doesn’t, yet, exist. Not from ARM, not from Intel, and not YET from AMD. AMD will be first. That’s where this all changes.

      • CyberGusa

        Sorry, but we’re talking about low-end iGPUs here… none of them can even reach the level of an entry-level discrete gaming graphics card!

        So it doesn’t matter if it’s AMD (they don’t call it ATI anymore, btw), as the performance range is still far below what you’ll need for anything significant!

        Really, try any high-end modern game like Crysis 3, Metro: Last Light, etc., and none of these low-end iGPUs would be adequate at all… Even on the lowest settings and resolution you’d be lucky to get anything resembling playable FPS.

        Even the much higher performance Richland and Haswell Iris Pro graphics only achieve the performance of a mid-range discrete gaming graphics card.

        Meanwhile, the Ivy Bridge HD4000 that the Silvermont GMA is based on is a lot more complete than any previous GMA used by the ATOM!

        It can actually handle 4K video streaming, up to 2560×1600 native screen resolution, eDP, DP, HDMI, DX11, and gaming up to the level of performance needed to run Torchlight 2 at 1080p without any noticeable lag.

        So it’s at least a step up from what’s available on mobile devices, and like it or not, the AMD solutions go pretty low as well and should not be compared to what AMD offers in its mid- and higher-end offerings!

        Neither Temash nor Kabini can even offer Llano graphical performance, and that’s two generations behind Richland!

        Meanwhile, the old Athlon processors are still about 50% more powerful than the Jaguar cores! AMD only uses the Jaguar cores for the same reason Intel used the ATOM: because they can run at much lower power levels. They are by no means its highest performing architecture!

      • Charles Saxton – CAD/CAM Eng

        Funny, you bring up gaming when I said I don’t care about “performance”.

        Obviously you don’t understand that the requirement is COMPLETENESS of the graphics pipeline (including driver software, but also firmware, and of course, hardware).

        Intel…graphics pipeline…is incomplete….and is a piece of shit. You’ll not get the output of the _ATI_ graphics pedigree.

        Even if the Intel Graphics were FASTER than the _ATI_ graphic solution, they would still be INCOMPLETE.

        INCOMPLETE. Meaning what you expect to be rendered and what is rendered, in a professional application, are NOT the same.

        INCOMPLEEEEEEEEEEEETE.

        Why people claim Intel has a “Graphics Solution” is beyond me. They are no more interested in selling graphics cards than they are selling printers, cameras, or monitors. But with AMD and nVidia and other companies either making Chipsets or SOC products, Intel had to license CUT RATE POWER-VR graphics solutions. And they haven’t kept up with the POWER-VR solutions of the ARM segment of the market.

        INCOMPLEEEEEEEETE… INTEL Graphics are INCOMPLEEEEEEETE SHIIIIIIIIT

      • CyberGusa

        I bring up gaming because it’s a good way to compare performance and get an idea of what can run on a given platform…

        For CAD, video editing, etc. you need Ultrabook range performance!

        The point I made is AMD isn’t offering Ultrabook range performance for their low end products like Temash and Kabini.

        Those are only a step above the previous AMD Fusion/Brazos APUs that were used to compete with ATOMs in the netbook range of systems!

        Besides, CAD, Photoshop, video editing, etc. all rely more on CPU performance than GPU performance anyway!

        And as I already pointed out, Intel graphics are no longer incomplete, starting with the Ivy Bridge GMAs… That’s where Intel reached parity with AMD’s Llano level of graphical performance.

        Intel still needs to work on their drivers… but it has been over a year since Ivy Bridge was released, and driver support has improved to acceptable levels now. So most things will run and render properly… and Bay Trail will benefit from those drivers, having a GMA based on the Ivy Bridge HD4000!

        Besides, as also pointed out, Temash and Kabini are still limited, and if you really want to do any real work then get something based on AMD Richland or Intel Haswell, but don’t expect much from these low-end solutions!

      • Charles Saxton – CAD/CAM Eng.

        I have been using CAD workstations since they were measured in the tens of MHz. I know what I need. Performance isn’t a factor. This isn’t for authoring; it’s for presentation.

        For authoring, I have a Lenovo Y580 with a GeForce GTX 660M and an Intel HD4000. The HD4000, latest software and all, won’t work.

        _ATI_ Pedigree cards CAN. Discrete, integrated, it doesn’t matter. As long as the pipeline works.

        Quit preaching your perspective to people in the know.

      • CyberGusa

        Sorry, but I’ve been around a long time too and I’m not preaching but just telling you where things stand now!

        Take the AutoCAD 2013 – C2012 Total Index, for example: the HD4000 not only successfully runs it but gets a score of 414, which is nearly as high as a Radeon 7750 1GB, which gets a score of 422!

        Sure, that’s not very high, but it proves it’s not “incomplete” as you keep trying to claim, and it shows where all these iGPUs stand, as we are talking about even lower-end iGPUs that target an even lower performance range!

        You’re apparently biased by all the years Intel produced lousy iGPUs, and I mostly don’t blame you, because for a long time Intel did produce truly lousy iGPUs. But they started turning that around with Ivy Bridge, and they’re moving toward pretty decent products with every new release, which is why the HD4000 is considered the threshold between the old and the new, between what’s truly lousy and what’s okay now!

        The main thing they still had to work on is driver support, but things like OpenGL support have dramatically improved, especially for the latest Haswell Iris Pro GMA, and as I pointed out earlier, the drivers have improved over the last year and will carry over to other products like Bay Trail’s GMA…

        While if you really have kept up then you’d know that AMD’s iGPUs are always a generation behind their discrete GPUs and scaled down GPU’s like those that are used in low end devices, get a lot of the features crippled or significantly reduced and should never be directly compared to their higher end versions!

        Most actually prefer Nvidia rather than AMD for doing AutoCad, etc. anyway and Kepler should start showing up in a much wider range of products over the next year…

        While at the very least Kabini will offer better specs than Temash, operating at a 15-18W TDP means you might as well jump to Richland… the price difference is pretty small anyway!

        Also, my point about all those apps requiring more CPU performance than GPU is true!

        So even for presentation purposes you’d have to consider what can give sufficient CPU performance to run those programs without a lot of lag, and CPU performance gets very low on these low-end devices!

      • iAMunderDOg

        The A4-1200 CPU has the same performance as the Z-60 CPU at turbo clocks, and its GPU performance is comparable to the E-350/Intel HD Graphics 2000, while being much better and more stable. And you can’t compare VLIW5 with GCN; they are literally worlds apart in performance per watt.

        AMD showcased Torchlight 2 at 1080p and Intel did the same, but the main difference is graphical settings.

        The A6-1450 played Torchlight 2 at 1080p on high settings, achieving 25 frames per second, while on the Bay Trail quad core Torchlight 2 ran smoothly at 30fps but at considerably lower graphical settings.

        Bay Trail’s GPU’s maximum performance potential is comparable to Intel HD 2000 Graphics or the E-350’s GPU.

        90% of benchmarks are built with Intel compilers, so they favor Intel’s products, while we all know they can’t touch AMD’s stronger solution.

        nVidia’s GPUs are awful; they crippled the GPGPU compute units in the GTX 6xx series, and that’s why you see the Radeon HD 7970 beating the GTX Titan by 4-5 times.

        It’s the reason why nVidia Quadro is not inside the Mac Pro.

        Adobe is optimizing their programs for AMD’s CPUs and OpenCL, especially once Kaveri gets out, and if Adobe manages to use the incredible potential of Kaveri and HSA/hUMA, that removes a lot of the bottlenecks associated with dedicated CPU and GPU configurations.

        Top-of-the-line Kaveri will be 20-25% faster clock for clock compared to Richland, and its GPU will be comparable to a Radeon HD 7750-7770. Deal with it.

        You are out of date; people are jumping on the HSA/hUMA AMD bandwagon, singing olalala. rofl

      • CyberGusa

        The Z-60 does not have a turbo clock, and the A4-1200 by all present accounts isn’t performing significantly better than the Z-60!

        It takes more than a 15-20% improvement to be really noticeable, let alone on a low-end product, where that improvement is small to begin with!

        While the benchmark leak is from Android, which doesn’t give Intel any advantage!

        The settings for the Bay Trail demonstration of Torchlight also weren’t shown, so you’re making stuff up now for your so-called comparison!

        Never mind that Bay Trail has a performance scale, and the 3x improvement is only the base level for Bay Trail-T; it will probably go significantly higher for Bay Trail-M and -D, which we already know will operate at higher GPU clock speeds (ranging from 400MHz to 792MHz, not counting Burst Mode)!

        And no, Nvidia GPUs aren’t awful; they’re AMD’s direct competitors for GPU performance, and the two tend to alternate on who has the best current offering!

        So let’s not confuse your obvious bias with the facts! Especially with Nvidia nearly set to start pushing Kepler into the mobile space, which presently has more potential than anything AMD is pushing out right now… Though they do have up to two years to counter… but they’re already a couple of years behind Nvidia in getting into the mobile market!

        By Tegra 6, Nvidia will have a custom 64-bit ARM-based architecture (codename Project Denver) and a Kepler-based GPU, shooting for 100x the performance of their old Tegra 2.

        For Kaveri, we frankly don’t know whether it’ll provide a significant improvement over Richland. The 20-25% is what they’re hoping for, but AMD made similar claims with the last few updates and fell short each time.

        AMD has been losing billions over the last few years. So, like it or not, they still have a long way to go to become really competitive!

        While Kaveri will also be the test platform for HSA and hUMA and will likely make or break that endeavor… for their sake I hope it works out but it’s hardly a sure thing at this point.

        Especially with virtually no one but ARM-aligned companies jumping on the HSA and hUMA bandwagon to date, and applications for ARM don’t necessarily benefit AMD… even with AMD starting to use ARM for their server-range products next year…

        Not to mention there are alternatives they have to compete with that already have greater industry backing, as well as cautionary examples like Nvidia’s CUDA, whose fate HSA and hUMA could easily share if they fail to become an industry standard!

        We probably won’t even see Kaveri in an actual product until the beginning of next year on the present timetable, and that’s pretty close to the Haswell refresh timetable and to when Intel starts seriously moving to 14nm.

      • iAMunderDOg

        I thought the Z-60 had a turbo clock; OK then, the A4-1200 is 20-30% faster than the Z-60 while being as fast as the C-70 at turbo in terms of CPU performance.

        A 20-30% improvement is not small at all; the A4-1200 Temash is a decent chip for tablets compared to the Atom Z2760. The A4-1200 Temash has enough “juice” to play games like Call of Duty: Modern Warfare 1 and 2, GTA: San Andreas, Gears of War, and other games.

        You are wrong; the AnTuTu benchmark is not reliable and can be easily cheated, and it’s built with Intel compilers. ARM CPUs don’t support some parts of the AnTuTu benchmark, plus the code can be edited/modified. If you don’t believe me, then visit the Anandtech forums. ;)

        Intel’s presentation of Torchlight 2 at 1080p on their tablet ran at lower settings, even though Intel themselves didn’t show the settings. I compared both Intel’s and AMD’s tablets running Torchlight 2, and I can see it was running at lower settings. Visual comparison; I know that a projector lowers the colors and quality of the image, but still, Intel’s tablet ran TL2 at lower settings.

        You are failing hard; don’t mix per-core performance with the performance of the entire chip, and stop being a sucker for Intel slides like the whole armada of Intel fanboys and easily manipulated investors. Bay Trail clock for clock will be 50-60% faster than Clover Trail, can have 4 cores, and will not have hyperthreading.

        It’s amazing how people forgot about Fermi and the crippling of compute units in the GTX 600 series that nVidia did just to be competitive in terms of power consumption, heat, and “cooling”. Nothing else.

        Keep accusing me; try mirroring your own image onto me. ;)
        All of that is gone when GCN 2.0 mobile GPUs arrive and destroy nVidia’s mobile offerings.

        Tegra is a joke: less efficient and less powerful. How many Tegra 4 devices are there? Not many, just a few. You’re a sucker for slides, aren’t you?

        If you did some research you would know; visit the Anandtech forums sometimes and update yourself, since you are out of date in terms of information. Also, citing AMD from 2011 is a serious fail; AMD has changed a lot since Rory Read became CEO of AMD. Total restructuring, and the guys that screwed up got fired.

        AMD said that Piledriver would give 15% higher performance than Bulldozer, and it did, with considerably lower power consumption and heat too. So did they fail? No.

        AMD said that Jaguar would have 15-20% improvements over Bobcat; did they fail to deliver? No.

        The Jaguar core is 20-30% faster than Bobcat clock for clock, power consumption is considerably lower, and the GCN GPU is far more powerful than Bobcat’s VLIW5 GPU.

        Compare the E-350 with the A4-5000 and you’ll get the picture.

        They have a long way to go to be competitive? They already are; look at Jaguar. It’s smashing Clover Trail, and on the Anandtech forums people argue over whether a Jaguar core is more power efficient than an ARM core! Jesus Christ, you fail!

        Kaveri is not the only thing that has HSA/hUMA; did you forget about the PlayStation 4 and Xbox One? Game companies and publishers will support HSA/hUMA, and more and more support for HSA/hUMA comes from various software companies. Do you even know the advantages of HSA/hUMA at all? I don’t think so; do some research yourself and then report back.

        You fail again; google the HSA Foundation.

        CUDA’s main competitor is OpenCL, and guess what, HSA/hUMA supports OpenCL, and OpenCL is the main language for HSA/hUMA, so you fail again.

        Kaveri will be released by the end of Q3 or beginning of Q4, and AMD has already shown a working sample.

        I am going to insult you and trash you until you do some proper research, you amateur. You act like you know everything, but you know nothing because you don’t do the research! You fail, miserably. GO away, AMATEUR!

      • CyberGusa

        Wow, you don’t have a clue what you’re talking about and you want to call me an “AMATEUR”?

        First, let’s make some things clear, as you don’t seem to even know what AMD did with their tablet-optimized APUs over the last few years!

        The C-50 (9W) => Desna (5.9W): they just power-optimized it for tablet use => Hondo (4.5W), which mainly just optimized it further… Hondo is even made on the same 40nm process as Desna… The main achievement was keeping performance about the same while reducing TDP, but not without some costs, like reducing the number of supported ports to one each, etc., making it a simpler APU…

        And AMD only introduced a limited version of Turbo Core for a few models of the Brazos 2.0 update, namely the C-60, E-450, and a few others: the 1.33GHz you’re thinking of only applied to one core while the other core got underclocked to compensate (meaning it only benefited single-threaded processing)… But they never applied it to their tablet-optimized APUs, and they still haven’t for Temash!

        While the only consistent advantage those previous tablet-optimized APUs had over the Atom was better graphical performance…

        But they lost on power efficiency and max CPU performance (because the Atom can be clocked more than a few hundred MHz faster to compensate for the efficiency difference), and the present Imagination PowerVR GPUs reduced the graphical performance gap to less than 2x, down from the original 5x difference against the old Atoms using the GMA 3150.

        The main CPU advantage the Bobcat cores had was that they were out-of-order processors, and for single-core processing they were noticeably more efficient, but they were not power efficient enough, so AMD had to seriously underclock them to compete in the same performance category as the Atom, which eliminated their advantage!

        The real problem was that Bobcat was always closer to the Atom range of processor performance than to AMD’s other processor architectures, like the earlier Athlon that provides over 50% more performance than the equivalent E-350.

        So even Jaguar still isn’t as efficient a processor as AMD’s higher-end architectures… It takes two Jaguar cores for every one just to match Intel’s Sandy Bridge-based Core i3-2377M… meaning there’s still a large gap between AMD’s low end and their high end, like Richland’s Piledriver cores!

        While Bay Trail is shooting for over a 50% improvement over the current Atom processor, and that’s greater than the Jaguar improvement!

        So at worst, they’re at parity, but with Intel able to clock faster and still put Bay Trail into mobile devices, Intel gets the clear CPU advantage.

        While a graphical difference of less than 2x starts becoming irrelevant for most users, as you need large performance differences to see a noticeable difference in graphical performance…

        Sure, Temash can beat Clover Trail, but not by a huge margin, and that doesn’t negate the fact that Bay Trail will easily close that gap, at least for the two low-end parts, the A4-1200 and A4-1250. Only the quad-core Temash and higher-end Kabini may exceed what’s possible with Bay Trail…

        But by all accounts, both Temash and Kabini are only more efficient than the previous AMD Fusion/Brazos APUs and won’t offer truly mobile-optimized power usage.

        So it’s more than likely the Silvermont based ATOMs/Celerons/Pentiums will still be more power efficient and will at the very least close the performance gap enough to make it negligible!

        While demonstrations of Kaveri mean nothing; Intel demonstrates their next-gen architecture too, up to a year before it’s actually released!

        Besides, it takes months after a new release before actual products come out with it, and a Q4 release is too late to see Kaveri in an actual product this year!

        And the HSA Foundation only has ARM-market companies supporting it right now!

        Really, look at the founders… AMD, ARM, Imagination (an ARM-market company), MediaTek (a competitor of Qualcomm in the ARM market), Qualcomm (again, an ARM-market company), Samsung, and Texas Instruments…

        All of these are limited backers that only really cover AMD’s and ARM’s part of the market…

        The problem for AMD is that HSA and hUMA will go nowhere unless they can successfully turn them into an industry standard, and that’s not even close to happening yet!

        And no, there are other alternatives to OpenCL, like Microsoft’s C++ AMP… while developers don’t need to support HSA to support OpenCL… So it doesn’t really help AMD’s initiative much anyway, especially with the alternatives also supporting OpenCL as well!

        While one of the main reasons CUDA never took off was that it’s very difficult to program for a GPU, and most of these solutions just make the process even more complex, so it remains to be seen whether developers will even like AMD’s solution.

        So stop confusing hype with the facts! You’re apparently an AMD fan, but you’re not doing them any favors by overselling what they’re offering right now and ignoring how far they still have to go to really succeed!

      • iAMunderDOg

        Wow, you always fail, and now you’re trying to mirror an image of yourself onto me. Nice try, but no cigar!

        Maybe…

        They only sacrificed the ports, not the performance that matters, so case closed, move along. Also, you fail again, since the A6-1450 has a turbo mode within its TDP.

        GPU performance is important, for games, graphics, rendering websites/pages/pictures, etc…

        They reduced the GPU gap since their own GPUs utterly failed; at least the Atoms have enough performance for Windows 8. Bay Trail-T will have the same PowerVR GPUs again, as Clover Trail did. :P

        Bobcat is twice as fast at the same clocks compared to Atom, so a 2GHz Atom = a 1GHz Bobcat. Still, the old C-50 and its newer iterations stood the test of time.

        There will always be a gap between ultra-low-power chips and their desktop counterparts. So your point is invalid and unneeded.

        I don’t have the time nor the energy to argue with you; you fail consistently, and proving you wrong would take hours and days to teach your stubborn mind the reality.

        I give up, I won’t argue with a retard that does not do research and breaks all the journalism rules.

      • CyberGusa

        Sorry, but I deal in reality and not the fantasy you apparently want to cling to, and you’re not fooling anyone but yourself.

        Years of reviews between ATOM and AMD Fusion/Brazos prove everything I’ve stated!

        The Bobcat was never really twice as fast as a same-clocked Atom! It only ever showed that potential in certain single-threaded workloads.

        Processors are rarely linearly different from one another; instead they each have different things they’re good or bad at…

        Bobcat is only a dual-issue, dual-core machine, so high-IPC or highly threaded workloads showed little difference between it and the Atom.

        So the performance difference never applied to everything, and any Atom clocked more than a few hundred MHz faster pulled ahead of the Bobcat in real-world performance!

        The AMD Athlon II X2, on the other hand, was clearly 50% better than both Bobcat and the Atom but consumed way too much power to compete in the same low-end device market, where battery life was an important factor… which just shows where these processors actually stood in the performance hierarchy, when even an older architecture could blow them away on performance…

        Neither the Atom nor Bobcat was chosen for processor efficiency but for the fact that they could work at lower power levels to help maximize battery life!

        Even the less limited Zacate E-350 held only a 41% performance advantage over the similarly clocked Atom D510…

        And that’s with a version of Bobcat that wasn’t being held back the way the Ontario C-Series was, locked down to a much lower 1GHz clock speed with similarly reduced graphical performance!

        The only multiple AMD ever had was in GPU performance, and for the AMD Fusion/Brazos series it only ranged from 5x (at the lowest end) to just over 9x (at the highest end).

        Though there were some architectural advantages: things like out-of-order processing provided a boost in benchmarks, like JS processing, but that difference is eliminated with Silvermont, which is now an out-of-order processor too and shares many of the same advantages!

        Never mind that on the higher end, Intel maintains a single-core performance advantage over AMD’s best offerings, which is why most of what AMD offers competes with Intel’s midrange!

        So it’s not like Intel couldn’t have significantly improved the ATOM performance!

        And no, Bay Trail-T is not using Imagination PowerVR GPUs! All Bay Trail chips use the Silvermont architecture with a GMA based on the Ivy Bridge HD4000!

        While Intel’s promotional slides were all based on the Bay Trail-T Z3770 to show base performance, its graphical performance was only compared to the Clover Trail+ Z2580, which uses a dual SGX544MP2-based GMA that’s significantly more powerful than the regular Clover Trail Z2760 GMA based on the less powerful single-GPU SGX545!

        The SGX544MP2 has been shown in ARM devices easily driving 2048 x 1536 IPS displays, for example!

        Meaning the performance improvement is pretty significant when you factor in that Bay Trail-T is supposed to have a base increase of nearly 3x in graphical performance, which shows it’s being compared against the high end for Saltwell, and we know from the higher-clocked Celeron and Pentium models that it goes even higher!

        Besides, nothing you even suggested changes the fact that Temash can’t compete with ARM SoCs and that’s where Bay Trail T is competing. So it’s not really even in the running as far as that’s concerned.

        And it will likely not be very successful against Bay Trail-M or -D either, with more limited driver support and more limited usage ranges!

        Really, AMD considers the AMD Fusion/Brazos one of their best successes over the last few years but they never dominated the low end laptop or netbook market!

        And there’s nothing to really convince people to see Temash and Kabini as anything more than an incremental update that fails to offer true competition in the mobile space!

        So pretend all you want but AMD needs to do better if they want to turn around the downward spiral of the last couple of years where they’ve been losing billions and return to a true competitive status with Intel…

      • ninaholic

        >> Really, AMD considers the AMD Fusion/Brazos one of their best successes over the last few years but they never dominated the low end laptop or netbook

        Why was that exactly? I’ve seen way more Atom N570/N2600 10″ netbooks out there than AMD C50/C60/C70 ones, yet the AMD ones had 1280×720 resolution instead of 1024×600 and did a lot better at video rendering for me (and sold for the same price, from what I’ve seen).

        Was it because the AMD netbooks consumed more power / had worse battery life, or was the extra GPU power a niche/unnecessary for most people, or did they have some sort of defect/not enough supply, or did Intel buy out the entire WinTel netbook market with deals to only sell their chip? I’ve always suspected it was the latter; didn’t Intel do this before?

        I like the 1024×600 resolution on my N2600 and the performance is fine (as long as you uninstall most Windows junkware and replace everything with lighter free alternatives), but I never understood why they were way more popular with manufacturers and sellers than the AMD C-Series. Any thoughts/opinions?

      • CyberGusa

        There have been ATOM netbooks with higher resolution screens… they were just very rare…

        But as to why AMD never really dominated the netbook market… a number of reasons…

        1) The main advantage AMD offered was graphical, but you couldn’t really game on these low-end systems either; they were just better at low-end gaming than Atom netbooks. Nothing short of a Llano would even be considered entry level for actual gaming, and there were already alternate solutions like an Nvidia ION combined with an Atom to provide similar performance for those few looking for a little more than what a basic netbook could provide.

        While Nvidia’s Optimus may never have been as effective as advertised, it still promised consumers they could keep the desired netbook battery life by sticking to the Atom and turning the Nvidia ION off, only using it when graphical performance was needed.

        2) AMD had a more efficient processor with the Bobcat cores, but the advantage was mainly in single-threaded processing, and the APU was still not as power efficient as the Atom, so getting them into 10″ netbooks meant seriously underclocking them to 1GHz.

        This left dual-core Atoms like the N550 able to edge out CPU performance with clock speeds 500MHz or more faster to compensate for the efficiency difference.

        So only the higher-end Zacate really used the full potential of the Bobcat cores, but those were too high-powered to be put into anything smaller than 11.6″ systems.

        3) Atoms could still provide more than an hour of extra run time, and most people getting netbooks got them for good battery life rather than performance…

        Pretty much the same reason why the mobile tablet market is doing so well now, for example, despite the fact that it has been less than a year since ARM devices became able to rival, and now exceed, netbook-range performance.

        4) Many AMD-based netbooks still had the same specs as the Atom models, so they didn’t always take advantage of the greater graphical performance… partly due to cost, but also because higher-resolution screens mean higher power consumption, and they were already hard-pressed to compete with Atom run times.

        5) Simple market momentum: Atoms were first, and AMD took nearly two years to join in… Their previous Turion, etc. solutions were easily more powerful but couldn’t provide anywhere near the same battery life and ran very hot; it was simply not a good time for AMD, which had to prove itself to consumers.

        6) AMD didn’t have a large price advantage against the Atom like they did against Intel’s Core processors. So battery life became more of a consideration… especially when most viewed either one as a low-end solution best suited as a secondary device or, like mobile devices, something suited just for basic usage.

        Unfortunately, there was never really a push to take full advantage of AMD’s graphical edge, and the newest high-end PC games were still unplayable, especially with many of them requiring both good CPU and GPU performance.

        StarCraft 2 is a good example of the type of game that made the CPU sometimes even more important, and neither the Atom nor AMD offered much performance in that range.

        7) There’s some truth to WINTEL deals at the time, but mainly because they limited netbooks in general.

        One of the reasons netbooks were so cheap was that they pushed for off-the-shelf parts. So designs varied very little, and costs were kept down with everyone getting pretty much the same parts.

        Limits like systems being allowed to ship with only 1GB of RAM were actually because of MS, which made it a requirement for Windows 7 Starter Edition to help protect sales of higher-end systems and differentiate them from the low-margin netbooks.

        Companies like Dell actually got into trouble because they kept making non-standard models, which threatened their favored-reseller status on more than one occasion…

        Though I’m not discounting Intel’s influence either… but leverage only goes so far, and AMD was caught leveraging their GPU market too. Intel and AMD also came to a licensing agreement on what they had been charging each other for, which gave both free rein… So I’d say the other factors had more to do with it…

        8) Supply and demand: one of the reasons AMD finally cut itself off from GlobalFoundries was the increasing inability to meet demand, and so there was also a lack of trust among many OEMs that AMD could deliver the quantities they would need.

        Even now with Temash and Kabini, AMD is over 6 months behind their original roadmap. That wasn’t directly their fault but a general problem with the 28nm fabs; still, it shows it’s a continuing problem for AMD.

        The delay even forced them to cancel the Wichita and Krishna APUs they originally had on the roadmap for release prior to Temash and Kabini.

        So there have also been few updates since the Brazos 2.0 release, and that was a disappointingly minor update.

        9) Finally, marketing… AMD failed to really sell it to the public… So mainly fans of AMD demanded it, while everyone else either didn’t understand its advantages or expected more than it could provide.

        An issue that originally plagued netbooks when they first came out as well, with a lot of buyers returning them when they realized it wasn’t the device they thought it was…

        In any case, by the time that cleared up… the netbook market was already on the decline and now everyone is scrambling to compete in the mobile market…

      • Charles Saxton – CAD/CAM Eng

        Autocad…is not…the cutting edge…of design.

        Solidworks and Inventor are.

        The HD4000, which I have, blows chunks. It is INCOMPLETE from my OWN observation, not some benchmark.

        The HD4000 also causes my CAM software, SprutCAM to crash. I have to force the selection of the Geforce chip in my Y580. Which is no big deal, but I cannot rely on a machine that has ONLY an HD4000 in it.

        And what is worse, I cannot get a laptop without this embedded piece of shit chip without buying AMD.

        Fine, I won’t buy Intel in the future.

      • CyberGusa

        SprutCAM has been known to crash with AMD GPUs too, and that typically has more to do with the OS than the video card…

        Besides, having to force the selection of the GeForce chip on your Y580 suggests you had a configuration and/or driver installation problem.

        SolidWorks can also lock up depending on driver release from AMD… issues can come and go between each release and they don’t always solve every problem.

        Anyway, neither AMD nor Intel will be providing you with a full GPU experience in the low-end range of iGPUs!

        Even the higher-end Richland APU from AMD, which is two steps up from the HD4000, would not be ideal for your work and will mainly be suited for either light work or just presentations.

        Going below that, you’ll not only have to worry about proper support for the programs but also performance slow enough that they may not even be useful for presentations!

        Besides, if you look at the list of certified GPUs for SolidWorks… AMD’s mobile iGPUs aren’t even listed; only the FireGL and FirePro GPUs are!

        So you’ll be taking a risk with anything else, regardless which you choose!

        While there are laptops where the discrete card is configured as the default and the iGPU isn’t used at all… So mainly watch who you buy the system from, as that can make or break reliability even more than any specific hardware choice!

    • CyberGusa

      Btw, here’s a look at a Quanta prototype tablet running on AMD’s A4-1200 1GHz dual-core Temash…

      https://www.youtube.com/watch?v=tLGj3_246LA

      At 1:04 it shows the detailed Windows Experience Index score…

      3.1 – CPU
      4.5 – RAM
      3.9 – GPU
      5.6 – 3D Graphics
      7.5 – Primary Drive (SSD)

      A Dell Latitude 10 running on the Clover Trail 1.8GHz Z2760 gets…

      3.4 – CPU
      4.7 – RAM
      3.8 – GPU
      3.3 – 3D Graphics
      5.5 – Primary Drive (eMMC)

      The quad-core Bay Trail-T Z3770, which is what Intel based their promotional slides on, is also reported to have a 2W SDP, and it’s a 2.4GHz part…

      http://www.anandtech.com/show/7048/silvermont-to-sell-under-atom-celeron-and-pentium-brands-24ghz-z3770-leaked

      Meaning it will be a lot more powerful than Clover Trail and thus more powerful than the A4-1200 Temash!

      While none of the other Temash models will compete in these very low power ranges!

  • Charles Sexton – CAD/CAM Eng.

    The Intel video chip is an incomplete piece of shit. Comparing the Intel Video Core to the ATI Video Core is like comparing a kid with crayons to Picasso. Who cares what the “performance” of the HD**** is? Try running CAD software or serious video editing software on it.

    Granted, the ATI solution isn’t a quadro or the like, but at least I can RUN the software with a realistic expectation of output instead of some crippled piece of shit like the Intel solution.

    I am waiting for the Temash. I am waiting for it so I can hold up a tablet PC as a Window to my creations in Solidworks or Autodesk Inventor. I don’t need high FPS, it’s not Quake, Halo, or Call of Duty or whatever “kids are playing these days”, but I do need my object to look like my object when I show it off to customers.

    This is something that doesn’t, yet, exist. Not from ARM, not from Intel, and not YET from AMD. AMD will be first. That’s where this all changes.

    Until then…

    • CyberGusa

      No, it’s pretty much limited regardless of whether you choose Intel or AMD for these low end solutions… These are very low end products that are barely above where netbooks used to stand on the performance scale!

      You’re not going to be doing much video editing or CAD work on either of them!

      The entire reason the Temash Jaguar cores are clocked down to 1GHz is that they’re emphasizing power efficiency… So everything is scaled back to keep it within acceptable power levels, and that means you should not be overestimating what AMD is offering in this range!

      Really, the Jaguar cores are only about 20%, at best, better than the old Bobcat cores and they were pretty lousy on performance compared to Athlon and other AMD processors.

      AMD only uses them because they can reach lower power levels with them than their more powerful Architectures and they have similar limitations with their iGPUs!

      It takes the quad-core A6-1450 Temash to even reach double the performance of the old Hondo Z-60 APU, and that’s comparable to the old Ontario C-50 performance range.

      While all iGPUs are lousy compared to discrete graphics cards! Even a basic low-end discrete graphics card can give you better performance than these will offer!

      Besides, AMD has been promising game changers for years and have yet to deliver!

      While driver support is still limited for these low-end products, even for AMD!

      It’s one of the reasons why they don’t yet offer Linux support for example! So don’t expect everything to just work!

      • iAMunderDOg

        AMD Jaguar is 30% faster clock for clock compared to Bobcat, and its GPU is more powerful and efficient.

        The A4-1200 Temash has the CPU performance of a Z-60 at 1.3 GHz and the GPU performance of an E-350. The E-350 has a VLIW5 GPU, while Temash has a GCN GPU that is far more efficient.

        The Z-60 has a 4.5 watt TDP, while the E-350 consumes 15 watts.

        So the A4-1200 Temash has the same CPU performance as the Z-60 and the GPU performance of the E-350 in a 3.9 watt TDP package.

        If you want something with more cores, get the A6-5200, which has Ivy Bridge ultrabook performance while being noticeably efficient. Haswell is more efficient than Kabini, but far more expensive.

        The gap in performance won’t be huge.

        AMD’s Ontario APUs were amazing in 2011. C-50 APUs ended up in several tablets/convertibles/notebooks, with great performance for a 9 watt chip from 2011. Some people were just blown away by the performance, like others were when they bought Llano or Brazos APUs.

        That was a second, short golden age: they kicked Intel in the guts with Brazos (E-350), then in the nuts with Ontario (C-50), and finally landed an uppercut with Llano, which is even now faster than some of Intel’s latest IGPs after 2 years. lol

        Intel stole some design and architecture solutions from Nvidia. Intel’s HD Graphics is based around Nvidia’s Fermi, which runs red hot, and that’s why Intel’s IGPs are not as power efficient as GCN, or even VLIW4.

      • CyberGusa

        Wow, wrong on so many points… First, Jaguar is only 20% better than Bobcat, not 30%!

        Second, Bobcat’s advantage over the Atom was mainly only in single-threaded processing, and only by about 30%, as an Atom clocked 50% faster pulled ahead of the Bobcat core!

        Mind that Silvermont improves the Atom by over 50%. So the CPU advantage goes to the Atom with Silvermont!

        The GPU improvement for Silvermont is even more significant and at the very least puts it in the same range as the Temash GPUs!

        While power efficiency is still on Intel’s side, and the Temash GPU still consumes too much power!

        So sorry but the numbers don’t support your contentions!

      • iAMunderDOg

        Maybe my calculations were wrong, so I did some research; look what I found and compared.

        A4-5000 1.5 GHz Cinebench R10 Single Core 64bit: 1446
        (At 1.6 GHz it should score around 1550, which would be 410 pts faster.)
        http://www.notebookcheck.net/AMD-A-Series-A4-5000-Notebook-Processor.92867.0.html

        E350 1.6 GHz Cinebench R10 Single Core 64bit: 1140
        http://www.notebookcheck.net/AMD-E-350-Notebook-Processor.40941.0.html

        By my calculations, it should be almost 50% faster.

        The A4-1200 at 1 GHz should get around 0.52 in Cinebench 11.5 MultiCore.

        The E350 at 1.6 GHz scores 0.59 in Cinebench 11.5 MultiCore.
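        For reference, the gaps implied by the scores quoted above can be checked with a few lines of Python. This is just a quick sketch using the numbers already in this thread; the scores themselves come from the linked notebookcheck pages.

        ```python
        # Cinebench R10 single-core scores quoted in this thread:
        a4_5000 = 1446   # Kabini A4-5000 @ 1.5 GHz
        e350 = 1140      # Zacate E-350 @ 1.6 GHz

        # Raw score gain of the A4-5000 over the E-350
        raw_gain = (a4_5000 - e350) / e350 * 100
        print(f"raw score gain: {raw_gain:.1f}%")            # ~26.8%

        # Normalizing each score by its clock gives a rough
        # clock-for-clock (per-GHz) comparison
        per_ghz_gain = (a4_5000 / 1.5) / (e350 / 1.6) * 100 - 100
        print(f"clock-for-clock gain: {per_ghz_gain:.1f}%")  # ~35.3%
        ```

        So on these particular numbers, the raw gap is closer to 27%, and around 35% clock for clock.
        
        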

        So basically you can play:
        http://www.youtube.com/watch?v=vIsi864c-is

        http://www.youtube.com/watch?v=QXpxyc9gnJY

        http://www.youtube.com/watch?v=vI9rKxpYvBk

        http://www.youtube.com/watch?v=SRKNwwXjBH0

        http://www.youtube.com/watch?v=cbsmI-NZCNQ

        AMD should make a Temash SKU similar to the A4-1200, just with 200 MHz higher CPU clocks.

        Hopefully the refresh/update in H1 2014 will bring substantial performance improvements, since there is a high chance of it being 20nm.

        TSMC is rapidly ramping up 20nm; actually, from the latest report, they are building it up like crazy since they got a contract with Apple.

        Maybe Apple fueled TSMC’s progress, plus TSMC signed partnerships with a few companies. Wow.

        20nm GPU’s in 2014 thumbs up!

      • CyberGusa

        First, try reading the review… It clearly states of the Jaguar cores that, “According to AMD, the performance per clock has been increased by about 15 percent” vs the previous Bobcat cores!

        There is no 50% improvement that doesn’t involve scaling to more cores and higher clock speeds!

        Jaguar is a 2-issue OoO architecture, but with only roughly 20% higher IPC than Bobcat thanks to a number of tweaks and they never claimed to be delivering more than that!

        Also, “Due to the low clock speed of 1.5 GHz, the performance in poorly parallelized applications is quite modest and just slightly above the old E2-1800.”

        Only when using all four cores does performance equal a dual-core Sandy Bridge based Core i3-2377M, but that’s using double the number of cores to get that performance!

        Second, benchmarks don’t scale linearly! The quad-core Temash A6-1450 scores only 15% less than the Kabini A4-5000, and they only have a 0.1GHz difference in max clock speeds, with the A6-1450 maxing at 1.4GHz, but both use the same architecture!

        So an increase in clock speed should only gain the Kabini about the same amount, at best! Since there are diminishing returns as you scale up the clock speed of any architecture!

        Third, it doesn’t change how the lower end Temash models compare. The Zacate E-350 had nearly twice the graphical performance of the Ontario C-50, which is what Desna was based on, and thus what Hondo worked from, and finally what the low end Temash models are being compared to; only the quad-core version provides performance to rival the Zacate E-350.

        Fourth, Cinebench CPU results are only comparable to tests done by the same version of Cinebench… But the E-350 results you are quoting are from an early Cinebench version!

        Only the multi-core test was updated to the newer Cinebench version but notebookcheck failed to update the Single CPU benchmark!

        Now, if AMD can get the 20nm out in time for the Jaguar+ refresh then they can potentially see some good improvements… We’ll have to wait and see…

        TSMC has made some good progress on getting back on track for 20nm production, but it’s on a more limited scale than the 28nm production before it, and getting the factory ready for production is not the same as being able to push out final product… Many delays happen after production starts, when they find defects or issues they didn’t foresee during construction and setup of the factory, problems that only show up as they start making chips!

        While the Jaguar+ update has to be more than what AMD did with the Brazos 2.0 refresh to make it count… Hopefully it’ll work out but there’s still a lot of uncertainty right now…

        But I’m all for better competition from AMD…

      • iAMunderDOg

        Whatever…

        Watch this video:

        http://www.youtube.com/watch?v=PzmavmFF3UY

        It’s amazing how well the E350 performs in Linux, especially since the guy modified the Linux kernel for it and the game runs smoother and at much higher quality than the Windows version. It’s mind blowing.

      • CyberGusa

        Sure, games can perform very well under Linux…

        http://www.gamespot.com/news/left-4-dead-2-faster-on-linux-than-windows-says-valve-6390089

        That has more to do with the OS than the hardware though!

  • Paul M

    I really, really want to upgrade my home fileserver/mediaserver from an Atom 330 to a Pentium J2850, as the 330 is just far too slow for any kind of video transcoding. Please update us when motherboards with the new Silvermont Atoms are available! Thanks!