<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World</title>
	<atom:link href="http://www.vrworld.com/author/nova/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Thu, 09 Apr 2015 20:31:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Whither Galaxy S6? Samsung’s Newest Entry Shows Misdirected Smartphone Evolution</title>
		<link>http://www.vrworld.com/2015/03/14/whither-galaxy-s6-samsungs-newest-entry-shows-misdirected-smartphone-evolution/</link>
		<comments>http://www.vrworld.com/2015/03/14/whither-galaxy-s6-samsungs-newest-entry-shows-misdirected-smartphone-evolution/#comments</comments>
		<pubDate>Sat, 14 Mar 2015 09:26:31 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Mobile Computing]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Operating Systems]]></category>
		<category><![CDATA[Galaxy Note]]></category>
		<category><![CDATA[Galaxy S6]]></category>
		<category><![CDATA[KRX: 005930]]></category>
		<category><![CDATA[samsung]]></category>
		<category><![CDATA[Samsung Galaxy]]></category>
		<category><![CDATA[Samsung Galaxy Note]]></category>
		<category><![CDATA[Samsung Galaxy S6]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=49951</guid>
		<description><![CDATA[<p>The Samsung Galaxy S6 shows that the evolution of smartphones doesn't mean an increase in productivity.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/14/whither-galaxy-s6-samsungs-newest-entry-shows-misdirected-smartphone-evolution/">Whither Galaxy S6? Samsung’s Newest Entry Shows Misdirected Smartphone Evolution</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="503" height="621" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/kv-phones-1.png" class="attachment-post-thumbnail wp-post-image" alt="kv-phones (1)" /></p><p>Being a <a href="http://www.vrworld.com/tag/samsung-2/">Samsung</a> (<a href="http://www.google.com/finance?cid=151610035517112">KRX: 005930</a>) Galaxy user for a number of years (from the S3 to the Note 3 and then the S5, the last two in parallel right now – that’s quite a vote with one’s wallet, I guess), I eagerly awaited the launch of the Galaxy S6 to see if it was worth considering as an upgrade. Especially since the Galaxy Note Edge, the interim variant with the single curved-side 2560&#215;1600 16:10 display, did show some promise in how the extra curve can be used without affecting the main work area size.</p>
<p>However, what came out seriously disappointed me: both the straight and curved versions share the same 2560&#215;1440 16:9 display – meaning that the curved variant, in a sense, lost some 1/6 of its straight viewable work or play area on an already narrow display.</p>
<p>But that was just the beginning: the new phones have no microSD card slots for user storage expansion flexibility and, no, the battery can’t be replaced by the user either, just like on the iPhones. But yes, they have very, very fast processors and 3+ megapixel displays with a gazillion dots per inch of density in a, yes, 5-inch format.</p>
<p>Hold on for a second: the existing 1920&#215;1080 FullHD displays on 5-inch-plus smartphones already reach some 400 dots per inch, beyond what a normal human eye can discern from, say, one foot away. What is the point of adding extra resolution that can’t be seen? Wouldn’t it be better if Samsung added extra pixels to its laptops instead, so that 4K 15-inch models became a reality? Or UHD 16:10 3840&#215;2400 tablets, for instance, in the same format?</p>
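<p><em>A quick sanity check on that density claim, in a minimal Python sketch (the 5.1-inch diagonal is an assumption for a typical 2015 flagship):</em></p>
<pre><code>from math import hypot

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the panel diagonal."""
    return hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1080, 5.1)))  # ~432 ppi for FullHD at 5.1 inches
print(round(ppi(2560, 1440, 5.1)))  # ~576 ppi for 1440p at the same size
</code></pre>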
<p>Don’t forget that the extra pixels add to the processing burden, the video frame buffer memory footprint and, of course, the power consumption, yet there is almost no 1440p video content to benefit from them. And, yes, world-standard 1080p FullHD content will look better on a “pixel for pixel” matching 1920&#215;1080 screen than interpolated across a 2560&#215;1440 screen. So, what the hell was the point in doing this? And, mind you, it’s not just Samsung doing this.</p>
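<p><em>And to put the extra burden in rough numbers, a back-of-envelope sketch of the pixel counts and raw frame buffer footprints, assuming the usual 32-bit colour:</em></p>
<pre><code>def framebuffer_mib(width, height, bytes_per_px=4):
    """Raw size of one frame at 32-bit colour, in MiB."""
    return width * height * bytes_per_px / 2**20

print(round(framebuffer_mib(1920, 1080), 1))  # ~7.9 MiB per 1080p frame
print(round(framebuffer_mib(2560, 1440), 1))  # ~14.1 MiB per 1440p frame
print(round(2560 * 1440 / (1920 * 1080), 2))  # ~1.78x the pixels to render and move
</code></pre>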
<h2>Is the Galaxy S6 a step in the right direction?</h2>
<p>This brings us to a point: is the current smartphone evolution seriously misdirected? Not just from the ‘consumerised dumbing down’ of the overall approach and the waste of CPU cycles on slow Java apps compared to what optimised C++ code can do. Remember, a Cray-3 supercomputer from three decades ago is quite a bit slower than a current top-end smartphone, but it was a hell of a lot more optimally used, resource-wise. The industry is desperately trying to create added specs that make no real usage sense, just to justify the new sales cycle – and any PC-market technology trickery of that sort looks like angelic honesty compared to what is devised in the smartphone market.</p>
<p>The features being added don’t seem to make much sense in terms of real use: the 1440p displays are one good example of absolute uselessness unless you have a true eagle eye, I guess. The good stuff that was added – in Samsung’s case, the USB 3.0 connection for faster recharging and PC connections in the S5 and the Note 3 – ended up removed, downgraded to USB 2.0 in the Note 4 and the S6!</p>
<p>Then, if we really want a visually rich phone with such strong GPU power, why not a direct microHDMI connection to a FullHD TV set to, say, play those lovely 3D moto and other beginner’s games on it, without having to use roundabout means such as wireless Screen Mirroring?</p>
<p>And, yes, looking at the on-screen keyboards there… when they occupy half of the screen and you can barely see the message being typed, it seems it is time to bring back the 16:10 screen to smartphones too. It would help manage the problem, especially in horizontal mode.</p>
<p>Back to the point above: Samsung is the leader of the smartphone market today, like it or not. Apple (<a href="http://www.google.com/finance?cid=22144">NASDAQ: AAPL</a>) is still a formidable force, and Xiaomi could be another top-league member. However, the last thing we expected from a market leader was to create a closed ‘black box’ product with useless new stuff added and good current stuff removed, all in the name of, what, an industrial design exercise? My vote on this is a big no, in the name of keeping what’s left of the basic sanity of this market, and it looks like the next phone I get will be a Chinese one (hopefully malware-free), and so be it – I hope they take a somewhat more pragmatic approach to product evolution.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/14/whither-galaxy-s6-samsungs-newest-entry-shows-misdirected-smartphone-evolution/">Whither Galaxy S6? Samsung’s Newest Entry Shows Misdirected Smartphone Evolution</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/14/whither-galaxy-s6-samsungs-newest-entry-shows-misdirected-smartphone-evolution/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Intel Xeon D: Hitting the ARM Microserver Hopes?</title>
		<link>http://www.vrworld.com/2015/03/10/intel-xeon-d-hitting-arm-microserver-hopes/</link>
		<comments>http://www.vrworld.com/2015/03/10/intel-xeon-d-hitting-arm-microserver-hopes/#comments</comments>
		<pubDate>Mon, 09 Mar 2015 17:00:37 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Companies]]></category>
		<category><![CDATA[Enterprise]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Servers]]></category>
		<category><![CDATA[xeon]]></category>
		<category><![CDATA[Xeon D]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=49514</guid>
		<description><![CDATA[<p>Today, Intel (NASDAQ: INTC) is announcing its first Broadwell-based Xeon processor. It isn&#8217;t the mainstream E3 series derived from desktop chips, nor the high end ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/10/intel-xeon-d-hitting-arm-microserver-hopes/">Intel Xeon D: Hitting the ARM Microserver Hopes?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="764" height="585" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/e7287943adec596e852b2c05702ebfd0-764-585.png" class="attachment-post-thumbnail wp-post-image" alt="e7287943adec596e852b2c05702ebfd0-764-585" /></p><div id="yMail_cursorElementTracker_0.5465991753153503">Today, <a href="http://www.vrworld.com/category/companies/intel/">Intel</a> (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>) is announcing its first Broadwell-based Xeon processor. It isn&#8217;t the mainstream E3 series derived from desktop chips, nor the high end E5 either &#8212; both of those will wait for later in the year.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">The new Xeon D goes for the upper end of the nascent microserver market, as well as for the dedicated storage and network appliances &#8212; exactly the focus of the current ARM server campaign.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">Microservers were chosen by ARM (<a href="http://www.google.com/finance?cid=14002991">LON: ARM</a>) as, compared to the bigger server iron, they mostly rely on the open-source Web 2.0 stack, while the storage and network devices usually run specific applications. In both cases, there is no need for ARM to fund expensive commercial application ports &#8212; something at which many RISC CPU makers with far better CPUs failed in the pre-Linux days.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div><a href="http://cdn.vrworld.com/wp-content/uploads/2015/03/Screenshot_2015-03-09-15-35-26.png" rel="lightbox-0"><img class="aligncenter size-medium wp-image-49515" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/Screenshot_2015-03-09-15-35-26-600x338.png" alt="Screenshot_2015-03-09-15-35-26" width="600" height="338" /></a></div>
<div></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">So, at least in theory, Intel does not have the same apps advantage here. But it has another one: unlike the previous RISC competitors, which were superior to it performance-wise, ARM is mostly inferior to the current Intel processors in this segment. The new Xeon D seems to aim to cement that advantage in a Borg-like &#8220;resistance is futile&#8221; fashion. How?</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">First, there are eight of the new Broadwell cores with Xeon reliability enhancements and a dedicated 1.5 MB L3 cache per core, suited for microserver jobs that often tend to stay on specific cores. Forgoing the big shared caches and internal buses of its larger E5 brethren also reduces die complexity quite a bit.</div>
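<div><em>As an aside, pinning a job to a specific core, the usage pattern those dedicated per-core caches suit, is a one-liner on Linux; a minimal Python sketch:</em></div>
<pre><code>import os

# Restrict the current process (pid 0 = self) to core 2 only, so its
# working set stays warm in that core's dedicated cache. Linux-only API.
os.sched_setaffinity(0, {2})
print(os.sched_getaffinity(0))  # {2}
</code></pre>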
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">As the target usages are also less memory bandwidth driven (no HPC or big data here), Intel used a simple combined dual channel DDR3L / DDR4 controller, so pick and choose which one you want. The first mainstream Skylake processors later this year will have a similar feature.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">Then, there are 32 PCIe lanes (24 v3 and 8 v2), six SATA 6 Gbps ports and, guess what, two built-in 10 Gbps Ethernet controllers &#8212; all on the same die. That rounds out the feature set, in short.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">The 14nm processors, running at up to 2.6 GHz in Turbo, are up to one-third slower per core than their bigger brethren, but still easily triple the speed of the top devices from Applied Micro, the leading ARM server CPU maker these days.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">What to make of this? Basically, after learning uber-costly lessons competing with ARM using Atom in the handset and tablet arena, Intel has thrown its best onto the battlefield to prevent ARM from encroaching on its prized and most profitable business: servers.</div>
<div id="yMail_cursorElementTracker_0.5465991753153503"></div>
<div id="yMail_cursorElementTracker_0.5465991753153503">On another note&#8230; with their low power, compact footprint and 128 GB ECC RAM support on top of all that storage and networking, these could be really nifty solutions for MMORPG &#8220;apartment block&#8221; servers for low-latency local community or LAN-party play. Makes sense?</div>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/10/intel-xeon-d-hitting-arm-microserver-hopes/">Intel Xeon D: Hitting the ARM Microserver Hopes?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/10/intel-xeon-d-hitting-arm-microserver-hopes/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Year-End Thoughts: Intel Goes High-End in 2015?</title>
		<link>http://www.vrworld.com/2014/12/31/year-end-thoughts-intel-high-end-2015/</link>
		<comments>http://www.vrworld.com/2014/12/31/year-end-thoughts-intel-high-end-2015/#comments</comments>
		<pubDate>Tue, 30 Dec 2014 19:47:07 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Enterprise]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[ARM]]></category>
		<category><![CDATA[IBM]]></category>
		<category><![CDATA[Intel]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=41572</guid>
		<description><![CDATA[<p>Intel is still the leader, but ARM is there at the bottom, and Chinese-licensed IBM POWER at the top…</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/12/31/year-end-thoughts-intel-high-end-2015/">Year-End Thoughts: Intel Goes High-End in 2015?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="600" height="350" src="http://cdn.vrworld.com/wp-content/uploads/2014/10/Intel.jpg" class="attachment-post-thumbnail wp-post-image" alt="Intel" /></p><p>Intel (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>) may have had a financial black hole with all the tablet and phone spending over the past few years fighting the ARM incumbency. Obviously, the outlay was so bad that the whole division had to fold under the PC Client one, leaving the latter’s boss Kirk Skaugen with the tough job of integration – or, most likely, of pushing the Core microarchitecture approach further down the price scale to counter the increasingly complex ARM cores.</p>
<p>On the other hand, the high end Enterprise division had yet another stellar year, with little competition to bother about. Xeons are everywhere, and approaching 95% of the worldwide server and related CPU market is about as good as it gets before it becomes an absolute monopoly. The cores are mature and well tuned, as well as the ecosystem from memory to I/O to boards and everything else that matters.</p>
<div class="body" style="text-align: center;">
<p><a href="http://www.vrworld.com/2014/12/11/intels-david-mccloskey-looks-ahead-2014-back-2015/"><strong><em>Also read: Intel&#8217;s David McCloskey Looks Ahead at 2015 and Back at 2014</em></strong></a></p>
<div class="body" style="text-align: left;">However, there are clouds on the horizon for 2015: ARM vendors are persistently trying to get into the server market, starting with low-margin microservers mostly running Web 2.0 stuff, where big commercial software (un)availability is less of an issue. While this topic deserves a separate story, the focus there now is on improving core throughput as well as cache, memory and interconnect bandwidth, things that ARM was sorely lacking until now – compared to both Intel x86 and other RISCs like MIPS or POWER, or even the Chinese Alpha “Shenwei”. Having said that, I do feel that ARM will start making a tangible, but still small, dent in the server market in 2015, one that will be very well marketed by the alliance vendors.</div>
<p>&nbsp;</p>
<p style="text-align: left;">On the other hand, the Chinese government punishing IBM by expelling its high-end systems after the NSA ‘disclosure’ has resulted in an interesting side development: the licensing of the POWER8 and later POWER9 architectures and IP to the Chinese, now officially a done deal.</p>
</div>
<div class="body">Will we soon see inexpensive Chinese POWER machines flooding the markets? I wouldn’t say so for another two years at least, until they are tried out in the internal Chinese market first. But once the strategies for what is to be done are made public over the next half year or so, there could be some repercussions for Xeon’s positioning and mindshare. Mind you, POWER8 not only has the whole shebang of high-end enterprise apps, but it is also the only core more efficient per-thread than the Xeon, and it does have the complete ultra-high-end ecosystem for memories, interconnects and such – including NVLink shared-memory low-latency links between POWER8+ and POWER9 and Nvidia Pascal GPUs by 2017. By then Intel might update its Xeon Phi offerings with direct QPI shared-memory links to its own Xeons too, though.</div>
<div class="body"></div>
<p>&nbsp;</p>
<div class="body">POWER is a RISC ISA, after all, just like Alpha or MIPS or ARM, so in principle, for the same process and transistor budget, it should be able to do more. The ISA issue did haunt Intel for quite a while, although by now it has fine-tuned its own to the hilt.</div>
<p>On that subject, Haswell brought something really important to the table, overlooked by many: its AVX2 instruction extensions now handle the more common integer, not just floating point, operations. If in the future the address calculations are added to the roster, you pretty much don’t need the old base X86 set. Most of the software that matters is already AVX optimised, and more will follow. Would a SIMD and vector style pure AVX ISA, at some point, replace the old X86 within Intel?</p>
<div class="body">In summary, no one can unseat Intel from its high-end throne in the coming year, either, but the attacks from both the top, if IBM decides to milk the new-found Chinese partnership to the fullest, and the bottom, if ARM finally finds its competitive spot in the server arena, will be there more than before. Watch this space for more details on all of these in the coming months.</div>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/12/31/year-end-thoughts-intel-high-end-2015/">Year-End Thoughts: Intel Goes High-End in 2015?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/12/31/year-end-thoughts-intel-high-end-2015/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Connecting the CPUs and GPUs: Battles of Choices Are Coming</title>
		<link>http://www.vrworld.com/2014/11/11/connecting-cpus-gpus-battles-choices-coming/</link>
		<comments>http://www.vrworld.com/2014/11/11/connecting-cpus-gpus-battles-choices-coming/#comments</comments>
		<pubDate>Tue, 11 Nov 2014 11:58:11 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[AMD hypertransport]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[NVLink]]></category>
		<category><![CDATA[Power8+]]></category>
		<category><![CDATA[Power9]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=40351</guid>
		<description><![CDATA[<p>As GPUs get more powerful, a better solution to bridge the connectivity gap with the CPU is needed. Might AMD have the solution?</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/11/11/connecting-cpus-gpus-battles-choices-coming/">Connecting the CPUs and GPUs: Battles of Choices Are Coming</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="800" height="480" src="http://cdn.vrworld.com/wp-content/uploads/2014/11/51b99d8da936d.jpg" class="attachment-post-thumbnail wp-post-image" alt="" /></p><p>Modern high-end CPUs are pretty fast these days: an Intel Xeon E5v3 (Haswell-EP) can pack up to 18 cores and two-thirds of a double-precision teraflop of floating-point power, while the 2015 Shenwei Alpha from China, with upwards of 32 vector-assisted cores per die, will crunch even more numbers per second. On the other hand, the GPUs have accelerated their own compute roadmap, with both Nvidia (<a href="http://www.google.ca/finance?cid=662925">NASDAQ: NVDA</a>) and AMD (<a href="http://www.google.ca/finance?cid=327">NYSE: AMD</a>) devices on the 2015 schedule breaking through the 3-teraflop DP ceiling. Of course, both CPUs and GPUs of this generation come with well-tuned, high-bandwidth memory systems too.</p>
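<p><em>That “two-thirds of a teraflop” figure follows straight from the Haswell arithmetic; a quick Python sketch (the 2.3 GHz all-core clock is an assumption for an 18-core E5v3 under AVX load):</em></p>
<pre><code>def peak_dp_gflops(cores, ghz, flops_per_cycle=16):
    """Haswell core: 2 AVX2 FMA units x 4 doubles x 2 ops = 16 DP flops/cycle."""
    return cores * ghz * flops_per_cycle

print(round(peak_dp_gflops(18, 2.3)))  # ~662 GFLOPS, about two-thirds of a DP teraflop
</code></pre>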
<p>The same of course applies to Intel’s (<a href="http://www.google.ca/finance?cid=284784">NASDAQ: INTC</a>) Xeon Phi compute accelerator, with next year’s Knights Landing 3 TFLOPS DP version matching nicely with the next-generation Broadwell-based Xeon E5v4. The Knights Landing Xeon Phi, with its 16 GB of 3D-stacked memory on the package, will bring new levels of low-latency, ultra-high-bandwidth in-memory processing capability.</p>
<p>But the problems come when trying to connect these CPUs and GPUs together – the PCI Express link, used now in 99% of cases, drastically impairs the connection, with its maximum 20 GB/s of achievable net bandwidth and up to 1 microsecond of roundtrip latency, over an order of magnitude slower in latency than what Intel QPI, AMD HyperTransport or IBM POWER8 peripheral buses and Nvidia NVLink offer – and for the many short transfers common in HPC, that latency can mean a lot. Those other connections enable coherent shared memory between all those CPUs and GPUs, rather than messaging and copying between separate memory spaces.</p>
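<p><em>The PCIe numbers are easy to reconstruct; a rough Python sketch of where that 20 GB/s net figure for a x16 PCIe 3.0 link comes from (the overhead estimate is an assumption):</em></p>
<pre><code>def pcie3_gb_per_s(lanes):
    """PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, per direction."""
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

raw = pcie3_gb_per_s(16)
print(round(raw, 2), round(2 * raw, 1))  # ~15.75 GB/s each way, ~31.5 GB/s raw both ways
# Packet headers and protocol overhead shave that further; roughly 20 GB/s
# of net aggregate bandwidth is a realistic ceiling, as quoted above.
</code></pre>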
<p>So, even though the 2015 Knights Landing will still have to rely on PCIe v3 for the connection to its Xeon cousins, the 2016 variety could – hopefully – use the far more efficient QPI. It had better, as, by then, the Nvidia “Pascal” GPU generation, the one after Maxwell, will team up with IBM POWER8+ and POWER9, using the common NVLink for a tight, low-latency, shared-memory connection between IBM CPUs and Nvidia GPUs in computational environments.</p>
<p>Mind you, that need not apply just to some large supercomputers, but even to your own high-end Linux workstation. If the speculated OpenPOWER expansion to China bears fruit soon, and we see an inexpensive POWER8+ lookalike from there with NVLink on board, then high-speed, heterogeneous yet shared-memory 20 – 50 TFLOPS workstations will become a reality within a year or so.</p>
<p>However, there’s a company that could have done it all, much earlier – you guessed it right, AMD. Remember HyperTransport, the most faithful follow-on of the Alpha EV7 bus, ahead of QPI and such? Well, why didn’t AMD put HyperTransport on its Hawaii and later high-end GPUs, and let the GPUs coherently share each other’s memory and that of the matching Opteron CPUs? Even CrossFire setups would operate far, far faster and more neatly.</p>
<p>It’s not too late for the company, though. If AMD does decide to (hopefully) produce top-end CPUs again, and connects them via HyperTransport to its own arrays of GPUs, it could be back in business.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/11/11/connecting-cpus-gpus-battles-choices-coming/">Connecting the CPUs and GPUs: Battles of Choices Are Coming</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/11/11/connecting-cpus-gpus-battles-choices-coming/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel Core i7-5960X on Gigabyte X99-Gaming G1 WIFI: The Ultimate Enthusiast PC Combo?</title>
		<link>http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo-2/</link>
		<comments>http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo-2/#comments</comments>
		<pubDate>Mon, 22 Sep 2014 05:49:22 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Reviews]]></category>
		<category><![CDATA[Gigabyte]]></category>
		<category><![CDATA[Gigabyte X99-Gaming G1]]></category>
		<category><![CDATA[intel i7-5960X]]></category>
		<category><![CDATA[Motherboard reviews]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=39075</guid>
		<description><![CDATA[<p>Intel’s launch of the Haswell-based Core i7 5960X and the associated X99 chipset with DDR4 memory has required a brand new series of motherboards as ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo-2/">Intel Core i7-5960X on Gigabyte X99-Gaming G1 WIFI: The Ultimate Enthusiast PC Combo?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1500" height="1048" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/X99-Gaming_G1_WIFI_001.jpg" class="attachment-post-thumbnail wp-post-image" alt="X99-Gaming_G1_WIFI_00" /></p><p>Intel’s launch of the Haswell-based Core i7-5960X and the associated X99 chipset with DDR4 memory has required a brand-new series of motherboards as well. The four major vendors – Gigabyte, Asus, MSI and Asrock – grabbed the chance to introduce other new features into the just-launched platform to entice users to upgrade.</p>
<p>Recently, <em>Bright Side of News*</em> reviewed Intel’s flagship CPU with <a href="http://www.brightsideofnews.com/2014/09/15/gigabyte-ga-x99-gaming-5-solid-performer/">Gigabyte’s X99-Gaming 5 motherboard</a>, which by itself is a decent balance of top performance, features and compact size. How about the flagship mainboard in Gigabyte’s line, the Gaming G1 WiFi?</p>
<p>At 305 x 259 mm, the board itself is a bit larger than the usual 305 x 244 mm ATX size, but should still fit comfortably into most enthusiast-oriented large casings, like the one from Antec used in this review. The first look at the X99-Gaming G1 WiFi reveals quite a stunning board, almost overloaded with all the bells and whistles one could ask for – up to 64 GB of RAM if using 8 GB DDR4 DIMMs, plenty of PCIe slots for quad-GPU operation, and still three x1 slots squeezed in between. Add to it every single interface (minus Thunderbolt) on board, including SATA Express, eSATA, M.2 slots, plenty of USB ports, and the icing on the cake: dual Gigabit Ethernet, one of which is a Qualcomm Atheros KillerNIC, and Creative SoundCore 3D “quasi-DSP” audio with gold-plated shielding. On the last one, it’s a pity that it’s still not the old Creative Sound Blaster X-Fi, as that one was more of a true audio processor that offloads audio handling from the CPU.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/cpuZhaswellE.png" alt="" width="1254" height="411" /></p>
<p>As for Thunderbolt, separating it off the main board may turn out to be a smart approach after all, as 20 Gbps Thunderbolt 2 is still maturing, and there’s a question of whether users want a single port, dual ports, or – for now – no ports.</p>
<h2>Overview and testing</h2>
<p>The board quality, from the PCB manufacture to the components used, whether in the power department, connectors, interfaces or audio amplifiers, is top-notch, something once seen on Asus’ early ROG boards some years ago. The design and manufacturing control is still in Taiwan, by the way, which seems to help a bit in achieving operational reliability and fewer RMA headaches for Gigabyte itself.</p>
<p>The board was tested with the i7-5960X CPU and Micron’s reference Crucial quad-channel DDR4-2133 kit. The latter doesn’t have any fancy heat spreaders and such; however, it is the reference kit coming from the memory die vendor itself, and it doesn’t block the internal airflow with the otherwise mostly useless heat-spreader decorations that gaming memory kits usually have. The cooler was Thermaltake’s Water 3 Pro – easy to install and good enough for the 30% or so overclocks satisfactory to most users, but not more.</p>
<p>In this early review, we looked at the BIOS tuning options, the Gigabyte auto overclocking choices as well as something not usually focused on in performance tests: the selected benchmark performance dependency on the CPU uncore (i.e. cache and memory controller) and memory bandwidth and latency settings. The subsequent review parts will focus on other benchmarks and further CPU and memory tuning experiences.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/BIOSscrshotX99.gif" alt="" width="1920" height="1080" /></p>
<p><strong>Sandra 2014</strong></p>
<p>Here are the results from default all the way to 4.3 GHz. See the variations once the CPU uncore and memory come into the picture:</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/SandraHaswellE.png" alt="" width="2692" height="486" /></p>
<p><strong>CineBench 15</strong></p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/CineBenchHaswellEnew.png" alt="" width="663" height="354" /></p>
<p>Same here – even though compute-heavy ray-tracing render routines are usually not very memory-bound, there are small but measurable benefits from tuning up the uncore and memory, as you see here (note: Cinebench could not detect the true CPU clock speed).</p>
<p>At the start, I felt it could be related to the particular CPU sample used, or even the cooler’s limits, but the plateau for stable performance on this particular board was 4.3 GHz with Turbo turned off and the uncore set at 3.2 GHz. This by itself is no mean feat, as – for the current Haswell-E at least – I’d not run the CPU at anything higher than 4.0 GHz for regular everyday operation, if intending to keep it up and running nicely for at least a year until the Broadwell-E refresh comes along. For a start, the i7-5960X at 4 GHz with a 3.2 GHz uncore and DDR4-2400 CL 14-14-14 memory is a very decent top-end setup by any measure.</p>
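<p><em>For reference, the absolute CAS latency of that DDR4-2400 CL14 setting works out as follows (a quick Python sketch; the DDR4-2133 CL15 line is a typical JEDEC kit, for comparison):</em></p>
<pre><code>def cas_latency_ns(cl, mt_per_s):
    """First-word latency: CL cycles at the memory clock (half the transfer rate)."""
    return cl / (mt_per_s / 2) * 1000

print(round(cas_latency_ns(14, 2400), 2))  # ~11.67 ns
print(round(cas_latency_ns(15, 2133), 2))  # ~14.06 ns
</code></pre>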
<p>After all, even though the Haswell-E/EP high-end parts were made in three die versions: 8-core, 12-core and 18-core (the latter two only as Xeons for workstations and servers), it’s logical to expect that, within the given TDP tolerances, only the 8-core version would overclock sufficiently above the base setting, say above 30%, with standard board and cooling equipment. The Gigabyte overclocking competition at IDF, which we covered here, reached the 6 GHz barrier on LN2 with some CPUs, but that is of course not a production setting.</p>
<div style="width: 1583px" class="wp-caption aligncenter"><img src="http://cdn.vrworld.com/wp-content/uploads/2014/09/X99inside.jpg" alt="" width="1573" height="995" /><p class="wp-caption-text">A look at the board in-operation.</p></div>
<p>I went ahead with a BIOS update from the default F4 to the latest F8c BIOS version, just released this weekend, which promised to solve lots of early issues and improve tuning. However, the results were overall about the same in this round, so yes, it could be that this CPU sample was simply topping out at 4.3 GHz for stable production work. Either way, more memory modules await us, to see how far we can go with the memory overclock. Whether upgrading this board as well to the “OC Socket 2084” seen on the Asus Rampage V Extreme and the Gigabyte X99 LN2 board would be a benefit, we should know soon too.</p>
<p>In conclusion, the Gaming G1 is probably the most feature-rich X99-based high-end board on the market right now, based on the bells and whistles. The overclocking side may leave a bit more to be desired at the extreme end, but we’ll try it anyway with other CPU and memory samples soon.</p>
<p><em>This post originally appeared on <a href="http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo/?preview=true&amp;preview_id=38896&amp;preview_nonce=e75ff51b81&amp;post_format=standard">VR World</a>, Bright Side of News&#8217;* Asia Pacific news portal. </em></p>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo-2/">Intel Core i7-5960X on Gigabyte X99-Gaming G1 WIFI: The Ultimate Enthusiast PC Combo?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel Core i7-5960X on Gigabyte X99-Gaming G1 WIFI: The Ultimate Enthusiast PC Combo?</title>
		<link>http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo/</link>
		<comments>http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo/#comments</comments>
		<pubDate>Mon, 22 Sep 2014 04:22:23 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Reviews]]></category>
		<category><![CDATA[Gigabyte]]></category>
		<category><![CDATA[Gigabyte X99-Gaming G1]]></category>
		<category><![CDATA[intel i7-5960X]]></category>
		<category><![CDATA[Motherboard reviews]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=38896</guid>
		<description><![CDATA[<p>Intel’s launch of the Haswell-based Core i7 5960X and the associated X99 chipset with DDR4 memory has required a brand new series of motherboards as ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo/">Intel Core i7-5960X on Gigabyte X99-Gaming G1 WIFI: The Ultimate Enthusiast PC Combo?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1500" height="1048" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/X99-Gaming_G1_WIFI_00.jpg" class="attachment-post-thumbnail wp-post-image" alt="X99-Gaming_G1_WIFI_00" /></p><p>Intel’s launch of the Haswell-based Core i7-5960X and the associated X99 chipset with DDR4 memory has required a brand-new series of motherboards as well. The four major vendors – Gigabyte, Asus, MSI and Asrock – grabbed the chance to introduce other new features into the just-launched platform to entice users to upgrade.</p>
<p><em>VR World’s </em>sister site, <em>Bright Side of News*</em>, reviewed Intel’s flagship CPU with <a href="http://www.brightsideofnews.com/2014/09/15/gigabyte-ga-x99-gaming-5-solid-performer/">Gigabyte’s X99-Gaming 5 motherboard</a>, which by itself is a decent balance of top performance, features and compact size. How about the flagship mainboard in Gigabyte’s line, the Gaming G1 WiFi?</p>
<p>At 305 x 259 mm, the board itself is a bit larger than the usual 305 x 244 mm ATX size, but should still fit comfortably into most enthusiast-oriented large casings, like the one from Antec used in this review. The first look at the X99-Gaming G1 WiFi reveals quite a stunning board, almost overloaded with all the bells and whistles one could ask for – up to 64 GB of RAM if using 8 GB DDR4 DIMMs, plenty of PCIe slots for quad-GPU operation, and still three x1 slots squeezed in between. Add to it every single interface (minus Thunderbolt) on board, including SATA Express, eSATA, M.2 slots, plenty of USB ports, and the icing on the cake: dual Gigabit Ethernet, one of which is a Qualcomm Atheros KillerNIC, and Creative SoundCore 3D “quasi-DSP” audio with gold-plated shielding. On the last one, it’s a pity that it’s still not the old Creative Sound Blaster X-Fi, as that one was more of a true audio processor that offloads audio handling from the CPU.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/cpuZhaswellE.png" rel="lightbox-0"><img class="aligncenter size-medium wp-image-38908" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/cpuZhaswellE-600x196.png" alt="cpuZhaswellE" width="600" height="196" /></a></p>
<p>As for Thunderbolt, separating it off the main board may turn out to be a smart approach after all, as 20 Gbps Thunderbolt 2 is still maturing, and there’s a question of whether users want a single port, dual ports, or – for now – no ports.</p>
<h2>Overview and testing</h2>
<p>The board quality, from the PCB manufacture to the components used, whether in the power department, connectors, interfaces or audio amplifiers, is top-notch, something once seen on Asus’ early ROG boards some years ago. The design and manufacturing control is still in Taiwan, by the way, which seems to help a bit in achieving operational reliability and fewer RMA headaches for Gigabyte itself.</p>
<p>The board was tested with the i7-5960X CPU and Micron’s reference Crucial quad-channel DDR4-2133 kit. The latter doesn’t have any fancy heat spreaders and such; however, it is the reference kit coming from the memory die vendor itself, and it doesn’t block the internal airflow with the otherwise mostly useless heat-spreader decorations that gaming memory kits usually have. The cooler was Thermaltake’s Water 3 Pro – easy to install and good enough for the 30% or so overclocks satisfactory to most users, but not more.</p>
<p>In this early review, we looked at the BIOS tuning options, the Gigabyte auto overclocking choices as well as something not usually focused on in performance tests: the selected benchmark performance dependency on the CPU uncore (i.e. cache and memory controller) and memory bandwidth and latency settings. The subsequent review parts will focus on other benchmarks and further CPU and memory tuning experiences.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/BIOSscrshotX99.gif" rel="lightbox-1"><img class="aligncenter size-medium wp-image-38901" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/BIOSscrshotX99-600x337.gif" alt="BIOSscrshotX99" width="600" height="337" /></a></p>
<p><strong>Sandra 2014</strong></p>
<p>Here are the results from default all the way to 4.3 GHz. See the variations once the CPU uncore and memory come into the picture:</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/SandraHaswellE.png" rel="lightbox-2"><img class="aligncenter size-medium wp-image-38900" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/SandraHaswellE-600x108.png" alt="SandraHaswellE" width="600" height="108" /></a></p>
<p>&nbsp;</p>
<p><strong>CineBench 15</strong></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/CineBenchHaswellEnew.png" rel="lightbox-3"><img class="aligncenter size-medium wp-image-38905" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/CineBenchHaswellEnew-600x320.png" alt="CineBenchHaswellEnew" width="600" height="320" /></a></p>
<p>Same here – even though compute-heavy ray-tracing render routines are usually not very memory-bound, there are small but measurable benefits from tuning up the uncore and memory, as you see here (note: Cinebench could not detect the true CPU clock speed).</p>
<p>At the start, I felt it could be related to the particular CPU sample used, or even the cooler’s limits, but the plateau for stable performance on this particular board was 4.3 GHz with Turbo turned off and the uncore set at 3.2 GHz. This by itself is no mean feat, as – for the current Haswell-E at least – I’d not run the CPU at anything higher than 4.0 GHz for regular everyday operation, if intending to keep it up and running nicely for at least a year until the Broadwell-E refresh comes along. For a start, the i7-5960X at 4 GHz with a 3.2 GHz uncore and DDR4-2400 CL 14-14-14 memory is a very decent top-end setup by any measure.</p>
<p>After all, even though the Haswell-E/EP high-end parts were made in three die versions: 8-core, 12-core and 18-core (the latter two only as Xeons for workstations and servers), it’s logical to expect that, within the given TDP tolerances, only the 8-core version would overclock sufficiently above the base setting, say above 30%, with standard board and cooling equipment. The Gigabyte overclocking competition at IDF, which we covered here, reached the 6 GHz barrier on LN2 with some CPUs, but that is of course not a production setting.</p>
<div id="attachment_38903" style="width: 610px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/X99inside.jpg" rel="lightbox-4"><img class="size-medium wp-image-38903" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/X99inside-600x379.jpg" alt="A look at the board in-operation. " width="600" height="379" /></a><p class="wp-caption-text">A look at the board in-operation.</p></div>
<p>I went ahead with a BIOS update from the default F4 to the latest F8c BIOS version, just released this weekend, which promised to solve lots of early issues and improve tuning. However, the results were overall about the same in this round, so yes, it could be that this CPU sample was simply topping out at 4.3 GHz for stable production work. Either way, more memory modules await us, to see how far we can go with the memory overclock. Whether upgrading this board as well to the “OC Socket 2084” seen on the Asus Rampage V Extreme and the Gigabyte X99 LN2 board would be a benefit, we should know soon too.</p>
<p>In conclusion, the Gaming G1 is probably the most feature-rich X99-based high-end board on the market right now, based on the bells and whistles. The overclocking side may leave a bit more to be desired at the extreme end, but we’ll try it anyway with other CPU and memory samples soon.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo/">Intel Core i7-5960X on Gigabyte X99-Gaming G1 WIFI: The Ultimate Enthusiast PC Combo?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/21/intel-core-i7-5960x-gigabyte-x99-gaming-g1-wifi-ultimate-enthusiast-pc-combo/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Haswell-E Controversy: What Should Intel Do About Asus And Socket 2084?</title>
		<link>http://www.vrworld.com/2014/09/09/haswell-e-controversey-intel-asus-socket-2084/</link>
		<comments>http://www.vrworld.com/2014/09/09/haswell-e-controversey-intel-asus-socket-2084/#comments</comments>
		<pubDate>Tue, 09 Sep 2014 13:28:07 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[IDF 2014]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Asus]]></category>
		<category><![CDATA[motherboard]]></category>
		<category><![CDATA[overclocking socket]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38602</guid>
		<description><![CDATA[<p>IDF is busy, even before it starts – so it was this time in San Francisco with Gigabyte Overclocking competition just a day before the ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/09/haswell-e-controversey-intel-asus-socket-2084/">Haswell-E Controversy: What Should Intel Do About Asus And Socket 2084?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="778" height="484" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/ac-oc-socket-21.png" class="attachment-post-thumbnail wp-post-image" alt="ac-oc-socket-2" /></p><p>IDF is busy, even before it starts – so it was this time in San Francisco, with the Gigabyte overclocking competition just a day before the keynote.</p>
<p>The OC results from Cookie, Charles Wirth and others were good, with 5.8 to 6 GHz achieved under LN2 cooling on the Core i7-5960X on the Gigabyte boards seen here. The RAM on trial also performed well, hovering above 3 GHz for the G.Skill and Kingston parts, with the Crucial reference DIMMs just below that.</p>
<p>However, something far more interesting was found on the overclocking floor (together with our friend Koen from <a href="http://www.hardware.info/"><span style="color: #3facd6;">Hardware.info</span></a>). Remember Asus’ claims about the additional pins on its LGA 2011-3 socket, aka “socket 2084,” which supposedly bring extra performance and OC reliability by using undocumented pin holes on Intel’s new CPUs? You can see Asus’ claims right here:</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/ac-oc-socket-1-600x4751.png" rel="lightbox-0"><img class="aligncenter size-full wp-image-38604" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/ac-oc-socket-1-600x4751.png" alt="ac-oc-socket-1-600x475" width="600" height="475" /></a></p>
<p>According to industry sources, Asus was considering patenting the socket and preventing Foxconn, its manufacturer, from selling it to other vendors. On the other hand, another major industry source claimed that these pins are not what Asus claims, but just CPU debug and test pins brought back to the socket in this generation. Therefore, not only would they be useless for overclocking, but they would also be a potential crash risk if connected in a production system.</p>
<p>Then, remember another point – the Core i7-5960X is just a reduced version of the same die used for the Xeon E5 v3 8-core Haswell-EP, with some features like the dual QPI channels, memory ECC and a few others disabled, including the approximately 150 signal pins related to them. Could the Enterprise Group, then, know even more than the Client Group about the undocumented pins on these new processors?</p>
<p><em>Note: The Foxconn model marking for the normal socket is 47191; for the OC socket, it is 46391.</em></p>
<p>Next step – see Gigabyte’s brand-new, LN2-cooling-optimised mainboard for these CPUs, with this exact same “special” socket:</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/iPhone-6.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/iPhone-6-750x420.jpg" class="attachment-vw_medium" alt="iPhone 6" /></a>

<p>And compare with the old socket:</p>

<p>So, Gigabyte can get hold of this same socket; the question is what those additional pins connect to. If the first possibility stated above is correct, i.e. that these are real additional power, ground, etc. pins that allow better and more reliable OC, then Asus has no way to patent them, as the matter directly links to Intel IP, including the socket and its validation.</p>
<p>If, however, it is the second possibility, of test, debug or no-connect pins being exposed, then it is serious marketing rubbish that could be used to deceive the high-end buyers whom both Intel and its key OEMs, which Asus and Gigabyte are, surely cherish and treasure. And mind you, this is already in a product on the market, the Asus top-end Rampage V Extreme.</p>
<p>Since Asus, Gigabyte or even Foxconn are not likely to be able to respond to this, it seems that Intel will be the one to resolve this mystery, especially as more of its top end CPUs get the performance enhancement and managed unlocking capabilities over time.</p>
<p>The big question is: what are the warranty implications if things fail because of CPUs being inserted into sockets with undocumented pins?</p>
<p><em>This post <a href="http://www.vrworld.com/2014/09/08/haswell-e-controversy-intel-asus-socket-2084/">originally appeared on </a>VR World, Bright Side of News*&#8217; Asia Pacific sister site. </em></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/09/haswell-e-controversey-intel-asus-socket-2084/">Haswell-E Controversy: What Should Intel Do About Asus And Socket 2084?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/09/haswell-e-controversey-intel-asus-socket-2084/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Haswell-E Controversy: What Should Intel Do About Asus And Socket 2084?</title>
		<link>http://www.vrworld.com/2014/09/08/haswell-e-controversy-intel-asus-socket-2084/</link>
		<comments>http://www.vrworld.com/2014/09/08/haswell-e-controversy-intel-asus-socket-2084/#comments</comments>
		<pubDate>Tue, 09 Sep 2014 06:12:53 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[Exclusive]]></category>
		<category><![CDATA[IDF 2014]]></category>
		<category><![CDATA[Asus]]></category>
		<category><![CDATA[motherboard]]></category>
		<category><![CDATA[overclocking socket]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=38472</guid>
		<description><![CDATA[<p>IDF is busy, even before it starts – so it was this time in San Francisco with Gigabyte (TPE: 2376)  Overclocking competition just a day ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/haswell-e-controversy-intel-asus-socket-2084/">Haswell-E Controversy: What Should Intel Do About Asus And Socket 2084?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="778" height="484" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/ac-oc-socket-2.png" class="attachment-post-thumbnail wp-post-image" alt="ac-oc-socket-2" /></p><p>IDF is busy, even before it starts – so it was this time in San Francisco, with the Gigabyte (<a href="http://www.google.com/finance?cid=681039">TPE: 2376</a>) overclocking competition just a day before the keynote.</p>
<p>The OC results from Cookie, Charles Wirth and others were good, with 5.8 to 6 GHz achieved under LN2 cooling on the Core i7-5960X on the Gigabyte boards seen here. The RAM on trial also performed well, hovering above 3 GHz for the G.Skill and Kingston parts, with the Crucial reference DIMMs just below that.</p>
<p>However, something far more interesting was found on the overclocking floor (together with our friend Koen from <a href="http://www.hardware.info">Hardware.info</a>). Remember Asus&#8217; (<a href="https://www.google.com/finance?cid=674388">TPE: 2357</a>) claims about the additional pins on its LGA 2011-3 socket, aka “socket 2084,” which supposedly bring extra performance and OC reliability by using undocumented pin holes on Intel’s (<a href="https://www.google.com/finance?q=intel&amp;ei=2aAPVLiFBsKwiQKgq4GYCQ">NASDAQ: INTC</a>) new CPUs? You can see Asus&#8217; claims right here:</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/ac-oc-socket-1.png" rel="lightbox-0"><img class="alignnone size-medium wp-image-38474" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/ac-oc-socket-1-600x475.png" alt="ac-oc-socket-1" width="600" height="475" /></a></p>
<p>According to industry sources, Asus was considering patenting the socket and preventing Foxconn (<a href="https://www.google.com/finance?q=foxconn&amp;ei=hqEPVPjFL8KwiQKgq4GYCQ">TPE: 2354</a>), its manufacturer, from selling it to other vendors. On the other hand, another major industry source claimed that these pins are not what Asus claims, but just CPU debug and test pins brought back to the socket in this generation. Therefore, not only would they be useless for overclocking, but they would also be a potential crash risk if connected in a production system.</p>
<p>Then, remember another point – the Core i7-5960X is just a reduced version of the same die used for the Xeon E5 v3 8-core Haswell-EP, with some features like the dual QPI channels, memory ECC and a few others disabled, including the approximately 150 signal pins related to them. Could the Enterprise Group know even more than the Client Group about the undocumented pins on these new processors?</p>
<p><em>Note: The Foxconn model marking for the normal socket is 47191; for the OC socket, it is 46391.</em></p>
<p>Next step – see Gigabyte’s brand-new, LN2-cooling-optimised mainboard for these CPUs, with this exact same &#8220;special&#8221; socket:</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/new-socket-1-2.png' rel="lightbox[gallery-2]"><img width="688" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/new-socket-1-2-688x420.png" class="attachment-vw_medium" alt="new-socket-1-2" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/new-socket-2-2.png' rel="lightbox[gallery-2]"><img width="528" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/new-socket-2-2-528x420.png" class="attachment-vw_medium" alt="new-socket-2-2" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/new-socket-3-1-1-2.png' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/new-socket-3-1-1-2-750x420.png" class="attachment-vw_medium" alt="new-socket-3-1-1-2" /></a>

<p>And compare with the old socket:</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/old-1-resized.png' rel="lightbox[gallery-3]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/old-1-resized-750x420.png" class="attachment-vw_medium" alt="old-1-resized" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/old-3-resized.png' rel="lightbox[gallery-3]"><img width="602" height="331" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/old-3-resized.png" class="attachment-vw_medium" alt="old-3-resized" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2014/09/old-2-resized-1.png' rel="lightbox[gallery-3]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/old-2-resized-1-750x420.png" class="attachment-vw_medium" alt="old-2-resized-1" /></a>

<p>&nbsp;</p>
<p>So, Gigabyte can get hold of this same socket; the question is what those additional pins connect to. If the first possibility stated above is correct, i.e. that these are real additional power, ground, etc. pins that allow better and more reliable OC, then Asus has no way to patent them, as the matter directly links to Intel IP, including the socket and its validation.</p>
<p>If, however, it is the second possibility, of test, debug or no-connect pins being exposed, then it is serious marketing rubbish that could be used to deceive the high-end buyers whom both Intel and its key OEMs, which Asus and Gigabyte are, surely cherish and treasure. And mind you, this is already in a product on the market, the Asus top-end Rampage V Extreme.</p>
<p>Since Asus, Gigabyte and even Foxconn are unlikely to respond to this, it seems that Intel will be the one to resolve the mystery, especially as more of its top-end CPUs gain performance-enhancement and managed-unlocking capabilities over time.</p>
<p>The big question is: what are the warranty implications if things fail because CPUs were inserted into sockets with undocumented pins?</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/haswell-e-controversy-intel-asus-socket-2084/">Haswell-E Controversy: What Should Intel Do About Asus And Socket 2084?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/08/haswell-e-controversy-intel-asus-socket-2084/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Haswell-EP Workstation Preview: Xeon E5 v3 Rocks, But Still More To Go</title>
		<link>http://www.vrworld.com/2014/09/08/haswell-ep-workstation-preview-xeon-e5-v3-rocks-still-go/</link>
		<comments>http://www.vrworld.com/2014/09/08/haswell-ep-workstation-preview-xeon-e5-v3-rocks-still-go/#comments</comments>
		<pubDate>Tue, 09 Sep 2014 00:26:30 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[IDF 2014]]></category>
		<category><![CDATA[Reviews]]></category>
		<category><![CDATA[CPU]]></category>
		<category><![CDATA[E5-2687W v3]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Intel Xeon]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=38464</guid>
		<description><![CDATA[<p>Today, as Intel (NASDAQ: INTC) launches the third generation of its Xeon E5 dual-CPU platform, many eyes are on the improvements it brings to the ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/haswell-ep-workstation-preview-xeon-e5-v3-rocks-still-go/">Haswell-EP Workstation Preview: Xeon E5 v3 Rocks, But Still More To Go</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1201" height="793" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/IntelLogo.jpg" class="attachment-post-thumbnail wp-post-image" alt="IntelLogo" /></p><p>Today, as Intel (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>) launches the third generation of its Xeon E5 dual-CPU platform, many eyes are on the improvements it brings to the servers in the datacenter. However, the benefits are just as high – if not higher – on the high-end workstation front.</p>
<p>First of all, the Haswell core means sped-up AVX floating point, via fused multiply-add (FMA) ops that double the theoretical FP rate in benchmarks like Linpack. Just as importantly, Haswell’s AVX2 moves integer processing onto the wide parallel AVX engines, essentially offloading everything aside from address calculations to the RISC-like, three-operand AVX instruction format and its wide register sets. For workstation apps, once recompiled to take advantage of it, the benefits could be enormous – and it is another gradual move away from the antiquated x86 code base.</p>
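<p>To make the FMA gain concrete, here is a minimal C sketch – our own illustration, not Intel or BOXX code – using the AVX2/FMA intrinsic _mm256_fmadd_ps, which computes a*b + c across eight floats in one instruction (compile with, e.g., gcc -mfma):</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;immintrin.h&gt;

int main(void) {
    /* Eight packed single-precision lanes per 256-bit register. */
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);

    /* One fused instruction: r[i] = a[i] * b[i] + c[i].
       Before FMA this took separate multiply and add ops -
       hence the doubled theoretical FLOP rate. */
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("lane 0: %.1f\n", out[0]); /* prints 7.0 */
    return 0;
}
</code></pre>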
<p>Then, the wide choice of core counts per SKU – from 8 all the way to 18 – lets you pick the right balance of per-core speed (i.e. per-thread performance) and core count, depending on the parallelism of your application. Some apps scale poorly across many cores and thus prefer high per-core speed, while others, like ray tracing, make the most of many cores.</p>
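<p>As a back-of-envelope illustration of that trade-off – our own sketch with illustrative clocks and parallel fractions, not measured SKU data – Amdahl’s law, speedup = 1 / ((1 - p) + p/n), shows how the better choice flips with the parallel fraction p:</p>
<pre><code>#include &lt;stdio.h&gt;

/* Amdahl's law: speedup over one core when a fraction p of the
   work parallelises perfectly across n cores. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void) {
    /* Hypothetical SKUs: few fast cores vs many slower ones
       (3.1 GHz x 8 vs an assumed 2.3 GHz x 18). */
    double p;
    for (p = 0.70; p &lt; 1.0; p += 0.29) {
        printf("p=%.2f:  8-core %5.2f  vs  18-core %5.2f\n",
               p, 3.1 * amdahl(p, 8), 2.3 * amdahl(p, 18));
    }
    return 0;
}
</code></pre>
<p>With a 70% parallel workload the eight faster cores win; at 99% parallel the 18-core part pulls far ahead – hence the value of the wide SKU choice.</p>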
<p>The initial workstation SKU in the Xeon E5 v3 range, the E5-2687W v3, is a 3.1 GHz 10-core part that actually uses the 12-core die with two cores (and their associated caches) turned off. Its predecessor, the 2687W v2 on the Ivy Bridge platform, kept the full L3 cache even with some cores disabled – a benefit that, I guess, we will only see again in the Broadwell-EP (E5 v4) SKUs next year.</p>
<p>Then we come to DDR4 – yes, the initial DIMMs aren’t exactly speedy, especially latency-wise, but the lower voltage and other reliability features of DDR4, together with the quick improvements in speed and latency expected over the next few quarters, should give users never-before-seen capacity on a dual-socket workstation – beyond 1.5 TB of RAM, since 24 DIMM slots at 64 GB each works out to 1.536 TB – without sacrificing bandwidth under high load the way DDR3 does.</p>
<p>Improved PCIe bandwidth, integrated voltage regulation, and QPI sped up to 9.6 GT/s round out the key extra benefits.</p>
<p><strong>Putting it through its paces</strong></p>
<p>Here we look at the initial reference workstation based on this SKU from Intel, <a href="http://www.boxxtech.com/products/workstations" target="_blank">packaged by BOXX</a>. The machine itself is compact, using liquid cooling on a SuperMicro X10DAi workstation mainboard with three PCIe x16 v3 slots. This doesn’t max out the platform’s theoretical quad-GPU full-bandwidth capability, but should be enough for most users. In return, the board has space for 16 DDR4 DIMMs, i.e. a full terabyte of RAM once 64 GB modules become available early next year. The installed RAM was 128 GB, as eight Samsung 16 GB ECC DDR4-2133 RDIMMs.</p>
<p>The system came with an Nvidia Quadro K2000, which I swapped for an AMD FirePro W9100, arguably the most powerful professional OpenGL card available today. With 16 GB of VRAM and six DisplayPort outputs, the card can drive even 8K displays like the one from BOE Technology that we mentioned last week. Intel’s 240 GB + 400 GB (SATA + PCIe) SSD combo completed the picture.</p>
<p>The first benchmark run on this system was SPEC’s brand-new, all-encompassing SPECwpc workstation productivity suite. It takes a couple of hours to run, covers everything from processor to graphics (a.k.a. ViewPerf) to overall system performance, and seems to do the job with much less trouble than, for instance, BAPCo SysMark did on PCs years ago.</p>
<p>Here are the first SPECwpc results, on the dual 3.1 GHz E5-2687W v3 system:</p>
<p><img class="alignnone" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/SPECwpc1-600x330.png" alt="" width="600" height="330" /></p>
<p><img class="alignnone" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/SPECwpcHaswellEP2-522x600.png" alt="" width="522" height="600" /></p>
<p>Next, we ran CineBench 15 – note that the system is about twice as fast as an overclocked 4+ GHz Core i7-5960X, the desktop Haswell-E sibling of these Xeons.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/cinebenchHaswellEP1.png" alt="" width="377" height="430" /></p>
<p>The CPU-Z screenshot below shows the processor’s details.</p>
<p><img class="alignnone" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/cpuzXeonE5v3-600x295.png" alt="" width="600" height="295" /></p>
<p>Then we come to the newest version of SiSoft Sandra; here is its report on the key performance data.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/HaswellEPsandra2.png" alt="" width="1190" height="1215" /></p>
<p>In our next round, we will focus on the performance changes obtained when changing – and tuning – the main memory, as well as on the opportunity for even higher CPU speeds. In my opinion, the workstation market can easily justify higher-TDP – and maybe even unlocked – Xeons, especially in 8-core and 18-core per-socket configurations.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/haswell-ep-workstation-preview-xeon-e5-v3-rocks-still-go/">Haswell-EP Workstation Preview: Xeon E5 v3 Rocks, But Still More To Go</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/08/haswell-ep-workstation-preview-xeon-e5-v3-rocks-still-go/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Haswell-EP Workstation Preview: Xeon E5 v3 Rocks, But Still More To Go</title>
		<link>http://www.vrworld.com/2014/09/08/haswell-ep-workstation-xeon-e5-v3-rocks-still-go/</link>
		<comments>http://www.vrworld.com/2014/09/08/haswell-ep-workstation-xeon-e5-v3-rocks-still-go/#comments</comments>
		<pubDate>Mon, 08 Sep 2014 21:33:54 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[IDF 2014]]></category>
		<category><![CDATA[Reviews]]></category>
		<category><![CDATA[CPU]]></category>
		<category><![CDATA[E5-2687W v3]]></category>
		<category><![CDATA[E5-2687Wv3]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Intel Xeon]]></category>
		<category><![CDATA[Intel Xeon E5-2687W v3]]></category>
		<category><![CDATA[review]]></category>
		<category><![CDATA[Workstation]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38558</guid>
		<description><![CDATA[<p>Today, as Intel (NASDAQ: INTC) launches the third generation of its Xeon E5 dual-CPU platform, many eyes are on the improvements it brings to the ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/haswell-ep-workstation-xeon-e5-v3-rocks-still-go/">Haswell-EP Workstation Preview: Xeon E5 v3 Rocks, But Still More To Go</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1201" height="793" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/IntelLogo1.jpg" class="attachment-post-thumbnail wp-post-image" alt="Intel Logo" /></p><p>Today, as Intel (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>) launches the third generation of its Xeon E5 dual-CPU platform, many eyes are on the improvements it brings to the servers in the datacenter. However, the benefits are just as high – if not higher – on the high-end workstation front.</p>
<p>First of all, the Haswell core means sped-up AVX floating point, via fused multiply-add (FMA) ops that double the theoretical FP rate in benchmarks like Linpack. Just as importantly, Haswell’s AVX2 moves integer processing onto the wide parallel AVX engines, essentially offloading everything aside from address calculations to the RISC-like, three-operand AVX instruction format and its wide register sets. For workstation apps, once recompiled to take advantage of it, the benefits could be enormous – and it is another gradual move away from the antiquated x86 code base.</p>
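<p>To make the FMA gain concrete, here is a minimal C sketch – our own illustration, not Intel or BOXX code – using the AVX2/FMA intrinsic _mm256_fmadd_ps, which computes a*b + c across eight floats in one instruction (compile with, e.g., gcc -mfma):</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;immintrin.h&gt;

int main(void) {
    /* Eight packed single-precision lanes per 256-bit register. */
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);

    /* One fused instruction: r[i] = a[i] * b[i] + c[i].
       Before FMA this took separate multiply and add ops -
       hence the doubled theoretical FLOP rate. */
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("lane 0: %.1f\n", out[0]); /* prints 7.0 */
    return 0;
}
</code></pre>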
<p>Then, the wide choice of core counts per SKU – from 8 all the way to 18 – lets you pick the right balance of per-core speed (i.e. per-thread performance) and core count, depending on the parallelism of your application. Some apps scale poorly across many cores and thus prefer high per-core speed, while others, like ray tracing, make the most of many cores.</p>
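<p>As a back-of-envelope illustration of that trade-off – our own sketch with illustrative clocks and parallel fractions, not measured SKU data – Amdahl’s law, speedup = 1 / ((1 - p) + p/n), shows how the better choice flips with the parallel fraction p:</p>
<pre><code>#include &lt;stdio.h&gt;

/* Amdahl's law: speedup over one core when a fraction p of the
   work parallelises perfectly across n cores. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void) {
    /* Hypothetical SKUs: few fast cores vs many slower ones
       (3.1 GHz x 8 vs an assumed 2.3 GHz x 18). */
    double p;
    for (p = 0.70; p &lt; 1.0; p += 0.29) {
        printf("p=%.2f:  8-core %5.2f  vs  18-core %5.2f\n",
               p, 3.1 * amdahl(p, 8), 2.3 * amdahl(p, 18));
    }
    return 0;
}
</code></pre>
<p>With a 70% parallel workload the eight faster cores win; at 99% parallel the 18-core part pulls far ahead – hence the value of the wide SKU choice.</p>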
<p>The initial workstation SKU in the Xeon E5 v3 range, the E5-2687W v3, is a 3.1 GHz 10-core part that actually uses the 12-core die with two cores (and their associated caches) turned off. Its predecessor, the 2687W v2 on the Ivy Bridge platform, kept the full L3 cache even with some cores disabled – a benefit that, I guess, we will only see again in the Broadwell-EP (E5 v4) SKUs next year.</p>
<p>Then we come to DDR4 – yes, the initial DIMMs aren’t exactly speedy, especially latency-wise, but the lower voltage and other reliability features of DDR4, together with the quick improvements in speed and latency expected over the next few quarters, should give users never-before-seen capacity on a dual-socket workstation – beyond 1.5 TB of RAM, since 24 DIMM slots at 64 GB each works out to 1.536 TB – without sacrificing bandwidth under high load the way DDR3 does.</p>
<p>Improved PCIe bandwidth, integrated voltage regulation, and QPI sped up to 9.6 GT/s round out the key extra benefits.</p>
<p><strong>Putting it through its paces</strong></p>
<p>Here we look at the initial reference workstation based on this SKU from Intel, <a href="http://www.boxxtech.com/products/workstations" target="_blank">packaged by BOXX</a>. The machine itself is compact, using liquid cooling on a SuperMicro X10DAi workstation mainboard with three PCIe x16 v3 slots. This doesn’t max out the platform’s theoretical quad-GPU full-bandwidth capability, but should be enough for most users. In return, the board has space for 16 DDR4 DIMMs, i.e. a full terabyte of RAM once 64 GB modules become available early next year. The installed RAM was 128 GB, as eight Samsung 16 GB ECC DDR4-2133 RDIMMs.</p>
<p>The system came with an Nvidia Quadro K2000, which I swapped for an AMD FirePro W9100, arguably the most powerful professional OpenGL card available today. With 16 GB of VRAM and six DisplayPort outputs, the card can drive even 8K displays like the one from BOE Technology that we mentioned last week. Intel’s 240 GB + 400 GB (SATA + PCIe) SSD combo completed the picture.</p>
<p>The first benchmark run on this system was SPEC’s brand-new, all-encompassing SPECwpc workstation productivity suite. It takes a couple of hours to run, covers everything from processor to graphics (a.k.a. ViewPerf) to overall system performance, and seems to do the job with much less trouble than, for instance, BAPCo SysMark did on PCs years ago.</p>
<p>Here are the first SPECwpc results, on the dual 3.1 GHz E5-2687W v3 system:</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/SPECwpc1.png" rel="lightbox-0"><img class="aligncenter size-medium wp-image-38560" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/SPECwpc1-600x330.png" alt="SPECwpc1" width="600" height="330" /></a></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/SPECwpcHaswellEP2.png" rel="lightbox-1"><img class="aligncenter size-medium wp-image-38565" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/SPECwpcHaswellEP2-522x600.png" alt="SPECwpcHaswellEP" width="522" height="600" /></a></p>
<p>Next, we ran CineBench 15 – note that the system is about twice as fast as an overclocked 4+ GHz Core i7-5960X, the desktop Haswell-E sibling of these Xeons.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/cinebenchHaswellEP1.png" rel="lightbox-2"><img class="aligncenter size-full wp-image-38561" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/cinebenchHaswellEP1.png" alt="cinebenchHaswellEP" width="377" height="430" /></a></p>
<p>The CPU-Z screenshot below shows the processor’s details.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/cpuzXeonE5v3.png" rel="lightbox-3"><img class="aligncenter size-medium wp-image-38562" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/cpuzXeonE5v3-600x295.png" alt="cpuzXeonE5v3" width="600" height="295" /></a></p>
<p>Then we come to the newest version of SiSoft Sandra; here is its report on the key performance data.</p>
<p><img class="aligncenter" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/HaswellEPsandra2.png" alt="" width="1190" height="1215" /></p>
<p>In our next round, we will focus on the performance changes obtained when changing – and tuning – the main memory, as well as on the opportunity for even higher CPU speeds. In my opinion, the workstation market can easily justify higher-TDP – and maybe even unlocked – Xeons, especially in 8-core and 18-core per-socket configurations.</p>
<p><em>This post originally appeared on <a href="http://www.vrworld.com/2014/09/08/haswell-ep-workstation-preview-xeon-e5-v3-rocks-still-go/">VR World. </a></em></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/haswell-ep-workstation-xeon-e5-v3-rocks-still-go/">Haswell-EP Workstation Preview: Xeon E5 v3 Rocks, But Still More To Go</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/08/haswell-ep-workstation-xeon-e5-v3-rocks-still-go/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>IDF 2014 Murmurings: A True Windows Phablet for Content Creationists?</title>
		<link>http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists/</link>
		<comments>http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists/#comments</comments>
		<pubDate>Mon, 08 Sep 2014 21:31:03 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[IDF 2014]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[phablet]]></category>
		<category><![CDATA[Tablets]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=38461</guid>
		<description><![CDATA[<p>While flying from Taipei to San Francisco for the usual September round of tech events, one story in this month’s MacWorld caught my attention: one ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists/">IDF 2014 Murmurings: A True Windows Phablet for Content Creationists?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1317" height="961" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/computex-phablet.png" class="attachment-post-thumbnail wp-post-image" alt="computex-phablet" /></p><p>While flying from Taipei to San Francisco for the usual September round of tech events, one story in this month’s <em>MacWorld</em> caught my attention: one of the editors was speculating how nice it would be to have a true full Mac in an iPhone size. Of course, Apple may be a little farther from merging the OS X and iOS than we thought earlier, but, what about looking at the same thing on the Wintel platform?</p>
<p>It could be argued that one big mistake Microsoft (<a href="http://www.google.com/finance?cid=358464">NASDAQ: MSFT</a>) made with Windows Phone was to compete head-on with Android and Apple (<a href="http://www.google.com/finance?cid=22144">NASDAQ: AAPL</a>). Windows, as imperfect as it is, has one big advantage over both Android and iOS: it is a content-creation-oriented environment, compared to the content-playback environments of the other two. After all, Apple has OS X for content creation, while Google, well, searches for content already made, doesn’t it?</p>
<p>Now come back to the increasingly popular phablets, seemingly the smallest true computer format one can accept for everyday use. A 6-inch 2015 phablet, with a Bluetooth keyboard embedded inside its protective cover instead of wasting screen space, can also have, say, a quad-core 14-nm Atom processor, 4 GB of RAM and 64 GB of flash storage, not to mention a Full HD or better screen. Hmm, looks as good as today’s Ultrabooks, doesn’t it? So technically it should be able to run a full version of Windows 8.1 or later, including Office and many other apps, without a hitch – something utterly impossible for any ARM-based phone, phablet or tablet, no matter how powerful.</p>
<p>Now, if I want to modify my Word document, Excel spreadsheet or PowerPoint presentation on the go, and I don’t want to open my PC for that, trying to do the job in those small Android ‘Office’ apps, including Microsoft’s own, is either a pain in the neck or just not doable. But with true Office on true Windows, on an Intel (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>)-inside phablet, ultra-compact content creation and editing becomes possible.</p>
<p>Now, will Intel and Microsoft jump on this advantage while they have it, before the Android world catches up?</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists/">IDF 2014 Murmurings: A True Windows Phablet for Content Creationists?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>IDF 2014 Murmurings: A True Windows Phablet for Content Creationists?</title>
		<link>http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists-2/</link>
		<comments>http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists-2/#comments</comments>
		<pubDate>Mon, 08 Sep 2014 21:17:58 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[IFA 2014]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[Rumors]]></category>
		<category><![CDATA[Atom]]></category>
		<category><![CDATA[IDF 2014]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[phablet]]></category>
		<category><![CDATA[SoC]]></category>
		<category><![CDATA[Windows]]></category>
		<category><![CDATA[Windows Phablet]]></category>
		<category><![CDATA[Wintel]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38567</guid>
		<description><![CDATA[<p>While flying from Taipei to San Francisco for the usual September round of tech events, one story in this month’s MacWorld caught my attention: one ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists-2/">IDF 2014 Murmurings: A True Windows Phablet for Content Creationists?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="580" height="327" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/Samsung_Galaxy_Note_3-580-100.jpg" class="attachment-post-thumbnail wp-post-image" alt="Imagine this, running Windows and powered by Intel." /></p><p>While flying from Taipei to San Francisco for the usual September round of tech events, one story in this month’s <em>MacWorld</em> caught my attention: one of the editors was speculating how nice it would be to have a true full Mac in an iPhone size. Of course, Apple may be a little farther from merging the OS X and iOS than we thought earlier, but, what about looking at the same thing on the Wintel platform? Perhaps a Windows Phablet?</p>
<p>It could be argued that one big mistake Microsoft (<a href="http://www.google.com/finance?cid=358464">NASDAQ: MSFT</a>) made with Windows Phone was to compete head-on with Android and Apple (<a href="http://www.google.com/finance?cid=22144">NASDAQ: AAPL</a>). Windows, as imperfect as it is, has one big advantage over both Android and iOS: it is a content-creation-oriented environment, compared to the content-playback environments of the other two. After all, Apple has OS X for content creation, while Google, well, searches for content already made, doesn’t it?</p>
<p>Now come back to the increasingly popular phablets, seemingly the smallest true computer format one can accept for everyday use. A 6-inch 2015 phablet, with a Bluetooth keyboard embedded inside its protective cover instead of wasting screen space, can also have, say, a quad-core 14-nm Atom processor, 4 GB of RAM and 64 GB of flash storage, not to mention a Full HD or better screen. Hmm, looks as good as today’s Ultrabooks, doesn’t it? So technically it should be able to run a full version of Windows 8.1 or later, including Office and many other apps, without a hitch – something utterly impossible for any ARM-based phone, phablet or tablet, no matter how powerful.</p>
<p>Now, if I want to modify my Word document, Excel spreadsheet or PowerPoint presentation on the go, and I don’t want to open my PC for that, trying to do the job in those small Android ‘Office’ apps, including Microsoft’s own, is either a pain in the neck or just not doable. But with true Office on true Windows, on an Intel (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>)-inside phablet, ultra-compact content creation and editing becomes possible.</p>
<p>Now, will Intel and Microsoft jump on this advantage while they have it, before the Android world catches up?</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists-2/">IDF 2014 Murmurings: A True Windows Phablet for Content Creationists?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/08/idf-2014-murmurings-true-windows-phablet-content-creationists-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nvidia Quadro vs. AMD Firepro: Professional Graphics Showdown</title>
		<link>http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/</link>
		<comments>http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/#comments</comments>
		<pubDate>Thu, 04 Sep 2014 06:02:56 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Audio/Video]]></category>
		<category><![CDATA[Enterprise]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Reviews]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[FirePro W8100]]></category>
		<category><![CDATA[FirePro W8100 Review]]></category>
		<category><![CDATA[FirePro W8100 vs Quadro K5200]]></category>
		<category><![CDATA[K2200 Review]]></category>
		<category><![CDATA[K5200 Review]]></category>
		<category><![CDATA[K5200 vs W8100]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Open GL]]></category>
		<category><![CDATA[Quadro K2200]]></category>
		<category><![CDATA[Quadro K5200]]></category>
		<category><![CDATA[Quadro Review]]></category>
		<category><![CDATA[SPEC ViewPerf12]]></category>
		<category><![CDATA[W8100 vs K5200]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38478</guid>
		<description><![CDATA[<p>Since the first graphics processors that hardwired the basic display operations of displays like the NEC 7220 and Hitachi 63484 in the early 1980s, they ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/">Nvidia Quadro vs. AMD Firepro: Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="675" height="392" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/nvidia-quadro-post.jpg" class="attachment-post-thumbnail wp-post-image" alt="nvidia-quadro-post" /></p><p>Since the first graphics processors that hardwired the basic display operations of displays like the NEC 7220 and Hitachi 63484 in the early 1980s, they were followed by the first PC cards – the IBM PGA – some 30 years ago, the need for dedicated graphics processing hardware has set in firmly at the high end of the PC landscape.</p>
<p>At that time it was 2D only, yet it still cost a couple of grand per adapter card: a price class that has seemingly persisted to this day for professional graphics cards like the ones from Nvidia and AMD included in this roundup review.</p>
<p>After the loss of the original <a href="http://en.wikipedia.org/wiki/Silicon_Graphics" target="_blank">Silicon Graphics</a>, as well as of the other two major independent OpenGL-focused professional 3D GPU brands (<a href="http://en.wikipedia.org/wiki/3Dlabs" target="_blank">3DLabs</a> and E&amp;S) – a big loss in terms of features and capabilities – what we have today is a duopoly of Nvidia and AMD/ATI in this space. Sure, <a title="An Inconvenient Truth: Intel Larrabee story revealed" href="http://www.brightsideofnews.com/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed/" target="_blank">Intel’s Larrabee</a> was originally targeted at this same market, but, as we all know, it <a title="First Xeon Phi Supercomputer to Launch on January 7th, 2013, Tesla K20 Inside too" href="http://www.brightsideofnews.com/2012/09/13/first-xeon-phi-supercomputer-to-launch-on-january-7th2c-20132c-tesla-k20-inside-too/" target="_blank">failed and moved to the HPC area</a> for pure compute, <a title="Intel’s New Knight’s Landing Xeon Phi Combines Omni Scale Fabric with HMC" href="http://www.brightsideofnews.com/2014/06/23/intel-new-knights-landing-combines-omni-scale-fabric-hmc/" target="_blank">where it thrives now</a>.</p>
<p>While DirectX, for better or worse, dominates the PC 3D graphics landscape, the inherently more reliable and precise OpenGL is the API of choice for most professional applications. And that’s where the difference between otherwise identical GPU dies on the consumer and professional card varieties comes in. The full OpenGL functionality enabled on the professional GPUs leads not only to, say, a tripled OpenGL benchmark score, but also to the correct OpenGL application behaviour necessary to pass all the expensive professional-app certification procedures and driver optimizations – one of the reasons, besides margin aims, why those cards cost four to five times more than their consumer brethren with similar chips.</p>
<p><strong>Nvidia Quadro vs AMD FirePro</strong></p>
<p>OpenGL professional cards also have between two and four times more local memory than consumer ones. For instance, the AMD Radeon R9 290X has 4 GB of RAM, while its professional equivalent, the FirePro W9100, has a whopping 16 GB. The capability to drive two 8K displays, plus the room for larger in-memory compute jobs to use all those teraflops without slowing down to cross PCIe, demands greater local memory. And yes, many professional 3D apps can readily make use of 4K and 8K resolutions today: whether for 3D city modelling, detailed engine-assembly review, or complex molecular-interaction simulations.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL.png" alt="" width="1920" height="1200" /></p>
<p>Those extra pixels do need extra horsepower to drive them, plus the extra memory. Game developers can also benefit from humongous local card memory, as it lets them optimize game memory usage well in advance for the consumer cards that will arrive a few years later.</p>
<p><img class="aligncenter" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGang.jpg" alt="" width="2048" height="1152" /></p>
<p dir="ltr">In this roundup, we have the Quadro K2200, which has 4 GB of VRAM, while the K5200 and W8100 both have 8 GB. Note that the W8100 has twice the memory bus width of the K5200, at 512 bits vs 256 bits.</p>
<p dir="ltr">If you rely on GPGPU computing, these cards offer an added advantage: their double-precision FP performance is usually fully enabled – not crippled as in their consumer twins. For instance, the otherwise identical dies of the R9 290X and FirePro W8100 show an 8&#215; difference in DP FP performance, and Nvidia&#8217;s GPU dies follow a similar path. Single-precision FP is usually left at full speed in both cases, though, as it affects gaming-physics competitiveness on the consumer side.</p>
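<p dir="ltr">If that DP capability matters to your workflow, it pays to check what a given card actually exposes. Here is a minimal C sketch of ours – assuming an OpenCL SDK and driver are installed; it reports only whether double precision is exposed, not the consumer-vs-professional throughput ratio:</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;CL/cl.h&gt;

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    char name[256];
    cl_device_fp_config dp = 0;

    /* Grab the first GPU of the first OpenCL platform. */
    if (clGetPlatformIDs(1, &amp;plat, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &amp;dev, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_DOUBLE_FP_CONFIG,
                    sizeof(dp), &amp;dp, NULL);

    printf("%s: double precision %s\n", name,
           dp ? "exposed" : "absent or disabled");
    return 0;
}
</code></pre>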
<p dir="ltr">As said, here we take a quick look at the two new OpenGL GPUs from Nvidia – the Quadro K2200 and K5200 – as well as the K5200&#8217;s head-on competitor from AMD, the FirePro W8100. To emphasise GPU performance variations over the base CPU speed influence, all cards were run on a standard 3.5 GHz quad-core Haswell Core i7-4770K platform with 8 GB of RAM and Windows 7 Ultimate, running off an Intel enterprise SSD. The newest drivers as of August 22nd were used on all cards. The benchmarks were the most recent version of the sophisticated SPEC ViewPerf 12 suite, which measures performance across a variety of pro apps and visualization options, and the CineBench 15 OpenGL test, which focuses more on raw card performance. Here are the results.</p>
<p dir="ltr">SPEC ViewPerf 12 results reflect not just the GPU graphics performance, but also the amount of memory available to store the dataset locally. Among current OpenGL benchmarks, this one is the closest to the actual application usage mix seen on professional 3D workstations.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/ViewPerfSept2014.png" alt="" width="924" height="284" /></p>
<p dir="ltr">As you can see, the scaling among the three Nvidia cards follows an almost perfect 1:2:4 progression, which all but renders the first card, the K2000, obsolete, since its overall specs are similar to the K2200&#8217;s. Also note that, despite the W8100&#8217;s higher raw hardware specs (GPU and memory bandwidth), the K5200 beats it by an unusually wide margin in some apps of this test suite, thanks to Nvidia&#8217;s updated Kepler architecture and improved memory capacity and bandwidth. The K5200 makes large gains over the K5000 in memory performance and overall FLOPS (3 TFLOPS vs 2 TFLOPS), which shows directly in the professional benchmarks; it also doubles memory capacity from 4 GB to 8 GB, helping Nvidia become more competitive with AMD.</p>
<p dir="ltr">Nvidia&#8217;s Maxwell-based K2200 also performs quite well against the rest of the roundup, even beating AMD&#8217;s W8100 in one test (sw-03) and handily beating the old Kepler-based K2000. Because the K2000 and K2200 are the lowest-end cards Nvidia offers, the differences between the architectures are more noticeable. If anything, AMD should be very worried about a potential higher-end Maxwell-based Quadro from Nvidia, given how much the K2200 improves on the Kepler-based K2000.</p>
<p dir="ltr">Otherwise, the new K5200 from Nvidia takes the cake in most of the benchmarks, with three exceptions – which indicates that AMD is still very competitive with Nvidia.</p>
<p dir="ltr">The CineBench 15 OpenGL routine, commonly run on consumer GPUs as well, requires far fewer resources. Even here, however, the full OpenGL performance and feature set of these cards beats their consumer brethren manifold:</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/CineBenchOpenGL.png" alt="" width="378" height="274" /></p>
<p dir="ltr">As you can see here, the K2200, even though spec-wise closer to the K2000 than to the K5200, is much nearer to the Quadro K5200 in performance. I feel Nvidia should retire the K2000, or at least massively reduce its price versus the K2200, since it otherwise makes little sense to consider it; the K2200 delivers a much better level of performance for essentially the same money charged for the K2000. The K2200 is proving to be a very good budget card for professional applications, and it shows that Maxwell is a massive improvement over Kepler.</p>
<p dir="ltr">Also, the AMD W8100 has a slight performance advantage over the K5200 here: the raw GPU compute and memory capability of the Hawaii core shines.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGPUz.png" alt="" width="1204" height="497" /></p>
<p dir="ltr">Here you can see the GPU-Z screenshots of all the Nvidia entries – GPU-Z crashes on the AMD card, so unfortunately we couldn&#8217;t get far there, as the screenshot below shows.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/AMDGPUzNotResp.png" alt="" width="400" height="490" /></p>
<p dir="ltr">If you look at other, more general-purpose 3D CAD apps, like the AutoCAD 2015 shown here, the picture may be a little different – literally. In AutoCAD&#8217;s case, 3D polygonal performance for wireframe and shaded models matters far more than complex textures and effects, which are still relatively rarely used in this software for interactive visualization. This means that even a low-to-mid-range card like the Quadro K2200 has sufficient performance for most CAD jobs. I tested both the K2200 and K5200 on my AutoCAD Kuala Lumpur model, with plenty of buildings but a purely polygonal definition, and there was zero difference in responsiveness, with both handling any 3D visualization operation in real time.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL12.png" alt="" width="1920" height="1200" /></p>
<p dir="ltr">Worse, since DirectX is these days – like it or not – supported by many of these apps as well, the equation changes, as consumer GPUs will run it just as well as the professional ones, at a small fraction of the price. AutoCAD was, in fact, one of the first to accommodate that, and, coupled with its relatively low requirements, this substantially weakens the justification for premium-priced professional cards.</p>
<p dir="ltr">On the other hand, many other apps and usage models do value the added benefits of OpenGL – especially those that run under Linux for performance, reliability and multi-core scaling reasons; OpenGL is the sole choice there. The trick, though, is to ensure that the OpenGL Linux driver is at least on the same level of quality as its Windows equivalent – something Nvidia has done well, but where AMD still has a way to go.</p>
<p>So, in the end, how do you justify purchasing one of these capable but pricey cards? It all comes down to your application. If you design a tall building, an oil rig, or a new-generation plane engine, both the value of your application and, especially, the value of your work and its end result will usually demand total precision and a performance guarantee from the hardware running your job on your chosen app. The certifications and tests done on all of these cards in a variety of systems prior to their launch go as far as possible in meeting those <a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">goals.</a></p>
<p><a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/"><em>This post originally appeared on Bright Side of News*&#8217;s sister site, VR World.</em></a></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/">Nvidia Quadro vs. AMD Firepro: Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>First Volume Production 8K-Class Quad UHD LCD Panel Comes Out of Beijing</title>
		<link>http://www.vrworld.com/2014/09/01/8k-uhd-tv/</link>
		<comments>http://www.vrworld.com/2014/09/01/8k-uhd-tv/#comments</comments>
		<pubDate>Tue, 02 Sep 2014 05:00:52 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[8K]]></category>
		<category><![CDATA[8K screen]]></category>
		<category><![CDATA[Beijing Oriental]]></category>
		<category><![CDATA[BOE]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=37555</guid>
		<description><![CDATA[<p>Not satisfied with 4K, and wanting to go beyond to show your SLR photos in their full glory? Well, the answer to your wishes doesn’t ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/01/8k-uhd-tv/">First Volume Production 8K-Class Quad UHD LCD Panel Comes Out of Beijing</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="640" height="403" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/boe-8k-tv.jpg" class="attachment-post-thumbnail wp-post-image" alt="boe-8k-tv" /></p><p>Not satisfied with 4K, and wanting to go beyond to show your SLR photos in their full glory? Well, the answer to your wishes doesn’t come from Japan or Korea this time, but from Middle Kingdom.</p>
<p>Beijing Oriental (BOE), the other leading Chinese LCD panel fab besides TCL’s CSOT in Shenzhen, has just unveiled its 98-inch 7680&#215;4320 panel series, the HV098XXX. The 33-megapixel monster, measuring 2.23 x 1.30 metres and requiring 14 cm of depth, would easily fill many a company conference room, not to mention premium home theatres. It would make showing off huge photorealistic city models – down to every window or potted plant – a breeze, not to mention space photographs with millions of stars clearly distinguishable, or the kind of 8K soccer/football match recordings NHK in Japan has been making for the past half decade.</p>
<p>The display is no slouch in other specs either: 1.07 billion colours via 8-bit plus dithering (sorry, no true 10-bit yet), 500 cd/m2 brightness and a 1200:1 contrast ratio. You will need a card like, say, the AMD FirePro W9100, with its six DisplayPort 1.2 interfaces and 16 GB of VRAM, to actually drive it (and yes, AMD supports 8K displays on it). This being China, of course, the final price at this early stage depends greatly on the “customer’s strategic importance for market enablement”, so feel free to contact them and see how it goes.</p>
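<p>Why six DisplayPort 1.2 outputs for one panel? A quick back-of-envelope calculation – our own sketch, ignoring blanking and protocol overhead – shows that a single link cannot carry 8K at 60 Hz, so the panel must be driven as multiple tiles:</p>
<pre><code>#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void) {
    /* 8K UHD at 24-bit colour and 60 Hz, pixel data only. */
    double gbps = 7680.0 * 4320.0 * 24 * 60 / 1e9; /* ~47.8 Gbit/s */
    double dp12 = 17.28; /* effective DP 1.2 payload per link, Gbit/s */

    printf("8K@60Hz: ~%.1f Gbit/s of pixel data\n", gbps);
    printf("Minimum DP 1.2 links: %.0f\n", ceil(gbps / dp12)); /* 3 */
    return 0;
}
</code></pre>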
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/01/8k-uhd-tv/">First Volume Production 8K-Class Quad UHD LCD Panel Comes Out of Beijing</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/01/8k-uhd-tv/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel Transaction Memory Extensions On Hold Until Broadwell?</title>
		<link>http://www.vrworld.com/2014/09/01/intel-transaction-memory-extensions-hold-broadwell/</link>
		<comments>http://www.vrworld.com/2014/09/01/intel-transaction-memory-extensions-hold-broadwell/#comments</comments>
		<pubDate>Mon, 01 Sep 2014 14:26:31 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Broadwell]]></category>
		<category><![CDATA[Intel]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=37515</guid>
		<description><![CDATA[<p>The Haswell platform, which both Bright Side of News and VR World has covered extensively in the past and will do so in the future, ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/01/intel-transaction-memory-extensions-hold-broadwell/">Intel Transaction Memory Extensions On Hold Until Broadwell?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="640" height="353" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/IntelBroadwell-640x353.jpg" class="attachment-post-thumbnail wp-post-image" alt="IntelBroadwell-640x353" /></p><p>The Haswell platform, which both <em>Bright Side of News and VR World </em>has covered extensively in the past and will do so in the future, is a big leap forward for Intel in many areas. Features including AVX2 that brings fused multiply-add and full integer computing parallelisation, DDR4 memory and massive internal bandwidth boosts, all help justify the move to the new platform.</p>
<p>However, one unique and interesting feature of the Haswell platform, the Transactional Synchronization Extensions (TSX), seemingly had to be disabled. The news last month was that it produced inconsistent results on some platforms, and the assumption was that this would be fixed by a microcode update.</p>
<p>Well, the news is that it won’t be – the problem is a bit more complex than expected, and (aside from the ultra-high-end Haswell-EX Xeon E7) would take too long to fully solve in this run. So, if you are a fan of TSX, you will have to wait for Broadwell – desktop, mobile, server, whatever flavour you need.</p>
<p>This doesn’t impact 99.9% of current apps, mind you, but for those who understand the true benefits of this quite revolutionary way of handling shared memory, it is a bit of extra delay on the way to finally using it widely.</p>
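<p>To see what developers are waiting for, here is a minimal, illustrative C sketch of TSX’s Restricted Transactional Memory (RTM) intrinsics – our own example, not Intel code, compiled with something like gcc -mrtm and usable only on silicon where TSX remains enabled. A transaction is attempted lock-free and, on abort, falls back to a conventional lock:</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;immintrin.h&gt;

static volatile int lock_held; /* crude fallback spinlock flag */
static long counter;

void add_one(void) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Reading the lock flag puts it in the transaction's
           read-set: if a fallback holder appears, we abort. */
        if (lock_held)
            _xabort(0xff);
        counter++;   /* no lock taken on this fast path */
        _xend();     /* commit the whole region atomically */
        return;
    }
    /* Abort path (conflict, capacity, lock held): plain locking. */
    while (__sync_lock_test_and_set(&amp;lock_held, 1))
        ; /* spin */
    counter++;
    __sync_lock_release(&amp;lock_held);
}

int main(void) {
    add_one();
    printf("counter = %ld\n", counter);
    return 0;
}
</code></pre>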
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/01/intel-transaction-memory-extensions-hold-broadwell/">Intel Transaction Memory Extensions On Hold Until Broadwell?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/01/intel-transaction-memory-extensions-hold-broadwell/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel Navigating The New Landscape: Focus On The Golden Goose? Or Fight For Peanuts With The ARM Crowd?</title>
		<link>http://www.vrworld.com/2014/08/31/intel-mobile-server-2014-analysis/</link>
		<comments>http://www.vrworld.com/2014/08/31/intel-mobile-server-2014-analysis/#comments</comments>
		<pubDate>Sun, 31 Aug 2014 15:40:40 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[xeon]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=37365</guid>
		<description><![CDATA[<p>This post originally appeared on Bright Side of News, VR World&#8217;s sister publication. The Portland suburb of Hillsboro, where all Intel’s (NASDAQ: INTC) high end product ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/31/intel-mobile-server-2014-analysis/">Intel Navigating The New Landscape: Focus On The Golden Goose? Or Fight For Peanuts With The ARM Crowd?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1201" height="793" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/IntelLogo.jpg" class="attachment-post-thumbnail wp-post-image" alt="IntelLogo" /></p><p><em>This post originally appeared on <a href="http://www.brightsideofnews.com/2014/08/07/intel-navigating-new-landscape-focus-golden-goose-fight-peanuts-arm-crowd/">Bright Side of News</a>, VR World&#8217;s sister publication. </em></p>
<p>The Portland suburb of Hillsboro, where all of Intel’s (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>) high-end product operations – and its main cash cow – are located, was unusually hot for this time of year, with temperatures almost touching 30 Celsius (86 Fahrenheit) on some days.</p>
<p>So was Intel inside (pun intended), overheated in preparation for the imminent launch of the new workstation, server and, yes, high-end desktop Haswell flavours that will make their public debut before the September IDF opens its doors. These have already been written about extensively across the media, so it’s pointless to repeat what is widely known.</p>
<p>What is interesting is where Intel goes from here. Will the company focus on Xeon and the related enterprise and high-end client products, which do bring in the high margins? Or get embroiled deeper in the fight over the fad of the day: the all-popular but hard-to-monetise ultra-mobile gadgets?</p>
<p>The situation in the two markets could not be more different: in the first, Intel’s Datacenter Group is the absolute industry leader, with estimates of its market dominance hovering around, or above, 90% – of the highest-profit market in the general IT hardware space. After a bit of a lull a few years ago, product launches are again on a yearly cadence, keeping the tick-tock regular. Aside from an increasingly hungry – perhaps vengeful – IBM with its global promotion of POWER8, there are no real global competitors in this space at the moment, performance-wise or presence-wise.</p>
<p><strong>Intel as the underdog</strong></p>
<p>On the other side, in the highest-volume but questionable-margin ultramobile space, with its plethora of smartphone and tablet offerings, Intel was, and still is, an underdog. Maybe it is in a worse position than AMD was versus Intel in the x86 space a decade ago, or than Alpha and MIPS were versus x86 fifteen years ago.</p>
<p>At least, in their respective times, while trying hard to enter the main arena, both of those competitors had protected niche markets where they ruled – while it lasted. In both cases, that rule was based on a combination of performance and feature advantages and customer-base loyalty, at least for the specific apps where Intel couldn’t match them.</p>
<p>Compare that to today’s ultramobile battlefield. Intel has sunk enormous resources, both financial and man-hours, into getting into that almost totally ARM-dominated market, and over the past few years this has seriously affected its balance sheet. But Intel, like the others, has its protected market: the high-end server side funds the low-end ultramobile push. Yet, despite the fairly good performance of its Atom-based mobile offerings – in quite a few cases they measurably outperform their ARM competition – and huge investment in Android app porting, the results are still only trickling in.</p>
<p><strong>Lessons learned</strong></p>
<p>Let’s go back in time to the period when Alpha and MIPS had an even greater comparative performance advantage over x86, in their respective heydays.</p>
<p>At the high end, that extra performance mattered much more than it does in a smartphone, whose primary functions should, after all, be calling and texting. But the companies behind them, while not small by any means, still couldn’t handle Intel’s marketing muscle and the lack of will by partner vendors to fully support them. So, at least outside China, they failed.</p>
<p>Now, Intel faces a “central committee” of all-powerful global vendors like Samsung, Huawei, Nvidia, Apple, LG and, of course, Qualcomm, all working with little ARM plc to push ARM forward.</p>
<p>Now, ARM is hardly the best architecture around. In fact, if you really wanted to find something worse than x86 in performance, architecture and scaling, ARM and SPARC are the only real candidates, aside from the “good ship Itanic.” An architecture originally designed for a low-end desktop PC (see: BBC Micro) and embedded apps, never for high-performance computing, can realistically stay only within the ultramobile space unless major, major changes are made – changes which would impact the now-“golden” compatibility with past apps.</p>
<p>After all, it took ARM nearly 30 years – from the 1985 “Acorn RISC Machine” to the 2014 Cortex-A57 – to get a proper 64-bit processor, while MIPS and Alpha went fully 64-bit in 1991 and 1992, respectively. Even x86 now has over a decade of 64-bit existence behind it.</p>
<p>And yes, those ARM alliance vendors fight each other like nobody’s business every day – they are each other’s worst enemies. However, Intel’s entry would unite them all against a “common enemy” that should not be allowed a chance at dominance, at least not of the kind it enjoys in the PC world.</p>
<p><strong>Does Intel need an exit strategy?</strong></p>
<p>Even with shareholder pressure of the “my daughter’s iPad doesn’t have Intel Inside: fix it or you’re fired!” sort, the question is how deep Intel should go into the smartphone and tablet quagmire.</p>
<p>Something like a Full HD-to-UHD 2-in-1 running on Broadwell ULV does make sense, as it is essentially a PC Ultrabook with a tablet mode, or vice versa. Windows is still more of a productivity platform than Android, so there would be definite differentiation.</p>
<p>However, the mainstream ultramobile battlefield, with cut-throat prices for both SoC chips and the end products, may not be the best thing for Intel to enter. Perhaps a reasonable goal of creating and maintaining a 10% market presence in the smartphone and tablet field, not unlike Apple’s in the desktop and laptop space, would fit best. It would be big enough to create a nice unique-value niche and have most apps running native, yet it would not be seen as a major threat by the ARM side, and other things would basically continue as usual.</p>
<p>However, at the high end, where those same ARM vendors are drooling over Intel’s high-margin, four-digit-priced chippery, Intel has to stay resolute and, by accelerating product launches and keeping the huge performance delta, show those vendors that it will take them forever and a day to catch up. Broadwell-EP should not be delayed from the yearly refresh cycle, and neither should its Skylake follow-on. The profitable enterprise SSD, networking and interconnect programs are there as well, and they should move forward at the same rapid pace.</p>
<p>If there’s a way to justify even higher per-socket chip prices for even more powerful CPUs for even denser datacenters – where power and space are a constraint – then maybe there is a fresh way forward.</p>
<p>How about looking back at those previous non-x86 RISC architectures – which still leave ARM in the dust – as a way forward for Intel, while reusing the existing socket and chip infrastructure? After all, x86 being x86, there seems to be some sort of practical ceiling – somewhere around $5,000 per socket for the Xeon E7 series – that the market is willing to accept.</p>
<p>This is still only about one-third of what IBM can get away with for its top-end POWER8 offerings, not to mention its ultrafast, hugely pricey MCM flavours. What if there were a much faster complementary RISC, yet Xeon E7 socket-compatible, solution providing enough extra performance, footprint and feature benefit that users would willingly pay $10,000 per socket for it?</p>
<p>Especially if much higher instructions-per-cycle per core could be achieved compared to x86, even in everyday apps. The Chinese “Shenwei” Alpha program, leading to a fairly compact 100 PFlop machine in about a year’s time, could – maybe – be the right hint. And yes, it already leaves ARM in the dust.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/31/intel-mobile-server-2014-analysis/">Intel Navigating The New Landscape: Focus On The Golden Goose? Or Fight For Peanuts With The ARM Crowd?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/31/intel-mobile-server-2014-analysis/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nvidia Quadro vs AMD FirePro: OpenGL Professional Graphics Showdown</title>
		<link>http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/</link>
		<comments>http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/#comments</comments>
		<pubDate>Sun, 31 Aug 2014 14:55:09 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Video Card Reviews]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[FirePro W8100]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Open GL]]></category>
		<category><![CDATA[Quadro K2200]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=37348</guid>
		<description><![CDATA[<p>Since the first graphics processors that hardwired the basic display operations of displays like the NEC 7220 and Hitachi 63484 in the early 1980s, they ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">Nvidia Quadro vs AMD FirePro: OpenGL Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="675" height="392" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/nvidia-quadro-post.jpg" class="attachment-post-thumbnail wp-post-image" alt="nvidia-quadro-post" /></p><p>Since the first graphics processors that hardwired the basic display operations of displays like the NEC 7220 and Hitachi 63484 in the early 1980s, they were followed by the first PC cards – the IBM PGA – some 30 years ago, the need for dedicated graphics processing hardware has set in firmly at the high end of the PC landscape.</p>
<p>At that time it was 2D only, yet it still cost a couple of grand per adapter card: a price class that has seemingly persisted to this day, at least for professional graphics cards like the ones from Nvidia and AMD included in this roundup review.</p>
<p>After the loss of the original <a href="http://en.wikipedia.org/wiki/Silicon_Graphics" target="_blank">Silicon Graphics</a>, as well as of the other two major independent, truly OpenGL-focused professional 3D GPU brands (<a href="http://en.wikipedia.org/wiki/3Dlabs" target="_blank">3DLabs</a> and E&amp;S) – a big loss in terms of the features and capabilities of those processors – what we have today is the duopoly of Nvidia and AMD/ATI in this space. Sure, <a title="An Inconvenient Truth: Intel Larrabee story revealed" href="http://www.brightsideofnews.com/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed/" target="_blank">Intel’s Larrabee</a> was originally targeted at this same market, but, as we all know, it <a title="First Xeon Phi Supercomputer to Launch on January 7th, 2013, Tesla K20 Inside too" href="http://www.brightsideofnews.com/2012/09/13/first-xeon-phi-supercomputer-to-launch-on-january-7th2c-20132c-tesla-k20-inside-too/" target="_blank">failed and moved to the HPC arena</a> for pure compute, <a title="Intel’s New Knight’s Landing Xeon Phi Combines Omni Scale Fabric with HMC" href="http://www.brightsideofnews.com/2014/06/23/intel-new-knights-landing-combines-omni-scale-fabric-hmc/" target="_blank">where it thrives now</a>.</p>
<p>While DirectX, for better or worse, dominates the PC 3D graphics landscape, the inherently more reliable and precise OpenGL is the API of choice for most professional applications. And that’s where the difference between otherwise identical GPU dies on the consumer and professional card varieties comes in. Fully enabling OpenGL functionality on the professional GPUs brings not only, say, a threefold OpenGL benchmark advantage, but also the correct OpenGL application behaviour needed to pass all the expensive professional-app certification procedures and driver optimizations – one of the reasons, besides margin aims, why these cards cost four to five times more than their consumer brethren with similar chips.</p>
<p><strong>Nvidia Quadro vs AMD FirePro</strong></p>
<p>OpenGL professional cards also have between two and four times more local memory than the consumer ones. For instance, the AMD Radeon R9 290X has 4 GB of RAM, while its professional equivalent, the FirePro W9100, has a whopping 16 GB. The capability to drive two 8K displays, plus headroom for larger in-memory compute jobs to use all those teraflops without slowing down to cross PCIe, demands greater local memory. And yes, many professional 3D apps can make good use of 4K and 8K resolutions today: whether it is 3D city modelling, detailed engine assembly review, or complex molecular interaction simulations.</p>
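<p>To put that PCIe-crossing penalty in perspective, here is a minimal back-of-the-envelope sketch in Python – assuming round ballpark figures of ~16 GB/s for a PCIe 3.0 x16 link and ~320 GB/s for W8100/W9100-class local GDDR5, neither measured here – comparing how long a working set takes to traverse each path:</p>
<pre>
# Rough comparison: streaming a working set over PCIe vs. reading it from
# local VRAM. Both bandwidth figures are assumed ballpark numbers.
PCIE3_X16_GBS = 16.0      # ~theoretical PCIe 3.0 x16 throughput
GDDR5_LOCAL_GBS = 320.0   # ~512-bit GDDR5 local memory bandwidth

def transfer_ms(dataset_gb, bandwidth_gbs):
    """Milliseconds to move dataset_gb at bandwidth_gbs."""
    return dataset_gb / bandwidth_gbs * 1000.0

for size_gb in (4, 8, 16):
    pcie = transfer_ms(size_gb, PCIE3_X16_GBS)
    vram = transfer_ms(size_gb, GDDR5_LOCAL_GBS)
    print(f"{size_gb:2d} GB: {pcie:6.0f} ms over PCIe vs {vram:4.0f} ms in VRAM"
          f" ({pcie / vram:.0f}x slower)")
</pre>
<p>The ratio – roughly 20:1 under these assumptions – is the whole argument for keeping the dataset resident in local memory.</p>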
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL.png" alt="" width="1920" height="1200" /></p>
<p>Those extra pixels do need extra horsepower to drive them, plus the extra memory. Game developers can also benefit from humongous local card memory, as it lets them optimize game memory usage well in advance for the consumer cards that will arrive a few years later.</p>
<p><img class="aligncenter" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGang.jpg" alt="" width="2048" height="1152" /></p>
<p dir="ltr">In this roundup, we have the Quadro K2200 which has 4 GB VRAM, while K5200 and W8100 both have 8 GB VRAM. Note that W8100 has twice the memory bus with compared to K5200, at 512 bits vs 256 bits.</p>
<p dir="ltr">If relying on GPGPU computing, these cards offer an added advantage: their double-precision FP performance is usually fully enabled – not crippled as in their consumer twins. For instance, the otherwise same dies of the R7 290X and FirePro W8100 have 8 times difference in DP FP performance, and Nvidia&#8217;s GPU dies follow a similar path. The single precision FP is usually left full speed in both cases, though, as it affects gaming physics competitiveness on the consumer side.</p>
<p dir="ltr">As said, here we have a quick look at the two new OpenGL GPUs from Nvidia – Quadro K2200 and K5200 – as well as K5200’s head-on competitor from AMD, the FirePro W8100. To emphasise GPU performance variations over the base CPU speed influence, all cards were run on a standard 3.5 GHz quad-core Haswell Core i7-4770K platform with 8 GB RAM and Windows 7 Ultimate, running off an Intel enterprise SSD drive. The newest drivers as of August 22nd were used on all cards. The benchmark used was the most recent version of the sophisticated SPEC ViewPerf12 benchmark suite, which measures the performance range in a variety of pro apps and visualization options, as well as CineBench 15 OpenGL benchmark option, which focuses more on the card raw performance. Here are the results.</p>
<p dir="ltr">SPEC ViewPerf 12 results reflect not just the GPU graphics performance, but also the amount of memory available to locally store the dataset. Among the current OpenGL benchmark, this one is the closest to the actual application usage mix seen on professional 3D workstations.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/ViewPerfSept2014.png" alt="" width="924" height="284" /></p>
<p dir="ltr">As you can see, the scaling among the three Nvidia cards is almost exponential 1:2:4 scale, which kind of renders the first card obsolete, the K2000, as its overall card specs are similar to the K2200. Also, note that, despite the higher raw hardware specs (GPU and memory bandwidth), K5200 beats W8100 by an unusually wide margin in some apps of this test suite thanks to being Nvidia&#8217;s updated Kepler architecture and improved memory capacity and bandwidth.  This is very likely because the K5200 makes a lot of improvements to memory performance (and capacity) and overall FLOPS performance over the K5000 (3TFLOPS vs 2TFLOPS) and can be directly noticeable in the professional benchmarks. The K5200 doubles memory capacity from 4GB to 8GB over the K5000, which also helps Nvidia become more competitive with AMD.</p>
<p dir="ltr">Nvidia&#8217;s Maxwell-based K2200 also performs quite well against the rest of the roundup, even beating AMD&#8217;s W8100 in one test (sw-03) but handily beating the old Kepler-based K2000. Because the K2000 and K2200 are the lowest end cards that Nvidia offers, the differences between architectures are more noticeable. If anything, we can see that AMD should be very worried about a potential Maxwell-based Quadro card from Nvidia if the K2200 improves performance as much as it does over the Kepler-based K2000.</p>
<p dir="ltr">Otherwise, we can see that the new K5200 from Nvidia mostly takes the cake in most of the benchmarks with the exception of three benchmarks, which indicates that AMD is still very competitive with Nvidia.</p>
<p dir="ltr">CineBench 15 OpenGL routine, commonly ran on the consumer GPUs as well, requires far less resources. However, even here, the full OpenGL performance and feature set of these cards beats their consumer brethren manifold:</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/CineBenchOpenGL.png" alt="" width="378" height="274" /></p>
<p dir="ltr">As you can see here, the K2200, even though spec-wise closer to K2000 than to K5200, is much nearer to Quadro K5200 in performance. I feel Nvidia should retire the K2000, or at least massively reduce its price vs K2200, since it makes little sense to consider it otherwise. But it also means that the K2200 delivers a much better level of performance for essentially the same money that they charge for the K2000. The K2200 is proving to be a very good budget card for professional applications and that Maxwell is a massive improvement over Kepler.</p>
<p dir="ltr">Also, AMD W8100 has a slight performance advantage here over the K5200: the raw GPU computation and memory capability of the Hawaii GPU core come to shine here.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGPUz.png" alt="" width="1204" height="497" /></p>
<p dir="ltr">And here, you can see the GPU-Z screenshots of all the Nvidia entries – GPU-Z crashes on the AMD card, so unfortunately we couldn’t go far there, as you can see on the screenshot.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/AMDGPUzNotResp.png" alt="" width="400" height="490" /></p>
<p dir="ltr">If you look at other, more general-purpose 3-D CAD apps, like the AutoCAD 2015 shown here, the picture may be a little different – literally. In the case of AutoCAD, the 3-D polygonal performance for wireframe and shaded models is far more important than complex textures and effects, which are still relatively rarely used in this software for interactive visualization. This means that even a low to mid range card, like Quadro K2200, has sufficient performance for most CAD jobs. I tested both K2200 and K5200 on my AutoCAD Kuala Lumpur model, with plenty of buildings but pure polygonal definition, and there was zero difference in responsiveness, both handling any 3D visualization operation in real time.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL12.png" alt="" width="1920" height="1200" /></p>
<p dir="ltr">Worse, since DirectX is these days – like it or not – supported by many of these apps as well, this changes the equation, as consumer GPUs will run it just as well as the professional ones, at small fraction of the price. AutoCAD was, in fact, one of the first to accommodate that and, coupled with its relatively low requirements, it affects the justification for premium priced professional cards substantially.</p>
<p dir="ltr">On the other hand, many other apps and usage models do value the added benefits of OpenGL – especially those that run under Linux for performance, reliability and multi-core scaling reasons. OpenGL is the sole choice there. The trick, though, is to ensure that the OpenGL Linux driver is at least on the same level of quality as its Windows equivalent – something that Nvidia did well, but AMD still has a way to go.</p>
<p>So, in the end, how do you justify purchasing one of these capable, but pricey, cards? It all comes down to your application. If you design a tall building, or an oil rig, or a new-generation plane engine, both the value of your application and, especially, the value of your work and its end product will usually demand total precision and a performance guarantee from the underlying hardware running your job on your selected app. The certifications and tests done on all of these cards in a variety of systems prior to their launch go as far as possible in meeting those <a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">goals.</a></p>
<p><a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/"><em>This post originally appeared on Bright Side of News&#8217;* sister site, VR World. </em></a></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">Nvidia Quadro vs AMD FirePro: OpenGL Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Mini-ITX 4 GHz Haswell: Climbing the &#039;Devil’s Canyon&#039; With Size Constraints?</title>
		<link>http://www.vrworld.com/2014/08/09/mini-itx-4-ghz-haswell-climbing-devils-canyon-size-constraints/</link>
		<comments>http://www.vrworld.com/2014/08/09/mini-itx-4-ghz-haswell-climbing-devils-canyon-size-constraints/#comments</comments>
		<pubDate>Sun, 10 Aug 2014 06:22:36 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Devil's Canyon]]></category>
		<category><![CDATA[Gigabyte]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Mini-ITX]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=37371</guid>
		<description><![CDATA[<p>Ever thought of an ultra-small, yet fully overclockable, high-speed desktop PC squeezed inside the compact Mini-ITX platform? A combination of Intel’s Core i7-4790K and ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/09/mini-itx-4-ghz-haswell-climbing-devils-canyon-size-constraints/">Mini-ITX 4 GHz Haswell: Climbing the &#039;Devil’s Canyon&#039; With Size Constraints?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1325" height="1113" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/20140727_172927-edited.jpg" class="attachment-post-thumbnail wp-post-image" alt="20140727_172927-edited" /></p><p>Ever thought of an ultra-small, yet fully overclockable, high speed desktop PC squeezed inside the compact Mini-ITX platform? A combination of Intel’s Core i7-4790X and Gigabyte GA-Z97N board could give you that, just watch the Mini-ITX size and power limits.</p>
<p>First, with its jacked-up CPU and GPU core speeds, the “Devil’s Canyon” 4 GHz four-core Haswell does have enough muscle to drive a home-theatre UHD 3840&#215;2160 TV platform in everything except 3D games. The latter would, of course, have to wait for a substantial GPU architecture refresh within Intel, something not likely until the Skylake platform a year and a half from now.</p>
<p><b>System overview</b></p>
<p>A UHD home theatre PC with OC capability may not be the first thing that comes to mind when matching the desired features, yet Taiwanese vendors did create the solution anyway. One of the best such boards available is the GA-Z97N Gaming from Gigabyte, which was matched for testing with the i7-4790K. The other key components used were a pair of Kingston 4 GB HyperX DDR3-2400 DIMMs and a GELID SlimHero four-heat-pipe heatsink-fan unit. As you will see, the Kingstons even managed to improve on the default latency while cutting 3% off the required voltage, while the GELID SlimHero nicely covers both the VRM and DIMM areas with a bit of extra airflow, avoiding the need for a separate system fan altogether.</p>
<p>Back to the motherboard: knowing that overclocking the 4 GHz Haswell to somewhere around 4.5 GHz before Turbo, plus running the memory at high speed, while still providing for an optional PCIe v3 GPU, was already a tall order even for an mATX mobo, I was pleasantly surprised that Gigabyte managed to squeeze far more into it.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/08/20140727_164349.jpg" rel="lightbox-0"><img class="aligncenter size-full wp-image-37379" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/20140727_164349.jpg" alt="20140727_164349" width="2048" height="1152" /></a></p>
<p>There is a combination of (now Qualcomm) KillerNIC Gigabit Ethernet and PCIe WiFi, 4 SATA plus 1 eSATA ports – all 6 Gbps – the Realtek ALC1150 audio codec, and, yes, still one PS/2 keyboard or mouse connector besides the USB ports, just in case. The video interface portion didn’t disappoint either, with DVI, HDMI and DisplayPort – though no Thunderbolt. And then you notice the 24+8 power connectors, the full complement needed for a decent OC gaming platform feeding a high-end GPU, all in a Mini-ITX format.</p>
<p>For this first look, I set the system up to run in the open, before finding a truly good Mini-ITX casing and PSU that would do it justice – not an easy task given this compact format’s limitations. The setup, including CPU, HSF and memory, took all of ten minutes, and, this being a Mini-ITX board, it was darn easy to handle the connectors and cables.</p>
<p>Even though the default BIOS was dated April – and I will try to keep it as long as I can, since it keeps the TSX transactional memory extensions turned on, unlike the versions from June onwards – it fully supported the i7-4790K out of the box, including the 4 GHz default frequency and 4.4 GHz Turbo. The CPU voltage seemed a little high to me, sitting at 1.35 V, so, overclocking incrementally, I managed to find a sweet spot of a 4.6 GHz base frequency at just 1.32 V. The resulting CPU Tcase temperature also dropped to 45 degrees Celsius, as you can see in the lovely FullHD UEFI BIOS screenshot.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/08/140729121109-edited.jpg" rel="lightbox-1"><img class="aligncenter size-full wp-image-37381" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/140729121109-edited.jpg" alt="140729121109-edited" width="960" height="540" /></a></p>
<p>In the same screenshot, you can see that I managed to slightly tune the Kingston memory, reducing the latency to 11-12-12 at DDR3-2400 while dropping the voltage a bit, to 1.6 V. This is only a minor first-round tune, mainly to lower the power requirements ever so slightly so that a standard 150 W Mini-ITX PSU can handle the whole box, including an SSD and a DVD/BD-ROM drive.</p>
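<p>Two bits of quick arithmetic behind that tuning – a rough sketch only, assuming a first-order CMOS dynamic power model (power scaling with frequency times voltage squared) for the CPU, and standard DDR3 timing math for the memory:</p>
<pre>
# Back-of-the-envelope numbers for the tuning above; the inputs are the stock
# and tuned settings quoted in the text, and the CPU model is first-order only.

# CPU: dynamic power scales roughly with frequency * voltage^2.
def rel_dynamic_power(f_new, v_new, f_old, v_old):
    return (f_new / f_old) * (v_new / v_old) ** 2

cpu = rel_dynamic_power(4.6, 1.32, 4.0, 1.35)
print(f"4.6 GHz @ 1.32 V vs 4.0 GHz @ 1.35 V: ~{(cpu - 1) * 100:.0f}% more power")

# Memory: absolute CAS latency in ns, plus peak dual-channel bandwidth.
def cas_ns(cl, mt_s):
    return cl * 2000.0 / mt_s   # one DDR clock period lasts 2000/MT/s ns

print(f"CL11 @ DDR3-2400: {cas_ns(11, 2400):.2f} ns absolute latency")
print(f"Dual-channel DDR3-2400 peak: {2400 * 8 * 2 / 1000:.1f} GB/s")
</pre>
<p>In other words, the 15% clock bump costs only about 10% extra dynamic CPU power at the lowered voltage – the kind of margin that keeps a 150 W PSU in play.</p>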
<p>Talking about the UEFI BIOS: it looks great and gives out a lot of info, but, frankly, functionality-wise all of it was there in the old text-mode BIOS interfaces anyway, and with less system overhead. Having a FullHD UEFI doesn’t save you from toggling through multiple screens, and sometimes there can be a slight lag before a setting is applied and in effect. Luckily, Gigabyte still provides the text-mode BIOS option here.</p>
<p><b>More to come</b></p>
<p>So, what to say after this first look? From the standpoint of achievable performance and features, the combo of Intel’s Devil’s Canyon and Gigabyte’s GA-Z97N Gaming Mini-ITX board gives up almost nothing compared to much larger-format OC platforms, unless you need much more RAM or dual GPUs, among other things. It would be even more fun if Intel’s graphics were yet another step better for actual 3-D game use, but then, I guess, that’s a beyond-Broadwell question for those willing to wait for “GT4” graphics in the Skylake CPU generation. If, aside from 3-D gaming, you’re happy with a UHD-capable setup that can nicely handle your new high-end TV and still allows an extra GPU if you move it to a slightly bigger casing, then this is the thing for you.</p>
<p>Based on this initial experience – upcoming benchmarks notwithstanding, since they will be in line with other similar Devil’s Canyon platforms – I would recommend the setup with this Gigabyte board for a Mini-ITX HTPC, on one condition: it should not be fitted into the tightest Mini-ITX casings out there.</p>
<p>Give it a bit of room to spare, including room for a somewhat better PSU than what Mini-ITX boxes usually supply. Here I ran it with a standard 500 W ATX PSU for initial stability purposes; the main review, however, will include trying out a few Mini-ITX casing and PSU combos fresh out of Shenzhen factories in a week’s time – after all, this will be one of our Mini-ITX reference platforms until Broadwell shows itself.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/09/mini-itx-4-ghz-haswell-climbing-devils-canyon-size-constraints/">Mini-ITX 4 GHz Haswell: Climbing the &#039;Devil’s Canyon&#039; With Size Constraints?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/09/mini-itx-4-ghz-haswell-climbing-devils-canyon-size-constraints/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel Navigating the New Landscape: Focus on the Golden Goose, or Fight for the Peanuts With the ARM Crowd?</title>
		<link>http://www.vrworld.com/2014/08/07/intel-navigating-new-landscape-focus-golden-goose-fight-peanuts-arm-crowd/</link>
		<comments>http://www.vrworld.com/2014/08/07/intel-navigating-new-landscape-focus-golden-goose-fight-peanuts-arm-crowd/#comments</comments>
		<pubDate>Fri, 08 Aug 2014 02:58:21 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Mobile Computing]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Intel business analysis]]></category>
		<category><![CDATA[Intel mobile analysis]]></category>
		<category><![CDATA[Intel Strategy]]></category>
		<category><![CDATA[Intel Xeon]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=37303</guid>
		<description><![CDATA[<p>The Portland suburb of Hillsboro, where all Intel’s high end product operations – and its main cash cow &#8211; are located, was unusually hot for ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/07/intel-navigating-new-landscape-focus-golden-goose-fight-peanuts-arm-crowd/">Intel Navigating the New Landscape: Focus on the Golden Goose, or Fight for the Peanuts With the ARM Crowd?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1201" height="793" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/IntelLogo1.jpg" class="attachment-post-thumbnail wp-post-image" alt="Intel Logo" /></p><p>The Portland suburb of Hillsboro, where all Intel’s high end product operations – and its main cash cow &#8211; are located, was unusually hot for this time of the year, with temperatures almost touching 30 Celsius (90 Fahrenheit) some days.</p>
<p>So was Intel inside (pun intended), running hot in preparation for the imminent launch of the new workstation, server and, yes, high-end desktop Haswell flavours that will make their public debut before the September IDF opens its doors. These have already been well covered by many in the media, so it’s pointless to repeat what’s widely known.</p>
<p>What is interesting is where Intel goes from here. Will the company focus on the Xeon and related enterprise and high-end client products, which bring in the high margins? Or get embroiled deeper in the fight over the current fad of the day, the “all-popular but hard to make money on” ultra-mobile gadgets?</p>
<p>The situation in the two markets could hardly be more different: in the first, Intel’s Datacenter Group is an absolute industry leader, with estimates of its market dominance hovering around, or above, 90% &#8211; of the highest-profit market in the general IT hardware space. After a bit of a lull a few years ago, product launches are again on a yearly basis, keeping the tick-tock regular. Aside from an increasingly hungry – perhaps vengeful – IBM with its global promotion of POWER8, there are no real global competitors in this space at the moment, performance-wise or presence-wise.</p>
<p><strong>Intel as the underdog</strong></p>
<p>On the other side, in the highest-volume but questionable-margin ultramobile space, with its plethora of smartphone and tablet offerings, Intel was, and still is, an underdog. Maybe it is in a worse position than AMD was versus Intel in the x86 space a decade ago, or than Alpha and MIPS were versus x86 fifteen years ago.</p>
<p>At least, in those days, while trying hard to enter the main arena, both of those competitors had protected niche markets where they ruled &#8211; while it lasted. In both cases, that was based on a combination of performance and feature advantages plus customer loyalty, at least for the specific apps where Intel couldn’t match them at the time.</p>
<p>Compare that to today’s ultramobile battlefield. Intel has sunk enormous resources, both money and man-hours, into entering that almost totally ARM-dominated market, and over the past few years this has seriously affected its balance sheet. But Intel, like the others, has its protected market: the high-end server products fund their low-end ultramobile peers. Yet, despite the fairly good performance of its Atom-based mobile offerings – in quite a few cases they measurably outperform their ARM competition – and huge investment in Android app porting, the results are still only trickling in.</p>
<p><strong>Lessons learned</strong></p>
<p>Let’s go back in time to a period when Alpha and MIPS had even greater comparative performance advantage over the x86 in their respective heyday.</p>
<p>At the high end, that extra performance mattered much more than it does in a smartphone, whose primary functions should, after all, be calling and texting. But the companies behind them, while not small by any means, still couldn’t withstand Intel’s marketing muscle or the unwillingness of other partner vendors to fully support them. So, at least outside China, they failed.</p>
<p>Now, Intel faces a “central committee” of all-powerful global vendors like Samsung, Huawei, Nvidia, Apple, LG and, of course, Qualcomm, all working with the little ARM Plc, to push ARM forward.</p>
<p>Mind you, ARM is hardly the best architecture around. In fact, if you really wanted to find something worse than the x86 in performance, architecture and scaling, ARM and SPARC are the only real candidates, aside from the “good ship Itanic.” An architecture originally designed for a low-end desktop PC (see: BBC Micro) and embedded apps, never for high-performance computing, can realistically only stay within the ultramobile space unless major, major changes are made – changes which would impact the now-“golden” compatibility with past apps.</p>
<p>After all, it took ARM nearly 30 years – from the 1985 “Acorn RISC Machine” to the 2014 Cortex-A57 – to get a proper 64-bit processor, while MIPS and Alpha were fully 64-bit in 1991 and 1992, respectively. Even the x86 now has over a decade of 64-bit existence.</p>
<p>And yes, those ARM alliance vendors fight each other like nobody’s business every day – they are each other’s worst enemies. However, Intel’s entry would unite them all against a “common enemy” that should not be allowed a shot at dominance, at least not of the kind it enjoys in the PC world.</p>
<p><strong>Does Intel need an exit strategy?</strong></p>
<p>Even with shareholder pressure of the “my daughter’s iPad doesn’t have Intel Inside: fix it or you’re fired!” sort, the question is how deep Intel should wade into the smartphone and tablet quagmire.</p>
<p>Something like a FullHD-to-UHD 2-in-1 running on Broadwell ULV does make sense, as it is essentially a PC Ultrabook with a tablet mode, or vice versa. Windows is still more of a productivity platform than Android, so there would be definite differentiation.</p>
<p>However, the mainstream ultramobile battlefield, with cut-throat prices for both SoCs and end products, may not be the best place for Intel to be. Perhaps a reasonable goal of creating and maintaining a 10% market presence in the smartphone and tablet field – not unlike Apple’s in the desktop and laptop space – would fit best. It would be big enough to create a nice unique-value niche and have most of the apps running native, but it would not be seen as a major threat to the ARM side, and other things would basically continue as usual.</p>
<p>However, on the high end, where those same ARM vendors are drooling over Intel’s high-margin, four-digit-priced chippery, Intel has to stay resolute and, by accelerating product launches and keeping the huge performance delta, show those vendors that it will take forever and a day for them to catch up. Broadwell EP should not be delayed from the yearly refresh cycle, and neither should its Skylake follow-on. The profitable enterprise SSD, networking and interconnect programs are there as well, and they should move forward at the same rapid pace.</p>
<p>If there’s a way to justify even higher per-socket chip prices for even more powerful CPUs for even denser datacenters – where power and space are at a premium – then maybe there is a fresh way forward.</p>
<p>How about Intel looking back at those earlier non-x86 RISC architectures – ones that still leave ARM in the dust – as a way forward, while reusing the existing socket and chip infrastructure? After all, x86 being x86, there seems to be some sort of practical ceiling – somewhere around $5,000 per socket in the Xeon E7 series – that the market is willing to accept.</p>
<p>This is still only about one-third of what IBM can get away with charging for its top-end POWER8 offerings, not to mention its ultrafast, hugely pricey MCM flavours. What if there were a much faster, complementary RISC solution – yet one Xeon E7 socket-compatible – providing enough extra performance, footprint and feature benefit that users would be willing to pay $10,000 per socket for it?</p>
<p>Especially if it could achieve much higher instructions per cycle per core than the x86, even in everyday apps? The Chinese “Shenwei” Alpha program, leading to a fairly compact 100 PFlop machine in about a year’s time, could – maybe – be the right hint. And yes, it already leaves ARM in the dust.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/07/intel-navigating-new-landscape-focus-golden-goose-fight-peanuts-arm-crowd/">Intel Navigating the New Landscape: Focus on the Golden Goose, or Fight for the Peanuts With the ARM Crowd?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/07/intel-navigating-new-landscape-focus-golden-goose-fight-peanuts-arm-crowd/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Post-Computex Blues &#8211; Yet Another Bloodbath on the Horizon</title>
		<link>http://www.vrworld.com/2014/06/24/post-computex-blues-yet-another-bloodbath-horizon/</link>
		<comments>http://www.vrworld.com/2014/06/24/post-computex-blues-yet-another-bloodbath-horizon/#comments</comments>
		<pubDate>Tue, 24 Jun 2014 20:01:27 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[2014]]></category>
		<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Enterprise]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Asus]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[Computex]]></category>
		<category><![CDATA[Computex Taipei]]></category>
		<category><![CDATA[Dongguan]]></category>
		<category><![CDATA[Gigabyte]]></category>
		<category><![CDATA[Haswell-E]]></category>
		<category><![CDATA[Hong Kong]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Qualcomm]]></category>
		<category><![CDATA[Shenzhen]]></category>
		<category><![CDATA[Surface Pro 3]]></category>
		<category><![CDATA[Taipei]]></category>
		<category><![CDATA[US]]></category>
		<category><![CDATA[Vendors]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=36159</guid>
		<description><![CDATA[<p>Or&#8230; The Vendors Never Learn It&#8217;s been a full 2 weeks now since the end of Computex, and the associated roaming around Greater China and ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/24/post-computex-blues-yet-another-bloodbath-horizon/">Post-Computex Blues &#8211; Yet Another Bloodbath on the Horizon</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1000" height="559" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/ComputexTaipei_10001.jpg" class="attachment-post-thumbnail wp-post-image" alt="Computex Taipei_1000" /></p><h2>Or&#8230; The Vendors Never Learn</h2>
<p>It&#8217;s been a full two weeks now since the end of Computex, and the associated roaming around Greater China and certain (mostly Chinese-speaking) neighboring realms. This being at the very least my fifteenth Computex, I didn&#8217;t bother much with press conferences and such, instead checking the show floor to see what&#8217;s really going on, and then doing a reality check with selected vendors after the event was done.</p>
<p>The Taiwanese, with a diminishing focus on high-end ‘added value’ PC products, are moving towards mainstream consumer gear, with a corresponding reduction in differentiation and in the ability to command larger margins, in some cases increasingly relying on reference designs &#8211; tablets could repeat the graphics-card story here. The Asus Transformer tablet range is still one of the rare exceptions, at least aiming where higher-margin Samsung and LG offerings are entrenched.</p>
<p>Asus doesn&#8217;t seem to be so lucky with its ROG enthusiast board line, where Gigabyte has, according to more than one insider, claimed the quality prize and is now at the very least on an equal footing in the race for high-end PC board dominance ahead of the September launch of the Haswell-E next-gen Socket 2011 platform (the new socket is NOT compatible with the current Socket 2011, just to state it one more time).</p>
<p>So&#8230; the Big Four &#8211; Intel, Nvidia, Qualcomm and (still, for now) AMD, all US vendors &#8211; still carry the innovation torch and, willingly or not, largely have to lead the OEMs in what to design and manufacture. Ultrabooks and 2-in-1 convertibles were just one example; other product categories follow the same pattern. Intel lost billions in the last financial year investing in an attempt to lead the mobile phone and tablet segments. Of course, it has the size &amp; strength to ride it out without much impact, thanks to other divisions, but a loss is still a loss &#8211; something very uncommon for Intel.</p>
<p>It&#8217;s funny&#8230; even despite all the Computex announcements, the best tablet announced at the time was shown not in Taipei, but in New York &#8211; the Microsoft Surface Pro 3 (which we&#8217;re currently reviewing). It sports a proper Intel Core processor, a proper 3:2 ratio display, a proper (for a tablet, at least) keyboard cover, and a proper OS, as much as one can call Windows 8.1 that &#8211; at least versus Android. That said, Intel did have a Surface Pro 3 at its Computex suite.</p>
<p>Then we come to the sea of mainland China vendors from Shenzhen, Dongguan and other cities, in their little booths at the old Taipei WTC hall. Plenty of them are offering plenty of stuff, but it seems they aren&#8217;t willing to learn the key lesson from their Taiwanese brethren: don&#8217;t you all want to avoid making the same cheap crap, trying to make a dollar a piece and then bleeding each other&#8217;s margins to death in endless fights for every customer &#8211; while only the SoC and IP license owners make any money from it all?</p>
<p>When I asked Intel whether it has a role to play in this situation, one of its regional honchos, Leighton Phillips, Director of Product Marketing, Intel Asia Pacific &amp; Japan, explained that Intel is not the one restricting the components ecosystem for Intel-based tablets and the like; the choice rests mostly with the vendors themselves. After all, in his words, <em>&#8220;Shenzhen city is like one really big company itself,&#8221;</em> where certain &#8220;departments&#8221; decide to focus on repeating the low-cost stuff en masse and, hopefully, some bigger players &#8211; or the daring ones with guts &#8211; decide to do unique things. Like it or not, after being in that city on and off for years now, I find it hard to disagree: many attempts to convince even the large groups there &#8211; even with ready buyers &#8211; to do something beyond the el cheapo fare hit the risk-aversion wall. It&#8217;s a pity, as the Chinese government itself is more strategically focused on developing core technologies than, say, Taiwan&#8217;s.</p>
<p>After all, even outside the Intel world, a good example of where a &#8216;me too&#8217; strategy leads long term is one really big, long-time OEM in Hong Kong that survives &#8211; pitifully, at that &#8211; by basically selling its boards and cards at material cost. For years now, its very survival has depended on a single principal vendor&#8217;s marketing money &#8211; which could be shut off at any time, given that principal&#8217;s own survival issues.</p>
<p>Intel is, of course, investing a lot in Shenzhen, having (quite rightfully) selected it as its next major hardware design center worldwide, after the USA and Taiwan. The rumors I hear from insiders point to close to 700 engineering staff to be hired for Intel&#8217;s coming new space, likely in one of the city&#8217;s many modern science &amp; tech parks, within this year and early next &#8211; far more than the official 150 staff mentioned so far. Whether this will encourage the local companies to attempt more daring product designs, as Intel helps offload some of the engineering risk, remains to be seen. The city really must not repeat the mistakes of Taipei, now cornered into doing low-margin ecosystem work for the real technology principals.</p>
<p>They aren&#8217;t the only ones &#8211; Nvidia is also strongly present here, even organizing organic-farming fun for its Shenzhen staff, and Qualcomm is preparing its positions in the new hardware Mecca. After all, everything from smartphones to supercomputers is both designed and used here. Only AMD is missing, without even an office to call home here&#8230; tells you something, doesn&#8217;t it?</p>
<p>That also serves as a warning to Taipei: irrespective of the simple cheap stuff shown in their little booths, Shenzhen is on target to take more and more of Taiwan&#8217;s IT pie in the near future &#8211; expect reports from Shenzhen&#8217;s IT fairs here next year and beyond as well. So Taiwan must do what Japan did, and boldly go to the top end of technology, producing products for the top tier of users willing to pay for them. A good reference is the Japanese gear shown at Singapore&#8217;s BroadcastAsia show last week &#8211; cameras and monitors at US$30K and above EACH, with the workstations controlling them costing not much less. And they sell well&#8230; why bother selling a million Fiats when a thousand Ferraris could make more?</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/24/post-computex-blues-yet-another-bloodbath-horizon/">Post-Computex Blues &#8211; Yet Another Bloodbath on the Horizon</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/06/24/post-computex-blues-yet-another-bloodbath-horizon/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
