<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; DirectX</title>
	<atom:link href="http://www.vrworld.com/tag/directx/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2015 04:26:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>AMD’s Mantle Efforts Come to an End</title>
		<link>http://www.vrworld.com/2015/03/05/amds-mantle-efforts-come-end/</link>
		<comments>http://www.vrworld.com/2015/03/05/amds-mantle-efforts-come-end/#comments</comments>
		<pubDate>Thu, 05 Mar 2015 01:37:14 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[AMD]]></category>
		<category><![CDATA[Companies]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[Direct X 12]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[Mantle]]></category>
		<category><![CDATA[opengl]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=49089</guid>
		<description><![CDATA[<p>AMD suggests developers focus on DirectX 12 or OpenGL instead.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/05/amds-mantle-efforts-come-end/">AMD’s Mantle Efforts Come to an End</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="2847" height="1537" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/amd-stage-apu-131.jpg" class="attachment-post-thumbnail wp-post-image" alt="AMD Restructuring" /></p><p>AMD (<a href="http://www.google.com/finance?cid=327">NASDAQ: AMD</a>) announced earlier this week at the Game Developer Conference that its Mantle API would be discontinued and suggested developers focus on Direct X 12 or OpenGL instead.</p>
<p>“Proud moments also call for reflection, and today we are especially thoughtful about Mantle’s future,” AMD’s vice president of visual and perceptual computing, Raja Koduri, said in a statement. “In the approaching era of DirectX 12 and the Next-Generation OpenGL Initiative, AMD is helping to develop two incredibly powerful APIs that leverage many capabilities of the award-winning Graphics Core Next (GCN) Architecture.”</p>
<p>AMD said that the Mantle 1.0 API will not be released to the public; however, the company will release a <a href="http://www.amd.com/mantle">450-page API reference and guide</a> on Mantle to interested parties.</p>
<p>But AMD also said that this would not be the end of Mantle per se.</p>
<p>“Mantle must take on new capabilities and evolve beyond mastery of the draw call. It will continue to serve AMD as a graphics innovation platform available to select partners with custom needs,” Koduri wrote in his post.</p>
<p>One direction for the technology underlying Mantle is to serve as the foundation of future APIs. That is already happening: the Khronos Group recently announced that Mantle will serve as the <a href="http://community.amd.com/community/amd-blogs/amd-gaming/blog/2015/03/03/one-of-mantles-futures-vulkan">underlying foundation</a> of Vulkan, its successor to OpenGL.</p>
<p>While AMD faced an uphill battle getting its API adopted, lacking the support of Nvidia (<a href="http://www.google.com/finance?cid=662925">NASDAQ: NVDA</a>) and Intel (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>), it did get a number of software publishers to commit to supporting it in upcoming titles. That should be counted as a success.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/05/amds-mantle-efforts-come-end/">AMD’s Mantle Efforts Come to an End</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/05/amds-mantle-efforts-come-end/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Nvidia Quadro vs. AMD FirePro: Professional Graphics Showdown</title>
		<link>http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/</link>
		<comments>http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/#comments</comments>
		<pubDate>Thu, 04 Sep 2014 06:02:56 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Audio/Video]]></category>
		<category><![CDATA[Enterprise]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Reviews]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[FirePro W8100]]></category>
		<category><![CDATA[FirePro W8100 Review]]></category>
		<category><![CDATA[FirePro W8100 vs Quadro K5200]]></category>
		<category><![CDATA[K2200 Review]]></category>
		<category><![CDATA[K5200 Review]]></category>
		<category><![CDATA[K5200 vs W8100]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Open GL]]></category>
		<category><![CDATA[Quadro K2200]]></category>
		<category><![CDATA[Quadro K5200]]></category>
		<category><![CDATA[Quadro Review]]></category>
		<category><![CDATA[SPEC ViewPerf12]]></category>
		<category><![CDATA[W8100 vs K5200]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38478</guid>
		<description><![CDATA[<p>Ever since the first graphics processors hardwired the basic display operations – chips like the NEC 7220 and Hitachi 63484 in the early 1980s, followed by the first PC cards ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/">Nvidia Quadro vs. AMD Firepro: Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="675" height="392" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/nvidia-quadro-post.jpg" class="attachment-post-thumbnail wp-post-image" alt="nvidia-quadro-post" /></p><p>Since the first graphics processors that hardwired the basic display operations of displays like the NEC 7220 and Hitachi 63484 in the early 1980s, they were followed by the first PC cards – the IBM PGA – some 30 years ago, the need for dedicated graphics processing hardware has set in firmly at the high end of the PC landscape.</p>
<p>At that time it was 2D only, yet it still cost a couple of grand per adapter card: a price class that has seemingly persisted to this day, at least for professional graphics cards like the ones from Nvidia and AMD included in this roundup review.</p>
<p>After the loss of the original <a href="http://en.wikipedia.org/wiki/Silicon_Graphics" target="_blank">Silicon Graphics</a>, as well as the other two major independent OpenGL-focused professional 3D GPU brands (<a href="http://en.wikipedia.org/wiki/3Dlabs" target="_blank">3DLabs</a> and E&amp;S) – a real loss in terms of the features and capabilities of those processors – what we have today is a duopoly of Nvidia and AMD/ATI in this space. Sure, <a title="An Inconvenient Truth: Intel Larrabee story revealed" href="http://www.brightsideofnews.com/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed/" target="_blank">Intel’s Larrabee</a> was originally targeted at this same market but, as we all know, it <a title="First Xeon Phi Supercomputer to Launch on January 7th, 2013, Tesla K20 Inside too" href="http://www.brightsideofnews.com/2012/09/13/first-xeon-phi-supercomputer-to-launch-on-january-7th2c-20132c-tesla-k20-inside-too/" target="_blank">failed and moved to the HPC arena</a> for pure compute, <a title="Intel’s New Knight’s Landing Xeon Phi Combines Omni Scale Fabric with HMC" href="http://www.brightsideofnews.com/2014/06/23/intel-new-knights-landing-combines-omni-scale-fabric-hmc/" target="_blank">where it thrives now</a>.</p>
<p>While DirectX, for better or worse, dominates the PC 3D graphics landscape, the inherently more reliable and precise OpenGL is the API of choice for most professional applications. And that is where the difference between otherwise identical GPU dies on consumer and professional cards comes in. Enabling full OpenGL functionality on the professional GPUs brings not only, say, a threefold OpenGL benchmark advantage, but also the correct OpenGL application behavior required to pass the expensive certification procedures and driver optimizations of professional apps – one of the reasons, besides margin aims, why those cards cost four to five times more than their consumer brethren built on similar chips.</p>
<p><strong>Nvidia Quadro vs AMD FirePro</strong></p>
<p>OpenGL professional cards also have between two and four times more local memory than the consumer ones. For instance, the AMD Radeon R9 290X has 4 GB of RAM, while its professional equivalent, the FirePro W9100, has a whopping 16 GB. The capability to drive two 8K displays, plus the room for larger in-memory compute jobs to use all those teraflops without slowing down to cross over PCIe, demands greater local memory. And yes, many professional 3D apps can readily make use of 4K and 8K resolutions today: whether it is 3D city modelling, detailed engine assembly review, or complex molecular interaction simulations.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL.png" alt="" width="1920" height="1200" /></p>
<p>Those extra pixels do need extra horsepower to drive them, plus the extra memory. Game developers can also benefit from humongous local card memory, as it lets them optimize game memory usage well in advance for consumer cards arriving a few years later.</p>
<p><img class="aligncenter" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGang.jpg" alt="" width="2048" height="1152" /></p>
<p dir="ltr">In this roundup, we have the Quadro K2200 which has 4 GB VRAM, while K5200 and W8100 both have 8 GB VRAM. Note that W8100 has twice the memory bus with compared to K5200, at 512 bits vs 256 bits.</p>
<p dir="ltr">If relying on GPGPU computing, these cards offer an added advantage: their double-precision FP performance is usually fully enabled – not crippled as in their consumer twins. For instance, the otherwise same dies of the R7 290X and FirePro W8100 have 8 times difference in DP FP performance, and Nvidia&#8217;s GPU dies follow a similar path. The single precision FP is usually left full speed in both cases, though, as it affects gaming physics competitiveness on the consumer side.</p>
<p dir="ltr">As said, here we have a quick look at the two new OpenGL GPUs from Nvidia – Quadro K2200 and K5200 – as well as K5200’s head-on competitor from AMD, the FirePro W8100. To emphasise GPU performance variations over the base CPU speed influence, all cards were run on a standard 3.5 GHz quad-core Haswell Core i7-4770K platform with 8 GB RAM and Windows 7 Ultimate, running off an Intel enterprise SSD drive. The newest drivers as of August 22nd were used on all cards. The benchmark used was the most recent version of the sophisticated SPEC ViewPerf12 benchmark suite, which measures the performance range in a variety of pro apps and visualization options, as well as CineBench 15 OpenGL benchmark option, which focuses more on the card raw performance. Here are the results.</p>
<p dir="ltr">SPEC ViewPerf 12 results reflect not just the GPU graphics performance, but also the amount of memory available to locally store the dataset. Among the current OpenGL benchmark, this one is the closest to the actual application usage mix seen on professional 3D workstations.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/ViewPerfSept2014.png" alt="" width="924" height="284" /></p>
<p dir="ltr">As you can see, the scaling among the three Nvidia cards is almost exponential 1:2:4 scale, which kind of renders the first card obsolete, the K2000, as its overall card specs are similar to the K2200. Also, note that, despite the higher raw hardware specs (GPU and memory bandwidth), K5200 beats W8100 by an unusually wide margin in some apps of this test suite thanks to being Nvidia&#8217;s updated Kepler architecture and improved memory capacity and bandwidth.  This is very likely because the K5200 makes a lot of improvements to memory performance (and capacity) and overall FLOPS performance over the K5000 (3TFLOPS vs 2TFLOPS) and can be directly noticeable in the professional benchmarks. The K5200 doubles memory capacity from 4GB to 8GB over the K5000, which also helps Nvidia become more competitive with AMD.</p>
<p dir="ltr">Nvidia&#8217;s Maxwell-based K2200 also performs quite well against the rest of the roundup, even beating AMD&#8217;s W8100 in one test (sw-03) but handily beating the old Kepler-based K2000. Because the K2000 and K2200 are the lowest end cards that Nvidia offers, the differences between architectures are more noticeable. If anything, we can see that AMD should be very worried about a potential Maxwell-based Quadro card from Nvidia if the K2200 improves performance as much as it does over the Kepler-based K2000.</p>
<p dir="ltr">Otherwise, we can see that the new K5200 from Nvidia mostly takes the cake in most of the benchmarks with the exception of three benchmarks, which indicates that AMD is still very competitive with Nvidia.</p>
<p dir="ltr">CineBench 15 OpenGL routine, commonly ran on the consumer GPUs as well, requires far less resources. However, even here, the full OpenGL performance and feature set of these cards beats their consumer brethren manifold:</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/CineBenchOpenGL.png" alt="" width="378" height="274" /></p>
<p dir="ltr">As you can see here, the K2200, even though spec-wise closer to K2000 than to K5200, is much nearer to Quadro K5200 in performance. I feel Nvidia should retire the K2000, or at least massively reduce its price vs K2200, since it makes little sense to consider it otherwise. But it also means that the K2200 delivers a much better level of performance for essentially the same money that they charge for the K2000. The K2200 is proving to be a very good budget card for professional applications and that Maxwell is a massive improvement over Kepler.</p>
<p dir="ltr">Also, AMD W8100 has a slight performance advantage here over the K5200: the raw GPU computation and memory capability of the Hawaii GPU core come to shine here.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGPUz.png" alt="" width="1204" height="497" /></p>
<p dir="ltr">And here, you can see the GPU-Z screenshots of all the Nvidia entries – GPU-Z crashes on the AMD card, so unfortunately we couldn’t go far there, as you can see on the screenshot.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/AMDGPUzNotResp.png" alt="" width="400" height="490" /></p>
<p dir="ltr">If you look at other, more general-purpose 3-D CAD apps, like the AutoCAD 2015 shown here, the picture may be a little different – literally. In the case of AutoCAD, the 3-D polygonal performance for wireframe and shaded models is far more important than complex textures and effects, which are still relatively rarely used in this software for interactive visualization. This means that even a low to mid range card, like Quadro K2200, has sufficient performance for most CAD jobs. I tested both K2200 and K5200 on my AutoCAD Kuala Lumpur model, with plenty of buildings but pure polygonal definition, and there was zero difference in responsiveness, both handling any 3D visualization operation in real time.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL12.png" alt="" width="1920" height="1200" /></p>
<p dir="ltr">Worse, since DirectX is these days – like it or not – supported by many of these apps as well, this changes the equation, as consumer GPUs will run it just as well as the professional ones, at small fraction of the price. AutoCAD was, in fact, one of the first to accommodate that and, coupled with its relatively low requirements, it affects the justification for premium priced professional cards substantially.</p>
<p dir="ltr">On the other hand, many other apps and usage models do value the added benefits of OpenGL – especially those that run under Linux for performance, reliability and multi-core scaling reasons. OpenGL is the sole choice there. The trick, though, is to ensure that the OpenGL Linux driver is at least on the same level of quality as its Windows equivalent – something that Nvidia did well, but AMD still has a way to go.</p>
<p>So, in the end, how do you justify purchasing one of these capable but pricey cards? It comes down to your application. If you design a tall building, an oil rig, or a new-generation plane engine, both the value of your application and, especially, the value of your work and its end product will usually demand total precision and guaranteed performance from the hardware running your job in your chosen app. The certifications and tests done on all of these cards in a variety of systems prior to launch go as far as possible toward meeting those <a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">goals.</a></p>
<p><a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/"><em>This post originally appeared on Bright Side of News&#8217;* sister site, VR World. </em></a></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/">Nvidia Quadro vs. AMD Firepro: Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/03/nvidia-quadro-amd-firepro-professional-graphics-showdown/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Nvidia Quadro vs AMD FirePro: OpenGL Professional Graphics Showdown</title>
		<link>http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/</link>
		<comments>http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/#comments</comments>
		<pubDate>Sun, 31 Aug 2014 14:55:09 +0000</pubDate>
		<dc:creator><![CDATA[Nebojsa Novakovic]]></dc:creator>
				<category><![CDATA[Video Card Reviews]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[FirePro W8100]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Open GL]]></category>
		<category><![CDATA[Quadro K2200]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=37348</guid>
		<description><![CDATA[<p>Ever since the first graphics processors hardwired the basic display operations – chips like the NEC 7220 and Hitachi 63484 in the early 1980s, followed by the first PC cards ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">Nvidia Quadro vs AMD FirePro: OpenGL Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="675" height="392" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/nvidia-quadro-post.jpg" class="attachment-post-thumbnail wp-post-image" alt="nvidia-quadro-post" /></p><p>Since the first graphics processors that hardwired the basic display operations of displays like the NEC 7220 and Hitachi 63484 in the early 1980s, they were followed by the first PC cards – the IBM PGA – some 30 years ago, the need for dedicated graphics processing hardware has set in firmly at the high end of the PC landscape.</p>
<p>At that time it was 2D only, yet it still cost a couple of grand per adapter card: a price class that has seemingly persisted to this day, at least for professional graphics cards like the ones from Nvidia and AMD included in this roundup review.</p>
<p>After the loss of the original <a href="http://en.wikipedia.org/wiki/Silicon_Graphics" target="_blank">Silicon Graphics</a>, as well as the other two major independent OpenGL-focused professional 3D GPU brands (<a href="http://en.wikipedia.org/wiki/3Dlabs" target="_blank">3DLabs</a> and E&amp;S) – a real loss in terms of the features and capabilities of those processors – what we have today is a duopoly of Nvidia and AMD/ATI in this space. Sure, <a title="An Inconvenient Truth: Intel Larrabee story revealed" href="http://www.brightsideofnews.com/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed/" target="_blank">Intel’s Larrabee</a> was originally targeted at this same market but, as we all know, it <a title="First Xeon Phi Supercomputer to Launch on January 7th, 2013, Tesla K20 Inside too" href="http://www.brightsideofnews.com/2012/09/13/first-xeon-phi-supercomputer-to-launch-on-january-7th2c-20132c-tesla-k20-inside-too/" target="_blank">failed and moved to the HPC arena</a> for pure compute, <a title="Intel’s New Knight’s Landing Xeon Phi Combines Omni Scale Fabric with HMC" href="http://www.brightsideofnews.com/2014/06/23/intel-new-knights-landing-combines-omni-scale-fabric-hmc/" target="_blank">where it thrives now</a>.</p>
<p>While DirectX, for better or worse, dominates the PC 3D graphics landscape, the inherently more reliable and precise OpenGL is the API of choice for most professional applications. And that is where the difference between otherwise identical GPU dies on consumer and professional cards comes in. Enabling full OpenGL functionality on the professional GPUs brings not only, say, a threefold OpenGL benchmark advantage, but also the correct OpenGL application behavior required to pass the expensive certification procedures and driver optimizations of professional apps – one of the reasons, besides margin aims, why those cards cost four to five times more than their consumer brethren built on similar chips.</p>
<p><strong>Nvidia Quadro vs AMD FirePro</strong></p>
<p>OpenGL professional cards also have between two and four times more local memory than the consumer ones. For instance, the AMD Radeon R9 290X has 4 GB of RAM, while its professional equivalent, the FirePro W9100, has a whopping 16 GB. The capability to drive two 8K displays, plus the room for larger in-memory compute jobs to use all those teraflops without slowing down to cross over PCIe, demands greater local memory. And yes, many professional 3D apps can readily make use of 4K and 8K resolutions today: whether it is 3D city modelling, detailed engine assembly review, or complex molecular interaction simulations.</p>
<p><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL.png" alt="" width="1920" height="1200" /></p>
<p>Those extra pixels do need extra horsepower to drive them, plus the extra memory. Game developers can also benefit from humongous local card memory, as it lets them optimize game memory usage well in advance for consumer cards arriving a few years later.</p>
<p><img class="aligncenter" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGang.jpg" alt="" width="2048" height="1152" /></p>
<p dir="ltr">In this roundup, we have the Quadro K2200 which has 4 GB VRAM, while K5200 and W8100 both have 8 GB VRAM. Note that W8100 has twice the memory bus with compared to K5200, at 512 bits vs 256 bits.</p>
<p dir="ltr">If relying on GPGPU computing, these cards offer an added advantage: their double-precision FP performance is usually fully enabled – not crippled as in their consumer twins. For instance, the otherwise same dies of the R7 290X and FirePro W8100 have 8 times difference in DP FP performance, and Nvidia&#8217;s GPU dies follow a similar path. The single precision FP is usually left full speed in both cases, though, as it affects gaming physics competitiveness on the consumer side.</p>
<p dir="ltr">As said, here we have a quick look at the two new OpenGL GPUs from Nvidia – Quadro K2200 and K5200 – as well as K5200’s head-on competitor from AMD, the FirePro W8100. To emphasise GPU performance variations over the base CPU speed influence, all cards were run on a standard 3.5 GHz quad-core Haswell Core i7-4770K platform with 8 GB RAM and Windows 7 Ultimate, running off an Intel enterprise SSD drive. The newest drivers as of August 22nd were used on all cards. The benchmark used was the most recent version of the sophisticated SPEC ViewPerf12 benchmark suite, which measures the performance range in a variety of pro apps and visualization options, as well as CineBench 15 OpenGL benchmark option, which focuses more on the card raw performance. Here are the results.</p>
<p dir="ltr">SPEC ViewPerf 12 results reflect not just the GPU graphics performance, but also the amount of memory available to locally store the dataset. Among the current OpenGL benchmark, this one is the closest to the actual application usage mix seen on professional 3D workstations.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/ViewPerfSept2014.png" alt="" width="924" height="284" /></p>
<p dir="ltr">As you can see, the scaling among the three Nvidia cards is almost exponential 1:2:4 scale, which kind of renders the first card obsolete, the K2000, as its overall card specs are similar to the K2200. Also, note that, despite the higher raw hardware specs (GPU and memory bandwidth), K5200 beats W8100 by an unusually wide margin in some apps of this test suite thanks to being Nvidia&#8217;s updated Kepler architecture and improved memory capacity and bandwidth.  This is very likely because the K5200 makes a lot of improvements to memory performance (and capacity) and overall FLOPS performance over the K5000 (3TFLOPS vs 2TFLOPS) and can be directly noticeable in the professional benchmarks. The K5200 doubles memory capacity from 4GB to 8GB over the K5000, which also helps Nvidia become more competitive with AMD.</p>
<p dir="ltr">Nvidia&#8217;s Maxwell-based K2200 also performs quite well against the rest of the roundup, even beating AMD&#8217;s W8100 in one test (sw-03) but handily beating the old Kepler-based K2000. Because the K2000 and K2200 are the lowest end cards that Nvidia offers, the differences between architectures are more noticeable. If anything, we can see that AMD should be very worried about a potential Maxwell-based Quadro card from Nvidia if the K2200 improves performance as much as it does over the Kepler-based K2000.</p>
<p dir="ltr">Otherwise, we can see that the new K5200 from Nvidia mostly takes the cake in most of the benchmarks with the exception of three benchmarks, which indicates that AMD is still very competitive with Nvidia.</p>
<p dir="ltr">CineBench 15 OpenGL routine, commonly ran on the consumer GPUs as well, requires far less resources. However, even here, the full OpenGL performance and feature set of these cards beats their consumer brethren manifold:</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/CineBenchOpenGL.png" alt="" width="378" height="274" /></p>
<p dir="ltr">As you can see here, the K2200, even though spec-wise closer to K2000 than to K5200, is much nearer to Quadro K5200 in performance. I feel Nvidia should retire the K2000, or at least massively reduce its price vs K2200, since it makes little sense to consider it otherwise. But it also means that the K2200 delivers a much better level of performance for essentially the same money that they charge for the K2000. The K2200 is proving to be a very good budget card for professional applications and that Maxwell is a massive improvement over Kepler.</p>
<p dir="ltr">Also, AMD W8100 has a slight performance advantage here over the K5200: the raw GPU computation and memory capability of the Hawaii GPU core come to shine here.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroGPUz.png" alt="" width="1204" height="497" /></p>
<p dir="ltr">And here, you can see the GPU-Z screenshots of all the Nvidia entries – GPU-Z crashes on the AMD card, so unfortunately we couldn’t go far there, as you can see on the screenshot.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/AMDGPUzNotResp.png" alt="" width="400" height="490" /></p>
<p dir="ltr">If you look at other, more general-purpose 3-D CAD apps, like the AutoCAD 2015 shown here, the picture may be a little different – literally. In the case of AutoCAD, the 3-D polygonal performance for wireframe and shaded models is far more important than complex textures and effects, which are still relatively rarely used in this software for interactive visualization. This means that even a low to mid range card, like Quadro K2200, has sufficient performance for most CAD jobs. I tested both K2200 and K5200 on my AutoCAD Kuala Lumpur model, with plenty of buildings but pure polygonal definition, and there was zero difference in responsiveness, both handling any 3D visualization operation in real time.</p>
<p dir="ltr"><img class="alignnone" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/QuadroACAD2015KL12.png" alt="" width="1920" height="1200" /></p>
<p dir="ltr">Worse, since DirectX is these days – like it or not – supported by many of these apps as well, this changes the equation, as consumer GPUs will run it just as well as the professional ones, at small fraction of the price. AutoCAD was, in fact, one of the first to accommodate that and, coupled with its relatively low requirements, it affects the justification for premium priced professional cards substantially.</p>
<p dir="ltr">On the other hand, many other apps and usage models do value the added benefits of OpenGL – especially those that run under Linux for performance, reliability and multi-core scaling reasons. OpenGL is the sole choice there. The trick, though, is to ensure that the OpenGL Linux driver is at least on the same level of quality as its Windows equivalent – something that Nvidia did well, but AMD still has a way to go.</p>
<p>So, in the end, how do you justify purchasing one of these capable but pricey cards? It comes down to your application. If you design a tall building, an oil rig, or a new-generation plane engine, both the value of your application and, especially, the value of your work and its end product will usually demand total precision and guaranteed performance from the hardware running your job in your chosen app. The certifications and tests done on all of these cards in a variety of systems prior to launch go as far as possible toward meeting those <a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">goals.</a></p>
<p><a href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/"><em>This post originally appeared on Bright Side of News&#8217;* sister site, VR World. </em></a></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/">Nvidia Quadro vs AMD FirePro: OpenGL Professional Graphics Showdown</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/31/nvidia-quadro-vs-amd-firepro/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel and DirectX 12’s Big Day Out: Intel Chats On The Intel-Microsoft API Partnership</title>
		<link>http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/</link>
		<comments>http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/#comments</comments>
		<pubDate>Mon, 25 Aug 2014 17:13:52 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Interviews]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Andrew Lauritzen]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[DirectX 12]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Microsoft]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38038</guid>
		<description><![CDATA[<p>For all the talk from AMD about Mantle being revolutionary for game developers and consumers, for a while it seemed to be forgotten that AMD doesn’t ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/">Intel and DirectX 12’s Big Day Out: Intel Chats On The Intel-Microsoft API Partnership</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1201" height="793" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/IntelLogo1.jpg" class="attachment-post-thumbnail wp-post-image" alt="Intel Logo" /></p><p>For all the talk from <a href="http://www.google.com/finance?cid=327">AMD</a> about Mantle being revolutionary for game developers and consumers, for a while it seemed to be forgotten that AMD doesn’t have a monopoly on the competitive advantage Mantle promises. DirectX, with its near universal adoption amongst developers, is fully capable of offering the low overhead and close to the metal programming environment that Mantle promises.</p>
<p><a href="http://www.brightsideofnews.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/">Earlier this month</a> at SIGGRAPH in Vancouver <a href="http://www.google.com/finance?q=NASDAQ%3AMSFT&amp;ei=ivH7U6jFKsS8kgXky4GACQ">Microsoft</a> proved just that, running DirectX 12 on an <a href="http://www.google.com/finance?q=NASDAQ%3AINTC&amp;ei=pPH7U4jrE8qnkgW3-oDwCA">Intel</a>-powered Surface Pro 3. During the benchmarks displayed at Intel’s booth on the show floor, DirectX 12 provided a fairly serious performance gains over the previous version.</p>
<p>Last week <i>Bright Side of News</i> caught up with Andrew Lauritzen, a graphics software engineer in Intel’s Advanced Technology Group, to discuss what sort of gains DirectX 12 will deliver over its predecessor.</p>
<p><b><i>Bright Side of News: </i></b>What kind of advantages does DirectX 12 have over Mantle?</p>
<p>Mantle is something that only runs on AMD’s GPUs right now, so it can’t be compared directly to DirectX 12. In terms of what our hardware does with DirectX 12, we compare against DirectX 11 so we can show the benefits. The goals of Mantle are similar, but those things [the DirectX 12 vs. Mantle debate] could only be compared on AMD’s hardware once they have drivers for both.</p>
<p><b><i>BSN*: </i></b>What about comparing DirectX 12 to OpenGL and OpenCL?</p>
<p>OpenGL is structured similarly to Direct3D 10 and 11. It has a lot of the same overheads as those APIs had. Direct3D 12 is a new generation of API that gives a lot more explicit access to the hardware. Folks writing game engines are really the main people who have asked for it, and with it they can write more efficient rendering algorithms than they have been able to in the past.</p>
<p>OpenCL is more of a compute API. It’s not really for graphics. It’s more akin to the DirectCompute part of the APIs.</p>
<p><b><i>BSN*: </i></b>How long has Intel been working on the DirectX 12 effort with Microsoft?</p>
<p>That’s kind of a grey area; it’s like asking ‘since when has it been called DirectX 12?’ We’ve been discussing the ideas that crystallized into DirectX 12 for many years. In fact, as many game developers will say, this has been an issue on their minds for many years as well, and they’ve been giving feedback to both us and Microsoft with the goal of making things better.</p>
<p>As far as when those efforts crystallized into DirectX 12, well it was announced at GDC &#8212; that’s the most we can say. As long as we’ve been working on DirectX, we’ve been collaborating with Microsoft. That includes DirectX 11, and even before that. Discussions of this sort date back many years. It’s just a question of when they turned from discussions to an explicit plan.</p>
<p><b><i>BSN*</i></b><b>: </b>When did developers begin to request features that made it into DirectX 12, such as low overhead?</p>
<p>The requests go back as long as I’ve worked in the industry. For DirectX 10, one of the main goals was to lower the overhead compared to DirectX 9. It did succeed, in relative terms, but at the time the combination of hardware and software, along with other factors, meant they couldn’t push overhead down as far as they have with DirectX 12.</p>
<p>In DirectX 11, they tried to do the multithreading thing again. That was one of the big features of DirectX 11. But it turned out, again, that because of API and driver issues they never really saw the benefit they were hoping for. It was not a huge win and not very scalable. Really, with 12, they were able to go back to the drawing board in a new era of both GPUs and a lot of engine technology shared across different game developers. It made a lot more sense to go a significant step lower level than they had in the past.</p>
<p><b><i>BSN*:</i></b> Why is the performance jump so much bigger from DirectX 11 to 12 than it was from 10 to 11?</p>
<p>There are things around how hazards were tracked in the API [for more on that see <a href="https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX%20Advancements%20in%20the%20Many-Core%20Era%20Getting%20the%20Most%20out%20of%20the%20PC%20Platform.pdf">this</a> and <a href="https://software.intel.com/en-us/blogs/2014/08/07/direct3d-12-overview-part-4-heaps-and-tables">this</a>]. There were things around how <a href="http://msdn.microsoft.com/en-us/library/sf4e5x7z(v=vs.110).aspx">Graphics State</a> was handled in the API, which made it a difficult problem for drivers to automatically keep the API safe, because it was a safer API before. DirectX 12 moved some of this into the user’s hands. So it’s a less safe API in terms of getting consistent, correct rendering, but it allows game developers to do those things efficiently, since they don’t always have to handle the most general cases the way a driver does.</p>
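<p><i>To make that concrete, here is a minimal illustrative sketch (not code from the interview) of Direct3D 12’s up-front state declaration: the whole render-state bundle is baked into an immutable pipeline state object, so validation happens once at creation rather than at every draw. It assumes d3d12.h plus the d3dx12.h helpers, and an already-created device, root signature (rootSig) and compiled shader blobs (vsBlob, psBlob).</i></p>
<pre>
D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
desc.pRootSignature        = rootSig;
desc.VS                    = { vsBlob->GetBufferPointer(), vsBlob->GetBufferSize() };
desc.PS                    = { psBlob->GetBufferPointer(), psBlob->GetBufferSize() };
desc.RasterizerState       = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
desc.BlendState            = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
desc.DepthStencilState.DepthEnable   = FALSE;
desc.DepthStencilState.StencilEnable = FALSE;
desc.SampleMask            = UINT_MAX;
desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
desc.NumRenderTargets      = 1;
desc.RTVFormats[0]         = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count      = 1;

// Validation happens once, here, instead of inside every draw call:
ID3D12PipelineState *pso = nullptr;
device->CreateGraphicsPipelineState(&amp;desc, IID_PPV_ARGS(&amp;pso));
// At draw time the app just binds the prevalidated bundle:
// commandList->SetPipelineState(pso);
</pre>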
<p><b><i>BSN*: </i></b>From what you at Intel have seen, what’s the response been like so far to DirectX 12 from developers?</p>
<p>I think it’s fair to say that it’s been really positive so far. This is something they’ve been wanting for a long time. It’s a combination of experience and the fact that, instead of everyone designing their own engines, we’re getting specialist [companies] that are really focused on writing graphics engines. Having that core set of specialists has really let us open up the stuff game developers want.</p>
<p>In the past, the API was really designed to sit at a middle level, as if every game were writing its own rendering and the minutiae of how to drive a GPU efficiently. But now more and more people are using [a handful of] engines, so it makes sense to concentrate on optimizing the technology in those engines, as they are used across so many different games.</p>
<p><b><i>BSN*: </i></b>It seems like Intel is trying to make its own integrated GPUs competitive with low-to-mid range discrete GPUs. Is this the case?</p>
<p>We’re always trying to make our GPUs the best they can be in a given form factor and power budget. Increasingly, our chips are going into lower- and lower-power devices, and as we target these power-constrained devices it becomes more and more important that we optimize every part of the system. You can’t just get away with ‘Oh, we have lots of extra CPU power so we’ll just eat the overhead from that’; we have to make sure to optimize all parts of the stack.</p>
<p><b><i>BSN*</i></b><b>: </b>Will we see DirectX 12 on Intel’s lowest power devices like mobile Broadwell?</p>
<p>Yes. All of our chips from Haswell onward will be DirectX 12 compatible. That includes Broadwell and any future chips.</p>
<p><b><i>BSN*: </i></b><b>Thanks for your time. </b></p>
<p><b><i>This interview has been edited for clarity and length. </i></b></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/">Intel and DirectX 12’s Big Day Out: Intel Chats On The Intel-Microsoft API Partnership</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Who Needs Mantle? DirectX 12 Shows Big Performance Gains at SIGGRAPH</title>
		<link>http://www.vrworld.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/</link>
		<comments>http://www.vrworld.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/#comments</comments>
		<pubDate>Fri, 15 Aug 2014 04:26:48 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Direct X 12]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[Mantle]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[SIGGRAPH 2014]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=37680</guid>
		<description><![CDATA[<p>Microsoft appears to have developed a viable competitor to AMD’s Mantle, if the benchmarks displayed at Intel’s SIGGRAPH booth in Vancouver are consistent with ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/">Who Needs Mantle? DirectX 12 Shows Big Performance Gains at SIGGRAPH</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="640" height="360" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/nmbi5d6rqjqysz3mpcjx.jpg" class="attachment-post-thumbnail wp-post-image" alt="nmbi5d6rqjqysz3mpcjx" /></p><p>Microsoft has appeared to have developed a viable competitor to AMD’s Mantle if the benchmarks displayed at Intel’s SIGGRAPH booth in Vancouver are consistent with real-world performance.</p>
<p>According to benchmarks and a demo run at Intel’s SIGGRAPH booth, DirectX 12 offers a 70% boost in performance over DirectX 11, and offers substantial power savings too. The demo was a graphically intense simulation of what appears to be an asteroid belt, with 50,000 asteroids rendered on screen. This is similar to the <i>Star Citizen</i> demo AMD uses to promote Mantle, where tens of thousands of individual ships are rendered in a big dogfight. The demo was run on a Surface Pro 3, which is powered by a Core i5 chip with an Intel HD 4400 GPU.</p>
<p>The benchmark Intel ran had two tests. In the first, the frame rate is held constant (locked), so both APIs render the same workload and their power consumption can be compared directly. In the second, the frame rate is unlocked, letting the system spend its power budget on performance instead.</p>
<p>As the image below displays, during the first test DirectX 12 had a nearly 50% power savings over DirectX 11.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/08/sp3_dx11_dx12_power.jpg" rel="lightbox-0"><img class="aligncenter size-full wp-image-37681" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/sp3_dx11_dx12_power.jpg" alt="sp3_dx11_dx12_power" width="1296" height="864" /></a></p>
<p>In the second test, with the frame rate unlocked, DirectX 11 pushed out 19 FPS while DirectX 12 was able to render 33 FPS.</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2014/08/sp3_dx11.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2014/08/sp3_dx11-750x420.jpg" class="attachment-vw_medium" alt="sp3_dx11" /></a>

<p>As Intel explains in a blog post, the power savings come from substantially reducing CPU overhead.</p>
<p>&#8220;DirectX 12 is designed for low overhead, multi-threaded rendering. Using the new API we have reduced the CPU power requirement and thus freed up that power for the GPU,&#8221; Intel’s Andrew Lauritzen wrote in a <a href="https://software.intel.com/en-us/blogs/2014/08/11/siggraph-2014-directx-12-on-intel">blog post</a>.</p>
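<p>For a sense of the mechanism behind that quote, here is a minimal, illustrative sketch of the multi-threaded recording model DirectX 12 enables: each worker thread records its own command list, and the main thread submits them all in one call. This is not Intel&#8217;s demo code; BuildCommandListForChunk is a hypothetical helper, and allocator setup and error handling are omitted.</p>
<pre>
#include &lt;d3d12.h>
#include &lt;thread>
#include &lt;vector>

// Hypothetical helper, defined elsewhere: records the draw calls for one
// slice of the scene into its own ID3D12GraphicsCommandList and closes it.
ID3D12GraphicsCommandList *BuildCommandListForChunk(int chunk);

void SubmitFrame(ID3D12CommandQueue *queue) {
    const int kThreads = 4;
    std::vector&lt;ID3D12CommandList *> lists(kThreads);
    std::vector&lt;std::thread> workers;

    // Record command lists in parallel: the heavy CPU work scales across
    // cores instead of serializing inside the driver as in DirectX 11.
    for (int i = 0; i &lt; kThreads; ++i)
        workers.emplace_back([&amp;lists, i] { lists[i] = BuildCommandListForChunk(i); });
    for (auto &amp;t : workers) t.join();

    // One cheap submission for the whole frame:
    queue->ExecuteCommandLists(kThreads, lists.data());
}
</pre>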
<p>This is a victory for both Microsoft and Intel. For Microsoft, it shows that there is no real threat to the established legacy of DirectX. With data like this, developers will have to take a long, hard look before moving to the relatively unused and untested Mantle – especially when DirectX (hypothetically) runs equally well across all platforms. For Intel, it shows that its SoCs and CPUs can perform well in gaming scenarios on mobile platforms running Windows.</p>
<p>Perhaps there will be something of an API war in 2015.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/">Who Needs Mantle? DirectX 12 Shows Big Performance Gains at SIGGRAPH</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Nvidia officially unveils civil &#8220;CX&#8221; and FX5800 monster</title>
		<link>http://www.vrworld.com/2008/11/10/nvidia-officially-unveils-civil-cx-and-fx5800-monster/</link>
		<comments>http://www.vrworld.com/2008/11/10/nvidia-officially-unveils-civil-cx-and-fx5800-monster/#comments</comments>
		<pubDate>Mon, 10 Nov 2008 14:00:14 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[FX 5800]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[opengl]]></category>
		<category><![CDATA[Quadro]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=351</guid>
		<description><![CDATA[<p>Last week, I did a short piece about how Nvidia is trying to bridge the 32-/64-bit divide, and today, the company officially unveiled ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/10/nvidia-officially-unveils-civil-cx-and-fx5800-monster/">Nvidia officially unveils civil &#8220;CX&#8221; and FX5800 monster</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Last week, I did a short piece about how Nvidia is trying to <a href="http://theovalich.wordpress.com/2008/11/06/nvidia-plans-to-bridge-the-32-bit-and-64-bit-divide/" target="_blank">bridge the 32-/64-bit divide</a>, and today, the company officially unveiled the Quadro FX 4800 and FX 5800.<br />
The Quadro FX 4800 shares a lot of similarities with the Adobe-oriented CX, but features 216 shaders (yes, a GTX 260 brother here, if my sources are correct) and 1.5 GB of GDDR3 memory. But the star of today&#8217;s launch is the FX 5800, the new flagship of the Quadro fleet.<br />
In a way, we already know everything about the FX 5800, since Nvidia demonstrated the product back in August at Siggraph 2008, followed by Nvision 08 &#8211; so the specs come as no surprise to anybody.<br />
The FX 5800 features a GT200 chip with all 240 shader processors (with 30 &#8220;invisible&#8221; FP64-capable double-precision units &#8211; one per group of eight shaders), coupled with a massive 4 GB of GDDR3 memory clocked at 816 MHz in DDR mode (1.63 GT/s). As usually happens with Quadro products, Nvidia did not reveal the clocks, but fill-rates of 52 billion texels and 300 million triangles per second lead to a calculator, and the calculator leads us to believe that, for the first time in ages, a Quadro product is actually clocked higher than the reference GeForce card it was built upon. As far as I know, the GeForce GTX 280 comes with a fill-rate of 48.2 GTexel/s, while this Quadro card delivers about 4 GTexel/s more, or a roughly 40 MHz higher clock speed. Of course, I could be wrong…</p>
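<p>For those who want to retrace the calculator work, here is the back-of-the-envelope arithmetic as a small snippet. The 80-texture-unit figure for GT200 is my assumption, not something Nvidia disclosed with this launch:</p>
<pre>
// Back-of-the-envelope check: core clock (MHz) = fill-rate / texture units.
#include &lt;cstdio>

int main() {
    const double tmus           = 80.0;  // assumed GT200 texture unit count
    const double gtx280_gtexels = 48.2;  // GTexel/s, from the text
    const double fx5800_gtexels = 52.0;  // GTexel/s, from the text
    std::printf("GTX 280: ~%.0f MHz\n", gtx280_gtexels * 1000.0 / tmus); // ~603
    std::printf("FX 5800: ~%.0f MHz\n", fx5800_gtexels * 1000.0 / tmus); // ~650
    return 0;
}
</pre>
<p>That works out to a gap just shy of 50 MHz &#8211; in the same ballpark as the roughly 40 MHz estimated above.</p>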
<div id="attachment_352" style="width: 510px" class="wp-caption alignnone"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/11/nvidia_quadroplex_2200.jpg" rel="lightbox-0"><img class="size-full wp-image-352" title="nvidia_quadroplex_2200" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/nvidia_quadroplex_2200.jpg" alt="FX5800 actually debuted three months ago, tucked inside this case - QuadroPlex 2200 featured two of these monsters...again, not the brightest moment in history of Nvidia's marketing department" width="500" height="412" /></a><p class="wp-caption-text">FX5800 actually debuted 3 months ago, tucked inside QuadroPlex 2200, with two of these monsters inside...again, not the brightest moment in history of Nvidia&#39;s mkt department.</p></div>
<p>The card supports Quadro G-Sync II, but I am much more interested in the fact that you can simply take the old Quadro card out, put the FX 5800 in and hook it up to an SDI daughterboard – something the broadcast industry will appreciate. The price of the board is $3,400, and seeing how prices raged in the previous era, I have to say that having competition is really, really good. The ATI Radeon forced Nvidia to reshuffle its GeForce pricing segments, and now FirePro has done the same to the Quadro FX series.<br />
Thank heavens, we (finally) have a war in the workstation world. Now it is time to see an RV770 chip with 4 GB of memory as well… ATI could technically come up with a 4 GB board too, if it used the 4850X2 as a base. There is no way in hell we will see 4 GB GDDR5 boards on the market yet, though – the higher-density chips are not available at the moment.</p>
<p>P.S. With Nvidia continuing its funny usage of the &#8220;FX&#8221; moniker, isn&#8217;t it amusing that, currently, the most powerful 3D card on the face of the planet shares its name with the most powerful dustbuster card that ever existed&#8230; yes, <a href="http://www.youtube.com/watch?v=PFZ39nQ_k90" target="_blank" rel="lightbox-video-0">the mighty loud GeForce FX 5800</a>. Please do click on the link &#8211; a Monday laugh is guaranteed. <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /></p>
<p>P.P.S. Where&#8217;s the OpenGL 3.0 support, guys?</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/10/nvidia-officially-unveils-civil-cx-and-fx5800-monster/">Nvidia officially unveils civil &#8220;CX&#8221; and FX5800 monster</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/10/nvidia-officially-unveils-civil-cx-and-fx5800-monster/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
