<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; DirectX 12</title>
	<atom:link href="http://www.vrworld.com/tag/directx-12/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2015 04:26:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Futuremark 3DMark API Overhead Feature Test Lets You Benchmark DirectX 12 And Mantle</title>
		<link>http://www.vrworld.com/2015/03/28/futuremark-3dmark-api-overhead-feature-test-lets-you-benchmark-directx-12-and-mantle/</link>
		<comments>http://www.vrworld.com/2015/03/28/futuremark-3dmark-api-overhead-feature-test-lets-you-benchmark-directx-12-and-mantle/#comments</comments>
		<pubDate>Sat, 28 Mar 2015 07:00:44 +0000</pubDate>
		<dc:creator><![CDATA[Harish Jonnalagadda]]></dc:creator>
				<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[3DMark]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[directx 11]]></category>
		<category><![CDATA[DirectX 12]]></category>
		<category><![CDATA[Futuremark]]></category>
		<category><![CDATA[Mantle]]></category>
		<category><![CDATA[Microsoft]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=51042</guid>
		<description><![CDATA[<p>Want to benchmark DirectX 12? Now you can. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/28/futuremark-3dmark-api-overhead-feature-test-lets-you-benchmark-directx-12-and-mantle/">Futuremark 3DMark API Overhead Feature Test Lets You Benchmark DirectX 12 And Mantle</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1920" height="1080" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/3dmark-directx-12.jpg" class="attachment-post-thumbnail wp-post-image" alt="3dmark-directx-12" /></p><p>Futuremark has launched a new API Overhead feature test for its 3DMark benchmarking utility, with the test allowing users to test performance differences between <a href="http://vrworld.com/tag/directx-12" target="_blank">DirectX 12</a>, DirectX 11 and Mantle API.</p>
<p><a href="http://vrworld/tag/windows-10" target="_blank">Windows 10</a> Technical Preview users on build 10041 and the latest video drivers from Windows Update will now be able to access the test through 3DMark Advanced or Professional Edition. To test the DirectX 12 features, users must have DirectX 11-compliant hardware with at least 4GB RAM and 1GB video memory. To run Mantle tests, you need AMD hardware that works with the Mantle API.</p>
<p>Developed in collaboration with AMD (<a href="https://www.google.com/finance?q=amd&amp;ei=dTsVVaH6NYnwuATq94DYBQ" target="_blank">NASDAQ:AMD</a>), Intel (<a href="https://www.google.com/finance?q=intel&amp;ei=cTsVVeqMKNTmuAT54oC4Dw" target="_blank">NASDAQ:INTC</a>), Microsoft (<a href="https://www.google.com/finance?q=msft&amp;ei=lTsVVYGFKJKMuQSQgIGACQ" target="_blank">NASDAQ:MSFT</a>) and Nvidia (<a href="https://www.google.com/finance?q=nvidia&amp;ei=rzsVVcGXDdPMugTCtoDABw" target="_blank">NASDAQ:NVDA</a>), the test&#8217;s objective is to determine the &#8220;relative performance of different APIs on a single system.&#8221; Essentially, you&#8217;ll be able to gauge how your current system performs with DirectX 11, and how that performance changes with DirectX 12.</p>
<p><iframe width="1140" height="641" src="https://www.youtube.com/embed/KwGtbmnhz9w?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
<p>The API Overhead test works by sending draw calls to the GPU through the API under test, each asking it to draw an object on screen. The more efficient the API, the more objects can be drawn. The number of draw calls increases with every iteration, and the final result is the maximum number of draw calls per second the API achieves before the frame rate drops below 30 fps.</p>
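<p>To make the method concrete, here is a minimal C++ sketch of such a measurement loop (our own illustration, not Futuremark&#8217;s code). The &#8220;draw call&#8221; is a CPU stub so the program terminates when run as-is, and the starting load and ramp size are assumptions; a real harness would submit actual draws through D3D11, D3D12 or Mantle.</p>
<pre><code>// Minimal sketch of a draw-call overhead benchmark loop (illustrative only).
#include &lt;chrono&gt;
#include &lt;cstdio&gt;

volatile unsigned sink = 0;

void issue_draw_call() {
    // Stand-in for the CPU cost of one API submission.
    for (int i = 0; i &lt; 100; ++i) sink += i;
}

int main() {
    int draws_per_frame = 1000;   // starting load (assumed)
    int score = 0;                // best draw calls per second seen so far

    for (;;) {
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i &lt; draws_per_frame; ++i) issue_draw_call();
        auto t1 = std::chrono::steady_clock::now();

        double fps = 1.0 / std::chrono::duration&lt;double&gt;(t1 - t0).count();
        if (fps &lt; 30.0) break;    // frame rate fell under 30 fps: stop

        score = static_cast&lt;int&gt;(draws_per_frame * fps);
        draws_per_frame += 1000;  // ramp the draw-call count each iteration
    }
    std::printf("max draw calls per second: %d\n", score);
    return 0;
}</code></pre>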
<p>With DirectX 12 slated for commercial availability later this year, Futuremark&#8217;s test offers users a way to check how their current configurations will be able to handle Microsoft&#8217;s new API.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/28/futuremark-3dmark-api-overhead-feature-test-lets-you-benchmark-directx-12-and-mantle/">Futuremark 3DMark API Overhead Feature Test Lets You Benchmark DirectX 12 And Mantle</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/28/futuremark-3dmark-api-overhead-feature-test-lets-you-benchmark-directx-12-and-mantle/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Microsoft: DirectX 12 to Ship With Windows 10</title>
		<link>http://www.vrworld.com/2014/10/05/microsoft-directx-12-ship-windows-10/</link>
		<comments>http://www.vrworld.com/2014/10/05/microsoft-directx-12-ship-windows-10/#comments</comments>
		<pubDate>Mon, 06 Oct 2014 03:13:21 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[DirectX 12]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Windows]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=39669</guid>
		<description><![CDATA[<p>Microsoft has confirmed that DirectX 12 will be included in Windows 10, but the company still needs to work on building support for the new API. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/10/05/microsoft-directx-12-ship-windows-10/">Microsoft: DirectX 12 to Ship With Windows 10</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="580" height="388" src="http://cdn.vrworld.com/wp-content/uploads/2014/10/directx-12-logo-100251209-large.png" class="attachment-post-thumbnail wp-post-image" alt="directx-12-logo-100251209-large" /></p><p>Microsoft (<a href="http://www.google.ca/finance?cid=358464">NASDAQ: MSFT</a>) confirmed late last week in a blog post that Windows 10 will be shipping with the upcoming DirectX 12 API &#8212; and a technical preview is already available to developers via the early build of Windows 10 recently released.</p>
<p>“The final version of Windows 10 will ship with DirectX 12, and we think it&#8217;s going to be awesome,” Microsoft’s Brian Langley wrote in a blog post. “Game developers who are part of our DirectX 12 <a href="http://1drv.ms/1dgelm6">Early Access</a> program have even more incentive to join the<a href="http://preview.windows.com/"> Windows Insider</a> program.  These game developers will receive everything they need to kickstart their DX12 development.”</p>
<p>For now, Microsoft will have to work overtime to convince developers of the technical virtues of the new API and why it deserves their full support. In some ways Microsoft finds itself competing with AMD (<a href="http://www.google.ca/finance?cid=327">NYSE: AMD</a>) and its Mantle API, as both promise close-to-the-metal access for developers. Though Mantle isn&#8217;t close to mainstream, it currently counts more developer support than DirectX 12: Unreal Engine 4 is the only game engine to support DirectX 12, while AMD can point to a handful of AAA titles and developers backing Mantle (though the comparison isn&#8217;t entirely fair, as DirectX 12 isn&#8217;t officially out yet).</p>
<p>Microsoft says that it’s currently working with developers to build relationships and support for DirectX 12.</p>
<p>Windows 10 is expected to be released in 2015.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/10/05/microsoft-directx-12-ship-windows-10/">Microsoft: DirectX 12 to Ship With Windows 10</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/10/05/microsoft-directx-12-ship-windows-10/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>GeForce GTX 980 Review: More Performance at Lower Power</title>
		<link>http://www.vrworld.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/</link>
		<comments>http://www.vrworld.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/#comments</comments>
		<pubDate>Fri, 19 Sep 2014 02:30:21 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Reviews]]></category>
		<category><![CDATA[256 Bit]]></category>
		<category><![CDATA[290]]></category>
		<category><![CDATA[290X]]></category>
		<category><![CDATA[4K]]></category>
		<category><![CDATA[AA]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[Asynchronous Warp]]></category>
		<category><![CDATA[bus]]></category>
		<category><![CDATA[DirectX 12]]></category>
		<category><![CDATA[DisplayPort]]></category>
		<category><![CDATA[DSR]]></category>
		<category><![CDATA[DX 11.3]]></category>
		<category><![CDATA[DX12]]></category>
		<category><![CDATA[GeForce GTX]]></category>
		<category><![CDATA[GeForce GTX 980]]></category>
		<category><![CDATA[Global Illumination]]></category>
		<category><![CDATA[Graphics Card]]></category>
		<category><![CDATA[GTX 980]]></category>
		<category><![CDATA[GTX 980 Review]]></category>
		<category><![CDATA[GTX980]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[price]]></category>
		<category><![CDATA[R9 290]]></category>
		<category><![CDATA[R9 290X]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[Supersampling]]></category>
		<category><![CDATA[Voxel]]></category>
		<category><![CDATA[Voxel Global Illumination]]></category>
		<category><![CDATA[VXGI]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38897</guid>
		<description><![CDATA[<p>The Nvidia GeForce GTX 980 is Nvidia&#8217;s latest and greatest graphics card featuring the company&#8217;s new Maxwell GPU architecture. Nvidia claims that Maxwell is able to ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/">GeForce GTX 980 Review: More Performance at Lower Power</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="980" height="452" src="http://cdn.vrworld.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_Front.jpg" class="attachment-post-thumbnail wp-post-image" alt="NVIDIA GeForce GTX 980" /></p><p>The Nvidia GeForce GTX 980 is Nvidia&#8217;s latest and greatest graphics card featuring the company&#8217;s new Maxwell GPU architecture. Nvidia claims that Maxwell is able to maintain performance while delivering better power efficiency. Sure, the Kepler architecture brought some amazing improvements when compared to the infamous Fermi architecture, but it was less revolutionary than the Maxwell architecture which debuted last year in the GTX 750 Ti.</p>
<p>Below, you can see a single SMM block diagram of the Maxwell architecture, followed by the full GM-204 block diagram. Keep in mind that GM-204 is not the full-blown version of Maxwell.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/GeForce_GTX_980_SM_Diagram_FINAL.jpg" rel="lightbox-0"><img class="aligncenter size-medium wp-image-38907" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/GeForce_GTX_980_SM_Diagram_FINAL-320x600.jpg" alt="GeForce_GTX_980_SM_Diagram_FINAL" width="320" height="600" /></a></p>
<p>The GeForce GTX 980 is based upon Nvidia&#8217;s GM-204 GPU, a mid-range implementation of Nvidia&#8217;s full Maxwell architecture. Even though the GTX 980 is being sold as a high-end card, it actually slots into Nvidia&#8217;s product lineup much as the GTX 680 did.</p>
<p>The GTX 680 eventually became the GTX 770, slotting in below the GTX 780 (a chopped-down Titan) and the GTX 780 Ti (the full Kepler chip), and above the 760 Ti, also a chopped-down card. So the GTX 980&#8217;s natural comparisons are the GTX 680, which was GK-104, and the GTX 780 Ti, which was full-blown Kepler. The GTX 980 is also rated at 30 watts less power than the GTX 680 while performing far faster.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/GeForce_GTX_980_Block_Diagram_FINAL.jpg" rel="lightbox-1"><img class="aligncenter size-medium wp-image-38905" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/GeForce_GTX_980_Block_Diagram_FINAL-600x559.jpg" alt="GeForce_GTX_980_Block_Diagram_FINAL" width="600" height="559" /></a></p>
<p>In the new GPU, one of the most notable improvements is the increase of the L2 cache from 512 KB all the way up to 2048 KB. Nvidia has also made significant improvements across much of the GPU&#8217;s design to improve efficiency. The net result is that the GTX 980 has a TDP of 165W while the GTX 680 had a TDP of 195W: a reduction of 30W, or about 15%, in a single generation (going from GK-104 to GM-204) on the same 28nm process node. However, to build a GM-210, Nvidia will need a process shrink to reduce the die size, gain even more power efficiency and build a very dense chip of 10 billion-plus transistors.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/SpecsTable_980.jpg" rel="lightbox-2"><img class="aligncenter size-medium wp-image-38949" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/SpecsTable_980-600x486.jpg" alt="SpecsTable_980" width="600" height="486" /></a></p>
<p>In addition to the GM-204 GPU, Nvidia also opted for a standard 4GB of GDDR5 memory at 7 Gbps. Even though the card only has a 256-bit memory bus, that still works out to an impressive 224 GB/s of memory bandwidth.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/Maxwell_GM204_DIE_3D_V17_Final.jpg" rel="lightbox-3"><img class="aligncenter size-medium wp-image-38918" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/Maxwell_GM204_DIE_3D_V17_Final-600x337.jpg" alt="Maxwell_GM204_DIE_3D_V17_Final" width="600" height="337" /></a></p>
<h2>Hardware</h2>
<p>Moving on from the GPU architecture, the GTX 980&#8217;s hardware bears a very strong resemblance to the Kepler-era designs that started with the GTX Titan. It differs in a few ways, though, including the two 6-pin PCIe power connectors, which means the card can draw up to 225W in total (75W from the PCIe slot plus 75W from each connector). So even though this card has a TDP of 165W, it can theoretically draw up to 225W, which means it could be an impressive overclocker with the appropriate cooling and voltage regulation.</p>
<p>Nvidia also included a backplate on the GTX 980 to help cool the back of the card more evenly. A section of the backplate near the power connectors can be removed, however, to allow proper airflow into the fan when cards run in a tight SLI configuration with two or more cards.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_Front.jpg" rel="lightbox-4"><img class="size-medium wp-image-38929 aligncenter" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_Front-600x276.jpg" alt="NVIDIA GeForce GTX 980" width="600" height="276" /></a></p>
<p><img class="aligncenter size-medium wp-image-38928" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_BackPiece-600x366.jpg" alt="NVIDIA_GeForce_GTX_980_BackPiece" width="600" height="366" /></p>
<p>Below, you can see the GTX 980 with the fan shroud removed but with the GPU heatsink, memory heatsink and fan still attached.</p>
<p><img class="aligncenter size-medium wp-image-38931" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_FrontNoShroud-600x399.jpg" alt="NVIDIA_GeForce_GTX_980_FrontNoShroud" width="600" height="399" /></p>
<p>Once the GPU heatsink is removed you can see the bare GPU with the memory heatsink and fan (which are one assembly).</p>
<p>&nbsp;</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_FrontFan.jpg" rel="lightbox-5"><img class="aligncenter size-medium wp-image-38930" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_FrontFan-600x399.jpg" alt="NVIDIA_GeForce_GTX_980_FrontFan" width="600" height="399" /></a></p>
<p>Then, once the whole assembly is removed, you can see the GPU, memory chips, power phases and the various PCB markings. These show that Nvidia included only 5 power phases on the GTX 980 even though the PCB can accommodate up to 7, which suggests that seriously overclocked versions using the reference PCB may already be available at launch.<a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_FrontNoShroud.jpg" rel="lightbox-6"><br />
</a> <a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_FrontPCB.jpg" rel="lightbox-7"><img class="aligncenter size-medium wp-image-38932" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_FrontPCB-600x399.jpg" alt="NVIDIA_GeForce_GTX_980_FrontPCB" width="600" height="399" /></a></p>
<p>The card also features three DisplayPort 1.2 connectors, a dual-link DVI connector and an HDMI 2.0 connector, giving you multiple ways to drive 4K displays. While Nvidia quotes support for resolutions up to 5K, HDMI 2.0 and DisplayPort 1.2 each top out at 4K, so the maximum resolution per display is still 4096 x 2160.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/980Back_98.jpg" rel="lightbox-8"><img class="aligncenter size-medium wp-image-38965" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/980Back_98-600x232.jpg" alt="980Back_98" width="600" height="232" /></a></p>
<h2>Software</h2>
<p>During Nvidia&#8217;s recent Editor&#8217;s Day for the GTX 980, an event used to brief the press on upcoming products, Nvidia showed off a lot of things that directly and indirectly involve the card. Many of the GTX 980&#8217;s advancements come in the form of software, including DirectX 12 and DirectX 11.3. Notably, Nvidia was already running a DirectX 12 port of a Fable demo on two GTX 980s.</p>
<p>Nvidia made four big announcements about the GTX 980 outside of DX 12 and DX 11.3, pertaining to Nvidia&#8217;s own VXGI, MFAA and DSR technologies and its advancements with HMDs (head-mounted displays) like Oculus VR&#8217;s headset.</p>
<p>MFAA, or Multi-Frame Sampled Anti-Aliasing, is Nvidia&#8217;s own technique for delivering higher AA visual quality at only a few percentage points of performance cost over a lower-quality MSAA mode. Essentially, Nvidia claims to deliver 4X MSAA-level quality at 2X MSAA performance (give or take a few percentage points). The feature is not quite finished yet, however, and will be enabled in a future driver.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/MFAA.jpg" rel="lightbox-9"><img class="aligncenter size-medium wp-image-38919" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/MFAA-600x333.jpg" alt="MFAA" width="600" height="333" /></a></p>
<p>In addition to MFAA, Nvidia has also implemented DSR (Dynamic Super Resolution), which is essentially smart supersampling with an applied filter. DSR tricks the game into thinking you have a much higher-resolution display (such as 4K), so the game serves higher-quality textures and renders at that resolution; DSR then filters the image back down to your monitor&#8217;s native resolution (such as 1080p). This generally yields much higher image quality. It&#8217;s great for both Nvidia and gamers: gamers get a better-looking game without spending money on a new monitor, and Nvidia can sell more powerful, more expensive graphics cards without consumers needing to buy expensive 4K displays.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/DSR.jpg" rel="lightbox-10"><img class="aligncenter size-medium wp-image-38904" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/DSR-600x336.jpg" alt="DSR" width="600" height="336" /></a></p>
<p>Nvidia also talked about its new VXGI technology, demonstrating a recreation of the moon landing rendered with the company&#8217;s voxel-based global illumination engine. VXGI leverages features of Maxwell&#8217;s hardware and of the game engine itself (Unreal Engine 4) to recreate light bouncing off objects more efficiently and realistically, in real time. VXGI isn&#8217;t implemented in any engine yet, but the expectation is that Unreal Engine 4 should have it by the fourth quarter of this year, and we could see it in games as soon as next year.</p>
<p>In addition to the VXGI material, Nvidia also took a stab at head-mounted displays and the latency problem. The company&#8217;s solution, dubbed Asynchronous Warp, is designed to halve the latency of VR gaming to improve the overall experience and responsiveness of the platform. Nvidia went step by step looking for ways to improve VR performance until it arrived at Asynchronous Warp.</p>
<p><img class="aligncenter size-medium wp-image-38913" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/HMDLatency-600x342.jpg" alt="HMDLatency" width="600" height="342" /></p>
<p><img class="aligncenter size-medium wp-image-38914" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/HMDLatency2-600x337.jpg" alt="HMDLatency2" width="600" height="337" /></p>
<p><img class="aligncenter size-medium wp-image-38915" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/HMDLatency3-600x337.jpg" alt="HMDLatency3" width="600" height="337" /></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/HMDLatencyAsyncWarp.jpg" rel="lightbox-11"><img class="aligncenter size-medium wp-image-38916" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/HMDLatencyAsyncWarp-600x338.jpg" alt="HMDLatencyAsyncWarp" width="600" height="338" /></a> <a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/HMDLatency3.jpg" rel="lightbox-12"><br />
</a>Asynchronous Warp takes the last scene rendered by the GPU and updates it based on the latest head-position data from the VR sensor. By warping the rendered image late in the pipeline to more closely match the current head position, Nvidia avoids discontinuities between head movement and action on screen while also dramatically reducing perceived latency. We haven&#8217;t tested this ourselves yet, but it is a pretty drastic leap forward for VR if it can be applied across the VR landscape.</p>
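<p>A minimal sketch of the warp math, under simplifying assumptions of our own (pure yaw rotation, a shift-only warp valid near the screen centre; the FOV, buffer width and pose numbers are all made up), looks like this:</p>
<pre><code>// Toy late-warp calculation (illustrative only, not Nvidia's implementation).
#include &lt;cmath&gt;
#include &lt;cstdio&gt;

int main() {
    const double kPi     = 3.14159265358979;
    const double fovRad  = 90.0 * kPi / 180.0;  // horizontal FOV (assumed)
    const double widthPx = 1920.0;              // eye-buffer width (assumed)

    // Pose the frame was rendered with, and the freshest pose from the sensor.
    double yawAtRender = 10.0 * kPi / 180.0;
    double yawLatest   = 10.8 * kPi / 180.0;

    // For a pure rotation, near the screen centre one radian of yaw moves the
    // image by (width / 2) / tan(fov / 2) pixels; shift the old frame by that
    // amount just before scan-out instead of waiting for a new render.
    double pxPerRad = (widthPx / 2.0) / std::tan(fovRad / 2.0);
    double shiftPx  = (yawLatest - yawAtRender) * pxPerRad;

    std::printf("re-present last frame shifted %.1f px\n", shiftPx);
    return 0;
}</code></pre>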
<h2>Performance</h2>
<p>For performance, we looked at the GTX 980&#8217;s synthetic, compute and gaming benchmarks to evaluate whether it really is as significant an improvement over the GTX 680, and possibly even the GTX 780 Ti. After all, Nvidia wouldn&#8217;t be naming this card the GTX 980 unless it could perform accordingly.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/GTX980_980.jpg" rel="lightbox-13"><img class="aligncenter size-medium wp-image-38966" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/GTX980_980-600x337.jpg" alt="GTX980_980" width="600" height="337" /></a></p>
<p>The testbed consisted of an Intel Core i7 4960X cooled by a Corsair H100 on a Gigabyte X79 motherboard with 16 GB of DDR3 2400 MHz memory along with a Thermaltake 1475W Gold PSU and Patriot 128GB SSD all sitting atop a Dimastech Hard Bench.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/3DMark-Fire-Strike-Extreme.jpg" rel="lightbox-14"><img class="aligncenter size-medium wp-image-38899" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/3DMark-Fire-Strike-Extreme-600x293.jpg" alt="3DMark Fire Strike Extreme" width="600" height="293" /></a></p>
<p>First, we tested 3DMark using the Fire Strike Extreme test to get the best idea of high-end performance against other cards. Here, the GTX 980 fell between two GTX 680s in SLI and two 7970s in CrossFireX. It did beat the GTX 780 Ti, and it proved to be more than twice as fast as the GTX 680, which is essentially what Nvidia claimed throughout its presentations.</p>
<p>After 3DMark, we also wanted to take a look at the Unigine set of synthetic benchmarks with Unigine&#8217;s Heaven and Valley benchmarks.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/Unigine-Heaven-4.0-Benchmark.jpg" rel="lightbox-15"><img class="aligncenter size-medium wp-image-38935" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/Unigine-Heaven-4.0-Benchmark-600x265.jpg" alt="Unigine Heaven 4.0 Benchmark" width="600" height="265" /></a></p>
<p>&nbsp;</p>
<p>As you can see from Unigine Heaven, the GTX 980 outperformed the GTX Titan and R9 290 by a fairly healthy margin and sat somewhere close to the HD 7970 GHz editions in CrossFire. Obviously this is a single GPU, but the fact that it falls within the realm of multi-GPU performance is awesome on its own.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/UnigineValley.jpg" rel="lightbox-16"><img class="aligncenter size-medium wp-image-38955" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/UnigineValley-600x266.jpg" alt="UnigineValley" width="600" height="266" /></a></p>
<p>In the Unigine Valley benchmark, we saw a much less drastic performance difference, with the GTX 980 essentially falling between the GTX 780 and GTX Titan while still well outperforming AMD&#8217;s Hawaii-based R9 290.</p>
<p>Following those benchmarks, we also took a look at two OpenCL benchmarks to see how Maxwell stacks up against AMD and how much Nvidia has improved over the previous Kepler generation. There was much talk that Nvidia had improved its OpenCL performance from one generation to the next, so it was interesting to see whether that was true and by how much. We used LuxMark 2.0 and CompuBench 1.5 for our OpenCL testing.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/LuxMarkOpenCL.jpg" rel="lightbox-17"><img class="aligncenter size-medium wp-image-38917" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/LuxMarkOpenCL-600x255.jpg" alt="LuxMarkOpenCL" width="600" height="255" /></a></p>
<p>In LuxMark, the GTX 980 performed fantastically, showing that it was faster than two GTX Titans and an R9 290. Of course, it wasn&#8217;t as fast as three GTX Titans or multiple 7970s, a 7990 or an R9 295X2, but I suspect that multiple GTX 980 GPUs could give AMD a run for their money since all of the faster AMD cards are multi-GPU.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/Compubench-1.5.jpg" rel="lightbox-18"><img class="aligncenter size-medium wp-image-38901" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/Compubench-1.5-600x236.jpg" alt="Compubench 1.5" width="600" height="236" /></a></p>
<p>In CompuBench we saw some interesting results, with the GTX 980 trading punches with the R9 290X, beating it in some OpenCL tests and losing in others. If anything, the GeForce GTX 980 shows that Nvidia is a far more capable OpenCL competitor to AMD than the GTX 780 Ti ever was.</p>
<p>Following those synthetic benchmarks, we ran a series of 4K benchmarks to see how the GTX 980 stacks up against the most stressful gaming environments. In our tests, we played Battlefield 4, Crysis 3 and Counter Strike: Global Offensive at varying levels of detail.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/Battlefield-4-Benchmark.jpg" rel="lightbox-19"><img class="aligncenter size-medium wp-image-38900" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/Battlefield-4-Benchmark-600x247.jpg" alt="Battlefield 4 Benchmark" width="600" height="247" /></a></p>
<p>In Battlefield 4, we can clearly see that the GTX 980 outperforms the GTX 780 Ti as well as the R9 290, but it still falls well short of the monstrous $1,500 R9 295X2. The GTX 980 nonetheless delivered thoroughly playable frame rates and never dipped below 30 FPS in our measurements.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/Crysis-3-4K-Benchmarks.jpg" rel="lightbox-20"><img class="aligncenter size-medium wp-image-38902" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/Crysis-3-4K-Benchmarks-600x276.jpg" alt="Crysis 3 4K Benchmarks" width="600" height="276" /></a></p>
<p>&nbsp;</p>
<p>In Crysis, we once again saw the GTX 980 outperform the GTX 780 Ti and the R9 290, but it still struggled to keep up with the R9 295X2 (which is triple the price). This is primarily due to a lack of memory and memory bandwidth at those settings. So if you want to run Crysis 3 at Very High settings with 4x MSAA, you&#8217;ll probably need a second GPU to get reasonably playable frame rates.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/CSGO-Benchmark.jpg" rel="lightbox-21"><img class="aligncenter size-medium wp-image-38903" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/CSGO-Benchmark-600x275.jpg" alt="CSGO Benchmark" width="600" height="275" /></a></p>
<p>&nbsp;</p>
<p>In Counter Strike: Global Offensive, we weren&#8217;t expecting to see anything but triple-digit FPS, but what matters is that the GTX 980 beats out the R9 290 and 780 Ti in 4K performance and hit the 300 FPS cap at times. If you want the ultimate 4K gaming experience in CSGO, you can get it with any of these cards, but the GTX 980 does it at a fraction of the power.</p>
<h2>Power and Overclocking</h2>
<p>At idle, the card ran at about 10% of TDP, or 16W, and drew up to 90% of TDP, or 148W, under most gaming scenarios we measured. The card never went over 80C and idled at 36C under normal usage. Both maximum and idle temperatures may read higher than expected because a heatwave pushed ambient temperatures above normal during testing.</p>
<p>Last but not least was overclocking, which proved more surprising than anyone would have expected. Sure, this is a very low-power card with plenty of inbound power to spare, but the overclocks achieved were simply mind-blowing. To validate each overclock, we ran 3DMark Fire Strike Extreme.</p>
<p>We were able to push the card to a +260 offset on the GPU base clock and +100 on the memory frequency. As a result, the base clock rose to 1,387 MHz with a boost clock of a whopping 1,553 MHz, something we have never seen from an air-cooled GPU (yes, the fans were at 100% at that point). The resulting 3DMark Fire Strike Extreme scores were astonishing. We&#8217;ve also included some of the other overclocks achieved on the way to the maximum.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/GTX-980-OC-1388.jpg" rel="lightbox-22"><img class="aligncenter size-medium wp-image-38912" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/GTX-980-OC-1388-600x442.jpg" alt="GTX 980 OC 1388" width="600" height="442" /></a></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/GTX-980-3DMark-Overclocking.jpg" rel="lightbox-23"><img class="aligncenter size-medium wp-image-38909" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/GTX-980-3DMark-Overclocking-600x316.jpg" alt="GTX 980 3DMark Overclocking" width="600" height="316" /></a></p>
<p>As you can see above, the overclocked GTX 980 actually outperforms two Radeon HD 7970 GHz Editions in CrossFireX, as well as every other card anywhere near it; the only configurations faster are two GTX Titans in SLI and an R9 295X2. This is also done at a very small amount of power, 206W to be exact, which leaves about 19W of headroom below the card&#8217;s 225W limit. As such, expect manufacturers to ship even more aggressively overclocked versions of the GTX 980 that can very likely be pushed further still.</p>
<h2>Conclusion</h2>
<p>The GTX 980 is an absolutely stunning graphics card that delivers on many of Nvidia&#8217;s promises (namely 2x-plus the performance of the GTX 680) and does so at an amazingly low power draw. But that&#8217;s not even the best part: Nvidia released this card today at an even more competitive price of $549, which is why AMD&#8217;s 290X recently dropped from $549 to $449. Keep in mind that even though the R9 290X is cheaper, it still draws more power and won&#8217;t overclock anywhere near as well as this card.</p>
<p>Nvidia is also releasing a cost-down version of the GTX 980 in the GTX 970, which is understandably a somewhat slower card at $329. Unfortunately, we weren&#8217;t sent one for testing, so we can&#8217;t tell you exactly how much slower it is, but it may be worth considering if the GTX 980 is too rich for your blood.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_3Qtr.jpg" rel="lightbox-24"><img class="aligncenter size-medium wp-image-38923" src="http://www.brightsideofnews.com/wp-content/uploads/2014/09/NVIDIA_GeForce_GTX_980_3Qtr-600x473.jpg" alt="NVIDIA_GeForce_GTX_980_3Qtr" width="600" height="473" /></a></p>
<p>Nvidia has without a doubt hit a home run with the GTX 980 and Maxwell, and it will be interesting to see how AMD answers this astounding performance and power improvement over the previous generation. This may not be a huge upgrade for anyone running a GTX 780 Ti, but it is a serious upgrade for almost any other gamer. On top of that, the GTX 780 Ti is a $700 graphics card, and here you&#8217;re getting better performance at significantly lower wattage for much less money.</p>
<p>The GTX 980 is a great piece of GPU engineering and a must-buy for anyone shopping for a new high-end graphics card this holiday season. It only makes us wonder what will be possible once Nvidia unleashes the full-blown GM-210 Maxwell on the world, hopefully next year. As such, this card wins our Editor&#8217;s Choice Award and an immediate buy recommendation.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/">GeForce GTX 980 Review: More Performance at Lower Power</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intel and DirectX 12’s Big Day Out: Intel Chats On The Intel-Microsoft API Partnership</title>
		<link>http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/</link>
		<comments>http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/#comments</comments>
		<pubDate>Mon, 25 Aug 2014 17:13:52 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Interviews]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Andrew Lauritzen]]></category>
		<category><![CDATA[DirectX]]></category>
		<category><![CDATA[DirectX 12]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Microsoft]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=38038</guid>
		<description><![CDATA[<p>For all the talk from AMD about Mantle being revolutionary for game developers and consumers, for a while it seemed to be forgotten that AMD doesn’t ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/">Intel and DirectX 12’s Big Day Out: Intel Chats On The Intel-Microsoft API Partnership</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1201" height="793" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/IntelLogo1.jpg" class="attachment-post-thumbnail wp-post-image" alt="Intel Logo" /></p><p>For all the talk from <a href="http://www.google.com/finance?cid=327">AMD</a> about Mantle being revolutionary for game developers and consumers, for a while it seemed to be forgotten that AMD doesn’t have a monopoly on the competitive advantage Mantle promises. DirectX, with its near universal adoption amongst developers, is fully capable of offering the low overhead and close to the metal programming environment that Mantle promises.</p>
<p><a href="http://www.brightsideofnews.com/2014/08/14/needs-mantle-directx-12-shows-big-performance-gains-siggraph/">Earlier this month</a> at SIGGRAPH in Vancouver <a href="http://www.google.com/finance?q=NASDAQ%3AMSFT&amp;ei=ivH7U6jFKsS8kgXky4GACQ">Microsoft</a> proved just that, running DirectX 12 on an <a href="http://www.google.com/finance?q=NASDAQ%3AINTC&amp;ei=pPH7U4jrE8qnkgW3-oDwCA">Intel</a>-powered Surface Pro 3. During the benchmarks displayed at Intel’s booth on the show floor, DirectX 12 provided a fairly serious performance gains over the previous version.</p>
<p>Last week <i>Bright Side of News</i> caught up with Andrew Lauritzen, a graphics software engineer in Intel&#8217;s Advanced Technology Group, to discuss what sort of gains DirectX 12 will bring over its predecessor.</p>
<p><b><i>Bright Side of News: </i></b>What kind of advantages does DirectX 12 have over Mantle?</p>
<p>Mantle is something that only runs on AMD&#8217;s GPUs right now, so it can&#8217;t be compared directly to DirectX 12. In terms of what our hardware does with DirectX 12, we compare against DirectX 11 so we can show the benefits. The goals of Mantle are similar, but those things [the DirectX 12 vs. Mantle debate] could only be compared on AMD&#8217;s hardware once they have drivers for both.</p>
<p><b><i>BSN*: </i></b>What about comparing DirectX 12 to OpenGL and OpenCL?</p>
<p>OpenGL is structured similarly to Direct3D 10 and 11. It has a lot of the same overheads as those APIs had. Direct3D 12 is a new generation of APIs that gives a lot more explicit access to the hardware. Folks writing game engines are really the main people who have asked for it, and with it they can write more efficient rendering algorithms than they have been able to in the past.</p>
<p>OpenCL is more of a compute API. It’s not really for graphics. It’s more akin to the DirectCompute part of the APIs.</p>
<p><b><i>BSN*: </i></b>How long has Intel been working on the DirectX 12 effort with Microsoft?</p>
<p>That’s kind of a grey area, it’s like asking ‘since when has it been called DirectX 12?’. The ideas that have crystallized into DirectX 12 we’ve been discussing for many years. In fact, as many game developers will say, this has been an issue that’s been on their mind for many years as well and they’ve been giving feedback to both us and Microsoft with the goal of making things better.</p>
<p>As far as when those efforts crystallized into DirectX 12, well it was announced at GDC &#8212; that’s the most we can say. As long as we’ve been working on DirectX, we’ve been collaborating with Microsoft. That includes DirectX 11, and even before that. Discussions of this sort date back many years. It’s just a question of when they turned from discussions to an explicit plan.</p>
<p><b><i>BSN*</i></b><b>: </b>When did developers begin to request features that made it into DirectX 12, such as low overhead?</p>
<p>The requests go back as long as I’ve worked in the industry. For DirectX 10, one of the main goals was to lower the overhead compared to DirectX 9. It did succeed, in relative terms, but at the time the combination of hardware and software as well as other factors meant that they couldn’t make it go as far as they have with DirectX 12 in terms of making it go low overhead.</p>
<p>In DirectX 11, they tried to do the multithreading thing again. That was one of the big features of DirectX 11. But, it turned out again, that because of the number of API and driver issues they really never saw the benefit of that they were hoping for. It turned out not to be a huge win and not be very scalable. Really, with 12, what they were able to do is go back to the drawing board in a new era of both GPUs and a lot shared engine technology across different game developers. It made a lot more sense to basically go a significant step lower level than they had in the past.</p>
<p><b><i>BSN*:</i></b> Why is the performance jump so much bigger from DirectX 11 to 12 than it was from 10 to 11?</p>
<p>There are things around how hazards were tracked in the API [for more on that see <a href="https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX%20Advancements%20in%20the%20Many-Core%20Era%20Getting%20the%20Most%20out%20of%20the%20PC%20Platform.pdf">this</a> and <a href="https://software.intel.com/en-us/blogs/2014/08/07/direct3d-12-overview-part-4-heaps-and-tables">this</a>]. There were also issues with how <a href="http://msdn.microsoft.com/en-us/library/sf4e5x7z(v=vs.110).aspx">Graphics State</a> was handled in the API, which made it a difficult problem for drivers to automatically keep the API safe, because it was a safer API before. DirectX 12 moves some of this into the user&#8217;s hands. So it&#8217;s a less safe API in terms of getting consistent, correct rendering, but it lets game developers do those things efficiently, since they don&#8217;t always have to handle the most general cases the way a driver does.</p>
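<p>To make &#8220;moving hazard tracking into the user&#8217;s hands&#8221; concrete, here is a minimal C++ sketch (our own illustration, not code from Intel or Microsoft) using the explicit resource barriers Direct3D 12 exposes; <code>cmdList</code> and <code>tex</code> are assumed to exist. Where a Direct3D 11 driver would detect this read-after-write hazard automatically, the Direct3D 12 application must declare the transition itself:</p>
<pre><code>#include &lt;d3d12.h&gt;

// Sketch only: 'cmdList' and 'tex' are assumed to be valid. In Direct3D 11
// the driver tracked this read-after-write hazard itself; in Direct3D 12 the
// application declares the transition explicitly before it samples a texture
// that was just used as a render target.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* tex) {
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = tex;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    // Omit this call and rendering may silently be wrong; the driver no
    // longer saves you, which is exactly the overhead that was removed.
    cmdList-&gt;ResourceBarrier(1, &amp;barrier);
}</code></pre>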
<p><b><i>BSN*: </i></b>From what you at Intel have seen, what’s the response been like so far to DirectX 12 from developers?</p>
<p>I think it’s fair to say that it’s been really positive so far. This is something that they’ve been wanting for a long time. A combination of experience, and the fact that instead of everyone designing their own engines we’re getting specialist [companies] that are really focused on writing their own graphics engines. Having that core set of specialists have really let us open up stuff that game developers really want.</p>
<p>In the past, the API was really designed for the mid-level case, as if every game were writing its own rendering code and handling the minutiae of how to drive a GPU efficiently. But now more and more people are using [a handful of] engines, so it makes sense to concentrate and optimize the technology in those engines, as they are used across so many different games.</p>
<p><b><i>BSN*: </i></b>It seems like Intel is trying to make its own integrated GPUs competitive with low-to-mid range discrete GPUs. Is this the case?</p>
<p>We’re always trying to make our GPUs the best they can be in a given form factor and power budget. Increasingly our chips are going into lower and lower power budget things. Increasingly as we target these power constrained devices it becomes more and more important that we optimize all of the parts of the system as much as possible. You can’t just get away with ‘Oh, we have lots of extra CPU power so we’ll just eat the overhead from that,’ we have to make sure to optimize all parts of the stack.</p>
<p><b><i>BSN*</i></b><b>: </b>Will we see DirectX 12 on Intel’s lowest power devices like mobile Broadwell?</p>
<p>Yes. All of our chips from Haswell onward will be DirectX 12 compatible. That includes Broadwell and any future chips.</p>
<p><b><i>BSN*: </i></b><b>Thanks for your time. </b></p>
<p><b><i>This interview has been edited for clarity and length. </i></b></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/">Intel and DirectX 12’s Big Day Out: Intel Chats On The Intel-Microsoft API Partnership</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/08/25/intel-directx-12s-big-day-intel-chats-intel-microsoft-api-partnership/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Mantle Goes Beta, Still Not Quite Open to All&#8230;</title>
		<link>http://www.vrworld.com/2014/05/01/mantle-goes-beta-still-quite-open/</link>
		<comments>http://www.vrworld.com/2014/05/01/mantle-goes-beta-still-quite-open/#comments</comments>
		<pubDate>Thu, 01 May 2014 17:37:13 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Graphics]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[AMD Radeon]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[Beta]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[DirectX 12]]></category>
		<category><![CDATA[Drivers]]></category>
		<category><![CDATA[DX12]]></category>
		<category><![CDATA[GDC]]></category>
		<category><![CDATA[Graphics Cards]]></category>
		<category><![CDATA[Low Level API]]></category>
		<category><![CDATA[Mantle]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=34835</guid>
		<description><![CDATA[<p>AMD&#8217;s Mantle API, since its inception has been considered to be a fairly exclusive program with AMD getting hundreds of requests (if not thousands) from ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/05/01/mantle-goes-beta-still-quite-open/">Mantle Goes Beta, Still Not Quite Open to All&#8230;</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>AMD&#8217;s Mantle API has, since its inception, been considered a fairly exclusive program, with AMD receiving hundreds (if not thousands) of requests from developers around the world to test Mantle out. Obviously, a company of AMD&#8217;s size isn&#8217;t yet capable of supporting thousands of developers. AMD is still struggling to achieve profitability and cannot commit enough engineering resources to the Mantle team to give the API the attention it needs. Yes, Mantle is a proprietary set of low-level APIs that gives game developers unparalleled flexibility, and that is why so many developers are excited to take a crack at it. Even though Mantle only works on AMD hardware (and probably will for the foreseeable future), DirectX 12 is simply too far down the road (18 months) for anyone to start thinking about much else. DirectX 12 will also have certain features available only in hardware, which means new DirectX 12-capable GPU architectures will be necessary to enable those features.</p>
<p><iframe src="//www.youtube.com/embed/sSY2KXBoro0" width="1280" height="720" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Today&#8217;s announcement is probably the biggest AMD has made around Mantle since it presented the API at GDC in March. Back then, AMD talked a lot about Mantle&#8217;s capabilities and games using Mantle, but said little about expanding the program, even though GDC was probably its most captive audience. In fact, <a href="http://developer.amd.com/mantle/" target="_blank">today&#8217;s announcement about the closed beta</a> should probably have been made at GDC, where AMD could have spoken directly with most of the interested developers. Alas, the beta was probably not ready then, and AMD waited until today. Either way, you can now head over to an actual beta page for Mantle and access the NDA SDK if AMD grants you access. In all honesty, Mantle has technically been a closed-beta program for quite some time, and this announcement doesn&#8217;t really change much beyond creating a public page.</p>
<p>You still have to email mantleaccess [at] amd [dot] com to even get access to the page and/or SDK, and you have to supply the following info:</p>
<ul>
<li>Name of company</li>
<li>Name and email of contact point for Mantle access</li>
<li>Game title(s) or codenames for which you want to evaluate Mantle</li>
<li>Reasons for requesting Mantle access</li>
</ul>
<p>Once you&#8217;ve emailed that address with that info, you MIGHT get access to Mantle and COULD start developing with it, but there are simply no guarantees, since AMD is still selecting who it does and doesn&#8217;t want using Mantle. AMD claims it already has 40 different developers using Mantle under this new beta program and is looking to expand further, though we don&#8217;t know exactly how many developers AMD can support simultaneously and still support well. I want Mantle to be successful and broadly accepted, but in the end it is still a proprietary graphics API, which means it will only work on AMD&#8217;s GCN cores and nobody else&#8217;s, and I&#8217;m not sure how many developers will want to develop games for both Mantle and DirectX 12.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/05/01/mantle-goes-beta-still-quite-open/">Mantle Goes Beta, Still Not Quite Open to All&#8230;</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/05/01/mantle-goes-beta-still-quite-open/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
