<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; GDDR5</title>
	<atom:link href="http://www.vrworld.com/tag/gddr5/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2015 07:54:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Top Five Graphics Cards of 2014</title>
		<link>http://www.vrworld.com/2014/11/19/top-five-graphics-cards-2014/</link>
		<comments>http://www.vrworld.com/2014/11/19/top-five-graphics-cards-2014/#comments</comments>
		<pubDate>Thu, 20 Nov 2014 07:19:38 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Guides]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Holiday Shopping Guides]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[4K]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[AMD Radeon R9 285]]></category>
		<category><![CDATA[AMD Radeon R9 290X]]></category>
		<category><![CDATA[AMD Radeon R9 295X2]]></category>
		<category><![CDATA[Black Friday]]></category>
		<category><![CDATA[Buying Guide]]></category>
		<category><![CDATA[Deals]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Nvidia GeForce GTX 970]]></category>
		<category><![CDATA[Nvidia GeForce GTX 980]]></category>
		<category><![CDATA[sale]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=41928</guid>
		<description><![CDATA[<p>We take a look at the best graphics cards of 2014 and give you an idea of which are the five best of the year in terms of value, performance and price.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/11/19/top-five-graphics-cards-2014/">Top Five Graphics Cards of 2014</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1500" height="500" src="http://cdn.vrworld.com/wp-content/uploads/2014/11/Graphics-Cards.jpg" class="attachment-post-thumbnail wp-post-image" alt="Graphics Cards" /></p><p>Some fantastic graphics cards were released this year, at a wide range of price points and performance levels. We are going to take a look at the best GPUs available in 2014 and explain why each made the cut. Keep in mind that some of these cards were released in late 2013 but remained relevant through most of 2014 and still are today. Without further ado, here are the five best graphics cards of 2014, listed in descending order from most expensive to least expensive; hopefully they will help you pick the right card for your budget. You can click on the image of each card below to find it at the posted price on Amazon.</p>
<h2 style="text-align: center;">AMD Radeon R9 295X2 &#8211; $799</h2>
<p><a href="http://www.amazon.com/gp/product/B00JS8JRHW/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=B00JS8JRHW&amp;linkCode=as2&amp;tag=brsiofne0e-20&amp;linkId=MAZOAYARX2EDKUXX"><img class="aligncenter" src="http://ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&amp;ASIN=B00JS8JRHW&amp;Format=_SL250_&amp;ID=AsinImage&amp;MarketPlace=US&amp;ServiceVersion=20070822&amp;WS=1&amp;tag=brsiofne0e-20" alt="" border="0" /></a><img class="aligncenter" style="border: none !important; margin: 0px !important;" src="http://ir-na.amazon-adsystem.com/e/ir?t=brsiofne0e-20&amp;l=as2&amp;o=1&amp;a=B00JS8JRHW" alt="" width="1" height="1" border="0" /></p>
<p>The AMD Radeon R9 295X2 was designed specifically for 4K gaming and to this day remains the best card for it, for a number of reasons. In <a title="AMD Radeon R9 295X2 Review: The Definitive 4K Gaming Graphics Card" href="http://www.brightsideofnews.com/2014/04/09/amd-radeon-r9-295x2-review-the-definitive-4k-gaming-graphics-card/" target="_blank">our review of the R9 295X2</a>, we found it to be one of the best ways to game at 4K with a single graphics card. What also makes the R9 295X2 so fantastic and worthy of this list is its liquid system, probably the most effective cooling solution ever fitted to a graphics card. The R9 295X2 is also a great card because instead of clocking the 290X GPUs down to keep the card thermally manageable, AMD actually clocked them higher, giving you a card faster than two R9 290Xs. Currently, you can find the <a href="http://www.amazon.com/gp/product/B00JS8JRHW/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=B00JS8JRHW&amp;linkCode=as2&amp;tag=brsiofne0e-20&amp;linkId=3QVQXTLEMM7JBUXT">R9 295X2 selling at retailers for around $799</a><img style="border: none !important; margin: 0px !important;" src="http://ir-na.amazon-adsystem.com/e/ir?t=brsiofne0e-20&amp;l=as2&amp;o=1&amp;a=B00JS8JRHW" alt="" width="1" height="1" border="0" />, a great deal when you consider that it launched earlier this year at $1,500. Some places are selling a <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16814129300&amp;cm_re=R9_295X2-_-14-129-300-_-Product" target="_blank">VisionTek R9 295X2 for $779</a>, but that&#8217;s just one card at one store, and $799 is a more realistic price because the VisionTek card is likely to sell out. But why, you may ask, is a card that launched back in April at $1,500 now selling at $799 in November, only seven months later? Because of the competition from the next card on our list.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/11/19/top-five-graphics-cards-2014/">Top Five Graphics Cards of 2014</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/11/19/top-five-graphics-cards-2014/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AMD Radeon R9 295X2 Review: The Definitive 4K Gaming Graphics Card</title>
		<link>http://www.vrworld.com/2014/04/09/amd-radeon-r9-295x2-review-the-definitive-4k-gaming-graphics-card/</link>
		<comments>http://www.vrworld.com/2014/04/09/amd-radeon-r9-295x2-review-the-definitive-4k-gaming-graphics-card/#comments</comments>
		<pubDate>Wed, 09 Apr 2014 17:55:31 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Reviews]]></category>
		<category><![CDATA[3840x2160]]></category>
		<category><![CDATA[4K]]></category>
		<category><![CDATA[4K Gaming]]></category>
		<category><![CDATA[500 Watts]]></category>
		<category><![CDATA[500w]]></category>
		<category><![CDATA[512-bit]]></category>
		<category><![CDATA[8GB]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[AMD Radeon]]></category>
		<category><![CDATA[AMD Radeon R9 295X2]]></category>
		<category><![CDATA[DisplayPort]]></category>
		<category><![CDATA[Dual GPU]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[Hawaii]]></category>
		<category><![CDATA[Islands]]></category>
		<category><![CDATA[memory bus]]></category>
		<category><![CDATA[power]]></category>
		<category><![CDATA[PSU]]></category>
		<category><![CDATA[Shader Cores]]></category>
		<category><![CDATA[Vesuvius]]></category>
		<category><![CDATA[Volcanic Islands]]></category>

		<guid isPermaLink="false">http://wp.bsne.ws/?p=34324</guid>
		<description><![CDATA[<p>AMD has been teasing their Radeon R9 295X2 codenamed Vesuvius for quite some time now, including a lengthy &#8216;viral&#8217; campaign that involved secret agents, semi-ambiguous ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/04/09/amd-radeon-r9-295x2-review-the-definitive-4k-gaming-graphics-card/">AMD Radeon R9 295X2 Review: The Definitive 4K Gaming Graphics Card</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1920" height="1000" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00381.jpg" class="attachment-post-thumbnail wp-post-image" alt="DSC_0038" /></p><p>AMD has been teasing its Radeon R9 295X2, codenamed Vesuvius, for quite some time now, including a lengthy &#8216;viral&#8217; campaign that involved secret agents, <a title="Viral Marketing for AMD Vesuvius" href="http://www.brightsideofnews.com/news/2014/3/19/amds-2betterthan1-vesuvius-viral-marketing-continues.aspx" target="_blank">semi-ambiguous packages of Volcanic Island water and chips</a> as well as <a title="AMD Teases new Dual GPU" href="http://www.brightsideofnews.com/news/2014/3/13/amd-teases-new-dual-gpu-card-with-2betterthan1-viral-ad.aspx" target="_blank">creepy photos of yours truly</a>. Now that the secret is out of the case, we can finally tell you about AMD&#8217;s new card and exactly what it is intended to do. First and foremost, this card&#8217;s sole purpose is to deliver a single-card 4K gaming experience, something that is currently impossible even with the latest crop of AMD&#8217;s Radeon R9 290X and Nvidia&#8217;s GeForce GTX 780 Ti. In this review, we&#8217;ll see whether or not AMD&#8217;s latest and greatest graphics card is capable of delivering true 4K gaming on a single card.</p>
<p>The R9 295X2 name itself doesn&#8217;t quite make sense to me when you consider that the card is really just two R9 290Xs in a single dual-GPU graphics card. I would&#8217;ve called it the R9 290X2, but perhaps I&#8217;m missing the point. When you consider that this card is effectively two R9 290Xs strapped onto a single board, and remember all of the thermal issues a single R9 290X had, it was obvious AMD needed to go with liquid cooling. And they did: they tapped Asetek for a custom cooling solution to cool both Hawaii GPUs in this Vesuvius card.</p>
<p>The Radeon R9 295X2 has both an LED-lit fan and an LED-lit logo between the two liquid cooling tubes.</p>
<p>&nbsp;</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_01051.jpg" rel="lightbox-0"><img class="aligncenter  wp-image-34313" alt="DSC_0105" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_01051.jpg" width="1152" height="574" /></a></p>
<p><span style="line-height: 1.5em;">The R9 295X2 came to us in a briefcase, which we teased in an earlier article, though we&#8217;re not actually sure what the retail packaging for this card will be since these were press samples. Judging by </span><a style="line-height: 1.5em;" href="http://rog.asus.com/314282014/gaming-graphics-cards-2/asus-announces-r9-295x2-graphics-card/" target="_self">ASUS&#8217; own packaging</a><span style="line-height: 1.5em;">, it doesn&#8217;t look like the retail card will come in a briefcase like ours did. What a shame.</span></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_9979_9101.jpg" rel="lightbox-1"><img class="aligncenter size-full wp-image-34336" alt="DSC_9979_910" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_9979_9101.jpg" width="910" height="490" /></a> <a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_0004_9101.jpg" rel="lightbox-2"><img class="aligncenter size-full wp-image-34335" alt="DSC_0004_910" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_0004_9101.jpg" width="910" height="910" /></a></p>
<p>Looking at the card itself in detail, you are getting a dual Hawaii ASIC graphics card with a maximum capacity of 8GB of GDDR5 memory. When you compare it side by side against the R9 290X and the R9 290, you can see exactly what you&#8217;re getting in terms of performance and raw specifications.</p>
<p>Furthermore, the compute performance, at 11.5 TFLOPS, is actually more than double the 5.6 TFLOPS that a single R9 290X is rated at, rather than the expected 11.2 TFLOPS, because the GPUs actually run at up to 1018 MHz instead of the R9 290X&#8217;s 1 GHz (the reason for the 295X2 naming?). The higher clocks were enabled by the cooling and by improved binning more than anything else, which we&#8217;ll talk about next. The overall 11.5 TFLOPS of compute comes from the card&#8217;s 5,632 cores/stream processors, which also enable a texture fill rate of up to 358.3 GT/s.</p>
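<p>For readers who want to check those numbers, here is a quick back-of-the-envelope sketch in Python. The two-FLOPs-per-core figure is the usual fused multiply-add assumption, and the per-GPU TMU count of 176 is our inference from the quoted fill rate, not a figure stated above:</p>

```python
# Sanity check of the quoted R9 295X2 figures.
# Assumptions (not from the article): 2 FLOPs/core/cycle (FMA),
# 176 texture units per Hawaii GPU.
cores = 5632          # stream processors across both Hawaii GPUs
clock_ghz = 1.018     # boost clock in GHz
flops_per_core = 2    # one fused multiply-add per core per cycle

tflops = cores * clock_ghz * flops_per_core / 1000
print(f"{tflops:.1f} TFLOPS")   # ~11.5, matching the quoted figure

tmus = 2 * 176                   # assumed texture units on the card
fill_rate = tmus * clock_ghz     # GTexels/s
print(f"{fill_rate:.1f} GT/s")   # ~358.3, matching the quoted figure
```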
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/CardSpecs1.jpg" rel="lightbox-3"><img class="aligncenter  wp-image-34294" alt="CardSpecs" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/CardSpecs1.jpg" width="1152" height="624" /></a> <a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/TransistorsAndShaders1.jpg" rel="lightbox-4"><img class="aligncenter  wp-image-34317" alt="TransistorsAndShaders" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/TransistorsAndShaders1.jpg" width="1152" height="603" /></a></p>
<p>&nbsp;</p>
<p>In order to combat the overheating and thermal throttling issues it had with the R9 290X, AMD approached Asetek to help water cool the entire graphics card. Asetek is very good at making self-contained liquid cooling systems and can build them at a scale of thousands, if not hundreds of thousands, of units. As a result, they helped AMD engineer a solution that would cool not just a single Hawaii GPU but two slightly higher-clocked Hawaii GPUs, a pretty tall order.</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/AsetekWatercooling11.jpg" rel="lightbox-5"><img class="aligncenter  wp-image-34337" alt="AsetekWatercooling" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/AsetekWatercooling11.jpg" width="1152" height="620" /></a></p>
<p>If you look at the card itself, there are a few major parts to the whole system: the metal backplate; the PCB with the GPUs and memory; the Asetek liquid cooling blocks and full-length cooling plate; and the metal cover and fan. There is also a heat exchanger (radiator) connected to the Asetek waterblock/pumps, and each block and pump is connected to the other. Some of you might ask: if the GPUs are already water cooled, why have a fan on the cover of the card? Because the fan&#8217;s purpose is to keep the memory chips cool, as well as the voltage regulators in the middle. Remember, voltage regulators are typically the first failure points of most dual-GPU cards.</p>
<p>&nbsp;</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/CardBreakDown1.jpg" rel="lightbox-6"><img class="aligncenter  wp-image-34292" alt="CardBreakDown" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/CardBreakDown1.jpg" width="1152" height="575" /></a> <a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/CardDimensions1.jpg" rel="lightbox-7"><img class="aligncenter  wp-image-34293" alt="CardDimensions" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/CardDimensions1.jpg" width="1152" height="508" /></a></p>
<p>As a result, you get a very powerful graphics card that is fully contained and self-sufficient, as long as you give it 500 watts of power. That means the card has two PCI Express 8-pin power connectors, and it also means you need a PSU with enough current on the +12V rail to power it safely and stably. For this purpose, we will be using a Thermaltake Toughpower XT 1475W power supply. Obviously you don&#8217;t need that much unless you plan to run two of these cards in a single system.</p>
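<p>To put that +12V requirement in concrete terms, a rough sketch of the current the card alone draws (this is simple Ohm&#8217;s-law arithmetic on the 500W figure, not a spec from AMD, and it ignores whatever the rest of the system needs):</p>

```python
# Rough +12V rail current needed for the R9 295X2 by itself.
card_watts = 500      # the card's stated maximum power draw
rail_volts = 12.0     # PCIe power is delivered on the +12V rail

amps_needed = card_watts / rail_volts
print(f"{amps_needed:.1f} A")   # ~41.7 A for the card alone
```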
<p>&nbsp;</p>
<p>The back of the card is populated with four MiniDP connectors and one Dual-Link DVI connector, netting you a total of five monitors at up to 2560&#215;1600 resolution if you use all five. However, you could theoretically drive three 4K monitors off of this card; realistically you&#8217;d want two of these cards in order to drive a triple-4K display setup smoothly. Gaming across 25 million pixels? Everything is possible.</p>
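<p>If you&#8217;re wondering where that 25 million figure comes from, it&#8217;s just three UHD 4K panels multiplied out:</p>

```python
# Pixel count for a triple-4K (UHD) display setup.
width, height = 3840, 2160   # UHD 4K resolution per panel
panels = 3

total = width * height * panels
print(f"{total:,} pixels")   # 24,883,200 -> roughly 25 million
```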
<p>That takes us to the GPU&#8217;s overall performance: evaluating its capability as a 4K gaming card and whether or not it is actually worth the $1,499 price tag. Not just that, but how does it utilize the 8GB of GDDR5, and does it run anywhere near as hot as the R9 290X did, given the new cooling solution?</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00301.jpg" rel="lightbox-8"><img class="aligncenter  wp-image-34303" alt="DSC_0030" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00301.jpg" width="1152" height="533" /></a></p>
<p>For our testing, we ran synthetic benchmarks, OpenCL benchmarks and game benchmarks, all with both Nvidia&#8217;s and AMD&#8217;s latest graphics drivers, and with AMD&#8217;s Mantle enabled where possible (Battlefield 4). The test system was powered by a Core i7-4960X with 16GB of DDR3-1600 on an X79 board, with the cards in an x16 PCIe slot. The PSU, as mentioned earlier, was a Thermaltake Toughpower XT 1475W, and we used a Sharp PN-K321 monitor for 4K. We did not test at any resolution other than 4K, because it seems ridiculous to spend $1,500 on a graphics card and use it for anything else. Theoretically, you could dedicate it to compute, and in that case our OpenCL results have you covered.</p>
<p>The tests that we ran in our benchmarking were 3DMark Fire Strike Extreme, Kishonti Compubench (CLBenchmark), Luxmark v2.0, Battlefield 4, Crysis 3 and Counter Strike: Global Offensive.</p>
<p>In our testing, we compared the R9 295X2, the R9 290 and the GTX 780 Ti. We had some issues getting the last-generation Radeon HD 7990 to work properly with our 4K monitor in the game tests, so we had to throw it out and will have to figure out a solution to get it to work in the future. Unfortunately, that issue showed AMD&#8217;s drivers at their worst. However, we do have HD 7990 numbers in our 3DMark and OpenCL benchmarks to give an idea of how far performance has come since that card.</p>
<p><span style="font-weight: bold;">Synthetic Benchmarks</span></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/3DmarkFireStrikeExtremeFixed1.jpg" rel="lightbox-9"><img class="aligncenter size-full wp-image-34318" alt="3DmarkFireStrikeExtremeFixed" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/3DmarkFireStrikeExtremeFixed1.jpg" width="1012" height="599" /></a></p>
<p>In 3DMark, the first test we ran, the R9 295X2 came in as the fastest single card we&#8217;ve ever tested, by a huge margin. It scored 8403 in 3DMark Fire Strike Extreme, besting our GTX Titan SLI setup at 7391 and the HD 7990 at 4639. It also nearly doubled the R9 290&#8217;s 4631, though since a 290X would have scored higher, the R9 295X2 isn&#8217;t quite double the performance of a single Hawaii GPU in 3DMark. Even so, it is still the fastest multi-GPU setup we&#8217;ve tested in 3DMark.</p>
<p><span style="font-weight: bold;">Compute Performance</span></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/KishontiCLBench1.jpg" rel="lightbox-10"><img class="aligncenter size-full wp-image-34322" alt="KishontiCLBench" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/KishontiCLBench1.jpg" width="919" height="466" /></a></p>
<p>In Kishonti&#8217;s CompuBench, the new name for their CLBenchmark (since they&#8217;ve added RenderScript on mobile), we wanted to see how each card handled compute, as Nvidia has traditionally struggled against AMD here, and we found some really interesting results. In OpenCL compute, AMD&#8217;s Radeon R9 295X2 actually outperformed three of Nvidia&#8217;s GTX Titans, as well as the short-lived Radeon HD 7990. The card scored 686,475 points, higher than the three Titans&#8217; 646,233 and the Radeon HD 7990&#8217;s 560,695. What is interesting, though, is that the R9 295X2 scored more than double the R9 290&#8217;s 333,444, suggesting it scores roughly double that of the R9 290X.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/LuxMarkv21.jpg" rel="lightbox-11"><img class="aligncenter size-full wp-image-34323" alt="LuxMarkv2" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/LuxMarkv21.jpg" width="913" height="440" /></a></p>
<p>The story was a little different in Luxmark v2.0; however, in the end the R9 295X2 was still much faster than all of the competing GPU solutions, and AMD actually took the top three slots even against three of Nvidia&#8217;s GTX Titans. The R9 295X2 is a really powerful card for compute, and that is quite evident here with its score of 5350 against the 7970 GHz Editions in CrossFireX (4679) and the HD 7990 (4475).</p>
<p><span style="font-weight: bold;">Real World Game Performance</span></p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/BF4_4K1.jpg" rel="lightbox-12"><img class="aligncenter size-full wp-image-34319" alt="BF4_4K" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/BF4_4K1.jpg" width="769" height="353" /></a></p>
<p>In Battlefield 4 we wanted a playable scenario for all of the cards we tested, so we set the settings to the High preset rather than the highest preset, which I believe is Ultra. We also opted for 4x MSAA instead of 8x MSAA. As a result, all of the cards delivered playable frame rates, with the R9 295X2 averaging 81 FPS compared to the R9 290 and 780 Ti, which both averaged 40 FPS. Considering the R9 295X2&#8217;s 81 FPS average, we would likely be able to crank the settings up to the Ultra preset and still have playable performance at near maximum settings. Thanks to Mantle, AMD&#8217;s R9 295X2 delivers an impressive amount of performance in BF4 at 4K. During our benchmarking, the R9 295X2 used a maximum of 6.7GB of the GPU&#8217;s available 8GB of memory, indicating some wiggle room for a graphics settings bump, especially with Mantle, which would probably bring the card closer to 8GB.</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/Crysis3_4K1.jpg" rel="lightbox-13"><img class="aligncenter" alt="Crysis3_4K" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/Crysis3_4K1.jpg" width="763" height="394" /></a></p>
<p>With Crysis 3 we just wanted to make the graphics cards cry, and to see what would happen if we ran the game at nearly the maximum settings possible while still keeping the R9 295X2 playable. We achieved this by setting the game to Very High and dialing the MSAA to 4x instead of 8x. As a result, the R9 295X2 averaged 29 FPS while the 780 Ti managed 20 FPS and the R9 290 18 FPS. Even though the game looked absolutely stunning, at these settings in 4K it was only playable on the R9 295X2 and would&#8217;ve required two 780 Tis or two R9 290s to get anywhere near similar performance. Most interesting was that this game pushed GPU memory utilization far beyond what Battlefield 4 did: at its peak, the R9 295X2 used 7.5GB of its 8GB of memory, indicating that we had effectively maxed out the card&#8217;s capabilities at 4K.</p>
<p>Thus, if you think that 8GB of video memory is overkill for a graphics card, think again.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/CSGO_4k_6891.jpg" rel="lightbox-14"><img class="aligncenter size-full wp-image-34321" alt="CSGO_4k_689" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/CSGO_4k_6891.jpg" width="760" height="387" /></a></p>
<p>In testing Counter Strike: Global Offensive, we knew we weren&#8217;t going to get anything crazy in terms of graphics, but we did know we would be able to easily play the game at 4K on all of our cards and get frame rates high enough to differentiate between the cards more effectively. Not to mention that this is a Source Engine game, while the others run on Frostbite and CryEngine. Personally, I play a lot at 4K, so this interested me, and since Counter Strike: Global Offensive is on its way to being one of the most played games on Steam, it only makes sense to test it in 4K and see what kind of performance it gets. We also set the game at its maximum possible settings to lower the FPS as much as possible and improve the overall visual quality at 4K.</p>
<p>No surprise: at max settings all of these cards managed nearly 150 FPS or better. The R9 295X2 delivered an average of 250 FPS (300 is the cap set by the Source Engine) while the R9 290 managed 163 FPS and the 780 Ti 148 FPS. So with the R9 295X2 you&#8217;re getting almost 100 FPS over the single GPUs at 4K. In this scenario the R9 295X2&#8217;s memory utilization was, as expected, much lower than in the other two games, at only 3.6GB of the 8GB of memory.</p>
<p>That wraps up our performance testing. But before we move on to thermals, we wanted to explain why we kept mentioning memory utilization: to illustrate the point of going with an 8GB card rather than a 16GB card. 8GB is pretty much enough in almost any scenario, unless you believe you are going to be running Crysis 3 at 4K. Anyone trying to sell you a card with more than 8GB is probably selling you more memory than you need, and if you&#8217;re not running 4K you probably don&#8217;t need more than 3GB or 4GB. Just keep that in mind as add-in board partners try to charge more for memory that you&#8217;ll never use.</p>
<p>We also want to note that even with AMD&#8217;s latest drivers and a custom CrossFire profile, certain benchmarks simply would not engage both GPUs. We had some problems configuring the 4K monitor with the R9 295X2 as well, though we were able to work those out. It seemed to us that this card&#8217;s drivers still needed some work, but all three games we tested were playable. AMD&#8217;s frame pacing also appeared to work well: we didn&#8217;t notice any stuttering, nor nearly as much tearing as with previous AMD dual-GPU setups.</p>
<p><span style="font-weight: bold;">Temperatures</span></p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00111.jpg" rel="lightbox-15"><img class="aligncenter  wp-image-34299" alt="DSC_0011" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00111.jpg" width="1152" height="394" /></a></p>
<p>Now, getting back to the card itself: thermals. When I first heard about this card and its possible existence, my first concern was cooling and temperatures. AMD struggled with the R9 290X&#8217;s perception because the card topped out at a near-boiling 95C under extreme loads and, as a result, actually throttled performance. Its add-in board partners eventually released better cooling solutions that alleviated the issue to a degree, but the concern returned once I heard AMD would be making what was supposed to be a dual Hawaii GPU card. To be honest, my first thought was that it would have to be a pair of 290s, and that it would have to be water cooled.</p>
<p>When I found out the card would carry two slightly higher-clocked 290Xs, I was concerned about how it would perform thermally and whether the cooler could handle the temperatures those GPUs generate. Even after seeing the cooling solution, I wasn&#8217;t sure a single 120mm Asetek radiator could dissipate a maximum of 500W of GPU power. After all, most closed-loop CPU coolers pair a 120mm radiator with a 130W CPU, roughly a quarter of that heat load.</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00251.jpg" rel="lightbox-16"><img class="aligncenter  wp-image-34301" alt="DSC_0025" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_00251.jpg" width="1152" height="766" /></a></p>
<p>After completing my testing, I found the first GPU idling at a normal temperature of around 36C, good for any graphics card and about what you&#8217;d expect from a water-cooled one. What really caught me by surprise was the load temperature of the first GPU (the most heavily used one): it never got hotter than 67C, even with the radiator and tubes warm to the touch. Because of that, this card can deliver a very stable level of performance, consistently, time and time again. I don&#8217;t know exactly what the people at Asetek and AMD did, <a href="http://asetek.com/press-room/news/2014/amd-selects-asetek-to-liquid-cool-the-world%E2%80%99s-fastest-graphics-card.aspx" target="_self">but they have engineered a pretty fantastic cooling solution</a>, one I frankly thought might be underpowered for these GPUs.</p>
<p><span><span style="font-weight: bold;">Conclusion</span></span></p>
<p>The Radeon R9 295X2 is a $1,500 card. There&#8217;s no getting around that. In fact, the price makes a lot of sense if you think about it. The R9 290X should sell for around $550, even though there&#8217;s still a fair amount of gouging left over from last year&#8217;s cryptocurrency rush. <a href="http://www.newegg.com/Product/Product.aspx?gclid=CObM56jb1L0CFUNhfgod6ocAUg&amp;Item=N82E16814129278&amp;nm_mc=KNC-GoogleAdwords&amp;cm_mmc=KNC-GoogleAdwords-_-pla-_-Desktop+Graphics+Cards-_-N82E16814129278&amp;ef_id=UhMeXwAAAWKGwRMG:20140410005755:s" target="_self">Newegg is selling a Visiontek R9 290X</a> for just under $600 with a non-reference cooler. Take the $550 price and double it, and you&#8217;ve already arrived at an $1,100 card. This card is also water cooled, which normally carries a $100-$200 premium on any graphics card, bringing the expected retail price up to about $1,300. However, dual-GPU cards rarely sell for twice the price of their base-model single-GPU parents, and <a href="http://www.brightsideofnews.com/news/2014/3/25/gtc-2014-keynote---gtx-titan-z-and-pascal-announced.aspx" target="_self">Nvidia announced the Titan Z was going to be selling for $3000</a>. That gave AMD some wiggle room on price, and I suspected they would land somewhere between $1,300 and $1,500. And here we are, at $1,500.</p>
<p style="text-align: center;"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_01021.jpg" rel="lightbox-17"><img class="aligncenter  wp-image-34310" alt="DSC_0102" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/DSC_01021.jpg" width="1152" height="824" /></a></p>
<p>Compared to buying two R9 290Xs, this card simply does not provide more value than owning two separate cards. It is, however, significantly quieter and cooler than two R9 290Xs, and it delivers a very high level of 4K gaming performance from a single card. To make proper use of either this GPU or any dual-290X setup, you should really be looking at a 4K display. So this card makes sense if you want the absolute fastest, if you want to cram four Hawaii GPUs into a board with only two PCIe slots, or if your case simply isn&#8217;t big enough for four cards. Not to mention, cooling four air-cooled Hawaii GPUs in any case would be significantly more difficult than cooling two of these Vesuvius cards. You would spend around $2,300 on four 290Xs and probably end up with a much hotter, louder setup than if you spent the extra $700 on two 295X2s. So there is some sense in these cards if you absolutely must have the fastest and the best, and you game at 4K.</p>
<p>In terms of value, keep one thing in mind: for the price of a single GeForce GTX Titan Z, you can purchase an R9 295X2 and not one but TWO 28&#8243; 4K 60Hz panels from Samsung. If you dislike TN panels, you can still afford a 24&#8243; IPS 4K panel from Dell instead. Talk about a value deal.</p>
<p>I have to say I was personally quite impressed with this card, especially its performance and thermals. If you want the absolute latest, greatest, and fastest card in the world, you have to buy the R9 295X2. And if you want just one card for 4K gaming, this is it. Because of that, we&#8217;re awarding this card our editor&#8217;s choice award.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/editors-choice_prosumer1.gif" rel="lightbox-18"><img class="aligncenter size-full wp-image-34342" alt="editors-choice_prosumer" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/editors-choice_prosumer1.gif" width="618" height="68" /></a></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/04/09/amd-radeon-r9-295x2-review-the-definitive-4k-gaming-graphics-card/">AMD Radeon R9 295X2 Review: The Definitive 4K Gaming Graphics Card</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/04/09/amd-radeon-r9-295x2-review-the-definitive-4k-gaming-graphics-card/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>ATI and Nvidia cards for 2009 will be monsters</title>
		<link>http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/</link>
		<comments>http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/#comments</comments>
		<pubDate>Wed, 26 Nov 2008 13:00:29 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[2nd Gen GDDR5]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[65nm]]></category>
		<category><![CDATA[8-pin power]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[GHz]]></category>
		<category><![CDATA[GigaTransfers]]></category>
		<category><![CDATA[GT/s]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[H5GQ1H24AFR]]></category>
		<category><![CDATA[Hynix]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[rv870]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=598</guid>
		<description><![CDATA[<p>As 2008 draws to a close, our thoughts turn towards 2009 and the incredible hardware that will arrive on our doorsteps. The coming year ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/">ATI and Nvidia cards for 2009 will be monsters</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As 2008 draws to a close, our thoughts turn towards 2009 and the incredible hardware that will arrive on our doorsteps. The coming year will bring a wave of competition, with AMD and Intel fighting for enthusiasts&#8217; hearts and minds in the world of CPUs. GPUs will see a tough three-way battle between AMD GPG (ex-ATI), Nvidia and newcomer Intel with its Larrabee cGPU.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2008/11/hynix_gddr5.jpg" rel="lightbox-0"><img class="alignleft size-full wp-image-599" title="hynix_gddr5" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/hynix_gddr5.jpg" alt="hynix_gddr5" width="300" height="202" /></a>But one of the main building blocks launched yesterday, in 2008. Hynix introduced a chip with a friendly and &#8220;easily understandable&#8221; name: H5GQ1H24AFR. Even though the name looks like something ENIGMA would encrypt, this is a 128MB (1Gbit) memory chip that operates at a clock of 1.75 GHz in QDR mode, resulting in 7 GigaTransfers per second (7 GT/s, or 7 &#8220;GHz&#8221;). Currently, the ATI Radeon 4870 and 4870X2 ship with 900 MHz chips that offer 3.6 GT/s, so we&#8217;re talking about doubling the memory bandwidth per chip.</p>
<p>This means that a GPU with a 256-bit memory controller would have roughly 219 GB/s of bandwidth, while a 512-bit memory controller paired with these Hynix chips would yield almost 438 GB/s. These numbers are astonishing and, quite frankly, will open the door to a bigger performance jump than previously imagined.</p>
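The arithmetic behind these figures is easy to check yourself. A minimal sketch in Python, using the per-pin rates quoted above (the idealized formula lands a touch above the article's rounded 219/438 GB/s figures):

```python
# Peak memory bandwidth: per-pin data rate (GT/s) x bus width (bits) / 8,
# converting bits to bytes. This is the idealized peak for the bus.
def peak_bandwidth_gbs(data_rate_gts: float, bus_width_bits: int) -> float:
    return data_rate_gts * bus_width_bits / 8

print(peak_bandwidth_gbs(7.0, 256))  # Hynix 7 GT/s chips, 256-bit bus -> 224.0
print(peak_bandwidth_gbs(7.0, 512))  # same chips, 512-bit bus -> 448.0
print(peak_bandwidth_gbs(3.6, 256))  # Radeon 4870's current 3.6 GT/s GDDR5
```

The same formula reproduces the Radeon 4870's well-known ~115 GB/s from its 3.6 GT/s chips.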
<p>Best of all: thanks to a new manufacturing process, Hynix&#8217;s 2nd Gen GDDR5 chips at 1.75 GHz run from a 1.35V rail and consume less power than the initial 900 MHz (3.6 GT/s) chips. Yes, power consumption goes down while per-chip performance doubles. Who says you can&#8217;t have &#8220;the wolves fed and all the sheep accounted for&#8221;, as the old Croatian saying goes (English equivalent: have your cake and eat it too)?</p>
<p>Now you know. Nvidia&#8217;s GT212, the 40nm shrink of GT200, should consume around 25% of the power of the original 65nm chip, can have double the bandwidth, and gets GDDR5 memory that draws less power than the GDDR3 on GTX280 cards. As far as ATI is concerned, the upcoming RV870 will be in the same boat.</p>
<p>Can you say the 8-pin power connector is going the way of the dodo? I would say yes, but don&#8217;t forget that GPU makers will use these power savings to clock their cards to their absolute physical limits.<br />
H1 2009 will see $299 parts that handle 1920&#215;1200 with 16x AA/AF at 120 fps without breaking a sweat.<br />
If you thought the GTX280 and 4870X2 were incredible&#8230; well, we haven&#8217;t seen anything yet. Now, will game designers finally follow the path set by Race Driver GRID, Unreal Tournament III, Far Cry 2 and Fallout 3 and offer an absolutely fantastic gaming experience without the constant crying that &#8220;hardware isn&#8217;t powerful enough&#8221;? Or at least prove that it really isn&#8217;t.</p>
<p>P.S. Before you ask&#8230; this is still single-ended GDDR5. We&#8217;re still waiting for Differential GDDR5 to show up &#8211; and, of course, we need Differential GDDR5-capable memory controllers too.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/">ATI and Nvidia cards for 2009 will be monsters</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Sunday Blurb: Slow week ahead. Or not&#8230;</title>
		<link>http://www.vrworld.com/2008/11/23/sunday-blurb-slow-week-ahead-or-not/</link>
		<comments>http://www.vrworld.com/2008/11/23/sunday-blurb-slow-week-ahead-or-not/#comments</comments>
		<pubDate>Sun, 23 Nov 2008 22:30:04 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[analysis]]></category>
		<category><![CDATA[California]]></category>
		<category><![CDATA[croatian]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Folding@Home]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[long articles]]></category>
		<category><![CDATA[Personal]]></category>
		<category><![CDATA[theo valich]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=562</guid>
		<description><![CDATA[<p>After going through the planning for next week, there are a lot of things to do. First of all, I will be busy with selecting ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/23/sunday-blurb-slow-week-ahead-or-not/">Sunday Blurb: Slow week ahead. Or not&#8230;</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>After going through the planning for next week, there are a lot of things to do. First of all, I will be busy selecting a provider to host the new site, but have no worries: plenty more articles are on the way. <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /></p>
<p>But a lot of time will also go into my articles &#8211; there are several big ones in the pipeline, like the GDDR5 analysis, which surprised me with its traffic. Usually Saturday and Sunday are slow days, and my weekly lows always come on weekends. Not this time around: over 3,500 visitors came to the site to read that story, the highest single-day number recorded on this blog.</p>
<p>This article proved to me that there is a hunger for long, detailed articles &#8211; over the past three years I was actually taught exactly the opposite; according to my former editors, &#8220;nobody reads long articles&#8221;. Well, thank you for proving them wrong. You can expect longer pieces from now on, but don&#8217;t think I will abandon the news format &#8211; there is too much going on.</p>
<p>I also want to acknowledge the critics who described the site as &#8220;dropping the ball grammar-wise&#8221;. I am now speaking Croatian more than I did in the past year (when I lived in California), so I find myself in situations where I can&#8217;t find the word to express what I want to say. On the other hand, writing in English carries over some grammar from Croatian &#8211; I promise I&#8217;ll be more careful in the future. Pioneer&#8217;s honor. <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley" /></p>
<p>As a final note for this weekly closure, I want to thank everyone on my folding team. Daily production has skyrocketed to 170,000 points and we&#8217;re on track to reach 200,000 points a day. We passed the three-million mark on Friday, and we are now on course to reach the Top 500 teams during the coming week. Since my goal was to reach the Top 1000 by the end of 2008, needless to say I am enthusiastic. This coming week you can expect several articles about folding; hopefully you&#8217;ll find them interesting.</p>
<p>Thank you for reading.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/23/sunday-blurb-slow-week-ahead-or-not/">Sunday Blurb: Slow week ahead. Or not&#8230;</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/23/sunday-blurb-slow-week-ahead-or-not/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>100th Story- ANALYSIS: Why will GDDR5 rule the world?</title>
		<link>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/</link>
		<comments>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/#comments</comments>
		<pubDate>Sat, 22 Nov 2008 21:00:46 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Memory & Storage Space]]></category>
		<category><![CDATA[256 Bit]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[512-bit]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[differential]]></category>
		<category><![CDATA[Differential GDDR5]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[gddr3]]></category>
		<category><![CDATA[gddr4]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[gt200]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[joe macri]]></category>
		<category><![CDATA[larrabee]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[PlayStation 4]]></category>
		<category><![CDATA[Quadro]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[S.E. GDDR5]]></category>
		<category><![CDATA[single-ended]]></category>
		<category><![CDATA[xbox 720]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=534</guid>
		<description><![CDATA[<p>As &#8220;Theo&#8217;s Bright Side of IT&#8221; turns a century (100 stories) after five weeks of existence, it seems only right to write an article about ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/">100th Story- ANALYSIS: Why will GDDR5 rule the world?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As &#8220;Theo&#8217;s Bright Side of IT&#8221; turns a century (100 stories) after just five weeks of existence, it seems only right to write an article about a technology that is set to become an everyday word over the next couple of years: GDDR5.<br />
This memory standard will become pervasive over the next four years in far more fields than &#8220;just&#8221; graphics. Just as GDDR3 ended up in all three consoles, network switches, cellphones and even cars and planes, GDDR5 brings a lot of new features that are bound to win customers in different markets.</p>
<p><strong>Background</strong><br />
The radical ideas inside GDDR5 trace back to ATI looking at future GPU architectures and concluding that the DRAM industry had to take a radical step in design and offer an interface more flexible than any other memory standard. Then ATI hit huge issues with R600 and its massive monolithic die. After a lot of internal struggle, the engineering teams agreed that a change of course was necessary for generations to come: R700/RV770, R800/RV870, R900, R1K&#8230; all of these designs were reshaped and refocused. The goal, then and now, is a compact, affordable transistor design that doesn&#8217;t play Russian roulette with yields coming from <a title="MAD AMD or GloblaFoundries" href="http://www.tomshardware.com/news/amd-corporate-culture,5206.html" target="_blank">MAD AMD</a>, TSMC&#8217;s and UMC&#8217;s foundries.<br />
Development of this JEDEC-certified standard happened under the lead of Joe Macri, Director of Engineering at AMD and chairman of JEDEC&#8217;s Future DRAM Task Group JC42.3. Joe and his small ex-ATI/AMD GPG team are best known for developing the GDDR3 and GDDR4 memory standards, the former being probably the best thing ever to come out of the former ATI. ATI worked in solitude for a whole year before sending the initial specification to JEDEC in 2005. Then Hynix, Qimonda and Samsung joined the effort to bring the new memory standard to life. When AMD acquired ATI in 2006, the new management didn&#8217;t touch GDDR5 development and let the team work in peace. The reason was simple: the R&amp;D team had warned management that GDDR5 development was much more difficult than the work done on GDDR3 and GDDR4.<br />
GDDR5 was seen as a path towards next-generation clients &#8211; consoles, desktop computing, networking equipment, the HPC arena, handhelds&#8230; all of these roads start with one memory standard. At the time, engineers at ATI saw the path of success GDDR3 had taken, and decided to create a spec that would outlive and outshine it.<br />
In May 2008, AMD finally announced the launch of the GDDR5 memory standard. Soon after, the company revealed its Radeon 4800 series and cards equipped with GDDR5 memory. Given the performance of the Radeon 4870 512MB, 4870 1GB and 4870X2 2GB, it is obvious that the future of graphics (and not just graphics!) belongs to GDDR5.<br />
At its core, the main difference between LP-DDR (handhelds, PDAs), DDR (one size fits all) and GDDR (graphics) is that capacity is not the crucial metric &#8211; performance is. Low-Power DDR and standard DDR are geared towards enabling as much capacity as possible, while GDDR is usually referred to as the &#8220;Ferrari of the bunch&#8221;.</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_01_gpu-ram-roadmap1.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_01_gpu-ram-roadmap1-750x420.jpg" class="attachment-vw_medium" alt="Roadmap shows that DDR3 will replace DDR2 in low-end market, and GDDR5 will take over GDDR3" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_03_gddr345-diferences.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_03_gddr345-diferences-750x420.jpg" class="attachment-vw_medium" alt="Description of differences between the standards..." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_04_gddr345-diferences.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_04_gddr345-diferences-750x420.jpg" class="attachment-vw_medium" alt="... and continuing with differences." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_05_ram-roadmap.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_05_ram-roadmap-750x420.jpg" class="attachment-vw_medium" alt="In 2010, we should see Differential GDDR5, and then the available bandwidth on GPUs will double over the night." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_06_gddr5_key-features.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_06_gddr5_key-features-750x420.jpg" class="attachment-vw_medium" alt="According to Qimonda, these are key features of GDDR5 standard." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_07_gddr5-lowmedhighfr.jpg' rel="lightbox[gallery-0]"><img width="750" height="372" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_07_gddr5-lowmedhighfr-750x372.jpg" class="attachment-vw_medium" alt="GDDR5 is divided into three different memory types, and clocks and voltage change according to specified role." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_08_gddr5-pcb-tracing_.jpg' rel="lightbox[gallery-0]"><img width="489" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_08_gddr5-pcb-tracing_-489x420.jpg" class="attachment-vw_medium" alt="Note the absence of &quot;combs&quot; on PCB using GDDR5 memory. This will enable cheaper PCBs and higher performance at the same time." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_09_gddr5-overclocking.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_09_gddr5-overclocking-750x420.jpg" class="attachment-vw_medium" alt="GDDR5 is also the first memory standard designed with overclocking in mind." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_10_gddr5-clockingandd.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_10_gddr5-clockingandd-750x420.jpg" class="attachment-vw_medium" alt="The way how clock works...four data transfers over a single clock." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_11_gddr5_x16-mode.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_11_gddr5_x16-mode-750x420.jpg" class="attachment-vw_medium" alt="Clamshell mode - very important feature, will enable doubling the amount of memory in near future." /></a>

<p><strong><br />
DDR, DDR2, DDR3, GDDR3, GDDR4, GDDR5 … got it?</strong></p>
<p>If you can&#8217;t find your way through the jungle of memory standards, don&#8217;t worry &#8211; you&#8217;re not alone. There is a lot of confusion in the world of DRAM, and sadly there is no simple explanation. The most important thing to remember is that GDDR and DDR are not the same memory, and do not operate on the same data widths.<br />
As you can see in the slides above, GDDR memory transfers data over a 32-bit interface, while conventional DRAM moves 64-bit chunks. Previous generations of graphics memory (GDDR2, GDDR3) were loosely based on the DDR2-SDRAM standard, while GDDR5 heads in a new direction.</p>
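The 32-bit interface has a concrete consequence: a GPU's bus width fixes how many chips sit on the card, and with the 1 Gbit (128 MB) chips discussed here, that in turn fixes capacity. A small worked sketch:

```python
# Each GDDR5 chip exposes a 32-bit interface, so bus width / 32 gives the
# chip count, and chip count x chip density gives total capacity.
def chips_on_bus(bus_width_bits: int, chip_width_bits: int = 32) -> int:
    return bus_width_bits // chip_width_bits

n = chips_on_bus(256)        # a Radeon 4870-class 256-bit bus
print(n, "chips")            # -> 8 chips
print(n * 128, "MB total")   # 8 x 128 MB (1 Gbit) chips -> 1024 MB
print(chips_on_bus(512))     # a 512-bit design needs 16 chips
```

This is why a 256-bit card with 1 Gbit chips naturally lands at 1 GB of memory (or 2 GB in the clamshell x16 mode mentioned in the slides, which pairs two chips per channel).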
<p>In fact, the GDDR5 standard actually splits DRAM operation into two modes: Single-Ended and Differential. This is a revolutionary step for GDDR memory, since it was widely expected that single-ended signaling was the only way to go. In a way, you could say ATI developed GDDR5 and GDDR &#8220;5.5&#8221; or &#8220;6&#8221; at the same time. Single-ended operation is compatible with existing standards such as DDR1/2/3 and GDDR3/4 and represents the evolutionary path for DRAM. The first products to market will use single-ended chips, but as soon as Hynix, Qimonda and Samsung start manufacturing differential modules (2009-10), a new era will begin.<br />
Differential clock signaling is a method similar to interconnect buses such as HyperTransport, PCI Express, or Intel&#8217;s QuickPath Interconnect from Core i7. Differential mode introduces a reference clock that the memory cell follows. Instead of using the ground wire as a passive driver, differential mode enables precise communication, and exactly this feature is why available bandwidth is set for a dramatic change during GDDR5&#8217;s lifetime.<br />
The sheer bandwidth gain from one GDDR generation to the next is impressive. GDDR3 peaked at 2.4 Gbps and GDDR4 concluded at 3.2 Gbps per pin, while GDDR5 chips split in two: single-ended parts will offer between 3.4 and 6.4 Gbps, and differential parts between 5.6 and 12.8 Gbps.</p>
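Plugging those per-pin peaks into the standard bandwidth formula shows how large the jump is on a fixed 256-bit bus (an illustrative calculation from the figures above, not vendor numbers):

```python
# Quoted peak per-pin rates (Gbps) per generation, converted to the
# aggregate bandwidth of a 256-bit bus: rate x 256 bits / 8 = GB/s.
peaks_gbps = {
    "GDDR3": 2.4,
    "GDDR4": 3.2,
    "GDDR5 single-ended": 6.4,
    "GDDR5 differential": 12.8,
}
for name, rate in peaks_gbps.items():
    print(f"{name:>20}: {rate * 256 / 8:6.1f} GB/s")
```

Each step roughly doubles the previous generation's ceiling, which is the whole point of the single-ended-to-differential roadmap.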
<p>Besides Differential mode, GDDR5 also introduces an error correction protocol based on a progressive algorithm that actually enables more aggressive overclocking. Major changes in internal chip design also include the Quarter-Data-Rate clock, a continuous WRITE clock, CDR-based READ (no read clock/strobe information), DRAM interface training, internal and external VREF, and x16 mode.</p>
<p><strong>Power Saving</strong></p>
<p>One of the most important things about GDDR5 is power reduction. Take GDDR3 and GDDR5 modules clocked at 1.0 GHz each: the GDDR3 part has to operate at 2.0V, while GDDR5 needs only 1.5V. This results in a 30% reduction in power consumption while raising available per-pin bandwidth by almost 100%.</p>
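As a rough sanity check on that 30% figure: under the usual first-order CMOS model, dynamic power scales with f·V², so the 2.0V to 1.5V drop at the same clock would cut core dynamic power by about 44% on its own. The smaller quoted whole-chip figure is plausible once fixed I/O and termination costs are included. A sketch, with the f·V² model as the stated assumption:

```python
# First-order CMOS dynamic-power model: P ~ f * V^2. This ignores leakage
# and I/O termination, so it brackets rather than reproduces the quoted 30%.
def relative_dynamic_power(v_new: float, v_old: float,
                           f_new: float = 1.0, f_old: float = 1.0) -> float:
    return (f_new / f_old) * (v_new / v_old) ** 2

r = relative_dynamic_power(1.5, 2.0)  # both parts at 1.0 GHz
print(f"GDDR5 core dynamic power: {r:.0%} of GDDR3")  # ~56%, i.e. ~44% saved
```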
<p>GDDR5 is designed to operate at low, medium and high frequencies. Low frequency (0.2-1.5 Gbps) calls for low voltage (0.8-1.0V), while medium (1.0-3.0 Gbps) and high (2.5-5.0 Gbps) frequencies call for higher voltage, in a range between 1.4V and 1.6V.<br />
High frequency is the only mode that utilizes the CDR (Clock Data Recovery) circuitry, while medium and low frequencies use the conventional mode (RDQS with preamble).<br />
Seeing power drop below the levels of FB-DIMM DDR2-800 only makes us wonder what would happen if CPU manufacturers implemented Differential GDDR5 as system memory. Would we really need gigabytes of system memory if system memory had higher bandwidth than L2 and L3 cache? Intel is looking in a similar direction, considering <a href="http://www.tomshardware.com/news/Intel-DRAM-CPU,5697.html" target="_blank">replacing SRAM cache with DRAM technology</a>.</p>
<p>Sadly, the changes required in the memory controller mean the only places GDDR5 will see the light of day as system memory are closed designs such as consoles, set-top boxes and so on. There is hope that some future AMD Fusion designs might implement GDDR support, but it is too early to tell.</p>
<p><strong>How to lower the cost of manufacturing?</strong></p>
<p><strong><br />
</strong>During the design stages of GDDR5 memory, one of the main concerns was how to simplify tracing on the PCB (Printed Circuit Board). On current GDDR3 and GDDR4 graphics boards, synchronization issues are solved by routing traces of the same length from every pin on the DRAM chip to the GPU. This results in quite a messy design, with traces going everywhere.</p>
<p>If you&#8217;re a PCB designer, there is one thing you don&#8217;t want: complex trace routing. It eventually leads to more PCB layers, higher cost and, most importantly, more ways for *something* to go wrong. GDDR5 makes several optimizations to preserve signal integrity: every trace gets increased isolation from electromagnetic interference (EMI), while the asymmetric interface compensates for differences in trace length.<br />
As you can see in the picture above, GDDR5 PCB routing is much cleaner than GDDR3&#8217;s &#8211; compare a Radeon 4850 to a Radeon 4870, for instance. The price was additional resistors around the memory chips, but the second generation of GDDR5 graphics cards should feature an even cleaner design.</p>
<p><strong>Memory designed for overclocking?</strong></p>
<p>With its power-saving and performance-related tweaks, it is obvious that this memory was designed with overclocking in mind. A single look at the slides from AMD and Qimonda was enough to confirm that.</p>
<p>The GDDR5 specification delivers a combination of three technologies: Adaptive Training and CDR, Error Detection, and an on-die thermal sensor. Adaptive Training is combined with the Error Detection algorithm and enables the GPU&#8217;s memory controller to keep thermals on a tight leash. If you overclock the memory, clocks will go up until the error detection algorithm hits the thermal wall.</p>
<p>Error Detection works with both read and write instructions, offering real-time repeat and resend operations. Thanks to asynchronous clocks, the memory controller can regulate the flow of data and resend bits of information that fail to arrive in time (or arrive corrupted). The Error Detection algorithm will try to avoid a crash until the error rate passes 1 error/sec.<br />
In order to maintain signal stability, additional resistors were placed inside and outside the memory chip (take a look at the back of a 4870 and compare it to a 4850). AMD also addressed an issue spotted on GDDR4: overclocking of GDDR4 memory was limited because the DRAM timing loop would run out of power. GDDR5 changed the way the clock is generated and maintained, so the memory chip should never starve for power. No timing-loop issue means no memory freezes. According to our sources, GDDR5 memory clocking ultimately depends on the manufacturing process used by the chip maker and the amount of voltage provided to the chip.<br />
But the main difference in clocking between GDDR3 and GDDR5 is that PVT (Process, Voltage, and Temperature) is no longer an unbreakable barrier. Now it is the GPU&#8217;s memory controller that will keep (or fail to keep) the data flowing.</p>
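<p>The detect-and-resend mechanism described above can be sketched in a few lines of Python. This is a toy model: GDDR5 attaches a checksum to each burst so the controller can replay transfers that arrive corrupted, but the CRC-8 polynomial (0x07) and the 8-byte burst used here are illustrative assumptions, not the exact on-wire format.</p>

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over one burst (the polynomial is an assumption)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def transfer(burst: bytes, corrupt: bool = False) -> int:
    """Simulate one write burst: the DRAM computes a checksum on what it
    actually received and returns it for the controller to compare."""
    received = bytes([burst[0] ^ 0x01]) + burst[1:] if corrupt else burst
    return crc8(received)

burst = bytes(range(8))                                      # one 8-byte burst
needs_resend = transfer(burst, corrupt=True) != crc8(burst)  # bit error detected
resend_ok = transfer(burst) == crc8(burst)                   # clean replay passes
```

<p>A CRC of any degree catches every single-bit error, which is why a simple replay loop is enough: the controller just repeats the burst until the returned checksum matches its own.</p>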
<p><strong>Coalition between the GPU and the RAM</strong></p>
<p>Unlike with previous memory standards, in order to extract the best possible performance the memory controller has to support ALL of the GDDR5 features. This especially goes for the asymmetrical interface, since the WRITE and READ clocks are programmed by the GPU. Advanced Clock Training calibrates the GPU-RAM signals; without this feature, you cannot count on high clocks or overclocking headroom. With four bits of data sent per clock (instead of two), the memory controller is exposed to a lot of stress and has to be able to do error checking on the fly. Any miss on the GPU side leads to lost cycles, and in turn to instability.<br />
A good example is the memory controller tucked inside the Radeon 4800. This 256-bit controller supports the DDR2, DDR3, GDDR3, GDDR4 and GDDR5 memory standards, and it is tuned to the point where the bandwidth and clock limits sit on the side of the SGRAM chips: if the fastest GDDR5 memory chips were available today, you could build a 4800-series card with them. This also opens up revenue opportunities for Hynix, Samsung and Qimonda, all three of which could earn a small fortune selling gold-sample memory chips to premium graphics card manufacturers.<br />
When it comes to Nvidia, the answer to why the company went with GDDR3 for the GTX 200 series of cards is not a simple one: according to our sources, the GT200 chip supports GDDR3 and GDDR4, but the engineers ran out of time to adapt the memory controller to the asymmetrical interface (advanced interface training), a key feature for stable operation. Still, if Nvidia sticks with a 512-bit memory controller for the NV70 generation (GT300?), we should see Nvidia GPUs featuring bandwidth in excess of 300 GB/s, more than twice what is available today. There is also the question of what Nvidia will do with its two refreshes, the 55nm GT206 and 40nm GT212 chips.<br />
Intel is not giving out any details on Larrabee&#8217;s architecture, but we know for sure that its 1024-bit internal/512-bit external memory controller will support GDDR5 and its advanced features. Given the late-2009 release, support for differential mode should be a given. When it comes to christening, Larrabee with GDDR5 memory will debut this winter, with the <a href="http://www.tomshardware.com/news/intel-larrabee-graphics,5847.html" target="_blank">first graphics cards delivered to Dreamworks</a>.</p>
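<p>All of the bandwidth figures in this section follow from one formula: bus width in bits, divided by eight to get bytes, times the per-pin data rate. A quick sanity check in Python (the ~5 Gbps rate for a future 512-bit part is our assumption, used only to show how a 300 GB/s figure comes about):</p>

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (pins / 8 bits per byte) * per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Radeon 4870: 256-bit bus, 900 MHz base clock, four bits per clock per pin
print(bandwidth_gbs(256, 0.9 * 4))   # 115.2 GB/s
# A hypothetical 512-bit controller with ~5 Gbps GDDR5 clears the 300 GB/s mark
print(bandwidth_gbs(512, 5.0))       # 320.0 GB/s
```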
<p><strong>Capacity – just how big can we go?<br />
</strong>Now that you&#8217;ve seen all of the performance elements, it is time to write about capacity. While Joe told us that GDDR should be considered &#8220;the Ferrari of the DDR world&#8221;, GDDR5 also introduces an x16 mode. To kill any potential confusion: this mode has nothing to do with PCI Express x16.</p>
<p>As you can see on the slide above, clamshell mode enables two memory chips to sit on a single x32 node. Take the ATI Radeon 4800 series: the GPU features eight x32 I/O controllers. In clamshell mode this tops out at 16 memory chips per GPU, or 1GB of onboard memory using conventional 512Mbit (64MB) chips. With x16 mode, a card designer can place up to 32 chips (good luck finding the board space for them), or 2GB of memory with 512Mbit chips. With 1Gbit (128MB) chips, this number grows to 4GB, and Qimonda is expected to ship 2Gbit (256MB) chips during 2009, enabling 8GB of onboard memory.</p>
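<p>The capacity arithmetic is easy to verify. Following the figures above (one chip per x32 controller normally, two in clamshell mode, four with x16 mode), a quick Python check:</p>

```python
def capacity_gb(x32_controllers: int, chips_per_node: int, chip_mbit: int) -> float:
    """Total onboard memory in GB for a GPU with the given number of x32
    I/O controllers and memory chips hanging off each one."""
    return x32_controllers * chips_per_node * chip_mbit / 8 / 1024  # Mbit -> MB -> GB

# ATI Radeon 4800: eight x32 I/O controllers
print(capacity_gb(8, 2, 512))    # clamshell, 16 x 512Mbit chips -> 1.0 GB
print(capacity_gb(8, 4, 512))    # x16 mode, 32 x 512Mbit chips  -> 2.0 GB
print(capacity_gb(8, 4, 1024))   # 1Gbit chips                   -> 4.0 GB
print(capacity_gb(8, 4, 2048))   # 2Gbit chips                   -> 8.0 GB
```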
<p>This number is increasingly important for the GPGPU market, which wants as much onboard memory as possible. Bear in mind that the Tesla 10-Series features 4GB of GDDR3 memory, and some contacts we&#8217;ve talked with claim they could fill even more.</p>
<p>Eight GB of video memory may sound like too much for the consumer space, but if the world is to usher in the era of <a href="http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html" target="_blank">Ray-tracing</a>, we have to make room for gigabytes of data. Jules Urbach from JulesWorld explained that he is working with datasets bigger than 300 GB and has to resort to AMD&#8217;s CAL (Compute Abstraction Layer) to fit all the data inside the 1GB available per GPU (Jules uses R700 boards).</p>
<p><strong>Conclusion</strong></p>
<p>GDDR5 ramped up during 2008 and we expect the technology to become the standard for GPU add-in boards in 2009. ATI will migrate to GDDR5, and so will Nvidia. With Intel joining the pack with Larrabee, volumes should drive the cost of GDDR5 down into budget range for the next generation of game consoles, starting in the 2010-11 timeframe.<br />
This is by far the most developed and well-thought-out memory standard yet, free of the childhood diseases that plagued DDR2 and DDR3. GDDR5 is coming to market as a complete product and offers a solid future roadmap, with Differential GDDR5 even surpassing XDR2 DRAM in the quest for the highest possible per-pin bandwidth.<br />
By that time, Differential GDDR5 should be cheaper than GDDR3 is today.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/">100th Story- ANALYSIS: Why will GDDR5 rule the world?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Some Radeon 5870 rumours are BS&#8230; some aren&#8217;t ;)</title>
		<link>http://www.vrworld.com/2008/11/03/some-radeon-5870-rumours-are-bs-some-arent/</link>
		<comments>http://www.vrworld.com/2008/11/03/some-radeon-5870-rumours-are-bs-some-arent/#comments</comments>
		<pubDate>Mon, 03 Nov 2008 21:59:07 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[256 Bit]]></category>
		<category><![CDATA[512-bit]]></category>
		<category><![CDATA[5800]]></category>
		<category><![CDATA[5830]]></category>
		<category><![CDATA[5850]]></category>
		<category><![CDATA[5870]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[directx 11]]></category>
		<category><![CDATA[dual-die]]></category>
		<category><![CDATA[DX11]]></category>
		<category><![CDATA[gddr3]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[lil dragon]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[r800]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[rumors]]></category>
		<category><![CDATA[rv870]]></category>
		<category><![CDATA[Windows 7]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=301</guid>
		<description><![CDATA[<p>I&#8217;ve received word from a reader that some Germans wrote a story  containing details about RV870, e.g. Radeon &#8220;5870&#8221;. Neoseeker brought the translation forward , ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/03/some-radeon-5870-rumours-are-bs-some-arent/">Some Radeon 5870 rumours are BS&#8230; some aren&#8217;t ;)</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>I&#8217;ve received word from a reader that some Germans wrote a story containing details about the RV870, i.e. the Radeon &#8220;5870&#8221;. <a href="http://www.neoseeker.com/news/9078-ati-hd5870-rumors-1-5-tflops-40nm-1000-shaders-and-multi-core-/" target="_blank">Neoseeker brought the translation forward</a>, and while some parts make a lot of sense, some really don&#8217;t.<br />
First of all, the RV870 is supposed to be a 40nm part, but that&#8217;s nothing we didn&#8217;t know already. Both Nvidia and AMD are going to bring 40nm half-node parts first, followed by 32nm and 28nm full nodes. According to the story, the GPU is supposed to contain 25% more shaders than the Radeon 4800 series, bringing its theoretical computational power to 1.5 TFLOPS.<br />
Well, you don&#8217;t need 25% more shaders to get a 50% performance increase. Radeon 4800 showed the path the company is going to take: the name of the game is increasing the performance of the 10 shaders that now sit in one &#8220;pipeline&#8221; (or shader cluster), and increasing the capacity of the scratch cache to enable faster GPGPU computation.<br />
The alleged die size is 205mm2, which would be in line with a die shrink of the 4800: roughly 170mm2 if it were manufactured at 40nm (instead of the actual 256mm2). The remaining 30-35mm2 should be enough to slap in around 1000-1200 shaders, if those rumors are true.<br />
However, there is just one thing in the story that does not hold ground, and that is the claim that RV870 will use a 512-bit memory interface with GDDR5 memory. I may be forced to eat my own words, but no, ATI&#8217;s RV870 will not bring a 512-bit memory controller. RV870 will feature a much improved 256-bit memory controller, offering some 150-200 GB/s of bandwidth per GPU. When you combine two GPUs, possibly on the same substrate, you get a 512-bit memory controller&#8230; in a way. A 512-bit memory controller with current GDDR5 memory (900 MHz QDR, i.e. 3.6 &#8220;GHz&#8221;) yields 230 GB/s, and that is the bandwidth the GTX 280 would have had if Nvidia had gone for GDDR5 instead of the older GDDR3 memory.<br />
Nvidia&#8217;s next-gen part, however, will bring a 512-bit memory controller coupled with GDDR5 memory, offering an insane amount of bandwidth: 200-250 GB/s, to be more precise.</p>
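<p>The bandwidth arithmetic above checks out. With GDDR5 at a 900 MHz base clock moving four bits per pin per clock (3.6 Gbps per pin), a quick Python check shows where the quoted figures come from; the chip speeds needed to reach 150-200 GB/s on a 256-bit bus are our own back-calculation, not a claim from the story.</p>

```python
def gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_bits / 8 * gbps_per_pin

print(gbs(512, 0.9 * 4))   # 230.4 GB/s: the figure quoted for 512-bit GDDR5
print(gbs(256, 0.9 * 4))   # 115.2 GB/s: today's GDDR5 on a 256-bit bus
# Hitting 150-200 GB/s on 256 bits implies roughly 4.7-6.25 Gbps chips
print(150 * 8 / 256, 200 * 8 / 256)   # 4.6875 6.25
```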
<div id="attachment_302" style="width: 310px" class="wp-caption alignright"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/11/intelpentiumdsmithfield.jpg" rel="lightbox-0"><img class="size-full wp-image-302" title="intelpentiumdsmithfield" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/intelpentiumdsmithfield.jpg" alt="Dual-die GPU is a good idea, but can TSMC pack the chips like Intel can?" width="300" height="251" /></a><p class="wp-caption-text">Dual-die GPU is a good idea, but can TSMC pack the chips like Intel can?</p></div>
<p>It will be interesting to see whether TSMC can pack two RV870s on the same substrate, or whether that idea will go the way of the dodo. We&#8217;ll see.<br />
Oh yeah, cooling will be vapor chamber, and we should see some really interesting cooling designs. AMD already got its feet wet with vapor-chamber technology (the 4870X2 and Sapphire&#8217;s Atomic 3870 and 4870 all come with vapor-chamber cooling).<br />
According to the story, the codename for the 5800 board is Lil&#8217; Dragon. However, the claim about DX11 being ready in the summer of 2009 is something I would take with a big grain of salt. During PDC&#8217;08, held last week in LA, there was talk that Microsoft will even send Windows 7 to manufacturing without DirectX 11, shipping with 10.1 until DirectX 11 shows up at a later date. My sources compared the situation in 2009 to the one in 2002, when ATI shipped a DirectX 9 part five months before DX9 came out.<br />
As it stands right now, both Nvidia and ATI will have their DX11 parts out ahead of the actual API, giving developers enough time to optimize their respective drivers. Let&#8217;s hope for the best.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/03/some-radeon-5870-rumours-are-bs-some-arent/">Some Radeon 5870 rumours are BS&#8230; some aren&#8217;t ;)</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/03/some-radeon-5870-rumours-are-bs-some-arent/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
