<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; gt206</title>
	<atom:link href="http://www.vrworld.com/tag/gt206/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2015 07:54:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Nvidia prepares GeForce Tree-Hugging Edition</title>
		<link>http://www.vrworld.com/2009/01/14/nvidia-prepares-geforce-tree-hugging-edition/</link>
		<comments>http://www.vrworld.com/2009/01/14/nvidia-prepares-geforce-tree-hugging-edition/#comments</comments>
		<pubDate>Wed, 14 Jan 2009 18:04:23 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[9600gt green]]></category>
		<category><![CDATA[ecology]]></category>
		<category><![CDATA[g94]]></category>
		<category><![CDATA[geforce 9600gt]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[nvidia green]]></category>
		<category><![CDATA[tree huggers]]></category>
		<category><![CDATA[tree hugging]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=932</guid>
		<description><![CDATA[<p>What happens when you can't sell old stock? Sell it as "Green Edition".</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2009/01/14/nvidia-prepares-geforce-tree-hugging-edition/">Nvidia prepares GeForce Tree-Hugging Edition</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>What happens if you sell cars and a new model arrives while your stock is still full of old ones? You put a nice sticker on them, advertise the same standard features as &#8220;new&#8221; and &#8220;only in this special edition&#8221;, offer discounts, and so on. You also put a nice spin on whatever is in the news, and there you go. The world of IT has seen the same thing happen over and over again. Nobody is immune to this basic car-lot strategy; think of AMD renaming Radeon X1K parts into &#8220;HD 2000&#8221; (for low-end and notebook parts), etc.</p>
<p>Here comes Nvidia. The company is manufacturing GeForce 9600 chips on a 55nm process, and the problem is that the competing Radeon 4600 series is, well, selling like hotcakes. The answer: GeForce 9600GT Green Edition (seriously, why the heck is this product not named 9600GE or just 9600 Green&#8230; who comes up with these names, dammit?).</p>
<p>Anyway, the 9600GT Tree-Hugging Edition is nothing more than the same card with its voltage lowered by 0.1V at the same clock. By that logic, Nvidia could brand the GeForce GTX260 and 285 as Green too&#8230; if it weren&#8217;t for all the clock-speed raising needed to keep the performance lead over the Radeon 4870. Still, the GeForce GTX285 is a nice power saver&#8230; if you can call a card that eats 200W a power-saving one.</p>
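<p>For a rough feel of what that 0.1V shave buys, dynamic power scales roughly with frequency times voltage squared (P ~ f*V^2). A minimal Python sketch, assuming a hypothetical ~1.1V stock core voltage (the 0.1V drop is from this article; the baseline voltage is our assumption):</p>
<pre><code># Dynamic power scales roughly as P ~ f * V^2; the clock (f) stays the same.
V_STOCK = 1.10            # assumed stock core voltage in volts (hypothetical)
V_GREEN = V_STOCK - 0.10  # the 0.1V undervolt described above

ratio = (V_GREEN / V_STOCK) ** 2
print(f"Green Edition dynamic power: {ratio:.0%} of stock")  # ~83%
</code></pre>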
<p>The funky part is that nobody reports on the materials used to produce graphics cards. AMD, Intel and Nvidia can say whatever they want, but the vast majority of their chips are produced using lead and other hazardous materials. A certain CPU manufacturer may have announced lead-free production, but only on selected lines, and to build a computer with their lead-free component you still need to buy two or three more chips from them that do contain lead, along with other &#8220;nicely-named&#8221; elements. The IT industry is not green, save for the color of the substrate used on 99.9% of flip-chip (FC) packages.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2009/01/14/nvidia-prepares-geforce-tree-hugging-edition/">Nvidia prepares GeForce Tree-Hugging Edition</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2009/01/14/nvidia-prepares-geforce-tree-hugging-edition/feed/</wfw:commentRss>
		<slash:comments>10</slash:comments>
		</item>
		<item>
		<title>Galaxy&#8217;s 55nm GTX260 shows PALIT&#8217;s engineering skills and a new goal for the corporation</title>
		<link>http://www.vrworld.com/2009/01/06/galaxys-55nm-gtx260-shows-palits-engineering-skills-and-a-new-goal-for-the-corporation/</link>
		<comments>http://www.vrworld.com/2009/01/06/galaxys-55nm-gtx260-shows-palits-engineering-skills-and-a-new-goal-for-the-corporation/#comments</comments>
		<pubDate>Tue, 06 Jan 2009 14:21:02 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[3-slot radeon]]></category>
		<category><![CDATA[55nm geforce]]></category>
		<category><![CDATA[Gainward]]></category>
		<category><![CDATA[Galaxy]]></category>
		<category><![CDATA[geforce gtx260]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[gtx260-216]]></category>
		<category><![CDATA[gtx285]]></category>
		<category><![CDATA[Palit]]></category>
		<category><![CDATA[palit multimedia]]></category>
		<category><![CDATA[radeon 4870x2]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=902</guid>
		<description><![CDATA[<p>PALIT moves to offer unique custom-designed cards for both ATI and Nvidia GPUs, this time a 55nm Galaxy-branded GeForce GTX260-216 card.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2009/01/06/galaxys-55nm-gtx260-shows-palits-engineering-skills-and-a-new-goal-for-the-corporation/">Galaxy&#8217;s 55nm GTX260 shows PALIT&#8217;s engineering skills and a new goal for the corporation</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The world of graphics cards got a breath of fresh air with the appearance of PALIT on the global scene. This OEM giant has set its sights on retail/e-tail dominance, making strategic acquisitions. After acquiring Gainward, XpertVision and Galaxy, the stage was set for an attack on other vendors.</p>
<p>The company boasts a large number of engineers who specialize in creating custom designs for numerous OEMs, and PALIT Multimedia itself is the largest manufacturer of Nvidia graphics cards. If you are wondering what&#8217;s up with this intro, the answer is quite simple: back at Nvision 2008, we spoke with the Palit North American team, they shed some light on the future of the company, and those conversations are now coming to life.</p>
<div id="attachment_903" style="width: 510px" class="wp-caption aligncenter"><img class="size-full wp-image-903" title="galaxy_gtx260_custompcb" src="http://cdn.vrworld.com/wp-content/uploads/2009/01/galaxy_gtx260_custompcb.jpg" alt="View of the custom card, courtesy of Expreview.com" width="500" height="462" /><p class="wp-caption-text">View of the custom card, courtesy of Expreview.com</p></div>
<p>PALIT plans to bring a lot of custom designs to market for both ATI and Nvidia cards, with future designs geared towards price flexibility on one side and the highest performance on the other. The time for custom-built GTX260 1.8GB and GTX285 2GB cards is approaching, but that is not the only change coming: PALIT has paired up with Danger Den to create a special line of GPU cards, covering both ATI and Nvidia parts.</p>
<p>Palit mastered ATI with a custom-built triple-slot Radeon 4870X2 (sold by Gainward and Palit), and the time has come <a href="http://en.expreview.com/2009/01/06/galaxy-unleashes-the-first-non-reference-55nm-geforce-gtx-260.html" target="_blank">for a custom-designed Nvidia card with a blue PCB, this time branded as Galaxy</a> (Galaxy &#8211; blue, Gainward &#8211; red, Palit &#8211; all of the above <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /> ). You can expect a lot of custom designs for high-end cards as well, especially once GDDR5 takes over and enables PALIT to experiment with PCB layout even more (with GDDR3, your hands were tied). Besides PALIT, the only two companies that experiment with custom-built high-end cards are ASUS and EVGA.</p>
<p>All in all, 2009 is shaping up to be a great year for enthusiasts, with some truly unique products coming to market. We&#8217;re looking forward to seeing a custom-designed GTX285.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2009/01/06/galaxys-55nm-gtx260-shows-palits-engineering-skills-and-a-new-goal-for-the-corporation/">Galaxy&#8217;s 55nm GTX260 shows PALIT&#8217;s engineering skills and a new goal for the corporation</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2009/01/06/galaxys-55nm-gtx260-shows-palits-engineering-skills-and-a-new-goal-for-the-corporation/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>GeForce GTX285 on sale, our specs confirmed</title>
		<link>http://www.vrworld.com/2009/01/02/geforce-gtx285-on-sale-our-specs-confirmed/</link>
		<comments>http://www.vrworld.com/2009/01/02/geforce-gtx285-on-sale-our-specs-confirmed/#comments</comments>
		<pubDate>Fri, 02 Jan 2009 11:40:54 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[fx5800]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[Gigabyte]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt206 specs]]></category>
		<category><![CDATA[gtx285]]></category>
		<category><![CDATA[Hong Kong]]></category>
		<category><![CDATA[Quadro CX]]></category>
		<category><![CDATA[quadro fx4800]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=891</guid>
		<description><![CDATA[<p>For the past couple of weeks, I&#8217;ve been closely following what&#8217;s going on with the 55nm refresh from Nvidia. GT200b (GT200-100-B2) series chips began their ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2009/01/02/geforce-gtx285-on-sale-our-specs-confirmed/">GeForce GTX285 on sale, our specs confirmed</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>For the past couple of weeks, I&#8217;ve been closely following what&#8217;s going on with the 55nm refresh from Nvidia. GT200b (GT200-100-B2) series chips began their life in Quadro CX and FX4800/5800 cards, and then started selling as the 55nm GeForce GTX260.</p>
<p>On January 8, 2009, Nvidia will officially introduce the GeForce GTX285 1GB and GTX295 1.8GB cards. Or that was the theory. As usually happens, manufacturers &#8220;accidentally&#8221; started to sell early, and this time the &#8220;honor&#8221; of going on sale first goes to GigaByte.</p>
<p>Thanks to HKEPC, we learned that <a href="http://www.hkepc.com/2178" target="_blank">two Hong Kong shops are selling Gigabyte&#8217;s GTX285</a>. This means GigaByte will be remembered as the first company to offer the GTX285 for sale (first blood for the 55nm GTX260 went to EVGA). Prices range between 410 and 440 USD, but you can expect them to drop further; these boards sell with at least a $30-50 per-store margin for being first (as usual).</p>
<p>GPU-wise, specifications are identical to the Quadro FX 5800: the GPU is clocked at 648 MHz, while the shaders work at 1.48 GHz. GDDR3 memory on the 512-bit bus is clocked at 1.24 GHz, meaning you have 158,976 MB/s, or 155.25 GB/s, to play with. Power consumption is set at 183W, and this was the reason for going with 6+6-pin PEG connectors instead of the usual 8+6 configuration.<br />
While this may be good news for owners of older PSUs without an 8-pin PEG connector, overclockers will turn their heads to enthusiast manufacturers such as BFG, EVGA, PALIT and others for 8+6 versions of the card. A 6+6+PCIe slot configuration can only provide 225W of power, meaning you have 42W for overclocking.</p>
<p>In the days of the original GTX280, TDP was set at 236W, and the 8+6+PCIe slot configuration could provide 300W of juice, leaving 64W of headroom. Still, I may be wrong on this one, since Shamino recently broke the 3DMark world record using a single GeForce GTX 285 card with a 1.1 GHz core and 2 GHz shader clock (you think Peter did that with a 65nm GPU? Think again <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley" /> )</p>
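<p>The numbers above are easy to verify. A minimal Python sketch (the 512-bit bus width is the GTX285&#8217;s known figure, the divide-by-1024 GB/s convention matches the numbers quoted above, and the 75W-per-source power budget follows the PCIe specification):</p>
<pre><code># Memory bandwidth: bytes per transfer * effective transfer rate.
bus_bits = 512            # GTX285 memory bus width
rate_mts = 2 * 1242       # 1.242 GHz GDDR3, double data rate -> MT/s
mb_per_s = (bus_bits // 8) * rate_mts
print(mb_per_s, "MB/s =", mb_per_s / 1024, "GB/s")   # 158976 MB/s = 155.25 GB/s

# Power budget: PCIe slot (75W) plus 75W per 6-pin PEG connector.
budget_w = 75 + 75 + 75   # slot + two 6-pin connectors = 225W
print("overclocking headroom:", budget_w - 183, "W") # 42W over the 183W TDP
</code></pre>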
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2009/01/02/geforce-gtx285-on-sale-our-specs-confirmed/">GeForce GTX285 on sale, our specs confirmed</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2009/01/02/geforce-gtx285-on-sale-our-specs-confirmed/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Leaked GTX295 scores are genuine</title>
		<link>http://www.vrworld.com/2008/12/16/leaked-gtx295-scores-are-genuine/</link>
		<comments>http://www.vrworld.com/2008/12/16/leaked-gtx295-scores-are-genuine/#comments</comments>
		<pubDate>Tue, 16 Dec 2008 22:57:17 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[$499 card]]></category>
		<category><![CDATA[4870X2]]></category>
		<category><![CDATA[ati 2009]]></category>
		<category><![CDATA[Dual GPU]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[graphics cards 2009]]></category>
		<category><![CDATA[gt200 gx2]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[gtx295]]></category>
		<category><![CDATA[multi-gpu]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[nvidia 2009]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[rv770]]></category>
		<category><![CDATA[Santa Clara]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=823</guid>
		<description><![CDATA[<p>A Far Eastern site leaked the first performance results of Nvidia's answer to the awesome 4870X2. The name is GTX295, and it is based on two 55nm GT206 chips and an odd-numbered 1.79 GB of video memory.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/16/leaked-gtx295-scores-are-genuine/">Leaked GTX295 scores are genuine</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>IT168.com is a site located in the Far East, and these guys are known for snatching exclusives from the factory floor. In the case of everybody&#8217;s favorite green parts, the guys from vga.it168.com managed to get their hands on the upcoming GeForce GTX 295 card and ran some preliminary benchmarks. Sadly, IT168.com retracted the story, but it was too late; the Internet caught up with the pictures, which I am bringing here for your viewing pleasure.</p>
<p>From what we can see, this card is an interesting combination of GTX260 and 280. For starters, it is nothing other than two GTX260 boards, except for the shader part:</p>
<ul>
<li>Clock speeds? GTX260 x2.</li>
<li>Number of ROP units? GTX260 x2.</li>
<li>Number of Texture units? GTX260 x2.</li>
<li>Amount of memory? GTX260 x2.</li>
<li>Number of shaders? Well&#8230; GTX280. x2.</li>
</ul>
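<p>A quick sketch of where the odd 1.79 GB figure comes from: doubling the GTX260&#8217;s 896 MB framebuffer, while the shader count borrows the GTX280&#8217;s full complement (the doubling recipe is the leak&#8217;s claim; the per-card figures are the known GTX260/280 specs):</p>
<pre><code># Leaked GTX295 recipe: everything doubled from GTX260, shaders from GTX280.
mem_total_mb = 2 * 896   # two GTX260-style 448-bit framebuffers of 896 MB each
shaders      = 2 * 240   # full GT200 shader count per GPU, as on the GTX280
print(mem_total_mb, "MB")   # 1792 MB, marketed as the odd-numbered 1.79 GB
print(shaders, "shaders")   # 480 in total
</code></pre>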

<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_02.jpg' rel="lightbox[gallery-0]"><img width="500" height="319" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_02.jpg" class="attachment-vw_medium" alt="The board in its final design... HDMI and two DVIs make the end of one GTX295..." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295.jpg' rel="lightbox[gallery-0]"><img width="500" height="319" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295.jpg" class="attachment-vw_medium" alt="...for LEGO lovers, this is how the part is going to look inside." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_03.jpg' rel="lightbox[gallery-0]"><img width="500" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_03-500x420.jpg" class="attachment-vw_medium" alt="Spec comparison, courtesy of IT168.com" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_04.jpg' rel="lightbox[gallery-0]"><img width="500" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_04-500x420.jpg" class="attachment-vw_medium" alt="There were two performance tables, but I will omit the PhysX one... just makes no point. Scores in Dead Space promise a lot of high-res fun." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_05.jpg' rel="lightbox[gallery-0]"><img width="500" height="320" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_gtx295_05.jpg" class="attachment-vw_medium" alt="Power consumption should put all those &quot;55nm eats too much power&quot; rumors to rest." /></a>

<p>So, we have a part that was intended to be a doubled GTX260, but Nvidia saw that, performance-wise, it might not be enough to overtake the Radeon 4870X2 or its upcoming overclocked versions&#8230; thus, the company decided to play it safe and unlock all 240 shaders that each chip possesses.</p>
<p>Smart move or not? Well, the launch date was moved from early December to the first day of CES, but what can you do. This plays perfectly with the decision not to launch a separate 55nm GeForce GTX 270, because of all the 65nm inventory the company currently has. Thus, all those chips will become GTX260-216 and (if any) GTX260-192 parts. The GTX280 will not receive 55nm chips either; instead, the company is preparing the GTX285, a part with the same clocks as the Quadro FX5800: the core at 648 MHz, shaders at 1.48 GHz, and 1GB of memory at an effective 2.48 GHz.</p>
<p>The GTX 295 is going to sit on top of the lineup, followed by the GTX285, the remaining stock of GTX280 and an abundance of GTX260-216 parts (a 55nm/65nm combo). Contrary to some sites that claimed 55nm is a power hog, it is now more than obvious that the 55nm GTX260 consumes less power than the Radeon 4870, and that is no small feat indeed. After all, RV770 features 956 million transistors, while GT206 carries a whopping 46% more, at 1.4 billion.</p>
<p>Now, if only the people in charge of the memory controller hadn&#8217;t made a grave mistake and nuked GDDR5 support for political reasons (back in the days when Nvidia was, well&#8230; feeling quite egotistic), who knows where the power consumption battle would have ended. The kicker is that the memory controller people are furious with the upper echelons of Graphzilla: had the company adopted GDDR5 for the 55nm refresh, the GTX295 could have featured GDDR5 memory, the traces would have been much simpler to route, and there would be none of the &#8220;PCB looking like a maze&#8221; issues that every GDDR3-based design has.</p>
<p>GDDR5 memory is the way forward for this industry, and even though the GTX295 will have excellent performance, the green goblins have only themselves to blame for us never learning what GT206 + GDDR5 could have done. We won&#8217;t know the answer before the GT212 (40nm die-shrink) at the earliest.</p>
<p>All we know is that we again have a heated battle in the $499 range, this time between two dual-GPU parts with more than 1.5 GB of memory each. ATI has the advantage there: 2GB of GDDR5 vs. 1.79GB of GDDR3.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/16/leaked-gtx295-scores-are-genuine/">Leaked GTX295 scores are genuine</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/12/16/leaked-gtx295-scores-are-genuine/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Nvidia to launch 55nm GPUs on Tuesday, December 16th?</title>
		<link>http://www.vrworld.com/2008/12/11/nvidia-to-launch-55nm-gpus-on-tuesday-december-16th/</link>
		<comments>http://www.vrworld.com/2008/12/11/nvidia-to-launch-55nm-gpus-on-tuesday-december-16th/#comments</comments>
		<pubDate>Thu, 11 Dec 2008 01:59:39 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[55nm gt200]]></category>
		<category><![CDATA[c1060]]></category>
		<category><![CDATA[fx5800]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[graphics naming convention]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[GTX260]]></category>
		<category><![CDATA[GTX260-216 1.79 GB]]></category>
		<category><![CDATA[GTX260-216 896MB]]></category>
		<category><![CDATA[GTX280-240 1.0 GB]]></category>
		<category><![CDATA[GTX280-240 2.0GB]]></category>
		<category><![CDATA[gtx290]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[nvidia marketing mess]]></category>
		<category><![CDATA[Pentium 4 Extreme Edition 955]]></category>
		<category><![CDATA[Quadro CX]]></category>
		<category><![CDATA[quadro fx4800]]></category>
		<category><![CDATA[Radeon X1800XTX CrossFire Edition]]></category>
		<category><![CDATA[Tesla]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=783</guid>
		<description><![CDATA[<p>Nvidia prepares a launch of 55nm parts for 2008 - nope, they're not going to wait for CES 2009. At least, that's what I heard from a couple of sources...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/11/nvidia-to-launch-55nm-gpus-on-tuesday-december-16th/">Nvidia to launch 55nm GPUs on Tuesday, December 16th?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>We heard that nVIDIA is preparing something big for next week&#8230; in the form of properly timed events taking place around the globe. Well, there are no confirmations, but there will be the usual suspects: a gathering of press, partners, nVIDIA executives&#8230; and so on. If those rumors are true, the press would get a weekend of testing, and the products would launch either on Tuesday, December 16th, or Thursday, December 18th. Personally, I feel this belongs in the &#8220;no way&#8221; category, but I cannot dismiss the rumor while something keeps whispering from the rumor mill.</p>
<p>Everybody we asked remained coy about the timelines. Nobody wanted to confirm anything, but one thing is certain: according to my sources, nVIDIA is not going to wait for CES 2009 to introduce its 55nm parts. Are these rumors true? Well, like any rumor, take them with a fairly large grain of salt. But I am just the messenger here, don&#8217;t shoot <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /></p>
<p>The 55nm die-shrink, GT206, is already shipping in volume in the form of Quadro CX, FX 4800 and FX 5800 boards. The same story applies to Tesla cards; we saw some papers from system integrators mentioning reduced power consumption for the C1060.</p>
<p>And with the number of leaked pictures floating around showing a GeForce GTX260 with 896 MB of memory, plus talk of a GTX260 with 1.79GB and GTX280 cards with 2GB of GDDR3 memory&#8230; we know at least two partners are seriously contemplating releasing the 1.79GB and 2GB cards, so the battle among nVIDIA partners is definitely heating up.</p>
<p>But will we see a surprise (and most certainly paper) launch as early as this week? Only time can tell. Back in 2005, ATI had a Christmas launch of the Radeon X1800XTX CrossFire Edition, aligned with the launch of the Pentium 4 Extreme Edition 955. Both turned out to be power hogs and performance duds; the former was replaced by the more powerful and elegant X1900XTX, while the 955 was replaced by the worst-overheating CPU of all time, the 965. The stigma around the 965 was so strong that Intel decided to relaunch the number with the Core i7 Extreme 965. We spoke with some Intel folks, and they told me that &#8220;it was time to do the 965 right&#8221;.</p>
<p>Will nVIDIA have more luck with a Christmas launch of 55nm parts, even if it&#8217;s only for show? Only time will tell. For now, there are some promising facts, like the single 6-pin power connector on the Quadro CX and FX 4800 (effectively a GTX 260).</p>
<p>Is this the 55nm line-up?</p>
<ul>
<li>GTX260-216 896MB</li>
<li>GTX260-216 1.79 GB</li>
<li>GTX280-240 1.0 GB</li>
<li>GTX280-240 2.0GB</li>
</ul>
<p>And herein lies the question: why did the company not rename the 55nm parts to GTX270 and GTX290? The clocks are different, the power is different, only the cooling is the same. Well, brace for impact. According to a large nVIDIA partner, it seems the company ran out of time to let its AIB partners print new boxes, so the decision was made to simply keep the old names and still have new stuff to ship. I cannot validate the source, since it sounds, well&#8230; just incredible, but I have to leave this option open.</p>
<p>All in all, 2008 turned into a big mess as far as Nvidia is concerned. The company delivered the world&#8217;s first GPU with hardware FP64 double-precision and sacrificed a hefty part of the die for it (bear in mind that a DP unit takes as much space as three regular ones, and there is one such unit per cluster of eight), but the naming confusion is something this company should not have allowed.</p>
<p>If the company started anew with the GTX 200 series, and had prepared to rename the old parts, such as G80 (GeForce 9300, 9400, 9600) and GT130 (9800GT, 9800GTX+), why oh why oh why didn&#8217;t it go with GTX 240 (for the GTX260-192), GTX 260 (GTX260-216), GTX 270 (55nm part), GTX 280 (GTX 280), GTX 290 (55nm part), and end up with GX2 295 for the dual part? Logic is so easy to find in the car industry, but impossible to find in the IT industry. AMD, Intel, ATI and Nvidia have all done a fine job of crapping on their engineers&#8217; brilliant work.</p>
<p>What&#8217;s wrong with &#8220;AMD Phenom X4 2.5 GHz&#8221;, since that&#8217;s what most people are going to call the product anyway? Core i7 965? Call it the Overclocking Monster Core i7 3.2 GHz and we&#8217;re clear. Radeon 4870? Well, that&#8217;s a good continuation from the 3870; let&#8217;s hope they won&#8217;t mess it up.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/11/nvidia-to-launch-55nm-gpus-on-tuesday-december-16th/">Nvidia to launch 55nm GPUs on Tuesday, December 16th?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/12/11/nvidia-to-launch-55nm-gpus-on-tuesday-december-16th/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Zotac leaks pictures of 55nm GTX260</title>
		<link>http://www.vrworld.com/2008/12/05/zotac-leaks-pictures-of-55nm-gtx260-with-15-gb-of-memory/</link>
		<comments>http://www.vrworld.com/2008/12/05/zotac-leaks-pictures-of-55nm-gtx260-with-15-gb-of-memory/#comments</comments>
		<pubDate>Fri, 05 Dec 2008 11:02:33 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Internet]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[15GB]]></category>
		<category><![CDATA[3 gb]]></category>
		<category><![CDATA[55nm gpu]]></category>
		<category><![CDATA[896 mb]]></category>
		<category><![CDATA[fx5800]]></category>
		<category><![CDATA[GPU power consumption]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[gtx260 overclocking]]></category>
		<category><![CDATA[gtx260-216]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Quadro CX]]></category>
		<category><![CDATA[quadro fx4800]]></category>
		<category><![CDATA[Zotac]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=709</guid>
		<description><![CDATA[<p>First leaked news about GeForce cards with the upcoming 55nm GPU. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/05/zotac-leaks-pictures-of-55nm-gtx260-with-15-gb-of-memory/">Zotac leaks pictures of 55nm GTX260</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>And so it happens&#8230; after several leaks <a href="http://theovalich.wordpress.com/2008/12/04/nvidia-55nm-gt206-reviewed-dramatic-reduction-in-power-consumption/" target="_blank">about the deployment of 55nm GPUs as Quadro CX / FX 4800 / 5800</a>, we finally received some solid 55nm GeForce news from the Far East. Chinese colleagues at <a href="http://www.expreview.com/news/hard/2008-12-05/1228468866d10731.html" target="_blank">Expreview managed to get their hands on a Zotac GTX 260-216 based on the P654 PCB design</a>.</p>
<p style="text-align:left;"> </p>
<div id="attachment_710" style="width: 510px" class="wp-caption aligncenter"><img class="size-full wp-image-710" title="zotac_55nmgtx260216" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/zotac_55nmgtx260216.jpg" alt="55nm chip on a GeForce card" width="500" height="344" /><p class="wp-caption-text">55nm chip on a GeForce card</p></div>
<p style="text-align:left;">This card features Volterra multiphase power regulation (<a href="http://theovalich.wordpress.com/2008/11/24/nvidias-deadly-flaw-and-how-to-fix-it-no-more-gtx280-squealing/" target="_blank">no more Nvidia squealing, yes!</a>), 14 memory chips (instead of standard seven) and 55nm GT200-103-B2 chip. 14 memory chips leaves room for cards with 1.5 GB of GDDR3 memory, and if dual-bank is used, GTX260 can support 3GB memory on the single card.</p>
<p style="text-align:left;">Does this mean GTX295 will feature 3GB of GDDR3 memory? Only time will tell&#8230;</p>
<p style="text-align:left;">Zotac board comes with standard GTX260-216 clocks, but the board features two 6-pin PEG adapters. Since Quadro FX 4800 works with just one, this board just may be overclockers dream. Second PEG adapter provides additional 75W, so the board can consume 225W instead of maximum 150W on Quadro CX/FX4800.</p>
<p style="text-align:left;">When this card hits the market, you can expect overclock it to at least 650 MHz for the GPU and 1500 MHz for the shaders (default clock on FX5800). It will be interesting to see how far can enthusiasts push the 55nm GPU, since this board should result in wonders when cooled with water or something even higher&#8230;</p>
<p style="text-align:left;">As it stands right now, the only card with 55nm GPU featuring all 240 shader units is Quadro FX 5800. It is possible that current yields suck so bad&#8230; until we see GTX &#8220;270&#8221; or GTX280 based on P656 PCB, we know that there aren&#8217;t many 55nm GPUs available for production with all 240 shaders on it.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/05/zotac-leaks-pictures-of-55nm-gtx260-with-15-gb-of-memory/">Zotac leaks pictures of 55nm GTX260</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/12/05/zotac-leaks-pictures-of-55nm-gtx260-with-15-gb-of-memory/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Nvidia 55nm GT206 reviewed, dramatic reduction in power consumption</title>
		<link>http://www.vrworld.com/2008/12/04/nvidia-55nm-gt206-reviewed-dramatic-reduction-in-power-consumption/</link>
		<comments>http://www.vrworld.com/2008/12/04/nvidia-55nm-gt206-reviewed-dramatic-reduction-in-power-consumption/#comments</comments>
		<pubDate>Thu, 04 Dec 2008 17:00:49 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[3d professor]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[55nm gpu]]></category>
		<category><![CDATA[6-pin PCIe]]></category>
		<category><![CDATA[6-pin PEG]]></category>
		<category><![CDATA[fx4800]]></category>
		<category><![CDATA[fx5800]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[gpu-z]]></category>
		<category><![CDATA[gt200-b]]></category>
		<category><![CDATA[gt200-c]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[GTX260]]></category>
		<category><![CDATA[GTX280]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[nvidia 55nm]]></category>
		<category><![CDATA[power consumption]]></category>
		<category><![CDATA[Quadro]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=695</guid>
		<description><![CDATA[<p>A while ago, I wrote a piece stating that Nvidia decided to launch the 55nm GT206 as Quadros first. The reason for that was the ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/04/nvidia-55nm-gt206-reviewed-dramatic-reduction-in-power-consumption/">Nvidia 55nm GT206 reviewed, dramatic reduction in power consumption</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[
<p>A while ago, I wrote a piece stating that <a href="http://theovalich.wordpress.com/2008/11/11/55nm-gt206-gpu-powers-both-gtx290-and-quadro-fx-5800/" target="_blank">Nvidia decided to launch the 55nm GT206 as Quadros first</a>. The reason was <a href="http://www.theinquirer.net/gb/inquirer/news/2008/12/03/nvidia-55nm-parts-update" target="_blank">the number of problems Nvidia had with the die-shrink process</a>, so the company had to roll out GT206 the same way as its old NV30 (the Quadro FX 2000 shipped before the GeForce FX5800), or the way AMD likes to launch its CPUs: commercial parts (Opteron) first, followed by consumer ones (Phenom, Athlon, Turnmeon).</p>
<p>Thus, GT206 (G200 B series; the A series marked 65nm parts, the B series denotes 55nm parts, and the G200 C series should mark the 40nm GPUs) debuted as the Quadro CX, FX 4800 and FX 5800. Quadro CX and FX 4800 are essentially identical parts: a 55nm GPU with 192 shaders (48 shaders and 6 double-precision units are disabled for yield purposes) and 1.5 GB of GDDR3 memory, while the FX 5800 features a combo of the 55nm GPU and 4GB of GDDR3 memory.</p>
<div id="attachment_696" style="width: 508px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_55nmvs65nmgpu.jpg" rel="lightbox-0"><img class="size-full wp-image-696" title="nvidia_55nmvs65nmgpu" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_55nmvs65nmgpu.jpg" alt="55nm vs. 65nm parts, with power consumption and all..." width="498" height="203" /></a><p class="wp-caption-text">55nm vs. 65nm parts, with power consumption and all...</p></div>
<p>Getting back on track with this story, the honor of publishing <a href="http://www.3dprofessor.org/Reviews%20Folder%20Pages/FX4800/FX4800P1.htm" target="_blank">the first review of the GT206 GPU belongs to none other than 3D Professor</a>, who got his hands on a Quadro FX 4800, a part that was silently rolled out yesterday. In his review, the declared maximum power consumption was only 146 Watts. What makes this even more important is that this is the first high-end graphics card in three years to feature just one 6-pin power connector. It seems the GPU manufacturers have finally started to truly work on reducing power consumption while offering more and more performance.</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_55nmvs65nmgpu.jpg' rel="lightbox[gallery-1]"><img width="498" height="203" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_55nmvs65nmgpu.jpg" class="attachment-vw_medium" alt="55nm vs. 65nm parts, with power consumption and all..." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_quadro4800_3dprof_01.jpg' rel="lightbox[gallery-1]"><img width="500" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_quadro4800_3dprof_01-500x420.jpg" class="attachment-vw_medium" alt="The test system over at 3D Professor - Core i7 meets Quadro FX 4800" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_quadro4800_3dprof_02.jpg' rel="lightbox[gallery-1]"><img width="500" height="213" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_quadro4800_3dprof_02.jpg" class="attachment-vw_medium" alt="High-end GPU is there, paired with only one 6-pin power connector..." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_quadro4800_3dprof_03.jpg' rel="lightbox[gallery-1]"><img width="390" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/12/nvidia_quadro4800_3dprof_03-390x420.jpg" class="attachment-vw_medium" alt="GPU-Z 0.2.8 was not able to detect the GPU properly, same case with 0.2.9." /></a>

<p>We managed to get a screenshot from GPU-Z, but as you can see for yourself, GPU-Z does not correctly recognize the Quadro FX 4800 and its 55nm GPU. The only numbers that correlate with Nvidia&#8217;s official product page are the GPU clocks. What makes the situation interesting is that Nvidia declares memory bandwidth of 76.8 GB/s, or 700 MHz DDR. In fact, the 1.5 GB of GDDR3 memory comes clocked at 800 MHz DDR (1.6 GT/s) and has 87.5 GB/s to play with.</p>
<p>Well, there is more at 3D Professor&#8217;s page; enjoy the <a href="http://www.3dprofessor.org/Reviews%20Folder%20Pages/FX4800/FX4800P1.htm" target="_blank">world&#8217;s first review of the Quadro FX 4800</a>, also the first review of a 55nm GPU from Nvidia. Bear in mind this is a professional review of a professional card for professionals, which means no 3DMark score :-(. But the <a href="http://www.3dprofessor.org/Reviews%20Folder%20Pages/FX4800/FX4800P11.htm" target="_blank">PCMark score is here</a>.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/12/04/nvidia-55nm-gt206-reviewed-dramatic-reduction-in-power-consumption/">Nvidia 55nm GT206 reviewed, dramatic reduction in power consumption</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/12/04/nvidia-55nm-gt206-reviewed-dramatic-reduction-in-power-consumption/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>ATI and Nvidia cards for 2009 will be monsters</title>
		<link>http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/</link>
		<comments>http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/#comments</comments>
		<pubDate>Wed, 26 Nov 2008 13:00:29 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[2nd Gen GDDR5]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[65nm]]></category>
		<category><![CDATA[8-pin power]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[GHz]]></category>
		<category><![CDATA[GigaTransfers]]></category>
		<category><![CDATA[GT/s]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[H5GQ1H24AFR]]></category>
		<category><![CDATA[Hynix]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[rv870]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=598</guid>
		<description><![CDATA[<p>As 2008 draws to a close, our thoughts are turning towards 2009 and the incredible hardware that will arrive at our doorsteps. The upcoming year ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/">ATI and Nvidia cards for 2009 will be monsters</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As 2008 draws to a close, our thoughts are turning towards 2009 and the incredible hardware that will arrive at our doorsteps. The upcoming year will bring a breeze of competitiveness, with AMD and Intel fighting for enthusiasts&#8217; hearts and minds in the world of CPUs. GPUs will see a tough three-way battle between AMD GPG (ex-ATI), Nvidia and newcomer Intel with its Larrabee cGPU.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2008/11/hynix_gddr5.jpg" rel="lightbox-0"><img class="alignleft size-full wp-image-599" title="hynix_gddr5" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/hynix_gddr5.jpg" alt="hynix_gddr5" width="300" height="202" /></a>But one of main building block was launched yesterday, in 2008. Hynix introduced a chip with a friendly and &#8220;easily understandable&#8221; name: H5GQ1H24AFR. Even though the name looks like something that ENIGMA would encrypt, we&#8217;re talking about 128MB (1Gbit) memory chip that operates at the clock of 1.75 GHz in QDR mode, resulting in 7 GigaTransfers per second (7 GT/s or 7 &#8220;GHz&#8221;). Currently, ATI Radeon 4870 and 4870X2 come with 900 MHz chips that offer 3.6 GT/s, so we&#8217;re talking about doubling the memory bandwidth per chip.</p>
<p>This means a GPU with a 256-bit memory controller would have roughly 219 GB/s of bandwidth, while a 512-bit memory controller paired with these Hynix chips would yield almost 438 GB/s. These numbers are astonishing and, quite frankly, will open the door to a higher performance jump than previously imagined.</p>
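<p>The arithmetic behind those figures is straightforward. A minimal Python sketch (the 7 GT/s rate is the Hynix chip&#8217;s; the divide-by-1024 GB/s convention is inferred from the figures quoted in these articles):</p>
<pre><code># Peak bandwidth = bus width in bytes * transfer rate.
def bandwidth_gbs(bus_bits, gt_per_s):
    mb_per_s = (bus_bits // 8) * gt_per_s * 1000  # bytes/transfer * MT/s
    return mb_per_s / 1024

print(bandwidth_gbs(256, 7))  # 218.75 -> "roughly 219 GB/s"
print(bandwidth_gbs(512, 7))  # 437.5  -> "almost 438 GB/s"
</code></pre>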
<p>Best of all: thanks to a new manufacturing process, Hynix&#8217;s 2nd Gen GDDR5 chips at 1.75 GHz work on a 1.35V rail and consume less power than the initial 900 MHz (3.6 GT/s) chips. Yep, power consumption will go down while performance per chip doubles. Who says you can&#8217;t have &#8220;the wolves fed and all the sheep accounted for&#8221;, as the old Croatian saying goes (English version: have your cake and eat it too)?</p>
<p>Now you know: Nvidia&#8217;s GT212, the 40nm shrink of the GT200, should consume around 25% of the power eaten by the original 65nm chip, can have double the bandwidth, and gets GDDR5 memory that eats less power than the GDDR3 present on GTX280 cards. As far as ATI is concerned, the upcoming RV870 will be in the same boat.</p>
<p>Can you say the 8-pin power connector is going the way of the dodo? Well, I would say yes, but don&#8217;t forget that GPU makers will use these power savings to clock their cards to absolute physical limits.<br />
H1 2009 will see $299 parts that handle 1920&#215;1200 with 16x AA/AF at 120 fps with no sweat.<br />
If you thought the GTX280 and 4870X2 were incredible&#8230; well, we haven&#8217;t seen anything yet. Now, will game designers finally follow the path set by Race Driver GRID, Unreal Tournament III, Far Cry 2 and Fallout 3 and offer an absolutely fantastic gaming experience without constantly crying that &#8220;hardware isn&#8217;t powerful enough&#8221;? Or at least prove that it really isn&#8217;t.</p>
<p>P.S. Before you ask&#8230; this is still single-ended GDDR5. We are still waiting for Differential GDDR5 to show up&#8230; and of course, we need Differential-GDDR5-capable memory controllers too.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/">ATI and Nvidia cards for 2009 will be monsters</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/26/ati-and-nvidia-cards-for-2009-will-be-monsters/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>100th Story &#8211; ANALYSIS: Why will GDDR5 rule the world?</title>
		<link>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/</link>
		<comments>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/#comments</comments>
		<pubDate>Sat, 22 Nov 2008 21:00:46 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Memory & Storage Space]]></category>
		<category><![CDATA[256 Bit]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[512-bit]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[differential]]></category>
		<category><![CDATA[Differential GDDR5]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[gddr3]]></category>
		<category><![CDATA[gddr4]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[gt200]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[joe macri]]></category>
		<category><![CDATA[larrabee]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[PlayStation 4]]></category>
		<category><![CDATA[Quadro]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[S.E. GDDR5]]></category>
		<category><![CDATA[single-ended]]></category>
		<category><![CDATA[xbox 720]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=534</guid>
		<description><![CDATA[<p>As &#8220;Theo&#8217;s Bright Side of IT&#8221; turns a century (100 stories) after five weeks of existence, it seems only right to write an article about ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/">100th Story- ANALYSIS: Why will GDDR5 rule the world?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As &#8220;Theo&#8217;s Bright Side of IT&#8221; turns a century (100 stories) after five weeks of existence, it seems only right to write an article about a technology that is set to become an everyday word over the next couple of years: GDDR5.<br />
This memory standard will become pervasive during the next four years in many more fields than &#8220;just&#8221; graphics. Just like GDDR3 ended up in all three consoles, network switches, cellphones and even cars and planes, GDDR5 brings a lot of new features that are bound to win customers from different markets.</p>
<p><strong>Background</strong><br />
The reason for the radical ideas inside GDDR5 lies in the fact that ATI was looking at future GPU architectures and concluded that the DRAM industry had to take a radical step in design and offer an interface more flexible than any other memory standard. Then ATI experienced huge issues with R600 and its huge monolithic die. After a lot of internal struggle, the engineering teams agreed that a change of course was necessary for the generations to come: R700/RV770, R800/RV870, R900, R1K&#8230; all of these designs were reshaped and refocused. The current and future goal is a compact and affordable design that does not play Russian roulette with the yields coming from <a title="MAD AMD or GlobalFoundries" href="http://www.tomshardware.com/news/amd-corporate-culture,5206.html" target="_blank">MAD AMD</a>, TSMC&#8217;s and UMC&#8217;s foundries.<br />
Development of this JEDEC-certified standard happened under the lead of Joe Macri, Director of Engineering at AMD and chairman of JEDEC&#8217;s Future DRAM Task Group JC42.3. Joe and his small ex-ATI/AMD GPG team are best known for developing the GDDR3 and GDDR4 memory standards, the former being probably the best thing ever to come out of the former ATI. ATI worked in solitude for a whole year before it sent the initial specification to JEDEC in 2005. Then Hynix, Qimonda and Samsung joined the effort to bring the new memory standard to life. When AMD acquired ATI in 2006, the new management didn&#8217;t touch GDDR5 development and let the team work in peace. The reason was simple: the R&amp;D team had warned management that GDDR5 development was much more difficult than the work done on GDDR3 and GDDR4.<br />
GDDR5 was seen as a path towards next-generation clients: consoles, desktop computing, networking equipment, the HPC arena, handhelds&#8230; all of these roads start with one memory standard. At the time, engineers at ATI saw the successful path GDDR3 took and decided to create a spec that would outlive and outshine it.<br />
In May 2008, AMD finally announced the launch of the GDDR5 memory standard. Soon after, the company revealed its Radeon 4800 series and cards equipped with GDDR5 memory. Given the performance of the Radeon 4870 512MB, 4870 1GB and 4870X2 2GB, it is obvious that the future of graphics (and not just graphics!) belongs to GDDR5 memory.<br />
At its very core, the main difference between LP-DDR (handhelds, PDAs), DDR (one size fits all) and GDDR (graphics) is that capacity is not crucial, but performance is. Low-Power DDR and standard DDR are geared towards enabling as much capacity as possible, while GDDR is usually referred to as the &#8220;Ferrari of the bunch&#8221;.</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_01_gpu-ram-roadmap1.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_01_gpu-ram-roadmap1-750x420.jpg" class="attachment-vw_medium" alt="Roadmap shows that DDR3 will replace DDR2 in low-end market, and GDDR5 will take over GDDR3" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_03_gddr345-diferences.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_03_gddr345-diferences-750x420.jpg" class="attachment-vw_medium" alt="Description of differences between the standards..." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_04_gddr345-diferences.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_04_gddr345-diferences-750x420.jpg" class="attachment-vw_medium" alt="... and continuing with differences." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_05_ram-roadmap.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_05_ram-roadmap-750x420.jpg" class="attachment-vw_medium" alt="In 2010, we should see Differential GDDR5, and then the available bandwidth on GPUs will double over the night." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_06_gddr5_key-features.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_06_gddr5_key-features-750x420.jpg" class="attachment-vw_medium" alt="According to Qimonda, these are key features of GDDR5 standard." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_07_gddr5-lowmedhighfr.jpg' rel="lightbox[gallery-2]"><img width="750" height="372" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_07_gddr5-lowmedhighfr-750x372.jpg" class="attachment-vw_medium" alt="GDDR5 is divided into three different memory types, and clocks and voltage change according to specified role." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_08_gddr5-pcb-tracing_.jpg' rel="lightbox[gallery-2]"><img width="489" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_08_gddr5-pcb-tracing_-489x420.jpg" class="attachment-vw_medium" alt="Note the absence of &quot;combs&quot; on PCB using GDDR5 memory. This will enable cheaper PCBs and higher performance at the same time." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_09_gddr5-overclocking.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_09_gddr5-overclocking-750x420.jpg" class="attachment-vw_medium" alt="GDDR5 is also the first memory standard designed with overclocking in mind." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_10_gddr5-clockingandd.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_10_gddr5-clockingandd-750x420.jpg" class="attachment-vw_medium" alt="The way how clock works...four data transfers over a single clock." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_11_gddr5_x16-mode.jpg' rel="lightbox[gallery-2]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_11_gddr5_x16-mode-750x420.jpg" class="attachment-vw_medium" alt="Clamshell mode - very important feature, will enable doubling the amount of memory in near future." /></a>

<p><strong>DDR, DDR2, DDR3, GDDR3, GDDR4, GDDR5 &#8230; got it?</strong></p>
<p>If you can&#8217;t find your way through the jungle of different memory standards, don&#8217;t worry, you&#8217;re not alone. There is a lot of confusion in the world of DRAM memory and, sadly, no simple explanation. The most important thing to remember is that GDDR and DDR are not the same memory and do not operate on the same data widths.<br />
As you can see, GDDR memory transfers data in 32-bit chunks, while conventional DRAM transfers 64-bit chunks. Previous generations of graphics memory (GDDR2, GDDR3) were loosely based on the DDR2-SDRAM memory standard, while GDDR5 is heading in a new direction.</p>
<p>In fact, the GDDR5 standard splits into two different modes of DRAM operation: Single-Ended and Differential. This is a revolutionary step for GDDR memory, since it was widely expected that single-ended signaling was the only way to go. In a way, you could say that ATI developed GDDR5 and GDDR &#8220;5.5&#8221; or &#8220;6&#8221; at the same time. Single-ended operation is compatible with existing memory standards such as DDR1/2/3 and GDDR3/4 and represents the evolutionary path for DRAM. The first products to market will use single-ended chips, but as soon as Hynix, Qimonda and Samsung start manufacturing differential modules (2009-10), a new era will begin.<br />
Differential clock signaling is a method similar to interconnect buses such as HyperTransport, PCI Express, or Intel&#8217;s QuickPath Interconnect from Core i7. Differential mode introduces a reference clock that the memory cell follows. Instead of using the ground wire as a passive driver, differential mode enables precise communication, and exactly this feature is why available bandwidth is set for a dramatic change during GDDR5&#8217;s lifetime.<br />
The sheer bandwidth gain from one GDDR generation to the next is impressive. GDDR3 peaked at 2.4 Gbps per pin and GDDR4 concluded at 3.2 Gbps. GDDR5 chips split into two camps: single-ended chips will offer between 3.4 and 6.4 Gbps, while differential chips will yield between 5.6 and 12.8 Gbps.</p>
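<p>To put those per-pin rates in card-level terms, here is a minimal Python sketch (the 32-bit chip interface is mentioned above; decimal GB/s is used for simplicity, and the eight-chip count follows from a 256-bit bus):</p>
<pre><code># Per-pin rates (Gbps) scale to chip and card bandwidth via interface width.
# A GDDR5 chip has a 32-bit interface, so a 256-bit bus uses 8 chips.
PIN_RATES_GBPS = {"GDDR3 peak": 2.4, "GDDR4 peak": 3.2,
                  "GDDR5 single-ended max": 6.4, "GDDR5 differential max": 12.8}

for name, gbps in PIN_RATES_GBPS.items():
    per_chip_gbs = gbps * 32 / 8   # 32 pins, 8 bits per byte
    per_card_gbs = per_chip_gbs * 8
    print(f"{name}: {per_chip_gbs:.1f} GB/s per chip, "
          f"{per_card_gbs:.1f} GB/s on a 256-bit card")
</code></pre>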
<p>Besides Differential mode, GDDR5 also introduces an Error Detection protocol based on a progressive algorithm, and it is exactly this feature that enables more aggressive overclocking. Major changes in internal chip design also include a Quarter-Data-Rate command clock, a continuous WRITE clock, CDR-based READ (no read clock/strobe information), DRAM interface training, internal and external VREF, and x16 mode.</p>
<p><strong>Power Saving</strong></p>
<p>One of the most important aspects of GDDR5 is its reduced power consumption. If you take GDDR3 and GDDR5 modules clocked at 1.0 GHz each, the GDDR3 part has to operate at 2.0V, while GDDR5 needs only 1.5V. This results in a 30% reduction in power consumption, while raising the available per-pin bandwidth by almost 100%.</p>
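<p>As a rough sanity check on that figure: in the simple CMOS dynamic-power model, power scales with the square of the voltage, so the voltage drop alone would suggest an even bigger saving. A minimal sketch, assuming dynamic power dominates:</p>
<pre>
# Dynamic power scales roughly with C * V^2 * f; at the same 1.0 GHz
# clock, only the voltage term differs between GDDR3 and GDDR5.
v_gddr3, v_gddr5 = 2.0, 1.5
ideal_saving = 1 - (v_gddr5 / v_gddr3) ** 2
print(f"Ideal voltage-driven saving: {ideal_saving:.0%}")   # 44%
# The more conservative 30% figure quoted above is plausible once
# components that do not scale with V^2 (I/O termination, peripheral
# logic) are factored in.
</pre>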
<p>GDDR5 is designed to operate at low, medium and high frequencies. Low frequency (0.2-1.5 Gbps) calls for low voltage (0.8-1.0V), while medium (1.0-3.0 Gbps) and high (2.5-5.0 Gbps) frequencies call for higher voltage, in the 1.4-1.6V range.<br />
High frequency is the only mode that utilizes CDR (Clock Data Recovery) circuitry, while medium and low frequencies use the conventional mode (RDQS with preamble).<br />
Seeing power drop below the levels of FB-DIMM DDR2-800 only makes us wonder what would happen if CPU manufacturers implemented Differential GDDR5 as system memory. Would we really need gigabytes of system memory if that memory had higher bandwidth than the L2 and L3 caches? Intel is looking in a similar direction, and is considering <a href="http://www.tomshardware.com/news/Intel-DRAM-CPU,5697.html" target="_blank">replacing SRAM cache with DRAM technology</a>.</p>
<p>Sadly, the changes that would be required in the memory controller are such that the only place GDDR5 will see the light of day as system memory is in closed designs, such as consoles, set-top boxes and so on. There is hope that some future AMD Fusion designs might implement GDDR5 support, but it is too early to tell.</p>
<p><strong>How to lower the cost of manufacturing?</strong></p>
<p><strong><br />
</strong>During the design stages of GDDR5 memory, one of the main concerns was how to simplify tracing on the PCB (Printed Circuit Board). On current GDDR3 and GDDR4 graphics boards, synchronization issues are solved by using traces of the same length from every pin on the DRAM chip to the GPU. This results in quite a messy design, with traces going everywhere.</p>
<p>If you&#8217;re a PCB designer, there is one thing you don&#8217;t want: complex routing of traces. It eventually leads to more PCB layers, higher cost and, most importantly, more ways for *something* to go wrong. To keep signal integrity while simplifying the routing, GDDR5 makes several optimizations: every trace gets increased isolation from electromagnetic interference (EMI), while the Asymmetrical Interface compensates for differences in trace length.<br />
As you can see in the picture above, GDDR5 PCB routing is much cleaner than GDDR3&#8217;s; compare a Radeon 4850 to a Radeon 4870, for instance. The price was additional resistors around the memory chips, but the second generation of GDDR5 graphics cards should feature an even cleaner design.</p>
<p><strong>Memory designed for overclocking?</strong></p>
<p><strong><br />
</strong>With all these power-saving and performance-related tweaks, it is obvious that this memory was designed with overclocking in mind. Just looking at the slides from AMD and Qimonda confirmed as much to us.</p>
<p>The GDDR5 specification delivers a combination of three technologies: Adaptive Training with CDR, Error Detection, and an on-die thermal sensor. Adaptive Training is combined with the Error Detection algorithm and enables the GPU&#8217;s memory controller to keep thermals on a tight leash. If you want to overclock the memory, the clock will go up until the error-detection algorithm hits a thermal wall.</p>
<p>Error Detection works with both read and write instructions, offering real-time repeat and resend operations. Thanks to asynchronous clocks, the memory controller can control the flow of data and resend bits of information that fail to arrive in time (or arrive corrupted). The Error Detection algorithm will try to avoid a crash until the number of errors passes 1 error per second.<br />
In order to maintain signal stability, additional resistors were placed inside and outside the memory chip (take a look at the back of a 4870 and compare it to a 4850). AMD also addressed an issue spotted on GDDR4: overclocking of GDDR4 memory was limited because the DRAM timing loop would run out of power. GDDR5 changes the way the clock is generated and maintained, so the memory chip should never starve for power. No timing-loop issue means no memory freeze. According to our sources, how high GDDR5 memory ultimately clocks depends on the manufacturing process (used by the chip manufacturer) and the amount of voltage provided to the chip.<br />
But the main difference between clocking GDDR3 and GDDR5 is that PVT (Process, Voltage, and Temperature) variation is no longer an unbreakable barrier. Now, it is the GPU&#8217;s memory controller that will keep (or fail to keep) the data flowing.</p>
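<p>In other words, the stable memory clock becomes the output of a feedback loop rather than a hard limit. Here is a minimal sketch of that loop in Python; set_memory_clock() and read_error_rate() are hypothetical stand-ins for driver internals, not a real API:</p>
<pre>
# Raise the memory clock step by step until the error-detection
# mechanism reports more than 1 error/sec, then settle on the last
# known-good clock. Both helper functions are hypothetical.
def find_stable_clock(base_mhz, step_mhz=25, max_errors_per_sec=1.0):
    clock = base_mhz
    for _ in range(100):                   # safety cap on iterations
        set_memory_clock(clock + step_mhz)
        if read_error_rate() > max_errors_per_sec:
            break                          # error wall hit, stop pushing
        clock += step_mhz
    set_memory_clock(clock)                # fall back to last known-good
    return clock
</pre>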
<p><strong>Coalition between the GPU and the RAM</strong></p>
<p>Unlike with previous memory standards, the memory controller has to support ALL of the GDDR5 features in order to extract the best possible performance. This especially goes for the Asymmetrical Interface, since the WRITE and READ clocks are programmed by the GPU. Advanced clock training calibrates the GPU-RAM signals; without this feature, you cannot count on high clocks or overclocking headroom. With four bits of data being sent per clock (instead of two), the memory controller is exposed to a lot of stress and has to be able to do error checking on the fly. Any misses on the GPU side will lead to lost cycles, and in turn to instability.<br />
A good example is the memory controller tucked inside the Radeon 4800 series. This 256-bit controller supports the DDR2, DDR3, GDDR3, GDDR4 and GDDR5 memory standards, and it is tuned to the point where the bandwidth and clock limitations sit on the side of the SGRAM chips: if the fastest GDDR5 memory chips were available today, you could build a 4800-series card with them. This also opens up revenue opportunities for Hynix, Samsung and Qimonda: all three manufacturers could earn a small fortune by selling gold-sample memory chips to premium graphics card manufacturers.<br />
When it comes to Nvidia, the answer to the question of why the company went with GDDR3 for the GTX 200 series of cards is not a simple one: according to our sources, the GT200 chip supports GDDR3 and GDDR4, but the engineers ran out of time to adapt the memory controller to the asymmetrical interface (advanced interface training), a key feature for stable operation. But if Nvidia sticks with a 512-bit memory controller for the NV70 generation (GT300?), we should see Nvidia GPUs featuring bandwidth in excess of 300 GB/s, more than twice what is available today (the 512-bit case in the sketch above). There is also the question of what Nvidia will do with its two refreshes, the 55nm GT206 and the 40nm GT212 chips.<br />
Intel is not giving out any details on Larrabee&#8217;s architecture, but we know for sure that its 1024-bit internal/512-bit external memory controller will support GDDR5 and its advanced features. Given the late-2009 release, support for Differential mode should be a given. When it comes to christening the new memory, Larrabee with GDDR5 will debut this winter, with the <a href="http://www.tomshardware.com/news/intel-larrabee-graphics,5847.html" target="_blank">first graphics cards delivered to Dreamworks</a>.</p>
<p><strong>Capacity &#8211; just how big can we go?<br />
</strong>Now that you&#8217;ve seen all of the performance elements, it is time to write about capacity. While Joe told us that GDDR should be considered &#8220;the Ferrari of the DDR world&#8221;, GDDR5 also introduces x16 mode. To kill any potential confusion right away: this mode has nothing to do with PCI Express x16.</p>
<p>As you can see on the slide above, Clamshell mode is introduced to enable two memory chips to sit on a single x32 node. If we take the ATI Radeon 4800 series, the GPU features eight x32 I/O controllers. In theory, this tops out at 16 memory chips per GPU, or 1GB of onboard memory using conventional 512Mbit chips. With x16 mode, a card designer can put down up to 32 chips (good luck finding the board space), or 2GB of memory with 512Mbit (64MB) chips. With 1Gbit (128MB) chips, this number grows to 4GB. Qimonda is expected to ship 2Gbit (256MB) chips during 2009, enabling 8GB of on-board memory.</p>
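<p>The capacity arithmetic from the paragraph above is easy to verify; a quick sketch using the article&#8217;s figures (16 chips without clamshell, 32 with):</p>
<pre>
# Capacity = chip count x chip density (Mbit) / 8 bits-per-byte.
for label, mbit in (("512Mbit", 512), ("1Gbit", 1024), ("2Gbit", 2048)):
    gb_x32 = 16 * mbit / 8 / 1024   # up to 16 chips in standard x32 mode
    gb_x16 = 32 * mbit / 8 / 1024   # up to 32 chips in x16 clamshell mode
    print(f"{label}: {gb_x32:.0f} GB standard, {gb_x16:.0f} GB clamshell")
</pre>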
<p>This number is increasingly important for the GPGPU market, which wants as much on-board memory as possible. Bear in mind that the Tesla 10-Series already features 4GB of GDDR3 memory, and some contacts we&#8217;ve talked with claim they would fill even more.</p>
<p>Eight gigabytes of video memory may sound like too much for the consumer space, but if the world is to usher in the era of <a href="http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html" target="_blank">ray tracing</a>, we have to make room for gigabytes of scene data. Jules Urbach of JulesWorld explained that he is working with datasets bigger than 300 GB, and has to resort to using AMD&#8217;s CAL (Compute Abstraction Layer) to fit all the data inside 1GB per GPU (Jules uses R700 boards).</p>
<p><strong>Conclusion</strong></p>
<p>GDDR5 ramped up during 2008, and we expect the technology to become the standard for GPU add-in boards in 2009. ATI will migrate fully to GDDR5, and so will Nvidia. With Intel joining the pack with Larrabee, volumes should be high enough to drive the cost of GDDR5 into budget range for the next generation of game consoles, starting in the 2010-11 timeframe.<br />
This is by far the most developed and best-thought-out memory standard yet, one that lacks the childhood sicknesses of DDR2 and DDR3. GDDR5 is coming to market as a complete product and offers a solid roadmap for the future, with Differential GDDR5 even surpassing XDR2 DRAM in the quest for the highest possible per-pin bandwidth.<br />
By that time, Differential GDDR5 should be cheaper than GDDR3 is today.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/">100th Story- ANALYSIS: Why will GDDR5 rule the world?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>TSMC introduces 40nm volume production, advances in front of Intel</title>
		<link>http://www.vrworld.com/2008/11/18/tsmc-introduces-40nm-volume-production-advances-in-front-of-intel/</link>
		<comments>http://www.vrworld.com/2008/11/18/tsmc-introduces-40nm-volume-production-advances-in-front-of-intel/#comments</comments>
		<pubDate>Tue, 18 Nov 2008 09:56:21 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[2009]]></category>
		<category><![CDATA[2010]]></category>
		<category><![CDATA[2011]]></category>
		<category><![CDATA[22nm]]></category>
		<category><![CDATA[28nm]]></category>
		<category><![CDATA[32nm]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[chip]]></category>
		<category><![CDATA[CPU]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[gt200]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[larabee]]></category>
		<category><![CDATA[manufacturing]]></category>
		<category><![CDATA[TSMC]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=470</guid>
		<description><![CDATA[<p>A while ago, I spoke with my sources at TSMC, who were quite determined to take the lead in the field of chip ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/18/tsmc-introduces-40nm-volume-production-advances-in-front-of-intel/">TSMC introduces 40nm volume production, advances in front of Intel</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>A while ago, I spoke with my sources at TSMC, who were quite determined to take the lead in the field of chip manufacturing. The heads of this Taiwanese giant decided to invest more than 10 billion USD in order to become the world&#8217;s most advanced manufacturer, and their roadmap is more aggressive than anyone else&#8217;s in the industry.</p>
<p>The results of that investment are slowly coming to life, and as of today, TSMC has a more advanced manufacturing process than any other competitor in the foundry business. Intel will argue for its (admittedly very important) hafnium-based high-k materials, but ever since I became a journalist, Intel has touted its manufacturing capabilities and its ability to go smaller &#8220;sooner than anyone else&#8221;. Well, that is about to change.<br />
For instance, Intel will introduce its 32nm process in late 2009, with mass production in 2010. Due to the separation between AMD and &#8220;MAD AMD&#8221; (The Foundry Company), the foundry will introduce 32nm (bulkPG, not for CPUs) only at the end of 2009, with 2010 being the year of mass production. If all goes well, that is.<br />
During that same time, TSMC will introduce 32nm (Q4&#8217;2009) and 28nm (Q2&#8217;2010), with 22nm debuting in the first half of 2011. This is a very, very aggressive roadmap, one that will give Nvidia and ATI leverage in the development of graphics parts.</p>
<p>This also does not sound good for Intel&#8217;s own Larrabee, which will rely on Intel&#8217;s own manufacturing capabilities. While this was viewed as a huge strength in previous years, TSMC may actually give AMD and Nvidia more than a fighting chance: a winning cost-per-die ratio.</p>
<p>As a demonstration, my source gave me a comparison using Nvidia&#8217;s GT200 chip. This estimated comparison gave me shivers, because at 28nm (available in a bit more than a year), the die size for 1.4 billion transistors would drop to an incredible 160mm2. Of course, don&#8217;t expect ATI or Nvidia to stand still; they will keep making big GPUs and putting more and more core logic inside.</p>
<p><strong>GT200 die through different TSMC manufacturing processes (&#8220;wild&#8221; estimate):</strong><br />
65nm: 576 mm2 (GT200)<br />
55nm: 470 mm2 (GT206)<br />
40nm: 320 mm2 (GT212)<br />
32nm: 220 mm2 (die-shrink estimate)<br />
28nm: 150 mm2 (die-shrink estimate)</p>
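<p>Those die-size estimates imply a steady jump in transistor density; a quick back-of-the-envelope sketch using GT200&#8217;s 1.4 billion transistors and the numbers above:</p>
<pre>
# Transistor density implied by the (wild) die-size estimates above.
transistors = 1.4e9   # GT200
for node, mm2 in (("65nm", 576), ("55nm", 470), ("40nm", 320),
                  ("32nm", 220), ("28nm", 150)):
    print(f"{node}: {transistors / mm2 / 1e6:.1f}M transistors/mm2")
# Roughly 2.4M/mm2 at 65nm, rising to about 9.3M/mm2 at 28nm.
</pre>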
<p>Given this table, we can see that if Nvidia wanted to keep a 500mm2 die size, it could manufacture a chip with 500 shader processors in 40nm, 700 in 32nm, or a massive 1200 in 28nm. But don&#8217;t expect either ATI or Nvidia to scale their GPUs linearly.<br />
What I personally expect is a 512-bit bus and a GDDR5 memory controller for both companies (regardless of what ATI is saying now), plus increased capabilities in the shaders themselves. Currently, ATI supports the FP64 double-precision format through its 80 shader lines (e.g. in RV770, you have 80 shader lines with 10 units in each; a line can output either one FP64 double-precision result or ten FP32 single-precision results). Nvidia features one FP64 double-precision unit for every eight of its regular shader cores.<br />
With 32nm available in 2009 and 28nm available a year later, it is easy to predict that we will see a tremendous increase in processing power, not through sheer shader count, but rather through increasing the capabilities of the existing shaders.<br />
My $0.02 is that we will see 4-10 TFLOPS parts coming in the next 24 months, essentially increasing computational power by anywhere between four and ten times, all thanks to the massive effort put in by TSMC; a rough sanity check on that range follows below.<br />
For now, Nvidia can announce the mass production of its Tegra mobile SoC chips and its notebook lineup, while ATI can launch its own notebook line-up. The 40nm high-performance process arrives in Q1&#8217;2009, and you can expect GT212 and RV870 coming &#8220;your way in May&#8221;.</p>
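<p>A minimal sketch of that projection, assuming GT200-style shaders (MAD + MUL, i.e. three flops per clock) and an assumed ~1.5 GHz shader clock, with the shader counts from the paragraphs above:</p>
<pre>
# Peak throughput = shaders x flops/clock x clock (GHz) / 1000 -> TFLOPS.
# 3 flops/clock matches GT200-style shaders; 1.5 GHz is an assumption.
def tflops(shaders, flops_per_clock=3, clock_ghz=1.5):
    return shaders * flops_per_clock * clock_ghz / 1000

for shaders in (240, 500, 700, 1200):   # GT200 today, then 40/32/28nm
    print(f"{shaders} shaders: ~{tflops(shaders):.1f} TFLOPS")
# Prints ~1.1, ~2.2, ~3.1 and ~5.4 TFLOPS; the rest of the predicted
# 4-10x gain would have to come from beefier per-shader capabilities,
# as argued above.
</pre>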
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/18/tsmc-introduces-40nm-volume-production-advances-in-front-of-intel/">TSMC introduces 40nm volume production, advances in front of Intel</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/18/tsmc-introduces-40nm-volume-production-advances-in-front-of-intel/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>UPDATE: 55nm GT206 GPU powers both GTX290 and Quadro FX 5800</title>
		<link>http://www.vrworld.com/2008/11/11/55nm-gt206-gpu-powers-both-gtx290-and-quadro-fx-5800/</link>
		<comments>http://www.vrworld.com/2008/11/11/55nm-gt206-gpu-powers-both-gtx290-and-quadro-fx-5800/#comments</comments>
		<pubDate>Tue, 11 Nov 2008 02:42:16 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[240 shaders]]></category>
		<category><![CDATA[4800]]></category>
		<category><![CDATA[4890]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[a3]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[christmas]]></category>
		<category><![CDATA[g200]]></category>
		<category><![CDATA[g200-200]]></category>
		<category><![CDATA[g200-202]]></category>
		<category><![CDATA[g200-300]]></category>
		<category><![CDATA[g200-302]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[gt200]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[gtx 270]]></category>
		<category><![CDATA[gtx 290]]></category>
		<category><![CDATA[GTX270]]></category>
		<category><![CDATA[gtx290]]></category>
		<category><![CDATA[higher clock]]></category>
		<category><![CDATA[mod]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[rv780]]></category>
		<category><![CDATA[rv790]]></category>
		<category><![CDATA[shopping]]></category>
		<category><![CDATA[Thanksgiving]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=365</guid>
		<description><![CDATA[<p>The honor of being the first product powered by 55nm G200-302 chip (a.k.a. GT206/212) went to Quadro FX 4800/5800, products that launched with a lot ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/11/55nm-gt206-gpu-powers-both-gtx290-and-quadro-fx-5800/">UPDATE: 55nm GT206 GPU powers both GTX290 and Quadro FX 5800</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The honor of being the first product powered by the 55nm G200-302 chip (a.k.a. GT206/212) went to the Quadro FX 4800/5800, products that launched with <a href="http://theovalich.wordpress.com/2008/11/10/nvidia-officially-unveils-civil-cx-and-fx5800-monster/" target="_blank">a lot of fanfare earlier today</a>.</p>
<p>Besides the Quadro FX 4800 and 5800, the new 55nm GPU will also power the GeForce GTX 270 and 290. Essentially, we&#8217;re talking about the same parts: the Quadro FX 4800 is nothing more than a GTX270 with double the video memory, while the Quadro FX 5800 is equal to a GTX290, but with four times the video memory. ATI is not sleeping either, as the company is preparing the RV790, a beefed-up version of the existing RV770 chip.</p>
<p>The G200-302 Rev A2 entered manufacturing back in September, and the first parts are now coming out of mass production. The chip features a die size of 470mm2, 107mm2 less than the original G200. This just goes to show the vast difference between 65nm and 55nm: if Nvidia had had the balls to go with a 55nm chip back in May, the GTX260/280 parts could have been priced far more aggressively and offered much more flexibility, but we can&#8217;t cry over spilt milk. The 55nm part is here now, and it will consume much less power than the 65nm one.</p>
<p>The 55nm GPU consumes dramatically less power than the 65nm one, and the difference is massive. When I did quick power checks, a GTX280 at 650/1500/2200 would eat around 266W, while the default-clocked GTX280 (600/1300/2200) was spec&#8217;d at 238W.</p>
<p>Well, the 55nm GPU will eat around 170W at the same 650/1500/2200 clocks, roughly a third less, meaning that the GTX290 just got almost 100W of power to play with. If you&#8217;re into overclocking, you can now start dreaming about clocking those 240 shaders into the 1.7-1.8 GHz range (perhaps even 2.0 GHz if your water-cooling setup is powerful enough) and achieving massive performance gains, all while consuming <strong>less</strong> power than a stock-clocked GTX280.</p>
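<p>A quick check of that arithmetic, using the article&#8217;s own estimates:</p>
<pre>
# Power figures quoted above (our estimates, not official TDPs).
p_65nm_oc    = 266   # 65nm GTX280 at 650/1500/2200
p_65nm_stock = 238   # 65nm GTX280 at stock 600/1300/2200
p_55nm_oc    = 170   # 55nm part at 650/1500/2200
print(f"Saving at equal clocks: {p_65nm_oc - p_55nm_oc} W "
      f"({1 - p_55nm_oc / p_65nm_oc:.0%})")               # 96 W, 36%
print(f"Margin vs stock GTX280: {p_65nm_stock - p_55nm_oc} W")   # 68 W
</pre>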
<p>As far as the naming convention goes, Nvidia calls its chips NVxx (we&#8217;re at NV60 right now) or Gxx/Gxxx internally, while partners get the GT200-XXX name. But at the end of the day, the number that matters is the one printed on the chip.<br />
The GTX 260 and 280 came with G200-200 and G200-300 chips respectively, while the GTX270 and 290 will feature G206-202 and G206-302 chips. Essentially, there is no difference between the two, save for the hardwired part that decides how many shaders a given chip has enabled. If you&#8217;re brave enough, you&#8217;ll pop off the massive heatsink and play around with the resistors. Who knows, perhaps you can enable all 240 shaders on a GTX260/270&#8230; or maybe not.<br />
In any case, we can&#8217;t wait for these new babies to show up. The FX4800, FX5800, GTX270 and GTX290 are all coming to market very, very soon.<br />
My personal take is that Nvidia will try to steal the limelight from the official Core i7 launch on 11/17 and ship the GTX270/290 to reviewers, trying to tell them that they&#8217;re still on top. All of ATI&#8217;s hopes lie with the upcoming 4890. But as things stand, Nvidia does not offer a compelling $199 experience, and this is where ATI will take them to the cleaners.</p>
<p>Unless, of course, you see a GTX260-216 at a completely new price point, and a GTX270 costing just $50 more, dropping to $199 for Christmas. A crazy scenario, but competition brings out the best for us consumers.</p>
<p><strong>UPDATE: </strong>The picture that originally accompanied this story did not feature a GT206 chip, so I have removed it. The rest of the info remains valid <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/11/55nm-gt206-gpu-powers-both-gtx290-and-quadro-fx-5800/">UPDATE: 55nm GT206 GPU powers both GTX290 and Quadro FX 5800</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/11/55nm-gt206-gpu-powers-both-gtx290-and-quadro-fx-5800/feed/</wfw:commentRss>
		<slash:comments>15</slash:comments>
		</item>
	</channel>
</rss>
