<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; FirePro</title>
	<atom:link href="http://www.vrworld.com/tag/firepro/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2015 07:54:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>AMD Launches W8100, Cuts GPUs Prices 50% for First GPU</title>
		<link>http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/</link>
		<comments>http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/#comments</comments>
		<pubDate>Tue, 24 Jun 2014 03:01:28 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Audio/Video]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[AMD FirePro]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[FirePro W8100]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Hawaii]]></category>
		<category><![CDATA[K20]]></category>
		<category><![CDATA[K5000]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[OpenCL]]></category>
		<category><![CDATA[Professional]]></category>
		<category><![CDATA[W8100]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=36140</guid>
		<description><![CDATA[<p>Today was an interesting day in AMD land. First the company announced its latest GPU, the FirePro W8100, and later in the day it announced ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/">AMD Launches W8100, Cuts GPUs Prices 50% for First GPU</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="980" height="431" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_9801.jpg" class="attachment-post-thumbnail wp-post-image" alt="W8100" /></p><p>Today was an interesting day in AMD land. First the company <a href="http://www.amd.com/en-us/press-releases/Pages/new-amd-professional-2014jun23.aspx" target="_blank">announced its latest GPU</a>, the FirePro W8100, and later in the day it announced a program that lets you buy one of its latest GPUs at a whopping 50% off, as long as it&#8217;s your first one; every subsequent one will be full price. But first, you have to go through <a href="http://www.fireprographics.com/experience/us/apply.asp" target="_blank">an &#8216;approval process&#8217;</a>. Now, let&#8217;s get back to the new GPU AMD just announced. What is it, exactly? The FirePro W8100 is part of AMD&#8217;s professional line of graphics cards, branded FirePro.</p>
<p>So, looking at the rough specs, we can see that the W8100 delivers over 2 TFLOPs of double precision, which is actually less than what <a title="Intel’s New Knight’s Landing Xeon Phi Combines Omni Scale Fabric with HMC" href="http://www.brightsideofnews.com/2014/06/23/intel-new-knights-landing-combines-omni-scale-fabric-hmc/" target="_blank">Intel&#8217;s new Knight&#8217;s Landing</a>, also announced today, is capable of delivering. It does, however, also deliver over 4 TFLOPs of single precision, which is quite impressive since it&#8217;s double the 2.1 TFLOPs of Nvidia&#8217;s K5000. This GPU is effectively a professional version of <a title="AMD Radeon R9 290: Blowing the Doors off the Competition" href="http://www.brightsideofnews.com/2013/11/08/amd-radeon-r9-290-blowing-the-doors-off-the-competition/" target="_blank">AMD&#8217;s R9 290 GPU, which we reviewed</a> and found to be a very impressive GPU for the money, and it still is. What makes this GPU different, however, is that it can drive four 4K displays simultaneously and has 8 GB of GDDR5 memory as opposed to 4 GB, making better use of the 512-bit memory bus on the Hawaii Pro GPU inside. This is still less than the six 4K displays the W9100 supports. Realistically you won&#8217;t be doing any gaming on these 4K displays, but it doesn&#8217;t seem outrageous to think a professional could be putting 32 million pixels to work. AMD accomplishes this by putting four DisplayPort 1.2 connectors on the back of the card, as you can see above and below.</p>
<div id="attachment_36145" style="width: 990px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_6_9801.jpg" rel="lightbox-0"><img class="size-full wp-image-36145" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_6_9801.jpg" alt="W8100" width="980" height="426" /></a><p class="wp-caption-text">W8100 Specifications, current and future</p></div>
<p>As you can see from the above specs, AMD has decided to refer to the GPU as an &#8216;engine&#8217; and says it&#8217;s clocked at 824 MHz, a solid 123 MHz less than the R9 290 gaming graphics card that it mimics. It does, however, have double the memory of the R9 290, which is why it is capable of driving up to four 4K displays. AMD also powers it with two 6-pin power connectors, drawing 220W, and supports PCIe 3.0; everything is pretty standard here. It also supports OpenCL 1.2 and already has OpenCL 2.0 support baked in, which is good to know for anyone planning to buy a &#8216;future-proof&#8217; GPU. It also supports OpenGL 4.3 and will support OpenGL 4.4, which isn&#8217;t much of a feat, as most of that support will be accomplished through a driver update. What is interesting, though, is that it supports DirectX 11.2, but AMD makes no mention of future compatibility with DirectX 12 at all, which seems like an odd omission. It isn&#8217;t anything shocking, since this graphics card is based on a GPU that was announced in 2013, but it is still interesting that AMD has nothing to say there.</p>
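<p>The TFLOPs figures above line up with the 824 MHz clock if you assume Hawaii Pro&#8217;s published count of 2560 stream processors; a back-of-the-envelope sketch (the shader count and half-rate double precision are our assumptions from public Hawaii specs, not from AMD&#8217;s slides):</p>

```python
# Peak throughput: shaders * 2 ops/cycle (fused multiply-add) * clock (GHz) -> TFLOPs
# The 2560 stream-processor count is assumed from Hawaii Pro's public specs.
def peak_tflops(shaders, clock_ghz, ops_per_cycle=2):
    return shaders * ops_per_cycle * clock_ghz / 1000.0

single = peak_tflops(2560, 0.824)  # ~4.22 TFLOPs single precision
double = single / 2                # half-rate double precision: ~2.11 TFLOPs
```

<p>That arithmetic matches the &#8220;over 4 TFLOPs&#8221; single-precision and &#8220;over 2 TFLOPs&#8221; double-precision figures quoted above.</p>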
<p>AMD also couldn&#8217;t help but compare itself to Nvidia&#8217;s Quadro K5000, Nvidia&#8217;s older professional workstation GPU (Nvidia is currently on the K6000), so naturally, in AMD&#8217;s comparison they basically spank Nvidia. The W8100 is $2,499, which makes it more comparable in price with the K5000 than with the K6000, which sells for a whopping $4,999 and is a better match for AMD&#8217;s W9100.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_2_9801.jpg" rel="lightbox-1"><img class="size-full wp-image-36141" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_2_9801.jpg" alt="W8100" width="980" height="484" /></a></p>
<p>AMD also draws a comparison against <a title="Nvidia Maximus 2 Reviewed – The Great One" href="http://www.brightsideofnews.com/2013/09/12/nvidia-maximus-2-reviewed-the-great-one/" target="_blank">Nvidia&#8217;s Maximus 2 development platform, which we also reviewed</a>; that solution is absolutely bulletproof, but also incredibly expensive. Here AMD claims that it delivers more performance while doing it with fewer GPUs and comparable memory. However, AMD doesn&#8217;t talk about the development scenarios Maximus enables or how good its professional drivers are compared to Nvidia&#8217;s. The Maximus 2 platform (and subsequent versions) is all about stability and reliability, not necessarily performance, as we learned in our review. So until AMD can put these GPUs in our hands and show us that its GPUs and platforms are as stable as Nvidia&#8217;s in the same applications, we&#8217;re not entirely sure AMD can draw these comparisons. Yes, fewer GPUs will consume less power, but power isn&#8217;t always as much of a concern in professional graphics scenarios.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_3_9801.jpg" rel="lightbox-2"><img class="aligncenter wp-image-36142 size-full" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_3_9801.jpg" alt="W8100" width="980" height="404" /></a></p>
<p>&nbsp;</p>
<p>Last but not least, AMD&#8217;s W8100 was benchmarked in a ton of AMD-favorable benchmarks and applications (mostly OpenCL-heavy), and it obviously won pretty convincingly. However, the most interesting benchmark to me, and one that isn&#8217;t cherry-picked by AMD, was the DaVinci Resolve performance benchmark showing scaling in Resolve using multiple W8100s. In that benchmark AMD shows almost 100% scaling with DaVinci Resolve, which may be incredibly attractive to professionals who do lots of heavy post-processing.</p>
<div id="attachment_36147" style="width: 990px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_Resolve_9801.jpg" rel="lightbox-3"><img class="size-full wp-image-36147" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_Resolve_9801.jpg" alt="W8100" width="980" height="479" /></a><p class="wp-caption-text">DaVinci Resolve performance scaling with W8100</p></div>
<p>Also, in regards to <a href="http://links.em.experience.amd.com/servlet/MailView?ms=MjEwMzQ4MTES1&amp;r=NzMzNTE5MTkwMzgS1&amp;j=MzQxMjc5MjU0S0&amp;mt=1&amp;rt=0" target="_blank">AMD&#8217;s 50% off promotion</a>, only specific GPUs are actually eligible, including the W9100. And frankly, if you&#8217;re going to use the 50% off promotion, you might as well use it on AMD&#8217;s fastest, most expensive, and most capable professional graphics card. Other options include $800 off the MSRP of the W8000, $450 off the MSRP of the W7000, $1,250 off the S9000&#8217;s MSRP and $715 off the S7000&#8217;s MSRP. So, obviously, it isn&#8217;t 50% off all professional graphics cards, but rather up to 50% off some of them.</p>
<p>I&#8217;m not sure why AMD is doing this. Maybe it wants to introduce people to its GPUs by letting them buy one cheaply, which isn&#8217;t a bad sales strategy. However, it may also be that AMD is desperate to sell these GPUs and is cherry-picking specific models and prices to make sure it still makes a profit on them.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/">AMD Launches W8100, Cuts GPUs Prices 50% for First GPU</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>100th Story- ANALYSIS: Why will GDDR5 rule the world?</title>
		<link>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/</link>
		<comments>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/#comments</comments>
		<pubDate>Sat, 22 Nov 2008 21:00:46 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Memory & Storage Space]]></category>
		<category><![CDATA[256 Bit]]></category>
		<category><![CDATA[40nm]]></category>
		<category><![CDATA[512-bit]]></category>
		<category><![CDATA[55nm]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[differential]]></category>
		<category><![CDATA[Differential GDDR5]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[gddr3]]></category>
		<category><![CDATA[gddr4]]></category>
		<category><![CDATA[GDDR5]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[gt200]]></category>
		<category><![CDATA[gt206]]></category>
		<category><![CDATA[gt212]]></category>
		<category><![CDATA[joe macri]]></category>
		<category><![CDATA[larrabee]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[PlayStation 4]]></category>
		<category><![CDATA[Quadro]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[S.E. GDDR5]]></category>
		<category><![CDATA[single-ended]]></category>
		<category><![CDATA[xbox 720]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=534</guid>
		<description><![CDATA[<p>As &#8220;Theo&#8217;s Bright Side of IT&#8221; turns a century (100 stories) after just five weeks of existence, it seems right to write an article about ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/">100th Story- ANALYSIS: Why will GDDR5 rule the world?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As &#8220;Theo&#8217;s Bright Side of IT&#8221; turns a century (100 stories) after just five weeks of existence, it seems right to write an article about a technology that is set to become an everyday word over the next couple of years: GDDR5.<br />
This memory standard will become pervasive over the next four years in many more fields than &#8220;just&#8221; graphics. Just like GDDR3 ended up in all three consoles, network switches, cellphones and even cars and planes, GDDR5 brings a lot of new features that are bound to win more customers in different markets.</p>
<p><strong>Background</strong><br />
The radical ideas inside GDDR5 stem from the fact that ATI was looking at future GPU architectures and concluded that the DRAM industry had to take a radical step in design and offer an interface more flexible than any other memory standard. Then ATI experienced huge issues with R600 and its huge monolithic die. After a lot of internal struggle, the engineering teams agreed that a change of course was necessary for generations to come: R700/RV770, R800/RV870, R900, R1K&#8230; all of these engineering designs were reshaped and refocused. The current and future goal is to design a compact and affordable transistor design that would not play a game of Russian roulette with yields coming from <a title="MAD AMD or GloblaFoundries" href="http://www.tomshardware.com/news/amd-corporate-culture,5206.html" target="_blank">MAD AMD</a>, TSMC&#8217;s and UMC&#8217;s foundries.<br />
Development of this JEDEC-certified standard happened under the lead of Joe Macri, Director of Engineering at AMD and chairman of JEDEC&#8217;s Future DRAM Task Group, JC42.3. Joe and his small ex-ATI/AMD team are best known for developing the GDDR3 and GDDR4 memory standards, with the former being probably the best thing ever to come out of the former ATI. ATI worked in solitude for a whole year before it sent the initial specification to JEDEC in 2005. Then Hynix, Qimonda and Samsung joined the effort to bring the new memory standard to life. When AMD acquired ATI in 2006, the new management didn&#8217;t touch GDDR5 development and let the team work in peace. The reason was simple: the R&amp;D team had warned management that GDDR5 development would be much more difficult than the work done on GDDR3 and GDDR4.<br />
GDDR5 was seen as a path toward next-generation clients: consoles, desktop computing, networking equipment, the HPC arena, handhelds&#8230; all of these roads start with one memory standard. At the time, engineers at ATI saw the path of success that GDDR3 took, and decided to create a spec that would outlive and outshine it.<br />
In May 2008, AMD finally announced the launch of the GDDR5 memory standard. Soon after, the company revealed its Radeon 4800 series and cards equipped with GDDR5 memory. Given the performance of the Radeon 4870 512MB, 4870 1GB and 4870X2 2GB, it is obvious that the future of graphics (and not just graphics!) belongs to GDDR5 memory.<br />
At its very core, it is important to know that the main difference between LP-DDR (handhelds, PDAs), DDR (one size fits all) and GDDR (graphics) is that for GDDR, capacity is not crucial, but performance is. Low-Power DDR and standard DDR are geared toward enabling as much capacity as possible, while GDDR is usually referred to as the &#8220;Ferrari of the bunch&#8221;.</p>

<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_01_gpu-ram-roadmap1.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_01_gpu-ram-roadmap1-750x420.jpg" class="attachment-vw_medium" alt="Roadmap shows that DDR3 will replace DDR2 in low-end market, and GDDR5 will take over GDDR3" /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_03_gddr345-diferences.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_03_gddr345-diferences-750x420.jpg" class="attachment-vw_medium" alt="Description of differences between the standards..." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_04_gddr345-diferences.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_04_gddr345-diferences-750x420.jpg" class="attachment-vw_medium" alt="... and continuing with differences." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_05_ram-roadmap.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_05_ram-roadmap-750x420.jpg" class="attachment-vw_medium" alt="In 2010, we should see Differential GDDR5, and then the available bandwidth on GPUs will double over the night." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_06_gddr5_key-features.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_06_gddr5_key-features-750x420.jpg" class="attachment-vw_medium" alt="According to Qimonda, these are key features of GDDR5 standard." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_07_gddr5-lowmedhighfr.jpg' rel="lightbox[gallery-0]"><img width="750" height="372" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_07_gddr5-lowmedhighfr-750x372.jpg" class="attachment-vw_medium" alt="GDDR5 is divided into three different memory types, and clocks and voltage change according to specified role." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_08_gddr5-pcb-tracing_.jpg' rel="lightbox[gallery-0]"><img width="489" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_08_gddr5-pcb-tracing_-489x420.jpg" class="attachment-vw_medium" alt="Note the absence of &quot;combs&quot; on PCB using GDDR5 memory. This will enable cheaper PCBs and higher performance at the same time." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_09_gddr5-overclocking.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_09_gddr5-overclocking-750x420.jpg" class="attachment-vw_medium" alt="GDDR5 is also the first memory standard designed with overclocking in mind." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_10_gddr5-clockingandd.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_10_gddr5-clockingandd-750x420.jpg" class="attachment-vw_medium" alt="The way how clock works...four data transfers over a single clock." /></a>
<a href='http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_11_gddr5_x16-mode.jpg' rel="lightbox[gallery-0]"><img width="750" height="420" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/gddr5_11_gddr5_x16-mode-750x420.jpg" class="attachment-vw_medium" alt="Clamshell mode - very important feature, will enable doubling the amount of memory in near future." /></a>

<p><strong><br />
DDR, DDR2, DDR3, GDDR3, GDDR4, GDDR5 … got it?</strong></p>
<p>If you can&#8217;t find your way through the jungle of different memory standards, don&#8217;t worry, you&#8217;re not alone. There is a lot of confusion in the world of DRAM, and sadly, there is no simple explanation. The most important thing to remember is that GDDR and DDR are not the same memory, and do not operate on the same data widths.<br />
As you can see, GDDR memory transfers data in 32-bit chunks, while conventional DRAM transfers 64-bit chunks. Previous generations of graphics memory (GDDR2, GDDR3) were loosely based on the DDR2-SDRAM memory standard, while GDDR5 is heading in a new direction.</p>
<p>In fact, the GDDR5 standard actually splits into two different modes of DRAM operation: Single-Ended and Differential. This is a revolutionary step for GDDR memory, since it was widely expected that single-ended memory was the only way to go. In a way, you could say that ATI developed GDDR5 and GDDR &#8220;5.5&#8221; or &#8220;6&#8221; at the same time. Single-ended mode is compatible with existing memory standards such as DDR1/2/3 and GDDR3/4, and represents an evolutionary path for DRAM. The first products to market will use single-ended chips, but as soon as Hynix, Qimonda and Samsung start manufacturing differential modules (2009-10), a new era will begin.<br />
Differential clock signaling is a method similar to that used by interconnect buses such as HyperTransport, PCI Express, or Intel&#8217;s QuickPath Interconnect from Core i7. Differential mode introduces a reference clock that the memory cell follows. Instead of using the ground wire as a passive driver, Differential mode enables precise communication, and exactly this feature is why available bandwidth is set for a dramatic change during the lifetime of GDDR5.<br />
The sheer bandwidth gain from one GDDR generation to another is impressive. GDDR3 peaked at 2.4 Gbps per pin, and GDDR4 concluded at 3.2 Gbps. GDDR5 chips split into two camps: single-ended chips will offer between 3.4 and 6.4 Gbps, while differential chips will yield between 5.6 and 12.8 Gbps.</p>
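<p>To put those per-pin rates in board-level terms, here is a quick sketch; the 256-bit example width and 3.6 Gbps rate are ours (taken from the Radeon 4870&#8217;s public specs), not figures from the slides:</p>

```python
# Aggregate bandwidth = per-pin rate (Gbps) * bus width (bits) / 8 bits-per-byte
def aggregate_gb_per_s(per_pin_gbps, bus_width_bits):
    return per_pin_gbps * bus_width_bits / 8

radeon_4870 = aggregate_gb_per_s(3.6, 256)    # 115.2 GB/s on a 256-bit bus
diff_ceiling = aggregate_gb_per_s(12.8, 512)  # 819.2 GB/s: differential top rate, 512-bit bus
```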
<p>Besides Differential mode, GDDR5 also introduces an Error Correction Protocol based on a progressive algorithm that actually enables more aggressive overclocking. Major changes in internal chip design also include a Quarter-Data-Rate clock, a continuous WRITE clock, CDR-based READ (no read clock/strobe information), DRAM interface training, internal and external VREF, and x16 mode.</p>
<p><strong>Power Saving</strong></p>
<p>One of the most important things about GDDR5 is power reduction. If you take GDDR3 and GDDR5 modules clocked at 1.0 GHz each, GDDR3 has to operate at 2.0V, while GDDR5 needs only 1.5V. This results in a 30% reduction in power consumption, while raising available per-pin bandwidth by almost 100%.</p>
<p>GDDR5 is designed to operate at low, medium and high frequencies. Low frequency (0.2-1.5 Gbps) calls for low voltage (0.8-1.0V), while medium (1.0-3.0 Gbps) and high (2.5-5.0 Gbps) frequencies call for higher voltage, in the 1.4-1.6V range.<br />
High frequency is the only band that utilizes CDR (Clock Data Recovery) circuitry, while medium and low frequencies use the conventional mode (RDQS with preamble).<br />
Seeing the power drop below the levels of FB-DIMM DDR2-800 only makes us wonder what would happen if CPU manufacturers implemented Differential GDDR5 as system memory. Would we really need gigabytes of system memory if that memory had higher bandwidth than L2 and L3 cache? Intel is looking in a similar direction, and is considering <a href="http://www.tomshardware.com/news/Intel-DRAM-CPU,5697.html" target="_blank">replacing SRAM cache with DRAM technology</a>.</p>
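<p>The three operating bands described above, collected in one place (figures transcribed from the paragraphs above):</p>

```python
# GDDR5 operating bands: per-pin rate range (Gbps), supply voltage range (V), read clocking
gddr5_bands = {
    "low":    {"rate_gbps": (0.2, 1.5), "volts": (0.8, 1.0), "read_clocking": "RDQS with preamble"},
    "medium": {"rate_gbps": (1.0, 3.0), "volts": (1.4, 1.6), "read_clocking": "RDQS with preamble"},
    "high":   {"rate_gbps": (2.5, 5.0), "volts": (1.4, 1.6), "read_clocking": "CDR"},
}
```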
<p>Sadly, the changes that would be required in the memory controller are such that the only places where GDDR5 will see the light of day as system memory are closed designs, such as consoles, set-top boxes and so on. There is hope that some future AMD Fusion designs might implement GDDR support, but it is too early to tell.</p>
<p><strong>How to lower the cost of manufacturing?</strong></p>
<p><strong><br />
</strong>During the design stages of GDDR5 memory, one of the main concerns was how to simplify tracing on the PCB (Printed Circuit Board). On current GDDR3 and GDDR4 graphics boards, synchronization issues are solved by using traces of the same length from every pin on the DRAM chip to the GPU. This results in quite a messy design, with traces going everywhere.</p>
<p>If you&#8217;re a PCB designer, there is one thing you don&#8217;t want: complex routing of traces. Complex routing eventually leads to more PCB layers, higher cost and, most importantly, more ways for *something* to go wrong. In GDDR5, every trace has increased isolation from electromagnetic interference (EMI), while the asymmetrical interface compensates for differences in trace length. Several optimizations were made in order to keep signal integrity.<br />
As you can see in the picture above, GDDR5 PCB routing is much cleaner than GDDR3&#8217;s, and you can see the difference if you compare the Radeon 4850 to the Radeon 4870, for instance. The price was additional resistors around the memory chips, but the second generation of GDDR5 graphics cards should feature an even cleaner design.</p>
<p><strong>Memory designed for overclocking?</strong></p>
<p><strong><br />
</strong>With its power-saving and performance-related tweaks, it is obvious that this memory was designed with overclocking in mind. This was confirmed to us just by looking at the slides from AMD and Qimonda.</p>
<p>The GDDR5 specification delivers a combination of three technologies: Adaptive Training with CDR, Error Detection, and an on-die thermal sensor. Adaptive Training is combined with the Error Detection algorithm and enables the GPU&#8217;s memory controller to keep thermals on a tight leash. If you want to overclock the memory, it will go up until the error correction algorithm hits a thermal wall.</p>
<p>Error Detection works with both read and write instructions, offering real-time repeat and resend operations. Thanks to asynchronous clocks, the memory controller can control the flow of data and resend bits of information that fail to arrive in time (or arrive corrupted). The Error Detection algorithm will try to avoid a crash until the number of errors passes 1 error/sec.<br />
In order to maintain signal stability, additional resistors were placed inside and outside the memory chip (take a look at the back of a 4870 and compare it to a 4850). AMD also addressed an issue spotted on GDDR4: overclocking of GDDR4 memory was limited because the DRAM timing loop would run out of power. GDDR5 changed the way the clock is generated and kept, so the memory chip should never starve for power. No timing-loop issue = no memory freeze. According to our sources, GDDR5 memory clocking ultimately depends on the manufacturing process (used by the chip manufacturer) and the amount of voltage provided to the chip.<br />
But the main difference in clocking between GDDR3 and GDDR5 is that PVT (Process, Voltage, and Temperature) is no longer the unbreakable barrier. Now it is the GPU&#8217;s memory controller that will keep (or fail to keep) the flow of data.</p>
<p><strong>Coalition between the GPU and the RAM</strong></p>
<p>Unlike with previous memory standards, in order to extract the best possible performance the memory controller has to support ALL of the GDDR5 features. This especially goes for the asymmetrical interface, since the WRITE and READ clocks are programmed by the GPU. Advanced clock training calibrates the GPU-RAM signals &#8211; without this feature, you cannot count on high clocks or overclocking capabilities. With four bits of data being sent per clock (instead of two), the memory controller is exposed to a lot of stress and has to be able to do error checking on the fly. Any misses on the GPU side will lead to cycle losses &#8211; leading to instability.<br />
A good example is the memory controller tucked inside the Radeon 4800. This 256-bit controller supports the DDR2, DDR3, GDDR3, GDDR4 and GDDR5 memory standards. The memory controller is tuned to the point where the bandwidth and clock limitations are on the side of the SGRAM chips: if the fastest GDDR5 memory chips were available today, you could build a 4800-series card with them. This also opens up revenue opportunities for Hynix, Samsung and Qimonda. All three manufacturers could earn a small fortune by selling gold-sampled memory chips to premium graphics card manufacturers.<br />
When it comes to Nvidia, the answer to why the company went with GDDR3 for the GTX 200 series of cards is not a simple one: according to our sources, the GT200 chip supports GDDR3 and GDDR4, but engineers ran out of time to adapt the memory controller to the asymmetrical interface (advanced interface training), a key feature for stable operation. But if Nvidia sticks with a 512-bit memory controller for the NV70 generation (GT300?), we should see Nvidia GPUs featuring bandwidth in excess of 300 GB/s, more than twice what is available today. There is also the question of what Nvidia will do with its two refreshes, the 55nm GT206 and 40nm GT212 chips.<br />
Intel is not giving out any details on Larrabee&#8217;s architecture, but we know for sure that its 1024-bit internal/512-bit external memory controller will support GDDR5 and its advanced features. Given the late-2009 release, support for differential mode should be a given. When it comes to christening, Larrabee with GDDR5 memory will debut this winter, with the <a href="http://www.tomshardware.com/news/intel-larrabee-graphics,5847.html" target="_blank">first graphics cards delivered to Dreamworks</a>.</p>
<p><strong>Capacity – just how big can we go?<br />
</strong>Now that you&#8217;ve seen all of the performance elements, it&#8217;s time to write about capacity. While Joe told us that GDDR should be considered &#8220;the Ferrari of the DDR world&#8221;, GDDR5 introduces x16 mode. This mode has nothing to do with PCI Express x16 (to kill any potential confusion).</p>
<p>As you can see on the slide above, Clamshell mode is introduced to enable two memory chips to sit on a single x32 node. If we take the ATI Radeon 4800 series, the GPU features eight x32 I/O controllers. In theory, this tops out at 16 memory chips per GPU, or 1GB of onboard memory using conventional 512Mbit chips. With x16 mode, a card designer can put down up to 32 chips (good luck finding the board space), or 2GB of memory with 512Mbit (64MB) chips. With 1Gbit (128MB) chips, this number grows to 4GB. Qimonda is expected to ship 2Gbit (256MB) chips during 2009, enabling 8GB of on-board memory.</p>
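<p>The capacity arithmetic above can be sketched in a few lines (the helper below is our illustration, not part of the spec):</p>

```python
# Board capacity: x32 controllers * chips per x32 node * chip density (Mbit), converted to GB
def board_capacity_gb(x32_controllers, chips_per_node, chip_density_mbit):
    total_chips = x32_controllers * chips_per_node
    return total_chips * chip_density_mbit / (8 * 1024)  # 8 bits/byte, 1024 MB/GB

clamshell = board_capacity_gb(8, 2, 512)   # 16 chips x 512 Mbit = 1.0 GB
x16_mode = board_capacity_gb(8, 4, 2048)   # x16 mode, 2 Gbit chips: 32 chips = 8.0 GB
```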
<p>This number is increasingly important for the GPGPU market, which wants as much on-board memory as possible. Bear in mind that the Tesla 10-series features 4GB of GDDR3 memory, and some contacts we&#8217;ve talked with claim they could fill even more.</p>
<p>Eight GB of video memory may sound like too much for the consumer space, but if the world is to usher in the era of <a href="http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html" target="_blank">ray tracing</a>, we have to make room for gigabytes of data. Jules Urbach from JulesWorld explained that he is working with datasets bigger than 300 GB, and has to resort to using AMD&#8217;s CAL (Compute Abstraction Layer) to fit all the data inside 1GB per GPU (Jules uses R700 boards).</p>
<p><strong>Conclusion</strong></p>
<p><strong></strong>GDDR5 ramped up during 2008 and we expect the technology becoming a standard for GPU add-in-boards in 2009. ATI will migrate to GDDR5, so will Nvidia. With Intel joining the pack with Larrabee, volumes should be ready to drive the cost of GDDR5 into budget for next generation of game consoles, starting in the 2010-11 timeframe.<br />
This is by far the most developed and well-thought-out memory standard yet, one that lacks the childhood illnesses of DDR2 and DDR3. GDDR5 is coming to market as a complete product and offers a solid future roadmap, with Differential GDDR5 even surpassing XDR2 DRAM in the quest for the highest possible per-pin bandwidth.<br />
By that time, Differential GDDR5 should be cheaper than GDDR3 is today.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/">100th Story- ANALYSIS: Why will GDDR5 rule the world?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/22/100th-story-gddr5-analysis-or-why-gddr5-will-rule-the-world/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Nvidia aims at workstation market, desktops and notebooks</title>
		<link>http://www.vrworld.com/2008/11/01/nvidia-aims-at-workstation-market-desktops-and-notebooks/</link>
		<comments>http://www.vrworld.com/2008/11/01/nvidia-aims-at-workstation-market-desktops-and-notebooks/#comments</comments>
		<pubDate>Sat, 01 Nov 2008 18:00:04 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Asus]]></category>
		<category><![CDATA[Chipset]]></category>
		<category><![CDATA[FireGL]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[intel motherboard]]></category>
		<category><![CDATA[nForce]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[nvidia motherboard]]></category>
		<category><![CDATA[p5n-vm]]></category>
		<category><![CDATA[Quadro]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=298</guid>
		<description><![CDATA[<p>Fudo and his gang discovered MCP7A-GL motherboard over at Chinese Iworkstation.com.cn. This motherboard is &#8220;body of evidence&#8221; that Nvidia finally found the guts to go ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/01/nvidia-aims-at-workstation-market-desktops-and-notebooks/">Nvidia aims at workstation market, desktops and notebooks</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Fudo and his gang <a href="http://www.fudzilla.com/index.php?option=com_content&amp;task=view&amp;id=10229&amp;Itemid=1" target="_blank">discovered the MCP7A-GL motherboard</a> over at <a href="http://www.iworkstation.com.cn/news/2008-10-23/1943.html" target="_blank">Chinese Iworkstation.com.cn</a>. This motherboard is the &#8220;body of evidence&#8221; that Nvidia has finally found the guts to go after the workstation market with an embedded Quadro chipset.</p>
<p>Over the course of the years, I&#8217;ve seen a couple of Quadro motherboards, but Nvidia never dedicated itself to creating a market. Personally, I saw that as a big mistake, and I often questioned the chipset guys about professional solutions.<br />
Nvidia was afraid that the move would cannibalize its cash cow, the Quadro series of cards, but that fear just didn&#8217;t make any sense &#8211; at the end of the day, a company has to increase its market share, not reduce it. I always viewed Quadro and FireGL as the &#8220;AMG&#8221; and &#8220;M&#8221; divisions of Mercedes and BMW &#8211; divisions that tune up everyday products for special use. Thus, Nvidia was not investing in something that is really easy to do.<br />
Here&#8217;s a guide for Nvidia and AMD on how to make a workstation chipset:<br />
1) Take a chipset that you already manufacture &#8211; with integrated graphics, of course.<br />
2) Add an ID that identifies the integrated GPU as FirePro/Quadro.<br />
3) Build a motherboard with workstation-quality components.<br />
4) Qualify the motherboard.<br />
5) Launch the product for desktop and notebook, targeting a niche segment with a $10-50 higher ASP.<br />
As you can see, easy-peasy &#8211; since the product has already been developed for the consumer market, the only things you need are components that last five years and qualification.<br />
But never mind &#8211; it is good to see that someone finally did it. Now we will see whether Nvidia can actually steal the mobile workstation market with the upcoming Quadro 2Go chipset (effectively this chipset packed into a mobile form factor).</p>
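<p>Step 2 of that recipe is really just product differentiation by device ID. A toy sketch of the idea (purely hypothetical IDs and names, not actual Nvidia or AMD driver code) shows how a driver could branch on it:</p>

```python
# Toy illustration of step 2 above: the driver reads the PCI device ID
# and decides whether the same silicon presents itself as a consumer or
# a professional (Quadro/FirePro) product. The IDs below are made up.
PRODUCT_TABLE = {
    0x06E0: ("GeForce (consumer)", "consumer driver path"),
    0x06E1: ("Quadro (workstation)", "certified driver path"),
}

def identify(pci_device_id):
    """Map a PCI device ID to a product branding and driver path."""
    return PRODUCT_TABLE.get(pci_device_id, ("unknown", "generic path"))

print(identify(0x06E1))  # same chip, workstation branding
```

<p>The silicon is identical; only the lookup result changes, which is exactly why the recipe is cheap.</p>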
<div id="attachment_299" style="width: 510px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/11/asus_quadromotherboard.jpg" rel="lightbox-0"><img class="size-full wp-image-299" title="asus_quadromotherboard" src="http://cdn.vrworld.com/wp-content/uploads/2008/11/asus_quadromotherboard.jpg" alt="Here we are, ASUS making P5N-VM" width="500" height="473" /></a><p class="wp-caption-text">Here we are, ASUS making P5N-VM</p></div>
<p>Could it be that the next refresh of Apple&#8217;s MacBook Pro hardware will feature a Quadro chipset instead of a desktop one? Only time will tell.<br />
The next one should be an AMD 780G-based FirePro &#8211; once AMD finally releases an Opteron for notebooks, of course.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/11/01/nvidia-aims-at-workstation-market-desktops-and-notebooks/">Nvidia aims at workstation market, desktops and notebooks</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/11/01/nvidia-aims-at-workstation-market-desktops-and-notebooks/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nvidia&#8217;s $50 card destroys ATI&#8217;s $500 one or &#8220;Why ATI sucks in Folding?&#8221;</title>
		<link>http://www.vrworld.com/2008/10/24/why-nvidia-destroys-ati-in-folding-at-hom/</link>
		<comments>http://www.vrworld.com/2008/10/24/why-nvidia-destroys-ati-in-folding-at-hom/#comments</comments>
		<pubDate>Fri, 24 Oct 2008 16:00:19 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[8800]]></category>
		<category><![CDATA[9600]]></category>
		<category><![CDATA[9600 gso]]></category>
		<category><![CDATA[9800]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[berkeley]]></category>
		<category><![CDATA[EVGA]]></category>
		<category><![CDATA[FireGL]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[Folding]]></category>
		<category><![CDATA[Folding@Home]]></category>
		<category><![CDATA[Gainward]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[GPGPU]]></category>
		<category><![CDATA[GPU Computing]]></category>
		<category><![CDATA[GTX260]]></category>
		<category><![CDATA[GTX280]]></category>
		<category><![CDATA[LeadTek]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Palit]]></category>
		<category><![CDATA[PowerColor]]></category>
		<category><![CDATA[Quadro]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[Sapphire]]></category>
		<category><![CDATA[seti]]></category>
		<category><![CDATA[seti@home]]></category>
		<category><![CDATA[stanford university]]></category>
		<category><![CDATA[XFX]]></category>
		<category><![CDATA[Zotac]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=190</guid>
		<description><![CDATA[<p>As you might already know, I am a bit enthusiastic when it comes to distributed computing. I&#8217;ve been looking for aliens through SETI@home, later with ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/10/24/why-nvidia-destroys-ati-in-folding-at-hom/">Nvidia&#8217;s $50 card destroys ATI&#8217;s $500 one or &#8220;Why ATI sucks in Folding?&#8221;</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As you might already know, I am a bit enthusiastic when it comes to distributed computing. I&#8217;ve been looking for aliens through SETI@home, and later with BOINC… but then <a href="http://folding.stanford.edu/English/Science" target="_blank">Folding@Home</a> showed up, and I became an enthusiast for this valuable project from Stanford University. My family has had its share of dealings with Alzheimer&#8217;s disease (AD) and Parkinson&#8217;s disease (PD), and I won&#8217;t go into the psychological and ultimately financial stress that families around the world, including my own, have to endure.<br />
Folding@Home is also the project that pioneered the use of GPUs for distributed computing (if I am wrong on this one, feel free to correct me). Back in the summer of 2006, I heard that ATI and Stanford were working on a Folding@Home GPGPU client. I still remember my articles, and the articles from a lot of colleagues, all criticizing Nvidia for not having an F@H client.</p>
<div id="attachment_196" style="width: 510px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/10/folding_nvdavsati.jpg" rel="lightbox-0"><img class="size-full wp-image-196" title="folding_nvdavsati" src="http://cdn.vrworld.com/wp-content/uploads/2008/10/folding_nvdavsati.jpg" alt="Nvidia's client may not look as nice as the ATI one, but it's the efficiency that counts..." width="500" height="348" /></a><p class="wp-caption-text">Nvidia&#39;s client may not look as nice as the ATI one, but it&#39;s the efficiency that counts...</p></div>
<p>Fast forward to the GTX280 launch, and Vijay Pande&#8217;s team debuted the Folding@Home client for Nvidia chips as well. Nvidia and ATI waged a short marketing war over who could fold better, and then things went quiet… apparently, for a reason.<br />
The reason why things went quiet is probably an &#8220;inconvenient truth&#8221;: ATI showed up with the Radeon 4800 series and demolished Nvidia&#8217;s dominance in the segment, with the GTX260 and 280 going through radical price drops in order to stay competitive. However, ATI&#8217;s Radeon 4800 series has one field where it loses against 5-10x cheaper cards: Folding@Home.<br />
The 10x figure comes from comparing ATI&#8217;s current flagship, the Radeon 4870X2, with Nvidia&#8217;s GeForce 9600GSO. This $50 card can easily out-fold the ATI Radeon 4870X2, which retails for more than 500 USD/450 EUR in the respective markets.<br />
In the past weeks, I&#8217;ve conducted a series of tests with various graphics cards (all that I own or could get my hands on), and the results were quite depressing if you own an ATI card. I&#8217;ve asked some of my contacts at AMD why the performance is so bad, and the answers ranged from &#8220;we wanted to make the best gamer&#8217;s card, not a card for Folding&#8221; to sad silence. It seems to me that the difference lies in shader type and clock: ATI&#8217;s R6xx and RV7xx architectures are built around a few big, fat units plus a lot of tiny ones (64+256 in the case of the Radeon 3800, 80+720 in the case of the Radeon 4800), and the clock is much lower than on GeForce cards. Nvidia went the other route and came up with a large number of &#8220;fat&#8221; units, while not even counting the &#8220;thin&#8221; (MADD) ones.<br />
When comparing the GTX280 and the 4870X2, the numbers are just astounding: over a period of a month, EVGA&#8217;s GTX280 SSC achieved an average of 6,802 points per day, while the ATI Radeon 4870X2 managed a puny 3,870 PPD. I&#8217;ve even witnessed higher PPD scores from a two-year-old GeForce 8800GTS 640MB, which was quite a surprise. Around two weeks ago, I started following PPD numbers using FahMon on a large number of systems that mostly share the same configuration: a dual-core processor or better, 2GB of system memory or more, and the graphics card under test. In all cases, with the help of my friends, I&#8217;ve managed to check FahMon and KakaoStats for roughly 25 cards, and I came to a surprising result.<br />
With the recent update to the GPU2 client and the new Fah_Core11.exe (ATI uses v1.17, Nvidia v1.15), the community witnessed a further fall in the number of completed packets per day. If you&#8217;re not familiar with Folding@Home packets, every packet features a certain number of mathematical simulation operations for the tested protein &#8211; in the case of Nvidia, a packet consists of 25 million operations, while ATI&#8217;s features 10 million. However, due to the different types of mathematical operations, Nvidia&#8217;s packet will usually yield 480 points, while ATI&#8217;s 10 million will return 548 points (or 338 points for the recently introduced ATI packets).<br />
As I previously wrote, the table below is not the result of one packet score and an Excel calculation, but rather of continuous number crunching over the course of several weeks, with one week used for measurement.</p>
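<p>The packet arithmetic above can be sanity-checked in a few lines of Python (a sketch using only the figures quoted in this article; real scoring depends on the work unit):</p>

```python
# Points per million simulation operations for the GPU2 packets quoted
# above: Nvidia packets are 25M ops for 480 points, ATI packets are
# 10M ops for 548 (or the newer 338) points.
def points_per_mops(points, mops):
    return points / mops

print(points_per_mops(480, 25))   # 19.2 - Nvidia
print(points_per_mops(548, 10))   # 54.8 - ATI, older packets
print(points_per_mops(338, 10))   # 33.8 - ATI, newer packets

# Despite ATI being paid far more points per operation, the measured
# month-long averages (6,802 PPD for the GTX280 SSC vs 3,870 PPD for
# the Radeon 4870X2) leave Nvidia with a ~1.76x points-per-day lead.
print(round(6802 / 3870, 2))      # 1.76
```

<p>In other words, the Nvidia client must be completing several times more operations per day to end up so far ahead in PPD.</p>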
<p><strong><br />
Improvised Top 20 Folding@Home GPUs:</strong></p>
<ol>
<li><span style="color:#339966;">Nvidia GeForce GTX280 1GB (EVGA SSC)</span></li>
<li><span style="color:#339966;">Nvidia GeForce GTX260-216 898MB (EVGA SSC)</span></li>
<li><span style="color:#339966;">Nvidia GeForce GTX260 898MB (EVGA Superclocked) </span></li>
<li><span style="color:#339966;">Nvidia GeForce 9800GTX+ 512MB (ASUS TOP)</span></li>
<li><span style="color:#339966;">Nvidia Quadro FX 4600 SDI 768MB (PNY)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 9800GTX 512MB (ASUS TOP)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 8800GTX 768MB (Zotac AMP! Edition)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 8800Ultra 768MB (XFX XXX Edition)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 8800GTS 512MB (Gainward)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 8800GT 512MB (Gainward)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 9600GSO 768MB (EVGA)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 8800GTS 640MB (LeadTek)</span></li>
<li><span style="color:#ff0000;">ATI Radeon 4870X2 2GB (PowerColor)</span></li>
<li><span style="color:#ff0000;">ATI Radeon 4870 512MB (PALIT)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 9600GT 256MB (Zotac)</span></li>
<li><span style="color:#ff0000;">ATI Radeon 4850 512MB (PALIT)</span></li>
<li><span style="color:#ff0000;">ATI Radeon 3870 512MB (Sapphire Atomic)</span></li>
<li><span style="color:#ff0000;">ATI FireGL V8600 1GB (ATI)</span></li>
<li><span style="color:#339966;">Nvidia GeForce 8600GTS 256MB (XFX XXX Edition)</span></li>
<li><span style="color:#ff0000;">ATI Radeon 3850 256MB (Sapphire)</span></li>
</ol>
<p>This is by no means a complete table, since I am missing several new GPUs. But even in this one, as you can see for yourself, the results are quite dramatic for the red team. Two-year-old GeForce GPUs demolished the otherwise-brilliant Radeon series, and it is incredible that even a GeForce 9600 will out-fold a Radeon 4850. This is a rude wake-up call for the guys at Markham, because this is just unbelievable.<br />
Personally, I am running a combination of an AMD Spider platform (9850BE + 790GX + ATI Radeon 4870X2) and a hybrid Intel V8/Skulltrail platform with a Quadro FX 4600 SDI.<br />
Of course, everything can change with a simple driver update. I don&#8217;t understand what happened with AMD/ATI, a company that led the field of GPGPU computing for so long &#8211; why wouldn&#8217;t AMD work on optimizing the Folding@Home client&#8230; I am aware that AMD poached Mike Houston from Stanford to work on Brook+ and now the OpenCL APIs, but surely the performance didn&#8217;t go downhill through the influence of just one person. Or just maybe…<br />
Overall, I hope that Catalyst 8.11 or 8.12 will bring more performance for ATI cards, since I do not believe it would be so hard to optimize the drivers for GPGPU/GPU computing usage. For now, in Folding@Home, ATI is a complete washout.</p>
<p>To close this article: if you find that your GPU cycles could be used for something good, I invite you to <a href="http://theovalich.wordpress.com/2008/10/19/foldinghome-team/" target="_blank">read the following article</a> and join the F@H family, regardless of which client (<a href="http://folding.stanford.edu/English/Download" target="_blank">CPU</a> or <a href="http://folding.stanford.edu/English/DownloadWinOther" target="_blank">GPU</a>) or team you choose in the end. Intel, AMD, ATI, Nvidia, Windows, Linux or Mac OS &#8211; it does not matter, just join &#8211; if you want to, of course.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/10/24/why-nvidia-destroys-ati-in-folding-at-hom/">Nvidia&#8217;s $50 card destroys ATI&#8217;s $500 one or &#8220;Why ATI sucks in Folding?&#8221;</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/10/24/why-nvidia-destroys-ati-in-folding-at-hom/feed/</wfw:commentRss>
		<slash:comments>41</slash:comments>
		</item>
		<item>
		<title>AMD releasing professional cards to partners &#8211; Sapphire first</title>
		<link>http://www.vrworld.com/2008/10/20/amd-releasing-professional-cards-to-partners/</link>
		<comments>http://www.vrworld.com/2008/10/20/amd-releasing-professional-cards-to-partners/#comments</comments>
		<pubDate>Mon, 20 Oct 2008 12:34:34 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[9250]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Diamond]]></category>
		<category><![CDATA[FireGL]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[FireStream]]></category>
		<category><![CDATA[GPGPU]]></category>
		<category><![CDATA[GPU Computing]]></category>
		<category><![CDATA[partners]]></category>
		<category><![CDATA[Radeon 2900]]></category>
		<category><![CDATA[Sapphire]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=110</guid>
		<description><![CDATA[<p>Ever since AMD/ATI took over FireGL, the company was the only manufacturer of professional graphics cards. FireGL, FireStream, and now FirePro &#8211; they were all ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/10/20/amd-releasing-professional-cards-to-partners/">AMD releasing professional cards to partners &#8211; Sapphire first</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Ever since AMD/ATI took over FireGL, the company has been the only manufacturer of professional graphics cards. FireGL, FireStream, and now FirePro &#8211; they all came out with the ATI logo on the box. But not anymore &#8211; AMD is going the Nvidia route and starting to introduce partners who will manufacture and sell the cards, under a higher-standard program than is the case with consumer cards.<br />
As logic dictates, Sapphire Technology was the first company to release a non-AMD-manufactured professional card &#8211; the FireStream 9250. We expect more companies to follow suit &#8211; I remember that Diamond introduced its FireGL cards in the Radeon 2900 era, but those cards never came to market.</p>
<div id="attachment_111" style="width: 510px" class="wp-caption alignnone"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/10/sapphire_firestream_9250.jpg" rel="lightbox-0"><img class="size-full wp-image-111" title="sapphire_firestream_9250" src="http://cdn.vrworld.com/wp-content/uploads/2008/10/sapphire_firestream_9250.jpg" alt="Sapphire's board is identical to the AMD ones" width="500" height="258" /></a><p class="wp-caption-text">Sapphire&#39;s board is identical to the AMD ones</p></div>
<p>Sadly, Sapphire was not allowed to make any changes, so the FireStream 9250 still comes with only 1GB of GDDR3 memory, while most GPGPU scientists we talked to stress the need for a massive amount of memory. Nvidia&#8217;s response was the first-generation Tesla with 1.5GB, and the latest one with a massive 4GB of GDDR3 memory.<br />
We certainly hope that AMD will release a FireStream with 2-4GB of memory, given its track record in the professional graphics space. In any case, I congratulate Sapphire on releasing the card.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/10/20/amd-releasing-professional-cards-to-partners/">AMD releasing professional cards to partners &#8211; Sapphire first</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/10/20/amd-releasing-professional-cards-to-partners/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Folding@Home team update, new stats page ;)</title>
		<link>http://www.vrworld.com/2008/10/19/foldinghome-team/</link>
		<comments>http://www.vrworld.com/2008/10/19/foldinghome-team/#comments</comments>
		<pubDate>Sun, 19 Oct 2008 13:43:38 +0000</pubDate>
		<dc:creator><![CDATA[Theo Valich]]></dc:creator>
				<category><![CDATA[3D]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Alzheimer]]></category>
		<category><![CDATA[ATI]]></category>
		<category><![CDATA[CPU]]></category>
		<category><![CDATA[DEC Alpha]]></category>
		<category><![CDATA[distributed computing]]></category>
		<category><![CDATA[F@H]]></category>
		<category><![CDATA[FireGL]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[Folding]]></category>
		<category><![CDATA[Folding@Home]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[K6-II]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Parkinson]]></category>
		<category><![CDATA[Pentium]]></category>
		<category><![CDATA[Radeon]]></category>
		<category><![CDATA[The Bright side of IT]]></category>

		<guid isPermaLink="false">http://theovalich.wordpress.com/?p=93</guid>
		<description><![CDATA[<p>I’ve been a fan of distributed computing since late 1990s, with SETI@Home running on every computer that I ever had. However, the real attractive proposition ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/10/19/foldinghome-team/">Folding@Home team update, new stats page ;)</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>I’ve been a fan of distributed computing since the late 1990s, with SETI@Home running on every computer I ever had. However, the really attractive proposition to me was running distributed computing applications on graphics cards. GPUs are much more efficient at stream computing than any CPU you could find, and I’ve tried DC apps on machines ranging from DEC Alpha through Intel Pentium and AMD K6-II onwards, but the biggest jump in performance was Folding@Home on an ATI Radeon X1800XTX graphics card.<br />
With the launch of this blog and the new website, I’ve decided to launch a new group, number 69864. The current name is the name of this blog, but as soon as I am able to disclose the name of the new company, you’ll be the first to know <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley" /><br />
I invite you all to join team 69864 &#8211; in the near future, this will be much more than just a group. As the new website develops, so will this team. So far, my goal is to enter the Top 1000 by the end of 2008, and we’re well on our way to achieving that.<br />
If you are interested, the site to download the <a href="http://folding.stanford.edu/English/DownloadWinOther" target="_blank">CPU clients is here</a>, while <a href="http://folding.stanford.edu/English/DownloadWinOther" target="_blank">the GPU clients are here</a>. If you have an Nvidia-based graphics card (GeForce 8000 series and above), then <a href="http://www.stanford.edu/group/pandegroup/folding/release/Folding@home-Win32-NV-GPU-systray-620r1.msi" target="_blank">download this version</a>. I’ve tried both the console and the regular version, and there isn’t much difference in performance &#8211; unless, of course, you leave the display version running.<br />
Performance-wise, <a href="http://theovalich.wordpress.com/2008/10/18/amd-reports-178b-revenue-records-first-profit-in-years-non-gaap/" target="_blank">Nvidia is destroying ATI at this moment</a>, which is something I have already addressed here. Let’s hope ATI will optimize Folding performance in its upcoming drivers. We doubt this will happen before the release of the next generation of Radeon hardware, but who knows.<br />
After you install the client, configuration is really easy. If you have higher-performing hardware, always use the large packet option, as shown in the picture below.</p>
<div id="attachment_94" style="width: 457px" class="wp-caption alignnone"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_1.gif" rel="lightbox-0"><img class="size-full wp-image-94" title="foldingscreen_1" src="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_1.gif" alt="Initial option screen... put your name in, and the group is 69864 ;-)" width="447" height="563" /></a><p class="wp-caption-text">Initial option screen... put your name in, and the group is 69864 ;-)</p></div>
<div id="attachment_95" style="width: 457px" class="wp-caption alignnone"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_2.gif" rel="lightbox-1"><img class="size-full wp-image-95" title="foldingscreen_2" src="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_2.gif" alt="If you have 256MB or more video memory, check this box" width="447" height="563" /></a><p class="wp-caption-text">If you have 256MB or more video memory, check this box</p></div>
<div id="attachment_96" style="width: 457px" class="wp-caption alignnone"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_3.gif" rel="lightbox-2"><img class="size-full wp-image-96" title="foldingscreen_3" src="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_3.gif" alt="MachineID is important if you plan to run more than one client (multi-GPU setups, CPU/GPU combo etc.)" width="447" height="563" /></a><p class="wp-caption-text">MachineID is important if you plan to run more than one client (multi-GPU setups, CPU/GPU combo etc.)</p></div>
<p>After configuration, you’re good to go. The stats page is located at <a href="http://kakaostats.com/t.php?t=69864" target="_blank">KakaoStats.com</a> &#8211; yes, &#8220;kakao&#8221; means cocoa in Croatian ;-). You can also see the official one here, but the official stats page isn’t always available, since the F@H servers suffer from tremendous load.<br />
Let’s fold together and hopefully simulate enough nanoseconds that become seconds, then minutes, hours, days, years… who knows, our CPU or GPU time might help scientists find a cure for Alzheimer’s or Parkinson’s disease (the focus of the F@H group at Stanford). We might even help ourselves in the future.</p>
<div id="attachment_97" style="width: 510px" class="wp-caption alignnone"><a href="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_4.gif" rel="lightbox-3"><img class="size-large wp-image-97" title="foldingscreen_4" src="http://cdn.vrworld.com/wp-content/uploads/2008/10/foldingscreen_4.gif?w=500" alt="Unofficial stats over at KakaoStats.com - enjoy this &quot;always available&quot; stats page... even if you're not in our group ;)" width="500" height="254" /></a><p class="wp-caption-text">Unofficial stats over at KakaoStats.com - enjoy this one even if you don&#39;t use our group #.</p></div>
<p>Pay it forward. You’re free to ping me at theo.valich @ gmail.com for a chat or if you have any questions. Always glad to help <img src="http://cdn.vrworld.com/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2008/10/19/foldinghome-team/">Folding@Home team update, new stats page ;)</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2008/10/19/foldinghome-team/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
	</channel>
</rss>
