<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; Kepler</title>
	<atom:link href="http://www.vrworld.com/tag/kepler/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2015 07:54:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Nvidia In Discussions with &#039;Many&#039; GPU Licensees</title>
		<link>http://www.vrworld.com/2014/11/06/nvidia-discussions-many-gpu-licensees/</link>
		<comments>http://www.vrworld.com/2014/11/06/nvidia-discussions-many-gpu-licensees/#comments</comments>
		<pubDate>Fri, 07 Nov 2014 00:15:40 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Mobile Computing]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Rumors]]></category>
		<category><![CDATA[4K]]></category>
		<category><![CDATA[display]]></category>
		<category><![CDATA[Efficiency]]></category>
		<category><![CDATA[Efficient]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[License]]></category>
		<category><![CDATA[Licensee]]></category>
		<category><![CDATA[Maxwell]]></category>
		<category><![CDATA[Mediatek]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[SoC]]></category>
		<category><![CDATA[Tegra]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=41251</guid>
		<description><![CDATA[<p>Nvidia's CEO, Jen-Hsun Huang talked about Nvidia's GPU technology licensing discussions and possibilities during their latest earnings call with analysts.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/11/06/nvidia-discussions-many-gpu-licensees/">Nvidia In Discussions with &#039;Many&#039; GPU Licensees</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>During today&#8217;s Nvidia (<a href="http://www.google.com/finance?cid=662925">NASDAQ:NVDA</a>) <a href="http://www.media-server.com/m/p/v76r59rw" target="_blank">earnings call</a> for <a title="Nvidia Reports Strong Earnings for Q3" href="http://www.brightsideofnews.com/2014/11/06/nvidia-reports-strong-earnings-q3/" target="_blank">Q3, which featured very good numbers</a>, Nvidia&#8217;s CEO Jen-Hsun Huang indicated that the company was already actively discussing licensing its GPU technologies in the mobile space to certain partners.</p>
<p>While Huang did not say how many partners the company is working with or what stage the discussions are in, he did say the following: “Our licensing discussions are very active. And we have many in important stages.”</p>
<p>That statement came in response to an analyst question about the success of Maxwell and whether it was driving any progress on the licensing front. Currently, Nvidia is the sole user of its mobile GPUs and mobile GPU IP; however, with the success of Maxwell on the desktop, there is a very good chance that some companies will become interested in utilizing it for their SoCs. After all, if you look at some of our <a title="GeForce GTX 980 Review: More Performance at Lower Power" href="http://www.brightsideofnews.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/">Maxwell reviews</a>, you can see that Nvidia is getting much more performance out of its GPUs while using the same 28nm process as the previous generation and simultaneously reducing power. These advancements eventually trickle down to Nvidia&#8217;s mobile products, as the company&#8217;s product roadmaps have indicated in the past.</p>
<p>Currently, the Tegra K1 utilizes Nvidia&#8217;s Kepler architecture, which originally launched in desktop and laptop GPUs back in 2012 and was then perfected in 2013 with the GTX Titan. Nvidia announced the Kepler-based Tegra K1 (formerly known as Tegra 5) at the beginning of this year and has been shipping Tegra K1 SoCs since the summer. A realistic timeframe for Maxwell GPU IP in mobile, then, is the tail end of 2015 or the beginning of 2016, which isn&#8217;t necessarily as quick as the usual mobile refresh cycle. However, with each generation of SoC Nvidia has vastly improved the speed of its GPU architecture implementations, so there&#8217;s no knowing exactly how soon we could see an Nvidia GPU in something like a MediaTek SoC.</p>
<p>It will be interesting to see how Nvidia balances its GPU IP licensing with customers it simultaneously competes against. After all, ARM and IMG license both CPUs and GPUs, but neither of them produces its own products that compete with its licensees. There is also a chance that Nvidia may be trying to muscle weaker companies into licensing its technology on the strength of its patents, which may be why <a title="Nvidia Sues Samsung and Qualcomm For Alleged Patent Infringement" href="http://www.brightsideofnews.com/2014/09/04/nvidia-sues-samsung-qualcomm-patent-infringement/">Nvidia recently sued Samsung and Qualcomm</a>.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/11/06/nvidia-discussions-many-gpu-licensees/">Nvidia In Discussions with &#039;Many&#039; GPU Licensees</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/11/06/nvidia-discussions-many-gpu-licensees/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nvidia Launches Ultimate Quest to Culminate in Product Launch</title>
		<link>http://www.vrworld.com/2014/07/18/nvidia-launches-ultimate-quest-culminate-product-launch/</link>
		<comments>http://www.vrworld.com/2014/07/18/nvidia-launches-ultimate-quest-culminate-product-launch/#comments</comments>
		<pubDate>Fri, 18 Jul 2014 23:22:01 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Mobile Computing]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Adventure]]></category>
		<category><![CDATA[K1]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Puzzle]]></category>
		<category><![CDATA[SHIELD Tablet]]></category>
		<category><![CDATA[Tegra K1]]></category>
		<category><![CDATA[Tegra K1 Tablet]]></category>
		<category><![CDATA[Text]]></category>
		<category><![CDATA[Twitter]]></category>
		<category><![CDATA[Ultimate Quest]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=36604</guid>
		<description><![CDATA[<p>Nvidia sent us a link to a contest that they&#8217;re running which is a series of puzzles that will eventually lead you to a &#8216;new ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/07/18/nvidia-launches-ultimate-quest-culminate-product-launch/">Nvidia Launches Ultimate Quest to Culminate in Product Launch</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1000" height="506" src="http://cdn.vrworld.com/wp-content/uploads/2014/07/NvidiaUltimateQuest1.png" class="attachment-post-thumbnail wp-post-image" alt="Nvidia Ultimate Quest" /></p><p>Nvidia <a href="http://blogs.nvidia.com/blog/2014/07/18/uq/" target="_blank">sent us a link to a contest</a> that it&#8217;s running: a series of puzzles that will eventually lead you to a &#8216;new release&#8217;. By the looks of it, Nvidia is planning some sort of launch next week, on July 22nd. This lines up with the recently rumored release of an Nvidia Tegra K1 tablet, which is <a href="http://wccftech.com/nvidias-tegra-k1-powered-shield-tablet-teaser-page-confirms-launch-22nd-july-official-render-specifications-unveiled/" target="_blank">supposed to launch on the 29th according to Videocardz</a>, though the date was originally stated as the 22nd (it changed since we last saw the rumor). The tablet is rumored to have a full HD display, which is still quite desirable for mobile gaming, and a separate gamepad to accommodate gamers who still like the thumbsticks and d-pad of the original SHIELD.</p>
<p>This contest is called <a href="https://ultimatequest.nvidia.com/" target="_blank">the Ultimate Quest</a> and consists of five phases, with a new one becoming available each day, leading up to the fifth on July 22nd, which will reveal the &#8216;new release&#8217;. Obviously this is gaming related, and very likely something tied to Tegra gaming, much like the SHIELD Tablet rumored for the end of the month.</p>
<p>In order to participate, you must sign in with Twitter (from which Nvidia will obviously be able to gain quite a bit of data) and tweet a text answer to each puzzle along the way. The <a href="https://ultimatequest.nvidia.com/" target="_blank">Ultimate Quest</a> is a creative endeavor between Nvidia and <a href="http://emshort.wordpress.com/" target="_blank">Emily Short</a>, a well-known writer. In fact, she&#8217;s already left some hints on her own blog, and there are more hints on the first page of the puzzle once you have signed in via Twitter and checked the first module of the story.</p>
<p>Hopefully we&#8217;ll see a Tegra K1 device; maybe it will be the rumored SHIELD Tablet, and maybe it won&#8217;t. Ultimately, nobody will complain if it is, because the Tegra K1 looks like a very promising chip, and hopefully this <a href="https://ultimatequest.nvidia.com/" target="_blank">Ultimate Quest</a> will keep some gamers occupied over the weekend.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/07/18/nvidia-launches-ultimate-quest-culminate-product-launch/">Nvidia Launches Ultimate Quest to Culminate in Product Launch</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/07/18/nvidia-launches-ultimate-quest-culminate-product-launch/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AMD Launches W8100, Cuts GPUs Prices 50% for First GPU</title>
		<link>http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/</link>
		<comments>http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/#comments</comments>
		<pubDate>Tue, 24 Jun 2014 03:01:28 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Audio/Video]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[AMD FirePro]]></category>
		<category><![CDATA[FirePro]]></category>
		<category><![CDATA[FirePro W8100]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Hawaii]]></category>
		<category><![CDATA[K20]]></category>
		<category><![CDATA[K5000]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[OpenCL]]></category>
		<category><![CDATA[Professional]]></category>
		<category><![CDATA[W8100]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=36140</guid>
		<description><![CDATA[<p>Today was an interesting day in AMDland, first the company announced their latest GPU, the FirePro W8100 and then later in the day they announced ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/">AMD Launches W8100, Cuts GPUs Prices 50% for First GPU</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="980" height="431" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_9801.jpg" class="attachment-post-thumbnail wp-post-image" alt="W8100" /></p><p>Today was an interesting day in AMDland. First the company <a href="http://www.amd.com/en-us/press-releases/Pages/new-amd-professional-2014jun23.aspx" target="_blank">announced their latest GPU</a>, the FirePro W8100, and later in the day it announced a program where you can buy one of its latest GPUs at a whopping 50% off, as long as it&#8217;s your first one; every subsequent card will be full price. But first, you have to go through <a href="http://www.fireprographics.com/experience/us/apply.asp" target="_blank">an &#8216;approval process&#8217;</a>. Now, let&#8217;s get back to the new GPU AMD just announced. What is it, exactly? The FirePro W8100 is part of AMD&#8217;s professional line of graphics cards, branded FirePro.</p>
<p>Looking at the rough specs, we can see that the W8100 delivers over 2 TFLOPS of double precision compute, which is actually less than what <a title="Intel’s New Knight’s Landing Xeon Phi Combines Omni Scale Fabric with HMC" href="http://www.brightsideofnews.com/2014/06/23/intel-new-knights-landing-combines-omni-scale-fabric-hmc/" target="_blank">Intel&#8217;s new Knight&#8217;s Landing</a>, also announced today, is capable of delivering. It does, however, also deliver over 4 TFLOPS of single precision, which is quite impressive since that is double the 2.1 TFLOPS of Nvidia&#8217;s K5000. This GPU is effectively a professional version of <a title="AMD Radeon R9 290: Blowing the Doors off the Competition" href="http://www.brightsideofnews.com/2013/11/08/amd-radeon-r9-290-blowing-the-doors-off-the-competition/" target="_blank">AMD&#8217;s R9 290, which we reviewed</a> and found to be a very impressive GPU for the money, and it still is. What makes this card different, however, is that it can drive four 4K displays simultaneously and has 8 GB of GDDR5 memory as opposed to 4 GB, making better use of the 512-bit memory bus on the Hawaii Pro GPU inside. That is still less than the W9100, which supports six 4K displays. Realistically you won&#8217;t be gaming on these 4K displays, but it doesn&#8217;t seem outrageous to think someone could be using roughly 33 million pixels. AMD accomplishes this by putting four DisplayPort 1.2 connectors on the back of the card, as you can see above and below.</p>
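<p>As a quick back-of-the-envelope check of the figures above (a sketch only: it assumes &#8216;4K&#8217; means 3840&#215;2160 UHD panels, and 4.2 TFLOPS is a stand-in for the &#8216;over 4 TFLOPS&#8217; figure):</p>

```python
# Back-of-the-envelope check of the display and throughput figures above.
# Assumption: "4K" here means 3840x2160 (UHD) panels.
displays = 4
width, height = 3840, 2160

total_pixels = displays * width * height
print(f"{total_pixels:,} pixels across four 4K displays")  # 33,177,600

# Single-precision comparison cited above (4.2 TFLOPS approximates the
# "over 4 TFLOPS" claim; the K5000 number is from the article):
w8100_sp = 4.2
k5000_sp = 2.1
print(f"W8100 is {w8100_sp / k5000_sp:.1f}x the K5000 in single precision")
```

<p>Four UHD panels work out to roughly 33 million pixels, in the same ballpark as the figure cited above, and the single precision ratio comes out to 2x.</p>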
<div id="attachment_36145" style="width: 990px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_6_9801.jpg" rel="lightbox-0"><img class="size-full wp-image-36145" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_6_9801.jpg" alt="W8100" width="980" height="426" /></a><p class="wp-caption-text">W8100 Specifications, current and future</p></div>
<p>As you can see from the above specs, AMD has decided to refer to the GPU as an &#8216;engine&#8217; and says it&#8217;s clocked at 824 MHz, a solid 123 MHz less than the R9 290 gaming graphics card that it mimics. It does, however, have double the memory of the R9 290, which is why it is capable of driving up to four 4K displays. AMD powers it with two 6-pin power connectors, drawing 220W, and it supports PCIe 3.0; everything is pretty standard here. It also supports OpenCL 1.2 and already has OpenCL 2.0 support baked in, which is good to know for anyone planning to buy a &#8216;future-proof&#8217; GPU. It supports OpenGL 4.3 and will support OpenGL 4.4, which isn&#8217;t much of a feat, as most of that support will be accomplished through a driver update. What is interesting, though, is that it supports DirectX 11.2 while AMD makes no mention of future DirectX 12 compatibility at all, which is a notable omission. It isn&#8217;t anything shocking, since this graphics card is based on a GPU announced in 2013, but it is still interesting that AMD has nothing to say there.</p>
<p>AMD also couldn&#8217;t help but compare itself to Nvidia&#8217;s Quadro K5000, Nvidia&#8217;s older professional workstation GPU (the current part is the K6000), so naturally, in AMD&#8217;s comparison they basically spank Nvidia. Yes, the W8100 is $2,499, which makes it more comparable in price with the K5000 than with the K6000, which sells for a whopping $4,999 and is more comparable with AMD&#8217;s W9100.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_2_9801.jpg" rel="lightbox-1"><img class="size-full wp-image-36141" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_2_9801.jpg" alt="W8100" width="980" height="484" /></a></p>
<p>AMD also draws a comparison against <a title="Nvidia Maximus 2 Reviewed – The Great One" href="http://www.brightsideofnews.com/2013/09/12/nvidia-maximus-2-reviewed-the-great-one/" target="_blank">Nvidia&#8217;s Maximus 2 development platform, which we also reviewed</a>; that solution is absolutely bulletproof but also incredibly expensive. Here AMD is claiming that it delivers more performance with fewer GPUs and comparable memory. However, AMD doesn&#8217;t talk about the development scenarios Maximus enables or how good its professional drivers are compared to Nvidia&#8217;s. The Maximus 2 platform (and subsequent versions) is all about stability and reliability, not necessarily performance, as we learned in our review. So, until AMD can put these GPUs in our hands and show us that its GPUs and platforms are as stable as Nvidia&#8217;s in the same applications, we&#8217;re not entirely sure that AMD can draw these comparisons. Yes, fewer GPUs consume less power, but power isn&#8217;t always the primary concern in professional graphics scenarios.</p>
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_3_9801.jpg" rel="lightbox-2"><img class="aligncenter wp-image-36142 size-full" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_3_9801.jpg" alt="W8100" width="980" height="404" /></a></p>
<p>Last but not least, AMD benchmarked the W8100 in a ton of AMD-favorable benchmarks and applications (mostly OpenCL-heavy), which it obviously won handily. However, the most interesting benchmark, and the one that isn&#8217;t cherry-picked by AMD, is the DaVinci Resolve test showing performance scaling across multiple W8100s. There, AMD shows almost 100% scaling, which may be incredibly attractive to professionals who do lots of heavy post-processing.</p>
<div id="attachment_36147" style="width: 990px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_Resolve_9801.jpg" rel="lightbox-3"><img class="size-full wp-image-36147" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/W8100_Resolve_9801.jpg" alt="W8100" width="980" height="479" /></a><p class="wp-caption-text">DaVinci Resolve performance scaling with W8100</p></div>
<p>Also, in regards to <a href="http://links.em.experience.amd.com/servlet/MailView?ms=MjEwMzQ4MTES1&amp;r=NzMzNTE5MTkwMzgS1&amp;j=MzQxMjc5MjU0S0&amp;mt=1&amp;rt=0" target="_blank">AMD&#8217;s 50% off promotion</a>, only specific GPUs are actually eligible, including the W9100. And frankly, if you&#8217;re going to use the 50% off promotion, you might as well use it on the company&#8217;s fastest, most expensive, and most capable professional graphics card. Other options include $800 off the MSRP of the W8000, $450 off the W7000, $1,250 off the S9000, and $715 off the S7000. So, it obviously isn&#8217;t 50% off all professional graphics cards, but rather up to 50% off some of them.</p>
<p>I&#8217;m not sure why AMD is doing this; perhaps to introduce people to its GPUs by letting them buy one cheaply, which isn&#8217;t a bad sales strategy. However, it may also be that the company is desperate to sell these GPUs and is cherry-picking specific models and prices to make sure it still makes a profit on them.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/">AMD Launches W8100, Cuts GPUs Prices 50% for First GPU</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/06/23/amd-launches-w8100-cuts-gpus-prices-50-first-gpu/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>New GeForce GTX 880, GTX 870 Details Leak</title>
		<link>http://www.vrworld.com/2014/06/17/new-geforce-gtx-880-gtx-870-details-leak/</link>
		<comments>http://www.vrworld.com/2014/06/17/new-geforce-gtx-880-gtx-870-details-leak/#comments</comments>
		<pubDate>Tue, 17 Jun 2014 17:04:49 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Rumors]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[GeForce GTX 880]]></category>
		<category><![CDATA[GK-110]]></category>
		<category><![CDATA[GK104]]></category>
		<category><![CDATA[GM 204]]></category>
		<category><![CDATA[GM 207]]></category>
		<category><![CDATA[GM-104]]></category>
		<category><![CDATA[GM-110]]></category>
		<category><![CDATA[GM204]]></category>
		<category><![CDATA[GTX 870]]></category>
		<category><![CDATA[GTX 880]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[Maxwell]]></category>
		<category><![CDATA[Nvidia]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=35985</guid>
		<description><![CDATA[<p>As always, there are going to be a plethora of rumors about the next GPUs coming from AMD and Nvidia, so it comes as no ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/17/new-geforce-gtx-880-gtx-870-details-leak/">New GeForce GTX 880, GTX 870 Details Leak</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="668" height="258" src="http://cdn.vrworld.com/wp-content/uploads/2014/06/FeatureImage-geforce-gtx-6601.jpg" class="attachment-post-thumbnail wp-post-image" alt="GeForce GTX 880" /></p><p>As always, there are going to be a plethora of rumors about the next GPUs coming from AMD and Nvidia, so it comes as no surprise that new details are leaking about Nvidia&#8217;s next generation of GPUs based on Maxwell. Nvidia has already launched the Maxwell architecture with the <a title="Nvidia’s GTX 750 Ti Introduces Maxwell and a Whole Lotta Cache" href="http://www.brightsideofnews.com/2014/02/18/nvidias-gtx-750-ti-introduces-maxwell-and-a-whole-lotta-cache/">incredibly efficient and modular GTX 750 Ti</a>. However, this is its one and only Maxwell part so far, and lots of people are wondering when we&#8217;ll see the mid-range and high-end parts.</p>
<p>So, <a href="http://www.sweclockers.com/nyhet/18948-geforce-gtx-880-och-gtx-870-med-maxwell-anlander-till-hosten" target="_blank">today&#8217;s rumors</a> about the GeForce GTX 880 and GTX 870 are clearly feeding off this desire for information, and they hint at when, and what, we can expect to see from Nvidia. The most important detail is that the sources claim the GeForce GTX 880 and GTX 870 will launch in the fourth quarter of 2014, most likely in October or November. This would give Nvidia ample time to prepare for the fall/winter game-release rush and to have a new and exciting product for the holiday season.</p>
<p>Also, as expected, this part will still be based on TSMC&#8217;s 28nm process, as the countless rumors of TSMC&#8217;s 20nm delays seem to never end. While both AMD and Nvidia would surely love to manufacture their latest generation of high-end GPUs on TSMC&#8217;s 20nm process, it simply does not appear to be ready for their multi-billion-transistor chips. Remember that Nvidia&#8217;s GK110 is a 7.1 billion transistor chip, and there is a strong likelihood that a big Maxwell part, what we would expect to be the GM110, will end up a GTX 9 series part rather than a GeForce GTX 880.</p>
<p>As we have seen from Nvidia in the past, it generally launches a cut-down version of the architecture first, like the GTX 680 (GK104), and then, once the process is more mature and thermals and yields are manageable, a part like the GTX 780 Ti (GK110) that follows the &#8216;full-blown&#8217; architecture design without any compromises. A lot of sites are referring to the Maxwell batch of GPUs as GM204 and GM210; however, there is no strong reason to believe Nvidia would do this, especially since the GTX 750 Ti is a GM107 chip. If the process hasn&#8217;t changed and the architecture is the same, I don&#8217;t see any reason why the GeForce GTX 880 could not be a GM104, with its full-blown successor being a GM110, or a GM210 if they decide to go with the 20nm die shrink (very likely).</p>
<p>However, some of the rumors continually refer to the high-end GPUs as GM204, which leads some people to believe that we may see more than just higher performance out of a GeForce GTX 880 or GTX 870 part. Realistically, though, I don&#8217;t see this as likely; those capabilities may be reserved for professional or HPC parts that will be released shortly after the GeForce GTX 880.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/06/17/new-geforce-gtx-880-gtx-870-details-leak/">New GeForce GTX 880, GTX 870 Details Leak</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/06/17/new-geforce-gtx-880-gtx-870-details-leak/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Nvidia&#039;s Jetson TK1 is shipping, is it a &#039;Supercomputer&#039;?</title>
		<link>http://www.vrworld.com/2014/05/01/nvidias-jetson-tk1-shipping-supercomputer/</link>
		<comments>http://www.vrworld.com/2014/05/01/nvidias-jetson-tk1-shipping-supercomputer/#comments</comments>
		<pubDate>Thu, 01 May 2014 20:04:39 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Supercomputer]]></category>
		<category><![CDATA[Tegra]]></category>
		<category><![CDATA[Tegra K1]]></category>
		<category><![CDATA[Titan]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=34854</guid>
		<description><![CDATA[<p>So last week, Nvidia launched a CUDA Vision Challenge that challenges developers to propose applications that might properly utilize the Jetson TK1&#8217;s embedded &#8216;Supercomputer&#8217; capability for ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/05/01/nvidias-jetson-tk1-shipping-supercomputer/">Nvidia&#039;s Jetson TK1 is shipping, is it a &#039;Supercomputer&#039;?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1920" height="1140" src="http://cdn.vrworld.com/wp-content/uploads/2014/05/Jetson_TK11.jpg" class="attachment-post-thumbnail wp-post-image" alt="Jetson TK1" /></p><p>So last week, <a href="https://developer.nvidia.com/tk1-vision-challenge" target="_blank">Nvidia launched a CUDA Vision Challenge</a> that invites developers to propose applications that might properly utilize the Jetson TK1&#8217;s embedded &#8216;supercomputer&#8217; capability for unique or novel purposes. <a href="http://blogs.nvidia.com/blog/2014/04/25/win-jetson-tk1/" target="_blank">Nvidia announced that it would give away 50 Jetson TK1 development boards</a> to the top 50 proposals, with the winners to be announced on May 20th. And if you don&#8217;t think you&#8217;ll win but still want to get your hands on one, you can buy it for $192. That price is part of Nvidia&#8217;s own marketing: the Tegra K1 SoC on board the Jetson TK1 contains a Kepler GPU with 192 shader cores.</p>
<p>The unfortunate part is that a lot of people are letting Nvidia be far too liberal with the term &#8216;supercomputer&#8217;, much like when Nvidia invented the &#8216;superphone&#8217; category for smartphones. There really is no way that the Tegra K1 or the Jetson TK1 is a supercomputer, even if you consider it an &#8216;embedded supercomputer&#8217;. Realistically, a supercomputer needs more horsepower than a quad-core ARM processor and 192 GPU shader cores. Nvidia does actually power a supercomputer in the US known as Titan, the inspiration for an entire Titan line of GPUs. The <a href="http://www.top500.org/system/177975" target="_blank">Titan supercomputer at Oak Ridge National Laboratory</a> is currently the second fastest supercomputer in the world, with a total of 560,640 cores (including GPU and CPU); the Jetson TK1&#8217;s 5 CPU cores and 192 GPU cores are a tiny, tiny fraction of that. It just doesn&#8217;t make sense to call the Jetson TK1 a supercomputer when it would likely take hundreds of thousands, if not millions, of Jetson TK1 boards to approach that kind of performance.</p>
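<p>To put those core counts in perspective, here is a hypothetical back-of-the-envelope sketch using only the numbers quoted above. Note that raw core counts, if anything, understate the gap, since Titan&#8217;s desktop-class CPU and GPU cores are individually far faster than the TK1&#8217;s:</p>

```python
import math

# Core counts quoted in the article.
titan_cores = 560_640  # Titan supercomputer, CPU + GPU cores combined
tk1_cores = 5 + 192    # Jetson TK1: 4+1 ARM CPU cores plus 192 Kepler shader cores

# Boards needed just to equal Titan's raw core count (not its performance).
boards = math.ceil(titan_cores / tk1_cores)
print(boards)  # 2846
```

<p>Even by the crudest measure, matching Titan&#8217;s core count alone would take thousands of boards; matching its actual throughput would take vastly more.</p>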
<p><a href="http://cdn.vrworld.com/wp-content/uploads/2014/05/1024px-titan11.jpg" rel="lightbox-0"><img class="aligncenter size-full wp-image-34856" src="http://cdn.vrworld.com/wp-content/uploads/2014/05/1024px-titan11.jpg" alt="1024px-titan1" width="1024" height="364" /></a></p>
<p>It just seems incredibly disingenuous for Nvidia to brand the Jetson TK1 as a supercomputer, because it simply isn&#8217;t anywhere near one. Sure, it may be a pretty great self-contained embedded computer with a lot of connectivity and computing power, but the truth is that it is not a supercomputer by any measure. The TK1 is essentially a tablet or smartphone chip being used for embedded applications, not a supercomputer chip.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/05/01/nvidias-jetson-tk1-shipping-supercomputer/">Nvidia&#039;s Jetson TK1 is shipping, is it a &#039;Supercomputer&#039;?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/05/01/nvidias-jetson-tk1-shipping-supercomputer/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Nvidia GeForce GTX Titan Z is&#8230; Coming Soon?</title>
		<link>http://www.vrworld.com/2014/04/30/nvidia-geforce-gtx-titan-z-coming-soon/</link>
		<comments>http://www.vrworld.com/2014/04/30/nvidia-geforce-gtx-titan-z-coming-soon/#comments</comments>
		<pubDate>Wed, 30 Apr 2014 17:03:26 +0000</pubDate>
		<dc:creator><![CDATA[Anshel Sag]]></dc:creator>
				<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Rumors]]></category>
		<category><![CDATA[Asus]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[GeForce GTX]]></category>
		<category><![CDATA[GK-110]]></category>
		<category><![CDATA[GTX Titan Z]]></category>
		<category><![CDATA[GTXTITANZ-12GD5]]></category>
		<category><![CDATA[Kepler]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Nvidia GeForce]]></category>
		<category><![CDATA[Titan]]></category>
		<category><![CDATA[Titan Z]]></category>

		<guid isPermaLink="false">http://www.brightsideofnews.com/?p=34811</guid>
		<description><![CDATA[<p>We were a bit surprised to see an announcement on Techpowerup! that ASUS had launched their GTX Titan Z without any hoopla from Nvidia or ...</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/04/30/nvidia-geforce-gtx-titan-z-coming-soon/">Nvidia GeForce GTX Titan Z is&#8230; Coming Soon?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="805" height="465" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/ASUS_GTXTITANZ-12GD5_011.jpg" class="attachment-post-thumbnail wp-post-image" alt="ASUS_GTXTITANZ-12GD5_01" /></p><p>We were a bit surprised to see <a href="http://www.techpowerup.com/200339/asus-announces-the-geforce-gtx-titan-z-dual-gpu-graphics-card.html" target="_blank">an announcement on Techpowerup!</a> that ASUS had launched their GTX Titan Z without any hoopla from Nvidia or any of their other board partners. So, it comes as little surprise that the article itself has since been pulled and that any and all mentions of ASUS&#8217; GTX Titan Z have disappeared. But of course, the damage has already been done and Pandora&#8217;s box has been opened. However, there isn&#8217;t that much about this card that is really a mystery.</p>
<p>The GTX Titan Z is expected to be a dual-GPU Titan graphics card, air cooled, that delivers some of the best double precision compute performance on earth while still providing an incredible gaming experience, all from a single card. This is possible because of the dual GPUs and 12 GB of RAM, which in theory would make the Titan Z a better single-card solution for driving three 4K monitors simultaneously. Obviously, based on our findings with the AMD R9 295X2, gaming across all three of those 4K monitors is out of the question, but driving a single 4K monitor is very likely possible, if not encouraged. What&#8217;s more interesting is that Nvidia told us the price of the GTX Titan Z before we actually knew anything else about the card: at $3,000, it costs as much as two of AMD&#8217;s latest high-end GPUs, the R9 295X2.</p>
<p>&nbsp;</p>
<div id="attachment_34815" style="width: 845px" class="wp-caption aligncenter"><a href="http://cdn.vrworld.com/wp-content/uploads/2014/04/ASUS_GTXTITANZ-12GD5_021.jpg" rel="lightbox-0"><img class="size-full wp-image-34815" src="http://cdn.vrworld.com/wp-content/uploads/2014/04/ASUS_GTXTITANZ-12GD5_021.jpg" alt="ASUS GTX Titan Z" width="835" height="705" /></a><p class="wp-caption-text">ASUS GTX Titan Z</p></div>
<p>Based on what we saw from Techpowerup, the ASUS card will have its own real-time graphics tuning utility called GPU Tweak to allow on-the-fly adjustments, even though I don&#8217;t know many people who need to make on-the-fly adjustments to a $3,000 card. Remember, this isn&#8217;t quite a professional card, but it also isn&#8217;t quite a gaming card, since its $3,000 price puts it out of the reach of about 99.99% of gamers. Each GPU will run at a base clock of 705 MHz and a boost clock of 876 MHz, meaning this card is significantly slower per GPU than its single-GPU counterparts, which isn&#8217;t unexpected considering the TDP of each GPU. Combined, the two GPUs give the card a total of 12 GB of GDDR5 memory and 5760 CUDA cores, which is exactly what this card is designed to deliver: lots of memory and lots of cores.</p>
<p>According to the Techpowerup! article, this GPU was supposed to be available worldwide on April 29th, which is to say yesterday. However, we&#8217;re hearing that there is a slight delay and that the date has been pushed back to May 8th, just over a week away. That may explain why ASUS had Techpowerup pull the article and why there is no other news about it; Techpowerup may simply not have gotten the memo.</p>
<p style="text-align: center;"><b>SPECIFICATIONS: </b><b style="text-align: center;">GTXTITANZ-12GD5</b></p>
<p style="text-align: center;"><span style="text-align: center;">Graphics Engine: NVIDIA GeForce GTX TITAN Z</span><br />
<span style="text-align: center;">Bus Standard: PCI Express 3.0</span><br />
<span style="text-align: center;">OpenGL: OpenGL 4.4</span><br />
<span style="text-align: center;">Video Memory: 12 GB GDDR5</span><br />
<span style="text-align: center;">GPU Boost Clock: 876 MHz</span><br />
<span style="text-align: center;">GPU Base Clock: 705 MHz</span><br />
<span style="text-align: center;">CUDA Cores: 5760</span><br />
<span style="text-align: center;">Memory Clock: 7000 MHz</span><br />
<span style="text-align: center;">Memory Interface: 768 bit</span><br />
<span style="text-align: center;">Output: 1 x Native DVI-I, 1 x Native DVI-D,1 x Native HDMI, 1 x Native DisplayPort 1.2</span></p>
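<p>From the memory clock and interface width listed above, the card&#8217;s combined memory bandwidth can be derived. A quick sketch in Python, assuming the 7000 MHz figure is the effective GDDR5 data rate and the 768-bit interface is the combined width across both GPUs (384 bits each):</p>

```python
# Specs from the table above.
effective_clock_hz = 7000 * 10**6   # 7000 MHz effective GDDR5 data rate
bus_width_bits = 768                # combined interface: 384 bits per GPU x 2

# Bandwidth = data rate x bus width, converted from bits to bytes.
bandwidth_gbs = effective_clock_hz * bus_width_bits / 8 / 10**9
print(f"{bandwidth_gbs:.0f} GB/s combined")  # 672 GB/s
```

<p>As is typical for dual-GPU cards of this era, each GPU effectively works with its own 6 GB pool at half that bandwidth, rather than a unified 12 GB.</p>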
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2014/04/30/nvidia-geforce-gtx-titan-z-coming-soon/">Nvidia GeForce GTX Titan Z is&#8230; Coming Soon?</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2014/04/30/nvidia-geforce-gtx-titan-z-coming-soon/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
