<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR World &#187; Supercomputing Frontiers 2015</title>
	<atom:link href="http://www.vrworld.com/category/event/supercomputing-frontiers-2015/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.vrworld.com</link>
	<description></description>
	<lastBuildDate>Thu, 09 Apr 2015 20:31:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>Satoshi Matsuoka Interview on state of Japan&#8217;s HPC Market</title>
		<link>http://www.vrworld.com/2015/03/26/satoshi-matsuoka-interview-on-state-of-japans-hpc-market/</link>
		<comments>http://www.vrworld.com/2015/03/26/satoshi-matsuoka-interview-on-state-of-japans-hpc-market/#comments</comments>
		<pubDate>Thu, 26 Mar 2015 02:00:12 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Asia Pacific (APAC)]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[Interviews]]></category>
		<category><![CDATA[Japan]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[K Computer]]></category>
		<category><![CDATA[Riken Advanced Institute for Computational Science]]></category>
		<category><![CDATA[Satoshi Matsuoka]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50849</guid>
		<description><![CDATA[<p>VR World chats with Satoshi Matsuoka to understand what is going on in the HPC space in the land of the rising sun. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/26/satoshi-matsuoka-interview-on-state-of-japans-hpc-market/">Satoshi Matsuoka Interview on state of Japan&#8217;s HPC Market</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1600" height="1150" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/WG-k-computer-1.jpg" class="attachment-post-thumbnail wp-post-image" alt="WG-k-computer (1)" /></p><p>Japan is a major player in the high performance computing space, but it is often overlooked in favor of discussions about the latest efforts out of China and the US. While China’s national showpiece, Tianhe-2, gets its share of attention, it’s important to remember that Japan holds two positions within the top 20 of the Linpack Top500 list of HPC systems: fourth and 15th.</p>
<p>Given Japan’s industrial and scientific might as the world’s third-largest economy, it’s expected that it would also be a major</p>
<div id="attachment_50937" style="width: 350px" class="wp-caption alignleft"><a href="http://cdn.vrworld.com/wp-content/uploads/2015/03/st20131129_tsubame03.jpg" rel="lightbox-0"><img class="size-full wp-image-50937" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/st20131129_tsubame03.jpg" alt="Professor Satoshi Matsuoka" width="340" height="240" /></a><p class="wp-caption-text">Professor Satoshi Matsuoka</p></div>
<p>HPC power. Japanese firms are hard at work designing exascale systems, and the Riken Advanced Institute for Computational Science, home to the world’s fourth fastest supercomputer (which held the title of fastest when it was switched on in 2011), might be home to the world’s first <a href="http://www.pcworld.com/article/2690212/fujitsu-to-design-japanese-exascale-supercomputer.html">exascale system</a> even before the United States.</p>
<p>On the sidelines of the Supercomputing Frontiers 2015 conference in Singapore, the <i>VR World</i> team sat down with Dr. Satoshi Matsuoka of the Tokyo Institute of Technology, one of the leading figures in HPC in Japan, to discuss the state of HPC in the country.</p>
<p><b><i>VR World: </i></b><b>What is the state of Japanese supercomputing when compared to the competitive landscapes of the United States and China?</b></p>
<p><b>Satoshi Matsuoka: </b>Historically, the Japanese HPC market and Japanese technology have always been fairly competitive, especially in the system architecture space. The US and Japan are now the two countries producing supercomputing platforms that are sold worldwide. What China creates is not sold to the outside market.</p>
<p>The Japanese market in computing has always come from the mainframe market. Hitachi, Fujitsu and NEC&#8230; they were all mainframe vendors. There were actually others, but they have since moved away. These three have always been the biggest mainframe vendors.</p>
<p>Fujitsu has gone the way of building its own MPPs [massively parallel processors] &#8211; it actually built its first in the 90s, the AP 1000. Then it went on to build its own SPARC processors, which differ from Oracle’s and Sun’s, and now the <a href="http://en.wikipedia.org/wiki/K_computer">K computer</a> [the world’s fourth fastest supercomputer].</p>
<p><b><i>VRW: </i></b><b>So you would say the Fujitsu SPARC, in terms of computational performance and stability for HPC or client computing, is actually ahead of Oracle’s at this point?</b></p>
<p><b>SM: </b>Way ahead, yes; it is very HPC focused. So it is hard to say which one is better but for HPC, definitely Fujitsu’s [system] is better. Now looking at the hardware side, there are some advantages [over Oracle], because the Japanese vendors are focusing on building fairly special-purpose HPC hardware. They can really tailor the processors to be directed towards this specific market.</p>
<p>For example, the FX 100, the latest chip from Fujitsu, has 34 cores.</p>
<p><b><i>VRW: </i></b><strong>Are you comfortable Fujitsu will continue in the medium to long term? Are they committed to leading the industry, in your mind?</strong></p>
<p><strong><b>SM: </b></strong>Yes, they are. They have embedded the Tofu network into the latest FX 100. They were the first adopters of HMC, the 3D stacking technology. They also have enormously high injection bandwidth within the network, and extensive RAS features [reliability, availability, and serviceability], which makes them really competitive in comparison to other processors.</p>
<p>It is becoming increasingly difficult for the Japanese HPC vendors to compete with their American counterparts, because designing these processors has become increasingly expensive: there are more transistors, lithography design is becoming more complicated, and you need more validation testing. So much of Japanese HPC development is funded by public money, because there are still some centers that buy [the HPC systems], and there are national projects like the K computer &#8211; the post-K computer project has now been approved.</p>
<p>So this makes it very hard for the Japanese vendors. Of course, the Japanese vendors also have their own x86 lines and so forth &#8211; Fujitsu sells x86 machines; so does NEC, so does Hitachi. And Hitachi has an alliance with IBM now, so they don’t make their own processors anymore; they work with IBM to design high-end systems.</p>
<p>I think the only way the Japanese vendors can survive &#8211; and this is my personal view &#8211; is to become more aligned with commoditization, leveraging the other markets. That is not to say they will produce something cheap; commoditization is not about building cheap stuff. Commoditization is actually about applying the latest and greatest technologies while remaining compliant with certain standards.</p>
<p><em><strong>VRW: </strong></em><strong>Thanks for your time. </strong></p>
<p><em><strong>This interview has been edited and condensed. </strong></em></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/26/satoshi-matsuoka-interview-on-state-of-japans-hpc-market/">Satoshi Matsuoka Interview on state of Japan&#8217;s HPC Market</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/26/satoshi-matsuoka-interview-on-state-of-japans-hpc-market/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Evils of Floating Point, and the Joys of Unum</title>
		<link>http://www.vrworld.com/2015/03/24/the-evils-of-floating-point-and-the-joys-of-unum/</link>
		<comments>http://www.vrworld.com/2015/03/24/the-evils-of-floating-point-and-the-joys-of-unum/#comments</comments>
		<pubDate>Tue, 24 Mar 2015 03:46:54 +0000</pubDate>
		<dc:creator><![CDATA[Brandon Shutt]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[floating point]]></category>
		<category><![CDATA[John Gustafson]]></category>
		<category><![CDATA[Universal numbers]]></category>
		<category><![CDATA[unums]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50682</guid>
		<description><![CDATA[<p>Universal Numbers (Unum) and floating points are complicated. Here's an explainer on the subject. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/24/the-evils-of-floating-point-and-the-joys-of-unum/">The Evils of Floating Point, and the Joys of Unum</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="3600" height="2700" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/coolness.jpg" class="attachment-post-thumbnail wp-post-image" alt="coolness" /></p><p>It may come as a surprise to many that the way computers handle numbers is not very accurate. Indeed, it can be said that error is built into the very foundation of digital computers, and while the end user often does not see the result of these errors, they can be very problematic for programmers, scientists, engineers, and calculation-intensive industries such as money management and military operations.</p>
<p>At the recent <a href="http://www.vrworld.com/category/event/supercomputing-frontiers-2015/">Supercomputing Frontiers 2015</a> conference in Singapore, computer scientist John Gustafson outlined the problems with floating point in his <a href="http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/">keynote</a> and later in an <a href="http://www.vrworld.com/2015/03/19/error-free-computing-unums-save-both-real-and-virtual-battles/">interview</a>. Given the complexity &#8212; and severity &#8212; of the problem, it&#8217;s worth taking a second in-depth look at the issue.</p>
<h2><strong>The Problem</strong></h2>
<p>Developer Richard Harris, who wrote a series of articles on the dangers of floating point, <a href="http://www.citeulike.org/user/bastibarry1/article/11060101">said in one post</a>, &#8220;The dragon of numerical error is not often roused from his slumber, but if incautiously approached he will occasionally inflict catastrophic damage upon the unwary programmer&#8217;s calculations. So much so that some programmers, having chanced upon him in the forests of IEEE 754 floating point arithmetic, advise their fellows against travelling in that fair land.&#8221;</p>
<p>Because computers &#8211; which are machines of precision and exactness &#8211; are often made to deal with imprecise and inexact numbers (such as pi and other irrationals), methods must be devised to compensate for computational error, and to make the end result as close to the correct answer as possible. One solution devised long ago is still in use today: floating point. Floating point is a method similar to scientific notation, using a sign bit, an exponent, and a fixed number of significand digits to represent a number.</p>
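<p>The representation error described above is easy to observe in any language that uses IEEE 754 doubles; a minimal Python illustration (not from the article):</p>

```python
from decimal import Decimal

# 0.1 has no exact binary representation; the stored double is the
# nearest fraction whose denominator is a power of two.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Sums of "simple" decimals are therefore not exact either.
print(0.1 + 0.2 == 0.3)    # False
print(0.1 + 0.2)           # 0.30000000000000004
```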
<p>Since the IEEE Standard for Floating-Point Arithmetic (IEEE 754) was published in 1985, it has come to dominate the mathematical methods used by hardware and software engineers for the basic operations computers perform whenever running an application. Ideally, a one-size-fits-all standard such as this one would minimize error and promote uniformity of results across a broad spectrum of hardware.</p>
<p>Unfortunately, this has not been the practical result. Different processors and software packages designed to handle floating point operations often produce slightly different answers, due to rounding errors and differing orders of operation.</p>
<p>One way that programmers often compensate is to use as many digits as possible to represent a number. In modern computers, this means that 32 to 64 bits of data are almost always used to represent a single floating point number. And while modern computers are very fast at calculation, all of those bits must be stored in and retrieved from memory, causing significant latency.</p>
<p>Furthermore, due to compounding error, traditional properties of algebra &#8211; such as associativity and distributivity &#8211; do not necessarily apply to floating point operations. In other words, (a + b) + c does not always equal a + (b + c), nor does c * (a + b) always equal c*a + c*b.</p>
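<p>This is straightforward to reproduce; a short Python sketch (the values are chosen only to expose the rounding):</p>

```python
a, b, c = 0.1, 0.2, 0.3

# Associativity fails: the intermediate rounding depends on grouping.
print((a + b) + c == a + (b + c))   # False
print((a + b) + c, a + (b + c))     # 0.6000000000000001 0.6

# Absorption: next to a large value, a small addend can vanish entirely.
print((1e16 + 1.0) - 1e16)          # 0.0
```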
<p>In the case of floating point, these differing orders of operation often yield dissimilar results.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/24/the-evils-of-floating-point-and-the-joys-of-unum/">The Evils of Floating Point, and the Joys of Unum</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/24/the-evils-of-floating-point-and-the-joys-of-unum/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Jack Dongarra on the Great Exascale Challenge and Rising HPC Powers</title>
		<link>http://www.vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/</link>
		<comments>http://www.vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/#comments</comments>
		<pubDate>Mon, 23 Mar 2015 11:11:34 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[Interviews]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[Exascale Computing]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Jack Dongarra]]></category>
		<category><![CDATA[Supercomputing]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50573</guid>
		<description><![CDATA[<p>VR World chats with the Oak Ridge National Laboratory's Jack Dongarra on the road to exascale computing, and rising national powers in the HPC space.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/">Jack Dongarra on the Great Exascale Challenge and Rising HPC Powers</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1461" height="914" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/dongarra-3-edited.jpg" class="attachment-post-thumbnail wp-post-image" alt="dongarra-3-edited" /></p><p>The next big leap in scientific computing is the race to <a href="http://en.wikipedia.org/wiki/Exascale_computing">exascale</a>, the capability for a computer to perform 1 million trillion (10<sup>18</sup>) floating-point operations per second.</p>
<p>The US Department of Energy, which will fund the development of such systems, has <a href="http://science.energy.gov/ascr/research/scidac/exascale-challenges/">set targets</a> for what it wants from exascale systems: one available in the 2018-2022 timeframe that consumes less than 20 megawatts of power.</p>
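<p>Those two targets imply a demanding efficiency figure; a back-of-the-envelope check in Python (the Tianhe-2 figures &#8211; roughly 33.86 petaflops Linpack at 17.8 megawatts &#8211; are from the Top500 list, not from this article):</p>

```python
exaflop = 1e18        # floating-point operations per second
power_budget = 20e6   # watts, the DOE target

target = exaflop / power_budget
print(target / 1e9, "GFLOPS per watt required")   # 50.0

# Tianhe-2, the current #1 system, for comparison:
tianhe2 = 33.86e15 / 17.8e6
print(round(tianhe2 / 1e9, 2), "GFLOPS per watt achieved")  # 1.9
print(round(target / tianhe2), "x efficiency gap")          # 26
```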
<p>For scientific computing, having this much processing power available would mean researchers could tackle the <i>next</i> big questions in science. It has been likened to the Hubble telescope and the advantage it offered scientists in seeing far-off, previously invisible stars.</p>
<p>But the problem is that current technology is not at the level needed to accommodate the requirements of exascale computing. In order to reach exascale at an efficient power and price point, new architectures will have to be developed that change the way high performance computers compute and move data. Current-generation hardware cannot simply be scaled up to exascale; the power required would be enormous and uneconomical.</p>
<p>While the US has put a great deal of resources into the necessary research required to hit exascale, in the end it may be beaten to exascale by another country.</p>
<p>In order to get a better understanding of what needs to happen before we reach exascale, and to get a perspective on some of the other rising powers in HPC, <i>VR World</i> spoke with Oak Ridge National Laboratory&#8217;s <a href="http://www.vrworld.com/tag/jack-dongarra/">Jack Dongarra</a>, who delivered a keynote on the topic at the Supercomputing Frontiers 2015 conference in Singapore.</p>
<p><b><em>VR World</em>: During your keynote you mentioned the ‘exascale challenge’. In your opinion, how do we get there from here? What has to happen?</b></p>
<p><b>Jack Dongarra: </b>We can’t use today’s technology to build that exascale machine. It would cost too much money, and the power requirements would be way too much. It would take 30 Tianhe-2 clusters in order to get there. We have to have some way to reduce the power and keep the cost under control.</p>
<p>Today, all of our machines are over-provisioned for floating-point. They have an excess floating-point capability. The real issues are related to data movement. It’s related to bandwidth. For example, you have a chip. And this chip has increasing computing capability &#8212; you put more cores on it. Those cores need data, and the data has to come in from the sides. You’ve got area that’s increasing due to the computing capability but the perimeter is not increasing to compensate for it. The number of pins limits the data that can go in. That’s the crisis we have.</p>
<p>That has to change. One way it changes is by doing stacking. 3D stacking is a technology that we have at our disposal now. That will allow much more information flow in a way that makes a lot more sense in terms of increasing bandwidth. We have a mechanism for doing that, so we get increased bandwidth. That bandwidth is going to help reduce [power draw] as we don’t have to move data into the chip.</p>
<p>The other thing that’s going to happen is that photonics is going to take over. The data is going to move not over copper lines but over optical paths. The optical paths reduce the amount of power necessary. So that’s a way to enhance the data movement, and to reduce the power consumption of these processors. The chip gets much more affordable, and we can have a chance at turning that computing capability into realized performance &#8212; which is a key thing.</p>
<p>In the US, I think we’ll reach exascale in 2022. 2022 is the point where the money will be in place; it’s a question of money. We could build a machine today, but it would be too expensive. The current thinking is it will be realizable around 2020, and the US is going to be able to deploy the machine in 2022. The money won’t be in place until then, but the technology will be ready ahead of time.</p>
<p><strong><i>VRW</i>: What’s your take on vendors&#8217; 3D stacking efforts so far?</strong></p>
<p><b>JD: </b>It’s great. It has to happen. It’s gotta be that way. It’s a natural way to move. It’s going to be the key thing in terms of performance enhancement in the next few years, and being able to effectively employ that as a device. Things look very positive.</p>
<p><b><i>VRW: </i></b><b>Over the last few years we’ve witnessed China becoming a rising CPU player, with its domestic Alpha- and MIPS-based CPUs. Do you have a feeling that conventional CPU vendors have overcomplicated things for themselves?</b></p>
<p><b>JD: </b>China has an indigenous processor which may or may not come out and be deployed in a high performance machine. There are some rumors that the next big machine would be based on the <a href="http://en.wikipedia.org/wiki/ShenWei">ShenWei CPU</a>. I can understand the motivation for China wanting its own processor; they don’t want to be dependent on Western technology for these things. There are some issues here. It’s not going to be an x86 architecture, so software will have to be re-written for this machine. Software is a big deal on these systems, but that can be overcome.</p>
<p>When China does deploy this at wide scale, Intel will stand up and take notice. It will be a big thing: China will be in a position to use its own product and not Intel’s product. That becomes a big issue.</p>
<p><b><i>VRW: </i></b><b>Do you see any emerging powers in the HPC space that are outside the traditional industrial powers of US, Japan, Europe and China?</b></p>
<p><b>JD: </b>Things have been dominated by the US, followed by the European Union and Japan. China is a more recent investor in high performance computing. Then there are other countries that claim to want to be involved. Korea is one; they are making noise about buying a big machine. They aren&#8217;t going to build a machine &#8212; they don&#8217;t have the processors &#8212; they are going to buy a machine from the US.</p>
<p>India has made claims that it wants to do something. Again, they aren’t going to make their own machine. They are going to purchase one.</p>
<p><b><i>VRW: </i></b><b>Thanks for your time. </b></p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/">Jack Dongarra on the Great Exascale Challenge and Rising HPC Powers</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Jack Dongarra: China Isn’t the Emerging HPC Power You Think It Is</title>
		<link>http://www.vrworld.com/2015/03/22/jack-dongarra-china-isnt-the-emerging-hpc-power-you-think-it-is/</link>
		<comments>http://www.vrworld.com/2015/03/22/jack-dongarra-china-isnt-the-emerging-hpc-power-you-think-it-is/#comments</comments>
		<pubDate>Sun, 22 Mar 2015 11:04:25 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[China High Performance Computing]]></category>
		<category><![CDATA[China HPC]]></category>
		<category><![CDATA[China Supercomputers]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Jack Dongarra]]></category>
		<category><![CDATA[Oak Ridge National Laboratory]]></category>
		<category><![CDATA[Tianhe-2]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50513</guid>
		<description><![CDATA[<p>In an exclusive interview with VR World, Jack Dongarra of Oak Ridge National Laboratory says we need to take a second look at certain countries' claims of rising HPC power -- notably China.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/22/jack-dongarra-china-isnt-the-emerging-hpc-power-you-think-it-is/">Jack Dongarra: China Isn’t the Emerging HPC Power You Think It Is</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="741" height="506" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/dongarra-banner.jpg" class="attachment-post-thumbnail wp-post-image" alt="dongarra-banner" /></p><p><em><strong>Read VR World&#8217;s <a href="http://www.vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/">full interview</a> with Prof. Jack Dongarra here. </strong></em></p>
<p>Countries around the world, particularly emerging markets, would love to have a top 100 supercomputer. A machine that ranks in the top 100, or even the top 10, is a national showpiece &#8211; a sign of technological might &#8211; and pleases many of a country’s politicians.</p>
<p>The United States is the world’s dominant high performance computing power, as it has more supercomputers in the <a href="http://www.top500.org/project/">top 500 list</a> than any other single country, but China would like to challenge this hegemony. After all, China has the world’s fastest supercomputer, <a href="http://www.top500.org/system/177999">Tianhe-2</a>, at the National Supercomputer Center at Sun Yat-sen University in Guangzhou.</p>
<p>But in an exclusive interview with <i>VR World</i>, Dr. Jack Dongarra of Oak Ridge National Laboratory and the University of Tennessee, said that China’s HPC stature may be something of a facade. Tianhe-2, while definitely the world’s fastest supercomputer, is somewhat idle and is not being used to its full capacity.</p>
<p>“The real question is: what are they going to use the machine for? I question, at some level, what the Chinese are doing with these big machines,” Dongarra said. “They are not using the accelerator part of the machine.” [<a href="http://ark.intel.com/products/75798/Intel-Xeon-Phi-Coprocessor-3120P-6GB-1_100-GHz-57-core">48,000 Intel Xeon Phi 31S1P accelerator cards</a>].</p>
<p>“I go visit the computing facilities [in China] &#8211; and I’m not saying that they are being used for things that are secret &#8211; I’m saying that I don’t know what they are being used for,” he continued.</p>
<p>Dongarra explained that part of the reason Tianhe-2 is more idle than other top supercomputers is the funding model China’s government provides: the government paid the costs to develop and construct the machine, but not its operational costs, which is not the norm in the scientific computing community.</p>
<p>An additional difficulty might be the machine setup China decided to go with. Intel&#8217;s (<a href="http://www.google.com/finance?cid=284784">NASDAQ: INTC</a>) Xeon Phi hasn’t proven itself in ease of use when compared to pure CPU code or code accelerated through GPGPU accelerators such as the Nvidia (<a href="http://www.google.com/finance?cid=662925">NASDAQ: NVDA</a>) Tesla or AMD (<a href="http://www.google.com/finance?cid=327">NASDAQ: AMD</a>) FirePro S Series.</p>
<p>“They have to come up with some mechanism to pay for it,” Dongarra said. “In scientific computing we don’t pay for computing time. It’s not in the culture of how we do business. A situation where people have to pay for computing time limits the computing time being used.”</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/22/jack-dongarra-china-isnt-the-emerging-hpc-power-you-think-it-is/">Jack Dongarra: China Isn’t the Emerging HPC Power You Think It Is</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/22/jack-dongarra-china-isnt-the-emerging-hpc-power-you-think-it-is/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Error-Free Computing: Unums Save Both Real and Virtual Battles</title>
		<link>http://www.vrworld.com/2015/03/19/error-free-computing-unums-save-both-real-and-virtual-battles/</link>
		<comments>http://www.vrworld.com/2015/03/19/error-free-computing-unums-save-both-real-and-virtual-battles/#comments</comments>
		<pubDate>Thu, 19 Mar 2015 05:45:31 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[Exclusive]]></category>
		<category><![CDATA[Interviews]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[floating point]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[integer]]></category>
		<category><![CDATA[interview]]></category>
		<category><![CDATA[John Gustafson]]></category>
		<category><![CDATA[Universal numbers]]></category>
		<category><![CDATA[unums]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50360</guid>
		<description><![CDATA[<p>VR World chats with John Gustafson about the challenges of implementing universal numbers into hardware, and the benefits they offer computing.  </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/19/error-free-computing-unums-save-both-real-and-virtual-battles/">Error-Free Computing: Unums Save Both Real and Virtual Battles</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="640" height="360" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/cpu_close_up.png" class="attachment-post-thumbnail wp-post-image" alt="cpu_close_up" /></p><p>To many people, the <a href="http://en.wikipedia.org/wiki/Floating_point">floating point</a>&#8211;<a href="http://en.wikipedia.org/wiki/Unum_%28number_format%29">universal number</a> debate is something extraneous: an academic issue that involves computer scientists, engineers, and hardware manufacturers.</p>
<p>But as <a href="http://en.wikipedia.org/wiki/John_Gustafson_%28scientist%29">John Gustafson</a> said <a href="http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/">during his keynote</a> at the <a href="http://www.vrworld.com/category/event/supercomputing-frontiers-2015/">Supercomputing Frontiers 2015</a> conference on Tuesday, the inaccuracies of floating point estimates have real-world implications. They can be deadly in the real sense &#8212; with missile defense batteries mis-calculating intercept times &#8212; and, as Gustafson explained, they can also lose battles in a virtual sense.</p>
<p>During intense battles in multiplayer games, floating point estimates would give different answers for different players. The calculation of whether a player’s shot would be a lethal headshot &#8212; or a frustrating miss &#8212; would have slightly different answers on different platforms. In order to get reliable, reproducible results in the event of a discrepancy, the software would need to fall back to integers.</p>
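<p>That integer fallback is essentially fixed-point arithmetic: integer operations give bit-identical results on every platform, so a hit test done in scaled integers is reproducible everywhere. A hypothetical Python sketch (the scale factor and hit radius are invented for illustration):</p>

```python
SCALE = 1000  # hypothetical: positions stored in thousandths of a world unit

def to_fixed(x: float) -> int:
    """Quantize a coordinate once, e.g. at the network boundary."""
    return round(x * SCALE)

def is_headshot(shot, head, radius_fixed):
    """Deterministic hit test: pure integer math after quantization,
    so every platform computes the same answer."""
    dx = to_fixed(shot[0]) - to_fixed(head[0])
    dy = to_fixed(shot[1]) - to_fixed(head[1])
    # Compare squared distances to avoid a floating-point sqrt.
    return dx * dx + dy * dy <= radius_fixed * radius_fixed

print(is_headshot((1.01, 2.0), (1.0, 2.0), radius_fixed=10))  # True
print(is_headshot((1.02, 2.0), (1.0, 2.0), radius_fixed=10))  # False
```

Once coordinates are quantized, every comparison is exact, which is why lockstep multiplayer engines take this route.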
<p>In order to better understand the benefits of unums, and the challenges of implementing them in hardware, the <i>VR World</i> team spoke with Gustafson on the sidelines of the Supercomputing Frontiers 2015 conference in Singapore.</p>
<div id="attachment_50361" style="width: 510px" class="wp-caption alignleft"><a href="http://cdn.vrworld.com/wp-content/uploads/2015/03/VRW-Gustafson-interview.jpg" rel="lightbox-0"><img class="wp-image-50361 size-full" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/VRW-Gustafson-interview-e1426743497115.jpg" alt="VRW-Gustafson-interview" width="500" height="375" /></a><p class="wp-caption-text">The VR World team interviews Dr. Gustafson</p></div>
<p><b><i>VR World:</i></b><b> You mentioned in your keynote that the implementation of Unum is challenging &#8212; in the words of one unnamed Intel executive ‘you can’t boil the ocean’. Why is this?</b></p>
<p><b>John Gustafson: </b>What he’s saying is that you can’t change the world. All you have is <a href="http://en.wikipedia.org/wiki/IEEE_floating_point">IEEE floats</a>. That’s the standard. ‘You can’t add a new number type, that’s not going to happen’ is what he said.</p>
<p><b><i>VRW</i></b><b>: How would you categorize the feedback you’ve gotten from CPU vendors about implementing unums?</b></p>
<p><b>JG: </b>People at AMD also didn’t get it. That was a kind of different opposition. They just didn’t see that I could save them so much power, electricity and bandwidth. Maybe it just looked too ambitious to them.</p>
<p>I’m not worried about what the hardware people think. I know they are going to hate it. They’ll have to build it, re-design circuits and all of that. I’m  more interested in everyone else.</p>
<p><b><i>VRW</i></b><b>: What’s the cost of keeping the existing floating point system, versus implementing Unums? What’s the cost of transitioning hardware to support this, versus the cost of errors in everyday life?</b></p>
<p><b>JG: </b>Remember: everything you can do with floats you can do with unums. Floats are a subset. It&#8217;s not a choice between one or the other; if it were, I think it would never get off the ground. But if you can do everything you can do now once you have unums, and you can also do other things, then you can incrementally work your way into them.</p>
<p>The other thing is right now we have to deal with at least two, or three, different precisions. Half precision is now out there. Nvidia has got the half precision out there in hardware as a native type, and single precision as well as double precision are everywhere. Quad precision is not supported by anyone’s hardware… I keep watching to see if it’s going to pop up.</p>
<p>But we already have to manage two, or three, different sizes.</p>
<p>I say replace it with one. And the hardware will let that slide continuously across all the different sizes. It will simplify things, so it may be cheaper and smaller on chip to do it that way than to have a bunch of single precision units and double precision units. That&#8217;s the way they do it now &#8212; they have to build separate hardware, which is very wasteful.</p>
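<p>The three sizes Gustafson is describing can be put side by side with NumPy, which exposes hardware half, single, and double precision as native types (a sketch, assuming NumPy is installed):</p>

```python
import numpy as np

# The same computation at the three precisions hardware supports today:
# half (float16), single (float32) and double (float64). Each wider type
# carries more correct digits of 1/3 -- three formats for one quantity.
for dt in (np.float16, np.float32, np.float64):
    third = dt(1) / dt(3)
    print(f"{dt.__name__}: {third!r} ({np.finfo(dt).bits} bits)")
```

A unum environment would replace these fixed tiers with a single self-describing format whose width varies per value.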
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/19/error-free-computing-unums-save-both-real-and-virtual-battles/">Error-Free Computing: Unums Save Both Real and Virtual Battles</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/19/error-free-computing-unums-save-both-real-and-virtual-battles/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Supercomputing Frontiers 2015: The 101&#215;10^2 Problem. Solution: Unums</title>
		<link>http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/</link>
		<comments>http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/#comments</comments>
		<pubDate>Tue, 17 Mar 2015 13:31:47 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Event]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Space and Science]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[floating point]]></category>
		<category><![CDATA[IEEE 754]]></category>
		<category><![CDATA[John Gustafson]]></category>
		<category><![CDATA[Universal numbers]]></category>
		<category><![CDATA[unums]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50121</guid>
		<description><![CDATA[<p>We’ve almost reached the acceptance limit for floating point rounding errors. What’s the future?  One potential solution was explained at Supercomputing Frontiers 2015.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/">Supercomputing Frontiers 2015: The 101&#215;10^2 Problem. Solution: Unums</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="638" height="479" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/unum-computing-an-energy-efficient-and-massively-parallel-approach-to-valid-numerics-12-638.jpg" class="attachment-post-thumbnail wp-post-image" alt="unum-computing-an-energy-efficient-and-massively-parallel-approach-to-valid-numerics-12-638" /></p><p>As the world&#8217;s fastest high-performance computers keep getting faster, we eventually need to think about an era beyond the processing-speed arms race, argued John Gustafson at the Supercomputing Frontiers 2015 conference in Singapore on Tuesday.</p>
<p>Gustafson said that the big challenge for the future of HPC is not necessarily faster processors, but more accurate ones. Arguing that HPC doesn&#8217;t need a faster horse so much as a plan for a &#8220;post-horse&#8221; era, Gustafson proposed moving beyond floating point arithmetic &#8212; whose rounding he described as having become sloppy &#8212; to something called the universal number, or &#8220;unum&#8221;.</p>
<p>The unum, as Gustafson first proposed in his book <a href="http://www.crcpress.com/product/isbn/9781482239867"><i>The End of Error</i>,</a> is a new way to represent numbers that is more accurate than the floating point formats of the IEEE 754 standard, which Gustafson hopes unums will ultimately replace. IEEE 754 descends from the scientific-notation scheme of a significand times a power of the base &#8212; numbers of the form 101&#215;10^2 &#8212; first introduced by Leonardo Torres y Quevedo in Madrid in 1914; a modern double-precision float packs a sign bit, an 11-bit exponent, and a 52-bit fraction into 64 bits.</p>
<p>Unums carry metadata that records exactly how precise an answer is, permitting a longer and more honest answer than the silent rounding &#8212; and the overflow and underflow that go along with it &#8212; of floating point numbers, while averaging around 29 bits in Gustafson&#8217;s examples. Unums also obey algebraic laws and are safe to parallelize. Without such guarantees, Gustafson argued, the parallelism and sheer power of modern HPC clusters reduce complex physics equations to mere &#8220;guesswork&#8221;.</p>
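<p>The key piece of that metadata is the &#8220;ubit&#8221;, which marks a result as inexact: rather than pretending a rounded value is the answer, the unum records that the true value lies in the open interval between two representable neighbors. The idea can be sketched with ordinary doubles (illustrative only; <code>math.nextafter</code> requires Python 3.9+):</p>

```python
import math
from fractions import Fraction

# No binary float can store 1/10 exactly. A unum's ubit would record that
# the true value sits strictly between two adjacent representable numbers,
# instead of silently rounding the way IEEE floats do.
tenth = Fraction(1, 10)                     # the exact value
above = Fraction(0.1)                       # the double 0.1 rounds to (slightly high)
below = Fraction(math.nextafter(0.1, 0.0))  # its next-smaller representable neighbor

print(below < tenth < above)  # True: 1/10 lies in the open interval (below, above)
```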
<p>Rounding can lead to disastrous results. Gustafson gave the example of how, during the first Gulf War, the 24-bit fixed-point clock in a Patriot missile battery miscalculated the approach of a Scud missile by 0.34 seconds &#8212; a failure that left 28 dead and roughly 100 injured. The interceptor launched late because of truncation error: the system clock counted time in tenths of a second and converted to seconds by multiplying by 1/10, whose binary expansion was chopped to fit the 24-bit register. As the system had been running for 100 hours, the chopped digits accumulated into an ever larger clock error. When dealing with missiles that travel hundreds of meters per second, such inaccuracy is unacceptable.</p>
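<p>The drift is easy to reproduce. Per the widely reported failure analysis, the register held 23 fractional bits, so chopping the binary expansion of 1/10 loses roughly 9.5&#215;10^-8 seconds on every 0.1-second tick; over 100 hours those losses accumulate to about a third of a second (a sketch of that arithmetic):</p>

```python
from fractions import Fraction

# The Patriot clock counted tenths of a second and converted to seconds by
# multiplying by a chopped binary approximation of 1/10. Per the published
# failure analysis, the 24-bit register left 23 bits for the fraction.
FRACTION_BITS = 23

exact_tenth = Fraction(1, 10)
chopped_tenth = Fraction(int(exact_tenth * 2**FRACTION_BITS), 2**FRACTION_BITS)

error_per_tick = exact_tenth - chopped_tenth  # ~9.5e-8 s lost every 0.1 s
ticks = 100 * 3600 * 10                       # ticks in 100 hours of uptime
drift = float(error_per_tick * ticks)

print(f"{drift:.2f} seconds")  # → 0.34 seconds
```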
<p>The other advantage of unums is that their shorter average size means they take less memory bandwidth to move around. For a data center, the largest single line item is the power bill, and savings at the lowest level &#8212; things like RAM accesses &#8212; add up substantially at scale. The US Department of Energy wants vendors to produce an exascale system by 2019-2020 that draws less than 20 MW, and to hit that target, power savings have to happen everywhere.</p>
<p>Gustafson said that the next step toward taking unums &#8220;mainstream&#8221; is to convert his <a href="http://www.wolfram.com/mathematica/">Mathematica</a> prototype into a C library. After that, a strictly unum-compatible FPGA will need to be created. These are the first steps on the long road to a fully unum-compatible CPU.</p>
<p>For more on unums, Gustafson&#8217;s book <a href="http://www.crcpress.com/product/isbn/9781482239867"><i>The End of Error</i></a> is worth a read.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/">Supercomputing Frontiers 2015: The 101&#215;10^2 Problem. Solution: Unums</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/17/supercomputing-frontiers-2015-the-101x102-problem/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Supercomputing Frontiers 2015 to Feature Acclaimed Researcher Jack Dongarra</title>
		<link>http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-to-feature-acclaimed-researcher-jack-dongarra/</link>
		<comments>http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-to-feature-acclaimed-researcher-jack-dongarra/#comments</comments>
		<pubDate>Mon, 16 Mar 2015 03:55:23 +0000</pubDate>
		<dc:creator><![CDATA[Brandon Shutt]]></dc:creator>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Singapore]]></category>
		<category><![CDATA[Supercomputing frontiers]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50029</guid>
		<description><![CDATA[<p>Jack Dongarra from Oak Ridge National Laboratory and the University of Tennessee will be giving one of the keynotes at Supercomputing Frontiers 2015. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-to-feature-acclaimed-researcher-jack-dongarra/">Supercomputing Frontiers 2015 to Feature Acclaimed Researcher Jack Dongarra</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="600" height="300" src="http://cdn.vrworld.com/wp-content/uploads/2015/03/b911ff34f6fe906d3fd696321cf6b2ab_f554.jpg" class="attachment-post-thumbnail wp-post-image" alt="b911ff34f6fe906d3fd696321cf6b2ab_f554" /></p><p>Supercomputing Frontiers 2015 kicks off March 17 in Singapore and computer scientist Jack Dongarra is set to deliver one of the opening keynotes for the event, titled <em>Current Trends in Parallel Numerical Computing and Challenges for the Future.</em></p>
<p>Dongarra is well known within academic and commercial high performance computing circles within the United States and around the world.</p>
<p>Dongarra did not start his academic career intending to number among the foremost supercomputer experts and innovators. Enrolling in Chicago State University in the 1960s, Dongarra majored in mathematics, which he intended to teach.</p>
<p>But Dongarra&#8217;s career objectives soon changed after he encountered a machine that took the human error and tedium out of mathematics: the digital computer. While it was still emerging as a proper tool for academia, Dongarra quickly found that 16 x 16 matrices that were laborious to solve by hand could be handled effortlessly by a machine.<br />
Dongarra became so proficient at programming computers to do math problems that he eventually changed his pursuit from mathematics to computing, and went on to earn a master&#8217;s degree in computer science at the Illinois Institute of Technology.</p>
<p>Math and computers were a harmonious mix. At Argonne National Laboratory, Dongarra worked with a group to develop a software library based on the algorithms of computer scientist James Wilkinson; the result was EISPACK, a highly influential library of matrix-solving routines. With funding from the U.S. Department of Defense, Dongarra went on to develop LINPACK, a similar library for numerical linear algebra.</p>
<p>LINPACK went on to become a de facto benchmark for computing power, and in 1993 Dongarra began compiling the TOP500 list, which remains the most influential ranking of supercomputers around the world.</p>
<p>Dongarra currently lives in Oak Ridge, Tennessee, near Oak Ridge National Laboratory, where he works as a researcher. ORNL is home to Titan, the world&#8217;s second-fastest supercomputer. Dongarra is also a Distinguished Professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, and director of UT&#8217;s Innovative Computing Laboratory.</p>
<p>In the last decade, Dongarra has continued to focus on supercomputers and, more specifically, the future of exascale computing. In 2013, Dongarra received a $1 million grant from the Department of Defense to work on the problem of scaling supercomputers past 1,000 petaflops.</p>
<p>Seeing the project as vitally important to the understanding and management of weather, climate, and other natural systems, Dongarra has worked to overcome the limitations of traditional computing that prevent systems from breaking the 1,000-petaflop barrier.</p>
<p>While exascale computers are not yet possible, that hasn’t stopped Dongarra from planning for the future, and part of his efforts include the Parallel Runtime Scheduling and Execution Controller, or PaRSEC, a project aimed at developing algorithms and solutions to manage exascale computers when they arrive.</p>
<p>The expert that experts consult, Dongarra is arguably better qualified for the task than anyone. After he received the Association for Computing Machinery (ACM)&#8211;Institute of Electrical and Electronics Engineers (IEEE) Computer Society Ken Kennedy Award in 2013, &#8220;father of the Internet&#8221; Vint Cerf said of Dongarra: &#8220;his innovations have contributed immensely to the steep growth of high-performance computing and its ability to illuminate a wide range of scientific questions facing our society.&#8221;</p>
<p>Wayne Davis, dean of the College of Engineering at the University of Tennessee, remarked: &#8220;it is hard to imagine what would have not been discovered without [Dongarra&#8217;s] work.&#8221;</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-to-feature-acclaimed-researcher-jack-dongarra/">Supercomputing Frontiers 2015 to Feature Acclaimed Researcher Jack Dongarra</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-to-feature-acclaimed-researcher-jack-dongarra/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Supercomputing Frontiers 2015 Singapore Begins March 17</title>
		<link>http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-singapore-begins-march-17/</link>
		<comments>http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-singapore-begins-march-17/#comments</comments>
		<pubDate>Mon, 16 Mar 2015 03:35:52 +0000</pubDate>
		<dc:creator><![CDATA[Sam Reynolds]]></dc:creator>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Supercomputing Frontiers 2015]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Singapore]]></category>
		<category><![CDATA[Supercomputing]]></category>
		<category><![CDATA[Supercomputing frontiers]]></category>

		<guid isPermaLink="false">http://www.vrworld.com/?p=50025</guid>
		<description><![CDATA[<p>Jack Dongarra and other HPC thought leaders will all be speaking at Supercomputing Frontiers 2015. </p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-singapore-begins-march-17/">Supercomputing Frontiers 2015 Singapore Begins March 17</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><img width="1600" height="1067" src="http://cdn.vrworld.com/wp-content/uploads/2015/01/Singapore_CBD_skyline_from_Esplanade_at_dusk-1.jpg" class="attachment-post-thumbnail wp-post-image" alt="Singapore_CBD_skyline_from_Esplanade_at_dusk (1)" /></p><p>Singapore&#8217;s Supercomputing Frontiers 2015 conference starts Tuesday in the city-state, putting the regional high-performance computing hub in the spotlight.</p>
<p>Organised by Singapore’s A*STAR Computational Resource Centre, the conference will provide a vital platform for industry and academic experts to interact and explore the latest global trends and innovations in high performance computing.</p>
<p>Singapore has been a growing regional HPC center for the last decade, and the conference coincides with the deployment of a 1-2 petaflop multi-platform machine &#8212; the first machine of this scale in Singapore, scheduled to be deployed by the end of this year.</p>
<p>The conference themes include the following:</p>
<ul>
<li>Supercomputing applications in domains of critical impact in economic and human terms, and especially those requiring computing resources approaching Exascale;</li>
<li>Big data science merging with supercomputing with associated issues of I/O, high bandwidth networking, storage, workflows and real time processing;</li>
<li>Architectural complexity of Exascale systems with special focus on supercomputing interconnects, interconnect topologies and routing, and interplay of interconnect topologies with algorithmic communication patterns for both numerically intensive computations and big data; and</li>
<li>Any other topics that push the boundaries of supercomputing to exascale and beyond.</li>
</ul>
<p>Jack Dongarra, Thomas Sterling and Satoshi Matsuoka are among the keynote speakers scheduled to present at the event.</p>
<p>The event is sponsored by Singapore’s Nanyang Technological University and the National University of Singapore.</p>
<p>Supercomputing Frontiers is expected to be the largest event of its kind organised in South East Asia, and its tightly focused main session &#8212; free of vendor marketing presence &#8212; adds to the technical and strategic value of the conference.</p>
<p>Supercomputing Frontiers begins Tuesday in Singapore.</p>
<p>The post <a rel="nofollow" href="http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-singapore-begins-march-17/">Supercomputing Frontiers 2015 Singapore Begins March 17</a> appeared first on <a rel="nofollow" href="http://www.vrworld.com">VR World</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.vrworld.com/2015/03/16/supercomputing-frontiers-2015-singapore-begins-march-17/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
