Supercomputing for All at a Research University: Is There Really a Debate?


[Image: What could open supercomputing deliver to your research team or organization? Credit: Indiana University]



Here’s a story all tech geeks can relate to. You’re at a party, and someone asks the standard small-talk question, “So, what do you do?” By the time I get to the words “supercomputing” or “high performance computing” (HPC) in describing my job, a wave of misunderstanding often engulfs my questioner. The situation either progresses into a thoughtful exchange or, more often, ends up with someone asking me to diagnose their Windows PC problems.


These kinds of conversations don’t just happen in social situations; they happen at the CIO and presidential levels of large research universities around the country. After 15 years at a university that invests in supercomputing, I’m still puzzled by how little is known about HPC resources, and by the debate elsewhere over funding the supercomputing infrastructure that supports the research and education mission.


My time at a research institution has let me see many systems come and go and observe the effects of HPC’s own space race, the Top500 list. If you separate the hype from sustained investment, you can see that the organizations that have been consistent in this area have persevered, or even grown, in the midst of a global financial crisis.


As hype turns to reality in cloud computing and big data analytics, supercomputing approaches can provide insights into architecting next-generation environments. What was once a niche topic, parallel and distributed file systems, has become interesting to a much broader audience. The commercial sector has already begun to leverage supercomputing as a source of competitive advantage.


While the race to No. 1 on the list is exciting, it’s more rewarding for me to see supercomputing principles spread to more organizations, to watch kids get excited about robotics and programming, and to think that we can make these environments more accessible to benefit everyone.


At Indiana University (IU), we’re doing just that. Our new high throughput system, Karst, is open to anyone at the university: faculty members, undergraduate and graduate students, and departments. The new system replaces Quarry, IU’s soon-to-be-retired Linux cluster computing environment for research and research instruction. During its seven-year run, researchers using Quarry have secured $365,419,648 in grant awards. Karst is expected to be just as important to the IU community, paving the way to discoveries in fields like physics, polar research, and drug discovery.
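

For readers wondering what “high throughput” means in practice, here is a minimal Python sketch of the workload pattern such systems are built for: lots of independent tasks that never need to talk to each other. It runs on a laptop’s local cores; on a cluster like Karst, each task would instead be handed to the batch scheduler. The simulate function and its parameter sweep are hypothetical placeholders, not actual Karst software.

    # A minimal sketch of a high-throughput workload: many independent
    # tasks with no communication between them. Here they run on local
    # cores; on a shared cluster, each would go to the job scheduler.
    from multiprocessing import Pool

    def simulate(params):
        # Stand-in for one independent job, e.g. one docking run in a
        # drug-discovery screen or one data file from a polar survey.
        temperature, seed = params
        return temperature * seed  # placeholder result

    if __name__ == "__main__":
        # A parameter sweep: hundreds of runs that can all proceed at
        # once, which is exactly the load a high-throughput system absorbs.
        sweep = [(t, s) for t in range(250, 350, 10) for s in range(20)]
        with Pool() as pool:
            results = pool.map(simulate, sweep)
        print(f"completed {len(results)} independent tasks")

The point isn’t the code; it’s that once the tasks are independent, scaling from a laptop to thousands of cluster cores is mostly a scheduling problem, and solving that problem for the whole campus is what a system like Karst is for.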


If working with technology is key to future growth of the US economy — and I think we can all agree on that — then we need to support research universities that invest in these machines. How can students be equipped for the real world if they can’t learn using state-of-the-art tools? How can new faculty begin fundamental research that requires computing if their institution doesn’t support them? How can we transport the next generation of 3D, high-resolution cat videos without building upon technology from the past?


I still hope for a day where Cray doesn’t mean crazy, IBM BlueGenes don’t inspire thoughts of casual Friday attire, and I’m no longer asked how to remove malware. In the meantime, whether it’s modeling potential hurricane paths, designing next-generation engines, or reducing the time to market for new drugs, our future depends on high performance computers.


Are you still debating the value of supercomputing?


David Hancock is manager of the High Performance Systems team at the Indiana University Pervasive Technology Institute’s Research Technologies division.


