Final Paper

John Xu
12/10/07

Research at the Cornell Theory Center

Started in 1984, the Cornell Theory Center is one of the five main supercomputing centers funded by the National Science Foundation. During its early days, the Theory Center relied primarily on IBM to fulfill its supercomputing needs. Although Cray was the better-known manufacturer of supercomputers during the 1980s, Super Computer Network reported that "IBM appears to be moving into competition with traditional supercomputer vendors via their 3090 computer with its Vector Facility." Furthermore, IBM had developed even more advanced computing technology with "their new parallel FORTRAN compiler" (CIT Manuscripts Box 26 "Super Computer Network", March 1989).

The concept behind IBM's parallel FORTRAN is that vendors want to ensure that customers' programs take full advantage of the supercomputer's processing speed. Many programs could be improved if they were rewritten to run in parallel, but rewriting by hand is a time-consuming and expensive process for most businesses. With IBM's parallel FORTRAN, major portions of a program can be rewritten automatically. The prototype work for this compiler was done at Cornell and targeted the IBM 3090 machines; fittingly, the 3090s were exactly what Cornell used during the 1980s.
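No listing from IBM's parallel FORTRAN survives in this preview, so the following is only a minimal sketch of the underlying idea, written in C with OpenMP as a modern stand-in (the file name, array sizes, and the use of OpenMP are assumptions, not IBM's actual technology): a loop whose iterations are independent can be split across processors without restructuring the rest of the program.

    /* Hypothetical sketch (not IBM's parallel FORTRAN): an independent
     * loop split across processors with OpenMP in C.
     * Compile with: gcc -fopenmp parallel_loop.c -o parallel_loop */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double x[N], y[N];
        const double a = 2.0;

        for (int i = 0; i < N; i++) {   /* set up some data */
            x[i] = 1.0;
            y[i] = 2.0;
        }

        /* Each iteration touches only its own element i, so iterations
         * can run on different processors. An auto-parallelizing
         * compiler discovers this independence itself; here the pragma
         * states it explicitly. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %.1f, using up to %d threads\n",
               y[0], omp_get_max_threads());
        return 0;
    }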
With its arrival in 1985, the IBM 3090 Model 200 went into use at Cornell in 1986, costing 4.5 million dollars for two central processors. Within the same year, Cornell replaced the Model 200 with a four-processor model, costing an additional 4 million dollars. A year later, the Model 600E, IBM's most powerful mainframe computer to date, was purchased for 10 million dollars, and a second machine was added later that year for a total of 12 processors.

Interestingly, in an interview, Dr. David Caughey, the Acting Director of the Cornell Theory Center, described IBM as being able to outshine some of the better-known computer manufacturers. Caughey says, "You brought up the issue earlier whether anyone can do supercomputing on an IBM system. John Dawson from UCLA came to mind. He migrated to us from a Cray site because they didn't have enough memory" (CIT Manuscripts Box 26 "Super Computer Network", March 1989).

Based on the idea of parallel processing, computers were able to accomplish tasks, particularly those involving intense mathematical calculation, at a much faster rate than previously possible. With multiple portions of a program executing at the same time, each processor handles a chunk of the data that requires analysis. The two main types of parallel processing used are data parallelism and functional parallelism. The former involves cases where large amounts of data need to be analyzed. An example is the collection of census data.
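As an illustration of data parallelism (a minimal sketch in the same C/OpenMP style as above; the census records, the age field, and the array size are all hypothetical), each processor applies the same operation, here a sum, to its own chunk of one large data set, and the partial results are combined at the end:

    /* Hypothetical sketch of data parallelism: one large data set is
     * divided into chunks, each processor sums its own chunk, and the
     * partial sums are combined.
     * Compile with: gcc -fopenmp census_sum.c -o census_sum */
    #include <stdio.h>
    #include <omp.h>

    #define NRECORDS 10000000   /* stand-in for a census-sized data set */

    static int age[NRECORDS];   /* one invented field per record */

    int main(void) {
        for (long i = 0; i < NRECORDS; i++)
            age[i] = (int)(i % 100);

        long total = 0;
        /* reduction(+:total) gives every thread a private partial sum
         * over its chunk of the array and adds the partial sums at the
         * end, so no two processors contend for the same variable. */
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < NRECORDS; i++)
            total += age[i];

        printf("average age over %d records: %.2f\n",
               NRECORDS, (double)total / NRECORDS);
        return 0;
    }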