Technical Paper
High performance parallel computing for Computational Fluid Dynamics (CFD)

Atsushi Itou
Toshikazu Nakanishi
Takashi Mizuguchi
Masatake Yoshida
Tei Saburi

The supersonic aerodynamic coefficient of a sphere, representing a piece of a structure broken off in an explosion, was obtained using Computational Fluid Dynamics (CFD). The computer used in this study was a self-made cluster-type parallel machine with a maximum of 128 CPUs available. The CFD software used was CFD++, a general-purpose package that solves the three-dimensional Navier-Stokes equations, which represent the viscosity and compressibility of a fluid, by the finite volume method. Using the parallel computer, the computational time was reduced dramatically. The importance of network performance is also addressed: because Ethernet needs more time for communication between CPUs, its performance benefit was limited to small numbers of CPUs, whereas a high-speed network (Myrinet) continued to deliver gains even with large numbers of CPUs. The use of 64-bit CPUs was also effective in reducing computational time, in addition to providing the advantage of an expanded memory space for building large-scale computational models.

Key Words: CFD, parallel computer, TOP500, National Institute of Advanced Industrial Science and Technology (AIST), supercomputer, personal computer, scalability, CFD++, Linux, CPU

1. Introduction

Consideration of safety in the storage and keeping of explosives has recently become a global issue for businesses that handle them. Because of the nature of explosives, strict safety administration is required for their handling, and they are often stored together in one place. Forecasts of the impact areas of blasts and of the dispersion of flying objects in case of an accidental explosion are therefore important for the design of storage sheds and the setting of the surrounding safety environment. Various data on explosions are collected through explosion tests. In Japan, however, the amount of explosives that can be tested is limited to only several tens of kilograms by the available test grounds, and it is not possible to test cases in which the several hundred kilograms to several tens of tons of explosives actually stored are exploded. Blasts involving large amounts of explosives must therefore be forecast from the data of experiments that use small amounts.

For these forecasts, numerical simulations using computers are performed. In particular, computational fluid dynamics (CFD) is a technology that can be used to forecast the dispersion of flying objects, to study the impact of air blasts, and for other purposes. In addition to the difficulty of turning physical phenomena into numerical models, another problem with CFD is that, when the space has to be divided into meshes, the number of meshes expands to several million to several tens of millions depending on the task. The computational time on ordinarily available computers then runs to several tens to several hundreds of hours, which is impractical in many cases.

In this study, through joint research with the Research Center for Explosion Safety of the National Institute of Advanced Industrial Science and Technology, CFD analysis was conducted on a parallel computer connecting plural CPUs, making such simulations feasible through a drastic reduction in computation time. The increase in processing speed that was achieved is reported.

2. Background

The need for high-performance computers in numerical simulations that require mesh and time division, as in CFD, is not new; at one time, the development competition between Japan and the United States over supercomputers even grew into an economic dispute. In the era when the goal was the highest calculation speed of a single processor, American supercomputer manufacturers including Cray Research led the field, and Fujitsu and NEC of Japan caught up with them. Large host computer manufacturers competed to develop new CPUs, and machines costing several hundred million yen were installed at university and state research organizations, large automobile manufacturers and other purchasers. These machines were housed in rooms with dedicated air conditioning, becoming status symbols of their owners' technical level.

Information service companies, on the other hand, bought supercomputers and rented out CPU time by the hour. Komatsu used a supercomputer in this style for a time.

Since then, the performance of the CPUs made by so-called UNIX workstation (EWS) manufacturers such as DEC and SGI improved, and many users appear to have switched from large-scale simulations on a supercomputer to simulations at a level that better suits their development lead time on available EWSs. Excluding some research organizations and large automobile manufacturers, the supercomputer had clearly become uncompetitive in its costs of installation, maintenance and management.

The development competition for supercomputers waned thereafter, while the performance of the personal computer CPUs manufactured by Intel and AMD improved significantly. The applications of these CPUs expanded from word processing and spreadsheets for business use to image processing and Internet use, and by finding an outlet in ordinary homes as a market they came to be produced in high volume, which in turn drastically reduced the cost per CPU. At present, the basic performance of personal computer CPUs, memories and hard disks has reached a level suitable for simulations, including CFD; in fact, Windows personal computers have recently been replacing UNIX EWSs. Figure 1 traces the development history of the high-performance computers called supercomputers. A single Intel CPU now outperforms a supercomputer made 15 years ago.

In the area of software, operating systems (OSs) were unified to UNIX on EWSs, displacing the proprietary OSs of the host computer manufacturers. At present, Linux can build an environment almost equal to that of UNIX on personal computers as well. Linux is an open system that is improved daily, paving the way for state-of-the-art technology in parallel computation, too.

The approach of parallel computation with plural CPUs, which in the past could be carried out only on certain supercomputers, has thus become feasible thanks to high-performance, low-priced personal computer hardware and to software environments such as Linux. The basic approach is simple: several tens to several hundreds of the personal computers sold on the market are connected by a network. From this, the concept of the "self-made parallel machine" has emerged: the assembly of an inexpensive, high-performance supercomputer from personal computer components sold on the market and Linux, an OS under an open-source license. Although some special computer and network knowledge is needed, a layperson who wants to work with such a computer can handle it.

In this study also, a self-made computer developed for CFD simulations by the Research Center for Explosion Safety of the National Institute of Advanced Industrial Science and Technology was used.

[Fig. 1 plots peak performance (GFLOPS) against year (1975-2010) for representative machines, from the CRAY-1 and Fujitsu FACOM APU of the late 1970s, through the Fujitsu VP and NEC SX series, the Fujitsu NWT, Thinking Machines CM-5, Intel ASCI Red and IBM ASCI White, to the NEC Earth Simulator, NEC SX-8 and IBM BlueGene. AIST BAKOO (128 CPUs) and a single Xeon (2.8 GHz) are plotted for comparison, and the start of the TOP500 service is marked.]

Fig. 1 Development history of high-speed computers

3. Parallel Computer in this Study

3.1 Parallel computer of the Research Center for Explosion Safety (BAKOO)

The parallel computer of the Research Center for Explosion Safety of the National Institute of Advanced Industrial Science and Technology (BAKOO) is shown in Fig. 2. It has 128 CPUs in 64 nodes, each node packaging two Intel Xeon processors on one motherboard. The total memory capacity is 192 Gbytes. The network consists of Myrinet cards, a high-speed communication device, and a Myrinet switch.

The BAKOO configuration is summarized in Table 1. Although BAKOO is a completely self-made computer intended for CFD use, it ranked 351st in the world and 27th in Japan in the 2003 TOP500 ranking (a ranking of the top 500 supercomputers in the world by the Linpack benchmark, based on solving a system of linear equations by LU decomposition). (See Fig. 1; the Earth Simulator of NEC [SX-6 based, 5120 CPUs, 35 Tflops] ranked first in 2003.)

[Fig. 2 shows a full view of the BAKOO system and the Myrinet hub.]

Fig. 2 BAKOO System

Table 1  BAKOO configuration
  CPU                Intel Xeon 2.8 GHz
  Number of CPUs     128 (64 nodes)
  Memory             192 Gbytes
  Hard disk          30 Gbytes for each node
                     3.4 Tbytes for data storage (RAID5)
  Network            Myrinet (LANai9.2 and PCI-X interconnectivity)
  Operating system   Linux

3.2 64-bit prototype computer (KHPC)

The CPUs of personal computers are catching up with UNIX EWSs in adopting the 64-bit architecture. 64-bit CPUs remove the memory limit of 2 Gbytes per process, dramatically increasing the amount of data that can be handled, and their wider bus enhances the data transfer speed.

A parallel computer using 64-bit processors (AMD Opteron 2.2 GHz) was built as a study of a next-generation self-made parallel computer (KHPC). The exterior and interior of KHPC are shown in Fig. 3. KHPC is housed in server casings installed in a 19" rack, and the parts inside were purchased from electronics parts stores. The configuration of KHPC is specified in Table 2: eight nodes, 16 CPUs and a memory capacity of 32 Gbytes.

Fig. 3 Appearance and internal views of KHPC

Table 2  KHPC configuration
  CPU                AMD Opteron 2.2 GHz
  Number of CPUs     16 (8 nodes)
  Memory             32 Gbytes
  Hard disk          75 Gbytes for each node
                     1.5 Tbytes for data storage (RAID5)
  Network            Gigabit Ethernet
  Operating system   Linux

4. CFD Software

The CFD code used in this study is "CFD++" (CFD plus plus), general-purpose CFD software developed and sold by Metacomp of the United States, targeting mainly the aviation and space industries.

The three-dimensional Navier-Stokes equations, which represent the viscosity and compressibility of a fluid, are discretized with a TVD scheme based on the finite volume method. 1)-7) Several turbulence models, including the k-ε model, can be selected, so CFD++ can be used over wide ranges of Mach and Reynolds numbers, from subsonic to supersonic speeds. The package used in this study has also been applied to aerodynamic calculations of high-velocity flying objects and to simulations of explosion phenomena. Parallel processing is possible with both the Intel 32-bit and the AMD 64-bit versions.

Coupled calculation with rigid body motion in six degrees of freedom is embedded as a module; this enables the flying distance to be calculated by CFD while the equations of motion of the flying object are solved, taking into account the variation of air drag with altitude. That work will be reported separately.
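For reference, the conservation form of these governing equations, on which any finite volume discretization operates, can be written as follows (a standard textbook formulation, not taken from the CFD++ documentation):

\[
\frac{\partial}{\partial t}\int_{V}\mathbf{U}\,\mathrm{d}V
+ \oint_{\partial V}\bigl(\mathbf{F}_c(\mathbf{U}) - \mathbf{F}_v(\mathbf{U},\nabla\mathbf{U})\bigr)\cdot\mathbf{n}\,\mathrm{d}S = 0,
\qquad
\mathbf{U} = \bigl(\rho,\ \rho u,\ \rho v,\ \rho w,\ \rho E\bigr)^{\mathsf{T}}
\]

where ρ is the density, (u, v, w) the velocity components, E the total energy per unit mass, F_c the convective (inviscid) fluxes and F_v the viscous fluxes. The finite volume method applies this balance to every mesh cell, and the TVD (total variation diminishing) property of the scheme 4)-6) limits the reconstructed interface values so that no spurious oscillations appear at shock waves.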

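As a simplified illustration of such coupling, the sketch below integrates a point-mass trajectory while the drag is re-evaluated from the instantaneous Mach number. It is only a sketch under assumed constants: the Cd lookup and the fragment mass are hypothetical placeholders, the air density is held constant, and the actual module solves the full six-degree-of-freedom rigid body motion.

```python
import math

RHO = 1.225        # air density [kg/m^3]; constant here, altitude-dependent in the real module
A_SOUND = 340.3    # speed of sound at 288.15 K [m/s]
D = 0.014          # sphere diameter [m], as in Section 5.1
AREA = math.pi * (D / 2.0) ** 2
MASS = 0.011       # hypothetical fragment mass [kg] (a steel sphere of this size)
G = 9.81           # gravitational acceleration [m/s^2]

def cd_of_mach(mach):
    # Hypothetical placeholder; a table like Fig. 7 would be interpolated here.
    return 1.0 if mach > 1.4 else 0.5

def fly(v0, angle_deg, dt=1e-3):
    """Integrate a 2-D point-mass trajectory; returns the range [m]."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        drag = 0.5 * RHO * v * v * cd_of_mach(v / A_SOUND) * AREA  # drag force [N]
        vx += -drag * vx / (v * MASS) * dt
        vy += (-G - drag * vy / (v * MASS)) * dt
        x += vx * dt
        y += vy * dt
    return x

print(fly(v0=1000.0, angle_deg=20.0))  # range of a hypothetical Mach ~3 launch
```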
5. Themes of Analysis

This study focuses on two problems in explosion safety. The first is how far flying objects produced by an explosion would fly; the second is what impact the air blast from an explosion would have. The performance of the parallel computer obtained in computing the aerodynamic characteristics of the flying objects by CFD is described below.
5.1 Analysis model

A sphere of 14 mm in diameter was used in the calculations as the shape of the flying object. Calculations were performed for each Mach number under the condition of flight just above the ground (pressure 101325 Pa, temperature 288.15 K, air density 1.225 kg/m3).

Figure 4 shows the entire calculation domain and its external boundaries. The calculation model, consisting of the external boundaries and the flying object (sphere), was built with 3D CAD (Pro/Engineer). An IGES file was imported into the mesh-generation software ICEM/CFD, and the meshes were created. Tetrahedrons (tetra meshes) were used as the spatial meshes.

The meshes near the flying object are illustrated in Fig. 5. The meshes are made finer near the object and in the trailing area, and coarser toward the outer boundary. Pentahedrons (prism meshes) are created on the surface of the flying object to resolve the boundary layer.

Two meshes were created to examine how calculation performance depends on model scale: one with about 1.23 million elements and the other with about 6 million elements. The surface meshes of the 1.23M-element model are shown in Fig. 6. The calculation model is a half model, and symmetry conditions were set on the symmetry plane. The k-ε model was used as the turbulence model.

Fig. 4 Computational space and boundary conditions

Fig. 5 Mesh of symmetrical surface

Fig. 6 Surface of 1.23M-mesh model

5.2 Calculation results

The calculated air drag coefficient (Cd) versus Mach number is plotted in Fig. 7. The CFD calculations solve for the three-dimensional velocity components, pressure, temperature and turbulence quantities (k, ε). The force applied to the model is obtained as an integral of the surface pressure in the post-processing of CFD++, and the air drag coefficient (Cd), with the diameter as the representative length, was calculated from this force. For reference, Cd obtained by experiment is also shown. 8) The diagram shows that the calculations reproduce the experimental values fairly accurately.

The spatial derivative of density in the flow direction calculated by CFD is shown in Fig. 8. CFD enables the visualization of various physical quantities, and as shown in the diagram, the spatial derivative in the flow direction can also be obtained; it corresponds to a schlieren image in a wind tunnel test. The image clearly shows the detached shock wave in front of the object and the shock wave at the tail.

Fig. 7 Air drag coefficient (Cd) versus Mach number (0 to 3.5), comparing CFD results with experimental values

Fig. 8 Output corresponding to schlieren image
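The conversion from the integrated force to Cd takes only a few lines. The sketch below is an illustration using the freestream conditions of Section 5.1, not CFD++'s actual post-processor; the gas constants and the example force value are assumptions.

```python
import math

# Freestream conditions from Section 5.1 (sea-level standard atmosphere)
P_INF = 101325.0   # static pressure [Pa]
T_INF = 288.15     # temperature [K]
RHO_INF = 1.225    # air density [kg/m^3]
GAMMA = 1.4        # ratio of specific heats for air (assumed)
R_AIR = 287.05     # specific gas constant for air [J/(kg K)] (assumed)
D = 0.014          # sphere diameter [m]

def drag_coefficient(force_x, mach):
    """Convert an integrated axial force [N] into Cd.

    The frontal area based on the sphere diameter is the reference,
    the diameter being the representative length as in the paper.
    For the half model, the integrated force must be doubled first.
    """
    a = math.sqrt(GAMMA * R_AIR * T_INF)   # speed of sound, ~340 m/s
    v = mach * a                           # freestream velocity [m/s]
    q = 0.5 * RHO_INF * v * v              # dynamic pressure [Pa]
    area = math.pi * (D / 2.0) ** 2        # frontal area [m^2]
    return force_x / (q * area)

# Hypothetical example: a 98 N integrated drag force at Mach 3.0
print(drag_coefficient(98.0, 3.0))         # ~1.0, the order seen in Fig. 7
```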

6. Higher Performance by Parallel Computation

6.1 Calculations by BAKOO of the National Institute of Advanced Industrial Science and Technology

Computation on BAKOO, the 32-bit machine, is described first. BAKOO enables calculations with up to 128 CPUs, and the performance of the two networks (Ethernet and Myrinet) can be compared.

Calculations of the half model of the sphere described earlier were made at Mach 3.0, with two CPUs as the minimum configuration. The number of iteration cycles needed for convergence was fixed at 600.

An example of spatial division among four CPUs is shown in Fig. 9. Parallel computation requires the space to be divided in accordance with the number of CPUs. CFD++ comes with a domain division function, and using it the space was divided into from 2 to 128 domains. Each divided domain is allocated to one CPU, and the calculations are executed while the CPUs mutually communicate the information at the domain boundaries.

Fig. 9 An example of spatial division with four CPUs (symmetrical face)

The computation time needed for the 1.23-Mega-element model is shown in Fig. 10. The time with two CPUs was 10.9 hours. With four CPUs it was 6.1 hours on Ethernet and 5.1 hours, or about half, on Myrinet. With 16 CPUs, however, the time was 3.7 hours on Ethernet and 1.6 hours on Myrinet, widening the gap. With 32 CPUs, Ethernet took 4.5 hours, longer than with half as many CPUs, whereas Myrinet finished in 1.0 hour, still shortening the computation. Increasing the number of CPUs increases the number of domain divisions and thereby reduces the calculation volume per CPU; on the other hand, the overhead of data communication between nodes (time spent on something other than computation) then sets the pace, and the CPUs sit idle when the communication latency is large. Networks with large latency (100 µs or more), such as Ethernet, nullify the effect of adding CPUs, which shows that a high-speed network device (around 10 µs or less) is needed in this case.

Even with Myrinet, the computation time is not reduced simply in proportion to the number of CPUs; in other words, the computation time does not necessarily halve when the number of CPUs is doubled. The relationship between the increase in the number of CPUs and the reduction in calculation time is called "scalability": scalability holds if the computation speed doubles (computation time 1/2) when the number of CPUs doubles. Normally, however, the overhead of communication and other factors prevents scalability commensurate with the increase in the number of CPUs.

Figure 11 plots the inverse of the ratio of each computation time to the time with two CPUs (a configuration that uses no network). The dotted line in the diagram shows computation time in inverse proportion to the number of CPUs, i.e., the ideal behavior of the parallel computer; the diagram thus shows how the computation times of Fig. 10 translate into speedup versus the number of CPUs. Even with Myrinet, the effect of adding CPUs to the 1.23-Mega-element model wanes beyond about 32 CPUs. The computation speed of the 6-Mega-element model, by contrast, increases roughly in proportion to the number of CPUs up to about 32 CPUs: with more elements in the domain allocated to each CPU, the proportion of communication time to computation time falls, and the benefit of additional CPUs is maintained. Even at 6 Mega elements, though, the computation speed with 32 CPUs, theoretically 32 times, is actually only about 18 times, a drastically lower efficiency.

This shows that, in terms of cost versus performance, a rational number of CPUs is effectively predetermined by the model scale (number of elements) of the analysis target when CFD++ is used, even when a high-speed communication device (Myrinet in this case) is installed.

Fig. 10 Computation time of 1.23Mega elements

Fig. 11 Computational speed dependent on the number of CPUs (magnification)
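The reported timings can be turned into speedup factors directly. The short check below normalizes to the two-CPU run, as Fig. 11 does, and reproduces the saturation behavior described above:

```python
# Wall-clock hours for the 1.23M-element model, from the text (Fig. 10).
times_ethernet = {2: 10.9, 4: 6.1, 16: 3.7, 32: 4.5}
times_myrinet  = {2: 10.9, 4: 5.1, 16: 1.6, 32: 1.0}

def speedup(times, base=2):
    """Speedup relative to the base CPU count (ideal value: n / base)."""
    t0 = times[base]
    return {n: round(t0 / t, 2) for n, t in times.items()}

print(speedup(times_ethernet))  # {2: 1.0, 4: 1.79, 16: 2.95, 32: 2.42}
print(speedup(times_myrinet))   # {2: 1.0, 4: 2.14, 16: 6.81, 32: 10.9}
# Ethernet is already slower at 32 CPUs than at 16, while Myrinet reaches
# ~10.9x against an ideal 16x relative to the 2-CPU base: the saturation
# discussed above.
```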

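The boundary communication that makes latency matter can also be sketched concretely. The following is a generic one-dimensional halo exchange with MPI (using mpi4py); it only illustrates the pattern, and is not CFD++'s internal communication scheme.

```python
# Run with, e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                          # cells owned by this rank
u = np.full(n_local + 2, float(rank))   # one ghost cell at each end

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(600):                    # 600 iterations, as in this study
    # Every rank blocks here on every iteration, so the network latency
    # is paid each time; this is the overhead that erodes scalability.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # ... update the interior cells u[1:-1] here ...
```

More CPUs mean smaller domains per CPU, so the fixed per-exchange latency grows relative to the shrinking compute step, which is exactly the behavior seen in Figs. 10 and 11.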
6.2 Calculations by prototype machine KHPC

A comparison of the computation speeds of KHPC, the prototype 64-bit machine, and BAKOO is shown in Fig. 12. With the same number of elements (1.23 Mega), the same number of CPUs and Ethernet, the computation speed of the prototype machine is higher: 1.5 times with two CPUs and about 1.4 times with four CPUs. BAKOO has an about 27% faster CPU clock, yet KHPC has the higher calculation performance; in these calculations the 64-bit bus width appears to contribute not only to memory space but also to calculation speed. With 16 CPUs, however, BAKOO on Myrinet is faster. This means that without a high-speed communication device, the computation speed of KHPC with 32 CPUs would fall below that with 16 CPUs. For reference, the result for a single CPU is also given: the computation time with one CPU was 15.5 hours, about twice that with two CPUs.

Fig. 12 Computational speeds of BAKOO and KHPC

7. Conclusion

The following conclusions were obtained through this study.
1) A self-made parallel computer assembled from components for general-purpose personal computers reduces the computation time of commercially available CFD software by a factor of more than 10.
2) The expanded address space of 64-bit CPUs can be exploited by CFD, offering the advantages of handling large-scale models and a high calculation speed.
3) Beyond a certain number, adding CPUs conversely lowers the computation speed. A high-speed communication device to replace Ethernet is mandatory and is a key technology for parallel computation.
4) Parallel computers with several thousand CPUs are a hot topic as "the world's fastest machines." However, the limits of scalability emerge early (with fewer CPUs) in combinations such as the general-purpose software and analysis model used in this study. A system with a good cost-performance ratio suited to the application should be used.

8. Concluding Remarks

The prototyped parallel computer is an ordinary personal computer except that the power source and other parts requiring reliability were replaced by server-grade hardware, a RAID5 1.5-Tbyte storage and a rack mount. Partly because no software (OS) or monitor is bundled, the price per node (2 CPUs) is about half that of a 2-CPU Windows machine used in-house for analysis. On the other hand, the machine may pose maintenance and servicing problems because it is self-made, and even though the hardware cost is dramatically low, the software license cost weighs relatively heavier for parallel use. Several other issues remain. Nevertheless, the fact that computation can be performed in several tens of hours instead of several hundred will drastically change thinking about simulations and their targets. This technology deserves close attention from machinery manufacturers such as Komatsu.

This study will be continued for both the hardware and the CFD analysis techniques. In addition to analysis by CFD, the effectiveness of parallel computation in FEM and other analyses will be verified.

References:
1) A.A. Amsden and F.H. Harlow, "The SMAC Method: A Numerical Technique for Calculating Incompressible Fluid Flows", Los Alamos Scientific Laboratory Report, LA-4370.
2) S.V. Patankar, "Numerical Heat Transfer and Fluid Flow", Hemisphere Publishing, New York (1980).
3) S.V. Patankar and D.B. Spalding, "A Calculation Procedure for Heat, Mass and Momentum Transfer in Three-Dimensional Parabolic Flows", Int. J. Heat Mass Transfer, 15 (1972) 1787.
4) A. Harten, "High Resolution Schemes for Hyperbolic Conservation Laws", J. Comp. Phys., 49 (1983) 357-393.
5) A. Harten, "On a Class of High Resolution Total-Variation-Stable Finite-Difference Schemes", SIAM J. Numer. Anal., 21 (1984) 1-23.
6) S.R. Chakravarthy and S. Osher, "A New Class of High Accuracy TVD Schemes for Hyperbolic Conservation Laws", AIAA Paper 85-0363 (1985).
7) Robert L. McCoy, "Modern Exterior Ballistics", Schiffer Publishing Ltd.

○ Myrinet: http://www.myricom.com
○ TOP500: http://www.top500.org
○ Metacomp: http://www.metacomptech.com
○ LINPACK (as HPL, the High-Performance LINPACK Benchmark): http://www.netlib.org/benchmark/hpl/

Introduction of the writers

Atsushi Itou
Entered Komatsu in 2002. Currently belongs to the Research and Development Center, Defense System Division.

Toshikazu Nakanishi
Doctor of Engineering. Entered Komatsu in 1986. Currently belongs to the Research and Development Center, Defense System Division.

Takashi Mizuguchi
Entered Meitec Corp. in 1998. Currently belongs to the Research and Development Center, Defense System Division.

Masatake Yoshida
Doctor of Engineering. Currently belongs to the Research Center for Explosion Safety, National Institute of Advanced Industrial Science and Technology, as deputy director.

Tei Saburi
Doctor of Engineering. Currently belongs to the Research Center for Explosion Safety, National Institute of Advanced Industrial Science and Technology, as research fellow.

[A few words from the writer]
It is amazing that a CFD computation that requires about a dozen hours can be completed in one hour. What is more amazing is that this can be accomplished by adding some functions to a personal computer that anyone can buy. The advances in CPU and network technologies let us enjoy the Internet at home over broadband; applying these same technologies, simulation by supercomputing, once something to gaze at in a shop window, is becoming readily available. We hear that parallel computers that are very low in price but very high in performance, built from the CPUs used in the PlayStation (a video game machine), are also being sold. We are living in an interesting age. Yet no matter how fast simulations become, they are meaningless unless design concepts and ideas are supplied. We need to produce better products quickly and at low cost by using these tools skillfully.
