First off, your system will need an MPI installed before it can run an MPP executable of LS-DYNA. We don't provide guidance on installation and testing of the MPI. Secondly, go to www.lstc.com/download to download the MPP executable for your system. Refer to the "LS-DYNA MPP User Guide", which is an appendix in the LS-DYNA User's Manual.

Also, please refer to the notes in the following text files:

http://ftp.lstc.com/anonymous/outgoing/support/FAQ/mpp.getting_started (the file you're reading now)
http://ftp.lstc.com/anonymous/outgoing/support/FAQ/mpp_faq
http://ftp.lstc.com/anonymous/outgoing/support/FAQ/mpp_bind_to_core

jd Ticket#2017122010000176
_____________________________________________________________________________________

There are two forms of parallel processing supported by LS-DYNA: SMP and MPP. The filename of the LS-DYNA executable clearly indicates whether it's for SMP or MPP. For a large model run on more than 6-8 cores, MPP is recommended. MPP can also be used on small models, with the number of cores specified accordingly (even 1 core is acceptable). In general, contact algorithms are more advanced/evolved in MPP.

MPP does require that an MPI, or Message Passing Interface (e.g., Platform MPI, Open MPI, Intel MPI), be installed on your system. LSTC does not assist in MPI installation.

Beyond the appendix "LS-DYNA MPP User Guide" in the User's Manual, here are some other sources of information for running MPP LS-DYNA, followed by some FAQ.

https://www.lstc.com/download/ls-dyna (linux systems)
https://www.lstc.com/download/ls-dyna_%28win%29 (windows systems)
http://ftp.lstc.com/user/mpp-dyna/doc
http://ftp.lstc.com/anonymous/outgoing/support/PRESENTATIONS (see all files containing *mpp*, i.e.
   mpp-presentation-mrj-2009.pdf
   mpp_201305.pdf
   mpp_Part1.pdf.gz
   mpp_Part2.pdf)

Also:

http://ftp.lstc.com/anonymous/outgoing/support/OTRS/MPP_class_handout.pdf

FAQ

Q1: The physical location of the MPI to be executed: does it need to be installed on the compute node's local drive, or can it be installed on the management node and executed from a local directory on the compute node?

A1: The MPI software and its dynamic libraries need to be installed on the local machine, and the LS-DYNA executable needs to be able to reference them. Note that most MPIs do not set core affinity by default; please run "mpirun --help" to find the options for binding each process to a fixed core.

Q2: The location of the LS-DYNA binary to be executed: does it need to be installed on the compute node's local drive, or can it be installed on the management node and executed from a local directory on the compute node?

A2: The recommendation is to install it on the node's local drive. MPI is able to copy the executable to a remote compute node and start execution there, and we used to do this for diskless clusters, but that approach has fallen by the wayside.

Q3: Which directory should the job be run from: local to the compute node, or on the management node?

A3: Either way. If the cluster has a separate NFS network for the file system and InfiniBand for MPI, the system is good enough to run on an NFS disk. But for the best performance, we recommend using the local disk.

Q4: How do I run an MPP job across the nodes of a cluster?

A4: (a) Create a file with the command-line options. For one node, the command is straightforward:

   mpirun -np $NCPU ./mppdyna i=input.key jobid=my_job_name memory=$memory memory2=$memory2

For multiple nodes, you put these command lines into a file; let's call this file "appfile". Say you have 4 nodes to use, named host1, host2, host3, and host4. Each node has 12 cores, and you want to use all 12 cores.
Your appfile will contain the following lines:

   -h host1 -np 12 /path/to/solver/mppdyna i=input_file.key jobid=my_job_name memory=$memory memory2=$memory2
   -h host2 -np 12 /path/to/solver/mppdyna
   -h host3 -np 12 /path/to/solver/mppdyna
   -h host4 -np 12 /path/to/solver/mppdyna

As you may have noticed, the LS-DYNA binary should be hosted on a network drive to which all the hosts have access.

(b) Once the appfile is ready, simply issue the following command from the head node (host1):

   /opt/platform_mpi/bin/mpirun -prot -ibv -f ./appfile

This is a quick explanation; users can and will customize their scripts, add environment variables, and so on. These details are system-dependent, and you can figure out any additional options (if needed) beyond these general command-line options.

If you have followup questions, provide detailed information on the cluster hardware, interconnect, number of cores/nodes, run script, version of LS-DYNA you're using, etc.

jd Ticket#2017050810000079
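If the host list changes from job to job, the appfile above can be generated by a small script rather than edited by hand. The following is a minimal sketch, not an official LSTC tool: the host names, core count, solver path, and memory sizes are placeholder examples to be replaced with values for your own cluster and model, and it assumes the Platform MPI appfile syntax shown above (only the first entry carries the LS-DYNA command-line options).

```shell
#!/bin/sh
# Sketch of an appfile generator for a multi-node MPP LS-DYNA run.
# All host names, paths, and memory values below are examples only.
make_appfile() {
    # $1 = output file, $2 = cores per node, $3 = solver path,
    # $4 = LS-DYNA options (first rank only), remaining args = host names
    out=$1; np=$2; solver=$3; opts=$4; shift 4
    : > "$out"              # create/truncate the appfile
    first=1
    for h in "$@"; do
        if [ "$first" -eq 1 ]; then
            # only the first entry carries the LS-DYNA options
            printf '%s\n' "-h $h -np $np $solver $opts" >> "$out"
            first=0
        else
            printf '%s\n' "-h $h -np $np $solver" >> "$out"
        fi
    done
}

# Example: 4 nodes, 12 cores each (memory sizes here are made up)
make_appfile appfile 12 /path/to/solver/mppdyna \
    "i=input_file.key jobid=my_job_name memory=800m memory2=100m" \
    host1 host2 host3 host4

# The job would then be launched from the head node as shown above, e.g.:
#   /opt/platform_mpi/bin/mpirun -prot -ibv -f ./appfile
```

Running the script leaves an "appfile" with one line per host, matching the four-line example above.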