Compile the helloworld example program:
/opt/pmpi/opt/ibm/platform_mpi/bin/mpicc -o helloworld /opt/pmpi/opt/ibm/platform_mpi/help/hello_world.c
[root@server3 help]# /opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -f ../help/hosts
warning: MPI_ROOT /opt/pmpi/opt/ibm/platform_mpi/ != mpirun path /opt/pmpi/opt/ibm/platform_mpi
Hello world! I'm 1 of 4 on server3
Hello world! I'm 0 of 4 on server3
Hello world! I'm 3 of 4 on computer007
Hello world! I'm 2 of 4 on computer007
[root@server3 help]# cat ../help/hosts
-h server3 -np 2 /opt/pmpi/opt/ibm/platform_mpi/help/helloworld
-h computer007 -np 2 /opt/pmpi/opt/ibm/platform_mpi/help/helloworld

3.5.3 Submitting through LSF
export MPI_REMSH=blaunch
$ mpirun -np 4 -IBV ~/helloworld
$ mpirun -np 32 -IBV ~/helloworld
$ mpirun -np 4 -TCP ~/helloworld
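In practice these mpirun invocations are placed inside an LSF job script so that blaunch starts the remote ranks under LSF's control. A minimal sketch follows; the #BSUB directives, the combination of -lsb_mcpu_hosts with -np/-IBV, and the script name run_pmpi.lsf are assumptions, not taken from the original transcript:

#BSUB -n 4
#BSUB -o %J.out
#BSUB -e %J.err
export MPI_REMSH=blaunch
# -lsb_mcpu_hosts tells mpirun to take its host list from the LSF allocation
/opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts -np 4 -IBV ~/helloworld

Submit the script with: bsub < run_pmpi.lsf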
Or, submit directly with bsub:

[root@server3 conf]# bsub -o %J.out -e .%J.err -n 4 /opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld
Job <210> is submitted to default queue <normal>.
JOBID  USER  STAT  QUEUE   FROM_HOST  EXEC_HOST  JOB_NAME    SUBMIT_TIME
210    root  PEND  normal  server3               *elloworld  May  9 10:55

[root@server3 conf]# cat 210.out
Sender: LSF System
Subject: Job 210: in cluster

Job </opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld> was submitted from host <server3>.
Job was executed on host(s) <4*computer007>, in queue <normal>.
was used as the home directory.
was used as the working directory.
Started at Thu May  9 18:49:06 2013
Results reported at Thu May  9 18:49:07 2013
Your job looked like:
------------------------------------------------------------
# LSBATCH: User input
/opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld
------------------------------------------------------------
Successfully completed.
Resource usage summary:
CPU time : 0.23 sec.
Max Memory :             2 MB
Average Memory :         2.00 MB
Total Requested Memory : -
Delta Memory :           -
(Delta: the difference between total requested memory and actual max usage.)
Max Swap :               36 MB

Max Processes :          1
Max Threads :            1
The output (if any) follows:
Hello world! I'm 2 of 4 on computer007
Hello world! I'm 0 of 4 on computer007
Hello world! I'm 1 of 4 on computer007
Hello world! I'm 3 of 4 on computer007

PS:
Read file <.210.err> for stderr output of this job.
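While a job like this is pending or running, the standard LSF query commands can be used to follow it. A small sketch using job ID 210 from the transcript above (the job ID is simply the one from this example):

bjobs -l 210     # detailed status of the job
bpeek 210        # peek at the job's stdout before it completes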
Or, with more parameters:
$ /opt/platform_mpi/bin/mpirun -np 120 -ibv -hostlist "cn-22-004 cn-22-005 cn-22-006 cn-22-007 cn-22-008 cn-22-009 cn-22-010"

If you do not want MPI jobs to be submitted and run through LSF, set the MPI_USELSF environment variable to n.
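For example (a sketch: MPI_USELSF is the corrected spelling of the variable named above, and the -TCP / -np 4 run is the same invocation used earlier in this section):

export MPI_USELSF=n
/opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -np 4 -TCP ~/helloworld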
3.6 Open MPI Jobs
Download the Open MPI package.
./configure LIBS=-ldl --with-lsf=yes --prefix=/usr/local/ompi/

Open MPI 1.3.2 and later are tightly integrated with LSF blaunch. Submit an Open MPI job:
bsub -n 2 -o %J.out -e %J.err mpiexec mympi.out
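The same submission can also be written as a job script. The sketch below assumes the LSF-integrated build from the configure line above, in which case mpiexec takes its host and slot allocation directly from LSF; the #BSUB directives and the installed path /usr/local/ompi/bin are assumptions based on that configure prefix:

#BSUB -n 2
#BSUB -o %J.out
#BSUB -e %J.err
# with an LSF-integrated Open MPI build no hostfile or -np is needed; mpiexec uses the LSF allocation
/usr/local/ompi/bin/mpiexec ./mympi.out

Submit it with bsub < jobscript. Running ompi_info | grep -i lsf beforehand is a quick way to confirm that the LSF components were actually built in.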
3.7 Intel MPI Jobs
3.7.1 Express Edition, non-accounting mode
If job accounting is required, use the blaunch integration instead. Set the environment variables in .bashrc:
export PATH=/gpfs/software/intel/composerxe/bin/:/gpfs/software/intel/mpi_41_0_024/include:/gpfs/software/intel/mpi_41_0_024/bin64:/gpfs/software/intel/composerxe/mkl:$PATH
source /gpfs/software/intel/composerxe/bin/compilervars.sh intel64
source /gpfs/software/intel/mpi_41_0_024/bin64/mpivars.sh
source /gpfs/software/intel/composerxe/mkl/bin/mklvars.sh intel64

MPI test program Helloworld.c:
#include "mpi.h"
#include <stdio.h>

int main(int argc, char **argv)
{
    int myid, numprocs;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    /* truncated in the source from here on; a standard completion that
       matches the "Hello world! I'm X of N on host" output shown above */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Get_processor_name(processor_name, &namelen);
    printf("Hello world! I'm %d of %d on %s\n", myid, numprocs, processor_name);
    MPI_Finalize();
    return 0;
}
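With the Intel MPI environment above sourced, a quick local sanity check of the test program might look like this (a sketch: mpicc and mpirun here are the Intel MPI wrappers from the bin64 directory on PATH, and the rank count of 4 is arbitrary):

mpicc -o helloworld Helloworld.c
mpirun -n 4 ./helloworld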