In MPI there are generally two ways to send and receive a derived data type. The first is to describe the same data structure defined in the Fortran90 program to MPI, using the MPI routines for creating derived datatypes, and then communicate with the resulting MPI derived datatype directly; the second is to pack the data into a buffer at the sender and unpack it at the receiver. I have recently been studying the first approach, and this is a brief write-up of my experience.
For example, suppose the following derived data type is defined in a Fortran90 program:
type type_global   ! integer(5+5), character(2*20), real(3), real*8(2)
  integer :: npoin, nelem, ngroup, nmat, nblks
  real :: x, y, z
  real*8 :: x8, y8
  integer :: icmatrix(5)
  character*20 :: outplot, type_abc
end type type_global
This derived type contains four standard data types: integer (10 values in total, five scalars plus one five-element array), real (3), double-precision real (2), and character strings (2, of 20 characters each). To transfer data of this derived type, its data structure must be declared to MPI as a derived datatype, which requires two MPI routines: MPI_TYPE_STRUCT and MPI_TYPE_COMMIT. The former defines the derived data structure and the latter commits it. MPI_TYPE_STRUCT is called as follows:
call MPI_TYPE_STRUCT(ndatatype, blocklens_global, offsets_global, oldtypes_global, &
                     type_global_MPI, ierr)
The arguments are explained as follows:
ndatatype is an integer whose value is the number of standard types contained in the derived data type (in this example, data of the same standard type are merged into one block). Here the value is 4: integer, character, real, and double-precision real;
blocklens_global(0:3) is an integer array with ndatatype elements; each element is the number of values of the corresponding standard data type in the derived type. Taking the order integer, character, real, double-precision real, the values are (10, 40, 3, 2);
offsets_global(0:3) is an integer array with ndatatype elements; each element is the byte offset of the corresponding standard data type within the derived type, which requires knowing the type map of the derived data structure. Simply put, the offset of the first standard type (integer) is 0; the offset of the second standard type (character) is the count of the previous standard type multiplied by the number of bytes it occupies (10 integers * 4 bytes = 40); and so on, so in this example the values are (0, 40, 80, 92). In Fortran90 the default integer is 4 bytes, the default real is 4 bytes, and double precision is 8 bytes; a character string occupies one byte per character, so its size depends on the string length.
oldtypes_global(0:3) is an integer array with ndatatype elements; each element is the MPI type corresponding to the standard data type in the derived type. In the order above, its value should be (MPI_INTEGER, MPI_CHARACTER, MPI_REAL, MPI_DOUBLE_PRECISION).
blocklens_global, offsets_global, and oldtypes_global must all use the same ordering of the standard data types. Which type comes first or last does not matter, but the order must be identical across the three arrays (see the sketch below).
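To make this concrete, here is a minimal sketch of how the three arrays could be filled for this example (assuming 4-byte integers and reals, 8-byte double precision, and 1-byte characters; the full program below obtains these sizes from MPI_TYPE_EXTENT instead of hard-coding them):

blocklens_global = (/ 10, 40, 3, 2 /)      ! 10 integers, 40 characters, 3 reals, 2 real*8
offsets_global   = (/ 0, 40, 80, 92 /)     ! byte offsets: 0, 0+10*4, 40+40*1, 80+3*4
oldtypes_global  = (/ MPI_INTEGER, MPI_CHARACTER, MPI_REAL, MPI_DOUBLE_PRECISION /)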
type_global_MPI is an integer variable; after the definition it becomes the name of a derived datatype that MPI can use, just like MPI_INTEGER, MPI_REAL, and so on.
ierr is an integer return code, which should already be familiar.
Finally, call MPI_TYPE_COMMIT(type_global_MPI, ierr) to commit the type; after that, MPI_SEND and MPI_RECV can pass the derived data type directly.
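As a quick illustration of how the committed type is used (a minimal sketch, assuming process 0 sends to process 1 with message tag 9; the complete program below instead sends to every process in a loop):

call MPI_TYPE_COMMIT(type_global_MPI, ierr)
if (myid == 0) then
    call MPI_SEND(GHM_global,  1, type_global_MPI, 1, 9, MPI_COMM_WORLD, ierr)
else if (myid == 1) then
    call MPI_RECV(GHM_global1, 1, type_global_MPI, 0, 9, MPI_COMM_WORLD, status, ierr)
endif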
Here is a complete code for transmitting a Fortran90 derived data type with MPI. It uses MPI to transmit one variable of the derived type, modifies the data in each process, and then prints the modified data from each process to check whether the transfer succeeded.
! ==============================================================
program MPI_TypeData_SendRecv
use mpi
character*(MPI_MAX_PROCESSOR_NAME) pcname, text*20
integer, parameter :: ndatatype=4
integer :: myid, npc, namelen, rc, ierr, ver, subver, m, n, status(MPI_STATUS_SIZE), ipc
integer :: type_block_MPI, type_global_MPI, &
           blocklens_global(0:ndatatype-1), offsets_global(0:ndatatype-1), &
           oldtypes_global(0:ndatatype-1), &
           blocklens_block(0:2), offsets_block(0:2), oldtypes_block(0:2)
integer(8) :: extent
type type_global   ! integer(5+5), character(2*20), real(3), real*8(2)
  integer :: npoin, nelem, ngroup, nmat, nblks
  real :: x, y, z
  real*8 :: x8, y8
  integer :: icmatrix(5)
  character*20 :: outplot, type_abc
end type type_global
type type_block
  integer, allocatable :: appear_process(:,:), matno_process(:,:)
end type type_block
type(type_global) :: GHM_global, GHM_global1
type(type_block) :: GHM_block
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, npc, ierr)
btime=MPI_WTIME()
call MPI_GET_PROCESSOR_NAME(pcname, namelen, ierr)
call MPI_GET_VERSION(ver, subver, ierr)
write(*,1000) myid, npc, trim(pcname), ver, subver
if(myid==0) then   ! initialize the derived-type data on process 0
  GHM_global%npoin=10; GHM_global%ngroup=5
  GHM_global%nblks=2
  do ipc=1, size(GHM_global%icmatrix)
    GHM_global%icmatrix(ipc)=ipc-1
  enddo
  GHM_global%y=0.4; GHM_global%y8=0.8
  GHM_global%outplot='GID'; GHM_global%type_abc='VIE'
  ngroup=GHM_global%ngroup
  nblks=GHM_global%nblks
  allocate(GHM_block%appear_process(ngroup,nblks), &
           GHM_block%matno_process(ngroup,nblks))
  GHM_block%appear_process=0; GHM_block%matno_process=0
endif
blocklens_global(0)=10            ! 10 integers
offsets_global(0)=0
oldtypes_global(0)=MPI_INTEGER
call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
blocklens_global(1)=40            ! 40 characters
offsets_global(1)=offsets_global(0)+blocklens_global(0)*extent
oldtypes_global(1)=MPI_CHARACTER
call MPI_TYPE_EXTENT(MPI_CHARACTER, extent, ierr)
blocklens_global(2)=3             ! 3 reals
offsets_global(2)=offsets_global(1)+blocklens_global(1)*extent
oldtypes_global(2)=MPI_REAL
call MPI_TYPE_EXTENT(MPI_REAL, extent, ierr)
blocklens_global(3)=2             ! 2 real*8
offsets_global(3)=offsets_global(2)+blocklens_global(2)*extent
oldtypes_global(3)=MPI_DOUBLE_PRECISION
call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
write(*,'(a,10i4)') 'myid_int = ', myid, extent
call MPI_TYPE_EXTENT(MPI_REAL, extent, ierr)
write(*,'(a,10i4)') 'myid_real= ', myid, extent
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
write(*,'(a,10i4)') 'myid_doub= ', myid, extent
call MPI_TYPE_EXTENT(MPI_CHARACTER, extent, ierr)
write(*,'(a,10i4)') 'myid_char= ', myid, extent, offsets_global
call MPI_TYPE_STRUCT(ndatatype, blocklens_global, offsets_global, oldtypes_global, &
                     type_global_MPI, ierr)
call MPI_TYPE_COMMIT(type_global_MPI, ierr)
if(myid==0) then   ! send GHM_global from process 0 to the other processes
  do ipc=1, npc-1
    call MPI_SEND(GHM_global, 1, type_global_MPI, ipc, 9, MPI_COMM_WORLD, ierr)
  enddo
else
  call MPI_RECV(GHM_global1, 1, type_global_MPI, 0, 9, MPI_COMM_WORLD, status, ierr)