This is an example of a send-and-receive program in MPJ Express. An overview of sending and receiving with MPI will be added later 🙂

package MPJExpress;

import mpi.MPI;

/**
*
* @author pbasari
*/
public class SendReceive {
    public static void main(String args[]) {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();   // this process's rank
        int size = MPI.COMM_WORLD.Size();   // total number of processes
        if (rank == 0) {
            // Sending: rank 0 sends the same array to every other rank
            int data[] = {1, 2, 3, 4};
            for (int i = 1; i < size; i++) {
                System.out.println("From 0 send to " + i);
                MPI.COMM_WORLD.Send(data, 0, data.length, MPI.INT, i, 0);
            }
        } else {
            // Receiving: every other rank receives the array from rank 0 (tag 0)
            int data[] = new int[4];
            MPI.COMM_WORLD.Recv(data, 0, data.length, MPI.INT, 0, 0);
            System.out.println("Received from 0");
            for (int i = 0; i < data.length; i++)
                System.out.println("Rank: " + rank + "; " + data[i]);
        }
        MPI.Finalize();
    }
}


MPJExpress Tutorials

December 21, 2017



Welcome to the MPJ Express tutorials! In these tutorials you will learn a wide array of concepts about MPI (the Message Passing Interface), drawn from Wes Kendall's material, together with their implementation in Java using MPJ Express. Below are the available lessons, each of which contains example code.

The tutorials assume that the reader has a basic knowledge of the Java programming language.

Introduction and MPI installation

Blocking point-to-point communication

  • Sending and receiving with MPI.COMM_WORLD.Send and MPI.COMM_WORLD.Recv
  • Point-to-point communication application

Basic collective communication

  • Collective communication introduction with MPI.COMM_WORLD.Bcast
  • Common collectives – MPI.COMM_WORLD.Scatter, MPI.COMM_WORLD.Gather, and MPI.COMM_WORLD.Allgather
  • Application example – Performing parallel rank computation with basic collectives

Advanced collective communication

  • Using MPI.COMM_WORLD.Reduce and MPI.COMM_WORLD.Allreduce for parallel number reduction

Groups and communicators

  • Introduction to groups and communicators


MPI Tutorial Introduction

December 21, 2017

MPI Tutorial Introduction, from http://mpitutorial.com/tutorials/mpi-introduction/

A brief introduction to MPI, short but clear enough! I hope you enjoy it. Thanks for this tutorial, Wes.

MPI Tutorial Introduction

Author: Wes Kendall


Parallel computing is now as much a part of everyone’s life as personal computers, smart phones, and other technologies are. You obviously understand this, because you have embarked upon the MPI Tutorial website. Whether you are taking a class about parallel programming, learning for work, or simply learning it because it’s fun, you have chosen to learn a skill that will remain incredibly valuable for years to come. In my opinion, you have also taken the right path to expanding your knowledge about parallel programming – by learning the Message Passing Interface (MPI). Although MPI is lower level than most parallel programming libraries (for example, Hadoop), it is a great foundation on which to build your knowledge of parallel programming.

Before I dive into MPI, I want to explain why I made this resource. When I was in graduate school, I worked extensively with MPI. I was fortunate enough to work with important figures in the MPI community during my internships at Argonne National Laboratory and to use MPI on large supercomputing resources to do crazy things in my doctoral research. However, even with access to all of these resources and knowledgeable people, I still found that learning MPI was a difficult process.

Learning MPI was difficult for me because of three main reasons. First of all, the online resources for learning MPI were mostly outdated or not that thorough. Second, it was hard to find any resources that detailed how I could easily build or access my own cluster. And finally, the cheapest MPI book at the time of my graduate studies was a whopping 60 dollars – a hefty price for a graduate student to pay. Given how important parallel programming is in our day and time, I feel it is equally important for people to have access to better information about one of the fundamental interfaces for writing parallel applications.

Although I am by no means an MPI expert, I decided that it would be useful for me to expel all of the information I learned about MPI during graduate school in the form of easy tutorials with example code that can be executed on your very own cluster! I hope this resource will be a valuable tool for your career, studies, or life – because parallel programming is not only the present, it is the future.

A brief history of MPI

Before the 1990’s, programmers weren’t as lucky as us. Writing parallel applications for different computing architectures was a difficult and tedious task. At that time, many libraries could facilitate building parallel applications, but there was not a standard accepted way of doing it.

During this time, most parallel applications were in the science and research domains. The model most commonly adopted by the libraries was the message passing model. What is the message passing model? All it means is that an application passes messages among processes in order to perform a task. This model works out quite well in practice for parallel applications. For example, a master process might assign work to slave processes by passing them a message that describes the work. Another example is a parallel merge sorting application that sorts data locally on processes and passes results to neighboring processes to merge sorted lists. Almost any parallel application can be expressed with the message passing model.

Since most libraries at this time used the same message passing model with only minor feature differences among them, the authors of the libraries and others came together at the Supercomputing 1992 conference to define a standard interface for performing message passing – the Message Passing Interface. This standard interface would allow programmers to write parallel applications that were portable to all major parallel architectures. It would also allow them to use the features and models they were already used to using in the current popular libraries.

By 1994, a complete interface and standard was defined (MPI-1). Keep in mind that MPI is only a definition for an interface. It was then up to developers to create implementations of the interface for their respective architectures. Luckily, it only took another year for complete implementations of MPI to become available. After its first implementations were created, MPI was widely adopted and still continues to be the de-facto method of writing message-passing applications.

An accurate representation of the first MPI programmers.

MPI’s design for the message passing model

Before starting the tutorial, I will cover a couple of the classic concepts behind MPI’s design of the message passing model of parallel programming. The first concept is the notion of a communicator. A communicator defines a group of processes that have the ability to communicate with one another. In this group of processes, each is assigned a unique rank, and they explicitly communicate with one another by their ranks.

The foundation of communication is built upon send and receive operations among processes. A process may send a message to another process by providing the rank of the process and a unique tag to identify the message. The receiver can then post a receive for a message with a given tag (or it may not even care about the tag), and then handle the data accordingly. Communications such as this which involve one sender and receiver are known as point-to-point communications.
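As a small illustration of point-to-point messaging in MPJ Express syntax (a sketch, not from the original article; it assumes at least two processes, and uses MPI.ANY_TAG for a receiver that does not care about the tag):

import mpi.MPI;

public class TagExample {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int msg[] = new int[1];
        if (rank == 0) {
            msg[0] = 42;
            // Send one int to rank 1, labelled with tag 99
            MPI.COMM_WORLD.Send(msg, 0, 1, MPI.INT, 1, 99);
        } else if (rank == 1) {
            // This receiver ignores the label: MPI.ANY_TAG matches any tag
            MPI.COMM_WORLD.Recv(msg, 0, 1, MPI.INT, 0, MPI.ANY_TAG);
            System.out.println("Rank 1 received " + msg[0]);
        }
        MPI.Finalize();
    }
}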

There are many cases where processes may need to communicate with everyone else. For example, when a master process needs to broadcast information to all of its worker processes. In this case, it would be cumbersome to write code that does all of the sends and receives. In fact, it would often not use the network in an optimal manner. MPI can handle a wide variety of these types of collective communications that involve all processes.
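For instance, a broadcast in MPJ Express could look like the following minimal sketch (not part of the original article; every rank calls Bcast, and afterwards all ranks hold the root's data):

import java.util.Arrays;
import mpi.MPI;

public class BcastExample {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int data[] = new int[4];
        if (rank == 0) {
            // Only the root fills the buffer before the broadcast
            for (int i = 0; i < data.length; i++) data[i] = i + 1;
        }
        // Collective call: rank 0's buffer is copied to every process
        MPI.COMM_WORLD.Bcast(data, 0, data.length, MPI.INT, 0);
        System.out.println("Rank " + rank + " has " + Arrays.toString(data));
        MPI.Finalize();
    }
}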

This time, we will use the NetBeans IDE for parallel programming. The author uses NetBeans version 8.2 with the following specifications:

  • Product Version: NetBeans IDE 8.2 (Build 201609300101)
  • Updates: Updates available to version NetBeans 8.2 Patch 2
  • Java: 1.8.0_112; Java HotSpot(TM) 64-Bit Server VM 25.112-b15
  • Runtime: Java(TM) SE Runtime Environment 1.8.0_112-b15
  • System: Windows 7 version 6.1 running on amd64; Cp1252; en_US (nb)
  • User directory: C:\Users\Toshiba\AppData\Roaming\NetBeans\8.2
  • Cache directory: C:\Users\Toshiba\AppData\Local\NetBeans\Cache\8.2

The installation steps are as follows:

Create a new project. In the following example, the project is named MPJExpress.

1-BuatProjectBaru

Add the library. Right-click on the Libraries node so that the following view appears:

2-AddJar

Choose JAR/Folder. Browse to the folder C:\mpj\lib and select the file mpj.jar as shown in the following figure:

3-AddJar-mpj

Press the Open button. Make sure the mpj.jar library has been added to the Libraries node as shown in the following figure:

4-AddJar-mpj_finished

Prepare the virtual machine environment for MPJ Express as follows:

Customize the configuration as shown in the following figure:

5-config

The following view then appears. Select New Config as shown in the following figure:

6-AddConfig

Fill in the new configuration and name it MPJExpress as shown in the following figure:

7-NewConfig

Press the OK button, then fill in the Working Directory and VM Options fields as follows:

  • Working Directory: C:\mpj\mpj-user
  • VM Options: -jar C:\mpj\lib\starter.jar -np 4

(The 4 indicates the number of processes used as the virtual machine.) The final configuration looks like this:

8-AddWorkingDir

Press the OK button.

Use this MPJExpress configuration as the active configuration when compiling and running parallel programs.

Here is an example HelloWorld parallel program and its execution output in the NetBeans IDE:

9-HelloWorld

That concludes the MPJ Express installation in the NetBeans IDE and the HelloWorld example program with 4 processing elements.

This section shows how MPJ Express programs can be executed in the multicore, cluster, and hybrid configurations.

Pre-requisites

  • Java 1.6 (stable) or higher (mandatory).
  • Apache Ant 1.6.2 or higher (optional): Ant is required for compiling the MPJ Express source code.
  • Perl (optional): MPJ Express needs Perl for compiling the source code because some of the Java code is generated from Perl templates. The build file will generate Java files from the Perl templates if it detects Perl on the machine. It is a good idea to install Perl if you want to do some development with MPJ Express.
  • A native MPI library (optional): a native MPI library such as MS-MPI is required for running MPJ Express in the cluster configuration with the native device.
  • Visual Studio (optional): MPJ Express needs Visual Studio to build the JNI wrapper library for the native device.

Installing MPJ Express

This section outlines steps to download and install MPJ Express software.

  1. Download MPJ Express and unpack it.
  2. Assuming the unpacked 'mpj express' is in 'c:\mpj', right-click My Computer → Properties → Advanced tab → Environment Variables and export the following system variables (user variables are not enough; a session-only command-line equivalent is sketched after this list):
    • Set the value of the variable MPJ_HOME to c:\mpj
    • Append c:\mpj\bin to the Path variable
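For the current Command Prompt session only, a non-persistent equivalent would be the following two commands (the permanent setting still has to go through the Environment Variables dialog above):

set MPJ_HOME=c:\mpj
set Path=%Path%;c:\mpj\bin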

Compiling and Running User Applications

This section shows how to compile a simple Hello World parallel Java program.

  1. Write the HelloWorld MPJ Express program and save it as HelloWorld.java:

import mpi.MPI;

/**
*
* @author pbasari
*/

public class HelloWorld {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);                   // start the MPI runtime
        int me = MPI.COMM_WORLD.Rank();   // this process's rank
        int size = MPI.COMM_WORLD.Size(); // total number of processes (unused here)
        System.out.println("Hi from <" + me + ">");
        MPI.Finalize();                   // shut down the MPI runtime
    }
}

  2. Compile: javac -cp .;%MPJ_HOME%/lib/mpj.jar HelloWorld.java
  3. Running (multicore configuration): mpjrun.bat -np 2 HelloWorld

Example:

C:\mpj\mpj-user>javac -cp .;%MPJ_HOME%/lib/mpj.jar HelloWorld.java
C:\mpj\mpj-user>mpjrun.bat -np 2 HelloWorld
MPJ Express (0.40) is started in the multicore configuration
Hi from <1>
Hi from <0>

Reading about several cases of people being billed as a professor of some institution (when in fact they are only an assistant or someone's advisee); it is usually not the person themselves making the claim. Rather, the media or an event committee gets it wrong, and the person never clarifies and just lets it stand. 🙄

Coincidentally, I was also signing up on a conference site and was asked for my position. I usually fill in Mrs. or Lecturer. It turns out you can actually enter something that looks much cooler, he he. I am usually embarrassed to be addressed with something beyond my capacity, but this really just exposes how slow I have been in climbing the academic ranks 🤣. Still a Lektor.

Based on the Decree of the Academic Senate of Institut Teknologi Bandung Number 043/SK/K01-SA/2002 on the English Terms for the Functional Academic Ranks of Institut Teknologi Bandung Lecturers, which I found by googling (it is a 2002 decree; I do not know whether there has been an update, forgive me), these are the English terms for the lecturers' functional ranks (though it is for ITB lecturers; may Unpas lecturers use it too? 😚). So:

  • Asisten Ahli = Instructor;
  • Lektor = Assistant Professor;
  • Lektor Kepala = Associate Professor;
  • Guru Besar = Professor

So, since I am a Lektor, I can present myself as an Assistant Professor. Sounds cool, huh? But since I do not want to be called Prof. (not worthy yet), would I end up being called Ass.? Ugh, that is not cool at all 🤦

There are five steps to connect a Java application to a database using JDBC (each step is marked in the sketch after the list), namely:

  • Register the driver class
  • Create the connection
  • Create the statement
  • Execute queries
  • Close the connection
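As a minimal sketch of those five steps (assuming a MySQL database named mydb with its Connector/J driver on the classpath; the URL, credentials, and the student table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSteps {
    public static void main(String args[]) throws Exception {
        // 1. Register the driver class (MySQL Connector/J assumed here)
        Class.forName("com.mysql.cj.jdbc.Driver");
        // 2. Create the connection (placeholder URL, user, and password)
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
        // 3. Create the statement
        Statement stmt = con.createStatement();
        // 4. Execute a query and walk through the results
        ResultSet rs = stmt.executeQuery("SELECT id, name FROM student");
        while (rs.next())
            System.out.println(rs.getInt("id") + " " + rs.getString("name"));
        // 5. Close the connection (its statement and result set close with it)
        con.close();
    }
}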

That's it, just like that? 🤔