Learning object-oriented concepts is not merely a matter of learning a programming language. Ideally, once you understand object-oriented concepts, any language can serve as a tool to make those concepts concrete. Sometimes we jump straight into practicing a programming language without understanding the OO concepts behind it. The following article summarizes the OO concepts that every programmer should master well.

There are five major topics in learning Object-Oriented Programming:

  1. Classes and Objects
  2. Interaction between Objects
  3. Relations between Objects
  4. Collections
  5. Advanced topics:
    1. Abstract Classes and Interfaces
    2. Polymorphism
    3. Static and Final

According to Grady Booch, an object is a thing that has identity, state, and behavior. State is the set of values of the attributes attached to the object. Behavior describes how the object acts, represented by methods/functions that can change the state. For example, take an object of the class Student (Mahasiswa). A student has the attributes name, student ID number, and GPA (IPK). A student has the behavior of studying, so that the value of the GPA can change.


A class is described as a template or blueprint for an object. A class defines the structure of an object, and this template or blueprint is the basis on which objects are created. Objects created from the same class have the same attributes and methods/functions. We say that an object is an instantiation of a class.

Following Unified Modeling Language (UML) notation, a class is drawn as a diagram with three areas. The first area shows the class name, the second the set of attributes, and the third the set of methods/functions.

[Figure: the Mahasiswa class in UML notation]

Based on this class diagram, we can write the following Java code:

public class Mahasiswa {
    String nama;  // student name
    String nim;   // student ID number (NIM)
    Double ipk;   // grade point average (IPK)
}

In PHP, we get the following code:

class Mahasiswa {
    private $nama;
    private $nim;
    private $ipk;
}

And in Python:

class Mahasiswa:
    nama = ""
    nim = ""
    ipk = 0.0
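
Whatever the language, the class is only the blueprint; objects come from instantiating it. Below is a minimal Java sketch of this idea. The belajar() method, its body, and the sample data are illustrative assumptions (the diagram above lists no methods); it shows two objects of the same class, each with its own state, and a behavior that changes that state.

public class Mahasiswa {
    String nama;
    String nim;
    Double ipk = 0.0;

    // Illustrative behavior (an assumption, not in the diagram):
    // studying changes the GPA, i.e. the object's state.
    void belajar() {
        ipk += 0.1;
    }

    public static void main(String[] args) {
        // Two distinct objects instantiated from the same class:
        Mahasiswa a = new Mahasiswa();
        a.nama = "Andi";
        a.nim = "101";
        Mahasiswa b = new Mahasiswa();
        b.nama = "Budi";
        b.nim = "102";

        a.belajar(); // only a's state changes
        System.out.println(a.nama + " IPK: " + a.ipk); // 0.1
        System.out.println(b.nama + " IPK: " + b.ipk); // 0.0
    }
}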

Sadness is... forgetting to save old programs that turn out to be still needed for current work/research. I could cry; do I really have to recode it all? How could I forget? I was sure I diligently backed everything up to a drive or copied it to an HDD, but when I looked, it was not there. The program is probably simple, something a second-year student could certainly write: the Knapsack Problem, solved with Brute Force and Dynamic Programming, already customized for a Combinatorial Spectrum Auction problem. I will use it as a baseline before moving on to Heuristic/Metaheuristic algorithms.

All right, let me redo the code. For Brute Force, this means we must generate every possible combination of items.
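
As a starting point, here is a minimal Java sketch of the brute-force approach (the class name and data are illustrative, not the original lost code): it enumerates all 2^n subsets of items with a bitmask and keeps the best one that fits the capacity.

public class BruteForceKnapsack {
    public static void main(String[] args) {
        int[] weights = {2, 3, 4, 5};  // item weights (illustrative data)
        int[] values  = {3, 4, 5, 6};  // item values
        int capacity  = 5;             // knapsack capacity

        int n = weights.length;
        int bestValue = 0;
        int bestSubset = 0;

        // Enumerate all 2^n subsets; bit i of 'subset' marks whether item i is taken.
        for (int subset = 0; subset < (1 << n); subset++) {
            int totalWeight = 0, totalValue = 0;
            for (int i = 0; i < n; i++) {
                if ((subset & (1 << i)) != 0) {
                    totalWeight += weights[i];
                    totalValue  += values[i];
                }
            }
            // Keep the best feasible subset seen so far.
            if (totalWeight <= capacity && totalValue > bestValue) {
                bestValue = totalValue;
                bestSubset = subset;
            }
        }

        System.out.println("Best value: " + bestValue);
        for (int i = 0; i < n; i++)
            if ((bestSubset & (1 << i)) != 0)
                System.out.println("Take item " + i);
    }
}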

This is an example of a sending and receiving program in MPJExpress. An overview of sending and receiving with MPI will be added later 🙂

package MPJExpress;

import mpi.MPI;

/**
 *
 * @author pbasari
 */
public class SendReceive {
    public static void main(String args[]) {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();  // this process's rank
        int size = MPI.COMM_WORLD.Size();  // total number of processes

        if (rank == 0) { // sending: rank 0 sends the array to every other rank
            int data[] = {1, 2, 3, 4};
            for (int i = 1; i < size; i++) {
                System.out.println("From 0 Send to " + i);
                MPI.COMM_WORLD.Send(data, 0, data.length, MPI.INT, i, 0);
            }
        } else { // receiving: every other rank receives from rank 0
            int data[] = new int[4];
            System.out.println("Received From 0");
            MPI.COMM_WORLD.Recv(data, 0, data.length, MPI.INT, 0, 0);
            for (int i = 0; i < data.length; i++) {
                System.out.println("Rank:" + rank + "; " + data[i]);
            }
        }
        MPI.Finalize();
    }
}
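
To try it (a rough sketch of the usual MPJ Express workflow, assuming MPJ Express is installed and MPJ_HOME is set): compile the class with javac against $MPJ_HOME/lib/mpj.jar, then launch it with the mpjrun.sh script (mpjrun.bat on Windows), choosing the number of processes with -np, e.g. -np 4 so that rank 0 sends to ranks 1 through 3.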

MPJExpress Tutorials

December 21, 2017



Welcome to the MPJExpress tutorials! In these tutorials, you will learn a wide array of concepts about MPI (Message Passing Interface) from Wes Kendall, and their implementation in Java using MPJExpress. Below are the available lessons, each of which contains example code.

The tutorials assume that the reader has a basic knowledge of the Java programming language.

Introduction and MPI installation

Blocking point-to-point communication

  • Sending and receiving with MPI.COMM_WORLD.Send and MPI.COMM_WORLD.Recv
  • Point-to-point communication application

Basic collective communication

  • Collective communication introduction with MPI.COMM_WORLD.Bcast
  • Common collectives – MPI.COMM_WORLD.Scatter, MPI.COMM_WORLD.Gather, and MPI.COMM_WORLD.Allgather
  • Application example – Performing parallel rank computation with basic collectives

Advanced collective communication

  • Using MPI.COMM_WORLD.Reduce and MPI.COMM_WORLD.Allreduce for parallel number reduction

Groups and communicators

  • Introduction to groups and communicators


MPI Tutorial Introduction

December 21, 2017

MPI Tutorial Introduction, from http://mpitutorial.com/tutorials/mpi-introduction/

A brief introduction to MPI, short but clear enough! I hope you enjoy it. Thanks for this tutorial, Wes.

MPI Tutorial Introduction

Author: Wes Kendall


Parallel computing is now as much a part of everyone’s life as personal computers, smart phones, and other technologies are. You obviously understand this, because you have embarked upon the MPI Tutorial website. Whether you are taking a class about parallel programming, learning for work, or simply learning it because it’s fun, you have chosen to learn a skill that will remain incredibly valuable for years to come. In my opinion, you have also taken the right path to expanding your knowledge about parallel programming – by learning the Message Passing Interface (MPI). Although MPI is lower level than most parallel programming libraries (for example, Hadoop), it is a great foundation on which to build your knowledge of parallel programming.

Before I dive into MPI, I want to explain why I made this resource. When I was in graduate school, I worked extensively with MPI. I was fortunate enough to work with important figures in the MPI community during my internships at Argonne National Laboratory and to use MPI on large supercomputing resources to do crazy things in my doctoral research. However, even with access to all of these resources and knowledgeable people, I still found that learning MPI was a difficult process.

Learning MPI was difficult for me because of three main reasons. First of all, the online resources for learning MPI were mostly outdated or not that thorough. Second, it was hard to find any resources that detailed how I could easily build or access my own cluster. And finally, the cheapest MPI book at the time of my graduate studies was a whopping 60 dollars – a hefty price for a graduate student to pay. Given how important parallel programming is in our day and time, I feel it is equally important for people to have access to better information about one of the fundamental interfaces for writing parallel applications.

Although I am by no means an MPI expert, I decided that it would be useful for me to expel all of the information I learned about MPI during graduate school in the form of easy tutorials with example code that can be executed on your very own cluster! I hope this resource will be a valuable tool for your career, studies, or life – because parallel programming is not only the present, it is the future.

A brief history of MPI

Before the 1990’s, programmers weren’t as lucky as us. Writing parallel applications for different computing architectures was a difficult and tedious task. At that time, many libraries could facilitate building parallel applications, but there was not a standard accepted way of doing it.

During this time, most parallel applications were in the science and research domains. The model most commonly adopted by the libraries was the message passing model. What is the message passing model? All it means is that an application passes messages among processes in order to perform a task. This model works out quite well in practice for parallel applications. For example, a master process might assign work to slave processes by passing them a message that describes the work. Another example is a parallel merge sorting application that sorts data locally on processes and passes results to neighboring processes to merge sorted lists. Almost any parallel application can be expressed with the message passing model.

Since most libraries at this time used the same message passing model with only minor feature differences among them, the authors of the libraries and others came together at the Supercomputing 1992 conference to define a standard interface for performing message passing – the Message Passing Interface. This standard interface would allow programmers to write parallel applications that were portable to all major parallel architectures. It would also allow them to use the features and models they were already used to using in the current popular libraries.

By 1994, a complete interface and standard was defined (MPI-1). Keep in mind that MPI is only a definition for an interface. It was then up to developers to create implementations of the interface for their respective architectures. Luckily, it only took another year for complete implementations of MPI to become available. After its first implementations were created, MPI was widely adopted and still continues to be the de-facto method of writing message-passing applications.

[Figure: An accurate representation of the first MPI programmers.]

MPI’s design for the message passing model

Before starting the tutorial, I will cover a couple of the classic concepts behind MPI’s design of the message passing model of parallel programming. The first concept is the notion of a communicator. A communicator defines a group of processes that have the ability to communicate with one another. In this group of processes, each is assigned a unique rank, and they explicitly communicate with one another by their ranks.

The foundation of communication is built upon send and receive operations among processes. A process may send a message to another process by providing the rank of the process and a unique tag to identify the message. The receiver can then post a receive for a message with a given tag (or it may not even care about the tag), and then handle the data accordingly. Communications such as this which involve one sender and receiver are known as point-to-point communications.

There are many cases where processes may need to communicate with everyone else. For example, when a master process needs to broadcast information to all of its worker processes. In this case, it would be cumbersome to write code that does all of the sends and receives. In fact, it would often not use the network in an optimal manner. MPI can handle a wide variety of these types of collective communications that involve all processes.
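
To complement the SendReceive example earlier in this post, here is a minimal MPJ Express sketch of a collective broadcast (an illustrative example added here, not part of Wes's original article): every process calls Bcast on MPI.COMM_WORLD, with rank 0 acting as the root that supplies the data.

package MPJExpress;

import java.util.Arrays;
import mpi.MPI;

public class BcastExample {
    public static void main(String[] args) {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();

        int[] data = new int[4];
        if (rank == 0) {
            // Only the root process prepares the data to broadcast.
            for (int i = 0; i < data.length; i++) data[i] = i + 1;
        }

        // Every process calls Bcast; root 0 sends, all others receive.
        MPI.COMM_WORLD.Bcast(data, 0, data.length, MPI.INT, 0);

        System.out.println("Rank " + rank + " has: " + Arrays.toString(data));
        MPI.Finalize();
    }
}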