These months are the season for new-student admissions. It is much discussed, and it repeats year after year. Sometimes I decide to stay out of the conversation because none of my children are involved yet; but sometimes I feel I need to understand it from the start, even though the system will be changed in the years ahead. Especially the zoning rules.

I believe the distance consideration in the zoning system is part of the government's grand design, connected to transportation planning as well as to improving school quality. Hopefully the rule becomes an enabler of that quality-improvement target, even though for now it tastes bittersweet, since not every party benefits from it.



IC-ICT4T 2018

May 23, 2018

EXTENDED Submission Deadline [IC-ICT4T] International Conference on Information and Communication Technology for Transformation (IC-ICT4T) 2018

Dear Sir,

We would like to invite you to submit a paper to the International Conference on Information and Communication Technology for Transformation (IC-ICT4T) 2018, to be held on October 3 – 5, 2018.
Please see the CFP below for details.

Our apologies if you receive multiple copies of this CFP.
Call For Papers for IC-ICT4T 2018
Paper submission deadline: May 31, 2018

Paper Submission: http://www.ict4t.cms.net.my/ict4t2018/submission.html
Webpage: http://www.ict4t.cms.net.my/ict4t2018/

The International Conference on Information and Communication Technology for Transformation 2018 (IC-ICT4T 2018) is a platform for sharing the state of the art of digital opportunities for underserved communities. It is the 6th edition of the conference, previously known as the Rural ICT Development (RICTD) Conference, which has run since 2007. IC-ICT4T 2018 will be held at Universitas Pasundan, Bandung, Jawa Barat, Indonesia, from 3 to 5 October 2018. This conference aims to share, exchange, and extend knowledge of digital opportunities for underserved communities. The conference theme is:

Transforming Information Ecosystem towards Digital Economy

The IC-ICT4T 2018 committee welcomes both academic and practitioner papers on a wide range of topics and scholarly approaches, including theoretical and empirical papers employing qualitative, quantitative, and critical methods. Case studies and research in progress are welcome. IC-ICT4T 2018 invites full-paper submissions, which may address theoretical, methodological, or practical aspects related to Digital Opportunities for Underserved Communities and should revolve around the conference theme. Papers not explicitly addressing the following topics are also welcome.

Topics & Tracks

· Digital Economy

· Digital Inclusion

· Digital Content

· Universal Access and Service

· Internet of Things (IoT)

· Applications and Services

· Entrepreneurship Transformation

· Policy and Regulations

· Disaster Management

· Community Transformation

· Smart Village/Smart City

· ICT for Underserved Community (rural, urban poor, disabled, senior citizen, youth, single parents, indigenous, and small and medium industries)

Important Dates

· Paper Submission Deadline (Extended): 31 May 2018

· Notification of Acceptance: Starting 30 June 2018

· Camera Ready: 31 July 2018

· Registration Deadline: 30 August 2018

Authors can submit full paper(s) only. Authors MUST use the conference template to prepare papers. All submitted papers will go through a double-blind peer review process by two to three competent reviewers.

Paper Format & Template

Journal publication (flowchart)
  1. [Template | Example] Information Systems Journal
  2. [Template | Example] Information & Management
  3. [Template | Example] Technology and Culture
  4. [Template | Example] Journal of Information Science
  5. [Template | Example] Rural and Remote Health
  6. [Template | Example] Malaysian Journal of Learning and Instruction (MJLI)
  7. [Template | Example] Journal Sampurasun
E-Proceedings (flowchart)

Template for E-Proceedings [Download].

Submit your paper online and choose publication category.

Publication Category

1. Poster

Participants need to bring a poster to present in the IC-ICT4T 2018 poster session. All posters must be displayed at the conference and will be reviewed by judges at the event. Awards will be given to the best posters (three awards, for first, second, and third place).

2. E-Proceeding

All accepted papers at the IC-ICT4T 2018 conference are reviewed and will be published in the IC-ICT4T 2018 conference e-proceedings with an e-ISBN number: 978-967-11768-X-X. The papers will be available on our website and on other open-access academic sites (e.g., RePEc, Google Scholar) for wider visibility. All presenters will receive the papers in digital form (on a pen drive). The proceedings will be submitted to Google Scholar for evaluation and indexing.

3. Journal

Upon acceptance by the reviewers and editors, papers submitted to the journal track will be published in ISI/Scopus-indexed journals.

Special awards will be given to the Best Poster and Best Presenter.

For more information, please email ic_ict4t@uum.edu.my, ic_ict4t@unpas.ac.id, if@unpas.ac.id

This is an example of a sending and receiving program in MPJ Express. An overview of sending and receiving with MPI will be added later 🙂

package MPJExpress;

import mpi.MPI;

/** @author pbasari */
public class SendReceive {
    public static void main(String args[]) {
        MPI.Init(args);                      // start the MPJ Express runtime
        int rank = MPI.COMM_WORLD.Rank();    // this process's id
        int size = MPI.COMM_WORLD.Size();    // total number of processes
        if (rank == 0) {                     // sending
            int data[] = {1, 2, 3, 4};
            for (int i = 1; i < size; i++) {
                System.out.println("From 0 Send to " + i);
                MPI.COMM_WORLD.Send(data, 0, data.length, MPI.INT, i, 0);
            }
        } else {                             // receiving
            int data[] = new int[4];
            MPI.COMM_WORLD.Recv(data, 0, data.length, MPI.INT, 0, 0);
            System.out.println("Received From 0");
            for (int i = 0; i < data.length; i++)
                System.out.println("Rank:" + rank + "; " + data[i]);
        }
        MPI.Finalize();                      // shut down the runtime
    }
}

MPJExpress Tutorials

December 21, 2017


Welcome to the MPJExpress tutorials! In these tutorials, you will learn a wide array of concepts about MPI (Message Passing Interface) from Wes Kendall, along with their implementation in Java using MPJExpress. Below are the available lessons, each of which contains example code.

The tutorials assume that the reader has a basic knowledge of Java Programming Language.

Introduction and MPI installation


    Blocking point-to-point communication

    • Sending and receiving with MPI.COMM_WORLD.Send and MPI.COMM_WORLD.Recv
    • Point-to-point communication application

    Basic collective communication

    • Collective communication introduction with MPI.COMM_WORLD.Bcast
    • Common collectives – MPI.COMM_WORLD.Scatter, MPI.COMM_WORLD.Gather, and MPI.COMM_WORLD.Allgather
    • Application example – Performing parallel rank computation with basic collectives

    Advanced collective communication

    • Using MPI.COMM_WORLD.Reduce and MPI.COMM_WORLD.Allreduce for parallel number reduction

    Groups and communicators

    • Introduction to groups and communicators



© 2017 MPJExpress Tutorial. All rights reserved.

MPI Tutorial Introduction

December 21, 2017

MPI Tutorial Introduction From http://mpitutorial.com/tutorials/mpi-introduction/

A brief introduction to MPI: short, but clear enough! I hope you enjoy it. Thanks for this tutorial, Wes.

MPI Tutorial Introduction

Author: Wes Kendall

Parallel computing is now as much a part of everyone’s life as personal computers, smart phones, and other technologies are. You obviously understand this, because you have embarked upon the MPI Tutorial website. Whether you are taking a class about parallel programming, learning for work, or simply learning it because it’s fun, you have chosen to learn a skill that will remain incredibly valuable for years to come. In my opinion, you have also taken the right path to expanding your knowledge about parallel programming – by learning the Message Passing Interface (MPI). Although MPI is lower level than most parallel programming libraries (for example, Hadoop), it is a great foundation on which to build your knowledge of parallel programming.

Before I dive into MPI, I want to explain why I made this resource. When I was in graduate school, I worked extensively with MPI. I was fortunate enough to work with important figures in the MPI community during my internships at Argonne National Laboratory and to use MPI on large supercomputing resources to do crazy things in my doctoral research. However, even with access to all of these resources and knowledgeable people, I still found that learning MPI was a difficult process.

Learning MPI was difficult for me because of three main reasons. First of all, the online resources for learning MPI were mostly outdated or not that thorough. Second, it was hard to find any resources that detailed how I could easily build or access my own cluster. And finally, the cheapest MPI book at the time of my graduate studies was a whopping 60 dollars – a hefty price for a graduate student to pay. Given how important parallel programming is in our day and time, I feel it is equally important for people to have access to better information about one of the fundamental interfaces for writing parallel applications.

Although I am by no means an MPI expert, I decided that it would be useful for me to expel all of the information I learned about MPI during graduate school in the form of easy tutorials with example code that can be executed on your very own cluster! I hope this resource will be a valuable tool for your career, studies, or life – because parallel programming is not only the present, it is the future.

A brief history of MPI

Before the 1990’s, programmers weren’t as lucky as us. Writing parallel applications for different computing architectures was a difficult and tedious task. At that time, many libraries could facilitate building parallel applications, but there was not a standard accepted way of doing it.

During this time, most parallel applications were in the science and research domains. The model most commonly adopted by the libraries was the message passing model. What is the message passing model? All it means is that an application passes messages among processes in order to perform a task. This model works out quite well in practice for parallel applications. For example, a master process might assign work to slave processes by passing them a message that describes the work. Another example is a parallel merge sorting application that sorts data locally on processes and passes results to neighboring processes to merge sorted lists. Almost any parallel application can be expressed with the message passing model.

Since most libraries at this time used the same message passing model with only minor feature differences among them, the authors of the libraries and others came together at the Supercomputing 1992 conference to define a standard interface for performing message passing – the Message Passing Interface. This standard interface would allow programmers to write parallel applications that were portable to all major parallel architectures. It would also allow them to use the features and models they were already used to using in the current popular libraries.

By 1994, a complete interface and standard was defined (MPI-1). Keep in mind that MPI is only a definition for an interface. It was then up to developers to create implementations of the interface for their respective architectures. Luckily, it only took another year for complete implementations of MPI to become available. After its first implementations were created, MPI was widely adopted and still continues to be the de-facto method of writing message-passing applications.

An accurate representation of the first MPI programmers.


MPI’s design for the message passing model

Before starting the tutorial, I will cover a couple of the classic concepts behind MPI’s design of the message passing model of parallel programming. The first concept is the notion of a communicator. A communicator defines a group of processes that have the ability to communicate with one another. In this group of processes, each is assigned a unique rank, and they explicitly communicate with one another by their ranks.

The foundation of communication is built upon send and receive operations among processes. A process may send a message to another process by providing the rank of the process and a unique tag to identify the message. The receiver can then post a receive for a message with a given tag (or it may not even care about the tag), and then handle the data accordingly. Communications such as this which involve one sender and receiver are known as point-to-point communications.
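The tag-matching described above can be sketched in plain Java. This is only a simulation of the concept; the class and method names below are invented for illustration and are not part of the MPJ Express API. A receiver's inbox holds (source, tag, payload) messages, and a receive picks out the first queued message whose source rank and tag both match, regardless of arrival order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Minimal simulation of tagged point-to-point messaging.
// Illustrative plain Java only; NOT the MPJ Express API.
public class TaggedMailbox {
    // A message carries the sender's rank, a tag, and a payload.
    static class Message {
        final int sourceRank, tag;
        final int[] payload;
        Message(int sourceRank, int tag, int[] payload) {
            this.sourceRank = sourceRank;
            this.tag = tag;
            this.payload = payload;
        }
    }

    private final Deque<Message> inbox = new ArrayDeque<>();

    // "Send": deliver a message into this receiver's inbox.
    public void send(int sourceRank, int tag, int[] payload) {
        inbox.addLast(new Message(sourceRank, tag, payload));
    }

    // "Recv": return the first queued message matching source and tag,
    // or null if none matches (a real Recv would block until one arrives).
    public int[] recv(int sourceRank, int tag) {
        Iterator<Message> it = inbox.iterator();
        while (it.hasNext()) {
            Message m = it.next();
            if (m.sourceRank == sourceRank && m.tag == tag) {
                it.remove();
                return m.payload;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        TaggedMailbox rank1 = new TaggedMailbox();
        rank1.send(0, 7, new int[]{1, 2, 3});    // rank 0 sends with tag 7
        rank1.send(0, 9, new int[]{42});         // rank 0 sends with tag 9
        System.out.println(rank1.recv(0, 9)[0]); // prints 42: matched by tag, not arrival order
    }
}
```

Note that the receiver asks for tag 9 first and still gets the second message sent, which is exactly the flexibility tags give you in MPI.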

There are many cases where processes may need to communicate with everyone else. For example, when a master process needs to broadcast information to all of its worker processes. In this case, it would be cumbersome to write code that does all of the sends and receives. In fact, it would often not use the network in an optimal manner. MPI can handle a wide variety of these types of collective communications that involve all processes.
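The point about collectives can be made concrete with a plain-Java sketch (again a simulation only; the class below is invented for illustration and is not the MPJ Express API). A naive broadcast is just the root looping over every other process and copying its buffer to each; real MPI implementations can instead use tree-shaped communication that finishes in roughly log(n) rounds and uses the network far better.

```java
import java.util.Arrays;

// Naive broadcast simulation: the root copies its buffer to every other
// "process" (here just rows of an array). A real MPI broadcast would move
// this data over the network, often in O(log n) steps using a tree.
public class NaiveBroadcast {
    static void bcast(int[][] buffers, int root) {
        for (int rank = 0; rank < buffers.length; rank++) {
            if (rank != root) {
                buffers[rank] = Arrays.copyOf(buffers[root], buffers[root].length);
            }
        }
    }

    public static void main(String[] args) {
        int size = 4;                        // pretend we have 4 processes
        int[][] buffers = new int[size][];
        buffers[0] = new int[]{10, 20, 30};  // only the root holds the data
        for (int r = 1; r < size; r++) buffers[r] = new int[3];
        bcast(buffers, 0);
        System.out.println(Arrays.toString(buffers[3])); // prints [10, 20, 30]
    }
}
```

The loop above is the "cumbersome" pattern the paragraph warns about; the collective call hides it behind one line and lets the library optimize the communication pattern.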

This time, we will use the NetBeans IDE for parallel programming. The author uses NetBeans version 8.2 with the following specifications:

  • Product Version: NetBeans IDE 8.2 (Build 201609300101)
    Updates: Updates available to version NetBeans 8.2 Patch 2
    Java: 1.8.0_112; Java HotSpot(TM) 64-Bit Server VM 25.112-b15
    Runtime: Java(TM) SE Runtime Environment 1.8.0_112-b15
    System: Windows 7 version 6.1 running on amd64; Cp1252; en_US (nb)
    User directory: C:\Users\Toshiba\AppData\Roaming\NetBeans\8.2
    Cache directory: C:\Users\Toshiba\AppData\Local\NetBeans\Cache\8.2

The installation steps are as follows:

Create a new project. In the following example, the project is named MPJExpress.


Add the library. Right-click the Libraries node so that the following view appears:


Choose JAR/Folder. Browse to the folder C:\mpj\lib and select the file mpj.jar as shown in the following figure:


Press the Open button. Make sure the mpj.jar library has been added to the Libraries node as shown in the following figure:


Set up the virtual machine environment for MPJExpress as follows:

Customize the configuration as shown in the following figure:


The following view then appears. Select New Config as shown in the following figure:


Fill in the new configuration and name it MPJExpress as shown in the following figure:



Press the OK button, then fill in the Working Directory and VM Options fields as follows:

  • Working Directory: C:\mpj\mpj-user
  • VM Options: -jar C:\mpj\lib\starter.jar -np 4

(The 4 indicates the number of processes used in the virtual machine.) The final configuration looks as follows:

Press the OK button.

Use this MPJExpress configuration as the main configuration when compiling and running parallel programs.

Here is an example of a parallel HelloWorld program and its execution output in the NetBeans IDE:


That completes the MPJExpress installation in the NetBeans IDE, along with a HelloWorld example using 4 processing elements.


This section shows how MPJ Express programs can be executed in the multicore, cluster, and hybrid configurations. The pre-requisites are:


  • Java 1.6 (stable) or higher (Mandatory).
  • Apache ant 1.6.2 or higher (Optional): ant is required for compiling MPJ Express source code.
  • Perl (Optional): MPJ Express needs Perl for compiling source code because some of the Java code is generated from Perl templates. The build file will generate Java files from Perl templates if it detects Perl on the machine. It is a good idea to install Perl if you want to do some development with MPJ Express.
  • A native MPI library (Optional): A native MPI library such as MS-MPI is required for running MPJ Express in cluster configuration with the native device.
  • Visual Studio (Optional): MPJ Express needs Visual Studio to build the JNI wrapper library for the native device.

Installing MPJ Express

This section outlines steps to download and install MPJ Express software.

  1. Download MPJ Express and unpack it
  2. Assuming the unpacked 'mpj express' is in 'c:\mpj', right-click My Computer → Properties → Advanced tab → Environment Variables and export the following system variables (user variables are not enough):
    • Set the value of the variable MPJ_HOME to c:\mpj
    • Append c:\mpj\bin to the Path variable

Compiling and Running User Applications

This section shows how to compile a simple Hello World parallel Java program.

  1. Write the Hello World MPJ Express program and save it as HelloWorld.java:

import mpi.MPI;

/** @author pbasari */
public class HelloWorld {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);                   // start the MPJ Express runtime
        int me = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        System.out.println("Hi from <" + me + ">");
        MPI.Finalize();                   // shut down the runtime
    }
}
  2. Compile: javac -cp .;%MPJ_HOME%/lib/mpj.jar HelloWorld.java
  3. Run (Multicore Configuration): mpjrun.bat -np 2 HelloWorld


C:\mpj\mpj-user>javac -cp .;%MPJ_HOME%/lib/mpj.jar HelloWorld.java
C:\mpj\mpj-user>mpjrun.bat -np 2 HelloWorld
MPJ Express (0.40) is started in the multicore configuration
Hi from <1>
Hi from <0>