Running a Hadoop Cluster in NetBeans

In the development phase of a Hadoop MapReduce program, you will spend a lot of time testing it on a real cluster with small data to make sure it works correctly. To do that, you must package your application into a jar file, run it with the hadoop jar command on the terminal, and then check your program’s output directory: are the outputs correct? If not, you must delete the output directory in HDFS, find and fix the bug in your program, and then start the build jar – run Hadoop – check output cycle again. Once or twice, that’s fine. But during development we will surely make a lot of mistakes in our program, and repeating the build jar – run Hadoop – check output – delete output directory cycle can take a lot of time, not to mention the typos you’ll make while interacting with the Hadoop shell commands. To make this testing process easier, we can use Karmasphere, a Hadoop plugin for the NetBeans IDE. This article is about how to test your Hadoop program on a real cluster easily using NetBeans.
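To make the pain concrete, here is a rough sketch of one round of that manual cycle; the jar name, main class, and paths below are placeholders for illustration, not from any actual project:

    # Build the jar in your IDE, then run it on the cluster.
    # wordcount.jar, WordCount, input and output are placeholder names.
    hadoop jar wordcount.jar WordCount input output

    # Check the results in HDFS. Are they correct?
    hadoop fs -cat output/part-00000

    # Found a bug? Hadoop refuses to overwrite an existing output
    # directory, so delete it, fix the code, rebuild, and repeat.
    hadoop fs -rmr output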

Quickly Switching Hadoop Modes

The Three Modes of Hadoop

As you may already know, we can configure and use Hadoop in three modes. These modes are:

Standalone mode

This is the default mode that you get when you download and extract Hadoop for the first time. In this mode, Hadoop doesn’t use HDFS to store input and output files; it just uses the local filesystem. This mode is very useful for debugging your MapReduce code before you deploy it on a large cluster to handle huge amounts of data. In this mode, Hadoop’s configuration file triplet (mapred-site.xml, core-site.xml, hdfs-site.xml) is still free of custom configuration.
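For illustration, a minimal sketch of a standalone-mode run, assuming the examples jar bundled with a 0.20-era Hadoop release (the exact jar name varies by version), run from inside the Hadoop installation directory:

    # No daemons and no HDFS: input/ and output/ are ordinary
    # local directories.
    mkdir input
    cp conf/*.xml input

    # Run the bundled wordcount example straight against the
    # local filesystem.
    bin/hadoop jar hadoop-*-examples.jar wordcount input output

    # The results are plain local files.
    cat output/part-00000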

Pseudo distributed mode (or single node cluster)

In this mode, we edit the configuration triplet to run Hadoop as a single-node cluster. The replication factor of HDFS is one, because a single node acts as the NameNode, DataNode, JobTracker, and TaskTracker all at once. We can use this mode to test our code on a real HDFS without the complexity of a fully distributed cluster. I’ve already covered the configuration process in my previous post.
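As a quick refresher of what running this mode looks like day to day, here is a sketch assuming the 0.20-style control scripts, run from the Hadoop installation directory:

    # Format the namenode once, before the very first start.
    bin/hadoop namenode -format

    # Start all the daemons -- NameNode, DataNode, SecondaryNameNode,
    # JobTracker, and TaskTracker -- on this one machine.
    bin/start-all.sh

    # Verify that the daemons are up.
    jps

    # Stop the cluster when you're done.
    bin/stop-all.sh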

Fully distributed mode (or multiple node cluster)

In this mode, we use Hadoop at its full scale. We can use a cluster consisting of a thousand nodes working together. This is the production phase, where your code and data are used and distributed across many nodes. You use this mode when your code is ready and works properly in the previous modes.

Simple Crawling with Nutch

As I told you in my last post, in order to create an automatic part-of-speech tagger for text documents, I need to collect some corpora. In fact, because I want to do it on a distributed system, I need a large corpus. One great source of corpora is the web, but extracting plain text from HTML manually is quite cumbersome. I heard that we can use a crawler to extract text from the web, and that’s how I stumbled upon Nutch.

A Little About Nutch

Nutch is an open source search engine built on Lucene and Solr. According to Tom White, Nutch basically consists of two parts: a crawler and a searcher. The crawler fetches pages from the web and creates an inverted index from them. The searcher answers users’ queries against the fetched pages. Nutch can run on a single computer, but it also works well on a multinode cluster; it uses Hadoop MapReduce to run in a distributed environment.

Simple Crawling with Nutch

Let’s get to the point. The objective I defined here is to build corpora from web pages. To achieve that, I’m just gonna crawl some web pages and extract their text. So I won’t be writing about searching for now, but I’m considering covering it in another post. Okay, this is my environment for this experiment:

  • Ubuntu 10.10 Maverick Meerkat
  • Java 6 OpenJDK
  • Nutch version 1.0, which you can download here.

After you’re ready, let’s get started, shall we?
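As a preview of where this is going, here is a minimal sketch of a Nutch 1.0 crawl; the seed URL, depth, and topN values are illustrative, not the ones from my actual experiment:

    # Create a seed list of URLs to crawl.
    mkdir urls
    echo "http://example.com/" > urls/seed.txt

    # Before crawling, set http.agent.name in conf/nutch-site.xml and
    # edit conf/crawl-urlfilter.txt so it allows your target domain.

    # One-shot crawl: fetch at most 50 pages per round, 3 rounds deep,
    # storing the segments and indexes under the crawl/ directory.
    bin/nutch crawl urls -dir crawl -depth 3 -topN 50

    # Inspect the resulting crawl database.
    bin/nutch readdb crawl/crawldb -stats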

Let’s Get Started

Hello, it has been a while since I updated this blog. I’ve been a little busy with college stuff and the like. And finally, I have come to the last year of my graduate study. After some consultations with professors at my college, I got something to be my research focus. Actually, it’s still at the proposal stage, but I hope it will work out, because so many people are counting on me for it.

So, I wanna implement MapReduce to optimize the processing in automatic part-of-speech tagging (POS tagging). POS tagging is the process of assigning a word class to each word in an entire collection of text documents; for example, tagging “book” as a noun in “read the book” but as a verb in “book a flight”. To make the process automatic, we can use approaches that involve natural language processing techniques. Some of these approaches involve supervised learning, which means the models must be trained on a tagged corpus before they can tag real-world text documents. We can use MapReduce to optimize both the training and the actual tagging process.

Since this is my first time dealing with (yeah) MapReduce and natural language processing, I feel a little bit anxious. In fact, my anxiety is already taking over my excitement. Hearing this, maybe you’ll ask how come I feel more anxiety than excitement. The answer is “I don’t know”, but I hope this works out and I can finish the research on time. Oh, maybe it’s because of the time variable. Well, if we didn’t have a time variable, when would we ever start the work?

Well, this is just me rambling. Thank you to all the readers who have asked questions, left comments, and everything else on this blog. I hope we can keep in touch. Wish me luck; I’ll write about my research little by little on this blog. So, stay tuned.. and let’s get started!!

Hadoop on a Single Node Cluster

Hello there! ’Sup?

In my previous post, we learned how to develop a Hadoop MapReduce application in NetBeans. After our application runs well in NetBeans, it’s time to deploy it on a cluster of computers. Well, it’s supposed to be a multi-node cluster, but for now, let’s try it on a single node cluster. This article gives a step-by-step guide on how to deploy a MapReduce application on a single node cluster.

In this tutorial, I’m using Ubuntu 9.10 Karmic Koala. For the Hadoop MapReduce application, I’ll use the code from my previous post; you can build it yourself or just download the jar file. Are you ready? Let’s go then..

Preparing the Environment

First things first: we must prepare the deployment environment and install and configure all the required software. For this process, I followed a great tutorial by Michael Noll on how to run Hadoop on a single node cluster. For simplicity, I’ll write a summary of all the steps mentioned in Michael’s post, but I do recommend reading it for the details.
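To give a taste of what’s coming, here is a condensed sketch of the first preparation steps in the spirit of Michael’s tutorial; the Java package name, user name, and group below are illustrative and may differ from what his post or your Ubuntu release uses:

    # Install Java 6 (the exact package name depends on your
    # Ubuntu release).
    sudo apt-get install sun-java6-jdk

    # Create a dedicated user and group for running Hadoop.
    sudo addgroup hadoop
    sudo adduser --ingroup hadoop hadoop

    # Hadoop controls its daemons over SSH, so the hadoop user needs
    # passphraseless SSH access to localhost.
    su - hadoop
    ssh-keygen -t rsa -P ""
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    ssh localhost   # accept the host key once, then exit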