It has been two years since I wrote about programming Hadoop in NetBeans using Karmasphere Studio. In the meantime, Karmasphere has apparently dropped NetBeans support and shifted its focus to the other IDE, Eclipse. I have no real problem using Eclipse, thanks to some Android projects that I’m working on right now. In this post, I’ll show you another example of programming Hadoop in Eclipse by implementing a distributed inverted index in MapReduce. So, let’s get started, shall we?
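Before diving into the Eclipse project, the core idea of an inverted index in MapReduce can be sketched in plain Python. This is just an in-memory simulation with made-up sample documents and function names; in real Hadoop the map and reduce steps would be Mapper/Reducer classes and the grouping would happen across the cluster:

```python
# Minimal in-memory simulation of building an inverted index MapReduce-style.
# The documents and function names here are invented for illustration.
from collections import defaultdict

docs = {
    "doc1": "hadoop runs mapreduce jobs",
    "doc2": "mapreduce jobs scale out",
}

def map_phase(doc_id, text):
    # Emit one (word, doc_id) pair per word occurrence.
    for word in text.split():
        yield word, doc_id

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(word, doc_ids):
    # Deduplicate and sort the posting list for each word.
    return word, sorted(set(doc_ids))

pairs = [p for doc_id, text in docs.items() for p in map_phase(doc_id, text)]
index = dict(reduce_phase(w, ids) for w, ids in shuffle(pairs).items())
print(index["mapreduce"])  # → ['doc1', 'doc2']
```

The real Hadoop job follows the same shape: the mapper tokenizes each document, the framework shuffles the pairs by word, and the reducer emits each word with its list of documents.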
Hello, it has been a while since I updated this blog. I’ve been a little busy with college work and things like that. And finally, I have come to the last year of my graduate study. After some consultations with professors at my college, I settled on a research focus. Actually, it’s still at the proposal stage, but I hope it will work out, because so many people are counting on me for it.
So, I want to implement MapReduce to optimize automatic part-of-speech tagging (POS tagging). POS tagging is the process of assigning a word class to every word in a collection of text documents. To automate the process, we can use approaches that involve natural language processing techniques. Some approaches involve supervised learning, which means we need to train a model on a tagged corpus before using it to tag real-world text. We can use MapReduce to optimize both the training and the actual tagging process.
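To make the supervised-learning part concrete, here is a toy sketch of the simplest possible tagger: a unigram model that just counts (word, tag) pairs in a tagged corpus and picks the most frequent tag for each word. The tiny corpus below is made up, and the counting step is exactly the kind of aggregation that MapReduce could parallelize over a large corpus:

```python
# Toy supervised POS tagger: a unigram model trained by counting
# (word, tag) pairs in a tagged corpus. The corpus is invented for
# illustration; real corpora have millions of tagged tokens.
from collections import Counter, defaultdict

tagged_corpus = [
    [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")],
    [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
]

def train(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        for word, tag in sentence:
            counts[word][tag] += 1
    # Pick the most frequent tag seen for each word.
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(words, model, default="NOUN"):
    # Unknown words fall back to a default tag.
    return [(w, model.get(w, default)) for w in words]

model = train(tagged_corpus)
print(tag(["the", "cat", "runs"], model))
```

The counting loop in `train` maps naturally onto MapReduce: mappers emit ((word, tag), 1) pairs, and reducers sum the counts per key.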
Since this is my first time dealing with (yeah) MapReduce and natural language processing, I feel a little bit anxious. In fact, my anxiety is already overtaking my excitement. Hearing this, maybe you’ll ask how I can feel more anxiety than excitement. The answer is “I don’t know”, but I hope this will work out and I can finish the research on time. Oh, maybe it’s because of the time constraint. Then again, if we had no time constraint, when would we ever start the work?
Well, this is just me rambling. Thank you to all the readers who have left questions, comments, and everything else on this blog. I hope we can keep in touch. Wish me luck. I’ll write about my research little by little on this blog. So, stay tuned.. And let’s get started!!
What? NoSQL? Yeah, you read it correctly. NoSQL. I forget when and where I first heard about it. But I took notice of this data store technology again when I was attending the second Bancakan 2.0 meetup last March. Listening to the speaker, lynxluna, I was reminded of HBase, a scalable distributed database that is part of the Apache Hadoop project. For the record, Apache Hadoop is just one implementation of the MapReduce framework.
What is NoSQL?
So, what the hell is NoSQL? Here is Wikipedia’s definition:
NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases. These data stores may not require fixed table schemas, usually avoid join operations and typically scale horizontally. Academics and papers typically refer to these databases as structured storage.
I’m sorry for the long delay since the first part. I’ve been pretty busy lately. In this part, I write about the idea behind MapReduce, how it works, and how it distributes data and processing. This article draws heavily on the MapReduce paper by Google. I’m writing it in my own words to deepen my understanding of the concept. Enjoy!
What is MapReduce?
According to Wikipedia, MapReduce is a software framework patented by Google to support distributed computing on large data sets on clusters of computers. The framework was presented by Jeffrey Dean and Sanjay Ghemawat at OSDI’04: Sixth Symposium on Operating System Design and Implementation in December 2004. The main idea is to utilize functional programming techniques to simplify processing in a distributed environment.
MapReduce processes data using the list concept commonly used in functional programming. The process consists of two functions: map and reduce. Each function takes a list of input elements and produces a list of outputs. The map function takes the input and produces intermediate key-value pairs. These pairs are then sent to the reduce function, which takes them as its input and, for each intermediate key, merges the values together to produce the output. According to the paper, each reduce invocation typically produces zero or one output value.
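The two-function flow described above is easiest to see in the canonical word-count example. Here it is sketched with Python’s functional primitives (a toy single-machine simulation with made-up input lines; real MapReduce runs the same steps across many machines):

```python
# Word count, mirroring the map → shuffle → reduce flow described above.
# The input lines are invented for illustration.
from itertools import groupby
from operator import itemgetter

lines = ["to be or not to be", "to do is to be"]

# Map: each line produces a list of intermediate (word, 1) pairs.
intermediate = [(word, 1) for line in lines for word in line.split()]

# Shuffle/sort: group the pairs by their intermediate key.
intermediate.sort(key=itemgetter(0))

# Reduce: for each key, merge the values into a single output value.
counts = {word: sum(v for _, v in pairs)
          for word, pairs in groupby(intermediate, key=itemgetter(0))}

print(counts["to"])  # → 4
```

Note how each reduce step here produces exactly one output value per key, matching the paper’s observation that a reduce invocation typically yields zero or one output value.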
There are so many articles out there about what MapReduce is, the basic concepts behind it, how it works, and many other things. Even so, I still want to write a little introduction to MapReduce. It’s mandatory, at least for me, to write about “something” in order to understand that “something”. I’m challenging my own understanding of MapReduce in this post. I’ll use some resources available out there, as I mentioned earlier. This is just another introduction to MapReduce.
Data, Data, Data
We are living in the cloud era. The Internet provides us with great resources that help our lives, and along the way we have created a lot of data. Consider a search engine like Google or Bing: it indexes sites across the entire network. If we’re talking about sites these days, that’s a big number. Netcraft reported that there are more than 200 million sites in the world, which means a search engine must process and analyze an enormous amount of data.