Data types are a fundamental aspect of any programming language; you cannot write programs without understanding them. I have already covered Hadoop's built-in data types, with proper examples, in a previous post: Hadoop built in data types for MapReduce.
In this post I am going to show you an example of a custom data type in Hadoop. Sometimes the built-in data types cannot solve our problem and we need a custom one, which is why I am posting this example of a custom data type for MapReduce programming.
Here is a sample stock exchange data format, with the following tab-separated columns:
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
Actual Data :
NYSE AEA 2010-02-08 4.42 4.42 4.21 4.24 205500 4.24
NYSE AEA 2010-02-05 4.42 4.54 4.22 4.41 194300 4.41
NYSE AEA 2010-02-04 4.55 4.69 4.39 4.42 233800 4.42
NYSE AEA 2010-02-03 4.65 4.69 4.5 4.55 182100 4.55
NYSE AEA 2010-02-02 4.74 5 4.62 4.66 222700 4.66
NYSE AEA 2010-02-01 4.84 4.92 4.68 4.75 194800 4.75
NYSE AEA 2010-01-29 4.97 5.05 4.76 4.83 222900 4.83
NYSE AEA 2010-01-28 5.12 5.22 4.81 4.98 283100 4.98
NYSE AEA 2010-01-27 4.82 5.16 4.79 5.09 243500 5.09
NYSE AEA 2010-01-26 5.18 5.18 4.81 4.84 554800 4.84

Problem : From the above dataset, we want to print the stock data grouped by date (column[2]). MapReduce works on key-value pairs, so the expected output needs to be (date, stockInfo) pairs, with the date as the key and the stock information as the value.
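For example, the first row above should come out as the pair:

key   = 2010-02-08
value = StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.42, stockPriceHigh=4.42, stockPriceLow=4.21, stockPriceClosed=4.24, stockVolume=205500]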
Solution : To solve the above problem, we need a custom data type that can hold the different column (field) values, like a class with the fields above. To create a custom data type for MapReduce, Hadoop provides the Writable and WritableComparable interfaces. For our problem we need a custom data type for the value, not for the key, because the key is simply a Text holding the date from the data set.
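As an aside, keys have stricter requirements than values: if we ever needed a custom key type (we do not in this example, since Text already works for the date), it would have to implement WritableComparable, because keys are sorted during the shuffle phase. A minimal, hypothetical sketch of such a key type, not used anywhere in this tutorial:

package com.javamakeuse.hadoop.dt;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// Hypothetical custom key type - not part of this tutorial's job.
public class DateKey implements WritableComparable<DateKey> {

    private String date;

    public String getDate() { return date; }
    public void setDate(String date) { this.date = date; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(date);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        date = in.readUTF();
    }

    // keys must be comparable so the framework can sort them before the reduce phase
    @Override
    public int compareTo(DateKey other) {
        return date.compareTo(other.date);
    }

    // hashCode() (and equals()) should also be overridden so the default
    // HashPartitioner sends equal keys to the same reducer
    @Override
    public int hashCode() {
        return date == null ? 0 : date.hashCode();
    }
}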
To create a custom data type for a value, we must implement the Writable interface and provide implementations of the following methods:
public void write(DataOutput out) throws IOException
public void readFields(DataInput in) throws IOException

OK, let's implement and solve the above problem. Of course, at the end you will find an attachment with the complete source code and sample data.
Prerequisites for this Tutorial-
Hadoop must be installed, if not you may follow Hadoop Single Node Setup guide.
Better to have working knowledge of Word Count example in Map Reduce, if not you may follow Hello World in Map Reduce step by step guide.
Tools and Technologies we are using here:
- Java 8
- Eclipse Mars
- Hadoop 2.7.1
- Maven 3.3
- Ubuntu 14(Linux OS)
Output: (the sample output is shown under Step 9 below)
Main Objects of this project are:
- StockInfo.java
- StockMapper.java
- StockReducer.java
- StockDriver.java
- pom.xml
Step 1. Create a Maven Project
Go to the File menu, then New -> Maven Project, and provide the required details.
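If you prefer the command line over Eclipse, a roughly equivalent way to generate the project skeleton (using the same groupId and artifactId as this tutorial) is:

mvn archetype:generate -DgroupId=com.javamakeuse.hadoop.dt \
    -DartifactId=CustomWritable \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false

Either way, the generated pom.xml still needs the Hadoop entries added in Step 2.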
Step 2. Edit pom.xml
Double-click on your project's pom.xml file; it will look like this, with very limited information:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javamakeuse.hadoop.dt</groupId>
    <artifactId>CustomWritable</artifactId>
    <version>1.0</version>
    <name>CustomWritable</name>
    <description>Custom data type example using Writable</description>
</project>

Now add the Hadoop dependency entries inside your pom.xml; below is the complete pom.xml.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javamakeuse.hadoop.dt</groupId>
    <artifactId>CustomWritable</artifactId>
    <version>1.0</version>
    <name>CustomWritable</name>
    <description>Custom data type example using Writable</description>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.1</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Step 3. StockInfo.java (Custom Data Type)
This is our custom data type. Here we implement the Writable interface and provide implementations of the write() and readFields() methods.
package com.javamakeuse.hadoop.dt;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class StockInfo implements Writable {

    private String exchange;
    private String stockSymbol;
    private double stockOpenPrice;
    private double stockPriceHigh;
    private double stockPriceLow;
    private double stockPriceClosed;
    private long stockVolume;

    public String getExchange() { return exchange; }
    public void setExchange(String exchange) { this.exchange = exchange; }
    public String getStockSymbol() { return stockSymbol; }
    public void setStockSymbol(String stockSymbol) { this.stockSymbol = stockSymbol; }
    public double getStockOpenPrice() { return stockOpenPrice; }
    public void setStockOpenPrice(double stockOpenPrice) { this.stockOpenPrice = stockOpenPrice; }
    public double getStockPriceHigh() { return stockPriceHigh; }
    public void setStockPriceHigh(double stockPriceHigh) { this.stockPriceHigh = stockPriceHigh; }
    public double getStockPriceLow() { return stockPriceLow; }
    public void setStockPriceLow(double stockPriceLow) { this.stockPriceLow = stockPriceLow; }
    public double getStockPriceClosed() { return stockPriceClosed; }
    public void setStockPriceClosed(double stockPriceClosed) { this.stockPriceClosed = stockPriceClosed; }
    public long getStockVolume() { return stockVolume; }
    public void setStockVolume(long stockVolume) { this.stockVolume = stockVolume; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(exchange);
        out.writeUTF(stockSymbol);
        out.writeDouble(stockOpenPrice);
        out.writeDouble(stockPriceHigh);
        out.writeDouble(stockPriceLow);
        out.writeDouble(stockPriceClosed);
        out.writeLong(stockVolume);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        exchange = in.readUTF();
        stockSymbol = in.readUTF();
        stockOpenPrice = in.readDouble();
        stockPriceHigh = in.readDouble();
        stockPriceLow = in.readDouble();
        stockPriceClosed = in.readDouble();
        stockVolume = in.readLong();
    }

    @Override
    public String toString() {
        return "StockInfo [exchange=" + exchange + ", stockSymbol=" + stockSymbol + ", stockOpenPrice="
                + stockOpenPrice + ", stockPriceHigh=" + stockPriceHigh + ", stockPriceLow=" + stockPriceLow
                + ", stockPriceClosed=" + stockPriceClosed + ", stockVolume=" + stockVolume + "]";
    }
}
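Before wiring StockInfo into the job, a quick way to check that write() and readFields() mirror each other is a local round trip through Java's data streams. This is just a sanity-check sketch (plain Java, not part of the MapReduce project):

package com.javamakeuse.hadoop.dt;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class StockInfoRoundTrip {
    public static void main(String[] args) throws IOException {
        StockInfo original = new StockInfo();
        original.setExchange("NYSE");
        original.setStockSymbol("AEA");
        original.setStockOpenPrice(4.42);
        original.setStockPriceHigh(4.42);
        original.setStockPriceLow(4.21);
        original.setStockPriceClosed(4.24);
        original.setStockVolume(205500);

        // serialize the fields exactly the way Hadoop would
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // deserialize into a fresh instance
        StockInfo copy = new StockInfo();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        // should print the same field values as 'original'
        System.out.println(copy);
    }
}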
Step 4. StockMapper.java (Mapper class)

package com.javamakeuse.hadoop.dt;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class StockMapper extends Mapper<LongWritable, Text, Text, StockInfo> {

    private Text dateTxt = new Text();

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, StockInfo>.Context context)
            throws IOException, InterruptedException {
        String[] columns = value.toString().split("\t");
        // a valid record has 9 tab-separated columns; index 7 (volume) must exist
        if (columns.length > 7) {
            StockInfo sinfo = new StockInfo();
            sinfo.setExchange(columns[0]);
            sinfo.setStockSymbol(columns[1]);
            sinfo.setStockOpenPrice(Double.parseDouble(columns[3]));
            sinfo.setStockPriceHigh(Double.parseDouble(columns[4]));
            sinfo.setStockPriceLow(Double.parseDouble(columns[5]));
            sinfo.setStockPriceClosed(Double.parseDouble(columns[6]));
            sinfo.setStockVolume(Long.parseLong(columns[7]));
            dateTxt.set(columns[2]);
            context.write(dateTxt, sinfo);
        }
    }
}

Step 5. StockReducer.java (Reducer class)
package com.javamakeuse.hadoop.dt;

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class StockReducer extends Reducer<Text, StockInfo, Text, StockInfo> {

    @Override
    protected void reduce(Text key, Iterable<StockInfo> values,
            Reducer<Text, StockInfo, Text, StockInfo>.Context context) throws IOException, InterruptedException {
        // iterate the values
        for (StockInfo stockInfo : values) {
            context.write(key, stockInfo);
        }
    }
}

Step 6. StockDriver.java (Main class to execute)
package com.javamakeuse.hadoop.dt;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class StockDriver extends Configured implements Tool {

    public static void main(String[] args) {
        try {
            int exitCode = ToolRunner.run(new StockDriver(), args);
            System.exit(exitCode);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance();
        System.out.println("Starting job !!");
        job.setJarByClass(StockDriver.class);
        // input/output path
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // set mapper/reducer class
        job.setMapperClass(StockMapper.class);
        job.setReducerClass(StockReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(StockInfo.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(StockInfo.class);
        int returnValue = job.waitForCompletion(true) ? 0 : 1;
        System.out.println("done !!");
        return returnValue;
    }
}

Step 7. Steps to execute our Map-Reduce Program
i. Start the Hadoop components. Open your terminal and type:
subodh@subodh-Inspiron-3520:~/software$ start-all.sh

ii. Verify that Hadoop has started with the jps command:
subodh@subodh-Inspiron-3520:~/software$ jps
8385 NameNode
8547 DataNode
5701 org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
9446 Jps
8918 ResourceManager
9054 NodeManager
8751 SecondaryNameNode

You may also verify via the web UI using the "http://localhost:50070/explorer.html#/" URL.
iii. Create an input folder on HDFS with the command below:
subodh@subodh-Inspiron-3520:~/software$ hadoop fs -mkdir /input

The above command creates an input folder on HDFS; you can verify it using the web UI. Now it is time to move the input file we need to process. Below is the command to copy the nyse_sample input file into the input folder on HDFS:
subodh@subodh-Inspiron-3520:~$ hadoop fs -copyFromLocal /home/subodh/programs/input/nyse_sample /input/

Note - You will find the nyse_sample file inside this project; you may also download it from this tutorial via the link at the end.
Step 8. Create & Execute jar file
We are almost done. Now create a jar file of the CustomWritable source code. You can create the jar using Eclipse or with the mvn package command.
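For example, from the project's root directory (the path below is only illustrative):

cd ~/workspace/CustomWritable    # wherever your Maven project lives
mvn clean package
# Maven places the jar at target/CustomWritable-1.0.jar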
To execute the CustomWritable-1.0.jar file, use the command below:
hadoop jar /home/subodh/CustomWritable-1.0.jar com.javamakeuse.hadoop.dt.StockDriver /input/nyse_sample /output

The above command will print the output below and also create an output folder containing the results of the CustomWritable project.
16/02/20 23:22:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Starting job !! 16/02/20 23:22:41 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id 16/02/20 23:22:41 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= 16/02/20 23:22:41 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 16/02/20 23:22:41 INFO input.FileInputFormat: Total input paths to process : 1 16/02/20 23:22:41 INFO mapreduce.JobSubmitter: number of splits:1 16/02/20 23:22:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1404022867_0001 16/02/20 23:22:41 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 16/02/20 23:22:41 INFO mapreduce.Job: Running job: job_local1404022867_0001 16/02/20 23:22:41 INFO mapred.LocalJobRunner: OutputCommitter set in config null 16/02/20 23:22:41 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1 16/02/20 23:22:41 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 16/02/20 23:22:41 INFO mapred.LocalJobRunner: Waiting for map tasks 16/02/20 23:22:41 INFO mapred.LocalJobRunner: Starting task: attempt_local1404022867_0001_m_000000_0 16/02/20 23:22:41 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1 16/02/20 23:22:41 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 16/02/20 23:22:41 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/input/nyse_sample:0+517 16/02/20 23:22:42 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 16/02/20 23:22:42 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 16/02/20 23:22:42 INFO mapred.MapTask: soft limit at 83886080 16/02/20 23:22:42 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 16/02/20 23:22:42 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 16/02/20 23:22:42 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 16/02/20 23:22:42 INFO mapred.LocalJobRunner: 16/02/20 23:22:42 INFO mapred.MapTask: Starting flush of map output 16/02/20 23:22:42 INFO mapred.MapTask: Spilling map output 16/02/20 23:22:42 INFO mapred.MapTask: bufstart = 0; bufend = 620; bufvoid = 104857600 16/02/20 23:22:42 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214360(104857440); length = 37/6553600 16/02/20 23:22:42 INFO mapred.MapTask: Finished spill 0 16/02/20 23:22:42 INFO mapred.Task: Task:attempt_local1404022867_0001_m_000000_0 is done. And is in the process of committing 16/02/20 23:22:42 INFO mapred.LocalJobRunner: map 16/02/20 23:22:42 INFO mapred.Task: Task 'attempt_local1404022867_0001_m_000000_0' done. 16/02/20 23:22:42 INFO mapred.LocalJobRunner: Finishing task: attempt_local1404022867_0001_m_000000_0 16/02/20 23:22:42 INFO mapred.LocalJobRunner: map task executor complete. 
16/02/20 23:22:42 INFO mapred.LocalJobRunner: Waiting for reduce tasks 16/02/20 23:22:42 INFO mapred.LocalJobRunner: Starting task: attempt_local1404022867_0001_r_000000_0 16/02/20 23:22:42 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1 16/02/20 23:22:42 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 16/02/20 23:22:42 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@6144d11f 16/02/20 23:22:42 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10 16/02/20 23:22:42 INFO reduce.EventFetcher: attempt_local1404022867_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events 16/02/20 23:22:42 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1404022867_0001_m_000000_0 decomp: 642 len: 646 to MEMORY 16/02/20 23:22:42 INFO reduce.InMemoryMapOutput: Read 642 bytes from map-output for attempt_local1404022867_0001_m_000000_0 16/02/20 23:22:42 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 642, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->642 16/02/20 23:22:42 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning 16/02/20 23:22:42 INFO mapred.LocalJobRunner: 1 / 1 copied. 16/02/20 23:22:42 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs 16/02/20 23:22:42 INFO mapred.Merger: Merging 1 sorted segments 16/02/20 23:22:42 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 629 bytes 16/02/20 23:22:42 INFO reduce.MergeManagerImpl: Merged 1 segments, 642 bytes to disk to satisfy reduce memory limit 16/02/20 23:22:42 INFO reduce.MergeManagerImpl: Merging 1 files, 646 bytes from disk 16/02/20 23:22:42 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce 16/02/20 23:22:42 INFO mapred.Merger: Merging 1 sorted segments 16/02/20 23:22:42 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 629 bytes 16/02/20 23:22:42 INFO mapred.LocalJobRunner: 1 / 1 copied. 16/02/20 23:22:42 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords 16/02/20 23:22:42 INFO mapreduce.Job: Job job_local1404022867_0001 running in uber mode : false 16/02/20 23:22:42 INFO mapreduce.Job: map 100% reduce 0% 16/02/20 23:22:43 INFO mapred.Task: Task:attempt_local1404022867_0001_r_000000_0 is done. And is in the process of committing 16/02/20 23:22:43 INFO mapred.LocalJobRunner: 1 / 1 copied. 16/02/20 23:22:43 INFO mapred.Task: Task attempt_local1404022867_0001_r_000000_0 is allowed to commit now 16/02/20 23:22:43 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1404022867_0001_r_000000_0' to hdfs://localhost:9000/output9/_temporary/0/task_local1404022867_0001_r_000000 16/02/20 23:22:43 INFO mapred.LocalJobRunner: reduce > reduce 16/02/20 23:22:43 INFO mapred.Task: Task 'attempt_local1404022867_0001_r_000000_0' done. 16/02/20 23:22:43 INFO mapred.LocalJobRunner: Finishing task: attempt_local1404022867_0001_r_000000_0 16/02/20 23:22:43 INFO mapred.LocalJobRunner: reduce task executor complete. 
16/02/20 23:22:43 INFO mapreduce.Job: map 100% reduce 100% 16/02/20 23:22:43 INFO mapreduce.Job: Job job_local1404022867_0001 completed successfully 16/02/20 23:22:43 INFO mapreduce.Job: Counters: 35 File System Counters FILE: Number of bytes read=15694 FILE: Number of bytes written=570076 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=1034 HDFS: Number of bytes written=1588 HDFS: Number of read operations=13 HDFS: Number of large read operations=0 HDFS: Number of write operations=4 Map-Reduce Framework Map input records=11 Map output records=10 Map output bytes=620 Map output materialized bytes=646 Input split bytes=104 Combine input records=0 Combine output records=0 Reduce input groups=10 Reduce shuffle bytes=646 Reduce input records=10 Reduce output records=10 Spilled Records=20 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=7 Total committed heap usage (bytes)=496500736 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=517 File Output Format Counters Bytes Written=1588 done !!
Step 9. Verify output
Now it is time to verify the output of the CustomWritable project. The above command creates an output folder on HDFS containing two files, _SUCCESS and part-r-00000. Let's see the output using the cat command below:
subodh@subodh-Inspiron-3520:~$ hadoop fs -cat /output/part*
16/02/20 23:25:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2010-01-26 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=5.18, stockPriceHigh=5.18, stockPriceLow=4.81, stockPriceClosed=4.84, stockVolume=554800]
2010-01-27 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.82, stockPriceHigh=5.16, stockPriceLow=4.79, stockPriceClosed=5.09, stockVolume=243500]
2010-01-28 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=5.12, stockPriceHigh=5.22, stockPriceLow=4.81, stockPriceClosed=4.98, stockVolume=283100]
2010-01-29 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.97, stockPriceHigh=5.05, stockPriceLow=4.76, stockPriceClosed=4.83, stockVolume=222900]
2010-02-01 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.84, stockPriceHigh=4.92, stockPriceLow=4.68, stockPriceClosed=4.75, stockVolume=194800]
2010-02-02 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.74, stockPriceHigh=5.0, stockPriceLow=4.62, stockPriceClosed=4.66, stockVolume=222700]
2010-02-03 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.65, stockPriceHigh=4.69, stockPriceLow=4.5, stockPriceClosed=4.55, stockVolume=182100]
2010-02-04 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.55, stockPriceHigh=4.69, stockPriceLow=4.39, stockPriceClosed=4.42, stockVolume=233800]
2010-02-05 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.42, stockPriceHigh=4.54, stockPriceLow=4.22, stockPriceClosed=4.41, stockVolume=194300]
2010-02-08 StockInfo [exchange=NYSE, stockSymbol=AEA, stockOpenPrice=4.42, stockPriceHigh=4.42, stockPriceLow=4.21, stockPriceClosed=4.24, stockVolume=205500]
You may also verify it in the web UI at "http://localhost:50070/explorer.html#/output".
Note - To verify your logic, you may also run CustomWritable directly from Eclipse.
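If you do run it from Eclipse, one common approach (an assumption on my part, not shown in this article) is to run StockDriver as a Java application and pass local input/output paths as program arguments under Run As -> Run Configurations -> Arguments, for example:

/home/subodh/programs/input/nyse_sample /home/subodh/programs/output

Remember that the output directory must not already exist when the job starts.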
Done :)
Download the complete example (source code and sample data) from the Source Code link.