



JAVA PROJECT



ABSTRACT

Existing parallel mining algorithms for frequent itemsets lack a mechanism that enables automatic parallelization, load balancing, data distribution, and fault tolerance on large clusters. As a solution to this problem, we design a parallel frequent itemset mining algorithm called FiDoop using the MapReduce programming model. To achieve compressed storage and avoid building conditional pattern bases, FiDoop incorporates the frequent items ultrametric tree (FIU-tree) rather than conventional FP-trees. In FiDoop, three MapReduce jobs are implemented to complete the mining task. In the crucial third MapReduce job, the mappers independently decompose itemsets, and the reducers perform combination operations by constructing small ultrametric trees and mining those trees separately. We implement FiDoop on our in-house Hadoop cluster and show that its performance is sensitive to data distribution and dimensionality, because itemsets of different lengths have different decomposition and construction costs. To improve FiDoop's performance, we develop a workload balance metric to measure load balance across the cluster's computing nodes. We further develop FiDoop-HD, an extension of FiDoop, to speed up mining for high-dimensional data analysis. Extensive experiments using real-world celestial spectral data demonstrate that the proposed solution is efficient and scalable.




FREQUENT ITEMSET MINING

Frequent itemset mining (FIM) is one of the most critical and time-consuming tasks in association rule mining (ARM). ARM, an often-used data mining task, provides a strategic resource for decision support by extracting the most important frequent patterns that simultaneously occur in a large transaction database. A typical application of ARM is the well-known market basket analysis.
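As a minimal illustration of what frequent itemset mining computes (this is a toy sketch, not the FiDoop algorithm), the Java snippet below counts the support of a few candidate itemsets over a small market-basket style transaction database; the transactions, candidates, and minimum support threshold are made up for the example.

import java.util.*;

public class SupportCount {
    public static void main(String[] args) {
        // Toy transaction database: each transaction is a set of purchased items.
        List<Set<String>> transactions = Arrays.asList(
                new HashSet<>(Arrays.asList("bread", "milk")),
                new HashSet<>(Arrays.asList("bread", "diaper", "beer")),
                new HashSet<>(Arrays.asList("milk", "diaper", "beer")),
                new HashSet<>(Arrays.asList("bread", "milk", "diaper", "beer")));

        // Candidate itemsets whose support we want to measure.
        List<Set<String>> candidates = Arrays.asList(
                new HashSet<>(Arrays.asList("bread", "milk")),
                new HashSet<>(Arrays.asList("diaper", "beer")));

        int minSupport = 2; // absolute minimum support threshold (assumed for the example)

        for (Set<String> candidate : candidates) {
            int support = 0;
            for (Set<String> txn : transactions) {
                if (txn.containsAll(candidate)) {
                    support++; // the transaction contains every item of the candidate
                }
            }
            if (support >= minSupport) {
                System.out.println(candidate + " is frequent with support " + support);
            }
        }
    }
}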


MAPREDUCE FRAMEWORK

MapReduce is a popular data processing paradigm for efficient and fault-tolerant workload distribution in large clusters. A MapReduce computation has two phases, namely the Map phase and the Reduce phase. The Map phase splits the input data into a large number of fragments, which are evenly distributed across Map tasks on a cluster of nodes for processing. Each Map task takes in a key-value pair and generates a set of intermediate key-value pairs. After the MapReduce runtime system groups and sorts all the intermediate values associated with the same intermediate key, it delivers the intermediate values to the Reduce tasks.
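To make the Map and Reduce phases concrete, here is a minimal Hadoop sketch that counts how often each item occurs across transactions, the way the first job of a parallel FIM pipeline typically does. This is an illustrative example rather than the exact FiDoop job; the class names ItemCountMapper and ItemCountReducer and the comma-separated transaction format are assumptions.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: each input line is one transaction; emit (item, 1) for every item.
public class ItemCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text item = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split(",")) { // assumed comma-separated transactions
            item.set(token.trim());
            context.write(item, ONE);
        }
    }
}

// Reduce phase: the framework groups the 1s by item; summing them yields the support count.
class ItemCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}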


PARALLEL FP-GROWTH ALGORITHM

Parallel FP-Growth, or PFP for short, is a parallel version of the popular FP-Growth algorithm and is implemented in Mahout, an open-source machine learning library developed on Hadoop clusters. FP-Growth efficiently discovers frequent itemsets by constructing and mining a compressed data structure (the FP-tree) rather than scanning the entire database repeatedly. PFP was designed to address synchronization issues by partitioning the transaction database into independent partitions, because each partition is guaranteed to contain all the data relevant to the features (or items) of its group, as sketched below.
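The sketch below is a simplified, assumed illustration of this group-dependent partitioning idea, not Mahout's actual PFP code: items are hashed into a fixed number of groups, and each transaction contributes, for every group it touches, the longest frequency-ordered prefix ending at one of that group's items, so each group's shard can be mined locally.

import java.util.*;

// Simplified sketch of PFP-style group-dependent partitioning.
// The item-to-group hashing and list representation are assumptions for illustration.
public class GroupPartitioner {

    public static Map<Integer, List<List<String>>> partition(
            List<List<String>> transactions, int numGroups) {
        Map<Integer, List<List<String>>> shards = new HashMap<>();
        for (List<String> txn : transactions) {
            // txn is assumed to be sorted by descending item frequency already.
            Set<Integer> emittedGroups = new HashSet<>();
            for (int i = txn.size() - 1; i >= 0; i--) {
                int group = Math.abs(txn.get(i).hashCode()) % numGroups; // assumed grouping by hash
                // For each group, emit only the longest prefix ending at one of its items;
                // that prefix carries all the data needed to mine the group's patterns locally.
                if (emittedGroups.add(group)) {
                    List<String> prefix = new ArrayList<>(txn.subList(0, i + 1));
                    shards.computeIfAbsent(group, g -> new ArrayList<>()).add(prefix);
                }
            }
        }
        return shards;
    }

    public static void main(String[] args) {
        List<List<String>> txns = Arrays.asList(
                Arrays.asList("f", "c", "a", "m", "p"),
                Arrays.asList("f", "c", "a", "b", "m"));
        System.out.println(partition(txns, 3));
    }
}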


Existing System

In the existing system, rather than considering Apriori and FP-growth, we incorporate the frequent items ultrametric tree (FIU-tree) in the design of our parallel FIM technique. We focus on the FIU-tree because of its four salient advantages: it reduces I/O overhead, offers a natural way of partitioning a dataset, provides compressed storage, and averts recursive traversal.


Proposed System

In the proposed system, we introduce a new data partitioning method to balance the computing load well among the cluster nodes, and we develop FiDoop-HD, an extension of FiDoop, to meet the needs of high-dimensional data processing.
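The sketch below is a hypothetical illustration of load-aware data partitioning, not the paper's actual FiDoop-HD partitioner: it assumes the decomposition cost of an itemset grows with its length (roughly the number of its non-empty proper subsets) and greedily assigns itemsets to the least-loaded node.

import java.util.*;

// Hypothetical sketch of load-aware itemset distribution. The cost model and the
// greedy assignment are illustrative assumptions, not the FiDoop-HD design.
public class LoadAwarePartitioner {

    // Assume the decomposition cost of a k-itemset grows with the number of its
    // non-empty proper subsets, i.e. roughly 2^k - 2.
    static long estimatedCost(List<String> itemset) {
        return (1L << itemset.size()) - 2;
    }

    // Greedily assign each itemset to the node with the smallest accumulated cost.
    static Map<Integer, List<List<String>>> assign(List<List<String>> itemsets, int numNodes) {
        long[] load = new long[numNodes];
        Map<Integer, List<List<String>>> plan = new HashMap<>();
        // Place the most expensive itemsets first to tighten the balance.
        itemsets.sort((a, b) -> Long.compare(estimatedCost(b), estimatedCost(a)));
        for (List<String> itemset : itemsets) {
            int best = 0;
            for (int n = 1; n < numNodes; n++) {
                if (load[n] < load[best]) best = n;
            }
            load[best] += estimatedCost(itemset);
            plan.computeIfAbsent(best, n -> new ArrayList<>()).add(itemset);
        }
        return plan;
    }

    public static void main(String[] args) {
        List<List<String>> itemsets = new ArrayList<>(Arrays.asList(
                Arrays.asList("a", "b", "c", "d"),
                Arrays.asList("a", "b"),
                Arrays.asList("c", "d", "e"),
                Arrays.asList("b", "c")));
        System.out.println(assign(itemsets, 2));
    }
}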


Conclusion




To solve the scalability and load balancing challenges in existing parallel mining algorithms for frequent itemsets, we applied the MapReduce programming model to develop a parallel frequent itemset mining algorithm called FiDoop. FiDoop incorporates the frequent items ultrametric tree (FIU-tree) rather than conventional FP-trees, thereby achieving compressed storage and avoiding the need to build conditional pattern bases.

FiDoop seamlessly integrates three MapReduce jobs to accomplish parallel mining of frequent itemsets. The third MapReduce job plays an important role in parallel mining: its mappers independently decompose itemsets, whereas its reducers construct small ultrametric trees that are mined separately. We further improve FiDoop's performance by balancing I/O load across the data nodes of the cluster.
