Big Data: Issues, Challenges and Techniques in Business Intelligence

Mudasir Ahmad Wani1, Suraiya Jabin1

1 Department of Computer Science, Jamia Millia Islamia, New Delhi-110025, India
[email protected], [email protected]
Abstract. During the last decade, the most challenging problem the world has faced is the Big Data problem: data is growing at a much faster rate than computational speed. This is a consequence of storage becoming cheaper day by day, so individuals as well as almost all business and scientific organizations are storing more and more data. Social activities, scientific experiments, biological explorations and sensor devices are major contributors of big data. Big data is beneficial to society and business, but at the same time it brings challenges to the scientific community. Existing traditional tools, machine learning algorithms and techniques are not capable of handling, managing and analyzing big data, although various scalable machine learning algorithms, techniques and tools (e.g. the Hadoop and Apache Spark open source platforms) are now prevalent. In this paper we identify the most pertinent issues and challenges related to big data and present a comprehensive comparison of various techniques for handling the big data problem.
Keywords: Big Data, Business intelligence, Online Social Networks, Big Data Analytics, Hadoop MapReduce, Apache Spark
1 Introduction

Data is growing exponentially as it is being generated and recorded from everyone and everywhere: online social networks, sensor devices, health records, human genome sequencing, phone logs, government records, and professionals such as scientists, journalists and writers. The formation of such a huge amount of data from multiple sources, with high volume and velocity, by a variety of digital devices gives birth to the term Big Data. As big data grows with high velocity (speed), it becomes very complex to handle, manage and analyze using existing traditional systems. Data stored within data warehouses differs from big data: the former is cleaned, managed, known and trusted, while the latter includes all the warehouse data as well as the data that these warehouses are not capable of storing. The big data problem means that a single machine can no longer process, or even hold, all of the data that we want to analyze. The only solution we have is to distribute the data over large clusters. An example of a large cluster is one of Google's data centers, which contain tens of thousands of machines.

1.1 Big Data: Definition

Big data can be described as the ample amount of data which differs from traditional warehouse data in terms of size and structure. It can be viewed as a mixture of unstructured, semi-structured and structured data, and its volume is considered to be in the range of Exabytes (10^18 bytes). Different authors have defined big data differently: some used variety, volume, velocity, variability, complexity and value to define it; others defined big data as a volume of data in the range of Exabytes which existing technology is not capable of effectively holding, managing and processing; still others describe big data simply as the explosion of information. Analysts at Gartner [5, 6] described the characteristics of big data as huge Volume, fast Velocity, and diverse Variety, also termed the 3Vs.
Most commonly, big data is the ample amount of data (mostly semi-structured or unstructured) for which new technologies and architectures are needed to mine valuable information. Online social media networks (Facebook, Twitter, LinkedIn, Quora, Google+ etc.) are the main contributors of big data through the continuous sharing of information, status updates, videos, photos and so on. Figure 1 shows a snapshot of some big data contents generated in one minute. According to one study, more than 80% of the data on the planet today was generated in the last couple of years alone.
Figure 1: Some Big Data ingredients.
Along with different varieties of data, a huge quantity of data is also being generated every second, requiring organizations to make real-time decisions and responses. But existing analytical techniques can hardly extract useful information in real time from such a huge volume of data with so many varieties. If the data has reached terabytes (10^12 bytes) or petabytes (10^15 bytes) in size, or a single organization does not have enough space to store it, it is considered big. Also, according to Laney, big data has three key characteristics: high variety, huge volume and greater velocity. Various other studies have introduced a fourth V as one more dimension of big data; all four Vs are shown in Figure 2.
Figure 2: Four Vs of Big Data
1.2 Business Intelligence and Big Data

Business intelligence (BI) refers to a technology-oriented process for analyzing data and presenting actionable information to help scientists, corporate executives, business managers and other end users make more informed business decisions. BI covers a number of tools, applications and methods that help business firms gather data from internal as well as external sources, prepare it for analysis, create and execute queries to gain valuable information from the data, and generate reports and charts for data visualization, so that the analytical results help organizations make accurate and quick decisions. Business intelligence usually includes methods like statistical/quantitative analysis, data mining and analytics [3, 8], predictive modeling/analytics, big data analytics and text analytics [39, 40, 41] for effective decision making. The process of analyzing large data-sets (big data) containing different varieties of data types in order to reveal unseen patterns, unknown relations, customer interests, new marketing strategies and other important business information is called big data analytics. Big data analytics plays an important role in making business more effective, helping to achieve greater customer satisfaction, enhanced outputs and other business profits. The key objective of big data analytics is to aid data scientists, analysts and other business professionals in making effective and accurate business decisions by analyzing the ample amount of transactional and other data, which was not possible with conventional business intelligence. Business organizations are taking advantage of analytical tools and techniques to profit from the data available; they are also employing data scientists who are adept at managing big data and drawing useful insights from it. Big data is going to change the way we think, make decisions and do business. Managing big data usefully has the potential to help companies take faster and more intelligent decisions. The most prevalent method for storing and managing data has been the relational database management system (RDBMS). But RDBMSs can be used for structured data only; they cannot deal with semi-structured or unstructured data, nor can they handle very large amounts of heterogeneous data. The capability to analyze big data effectively is considered one of the reasons for the success and popularity of any business organization. The question that arises here is how companies tackle the situation while dealing with the ever-increasing amount of data. It has been argued that the main reason companies lose competitiveness is failure to analyze information in a systematic manner.
It has also been argued that it is more beneficial for companies to store and analyze large datasets with MapReduce instead of traditional databases. Mining of big data has opened up many new opportunities and challenges in business. Even though big data contains great value (the ability to analyze data to develop actionable information), extracting the hidden valuable information from it faces many challenges, because traditional database systems and data mining techniques do not scale to big data. Existing systems and technology need massively parallel processing architectures and distributed storage systems to cope with big data. NoSQL and distributed file systems (DFS) can be choices for storing and managing large datasets, but their capacity is also limited. Two of the most popular techniques, Hadoop MapReduce and Apache Spark, are introduced and compared as solutions for big data analytics in Section 4. No doubt, big data analytics is one of the effective ways to identify business opportunities, and firms lacking it will not gain a competitive advantage. For any business organization, what is actually important is to convert data into information and extract valuable, deep understanding from this information. In the present paper we make an effort to bring together the issues, challenges and techniques of big data in one place. Section 2 reviews the issues of big data. Section 3 addresses some emerging challenges of big data from a business intelligence perspective. Section 4 provides a comparative discussion of two widely used big data processing techniques, Hadoop MapReduce and Apache Spark, and finally Section 5 concludes the discussion.
2 Issues of Big Data

Many researchers have discussed various big data issues in the literature; we have tried to summarize the most relevant ones in this section.

2.1 Data Management Issue
Unmanaged data is always treated as unwanted data. Since big data is formed by multiple heterogeneous sources with different formats and representations, managing it requires high-performance, multi-dimensional management tools; otherwise we are likely to get unacceptable results. Also, as one of the characteristics of big data is its variety, business organizations need more sophisticated data stores, with elasticity and scalability, to manage data with heterogeneous formats and structures. For better marketing strategies, business professionals often need relevant, cleaned, accurate, complete and managed data to perform analysis. Management of data includes tasks like cleaning, transformation, clarification, dimension reduction and validation. Firms can use business intelligence to manage large amounts of data; for example, quantum computing and in-memory database management systems allow economical and quick management of large datasets. But existing businesses are already established on traditional platforms, and moving a whole business to a new platform can be very expensive and time consuming. As big data is not in a managed form, it becomes very complex for business organizations to analyze it and extract meaningful information from it. There is still a need to upgrade the existing big data management techniques and tools, and to deploy scalable data management tools and techniques from the beginning when setting up a new business.

2.2 Storage Issue

The more information we have, the more accurate decisions (marketing strategies) we can make. Also, according to big data professionals, a good amount of the world's information exists within massive, unstructured big data. From this observation, we can see how important big data is for the growth of any business organization. But unfortunately we lack devices that can store this ample amount of data, and as a result our decisions, marketing strategies, recommendation systems etc. can be very poor. Existing systems have a storage capacity of up to about 4 terabytes per disk, while big data is typically measured in exabytes. So, to store 1 exabyte we would need around 250,000 such disks, and it would be very complex, almost infeasible, to attach such a huge number of disks to a single system. One possible solution could be to store the data in the cloud. But moving big data to a cloud (or any storage place) is like filling a swimming pool with a drinking straw: it would take a very long time to transfer the data from multiple data sources to the cloud and back from the cloud to the processing point. To overcome this transfer issue, two methods have been proposed. First, process the data at the same place where it is stored and transfer only the required information; more specifically, bring the processing code to the stored data instead of transferring the stored data to the processing code, as in the MapReduce model. Second, transfer only the part of the data that seems most critical for analysis.
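The disk-count estimate above is simple arithmetic; as a quick sanity check (assuming the 4 TB per-disk capacity cited above and decimal units):

```python
# Sanity check: how many 4 TB disks are needed to hold 1 exabyte?
# (decimal units: 1 EB = 10^18 bytes, 1 TB = 10^12 bytes)
EXABYTE = 10**18
DISK_CAPACITY = 4 * 10**12  # 4 terabytes per disk, as cited in the text

disks_needed = EXABYTE // DISK_CAPACITY
print(disks_needed)  # 250000
```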
Since data is populated in terabytes and beyond (Figure 3 shows the scale of data in bytes) while existing storage capacity is very limited, it is quite difficult for business organizations to decide which part of the data can be skipped, which part is of greater value, and which optimal set of attributes can represent the whole dataset. So, there is a pertinent need for tools and methods which can help firms identify the optimal features (or principal components) out of thousands of attributes to understand customers in depth.

2.3 Processing Issue
Nowadays, on-time results really matter, especially for business organizations: if results are not generated accurately and on time, they are of little use. In the current scenario most organizations have moved their business from 'brick and mortar' mode to online mode in order to reach customers and boost sales globally, which results in a storm of data. Existing infrastructure, machinery and techniques are not capable of processing such an ample amount of data in real time, which leaves business organizations handicapped. Although some advanced indexing schemes (like FastBit) and processing methods like MapReduce are available to boost processing speed, processing Zettabytes (10^21 bytes) or even Exabytes (10^18 bytes) of data is still a challenging task. As shown in Figure 2, one of the big data characteristics is the velocity with which it is generated. Data comes from multiple sources at great speed (Figure 1) and needs to be processed in real time by business organizations in order to gain a competitive edge in the market. Many organizations use MapReduce for long-running batch jobs. For real-time processing of big data, simple scalable streaming systems (S4) have been proposed. Besides accurate processing of real-time data, organizations are also looking for fast processing; therefore conventional data processing systems should be upgraded to become not only accurate but also fast. Figure 3 shows the growth of data in bytes from a single bit to a Yottabyte. For convenience, we show the bytes using exponential powers from Megabyte to Yottabyte.
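The contrast between batch and stream processing can be illustrated with a toy sketch (plain Python, not the actual S4 or Storm API): a streaming system maintains a running aggregate that is updated once per incoming event, so query results reflecting all data so far are available immediately, without re-scanning the whole dataset.

```python
from collections import defaultdict

# Toy sketch of stream-style processing (illustration only, not S4's API):
# a running aggregate is updated as each event arrives, so results are
# available in (near) real time instead of after a full batch re-scan.

class RunningWordCount:
    def __init__(self):
        self.counts = defaultdict(int)

    def on_event(self, line):
        # Called once per incoming record; work proportional to the record,
        # never to the whole accumulated dataset.
        for word in line.split():
            self.counts[word] += 1

    def query(self, word):
        # The answer is available immediately, reflecting all events so far.
        return self.counts[word]

stream = ["big data arrives fast", "big clusters process big data"]
counter = RunningWordCount()
for event in stream:
    counter.on_event(event)

print(counter.query("big"))   # 3
print(counter.query("data"))  # 2
```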
Figure 3: Data scale from a bit to a Yottabyte
From the above figure, it is clear that data has grown beyond terabytes and even petabytes. Our most advanced machines (like supercomputers) are capable of storing and processing data only up to petabytes. Therefore organizations dealing with big data need advanced machines and methods which can store and process data beyond petabytes.

Table 1: Summary of issues

Issue: Management
Possible solutions: Quantum computing and in-memory database management systems.
Remark: Moving the whole business to a new platform can be very expensive and time consuming.

Issue: Storage
Possible solutions: NoSQL, distributed file systems (DFS) and cloud computing.
Remark: Storing one exabyte needs around 250,000 disks, which is complex, and loading onto the cloud is time consuming.

Issue: Processing
Possible solutions: Advanced indexing schemes, MapReduce and simple scalable streaming systems (S4).
Remark: Processing Zettabytes (10^21) or even Exabytes (10^18) of data still seems a matter of concern.
At a glance, Table 1 summarizes the issues identified above. Efficient big data processing can be one of the effective ways to identify business opportunities, but in order to gain a competitive edge, firms certainly need to handle these issues.
3 Challenges for Big Data

Opportunities and challenges always travel together. On the one hand big data brings various openings for society and business, but on the other hand it also brings a huge number of challenges. So far, researchers have identified and addressed plentiful challenges faced while dealing with big data, such as storage and transport; management and processing; variety and heterogeneity; scalability, velocity and accuracy; privacy and security; data access and information sharing; skill requirements; technical or hardware-related challenges; and analytical challenges. On the basis of this literature survey, we address some of the most pertinent challenges which need the immediate attention of researchers.
3.1 Lack of Big Data Professionals

Recently devised big data processing tools and platforms include MapReduce, Hadoop, Dryad, Apache Spark, Apache Mahout, Tableau [9, 11] etc. But besides developing these high-performance, complex technologies, organizations need highly skilled professionals to handle and use these tools according to the organization's needs. No doubt there are experts around big data as well, but in the current scenario special training should be given to novice practitioners so that they become proficient in dealing with big data from different dimensions, including data modeling, data architecture and data integration. According to a report by McKinsey & Company, the US may need 140,000 to 190,000 skilled persons for data analysis, as well as more than one million managers and analysts with advanced analytical knowledge and skills, to make correct and accurate decisions. So we can conclude that for business organizations engaged with big data analytics and frameworks, there is huge demand for professionals, namely data scientists and engineers, to address challenges like data architecture and management for efficient decision making. Every business firm needs to recruit big data analysts in order to stand in the competitive market. According to one study, every organization should have a special data force (SDF) with strong analytical skills to deal with big data. Big data analysts, along with business intelligence, have been seen as one of the reasons for the exponential growth of businesses.

3.2 Interactiveness (or Designing)

Interactiveness of a data mining system is the capability which allows users to interact with the system efficiently, including user feedback, guidance and suggestions. Coming to big data mining, interactiveness is considered one of the critical challenges for system designers, especially in business organizations. Better interactiveness can help overcome the challenges found around the 3Vs (Volume, Velocity and Variety). Adequate user interaction gives users a better way to identify their domain of interest out of a huge volume of data, which also allows marketing experts to easily obtain the mining results. Since the user is then concerned only with his or her subspace, processing speed (velocity) is boosted and the scalability of the system is increased. The heterogeneity (or variety) of big data introduces complexity into the data, which may also complicate the mining results; however, better interactiveness provides ways to easily visualize big data processing and mining results. It can be concluded that the mining results of a system with better interactiveness have a fairer chance of being accepted by potential users, while lack of interactiveness in a data mining system will likely end in poor or unacceptable mining results. According to the authors in [47, 9], organizations dealing with big data need to design systems in such a way that they understand both the customer needs and the technology to be used to solve the problem. It has also been shown that valuable customers were paid the least attention by business organizations because of poor interaction systems. Therefore, in order to attract valuable customers and understand the needs of each individual customer out of thousands, designers need to take care of interfaces, graphics, conceptual models, user feedback systems etc.

3.3 Loading and Synchronization

Data loading means getting data from multiple heterogeneous data sources into a single data repository.
The loading process suffers from various issues that need the keen attention of researchers and practitioners: multiple data sources must be mapped into a unified structural framework, and tools and infrastructure that cope with the size and speed of big data should be available and should transfer the data in a timely manner. Along with these loading issues, synchronization across different data sources is also considered one of the critical challenges. Once data gets loaded into a big data platform from different sources, at different time intervals and with different velocities, it is likely to fall out of synchronization. Synchronization among data sources refers to the process of establishing consistency of data over time between the different data sources and the common repository; in other words, data coming from different sources should match with respect to time and sequence. If a big data processing system cannot guarantee synchronization, it is likely to produce inconsistent or even invalid data, which leads to poor and/or incorrect mining results. Therefore great attention should be paid to synchronization among data sources in order to help business organizations avoid risks in the analysis process and draw accurate and appropriate conclusions. The heterogeneous nature of the data also makes it more complex for businesses to transform and clean data before loading it into a warehouse for analysis. Hadoop and MapReduce, employed by many firms, provide ways to efficiently process such unstructured data.

3.4 Visualization

Data visualization is the process of representing knowledge in an understandable way in order to enable more efficient decision making. As big data grows exponentially with unbridled velocity and huge volume, it becomes very difficult to extract the hidden information because of the unavailability of scalable visualization tools.
There is no doubt that online marketplaces (e.g. eBay) are using big data visualization tools like Tableau, and the other tools mentioned in Section 3.1, to transform their large, complex data sets into pictorial formats that make the data clearly understandable. But as the tsunami of big data approaches at very high speed, these existing visualization tools are likely to become inadequate in the near future. So far researchers have paid attention to the visualization challenges of big data in the current scenario, but we must be prepared to face future challenges as well: to hold and visualize Zettabytes (10^21 bytes) or even Yottabytes (10^24 bytes) of data and beyond, we should have tools ready in advance. As data is generated from everywhere by everyone, through online social networks, medical science, geo-stationary satellites, sensors etc., big data is becoming "bigger data", so there are greater chances of facing challenges in the future, and we should prepare with new technologies and tools for challenges which are approaching fast and unannounced. Big data visualization techniques have the responsibility of visually presenting analytical results using different graphs for decision making. A visual report conveys much more than a textual report, and visualization techniques have proved effective for complex customer data. Visualization tools like Tableau and QlikView have been used by businesses to increase throughput over big data; they also provide visualizations specific to the business and ensure meaningful data discovery.
Table 2: Summary of challenges

Challenge: Lack of Big Data Professionals
Possible approaches: Establishment of a special data force (SDF) with advanced analytical skills.
Remark: Expensive, but necessary to survive.

Challenge: Interactiveness (Designing)
Possible approaches: Design of systems taking both user needs and technology into consideration.
Remark: User-interactive designs satisfy customers, and a satisfied customer is itself an advertisement.

Challenge: Loading and Synchronization
Possible approaches: Hadoop and MapReduce to load various formats of data in a distributed and synchronous manner.
Remark: The heterogeneous nature of data is the reason which raised this challenge.

Challenge: Visualization
Possible approaches: Visualization tools like Tableau, QlikView etc.
Remark: Businesses use visualization tools to increase throughput over big data.
At a glance, Table 2 above summarizes the identified challenges and the solutions applied so far. But shortcomings still exist in the available approaches, and they need the close attention of researchers.
4. Techniques for Big Data Processing

Keeping processing latency as the prime concern, two methods have been suggested and employed for big data processing: batch-based stored-data processing and real-time data-stream processing. The two most promising open source frameworks, Hadoop MapReduce and Apache Spark, are explored, along with a discussion of when to use which, in the following subsections.

4.1 Hadoop

Hadoop is a freely available framework and has been used by data analysts, researchers and other professionals for more than eight years as a big data processing platform. Hadoop MapReduce is a good choice for processing data which requires all its input to be read exactly once (one-pass computation), while it is very slow for multi-pass computations. Google developed MapReduce with two components to process large data-sets: Map and Reduce. A map function computes key/value pairs from the inputs, and a reduce function combines the results of the map function into a scalar. In order to take full advantage of Hadoop MapReduce, we need to convert inputs into MapReduce form. The framework is also responsible for scheduling and monitoring tasks and for re-executing failed tasks. Since in the MapReduce framework the output data is written to the distributed file system (DFS) after every step, before the next step begins, processing speed slows down. It also deals with large clusters, which are complicated and laborious to manage. Furthermore, integration of several approaches is required in many cases of big data processing; for example, for stream-data processing and for producing machine learning algorithms we need Storm and Mahout respectively. Hence, in order to execute a set of complicated tasks, we have to run a series of MapReduce jobs in a specific sequence; MapReduce is designed for long-running batch jobs rather than low-latency workloads.
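The Map and Reduce components described above can be sketched on a single machine in plain Python (an illustration of the pattern only, not the Hadoop API; Hadoop distributes these same phases over a cluster and persists intermediate output to the DFS):

```python
from itertools import groupby
from operator import itemgetter

# Single-machine sketch of the MapReduce word-count pattern
# (illustrative only; Hadoop runs these phases distributed over a cluster).

def map_phase(document):
    # Emit a (key, value) pair per word, as the map component does.
    for word in document.split():
        yield (word, 1)

def shuffle(pairs):
    # Group intermediate pairs by key; Hadoop performs this between the two
    # phases and writes the intermediate output to the distributed file system.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield key, [v for _, v in group]

def reduce_phase(key, values):
    # Combine the values for each key into a scalar result.
    return key, sum(values)

documents = ["big data big clusters", "data needs clusters"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs))
print(counts)  # {'big': 2, 'clusters': 2, 'data': 2, 'needs': 1}
```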
4.2 Apache Spark

Like MapReduce, Spark is also a cluster computing framework, with language-integrated APIs and parallel operators. It was developed at the AMPLab at UC Berkeley and has been open sourced since 2010 as an Apache project. Spark is more advantageous than Hadoop MapReduce, Storm and other big data technologies: it provides a unified approach to the processing requirements of big data and can run applications in a Hadoop cluster up to 100 times faster in memory and 10 times faster on disk. Further, it assists in writing applications (in Scala, Java or Python) with more than 75 high-level operators. In addition to Map and Reduce operations, it also supports SQL queries, streaming data, machine learning and graph processing. A directed acyclic graph (DAG) execution engine is used in Spark to build complex, multi-step data pipelines and support in-memory data sharing among different jobs.
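The lazy, DAG-structured pipeline idea can be illustrated with a toy Python simulation (a hypothetical `ToyRDD` class for illustration, not the real PySpark API): transformations only record lineage, an action finally triggers execution, and a cached result stays in memory for reuse across jobs.

```python
# Toy simulation of Spark-style lazy transformations and caching
# (illustration only; the real PySpark API differs).

class ToyRDD:
    def __init__(self, compute):
        self._compute = compute   # deferred computation: a node in the DAG
        self._cache = None

    def map(self, f):
        # Build a new node in the lineage graph; nothing runs yet.
        return ToyRDD(lambda: [f(x) for x in self.collect()])

    def filter(self, pred):
        return ToyRDD(lambda: [x for x in self.collect() if pred(x)])

    def cache(self):
        # Force evaluation and keep the result in memory for reuse
        # (real Spark caches lazily, on the first action).
        self.collect()
        return self

    def collect(self):
        # An action triggers execution of the whole lineage; after a
        # failure, real Spark rebuilds a lost partition from this lineage.
        if self._cache is None:
            self._cache = self._compute()
        return self._cache

data = ToyRDD(lambda: list(range(10))).cache()
evens_squared = data.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens_squared.collect())  # [0, 4, 16, 36, 64]
```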
4.3 Comparison of Hadoop MapReduce and Apache Spark

Spark is designed to run on top of Hadoop and is an alternative to the traditional batch map/reduce model. Although both Hadoop MapReduce and Apache Spark were developed to deal with big data, there are some differences between the two. Table 3 highlights some of these differences. Other big data tools also exist, like GridGain, HPCC and Storm, but Hadoop MapReduce and Apache Spark are the most popular and commonly used.

Table 3: Key differences between Hadoop MapReduce and Apache Spark
Aspect: Intermediate output
Hadoop MapReduce: Output of Map tasks is directly written to disk after completion.
Apache Spark: After each Map task, output is written to an in-memory buffer.

Aspect: Intended workload
Hadoop MapReduce: A parallel data processing method for long-running jobs that take minutes or even hours to complete.
Apache Spark: Designed to process real-time stream data and SQL queries that take a few seconds to complete.

Aspect: Fault tolerance
Hadoop MapReduce: To recover from errors, Hadoop uses the concept of replication.
Apache Spark: Recovery from errors is achieved using data storage models such as RDDs (resilient distributed datasets), which transparently store data in memory and reconstruct it automatically from lineage in case of a failure.

Aspect: Memory requirements
Hadoop MapReduce: Hadoop does not have any memory issue, but it is not good for iterative algorithms.
Apache Spark: Spark performs well for iterative operations, but has high memory requirements; it uses more RAM in place of network and disk I/O. It is relatively fast compared to Hadoop, but since it uses a large amount of RAM it needs a dedicated high-end physical machine to produce effective results.
Thus, Apache Spark can be recommended as the most suitable choice for future big data applications that are likely to require low-latency queries, iterative computation and real-time processing of similar data.
5. Conclusions and Future Directions

As we live in the era of big data, there is a need for modern, high-performance, capable equipment along with scalable techniques and algorithms to deal with the issues and challenges which arise while working with large data-sets. Big data analytics is one of the reasons for the universal success of any business organization; organizations lagging behind in big data analytics are likely to suffer monetary losses in terms of future customers and better future investments. The birth of big data revealed the shortcomings of existing data mining technologies, which in turn raised new challenges. In this paper, we have presented a brief overview of big data along with its key properties, and identified some challenges of big data. A brief introduction to, and comparison of, the most popular big data processing frameworks, Hadoop MapReduce and Apache Spark, is presented, which can help young researchers and data scientists analyze big data and uncover hidden, unknown patterns. A rigorous effort from researchers is needed to overcome the existing challenges and to be ready to deal with upcoming challenges, both in terms of hardware and software. It can be concluded that Apache Spark is perceived as a better alternative to Hadoop MapReduce, as it is more efficient for stream processing, e.g. log processing and fraud detection in live streams for alerts, aggregates and analysis. Recent and future research in big data analysis includes fake identity detection in online social networks, identification and ranking of influential personalities in online social media, gaining competitive advantage by enhancing supply chain innovation capabilities and thus helping business economics, understanding the basis of crop diseases from plant genomics data, and gaining more insight into human diseases by analyzing the human genome and next-generation sequencing data.
References

1. Kaisler, S., Armour, F., Espinosa, J.A., Money, W.: Big data: issues and challenges moving forward. In: 46th Hawaii International Conference on System Sciences (HICSS), pp. 995-1004. IEEE (2013)
2. Katal, A., Wazid, M., Goudar, R.: Big data: issues, challenges, tools and good practices. In: Sixth International Conference on Contemporary Computing (IC3), pp. 404-409. IEEE (2013)
3. Jabin, S., Zareen, F.J.: Biometric signature verification. International Journal of Biometrics 7(2), 97-118 (2015)
4. Fan, J., Han, F., Liu, H.: Challenges of big data analysis. National Science Review 1(2), 293-314 (2014)
5. Beyer, M.A., Laney, D.: The importance of big data: a definition. Gartner (2012)
6. Laney, D.: 3D data management: controlling data volume, velocity and variety. META Group Research Note 6, 70 (2001)
7. Che, D., Safran, M., Peng, Z.: From big data to big data mining: challenges, issues, and opportunities. In: Database Systems for Advanced Applications, pp. 1-15. Springer (2013)
8. Jabin, S.: Stock market prediction using feed-forward artificial neural network. International Journal of Computer Applications 99(9), 4-8 (2014)
9. Wu, X., Zhu, X., Wu, G.Q., Ding, W.: Data mining with big data. IEEE Transactions on Knowledge and Data Engineering 26(1), 97-107 (2014)
10. Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., Byers, A.H.: Big data: the next frontier for innovation, competition, and productivity (2011)
11. Chen, C.P., Zhang, C.Y.: Data-intensive applications, challenges, techniques and technologies: a survey on big data. Information Sciences 275, 314-347 (2014)
12. Simoff, S., Böhlen, M.H., Mazeika, A.: Visual Data Mining: Theory, Techniques and Tools for Visual Analytics, vol. 4404. Springer Science & Business Media (2008)
13. O'Driscoll, A., Daugelaite, J., Sleator, R.D.: 'Big data', Hadoop and cloud computing in genomics. Journal of Biomedical Informatics 46(5), 774-781 (2013)
14. Addressing Five Emerging Challenges of Big Data, https://www.progress.com/docs/default-source/default-documentlibrary/Progress/Documents/Papers/Addressing-Five-Emerging-Challenges-of-Big-Data.pdf
15. Webopedia: Unstructured Data, http://www.webopedia.com/TERM/U/unstructured_data.html
16. Marx, V.: Biology: the big challenges of big data. Nature 498(7453), 255-260 (2013)
17. Jabin, S.: Learning classifier systems approach for automated discovery of hierarchical censored production rules. In: Information and Communication Technologies, pp. 68-77. Springer (2010)
18. Hashem, I.A.T., Yaqoob, I., Anuar, N.B., Mokhtar, S., Gani, A., Khan, S.U.: The rise of "big data" on cloud computing: review and open research issues. Information Systems 47, 98-115 (2015)
19. Wu, K.: FastBit: an efficient indexing technology for accelerating data-intensive science. Journal of Physics: Conference Series 16, 556 (2005)
20. Assuncao, M.D., Calheiros, R.N., Bianchi, S., Netto, M.A., Buyya, R.: Big data computing and clouds: trends and future directions. Journal of Parallel and Distributed Computing 79, 3-15 (2015)
21. Ebner, K., Buhnen, T., Urbach, N.: Think big with big data: identifying suitable big data strategies in corporate environments. In: 47th Hawaii International Conference on System Sciences (HICSS), pp. 3748-3757. IEEE (2014)
22. Kim, G.H., Trimi, S., Chung, J.H.: Big-data applications in the government sector. Communications of the ACM 57(3), 78-85 (2014)
23. Kambatla, K., Kollias, G., Kumar, V., Grama, A.: Trends in big data analytics. Journal of Parallel and Distributed Computing 74(7), 2561-2573 (2014)
24. Zaharia, M., Chowdhury, M., Franklin, M.J., Shenker, S., Stoica, I.: Spark: cluster computing with working sets. In: Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, vol. 10, p. 10 (2010)
25. Apache Software Foundation: The Apache Software Foundation Blog (August 2014), https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces80
26. Mahout Tutorial (September 2015), http://www.tutorialspoint.com/mahout/
27. Spark 0.6.2 Overview (August 2015), http://spark.apache.org/docs/0.6.2/
28. Apache Storm (September 2015), https://storm.apache.org/
29. Han, J., Kamber, M.: Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers (2001)
30. Hadoop: MapReduce Tutorial, http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
31. Gu, L., Li, H.: Memory or time: performance evaluation for iterative operation on Hadoop and Spark. In: 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (HPCC_EUC), pp. 721-727. IEEE (2013)
32. Laney, D.: 3D data management: controlling data volume, velocity and variety. META Group Research Note 6, 70 (2001)
33. Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. Communications of the ACM 51(1), 107-113 (2008)
34. Dittrich, J., Quiane-Ruiz, J.A.: Efficient big data processing in Hadoop MapReduce. Proceedings of the VLDB Endowment 5(12), 2014-2015 (2012)
35. Triguero, I., Peralta, D., Bacardit, J., García, S., Herrera, F.: MRPR: a MapReduce solution for prototype reduction in big data classification. Neurocomputing 150, 331-345 (2015)
36. Shanahan, J.G., Dai, L.: Large scale distributed data science using Apache Spark. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2323-2324. ACM (2015)
37. Zheng, X., Zeng, Z., Chen, Z., Yu, Y., Rong, C.: Detecting spammers on social networks. Neurocomputing 159, 27-34 (2015)
38. Conti, M., Poovendran, R., Secchiero, M.: FakeBook: detecting fake profiles in on-line social networks. In: Proceedings of the 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012), pp. 1071-1078. IEEE Computer Society (2012)
39. Xiong, H.Y., Alipanahi, B., Lee, L.J., Bretschneider, H., Merico, D., Yuen, R.K., Hua, Y., Gueroussov, S., Najafabadi, H.S., Hughes, T.R., et al.: The human splicing code reveals new insights into the genetic determinants of disease. Science 347(6218), 1254806 (2015)
40. Kellis, M., Wold, B., Snyder, M.P., Bernstein, B.E., Kundaje, A., Marinov, G.K., Ward, L.D., Birney, E., Crawford, G.E., Dekker, J., et al.: Defining functional DNA elements in the human genome. Proceedings of the National Academy of Sciences 111(17), 6131-6138 (2014)
41. Tsikerdekis, M., Zeadally, S.: Multiple account identity deception detection in social media using nonverbal behavior. IEEE Transactions on Information Forensics and Security 9(8), 1311-1321 (2014)
42. McKenna, A., Hanna, M., Banks, E., Sivachenko, A., Cibulskis, K., Kernytsky, A., Garimella, K., Altshuler, D., Gabriel, S., Daly, M., et al.: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Research 20(9), 1297-1303 (2010)
43. Stich, V., Jordan, F., Birkmeier, M., Oflazgil, K., Reschke, J., Diews, A.: Big data technology for resilient failure management in production systems. In: Advances in Production Management Systems: Innovative Production Management Towards Sustainable Growth, pp. 447-454. Springer (2015)
44. Agrawal, D., Das, S., El Abbadi, A.: Big data and cloud computing: current state and future opportunities. In: Proceedings of the 14th International Conference on Extending Database Technology, pp. 530-533. ACM (2011)
45. McDaniel, M.A.: Big-brained people are smarter: a meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence 33(4), 337-346 (2005)
46. Anwar, T., Abulaish, M.: Ranking radically influential web forum users. IEEE Transactions on Information Forensics and Security 10(6), 1289-1298 (2015)
47. Stonebraker, M., Hong, J.: Researchers' big data crisis; understanding design and functionality. Communications of the ACM 55(2), 10-11 (2012)
48. Sawant, N., Shah, H.: Big data visualization patterns. In: Big Data Application Architecture Q&A, pp. 79-90 (2013)
49. Tan, K.H., Zhan, Y., Ji, G., Ye, F., Chang, C.: Harvesting big data to enhance supply chain innovation capabilities: an analytic infrastructure based on deduction graph. International Journal of Production Economics 165, 223-233 (2015)
50. Minelli, M., Chambers, M., Dhiraj, A.: Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today's Businesses. John Wiley & Sons (2012)
51. Vossen, G.: Big data as the new enabler in business and other intelligence. Vietnam Journal of Computer Science 1(1), 3-14 (2014)
52. Chen, M., Mao, S., Liu, Y.: Big data: a survey. Mobile Networks and Applications 19(2), 171-209 (2014)
53. Schön, D.A., Argyris, C.: Organizational learning: a theory of action perspective. Reis: Revista Española de Investigaciones Sociológicas 77, 345-350 (1997)
54. Buhl, H.U., Röglinger, M., Moser, D.K.F., Heidemann, J.: Big data. Business & Information Systems Engineering 5(2), 65-69 (2013)
55. Neumeyer, L., Robbins, B., Nair, A., Kesari, A.: S4: distributed stream computing platform. In: 2010 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 170-177 (2010)