Uncertainty in Big Data Analytics

The amount of data collected today is staggering. The article was written in mid-2019; how much data is collected daily now? The National Security Agency monitors hot spots for terrorist activity using drone feeds, and it admitted several years ago that analyzing what it had already collected would take decades, yet the collection continues. The key to effective analysis is identifying the most relevant datasets and applying the correct analytic techniques, which returns us to our mix of art and science. As the article indicates, little work has been done on recognizing and mitigating the uncertainty in datasets that grow daily. With BI, at least, you are typically looking mainly at data created within your own firm, which places some limits on the amount and type of data. But in a firm as large as, say, Amazon, imagine the data created every day, not only at the point of purchase but across its hundreds (perhaps thousands) of automated fulfillment centers around the world.

Looking at Figure 1 of the article, which presents the 5 Vs of big data characteristics, think about the challenges posed by the kinds and amount of data collected daily by your firm. Is it housed in a common system, or in different systems depending on the department collecting and using it? How would you characterize its various Vs? Is it manageable? What level and types of uncertainty would you assign to the datasets you regularly work with?
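As a first pass at those questions, here is a minimal sketch, not from the article, of how you might profile a dataset for a few simple uncertainty indicators (incompleteness, duplication, and type inconsistency). The pandas calls are standard; the sales table and its column names are hypothetical.

import pandas as pd

def uncertainty_profile(df: pd.DataFrame) -> dict:
    """Compute simple indicators of incompleteness and inconsistency."""
    return {
        "rows": len(df),  # a rough volume proxy
        "missing_fraction": df.isna().sum().sum() / df.size,
        "duplicate_row_fraction": df.duplicated().mean(),
        "columns_with_mixed_types": sum(
            df[c].dropna().map(type).nunique() > 1 for c in df.columns
        ),
    }

# A hypothetical point-of-sale extract with typical quality problems.
sales = pd.DataFrame({
    "order_id": [1001, 1002, 1002, 1003],     # duplicated record
    "amount":   [19.99, None, None, "24.5"],  # missing values, mixed types
    "region":   ["US", "US", "US", None],
})
print(uncertainty_profile(sales))

Running a profile like this on each department's extracts is a quick way to see whether the same data is held to the same standard across systems.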

Abstract

Big data analytics has gained wide attention from both academia and industry as the demand for understanding trends in massive datasets increases. Recent developments in sensor networks, cyber-physical systems, and the ubiquity of the Internet of Things (IoT) have increased the collection of data, in domains including health care, social media, smart cities, agriculture, finance, and education, to an enormous scale. However, the data collected from sensors, social media, financial records, and similar sources is inherently uncertain due to noise, incompleteness, and inconsistency. Analyzing such massive amounts of data requires advanced analytical techniques for efficiently reviewing it and predicting future courses of action with high precision and advanced decision-making strategies. As the amount, variety, and speed of data increase, so too does the uncertainty inherent within it, leading to a lack of confidence in the resulting analytics process and the decisions made from it. In comparison to traditional data techniques and platforms, artificial intelligence techniques (including machine learning, natural language processing, and computational intelligence) provide more accurate, faster, and more scalable results in big data analytics. Previous research and surveys on big data analytics tend to focus on one or two techniques or on specific application domains. Little work, however, has examined uncertainty as it applies to big data analytics or to the artificial intelligence techniques applied to those datasets. This article reviews previous work in big data analytics and presents a discussion of open challenges and future directions for recognizing and mitigating uncertainty in this domain.
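To make the abstract's central claim concrete, the following small simulation (ours, not the authors') shows how noise and incompleteness widen the uncertainty around even the simplest analytic, an estimated mean, measured here with a bootstrap confidence interval. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(42)
TRUE_MEAN = 100.0

def ci_width(noise_sd: float, missing_rate: float, n: int = 1000) -> float:
    """Bootstrap 95% CI width for the mean of noisy, incomplete data."""
    data = TRUE_MEAN + rng.normal(0.0, noise_sd, n)
    data = data[rng.random(n) > missing_rate]  # randomly drop "missing" values
    boots = [rng.choice(data, size=data.size, replace=True).mean()
             for _ in range(2000)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return hi - lo

for noise_sd, missing_rate in [(1.0, 0.0), (5.0, 0.3), (20.0, 0.6)]:
    print(f"noise={noise_sd:>4}, missing={missing_rate:.0%}: "
          f"95% CI width = {ci_width(noise_sd, missing_rate):.2f}")

The interval widens as noise and missingness grow: the analytics still produce an answer, but a decision-maker should place correspondingly less confidence in it, which is precisely the dynamic the abstract describes.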


Source: Reihaneh H. Hariri, Erik M. Fredericks, and Kate M. Bowers, "Uncertainty in big data analytics: survey, opportunities, and challenges," Journal of Big Data 6, 44 (2019). https://link.springer.com/article/10.1186/s40537-019-0206-3
This work is licensed under a Creative Commons Attribution 4.0 License.