The work with title Approximate decision making in large-scale distributed systems by Huang Ling, Garofalakis Minos, Joseph Anthony, Taft Nina is licensed under Creative Commons Attribution 4.0 International
Bibliographic Citation
L. Huang, M. Garofalakis, A. Joseph, and N. Taft, "Approximate decision making in large-scale distributed systems," in Proceedings of MLSys 2007, December 2007.
As the Internet has evolved into a valuable and critical service platform for business and daily life, the research community has enthusiastically applied data mining methods to improve application performance by analyzing and optimizing the behavior of the underlying systems (e.g., datacenter design, network resource provisioning, network security, etc.). These data mining procedures often rely on large-scale, widely distributed monitoring systems, which continuously generate numerous distributed data streams and backhaul all of the data to a central location (e.g., a Network Operation Center, or NOC) for data analysis and decision making. This application scenario presents both new opportunities and challenges for efficient data analysis and online decision making, where a decision function depends on aggregating and analyzing continuous data streams from distributed monitors. The statistics and machine learning communities have performed extensive research on decision making methods [1], including outlier detection, clustering, classification, etc. The resulting algorithms, however, mainly assume that all data have already been collected at a central point, and they focus on post-collection data analysis and problem diagnosis, with little consideration of the more general problem of distributed, continuous data collection and analysis.

We believe that the machine learning community should now focus on the design of algorithms that function well with limited data. We see two open problems: efficiently performing online decision making with low communication overhead, and providing fine-grained control over the tradeoff between decision accuracy and communication overhead. Most existing research has focused on sampling techniques; however, the randomness inherent in sampling can discard key information needed by decision-making algorithms. Instead, we advocate smart filtering for data reduction, where the filtering carefully selects which data not to ship. Specifically, the filtered data should be that which has minimal impact on the performance or accuracy of decision making.
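One way to picture the smart-filtering idea is a small local-threshold sketch, shown below. It is a hypothetical illustration, not the protocol proposed in the paper: the MonitorFilter and Coordinator names, the per-monitor slack parameter, and the choice of a sum aggregate are all assumptions made for the example. Each monitor ships a reading only when it drifts outside a slack around the last value it shipped, and the coordinator aggregates the shipped values; widening the slack reduces communication at the cost of a looser approximation.

    # A minimal sketch of the smart-filtering idea, under assumed details: the
    # MonitorFilter/Coordinator names, the per-monitor slack parameter, and the
    # sum aggregate are illustrative choices, not the paper's protocol.

    class MonitorFilter:
        """Local filter at one monitor: ship a reading only if it drifts
        outside a slack around the last value that was shipped."""

        def __init__(self, slack):
            self.slack = slack          # per-monitor error budget
            self.last_shipped = None    # last value the coordinator has seen

        def update(self, value):
            """Return the value to ship, or None to suppress (filter) the update."""
            if self.last_shipped is None or abs(value - self.last_shipped) > self.slack:
                self.last_shipped = value
                return value
            return None                 # within slack: filtered out, no message sent


    class Coordinator:
        """Central site (e.g., a NOC) maintaining an approximate aggregate
        from the shipped values only."""

        def __init__(self):
            self.view = {}              # monitor id -> last shipped value

        def receive(self, monitor_id, value):
            self.view[monitor_id] = value

        def approximate_sum(self):
            return sum(self.view.values())


    if __name__ == "__main__":
        import random

        random.seed(0)
        coordinator = Coordinator()
        monitors = {i: MonitorFilter(slack=5.0) for i in range(10)}
        messages = 0

        for t in range(1000):                       # simulated time steps
            for mid, mon in monitors.items():
                reading = 50 + random.gauss(0, 2)   # hypothetical local metric
                shipped = mon.update(reading)
                if shipped is not None:
                    messages += 1
                    coordinator.receive(mid, shipped)

        print("messages sent:", messages,
              "approximate sum:", coordinator.approximate_sum())

In this sketch the per-monitor slack bounds each monitor's contribution to the aggregate error, so the coordinator's sum stays within (number of monitors x slack) of the true sum while most readings are never transmitted. Tuning the slack is one possible realization of the fine-grained control over the accuracy/communication tradeoff discussed above.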