Machine learning (ML) methods have been successfully employed in various medical fields, and energy consumption during ML inference has been attracting increasing attention. The increasing focus on inference energy can primarily be attributed to two reasons. First, energy constraints constitute a major issue when ML is deployed in battery-powered medical devices. Second, to achieve high predictive performance, the computation and memory requirements of ML models have increased. The growth in model size is well reflected in neural networks (NNs), which have been the main ML algorithms implemented over the past decade.

An optimal ML model should achieve balanced predictive performance and energy efficiency. However, most relevant studies have focused only on comparing the predictive performance of different ML algorithms and have not thoroughly explored their energy efficiency in the medical domain.

Data formats in the medical field are diverse, and clinical laboratory data are a common type of medical data. In real-world settings, individual laboratory tests must undergo strict validation procedures before clinical use. Thus, the data obtained from such tests usually comprise features that are highly associated with the prediction targets. The characteristics of clinical laboratory data sets are unique, and the energy efficiency of different ML algorithms for processing these data sets warrants investigation.

A partial explanation for the poor understanding of energy efficiency is that estimating energy consumption is more difficult than estimating other metrics (eg, accuracy). Several methods exist for evaluating the energy consumption of ML models. Computational complexity can be used to theoretically approximate the number of operations and can therefore be used to estimate energy consumption (Multimedia Appendix 1). Studies have established formulas for estimating energy consumption; these formulas sum the energy costs of different elementary operations on the basis of complexity theory and benchmark results. However, such formulas are available for only specific ML models and cannot be extended to all algorithms. In addition to these estimation formulas, experimental approaches can be used to estimate energy consumption. Currently, simulation and performance counters are the two main approaches for experimentally estimating energy consumption. Although simulations enable fine-grained energy estimation at the architecture and instruction levels, they are not feasible for large-scale ML tasks because of the considerable overhead involved. By contrast, performance counters, which are sets of registers in processors that log specific hardware-related events, generate no overhead; these counters are therefore suitable for use in a wide range of ML applications.

The experimental results indicated that the RF and XGB algorithms achieved the two highest AUROC values for both data sets (84.7% and 83.9%, respectively, for the mass spectrometry data set; 91.1% and 91.4%, respectively, for the urinalysis data set). The XGB and LR algorithms exhibited the shortest inference times for both data sets (0.47 milliseconds for both in the mass spectrometry data set; 0.39 and 0.47 milliseconds, respectively, in the urinalysis data set). Compared with the RF algorithm, the XGB and LR algorithms achieved 45% and 53%-60% reductions in inference time for the mass spectrometry and urinalysis data sets, respectively. In terms of energy efficiency, the XGB algorithm exhibited the lowest power consumption for the mass spectrometry data set (9.42 watts), and the LR algorithm exhibited the lowest power consumption for the urinalysis data set (9.98 watts). Compared with a five-hidden-layer NN, the XGB and LR algorithms achieved 16%-24% and 9%-13% lower power consumption for the mass spectrometry and urinalysis data sets, respectively. In all experiments, the XGB algorithm exhibited the best overall performance in terms of accuracy, run time, and energy efficiency.
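To make the complexity-based estimation idea concrete, the sketch below sums per-operation energy costs over the elementary operations of a single logistic-regression inference. Both the operation counts and the picojoule costs are illustrative assumptions, not values from the study; real per-operation costs must come from hardware benchmarks for the target processor.

```python
# Sketch of complexity-based energy estimation: sum the energy costs of
# the elementary operations performed during one model inference.
# All constants are illustrative assumptions, not measured values.

# Hypothetical per-operation energy costs in picojoules (benchmark-dependent).
ENERGY_PJ = {"mul": 3.1, "add": 0.9, "exp": 20.0}

def lr_inference_ops(n_features: int) -> dict:
    """Elementary-operation counts for one logistic-regression inference:
    a dot product (n multiplies, n - 1 adds), one bias add, and one exp
    (the sigmoid's divide is folded into the exp cost here)."""
    return {"mul": n_features, "add": n_features, "exp": 1}

def estimate_energy_pj(n_features: int) -> float:
    """Estimated energy in picojoules for one inference."""
    ops = lr_inference_ops(n_features)
    return sum(count * ENERGY_PJ[op] for op, count in ops.items())
```

A formula of this kind is tied to one model's operation profile, which illustrates why such formulas do not transfer to algorithms whose elementary operations differ (eg, the comparisons dominating tree ensembles).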
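Counter-based measurement is hardware- and OS-specific. As one concrete example, the sketch below reads the Intel RAPL energy counter through the Linux powercap sysfs interface; the path is a standard kernel interface, but its availability and read permissions vary by machine, so the demo is guarded. The wraparound handling is factored into a pure function.

```python
import os

# Package-0 RAPL domain under the Linux powercap interface (may be absent
# or unreadable on a given machine).
RAPL_DIR = "/sys/class/powercap/intel-rapl:0"

def energy_delta_uj(before: int, after: int, max_range: int) -> int:
    """Energy consumed (microjoules) between two counter reads,
    accounting for counter wraparound at max_range."""
    if after >= before:
        return after - before
    return max_range - before + after

def read_energy_uj() -> int:
    with open(os.path.join(RAPL_DIR, "energy_uj")) as f:
        return int(f.read())

def read_max_range_uj() -> int:
    with open(os.path.join(RAPL_DIR, "max_energy_range_uj")) as f:
        return int(f.read())

if __name__ == "__main__" and os.path.isdir(RAPL_DIR):
    before = read_energy_uj()
    # ... run model inference here ...
    after = read_energy_uj()
    print(energy_delta_uj(before, after, read_max_range_uj()) / 1e6, "J")
```

Dividing the measured joules by the elapsed time yields the average power draw, which is the form in which the study's energy-efficiency results (watts) are reported.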
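Inference-time comparisons like the millisecond figures above can be reproduced with a wall-clock timer around any model's predict call. The sketch below measures median single-sample latency; the linear stand-in model is a placeholder for illustration, not one of the study's classifiers.

```python
import time
from statistics import median

def median_latency_ms(predict, x, n_warmup=10, n_runs=200):
    """Median wall-clock latency (ms) of predict(x) over n_runs calls,
    after n_warmup untimed calls to reduce cold-start effects."""
    for _ in range(n_warmup):
        predict(x)
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        predict(x)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return median(samples)

# Placeholder model: a plain dot product standing in for a real classifier.
weights = [0.05] * 16
sample = [1.0] * 16
latency = median_latency_ms(lambda x: sum(w * v for w, v in zip(weights, x)),
                            sample)
```

The median is used rather than the mean because scheduler noise skews timing samples toward occasional large outliers.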