MMO: Meta Multi-Objectivization for Software Configuration Tuning
IEEE Transactions on Software Engineering (IF 6.5), Pub Date: 2024-04-15, DOI: 10.1109/tse.2024.3388910
Pengzhou Chen, Tao Chen, Miqing Li

Software configuration tuning is essential for optimizing a given performance objective (e.g., minimizing latency). Yet, due to the software's intrinsically complex configuration landscape and expensive measurement, success has been rather limited, particularly in preventing the search from being trapped in local optima. To address this issue, in this paper we take a different perspective. Instead of focusing on improving the optimizer, we work at the level of the optimization model and propose a meta multi-objectivization (MMO) model that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model distinct is that we do not optimize the auxiliary performance objective, but rather use it to make similarly-performing yet different configurations less comparable (i.e., Pareto-nondominated to each other), thus preventing the search from being trapped in local optima. Importantly, by designing a new normalization method, we show how to effectively use the MMO model without worrying about its weight—the only yet highly sensitive parameter that can affect its effectiveness. Experiments on 22 cases from 11 real-world software systems/environments confirm that our MMO model with the new normalization performs better than its state-of-the-art single-objective counterparts in 82% of the cases while achieving up to $2.09\times$ speedup. In 68% of the cases, the new normalization also enables the MMO model to outperform its instantiation with the normalization from our prior FSE work under pre-tuned best weights, saving considerable resources that would otherwise be needed to find a good weight. We also demonstrate that the MMO model with the new normalization can consolidate recent model-based tuning tools in 68% of the cases, with up to $1.22\times$ speedup in general.
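The core mechanism described above — using an auxiliary objective not for optimization but to make similarly-performing configurations mutually nondominated — can be illustrated with a minimal sketch. This is not the authors' implementation; the objective values below are invented for illustration (latency to be minimized, throughput negated so both objectives are minimized):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Target objective only (latency): cfg1 dominates cfg2, so cfg2 --
# possibly a useful stepping stone out of a local optimum -- is discarded.
cfg1, cfg2 = (10.0,), (10.1,)
assert dominates(cfg1, cfg2)

# Adding an auxiliary objective (throughput, negated for minimization)
# makes the two similarly-performing configurations incomparable:
# neither dominates the other, so both can survive the search.
cfg1_mmo, cfg2_mmo = (10.0, -50.0), (10.1, -80.0)
assert not dominates(cfg1_mmo, cfg2_mmo)
assert not dominates(cfg2_mmo, cfg1_mmo)
```

The auxiliary objective is never itself optimized; its only role here is to break the total order imposed by the single target objective, which is how the model helps the search escape local optima.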

Updated: 2024-04-15