A robust adaptive meta-sample generation method for few-shot time series prediction
Complex & Intelligent Systems (IF 5.0) Pub Date: 2024-12-19, DOI: 10.1007/s40747-024-01638-2
Chao Zhang, Defu Jiang, Kanghui Jiang, Jialin Yang, Yan Han, Ling Zhu, Libo Tao

Time series prediction (TSP) has attracted considerable research attention in recent years. Effective TSP can be achieved with deep learning models when a large amount of data is available; however, when sufficient high-quality data are not available, the performance of deep-learning-based prediction models may degrade. This paper therefore focuses on few-shot time series prediction (FTSP) and combines meta-learning with generative models to alleviate the problems caused by insufficient training data. When applying meta-learning to FTSP tasks, researchers treat the meta-parameter in model-agnostic meta-learning (MAML) as a meta-sample and build meta-sample generation methods on generative modeling theory to achieve better uncertainty encoding. Existing meta-sample generation methods for FTSP have an inherent limitation: as prediction tasks become more complex, samples drawn from a Gaussian distribution may be sensitive to noise and outliers in the meta-learning environment and may lack adequate uncertainty expression, which degrades the robustness and accuracy of prediction. This paper therefore proposes an adaptive sample generation method called JLSG-Diffusion. Built on the Jensen constraint framework and Laplace modeling theory, the method constructs a task-specific sample adapter with reasonable adaptation steps and fast convergence. Its advantages are fast adaptive convergence of samples to new tasks at lower cost, effective control of the overall generalization error, and improved robustness and non-Gaussian generalization of the sample posterior inference. Moreover, the meta-sampler of JLSG-Diffusion embeds meta-learning at the implicit probability-measure level of Denoising Diffusion Probabilistic Models (DDPM), so the meta-sample distribution establishes a direct functional mapping to the new task and effectively quantifies uncertainty in the spatiotemporal dimensions. Experimental results on three real-world datasets demonstrate the efficiency and effectiveness of JLSG-Diffusion: compared with benchmark methods, prediction models combined with JLSG-Diffusion achieve better accuracy.
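As an illustrative aside (not part of the paper), the contrast the abstract draws between Gaussian and Laplace sample modeling can be sketched in a few lines of Python. The function name generate_meta_samples and the idea of perturbing a MAML-style meta-parameter with matched-variance Laplace noise are assumptions made purely for illustration, not the authors' implementation:

import numpy as np

def generate_meta_samples(meta_param, n_samples, scale=0.1, rng=None):
    """Perturb a MAML-style meta-parameter to obtain candidate meta-samples.

    Illustrative only: Laplace noise is used instead of Gaussian noise of the
    same variance; its heavier tails assign non-negligible probability to large
    deviations, so occasional outliers in the meta-learning environment are
    modelled rather than treated as near-impossible events.
    """
    rng = np.random.default_rng() if rng is None else rng
    meta_param = np.asarray(meta_param, dtype=float)
    b = scale / np.sqrt(2.0)  # Laplace(0, b) has variance 2*b**2, matching std `scale`
    noise = rng.laplace(loc=0.0, scale=b, size=(n_samples,) + meta_param.shape)
    return meta_param + noise

# Example: 16 perturbed copies of a 3-dimensional meta-parameter
samples = generate_meta_samples(np.zeros(3), n_samples=16)

Under this toy setup, the heavier-tailed Laplace perturbation is less dominated by a few extreme task observations than a Gaussian of equal variance, which is the robustness argument the abstract makes at the level of the full posterior model.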




Updated: 2024-12-19