Spectral Graph Learning With Core Eigenvectors Prior via Iterative GLASSO and Projection
IEEE Transactions on Signal Processing (IF 4.6). Pub Date: 2024-08-22. DOI: 10.1109/tsp.2024.3446453
Saghar Bagheri, Tam Thuc Do, Gene Cheung, Antonio Ortega

Before the execution of many standard graph signal processing (GSP) modules, such as compression and restoration, learning a graph that encodes pairwise (dis)similarities in the data is an important precursor. In data-starved scenarios, to reduce parameterization, previous graph learning algorithms make assumptions in the nodal domain on i) graph connectivity (e.g., edge sparsity), and/or ii) edge weights (e.g., positive edges only). In this paper, given an empirical covariance matrix $\bar{{\mathbf{C}}}$ estimated from sparse data, we consider instead a spectral-domain assumption on the graph Laplacian matrix ${\mathcal{L}}$: the first $K$ eigenvectors (called "core" eigenvectors) $\{{\mathbf{u}}_{k}\}$ of ${\mathcal{L}}$ are pre-selected (e.g., based on domain-specific knowledge), and only the remaining eigenvectors are learned and parameterized. We first prove that, inside a Hilbert space of real symmetric matrices, the subspace ${\mathcal{H}}_{\mathbf{u}}^{+}$ of positive semi-definite (PSD) matrices sharing a common set of $K$ core eigenvectors $\{{\mathbf{u}}_{k}\}$ is a convex cone. Inspired by the Gram-Schmidt procedure, we then construct an efficient operator to project a given positive definite (PD) matrix onto ${\mathcal{H}}_{\mathbf{u}}^{+}$. Finally, we design a hybrid graphical lasso/projection algorithm to compute a locally optimal inverse Laplacian ${\mathcal{L}}^{-1}\in{\mathcal{H}}_{\mathbf{u}}^{+}$ given $\bar{{\mathbf{C}}}$. We apply our graph learning algorithm in two practical settings: parliamentary voting interpolation and predictive transform coding in image compression. Experiments show that our algorithm outperformed existing graph learning schemes in data-starved scenarios, for both synthetic data and these two settings.
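To make the spectral-domain constraint concrete, the sketch below illustrates one simple way to map a symmetric matrix into the set of PSD matrices that have $K$ pre-selected orthonormal vectors $\{{\mathbf{u}}_{k}\}$ among their eigenvectors. This is a hedged illustration only: it is not the paper's Gram-Schmidt-based projection operator, and the function name and the Frobenius-norm heuristic (keep the quadratic forms along the core directions, PSD-project the compression onto the orthogonal complement, clip negative eigenvalues) are my own assumptions for exposition.

```python
import numpy as np

def map_to_core_eigvec_set(M, U_core):
    """Map a symmetric matrix M (n x n) to a PSD matrix whose eigenvectors
    include the K orthonormal columns of U_core (n x K).

    Illustrative Frobenius-norm heuristic, NOT the paper's operator:
      1. lam_k = max(u_k^T M u_k, 0) fixes the eigenvalue on each core direction;
      2. the compression Q^T M Q onto the orthogonal complement is PSD-projected
         by clipping its negative eigenvalues.
    """
    n, K = U_core.shape
    # Full QR of U_core: the last n-K columns of Qfull form an orthonormal
    # basis Q for the orthogonal complement of span(U_core).
    Qfull, _ = np.linalg.qr(U_core, mode="complete")
    Q = Qfull[:, K:]
    # Eigenvalues along the fixed core eigenvectors, clipped to be >= 0.
    lam = np.clip(np.einsum("ik,ij,jk->k", U_core, M, U_core), 0.0, None)
    # Compress M onto the complement and PSD-project there.
    B = Q.T @ M @ Q
    w, V = np.linalg.eigh((B + B.T) / 2)          # symmetrize for stability
    B_psd = (V * np.clip(w, 0.0, None)) @ V.T     # V diag(w_+) V^T
    # Reassemble: core part + complement part (the two blocks are orthogonal).
    return U_core @ np.diag(lam) @ U_core.T + Q @ B_psd @ Q.T
```

By construction the output satisfies $A\,{\mathbf{u}}_{k}=\lambda_{k}{\mathbf{u}}_{k}$ with $\lambda_{k}\geq 0$ and is PSD, so it lies in a set of the kind ${\mathcal{H}}_{\mathbf{u}}^{+}$ described above; inside a graphical-lasso loop, one would alternate a sparse precision-matrix update with such a mapping, which is the alternation pattern the paper's hybrid algorithm refers to.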

Updated: 2024-08-22