Disentangling the hourly dynamics of mixed urban function: A multimodal fusion perspective using dynamic graphs
Information Fusion (IF 14.7), Pub Date: 2024-12-07, DOI: 10.1016/j.inffus.2024.102832
Jinzhou Cao, Xiangxu Wang, Guanzhou Chen, Wei Tu, Xiaole Shen, Tianhong Zhao, Jiashi Chen, Qingquan Li
Traditional studies of urban functions often rely on static classifications, failing to capture the inherently dynamic nature of urban environments. This paper introduces the Spatio-temporal Graph for Dynamic Urban Functions (STG4DUF), a novel framework that combines multimodal data fusion and self-supervised learning to uncover dynamic urban functionalities without ground truth labels. The framework features a dual-branch encoder and dynamic graph architecture that integrates diverse urban data sources: street view imagery, building vector data, Points of Interest (POI), and hourly mobile phone-based human trajectory data. Through a self-supervised learning approach combining dynamic graph neural networks with Spatio-Temporal Fuzzy C-Means (STFCM), STG4DUF extracts parcel-level functional patterns and their temporal dynamics. Using Shenzhen as a case study, we validate the framework through static proxy tasks and demonstrate its effectiveness in capturing multi-scale urban dynamics. Our analysis, based on pyramid functional-semantic interpretation, uncovers intricate functional topics related to human activity, livability, social services, and industrial development, along with their temporal transitions and mixing patterns. These insights provide valuable guidance for evidence-based smart city planning and policy-making.
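To make the described pipeline more concrete, below is a minimal, illustrative sketch in PyTorch of a dual-branch set-up of the kind the abstract outlines: one branch encodes static parcel descriptors (street view, building, POI features), the other applies a simple graph convolution to hourly mobility graphs, and a plain fuzzy c-means step (a stand-in for the STFCM variant named above) produces soft functional memberships per parcel and hour. All module names, dimensions, and the simplified layers are assumptions for illustration, not the authors' STG4DUF implementation, and the self-supervised training objective is omitted.

```python
# Illustrative dual-branch encoder + fuzzy clustering sketch (not the paper's code).
import torch
import torch.nn as nn

class StaticBranch(nn.Module):
    """Encodes concatenated static descriptors (imagery, building, POI) per parcel."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
    def forward(self, x):                      # x: (N, in_dim)
        return self.mlp(x)                     # (N, hid_dim)

class DynamicGraphBranch(nn.Module):
    """One-layer graph convolution applied independently to each hourly mobility graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
    def forward(self, x, adj):                 # x: (N, in_dim), adj: (T, N, N) hourly flows
        adj = adj / adj.sum(-1, keepdim=True).clamp(min=1e-6)   # row-normalise flows
        return torch.relu(adj @ self.lin(x))   # (T, N, hid_dim)

def fuzzy_c_means(z, n_clusters=6, m=2.0, iters=50):
    """Plain fuzzy c-means on embeddings z: (P, D); returns soft memberships (P, C)."""
    P, _ = z.shape
    u = torch.rand(P, n_clusters)
    u = u / u.sum(1, keepdim=True)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ z) / w.sum(0).unsqueeze(1)             # (C, D)
        d = torch.cdist(z, centers).clamp(min=1e-9)             # (P, C)
        u = 1.0 / (d ** (2 / (m - 1)))
        u = u / u.sum(1, keepdim=True)
    return u

# Toy run: 100 parcels, 24 hourly graphs, fused embeddings -> hourly soft functions.
N, T = 100, 24
static_x = torch.randn(N, 32)                  # stand-in for fused SVI/building/POI features
hourly_adj = torch.rand(T, N, N)               # stand-in for hourly trajectory flow matrices
z_static = StaticBranch(32, 64)(static_x)                                 # (N, 64)
z_dyn = DynamicGraphBranch(32, 64)(static_x, hourly_adj)                  # (T, N, 64)
z = torch.cat([z_static.unsqueeze(0).expand(T, -1, -1), z_dyn], dim=-1)   # (T, N, 128)
memberships = fuzzy_c_means(z.reshape(T * N, -1)).reshape(T, N, -1)
print(memberships.shape)                       # (24, 100, n_clusters): hour x parcel x function
```

The split mirrors the abstract's separation of slowly varying physical context from hourly human-activity signals; soft memberships rather than hard labels are what allow mixed and time-varying functions to be read off per parcel and hour.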
Updated: 2024-12-07