
Instrumental Variable-Driven Domain Generalization with Unobserved Confounders

Published: 28 June 2023

Abstract

Domain generalization (DG) aims to learn, from multiple source domains, a model that generalizes well to unseen target domains. Existing DG methods mainly learn representations with an invariant marginal distribution of the input features; however, the invariance of the conditional distribution of the labels given the input features is more essential for prediction on unknown domains. Meanwhile, the existence of unobserved confounders, which affect the input features and labels simultaneously, causes spurious correlations and hinders learning the invariant relationship contained in the conditional distribution. Interestingly, taking a causal view of the data-generating process, we find that the input features of one domain are valid instrumental variables for other domains. Inspired by this finding, we propose an instrumental variable-driven DG method (IV-DG) that removes the bias of the unobserved confounders with two-stage learning. In the first stage, it learns the conditional distribution of the input features of one domain given the input features of another domain. In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution. Theoretical analyses and simulation experiments show that it accurately captures the invariant relationship. Extensive experiments on real-world datasets demonstrate that the IV-DG method yields state-of-the-art results.
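
The two-stage procedure described in the abstract follows the classical instrumental-variable recipe: first model the endogenous features given the instrument, then regress the labels on that prediction. Below is a minimal, self-contained sketch of this idea on synthetic data, assuming linear stage models (2SLS-style) and scikit-learn regressors; the variable names (z_domain_b, x_domain_a) and the data-generating process are illustrative assumptions, not the authors' IV-DG implementation.

```python
# Minimal two-stage IV sketch (2SLS-style) illustrating the idea behind IV-DG.
# NOT the authors' implementation: linear models, variable names, and the
# synthetic data-generating process are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: an unobserved confounder u affects both x and y,
# while z (features from another domain) influences x but not y directly.
n = 2000
u = rng.normal(size=(n, 1))                    # unobserved confounder
z_domain_b = rng.normal(size=(n, 1))           # instrument: other-domain features
x_domain_a = 1.5 * z_domain_b + u + 0.1 * rng.normal(size=(n, 1))
y = 2.0 * x_domain_a + 3.0 * u + 0.1 * rng.normal(size=(n, 1))

# Stage 1: model the conditional distribution of x given z
# (reduced here to its conditional mean via linear regression).
stage1 = LinearRegression().fit(z_domain_b, x_domain_a)
x_hat = stage1.predict(z_domain_b)

# Stage 2: regress y on the stage-1 prediction to estimate the
# confounder-free relationship between x and y.
stage2 = LinearRegression().fit(x_hat, y)

naive = LinearRegression().fit(x_domain_a, y)  # biased by the confounder
print("naive coefficient:", naive.coef_.ravel())      # inflated above 2.0
print("two-stage coefficient:", stage2.coef_.ravel())  # close to the true 2.0
```

On this toy setup, the naive regression coefficient is inflated by the confounder (roughly 2.9 instead of 2.0), while the two-stage estimate stays close to the true coefficient because the instrument is independent of the confounder.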

Cited By

  • (2024) Domaindiff: Boost Out-of-Distribution Generalization with Synthetic Data. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5640-5644. DOI: 10.1109/ICASSP48485.2024.10446788. Online publication date: 14-Apr-2024.
  • (2023) Universal Domain Adaptation via Compressive Attention Matching. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 6951-6962. DOI: 10.1109/ICCV51070.2023.00642. Online publication date: 1-Oct-2023.


Published In

ACM Transactions on Knowledge Discovery from Data, Volume 17, Issue 8
September 2023
348 pages
ISSN: 1556-4681
EISSN: 1556-472X
DOI: 10.1145/3596449

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 June 2023
Online AM: 29 April 2023
Accepted: 20 April 2023
Revised: 14 March 2023
Received: 25 July 2022
Published in TKDD Volume 17, Issue 8


Author Tags

  1. Causal learning
  2. Instrumental variable
  3. Domain generalization
  4. Unobserved confounder

Qualifiers

  • Research-article

Funding Sources

  • National Key Research and Development
  • National Natural Science Foundation of China
  • Young Elite Scientists Sponsorship Program by CAST
  • Zhejiang Provincial Natural Science Foundation of China
  • Major Technological Innovation Project of Hangzhou
  • Zhejiang Province Natural Science Foundation
  • Project by Shanghai AI Laboratory
  • Program of Zhejiang Province Science and Technology
  • StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study
  • Fundamental Research Funds for the Central Universities


