Abstract
In the age of Artificial Intelligence (AI), trained models are routinely used to solve a wide variety of problems. However, it is often difficult to understand the decision-making process of these models and how their training data affect their behavior in production. For this reason, multiple model extraction techniques have appeared. These techniques analyze the behavior of a (sometimes partially) unknown model and generate a clone that reacts similarly. This process is relatively simple for basic models, but it becomes arduous when complex models must be analyzed and replicated. This paper tackles this issue by presenting the VENNOM system, a general framework architecture for extracting knowledge from, and providing explainability for, unknown models. The proposed approach uses low-capacity, high-explainability neural networks to produce flexible and interpretable models. The framework offers several advantages, particularly in obtaining visual and textual explanations for models that are otherwise opaque. It has been tested on tabular data sets, demonstrating its performance and potential.
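The extraction idea at the core of this line of work, querying an opaque model and fitting a small, more interpretable student on its answers, can be summarized in a minimal sketch. Note that this is generic surrogate extraction, not the VENNOM implementation itself; the synthetic data, the random-forest stand-in for the unknown model, and the student architecture are all illustrative assumptions.

# Minimal sketch of black-box model extraction via a surrogate (not the
# authors' VENNOM pipeline). We only use the unknown model's predictions,
# never its internals, and fit a low-capacity student on them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier   # stand-in for the unknown model
from sklearn.neural_network import MLPClassifier      # low-capacity, more inspectable student
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Unknown" model: in a real extraction setting we can only query it.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Query the black box and train the surrogate on its answers, not on y.
pseudo_labels = black_box.predict(X_train)
surrogate = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=0).fit(X_train, pseudo_labels)

# Fidelity: how often the clone agrees with the black box on unseen queries.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate-to-black-box fidelity: {fidelity:.3f}")

The fidelity score measures agreement between the clone and the original on held-out queries; a surrogate can be faithful to the black box even on inputs where the black box itself is wrong, which is what makes the clone useful for explaining the opaque model rather than the task.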
Acknowledgments
This work has been partially supported by the Spanish MICINN under the XMIDAS project (PID2021-122640OB-I00), by the VAE project (TED2021-131295B-C33) funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and by the donation of a Titan V GPU by NVIDIA.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Peribáñez, A.D., Fernández-Isabel, A., Martín de Diego, I., Condado, A., Moguerza, J.M. (2023). Extracting Knowledge from Incompletely Known Models. In: Quaresma, P., Camacho, D., Yin, H., Gonçalves, T., Julian, V., Tallón-Ballesteros, A.J. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2023. Lecture Notes in Computer Science, vol 14404. Springer, Cham. https://doi.org/10.1007/978-3-031-48232-8_24