Abstract
Maps provide many kinds of information. An important example is textual labels such as city, neighborhood, and street names. Although this information is often treated as fact, and despite the massive effort providers make to continuously improve its accuracy, the data is far from perfect. Discrepancies in the textual labels rendered on the map are one of the major sources of inconsistency across map providers, and they can significantly affect the reliability of derived information and of decision-making processes. It is therefore important to validate the accuracy and consistency of such data. Most providers treat this data as proprietary and do not make it publicly available, so the data cannot be compared directly. To address these challenges, we introduce a novel computer vision-based approach that automatically extracts and classifies labels based on their visual characteristics, which indicate each label's category according to the formatting convention used by the specific map provider. From the extracted data, we quantify the degree of discrepancy across map providers. We consider three providers: Bing Maps, Google Maps, and OpenStreetMap. The neural network we develop classifies text labels with an accuracy of up to 93% across all providers. We apply our system to randomly selected regions in different markets: the USA, Germany, France, and Brazil. Experimental results and statistical analysis reveal the amount of discrepancy across map providers per region. For each pair of providers, we compute the Jaccard distance between the extracted text sets, which serves as the discrepancy percentage. Discrepancy percentages as high as 90% were found in some markets.
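For illustration, the short Python sketch below shows how a Jaccard-distance-based discrepancy percentage can be computed between two extracted label sets. This is only a minimal sketch of the measure described above; the function name and the example label sets are hypothetical and are not taken from the paper's implementation.

def jaccard_distance(labels_a, labels_b):
    # Jaccard distance = 1 - |A intersect B| / |A union B|; higher means more discrepancy.
    if not labels_a and not labels_b:
        return 0.0
    return 1.0 - len(labels_a & labels_b) / len(labels_a | labels_b)

# Hypothetical label sets extracted from the same region on two providers.
provider_a = {"Main St", "Oak Ave", "Riverside Park", "5th Ave"}
provider_b = {"Main Street", "Oak Ave", "Riverside Park"}

print(f"Discrepancy: {jaccard_distance(provider_a, provider_b):.0%}")  # prints "Discrepancy: 60%"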
Data availability
No datasets were generated or analysed during the current study.
Author information
Authors and Affiliations
Contributions
All authors contributed equally.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Communicated by SI - SSDBM 2023.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Salama, A., Elkamhawy, M., Hendawi, A. et al. A computer vision approach for detecting discrepancies in map textual labels. Distrib Parallel Databases 43, 9 (2025). https://doi.org/10.1007/s10619-025-07453-z