Abstract
A software project measurement model plays a crucial role in assessing, monitoring, and improving software development processes and product quality. However, existing models often focus on localized optimization, which limits their effectiveness. This study introduces the 5P model for software project measurement, aiming to achieve global optimization and control. The article presents a conceptual overview of the 5P model and a detailed illustration of its steps using a real development process. By adopting a systems-thinking approach, the proposed model enables global optimization by aligning project objectives, product outcomes, and stakeholders' needs. Three feedback loops ensure the alignment of project purposes, performance measurement, and stakeholder requirements. The 5P model provides transparent metrics, enabling stakeholders to track progress and make informed decisions. The results confirm the model's value in software project measurement, facilitating improved project outcomes and stakeholder satisfaction.
Data availability
Enquiries about data availability should be directed to the authors.
Funding
This work was supported by the National Key Research and Development Program (No. 2018YFB2100100), the National Natural Science Foundation of China (Grant No. 62066048), the Postdoctoral Science Foundation of China (No. 2020M673312), the Postdoctoral Science Foundation of Yunnan Province, the scientific research fund of the Yunnan Provincial Department of Education (No. 2019J0010), the DongLu Young and Middle-aged Backbone Teachers Project of Yunnan University, and the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province (Grant No. 2020SE311).
Author information
Authors and Affiliations
Contributions
ZZ: writing the paper; JW: grammatical corrections; NZ: supervision.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not raise any ethical issues.
Informed consent
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
Original data per iteration.
| Types | Name | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | S10 | S11 | S12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Quality | Cycle of fixing internal defects | 11 | 13 | 9 | 12 | 15 | 13 | 8 | 7 | 9 | 10 | 8 | 7 |
| | Cycle of fixing external defects | 8 | 7 | 9 | 6 | 7 | 11 | 9 | 8 | 6 | 7 | 4 | 4 |
| | Number of external leakage defects | 15 | 18 | 14 | 23 | 25 | 19 | 14 | 12 | 11 | 12 | 18 | 9 |
| | Number of internal leakage defects | 37 | 48 | 38 | 64 | 61 | 34 | 27 | 20 | 47 | 44 | 41 | 63 |
| | Number of severe external leakage defects | 1 | 0 | 3 | 0 | 4 | 2 | 0 | 1 | 2 | 2 | 1 | 2 |
| | Number of severe internal leakage defects | 3 | 1 | 4 | 8 | 6 | 5 | 5 | 2 | 8 | 3 | 4 | 6 |
| | Leakage rate of defects | 7% | 0 | 21% | 0 | 16% | 11% | 0 | 8% | 18% | 17% | 6% | 22% |
| | Leakage rate of severe defects | 8% | 2% | 11% | 13% | 10% | 15% | 19% | 10% | 17% | 7% | 10% | 10% |
| Requirement | Requirement analysis cycle | 3.4 | 3.6 | 3.1 | 3.4 | 2.9 | 3.2 | 3.1 | 2.7 | 3 | 2.5 | 2.7 | 2.6 |
| | Requirement development cycle | 6.3 | 5.7 | 6.1 | 6.5 | 5.8 | 5.5 | 6.3 | 5.3 | 5.8 | 5.2 | 5.3 | 5.5 |
| | Requirement integration testing cycle | 4.5 | 4.1 | 4.4 | 4.6 | 4.2 | 3.8 | 3.7 | 4.5 | 3.9 | 3.6 | 3.8 | 3.4 |
| | Requirement system testing cycle | 5.4 | 5.2 | 4.8 | 4.3 | 5.1 | 4.2 | 3.5 | 3.8 | 4.2 | 4.1 | 4.1 | 3.6 |
| | Proportion of high-priority requirements | 47% | 44% | 52% | 48% | 57% | 67% | 58% | 53% | 62% | 66% | 77% | 67% |
| | Number of emergency requirements | 3.2 | 2.4 | 1.8 | 2.5 | 3.3 | 2.3 | 2.1 | 1.8 | 1.6 | 2.2 | 2.8 | 1.4 |
| | Average tasks per person | 3 | 5 | 3 | 3 | 1 | 2 | 0 | 2 | 0 | 4 | 1 | 4 |
| Team 1 Organizational Structure | Delay days of the sprint planning meeting | 2 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| | Duration of the daily scrum meeting | 12 | 15 | 13 | 11 | 8 | 9 | 12 | 8 | 11 | 9 | 7 | 10 |
| | Delay days of the sprint retrospective meeting | 1 | 2 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 |
| | Number of product requirements | 121 | 117 | 111 | 135 | 141 | 126 | 115 | 131 | 143 | 138 | 125 | 129 |
| | Number of user stories | 209 | 189 | 178 | 212 | 197 | 213 | 217 | 215 | 231 | 216 | 185 | 217 |
| Team 2 Organizational Structure | Delay days of the sprint planning meeting | 2 | 3 | 2 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| | Duration of the daily scrum meeting | 18 | 25 | 18 | 18 | 13 | 15 | 13 | 8 | 14 | 11 | 13 | 9 |
| | Delay days of the sprint retrospective meeting | 4 | 2 | 2 | 3 | 1 | 2 | 3 | 1 | 4 | 3 | 4 | 1 |
| | Number of product requirements | 116 | 108 | 93 | 109 | 123 | 110 | 111 | 117 | 89 | 85 | 113 | 127 |
| | Number of user stories | 176 | 165 | 159 | 191 | 163 | 175 | 169 | 191 | 201 | 179 | 158 | 183 |
| Automation Testing | Number of unit testing cases per requirement | 2.2 | 2.5 | 2.3 | 3.1 | 2.8 | 3.3 | 3.5 | 4.2 | 3.4 | 3.6 | 4.1 | 4.6 |
| | Number of system testing cases per requirement | 1.3 | 1.6 | 1.8 | 2.1 | 2.5 | 2.3 | 2.2 | 2.8 | 3.1 | 3.4 | 2.7 | 3.2 |
| | Number of new automation cases | 18 | 21 | 21 | 26 | 27 | 28 | 29 | 35 | 33 | 35 | 34 | 39 |
| | Number of errors found through automation test cases | 9 | 11 | 11 | 13 | 14 | 14 | 15 | 18 | 17 | 18 | 17 | 20 |
| | Test success rate per iteration | 75% | 78% | 79% | 72% | 68% | 81% | 88% | 78% | 91% | 90% | 86% | 93% |
| DevOps | Deployment time of new branch | 82 | 77 | 74 | 80 | 68 | 71 | 67 | 62 | 72 | 69 | 65 | 58 |
Appendix B
Original data per day.
| Types | Name | 15 Jan | 31 Jan | 15 Feb | 28 Feb | 15 Mar | 31 Mar | 15 Apr | 30 Apr | 15 May | 31 May | 15 Jun | 30 Jun |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Automation Testing | Success rate of daily testing | 83% | 83% | 81% | 91% | 88% | 91% | 92% | 96% | 94% | 87% | 89% | 92% |
| DevOps | Compile success rate | 87% | 92% | 93% | 84% | 86% | 93% | 97% | 94% | 90% | 89% | 91% | 95% |
Appendix C
Original data per week.
| Types | Name | W1 | W2 | W3 | W4 | W5 | W6 | W7 | W8 | W9 | W10 | W11 | W12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Automation Testing | Success rate of weekly testing | 85% | 81% | 84% | 91% | 92% | 87% | 87% | 87% | 92% | 92% | 96% | 98% |
Appendix D
Original data per month.
| Types | Name | M1 | M2 | M3 | M4 | M5 | M6 |
|---|---|---|---|---|---|---|---|
| Capability | C++ language skills | 20% | 23% | 28% | 33% | 38% | 43% |
| | Domain-driven design skills | 58% | 65% | 68% | 75% | 81% | 85% |
| | Number of summary technical reports | 1 | 3 | 3 | 2 | 2 | 1 |
| | Number of micro-sharing sessions | 3 | 2 | 0 | 2 | 2 | 1 |
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhao, Z., Deng, S., Ma, Y. et al. Software project measurement based on the 5P model. Soft Comput 28, 2083–2105 (2024). https://doi.org/10.1007/s00500-023-09175-9