Software Quality: Definitions and Strategic Issues
Conclusion
Software quality definitions formulated by international organizations during the 1970s and 1980s have become somewhat outdated: they do not reflect the major technological developments of the 1990s and early 2000s. Likewise, the quality models and factors first defined in the late 1970s have been overtaken by industry developments; in particular, the impact of the microcomputer industry of the 1970s needed to be reflected in those models. In the 2000s, accelerating technological advances are again reshaping quality definitions and models. Exponential growth in data-storage capacity, large increases in computing speed, and improved algorithms for databases, network computing, distributed servers, and AI applications affect factors such as usability, efficiency, and maintainability, which must therefore be redefined. Moreover, new factors such as suitability, learnability, adaptability, confidentiality, and privacy are emerging. The interrelationships between quality factors set out by Perry need to be amended to reflect these changes. Furthermore, the profound impact on human-computer interaction must be understood and addressed.
This subunit also examined the importance of quality for both the supplier and the purchaser, covering marketing strategies, tendering and cost estimation, human resource issues, productivity, and system acquisition.
Changes in technology can affect every element of a model: its principles, processes, procedures, practices, and tools. Typically, the impact on a software engineering model is limited to its processes, procedures, and tools. For quality models, however, the impact can extend to every aspect of the model.