Reading the book The Economics of Software Quality, I came across this passage:
To further complicate the definition, quality often depends on the context in which a software component or feature operates. The quality of a software component is not an intrinsic property – the exact same component can be of excellent quality or highly dangerous depending on the environment in which it operates or the intent of the user. This contextual nature of SW quality is a fundamental challenge …
I wonder why this is specific to software. Maybe I misunderstood (sometimes I miss the point because of the language barrier), but I would say that the quality of just about anything is contextual.
The author seems to be trying to convey that the “quality” of a software product is often more about the perceptions and expectations of its end user(s), and not some quantifiable attribute you can apply to it (e.g. coding standards, design standards, security standards, etc), or the perceptions of its creators.
For example – a competent developer will point to an ancient codebase that has devolved into a “big ball of mud”: written in an obsolete language, violating every modern design principle, overflowing (pun not intended) with security flaws and backdoors, fragile for maintenance engineers, and saddled with a diabolically over-complicated, unfriendly UI – and say, “This software is terrible!”
However, that same bit of software, in the eyes of the end users, might do everything that those users need or expect it to do. Those users will not ‘see’ the horrendous codebase underneath, and any flaws which do surface can often be patched up one way or another.
In the eyes of experienced users of such software (and particularly of the people who own the rights to use it, and may have paid millions for its development and maintenance over many decades), if it’s deemed “good enough” and “stable enough” and it “does the job it’s meant to do”, then that will often earn a big rubber stamp in the quality box.
This perspective isn’t unique to software, although it’s most commonly found in software because there are millions of businesses out in the world who still rely on software built decades ago; and their judgement of “quality” tends to be – “Does it make us more money than it costs us? (and/or would cost us to replace it)” and “Does it keep our reputation in good health?”
Could you apply this to other domains? Maybe, but it’s a lot less common with “things” which are tangible – things that can be picked up, held in your hand, and exist “in the real world” – particularly where physical/mechanical things have a tendency to decay and fade over time (e.g. cars accumulate mileage, mechanical parts wear out, things get damaged by over-zealous users, etc. – software does not suffer from this!)
So, aside from the fact that code does not “decay” (except through malpractice by software engineers), the inner workings of software are so opaque, and users are so far removed from them, that the “quality” of the code – along with the security flaws, the usability problems, and the obscure bugs/defects – tends to be almost invisible to the average user.
There’s nothing in the quoted section that indicates that only software quality depends on the context. Since the book is about software quality, it makes sense that it talks about software components instead of hardware components, and about software quality instead of hardware quality.
There are two aspects to look at.
First, consider how you measure quality.
With physical items, you can apply the rules of chemistry and physics. You can build highly detailed models and run computer-based simulations of the design of physical things. Then, after manufacturing, you can take samples and perform tests (including destructive tests that verify the product meets or exceeds the desired characteristics).
With software, you can write test code and manual testing procedures to execute various portions of the software, but the results of testing are only as good as the test cases. There are different measures and metrics of both process and product quality, but none are exact.
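The point that test results are only as good as the test cases can be sketched in a few lines. This is a hypothetical illustration (the `average` function is invented for the example, not taken from the book): the suite passes, so a naive metric reports high quality, yet an untested input still fails.

```python
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)  # bug: fails on an empty list

# These tests all pass, so "100% of tests green" suggests high quality...
assert average([2, 4, 6]) == 4
assert average([5]) == 5

# ...yet an input nobody thought to test still crashes:
try:
    average([])
except ZeroDivisionError:
    print("untested edge case: empty input raises ZeroDivisionError")
```

The measured quality here reflects the thoroughness of the test cases, not an intrinsic property of the code.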
Next, consider what you can do with your measure of quality.
With physical items, you can characterize the limits of the product. You know that your widget can accelerate so fast or hold so much weight at a certain angle or reach a certain temperature before it no longer functions. You can describe not only the environment(s) in which the component has been tested, but the environment in which you are confident that the widget will be able to do its job.
With software, the environment has many variables. You can’t always control the operating system, third-party libraries and package versions, processor speed, memory, graphics card, disk space and speed, network latency, user load, and so on. You can specify requirements and dependencies, but users may use your software in unexpected ways, or the environment may be degraded or poorly configured. Using the software outside of the environments where you tested it may uncover problems. Your process may build a high level of quality into the software, but deploying it in a different environment can lead to a difference in perceived quality.
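One way to cope with this is to make the tested environment explicit rather than implicit. The following is a minimal sketch (the names, checks, and thresholds are assumptions invented for illustration, not anything from the book): the program compares the environment it is running in against the envelope it was tested in, and warns when it is outside it.

```python
import shutil
import sys

TESTED_PYTHON = (3, 9)          # oldest interpreter version the software was tested on
TESTED_MIN_DISK_BYTES = 10**9   # tested with at least 1 GB of free disk space

def environment_warnings(path="."):
    """Return a list of warnings about conditions outside the tested envelope."""
    warnings = []
    if sys.version_info[:2] < TESTED_PYTHON:
        warnings.append("running on an older Python than was tested")
    if shutil.disk_usage(path).free < TESTED_MIN_DISK_BYTES:
        warnings.append("less free disk space than the tested minimum")
    return warnings

for warning in environment_warnings():
    print("WARNING:", warning)
```

Such checks don’t make the software better, but they turn a silent difference in perceived quality into a visible, reportable condition.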
I don’t think it’s specific to software. I have an axe in the garage, and its quality varies greatly depending on whether I’m using it to split firewood (okay quality), as a paperweight (great quality), or to remove snow and ice from my car’s windshield (very low quality, removes as much glass as it does ice).