This is a generic question about the idea of inferring some of the system configuration from the existence or absence of a file.
For example, we have an optional module in the system that requires another software package. Inside the code we check for the existence of that package at load time, and if the package is not there, we disable the module. In other words, we configure the system based on the existence of a file.
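A minimal sketch of that load-time check in Python, assuming the optional dependency is an importable package (the name here is just a stand-in for a real optional dependency):

```python
import importlib.util

# Stand-in name for the optional dependency; in a real system this
# would be the package the optional module needs. "json" is used here
# only so the example runs everywhere.
OPTIONAL_PACKAGE = "json"

# Probe at load time whether the package is importable, and record
# the result so the rest of the system can enable or disable the
# dependent module accordingly.
OPTIONAL_MODULE_ENABLED = importlib.util.find_spec(OPTIONAL_PACKAGE) is not None
```

Note that `find_spec` asks the import machinery rather than poking at the file system directly, which already avoids some of the fragility discussed below.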
From a software engineering and best practices perspective, is this bad? Should I be considered a bad software engineer for defending this idea?
Dynamic configuration depending on the set of installed features on the target system isn’t bad in itself. It’s just that going into the file system to perform that check may be slow and unreliable.
If at all possible, you should try to go the official route. If there is a package management system responsible for configuring the target system, it will have a well-defined API for checking installed features, and you should use it. If your interaction with the feature consists of a specific external call, it is often useful to perform that call vacuously just to see whether it succeeds.
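The "vacuous call" idea can be sketched like this: instead of guessing from file paths, ask the tool itself whether it works. The `--version` flag here is an assumption about the external command; substitute whatever harmless invocation the tool actually supports.

```python
import shutil
import subprocess

def feature_available(command: str) -> bool:
    """Probe for an external tool by performing a harmless call
    rather than inferring its presence from file system layout."""
    # First check whether an executable by that name is on PATH at all.
    if shutil.which(command) is None:
        return False
    # Then perform a vacuous invocation to confirm it actually runs.
    # (--version is assumed to be a no-op flag for this tool.)
    try:
        subprocess.run(
            [command, "--version"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            check=True,
        )
        return True
    except (OSError, subprocess.CalledProcessError):
        return False
```

This keeps the check self-correcting: if the binary exists but is broken or lacks a needed permission, the probe fails and the feature stays disabled.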
Checking the existence of a file might do the wrong thing if some administrator has decided that they don’t actually need this specific file for the subset of functionality their site uses, or the file might not be as canonical as you think. If possible, try to check for something that is defined in some file system hierarchy standard, the more official the better. For instance, the Linux Standard Base recommends that every distribution provide a file /etc/DISTRONAME-release for reliably checking which distribution is running and which version.
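As a concrete example of reading such a standardized file, here is a sketch that parses /etc/os-release, the freedesktop.org successor to the per-distribution release files, and falls back to an empty result when the file is absent:

```python
def read_os_release(path="/etc/os-release"):
    """Parse an os-release style file (KEY=value lines, values
    optionally double-quoted) into a dictionary. Returns an empty
    dict if the file does not exist."""
    info = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and malformed lines.
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    except FileNotFoundError:
        pass
    return info
```

Because os-release is a published specification, keys like `ID` and `VERSION_ID` have defined meanings, which makes this far more reliable than inferring the distribution from an arbitrary file's existence.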