How to reuse production code for writing supporting tools?

A little history first – skip to the TL;DR if you wish.

I have a UWP application written in C++ that embeds V8 and has a synthetic module which allows calling native methods from the embedded JS code.

Until recently I was writing a Node.js tool that parses C header files and generates JS modules which my production JS code can use to ease calling COM methods (i.e. instead of raw_call(object, 5, ...) you can write raw_call(object, Interface.Method_Name, ...), or in the future even object.Method_Name(...)). Anyway, that is not the point. The point is that to generate these modules I use regular expressions. It turned out, however, that JS regular expressions don't support recursion, so I sneaked in a PHP executable solely for the purpose of running PHP regular expressions inside my tool (so I could use recursion). Later on I realized that PHP's regex engine doesn't support capturing groups inside recursion, and that this is a feature only present in PCRE2.
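To illustrate the limitation: JavaScript's RegExp has no `(?R)`-style recursion, so matching an arbitrarily nested construct (say, a balanced parenthesis group in a C declaration) needs a hand-rolled scanner instead. A minimal sketch of such a fallback:

```javascript
// Extract the first balanced-parenthesis group from a string.
// A plain JS RegExp cannot express this for arbitrary nesting;
// PCRE2 could use a recursive pattern like /\(([^()]|(?R))*\)/.
function firstBalancedGroup(input) {
  const start = input.indexOf('(');
  if (start === -1) return null;
  let depth = 0;
  for (let i = start; i < input.length; i++) {
    if (input[i] === '(') depth++;
    else if (input[i] === ')') {
      depth--;
      if (depth === 0) return input.slice(start, i + 1);
    }
  }
  return null; // unbalanced input
}

console.log(firstBalancedGroup('HRESULT Foo(IUnknown *(*cb)(int), int n);'));
// → "(IUnknown *(*cb)(int), int n)"
```

This handles nesting correctly but gives up the declarative convenience of a regex, which is exactly why a recursive engine like PCRE2 is attractive for the header parser.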

PCRE2 is C-only, which is fine for my production code – I can just use my existing infrastructure to invoke it directly from JS. The problem is that this is a feature for my tool, not for the production code.

I’m using MSVC 2019 Preview, and I could probably do some shenanigans with multiple projects, but then I would have to maintain two identical code bases – one for production and one for the tool. Maybe I can include the same source files in both projects?

TL;DR – How do you most efficiently reuse production code when writing supporting tools for your application?


Where I work we have a similar situation: a code base for a product A which is sold externally, and another code base B for internal use only. Over the years, we sometimes run into situations where we want to reuse parts of the former inside the latter. We solve this in the following way:

  • We strictly forbid dependencies from A to B. Product development, maintenance, and versioning for A shall intentionally stay independent of any internal development.

  • When we reuse parts of A inside B, we only use code which relates to a released and deployed version of A, no half-baked intermediate versions.

  • We only reuse full libraries from A, with (ideally) stable interfaces, not just some loosely taken source code files. So when a new version of A is published, cases where we have to adapt B because of this are rare.

  • Since the code of A is in one repository and the code of B in a different one, and not all devs on the team have full access rights to both, we mirror the source code parts required inside B via a script, which pulls a tagged revision from repo A and puts it into repo B. We have also added some measures to prevent direct editing of the mirrored source, so any changes have to be made in codebase / repo A. I guess for this purpose it could also be possible to utilize Microsoft’s C++ package manager, vcpkg.

So in short, we have a split development where the part of the team which develops A also takes the role of a component vendor, and the components are reused in a black-box manner by the part of the team which develops B.

I guess that model could work for your situation, too.

In my organisation we have both code and binary repositories (Git and Artifactory, though any equivalent will do).

When a code repository is updated, our CI tool builds the code and runs some unit- and module-level tests. If they pass, the binaries are pushed into the binary repository.

In our other projects that need these binaries, we have a package manager pull the dependencies down, and we check them in alongside the code in the repo. This way the project can always be built, even if we were to clean up the binary repo or change tools entirely – both have happened over the last 10 years. The CI never grabs packages itself, just the one commit from the repo it’s building.

You’ll quickly find that you have:

  • a few amalgamation repositories where many components are brought together with a little glue to make some tool or product.
  • a few interface libraries that describe the communication models, like how tasks look, or what data and endpoints are available for an API.
  • a number of service and implementation libraries providing those APIs and infrastructure services.
