I have a little experience writing web application backends in an
interpreted language (Node.js). My usual workflow is: write some code,
see the result in the browser, fix the code if anything is wrong, check
the browser again, then add more code. Because the language is
interpreted, the changes are available immediately.
Now I want to try Go, which is a compiled language. If the code base
grows very large, maybe more than ten thousand lines of code, won't
compilation take a long time? The development workflow I described above
would no longer be efficient, because every code change would take time
to compile.
Yes, I’ve tried
Go, but only with a few lines of code, and of course
it compiles in about a second. What I want to know is what to do
when the code base grows very large. What does the development workflow look like then?
I work on a code base in a compiled language (Scala) which is tens if not hundreds of thousands of lines long. The first thing commonly done in such situations is to break the application into microservices that usually max out at two or three thousand lines of code each, spread out among maybe 50 source files. Many are much smaller.
Next, as others have mentioned, you use incremental compilation, so you are only recompiling a handful of files each time, not the entire project. This compile time is generally under a second. Clean builds are reserved for continuous integration servers.
Third, on projects of this size, you very rarely open a browser to test, at least if you’re not a UX designer, and most of the code of the largest projects is in the back end. My quick cycle tests are all unit tests, which are absolutely critical to maintaining continuous delivery on a project this size. If I exclude integration tests, I can usually do a write/compile/test cycle in 5-10 seconds, and that’s tests for the entire microservice, not just the small part I’m working on. It’s faster if I limit it to only the class I’m working on.
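In Go, that quick write/compile/test cycle maps naturally onto `go test` (e.g. `go test ./...` for a whole module, or `go test -run '^TestSlugify$' ./mypkg` to limit the run to one test). As a minimal sketch of the kind of small, fast unit check described above — the package and function names here are invented for illustration, and in a real project the checks would live in a `_test.go` file:

```go
// Sketch of a fast unit check. In a real Go project this would be a
// table-driven test in a _test.go file, run with `go test`; here it is
// a standalone program so the idea is self-contained.
package main

import (
	"fmt"
	"strings"
)

// slugify is a hypothetical pure function under test: it trims and
// lowercases a title and replaces spaces with hyphens.
func slugify(title string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(title)), " ", "-")
}

func main() {
	// Table-driven cases, the idiomatic shape of Go unit tests.
	cases := []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"  Go Compile Times  ", "go-compile-times"},
	}
	for _, c := range cases {
		if got := slugify(c.in); got != c.want {
			fmt.Printf("slugify(%q) = %q, want %q\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("ok")
}
```

Because each such check exercises one small pure function, hundreds of them still finish in well under the 5–10 seconds mentioned above.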
I know that’s faster than a write/browser refresh/manual test cycle, and my unit tests are testing way more things with more accuracy than a human can. It should be to the point where your unit tests are so good, you work in them all day, then have a good set of automated integration tests, and checking it manually in a real browser is just a formality before you push your changes.
In other words, your workflow isn’t inefficient because of the compiler in the loop, but because of the human in the loop. As your code base scales, that’s what you have to take into account.
There’s a widely used tool for Java which allows you to reload classes on the fly when they are recompiled. I’ve used it, and it’s reliable for the kind of thing you are talking about. Combine that with an IDE that compiles each class on save (faster than I can blink), and it’s basically just like what you describe. I don’t know if anything like that exists for Go, but it has been done for compiled languages.
Go’s niche is really back-end services. While Go can certainly run your blog, its targeted purpose is to do the heavier lifting on the back-end where the design and coding part of the development cycle is typically more involved. So the ‘inefficiency’ of waiting for the compile isn’t a big factor.
Plus, there’s some perspective involved. The code-compile-results cycle of Go is slower than PHP, but on the other hand, is significantly faster than C++.
Not compiling everything after changing a single line in a single file is a big part of how we keep compile times low: instead of recompiling everything, we recompile only the things that changed, plus the things that depend (directly or transitively) on them. This is partly what makes build management tools like Make or Gradle so much better for this than general-purpose scripting languages like Bash or Groovy. That is, Make and Gradle provide easy syntax for declaring dependencies, whereas Bash and Groovy do not.
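The core of that rule is simple enough to sketch. Here is a minimal, hypothetical illustration in Go of the single-level check Make applies (real Make also walks the dependency graph transitively): rebuild a target only when it is missing or older than at least one of its inputs. The file names and "build" step are invented:

```go
// Minimal sketch of Make's rebuild rule: a target is rebuilt only when
// it is missing or older than at least one of its inputs.
package main

import (
	"fmt"
	"os"
)

// needsRebuild reports whether target is missing or older than any input.
func needsRebuild(target string, inputs []string) (bool, error) {
	out, err := os.Stat(target)
	if os.IsNotExist(err) {
		return true, nil // no output yet: must build
	}
	if err != nil {
		return false, err
	}
	for _, in := range inputs {
		src, err := os.Stat(in)
		if err != nil {
			return false, err
		}
		if src.ModTime().After(out.ModTime()) {
			return true, nil // this input changed since the last build
		}
	}
	return false, nil
}

func main() {
	dir, _ := os.MkdirTemp("", "build")
	defer os.RemoveAll(dir)

	src := dir + "/main.go"
	obj := dir + "/main.o" // hypothetical build output
	os.WriteFile(src, []byte("package main"), 0o644)

	// The output doesn't exist yet, so a rebuild is needed.
	rebuild, _ := needsRebuild(obj, []string{src})
	fmt.Println("rebuild needed:", rebuild)

	// Simulate a successful build, then check again: now up to date.
	os.WriteFile(obj, []byte("obj"), 0o644)
	rebuild, _ = needsRebuild(obj, []string{src})
	fmt.Println("rebuild needed:", rebuild)
}
```

Go’s own toolchain applies the same idea through its build cache: an unchanged package is never recompiled, which is why only clean builds of large projects are slow.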