Does it still make sense to think about efficient code at the micro scale in C++?


Early in my programming career I was in love with pointer twiddling, lean code, doing everything with as few layers of abstraction as possible, and so on. The code was very C-like, I would say: close to the metal and clever.

Time has passed, I have a lot more experience, and I'm noticing that my attitude towards code has changed.

And by changed, I mean I don't want to bother with this everyday low-level stuff anymore, unless it's super easy to do.

If there is string concatenation to do, I will never deliberately choose raw pointers. If I need to concatenate things, I might add a quick guessed reserve for an STL container, but I don't want to bother with preallocated buffers, C-like APIs, caches, avoiding disk hits, or thrashing. I know a realloc might happen from time to time, with all the heap locks, extra copying and so on, but I don't even bother anymore.
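
To make the contrast concrete, here is a minimal sketch of the two styles being described: lazily appending versus adding a quick guessed reserve() first. The function names and data below are invented for the example, not taken from any real code base.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Lazy style: just append and let the string grow however it likes.
std::string join_lazy(const std::vector<std::string>& parts) {
    std::string out;
    for (const auto& p : parts)
        out += p;                      // may reallocate several times
    return out;
}

// "Quick guess reserve" style: estimate the total size, then append.
std::string join_reserved(const std::vector<std::string>& parts) {
    std::size_t guess = 0;
    for (const auto& p : parts)
        guess += p.size();
    std::string out;
    out.reserve(guess);                // at most one allocation if the guess holds
    for (const auto& p : parts)
        out += p;
    return out;
}

int main() {
    std::vector<std::string> parts{"one ", "two ", "three"};
    std::cout << join_lazy(parts) << '\n' << join_reserved(parts) << '\n';
}
```

Whether the reserved version is worth the extra lines is exactly the question; in many programs a profiler will show no measurable difference between the two.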

Maybe it is because early in my career I spent time dreaming up clean and beautiful schemes with worker threads that process large amounts of trivial data, only to find out later that the whole unoptimized realloc+copy+realloc+copy+realloc+copy, wrapped in five layers of abstraction, didn't even register in the profiler.

Maybe it’s because I’ve seen how compilers sometimes do magic.

However, such trivial things do add up. When your whole code base uses higher-level constructs, chances are there will be reallocs all over the place, or cold disk hits, and that can create a general perception of a slow application.

Sort of like WTL vs MFC. Or how you instantly know that the application is written in Java when you launch it.

But is it really? Are there any comparisons of software that uses only very lean, hand-tuned code versus software that uses generic containers all over the place?

Would you even feel a difference if string handling throughout the program used pre-reserved buffers and smart moves, versus lazily calling string.append in every place where such functionality is needed?

Do you try to implement mitigations for such simple, well-known things, even though they are cheap to execute anyway?

Can software turn into bloatware just because extra caution is not applied to every line of code?

3

No. Profilers can quickly and adequately tell you exactly where the problem spots are.

The reason some frameworks are problematic is that when the problem spot is in a framework or virtual-machine function, you are screwed. If you realistically have complete performance control over the application, then go nuts. This is why you can tell when an app is written in Java: when the slowdown is in the JVM code, there's nothing any dev can do about it. However, this tends not to be an issue with C++ development in general.

Not to mention that the power of compilers to cut the crap and make your very high-level code go much faster grows every year.

Are there any comparisons of software that uses only very lean, hand-tuned code versus software that uses generic containers all over the place?

Hate to break it to you, buddy, but generic containers are lean and smart. One of their primary advantages is that because you don’t have to re-write them for every type that ever lived, you can afford to write high-performance code once and re-use it. It’s a simple application of DRY that improves both performance and reliability. Writing it yourself 999999 times isn’t the smart code, that’s the dumb code.
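
As a hedged sketch of that write-once, reuse-everywhere point (sort_unique is a made-up helper, not a standard function): one template built on the standard containers and algorithms serves every comparable element type without being rewritten.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// One generic "sort and deduplicate" routine, written once,
// reused for any comparable element type.
template <typename T>
void sort_unique(std::vector<T>& v) {
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());
}

int main() {
    std::vector<int> ids{3, 1, 3, 2};
    std::vector<std::string> names{"bob", "alice", "bob"};

    sort_unique(ids);    // same source code, instantiated for int
    sort_unique(names);  // ...and for std::string

    std::cout << ids.size() << ' ' << names.size() << '\n';  // prints "3 2"
}
```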

2

The language doesn’t make a big difference in what it makes sense to care about. If you’re working on a project where your code is currently 4.2 KiB, and you need to add two more functions, and fit the result into a 4 KiB ROM, then coding to minimize size probably makes a lot of sense.

On the other hand, if you're coding for desktop machines, where 16 gigabytes of RAM and 2 terabytes of drive space cost a total of about $200, spending a lot of time on making sure your code is the smallest and fastest it could possibly be is probably misplaced zeal.

The obvious place the language enters the question is that in the case of C++, a lot of work has been put into providing higher-level constructs while minimizing the penalty for using them instead of tailoring all the code to the exact task at hand. This wasn't always the case: years ago, Alexander Stepanov wrote a benchmark showing the penalty associated with using the higher-level constructs in the Standard Template Library. Most of the tests had at least two or three “steps” from code written in a maximally C-like style to code taking full advantage of the standard containers, algorithms and iterators.

When the STL was new, it was pretty much a given that every step you took away from C-style code toward fuller use of containers/algorithms/iterators carried a penalty, so by the time you got to the maximally STL-ish version, it was often anywhere from 2 to 5 times slower (and sometimes closer to 10 times slower).

With current compilers that’s no longer the case — in fact, it’s no longer all that rare for the C-like code to actually be a bit slower than the STL code (though rarely by enough to care much about).
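
For a concrete feel of that kind of comparison, here is an illustrative sketch in the same spirit (not Stepanov's actual benchmark): a maximally C-style summation next to its STL counterpart. With optimizations enabled, current compilers typically emit essentially the same loop for both; measuring on your own compiler is the only way to be sure.

```cpp
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Maximally C-style: raw pointer arithmetic over a buffer.
double sum_c_style(const double* data, std::size_t n) {
    double total = 0.0;
    for (const double* p = data; p != data + n; ++p)
        total += *p;
    return total;
}

// STL style: standard container plus standard algorithm.
double sum_stl(const std::vector<double>& data) {
    return std::accumulate(data.begin(), data.end(), 0.0);
}

int main() {
    std::vector<double> v{1.0, 2.0, 3.0};
    std::cout << sum_c_style(v.data(), v.size()) << ' ' << sum_stl(v) << '\n';
}
```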

This has taken a great deal of effort on the part of compiler vendors, but since they've done that work, it usually makes the most sense to take advantage of it. The fact that you can write equivalent code much more quickly and easily is handy, but the fact that in the process you typically reduce bugs dramatically is generally much more important.

1

Yes, of course, in the right places. Would you rely on well-tuned libraries such as the C++ standard library if they were written poorly and ran 10 times slower than they do today? Would everybody you've worked with give the same response to the previous question?

“C-like” and “clever” aren't generally compliments in C++. Generic containers aren't inherently slow. In C++, abstraction layers can be written such that they cost next to nothing at runtime (if that is a concern).
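
As a hedged illustration of such a cheap abstraction layer (the Meters type below is invented for this example): a thin value wrapper with trivially inlinable members normally compiles down to the same code as passing the raw double around.

```cpp
#include <iostream>

// A thin abstraction over a raw double: it adds type safety,
// and an optimizing compiler typically inlines it away entirely.
class Meters {
public:
    explicit Meters(double value) : value_(value) {}
    double value() const { return value_; }
    Meters operator+(Meters other) const { return Meters(value_ + other.value_); }
private:
    double value_;
};

int main() {
    Meters a(1.5), b(2.5);
    Meters c = a + b;                 // no virtual calls, no allocation
    std::cout << c.value() << '\n';   // prints 4
}
```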

Your energy (assuming it is finite) should not be divided evenly among all lines of code. Instead, if speed, size, and/or resource utilization are considerations, look at the areas which get the most use and reuse, and which most commonly serve as the foundation of other implementations. Staying close to the metal is easy when you are reusing and writing good implementations.

As libraries, kernels, hardware, compilers, etc. vary and advance, you will realize that you can't predict the optimal solution for every platform/system. You can't always win, nor should you attempt to always win given those great variations (consider somebody endlessly twiddling implementations of std::string for multiple SSO sizes). However, inefficient general-purpose abstractions can quickly add up and cost several times more than well-executed, efficient implementations.
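
One small, hedged example of the kind of avoidable cost that adds up when it is repeated everywhere (both functions below are invented for illustration): taking a string parameter by value in a hot path versus by const reference.

```cpp
#include <iostream>
#include <string>

// Copies the haystack on every call (and may allocate if it is long).
bool contains_by_value(std::string haystack, const std::string& needle) {
    return haystack.find(needle) != std::string::npos;
}

// Same behaviour with no copy: the kind of cheap habit that pays off
// once the call sits in a path that runs millions of times.
bool contains_by_ref(const std::string& haystack, const std::string& needle) {
    return haystack.find(needle) != std::string::npos;
}

int main() {
    std::string text = "generic containers are lean and smart";
    std::cout << contains_by_value(text, "lean") << ' '
              << contains_by_ref(text, "lean") << '\n';   // prints "1 1"
}
```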

The bottom line is: “Yes, lazily written programs can easily be many times slower and more resource-hungry than well-written programs.”

Whether your drawing code (as an example) needs to be, or should be, ten times faster than it is today is another question. If the program is a small one-trick pony, it may not need any additional optimization. As complexity grows, though, these inefficiencies and poor resource utilization quickly become very significant problems. If you know users will often push your program to the ceiling of the hardware's capabilities, and time or performance is critical, you should take some time to consider multiple approaches to an implementation and to write programs that are efficient early on.

Imagine how much better your computing experience would be if all the programs you use today were 10 times faster, used far less memory, made good use of parallelism, and so on. The complexity of software grows almost as fast as hardware advances, but a lot of that 'growth' is also bloat and implementations that have never been well optimized. There's a lot of room for improvement.

Compilers for high-level languages are like credit cards: they are really handy, and they make it very easy to buy great things you don't really need, but if you can keep that tendency under control, they are the best thing.
The problem is when, like most of us, you lose self-control and spend more than you should.

If you only write little academic-style programs, it's never going to be much of an issue, but here's an example where a realistically complex program was written in C++ using seemingly sensible collection classes and a sensible design approach.
In that program, there were six (6) performance enhancements, resulting in (pay attention) a speedup factor of seven hundred thirty (730) times.
Only one of those six enhancements was the kind of thing an optimizing compiler could have produced.
The rest were simply design changes, having nothing to do with C++ vs. C.

So if you simply think “compiler optimization is getting so good that I can just run wild”, that's like assuming that because you have a really good credit card, with low interest and great online banking, you can't outspend yourself.
You sure can, and you probably will, and the technology won't help.

In the credit-card world I can’t offer you a magic bullet, but in the software world I can.
I use a low-tech manual profiling technique that actually engages your brain and finds things that ordinary profilers do not find.
It is explained here.
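
That write-up is not reproduced here; it describes a manual, debugger-based sampling approach rather than code you paste in. As a much simpler complement (and explicitly not the linked technique), here is a minimal std::chrono scoped timer you could drop around a region you already suspect; everything in it is invented for this sketch.

```cpp
#include <chrono>
#include <cstdio>

// A minimal scoped timer: prints the wall-clock time spent between
// construction and destruction of the object.
struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

    explicit ScopedTimer(const char* l) : label(l) {}
    ~ScopedTimer() {
        auto elapsed = std::chrono::steady_clock::now() - start;
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
        std::printf("%s: %lld ms\n", label, static_cast<long long>(ms));
    }
};

int main() {
    ScopedTimer t("whole run");
    volatile double sink = 0.0;
    for (int i = 0; i < 10'000'000; ++i)   // stand-in for real work
        sink = sink + i * 0.5;
}
```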

