It’s a rather broad question, but let’s give it a try.
The biggest risk is using a low-level programming language like C, because it forces you to manipulate strings and buffers through raw pointers. Sooner or later the programmer slips up and opens a security hole.
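Just to illustrate the kind of slip-up involved (a hypothetical sketch, not taken from any real program): a fixed-size buffer filled with strcpy() overflows as soon as the input is longer than expected.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical example: copies user input into a fixed-size buffer
 * without checking its length. Any argument longer than 15 bytes
 * overflows 'name' and corrupts adjacent stack memory. */
int main(int argc, char *argv[])
{
    char name[16];

    if (argc > 1) {
        strcpy(name, argv[1]);   /* no bounds check: classic overflow */
        printf("Hello, %s\n", name);
    }
    return 0;
}
```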
Using a higher-level language reduces the risk. Of course, the language’s compiler or interpreter may itself have bugs, since it is probably written in C, but as a rule that kind of code goes through much more scrutiny, and the risk is concentrated in a smaller area.
If using C is mandatory, the ideal is to adopt a string library; the venerable qmail, for example, used a library written by its own author. Again, the library may have bugs, but you concentrate the risk in a relatively small piece of code instead of spreading it throughout every program.
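The idea looks roughly like this (a minimal sketch, not qmail’s actual API): a small string type that carries its own capacity, so callers append through a checked function instead of touching raw pointers.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of a bounded string type (not qmail's stralloc API,
 * just the general idea): the structure tracks its own capacity, so
 * appends grow the buffer instead of overflowing it. */
typedef struct {
    char   *data;
    size_t  len;
    size_t  cap;
} str_t;

/* Append 'n' bytes, growing the buffer as needed. Returns 0 on success. */
int str_append(str_t *s, const char *src, size_t n)
{
    if (s->len + n + 1 > s->cap) {
        size_t newcap = (s->len + n + 1) * 2;
        char *p = realloc(s->data, newcap);
        if (p == NULL)
            return -1;              /* out of memory, caller decides */
        s->data = p;
        s->cap  = newcap;
    }
    memcpy(s->data + s->len, src, n);
    s->len += n;
    s->data[s->len] = '\0';
    return 0;
}

int main(void)
{
    str_t s = { NULL, 0, 0 };

    str_append(&s, "hello, ", 7);
    str_append(&s, "world", 5);
    printf("%s (%zu bytes)\n", s.data, s.len);
    free(s.data);
    return 0;
}
```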
The operating system can help a lot to mitigate the risk of buffer overflows, with features such as:
randomization of each process’s memory layout (ASLR): prevents the same elements from always occupying the same virtual address. Canned "script kiddie" exploits assume a fixed address (see the sketch after this list);
NX bit (requires CPU support): marks the stack as non-executable, which makes some buffer overflow attack modes unfeasible.
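A quick way to see the randomization in action (my own experiment, assuming a system with ASLR enabled): print the address of a stack variable, a heap allocation and the code itself, then run the program a few times and watch the values change.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical experiment: on a system with ASLR enabled, the printed
 * addresses change on every execution, which is exactly what breaks
 * exploits that rely on hard-coded addresses. */
int main(void)
{
    int   local = 0;                 /* lives on the stack */
    void *heap  = malloc(16);        /* lives on the heap  */

    printf("stack: %p\n", (void *)&local);
    printf("heap : %p\n", heap);
    printf("code : %p\n", (void *)main);

    free(heap);
    return 0;
}
```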
The compiler can also help by detecting some types of buffer overflow through stack checks. I believe every modern compiler has by now incorporated the ideas of "StackGuard", a popular GCC fork from the 1990s.
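In GCC and Clang this shows up as the -fstack-protector family of flags (my illustration; the answer itself only mentions StackGuard). The compiler places a random "canary" value between the local buffers and the saved return address and aborts if it has been clobbered on return:

```c
/* Rough sketch (flags are GCC/Clang's -fstack-protector family; the
 * exact abort message varies by libc version):
 *
 *   cc -fstack-protector-strong guarded.c -o guarded
 *   ./guarded AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
 *   *** stack smashing detected ***: terminated
 */
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[16];

    if (argc > 1)
        strcpy(buf, argv[1]);   /* overflow clobbers the canary */
    return 0;
}
```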
The C library (libc) helps by detecting errors such as a double free(), which is not a buffer overflow but is also an attack vector.
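For example (my sketch; the exact message depends on the libc version), a modern glibc aborts the program when it sees the same pointer freed twice:

```c
#include <stdlib.h>

/* Hypothetical sketch of a double free. A modern glibc typically
 * detects it and aborts with a message along the lines of
 * "free(): double free detected" (wording varies by version). */
int main(void)
{
    char *p = malloc(32);

    free(p);
    free(p);    /* second free of the same pointer: libc aborts here */
    return 0;
}
```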
In both the compiler and libc there are even stronger protections, but they carry a performance cost, so the developer can opt into them when the trade-off is worth it. You often inherit poor-quality C code that you cannot rewrite, and then the only way out is to defend yourself.
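Examples of that kind of heavier, opt-in protection (my additions, not cited in the answer) are glibc’s _FORTIFY_SOURCE checks and compiler sanitizers such as AddressSanitizer; both are enabled with compile flags rather than code changes:

```c
/* Illustration of the heavier, opt-in protections (flag names are the
 * usual GCC/Clang and glibc ones; check your toolchain's documentation):
 *
 *   cc -O2 -D_FORTIFY_SOURCE=2 app.c -o app      # libc-level bounds checks
 *   cc -g  -fsanitize=address  app.c -o app      # AddressSanitizer (slower)
 *
 * With AddressSanitizer, the one-byte overflow below is reported as a
 * heap-buffer-overflow at runtime instead of silently corrupting memory. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);

    memset(buf, 'A', 17);   /* writes one byte past the allocation */
    free(buf);
    return 0;
}
```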
In short, every modern operating system has protections at several levels, and the situation is much better than in the 1990s. But the best prophylaxis is certainly to avoid low-level languages when they are not needed.