Do Not Get Deep In ... Mud
General advice for programmers, summarizing practical experience and conclusions that are still, surprisingly, perceived as controversial. This post wants to help get rid of enterprise software development superstition and activate critical thinking abilities.
This post contains a lot of lies. However, those are lies in the sense that Newton's theory of gravity is a lie. We know it is a lie because we know it fails under some conditions, but we also know what those conditions are and can act accordingly. So let's say this post contains a lot of simplifications, though useful ones. Some points are valid only for Java-like languages; others apply more widely.
Do not use XML for anything... Ever
This point is big, so I will start with it. Many will hate me and my family for this. The industry is still recovering from XML, and not all people realize that. Introducing XML brings more problems (quite hidden at first) than it (questionably) solves. XML was not designed to contain structured data; it was designed to decorate text files with some simple tags. (Arguably poorly — I recommend Douglas Crockford's presentation, where he gets to XML at minute 24.)
An especially bad idea is to generate, bind, or modify code according to XML files. If you work in a statically typed language like Java, XML completely subverts its type system (and you spent so much time learning those badly designed, restrictive generics to embrace it a bit more). Refactoring tools generally stop working, and any change to the code can introduce mysterious run-time exceptions.
The worst practical example I saw was using XML to write input validators that automagically generated modules which hijacked the control flow and inserted the validation steps. Many things about that were bad in practice, the worst being a spreading multiplicity of validators that did the same (or a slightly different) thing in different places, and binding to classes by their names (which no refactoring tool updates). A real maintainability nightmare. Similar reusability problems affect XAML for .NET and GUI layouts for Android, although quite a lot of time was invested in designing those as well as possible to provide some benefits as compensation.
I hear you saying: "I use it for DI with the XYZ framework and it makes my life so much easier." I am pretty sure you mean Dependency Injection by DI here. You should first think of Dependency Inversion instead, which is the indisputably essential principle behind it, and which does not require dependency injection as a mechanism. If you do use dependency injection, do others a favor: isolate it in one place and prevent it from leaking into other parts of the system.
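To make the distinction concrete, here is a minimal sketch of dependency inversion with no framework and no XML at all: the high-level class depends on an interface it owns, and the concrete implementation is handed in through a plain constructor. All names here (Notifier, EmailNotifier, SignupService) are hypothetical examples, not anything from a real framework.

```java
// The high-level service owns this abstraction.
interface Notifier {
    void notify(String user, String message);
}

// One concrete low-level implementation.
final class EmailNotifier implements Notifier {
    public void notify(String user, String message) {
        System.out.println("email to " + user + ": " + message);
    }
}

final class SignupService {
    private final Notifier notifier;

    // Plain constructor injection: no container, no configuration files.
    SignupService(Notifier notifier) { this.notifier = notifier; }

    void signUp(String user) {
        notifier.notify(user, "welcome");
    }
}

public class Demo {
    public static void main(String[] args) {
        // The wiring is ordinary, visible code.
        new SignupService(new EmailNotifier()).signUp("alice");
    }
}
```

The dependency arrow points from the concrete class to the abstraction, which is the whole principle; whether a container or a human calls the constructor is an implementation detail.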
For a custom data format you send or store, use JSON. It has some warts, but it is orders of magnitude easier to work with, and library APIs are significantly simpler and can map JSON to language data structures smoothly. (Heck, you can even write your own JSON parser in one lazy afternoon if you do not like the available ones — trust me, I did it.)
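To illustrate how approachable the format is, here is a deliberately tiny sketch of that one-afternoon idea. It handles only a flat JSON object with string keys and string values, with no escapes and no nesting; for real work you would of course reach for an established library such as Jackson or Gson.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy parser: flat JSON objects of string keys/values only,
// e.g. {"name":"Ada","role":"dev"}. No escapes, no nesting.
public final class TinyJson {
    public static Map<String, String> parseFlatObject(String json) {
        Map<String, String> result = new LinkedHashMap<>();
        String body = json.trim();
        body = body.substring(1, body.length() - 1).trim(); // strip { and }
        if (body.isEmpty()) return result;
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":", 2);
            result.put(unquote(kv[0]), unquote(kv[1]));
        }
        return result;
    }

    private static String unquote(String s) {
        s = s.trim();
        return s.substring(1, s.length() - 1); // strip surrounding quotes
    }
}
```

Compare that with what it takes to consume even a simple XML document correctly, namespaces and all.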
Use the language in the way that fits the problem
There are many cases where programmers do or do not use some part of a language because they read or heard about it and remember one or two related anecdotes. The problem is usually that the original context is not emphasized or is completely forgotten. There is always room for reasonable doubt, and you should use it to your advantage. In this section I try to address the weird habits I have met the most.
Do not be afraid to use new to create an object. There is nothing wrong with instantiating your objects in your code. You do not really need to let some magic create your objects according to some holy Egzemel scrolls. Just keep in mind design principles like dependency inversion, separate and/or isolate the object construction code if you need flexibility, and do not litter your code with new statements uncontrollably.
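One sketch of what "separate and/or isolate the object construction code" can look like: a single composition-root class that is the only place allowed to know concrete types, while everything else sees interfaces. The names here (Clock, SystemClock, Report, CompositionRoot) are hypothetical.

```java
interface Clock { long now(); }

final class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

final class Report {
    private final Clock clock;
    Report(Clock clock) { this.clock = clock; }
    String header() { return "generated at " + clock.now(); }
}

// The one place that calls `new` on concrete classes.
// Change the wiring here without touching the rest of the code.
final class CompositionRoot {
    static Report newReport() { return new Report(new SystemClock()); }
}
```

Plain new statements, kept in one controlled spot, buy you most of the flexibility a container promises, and the wiring stays debuggable.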
Beware of getters/setters. They usually point to a flawed code design, except in a few special cases: a context object with many internal states and not too limited transitions (e.g. a canvas, an external device object, ...), builder objects that construct a consistent object incrementally, and a few others. They too often lead to overly tight coupling and to difficulties with concurrency and with keeping the object state consistent. If you need something with publicly exposed properties, it is usually much better to expose an immutable data structure instead.
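A minimal sketch of that last suggestion, using a Java record (the Point type is a hypothetical example): instead of a bean whose setters can leave it in a half-updated state, "modification" returns a new value and the original can never become inconsistent.

```java
// An immutable data structure with publicly exposed properties.
public record Point(int x, int y) {
    // No setters: changing a Point means creating a new one.
    public Point translated(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Callers can read x() and y() freely, share the object across threads, and use it as a map key, none of which is safe with a getter/setter bean.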
Do not design for future inheritance. I did not say future extension; that is a different thing. Inheritance is just one form of extending functionality, and in most respects the inferior one. Using inheritance speculatively for future extensions is essentially predicting the future, a skill at which we especially suck. Inheritance can be useful for code reuse locally, but even then there are limitations. A common way to avoid some traps (e.g. breaking the transitivity of equivalence) is to make classes either abstract or final. However, even then there are too many hidden contracts that must be satisfied. For future extensions, it is easier to design one or more interfaces and put there just what is really needed (unless you are publishing to an unknown audience you care about; then you should think this through much more thoroughly). If it seems that users would appreciate some template code behind the interface, the interface is either too big and should be split into several smaller ones, and/or you can provide the required functionality in separate library functions.
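A sketch of the "small interface plus separate library functions" alternative to an abstract base class (the Shape and Shapes names are hypothetical): implementers satisfy one tiny contract, and the shared behavior lives in a static function rather than in a superclass they must inherit from.

```java
import java.util.List;

// The interface carries just what is really needed.
interface Shape {
    double area();
}

// Shared behavior as library functions, not template methods.
final class Shapes {
    private Shapes() {}

    static double totalArea(List<? extends Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```

Anything with an area() can participate, including classes that already extend something else, which a base class would have ruled out.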
Use immutable state by default. The majority of programmers now agree with this to some extent; if you do not, you are either very lucky or very careless. Having your objects (data) immutable simplifies too many difficult things: security, concurrent access, deciding where and how to copy defensively, caching, equality comparison and sorting, storing in maps and sets, ... Even just reasoning about the code becomes simpler and less context dependent. And if you use mutable objects in maps and sets (particularly sorted ones, or when you have redefined the equals and hashCode functions), please stop, or find another job.
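The maps-and-sets trap above is easy to demonstrate. In this sketch (MutableKey is a hypothetical example class), mutating an element after insertion changes its hash code, so the HashSet looks in the wrong bucket and can no longer find the very object it contains.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// A mutable class with equals/hashCode defined over its mutable field.
final class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }

    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
}

public class Demo {
    public static void main(String[] args) {
        Set<MutableKey> set = new HashSet<>();
        MutableKey k = new MutableKey(1);
        set.add(k);
        k.id = 2;                             // mutate after insertion
        System.out.println(set.contains(k));  // false: lost in the wrong bucket
    }
}
```

The element is still in the set (size() stays 1), it just became unreachable by lookup; an immutable key makes this failure mode impossible.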
If you are in Java, for example, abuse final as much as appropriate (it is not necessary to use it for method arguments and every local variable). Fortunately, many newer languages significantly simplify defining immutable fields and variables, so you do not have to think about it much anymore.
Do not wrap or extend collections. I cannot even count how many times I have seen an object like People or Team whose only data is a collection of Person objects. This may look like nice encapsulation, especially when you add some generally useful functions to the object. However, when you then see a tenth class encapsulating the same list while replicating some functionality from the other similar objects, or even worse, when you start seeing a Team object where it does not even make sense, just because "Team is a kind of collection of people, and we need a people-like object here, and we are reusing the code!", it is not a very positive feeling (the sound of your teeth grinding does not help much either).
The standard collections that come with the language are already a very good and flexible abstraction. You can use a set of Person objects where you want... well, a set of people. You can use a map from strings to Person objects where you expect a name-to-person mapping. If you need special functions to process collections, you can write them as static functions that may even work on many more collections than you planned for.
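A sketch of that approach (Person, People, and the helper names are hypothetical): plain standard collections carry the data, and the "team-like" behavior lives in static functions that accept any Collection of Person, not just one wrapper type.

```java
import java.util.Collection;
import java.util.Map;
import java.util.stream.Collectors;

record Person(String name, int age) {}

// Static helpers instead of a Team wrapper class.
final class People {
    private People() {}

    // Works on lists, sets, or any other collection of Person.
    static Map<String, Person> byName(Collection<Person> people) {
        return people.stream().collect(Collectors.toMap(Person::name, p -> p));
    }

    static double averageAge(Collection<Person> people) {
        return people.stream().mapToInt(Person::age).average().orElse(0);
    }
}
```

The same two functions serve every place that has people, whereas a Team wrapper would have to be replicated, or worse, reused where it does not belong.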
Do not use checked exceptions. The debate is over here. There are good reasons why no other languages (even those considered safer than Java) use them. Languages on the JVM must even actively work around them (usually by declaring all functions as throwing the Exception base class).
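When you cannot avoid a checked exception coming from the standard library, one common way to keep it out of your own signatures is to translate it into an unchecked one at the boundary. A minimal sketch (the Config class and its read helper are hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class Config {
    private Config() {}

    // No `throws` clause: callers are not forced to catch or redeclare.
    static String read(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // Translate at the boundary; the cause is preserved for debugging.
            throw new UncheckedIOException(e);
        }
    }
}
```

Callers that can recover may still catch UncheckedIOException; everyone else stays uncluttered, which is exactly how most non-Java languages behave by default.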
Do not become a slave of a framework
Here I do not mean the .NET framework or the java.* libraries, or any set of useful composable libraries, but frameworks like JSF, ASP.NET, or Rails. These frameworks try to do a lot on your behalf (including class names, project file structure, and automatically generated code and configuration files) and let you hook your code, in some predefined form, into some predefined places. It works like magic until it starts doing something unexpected or stops doing something required. At that point it is often already too late to redesign the code, and involuntary hacking takes place.
The problem is that these frameworks, unlike e.g. language libraries, are not made to be taken apart, have one part identified and customized, and be put back together. They are the finished program, and your code is its plugin. A plugin is not supposed to customize the program outside of the plugin's responsibility. The only viable way out is to take back control and isolate the framework as an implementation detail from most parts of the application. Usually, stripping the framework down to its smallest essential, well-understood core and separating that is a good start toward very predictable behavior. Then it is possible to incrementally enable more of the original features as needed, replacing them with your own when the functionality does not match expectations, and so prevent getting stuck in a long investigation of what the hell is going on... again.
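One common shape for that isolation is an adapter behind an interface you own, so framework types never leak into the rest of the application. All names in this sketch are hypothetical; FrameworkMailerAdapter stands in for whatever the framework actually provides.

```java
// Owned by your application: no framework types appear here.
interface Mailer {
    void send(String to, String subject);
}

// The only class that would touch the framework's mail API.
final class FrameworkMailerAdapter implements Mailer {
    public void send(String to, String subject) {
        // In real code this line would delegate to the framework.
        System.out.println("sending '" + subject + "' to " + to);
    }
}

final class InvoiceService {
    private final Mailer mailer;   // depends only on your own interface
    InvoiceService(Mailer mailer) { this.mailer = mailer; }

    void invoice(String customer) {
        mailer.send(customer, "your invoice");
    }
}
```

If the framework misbehaves or must be replaced, only the adapter changes; InvoiceService and everything above it never notice.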
Do not create your own framework for others to use. There are several typical outcomes: your framework will be weak, preventing others from getting their work done; the framework will force others to deform and twist their code to the will of The Framework; or the framework will end up approximating the framework that came with the language in the first place, accomplishing less than nothing. Eventually, everyone will hate it, including you, because you will constantly be asked to put in something you do not want to. If you feel the temptation, build reusable libraries instead, which anyone can use or ignore as they wish. That is already difficult enough.
ORM is for losers
A critique of object-relational mapping is another big point that may make you want to dance on my face, so I saved it until the very end and will use something called a metaphor, of which I am particularly proud.
ORM is like riding a bicycle with support wheels without noticing that it can be done without them ("what an insane idea, that cannot work, I know my physics"). You think you are riding a bicycle and cannot understand why people are laughing at you and why everyone else is actually much faster and more elegant without even trying. If you do not take the "leap of faith", throw away your support wheels, and put in a bit of effort to learn the skill properly, you will never know. You may even invent best practices for using the support wheels, and write books about support wheel types and handling differences. You may even win every argument by pointing out that you will beat anyone riding without support wheels in a race across a frozen lake (even though everyone else would succeed faster by just going around it). Other people will enjoy riding a bike freely, which you will never experience.
Conclusion: Be rational, skeptical, but not cynical
I wanted to put pragmatic into the subtitle, but it is being used more and more as an excuse for being sloppy. (If you hear a manager with a weak understanding of the code saying that you are taking a pragmatic approach, more often than not it means the wrong, short-sighted approach.) So I chose values that are generally useful even outside the world of programming.
Being skeptical does not mean dismissing possibilities just for the emotions it brings you. Often you may be right, but I have seen too many of these: "You mean this is from XYZ? That must be expensive crap. Poor suckers who have to use it." Many times without even realizing that it is, for instance, a strongly supported open source project that became popular precisely because it efficiently solves the exact problem you need to solve.
Always use the right tools for the right job. Learning what "right" means is the tricky part, but there are smart people in this area, and sometimes it is "only" about putting aside prejudices and the "enterprise" superstitions so that new possibilities can appear. Then mix in some critical evaluation of the real compromises among the available approaches, and simply try several of them if you are not sure. Try to commit to decisions about fuzzy areas as late as possible. This experience becomes priceless and gives you (hopefully) a lot of credible leverage when later dealing with someone with a very narrow and skewed field of view.