There are 10 common mistakes made by backend developers. Understand them, learn how to prevent each one, and know what to do if they have already happened.
This article is targeted at anyone who writes any kind of backend code, no matter how experienced they are, whether they are fully backend, fullstack, or mostly frontend and write backend code occasionally, and no matter which platform they use (Node.js, PHP, Python, Ruby, Java, .NET, Golang etc.).
Here you will learn about the top 10 mistakes frequently made by backend developers, plus a list of 16 other mistakes. After reading this article, you should know how to prevent a given mistake, what its consequences are, and how to deal with it if it has already happened.
I listed this backend development mistake first because it seems to be the most frequent, and if not treated early, it becomes difficult to treat later, sometimes requiring a huge part of the project to be rewritten. Too much technical debt generally means using bad practices like breaking the SOLID and DRY rules, very long functions/methods, a large number of indentation levels, missing tests (described in the next point), missing documentation, poor variable naming, poor commit messages, not caring enough about performance, etc. It could also mean a bad architecture, which is usually even worse. Essentially, if the code is ugly, you can refactor it microservice by microservice or file by file, but if the entire system is badly designed, it’s much harder to fix.
Over-engineering happens when backend developers use a far too sophisticated pattern when a much simpler solution would work just fine – it breaks the KISS or YAGNI rule. Over-optimization means caring too much about performance where it doesn’t really matter, e.g. shortening a given procedure from 0.1 s to 0.01 s while another procedure, which is the actual bottleneck, takes about 15 s.
In the long term, each of the subproblems leads to wasted development time as developing each new feature generally becomes more and more difficult and at some point close to impossible. In an extreme case, the time to implement each next feature increases exponentially because it requires all the possible paths to be tested or all the existing conditions in the production code to be updated, combining the old features with the new.
Both technical debt and over-engineering make the code too difficult to maintain. Over-optimization may cause that as well if it’s connected with over-engineering, which is often the case.
When developing a given feature, backend developers should consider the upcoming features in order to judge whether extra sophistication or optimization is actually warranted. We should also consider its functional lifespan (how long this feature will be used in production) and its technological lifespan (how long this feature can be used with the current technologies) in order to know how serious the technical debt is at a given moment. It’s critical to code review each feature and each bug fix (another point).
When we realize that a given piece of technical debt or over-engineering makes maintenance too difficult, we should refactor it to satisfy good practices and be simple enough. However, over-optimization, if not connected with over-engineering, needs no treatment, because faster code execution is a good thing in itself. We’ve already wasted time over-optimizing, which is a problem, but that time cannot be recovered.
Missing tests means a given scenario isn’t covered by tests, or there are no tests at all for a given feature. Not testing at each level of the testing pyramid (unit, integration, end-to-end) means we miss tests at least at one level.
It results in production code that is difficult to maintain, as backend developers are afraid to break anything, which increases the number of bugs. However, let’s remember that full test coverage is impossible.
Writing tests at each level alongside every new feature, and regression tests alongside every bug fix, should help. The tests should be incorporated into the daily routine. This may be enforced by every backend developer individually, using TDD (test-driven development) and static code analysis, and checking code coverage.
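As a concrete illustration of writing a regression test alongside a bug fix, here is a minimal sketch. The `slugify` helper and the double-separator bug it once had are hypothetical examples, not code from any real project:

```python
# A minimal sketch of a regression test written alongside a bug fix.
# slugify() is a hypothetical helper; the (invented) bug was that
# consecutive separators produced double hyphens in URL slugs.
import re

def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_collapses_consecutive_separators():
    # Pin down the exact scenario that triggered the bug,
    # so the fix can never silently regress.
    assert slugify("Top 10  Mistakes -- Backend!") == "top-10-mistakes-backend"
```

With a test runner like pytest, this test runs automatically on every push, which is exactly what turns a one-off fix into permanent protection.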
Another, complementary approach to enforcing tests is teamwork: pushing the code as soon as possible and having it reviewed by another backend developer, or sometimes pair programming. Sometimes, someone may oppose writing tests. The opposition can come from a junior backend developer who has no experience in writing tests, or from a product owner who wants features shipped as fast as possible. So it’s crucial to explain to them why the tests are so important. The main reasons are: higher productivity in the long term (sometimes even in the short term, if a given feature is complicated enough), fewer bugs, and ultimately a better architecture. For a more detailed explanation, you can try searching “why should I write tests” on Google, Quora and YouTube.
Generally, applying the steps from the “Prevention” subsection should fix things, but you should analyze which features and scenarios are the most essential and cover them with tests first.
Potential consequences are bad practices or an inconsistent code style, which make the code more difficult to maintain, as well as vulnerabilities and missed edge-case handling.
To keep this from happening, review each PR/MR before merging (and of course never push directly to any of the protected branches, which can be enforced in GitHub or GitLab), and use continuous integration to run static code analysis after each pushed commit.
In order to treat this backend development mistake, we should take steps like starting code reviews, configuring static code analysis and reviewing code which is already on the default branch but hasn’t been reviewed yet.
Sometimes, there can be problems with reviewing every change, so I recommend the following ways to address them:
Having inconsistent technologies or approaches makes maintenance more difficult: it requires much more knowledge, carries a higher risk of bugs/vulnerabilities due to the higher number of dependencies, and slows installation as more dependencies have to be installed.
Use the same technologies and patterns for a given kind of problem, and rely on code review (the previous point).
We should decide which technology or approach is the best for our project and refactor the remaining code to always use the same technology or approach.
The production database might sometimes be fully or partially deleted, or end up with invalid data, due to a bug or some manual operations. Production data is usually even more precious than our code: the code may be the work of dozens of backend developers, while the production data can be the work of thousands or even millions of application users. Moreover, our code can be replaced by another application or sometimes a spreadsheet, while the production database might be impossible to recover without a backup.
Our system users might lose the results of their work. In contrast to the other points, we’re likely to be obliged to pay damages if production data is lost.
Configure automatic backups, e.g. MongoDB Atlas offers periodic automatic backups out of the box, but it’s good to verify they actually run.
If not configured before, configure automatic backups ASAP. If the database has already been deleted without a backup, it’s too late, but if we’re lucky, we may recover part of the production data from the logs (another point) or ask our teammates whether any of them has made a manual backup.
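If the DBMS provider doesn’t offer managed backups, a scheduled dump is the usual fallback. The sketch below builds a timestamped `mongodump` invocation; the URI and backup directory are hypothetical, and in production this would run from cron or another scheduler rather than by hand:

```python
# A minimal sketch of a scheduled backup command builder, assuming a
# MongoDB deployment backed up with mongodump. The connection URI and
# backup root below are placeholders for illustration only.
import datetime
import subprocess

def build_backup_command(uri: str, backup_root: str) -> list:
    """Build a mongodump invocation with a timestamped output directory."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return ["mongodump", "--uri=" + uri, "--out=" + backup_root + "/" + stamp]

def run_backup(uri: str, backup_root: str) -> None:
    # check=True makes a failed dump raise, so the scheduler can alert us
    # instead of silently skipping a backup.
    subprocess.run(build_backup_command(uri, backup_root), check=True)
```

Timestamping each dump keeps multiple restore points; deleting the oldest ones after N days is a typical retention policy.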
Production should be as available as possible, so if any microservice is down for a few hours, it can cause major problems. Therefore we must monitor all the microservices and, if any goes down, bring it up again as fast as possible. This is a DevOps mistake, but besides the DevOps work, backend developers have to implement a status endpoint for each microservice. When a microservice is down, it’s likely some backend development work is needed to fix it.
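A status endpoint can be very small. The sketch below uses only the Python standard library; `check_database` is a hypothetical dependency probe that a real service would replace with an actual ping of its DBMS:

```python
# A minimal sketch of a status (health-check) endpoint using only the
# standard library; check_database() is a hypothetical dependency probe.
import json
from http.server import BaseHTTPRequestHandler

def check_database() -> bool:
    # In a real service this would ping the DBMS; assumed healthy here.
    return True

def health_status() -> dict:
    """Aggregate dependency checks into a single status payload."""
    db_ok = check_database()
    return {"status": "ok" if db_ok else "degraded", "database": db_ok}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            payload = health_status()
            # 200 when healthy, 503 so uptime monitors flag the outage.
            self.send_response(200 if payload["status"] == "ok" else 503)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(payload).encode())
```

Returning a non-2xx code when a dependency is unhealthy matters: most uptime monitors only look at the status code, not the body.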
Major bugs due to a microservice being down, or the entire system being down when the most essential microservices are down.
We should configure a monitor for each production microservice as one of the first actions being done after creating the production environment.
We should configure the missing monitors and resurrect all the dead (down) microservices.
The data model is a key part of the system architecture, so if it’s badly designed, it can cause some of the issues described later.
It can cause invalid production data, data being difficult to analyze or maintain, and very slow data queries.
We should carefully design the data model, discussing it with the whole backend team if it’s relatively small (up to 10 backend team members). In the case of a larger team, we should discuss the model with the subteam accountable for the logic related to this model and additionally with some general data experts or specialized data experts like a key-value DBMS expert.
If a bad data model isn’t deployed to production yet, we can just update the data model and remove the invalid data created at lower environments. If a bad data model is already deployed to production, besides updating the data model, we need to write a migration to fix the invalid data.
Unfortunately, in some situations, invalid production data can be impossible to fix fully. It happens if we save some ambiguous data, e.g. a city name instead of the city ID, as there can be many cities with the same name.
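To make the ambiguity concrete, here is a minimal migration sketch for exactly that example: records stored a city name instead of a city ID, and the backfill can only resolve names that map to a single known city. All city data below is invented for illustration:

```python
# A minimal sketch of a data migration fixing an ambiguous field:
# records stored a city name, and we backfill the city ID where the
# name maps to exactly one known city. All data here is hypothetical.
CITIES = [
    {"id": 1, "name": "Springfield", "country": "US"},
    {"id": 2, "name": "Springfield", "country": "CA"},
    {"id": 3, "name": "Kraków", "country": "PL"},
]

def migrate_city_ids(records: list) -> list:
    """Backfill city_id; ambiguous names stay unresolved for manual review."""
    by_name = {}
    for city in CITIES:
        by_name.setdefault(city["name"], []).append(city["id"])
    for record in records:
        matches = by_name.get(record.get("city_name", ""), [])
        # A unique match can be fixed automatically; anything else cannot.
        record["city_id"] = matches[0] if len(matches) == 1 else None
    return records
```

The records left with `city_id = None` are precisely the ones that are impossible to fix fully without asking the user again, which is the cost of having saved ambiguous data in the first place.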
This is a mistake made mostly by junior backend developers, but sometimes even seniors make it. SQL injection is the dangerous possibility that user input is executed as a query in the database due to string concatenation. It’s actually a general term, because the injection can target another query language like AQL (ArangoDB Query Language), GraphQL, etc. A similar problem can occur when using the eval function or running an operating system command.
A user (attacker) can read data they shouldn’t have access to or, worse, update, insert or delete data.
In order to prevent this backend development mistake, we should pass parameters to each query instead of concatenating them, and perform a careful code review (a previous point). We should use as few string concatenations as possible and make sure each concatenated string part is validated, so it cannot form a dangerous query. It also helps to use multiple databases and multiple database users, so that even if a user exploits something, their power is limited, and to keep automatic production database backups (a previous point) in order to restore data deleted or corrupted by a hacker.
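The first prevention step, passing parameters instead of concatenating them, looks like this minimal sketch. It uses Python’s built-in sqlite3 driver for illustration; every mainstream driver offers the same placeholder mechanism:

```python
# A minimal sketch of parameterized queries (sqlite3 used for
# illustration; other database drivers offer equivalent placeholders).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name: str):
    # UNSAFE: "SELECT * FROM users WHERE name = '" + name + "'"
    # SAFE: the driver escapes the parameter, so a crafted input like
    # "' OR '1'='1" is treated as data, not as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

With concatenation, the injection string would turn the WHERE clause into a tautology and dump every row; with the placeholder, it simply matches no user.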
Applying the prevention steps and deploying them to production as soon as possible.
Logging is essential for reporting any problems and collecting statistics about the system usage.
Fixing bugs already deployed (to dev, staging or production) is more difficult, or sometimes even impossible, without logs.
We should log all the server errors for the production environment, including the stack trace, timestamp, request body/path/headers, and possibly the decoded username and client IP address. Moreover, for the dev environment it’s good to log each request with its full response. We must also redact credentials before logging in order to prevent them from leaking. Sometimes we may need partial logging, e.g. keeping only a few elements of an array followed by something like “<20 other elements>”, in order to fit within disk limits.
It’s also good to log very basic info about each production request, like the timestamp, request path, decoded username and client IP, in order to collect statistics showing how often a given endpoint is used and at which times and days of the week the traffic is highest.
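Credential redaction before logging can be a small, reusable helper. In this minimal sketch, the set of field names treated as sensitive is an assumption for illustration; a real system would maintain its own list:

```python
# A minimal sketch of request logging with credential redaction; the
# field names treated as sensitive are assumptions for illustration.
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api")

SENSITIVE_FIELDS = {"password", "token", "authorization"}

def redact(payload: dict) -> dict:
    """Replace credential values before logging, so they never leak."""
    return {
        key: "<redacted>" if key.lower() in SENSITIVE_FIELDS else value
        for key, value in payload.items()
    }

def log_request(path: str, username: str, body: dict) -> None:
    # Basic per-request line: path + user + redacted body.
    logger.info("path=%s user=%s body=%s", path, username, json.dumps(redact(body)))
```

Running redaction at the logging boundary, rather than at every call site, makes it much harder to forget for a newly added endpoint.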
Implementing the logging ASAP.
Each client error response (a 4xx status code) should contain details so that consumers (usually frontend developers or other web developers) are able to easily fix the passed parameters. However, server error responses (5xx status codes) shouldn’t expose any details, because they could potentially be used to exploit the system.
Moreover, when a stack trace is exposed, there’s a risk it will contain credentials. Therefore, when there’s a server error, we should return a generic message like “Internal server error” and log the error details (a part of the previous point).
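The rule above can be sketched framework-free as a handler returning a status code and body. The `create_user` endpoint and the simulated database failure are hypothetical examples:

```python
# A minimal sketch of the rule: detailed 4xx responses, generic 5xx
# responses with the real error only logged. create_user() and the
# simulated failure below are hypothetical examples.
import logging

logger = logging.getLogger("api")

def create_user(body: dict):
    if "email" not in body:
        # Client error: tell the caller exactly what to fix.
        return 400, {"error": "Missing required field: email"}
    try:
        # Simulated internal failure standing in for real work.
        raise ConnectionError("db host 10.0.0.5 refused connection")
    except Exception:
        # Server error: log full details, expose none of them.
        logger.exception("create_user failed")
        return 500, {"error": "Internal server error"}
```

Note the asymmetry: the 400 body names the missing field, while the 500 body says nothing about the database host that appears only in the logs.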
Missing client error details make using the given endpoint much more difficult, and exposed server error details can cause system exploitation.
Implementing each endpoint according to the rules defined in this point and a decent code review.
Endpoint tests covering various scenarios including client errors (generally, we’re unable to predict server errors).
Detecting both situations and fixing them.
Besides these 10 mistakes, there are some others which are worth remembering and avoiding (although they are less important than those described earlier):
No matter how much backend experience you have, I hope you learned about some mistakes worth avoiding, or at least some details like possible consequences or ways to prevent or treat a given mistake.
If you enjoyed reading, feel free to share this article with your friends and network on social media (Facebook, Twitter, Linkedin).