
Key Architectural Decisions for Ensuring Long-Term Project Success

Last updated on November 26, 2024



Starting a new project is always exciting and full of promise. Yet it also presents many challenges: there are numerous decisions to navigate in order to chart a course that best secures the system's future development.

This article delves into the decision-making, strategies, and choices behind a new project aimed at gradually replacing a large existing system with a new platform.

The architecture we present below gave us simplicity at the beginning of the project, when sudden changes in requirements demanded a lot of flexibility, without compromising the security, quality, or observability of the solution.

Although these decisions were made in the context of a specific project, we believe the concepts are worth considering in other scenarios as well: they form a set of universal techniques and patterns, generic across various projects, that provide high scalability and resilience to demanding change.

Architectural approach: a detailed guide

In the realm of architecture, many practices and principles are instrumental in shaping a project's trajectory. In this section, we explore the architectural considerations we found most valuable, explaining why they matter and how they facilitate seamless expansion and modification of the system.

Now, let’s dig into that.

Modular monolith

When embarking on a project with potential for expansion and evolution, choosing the right architectural approach lays the groundwork for future success. One such approach gaining traction is the Modular Monolith, a design philosophy that offers a pragmatic solution to scalability without the immediate complexity of a microservices architecture.

At its core, the Modular Monolith embraces the concept of modules that encapsulate specific business domains or features and can be developed independently, even though they ship as a single deployable unit. This modular design not only fosters a clear separation of concerns but also facilitates incremental expansion, as new modules can be seamlessly integrated into the existing structure.

Focus on loose coupling

At the heart of the Modular Monolith architecture lies a commitment to minimizing coupling between system components, a fundamental principle for ensuring flexibility, maintainability, and scalability. Tight coupling, where components become overly dependent on one another, leads to brittle systems that are difficult to modify and prone to cascading failures: updating a single component can ripple outward with unintended consequences across the entire system.

One approach is for modules to call each other through well-defined APIs, invoking queries for the data needed in processing; however, nested queries may become unwieldy, increase interdependency, and cause performance issues in some scenarios.
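
To make this concrete, here is a minimal sketch of the synchronous approach in C# (assuming .NET implicit usings; the Bookings and Customers modules, and all type names, are hypothetical):

    // The Customers module exposes a narrow, well-defined API contract;
    // other modules depend only on this interface, never on its internals.
    public interface ICustomersApi
    {
        Task<CustomerDto?> GetCustomerAsync(Guid customerId, CancellationToken ct = default);
    }

    public record CustomerDto(Guid Id, string Email);

    // The Bookings module invokes a query for the data it needs in processing.
    public class BookingService
    {
        private readonly ICustomersApi _customers;

        public BookingService(ICustomersApi customers) => _customers = customers;

        public async Task CreateBookingAsync(Guid customerId, CancellationToken ct)
        {
            var customer = await _customers.GetCustomerAsync(customerId, ct)
                ?? throw new InvalidOperationException($"Unknown customer {customerId}");
            // ... create the booking using only the data the API exposes
        }
    }

The trade-off is that each new data need adds another query across the boundary, which is exactly where chattiness and interdependency can creep in.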

Another, more loosely coupled approach is event-driven: it decouples event producers from consumers, enabling flexible system design and asynchronous communication. Events serve as concise descriptions of occurrences within the system; they can be emitted from a single source and consumed by multiple handlers, each with its own processing logic.

Central to this paradigm are domain events, intricately linked to the domain model and constructed using a shared, ubiquitous language. By embracing domain events, developers encode meaningful changes within the system, fostering a deeper understanding of its behavior and enabling precise, context-aware responses.
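
As an illustration, a domain event can be a simple immutable record named in the ubiquitous language, with each consumer implementing its own handler. The sketch below is hypothetical and framework-free; in practice an in-process dispatcher or a library would route events to handlers:

    // A domain event: a concise, immutable description of something that happened.
    public record OrderPlaced(Guid OrderId, Guid CustomerId, decimal Total);

    // A minimal in-process contract for consuming events.
    public interface IDomainEventHandler<in TEvent>
    {
        Task HandleAsync(TEvent domainEvent, CancellationToken ct);
    }

    // Two independent consumers of the same event, each with its own logic.
    public class ReserveStockHandler : IDomainEventHandler<OrderPlaced>
    {
        public Task HandleAsync(OrderPlaced e, CancellationToken ct)
        {
            // reserve the ordered items in the warehouse
            return Task.CompletedTask;
        }
    }

    public class RecordSalesStatisticsHandler : IDomainEventHandler<OrderPlaced>
    {
        public Task HandleAsync(OrderPlaced e, CancellationToken ct)
        {
            // update sales statistics for reporting
            return Task.CompletedTask;
        }
    }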

Designing systems around events offers inherent flexibility, allowing for the addition of new use cases with minimal impact on existing components. This low coupling between event producers and consumers not only promotes modularity but also enhances system resilience and scalability, as modules remain largely independent of one another.

Moreover, events can serve as a powerful tool for capturing system state and providing a comprehensive history of updates. By persisting events, we can gain insights into the evolution of the domain use cases over time, facilitating debugging, auditing, and analysis tasks.
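
One lightweight way to capture that history, sketched below with System.Text.Json, is to wrap each event in a persistable envelope appended to a log table or stream. This is a simplified illustration of event persistence, not full event sourcing:

    using System.Text.Json;

    // An envelope with enough information to reconstruct what happened and when.
    public record StoredEvent(Guid Id, string Type, string PayloadJson, DateTime OccurredAtUtc);

    public static class EventEnvelope
    {
        public static StoredEvent From(object domainEvent) =>
            new(Guid.NewGuid(),
                domainEvent.GetType().Name,
                JsonSerializer.Serialize(domainEvent, domainEvent.GetType()),
                DateTime.UtcNow);
    }

Appending these rows as events occur yields an audit trail that can later be queried for debugging and analysis.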

What’s also very important in our context, events also emerge as powerful tools when integrating with legacy systems. By leveraging events, organizations can bridge the gap between modern architectures and legacy systems, enabling seamless communication and gradual migration strategies. We'll delve into this in detail later in the article.

Module communication

By definition, each module should be decoupled and independent of the others, so to provide a communication mechanism we implemented integration events.

Integration events are crucial mechanisms for enabling asynchronous communication and coordination between modules or distributed systems. By utilizing integration events, we can effectively decouple domain events from external influences, mitigating the risk of accidentally coupling other modules to our domain events when broadcasting them out to the world.
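
A sketch of the pattern, with hypothetical names, an assumed IEventBus abstraction (in-process or broker-backed), and the IDomainEventHandler contract from the earlier sketch:

    // The domain event stays inside the module...
    public record BookingConfirmed(Guid BookingId, string CustomerEmail);

    // ...while the integration event is a separate, deliberately minimal contract
    // exposed to other modules; it shares no types with the domain model.
    public record BookingConfirmedIntegrationEvent(Guid BookingId);

    public interface IEventBus
    {
        Task PublishAsync<TEvent>(TEvent integrationEvent, CancellationToken ct);
    }

    // A handler translates the internal event into the public contract before
    // broadcasting it, so other modules never couple to our domain events.
    public class BookingConfirmedTranslator : IDomainEventHandler<BookingConfirmed>
    {
        private readonly IEventBus _bus;

        public BookingConfirmedTranslator(IEventBus bus) => _bus = bus;

        public Task HandleAsync(BookingConfirmed e, CancellationToken ct) =>
            _bus.PublishAsync(new BookingConfirmedIntegrationEvent(e.BookingId), ct);
    }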

Be ready to scale

One of the key advantages of the Modular Monolith lies in its inherent readiness for future segmentation. While initially structured as a cohesive unit, the architecture is designed to accommodate potential division into independently deployable units as the project scales. This flexibility mitigates the risk of architectural bottlenecks and minimizes the disruption typically associated with transitioning to a microservices architecture.

By adopting a Modular Monolith approach, teams can strike a balance between the agility required for iterative development and the stability necessary for long-term scalability. This pragmatic approach not only streamlines initial development efforts but also sets the stage for future growth, empowering organizations to adapt to evolving business needs without sacrificing stability or scalability.

Keep your architecture “clean”

With a modular structure defined at a high level, the choice of application architecture for individual modules' internals becomes paramount, especially since this is the area where we, as software engineers, will spend the most time during the project lifecycle.

Without such foresight, the growth of coupling within the codebase may hamper both the speed and safety of changes, increasing both risk and costs over time.

Establishing a robust architectural foundation is a crucial investment for future evolution, yet it also brings forth some significant considerations:

  • it introduces a slight overhead during the initial setup,
  • developers must understand and adhere to these conventions,
  • it requires diligence in the review process to prevent erosion of its benefits.

Taken to extremes, adherence to architectural conventions can overly complicate code. At their core, however, Clean and Hexagonal architectures (two closely related approaches; Clean Architecture builds on the Ports and Adapters ideas of Hexagonal) offer readable and easily enforceable conventions that are widely understood by software engineers and serve as the lingua franca of substantial solutions.

Clean Architecture is a structured approach that emphasizes the separation of concerns, enabling clear delineation between business logic and technical implementation, focusing on defining use cases to drive application behavior.

For the described project, we decided to split each business component into modules with strict boundaries around them, resulting in loosely coupled modules, each with high internal cohesion. This gave us small, easy-to-understand units of work that can later be extended if needed.

In our case, we physically structured each module within the project as follows:

  • create a core layer that holds all the business logic in DDD aggregates, along with the use cases that exercise that logic,
  • create an infrastructure layer that holds everything needed to interact with the infrastructure this module requires for its business operations,
  • create an integration layer used for communication between application modules, whether that communication is synchronous or asynchronous,
  • create the tests that verify the module's functionality, both unit and integration.
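
Under those decisions, the physical layout of a single module might look like the sketch below; the folder names are illustrative rather than the project's actual structure:

    Modules/
      Bookings/
        Bookings.Core/             -- DDD aggregates, domain events, use cases
        Bookings.Infrastructure/   -- persistence and other infrastructure adapters
        Bookings.Integration/      -- integration event contracts and their handlers
        Bookings.Tests/            -- unit and integration tests for the module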

Focus on feature-centric vertical slices

We believe that a feature-centric approach to software development facilitates the seamless translation of business requirements into application functionalities, simplifying the process of meeting client needs. Through workshops, discovery processes (using techniques such as event storming), and meticulous design, we can effectively map business cases to code, enabling a clear and business-focused design of the application.

This approach allows us to construct the application core as a collection of features aligning with essential business activities, offering well-defined and tested flows. By leveraging well-understood abstractions and behaviors, we can compose, sequence, and invoke features, ensuring the integrity of operations and their associated data while consistently enforcing core business logic across all scenarios.

In addition, it allows use cases to trigger side effects using the previously described events, effectively separating concerns such as the logic of making a booking from the necessity of sending a confirmation email, and thus enabling the transfer of responsibilities between features over time.
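
For example, a booking use case can apply the business rules and simply announce the outcome, while the email concern lives in its own handler. The sketch below is hypothetical and reuses the BookingConfirmed, IEventBus, and IDomainEventHandler contracts from the earlier sketches:

    // The use case owns the booking logic and only announces what happened.
    public class ConfirmBookingUseCase
    {
        private readonly IEventBus _events;

        public ConfirmBookingUseCase(IEventBus events) => _events = events;

        public async Task ExecuteAsync(Guid bookingId, string customerEmail, CancellationToken ct)
        {
            // ... load the aggregate, enforce the business rules, persist the change ...
            await _events.PublishAsync(new BookingConfirmed(bookingId, customerEmail), ct);
        }
    }

    // The side effect is a separate feature subscribed to the event; it can
    // evolve or move to another module without touching the booking logic.
    public class SendConfirmationEmailHandler : IDomainEventHandler<BookingConfirmed>
    {
        public Task HandleAsync(BookingConfirmed e, CancellationToken ct)
        {
            // hand e.CustomerEmail to an email gateway
            return Task.CompletedTask;
        }
    }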

This methodology empowers us to implement user stories, deliver application features, and develop multiple features simultaneously, significantly reducing time-to-production without adversely impacting other areas. In this case, the focus remains on the functionality rather than the technical implementation.

Spotlight on observability

Observability plays a pivotal role in systems architecture, particularly during the migration from legacy systems to modern ones. A fully observable system lets us understand its behavior, identify potential issues, and resolve them efficiently.

We chose OpenTelemetry due to its robustness and widespread adoption within the industry. OpenTelemetry is an open-source project that provides a unified approach to collecting observability data such as metrics, logs, and traces from various components of our system. This standardization allows for seamless integration with existing tools and platforms, facilitating easier monitoring and troubleshooting across different environments.
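
As a minimal sketch of what that wiring can look like in an ASP.NET Core host (assuming the OpenTelemetry.Extensions.Hosting, instrumentation, and OTLP exporter NuGet packages; the service name is illustrative):

    using OpenTelemetry.Metrics;
    using OpenTelemetry.Resources;
    using OpenTelemetry.Trace;

    var builder = WebApplication.CreateBuilder(args);

    // Register tracing and metrics for the whole host; logs can be routed
    // through OpenTelemetry's logging integration in a similar way.
    builder.Services.AddOpenTelemetry()
        .ConfigureResource(resource => resource.AddService("bookings-module"))
        .WithTracing(tracing => tracing
            .AddAspNetCoreInstrumentation()   // incoming HTTP requests
            .AddHttpClientInstrumentation()   // outgoing HTTP calls
            .AddOtlpExporter())               // ship traces via OTLP
        .WithMetrics(metrics => metrics
            .AddAspNetCoreInstrumentation()
            .AddOtlpExporter());

    var app = builder.Build();
    app.Run();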

Implemented this way, OpenTelemetry gives us deep insight into the system's behavior, helps us identify performance bottlenecks, and lets us promptly address any issues that arise. This not only enhances our ability to maintain and operate the system effectively but also improves its overall reliability and performance.

Dev experience & business needs

In our project, a lot of effort went into making code fast and easy to produce, with multiple engineers able to contribute at the same time, which provides a great developer experience while satisfying business needs. Thanks to the project structure itself and the robust observability, changes can be deployed quickly, easily, and with confidence.

In later stages, this architecture remains open to extension and evolution, for example into microservices, should the development team grow or a sudden need for scaling appear. Overall, this setup is something many .NET projects in their early stages could borrow from.

See also our article on Harnessing .Net for Scalable and Maintainable Solutions.



Authors

Jan Król
.NET Software Engineer

Jan is a .NET Software Engineer at Brainhub, passionate about the AWS public cloud, Domain-Driven Design, and software development.

Kamil Sałabun
.NET Software Engineer

A developer with a background in multiple technologies. His primary focus is on the .NET ecosystem, application and system architecture, and optimizing the performance of robust solutions.

Michał Szopa
JavaScript Software Engineer

A JavaScript and AWS specialist and cloud enthusiast with 7 years of professional experience in software development.

