
Harnessing .NET for Scalable and Maintainable Solutions

Last updated on August 27, 2024


Embarking on a new project is always thrilling and full of promise, but it also comes with numerous challenges and requires careful decision-making to support the system's future growth. In this article, we explore the strategies and choices involved in a project designed to progressively replace a large existing system with a new platform. While the project specifics are unique, the principles discussed apply universally: the techniques and patterns described here offer high scalability and resilience, making them adaptable to various projects and capable of absorbing significant change.

Implementation

Tech stack overview

When choosing our technology, one of the keys to building scalable web solutions is abiding by the tenets of the twelve-factor app methodology and industry standards at large.

From console applications to Web APIs, contemporary .NET largely supports and promotes these by design. The .NET ecosystem offers inherent support for platform agnosticism, observability, containerization, and abstractions, promoting software maintainability and configurability.

Apps developed within this ecosystem effortlessly scale to business needs, exhibit cross-environment support, and demonstrate deployment agnosticism. Microsoft's commitment to an open-source ecosystem is complemented by formal guidance, standards, and battle-tested practices, ensuring stability, reliability, and growth. The ecosystem provides enterprise-grade libraries, such as MediatR and Entity Framework Core, supporting diverse software craftsmanship practices and closely reflecting business patterns.

The combination of open-source contributions and vendor oversight ensures stability, adherence to good practices, and rapid solutions for both common and niche development challenges, minimizing costs and accelerating time to production.

Observability

Observability is paramount for any deployed web application, enabling efficient support, issue identification, and ensuring user satisfaction. With diverse usage patterns and the inherent complexity of web stacks, a robust, future-proof solution adhering to industry standards is essential.

After researching the available .NET tools, we decided to go with Serilog, a well-established logging solution in .NET. It integrates with numerous logging sinks, such as Seq, Grafana Loki, or Elasticsearch, and offers enrichers for transparently attaching additional data to log events.

Serilog can be set up both in code and through configuration; the latter keeps the application code agnostic to the logging sinks attached. Logging severity levels, specific overrides, and similar settings can be made environment-dependent through configuration. For more information, see https://github.com/serilog/serilog-settings-configuration

Its configuration flexibility allows for seamless integration without code modifications: setting it up is as simple as plugging it in under the generic .NET logging abstraction, without changing any code semantics.
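As a rough sketch, a minimal program along these lines (assuming the Serilog.AspNetCore and Serilog.Settings.Configuration packages) could look as follows; application code keeps logging through the standard ILogger<T> abstraction while sinks and levels come from the "Serilog" configuration section:

```csharp
// Minimal hosting sketch: Serilog plugged in under the generic .NET logging abstraction,
// with sinks, minimum levels, and enrichers read from configuration rather than code.
using Microsoft.Extensions.Logging;
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Read the "Serilog" section from appsettings.json / environment-specific overrides,
// so the application stays agnostic to which logging sinks are attached.
builder.Host.UseSerilog((context, loggerConfiguration) =>
    loggerConfiguration.ReadFrom.Configuration(context.Configuration));

var app = builder.Build();

app.MapGet("/", (ILogger<Program> logger) =>
{
    // Application code only ever sees Microsoft.Extensions.Logging abstractions.
    logger.LogInformation("Handling request at {Timestamp}", DateTimeOffset.UtcNow);
    return "Hello";
});

app.Run();
```

With this in place, switching from, say, console output locally to Seq or Elasticsearch in production is purely a matter of changing the "Serilog" configuration section per environment.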

Additionally, the .NET ecosystem supports the OpenTelemetry standard for instrumentation and data propagation, ensuring low-impact integration and compliance with industry standards without leaking into application code.

This keeps the underlying solution low-impact on the application code: developers work with familiar conventions, the telemetry produced follows industry-standard formats, and separation of concerns is preserved.
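For illustration, a minimal OpenTelemetry setup of this kind might look like the sketch below, assuming the OpenTelemetry.Extensions.Hosting, instrumentation, and OTLP exporter packages; the service name "orders-api" is a placeholder, not part of our actual system:

```csharp
// Sketch of low-impact instrumentation: traces and metrics are collected by
// instrumentation libraries, so application code does not reference telemetry APIs.
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService(serviceName: "orders-api"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // spans for incoming HTTP requests
        .AddHttpClientInstrumentation()   // spans for outgoing HTTP calls
        .AddOtlpExporter())               // OTLP endpoint configured via standard env vars
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter());

var app = builder.Build();
app.Run();
```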

Local environment

A convenient, efficient, stable, and trustworthy development environment is crucial for the development process. Even with the advent of Cloud Development Environments, developing multi-part solutions often happens on a local machine, where the different components of the system have their infrastructure wired together through tools such as Docker Compose.

As we require our system to consume various events from external sources, we explored several options to create a convenient setup for testing different flows locally as well as on the CI pipeline. Our aim was to identify a solution that doesn't burden our local machines excessively while providing all the necessary functionalities our system needs.

One approach frequently considered in such cases is to create dedicated debugging routes, taking care that this part of the API surface is never exposed in a production environment. We found this to be too risky and to add too much overhead to production code.

Having defined proper abstractions at the application level, effectively separating application logic from the underlying infrastructure, we considered setting up the local environment quite differently from the production one, using lighter solutions that still provide all the functionality we need.

To verify logic invoked by external means, such as consuming messages or events, we decided to use Redis as a messaging solution, which can closely replicate the production message bus setup.

Besides its typical usage scenarios of distributed caching, distributed lock implementations, and safely incremented counters across multiple processes, Redis can also serve as a very convenient, schema-less messaging solution with implicitly created message channels/topics. For more information, see https://redis.com/glossary/pub-sub/ or https://redis.io/docs/interact/pubsub/.

Implementing such a connector was trivial and didn't require solution-specific code, as long as the part of the message format relevant to the handler logic matches the production setting (JSON, for example). The connector can also be toggled via flags or environment-specific configuration, as shown in the sketch below.
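A rough sketch of such a connector, using the StackExchange.Redis client, could look like this; the OrderCreated message and channel name are purely illustrative and not taken from our actual codebase:

```csharp
// Local-only messaging connector built on Redis pub/sub. Handlers receive the same
// JSON payload shape that the production message bus would deliver.
using System.Text.Json;
using StackExchange.Redis;

public sealed record OrderCreated(Guid OrderId, decimal Total);

public sealed class RedisMessageListener : IAsyncDisposable
{
    private readonly ConnectionMultiplexer _connection;

    public RedisMessageListener(string connectionString) =>
        _connection = ConnectionMultiplexer.Connect(connectionString);

    public async Task SubscribeAsync(string channel, Func<OrderCreated, Task> handler)
    {
        ISubscriber subscriber = _connection.GetSubscriber();

        // Channels/topics are created implicitly on first use.
        ChannelMessageQueue queue = await subscriber.SubscribeAsync(RedisChannel.Literal(channel));
        queue.OnMessage(async message =>
        {
            var @event = JsonSerializer.Deserialize<OrderCreated>(message.Message.ToString());
            if (@event is not null)
                await handler(@event);
        });
    }

    public async ValueTask DisposeAsync() => await _connection.DisposeAsync();
}
```

A test or stub producer can then push events into the same channel with ISubscriber.PublishAsync, exercising exactly the handler logic that the production bus would trigger.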

Interacting with the database

At this stage of the project, we only knew for certain that we wanted a relational database; because circumstances were still changing, we had not yet finalized the specific production engine. We therefore opted for one of the most widely used ORMs in the .NET ecosystem, EF Core, which, combined with appropriate layering of the application, allowed us to create a robust and flexible system.

EF Core is extensively documented, providing insights into efficient usage as well as potential limitations. With support for various database engines, including NoSQL options, EF Core prioritizes developer convenience, language-centric semantics, concurrency support, and performance optimizations, making it suitable for diverse applications. See the official EF Core documentation for more details.

While ORM solutions like EF Core may sometimes introduce querying inefficiencies, they streamline common fetching and persisting tasks, reducing project complexity by abstracting away database query language.
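For illustration, a typical slice of such a setup might look like the following; the Order entity, AppDbContext, and query are made-up examples rather than our production model:

```csharp
// Sketch of an EF Core mapping plus a read-side query. LINQ is translated to the
// provider-specific SQL, so calling code does not depend on the final database engine.
using Microsoft.EntityFrameworkCore;

public sealed class Order
{
    public Guid Id { get; set; }
    public string CustomerEmail { get; set; } = string.Empty;
    public decimal Total { get; set; }
    public DateTimeOffset CreatedAt { get; set; }
}

public sealed class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();
}

public sealed class OrderReader
{
    private readonly AppDbContext _db;

    public OrderReader(AppDbContext db) => _db = db;

    public Task<List<Order>> GetRecentAsync(DateTimeOffset since, CancellationToken ct) =>
        _db.Orders
            .AsNoTracking()                       // read-only query, no change tracking cost
            .Where(o => o.CreatedAt >= since)
            .OrderByDescending(o => o.CreatedAt)
            .Take(50)
            .ToListAsync(ct);
}
```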

For scenarios where more complex querying is needed, we considered another popular library, Dapper, which enables direct use of the query language and stored procedures safely, guarding against attack vectors such as SQL injection.
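A hedged sketch of what dropping down to Dapper could look like is shown below (assuming PostgreSQL via Npgsql here; any ADO.NET provider works, and the table and column names are illustrative). Values travel as parameters rather than being concatenated into the SQL text:

```csharp
// Hand-written SQL with Dapper: parameters are bound, never string-concatenated.
using Dapper;
using Npgsql;

public sealed record OrderSummary(Guid Id, decimal Total);

public sealed class OrderSummaryQuery
{
    private readonly string _connectionString;

    public OrderSummaryQuery(string connectionString) => _connectionString = connectionString;

    public async Task<IReadOnlyList<OrderSummary>> GetAboveAsync(decimal minTotal)
    {
        const string sql = """
            SELECT id, total
            FROM orders
            WHERE total >= @MinTotal
            ORDER BY total DESC
            """;

        await using var connection = new NpgsqlConnection(_connectionString);

        // The anonymous object supplies named parameters, guarding against SQL injection.
        var rows = await connection.QueryAsync<OrderSummary>(sql, new { MinTotal = minTotal });
        return rows.ToList();
    }
}
```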

Integrating with legacy data

As our project centers around a rewrite, with Users gradually transitioning to new functionality while the old system remains active, we carefully weighed data integration strategies.

To implement this, we considered a few approaches that could potentially match our requirements:

Scheduled jobs: Scheduled or on-demand synchronization jobs (or a combination of both) could solve the problem, either by implementing a differential strategy or by replicating the entire state of the old system. However, we found that this may impact performance too heavily and presents many challenges in achieving true consistency. Polling for data also lacks insight into deletions made since the previous polling iteration, and it risks coupling the new system to unnecessary data, inviting future issues.

This also, by its very nature, poses the risk of leaking schema internals outside of the system governing the data, so care and discipline must be taken not to couple the new system too extensively.

Utilizing the API: While leveraging the old system's API for data retrieval might be tempting, since it ensures that data leaves through established logic, it may burden the system and introduce direct coupling, which in many contexts does not align with the rewrite initiative.

There is also a risk that the legacy system does not expose a suitable API, so this strategy may involve additional work before the actual integration.

Extending the old system with event production: A more granular approach involves identifying the spots in the old system where events should be produced for state changes. Depending on the complexity of the system's code, this may be challenging and risky, and it requires additional error-handling mechanisms. The complexity of event-driven propagation and the investment needed to understand and modify the existing system's internals may vary.

Capturing data changes at the database level: Another approach is utilizing the Change Data Capture (CDC) database mechanism, which offers a low-overhead, push-based approach to propagating data without modifying the existing system's code.

This approach is transparent to both the producer and consumer of events and may prove to be the optimal choice, where modification of a stable legacy system is not an investment worth undertaking.

Solutions like Debezium provide ready-made connectors for publishing data change events. It's worth noting that some business logic may be required to transform the captured changes into more meaningful information, exposing only the minimal set of data that others need to process.
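To illustrate that kind of translation layer, the sketch below parses a Debezium-style change event (the before/after/op envelope is Debezium's documented JSON format, though the exact shape depends on connector and converter settings) and maps it to a narrower, hypothetical domain event:

```csharp
// Translating a raw Debezium change event into a minimal domain event, so the legacy
// table schema does not leak into the new system. Event and field names are illustrative.
using System.Text.Json;

public sealed record CustomerAddressChanged(long CustomerId, string NewAddress);

public static class CustomerChangeTranslator
{
    public static CustomerAddressChanged? Translate(string debeziumMessage)
    {
        using var document = JsonDocument.Parse(debeziumMessage);
        var payload = document.RootElement.GetProperty("payload");

        // "op" is "c" (create), "u" (update), "d" (delete), or "r" (snapshot read).
        var op = payload.GetProperty("op").GetString();
        if (op is not ("c" or "u"))
            return null; // only creations and updates are interesting downstream

        var after = payload.GetProperty("after");

        return new CustomerAddressChanged(
            after.GetProperty("customer_id").GetInt64(),
            after.GetProperty("address").GetString() ?? string.Empty);
    }
}
```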

Testing approach

In the journey towards a robust and adaptable system architecture, choosing the right testing strategy is essential. Since we care not only about the stability and functionality of our system but also about its seamless integration with existing legacy components, we focused our attention firmly on integration tests.

One of the primary challenges in integration testing revolves around managing dependencies, especially in complex environments where multiple services must be orchestrated. To streamline this process, we've adopted Testcontainers, a powerful tool that simplifies the setup of Docker containers for our dependencies. By leveraging Testcontainers alongside xUnit's test fixtures, we establish a unified testing environment and eliminate the hassle of manual dependency management.
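A minimal sketch of such a fixture, assuming the Testcontainers.PostgreSql package and xUnit, might look like this; the PostgreSQL image and test body are illustrative:

```csharp
// xUnit fixture that spins up a throwaway PostgreSQL container for integration tests.
using Testcontainers.PostgreSql;
using Xunit;

public sealed class PostgresFixture : IAsyncLifetime
{
    private readonly PostgreSqlContainer _container =
        new PostgreSqlBuilder().WithImage("postgres:16-alpine").Build();

    public string ConnectionString => _container.GetConnectionString();

    // Started once per test class; the container is removed when tests finish.
    public Task InitializeAsync() => _container.StartAsync();

    public async Task DisposeAsync() => await _container.DisposeAsync();
}

public sealed class OrderPersistenceTests : IClassFixture<PostgresFixture>
{
    private readonly PostgresFixture _postgres;

    public OrderPersistenceTests(PostgresFixture postgres) => _postgres = postgres;

    [Fact]
    public void Container_provides_a_usable_connection_string()
    {
        // Real tests would run migrations and exercise repositories against this database.
        Assert.False(string.IsNullOrWhiteSpace(_postgres.ConnectionString));
    }
}
```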

This approach not only enhances the efficiency of our testing procedures but also reinforces the reliability and consistency of our testing environment. With all required dependencies seamlessly resolved, we can confidently validate the interoperability of our system with legacy components, providing smoother and more secure integration and enhancing system stability.

.NET and future-proof solutions

Our reasoning and assumptions are rooted in a modular monolith, event-driven, feature-centric architecture that emphasizes minimal coupling and a fully observable platform, and they hold promise for successful system design and evolution.

This is all facilitated by the choice of .NET as the ecosystem, with its stability, robustness, completeness, and well-known and practiced conventions.

With these, we can focus on solving the client's problem with confidence and ease, knowing that the code we write will be easy to maintain and quick to deliver value, and that it will keep benefiting from the ease-of-use and performance improvements that .NET's stable design continues to bring.

Check also our article on Key Architectural Decisions for Project Success.


Our promise

Every year, Brainhub helps 750,000+ founders, leaders and software engineers make smart tech decisions. We earn that trust by openly sharing our insights based on practical software engineering experience.

Authors

Jan Król
github
.NET Software Engineer

Jan is a .NET Software Engineer at Brainhub, passionate about the AWS public cloud, Domain-Driven Design, and software development.

Kamil Sałabun
github
.NET Software Engineer

Developer with a background in multiple technologies. His primary focus is on the .NET ecosystem, application and system architecture, and optimizing the performance of robust solutions.

Michał Szopa
github
JavaScript Software Engineer

JavaScript and AWS specialist, Cloud enthusiast, with 7 years of professional experience in software development.

