My way to apply the test pyramid to an Onion Style architecture with Blazor

It's crucial to understand that the testing pyramid must be tailored to a project's unique requirements. The team is responsible for fine-tuning the ideal testing architecture for their project.

I'm offering an example of a basic test pyramid suited to the Onion architecture I commonly employ in my projects. My intent is to showcase what a testing pyramid might look like for DDD (Domain-Driven Design) projects. However, always consider that it might need further adjustments to fit specific project circumstances.

At the core of my testing strategy lie unit tests. These focus on individual components or functions and are quick to write and maintain. They ensure the fundamental components of our application remain robust.

It's worth noting that a pyramid doesn't stand solely on its base. While unit tests are foundational, they represent just one facet. Beyond unit tests, I implement Service tests to validate the entire backend's integration. These play a pivotal role in ensuring proper interactions with databases and external services. They're generally slower and more intricate than unit tests, which is why I house them in separate projects.

The term "Service test" originates from the idea that these tests operate from the service layer.

By segregating Service Tests from unit tests and placing them in distinct projects, I enhance the manageability and scalability of my testing approach. This separation allows for unit tests to validate individual components, while integration tests or Service tests verify interactions with databases and services.

Unit Tests

At the pyramid's base, you find the unit test. This positioning implies a higher volume of unit tests compared to other kinds. A unit test is an automated test that targets a small, isolated code segment, typically an individual method or class. The unit test's objective is to confirm the correctness of this specific code piece.

The included diagram showcases typical software layers for an Onion/Clean architecture, with color-coding to highlight their testing suitability.

  • Green: Layers such as ViewModels, Application, and Domain are primary unit testing candidates. They offer high value when tested in isolation due to their quick execution and maintainability.
  • Yellow: The yellow layers are less efficient when tested standalone. Their unit tests often bring limited added value since they usually rely heavily on the layer below. Therefore, integration tests are generally used to verify the accurate functionality and interaction of these components.
  • Red: Testing red layers usually results in fragile or sluggish tests that demand considerable maintenance. These layers typically offer lower testing ROI, making integration tests a more pragmatic choice.

Some attributes my unit tests adhere to include:

  • Testing Single Units: A unit test evaluates one class or method at a time. This limited scope makes it simpler to identify issues, aiding codebase maintenance and refactoring.
  • Testing Pure Logic: The primary purpose of a unit test is to confirm the class or method logic. This encompasses checking if the method returns expected results for given inputs or handles edge cases effectively.
  • Mocking Dependencies: To ensure code isolation, unit tests shouldn't depend on external systems. For instance, a class interacting with a database should be tested with a mock or stub, not a real database connection.
  • Fast Execution: Due to their foundational role, unit tests need swift execution to facilitate frequent runs, ideally after each code alteration.

UI Tests

UI Tests validate user interactions and the application's interface. They replicate real user scenarios involving UI elements, ensuring the application operates as anticipated. My UI tests are done in isolation from the service tier; they exclusively test the application's presentation layer. As these tests are performed in isolation, you can see them as a specialized subset of unit tests. Using the MVVM pattern within my Blazor components enables me to abstract away the UI logic and frees the tests from depending on UI-specific details that would make them brittle. For testing the logic that remains in the Razor views I typically use bUnit, but I try to keep such tests to a minimum.
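
As a sketch of what such a view test looks like with bUnit (assuming a hypothetical Counter component that renders a count and a button), note that bUnit renders the component in memory, without a browser:

using Bunit;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CounterTests
{
    [TestMethod]
    public void ClickingButton_IncrementsDisplayedCount()
    {
        // bUnit's TestContext is fully qualified to avoid the name clash
        // with MSTest's own TestContext type.
        using var ctx = new Bunit.TestContext();
        var cut = ctx.RenderComponent<Counter>();

        // Act: simulate a user clicking the button.
        cut.Find("button").Click();

        // Assert: the rendered markup reflects the new state.
        cut.Find("p").MarkupMatches("<p>Current count: 1</p>");
    }
}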

However, while UI Tests are insightful, they can be fragile and require regular updates, especially if UI elements undergo changes. They might not detect all UI issues, emphasizing the need for a comprehensive testing strategy that incorporates End2End tests for thorough application quality verification.

Service Tests

Occupying the pyramid's middle layer, Service Tests anchor my integration testing strategy. They center on the application's back-end components, ensuring that services, APIs, and business logic layers cohesively function. They also serve as a type of acceptance testing for complex logic. Due to their ability to test business flows without UI test fragility, Service Tests often yield the highest ROI, making them indispensable in my testing strategy.

For Service Tests, I utilize SpecFlow and follow the BDD (Behavior-Driven Development) approach. This method bridges the gap between non-technical and technical team members, fostering a shared understanding of application behavior.

Domain-Driven Design's concept of Ubiquitous Language pairs seamlessly with BDD. It denotes a shared language among team members, ensuring mutual understanding of domain concepts and requirements.

SpecFlow, a renowned BDD tool, enables business analysts to describe expected application behaviors in Gherkin syntax. Developers then create step definitions, which are code pieces that map scenarios to test actions. By adopting a BDD approach with tools like SpecFlow, I ensure the Ubiquitous Language isn't just documentation but living specifications executable as tests. This promotes clear communication and a cohesive application.
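
As a minimal sketch of how this looks in practice (the feature, domain types, and service below are hypothetical), the analyst writes the Gherkin scenario and the developer binds each step to code:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

/* Feature file (Gherkin), e.g. OrderDiscount.feature:

   Scenario: Loyal customer gets a discount
       Given a customer with 5 completed orders
       When the customer places an order of 100 EUR
       Then the order total should be 90 EUR
*/
[Binding]
public class OrderDiscountSteps
{
    private Customer _customer;   // hypothetical domain types
    private Order _order;

    [Given(@"a customer with (\d+) completed orders")]
    public void GivenACustomerWithCompletedOrders(int completedOrders)
        => _customer = new Customer { CompletedOrders = completedOrders };

    [When(@"the customer places an order of (\d+) EUR")]
    public void WhenTheCustomerPlacesAnOrder(int amount)
        => _order = OrderService.PlaceOrder(_customer, amount); // hypothetical service

    [Then(@"the order total should be (\d+) EUR")]
    public void ThenTheOrderTotalShouldBe(int expectedTotal)
        => Assert.AreEqual(expectedTotal, _order.Total);
}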

 

Database: In-memory or Disk-based?

Efficiency and speed are paramount for all tests, especially when integrated into a continuous integration process on a build server. Database selection significantly influences this.

From my experience, an in-memory database (such as SQLite in its in-memory mode) is typically more build-time efficient. However, there are instances where SQLite is unsuitable, especially when integrating with complex databases that use features like stored procedures. In such cases, a closer representation of the production environment, like SQL Server, is essential. To optimize tests without compromising build times, I suggest making tests configurable to run against different databases: for instance, SQLite on the build server for rapid tests, and SQL Server locally.
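
A minimal sketch of what that configurability can look like with EF Core; AppDbContext and the provider-switch convention are assumptions, not a prescribed setup:

using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

public static class TestDbContextFactory
{
    // provider comes from test configuration: "Sqlite" on the build server
    // for speed, anything else falls back to SQL Server for production fidelity.
    public static AppDbContext Create(string provider, string connectionString = null)
    {
        var builder = new DbContextOptionsBuilder<AppDbContext>();

        if (provider == "Sqlite")
        {
            // An in-memory SQLite database lives as long as this connection stays open.
            var connection = new SqliteConnection("DataSource=:memory:");
            connection.Open();
            builder.UseSqlite(connection);
        }
        else
        {
            builder.UseSqlServer(connectionString);
        }

        var context = new AppDbContext(builder.Options);
        context.Database.EnsureCreated();
        return context;
    }
}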

It's also crucial to emphasize that tests should mock external services, ensuring the application's logic is the primary testing focus.
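
One lightweight way to do that is a hand-written fake behind an interface. The IPaymentGateway port below is a hypothetical example:

using System.Collections.Generic;

public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

// Used in Service Tests in place of the real gateway, so the test exercises
// the application's logic without calling the external service.
public class FakePaymentGateway : IPaymentGateway
{
    public List<decimal> Charges { get; } = new List<decimal>();

    public bool Charge(decimal amount)
    {
        Charges.Add(amount); // record the call so the test can assert on it
        return true;         // always succeed; failure cases get their own fake
    }
}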

Conclusion

Structuring my test pyramid with unit tests at the base, Service Tests in the middle, and UI tests at the peak results in a comprehensive testing strategy. While UI tests play a role, Service Tests truly underpin a quality product. Combining these tests ensures both the individual components' integrity and the smooth interaction between the application's various tiers.

Going All-In on UI Testing Automation: a Recipe for Failure!

Many professionals in our industry believe that the key to an effective testing strategy is a robust system of automated tests that interact with the User Interface (UI). Imagine an advanced tool that can meticulously record user interactions and replay them to check for consistent results.

However, when applied in practice, this approach reveals several problems. The most significant issue is the fragility of these tests: even minor UI changes can cause numerous tests to fail, resulting in a time-consuming process of re-recording. Moreover, these tests tend to run slowly and require substantial infrastructure to execute effectively. A testing strategy solely focused on automating UI tests is too expensive, fragile, and time-intensive.

My confidence in the effectiveness of automated testing was restored when I discovered the concept of the test pyramid. This concept, integrated within the Software Development Life Cycle (SDLC), organizes tests across the different layers of the application stack. Visualize a pyramid with a sturdy, balanced structure: the broad base consists of quick and efficient unit tests, while the narrow peak represents a smaller set of slower, more complex end-to-end tests, often performed through the UI. This approach prioritizes comprehensive coverage at the foundational levels, where tests are simpler, faster, and more cost-effective, while still conducting enough high-level tests to ensure the harmonious functioning of the system's components.

However, the test pyramid does not stop at the base and the peak. It also incorporates an intermediate layer of tests that operate discreetly through an application's service layer. These tests combine the strengths of end-to-end tests while bypassing the complexities of UI frameworks. For applications based on a three-tier architecture, this corresponds to testing through the second tier, the Backend For Frontend. For two-tier applications, this can be done through the Application layer.

It is also important to note that there are variations of the test pyramid based on different philosophies or the specific needs of a project. It is up to the team to fine-tune the right testing strategy for its project. Here I provide a straightforward strategy that should fit most scenarios, but not all; take your specific circumstances into account and adapt it to your situation.

Best Practices for Effective Unit Testing in C#

In my previous blog post, I explained the importance of writing testable code. In this post, I will explore some common best practices for unit testing in C#, focusing on key areas such as test structuring, mocking frameworks, test naming, and differentiating between unit and integration tests.

Follow the Arrange-Act-Assert (AAA) Pattern

The AAA pattern provides a structured approach for organizing unit tests. Follow these steps:

  • Arrange: Set up the object to be tested and its dependencies.
  • Act: Perform the action to be tested.
  • Assert: Verify that the action led to the expected result.

Using the AAA pattern enhances readability and maintainability of your tests.

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyApp.Tests
{
    [TestClass]
    public class CalculatorTests
    {
        [TestMethod]
        public void Add_GivenTwoIntegers_ReturnsTheirSum()
        {
            // Arrange
            var calculator = new Calculator();
            int number1 = 2;
            int number2 = 3;

            // Act
            int result = calculator.Add(number1, number2);

            // Assert
            Assert.AreEqual(5, result, "The addition of 2 and 3 should equal 5");
        }
    }
}

Use Mocking Frameworks Wisely

Mocking frameworks, like Moq, allow the creation of mock implementations of dependencies. However, use them judiciously to avoid potential pitfalls:

  • False Sense of Security: Over-mocking can lead to tests passing despite real-world failures.
  • Brittle Tests: Unmet mock expectations due to code changes can cause unnecessary test failures.
  • Overemphasis on Implementation Details: Too many mocks may shift focus from testing behavior to testing implementation.
  • Increased Complexity: Tests with excessive mocks become harder to understand and maintain.

Balance the use of mocking frameworks to isolate code units effectively without introducing unnecessary complexity.

Write Meaningful Test Names

A good test name should describe what the test does and provide a clear idea of what went wrong in case of failure. Follow a convention like "MethodName_StateUnderTest_ExpectedBehavior" to improve test name clarity and simplicity.

Separate Unit and Integration Tests

Distinguish between unit tests and integration tests to ensure effective testing and maintainable code.

  • Unit Tests: Focus on testing individual components in isolation, with mocked or stubbed dependencies. They are fast, cheap, and provide valuable feedback on small code units.
  • Integration Tests: Verify interactions and integration between components, including infrastructure. They have higher code coverage but can be more complex and fragile.

Separating these test types into distinct projects, folders, or namespaces allows developers to leverage the strengths of each approach while maintaining code quality.

Utilize Parametrized or Data-Driven Tests

Parametrized tests allow running the same test method with different inputs and expected outcomes. In MSTest, you can mark a test with [DataTestMethod] and supply the inputs via [DataRow] attributes, as shown below. These tests provide flexibility and code reuse.

[TestClass]
public class CalculatorTests
{
    private Calculator _calculator;

    [TestInitialize]
    public void Setup()
    {
        // Arrange: a fresh calculator for every test run
        _calculator = new Calculator();
    }

    [DataTestMethod]
    [DataRow(1, 1, 2)]
    [DataRow(2, 2, 4)]
    [DataRow(3, 3, 6)]
    [DataRow(-1, -1, -2)]
    [DataRow(-2, -2, -4)]
    [DataRow(-3, -3, -6)]
    public void Add_GivenTwoIntegers_ReturnsTheirSum(int a, int b, int expectedSum)
    {
        // Act
        int result = _calculator.Add(a, b);

        // Assert
        Assert.AreEqual(expectedSum, result);
    }
}

 

By applying these best practices, you can write robust, reliable, and maintainable unit tests in C#. The Arrange-Act-Assert pattern brings clarity and structure to your tests. Using mocking frameworks wisely, writing meaningful test names, separating unit and integration tests, and leveraging parametrized or data-driven tests enhance the effectiveness and efficiency of your test suite. Unit testing is an essential tool for building high-quality software, and following these best practices will greatly contribute to your success.

6 Key Principles to Elevate Testability and Maintainability in C#

Testable code stands as a cornerstone in the realm of quality software development. Crafting code that can be effortlessly verified for correctness through automated testing is not just a skill, but an art. In this blog post, we will embark on a journey through six fundamental principles that serve as the pillars of writing testable and maintainable code in C#. Whether you are an experienced software engineer or taking your first steps in the C# landscape, these principles will equip you with the tools needed to sculpt your code into a masterpiece of quality and reliability.

1. Single Responsibility Principle (SRP): Each class or method should have a single responsibility, meaning it should only have one reason to change. If a class or method does too many things, it becomes harder to test because each test must consider the different paths through the code.
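
As a minimal, hypothetical sketch: a class that both parsed and persisted reports is split in two, so each class has exactly one reason to change and can be tested on its own.

using System.Collections.Generic;

public record Report(string Title);

public class ReportParser
{
    // Pure parsing logic: testable with plain string inputs, no infrastructure.
    public Report Parse(string rawLine) => new Report(rawLine.Trim());
}

public class ReportRepository
{
    private readonly List<Report> _storage = new List<Report>();

    // Persistence concern kept apart from parsing.
    public void Save(Report report) => _storage.Add(report);
}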

2. Use Dependency Injection: Dependency injection is a technique where an object's dependencies (the other objects it needs to do its job) are provided to it, rather than creating them itself. This allows the dependencies to be swapped out with fake or mock objects in tests, making the code easier to test. For example, you could inject a database service into your class, and replace it with a mock database service in your tests.

public class OrderService
{
    private readonly IOrderDatabase _database;

    public OrderService(IOrderDatabase database)
    {
        _database = database;
    }

    public void PlaceOrder(Order order)
    {
        // ... some logic ...
        _database.AddOrder(order);
    }

    // ... rest of class ...
}

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void PlaceOrder_OrderIsValid_AddsOrderToDatabase()
    {
        // Arrange
        var mockDatabase = new Mock<IOrderDatabase>();
        var orderService = new OrderService(mockDatabase.Object);
        var order = new Order { /* initialize order here */ };

        // Act
        orderService.PlaceOrder(order);

        // Assert
        mockDatabase.Verify(db => db.AddOrder(order), Times.Once, "Order should be added to database once");
    }
}

 

3. Use Interfaces and Abstractions: Relying on concrete classes makes your code more tightly coupled and harder to test. By depending on abstractions (like interfaces or abstract classes), you can easily substitute real implementations with mock ones for testing.

4. Avoid Static and Singleton Classes: Static methods and singleton classes cannot be substituted with mock implementations, which forces tests to rely on real-world dependencies and makes the code harder to test. They can also hold state between tests, leading to interdependent tests and potential inconsistencies.

While static and singleton classes should usually be avoided, there are cases where using static classes is appropriate. This includes scenarios such as defining utility functions or extension methods, which encapsulate reusable pure functions or extend the behavior of existing types without modifying their source code. In these cases, static classes can provide organization and improve code readability.
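
For instance, a small extension-method class like the following (a hypothetical example) is a pure function with no hidden state, and is trivial to test:

public static class StringExtensions
{
    // A pure function: the output depends only on the input, with no side effects.
    public static string Truncate(this string value, int maxLength) =>
        string.IsNullOrEmpty(value) || value.Length <= maxLength
            ? value
            : value.Substring(0, maxLength);
}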

For more explicit examples of when to use static methods, see the last part: 6. Write Pure Functions When Possible

5. Favor Composition Over Inheritance: Composition refers to building complex objects by combining simpler ones, while inheritance describes a relationship between parent and child classes. Composition is more flexible and easier to test because you can replace parts of the composed object with mocks.
The following example highlights how composition leads to more testable code:

// The abstraction that both the dependency and the composed object implement
public interface ISimpleObject
{
    void PerformAction();
}

// Composed object that uses composition instead of inheritance
public class ComposedObject : ISimpleObject
{
    private readonly ISimpleObject _simpleObject;

    public ComposedObject(ISimpleObject simpleObject)
    {
        _simpleObject = simpleObject;
    }

    public void PerformAction()
    {
        // Additional logic can be added here
        Console.WriteLine("Performing action in ComposedObject");

        // Delegating the action to the composed object
        _simpleObject.PerformAction();
    }
}

[TestClass]
public class ComposedObjectTests
{
    [TestMethod]
    public void PerformAction_Should_Call_PerformAction_Method_On_SimpleObject()
    {
        // Arrange
        var mockSimpleObject = new Mock<ISimpleObject>();
        var composedObject = new ComposedObject(mockSimpleObject.Object);

        // Act
        composedObject.PerformAction();

        // Assert
        mockSimpleObject.Verify(m => m.PerformAction(), Times.Once);
    }
}

 

By favoring composition over inheritance, the code becomes more flexible and easier to test. The example demonstrates how the ComposedObject class receives its dependencies through its constructor, allowing real implementations to be substituted with mock objects during testing. This decoupling enables isolated tests that focus solely on the behavior of ComposedObject, without being tied to the implementation details of its dependencies. The ability to inject and replace dependencies with mocks or stubs gives greater control over the testing environment and leads to more reliable and maintainable tests.

6. Write Pure Functions When Possible: A pure function's output is solely determined by its input, and it does not have any side effects (like changing global variables). This makes it easy to test, because you just need to check the output for a given input.

Take this example where the method “StoreAverageOfPrimes” is extremely hard to test:

 

 
    public void StoreAverageOfPrimes(int number)
    {
        List<int> primeNumbers = new List<int>();

        using (SqlConnection connection = new SqlConnection(_connectionString))
        {
            connection.Open();

            for (int i = 2; i <= number; i++)
            {
                bool isPrime = true;

                for (int j = 2; j < i; j++)
                {
                    if (i % j == 0)
                    {
                        isPrime = false;
                        break;
                    }
                }

                if (isPrime)
                {
                    primeNumbers.Add(i);

                    SqlCommand command = new SqlCommand("INSERT INTO PrimeNumbers (Value) VALUES (@Value)", connection);
                    command.Parameters.AddWithValue("@Value", i);
                    command.ExecuteNonQuery();
                }
            }

            SqlCommand command = new SqlCommand("INSERT INTO Results (Value) VALUES (@Value)", connection);
            command.Parameters.AddWithValue("@Value", primeNumbers.Average());
            command.ExecuteNonQuery();
        }
    }

 

This code tightly couples the calculation logic with the database access code, making it difficult to test the calculation in isolation and to write focused unit tests. The lack of separation and abstraction also hinders code reusability and maintainability.

In the refactored code below, the StoreAverageOfPrimes method has been modified to separate the calculation logic from the data access code:

 
public static List<int> GetPrimeNumbers(int number)
    {
        List<int> primeNumbers = new List<int>();

        for (int i = 2; i <= number; i++)
        {
            if (IsPrime(i))
            {
                primeNumbers.Add(i);
            }
        }

        return primeNumbers;
    }

    public static bool IsPrime(int number)
    {
        if (number < 2)
            return false;

        for (int i = 2; i <= Math.Sqrt(number); i++)
        {
            if (number % i == 0)
                return false;
        }

        return true;
    }

    public void StoreAverageOfPrimes(int number)
    {
        List<int> primeNumbers = GetPrimeNumbers(number);
        double average = primeNumbers.Average();

        using (SqlConnection connection = new SqlConnection(_connectionString))
        {
            connection.Open();

            StorePrimeNumbers(primeNumbers, connection);
            StoreResult(average, connection);
        }
    }

    private void StorePrimeNumbers(List<int> primeNumbers, SqlConnection connection)
    {
        foreach (int number in primeNumbers)
        {
            SqlCommand command = new SqlCommand("INSERT INTO PrimeNumbers (Value) VALUES (@Value)", connection);
            command.Parameters.AddWithValue("@Value", number);
            command.ExecuteNonQuery();
        }
    }

    private void StoreResult(double result, SqlConnection connection)
    {
        SqlCommand command = new SqlCommand("INSERT INTO Results (Value) VALUES (@Value)", connection);
        command.Parameters.AddWithValue("@Value", result);
        command.ExecuteNonQuery();
    }

 

In this updated code, the pure functions GetPrimeNumbers and IsPrime have been made static, so they can be called directly, without an instance of the containing CalculationService class. This makes them easier to test and promotes reuse, as they can be used independently of the class's state.

When writing testable code, it is important to consider the nature of the functions you are working with. Pure functions, which have the characteristic of producing the same output for a given set of inputs and having no side effects, are particularly beneficial for testability.

In the context of testability, pure functions can be made static to further enhance their usability. By making pure functions static, they can be accessed and used directly without the need to create an instance of the class containing them. This eliminates the dependency on the object's state, making the functions more independent and self-contained.

The static nature of pure functions simplifies the testing process. Since they rely only on the provided input and produce a deterministic output, you can easily write focused unit tests for these functions without the need to set up complex object states or manage external dependencies. You can simply pass different inputs to the functions and verify their outputs against expected results.

In contrast, functions with side effects, such as those interacting with databases or modifying global variables, are more difficult to test due to their reliance on external resources and potential interference with the test environment. These functions often require complex setup and teardown procedures to isolate and restore the system state, which can complicate the testing process.

By separating pure functions and making them static, you create modular and testable units of code that are easier to verify in isolation. This promotes code maintainability, reusability, and testability, allowing you to write more effective and reliable tests for your applications.

 

Chasing the Mythical 80%: When Code Coverage Becomes a Treasure Hunt

Ah, the famous 80% code coverage - the golden number that many developers I've encountered seem to believe has magical powers to keep all the coding gremlins away. It means that 80% of the new code should be put to the test, as if saying, “Prove your worth, code!” This bar is high, no doubt, and is set to make sure that most of our codebase is given a good check.

However, let’s not get too starry-eyed about this 80%. Sometimes, depending on the type of project or part of the code, it’s like trying to fit a square peg in a round hole. It just doesn’t work! In such cases, it’s smarter to focus on creating solid, quality tests rather than running a race to hit that 80%.

Now, when deciding how much testing is just right, we need to think about complexity and risk. Imagine them as partners in crime – when complexity goes up, risk is right there with it. That means more chances of things going wrong. To set the right target for code coverage, you have to keep an eye on how complicated the code is. The SonarQube Coverage Overview report is like a helpful friend in this mission. It’s easy on the eyes and shows the connection between code coverage and complexity.

Here’s the quick tour of what the report shows:

  • Bottom-right corner: This is where complex but well-tested code hangs out. Think of it like a tough puzzle that’s been solved.
  • Bottom-left corner: The dream zone! Code here is simple and has been tested a lot. It’s where you’d want most of your code to be.
  • Top-right corner: Red alert area! Code here is like a tangled mess and hasn’t been tested enough. It’s risky and needs attention ASAP.
  • Top-left corner: This area has simple code, but it hasn’t been tested much. It’s not ideal, but not a total disaster.

 

So, in summary, this report is like a handy map for your code. It helps you see what needs your attention and testing effort. This way, you can make sure your software is in good shape. But always remember, 80% is a guiding light, not a magic spell!

Orchestrating Success: The Harmonious Ensemble Behind Automated Testing!

Hello everyone! Before we dive deep into the technical side of automated testing in my upcoming articles, I want to hit the pause button and shine a spotlight on something super important: writing automated tests is a team sport!

Yes, as developers, sometimes we feel like testing is our own little kingdom. But you know what? Building amazing software is like painting a beautiful picture - it needs different brushes and colors. That’s why testing should be a high-five moment with the whole team involved. We developers must open the doors and roll out the red carpet for everyone to join in. So, let’s bring everyone into the spotlight and see how this teamwork makes the dream work! Onward to the backstage tour!

Developers and Analysts as the Architects of Software

As developers, we are like the master builders of the software world. We draw up nifty blueprints – which are, of course, our much-loved automated tests – making sure the software is solid from the ground up. But here’s the thing: we’ve got teammates! Analysts are the awesome folks who help set the stage. They’re like the planners who make sure everything’s in the right place. They use their smarts and know-how to create a shared lingo – kind of like a secret code that everyone on the team can understand. This means everyone’s singing from the same song sheet, which makes everything tick along nicely.

The Testers’ Detective Work in Polishing the Software Masterpiece

Next up, let’s hear it for the testers – our software’s very own detective squad! They swing into action when it's time to put the software through its paces, sniffing out any sneaky bugs or issues. They have a knack for spotting the little things that might trip us up, and they’re great at making sure everything is shipshape. And it's even better when they join forces with developers and analysts to make sure our software is top-notch.

Scrum Masters and Project Managers Harmonizing the Ensemble

Hang on, we’re not done with the shout-outs! We can’t forget the Scrum Masters and Project Managers. These superstars are like the conductors of our orchestra. They keep the tunes flowing, helping everyone to play in sync. They’re the ones who make sure the communication lines are open, that roadblocks are smashed, and that everyone's on the same page with testing goals.

The secret sauce

Now, for the secret sauce that glues everything together – SpecFlow! Especially for the .Net folks out there, SpecFlow is like a magic wand in the world of Behavior-Driven Development (BDD). It’s an ace tool for cooking up automated and testable game plans. What’s cool is that SpecFlow speaks a language called Gherkin, which is easy-peasy for everyone to understand, even if you’re not a coding whiz. So everyone can jump in and help shape things up, making the team bond even stronger.

So, my friends, before we get our geek on with all the techy details of automated testing, let’s remember this: testing is all about teamwork. It's the special moment where everyone – analysts, testers, Scrum Masters, Project Managers – all huddle up for a group high-five, aiming to build the best software ever. Together, with smarts, teamwork, and nifty tools like SpecFlow, we’re unstoppable. Trust me, this team adventure is the real deal, and you don’t want to miss it!

 

Building Code Like a Skyscraper: The Scaffolding Magic of Automated Testing!

Today, I want to chat about how automated testing in software development is like scaffolding in building construction. Why? Because both of them give amazing support and strength to what we are building.

  • The Strong Support: Just like how a building needs scaffolding to keep it steady, our software needs automated testing for support. Scaffolding holds up the building while it’s being built. In the same way, automated testing holds our code together, making sure it's strong and safe.
  • The Watchful Eyes: Scaffolding helps workers see and reach every part of the building. Similarly, automated testing keeps an eye on the software. It checks every nook and cranny to make sure that everything is running smoothly and correctly.
  • The Safety Net: When constructing a building, scaffolding acts as a safety net. If something goes wrong, it's there to catch it. Likewise, automated testing catches little problems in our code before they become big headaches.
  • The Flexible Friend: Scaffolding can be adjusted and moved as the building goes up. This is just like automated testing, which we can change and adapt as our software grows and evolves.
  • Building The Master Plan: In construction, scaffolding is part of the big plan to build something amazing. For us developers, automated testing is a crucial part of our master plan for our software. It guides us and helps us make changes safely.
  • Not A One-Man Show: But wait, scaffolding can’t build a building on its own, and automated testing can’t make perfect software by itself. That’s why we have the construction workers, and in coding, we have manual testers. They are like the heroes who check everything up close and find anything that might have been missed.
  • Making It Future-Proof: Using scaffolding means that the building will be built right, so it will last a long time. In the same way, when we use automated testing, our code is solid. This means it will keep working great even as time goes by, and we can add new features without worries.

Now, let’s not end this chat without looking ahead. This post is like saying “Hello” before we start an awesome adventure. Over the next few weeks, I’ll be sharing even more posts. We’ll talk about the best ways to do things, tools that make life easier, and examples from real life that show how all of this stuff works.

So, if you’re excited to learn how to write even better code, make sure you keep coming back to check out what’s new. Whether you’re new to coding or have been doing it for years, there will be something for you. Can’t wait to see you here! Stay tuned, friends!

What is the best UI technology: Angular or Blazor ?


(When you build enterprise business apps using DotNet)

Introduction

This post compares and explains in detail the pros and cons of the two SPA frameworks most used by .NET developers: Angular and Blazor.

My customers are enterprises that use .NET for building their line-of-business applications. I compared these two UI technologies to find out which one is the best fit for them, and when they should use them instead of traditional ASP.NET MVC.

This study is primarily based on my many years of experience in architecting and developing solutions based on Angular and .NET, complemented by internet research and feedback from other architects and IT consultants. Because I had no prior experience with Blazor, I invested time in studying the Blazor framework and in building some small applications with it. I also interviewed other developers who had experience with Blazor and integrated their feedback into this study.

Single Page Applications

There are two general approaches to building web applications today. Traditional web applications, also called Multi-Page Applications (MPAs) because they are made of multiple pages, perform most of the application logic on the server. On the other hand, single-page applications (SPAs) perform most of the user-interface logic in the web browser, communicating with the web server primarily using web APIs. A hybrid approach is also possible, the simplest being to host one or more rich SPA-like sub-applications within a larger traditional web application.

An MPA reloads the entire page and displays the new one when a user interacts with the web app.

A SPA behaves more like a desktop application: it is loaded once and then refreshes parts of the page without reloading it. When a user interacts with the application, it displays new content without a full page update, since the different pieces of content are downloaded on request. This is possible thanks to AJAX. In a single-page application there is only one HTML page, which downloads a bunch of assets: CSS, images, and typically a lot of JavaScript. That code then listens for clicks on links and re-renders parts of the DOM in the loaded page whenever the user navigates.

Although SPAs are extremely popular because they usually provide a better user experience, that does not mean every web application should be built with a SPA architecture. Take, for instance, a news portal like CNN. It is a good example of a multi-page application. How can we see it? Just click any link and watch the reload icon at the top of your browser. Reloading starts because the browser reaches out to the public server and fetches the page and all the resources it needs. The interesting thing about multi-page applications is that every new page is downloaded: every request we send to the server, whenever we type a new URL or click a link, leads to a new page being sent back. Notable examples of multi-page applications are giants like Amazon and eBay: using them, you always get a new file for every request.

ASP.NET Core MVC is an excellent choice for websites that are better developed as MPAs (see later). But as explained above, SPAs usually supply a better user experience when the application is more like an interactive application.

Angular

Angular is still by far the most used SPA framework among .NET developers. This is because it has a lot of similarities with the .NET ecosystem.

  • TypeScript: Angular uses TypeScript by default. This means that all documentation, articles, and source code are provided in TypeScript. TypeScript is a superset of JavaScript that compiles to JavaScript. C# and TypeScript are both designed by Anders Hejlsberg, and in many ways TypeScript feels like the result of adding parts of C# to JavaScript. They share similar features and syntax, and both use the compiler to check for type safety. So for a C# dev it's more straightforward to learn TypeScript than JavaScript.
  • MVVM: Angular uses the MVVM pattern. This pattern was invented for WPF & Silverlight (.NET), so many developers coming from .NET who wanted to do web development turned to Angular because it used the same pattern.
  • All in the box: Unlike other Js frameworks like Vue or React, Angular is a complete framework that tends to supply everything you need, libraries and tooling included, sparing you dependency hell so that you can focus on your application. Just like the .NET framework.

Most of the other Js frameworks are exactly the opposite: they focus on one aspect and leave you with a lot of choices. Although focusing on one thing results in frameworks that are simpler to learn, .NET enterprise developers are annoyed when they must make a lot of choices. They prefer opinionated frameworks that help them get productive faster with better supportability.

Angular is part of the JavaScript ecosystem and one of the most popular SPA frameworks today. AngularJs, the preceding version, was introduced by Google in 2009 and was largely adopted by the development community.

In September 2016, Google released Angular 2, a complete rewrite of the framework by the same team. The current version of the framework uses TypeScript as its programming language. TypeScript is a typed superset of JavaScript built and maintained by Microsoft; the presence of types makes code written in TypeScript less prone to run-time errors. As explained above, TypeScript is the primary reason Angular is so popular in the .NET community.

The difference between AngularJs and the second version is so radical that you can't simply update. Migrating applications to Angular requires too many modifications due to the different syntax and architecture, so upgrading existing AngularJs apps to the new Angular framework amounts to a complete rewrite.

TypeScript compiles to JavaScript and integrates perfectly with the JavaScript ecosystem. Angular is therefore part of the JavaScript ecosystem and uses npm to manage dependencies. It can count on the largest community and number of libraries today: with Angular, you use the largest open-source ecosystem, where you can find everything you need.

But the vast number of packages and their fast-changing rate are also a challenge in the enterprise, where making these packages available is the biggest hurdle. Take a new Angular application as an example. An empty new Angular application has the following direct dependencies:

  • @angular
  • @angular/cli
  • @ngrx
  • rxjs
  • typescript
  • tslint
  • codelyzer
  • zone.js
  • core-js

 

And now let’s install all the dependencies by typing:

npm i
... one eternity later ...
added 1174 packages in 116.846s

A new, regular Angular application without any add-ons already uses almost 1,200 packages!

When your ecosystem has this number of dependencies, you can't inspect them all yourself. You can't hope they're all secure. You can't assume they all have permissive licenses. You can't maintain your registry by hand. You can't formally approve packages by committee, and you can't keep up manually, as many of the packages you're using will get an update every single hour. You must move from manual, discrete processes to automated, continuous processes.

Angular comes with great tools, supported by a large community. Angular has been around for over a decade, while Blazor has been on the market for only a few years. Angular is a production-ready framework with full support for MVVM applications, and it is being used by many large companies. Blazor, while used by some large brands, is early in its lifecycle. Angular and other popular Js frameworks like React or Vue come with much more content in the form of courses, books, blogs, videos, and other materials. Angular has dedicated tech events held worldwide, a huge community, and a broad choice of third-party integrations.

To illustrate the difference in adoption, consider the graphic here that compares the number of jobs offered per frontend technology: Angular comes second, while Blazor sits somewhere among the others.

As illustrated above, despite being one of the oldest and most mature SPA frameworks, Angular is not the most used or loved framework in the JavaScript community. This is primarily due to its complexity. Even for seasoned JavaScript developers, Angular is difficult to master. Its strength of being an opinionated framework using TypeScript comes at the expense of a steep learning curve.

This is even more challenging for most .NET developers, who usually have only marginal knowledge of JavaScript. They not only have to learn a new language; they must also learn an entirely new ecosystem and framework. In my own experience, this is the greatest challenge for organizations using .NET. It can be solved by recruiting Angular specialists who focus on the frontend part. Having frontend specialists usually improves the user experience, but it comes at the expense of segregating your devs into frontend (Angular) and backend (.NET) developers.

Mixing two technologies from different worlds reduces the flexibility of resource allocation: people must specialize, and the number of tasks they can work on shrinks. It also increases development time, as more effort goes into integrating both technologies.

My own experience working with .NET developers who had to learn Angular confirms that the learning curve is very steep for them. In the past, to cope with this problem, we recruited Angular specialists to coach our .NET teams. We also had to invest in our CI/CD pipeline to support the new technology stack. As a .NET shop that wanted to adopt Angular, we had to make a considerable investment in recruiting the right profiles, supplying the right guidance, and adapting our package-management process and tools to the high velocity and number of packages that are typical for the JavaScript ecosystem. Recruiting the right skills was also a challenge, for the reasons explained above and because Angular is somewhat losing traction in the Js community.

When to choose Angular over ASP.NET MVC?

If the team is unfamiliar with JavaScript or TypeScript but is familiar with pure server-side MVC development, then it will probably be able to deliver a traditional MVC app more quickly than an Angular app. Unless the team has already learned Angular, or unless the user experience Angular affords is needed, traditional MVC is the more productive choice.

Blazor

Blazor is a relatively new Microsoft ASP.NET Core web framework that allows developers to write code for browsers in C#. Blazor is based on existing web technologies like HTML and CSS, but lets the developer use C# and Razor syntax instead of JavaScript. Razor is the template markup syntax used by ASP.NET MVC, and MVC developers can reuse most of their skills to build apps with Blazor.

Blazor is very promising for .NET developers as it enables them to create SPA applications using C#, without the need for significant JavaScript skills. 

An advantage of Blazor is that it builds on the latest web standards and does not need extra plugins or add-ons to run. It supports different deployment models: it can run in the browser using WebAssembly, or server-side on ASP.NET Core.

For .NET developers, learning Blazor is a small effort compared to Angular or other Js frameworks. Not only do they not need to master JavaScript and TypeScript, they can also keep using the enterprise-friendly .NET ecosystem they already know. For a .NET dev already accustomed to web development through MVC, we can say that productivity comes in a matter of days or weeks, compared to months or years with Angular.

But the biggest advantage of Blazor is its productivity. It allows .NET developers to build entire applications using only C#, which feels more like building desktop apps than web apps. When using the server-side version, developers don't have to think about data transfer between client and server. And with the client-side version it's still a lot easier than with Angular, as everything is integrated within one .NET solution.
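
To illustrate, here is a slightly trimmed version of the counter component from the standard Blazor project template; markup, event handling, and state live together in one file, with no JavaScript involved:

@page "/counter"

<h3>Counter</h3>
<p>Current count: @currentCount</p>
<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    // Component state lives in plain C#; the markup above re-renders
    // automatically when the button's click handler changes it.
    private int currentCount = 0;

    private void IncrementCount() => currentCount++;
}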

Blazor seems even more productive than traditional MVC, as it doesn't require the developer to cope with server page refreshes and can use client-side rendering and events. It feels like developing with Silverlight or WPF, but using Razor, HTML, and CSS instead of XAML. This was also confirmed by our discussion with a consultant at Gartner: based on experiences with several customers, they confirmed that Blazor was the most productive technology for building UIs inside the .NET ecosystem, even compared to MVC.

The adoption of Blazor outside the .NET community will probably remain low. Blazor is hard to sell to current web developers because it means leaving behind many of the libraries and technologies built up over a decade of modern JavaScript development. Blazor will therefore probably stay a niche technology: it is and will be used by teams that know .NET for building internal business applications, but it will probably not be adopted outside this community. That means the community and tools will mostly be supported by Microsoft, and tool and library support from the open-source community and vendors will remain smaller than for Angular or other mainstream Js frameworks.

One of the impediments to Blazor adoption inside the .NET community is the fear that Blazor will be the next Silverlight, around for only a brief period. More than 10 years after the final Microsoft Silverlight release, some developers still fear being 'Silverlighted': seeing a development product in which they have invested heavily abandoned by Microsoft. The comparison comes naturally because both technologies are developed by Microsoft and allow us to use the .NET platform to create web applications. They share the same core idea: instead of using JavaScript, we can use C# as the programming language for web applications.

Inside the enterprise, this risk is largely mitigated by the fact that Microsoft has always supplied relatively long support periods; even Silverlight was supported for nearly 10 years before being retired. The risk also has far less impact because Blazor does not depend on a browser plugin. Microsoft learned a lot from this story: first ASP.NET and later ASP.NET Core do not require any browser plugins to run, and Microsoft decided to use only open web standards for Blazor. As long as browsers support those open web standards, they will also support Blazor.

 

Client Side (Wasm) vs. Server Side Blazor

Blazor separates how it calculates UI changes (the app/component model) from how those changes are applied (the renderer). This decouples the renderer from the platform and allows the same programming model to target different platforms.

Currently, Microsoft supports only two renderers: Blazor client (the WebAssembly renderer) and Blazor Server (the remote renderer). The implementation code is the same; what differs is the configuration of the hosting middleware. A good practice is to isolate your code from changes to the hosting model by putting all the rendering code in a class library. This eases the job if you later must migrate to another hosting model, or when you need to support an additional one.

Blazor Server

With Blazor Server you use a classic ASP.NET Core website, like ASP.NET MVC, but the UI refreshes are handled over a SignalR connection.

As with a SPA, one page is displayed in the browser and parts of it are refreshed. On the first request, the HTML is generated on the server from the Razor pages. When UI updates are triggered by user interaction or app events, a UI diff is calculated and sent to the client over the SignalR connection; the client-side Js library used by Blazor Server then applies the changes in the browser.

Using Blazor Server behind load balancers or proxy servers requires special attention: forwarded headers should be enabled (the default configuration in IIS), and the load balancers should support sticky sessions. You can also use Redis to create a SignalR backplane.
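
As a minimal .NET 6-style sketch (adapt it to your own proxy setup), enabling forwarded headers in a Blazor Server app looks roughly like this:

using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

var app = builder.Build();

// Behind a proxy or load balancer, restore the original scheme and client IP
// so authentication redirects and the SignalR connection keep working.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});

app.MapBlazorHub();               // the SignalR endpoint that carries the UI diffs
app.MapFallbackToPage("/_Host");  // the host page that bootstraps the app
app.Run();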

Blazor Client (WebAssembly)

Here, .NET code runs inside the browser. The application code is compiled to WebAssembly (a sort of assembly language for the web) and sent to the browser together with the .NET runtime. The Blazor WebAssembly runtime uses JavaScript interop to handle DOM manipulation and browser API calls.

WebAssembly is an open web standard supported by all modern web browsers. Blazor is not the only technology using WebAssembly; most of the other big vendors develop tools and platforms that make use of it.

 

Blazor Server vs. client recommendation

For traditional enterprise business applications, Blazor Server usually seems the best option, but in certain scenarios Blazor Wasm has clear advantages:

-       The responsiveness of your app must be optimal. Blazor Wasm handles UI events locally in the browser, avoiding the network round trip that every Blazor Server interaction incurs.

-       You need offline support.

-       You expect many users (>100). Blazor Server is less scalable, as every user consumes server resources to maintain the client connection and client state.

 

| Criterion | Weight | Blazor Server | Score | Total | Blazor Client | Score | Total |
|---|---|---|---|---|---|---|---|
| Learning curve | 5 | Easier to configure because of the security model. | 8 | 40 | More complex; the same problems as with a SPA, but not significantly harder than server-side Blazor. | 7 | 35 |
| Productivity | 5 | Enables building 2-tier apps, which demand less effort than 3-tier apps. | 9 | 45 | Enforces 3 tiers and the use of an API. | 7 | 35 |
| Supportability | 5 | The same as traditional MVC apps. | 9 | 45 | More difficult to support due to the security model. | 7 | 35 |
| Maturity | 4 | First version with production support. | 7 | 28 | Came later, but is now supported in production by Microsoft. | 7 | 28 |
| Community | 3 | For the moment, adoption seems better. | 6 | 18 | WebAssembly has the highest buzz, so probably the most adoption in the long run. | 7 | 21 |
| Enforce best practice | 3 | Does not enforce 3 tiers or MVVM. | 4 | 12 | Enforces 3 tiers, which is usually a good practice. | 8 | 24 |
| Performance | 2 | Better performance at load time, but a lot less scalable. | 6 | 12 | Longer load time, but a lot more scalable. | 7 | 14 |
| Risk | Impact | | Score | Total | | Score | Total |
| Security | 4 | No more risk than a classical MVC app. | 0 | 0 | Higher risk of misconfiguration. | -1 | -4 |
| Lifecycle support | 5 | No dependency on browser support. | 0 | 0 | Browsers need to keep supporting the WebAssembly standard, but the probability that they drop it is low. | -1 | -5 |
| Total (Max 360) | | | | 200 | | | 183 |

 

When to choose Blazor over ASP.NET MVC ?

As developing and maintaining apps with Blazor is more productive than with ASP.NET MVC, and Blazor is easy to learn, Blazor is usually the best choice, except when:

-       Your application has simple, possibly read-only requirements.

Many web applications are primarily consumed in a read-only fashion by most of their users. Read-only (or read-mostly) applications tend to be much simpler than applications that maintain and manipulate a great deal of state. Think of intranet or public websites where anonymous users make requests and there is little need for client-side logic. This type of public-facing website usually consists mainly of content with little client-side behavior. Such applications are easily built as traditional ASP.NET MVC web applications, which perform logic on the web server and render HTML to be displayed in the browser. The fact that each unique page of the site has its own URL that can be bookmarked and indexed by search engines (by default, without having to add this as a separate feature of the application) is also a clear benefit in such scenarios.

-       Your application ecosystem is based on MVC and is too small or too simple to invest in Blazor.

When your applications consist mainly of server-side processes, your UI is quite simple (e.g., dashboards, back-office CRUD apps, …), and you've already made a strong investment in MVC, then adding Blazor might not be worth the investment.

Angular vs. Blazor Score Card

Here we selected several criteria and attributed weights as a function of the specific needs and environment of enterprises that use .NET as their main technology stack. Each organization is unique, though, and should adapt the scoring to its own needs.

The weight ranges from 1 (minor importance) to 5 (top priority).

The score for each criterion ranges from 0 to 10, where 10 indicates an exceptionally strong attribute.

We also take account of risks at the bottom of the scorecard. There, the weight indicates the risk's potential impact and the score its probability, from 0 (0% probability) to -10 (a risk that will occur with 100% probability).

  • Learning curve: the amount of time a developer, on average, needs to invest to become productive with the technology.
  • Productivity: how productive developers are with the given platform once they have mastered the technology.
  • Supportability: how efficiently can we support the platform?
  • Community: how valuable, responsive, and broad is the developer community? When a platform has a strong community, the organization, its developers, and its users all benefit.
  • Maturity: has the platform been used long enough that most of its initial faults and inherent problems have been removed or reduced?
  • Full Stack efficiency: how efficiently can the project cope with the separation of the frontend and the backend? Does adopting this technology diminish the efficiency of the project teams and the organization because they must deal with different skill sets for backend and frontend?
  • Finding resources: how easy is it to recruit developers on the market?
  • Libraries choices: a well-maintained library suited to the needs improves productivity while minimizing the TCO. A large choice of libraries increases the probability of finding a library that satisfies the needs.
  • Opinionated: a platform that believes a certain way of approaching a problem is inherently better enables the developer to spend less time finding the right way of solving it.
  • Performance: which platform has the fastest render, load, and refresh times while consuming a minimal amount of resources?

 

 

 

| Criterion | Weight | Blazor | Score | Total | Angular | Score | Total |
|---|---|---|---|---|---|---|---|
| Learning curve | 5 | Easy to learn for .NET devs, especially with MVC/Razor experience. Non-included features require (limited) Js integration. | 9 | 45 | Steep learning curve, even for experienced Js devs. | 3 | 15 |
| Productivity | 5 | More tuned for RAD than Angular; even more productive than bare MVC (Gartner). | 9 | 45 | A lot of ceremony, but good tooling is provided (e.g., Visual Studio Code). | 4 | 20 |
| Supportability | 5 | Included in .NET 6 & Visual Studio; a Blazor Server app is the same as an MVC web app in terms of supportability (Blazor Server is part of .NET 6, not a NuGet package). | 9 | 45 | Big lifecycle challenges due to the number of packages. | 2 | 10 |
| Community | 4 | Backed by Microsoft, but a small community made up exclusively of .NET devs. | 3 | 12 | Backed by Google and an exceptionally large community. | 8 | 32 |
| Maturity | 4 | Supported since .NET Core 3.1; considered mature by the community since .NET 6. | 6 | 24 | Very mature & battle tested. | 9 | 36 |
| Full Stack efficiency | 3 | Although some devs specialize in backend or frontend work, Blazor is one integrated stack where one dev can easily work on both with high efficiency. | 9 | 27 | Some real Angular/.NET full-stack devs are available, but experience shows that using two different stacks (.NET & Angular) often segregates devs into frontend and backend roles, which diminishes efficiency. | 4 | 12 |
| Finding resources | 3 | Finding experienced Blazor devs is more difficult than finding Angular devs; nevertheless, learning Blazor is easy for MVC devs. | 4 | 12 | Angular is one of the mainstream Js frameworks, but many Js frameworks exist and finding Angular devs who know .NET is not easy. | 6 | 18 |
| Libraries choices | 3 | Limited choice of open-source libraries; major vendors like Telerik and DevExpress have Blazor UI component libraries. | 3 | 9 | Large choice of open-source and commercial libraries. | 8 | 24 |
| Opinionated | 2 | Although it's a full framework with data binding, devs have more choices to make than with Angular. | 7 | 14 | Angular is very opinionated and complete. | 8 | 16 |
| Performance | 2 | Less performant, but still very acceptable. | 7 | 14 | The framework was tuned for performance over the years. | 8 | 16 |
| Risk | Impact | | Score | Total | | Score | Total |
| Technology abandoned within 5 years | 4 | Taking Microsoft's lifecycle policy into consideration, the chance is extremely low within 5 years. | -1 | -4 | Chances are low, but AngularJs has proven that vendors can abandon a technology even when it is successful. | -1 | -4 |
| Adoption falls within the community | 3 | A low but real probability exists that .NET devs stop adopting the tech due to the "Silverlight" history. | -2 | -6 | Angular has a large community, but it's losing traction. | -1 | -3 |
| Total (Max 360) | | | | 237 | | | 192 |

 

 

What Is a Single Page Application and when should we use it?

 

There are two general approaches to building web applications today. Traditional web applications, also called Multi-Page Applications (MPAs) because they are made of multiple pages, perform most of the application logic on the server. On the other hand, single-page applications (SPAs) perform most of the user-interface logic in the web browser, communicating with the web server primarily using web APIs. A hybrid approach is also possible, the simplest being to host one or more rich SPA-like sub-applications within a larger traditional web application.

An MPA reloads the entire page and displays the new one when a user interacts with the web app.

A SPA behaves more like a desktop application: it is loaded once and then refreshes parts of the page without reloading it. When a user interacts with the application, it displays new content without a full page update, since the different pieces of content are downloaded on request. This is possible thanks to AJAX. In a single-page application there is only one HTML page, which downloads a bunch of assets: CSS, images, and typically a lot of JavaScript. That code then listens for clicks on links and re-renders parts of the DOM in the loaded page whenever the user navigates.

Although SPAs are extremely popular because they usually provide a better user experience, that does not mean every web application should be built with a SPA architecture. Take, for instance, a news site like CNN. It is a good example of a multi-page application. How can we see it? Just click any link and watch the reload icon at the top of your browser. Reloading starts because the browser reaches out to the public server and fetches the page and all the resources it needs. The interesting thing about multi-page applications is that every new page is downloaded: every request we send to the server, whenever we type a new URL or click a link, leads to a new page being sent back. Notable examples of multi-page applications are giants like Amazon and eBay: using them, you always get a new file for every request.

An MPA is an excellent choice for websites that provide mostly read-only content. SPAs usually supply a better user experience when the application is more like an interactive application.

OIDC with auth0, Angular and DotNet Core 3.1

 

On my GitHub repo you'll find a demo app based on the auth0 SPA Angular quickstart, integrated with a .NET Core 3.1 backend. The purpose is to demonstrate how to use auth0 to secure API calls to a .NET Core 3.1 app from an Angular 8+ SPA.

To make it work you need to:

  1. Have an auth0 subscription.
  2. Create an Application and an API.
  3. Edit the following files with your auth0 configuration info:
     • \auth0-backend-demo\appsettings.config
     • \auth0-angular-demo\src\app\auth.service.ts
  4. In the Angular app, install the npm packages: cd .\auth0-angular-demo
     npm install
  5. Start the Angular app: npm start
  6. Compile and run the .NET Core API app: cd .\auth0-backend-demo
     dotnet restore
     dotnet run
  7. Open your browser and navigate to http://localhost:3000/

To better understand how to configure your app it's recommended to first complete the auth0 angular2 quickstart.

Have fun!